Oracle® Data Guard Concepts and Administration, 11g Release 2 (11.2)
E25608-04

Oracle Corporation


What's New in Oracle Data Guard?

The features and enhancements described in this preface were added to Oracle Data Guard in Oracle Database 11g.

Oracle Database 11g Release 2 (11.2.0.3) New Features in Oracle Data Guard

The following new features are specific to SQL Apply in Oracle Data Guard 11g Release 2 (11.2.0.3):

  • Support for XMLType data stored as binary XML

  • Support for XMLType data stored in object-relational format

Support for both these storage formats requires that the primary database be running Oracle Database 11g Release 2 (11.2.0.3) or higher with a redo compatibility setting of 11.2.0.3 or higher. See "Datatype Considerations" for more information about supported data types.
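To confirm that the compatibility requirement is met, you could, for example, check the current setting on the primary database; this is a minimal sketch that simply reads the COMPATIBLE initialization parameter:

SQL> SELECT value FROM V$PARAMETER WHERE name = 'compatible';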

New Features in Oracle Data Guard 11.2

The following sections describe the new features and enhancements that were added in Oracle Data Guard 11g Release 2 (11.2):

New 11.2 Features Common to Redo Apply and SQL Apply

  • As of Oracle Database 11g Release 2 (11.2.0.2), Oracle Data Guard is fully integrated with Oracle Real Application Clusters One Node (Oracle RAC One Node).

  • A Data Guard configuration can now consist of a primary database and up to 30 standby databases.

  • The FAL_CLIENT database initialization parameter is no longer required.

  • The default archive destination used by the Oracle Automatic Storage Management (Oracle ASM) feature and the fast recovery area feature has changed from LOG_ARCHIVE_DEST_10 to LOG_ARCHIVE_DEST_1.

  • Redo transport compression is no longer limited to compressing redo data only when a redo gap is being resolved. When compression is enabled for a destination, all redo data sent to that destination is compressed.

  • The new ALTER SYSTEM FLUSH REDO SQL statement can be used at failover time to flush unsent redo from a mounted primary database to a standby database, thereby allowing a zero data loss failover to be performed even if the primary database is not running in a zero data loss data protection mode. See Section 8.2.2 for more information.
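For example, during a failover you might issue the following statement on the mounted primary database; the target name chicago is a hypothetical DB_UNIQUE_NAME, and Section 8.2.2 describes the complete procedure and syntax:

SQL> ALTER SYSTEM FLUSH REDO TO chicago;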

New 11.2 Features Specific to Redo Apply

  • You can configure apply lag tolerance in a real-time query environment by using the new STANDBY_MAX_DATA_DELAY parameter.

  • You can use the new ALTER SESSION SYNC WITH PRIMARY SQL statement to ensure that a suitably configured physical standby database is synchronized with the primary database as of the time the statement is issued.

  • The V$DATAGUARD_STATS view has been enhanced to provide a greater degree of accuracy in many of its columns, including apply lag and transport lag.

  • You can view a histogram of apply lag values on the physical standby. To do so, query the new V$STANDBY_EVENT_HISTOGRAM view.

  • A corrupted data block in a primary database can be automatically replaced with an uncorrupted copy of that block from a physical standby database that is operating in real-time query mode. A corrupted block in a physical standby database can also be automatically replaced with an uncorrupted copy of the block from the primary database.


See Also:

Section 9.2, "Opening a Physical Standby Database" for more information about each of these features

New 11.2 Features Specific to SQL Apply

  • Logical standby databases support tables with basic table compression, OLTP table compression, and Hybrid Columnar Compression.


  • Logical standby and the LogMiner utility support tables with SecureFiles LOB columns. Compression and encryption operations on SecureFiles LOB columns are also supported. (De-duplication operations and fragment-based operations are not supported.)

  • Changes made in the context of XA global transactions on an Oracle RAC primary database are replicated on a logical standby database.

  • Online redefinition performed at the primary database using the DBMS_REDEFINITION PL/SQL package is transparently replicated on a logical standby database.

  • Logical Standby supports the use of editions at the primary database, including the use of edition-based redefinition to upgrade applications with minimal downtime.


  • Logical standby databases support Streams Capture. This allows you to offload processing from the primary database in one-way information propagation configurations and make the logical standby the hub that propagates information to multiple databases. Streams Capture can also propagate changes that are local to the logical standby database.

New Features in Oracle Data Guard 11.1

The following sections describe the new features and enhancements that were added in Oracle Data Guard 11g Release 1 (11.1):

New 11.1 Features Common to Redo Apply and SQL Apply

  • Compression of redo traffic over the network in a Data Guard configuration

    This feature improves redo transport performance when resolving redo gaps by compressing redo before it is transmitted over the network.


    See Also:

    "COMPRESSION" attribute

  • Redo transport response time histogram

    The V$REDO_DEST_RESP_HISTOGRAM dynamic performance view contains a histogram of response times for each SYNC redo transport destination. The data in this view can be used to assist in the determination of an appropriate value for the LOG_ARCHIVE_DEST_n NET_TIMEOUT attribute.


    See Also:

    "NET_TIMEOUT" attribute

  • Faster role transitions

  • Strong authentication for redo transport network sessions

    Redo transport network sessions can now be authenticated using SSL. This provides strong authentication and makes the use of remote login password files optional in a Data Guard configuration.

  • Simplified Data Guard management interface

    The SQL statements and initialization parameters used to manage a Data Guard configuration have been simplified through the deprecation of redundant SQL clauses and initialization parameters.


  • Enhancements around DB_UNIQUE_NAME

    You can now find the DB_UNIQUE_NAME of the primary database from the standby database by querying the new PRIMARY_DB_UNIQUE_NAME column in the V$DATABASE view (a query sketch follows this list). Also, Oracle Data Guard release 11g ensures that each database's DB_UNIQUE_NAME is different. After upgrading to 11g, any databases with the same DB_UNIQUE_NAME will not be able to communicate with each other.

  • Use of physical standby database for rolling upgrades

    A physical standby database can now take advantage of the rolling upgrade feature provided by a logical standby. Through the use of the new KEEP IDENTITY clause option to the SQL ALTER DATABASE RECOVER TO LOGICAL STANDBY statement, a physical standby database can be temporarily converted into a logical standby database for the rolling upgrade, and then reverted back to the original configuration of a primary database and a physical standby database when the upgrade is done.

  • Heterogeneous Data Guard Configuration

    This feature allows a mix of Linux and Windows primary and standby databases in the same Data Guard configuration.
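As noted in the DB_UNIQUE_NAME item in the preceding list, the unique name of the primary database can be obtained from a standby database with a query such as the following:

SQL> SELECT PRIMARY_DB_UNIQUE_NAME FROM V$DATABASE;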

New 11.1 Features Specific to Redo Apply

New 11.1 Features Specific to SQL Apply

  • Support Transparent Data Encryption (TDE)

    Data Guard SQL Apply can be used to provide data protection for the primary database with Transparent Data Encryption enabled. This allows a logical standby database to provide data protection for applications with advanced security requirements.

  • Dynamic setting of Data Guard SQL Apply parameters

    You can now configure specific SQL Apply parameters without requiring SQL Apply to be restarted. Using the DBMS_LOGSTDBY.APPLY_SET procedure, you can set these parameters dynamically, thus improving the manageability, uptime, and automation of a logical standby configuration.

    In addition, the APPLY_SET and APPLY_UNSET subprograms include two new parameters: LOG_AUTO_DEL_RETENTION_TARGET and EVENT_LOG_DEST.


    See Also:

    DBMS_LOGSTDBY PL/SQL package in the Oracle Database PL/SQL Packages and Types Reference

  • Enhanced Oracle RAC switchover support for logical standby databases

    When switching over to a logical standby database where either the primary database or the standby database is using Oracle RAC, the SWITCHOVER command can be used without having to shut down any instance, either at the primary or at the logical standby database.

  • Enhanced DDL handling in Oracle Data Guard SQL Apply

    SQL Apply executes parallel DDL statements in parallel (based on the availability of parallel servers).

  • Use of the PL/SQL DBMS_SCHEDULER package to create Scheduler jobs on a standby database

    Scheduler Jobs can be created on a standby database using the PL/SQL DBMS_SCHEDULER package and can be associated with an appropriate database role so that they run when intended (for example, when the database is the primary, standby, or both).


15 LOG_ARCHIVE_DEST_n Parameter Attributes

This chapter provides reference information for the attributes of the LOG_ARCHIVE_DEST_n initialization parameter, where n is an integer between 1 and 31. The attributes are described in the sections that follow.

Usage Notes

  • Each database in a Data Guard configuration will typically have one destination with the LOCATION attribute for the archival of the online and standby redo logs and one destination with the SERVICE attribute for every other database in the configuration (a configuration sketch follows these notes).

  • If configured, each LOG_ARCHIVE_DEST_1 through LOG_ARCHIVE_DEST_10 destination must contain either a LOCATION or SERVICE attribute to specify a local disk directory or a remotely accessed database, respectively. Each LOG_ARCHIVE_DEST_11 through LOG_ARCHIVE_DEST_31 destination must contain a SERVICE attribute.

    All other attributes are optional.

  • LOG_ARCHIVE_DEST_11 through LOG_ARCHIVE_DEST_31 cannot be specified as an ALTERNATE redo transport location.

  • LOG_ARCHIVE_DEST_11 through LOG_ARCHIVE_DEST_31 can only be used when the COMPATIBLE initialization parameter is set to 11.2.0.0 or later.
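For instance, a primary database named boston with one remote standby named chicago might use a configuration along the following lines; this is a minimal sketch, and the names, path, and destination numbers are assumptions for illustration:

LOG_ARCHIVE_CONFIG='DG_CONFIG=(boston,chicago)'
LOG_ARCHIVE_DEST_1='LOCATION=/arch1/boston VALID_FOR=(ALL_LOGFILES,ALL_ROLES)'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_2='SERVICE=chicago VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=chicago'
LOG_ARCHIVE_DEST_STATE_2=ENABLE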


Note:

Several attributes of the LOG_ARCHIVE_DEST_n initialization parameter have been deprecated. These attributes are supported for backward compatibility only and are documented in the Oracle Database Reference.


See Also:

Chapter 6 for more information about defining LOG_ARCHIVE_DEST_n destinations and setting up redo transport services


AFFIRM and NOAFFIRM

Controls whether a redo transport destination acknowledges received redo data before or after writing it to the standby redo log:

  • AFFIRM—specifies that a redo transport destination acknowledges received redo data after writing it to the standby redo log.

  • NOAFFIRM—specifies that a redo transport destination acknowledges received redo data before writing it to the standby redo log.

Category                     AFFIRM                    NOAFFIRM
Data type                    Keyword                   Keyword
Valid values                 Not applicable            Not applicable
Default value                Not applicable            Not applicable
Requires attributes          SERVICE                   SERVICE
Conflicts with attributes    NOAFFIRM                  AFFIRM
Corresponds to               AFFIRM column of the V$ARCHIVE_DEST view (for both attributes)

Usage Notes

  • If neither the AFFIRM nor the NOAFFIRM attribute is specified, the default is AFFIRM when the SYNC attribute is specified and NOAFFIRM when the ASYNC attribute is specified.

  • Specification of the AFFIRM attribute without the SYNC attribute is deprecated and will not be supported in future releases.


See also:

SYNC and ASYNC attributes

Examples

The following example shows the AFFIRM attribute for a remote destination.

LOG_ARCHIVE_DEST_3='SERVICE=stby1 SYNC AFFIRM'
LOG_ARCHIVE_DEST_STATE_3=ENABLE

ALTERNATE

Specifies an alternate archiving destination to be used when the original destination fails.

Category                     ALTERNATE=LOG_ARCHIVE_DEST_n
Data type                    String
Valid values                 A LOG_ARCHIVE_DEST_n destination, where n is a value from 1 through 10
Default value                None. If an alternate destination is not specified, then redo transport services do not automatically change to another destination.
Requires attributes          Not applicable
Conflicts with attributes    None (see Footnote 1)
Corresponds to               ALTERNATE and STATUS columns of the V$ARCHIVE_DEST view

Footnote 1 If the REOPEN attribute is specified with a nonzero value, the ALTERNATE attribute is ignored. If the MAX_FAILURE attribute is also specified with a nonzero value, and the failure count exceeds the specified failure threshold, the ALTERNATE destination is enabled. Therefore, the ALTERNATE attribute does not conflict with a nonzero REOPEN attribute value.

Usage Notes

  • LOG_ARCHIVE_DEST_11 through LOG_ARCHIVE_DEST_31 cannot be specified as an alternate redo transport location.

  • The ALTERNATE attribute is optional. If an alternate destination is not specified, then redo transport services do not automatically change to another destination if the original destination fails.

  • You can specify only one alternate destination for each LOG_ARCHIVE_DEST_n parameter, but several enabled destinations can share the same alternate destination.

  • Ideally, an alternate destination should specify either:

    • A different disk location on the same local standby database system (shown in Example 15-1)

    • A different network route to the same standby database system (shown in Example 15-2)

    • A remote standby database system that closely mirrors that of the enabled destination

  • If no enabled destinations reference an alternate destination, the alternate destination is implied to be deferred, because there is no automatic method of enabling the alternate destination. However, you can enable (or defer) alternate destinations at runtime using ALTER SYSTEM.

  • Any destination can be designated as an alternate destination, given the following restrictions:

    • At least one local mandatory destination is enabled.

    • The number of enabled destinations must meet the defined LOG_ARCHIVE_MIN_SUCCEED_DEST parameter value.

    • A destination cannot be its own alternate.

  • Increasing the number of enabled destinations decreases the number of available alternate archiving destinations.

  • When a destination fails, its alternate destination is enabled on the next archival operation. There is no support for enabling the alternate destination in the middle of the archival operation because that would require rereading already processed blocks. This is identical to the REOPEN attribute behavior.

  • If the REOPEN attribute is specified with a nonzero value, the ALTERNATE attribute is ignored unless the MAX_FAILURE attribute has a nonzero value. If the MAX_FAILURE and REOPEN attributes have nonzero values and the failure count exceeds the specified failure threshold, the ALTERNATE destination is enabled. Therefore, the ALTERNATE attribute does not conflict with a nonzero REOPEN attribute value.

Examples

In the sample initialization parameter file in Example 15-1, LOG_ARCHIVE_DEST_1 automatically fails over to the alternate destination LOG_ARCHIVE_DEST_2 on the next archival operation if an error occurs or the device becomes full.

Example 15-1 Automatically Failing Over to an Alternate Destination

LOG_ARCHIVE_DEST_1='LOCATION=/disk1 MANDATORY MAX_FAILURE=1 
 ALTERNATE=LOG_ARCHIVE_DEST_2'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_2='LOCATION=/disk2 MANDATORY'
LOG_ARCHIVE_DEST_STATE_2=ALTERNATE

In both this example and in the following example, the MAX_FAILURE attribute must be specified or the destination will not fail over to the alternate destination when a problem is encountered. There is no default value for MAX_FAILURE; you must supply a value.

Example 15-2 Defining an Alternate Oracle Net Service Name to the Same Standby Database

This example shows how to define an alternate Oracle Net service name to the same standby database.

LOG_ARCHIVE_DEST_1='LOCATION=/disk1 MANDATORY MAX_FAILURE=1'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_2='SERVICE=stby1_path1 ALTERNATE=LOG_ARCHIVE_DEST_3'
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_DEST_3='SERVICE=stby1_path2'
LOG_ARCHIVE_DEST_STATE_3=ALTERNATE

COMPRESSION

The COMPRESSION attribute is used to specify whether redo data is compressed before transmission to a redo transport destination.


Note:

Redo transport compression is a feature of the Oracle Advanced Compression option. You must purchase a license for this option before using the redo transport compression feature.

Category                     COMPRESSION=ENABLE or DISABLE
Data type                    Boolean
Valid values                 ENABLE or DISABLE
Default value                DISABLE
Requires attributes          None
Conflicts with attributes    None
Corresponds to               COMPRESSION column of the V$ARCHIVE_DEST view

Usage Notes

  • The COMPRESSION attribute is optional. If it is not specified, the default compression behavior is DISABLE.

Example

The following example shows the COMPRESSION attribute with the LOG_ARCHIVE_DEST_n parameter.

LOG_ARCHIVE_DEST_3='SERVICE=denver SYNC COMPRESSION=ENABLE'
LOG_ARCHIVE_DEST_STATE_3=ENABLE

DB_UNIQUE_NAME

Specifies a unique name for the database at this destination.

Category                     DB_UNIQUE_NAME=name
Data type                    String
Valid values                 The name must match the value that was defined for this database with the DB_UNIQUE_NAME parameter.
Default value                None
Requires attributes          None
Conflicts with attributes    None
Corresponds to               DB_UNIQUE_NAME column of the V$ARCHIVE_DEST view

Usage Notes

  • This attribute is optional if:

    • The LOG_ARCHIVE_CONFIG=DG_CONFIG initialization parameter is not specified.

    • This is a local destination (specified with the LOCATION attribute).

  • This attribute is required if the LOG_ARCHIVE_CONFIG=DG_CONFIG initialization parameter is specified and if this is a remote destination (specified with the SERVICE attribute).

  • Use the DB_UNIQUE_NAME attribute to clearly identify the relationship between the primary and standby databases. This attribute is particularly helpful if there are multiple standby databases in the Data Guard configuration.

  • The name specified by the DB_UNIQUE_NAME attribute must match one of the DB_UNIQUE_NAME values in the DG_CONFIG list. Redo transport services validate that the DB_UNIQUE_NAME attribute of the database at the specified destination matches the DB_UNIQUE_NAME attribute, or the connection to that destination is refused.

  • The name specified by the DB_UNIQUE_NAME attribute must match the name specified by the DB_UNIQUE_NAME initialization parameter of the database identified by the destination.

Example

In the following example, the DB_UNIQUE_NAME parameter specifies boston (DB_UNIQUE_NAME=boston), which is also specified with the DB_UNIQUE_NAME attribute on the LOG_ARCHIVE_DEST_1 parameter. The DB_UNIQUE_NAME attribute on the LOG_ARCHIVE_DEST_2 parameter specifies the chicago destination. Both boston and chicago are listed in the LOG_ARCHIVE_CONFIG=DG_CONFIG parameter.

DB_UNIQUE_NAME=boston
LOG_ARCHIVE_CONFIG='DG_CONFIG=(chicago,boston,denver)'
LOG_ARCHIVE_DEST_1='LOCATION=/arch1/ 
  VALID_FOR=(ALL_LOGFILES,ALL_ROLES) 
  DB_UNIQUE_NAME=boston'
LOG_ARCHIVE_DEST_2='SERVICE=Sales_DR 
  VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) 
  DB_UNIQUE_NAME=chicago'

DELAY

Specifies a minimum time lag between when redo data from the primary site is archived on a standby site and when the archived redo log file is applied to the standby database or any standbys cascaded from it.

Category                     DELAY[=minutes]
Data type                    Numeric
Valid values                 >=0 minutes
Default value                30 minutes
Requires attributes          SERVICE
Conflicts with attributes    LOCATION, VALID_FOR=(*,STANDBY_ROLE)
Corresponds to               DELAY_MINS and DESTINATION columns of the V$ARCHIVE_DEST view

Usage Notes

  • The DELAY attribute is optional. By default there is no delay.

  • The DELAY attribute indicates the archived redo log files at the standby destination are not available for recovery until the specified time interval has expired. The time interval is expressed in minutes, and it starts when the redo data is successfully transmitted to, and archived at, the standby site.

  • The DELAY attribute may be used to protect a standby database from corrupted or erroneous primary data. However, there is a tradeoff because during failover it takes more time to apply all of the redo up to the point of corruption.

  • The DELAY attribute does not affect the transmittal of redo data to a standby destination.

  • If you have real-time apply enabled, any delay that you set will be ignored.

  • Changes to the DELAY attribute take effect the next time redo data is archived (after a log switch). In-progress archiving is not affected.

  • You can override the specified delay interval at the standby site, as follows:

    • For a physical standby database:

      SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE NODELAY;
      
    • For a logical standby database:

      SQL> ALTER DATABASE START LOGICAL STANDBY APPLY NODELAY;
      
  • The DELAY value that a cascaded standby uses is the value that was set for the LOG_ARCHIVE_DEST_n parameter on the primary that shipped the redo to the cascading standby.


See Also:

Oracle Database SQL Language Reference for more information about these ALTER DATABASE statements

Examples

You can use the DELAY attribute to set up a configuration where multiple standby databases are maintained in varying degrees of synchronization with the primary database. However, this protection incurs some overhead during failover, because it takes Redo Apply more time to apply all the redo up to the corruption point.

For example, assume primary database A has standby databases B and C. Standby database B is set up as the disaster recovery database and therefore has no time lag. Standby database C is set up with a 2-hour delay, which is enough time to allow user errors to be discovered before they are propagated to the standby database.

The following example shows how to specify the DELAY attribute for this configuration:

LOG_ARCHIVE_DEST_1='LOCATION=/arch/dest MANDATORY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_2='SERVICE=stbyB SYNC AFFIRM'
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_DEST_3='SERVICE=stbyC DELAY=120'
LOG_ARCHIVE_DEST_STATE_3=ENABLE

Note:

Alternatively, you can use Flashback Database to revert the database to a point-in-time or SCN in a different database incarnation as long as there is sufficient flashback log data. Using Flashback Database is described in Oracle Database Backup and Recovery User's Guide.


LOCATION and SERVICE

Each destination must specify either the LOCATION or the SERVICE attribute to identify either a local disk directory or a remote database destination where redo transport services can transmit redo data.

LOG_ARCHIVE_DEST_1 through LOG_ARCHIVE_DEST_10 destinations can contain either a LOCATION attribute or a SERVICE attribute.

LOG_ARCHIVE_DEST_11 through LOG_ARCHIVE_DEST_31 destinations can only contain a SERVICE attribute.

LOCATION
  Category                   LOCATION=local_disk_directory or LOCATION=USE_DB_RECOVERY_FILE_DEST
  Data type                  String value
  Valid values               Not applicable
  Default value              None
  Requires attributes        Not applicable
  Conflicts with attributes  SERVICE, DELAY, NOREGISTER, SYNC, ASYNC, NET_TIMEOUT, AFFIRM, NOAFFIRM, COMPRESSION, MAX_CONNECTIONS
  Corresponds to             DESTINATION and TARGET columns of the V$ARCHIVE_DEST view

SERVICE
  Category                   SERVICE=net_service_name
  Data type                  String value
  Valid values               Not applicable
  Default value              None
  Requires attributes        Not applicable
  Conflicts with attributes  LOCATION
  Corresponds to             DESTINATION and TARGET columns of the V$ARCHIVE_DEST view

Usage Notes

  • Either the LOCATION or the SERVICE attribute must be specified. There is no default.

  • The LOG_ARCHIVE_DEST_11 through LOG_ARCHIVE_DEST_31 parameters do not support the LOCATION attribute.

  • If you are specifying multiple attributes, specify the LOCATION or SERVICE attribute first in the list of attributes.

  • You must specify at least one local disk directory with the LOCATION attribute. This ensures that local archived redo log files are accessible should media recovery of a database be necessary. You can specify up to thirty additional local or remote destinations.

  • For the LOCATION attribute, you can specify one of the following:

    • LOCATION=local_disk_directory

      This specifies a unique directory path name for a disk directory on the system that hosts the database. This is the local destination for archived redo log files.

    • LOCATION=USE_DB_RECOVERY_FILE_DEST

      To configure a fast recovery area, specify the directory or Oracle Automatic Storage Management (Oracle ASM) disk group that will serve as the fast recovery area using the DB_RECOVERY_FILE_DEST initialization parameter (a sketch appears after the examples below). For more information about fast recovery areas, see Oracle Database Backup and Recovery User's Guide.

  • When you specify a SERVICE attribute:

    • You identify remote destinations by specifying the SERVICE attribute with a valid Oracle Net service name (SERVICE=net_service_name) that identifies the remote Oracle database instance to which the redo data will be sent.

      The Oracle Net service name that you specify with the SERVICE attribute is translated into a connection descriptor that contains the information necessary for connecting to the remote database.


      See Also:

      Oracle Database Net Services Administrator's Guide for details about setting up Oracle Net service names

    • Transmitting redo data to a remote destination requires a network connection and an Oracle database instance associated with the remote destination to receive the incoming redo data.

  • To verify the current settings for LOCATION and SERVICE attributes, query the V$ARCHIVE_DEST fixed view:

    • The TARGET column identifies if the destination is local or remote to the primary database.

    • The DESTINATION column identifies the values that were specified for a destination. For example, the destination parameter value specifies the Oracle Net service name identifying the remote Oracle instance where the archived redo log files are located.

Examples

Example 1   Specifying the LOCATION Attribute
LOG_ARCHIVE_DEST_2='LOCATION=/disk1/oracle/oradata/payroll/arch/'
LOG_ARCHIVE_DEST_STATE_2=ENABLE
Example 2   Specifying the SERVICE Attribute
LOG_ARCHIVE_DEST_3='SERVICE=stby1'
LOG_ARCHIVE_DEST_STATE_3=ENABLE
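The following sketch shows a destination that archives to the fast recovery area, as described for the LOCATION attribute above; the recovery area path and size are assumptions:

DB_RECOVERY_FILE_DEST='/u01/fast_recovery_area'
DB_RECOVERY_FILE_DEST_SIZE=10G
LOG_ARCHIVE_DEST_1='LOCATION=USE_DB_RECOVERY_FILE_DEST'
LOG_ARCHIVE_DEST_STATE_1=ENABLE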

MANDATORY

Specifies that filled online log files must be successfully archived to the destination before they can be reused.

Category                     MANDATORY
Data type                    Keyword
Valid values                 Not applicable
Default value                Not applicable
Requires attributes          Not applicable
Conflicts with attributes    Optional
Corresponds to               BINDING column of the V$ARCHIVE_DEST view

Usage Notes

  • The LOG_ARCHIVE_DEST_11 through LOG_ARCHIVE_DEST_31 parameters do not support the MANDATORY attribute.

  • If MANDATORY is not specified, then, by default, the destination is considered to be optional.

    At least one destination must succeed, even if all destinations are optional. If archiving to an optional destination fails, the online redo log file is still available for reuse and may be overwritten eventually. However, if the archival operation of a mandatory destination fails, online redo log files cannot be overwritten.

  • The LOG_ARCHIVE_MIN_SUCCEED_DEST=n parameter (where n is an integer from 1 to 10) specifies the number of destinations that must archive successfully before online redo log files can be overwritten.

    All MANDATORY destinations and optional local destinations contribute to satisfying the LOG_ARCHIVE_MIN_SUCCEED_DEST=n count. If the value set for the LOG_ARCHIVE_MIN_SUCCEED_DEST parameter is met, the online redo log file is available for reuse. For example, you can set the parameter as follows:

    # Database must archive to at least two locations before 
    # overwriting the online redo log files.
    LOG_ARCHIVE_MIN_SUCCEED_DEST = 2 
    
  • You must have at least one local destination, which you can declare MANDATORY or leave as optional.

    At least one local destination is operationally treated as mandatory, because the minimum value for the LOG_ARCHIVE_MIN_SUCCEED_DEST parameter is 1.

  • The failure of any mandatory destination makes the LOG_ARCHIVE_MIN_SUCCEED_DEST parameter irrelevant.

  • The LOG_ARCHIVE_MIN_SUCCEED_DEST parameter value cannot be greater than the number of mandatory destinations plus the number of optional local destinations.

  • The BINDING column of the V$ARCHIVE_DEST fixed view specifies how failure affects the archival operation.

Examples

The following example shows the MANDATORY attribute:

LOG_ARCHIVE_DEST_1='LOCATION=/arch/dest MANDATORY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_3='SERVICE=denver MANDATORY'
LOG_ARCHIVE_DEST_STATE_3=ENABLE

MAX_CONNECTIONS

Enables multiple network connections to be used when sending an archived redo log file to a redo transport destination. Using multiple network connections can improve redo transport performance over high-latency network links.

Category                     MAX_CONNECTIONS=integer
Data type                    Integer
Valid values                 1 to 20
Default value                1
Requires attributes          None
Conflicts with attributes    None
Corresponds to               MAX_CONNECTIONS column of the V$ARCHIVE_DEST view of the primary database

Usage Notes

  • The MAX_CONNECTIONS attribute is optional. If it is specified, it is only used when redo transport services use ARCn processes for archival.

    • If MAX_CONNECTIONS is set to 1 (the default), redo transport services use a single ARCn process to transmit redo data to the remote destination.

    • If MAX_CONNECTIONS is set to a value greater than 1, redo transport services use multiple ARCn processes to transmit redo in parallel to archived redo log files at the remote destination. Each archiver (ARCn) process uses a separate network connection.

  • With multiple ARCn processes, redo transmission occurs in parallel, thus increasing the speed at which redo is transmitted to the remote destination.

  • Redo that is received from an ARCn process is written directly to an archived redo log file. Therefore, it cannot be applied in real-time as it is received.

  • The actual number of archiver processes in use at any given time may vary based on the archiver workload and the value of the LOG_ARCHIVE_MAX_PROCESSES initialization parameter. For example, if the total of MAX_CONNECTIONS attributes on all destinations exceeds the value of LOG_ARCHIVE_MAX_PROCESSES, then Data Guard will use as many ARCn processes as possible, but the number may be less than what is specified by the MAX_CONNECTIONS attribute.

  • When using multiple ARCn processes in an Oracle RAC environment, configure the primary instance to transport redo data to a single standby database instance. If redo transport services are not configured as such, then archival will return to the default behavior for remote archival, which is to transport redo data using a single ARCn process.

Examples

The following example shows the MAX_CONNECTIONS attribute:

LOG_ARCHIVE_DEST_1='LOCATION=/arch/dest'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_3='SERVICE=denver MAX_CONNECTIONS=3'
LOG_ARCHIVE_DEST_STATE_3=ENABLE

MAX_FAILURE

Controls the consecutive number of times redo transport services attempt to reestablish communication and transmit redo data to a failed destination before the primary database gives up on the destination.

Category                     MAX_FAILURE=count
Data type                    Numeric
Valid values                 >=0
Default value                None
Requires attributes          REOPEN
Conflicts with attributes    None
Corresponds to               MAX_FAILURE, FAILURE_COUNT, and REOPEN_SECS columns of the V$ARCHIVE_DEST view

Usage Notes

  • The MAX_FAILURE attribute is optional. By default, there are an unlimited number of archival attempts to the failed destination.

  • This attribute is useful for providing failure resolution for destinations to which you want to retry transmitting redo data after a failure, but not retry indefinitely.

  • When you specify the MAX_FAILURE attribute, you must also set the REOPEN attribute. Once the specified number of consecutive attempts is exceeded, the destination is treated as if the REOPEN attribute was not specified.

  • You can view the failure count in the FAILURE_COUNT column of the V$ARCHIVE_DEST fixed view. The related column REOPEN_SECS identifies the REOPEN attribute value.


    Note:

    Once the failure count for the destination reaches the specified MAX_FAILURE attribute value, the only way to reuse the destination is to modify the value of the MAX_FAILURE attribute or of any other attribute for that destination. This has the effect of resetting the failure count to zero (0).

  • The failure count is reset to zero (0) whenever the destination is modified by an ALTER SYSTEM SET statement. This avoids the problem of setting the MAX_FAILURE attribute to a value less than the current failure count value.

  • Once the failure count is greater than or equal to the value set for the MAX_FAILURE attribute, the REOPEN attribute value is implicitly set to zero, which causes redo transport services to transport redo data to an alternate destination (defined with the ALTERNATE attribute) on the next archival operation.

  • Redo transport services attempt to archive to the failed destination indefinitely if you do not specify the MAX_FAILURE attribute (or if you specify MAX_FAILURE=0), and you specify a nonzero value for the REOPEN attribute. If the destination has the MANDATORY attribute, the online redo log file is not reusable until it has been archived to this destination.

Examples

The following example allows redo transport services up to three consecutive archival attempts, tried every 5 seconds, to the arc_dest destination. If the archival operation fails after the third attempt, the destination is treated as if the REOPEN attribute was not specified.

LOG_ARCHIVE_DEST_1='LOCATION=/arc_dest REOPEN=5 MAX_FAILURE=3'
LOG_ARCHIVE_DEST_STATE_1=ENABLE

NET_TIMEOUT

Specifies the number of seconds that the LGWR background process will block waiting for a redo transport destination to acknowledge redo data sent to it. If an acknowledgement is not received within NET_TIMEOUT seconds, an error is logged and the redo transport session to that destination is terminated.

Category                     NET_TIMEOUT=seconds
Data type                    Numeric
Valid values                 1 (see Footnote 1) to 1200
Default value                30 seconds
Requires attributes          SYNC
Conflicts with attributes    ASYNC (If you specify the ASYNC attribute, redo transport services ignores it; no error is returned.)
Corresponds to               NET_TIMEOUT column of the V$ARCHIVE_DEST view of the primary database

Footnote 1 Although a minimum value of 1 second is allowed, Oracle recommends a minimum value of 8 to 10 seconds to avoid disconnecting from the standby database due to transient network errors.

Usage Notes

  • The NET_TIMEOUT attribute is optional. However, if you do not specify it, the default of 30 seconds is used and the primary database can potentially stall for that long while waiting for an acknowledgement. To avoid this situation, specify a small, nonzero value for the NET_TIMEOUT attribute so that the primary database can continue operation after the user-specified timeout interval expires when waiting for status from the network server.

Examples

The following example shows how to specify a 10-second network timeout value on the primary database with the NET_TIMEOUT attribute.

LOG_ARCHIVE_DEST_2='SERVICE=stby1 SYNC NET_TIMEOUT=10'
LOG_ARCHIVE_DEST_STATE_2=ENABLE

NOREGISTER

Indicates that the location of the archived redo log file should not be recorded at the corresponding destination.

Category                     NOREGISTER
Data type                    Keyword
Valid values                 Not applicable
Default value                Not applicable
Requires attributes          SERVICE
Conflicts with attributes    LOCATION
Corresponds to               DESTINATION and TARGET columns of the V$ARCHIVE_DEST view

Usage Notes

  • The NOREGISTER attribute is optional if the standby database destination is a part of a Data Guard configuration.

  • The NOREGISTER attribute is required if the destination is not part of a Data Guard configuration.

  • This attribute pertains to remote destinations only. The location of each archived redo log file is always recorded in the primary database control file.

Examples

The following example shows the NOREGISTER attribute:

LOG_ARCHIVE_DEST_5='NOREGISTER'
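Because NOREGISTER requires the SERVICE attribute, a destination that uses it would typically also name the remote service, as in the following sketch; the service name stby1 is an assumption:

LOG_ARCHIVE_DEST_5='SERVICE=stby1 NOREGISTER'
LOG_ARCHIVE_DEST_STATE_5=ENABLE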

REOPEN

Specifies the minimum number of seconds before redo transport services should try to reopen a failed destination.

Category                     REOPEN[=seconds]
Data type                    Numeric
Valid values                 >=0 seconds
Default value                300 seconds
Requires attributes          None
Conflicts with attributes    Not applicable
Corresponds to               REOPEN_SECS and MAX_FAILURE columns of the V$ARCHIVE_DEST view

Usage Notes

  • The REOPEN attribute is optional.

  • Redo transport services attempt to reopen failed destinations at log switch time.

  • Redo transport services check if the time of the last error plus the REOPEN interval is less than the current time. If it is, redo transport services attempt to reopen the destination.

  • REOPEN applies to all errors, not just connection failures. These errors include, but are not limited to, network failures, disk errors, and quota exceptions.

  • If you specify REOPEN for an optional destination, it is possible for the Oracle database to overwrite online redo log files if there is an error. If you specify REOPEN for a MANDATORY destination, redo transport services will stall the primary database when it is not possible to successfully transmit redo data. When this situation occurs, consider the following options:

    • Change the destination by deferring the destination, specifying the destination as optional, or changing the SERVICE attribute value.

    • Specify an alternate destination.

    • Disable the destination.

Examples

The following example shows the REOPEN attribute.

LOG_ARCHIVE_DEST_3='SERVICE=stby1 MANDATORY REOPEN=60'
LOG_ARCHIVE_DEST_STATE_3=ENABLE

SYNC and ASYNC

Specifies whether the synchronous (SYNC) or asynchronous (ASYNC) redo transport mode is to be used.

Category                     SYNC                      ASYNC
Data type                    Keyword                   Keyword
Valid values                 Not applicable            Not applicable
Default value                Not applicable            None
Requires attributes          None                      None
Conflicts with attributes    ASYNC, LOCATION           SYNC, LOCATION
Corresponds to               TRANSMIT_MODE column of the V$ARCHIVE_DEST view (for both attributes)

Usage Notes

  • The LOG_ARCHIVE_DEST_11 through LOG_ARCHIVE_DEST_31 parameters do not support the SYNC attribute.

  • The redo data generated by a transaction must have been received by every enabled destination that has the SYNC attribute before that transaction can commit.

  • The redo data generated by a transaction need not have been received at a destination that has the ASYNC attribute before that transaction can commit. This is the default behavior if neither SYNC nor ASYNC is specified.

Examples

The following example shows the SYNC attribute with the LOG_ARCHIVE_DEST_n parameter.

LOG_ARCHIVE_DEST_3='SERVICE=stby1 SYNC'
LOG_ARCHIVE_DEST_STATE_3=ENABLE
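For comparison, a destination that uses the default asynchronous transport mode could be written as follows; the service name is an assumption:

LOG_ARCHIVE_DEST_2='SERVICE=stby2 ASYNC'
LOG_ARCHIVE_DEST_STATE_2=ENABLE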

TEMPLATE

Defines a directory specification and format template for names of archived redo log files at the destination. The template is used to generate a filename that is different from the default filename format defined by the LOG_ARCHIVE_FORMAT initialization parameter at the redo destination.

Category                     TEMPLATE=filename_template_%t_%s_%r
Data type                    String value
Valid values                 Not applicable
Default value                None
Requires attributes          SERVICE
Conflicts with attributes    LOCATION
Corresponds to               REMOTE_TEMPLATE and REGISTER columns of the V$ARCHIVE_DEST view

Usage Notes

  • The TEMPLATE attribute is optional. If TEMPLATE is not specified, archived redo logs are named using the value of the LOG_ARCHIVE_FORMAT initialization parameter.

  • The TEMPLATE attribute overrides the LOG_ARCHIVE_FORMAT initialization parameter setting at the remote archival destination.

  • The TEMPLATE attribute is valid only with remote destinations (that is, destinations specified with the SERVICE attribute).

  • The value you specify for filename_template must contain the %s, %t, and %r directives described in Table 15-1.

    Table 15-1 Directives for the TEMPLATE Attribute

    Directive    Description
    %t           Substitute the instance thread number.
    %T           Substitute the instance thread number, zero filled.
    %s           Substitute the log file sequence number.
    %S           Substitute the log file sequence number, zero filled.
    %r           Substitute the resetlogs ID.
    %R           Substitute the resetlogs ID, zero filled.


  • The filename_template value is transmitted to the destination, where it is translated and validated before creating the filename.
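This attribute section has no example of its own, so the following is a minimal sketch; the service name and filename pattern are assumptions, and the pattern includes the required %t, %s, and %r directives:

LOG_ARCHIVE_DEST_2='SERVICE=stby1 TEMPLATE=boston_%t_%s_%r.arc'
LOG_ARCHIVE_DEST_STATE_2=ENABLE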


VALID_FOR

Specifies whether redo data will be written to a destination, based on the following factors:

  • Whether the database is currently running in the primary or the standby role

  • Whether online redo log files, standby redo log files, or both are currently being archived on the database at this destination

Category                     VALID_FOR=(redo_log_type, database_role)
Data type                    String value
Valid values                 Not applicable
Default value                VALID_FOR=(ALL_LOGFILES, ALL_ROLES)
Requires attributes          None
Conflicts with attributes    None
Corresponds to               VALID_NOW, VALID_TYPE, and VALID_ROLE columns in the V$ARCHIVE_DEST view

Usage Notes

  • The VALID_FOR attribute is optional. However, Oracle recommends that the VALID_FOR attribute be specified for each redo transport destination at each database in a Data Guard configuration so that redo transport continues after a role transition to any standby database in the configuration.

  • To configure these factors for each LOG_ARCHIVE_DEST_n destination, you specify this attribute with a pair of keywords: VALID_FOR=(redo_log_type,database_role):

    • The redo_log_type keyword identifies the destination as valid for archiving one of the following:

      • ONLINE_LOGFILE—This destination is valid only when archiving online redo log files.

      • STANDBY_LOGFILE—This destination is valid only when archiving standby redo log files.

      • ALL_LOGFILES— This destination is valid when archiving either online redo log files or standby redo log files.

    • The database_role keyword identifies the role in which this destination is valid for archiving:

      • PRIMARY_ROLE—This destination is valid only when the database is running in the primary role.

      • STANDBY_ROLE—This destination is valid only when the database is running in the standby role.

      • ALL_ROLES—This destination is valid when the database is running in either the primary or the standby role.

  • If you do not specify the VALID_FOR attribute for a destination, by default, archiving online redo log files and standby redo log files is enabled at the destination, regardless of whether the database is running in the primary or the standby role. This default behavior is equivalent to setting the (ALL_LOGFILES,ALL_ROLES) keyword pair on the VALID_FOR attribute.

  • The VALID_FOR attribute enables you to use the same initialization parameter file for both the primary and standby roles.

Example

The following example shows the default VALID_FOR keyword pair:

LOG_ARCHIVE_DEST_1='LOCATION=/disk1/oracle/oradata VALID_FOR=(ALL_LOGFILES, ALL_ROLES)'

When this database is running in either the primary or standby role, destination 1 archives all log files to the /disk1/oracle/oradata local directory location.


9 Managing Physical and Snapshot Standby Databases

This chapter describes how to manage physical and snapshot standby databases.

See Oracle Data Guard Broker to learn how the Data Guard broker simplifies the management of physical and snapshot standby databases.

9.1 Starting Up and Shutting Down a Physical Standby Database

This section describes how to start up and shut down a physical standby database.

9.1.1 Starting Up a Physical Standby Database

Use the SQL*Plus STARTUP command to start a physical standby database. The SQL*Plus STARTUP command starts, mounts, and opens a physical standby database in read-only mode when it is invoked without any arguments.

Once mounted or opened, a physical standby database can receive redo data from the primary database.

See Section 7.3 for information about Redo Apply and Section 9.2 for information about opening a physical standby database in read-only mode.
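For example, assuming an Oracle Active Data Guard license so that the database can remain open while Redo Apply runs (see Section 9.2), the startup sequence might look like this sketch:

SQL> STARTUP
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE -
> DISCONNECT;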


Note:

When Redo Apply is started on a physical standby database that has not yet received redo data from the primary database, an ORA-01112 message may be returned. This indicates that Redo Apply is unable to determine the starting sequence number for media recovery. If this occurs, manually retrieve an archived redo log file from the primary database and register it on the standby database, or wait for redo transport to begin before starting Redo Apply.

9.1.2 Shutting Down a Physical Standby Database

Use the SQL*Plus SHUTDOWN command to stop Redo Apply and shut down a physical standby database. Control is not returned to the session that initiates a database shutdown until shutdown is complete.

If the primary database is up and running, defer the standby destination on the primary database and perform a log switch before shutting down the physical standby database.
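A hedged sketch of that sequence follows; the destination number 2 used for the standby on the primary is an assumption:

-- On the primary database:
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=DEFER;
SQL> ALTER SYSTEM SWITCH LOGFILE;

-- On the physical standby database:
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
SQL> SHUTDOWN IMMEDIATE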

9.2 Opening a Physical Standby Database

A physical standby database can be opened for read-only access and used to offload queries from a primary database.


Note:

A physical standby database that is opened in read-only mode is subject to the same restrictions as any other Oracle database opened in read-only mode. For more information, see Oracle Database Administrator's Guide.

If a license for the Oracle Active Data Guard option has been purchased, Redo Apply can be active while the physical standby database is open, thus allowing queries to return results that are identical to what would be returned from the primary database. This capability is known as the real-time query feature. See Section 9.2.1 for more details.

If a license for the Oracle Active Data Guard option has not been purchased, a physical standby database cannot be open while Redo Apply is active, so the following rules must be observed when opening a physical standby database instance or starting Redo Apply:

  • Redo Apply must be stopped before any physical standby database instance is opened.

  • If one or more physical standby instances are open, those instances must be stopped or restarted in a mounted state before starting Redo Apply.


Note:

A license for the Oracle Active Data Guard option is also included with the Oracle GoldenGate product. Documentation for Oracle GoldenGate, including licensing information, can be found at:

http://download.oracle.com/docs/cd/E15881_01/index.htm



9.2.1 Real-time query

The COMPATIBLE database initialization parameter must be set to 11.0 or higher to use the real-time query feature of the Oracle Active Data Guard option.

A physical standby database instance cannot be opened if Redo Apply is active on a mounted instance of that database. Use the following SQL statements to stop Redo Apply, open a standby instance read-only, and restart Redo Apply:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
SQL> ALTER DATABASE OPEN;
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE -
> DISCONNECT;

Note:

If Redo Apply is active on an open instance, additional instances can be opened without having to stop Redo Apply.

Redo Apply cannot be started on a mounted physical standby instance if any other instance of that database is open. In that case, the instance must be opened before Redo Apply can be started on it.

Example: Querying V$DATABASE to Check the Standby's Open Mode

This example shows how the value of the V$DATABASE.OPEN_MODE column changes when a physical standby is open in real-time query mode.

  1. Start up and open a physical standby instance, and perform the following SQL query to show that the database is open in read-only mode:

    SQL> SELECT open_mode FROM V$DATABASE;
     
    OPEN_MODE
    --------------------
    READ ONLY
    
  2. Issue the following SQL statement to start Redo Apply:

    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE -
    > DISCONNECT;
     
    Database altered.
    
  3. Now that the standby is in real-time query mode (that is, the standby is open in read-only mode and Redo Apply is active), the V$DATABASE.OPEN_MODE column changes to indicate the following:

    SQL> SELECT open_mode FROM V$DATABASE;
     
    OPEN_MODE
    --------------------
    READ ONLY WITH APPLY
    

9.2.1.1 Monitoring Apply Lag in a Real-time Query Environment

If you are using real-time query to offload queries from a primary database to a physical standby database, you may want to monitor the apply lag to ensure that it is within acceptable limits.

The current apply lag is the difference, in elapsed time, between when the last applied change became visible on the standby and when that same change was first visible on the primary. This metric is computed to the nearest second.

To obtain the apply lag, query the V$DATAGUARD_STATS view. For example:

SQL> SELECT name, value, datum_time, time_computed FROM V$DATAGUARD_STATS -
> WHERE name like 'apply lag';
     
    NAME         VALUE            DATUM_TIME              TIME_COMPUTED
    ---------    -------------    -------------------     -------------------
    apply lag    +00 00:00:00     05/27/2009 08:54:16     05/27/2009 08:54:17

The apply lag metric is computed using data that is periodically received from the primary database. The DATUM_TIME column contains a timestamp of when this data was last received by the standby database. The TIME_COMPUTED column contains a timestamp taken when the apply lag metric was calculated. The difference between the values in these columns should be less than 30 seconds. If the difference is larger than this, the apply lag metric may not be accurate.

To obtain a histogram that shows the history of apply lag values since the standby instance was last started, query the V$STANDBY_EVENT_HISTOGRAM view. For example:

SQL> SELECT * FROM V$STANDBY_EVENT_HISTOGRAM WHERE NAME = 'apply lag' - 
> AND COUNT > 0;

NAME             TIME       UNIT         COUNT        LAST_TIME_UPDATED
---------     ---------   --------    -----------    ------------------------
apply lag         0        seconds        79681          06/18/2009 10:05:00
apply lag         1        seconds         1006          06/18/2009 10:03:56
apply lag         2        seconds           96          06/18/2009 09:51:06
apply lag         3        seconds            4          06/18/2009 04:12:32
apply lag         4        seconds            1          06/17/2009 11:43:51
apply lag         5        seconds            1          06/17/2009 11:43:52

6 rows selected

To evaluate the apply lag over a time period, take a snapshot of V$STANDBY_EVENT_HISTOGRAM at the beginning of the time period and compare that snapshot with one taken at the end of the time period.

9.2.1.2 Configuring Apply Lag Tolerance in a Real-time Query Environment

The STANDBY_MAX_DATA_DELAY session parameter can be used to specify a session-specific apply lag tolerance, measured in seconds, for queries issued by non-administrative users to a physical standby database that is in real-time query mode. This capability allows queries to be safely offloaded from the primary database to a physical standby database, because it is possible to detect if the standby database has become unacceptably stale.

If STANDBY_MAX_DATA_DELAY is set to the default value of NONE, queries issued to a physical standby database will be executed regardless of the apply lag on that database.

If STANDBY_MAX_DATA_DELAY is set to a non-zero value, a query issued to a physical standby database will be executed only if the apply lag is less than or equal to STANDBY_MAX_DATA_DELAY. Otherwise, an ORA-3172 error is returned to alert the client that the apply lag is too large.

If STANDBY_MAX_DATA_DELAY is set to 0, a query issued to a physical standby database is guaranteed to return the exact same result as if the query were issued on the primary database, unless the standby database is lagging behind the primary database, in which case an ORA-3172 error is returned.

Use the ALTER SESSION SQL statement to set STANDBY_MAX_DATA_DELAY. For example:

SQL> ALTER SESSION SET STANDBY_MAX_DATA_DELAY=2;

9.2.1.3 Forcing Redo Apply Synchronization in a Real-time Query Environment

Issue the following SQL statement on a physical standby database to ensure that all redo data received from the primary database has been applied to a physical standby database:

SQL> ALTER SESSION SYNC WITH PRIMARY;

This statement will block until all redo data received by the standby database at the time that this command is issued has been applied to the physical standby database. An ORA-3173 error is returned immediately, and synchronization will not occur, if the redo transport status at the standby database is not SYNCHRONIZED or if Redo Apply is not active.

You can ensure that Redo Apply synchronization occurs in specific cases by using the SYS_CONTEXT('USERENV','DATABASE_ROLE') function to create a standby-only trigger (that is, a trigger that is enabled on the primary but that only takes certain actions if it is running on a standby). For example, you could create the following trigger that would execute the ALTER SESSION SYNC WITH PRIMARY statement for a specific user connection at logon:

CREATE TRIGGER adg_logon_sync_trigger
 AFTER LOGON ON user.schema
  begin
    if (SYS_CONTEXT('USERENV', 'DATABASE_ROLE')  IN ('PHYSICAL STANDBY')) then
      execute immediate 'alter session sync with primary';
    end if;
  end;

9.2.1.4 Real-time Query Restrictions

The apply lag control and Redo Apply synchronization mechanisms described above require that the client be connected and issuing queries to a physical standby database that is in real-time query mode.

The following additional restrictions apply if STANDBY_MAX_DATA_DELAY is set to 0 or if the ALTER SESSION SYNC WITH PRIMARY SQL statement is used:

  • The standby database must receive redo data via the SYNC transport.

  • The redo transport status at the standby database must be SYNCHRONIZED and the primary database must be running in either maximum protection mode or maximum availability mode.

  • Real-time apply must be enabled.

  • Active Data Guard achieves high performance of real-time queries in an Oracle RAC environment through the use of cache fusion. This allows the Data Guard apply instance and queries to work out of cache and not be slowed down by disk I/O limitations. A consequence of this is that an unexpected failure of the apply instance leaves buffers in inconsistent states across all the open Oracle RAC instances. To ensure data consistency and integrity, Data Guard closes all the other open instances in the Oracle RAC configuration and brings them to a mounted state. You must manually reopen the instances, at which time the data is automatically made consistent, and then restart redo apply on one of the instances. Note that in a Data Guard broker configuration, the instances are automatically reopened and redo apply is automatically restarted on one of the instances.


    See Also:

    • Oracle Data Guard Broker for more information about how the broker handles apply instance failures

    • The My Oracle Support note 1357597.1 at http://support.oracle.com for additional information about apply instance failures in an Active Data Guard Oracle RAC standby


9.2.1.5 Automatic Repair of Corrupt Data Blocks

If a corrupt data block is encountered when a primary database is accessed, it is automatically replaced with an uncorrupted copy of that block from a physical standby database. This requires the following conditions:

  • The physical standby database must be operating in real-time query mode, which requires an Active Data Guard license.

  • The physical standby database must be running real-time apply.

Also keep the following in mind:

  • Automatic repair is supported with any Data Guard protection mode. However, the effectiveness of repairing a corrupt block at the primary using the noncorrupt version of the block from the standby depends on how closely the standby apply is synchronized with the redo generated by the primary.

  • When an automatic block repair has been performed, a message is written to the database alert log.

  • If automatic block repair is not possible, an ORA-1578 error is returned.

If a corrupt data block is discovered on a physical standby database, the server attempts to automatically repair the corruption by obtaining a copy of the block from the primary database if the following database initialization parameters are configured on the standby database:

  • The LOG_ARCHIVE_CONFIG parameter is configured with a DG_CONFIG list and a LOG_ARCHIVE_DEST_n parameter is configured for the primary database.

    or

  • The FAL_SERVER parameter is configured and its value contains an Oracle Net service name for the primary database.
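
As a minimal sketch only (the database unique names chicago for the primary and boston for the standby, and the use of destination 2, are hypothetical), either of the following standby-side configurations satisfies the conditions above:

-- Alternative 1: DG_CONFIG list plus an archive destination that describes the primary
SQL> ALTER SYSTEM SET LOG_ARCHIVE_CONFIG='DG_CONFIG=(chicago,boston)';
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=chicago ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=chicago';

-- Alternative 2: FAL_SERVER naming an Oracle Net service for the primary
SQL> ALTER SYSTEM SET FAL_SERVER='chicago';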

9.2.1.6 Manual Repair of Corrupt Data Blocks

The RMAN RECOVER BLOCK command is used to manually repair a corrupted data block. This command searches several locations for an uncorrupted copy of the data block. By default, one of the locations is any available physical standby database operating in real-time query mode. The EXCLUDE STANDBY option of the RMAN RECOVER BLOCK command can be used to exclude physical standby databases as a source for replacement blocks.
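
As an illustration only (the datafile and block numbers are hypothetical), a block repair session might look like the following; consult the RMAN reference for the exact placement of the EXCLUDE STANDBY option mentioned above:

RMAN> # Repair one known corrupt block
RMAN> RECOVER DATAFILE 2 BLOCK 24;

RMAN> # Repair every block currently recorded in V$DATABASE_BLOCK_CORRUPTION
RMAN> RECOVER CORRUPTION LIST;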


See Also:

Oracle Database Backup and Recovery Reference for more information about the RMAN RECOVER BLOCK command

9.2.1.7 Tuning Queries on a Physical Standby Database

Appendix D of the Active Data Guard 11g Best Practices white paper describes how to tune queries for optimal performance on a physical standby database. This paper is available on the Oracle Maximum Availability Architecture (MAA) home page at:

http://www.oracle.com/goto/maa

9.2.1.8 Adding Temp Files to a Physical Standby Database

If you are using a standby to offload queries from the primary database, and the nature of the workload requires more temporary space than is automatically created when the standby is first created, then you may need to manually add temp files.

To add temporary files to the physical standby database, perform the following tasks:

  1. Identify the tablespaces that should contain temporary files. Do this by entering the following command on the standby database:

    SQL> SELECT TABLESPACE_NAME FROM DBA_TABLESPACES
    2> WHERE CONTENTS = 'TEMPORARY';
     
    TABLESPACE_NAME
    --------------------------------
    TEMP1
    TEMP2
    
  2. For each tablespace identified in the previous query, add a new temporary file to the standby database. The following example adds a new temporary file to the TEMP1 tablespace, with size and reuse characteristics that match the primary database temporary files:

    SQL> ALTER TABLESPACE TEMP1 ADD TEMPFILE
    2> '/arch1/boston/temp01.dbf'
    3> SIZE 40M REUSE;
    

9.3 Primary Database Changes That Require Manual Intervention at a Physical Standby

Most structural changes made to a primary database are automatically propagated through redo data to a physical standby database. Table 9-1 lists primary database structural and configuration changes which require manual intervention at a physical standby database.

Table 9-1 Primary Database Changes That Require Manual Intervention at a Physical Standby

Reference | Primary Database Change | Action Required on Physical Standby Database

Section 9.3.1


Add a datafile or create a tablespace

No action is required if the STANDBY_FILE_MANAGEMENT database initialization parameter is set to AUTO. If this parameter is set to MANUAL, the new datafile must be copied to the physical standby database.

Section 9.3.2


Drop or delete a tablespace or datafile

Delete datafile from primary and physical standby database after the redo data containing the DROP or DELETE command is applied to the physical standby.

Section 9.3.3


Use transportable tablespaces

Move tablespace between the primary and the physical standby database.

Section 9.3.4


Rename a datafile

Rename the datafile on the physical standby database.

Section 9.3.5


Add or drop a redo log file group

Evaluate the configuration of the redo log and standby redo log on the physical standby database and adjust as necessary.

Section 9.3.6


Perform a DML or DDL operation using the NOLOGGING or UNRECOVERABLE clause

Copy the datafile containing the unlogged changes to the physical standby database.

Section 9.3.7


Grant or revoke administrative privileges or change the password of a user who has administrative privileges

If the REMOTE_LOGIN_PASSWORDFILE initialization parameter is set to SHARED or EXCLUSIVE, replace the password file on the physical standby database with a fresh copy of the password file from the primary database.

Section 9.3.8


Reset the TDE master encryption key

Replace the database encryption wallet on the physical standby database with a fresh copy of the database encryption wallet from the primary database.

Chapter 14


Change initialization parameters

Evaluate whether a corresponding change must be made to the initialization parameters on the physical standby database.


9.3.1 Adding a Datafile or Creating a Tablespace

The STANDBY_FILE_MANAGEMENT database initialization parameter controls whether the addition of a datafile to the primary database is automatically propagated to a physical standby database.

  • If the STANDBY_FILE_MANAGEMENT parameter on the physical standby database is set to AUTO, any new datafiles created on the primary database are automatically created on the physical standby database.

  • If the STANDBY_FILE_MANAGEMENT database parameter on the physical standby database is set to MANUAL, a new datafile must be manually copied from the primary database to the physical standby database after it is added to the primary database.
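
For example, a quick way to confirm and, if needed, change the setting on the standby is the following minimal sketch:

SQL> SHOW PARAMETER STANDBY_FILE_MANAGEMENT

SQL> ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT=AUTO;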

Note that if an existing datafile from another database is copied to a primary database, it must also be copied to the standby database, and the standby control file must be re-created, regardless of the setting of the STANDBY_FILE_MANAGEMENT parameter.

9.3.1.1 Using the STANDBY_FILE_MANAGEMENT Parameter with Raw Devices


Note:

Do not use the following procedure with databases that use Oracle Managed Files. Also, if the raw device path names are not the same on the primary and standby servers, use the DB_FILE_NAME_CONVERT database initialization parameter to convert the path names.

Setting the STANDBY_FILE_MANAGEMENT parameter to AUTO ensures that, whenever new datafiles are added or dropped on the primary database, the corresponding changes are made on the standby database without manual intervention, as long as the standby database is using a file system. If the standby database is using raw devices for datafiles, the STANDBY_FILE_MANAGEMENT parameter still works, but manual intervention is required: you must ensure that the raw devices exist before Redo Apply applies the redo data that creates the new datafile.

On the primary database, create a new tablespace whose datafile resides on a raw device. At the same time, create the same raw device on the standby database. For example:

SQL> CREATE TABLESPACE MTS2 DATAFILE '/dev/raw/raw100' size 1m;
Tablespace created.
 
SQL> ALTER SYSTEM SWITCH LOGFILE; 
System altered.

The standby database automatically adds the datafile because the raw devices exist. The standby alert log shows the following:

Fri Apr  8 09:49:31 2005
Media Recovery Log /u01/MILLER/flash_recovery_area/MTS_STBY/archivelog/2005_04_08/o1_mf_1_7_15ffgt0z_.arc
Recovery created file /dev/raw/raw100
Successfully added datafile 6 to media recovery
Datafile #6: '/dev/raw/raw100'
Media Recovery Waiting for thread 1 sequence 8 (in transit)

However, if the raw device was created on the primary system but not on the standby, then Redo Apply will stop due to file-creation errors. For example, issue the following statements on the primary database:

SQL> CREATE TABLESPACE MTS3 DATAFILE '/dev/raw/raw101' size 1m;
Tablespace created.
 
SQL> ALTER SYSTEM SWITCH LOGFILE;
System altered.

The standby system does not have the /dev/raw/raw101 raw device created. The standby alert log shows the following messages when recovering the archive:

Fri Apr  8 10:00:22 2005
Media Recovery Log /u01/MILLER/flash_recovery_area/MTS_STBY/archivelog/2005_04_08/o1_mf_1_8_15ffjrov_.arc
File #7 added to control file as 'UNNAMED00007'.
Originally created as:
'/dev/raw/raw101'
Recovery was unable to create the file as:
'/dev/raw/raw101'
MRP0: Background Media Recovery terminated with error 1274
Fri Apr  8 10:00:22 2005
Errors in file /u01/MILLER/MTS/dump/mts_mrp0_21851.trc:
ORA-01274: cannot add datafile '/dev/raw/raw101' - file could not be created
ORA-01119: error in creating database file '/dev/raw/raw101'
ORA-27041: unable to open file
Linux Error: 13: Permission denied
Additional information: 1
Some recovered datafiles maybe left media fuzzy
Media recovery may continue but open resetlogs may fail
Fri Apr  8 10:00:22 2005
Errors in file /u01/MILLER/MTS/dump/mts_mrp0_21851.trc:
ORA-01274: cannot add datafile '/dev/raw/raw101' - file could not be created
ORA-01119: error in creating database file '/dev/raw/raw101'
ORA-27041: unable to open file
Linux Error: 13: Permission denied
Additional information: 1
Fri Apr  8 10:00:22 2005
MTS; MRP0: Background Media Recovery process shutdown
ARCH: Connecting to console port...

9.3.1.2 Recovering from Errors

To correct the problems described in Section 9.3.1.1, perform the following steps:

  1. Create the raw device on the standby database and assign permissions to the Oracle user.

  2. Query the V$DATAFILE view. For example:

    SQL> SELECT NAME FROM V$DATAFILE;
    
    NAME
    -------------------------------------------------------------------------------
    /u01/MILLER/MTS/system01.dbf
    /u01/MILLER/MTS/undotbs01.dbf
    /u01/MILLER/MTS/sysaux01.dbf
    /u01/MILLER/MTS/users01.dbf
    /u01/MILLER/MTS/mts.dbf
    /dev/raw/raw100
    /u01/app/oracle/product/10.1.0/dbs/UNNAMED00007
    
    SQL> ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT=MANUAL;
    
    SQL> ALTER DATABASE CREATE DATAFILE -
    > '/u01/app/oracle/product/10.1.0/dbs/UNNAMED00007' -
    >  AS -
    > '/dev/raw/raw101';
    
  3. In the standby alert log you should see information similar to the following:

    Fri Apr  8 10:09:30 2005
    alter database create datafile
    '/dev/raw/raw101' as '/dev/raw/raw101'
    
    Fri Apr  8 10:09:30 2005
    Completed: alter database create datafile
    '/dev/raw/raw101' a
    
  4. On the standby database, set STANDBY_FILE_MANAGEMENT to AUTO and restart Redo Apply:

    SQL> ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT=AUTO;
    SQL> RECOVER MANAGED STANDBY DATABASE DISCONNECT;
    

At this point Redo Apply uses the new raw device datafile and recovery continues.

9.3.2 Dropping Tablespaces and Deleting Datafiles

When a tablespace is dropped or a datafile is deleted from a primary database, the corresponding datafile(s) must be deleted from the physical standby database. The following example shows how to drop a tablespace:

SQL> DROP TABLESPACE tbs_4;
SQL> ALTER SYSTEM SWITCH LOGFILE;

To verify that deleted datafiles are no longer part of the database, query the V$DATAFILE view.
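
For example, after the redo for the DROP has been applied, a query such as the following (the filename pattern is taken from the example above) should return no rows on either database:

SQL> SELECT NAME FROM V$DATAFILE WHERE NAME LIKE '%tbs_4%';

no rows selected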

Delete the corresponding datafile on the standby system after the redo data that contains the previous changes is applied to the standby database. For example:

% rm /disk1/oracle/oradata/payroll/s2tbs_4.dbf

On the primary database, after ensuring the standby database applied the redo information for the dropped tablespace, you can remove the datafile for the tablespace. For example:

% rm /disk1/oracle/oradata/payroll/tbs_4.dbf

9.3.2.1 Using DROP TABLESPACE INCLUDING CONTENTS AND DATAFILES

You can issue the SQL DROP TABLESPACE INCLUDING CONTENTS AND DATAFILES statement on the primary database to delete the datafiles on both the primary and standby databases. To use this statement, the STANDBY_FILE_MANAGEMENT initialization parameter must be set to AUTO. For example, to drop the tablespace at the primary site:

SQL> DROP TABLESPACE tbs_4 INCLUDING CONTENTS AND DATAFILES;
SQL> ALTER SYSTEM SWITCH LOGFILE;

9.3.3 Using Transportable Tablespaces with a Physical Standby Database

You can use the Oracle transportable tablespaces feature to move a subset of an Oracle database and plug it in to another Oracle database, essentially moving tablespaces between the databases.

To move or copy a set of tablespaces into a primary database when a physical standby is being used, perform the following steps:

  1. Generate a transportable tablespace set that consists of datafiles for the set of tablespaces being transported and an export file containing structural information for the set of tablespaces.

  2. Transport the tablespace set:

    1. Copy the datafiles and the export file to the primary database.

    2. Copy the datafiles to the standby database.

    The data files must have the same path name on the primary and standby databases unless the DB_FILE_NAME_CONVERT database initialization parameter has been configured. If DB_FILE_NAME_CONVERT has not been configured and the path names of the data files are not the same on the primary and standby databases, issue the ALTER DATABASE RENAME FILE statement to rename the data files. Do this after Redo Apply has failed to apply the redo generated by plugging the tablespace into the primary database. The STANDBY_FILE_MANAGEMENT initialization parameter must be set to MANUAL before renaming the data files, and should be reset to the previous value after renaming the data files.

  3. Plug in the tablespace.

    Invoke the Data Pump utility to plug the set of tablespaces into the primary database. Redo data will be generated and applied at the standby site to plug the tablespace into the standby database, as sketched below.
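
    As a rough sketch only, the plug-in step might resemble the following Data Pump import invocation; the directory object, dump file name, credentials, and datafile path are all hypothetical:

    impdp system DIRECTORY=dpump_dir1 DUMPFILE=tts.dmp TRANSPORT_DATAFILES='/disk1/oracle/oradata/payroll/tbs_5.dbf'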

For more information about transportable tablespaces, see Oracle Database Administrator's Guide.

9.3.4 Renaming a Datafile in the Primary Database

When you rename one or more datafiles in the primary database, the change is not propagated to the standby database. Therefore, if you also want to rename the same datafiles on the standby database, you must make the equivalent modifications there manually, because they are not performed automatically, even if the STANDBY_FILE_MANAGEMENT initialization parameter is set to AUTO.

The following steps describe how to rename a datafile in the primary database and manually propagate the changes to the standby database.

  1. To rename the datafile in the primary database, take the tablespace offline:

    SQL> ALTER TABLESPACE tbs_4 OFFLINE;
    
  2. Exit from the SQL prompt and issue an operating system command, such as the following UNIX mv command, to rename the datafile on the primary system:

    % mv /disk1/oracle/oradata/payroll/tbs_4.dbf 
    /disk1/oracle/oradata/payroll/tbs_x.dbf
    
  3. Rename the datafile in the primary database and bring the tablespace back online:

    SQL> ALTER TABLESPACE tbs_4 RENAME DATAFILE -
    > '/disk1/oracle/oradata/payroll/tbs_4.dbf' -
    >  TO '/disk1/oracle/oradata/payroll/tbs_x.dbf';
    
    SQL> ALTER TABLESPACE tbs_4 ONLINE;
    
  4. Connect to the standby database and stop Redo Apply:

    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
    
  5. Shut down the standby database:

    SQL> SHUTDOWN;
    
  6. Rename the datafile at the standby site using an operating system command, such as the UNIX mv command:

    % mv /disk1/oracle/oradata/payroll/tbs_4.dbf /disk1/oracle/oradata/payroll/tbs_x.dbf
    
  7. Start and mount the standby database:

    SQL> STARTUP MOUNT;
    
  8. Rename the datafile in the standby control file. Note that the STANDBY_FILE_MANAGEMENT database initialization parameter must be set to MANUAL in order to rename a datafile. This parameter can be reset to its previous value after renaming a datafile.

    SQL> ALTER DATABASE RENAME FILE '/disk1/oracle/oradata/payroll/tbs_4.dbf' -
    > TO '/disk1/oracle/oradata/payroll/tbs_x.dbf';
    
  9. On the standby database, restart Redo Apply:

    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE -
    > DISCONNECT FROM SESSION;
    

If you do not rename the corresponding datafile at the standby system, and then try to refresh the standby database control file, the standby database will attempt to use the renamed datafile, but it will not find it. Consequently, you will see error messages similar to the following in the alert log:

ORA-00283: recovery session canceled due to errors
ORA-01157: cannot identify/lock datafile 4 - see DBWR trace file
ORA-01110: datafile 4: '/Disk1/oracle/oradata/payroll/tbs_x.dbf'

9.3.5 Add or Drop a Redo Log File Group

The configuration of the redo log and standby redo log on a physical standby database should be reevaluated and adjusted as necessary after adding or dropping a log file group on the primary database.

Take the following steps to add or drop a log file group or standby log file group on a physical standby database:

  1. Stop Redo Apply.

  2. If the STANDBY_FILE_MANAGEMENT initialization parameter is set to AUTO, change the value to MANUAL.

  3. Add or drop a log file group (see the sketch following these steps).


    Note:

    An online logfile group must always be manually cleared before it can be dropped from a physical standby database. For example:
    ALTER DATABASE CLEAR LOGFILE GROUP 3;
    

    An online logfile group that has a status of CURRENT or CLEARING_CURRENT cannot be dropped from a physical standby database. An online logfile group that has this status can be dropped after a role transition.


  4. Restore the STANDBY_FILE_MANAGEMENT initialization parameter and the Redo Apply options to their original states.

  5. Restart Redo Apply.
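
For illustration only, adding and then dropping a standby redo log group might look like the following; the group number, file path, and size are hypothetical and should mirror your online redo log configuration:

SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 10 ('/oracle/dbs/slog10.rdo') SIZE 500M;

SQL> ALTER DATABASE DROP STANDBY LOGFILE GROUP 10;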

9.3.6 NOLOGGING or Unrecoverable Operations

When you perform a DML or DDL operation using the NOLOGGING or UNRECOVERABLE clause, the standby database is invalidated and may require substantial administrative effort to repair. You can specify the SQL ALTER DATABASE or SQL ALTER TABLESPACE statement with the FORCE LOGGING clause to override the NOLOGGING setting. However, this statement will not repair an already invalidated database.
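
For example, either of the following statements (the users tablespace name is illustrative) forces redo generation for subsequent operations so that future NOLOGGING operations cannot invalidate the standby:

SQL> ALTER DATABASE FORCE LOGGING;

SQL> ALTER TABLESPACE users FORCE LOGGING;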

See Section 13.4 for information about recovering after the NOLOGGING clause is used.

9.3.7 Refresh the Password File

If the REMOTE_LOGIN_PASSWORDFILE database initialization parameter is set to SHARED or EXCLUSIVE, the password file on a physical standby database must be replaced with a fresh copy from the primary database after granting or revoking administrative privileges or changing the password of a user with administrative privileges.

Failure to refresh the password file on the physical standby database may cause authentication of redo transport sessions or connections as SYSDBA or SYSOPER to the physical standby database to fail.
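
On UNIX and Linux systems the password file is typically $ORACLE_HOME/dbs/orapw<SID>. The following sketch assumes hypothetical instance names chicago (primary) and boston (standby), a standby host named standby1, and the same ORACLE_HOME path on both hosts:

% scp $ORACLE_HOME/dbs/orapwchicago standby1:$ORACLE_HOME/dbs/orapwboston

If Oracle ASM or a shared password file location is in use, the file name and location may differ; treat the paths above as placeholders.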

9.3.8 Reset the TDE Master Encryption Key

The database encryption wallet on a physical standby database must be replaced with a fresh copy of the database encryption wallet from the primary database whenever the TDE master encryption key is reset on the primary database.

Failure to refresh the database encryption wallet on the physical standby database will prevent access to encrypted columns on the physical standby database that are modified after the master encryption key is reset on the primary database.

9.4 Recovering Through the OPEN RESETLOGS Statement

Data Guard allows recovery on a physical standby database to continue after the primary database has been opened with the RESETLOGS option. When an ALTER DATABASE OPEN RESETLOGS statement is issued on the primary database, the incarnation of the database changes, creating a new branch of redo data.

When a physical standby database receives a new branch of redo data, Redo Apply automatically takes the new branch of redo data. For physical standby databases, no manual intervention is required if the standby database did not apply redo data past the new resetlogs SCN (past the start of the new branch of redo data). The following table describes how to resynchronize the standby database with the primary database branch.

If the standby database: Has not applied redo data past the new resetlogs SCN (past the start of the new branch of redo data), and the new redo branch from OPEN RESETLOGS has been registered at the standby

Then: Redo Apply automatically takes the new branch of redo.

Perform these steps: No manual intervention is necessary. The MRP automatically resynchronizes the standby database with the new branch of redo data.

Note: To check whether the new redo branch has been registered at the standby, perform the following query at the primary and standby and verify that the results match:

SELECT resetlogs_id, resetlogs_change# FROM V$DATABASE_INCARNATION WHERE status='CURRENT'

If the standby database: Has applied redo data past the new resetlogs SCN (past the start of the new branch of redo data), and Flashback Database is enabled on the standby database

Then: The standby database is recovered in the future of the new branch of redo data.

Perform these steps:

  1. Follow the procedure in Section 13.3.1 to flash back a physical standby database.

  2. Restart Redo Apply to continue application of redo data onto the new resetlogs branch.

The MRP automatically resynchronizes the standby database with the new branch.

If the standby database: Has applied redo data past the new resetlogs SCN (past the start of the new branch of redo data), and Flashback Database is not enabled on the standby database

Then: The primary database has diverged from the standby on the indicated primary database branch.

Perform these steps: Re-create the physical standby database following the procedures in Chapter 3.

If the standby database: Is missing intervening archived redo log files from the new branch of redo data

Then: The MRP cannot continue until the missing log files are retrieved.

Perform these steps: Locate and register missing archived redo log files from each branch.

If the standby database: Is missing archived redo log files from the end of the previous branch of redo data

Then: The MRP cannot continue until the missing log files are retrieved.

Perform these steps: Locate and register missing archived redo log files from the previous branch.

See Oracle Database Backup and Recovery User's Guide for more information about database incarnations, recovering through an OPEN RESETLOGS operation, and Flashback Database.

9.5 Monitoring Primary, Physical Standby, and Snapshot Standby Databases

This section describes where to find useful information for monitoring primary and standby databases.

Table 9-2 summarizes common primary database management actions and where to find information related to these actions.

Table 9-2 Sources of Information About Common Primary Database Management Actions

Primary Database Action | Primary Site Information | Standby Site Information

Enable or disable a redo thread

  • Alert log

  • V$THREAD

Alert log

Display database role, protection mode, protection level, switchover status, fast-start failover information, and so forth

V$DATABASE

V$DATABASE

Add or drop a redo log file group

  • Alert log

  • V$LOG

  • STATUS column of V$LOGFILE

Alert log

CREATE CONTROLFILE

Alert log

Alert log

Monitor Redo Apply

  • Alert log

  • V$ARCHIVE_DEST_STATUS

  • Alert log

  • V$ARCHIVED_LOG

  • V$LOG_HISTORY

  • V$MANAGED_STANDBY

Change tablespace status

  • V$RECOVER_FILE

  • DBA_TABLESPACES

  • Alert log

  • V$RECOVER_FILE

  • DBA_TABLESPACES

Add or drop a datafile or tablespace

  • DBA_DATA_FILES

  • Alert log

  • V$DATAFILE

  • Alert log

Rename a datafile

  • V$DATAFILE

  • Alert log

  • V$DATAFILE

  • Alert log

Unlogged or unrecoverable operations

  • V$DATAFILE

  • V$DATABASE

Alert log

Monitor redo transport

  • V$ARCHIVE_DEST_STATUS

  • V$ARCHIVED_LOG

  • V$ARCHIVE_DEST

  • Alert log

  • V$ARCHIVED_LOG

  • Alert log

Issue OPEN RESETLOGS or CLEAR UNARCHIVED LOGFILES statements

Alert log

Alert log

Change initialization parameter

Alert log

Alert log


9.5.1 Using Views to Monitor Primary, Physical, and Snapshot Standby Databases

This section shows how to use dynamic performance views to monitor primary, physical standby, and snapshot standby databases.

The following dynamic performance views are discussed:

  • V$DATABASE

  • V$MANAGED_STANDBY

  • V$ARCHIVED_LOG

  • V$LOG_HISTORY

  • V$DATAGUARD_STATUS

  • V$ARCHIVE_DEST


See Also:

Oracle Database Reference for complete reference information about views

9.5.1.1 V$DATABASE

The following query displays the data protection mode, data protection level, database role, and switchover status for a primary, physical standby or snapshot standby database:

SQL> SELECT PROTECTION_MODE, PROTECTION_LEVEL, -
> DATABASE_ROLE ROLE, SWITCHOVER_STATUS -
> FROM V$DATABASE;

The following query displays fast-start failover status:

SQL> SELECT FS_FAILOVER_STATUS "FSFO STATUS", -
> FS_FAILOVER_CURRENT_TARGET TARGET, -
> FS_FAILOVER_THRESHOLD THRESHOLD, -
> FS_FAILOVER_OBSERVER_PRESENT "OBSERVER PRESENT" -
> FROM V$DATABASE;

9.5.1.2 V$MANAGED_STANDBY

The following query displays Redo Apply and redo transport status on a physical standby database:

SQL> SELECT PROCESS, STATUS, THREAD#, SEQUENCE#,-
> BLOCK#, BLOCKS FROM V$MANAGED_STANDBY;
 
PROCESS STATUS       THREAD#    SEQUENCE#  BLOCK#     BLOCKS
------- ------------ ---------- ---------- ---------- ----------
RFS     ATTACHED     1          947        72         72
MRP0    APPLYING_LOG 1          946        10         72

The sample output shows that an RFS process completed archiving a redo log file with a sequence number of 947 and that Redo Apply is actively applying an archived redo log file with a sequence number of 946. Redo Apply is currently recovering block number 10 of the 72-block archived redo log file.

9.5.1.3 V$ARCHIVED_LOG

The following query displays information about archived redo log files that have been received by a physical or snapshot standby database from a primary database:

SQL> SELECT THREAD#, SEQUENCE#, FIRST_CHANGE#, -
> NEXT_CHANGE# FROM V$ARCHIVED_LOG;
 
THREAD#    SEQUENCE#  FIRST_CHANGE# NEXT_CHANGE#
---------- ---------- ------------- ------------
1          945        74651         74739
1          946        74739         74772
1          947        74772         74795

The sample output shows that three archived redo log files have been received from the primary database.

9.5.1.4 V$LOG_HISTORY

The following query displays archived log history information:

SQL> SELECT THREAD#, SEQUENCE#, FIRST_CHANGE#, -
> NEXT_CHANGE# FROM V$LOG_HISTORY;

9.5.1.5 V$DATAGUARD_STATUS

The following query displays messages generated by Data Guard events that caused a message to be written to the alert log or to a server process trace file:

SQL> SELECT MESSAGE FROM V$DATAGUARD_STATUS;

9.5.1.6 V$ARCHIVE_DEST

The following query shows the status of each redo transport destination, and for redo transport destinations that are standby databases, the SCN of the last primary database redo applied at that standby database:

SQL> SELECT DEST_ID, STATUS, APPLIED_SCN FROM V$ARCHIVE_DEST WHERE TARGET='STANDBY';
   
DEST_ID    STATUS    APPLIED_SCN
---------- --------- -----------
2          VALID     439054
3          VALID     439054 

9.6 Tuning Redo Apply

The Active Data Guard 11g Best Practices (includes best practices for Redo Apply) white paper describes how to optimize Redo Apply and media recovery performance. This paper is available on the Oracle Maximum Availability Architecture (MAA) home page at:

http://www.oracle.com/goto/maa


See Also:

My Oracle Support note 454848.1 at http://support.oracle.com for information about the installation and use of the Standby Statspack, which can be used to collect Redo Apply performance data from a physical standby database

9.7 Managing a Snapshot Standby Database

A snapshot standby database is a fully updatable standby database. A snapshot standby database receives and archives, but does not apply, redo data from a primary database. Redo data received from the primary database is applied when a snapshot standby database is converted back into a physical standby database, after discarding all local updates to the snapshot standby database.

A snapshot standby database typically diverges from its primary database over time because redo data from the primary database is not applied as it is received. Local updates to the snapshot standby database will cause additional divergence. The data in the primary database is fully protected however, because a snapshot standby can be converted back into a physical standby database at any time, and the redo data received from the primary will then be applied.

A snapshot standby database provides disaster recovery and data protection benefits that are similar to those of a physical standby database. Snapshot standby databases are best used in scenarios where the benefit of having a temporary, updatable snapshot of the primary database justifies increased time to recover from primary database failures.

9.7.1 Converting a Physical Standby Database into a Snapshot Standby Database

Perform the following steps to convert a physical standby database into a snapshot standby database:

  1. Stop Redo Apply, if it is active.

  2. Ensure that the database is mounted, but not open.

  3. Ensure that a fast recovery area has been configured. It is not necessary for flashback database to be enabled.

  4. Issue the following SQL statement to perform the conversion:

    SQL> ALTER DATABASE CONVERT TO SNAPSHOT STANDBY;
    

Note:

A physical standby database that is managed by the Data Guard broker can be converted into a snapshot standby database using either DGMGRL or Oracle Enterprise Manager. See Oracle Data Guard Broker for more details.
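
For example, in a broker configuration the conversion is a single DGMGRL command; the database name boston is hypothetical:

DGMGRL> CONVERT DATABASE 'boston' TO SNAPSHOT STANDBY;

The corresponding CONVERT DATABASE 'boston' TO PHYSICAL STANDBY command reverses the operation described in Section 9.7.3.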

9.7.2 Using a Snapshot Standby Database

A snapshot standby database can be opened in read-write mode and is fully updatable.

A snapshot standby database has the following characteristics:

  • A snapshot standby database cannot be the target of a switchover or failover. A snapshot standby database must first be converted back into a physical standby database before performing a role transition to it.

  • A snapshot standby database cannot be the only standby database in a Maximum Protection Data Guard configuration.


Note:

Flashback Database is used to convert a snapshot standby database back into a physical standby database. Any operation that cannot be reversed using Flashback Database technology will prevent a snapshot standby from being converted back to a physical standby.

For information about some of the limitations of Flashback Database, see Oracle Database Backup and Recovery User's Guide.


9.7.3 Converting a Snapshot Standby Database into a Physical Standby Database

Perform the following steps to convert a snapshot standby database into a physical standby database:

  1. On an Oracle Real Application Clusters (Oracle RAC) database, shut down all but one instance.

  2. Ensure that the database is mounted, but not open.

  3. Issue the following SQL statement to perform the conversion:

    SQL> ALTER DATABASE CONVERT TO PHYSICAL STANDBY;
    

The database is dismounted after conversion and must be restarted.

Redo data received while the database was a snapshot standby database will be automatically applied when Redo Apply is started.
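
For example, after the conversion completes, the database can be remounted and Redo Apply restarted using statements shown earlier in this chapter:

SQL> STARTUP MOUNT;

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE -
> DISCONNECT FROM SESSION;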


Note:

A snapshot standby database must be opened at least once in read-write mode before it can be converted into a physical standby database.

Oracle Data Guard Concepts and Administration, 11g Release 2 (11.2)

Oracle® Data Guard

Concepts and Administration

11g Release 2 (11.2)

E25608-04

December 2012


Oracle Data Guard Concepts and Administration, 11g Release 2 (11.2)

E25608-04

Copyright © 1999, 2012, Oracle and/or its affiliates. All rights reserved.

Primary Author:  Kathy Rich

Contributors: Andy Adams, Beldalker Anand, Rick Anderson, Andrew Babb, Pam Bantis, Tammy Bednar, Barbara Benton, Chipper Brown, Larry Carpenter, George Claborn, Laurence Clarke, Jay Davison, Jeff Detjen, Ray Dutcher, B.G. Garin, Mahesh Girkar, Yosuke Goto, Ray Guzman, Susan Hillson, Mark Johnson, Rajeev Jain, Joydip Kundu, J. William Lee, Steve Lee, Steve Lim, Nitin Karkhanis, Steve McGee, Bob McGuirk, Joe Meeks, Steve Moriarty, Muthu Olagappan, Deborah Owens, Ashish Ray, Antonio Romero, Mike Schloss, Vivian Schupmann, Mike Smith, Vinay Srihali, Morris Tao, Lawrence To, Doug Utzig, Ric Van Dyke, Doug Voss, Ron Weiss, Jingming Zhang

This software and related documentation are provided under a license agreement containing restrictions on use and disclosure and are protected by intellectual property laws. Except as expressly permitted in your license agreement or allowed by law, you may not use, copy, reproduce, translate, broadcast, modify, license, transmit, distribute, exhibit, perform, publish, or display any part, in any form, or by any means. Reverse engineering, disassembly, or decompilation of this software, unless required by law for interoperability, is prohibited.

The information contained herein is subject to change without notice and is not warranted to be error-free. If you find any errors, please report them to us in writing.

If this is software or related documentation that is delivered to the U.S. Government or anyone licensing it on behalf of the U.S. Government, the following notice is applicable:

U.S. GOVERNMENT END USERS: Oracle programs, including any operating system, integrated software, any programs installed on the hardware, and/or documentation, delivered to U.S. Government end users are "commercial computer software" pursuant to the applicable Federal Acquisition Regulation and agency-specific supplemental regulations. As such, use, duplication, disclosure, modification, and adaptation of the programs, including any operating system, integrated software, any programs installed on the hardware, and/or documentation, shall be subject to license terms and license restrictions applicable to the programs. No other rights are granted to the U.S. Government.

This software or hardware is developed for general use in a variety of information management applications. It is not developed or intended for use in any inherently dangerous applications, including applications that may create a risk of personal injury. If you use this software or hardware in dangerous applications, then you shall be responsible to take all appropriate fail-safe, backup, redundancy, and other measures to ensure its safe use. Oracle Corporation and its affiliates disclaim any liability for any damages caused by use of this software or hardware in dangerous applications.

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Intel and Intel Xeon are trademarks or registered trademarks of Intel Corporation. All SPARC trademarks are used under license and are trademarks or registered trademarks of SPARC International, Inc. AMD, Opteron, the AMD logo, and the AMD Opteron logo are trademarks or registered trademarks of Advanced Micro Devices. UNIX is a registered trademark of The Open Group.

This software or hardware and documentation may provide access to or information on content, products, and services from third parties. Oracle Corporation and its affiliates are not responsible for and expressly disclaim all warranties of any kind with respect to third-party content, products, and services. Oracle Corporation and its affiliates will not be responsible for any loss, costs, or damages incurred due to your access to or use of third-party content, products, or services.

SQL Statements Relevant to Data Guard

16 SQL Statements Relevant to Data Guard

This chapter summarizes the SQL and SQL*Plus statements that are useful for performing operations on standby databases in a Data Guard environment. This chapter includes the following topics:

  • ALTER DATABASE Statements

  • ALTER SESSION Statements

  • ALTER SYSTEM Statements

This chapter contains only the syntax and a brief summary of particular SQL statements. You must refer to the Oracle Database SQL Language Reference for complete syntax and descriptions about these and other SQL statements.

See Chapter 14 for a list of initialization parameters that you can set and dynamically update using the ALTER SYSTEM SET statement.

16.1 ALTER DATABASE Statements

Table 16-1 describes ALTER DATABASE statements that are relevant to Data Guard.

Table 16-1 ALTER DATABASE Statements Used in Data Guard Environments

ALTER DATABASE Statement | Description

ADD [STANDBY] LOGFILE [THREAD integer] [GROUP integer] filespec

Adds one or more online redo log file groups or standby redo log file groups to the specified thread, making the log files available to the instance to which the thread is assigned.

See Section 9.3.5 for an example of this statement.

ADD [STANDBY] LOGFILE MEMBER 'filename' [REUSE] TO logfile-descriptor

Adds new members to existing online redo log file groups or standby redo log file groups.

[ADD|DROP] SUPPLEMENTAL LOG DATA {PRIMARY KEY|UNIQUE INDEX} COLUMNS

This statement is for logical standby databases only.

Use it to enable full supplemental logging before you create a logical standby database. This is necessary because supplemental logging is the source of change to a logical standby database. To implement full supplemental logging, you must specify either the PRIMARY KEY COLUMNS or the UNIQUE INDEX COLUMNS keyword on this statement.

See Oracle Database SQL Language Reference for more information.

COMMIT TO SWITCHOVER

Performs a switchover to:

  • Change the current primary database to the standby database role

  • Change one standby database to the primary database role.

Note: On logical standby databases, you issue the ALTER DATABASE PREPARE TO SWITCHOVER statement to prepare the database for the switchover before you issue the ALTER DATABASE COMMIT TO SWITCHOVER statement.

See Section 8.2.1 and Section 8.3.1 for examples of this statement. Also see Oracle Database SQL Language Reference for information about the complete syntax for this statement.

CONVERT TO [[PHYSICAL|SNAPSHOT] STANDBY] DATABASE

Converts a physical standby database into a snapshot standby database and vice versa.

CREATE [PHYSICAL|LOGICAL] STANDBY CONTROLFILE AS 'filename' [REUSE]

Creates a control file to be used to maintain a physical or a logical standby database. Issue this statement on the primary database.

See Section 3.2.2 for an example of this statement.

DROP [STANDBY] LOGFILE logfile_descriptor

Drops all members of an online redo log file group or standby redo log file group.

See Section 9.3.5 for an example of this statement.

DROP [STANDBY] LOGFILE MEMBER 'filename'

Drops one or more online redo log file members or standby redo log file members.

[NO]FORCE LOGGING

Controls whether or not the Oracle database logs all changes in the database except for changes to temporary tablespaces and temporary segments. The [NO]FORCE LOGGING clause is required to prevent inconsistent standby databases.

The primary database must be mounted but not open when you issue this statement. See Section 3.1.1 for an example of this statement.

GUARD

Controls user access to tables in a logical standby database. Possible values are ALL, STANDBY, and NONE. See Section 10.2 for more information.

MOUNT [STANDBY DATABASE]

Mounts a standby database, allowing the standby instance to receive redo data from the primary instance.

OPEN

Opens a previously started and mounted database:

  • Physical standby databases are opened in read-only mode, restricting users to read-only transactions and preventing the generation of redo data.

  • Logical standby databases are opened in read/write mode.

PREPARE TO SWITCHOVER

This statement is for logical standby databases only.

It prepares the primary database and the logical standby database for a switchover by building the LogMiner dictionary before the switchover takes place. After the dictionary build has completed, issue the ALTER DATABASE COMMIT TO SWITCHOVER statement to switch the roles of the primary and logical standby databases.

See Section 8.3.1 for examples of this statement. Also see Oracle Database SQL Language Reference for information about the complete syntax for this statement.

RECOVER MANAGED STANDBY DATABASE [ { DISCONNECT [FROM SESSION] | USING CURRENT LOGFILE | NODELAY | UNTIL CHANGE integer }...]

This statement starts and controls Redo Apply on physical standby databases. You can use the RECOVER MANAGED STANDBY DATABASE clause on a physical standby database that is mounted, open, or closed. See Step 4 in Section 3.2.6 and Section 7.3 for examples.

Note: Several clauses and keywords were deprecated and are supported for backward compatibility only. See Oracle Database SQL Language Reference for more information about these clauses.

RECOVER MANAGED STANDBY DATABASE CANCEL

The CANCEL clause cancels Redo Apply on a physical standby database after applying the current archived redo log file.

Note: Several clauses and keywords were deprecated and are supported for backward compatibility only. See Oracle Database SQL Language Reference for more information about these clauses.

RECOVER MANAGED STANDBY DATABASE FINISH

The FINISH clause initiates failover on the target physical standby database and recovers the current standby redo log files. Use the FINISH clause only in the event of the failure of the primary database. This clause overrides any delay intervals specified.

See Step 4 in Section 8.2.2 for examples.

Note: Several clauses and keywords were deprecated and are supported for backward compatibility only. See Oracle Database SQL Language Reference for more information about these clauses.

REGISTER [OR REPLACE] [PHYSICAL|LOGICAL] LOGFILE filespec

Allows the registration of manually copied archived redo log files.

Note: This command should be issued only after manually copying the corresponding archived redo log file to the standby database. Issuing this command while the log file is in the process of being copied or when the log file does not exist may result in errors on the standby database at a later time.

RECOVER TO LOGICAL STANDBY new_database_name

Instructs apply services to continue applying changes to the physical standby database until you issue the command to convert the database to a logical standby database. See Section 4.2.4.1 for more information.

RESET DATABASE TO INCARNATION integer

Resets the target recovery incarnation for the database from the current incarnation to a different incarnation.

SET STANDBY DATABASE TO MAXIMIZE {PROTECTION|AVAILABILITY|PERFORMANCE}

Use this clause to specify the level of protection for the data in your Data Guard configuration. You specify this clause from the primary database, which must be mounted but not open.

START LOGICAL STANDBY APPLY [INITIAL [scn-value]] [NEW PRIMARY dblink]

This statement is for logical standby databases only. It starts SQL Apply on a logical standby database. See Section 7.4.1 for examples of this statement.

{STOP|ABORT} LOGICAL STANDBY APPLY

This statement is for logical standby databases only. Use the STOP clause to stop SQL Apply on a logical standby database in an orderly fashion. Use the ABORT clause to stop SQL Apply abruptly. See Section 8.3.2 for an example of this statement.

ACTIVATE [PHYSICAL|LOGICAL] STANDBY DATABASE [FINISH APPLY]

Performs a failover. The standby database must be mounted before it can be activated with this statement.

Note: Do not use the ALTER DATABASE ACTIVATE STANDBY DATABASE statement to failover because it causes data loss. Instead, use the following best practices:

  • For physical standby databases, use the ALTER DATABASE RECOVER MANAGED STANDBY DATABASE statement with the FINISH keyword to perform the role transition as quickly as possible with little or no data loss and without rendering other standby databases unusable.

  • For logical standby databases, use the ALTER DATABASE PREPARE TO SWITCHOVER and ALTER DATABASE COMMIT TO SWITCHOVER statements.


16.2 ALTER SESSION Statements

Table 16-2 describes the ALTER SESSION statements that are relevant to Data Guard.

Table 16-2 ALTER SESSION Statements Used in Data Guard Environments

ALTER SESSION Statement | Description

ALTER SESSION [ENABLE|DISABLE] GUARD

This statement is for logical standby databases only.

This statement allows privileged users to turn the database guard on and off for the current session.

See Section 10.5.4 for more information.

ALTER SESSION SYNC WITH PRIMARY

This statement is for physical standby databases only.

This statement synchronizes a physical standby database with the primary database, by blocking until all redo data received by the physical standby at the time of statement invocation has been applied.

See Section 9.2.1.3 for more information.


16.3 ALTER SYSTEM Statements

Table 16-3 describes the ALTER SYSTEM statements that are relevant to Data Guard.

Table 16-3 ALTER SYSTEM Statements Used in Data Guard Environments

ALTER SYSTEM Statement | Description

ALTER SYSTEM FLUSH REDO TO target_db_name [[NO] CONFIRM APPLY]

This statement flushes redo data from a primary database to a standby database and optionally waits for the flushed redo data to be applied to a physical or logical standby database.

This statement must be issued on a mounted, but not open, primary database.
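
For example, at failover time the following might be issued on the mounted primary; boston is a hypothetical DB_UNIQUE_NAME for the target standby, and CONFIRM APPLY asks the statement to wait until the flushed redo has been applied there:

SQL> ALTER SYSTEM FLUSH REDO TO boston CONFIRM APPLY;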


Getting Started with Data Guard

2 Getting Started with Data Guard

A Data Guard configuration contains a primary database and up to thirty associated standby databases. This chapter describes the following considerations for getting started with Data Guard:

2.1 Standby Database Types

A standby database is a transactionally consistent copy of an Oracle production database that is initially created from a backup copy of the primary database. Once the standby database is created and configured, Data Guard automatically maintains the standby database by transmitting primary database redo data to the standby system, where the redo data is applied to the standby database.

A standby database can be one of these types: a physical standby database, a logical standby database, or a snapshot standby database. If needed, either a physical or a logical standby database can assume the role of the primary database and take over production processing. A Data Guard configuration can include any combination of these types of standby databases.

2.1.1 Physical Standby Databases

A physical standby database is an exact, block-for-block copy of a primary database. A physical standby is maintained as an exact copy through a process called Redo Apply, in which redo data received from a primary database is continuously applied to a physical standby database using the database recovery mechanisms.

A physical standby database can be opened for read-only access and used to offload queries from a primary database. If a license for the Oracle Active Data Guard option has been purchased, Redo Apply can be active while the physical standby database is open, thus allowing queries to return results that are identical to what would be returned from the primary database. This capability is known as the real-time query feature.


See Also:


Benefits of a Physical Standby Database

A physical standby database provides the following benefits:

  • Disaster recovery and high availability

    A physical standby database is a robust and efficient disaster recovery and high availability solution. Easy-to-manage switchover and failover capabilities allow easy role reversals between primary and physical standby databases, minimizing the downtime of the primary database for planned and unplanned outages.

  • Data protection

    A physical standby database can prevent data loss, even in the face of unforeseen disasters. A physical standby database supports all datatypes, and all DDL and DML operations that the primary database can support. It also provides a safeguard against data corruptions and user errors. Storage level physical corruptions on the primary database will not be propagated to a standby database. Similarly, logical corruptions or user errors that would otherwise cause data loss can be easily resolved.

  • Reduction in primary database workload

    Oracle Recovery Manager (RMAN) can use a physical standby database to off-load backups from a primary database, saving valuable CPU and I/O cycles.

    A physical standby database can also be queried while Redo Apply is active, which allows queries to be offloaded from the primary to a physical standby, further reducing the primary workload.

  • Performance

    The Redo Apply technology used by a physical standby database is the most efficient mechanism for keeping a standby database updated with changes being made at a primary database because it applies changes using low-level recovery mechanisms which bypass all SQL level code layers.

2.1.2 Logical Standby Databases

A logical standby database is initially created as an identical copy of the primary database, but it later can be altered to have a different structure. The logical standby database is updated by executing SQL statements. This allows users to access the standby database for queries and reporting at any time. Thus, the logical standby database can be used concurrently for data protection and reporting operations.

Data Guard automatically applies information from the archived redo log file or standby redo log file to the logical standby database by transforming the data in the log files into SQL statements and then executing the SQL statements on the logical standby database. Because the logical standby database is updated using SQL statements, it must remain open. Although the logical standby database is opened in read/write mode, its target tables for the regenerated SQL are available only for read-only operations. While those tables are being updated, they can be used simultaneously for other tasks such as reporting, summations, and queries. Moreover, these tasks can be optimized by creating additional indexes and materialized views on the maintained tables.

A logical standby database has some restrictions on datatypes, types of tables, and types of DDL and DML operations. See Appendix C for information on data type and DDL support on logical standby databases.

Benefits of a Logical Standby Database

A logical standby database is ideal for high availability (HA) while still offering data recovery (DR) benefits. Compared to a physical standby database, a logical standby database provides significant additional HA benefits:

  • Protection against additional kinds of failure

    Because logical standby analyzes the redo and reconstructs logical changes to the database, it can detect and protect against certain kinds of hardware failure on the primary that could potentially be replicated through block level changes. Oracle supports having both physical and logical standbys for the same primary server.

  • Efficient use of resources

    A logical standby database is open read/write while changes on the primary are being replicated. Consequently, a logical standby database can simultaneously be used to meet many other business requirements, for example, it can run reporting workloads that would be problematic for the primary's throughput. It can be used to test new software releases and some kinds of applications on a complete and accurate copy of the primary's data. It can host other applications and additional schemas while protecting data replicated from the primary against local changes. It can be used to assess the impact of certain kinds of physical restructuring (for example, changes to partitioning schemes). Because a logical standby identifies user transactions and replicates only those changes while filtering out background system changes, it can efficiently replicate only transactions of interest.

  • Workload distribution

    Logical standby provides a simple turnkey solution for creating up-to-the-minute, consistent replicas of a primary database that can be used for workload distribution. As the reporting workload increases, additional logical standbys can be created with transparent load distribution without affecting the transactional throughput of the primary server.

  • Optimized for reporting and decision support requirements

    A key benefit of logical standby is that significant auxiliary structures can be created to optimize the reporting workload; structures that could have a prohibitive impact on the primary's transactional response time. A logical standby can have its data physically reorganized into a different storage type with different partitioning, have many different indexes, have on-demand refresh materialized views created and maintained, and it can be used to drive the creation of data cubes and other OLAP data views.

  • Minimizing downtime on software upgrades

    Logical standby can be used to greatly reduce downtime associated with applying patchsets and new software releases. A logical standby can be upgraded to the new release and then switched over to become the active primary. This allows full availability while the old primary is converted to a logical standby and the patchset is applied.

2.1.3 Snapshot Standby Databases

A snapshot standby database is a type of updatable standby database that provides full data protection for a primary database. A snapshot standby database receives and archives, but does not apply, redo data from its primary database. Redo data received from the primary database is applied when a snapshot standby database is converted back into a physical standby database, after discarding all local updates to the snapshot standby database.

A snapshot standby database typically diverges from its primary database over time because redo data from the primary database is not applied as it is received. Local updates to the snapshot standby database will cause additional divergence. The data in the primary database is fully protected however, because a snapshot standby can be converted back into a physical standby database at any time, and the redo data received from the primary will then be applied.

Benefits of a Snapshot Standby Database

A snapshot standby database is a fully updatable standby database that provides disaster recovery and data protection benefits that are similar to those of a physical standby database. Snapshot standby databases are best used in scenarios where the benefit of having a temporary, updatable snapshot of the primary database justifies the increased time to recover from primary database failures.

The benefits of using a snapshot standby database include the following:

  • It provides an exact replica of a production database for development and testing purposes, while maintaining data protection at all times.

  • It can be easily refreshed to contain current production data by converting to a physical standby and resynchronizing.

The ability to create a snapshot standby, test, resynchronize with production, and then again create a snapshot standby and test, is a cycle that can be repeated as often as desired. The same process can be used to easily create and regularly update a snapshot standby for reporting purposes where read/write access to data is required.

2.2 User Interfaces for Administering Data Guard Configurations

You can use the following interfaces to configure, implement, and manage a Data Guard configuration:

  • Oracle Enterprise Manager

    Enterprise Manager provides a GUI interface for the Data Guard broker that automates many of the tasks involved in creating, configuring, and monitoring a Data Guard environment. See Oracle Data Guard Broker and the Oracle Enterprise Manager online Help for information about the GUI and its wizards.

  • SQL*Plus Command-line interface

    Several SQL*Plus statements use the STANDBY keyword to specify operations on a standby database. Other SQL statements do not include standby-specific syntax, but they are useful for performing operations on a standby database. See Chapter 16 for a list of the relevant statements.

  • Initialization parameters

    Several initialization parameters are used to define the Data Guard environment. See Chapter 14 for a list of the relevant initialization parameters.

  • Data Guard broker command-line interface (DGMGRL)

    The DGMGRL command-line interface is an alternative to using Oracle Enterprise Manager. The DGMGRL command-line interface is useful if you want to use the broker to manage a Data Guard configuration from batch programs or scripts. See Oracle Data Guard Broker for complete information.

2.3 Data Guard Operational Prerequisites

The following sections describe operational requirements for using Data Guard:

2.3.1 Hardware and Operating System Requirements

As of Oracle Database 11g, Data Guard provides increased flexibility for Data Guard configurations in which the primary and standby systems may have different CPU architectures, operating systems (for example, Windows & Linux), operating system binaries (32-bit/64-bit), or Oracle database binaries (32-bit/64-bit).

This increased mixed-platform flexibility is subject to the current restrictions documented in the My Oracle Support notes 413484.1 and 1085687.1 at http://support.oracle.com.

Note 413484.1 discusses mixed-platform support and restrictions for physical standbys.

Note 1085687.1 discusses mixed-platform support and restrictions for logical standbys.

The same release of Oracle Database Enterprise Edition must be installed on the primary database and all standby databases, except during rolling database upgrades using logical standby databases.


2.3.2 Oracle Software Requirements

The following list describes Oracle software requirements for using Data Guard:

  • Oracle Data Guard is available only as a feature of Oracle Database Enterprise Edition. It is not available with Oracle Database Standard Edition.


    Note:

    It is possible to simulate a standby database environment with databases running Oracle Database Standard Edition. You can do this by manually transferring archived redo log files using an operating system copy utility or by using custom scripts that periodically send archived redo log files from one database to the other. However, such a configuration does not provide the ease-of-use, manageability, performance, and disaster-recovery capabilities available with Data Guard.

  • Using Data Guard SQL Apply, you can perform a rolling upgrade of the Oracle database software from patch set release n (minimally, release 10.1.0.3) to any higher versioned patch set or major version release. During a rolling upgrade, you can run different releases of the Oracle database on the primary and logical standby databases while you upgrade them, one at a time. For complete information, see Chapter 12, "Using SQL Apply to Upgrade the Oracle Database" and the ReadMe file for the applicable Oracle Database 10g patch set release.

  • The COMPATIBLE database initialization parameter must be set to the same value on all databases in a Data Guard configuration, except when using a logical standby database, which can have a higher COMPATIBLE setting than the primary database.

  • If you are currently running Oracle Data Guard on Oracle8i database software, see Oracle Database Upgrade Guide for complete information about upgrading to Oracle Data Guard 11g.

  • The primary database must run in ARCHIVELOG mode (see the example following this list). See Oracle Database Administrator's Guide for more information.

  • The primary database can be a single instance database or an Oracle Real Application Clusters (Oracle RAC) database. The standby databases can be single instance databases or Oracle RAC databases, and these standby databases can be a mix of physical, logical, and snapshot types. See Oracle Database High Availability Overview for more information about configuring and using Oracle Data Guard with Oracle RAC.

  • Each primary database and standby database must have its own control file.

  • If a standby database is located on the same system as the primary database, the archival directories for the standby database must use a different directory structure than the primary database. Otherwise, the standby database may overwrite the primary database files.

  • To protect against unlogged direct writes in the primary database that cannot be propagated to the standby database, turn on FORCE LOGGING at the primary database before performing datafile backups for standby creation (see the example following this list). Keep the database in FORCE LOGGING mode as long as the standby database is required.

  • The user accounts you use to manage the primary and standby database instances must have SYSDBA system privileges.

  • For operational simplicity, Oracle recommends that when you set up Oracle Automatic Storage Management (Oracle ASM) and Oracle Managed Files (OMF) in a Data Guard configuration that you set it up symmetrically on the primary and standby database(s). That is, if any database in the Data Guard configuration uses Oracle ASM, OMF, or both, then every database in the configuration should use Oracle ASM, OMF, or both, respectively, unless you are purposely implementing a mixed configuration for migration or maintenance purposes. See the scenario in Section 13.5 for more information.


    Note:

    Because some applications that perform updates involving time-based data cannot handle data entered from multiple time zones, consider setting the time zone for the primary and remote standby systems to be the same to ensure the chronological ordering of records is maintained after a role transition.
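
As an illustration of the ARCHIVELOG and FORCE LOGGING requirements listed above, the following SQL*Plus sketch shows one way to enable both on a primary database. It assumes the database can be briefly restarted, because enabling ARCHIVELOG mode requires the database to be mounted but not open.

SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;
SQL> ALTER DATABASE ARCHIVELOG;
SQL> ALTER DATABASE OPEN;
SQL> ALTER DATABASE FORCE LOGGING;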

2.4 Standby Database Directory Structure Considerations

The directory structure of the various standby databases is important because it determines the path names for the standby datafiles, archived redo log files, and standby redo log files. If possible, the datafiles, log files, and control files on the primary and standby systems should have the same names and path names and use Optimal Flexible Architecture (OFA) naming conventions. The archival directories on the standby database should also be identical between sites, including size and structure. This strategy allows other operations such as backups, switchovers, and failovers to execute the same set of steps, reducing the maintenance complexity.


See Also:

Your operating system-specific Oracle documentation for more information about Optimal Flexible Architecture (OFA)

Otherwise, you must set the filename conversion parameters (as shown in Table 2-1) or rename the datafiles. Nevertheless, if you need to use a system with a different directory structure, or to place the standby and primary databases on the same system, you can do so with a minimum of extra administration.

The three basic configuration options are illustrated in Figure 2-1. These include:

  • A standby database on the same system as the primary database that uses a different directory structure than the primary system. This is illustrated in Figure 2-1 as Standby1.

    If you have a standby database on the same system as the primary database, you must use a different directory structure. Otherwise, the standby database attempts to overwrite the primary database files.

  • A standby database on a separate system that uses the same directory structure as the primary system. This is illustrated in Figure 2-1 as Standby2. This is the recommended method.

  • A standby database on a separate system that uses a different directory structure than the primary system. This is illustrated in Figure 2-1 as Standby3.


    Note:

    If any database in the Data Guard configuration uses Oracle ASM, OMF, or both, then every database in the configuration should use Oracle ASM, OMF, or both, respectively. See Chapter 13 for a scenario describing how to set up OMF in a Data Guard configuration.

Figure 2-1 Possible Standby Configurations

Description of Figure 2-1 follows
Description of "Figure 2-1 Possible Standby Configurations"

Table 2-1 describes possible configurations of primary and standby databases and the consequences of each.

Table 2-1 Standby Database Location and Directory Options

Standby System: Same as primary system
Directory Structure: Different than primary system (required)
Consequences:

  • You can either manually rename files or set up the DB_FILE_NAME_CONVERT and LOG_FILE_NAME_CONVERT initialization parameters on the standby database to automatically update the path names for primary database datafiles, archived redo log files, and standby redo log files in the standby database control file. (See Section 3.1.4.)

  • The standby database does not protect against disasters that destroy the system on which the primary and standby databases reside, but it does provide switchover capabilities for planned maintenance.

Standby System: Separate system
Directory Structure: Same as primary system
Consequences:

  • You do not need to rename primary database files, archived redo log files, and standby redo log files in the standby database control file, although you can still do so if you want a new naming scheme (for example, to spread the files among different disks).

  • By locating the standby database on separate physical media, you safeguard the data on the primary database against disasters that destroy the primary system.

Standby System: Separate system
Directory Structure: Different than primary system
Consequences:

  • You can either manually rename files or set up the DB_FILE_NAME_CONVERT and LOG_FILE_NAME_CONVERT initialization parameters on the standby database to automatically rename the datafiles (see Section 3.1.4).

  • By locating the standby database on separate physical media, you safeguard the data on the primary database against disasters that destroy the primary system.
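
For example, with a hypothetical layout in which the primary database keeps its files under /u01/app/oracle/oradata/chicago/ and the standby database keeps its files under /u01/app/oracle/oradata/boston/, the conversion parameters on the standby could be set as follows (the paths shown are illustrative only; see Section 3.1.4 for details):

DB_FILE_NAME_CONVERT='/u01/app/oracle/oradata/chicago/','/u01/app/oracle/oradata/boston/'
LOG_FILE_NAME_CONVERT='/u01/app/oracle/oradata/chicago/','/u01/app/oracle/oradata/boston/'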


Using RMAN to Back Up and Restore Files

11 Using RMAN to Back Up and Restore Files

This chapter describes backup strategies using Oracle Recovery Manager (RMAN) with Data Guard and standby databases. RMAN can perform backups with minimal effect on the primary database and quickly recover from the loss of individual datafiles, or the entire database. RMAN and Data Guard can be used together to simplify the administration of a Data Guard configuration.



Note:

Because a logical standby database is not a block-for-block copy of the primary database, you cannot use a logical standby database to back up the primary database.


11.1 About RMAN File Management in a Data Guard Configuration

RMAN uses a recovery catalog to track filenames for all database files in a Data Guard environment. A recovery catalog is a database schema used by RMAN to store metadata about one or more Oracle databases. The catalog also records where the online redo logs, standby redo logs, tempfiles, archived redo logs, backup sets, and image copies are created.

11.1.1 Interchangeability of Backups in a Data Guard Environment

RMAN commands use the recovery catalog metadata to behave transparently across different physical databases in the Data Guard environment. For example, you can back up a tablespace on a physical standby database and restore and recover it on the primary database. Similarly, you can back up a tablespace on a primary database and restore and recover it on a physical standby database.


Note:

Backups of logical standby databases are not usable at the primary database.

Backups of standby control files and nonstandby control files are interchangeable. For example, you can restore a standby control file on a primary database and a primary control file on a physical standby database. This interchangeability means that you can offload control file backups to one database in a Data Guard environment. RMAN automatically updates the filenames for database files during restore and recovery at the databases.

11.1.2 Association of Backups in a Data Guard Environment

The recovery catalog tracks the files in the Data Guard environment by associating every database file or backup file with a DB_UNIQUE_NAME. The database that creates a file is associated with the file. For example, if RMAN backs up the database with the unique name of standby1, then standby1 is associated with this backup. A backup remains associated with the database that created it unless you use the CHANGE ... RESET DB_UNIQUE_NAME to associate the backup with a different database.

11.1.3 Accessibility of Backups in a Data Guard Environment

The accessibility of a backup is different from its association. In a Data Guard environment, the recovery catalog considers disk backups as accessible only to the database with which they are associated, whereas tape backups created on one database are accessible to all databases. If a backup file is not associated with any database, then the row describing it in the recovery catalog view shows null for the SITE_KEY column. By default, RMAN associates files whose SITE_KEY is null with the target database.

RMAN commands such as BACKUP, RESTORE, and CROSSCHECK work on any accessible backup. For example, for a RECOVER COPY operation, RMAN considers only image copies that are associated with the database as eligible to be recovered. RMAN considers the incremental backups on disk and tape as eligible to recover the image copies. In a database recovery, RMAN considers only the disk backups associated with the database and all files on tape as eligible to be restored.

To illustrate the differences in backup accessibility, assume that databases prod and standby1 reside on different hosts. RMAN backs up datafile 1 on prod to /prmhost/disk1/df1.dbf on the production host and also to tape. RMAN backs up datafile 1 on standby1 to /sbyhost/disk2/df1.dbf on the standby host and also to tape. If RMAN is connected to database prod, then you cannot use RMAN commands to perform operations with the /sbyhost/disk2/df1.dbf backup located on the standby host. However, RMAN does consider the tape backup made on standby1 as eligible to be restored.


Note:

You can FTP a backup from a standby host to a primary host or vice versa, connect as TARGET to the database on this host, and then CATALOG the backup. After a file is cataloged by the target database, the file is associated with the target database.

11.2 About RMAN Configuration in a Data Guard Environment

In a Data Guard configuration, the process of backing up control files, datafiles, and archived logs can be offloaded to the standby system, thereby minimizing the effect of backups on the production system. These backups can be used to recover the primary or standby database.

RMAN uses the DB_UNIQUE_NAME initialization parameter to distinguish one database site from another database site. Thus, it is critical that the uniqueness of DB_UNIQUE_NAME be maintained in a Data Guard configuration.

Only the primary database must be explicitly registered using the RMAN REGISTER DATABASE command. You do this after connecting RMAN to the recovery catalog and primary database as target.

Use the RMAN CONFIGURE command to set the RMAN configurations. When the CONFIGURE command is used with the FOR DB_UNIQUE_NAME option, it sets the RMAN site-specific configuration for the database with the DB_UNIQUE_NAME you specify.

For example, after connecting to the recovery catalog, you could use the following commands at an RMAN prompt to set the default device type to SBT for the BOSTON database that has a DBID of 1625818158. The RMAN SET DBID command is required only if you are not connected to a database as target.

SET DBID 1625818158;
CONFIGURE DEFAULT DEVICE TYPE TO SBT FOR DB_UNIQUE_NAME BOSTON;

11.3 Recommended RMAN and Oracle Database Configurations

This section describes the following RMAN and Oracle Database configurations, each of which can simplify backup and recovery operations:

Configuration Assumptions

The configurations described in this section make the following assumptions:

  • The standby database is a physical standby database, and backups are taken only on the standby database. See Section 11.9.1 for procedural changes if backups are taken on both primary and standby databases.

  • An RMAN recovery catalog is required so that backups taken on one database server can be restored to another database server. It is not sufficient to use only the control file as the RMAN repository because the primary database will have no knowledge of backups taken on the standby database.

    The RMAN recovery catalog organizes backup histories and other recovery-related metadata in a centralized location. The recovery catalog is configured in a database and maintains backup metadata. A recovery catalog does not have the space limitations of the control file and can store more historical data about backups.

    A catalog server, physically separate from the primary and standby sites, is recommended in a Data Guard configuration because a disaster at either site will not affect the ability to recover the latest backups.


    See Also:

    Oracle Database Backup and Recovery User's Guide for more information about managing a recovery catalog

  • All databases in the configuration use Oracle Database 11g Release 1 (11.1).

  • Oracle Secure Backup software or third-party media management software is configured with RMAN to make backups to tape.

11.3.1 Oracle Database Configurations on Primary and Standby Databases

The following Oracle Database configurations are recommended on every primary and standby database in the Data Guard environment:

  • Configure a fast recovery area for each database (the recovery area is local to a database).

    The fast recovery area is a single storage location on a file system or Oracle Automatic Storage Management (Oracle ASM) disk group where all files needed for recovery reside. These files include the control file, archived logs, online redo logs, flashback logs, and RMAN backups. As new backups and archived logs are created in the fast recovery area, older files (which are either outside of the retention period, or have been backed up to tertiary storage) are automatically deleted to make room for them. In addition, notifications can be set up to alert the DBA when space consumption in the fast recovery area is nearing its predefined limit. The DBA can then take action, such as increasing the recovery area space limit, adding disk hardware, or decreasing the retention period.

    Set the following initialization parameters to configure the fast recovery area:

    DB_RECOVERY_FILE_DEST = <mount point or Oracle ASM Disk Group>
    DB_RECOVERY_FILE_DEST_SIZE = <disk space quota>
    

    See Also:

    Oracle Database Backup and Recovery User's Guide for more information about configuring a fast recovery area

  • Use a server parameter file (SPFILE) so that it can be backed up to save instance parameters in backups.

  • Enable Flashback Database on primary and standby databases (a brief example follows this list).

    When Flashback Database is enabled, Oracle Database maintains flashback logs in the fast recovery area. These logs can be used to roll the database back to an earlier point in time, without requiring a complete restore.


    See Also:

    Oracle Database Backup and Recovery User's Guide for more information about enabling Flashback Database
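
The following SQL*Plus sketch illustrates these recommendations with hypothetical values: a +RECOVERY Oracle ASM disk group and a 500G quota for the fast recovery area. Note that DB_RECOVERY_FILE_DEST_SIZE must be set before DB_RECOVERY_FILE_DEST.

SQL> ALTER SYSTEM SET DB_RECOVERY_FILE_DEST_SIZE = 500G;
SQL> ALTER SYSTEM SET DB_RECOVERY_FILE_DEST = '+RECOVERY';
SQL> ALTER DATABASE FLASHBACK ON;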

11.3.2 RMAN Configurations at the Primary Database

To simplify ongoing use of RMAN, you can set a number of persistent configuration settings for each database in the Data Guard environment. These settings control many aspects of RMAN behavior. For example, you can configure the backup retention policy, default destinations for backups to tape or disk, default backup device type, and so on. You can use the CONFIGURE command to set and change RMAN configurations. The following RMAN configurations are recommended at the primary database:

  1. Connect RMAN to the primary database and to the recovery catalog (a connection example follows this list).

  2. Configure the retention policy for the database as n days:

    CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF <n> DAYS;
    

    This configuration lets you keep the backups necessary to perform database recovery to any point in time within the specified number of days.

    Use the DELETE OBSOLETE command to delete any backups that are not required (per the retention policy in place) to perform recovery within the specified number of days.

  3. Specify when archived logs can be deleted with the CONFIGURE ARCHIVELOG DELETION POLICY command. For example, if you want to delete logs after ensuring that they shipped to all destinations, use the following configuration:

    CONFIGURE ARCHIVELOG DELETION POLICY TO SHIPPED TO ALL STANDBY;
    

    If you want to delete logs after ensuring that they were applied on all standby destinations, use the following configuration:

    CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY;
    
  4. Configure the connect string for the primary database and all standby databases, so that RMAN can connect remotely and perform resynchronization when the RESYNC CATALOG FROM DB_UNIQUE_NAME command is used. When you connect to the target instance, you must provide a net service name. This requirement applies even if the other database instance from where the resynchronization is done is on the local host. The target and remote instances must use the same SYSDBA password, which means that both instances must already have password files. You can create the password file with a single password so you can start all the database instances with that password file. For example, if the TNS alias to connect to a standby in Boston is boston_conn_str, you can use the following command to configure the connect identifier for the BOSTON database site:

    CONFIGURE DB_UNIQUE_NAME BOSTON CONNECT IDENTIFIER 'boston_conn_str';
    

    Note that the 'boston_conn_str' does not include a username and password. It contains only the Oracle Net service name that can be used from any database site to connect to the BOSTON database site.

    After connect identifiers are configured for all standby databases, you can verify the list of standbys by using the LIST DB_UNIQUE_NAME OF DATABASE command.
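
For reference, the connections described in step 1 might be established as follows. The net service names primary and catdb, and the catalog owner rman, are hypothetical; RMAN prompts for the passwords.

% rman
RMAN> CONNECT TARGET sys@primary
RMAN> CONNECT CATALOG rman@catdb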


11.3.3 RMAN Configurations at a Standby Database Where Backups are Performed

The following RMAN configurations are recommended at a standby database where backups are done:

  1. Connect RMAN to the standby database (where backups are performed) as target, and to the recovery catalog.

  2. Enable automatic backup of the control file and the server parameter file:

    CONFIGURE CONTROLFILE AUTOBACKUP ON;
    
  3. Skip backing up datafiles for which there already exists a valid backup with the same checkpoint:

    CONFIGURE BACKUP OPTIMIZATION ON;
    
  4. Configure the tape channels to create backups as required by media management software:

    CONFIGURE CHANNEL DEVICE TYPE SBT PARMS '<channel parameters>';
    
  5. Specify when the archived logs can be deleted with the CONFIGURE ARCHIVELOG DELETION POLICY command.

    Because the logs are backed up at the standby site, it is recommended that you configure the BACKED UP option for the log deletion policy, as shown in the example following this list.
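
For example, a deletion policy that allows archived logs to be deleted only after they have been backed up at least once to tape might be configured as follows (the count of 1 is an assumption; choose a value that matches your backup strategy):

CONFIGURE ARCHIVELOG DELETION POLICY TO BACKED UP 1 TIMES TO DEVICE TYPE SBT;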


See Also:

Oracle Database Backup and Recovery User's Guide for more information about enabling deletion policies for archived redo logs

11.3.4 RMAN Configurations at a Standby Where Backups Are Not Performed

The following RMAN configurations are recommended at a standby database where backups are not done:

  1. Connect RMAN to the standby database as target, and to the recovery catalog.

  2. Enable automatic deletion of archived logs once they are applied at the standby database:

    CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY;
    

11.4 Backup Procedures

This section describes the RMAN scripts and procedures used to back up Oracle Database in a Data Guard configuration.


Note:

Oracle's Maximum Availability Architecture (MAA) best practices recommend that backups be taken at both the primary and the standby databases to reduce MTTR in case of double outages, and to avoid introducing new site practices upon switchover and failover.

Backups of Server Parameter Files

Prior to Oracle Database 11g, backups of server parameter files (SPFILEs) were assumed to be usable at any other standby database. However, in practice, it is not possible for all standby databases to use the same SPFILE. To address this problem, RMAN does not allow an SPFILE backup taken at one database site to be used at another database site. This restriction is in place only when the COMPATIBLE initialization parameter is set to 11.0.0 or higher.

The standby database allows you to offload all backup operations to one specific standby database, except the backups of the SPFILE. However, if the COMPATIBLE initialization parameter is set to 11.0.0 or higher, the SPFILE can be backed up to disk and cataloged manually at standby sites where backups are written to tape. The additional metadata stored in SPFILE backup sets enables RMAN to identify which database SPFILE is contained in which backup set. Thus, the appropriate SPFILE backup is chosen during restore from tape.

11.4.1 Using Disk as Cache for Tape Backups

The fast recovery area on the standby database can serve as a disk cache for tape backup. Disk is used as the primary storage for backups, with tape providing long term, archival storage. Incremental tape backups are taken daily and full tape backups are taken weekly. The commands used to perform these backups are described in the following sections.

11.4.1.1 Commands for Daily Tape Backups Using Disk as Cache

When deciding on your backup strategy, Oracle recommends that you take advantage of daily incremental backups. Datafile image copies can be rolled forward with the latest incremental backups, thereby providing up-to-date datafile image copies at all times. RMAN uses the resulting image copy for media recovery just as it would use a full image copy taken at that system change number (SCN), without the overhead of performing a full image copy of the database every day. An additional advantage is that the time-to-recover is reduced because the image copy is updated with the latest block changes and fewer redo logs are required to bring the database back to the current state.

To implement daily incremental backups, a full database backup is taken on the first day, followed by an incremental backup on day two. Archived redo logs can be used to recover the database to any point in either day. For day three and onward, the previous day's incremental backup is merged with the datafile copy and a current incremental backup is taken, allowing fast recovery to any point within the last day. Redo logs can be used to recover the database to any point during the current day.

The script to perform daily backups looks as follows (the last line, DELETE ARCHIVELOG ALL, is needed only if the fast recovery area is not used to store logs):

RESYNC CATALOG FROM DB_UNIQUE_NAME ALL;
RECOVER COPY OF DATABASE WITH TAG 'OSS';
BACKUP DEVICE TYPE DISK INCREMENTAL LEVEL 1 FOR RECOVER OF COPY WITH TAG 'OSS' DATABASE;
BACKUP DEVICE TYPE SBT ARCHIVELOG ALL;
BACKUP BACKUPSET ALL;
DELETE ARCHIVELOG ALL;

The standby control file will be automatically backed up at the conclusion of the backup operation because control file autobackup is enabled.

Explanations for what each command in the script does are as follows:

  • RESYNC CATALOG FROM DB_UNIQUE_NAME ALL

    Resynchronizes the information from all other database sites (primary and other standby databases) in the Data Guard setup that are known to the recovery catalog. For RESYNC CATALOG FROM DB_UNIQUE_NAME to work, RMAN should be connected to the target using the Oracle Net service name and all databases must use the same password file.

  • RECOVER COPY OF DATABASE WITH TAG 'OSS'

    Rolls forward the level 0 copy of the database by applying the level 1 incremental backup taken the day before. In the example script just shown, the previous day's incremental level 1 was tagged OSS. This incremental is generated by the BACKUP DEVICE TYPE DISK ... DATABASE command. On the first day this command is run, there is no roll forward because there is no level 1 incremental yet. A level 0 incremental is created by the BACKUP DEVICE TYPE DISK ... DATABASE command. Again, on the second day there is no roll forward because there is only a level 0 incremental. A level 1 incremental tagged OSS is created by the BACKUP DEVICE TYPE DISK ... DATABASE command. On the third and following days, the roll forward is performed using the level 1 incremental tagged OSS created on the previous day.

  • BACKUP DEVICE TYPE DISK INCREMENTAL LEVEL 1 FOR RECOVER OF COPY WITH TAG 'OSS' DATABASE

    Creates a new level 1 incremental backup. On the first day this command is run, this will be a level 0 incremental. On the second and following days, this will be a level 1 incremental.

  • BACKUP DEVICE TYPE SBT ARCHIVELOG ALL

    Backs up archived logs to tape according to the deletion policy in place.

  • BACKUP BACKUPSET ALL

    Backs up any backup sets created as a result of incremental backup creation.

  • DELETE ARCHIVELOG ALL

    Deletes archived logs according to the log deletion policy set by the CONFIGURE ARCHIVELOG DELETION POLICY command. If the archived logs are in a fast recovery area, then they are automatically deleted when more disk space is required. Therefore, you need to use this command only if you explicitly want to delete logs each day.

11.4.1.2 Commands for Weekly Tape Backups Using Disk as Cache

To back up all recovery-related files to tape, use the following command once a week:

BACKUP RECOVERY FILES;

This ensures that all current incremental, image copy, and archived log backups on disk are backed up to tape.

11.4.2 Performing Backups Directly to Tape

Oracle's Media Management Layer (MML) API lets third-party vendors build a media manager, software that works with RMAN and the vendor's hardware to allow backups to sequential media devices such as tape drives. A media manager handles loading, unloading, and labeling of sequential media such as tapes. You must install Oracle Secure Backup or third-party media management software to use RMAN with sequential media devices.

Take the following steps to configure RMAN so that backups are performed directly to tape by default:

  1. Connect RMAN to the standby database (as the target database) and recovery catalog.

  2. Execute the CONFIGURE command as follows:

    CONFIGURE DEFAULT DEVICE TYPE TO SBT;
    

In this scenario, full backups are taken weekly, with incremental backups taken daily on the standby database.


See Also:

Oracle Database Backup and Recovery User's Guide for more information about how to configure RMAN for use with a media manager

11.4.2.1 Commands for Daily Backups Directly to Tape

Take the following steps to perform daily backups directly to tape:

  1. Connect RMAN to the standby database (as target database) and to the recovery catalog.

  2. Execute the following RMAN commands:

    RESYNC CATALOG FROM DB_UNIQUE_NAME ALL;
    BACKUP AS BACKUPSET INCREMENTAL LEVEL 1 DATABASE PLUS ARCHIVELOG; 
    DELETE ARCHIVELOG ALL;
    

These commands resynchronize the information from all other databases in the Data Guard environment. They also create a level 1 incremental backup of the database, including all archived logs. On the first day this script is run, if no level 0 backups are found, then a level 0 backup is created.

The DELETE ARCHIVELOG ALL command is necessary only if all archived log files are not in a fast recovery area.

11.4.2.2 Commands for Weekly Backups Directly to Tape

One day a week, take the following steps to perform a weekly backup directly to tape:

  1. Connect RMAN to the standby database (as target database) and to the recovery catalog.

  2. Execute the following RMAN commands:

    BACKUP AS BACKUPSET INCREMENTAL LEVEL 0 DATABASE PLUS ARCHIVELOG;
    DELETE ARCHIVELOG ALL;
    

These commands create a level 0 database backup that includes all archived logs.

The DELETE ARCHIVELOG ALL command is necessary only if all archived log files are not in a fast recovery area.

11.5 Registering and Unregistering Databases in a Data Guard Environment

Only the primary database must be explicitly registered using the REGISTER DATABASE command. You do this after connecting RMAN to the recovery catalog and primary database as TARGET.

A new standby is automatically registered in the recovery catalog when you connect to a standby database or when the CONFIGURE DB_UNIQUE_NAME command is used to configure the connect identifier.

To unregister information about a specific standby database, you can use the UNREGISTER DB_UNIQUE_NAME command. When a standby database is completely removed from a Data Guard environment, the database information in the recovery catalog can also be removed after you connect to another database in the same Data Guard environment. The backups that were associated with the database that was unregistered are still usable by other databases. You can associate these backups with any other existing database by using the CHANGE BACKUP RESET DB_UNIQUE_NAME command.

When the UNREGISTER DB_UNIQUE_NAME command is used with the INCLUDING BACKUPS option, the metadata for all the backup files associated with the database being removed is also removed from the recovery catalog.
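
For example, to remove a decommissioned standby whose DB_UNIQUE_NAME is standby1 (a hypothetical name), together with the metadata for all of its backups, you might issue the following command after connecting to the recovery catalog:

UNREGISTER DB_UNIQUE_NAME standby1 INCLUDING BACKUPS;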

11.6 Reporting in a Data Guard Environment

Use the RMAN LIST, REPORT, and SHOW commands with the FOR DB_UNIQUE_NAME clause to view information about a specific database.

For example, after connecting to the recovery catalog, you could use the following commands to display information for a database with a DBID of 1625818158 and to list the databases in the Data Guard environment. The SET DBID command is required only if you are not connected to a database as TARGET. The last three commands list archived logs, database file names, and RMAN configuration information for a database with a DB_UNIQUE_NAME of BOSTON.

SET DBID 1625818158;
LIST DB_UNIQUE_NAME OF DATABASE;
LIST ARCHIVELOG ALL FOR DB_UNIQUE_NAME BOSTON;
REPORT SCHEMA FOR DB_UNIQUE_NAME BOSTON;
SHOW ALL FOR DB_UNIQUE_NAME BOSTON;

11.7 Performing Backup Maintenance in a Data Guard Environment

The files in a Data Guard environment (datafiles, archived logs, backup pieces, image copies, and proxy copies) are associated with a database through use of the DB_UNIQUE_NAME parameter. Therefore, it is important that the value supplied for DB_UNIQUE_NAME be unique for each database in a Data Guard environment. This information, along with file-sharing attributes, is used to determine which files can be accessed during various RMAN operations.

File sharing attributes state that files on disk are accessible only at the database with which they are associated, whereas all files on tape are assumed to be accessible by all databases. RMAN commands such as BACKUP and RESTORE, as well as other maintenance commands, work according to this assumption. For example, during a roll-forward operation of an image copy at a database, only image copies associated with the database are rolled forward. Likewise, all incremental backups on disk and all incremental backups on tape will be used to roll forward the image copies. Similarly, during recovery operations, only disk backups associated with the database and files on tape will be considered as sources for backups.


See Also:

Oracle Database Backup and Recovery Reference for detailed information about RMAN commands

11.7.1 Changing Metadata in the Recovery Catalog

You can use the RMAN CHANGE command with various operands to change metadata in the recovery catalog, as described in the following sections.

Changing File Association from One Standby Database to Another

Use the CHANGE command with the RESET DB_UNIQUE_NAME option to alter the association of files from one database to another within a Data Guard environment. The CHANGE command is useful when disk backups or archived logs are transferred from one database to another and you want to use them on the database to which they were transferred. By using the FOR DB_UNIQUE_NAME and RESET DB_UNIQUE_NAME TO options, the CHANGE command can also change the association of a file from one database to another without having to connect directly to either database.
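
As a sketch only (the database names standby1 and standby2 are hypothetical, and Oracle Database Backup and Recovery Reference documents the exact syntax), re-associating disk backups from one standby with another might look like the following:

CHANGE BACKUP FOR DB_UNIQUE_NAME standby1 RESET DB_UNIQUE_NAME TO standby2;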

Changing DB_UNIQUE_NAME for a Database

If the value of the DB_UNIQUE_NAME initialization parameter changes for a database, the same change must be made in the Data Guard environment. After RMAN connects to that database instance, the recovery catalog knows both the old and new values of DB_UNIQUE_NAME. To merge the information for the old and new values within the recovery catalog schema, you must use the RMAN CHANGE DB_UNIQUE_NAME command. If RMAN is not connected to the instance with the changed DB_UNIQUE_NAME parameter, then the CHANGE DB_UNIQUE_NAME command can also be used to rename the DB_UNIQUE_NAME in the recovery catalog schema. For example, if the instance parameter value for a database was changed from BOSTON_A to BOSTON_B, the following command should be executed at the RMAN prompt after connecting to a target database and the recovery catalog:

CHANGE DB_UNIQUE_NAME FROM BOSTON_A TO BOSTON_B;

Making Backups Unavailable or Removing Their Metadata

Use CHANGE command options such as AVAILABLE, UNAVAILABLE, KEEP, and UNCATALOG to make backups available or unavailable for restore and recovery purposes, and to keep or remove their metadata.


See Also:

Oracle Database Backup and Recovery Reference for more information about the RMAN CHANGE command

11.7.2 Deleting Archived Logs or Backups

Use the DELETE command to delete backup sets, image copies, archived logs, or proxy copies. To delete only files that are associated with a specific database, you must use the FOR DB_UNIQUE_NAME option with the DELETE command.

File metadata is deleted for all successfully deleted files associated with the current target database (or for files that are not associated with any known database). If a file could not be successfully deleted, you can use the FORCE option to remove the file's metadata.

When a file associated with another database is deleted successfully, its metadata in the recovery catalog is also deleted. Any files that are associated with other databases, and that could not be successfully deleted, are listed at the completion of the DELETE command, along with instructions for you to perform the same operation at the database with which the files are associated (files are grouped by database). Note that the FORCE option cannot be used to override this behavior. If you are certain that deleting the metadata for the non-deletable files will not cause problems, you can use the CHANGE RESET DB_UNIQUE_NAME command to change the metadata for association of files with the database and use the DELETE command with the FORCE option to delete the metadata for the file.


See Also:

Oracle Database Backup and Recovery Reference for more information about the RMAN DELETE command

11.7.3 Validating Recovery Catalog Metadata

Use the CROSSCHECK command to validate and update file status in the recovery catalog schema. To validate files associated with a specific database, use the FOR DB_UNIQUE_NAME option with the CROSSCHECK command.

Metadata for all files associated with the current target database (or for any files that are not associated with any database) will be marked AVAILABLE or EXPIRED according to the results of the CROSSCHECK operation.

If a file associated with another database is successfully inspected, its metadata in the recovery catalog is also changed to AVAILABLE. Any files that are associated with other databases, and that could not be inspected successfully, are listed at the completion of the CROSSCHECK command, along with instructions for you to perform the same operation at the database with which the files are associated (files are grouped by site). If you are certain of the configuration and still want to change status metadata for unavailable files, you can use the CHANGE RESET DB_UNIQUE_NAME command to change metadata for association of files with the database and execute the CROSSCHECK command to update status metadata to EXPIRED.


See Also:

Oracle Database Backup and Recovery Reference for more information about the RMAN CROSSCHECK command

11.8 Recovery Scenarios in a Data Guard Environment

The examples in the following sections assume you are restoring files from tape to the same system on which the backup was created. If you need to restore files to a different system, you need to configure the channels for that system before executing restore and recover commands. You can set the configuration for a nonexistent database using the SET DBID command and the CONFIGURE command with FOR DB_UNIQUE_NAME. See the Media Management documentation for more information about how to access RMAN backups from different systems.

The following scenarios are described in this section:

11.8.1 Recovery from Loss of Datafiles on the Primary Database

You can recover from loss of datafiles on the primary database by using backups or by using the files on a standby database, as described in the following sections.

Using Backups

Issue the following RMAN commands to restore and recover datafiles. You must be connected to both the primary and recovery catalog databases.

RESTORE DATAFILE n,m...;
RECOVER DATAFILE n,m...;

Issue the following RMAN commands to restore and recover tablespaces. You must be connected to both the primary and recovery catalog databases.

RESTORE TABLESPACE tbs_name1, tbs_name2, ...;
RECOVER TABLESPACE tbs_name1, tbs_name2, ...;

Using Files On a Standby Database

As of Oracle Database 11g, you can use files on a standby database to recover a lost datafile on the primary database. This works well if the standby is up-to-date and the network connection is sufficient to support the file copy between the standby and the primary.

Start RMAN and take the following steps to copy the datafiles from the standby to the primary:

  1. Connect to the standby database as the target database:

    CONNECT TARGET sys@standby
    

    You are prompted for a password:

    target database Password: password
    
  2. Connect to the primary database as the auxiliary database:

    CONNECT AUXILIARY sys@primary
    

    You are prompted for a password:

    auxiliary database Password: password
    
  3. Back up the datafile on the standby host across the network to a location on the primary host. For example, suppose that /disk1/df2.dbf is the name of datafile 2 on the standby host. Suppose that /disk8/datafile2.dbf is the name of datafile 2 on the primary host. The following command would copy datafile 2 over the network to /disk9/df2copy.dbf:

    BACKUP AS COPY DATAFILE 2 AUXILIARY FORMAT '/disk9/df2copy.dbf';
    
  4. Exit the RMAN client as follows:

    EXIT;
    
  5. Start RMAN and connect to the primary database as target, and to the recovery catalog:

    CONNECT TARGET sys@primary;
    target database Password: password
    
    CONNECT CATALOG rman@catdb;
    recovery catalog database Password: password
    
  6. Use the CATALOG DATAFILECOPY command to catalog this datafile copy so that RMAN can use it:

    CATALOG DATAFILECOPY '/disk9/df2copy.dbf';
    

    Then use the SWITCH DATAFILE command to switch the datafile copy so that /disk9/df2copy.dbf becomes the current datafile:

    RUN {
      SET NEWNAME FOR DATAFILE 2 TO '/disk9/df2copy.dbf';
      SWITCH DATAFILE 2;
    }
    

11.8.2 Recovery from Loss of Datafiles on the Standby Database

To recover the standby database after the loss of one or more datafiles, you must restore the lost files to the standby database from the backup using the RMAN RESTORE DATAFILE command. If all the archived redo log files required for recovery of damaged files are accessible on disk by the standby database, restart Redo Apply.

If the archived redo log files required for recovery are not accessible on disk, use RMAN to recover the restored datafiles to an SCN/log sequence greater than the last log applied to the standby database, and then restart Redo Apply to continue the application of redo data, as follows:

  1. Connect SQL*Plus to the standby database.

  2. Stop Redo Apply using the SQL ALTER DATABASE ... statement (the statements for stopping and restarting Redo Apply are shown after this list).

  3. In a separate terminal, start RMAN and connect to both the standby and recovery catalog databases (use the TARGET keyword to connect to the standby instance).

  4. Issue the following RMAN commands to restore and recover datafiles on the standby database:

    RESTORE DATAFILE <n,m,...>;
    RECOVER DATABASE;
    

    To restore a tablespace, use the RMAN 'RESTORE TABLESPACE tbs_name1, tbs_name2, ...' command.

  5. At the SQL*Plus prompt, restart Redo Apply using the SQL ALTER DATABASE ... statement.
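
For reference, the statements used to stop and restart Redo Apply in steps 2 and 5 are typically the following (see Section 7.3 and Section 7.4):

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;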


See Also:

Section 7.3 and Section 7.4 for more information about starting and stopping Redo Apply

11.8.3 Recovery from Loss of a Standby Control File

Oracle software allows multiplexing of the standby control file. To ensure the standby control file is multiplexed, check the CONTROL_FILES initialization parameter, as follows:

SQL> SHOW PARAMETER CONTROL_FILES;

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
control_files                        string      <cfilepath1>,<cfilepath2>

If one of the multiplexed standby control files is lost or is not accessible, Oracle software stops the instance and writes the following messages to the alert log:

ORA-00210: cannot open the specified controlfile
ORA-00202: controlfile: '/disk1/oracle/dbs/scf3_2.f'
ORA-27041: unable to open file

You can copy an intact copy of the control file over the lost copy, then restart the standby instance using the following SQL statements:

SQL> STARTUP MOUNT;
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;

You can restore the control file from backups by executing the RESTORE CONTROLFILE command and then the RECOVER DATABASE command. The RECOVER DATABASE command automatically fixes the file names in the control file to match the files existing at that database, and recovers the database to the most recently received log sequence at the database.
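
One possible RMAN sequence for this approach, assuming RMAN is connected to the standby database as target and to the recovery catalog, is sketched below; it is an illustration only, not a complete procedure.

RMAN> STARTUP NOMOUNT;
RMAN> RESTORE CONTROLFILE;
RMAN> ALTER DATABASE MOUNT;
RMAN> RECOVER DATABASE;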

The other alternative is to create a new control file from the primary database, copy it to all multiplexed locations, and manually rename the datafile names so that they match the files existing on disk.

11.8.4 Recovery from Loss of the Primary Control File

Oracle software allows multiplexing of the control file on the primary database. If one of the control files cannot be updated on the primary database, the primary database instance is shut down automatically.

You can restore the control file from backups by executing the RESTORE CONTROLFILE command and the RECOVER DATABASE command. The RECOVER DATABASE command automatically fixes the file names in the control file to match the files existing at that database, and recovers the database.

The other alternative is to create a new control file using the CREATE CONTROLFILE SQL statement. It is possible to re-create the control file provided that no datafiles or online redo logs have been lost.


See Also:

Oracle Database Backup and Recovery User's Guide for detailed information about using RMAN to recover from the loss of control files

11.8.5 Recovery from Loss of an Online Redo Log File

Oracle recommends multiplexing the online redo log files. The loss of all members of an online redo log group causes Oracle software to terminate the instance. If only some members of a log file group cannot be written, they will not be used until they become accessible. The views V$LOGFILE and V$LOG contain more information about the current status of log file members in the primary database instance.

When Oracle software is unable to write to one of the online redo log file members, the following alert messages are returned:

ORA-00313: open failed for members of log group 1 of thread 1
ORA-00312: online log 1 thread 1: '/disk1/oracle/dbs/t1_log1.f'
ORA-27037: unable to obtain file status
SVR4 Error: 2: No such file or directory
Additional information: 3

If the access problem is temporary due to a hardware issue, correct the problem and processing will continue automatically. If the loss is permanent, a new member can be added and the old one dropped from the group.

To add a new member to a redo log group, issue the following statement:

SQL> ALTER DATABASE ADD LOGFILE MEMBER 'log_file_name' REUSE TO GROUP n;

You can issue this statement even when the database is open, without affecting database availability.

If all members of an inactive group that has been archived are lost, the group can be dropped and re-created.
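
For example, assuming the lost group is group 3 and using a hypothetical file name, the group could be dropped and re-created as follows:

SQL> ALTER DATABASE DROP LOGFILE GROUP 3;
SQL> ALTER DATABASE ADD LOGFILE GROUP 3 ('/disk1/oracle/dbs/t1_log3.f') SIZE 50M;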

In all other cases (loss of all online log members for the current ACTIVE group, or an inactive group which has not yet been archived), you must fail over to the standby database. Refer to Chapter 8 for the failover procedure.

11.8.6 Incomplete Recovery of the Primary Database

Incomplete recovery of the primary database is normally done in cases such as when the database is logically corrupted (by a user or an application) or when a tablespace or datafile was accidentally dropped from the database.

Depending on the current database checkpoint SCN on the standby database instances, you can use one of the following procedures to perform incomplete recovery of the primary database. The procedures are listed in order of preference, starting with the least time-consuming.

Using Flashback Database Using Flashback Database is the recommended procedure when the Flashback Database feature is enabled on the primary database, none of the database files are lost, and the point-in-time recovery is greater than the oldest flashback SCN or the oldest flashback time. See Section 13.3 for the procedure to use Flashback Database to do point-in-time recovery.

Using the standby database instance This is the recommended procedure when the standby database is behind the desired incomplete recovery time, and Flashback Database is not enabled on the primary or standby databases:

  1. Recover the standby database to the desired point in time.

    RECOVER DATABASE UNTIL TIME 'time';
    

    Alternatively, incomplete recovery time can be specified using the SCN or log sequence number:

    RECOVER DATABASE UNTIL SCN incomplete recovery SCN;
    RECOVER DATABASE UNTIL LOGSEQ incomplete recovery log sequence number THREAD thread number;
    
  2. Open the standby database in read-only mode to verify the state of the database (the relevant statements are shown after this list).

    If the state is not what is desired, use the LogMiner utility to look at the archived redo log files to find the right target time or SCN for incomplete recovery. Alternatively, you can start by recovering the standby database to a point that you know is before the target time, and then open the database in read-only mode to examine the state of the data. Repeat this process until the state of the database is verified to be correct. Note that if you recover the database too far (that is, past the SCN where the error occurred) you cannot return it to an earlier SCN.

  3. Activate the standby database using the SQL ALTER DATABASE ACTIVATE STANDBY DATABASE statement. This converts the standby database to a primary database, creates a new resetlogs branch, and opens the database. See Section 9.4 to learn how the standby database reacts to the new reset logs branch.
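
For reference, steps 2 and 3 use statements such as the following (shown here as a sketch):

SQL> ALTER DATABASE OPEN READ ONLY;
SQL> ALTER DATABASE ACTIVATE STANDBY DATABASE;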

Using the primary database instance If all of the standby database instances have already been recovered past the desired point in time and Flashback Database is not enabled on the primary or standby database, then this is your only option.

Use the following procedure to perform incomplete recovery on the primary database:

  1. Use LogMiner or another means to identify the time or SCN at which all the data in the database is known to be good.

  2. Using the time or SCN, issue the following RMAN commands to perform incomplete database recovery and open the database with the RESETLOGS option (after connecting to the recovery catalog database and the primary instance, which must be in MOUNT state):

    RUN 
    {
    SET UNTIL TIME 'time';
    RESTORE DATABASE;
    RECOVER DATABASE;
    }
    ALTER DATABASE OPEN RESETLOGS;
    

After this process, all standby database instances must be reestablished in the Data Guard configuration.

11.9 Additional Backup Situations

The following sections describe how to modify the backup procedures for other configurations, such as when the standby and primary databases cannot share backup files; when the standby instance is used only to remotely archive redo log files; or when the standby database filenames are different than the primary database.

11.9.1 Standby Databases Too Geographically Distant to Share Backups

If the standby databases are far apart from one another, the backups taken on them may not be easily accessible by the primary system or other standby systems. Perform a complete backup of the database on all systems so that recovery operations can be performed at each site. The fast recovery area can reside locally on the primary and standby systems (that is, the fast recovery area does not have to be the same for the primary and standby databases).

In this scenario, you can still use the general strategies described in Section 11.8, with the following exceptions:

  • Backup files created by RMAN must be tagged with the local system name, and during RESTORE operations that tag must be used to restrict RMAN to backups taken on the same host. In other words, the BACKUP command must use the TAG system name option when creating backups; the RESTORE command must use the FROM TAG system name option; and the RECOVER command must use the FROM TAG system name ARCHIVELOG TAG system name option.

  • Disaster recovery of the standby site:

    1. Start the standby instance in the NOMOUNT state using the same parameter files with which the standby was operating earlier.

    2. Create a standby control file on the primary instance using the SQL ALTER DATABASE CREATE STANDBY CONTROLFILE AS filename statement, and use the created control file to mount the standby instance.

    3. Issue the following RMAN commands to restore and recover the database files:

      RESTORE DATABASE FROM TAG 'system name';
      RECOVER DATABASE FROM TAG 'system name' ARCHIVELOG TAG 'system name';
      
    4. Restart Redo Apply.

The standby instance will fetch the remaining archived redo log files.

11.9.2 Standby Database Does Not Contain Datafiles, Used as a FAL Server

Use the same procedure described in Section 11.4, with the exception that the RMAN commands that back up database files cannot be run against the FAL server. The FAL server can be used as a backup source for all archived redo log files, thus off-loading backups of archived redo log files to the FAL server.

11.9.3 Standby Database File Names Are Different From Primary Database


Note:

As of Oracle Database 11g, the recovery catalog can resynchronize the file names from each standby database site. However, if the file names from a standby database were never resynchronized for some reason, then you can use the procedure described in this section to do so.

If the database filenames on the primary and standby databases are not the same and were never resynchronized, the RESTORE and RECOVER commands you use will be slightly different. To obtain the actual datafile names on the standby database, query the V$DATAFILE view and specify the SET NEWNAME option for all the datafiles in the database:

RUN
{
  SET NEWNAME FOR DATAFILE 1 TO 'existing file location for file#1 from V$DATAFILE';
  SET NEWNAME FOR DATAFILE 2 TO 'existing file location for file#2 from V$DATAFILE';
  …
  SET NEWNAME FOR DATAFILE n TO 'existing file location for file#n from V$DATAFILE';
  RESTORE {DATAFILE <n,m,…> | TABLESPACE tbs_name_1, tbs_name_2, … | DATABASE};
  SWITCH DATAFILE ALL;
  RECOVER DATABASE [NOREDO];
}

Similarly, the RMAN DUPLICATE command should also use the SET NEWNAME option to specify new filenames during standby database creation. Or you could set the LOG_FILE_NAME_CONVERT and DB_FILE_NAME_CONVERT parameters.


See Also:

Section 13.5, "Creating a Standby Database That Uses OMF or Oracle ASM" for information about precedence rules when both the DB_FILE_NAME_CONVERT and DB_CREATE_FILE_DEST parameters are set on the standby

11.10 Using RMAN Incremental Backups to Roll Forward a Physical Standby Database

In some situations, RMAN incremental backups can be used to synchronize a physical standby database with the primary database. You can use the RMAN BACKUP INCREMENTAL FROM SCN command to create a backup on the primary database that starts at the current SCN of the standby, which can then be used to roll the standby database forward in time.

The steps described in this section apply to situations in which RMAN incremental backups may be useful because the physical standby database either:

  • Lags far behind the primary database

  • Has widespread nologging changes

  • Has nologging changes on a subset of datafiles


Note:

Oracle recommends the use of a recovery catalog when performing this operation. These steps are possible without a recovery catalog, but great care must be taken to correct the file names in the restored control file.


See Also:

Oracle Database Backup and Recovery User's Guide for more information about RMAN incremental backups

11.10.1 Steps for Using RMAN Incremental Backups

Except where stated otherwise, the following steps apply to all three situations just listed.

  1. Stop Redo Apply on the standby database:

    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
    
  2. On the standby database, compute the FROM SCN for the incremental backup. This is done differently depending on the situation:

    • On a standby that lags far behind the primary database, query the V$DATABASE view and record the current SCN of the standby database:

      SQL> SELECT CURRENT_SCN FROM V$DATABASE;
      
      CURRENT_SCN
      -----------
           233995
      
    • On a standby that has widespread nologging changes, query the V$DATAFILE view to record the lowest FIRST_NONLOGGED_SCN:

      SQL> SELECT MIN(FIRST_NONLOGGED_SCN) FROM V$DATAFILE -
      > WHERE FIRST_NONLOGGED_SCN>0;
      
      MIN(FIRST_NONLOGGED_SCN)
      ------------------------
                        223948
      
    • On a standby that has nologging changes on a subset of datafiles, query the V$DATAFILE view, as follows:

      SQL> SELECT FILE#, FIRST_NONLOGGED_SCN FROM V$DATAFILE -
      > WHERE FIRST_NONLOGGED_SCN > 0;
      
      FILE#      FIRST_NONLOGGED_SCN
      ---------- -------------------
               4              225979
               5              230184
      
  3. Connect to the primary database as the RMAN target and create an incremental backup from the current SCN (for a standby lagging far behind the primary) or from the lowest FIRST_NONLOGGED_SCN (for a standby with widespread nologging changes) of the standby database that was recorded in step 2:

    RMAN> BACKUP INCREMENTAL FROM SCN 233995 DATABASE FORMAT '/tmp/ForStandby_%U' tag 'FORSTANDBY';
    

    If the standby has nologging changes on a subset of datafiles, then create an incremental backup for each datafile listed in the FIRST_NONLOGGED_SCN column (recorded in step 2), as follows:

    RMAN> BACKUP INCREMENTAL FROM SCN 225979 DATAFILE 4 FORMAT '/tmp/ForStandby_%U' TAG 'FORSTANDBY';
    RMAN> BACKUP INCREMENTAL FROM SCN 230184 DATAFILE 5 FORMAT '/tmp/ForStandby_%U' TAG 'FORSTANDBY';
    

    The BACKUP commands shown generate datafile backups, as well as a control file backup that will be used in step 7.

  4. If the backup pieces are not on shared storage, then transfer all the backup pieces created on the primary to the standby:

    scp /tmp/ForStandby_* standby:/tmp
    
  5. If you had to copy the backup pieces in the previous step, or if you are not connected to the recovery catalog for the entire process, then you must catalog the new backup pieces on the standby (otherwise, go on to the next step):

    RMAN> CATALOG START WITH '/tmp/ForStandby';
    
  6. Connect to the standby database as the RMAN target and execute the REPORT SCHEMA statement to ensure that the standby database site is automatically registered and that the file names at the standby site are displayed:

    RMAN> REPORT SCHEMA;
    
  7. Connect to the standby database as the RMAN target and apply incremental backups by executing the following commands. Note that the RESTORE STANDBY CONTROLFILE FROM TAG command only works if you are connected to the recovery catalog for the entire process. Otherwise, you must use the RESTORE STANDBY CONTROLFILE FROM '<control file backup filename>' command.

    RMAN> STARTUP FORCE NOMOUNT;
    RMAN> RESTORE STANDBY CONTROLFILE FROM TAG 'FORSTANDBY';
    RMAN> ALTER DATABASE MOUNT;
    RMAN> RECOVER DATABASE NOREDO;
    

    Note:

    If a recovery catalog is used, then the RMAN RECOVER command will fix the path names for datafiles in the standby control file. If no recovery catalog is used, then you must manually edit the file names in your standby control file or use the RMAN SET NEWNAME command to assign the datafile names. See Oracle Database Backup and Recovery Reference for more information about the RMAN RECOVER and SET NEWNAME commands.

  8. On standbys that have widespread nologging changes or that have nologging changes on a subset of datafiles, query the V$DATAFILE view to verify there are no datafiles with nologged changes. The following query should return zero rows:

    SQL> SELECT FILE#, FIRST_NONLOGGED_SCN FROM V$DATAFILE -
    > WHERE FIRST_NONLOGGED_SCN > 0;
    

    Note:

    The incremental backup will become obsolete in 7 days, or you can remove it now using the RMAN DELETE command.
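
    For example, to remove the incremental backup now, you might delete it by the tag that was assigned when it was created:

    RMAN> DELETE BACKUP TAG 'FORSTANDBY';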

  9. Start Redo Apply on the physical standby database:

    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE -
    > USING CURRENT LOGFILE DISCONNECT FROM SESSION;
    
Upgrading and Downgrading Databases in a Data Guard Configuration

B Upgrading and Downgrading Databases in a Data Guard Configuration

The procedures in this appendix describe how to upgrade and downgrade an Oracle database when a physical or logical standby database is present in the Data Guard configuration.

This appendix contains the following topics:

B.1 Before You Upgrade the Oracle Database Software

Consider the following points before beginning to upgrade your Oracle Database software:

  • If you are using the Data Guard broker to manage your configuration, follow the instructions in the Oracle Data Guard Broker manual for information about removing or disabling the broker configuration.

  • The procedures in this appendix are to be used in conjunction with the ones contained in the Oracle Database Upgrade Guide for 11g Release 2 (11.2).

  • Check for nologging operations. If nologging operations have been performed then you must update the standby database. See Section 13.4, "Recovering After the NOLOGGING Clause Is Specified" for details.

  • Make note of any tablespaces or datafiles that need recovery due to OFFLINE IMMEDIATE. Such tablespaces or datafiles should be recovered and brought either online or offline prior to upgrading.
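
    One way to identify datafiles that need recovery is to query the V$RECOVER_FILE view. For example:

    SQL> SELECT FILE#, ONLINE_STATUS, ERROR FROM V$RECOVER_FILE;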

B.2 Upgrading Oracle Database with a Physical Standby Database in Place

Perform the following steps to upgrade to Oracle Database 11g Release 2 (11.2) when a physical standby database is present in the configuration:

  1. Review and perform the steps listed in the "Preparing to Upgrade" chapter of the Oracle Database Upgrade Guide.

  2. Install the new release of the Oracle software into a new Oracle home on the physical standby database and primary database systems, as described in the Oracle Database Upgrade Guide.

  3. Shut down the primary database.

  4. Shut down the physical standby database(s).

  5. Stop all listeners, agents, and other processes running in the Oracle homes that are to be upgraded. Perform this step on all nodes in an Oracle Real Application Clusters (Oracle RAC) environment.

  6. If Oracle Automatic Storage Management (Oracle ASM) is in use, shut down all databases that use Oracle ASM, and then shut down all Oracle ASM instance(s).

  7. Restart all listeners, agents, and other processes stopped in step 5.

  8. Mount the physical standby database(s) on the new Oracle home (upgraded version). See Section 3.2.6 for information on how to start a physical standby database.


    Note:

    The standby database(s) should not be opened until the primary database upgrade is completed.

  9. Start Redo Apply on the physical standby database(s). See Section 3.2.6 for information on how to start Redo Apply.

  10. Upgrade the primary database as described in the Oracle Database Upgrade Guide. Note that the physical standby database(s) will be upgraded when the redo generated by the primary database as it is upgraded is applied.

  11. Open the upgraded primary database.

  12. If Active Data Guard was being used prior to the upgrade, then refer to Section 9.2.1 for information about how to reenable it after upgrading.

  13. Optionally, modify the COMPATIBLE initialization parameter, following the procedure described in Section B.4.

B.3 Upgrading Oracle Database with a Logical Standby Database in Place


Note:

This appendix describes the traditional method for upgrading your Oracle Database software with a logical standby database in place. A second method in Chapter 12, "Using SQL Apply to Upgrade the Oracle Database" describes how to upgrade with a logical standby database in place in a rolling fashion to minimize downtime. Use the steps from only one method to perform the complete upgrade. Do not attempt to use both methods or to combine the steps from the two methods as you perform the upgrade process.

The procedure described in this section assumes that the primary database is running in MAXIMUM PERFORMANCE data protection mode.


Perform the following steps to upgrade to Oracle Database 11g Release 2 (11.2) when a logical standby database is present in the configuration:

  1. Review and perform the steps listed in the "Preparing to Upgrade" chapter of the Oracle Database Upgrade Guide.

  2. Set the data protection mode to MAXIMUM PERFORMANCE at the primary database, if needed:

    SQL> ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE PERFORMANCE;
    
  3. On the primary database, stop all user activity and defer the remote archival destination associated with the logical standby database (for this procedure, it is assumed that LOG_ARCHIVE_DEST_2 is associated with the logical standby database):

    SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=DEFER SCOPE=BOTH;
    SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;
    
  4. Stop SQL Apply on the logical standby database:

    SQL> ALTER DATABASE STOP LOGICAL STANDBY APPLY;
    
  5. On the primary database, install the newer release of the Oracle software as described in the Oracle Database Upgrade Guide.

  6. On the logical standby database, install the newer release of the Oracle software as described in the Oracle Database Upgrade Guide.


    Note:

    Steps 5 and 6 can be performed concurrently (in other words, the primary and the standby databases can be upgraded concurrently) to reduce downtime during the upgrade procedure.

  7. On the upgraded logical standby database, restart SQL Apply. If you are using Oracle RAC, start up the other standby database instances:

    SQL> ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;
    
  8. Open the upgraded primary database and allow users to connect. If you are using Oracle RAC, start up the other primary database instances.

    Also, enable archiving to the upgraded logical standby database, as follows:

    SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=ENABLE;
    
  9. Optionally, reset to the original data protection mode if you changed it in Step 2.

  10. Optionally, modify the COMPATIBLE initialization parameter, following the procedure described in Section B.4.

B.4 Modifying the COMPATIBLE Initialization Parameter After Upgrading

When you upgrade to a new release of Oracle Database, certain new features might make your database incompatible with your previous release. Oracle Database enables you to control the compatibility of your database with the COMPATIBLE initialization parameter.

After the upgrade is complete, you can increase the setting of the COMPATIBLE initialization parameter to the maximum level for the new Oracle Database release. When you are certain that you no longer need the ability to downgrade your database back to its original version, set the COMPATIBLE initialization parameter based on the compatibility level you want for your new database.

In a Data Guard configuration, if you decide to increase the setting of the COMPATIBLE initialization parameter after upgrading, then it is important that you perform the following steps in the order shown (note that the standby database should have a COMPATIBLE setting equal to, or higher than, the primary):

  1. Increase the value of the COMPATIBLE initialization parameter on all standby databases in the configuration first, as follows:

    1. Ensure that apply is current on the standby database(s).

    2. On one instance of each standby database, execute the following SQL statement:

      ALTER SYSTEM SET COMPATIBLE=<value> SCOPE=SPFILE;
      
    3. If Redo Apply or SQL Apply is running, then stop them.

    4. Restart all instances of the standby database(s).

    5. If you previously stopped Redo Apply or SQL Apply, then restart them.

  2. Increase the value of the COMPATIBLE initialization parameter on the primary database, as follows:

    1. On one instance of the primary database, execute the following SQL statement:

      ALTER SYSTEM SET COMPATIBLE=<value> SCOPE=SPFILE;
      
    2. Restart all instances of the primary database.




B.5 Downgrading Oracle Database with No Logical Standby in Place

Perform the following steps to downgrade Oracle Database in a Data Guard configuration that does not contain a logical standby database:

  1. Ensure that all physical standby databases are mounted, but not open.


    Note:

    The standby database(s) should not be opened until all redo generated by the downgrade of the primary database has been applied.

  2. Start Redo Apply, in real-time apply mode, on the physical standby database(s).

  3. Downgrade the primary database using the procedure described in Oracle Database Upgrade Guide, keeping the following in mind:

    • At each step of the downgrade procedure where a script is executed, execute the script only at the primary database. Do not perform the next downgrade step until all redo generated by the execution of the script at the primary database has been applied to each physical standby database.

    • At each step of the downgrade procedure where an action other than running a script is performed, perform the step at the primary database first and then at each physical standby database. Do not perform the next downgrade step at the primary database until the action has been performed at each physical standby database.

  4. If it becomes necessary to perform a failover during a downgrade, perform the failover and then continue with the downgrade procedure at the new primary database.

B.6 Downgrading Oracle Database with a Logical Standby in Place

Perform the following steps to downgrade Oracle Database in a Data Guard configuration that contains a logical standby database or a mixture of logical and physical standby databases.

  1. Issue the following command at the primary database (database P, for the sake of this discussion) before you downgrade it:

    SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO LOGICAL STANDBY;
    

    Database P is no longer in the primary database role.

  2. Wait for all standby databases in the configuration to finish applying all available redo. To determine whether each standby database has finished applying all available redo, run the following query at each standby database:

    SQL> SELECT SWITCHOVER_STATUS FROM V$DATABASE;
     
    SWITCHOVER_STATUS
    -----------------
    TO PRIMARY
    

    Do not continue on to step 3 until the query returns a value of TO PRIMARY for all standby databases in the configuration.

  3. Downgrade the logical standby databases using the procedures described in Oracle Database Upgrade Guide, keeping the following in mind:

    • At each step of the downgrade procedure where a script is executed, execute the script only at the logical standby databases. Do not perform the next downgrade step until all redo generated by executing the script at the logical standby database that was most recently in the primary role (database P) has been applied to each physical standby database.

    • At each step of the downgrade procedure where an action other than running a script is performed, first perform the step at the logical standby database that was most recently in the primary role (database P), and then perform the step at each physical standby database. Do not perform the next downgrade step at the logical standby database that was most recently in the primary role (database P) until the action has been performed at each physical standby database.

  4. After the logical standby that was most recently in the primary role (database P) has been successfully downgraded, open it, and issue the following command:

    SQL> ALTER DATABASE ACTIVATE LOGICAL STANDBY DATABASE;
    

    Database P is now back in the primary role.

  5. At each of the logical standby databases in the configuration, issue the following command (note that the command requires that a database link back to the primary exist in all of the logical standby databases):

    SQL> ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE -
    > NEW PRIMARY prim_db_link;
    
Data Guard and Oracle Real Application Clusters

D Data Guard and Oracle Real Application Clusters

An Oracle Data Guard configuration can consist of any combination of single-instance and Oracle Real Application Clusters (Oracle RAC) multiple-instance databases. This chapter summarizes the configuration requirements and considerations that apply when using Oracle Data Guard with Oracle RAC databases. It contains the following sections:

D.1 Configuring Standby Databases in an Oracle RAC Environment

You can configure a standby database to protect a primary database using Oracle RAC. The following table describes the possible combinations of instances in the primary and standby databases:

Instance Combinations               Single-Instance Standby Database    Multi-Instance Standby Database
Single-instance primary database    Yes                                 Yes
Multi-instance primary database     Yes                                 Yes

In each scenario, each instance of the primary database transmits its redo data to an instance of the standby database.

D.1.1 Setting Up a Multi-Instance Primary with a Single-Instance Standby

Figure D-1 illustrates an Oracle RAC database with two primary database instances (a multi-instance primary database) transmitting redo data to a single-instance standby database.

Figure D-1 Transmitting Redo Data from a Multi-Instance Primary Database

Description of Figure D-1 follows
Description of "Figure D-1 Transmitting Redo Data from a Multi-Instance Primary Database"

In this case, Instance 1 of the primary database archives redo data to local archived redo log files 1, 2, 3, 4, 5 and transmits the redo data to the standby database destination, while Instance 2 archives redo data to local archived redo log files 32, 33, 34, 35, 36 and transmits the redo data to the same standby database destination. The standby database automatically determines the correct order in which to apply the archived redo log files.

To set up a primary database in an Oracle RAC environment

Follow the instructions in Chapter 3 (for physical standby database creation) or Chapter 4 (for logical standby database creation) to configure each primary instance.

To set up a single instance standby database

Follow the instructions in Chapter 3 (for physical standby database creation) or Chapter 4 (for logical standby database creation) to define the LOG_ARCHIVE_DEST_n and LOG_ARCHIVE_FORMAT parameters to specify the location of the archived redo log files and standby redo log files.

D.1.2 Setting Up Oracle RAC Primary and Standby Databases

This section describes how to configure an Oracle RAC primary database to send redo data to an Oracle RAC standby database.

D.1.2.1 Configuring an Oracle RAC Standby Database to Receive Redo Data

Perform the following steps to configure an Oracle RAC standby database to receive redo data from a primary database:

  1. Create a standby redo log on the standby database. The redo log files in the standby redo log must reside in a location that can be accessed by all of the standby database instances, such as on a cluster file system or Oracle ASM instance. See Section 6.2.3.1 for more information about creating a standby redo log.

  2. Configure standby redo log archival on each standby database instance. The standby redo log must be archived to a location that can be accessed by all of the standby database instances, and every standby database instance must be configured to archive the standby redo log to the same location. See Section 6.2.3.2 for more information about configuring standby redo log archival.
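
For example, a minimal sketch of these two steps, assuming hypothetical cluster file system paths and a standby with the hypothetical unique name chicago, might look like the following:

SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 -
> ('/cfs/oradata/chicago/srl_t1_g10.f') SIZE 500M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 2 -
> ('/cfs/oradata/chicago/srl_t2_g11.f') SIZE 500M;
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_1='LOCATION=/cfs/arch/chicago VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=chicago' SCOPE=BOTH SID='*';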

D.1.2.2 Configuring an Oracle RAC Primary Database to Send Redo Data

Configure each instance of the Oracle RAC primary database to send its redo data to the Oracle RAC standby database. Section 6.2.2 describes how to configure an Oracle database instance to send redo data to another database.

Oracle recommends the following best practices when configuring an Oracle RAC primary database to send redo data to an Oracle RAC standby database:

  1. Use the same LOG_ARCHIVE_DEST_n parameter on each primary database instance to send redo data to a given standby database.

  2. Set the SERVICE attribute of each LOG_ARCHIVE_DEST_n parameter that corresponds to a given standby database to the same net service name.

  3. The net service name should resolve to an Oracle Net connect descriptor that contains an address list, and that address list should contain connection data for each standby database instance.
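
For example, a tnsnames.ora entry along the following lines (the host names, port, and service name are hypothetical) allows each primary instance to reach either standby instance:

boston =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = boston-host1)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = boston-host2)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = boston)
    )
  )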

D.2 Configuration Considerations in an Oracle RAC Environment

This section contains the Data Guard configuration information that is specific to Oracle RAC environments. It contains the following topics:

D.2.1 Format for Archived Redo Log Filenames

The format for archived redo log filenames is in the form of log_%parameter, where %parameter can include one or more of the parameters in Table D-1.

Table D-1 Directives for the LOG_ARCHIVE_FORMAT Initialization Parameter

Directives    Description

%a            Database activation ID.
%A            Database activation ID, zero filled.
%d            Database ID.
%D            Database ID, zero filled.
%t            Instance thread number.
%T            Instance thread number, zero filled.
%s            Log file sequence number.
%S            Log file sequence number, zero filled.
%r            Resetlogs ID.
%R            Resetlogs ID, zero filled.

For example:

LOG_ARCHIVE_FORMAT = log%d_%t_%s_%r.arc

The thread parameters %t or %T are mandatory for Oracle RAC to uniquely identify the archived redo log files with the LOG_ARCHIVE_FORMAT parameter.

D.2.2 Data Protection Modes

If any instance of an Oracle RAC primary database loses connectivity with a standby database, all other primary database instances stop sending redo to the standby database for the number of seconds specified on the LOG_ARCHIVE_DEST_n REOPEN attribute, after which all primary database instances attempt to reconnect to the standby database.
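
For example, a destination setting along the following lines (the service and database unique name are hypothetical) specifies that redo transport waits 30 seconds after a connection failure before attempting to reconnect to the standby:

SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=boston SYNC REOPEN=30 DB_UNIQUE_NAME=boston' SCOPE=BOTH SID='*';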

The following list describes the behavior of the protection modes in Oracle RAC environments:

  • Maximum protection configuration

    If a lost destination is the last participating SYNC destination, the instance loses connectivity and will be shut down. Other instances in an Oracle RAC configuration that still have connectivity to the standby destinations will recover the lost instance and continue sending to their standby destinations. Only when every instance in an Oracle RAC configuration loses connectivity to the last standby destination will the primary database be shut down.

D.2.3 Role Transitions

This section contains information about switchovers.

D.2.3.1 Switchovers

For an Oracle RAC database, only one primary instance can be active during a switchover when the target database is a physical standby. Therefore, before a switchover to a physical standby database, shut down all but one primary instance. After the switchover completes, restart the instances that were shut down during the switchover. This limitation does not exist when performing a switchover to a logical standby database.
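
For example, you might use SRVCTL to shut down an additional primary instance before the switchover and to restart it afterward (the database and instance names are hypothetical):

srvctl stop instance -d payroll -i payroll2
srvctl start instance -d payroll -i payroll2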


Note:

The SQL ALTER DATABASE statement used to perform the switchover automatically creates redo log files if they do not already exist. Because this can significantly increase the time required to complete the COMMIT operation, Oracle recommends that you manually add redo log files when creating physical standby databases.

D.3 Troubleshooting

This section provides help troubleshooting problems with Oracle RAC.

D.3.1 Switchover Fails in an Oracle RAC Configuration

When your database is using Oracle RAC, active instances prevent a switchover from being performed. If any instance other than the one from which you initiate the switchover is active, the attempt to switch over fails with the following error message:

SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO STANDBY; 

ALTER DATABASE COMMIT TO SWITCHOVER TO STANDBY * 
ORA-01105: mount is incompatible with mounts by other instances 

Action: Query the GV$INSTANCE view as follows to determine which instances are causing the problem:

SQL> SELECT INSTANCE_NAME, HOST_NAME FROM GV$INSTANCE - 
> WHERE INST_ID <> (SELECT INSTANCE_NUMBER FROM V$INSTANCE);

INSTANCE_NAME HOST_NAME 
------------- --------- 
INST2         standby2 

In the previous example, the identified instance must be manually shut down before the switchover can proceed. You can connect to the identified instance from your instance and issue the SHUTDOWN statement remotely, for example:

SQL> CONNECT SYS@standby2 AS SYSDBA
Enter Password:
SQL> SHUTDOWN;
SQL> EXIT
Role Transitions

8 Role Transitions

A Data Guard configuration consists of one database that functions in the primary role and one or more databases that function in the standby role. To see the current role of the databases, query the DATABASE_ROLE column in the V$DATABASE view.
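
For example, on a database operating in the primary role the query returns PRIMARY:

SQL> SELECT DATABASE_ROLE FROM V$DATABASE;

DATABASE_ROLE
----------------
PRIMARY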

The number, location, and type of standby databases in a Data Guard configuration and the way in which redo data from the primary database is propagated to each standby database determine the role-management options available to you in response to a primary database outage.

This chapter describes how to manage role transitions in a Data Guard configuration. It contains the following topics:


Note:

This chapter describes how to perform role transitions manually, using SQL statements. Do not use the procedures described in this chapter to perform role transitions in a Data Guard configuration that is managed by the broker. The role transition procedures provided in Oracle Data Guard Broker should be used instead.


See Also:

Oracle Data Guard Broker for information about using the Oracle Data Guard broker to:
  • Simplify switchovers and failovers by allowing you to invoke them using either a single key click in Oracle Enterprise Manager or a single command in the DGMGRL command-line interface.

  • Enable fast-start failover to fail over automatically when the primary database becomes unavailable. When fast-start failover is enabled, the Data Guard broker determines if a failover is necessary and initiates the failover to the specified target standby database automatically, with no need for DBA intervention.


8.1 Introduction to Role Transitions

A database operates in one of the following mutually exclusive roles: primary or standby. Data Guard enables you to change these roles dynamically by issuing the SQL statements described in this chapter, or by using either of the Data Guard broker's interfaces. Oracle Data Guard supports the following role transitions:

  • Switchover

    Allows the primary database to switch roles with one of its standby databases. There is no data loss during a switchover. After a switchover, each database continues to participate in the Data Guard configuration with its new role.

  • Failover

    Changes a standby database to the primary role in response to a primary database failure. If the primary database was not operating in either maximum protection mode or maximum availability mode before the failure, some data loss may occur. If Flashback Database is enabled on the primary database, it can be reinstated as a standby for the new primary database once the reason for the failure is corrected.

Section 8.1.1, "Preparing for a Role Transition" helps you choose the role transition that best minimizes downtime and risk of data loss. Switchovers and failovers are described in more detail in Section 8.1.3, "Switchovers" and Section 8.1.4, "Failovers", respectively.


See Also:

Oracle Data Guard Broker for information about event notification and database connection failover support available to database clients when a broker-managed failover occurs

8.1.1 Preparing for a Role Transition

Before starting any role transition, perform the following preparations:

  • Verify that each database is properly configured for the role that it is about to assume. See Chapter 3, "Creating a Physical Standby Database" and Chapter 4, "Creating a Logical Standby Database" for information about how to configure database initialization parameters, ARCHIVELOG mode, standby redo logs, and online redo logs on primary and standby databases.


    Note:

    You must define the LOG_ARCHIVE_DEST_n and LOG_ARCHIVE_DEST_STATE_n parameters on each standby database so that when a switchover or failover occurs, all standby sites continue to receive redo data from the new primary database.

  • Verify that there are no redo transport errors or redo gaps at the standby database by querying the V$ARCHIVE_DEST_STATUS view on the primary database.

    For example, the following query would be used to check the status of the standby database associated with LOG_ARCHIVE_DEST_2:

    SQL> SELECT STATUS, GAP_STATUS FROM V$ARCHIVE_DEST_STATUS WHERE DEST_ID = 2;
     
    STATUS    GAP_STATUS
    --------- ------------------------
    VALID     NO GAP
    

    Do not proceed until the value of the STATUS column is VALID and the value of the GAP_STATUS column is NO GAP for the row that corresponds to the standby database.

  • Ensure temporary files exist on the standby database that match the temporary files on the primary database.

  • Remove any delay in applying redo that may be in effect on the standby database that will become the new primary database.

  • Before performing a switchover from an Oracle RAC primary database to a physical standby database, shut down all but one primary database instance. Any primary database instances shut down at this time can be started after the switchover completes.

  • Before performing a switchover to a physical standby database that is in real-time query mode, consider bringing all instances of that standby database to the mounted but not open state to achieve the fastest possible role transition and to cleanly terminate any user sessions connected to the physical standby database prior to the role transition.

8.1.2 Choosing a Target Standby Database for a Role Transition

For a Data Guard configuration with multiple standby databases, there are a number of factors to consider when choosing the target standby database for a role transition. These include the following:

  • Locality of the standby database.

  • The capability of the standby database (hardware specifications—such as the number of CPUs, I/O bandwidth available, and so on).

  • The time it will take to perform the role transition. This is affected by how far behind the standby database is in terms of application of redo data, and how much flexibility you have in terms of trading off application availability with data loss.

  • Standby database type.

The type of standby chosen as the role transition target determines how other standby databases in the configuration will behave after the role transition. If the new primary was a physical standby before the role transition, all other standby databases in the configuration will become standbys of the new primary. If the new primary was a logical standby before the role transition, then all other logical standbys in the configuration will become standbys of the new primary, but physical standbys in the configuration will continue to be standbys of the old primary and will therefore not protect the new primary. In the latter case, a future switchover or failover back to the original primary database will return all standbys to their original role as standbys of the current primary. For the reasons described above, a physical standby is generally the best role transition target in a configuration that contains both physical and logical standbys.


Note:

A snapshot standby cannot be the target of a role transition. If you wish to use a snapshot standby database as a target for a role transition, first convert it to a physical standby database and allow all redo received from the primary database to be applied. See Section 9.7.3, "Converting a Snapshot Standby Database into a Physical Standby Database".

Data Guard provides the V$DATAGUARD_STATS view that can be used to evaluate each standby database in terms of the currency of the data in the standby database, and the time it will take to perform a role transition if all available redo data is applied to the standby database. For example:

SQL> COLUMN NAME FORMAT A24
SQL> COLUMN VALUE FORMAT A16     
SQL> COLUMN DATUM_TIME FORMAT A24
SQL> SELECT NAME, VALUE, DATUM_TIME FROM V$DATAGUARD_STATS;
 
NAME                     VALUE            DATUM_TIME
------------------------ ---------------- ------------------------
transport lag            +00 00:00:00     06/18/2009 12:22:06
apply lag                +00 00:00:00     06/18/2009 12:22:06
apply finish time        +00 00:00:00.000
estimated startup time   9

This query output shows that the standby database has received and applied all redo generated by the primary database. These statistics were computed using data received from the primary database as of 12:22.06 on 06/18/09.

The apply lag and transport lag metrics are computed based on data received from the primary database. These metrics become stale if communications between the primary and standby database are disrupted. An unchanging value in the DATUM_TIME column for the apply lag and transport lag metrics indicates that these metrics are not being updated and have become stale, possibly due to a communications fault between the primary and standby databases.

8.1.3 Switchovers

A switchover is typically used to reduce primary database downtime during planned outages, such as operating system or hardware upgrades, or rolling upgrades of the Oracle database software and patch sets (described in Chapter 12, "Using SQL Apply to Upgrade the Oracle Database").

A switchover takes place in two phases. In the first phase, the existing primary database undergoes a transition to a standby role. In the second phase, a standby database undergoes a transition to the primary role.

Figure 8-1 shows a two-site Data Guard configuration before the roles of the databases are switched. The primary database is in San Francisco, and the standby database is in Boston.

Figure 8-1 Data Guard Configuration Before Switchover

Description of Figure 8-1 follows
Description of "Figure 8-1 Data Guard Configuration Before Switchover"

Figure 8-2 shows the Data Guard environment after the original primary database was switched over to a standby database, but before the original standby database has become the new primary database. At this stage, the Data Guard configuration temporarily has two standby databases.

Figure 8-2 Standby Databases Before Switchover to the New Primary Database

Description of Figure 8-2 follows
Description of "Figure 8-2 Standby Databases Before Switchover to the New Primary Database"

Figure 8-3 shows the Data Guard environment after a switchover took place. The original standby database became the new primary database. The primary database is now in Boston, and the standby database is now in San Francisco.

Figure 8-3 Data Guard Environment After Switchover

Description of Figure 8-3 follows
Description of "Figure 8-3 Data Guard Environment After Switchover"

Preparing for a Switchover

Ensure the prerequisites listed in Section 8.1.1 are satisfied. In addition, the following prerequisites must be met for a switchover:

8.1.4 Failovers

A failover is typically used only when the primary database becomes unavailable, and there is no possibility of restoring it to service within a reasonable period of time. The specific actions performed during a failover vary based on whether a logical or a physical standby database is involved in the failover, the state of the Data Guard configuration at the time of the failover, and on the specific SQL statements used to initiate the failover.

Figure 8-4 shows the result of a failover from a primary database in San Francisco to a physical standby database in Boston.

Figure 8-4 Failover to a Standby Database

Description of Figure 8-4 follows
Description of "Figure 8-4 Failover to a Standby Database"

Preparing for a Failover

If possible, before performing a failover, you should transfer as much of the available and unapplied primary database redo data as possible to the standby database.

Ensure the prerequisites listed in Section 8.1.1, "Preparing for a Role Transition" are satisfied. In addition, the following prerequisites must be met for a failover:

  • If a standby database currently running in maximum protection mode will be involved in the failover, first place it in maximum performance mode by issuing the following statement on the standby database:

    SQL> ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE PERFORMANCE;
    

    Then, if appropriate standby databases are available, you can reset the desired protection mode on the new primary database after the failover completes.

    This is required because you cannot fail over to a standby database that is in maximum protection mode. In addition, if a primary database in maximum protection mode is still actively communicating with the standby database, issuing the ALTER DATABASE statement to change the standby database from maximum protection mode to maximum performance mode will not succeed. Because a failover removes the original primary database from the Data Guard configuration, these features serve to protect a primary database operating in maximum protection mode from the effects of an unintended failover.


    Note:

    Do not fail over to a standby database to test whether or not the standby database is being updated correctly. Instead:

8.1.5 Role Transition Triggers

The DB_ROLE_CHANGE system event is signaled whenever a role transition occurs. This system event is signaled immediately if the database is open when the role transition occurs, or the next time the database is opened if it is closed when a role transition occurs.

The DB_ROLE_CHANGE system event can be used to fire a trigger that performs a set of actions whenever a role transition occurs.
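
For example, a minimal sketch of such a trigger, assuming a hypothetical database service named sales_rw that should be offered only while the database is in the primary role:

CREATE OR REPLACE TRIGGER manage_service_after_role_change
AFTER DB_ROLE_CHANGE ON DATABASE
DECLARE
  v_role VARCHAR2(30);
BEGIN
  -- Determine the role the database now has
  SELECT DATABASE_ROLE INTO v_role FROM V$DATABASE;
  IF v_role = 'PRIMARY' THEN
    DBMS_SERVICE.START_SERVICE('sales_rw');   -- hypothetical service name
  ELSE
    DBMS_SERVICE.STOP_SERVICE('sales_rw');
  END IF;
END;
/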

8.2 Role Transitions Involving Physical Standby Databases

The following sections describe how to perform a switchover or failover to a physical standby database:

8.2.1 Performing a Switchover to a Physical Standby Database

This section describes how to perform a switchover to a physical standby database. A switchover is initiated on the primary database and is completed on the target standby database.

Step 1   Verify that the primary database can be switched to the standby role.

Query the SWITCHOVER_STATUS column of the V$DATABASE view on the primary database. For example:

SQL> SELECT SWITCHOVER_STATUS FROM V$DATABASE;

SWITCHOVER_STATUS
-----------------
TO STANDBY
1 row selected

A value of TO STANDBY or SESSIONS ACTIVE indicates that the primary database can be switched to the standby role. If neither of these values is returned, a switchover is not possible because redo transport is either misconfigured or is not functioning properly. See Chapter 6 for information about configuring and monitoring redo transport.

Step 2   Initiate the switchover on the primary database.

Issue the following SQL statement on the primary database to switch it to the standby role:

SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH -
> SESSION SHUTDOWN;

This statement converts the primary database into a physical standby database. The current control file is backed up to the current SQL session trace file before the switchover. This makes it possible to reconstruct a current control file, if necessary.


Note:

The WITH SESSION SHUTDOWN clause can be omitted from the switchover statement if the query performed in the previous step returned a value of TO STANDBY.

Step 3   Shut down and then mount the former primary database.
SQL> SHUTDOWN ABORT;
SQL> STARTUP MOUNT;

At this point in the switchover process, the original primary database is a physical standby database (see Figure 8-2).

Step 4   Verify that the switchover target is ready to be switched to the primary role.

Query the SWITCHOVER_STATUS column of the V$DATABASE view on the standby database.

For example:

SQL> SELECT SWITCHOVER_STATUS FROM V$DATABASE;

SWITCHOVER_STATUS 
----------------- 
TO PRIMARY
1 row selected

A value of TO PRIMARY or SESSIONS ACTIVE indicates that the standby database is ready to be switched to the primary role. If neither of these values is returned, verify that Redo Apply is active and that redo transport is configured and working properly. Continue to query this column until the value returned is either TO PRIMARY or SESSIONS ACTIVE.

Step 5   Switch the target physical standby database role to the primary role.

Issue the following SQL statement on the target physical standby database:

SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY WITH SESSION SHUTDOWN;

Note:

The WITH SESSION SHUTDOWN clause can be omitted from the switchover statement if the query performed in the previous step returned a value of TO PRIMARY.

Step 6   Open the new primary database.
SQL> ALTER DATABASE OPEN;
Step 7   Start Redo Apply on the new physical standby database.

For example:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE -
> DISCONNECT FROM SESSION;
Step 8   Restart Redo Apply if it has stopped at any of the other physical standby databases in your Data Guard configuration.

For example:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE -
> DISCONNECT FROM SESSION;

8.2.2 Performing a Failover to a Physical Standby Database

This section describes how to perform a failover to a physical standby database.

Step 1   Flush any unsent redo from the primary database to the target standby database.

If the primary database can be mounted, it may be possible to flush any unsent archived and current redo from the primary database to the standby database. If this operation is successful, a zero data loss failover is possible even if the primary database is not in a zero data loss data protection mode.

Ensure that Redo Apply is active at the target standby database.

Mount, but do not open the primary database. If the primary database cannot be mounted, go to Step 2.

Issue the following SQL statement at the primary database:

SQL> ALTER SYSTEM FLUSH REDO TO target_db_name;

For target_db_name, specify the DB_UNIQUE_NAME of the standby database that is to receive the redo flushed from the primary database.

This statement flushes any unsent redo from the primary database to the standby database, and waits for that redo to be applied to the standby database.

If this statement completes without any errors, go to Step 5. If the statement completes with any error, or if it must be stopped because you cannot wait any longer for the statement to complete, continue with Step 2.

Step 2   Verify that the standby database has the most recently archived redo log file for each primary database redo thread.

Query the V$ARCHIVED_LOG view on the target standby database to obtain the highest log sequence number for each redo thread.

For example:

SQL> SELECT UNIQUE THREAD# AS THREAD, MAX(SEQUENCE#) -
> OVER (PARTITION BY THREAD#) AS LAST FROM V$ARCHIVED_LOG;

    THREAD       LAST
---------- ----------
         1        100

If possible, copy the most recently archived redo log file for each primary database redo thread to the standby database if it does not exist there, and register it. This must be done for each redo thread.

For example:

SQL> ALTER DATABASE REGISTER PHYSICAL LOGFILE 'filespec1';
Step 3   Identify and resolve any archived redo log gaps.

Query the V$ARCHIVE_GAP view on the target standby database to determine if there are any redo gaps on the target standby database.

For example:

SQL> SELECT THREAD#, LOW_SEQUENCE#, HIGH_SEQUENCE# FROM V$ARCHIVE_GAP;

THREAD#    LOW_SEQUENCE# HIGH_SEQUENCE#
---------- ------------- --------------
         1            90             92

In this example the gap comprises archived redo log files with sequence numbers 90, 91, and 92 for thread 1.

If possible, copy any missing archived redo log files to the target standby database from the primary database and register them at the target standby database. This must be done for each redo thread.

For example:

SQL> ALTER DATABASE REGISTER PHYSICAL LOGFILE 'filespec1';
Step 4   Repeat Step 3 until all gaps are resolved.

The query executed in Step 3 displays information for the highest gap only. After resolving a gap, you must repeat the query until no more rows are returned.

If, after performing Step 2 through Step 4, you are not able to resolve all gaps in the archived redo log files (for example, because you do not have access to the system that hosted the failed primary database), some data loss will occur during the failover.

Step 5   Stop Redo Apply.

Issue the following SQL statement on the target standby database:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
Step 6   Finish applying all received redo data.

Issue the following SQL statement on the target standby database:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE FINISH;

If this statement completes without any errors, proceed to Step 7.

If an error occurs, some received redo data was not applied. Try to resolve the cause of the error and re-issue the statement before proceeding to the next step.

Note that if there is a redo gap that was not resolved in Step 3 and Step 4, you will receive an error stating that there is a redo gap.

If the error condition cannot be resolved, a failover can still be performed (with some data loss) by issuing the following SQL statement on the target standby database:

SQL> ALTER DATABASE ACTIVATE PHYSICAL STANDBY DATABASE;

Proceed to Step 9 when the ACTIVATE statement completes.

Step 7   Verify that the target standby database is ready to become a primary database.

Query the SWITCHOVER_STATUS column of the V$DATABASE view on the target standby database.

For example:

SQL> SELECT SWITCHOVER_STATUS FROM V$DATABASE;

SWITCHOVER_STATUS
-----------------
TO PRIMARY
1 row selected

A value of either TO PRIMARY or SESSIONS ACTIVE indicates that the standby database is ready to be switched to the primary role. If neither of these values is returned, verify that Redo Apply is active and continue to query this view until either TO PRIMARY or SESSIONS ACTIVE is returned.

Step 8   Switch the physical standby database to the primary role.

Issue the following SQL statement on the target standby database:

SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY WITH SESSION SHUTDOWN;

Note:

The WITH SESSION SHUTDOWN clause can be omitted from the switchover statement if the query of the SWITCHOVER_STATUS column performed in the previous step returned a value of TO PRIMARY.

Step 9   Open the new primary database.
SQL> ALTER DATABASE OPEN;
Step 10   Back up the new primary database.

Oracle recommends that a full backup be taken of the new primary database.
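
For example, you might take a full backup with RMAN:

RMAN> BACKUP DATABASE PLUS ARCHIVELOG;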

Step 11   Restart Redo Apply if it has stopped at any of the other physical standby databases in your Data Guard configuration.

For example:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE -
> DISCONNECT FROM SESSION;
Step 12   Optionally, restore the failed primary database.

After a failover, the original primary database can be converted into a physical standby database of the new primary database using the method described in Section 13.2 or Section 13.7, or it can be re-created as a physical standby database from a backup of the new primary database using the method described in Section 3.2.

Once the original primary database is running in the standby role, a switchover can be performed to restore it to the primary role.

8.3 Role Transitions Involving Logical Standby Databases

The following sections describe how to perform switchovers and failovers involving a logical standby database:


Note:

Logical standby does not replicate database services. In the event of a failover or switchover to a logical standby, mid-tiers connecting to services in the primary will not be able to connect (since the creation of the service is not replicated), or will connect to an incorrect edition (since the modification of the service attribute is not replicated).

Oracle Clusterware does not replicate the services it manages to logical standbys. You must manually keep them synchronized between the primary and standby. See Oracle Clusterware Administration and Deployment Guide for more information about Oracle Clusterware.


8.3.1 Performing a Switchover to a Logical Standby Database

When you perform a switchover that changes roles between a primary database and a logical standby database, always initiate the switchover on the primary database and complete it on the logical standby database. These steps must be performed in the order in which they are described or the switchover will not succeed.

Step 1   Verify it is possible to perform a switchover on the primary database.

On the current primary database, query the SWITCHOVER_STATUS column of the V$DATABASE fixed view on the primary database to verify it is possible to perform a switchover.

For example:

SQL> SELECT SWITCHOVER_STATUS FROM V$DATABASE;

SWITCHOVER_STATUS
-----------------
TO STANDBY
1 row selected

A value of TO STANDBY or SESSIONS ACTIVE in the SWITCHOVER_STATUS column indicates that it is possible to switch the primary database to the logical standby role. If neither of these values is displayed, then verify that the Data Guard configuration is functioning correctly (for example, verify that all LOG_ARCHIVE_DEST_n parameter values are specified correctly). See Oracle Database Reference for information about other valid values for the SWITCHOVER_STATUS column of the V$DATABASE view.

Step 2   Prepare the current primary database for the switchover.

To prepare the current primary database for a logical standby database role, issue the following SQL statement on the primary database:

SQL> ALTER DATABASE PREPARE TO SWITCHOVER TO LOGICAL STANDBY;

This statement notifies the current primary database that it will soon switch to the logical standby role and begin receiving redo data from a new primary database. You perform this step on the primary database in preparation to receive the LogMiner dictionary to be recorded in the redo stream of the current logical standby database, as described in Step 3.

The value PREPARING SWITCHOVER is displayed in the V$DATABASE.SWITCHOVER_STATUS column if this operation succeeds.

Step 3   Prepare the target logical standby database for the switchover.

Use the following statement to build a LogMiner dictionary on the logical standby database that is the target of the switchover:

SQL> ALTER DATABASE PREPARE TO SWITCHOVER TO PRIMARY; 

This statement also starts redo transport services on the logical standby database that begins transmitting its redo data to the current primary database and to other standby databases in the Data Guard configuration. The sites receiving redo data from this logical standby database accept the redo data but they do not apply it.

The V$DATABASE.SWITCHOVER_STATUS on the logical standby database initially shows PREPARING DICTIONARY while the LogMiner dictionary is being recorded in the redo stream. Once this has completed successfully, the SWITCHOVER_STATUS column shows PREPARING SWITCHOVER.

Step 4   Ensure the current primary database is ready for the future primary database's redo stream.

Before you can complete the role transition of the primary database to the logical standby role, verify the LogMiner dictionary was received by the primary database by querying the SWITCHOVER_STATUS column of the V$DATABASE fixed view on the primary database. Without the receipt of the LogMiner dictionary, the switchover cannot proceed, because the current primary database will not be able to interpret the redo records sent from the future primary database. The SWITCHOVER_STATUS column shows the progress of the switchover.

When the query returns the TO LOGICAL STANDBY value, you can proceed with Step 5. For example:

SQL> SELECT SWITCHOVER_STATUS FROM V$DATABASE;

SWITCHOVER_STATUS
-----------------
TO LOGICAL STANDBY
1 row selected

Note:

You can cancel the switchover operation by issuing the following statements in the order shown:
  1. Cancel switchover on the primary database:

    SQL> ALTER DATABASE PREPARE TO SWITCHOVER CANCEL;
    
  2. Cancel the switchover on the logical standby database:

    SQL> ALTER DATABASE PREPARE TO SWITCHOVER CANCEL;
    

Step 5   Switch the primary database to the logical standby database role.

To complete the role transition of the primary database to a logical standby database, issue the following SQL statement:

SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO LOGICAL STANDBY; 

This statement waits for all current transactions on the primary database to end, prevents any new users from starting new transactions, and establishes a point in time at which the switchover is committed.

Executing this statement will also prevent users from making any changes to the data being maintained in the logical standby database. To ensure faster execution, ensure the primary database is in a quiet state with no update activity before issuing the switchover statement (for example, have all users temporarily log off the primary database). You can query the V$TRANSACTION view for information about the status of any current in-progress transactions that could delay execution of this statement.
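
For example, the following query shows how many transactions are still active and must complete (or be rolled back) before the switchover statement can finish:

SQL> SELECT COUNT(*) FROM V$TRANSACTION WHERE STATUS = 'ACTIVE';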

The primary database has now undergone a role transition to run in the standby database role.

When a primary database undergoes a role transition to a logical standby database role, you do not have to shut down and restart the database.

Step 6   Ensure all available redo has been applied to the target logical standby database that is about to become the new primary database.

After you complete the role transition of the primary database to the logical standby role and the switchover notification is received by the standby databases in the configuration, you should verify the switchover notification was processed by the target standby database by querying the SWITCHOVER_STATUS column of the V$DATABASE fixed view on the target standby database. Once all available redo records are applied to the logical standby database, SQL Apply automatically shuts down in anticipation of the expected role transition.

The SWITCHOVER_STATUS value is updated to show progress during the switchover. When the status is TO PRIMARY, you can proceed with Step 7.

For example:

SQL> SELECT SWITCHOVER_STATUS FROM V$DATABASE;

SWITCHOVER_STATUS
-----------------
TO PRIMARY
1 row selected

See Oracle Database Reference for information about other valid values for the SWITCHOVER_STATUS column of the V$DATABASE view.

Step 7   Switch the target logical standby database to the primary database role.

On the logical standby database that you want to switch to the primary role, use the following SQL statement to switch the logical standby database to the primary role:

SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;

There is no need to shut down and restart any logical standby databases that are in the Data Guard configuration. As described in Section 8.1.2, all other logical standbys in the configuration will become standbys of the new primary, but any physical standby databases will remain standbys of the original primary database.

Step 8   Start SQL Apply on the new logical standby database.

On the new logical standby database, start SQL Apply:

SQL> ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;

8.3.2 Performing a Failover to a Logical Standby Database

This section describes how to perform failovers involving a logical standby database. A failover role transition involving a logical standby database necessitates taking corrective actions on the failed primary database and on all bystander logical standby databases. If Flashback Database was not enabled on the failed primary database, you must re-create the database from backups taken from the current primary database. Otherwise, you can follow the procedure described in Section 13.2 to convert a failed primary database to be a logical standby database for the new primary database.

Depending on the protection mode for the configuration and the attributes you chose for redo transport services, it might be possible to automatically recover all or some of the primary database modifications.

Step 1   Flush any unsent redo from the primary database to the target standby database.

If the primary database can be mounted, it may be possible to flush any unsent archived and current redo from the primary database to the standby database. If this operation is successful, a zero data loss failover is possible even if the primary database is not in a zero data loss data protection mode.

Ensure that Redo Apply is active at the target standby database.

Mount, but do not open the primary database.

Issue the following SQL statement at the primary database:

SQL> ALTER SYSTEM FLUSH REDO TO target_db_name;

For target_db_name, specify the DB_UNIQUE_NAME of the standby database that is to receive the redo flushed from the primary database.

This statement flushes any unsent redo from the primary database to the standby database, and waits for that redo to be applied to the standby database.

Step 2   Copy and register any missing archived redo log files to the target logical standby database slated to become the new primary database.

Depending on the condition of the components in the configuration, you might have access to the archived redo log files on the primary database. If so, do the following:

  1. Determine if any archived redo log files are missing on the logical standby database (see the example query after this list).

  2. Copy missing log files from the primary database to the logical standby database.

  3. Register the copied log files.
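
For step 1, one way to look for missing files is to query the DBA_LOGSTDBY_LOG view on the logical standby database. The following query is a minimal sketch: each row it returns (other than the most recently registered file for a thread) marks a registered log file after which the next expected log file has not been registered.

SQL> SELECT THREAD#, SEQUENCE#, FILE_NAME FROM DBA_LOGSTDBY_LOG L
  2> WHERE NEXT_CHANGE# NOT IN
  3> (SELECT FIRST_CHANGE# FROM DBA_LOGSTDBY_LOG WHERE L.THREAD# = THREAD#)
  4> ORDER BY THREAD#, SEQUENCE#;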

You can register an archived redo log file with the logical standby database by issuing the following statement. For example:

SQL> ALTER DATABASE REGISTER LOGICAL LOGFILE -
> '/disk1/oracle/dbs/log-%r_%s_%t.arc';
Database altered.

Step 3   Enable remote destinations.

If you have not previously configured role-based destinations, identify the initialization parameters that correspond to the remote logical standby destinations for the new primary database, and manually enable archiving of redo data for each of these destinations.

For example, to enable archiving for the remote destination defined by the LOG_ARCHIVE_DEST_2 parameter, issue the following statement:

SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=ENABLE SCOPE=BOTH;

To ensure this change will persist if the new primary database is later restarted, update the appropriate text initialization parameter file or server parameter file. In general, when the database operates in the primary role, you must enable archiving to remote destinations, and when the database operates in the standby role, you must disable archiving to remote destinations.
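
Alternatively, role-based destinations let you avoid manually enabling and disabling remote destinations at each role change, because the VALID_FOR attribute restricts a destination to the roles and log file types you specify. The following statement is a minimal sketch, assuming a hypothetical standby whose DB_UNIQUE_NAME and Oracle Net service name are both boston:

SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=boston ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=boston' SCOPE=BOTH;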

Step 4   Activate the new primary database.

Issue the following statement on the target logical standby database (that you are transitioning to the new primary role):

SQL> ALTER DATABASE ACTIVATE LOGICAL STANDBY DATABASE FINISH APPLY;

This statement stops the RFS process, applies remaining redo data in the standby redo log file before the logical standby database becomes a primary database, stops SQL Apply, and activates the database in the primary database role.

If the FINISH APPLY clause is not specified, then unapplied redo from the current standby redo log file will not be applied before the standby database becomes the primary database.

Step 5   Recover other standby databases after a failover.

Follow the method described in Section 13.1 to ensure existing logical standby databases can continue to provide protection for the new primary database.

Step 6   Back up the new primary database.

Back up the new primary database immediately after the Data Guard database failover. Immediately performing a backup is a necessary safety measure, because you cannot recover changes made after the failover without a complete backup copy of the database.
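
For example, the following RMAN session is a minimal sketch of one way to take a full backup together with the archived redo log files; adapt it to your own backup strategy, device configuration, and retention policy:

$ rman target /
RMAN> BACKUP DATABASE PLUS ARCHIVELOG;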

Step 7   Restore the failed primary database.

After a failover, the original primary database can be converted into a logical standby database of the new primary database using the method described in Section 13.2, or it can be re-created as a logical standby database from a backup of the new primary database as described in Chapter 4.

Once the original primary database has been converted into a standby database, a switchover can be performed to restore it to the primary role.

8.4 Using Flashback Database After a Role Transition

After a role transition, you can optionally use the FLASHBACK DATABASE command to revert the databases to a point in time or system change number (SCN) prior to when the role transition occurred. If you flash back a primary database, you must flash back all of its standby databases to either the same (or an earlier) SCN or time. When flashing back primary or standby databases in this way, you do not have to be aware of past switchovers. Oracle can automatically flash back across past switchovers if the target SCN or time is earlier than any past switchover.


Note:

Flashback Database must be enabled on the databases before the role transition occurs. See Oracle Database Backup and Recovery User's Guide for more information.
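
For example, the following is a minimal sketch of flashing a database back to a target SCN; the SCN value shown is only a placeholder, and the database must be mounted (not open) when the FLASHBACK DATABASE statement is issued. After the flashback completes on a standby database, restart Redo Apply or SQL Apply as appropriate.

SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;
SQL> FLASHBACK DATABASE TO SCN 1234567;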

8.4.1 Using Flashback Database After a Switchover

After a switchover, you can return databases to a time or system change number (SCN) prior to when the switchover occurred using the FLASHBACK DATABASE command.

If the switchover involved a physical standby database, the primary and standby database roles are preserved during the flashback operation. That is, the role in which a database is running does not change when the database is flashed back to the target SCN or time. A database that was running in the physical standby role after the switchover but prior to the flashback will still be running in the physical standby role after the Flashback Database operation.

If the switchover involved a logical standby database, flashing back changes the role of the standby database to the role it had at the target SCN or time.

8.4.2 Using Flashback Database After a Failover

You can use Flashback Database to return the failed primary database to a point in time before the failover occurred and then convert it into a standby database. See Section 13.2, "Converting a Failed Primary Into a Standby Database Using Flashback Database" for the complete step-by-step procedure.


Preface

Oracle Data Guard is the most effective solution available today to protect the core asset of any enterprise, its data, and to make it available on a 24x7 basis even in the face of disasters and other calamities. This guide describes Oracle Data Guard technology and concepts, and helps you configure and implement standby databases.

Audience

Oracle Data Guard Concepts and Administration is intended for database administrators (DBAs) who administer the backup, restoration, and recovery operations of an Oracle database system.

To use this document, you should be familiar with relational database concepts and basic backup and recovery administration. You should also be familiar with the operating system environment under which you are running Oracle software.

Documentation Accessibility

For information about Oracle's commitment to accessibility, visit the Oracle Accessibility Program website at http://www.oracle.com/pls/topic/lookup?ctx=acc&id=docacc.

Access to Oracle Support

Oracle customers have access to electronic support through My Oracle Support. For information, visit http://www.oracle.com/pls/topic/lookup?ctx=acc&id=info or visit http://www.oracle.com/pls/topic/lookup?ctx=acc&id=trs if you are hearing impaired.

Related Documents

Readers of Oracle Data Guard Concepts and Administration should also read:

  • The beginning of Oracle Database Concepts, which provides an overview of the concepts and terminology related to the Oracle database and serves as a foundation for the more detailed information in this guide.

  • The chapters in the Oracle Database Administrator's Guide that deal with managing the control files, online redo log files, and archived redo log files.

  • The chapter in Oracle Database Utilities that discusses LogMiner technology.

  • Oracle Data Guard Broker that describes the graphical user interface and command-line interface for automating and centralizing the creation, maintenance, and monitoring of Oracle Data Guard configurations.

  • Oracle Database High Availability Overview for information about how Oracle Data Guard is used as a key component in high availability and disaster recovery environments.

  • Oracle Enterprise Manager online Help system

Conventions

The following text conventions are used in this document:

boldface: Boldface type indicates graphical user interface elements associated with an action, or terms defined in text or the glossary.

italic: Italic type indicates book titles, emphasis, or placeholder variables for which you supply particular values.

monospace: Monospace type indicates commands within a paragraph, URLs, code in examples, text that appears on the screen, or text that you enter.


Index

A  B  C  D  E  F  G  H  I  J  K  L  M  N  O  P  Q  R  S  T  U  V  W  Z 

A

activating
a logical standby database, 8.3.2, 16.1
a physical standby database, 11.8.6, 16.1
Active Data Guard
and physical standby databases, 2.1.1, 9.2
and the real-time query feature, 9.2.1
adding
datafiles, 9.3.1, A.10.1.1, A.10.1.1
indexes on logical standby databases, 2.1.2, 10.5.4.1
new or existing standby databases, 1.3
online redo log files, 9.3.5
tablespaces, 9.3.1
adjusting
initialization parameter file
for logical standby database, 4.2.4.2
AFFIRM attribute, 15
ALTER DATABASE statement
ABORT LOGICAL STANDBY clause, 16.1
ACTIVATE STANDBY DATABASE clause, 8.3.2, 11.8.6, 16.1, 16.1
ADD STANDBY LOGFILE clause, 16.1, A.1.1
ADD STANDBY LOGFILE MEMBER clause, 16.1, A.1.1, A.1.1
ADD SUPPLEMENTAL LOG DATA clause, 16.1
CLEAR UNARCHIVED LOGFILES clause, 9.5
COMMIT TO SWITCHOVER clause, 8.3.1, 8.3.1, 8.3.1, 16.1
in Oracle Real Application Clusters, D.3.1
troubleshooting, A.4.2, A.4.2, A.4.3
CREATE CONTROLFILE clause, 9.5
CREATE DATAFILE AS clause, A.1.1
CREATE STANDBY CONTROLFILE clause, 3.2.2, A.1.3
REUSE clause, 16.1
DROP LOGFILE clause, A.1.1
DROP STANDBY LOGFILE MEMBER clause, 16.1, 16.1, 16.1, A.1.1
FORCE LOGGING clause, 2.3.2, 3.1.1, 13.4, 13.4, 16.1
GUARD clause, 10.2
MOUNT STANDBY DATABASE clause, 16.1
OPEN READ ONLY clause, 16.1
OPEN RESETLOGS clause, 3.2.2, 9.5
PREPARE TO SWITCHOVER clause, 8.3.1, 8.3.1, 16.1
RECOVER MANAGED STANDBY DATABASE clause, 3.2.6, 4.2.5, 16.1, 16.1, 16.1
background process, 7.3.1
canceling, 7.3.2
controlling Redo Apply, 7.3.1, 11.8.2
failover, 16.1
foreground session, 7.3.1
overriding the delay interval, 7.2.2
starting real time apply, 7.3.1
REGISTER LOGFILE clause, 16.1, A.4.1
RENAME FILE clause, A.1.1, A.1.1
SET STANDBY DATABASE clause
TO MAXIMIZE AVAILABILITY clause, 16.1
TO MAXIMIZE PERFORMANCE clause, 8.1.4
TO MAXIMIZE PROTECTION clause, 16.1
START LOGICAL STANDBY APPLY clause, 7.4.1, 12.5, A.6
IMMEDIATE keyword, 7.4.1
starting SQL Apply, 4.2.5
STOP LOGICAL STANDBY APPLY clause, 7.4.2, 8.3.2, 16.1
ALTER SESSION DISABLE GUARD statement
overriding the database guard, 10.5.4
ALTER SESSION statement
ENABLE GUARD clause, 16.2
ALTER SYSTEM statement
SWITCH LOGFILE clause, 3.2.7
ALTER TABLESPACE statement, 9.3.4, 13.4.2, A.10.1.1
FORCE LOGGING clause, 9.3.6
alternate archive destinations
setting up initialization parameters for, A.2
ALTERNATE attribute, 15, 15
LOG_ARCHIVE_DEST_n initialization parameter, A.2
LOG_ARCHIVE_DEST_STATE_n initialization parameter, 6.2.2
ANALYZER process, 10.1
APPLIER process, 10.1
apply lag
monitoring in a real-time query environment, 9.2.1.1
apply lag tolerance
configuring in a real-time query environment, 9.2.1.2
apply services
defined, 1.2.2, 7.1
delaying application of redo data, 7.2.2, 15
real-time apply
defined, 7.2.1, 7.2.1
monitoring with LOG_ARCHIVE_TRACE, F.2
Redo Apply
defined, 7.1, 7.3
monitoring, 7.3.3
starting, 7.3.1
stopping, 7.3.2
SQL Apply
defined, 1.2.2, 7.1, 7.1
monitoring, 7.4.3
starting, 7.4.1
stopping, 7.4.2
applying
redo data immediately, 7.2.1
redo data on standby database, 1.2, 1.2.2, 7
SQL statements to logical standby databases, 7.4
applying state, 10.4.1
AQ_TM_PROCESSES dynamic parameter, A.4.2
archive destinations
alternate, A.2
archived redo log files
accessing information about, 9.5.1.3
applying
Redo Apply technology, 1.2.2
SQL Apply technology, 1.2.2
delaying application, 15
on the standby database, 7.2.2
deleting unneeded, 10.4.2
destinations
disabling, 6.2.2
enabling, 6.2.2
managing gaps, 1.7
See also gap management
manually transferring, 2.3.2
redo data transmitted, 1.2.2, 7.1
registering
during failover, 8.3.2
standby databases and, 7.3.3, 7.4.3, 9.5.1
troubleshooting switchover problems, A.4.1
ARCHIVELOG mode
software requirements, 2.3.2
archiver processes (ARCn)
influenced by MAX_CONNECTIONS attribute, 15
archiving
real-time apply, 7.2.1
specifying
failure resolution policies for, 15
standby redo logs, 6.2.3.2
to a fast recovery area, 6.2.3.2.2
to a local file system, 6.2.3.2.3
to failed destinations, 15
ASM
See Automatic Storage Management (ASM)
ASYNC attribute, 15
attributes
deprecated for the LOG_ARCHIVE_DEST_n initialization parameter, 15
AUD$ table
replication on logical standbys, C.12.2
automatic block repair, 9.2.1.5
automatic detection of missing log files, 1.2.1, 1.7
automatic failover, 1.2.3
Automatic Storage Management (ASM)
creating a standby database that uses, 13.5
automatic switchover, 1.2.3
See also switchovers

B

BACKUP INCREMENTAL FROM SCN command
scenarios using, 11.10
backup operations
after failovers, 8.3.2
after unrecoverable operations, 13.4.3, 13.4.3
configuring on a physical standby database, 1.1.3
datafiles, 13.4.2
offloading on the standby database, 1.7
primary databases, 1.1.2
used by the broker, 1.3
using RMAN, 11
basic readable standby database See simulating a standby database environment
batch processing
on a logical standby database, 10.1.1.4
benefits
Data Guard, 1.7
logical standby database, 2.1.2
of a rolling upgrade, 12.1
physical standby database, 2.1.1
BFILE data types
in logical standby databases, C.1.2
block repair, automatic, 9.2.1.5
broker
command-line interface, 1.7
defined, 1.3
graphical user interface, 1.7
BUILDER process, 10.1

C

cascading redo data, 6.3
configuration requirements, 6.3
data protection considerations, 6.3.2
restrictions, 6.3
character sets
changing on primary databases, 13.8
configurations with differing, C.15
checklist
tasks for creating physical standby databases, 3.2, 3.2
tasks for creating standby databases, 4.2, 4.2
checkpoints
V$LOGSTDBY_PROGRESS view, 10.1.1.3
chunking
transactions, 10.1.1.1
CJQ0 process, A.4.2
CLEAR UNARCHIVED LOGFILES clause
of ALTER DATABASE, 9.5
collections data types
in logical standby databases, C.1.2
command-line interface
broker, 1.7
commands, Recovery Manager
DUPLICATE, E.2.1
COMMIT TO SWITCHOVER clause
of ALTER DATABASE, 8.3.1, 8.3.1, 16.1
in Oracle Real Application Clusters, D.3.1
troubleshooting, A.4.2, A.4.2, A.4.3
COMMIT TO SWITCHOVER TO PRIMARY clause
of ALTER DATABASE, 8.3.1
communication
between databases in a Data Guard configuration, 1.1
COMPATIBLE initialization parameter
setting after upgrading Oracle Database software, B.4
setting for a rolling upgrade, 12.2, 12.5, 12.5
complementary technologies, 1.6
COMPRESSION attribute, 15
configuration options
creating with Data Guard broker, 1.3
overview, 1.1
physical standby databases
location and directory structure, 2.4
standby databases
delayed standby, 7.2.2
configuring
backups on standby databases, 1.1.3
disaster recovery, 1.1.3
initialization parameters
for alternate archive destinations, A.2
listener for physical standby databases, 3.2.5
no data loss, 1.2.3
physical standby databases, 2.4
reporting operations on a logical standby database, 1.1.3
standby databases at remote locations, 1.1.3
constraints
handled on a logical standby database, 10.6.3
Context
unsupported data types, C.1.2
Context data types
in logical standby databases, C.1.2
control files
copying, 3.2.4
creating for standby databases, 3.2.2
CONVERT TO SNAPSHOT STANDBY clause on the ALTER DATABASE statement, 16.1
converting
a logical standby database to a physical standby database
aborting, 4.2.4.1
a physical standby database to a logical standby database, 4.2.4.1
COORDINATOR process, 10.1
LSP background process, 10.1
copying
control files, 3.2.4
CREATE CONTROLFILE clause
of ALTER DATABASE, 9.5
CREATE DATABASE statement
FORCE LOGGING clause, 13.4
CREATE DATAFILE AS clause
of ALTER DATABASE, A.1.1
CREATE STANDBY CONTROLFILE clause
of ALTER DATABASE, 3.2.2, 16.1, A.1.3
CREATE TABLE AS SELECT (CTAS) statements
applied on a logical standby database, 10.1.1.5
creating
indexes on logical standby databases, 10.5.4.1

D

data availability
balancing against system performance requirements, 1.7
Data Guard broker
defined, 1.3
distributed management framework, 8
failovers, 1.3
fast-start, 8
manual, 1.3, 8
fast-start failover, 1.3
switchovers, 8
Data Guard configurations
archiving to standby destinations using the log writer process, 7.2.1
defined, 1.1
protection modes, 1.4
upgrading Oracle Database software, B
data loss
due to failover, 1.2.3
switchover and, 8.1
data protection
balancing against performance, 1.7
benefits, 1.7
flexibility, 1.7
provided by Data Guard, 1
data protection modes
enforced by redo transport services, 1.2.1
overview, 1.4, 1.4
Data Pump utility
using transportable tablespaces with physical standby databases, 9.3.3
data types
BFILE, C.1.2
collections in logical standby databases, C.1.2
ROWID, C.1.2
Spatial, Image, and Context, C.1.2
UROWID, C.1.2
user-defined, C.1.2
database guard, 10.5.4
overriding, 10.5.4
database incarnation
changes with OPEN RESETLOGS, 9.4, 9.4
database roles
primary, 1.1.1, 8.1
standby, 1.1.2, 8.1
transitions, 1.2.3
database schema
physical standby databases, 1.1.2
databases
failover and, 8.1.4
role transition and, 8.1
surviving disasters and data corruptions, 1
upgrading software versions, 12.1
datafiles
adding to primary database, 9.3.1
monitoring, 9.5, 13.4.2
renaming on the primary database, 9.3.4
DB_FILE_NAME_CONVERT initialization parameter
setting at standby site after a switchover, A.4.4
setting on physical standby database, 3.2.3
when planning standby location and directory structure, 2.4
DB_NAME initialization parameter, 3.1.4
DB_ROLE_CHANGE system event, 8.1.5
DB_UNIQUE_NAME attribute, 15
DB_UNIQUE_NAME initialization parameter, A.4.3
required with LOG_ARCHIVE_CONFIG parameter, 14
setting database initialization parameters, 3.1.4
DBA_DATA_FILES view, 9.5
DBA_LOGMNR_PURGED_LOG view
list archived redo log files that can be deleted, 10.4.2
DBA_LOGSTDBY_EVENTS view, 10.3.1, 17, A.6
recording unsupported operations in, 10.5.1
DBA_LOGSTDBY_HISTORY view, 17
DBA_LOGSTDBY_LOG view, 10.3.2, 17
DBA_LOGSTDBY_NOT_UNIQUE view, 17
DBA_LOGSTDBY_PARAMETERS view, 17
DBA_LOGSTDBY_SKIP view, 17
DBA_LOGSTDBY_SKIP_TRANSACTION view, 17
DBA_LOGSTDBY_UNSUPPORTED view, 17
DBA_TABLESPACES view, 9.5
DBMS_ALERT, C.9.2
DBMS_AQ, C.9.2
DBMS_DESCRIBE, C.9.1
DBMS_JAVA, C.9.2
DBMS_LOB, C.9.1
DBMS_LOGSTDBY package
INSTANTIATE_TABLE procedure, 10.5.5
SKIP procedure, A.6
SKIP_ERROR procedure, A.3
SKIP_TRANSACTION procedure, A.6
DBMS_LOGSTDBY.BUILD procedure
building a dictionary in the redo data, 4.2.3.2
DBMS_METADATA, C.9.1
DBMS_OBFUSCATION_TOOLKIT, C.9.1
DBMS_OUTPUT, C.9.1
DBMS_PIPE, C.9.1
DBMS_RANDOM, C.9.1
DBMS_REDEFINITION, C.9.2
DBMS_REFRESH, C.9.2
DBMS_REGISTRY, C.9.2
DBMS_SCHEDULER, C.9.1
DBMS_SPACE_ADMIN, C.9.2
DBMS_SQL, C.9.1
DBMS_TRACE, C.9.1
DBMS_TRANSACTION, C.9.1
DBSNMP process, A.4.2
DDL statements
supported by SQL Apply, C
DDL Statements
that use DBLINKS, C.12.1
DDL transactions
applied on a logical standby database, 10.1.1.5
applying to a logical standby database, 10.1.1.5
DEFER attribute
LOG_ARCHIVE_DEST_STATE_n initialization parameter, 6.2.2
DELAY attribute, 15
LOG_ARCHIVE_DEST_n initialization parameter, 7.2.2
DELAY option
of ALTER DATABASE RECOVER MANAGED STANDBY DATABASE
cancelling, 7.2.2
delaying
application of archived redo log files, 15
application of redo log files, 7.2.2
deleting
archived redo log files
indicated by the DBA_LOGMNR_PURGED_LOG view, 10.4.2
not needed by SQL Apply, 10.4.2
deprecated attributes
on the LOG_ARCHIVE_DEST_n initialization parameter, 15
destinations
displaying with V$ARCHIVE_DEST view, 17
role-based definitions, 15
detecting
missing archived redo log files, 1.2.1, 1.7
DG_CONFIG attribute, 15
DGMGRL command-line interface
invoking failovers, 1.3, 8
simplifying switchovers, 1.3, 8
dictionary
building a LogMiner, 4.2.3.2
direct path inserts
SQL Apply DML considerations, 10.1.1.4
directory locations
Optimal Flexible Architecture (OFA), 2.4, 2.4
set up with ASM, 2.4
set up with OMF, 2.4
structure on standby databases, 2.4
disabling
a destination for archived redo log files, 6.2.2
disaster recovery
benefits, 1.7
configuring, 1.1.3
provided by Data Guard, 1
provided by standby databases, 1.1.3
disk I/O
controlling with the AFFIRM and NOAFFIRM attributes, 15
distributed transactions, C.13
DML
batch updates on a logical standby database, 10.1.1.4
DML transactions
applying to a logical standby database, 10.1.1.4
downgrading
Oracle Database software, B.6
DROP STANDBY LOGFILE clause
of ALTER DATABASE, A.1.1
DROP STANDBY LOGFILE MEMBER clause
of ALTER DATABASE, 16.1, 16.1, 16.1, A.1.1
dropping
online redo log files, 9.3.5
dynamic parameters
AQ_TM_PROCESSES, A.4.2
JOB_QUEUE_PROCESSES, A.4.2

E

ENABLE attribute
LOG_ARCHIVE_DEST_STATE_n initialization parameter, 6.2.2
ENABLE GUARD clause
of ALTER SESSION, 16.2
enabling
database guard on logical standby databases, 16.2
destinations for archived redo log files, 6.2.2
real-time apply
on logical standby databases, 7.4.1
on physical standby databases, 7.3.1
extensible indexes
supported by logical standby databases, C.1.2

F

failovers, 1.2.3
Data Guard broker, 1.3, 8
defined, 1.2.3, 8.1
displaying history with DBA_LOGSTDBY_HISTORY, 17
fast-start failover, 8
flashing back databases after, 8.4
logical standby databases and, 8.3.2
manual versus automatic, 1.2.3
performing backups after, 8.3.2
physical standby databases and, 16.1
preparing for, 8.1.4
simplifying with Data Guard broker, 8
transferring redo data before, 8.1.4
viewing characteristics for logical standby databases, 10.3.3
with maximum performance mode, 8.1.4
with maximum protection mode, 8.1.4
failure resolution policies
specifying for redo transport services, 15
fast-start failover
automatic failover, 1.3, 8
monitoring, 9.5
FGA_LOG$ table
replication on logical standbys, C.12.2
file specifications
renaming on the logical standby database, 10.5.3
Flashback Database
after a role transition, 8.4
after OPEN RESETLOGS, 13.3
after role transitions, 8.4
characteristics complementary to Data Guard, 1.6
physical standby database, 13.2.1
FORCE LOGGING clause
of ALTER DATABASE, 2.3.2, 3.1.1, 13.4, 13.4, 16.1
of ALTER TABLESPACE, 9.3.6
of CREATE DATABASE, 13.4

G

gap management
automatic detection and resolution, 1.2.1, 1.7
detecting missing log files, 1.7
registering archived redo log files
during failover, 8.3.2
GV$INSTANCE view, D.3.1

H

high availability
benefits, 1.7
provided by Data Guard, 1
provided by Oracle RAC and Data Guard, 1.6

I

idle state, 10.4.1
Image data types
in logical standby databases, C.1.2
incarnation of a database
changed, 9.4, 9.4
initialization parameters
DB_UNIQUE_NAME, 3.1.4, A.4.3
LOG_ARCHIVE_MIN_SUCCEED_DEST, 15
LOG_ARCHIVE_TRACE, F.2
LOG_FILE_NAME_CONVERT, E.2.2.4
modifying for physical standby databases, 3.2.3
setting for both the primary and standby roles, 15
INITIALIZING state, 10.4.1
INSTANTIATE_TABLE procedure
of DBMS_LOGSTDBY, 10.5.5

J

JOB_QUEUE_PROCESSES dynamic parameter, A.4.2

K

KEEP IDENTITY clause, 4.2.4.1

L

latency
on logical standby databases, 10.1.1.4, 10.1.1.5
listener.ora file
configuring, 3.2.5
redo transport services tuning and, A.7
troubleshooting, A.1.2, A.7
loading dictionary state, 10.4.1
LOCATION attribute, 15
setting
LOG_ARCHIVE_DEST_n initialization parameter, A.2
log apply services
Redo Apply
monitoring, 9.5.1
starting, 9.1.1
stopping, 9.1.2
tuning for Redo Apply, 9.6
log writer process (LGWR)
ASYNC network transmission, 15
NET_TIMEOUT attribute, 15
SYNC network transmission, 15
LOG_ARCHIVE_CONFIG initialization parameter, 3.1.4, 3.1.4, 3.2.3
example, 15
listing unique database names defined with, 17
relationship to DB_UNIQUE_NAME parameter, 14
relationship to DG_CONFIG attribute, 15
LOG_ARCHIVE_DEST_n initialization parameter
AFFIRM attribute, 15
ALTERNATE attribute, 15, 15, A.2
ASYNC attribute, 15
COMPRESSION attribute, 15
DB_UNIQUE_NAME attribute, 15
DELAY attribute, 7.2.2, 15
deprecated attributes, 15
LOCATION attribute, 15, A.2
MANDATORY attribute, 15
MAX_CONNECTIONS attribute, 15
MAX_FAILURE attribute, 15
NET_TIMEOUT attribute, 15
NOAFFIRM attribute, 15
NOALTERNATE attribute, A.2
NODELAY attribute, 7.2.2
NOREGISTER attribute, 15
REOPEN attribute, 15, 15
SERVICE attribute, 15
SYNC attribute, 15
VALID_FOR attribute, 15
LOG_ARCHIVE_DEST_STATE_n initialization parameter
ALTERNATE attribute, 6.2.2
DEFER attribute, 6.2.2
ENABLE attribute, 6.2.2
LOG_ARCHIVE_MAX_PROCESSES initialization parameter
relationship to MAX_CONNECTIONS, 15
LOG_ARCHIVE_MIN_SUCCEED_DEST initialization parameter, 15
LOG_ARCHIVE_TRACE initialization parameter, F.2
LOG_FILE_NAME_CONVERT initialization parameter
setting at standby site after a switchover, A.4.4
setting on physical standby databases, 3.2.3
when planning standby location and directory structure, 2.4
logical change records (LCR)
converted by PREPARER process, 10.1
exhausted cache memory, 10.1.1.2
staged, 10.1
logical standby databases, 1.1.2
adding
datafiles, A.10.1.1
indexes, 2.1.2, 10.5.4.1
tables, 10.5.5
background processes, 10.1
benefits, 2.1.2
controlling user access to tables, 10.2
creating, 4
converting from a physical standby database, 4.2.4.1
with Data Guard broker, 1.3
data types
supported, C, C.1.1
unsupported, C.1.2
database guard
overriding, 10.5.4
executing SQL statements on, 1.1.2
failovers, 8.3.2
displaying history of, 17, 17
handling failures, A.3
viewing characteristics with V$LOGSTDBY_STATS, 10.3.3
logical standby process (LSP) and, 10.1
materialized views
creating on, 2.1.2
support for, C.11
monitoring, 7.4.3, 17
renaming the file specification, 10.5.3
setting up a skip handler, 10.5.3
SQL Apply, 1.2.2
resynchronizing with primary database branch of redo, 10.6.5
skipping DDL statements, C.11
skipping SQL statements, C.11
starting real-time apply, 7.4.1
stopping, 7.4.2
technology, 7.1
transaction size considerations, 10.1.1.1
starting
real-time apply, 7.4.1, 7.4.1
states
applying, 10.4.1
idle, 10.4.1
initializing, 10.4.1
loading dictionary, 10.4.1
waiting on gaps, 10.4.1
support for primary databases with Transparent Data Encryption, C.2
switchovers, 8.3.1, 8.3.1
throughput and latency, 10.1.1.4, 10.1.1.5
upgrading, B.3
rolling upgrades, 2.3.2
logical standby process (LSP)
COORDINATOR process, 10.1
LogMiner dictionary
using DBMS_LOGSTDBY.BUILD procedure to build, 4.2.3.2
when creating a logical standby database, 4.2.4.1

M

managed recovery operations
See Redo Apply
MANDATORY attribute, 15
materialized views
creating on logical standby databases, 2.1.2
MAX_CONNECTIONS attribute
configuring Oracle RAC for parallel archival, 15
reference, 15
MAX_FAILURE attribute, 15
maximum availability mode
introduction, 1.4
maximum performance mode, 8.1.4
introduction, 1.4
maximum performance protection mode, 5.1
maximum protection mode
for Oracle Real Application Clusters, D.2.2
introduction, 1.4
standby databases and, 8.1.4
memory
exhausted LCR cache, 10.1.1.2
missing log sequence
See also gap management
detecting, 1.7, 1.7
modifying
a logical standby database, 10.5.4
initialization parameters for physical standby databases, 3.2.3
monitoring
primary database events, 9.5
tablespace status, 9.5
MOUNT STANDBY DATABASE clause
of ALTER DATABASE, 16.1
multimedia data types
in logical standby databases, C.1.2
unsupported by logical standby databases, C.1.2

N

NET_TIMEOUT attribute, 15
network connections
configuring multiple, 15
in an Oracle RAC environment, 15
network I/O operations
network timers
NET_TIMEOUT attribute, 15
tuning
redo transport services, A.7
network timeouts
acknowledging, 15
no data loss
data protection modes overview, 1.4
ensuring, 1.2.3
guaranteeing, 1.2.3
provided by maximum availability mode, 1.4
provided by maximum protection mode, 1.4
NOAFFIRM attribute, 15
NOALTERNATE attribute
LOG_ARCHIVE_DEST_n initialization parameter, A.2
NODELAY attribute
LOG_ARCHIVE_DEST_n initialization parameter, 7.2.2
NOREGISTER attribute, 15

O

OMF
See Oracle Managed Files (OMF)
on-disk database structures
physical standby databases, 1.1.2
online redo log files
adding, 9.3.5
dropping, 9.3.5
OPEN READ ONLY clause
of ALTER DATABASE, 16.1
OPEN RESETLOGS
flashing back after, 13.3
OPEN RESETLOGS clause
database incarnation change, 9.4, 9.4
of ALTER DATABASE, 3.2.2, 9.5
recovery, 9.4, 9.4
operational requirements, 2.3, 2.3.2
Optimal Flexible Architecture (OFA)
directory structure, 2.4, 2.4
ORA-01102 message
causing switchover failures, A.4.3
Oracle Automatic Storage Management (ASM), 2.4
Oracle Database software
requirements for upgrading with SQL Apply, 12.2
upgrading, 2.3.2, B.1
upgrading with SQL Apply, 12.1
Oracle Enterprise Manager
invoking failovers, 1.3, 8
invoking switchovers, 1.3, 8
Oracle Managed Files (OMF), 2.4
creating a standby database that uses, 13.5
Oracle Net
communication between databases in a Data Guard configuration, 1.1
Oracle Real Application Clusters
characteristics complementary to Data Guard, 1.6
configuring for multiple network connections, 15
primary databases and, 1.1.1, D.1.1
setting
maximum data protection, D.2.2
standby databases and, 1.1.2, D.1
Oracle Recovery Manager utility (RMAN)
backing up files on a physical standby database, 11
Oracle Standard Edition
simulating a standby database environment, 2.3.2

P

pageout considerations, 10.1.1.2
pageouts
SQL Apply, 10.1.1.2
parallel DML (PDML) transactions
SQL Apply, 10.1.1.3, 10.1.1.4
patch set releases
upgrading, 2.3.2
performance
balancing against data availability, 1.7
balancing against data protection, 1.7
physical standby databases
and Oracle Active Data Guard, 2.1.1
applying redo data, 7.1, 7.3
Redo Apply technology, 7.3
applying redo log files
starting, 7.3.1
benefits, 2.1.1
configuration options, 2.4
converting datafile path names, 3.2.3
converting log file path names, 3.2.3
converting to a logical standby database, 4.2.4.1
creating
checklist of tasks, 3.2
configuring a listener, 3.2.5
directory structure, 2.4
initialization parameters for, 3.2.3
with Data Guard broker, 1.3
defined, 1.1.2
failover
checking for updates, 8.1.4
flashing back after failover, 13.2.1
monitoring, 7.3.3, 9.5.1, 17
opening for read-only or read/write access, 9.2
read-only, 9.2
recovering through OPEN RESETLOGS, 9.4
Redo Apply, 1.2.2
resynchronizing with primary database branch of redo, 9.4, 9.4
role transition and, 8.2
rolling forward with BACKUP INCREMENTAL FROM SCN command, 11.10
shutting down, 9.1.2
starting
apply services, 7.3.1
real-time apply, 7.3.1
synchronizing with the primary database, 11.10
tuning the log apply rate, 9.6
upgrading, B.2
using transportable tablespaces, 9.3.3
PL/SQL supplied packages
supported, C.9.1
unsupported, C.9.2
PREPARE TO SWITCHOVER clause
of ALTER DATABASE, 8.3.1, 8.3.1, 16.1
PREPARER process, 10.1
staging LCRs in SGA, 10.1
primary database
backups and, 8.3.2
configuring
on Oracle Real Application Clusters, 1.1.1
single-instance, 1.1.1
datafiles
adding, 9.3.1
defined, 1.1.1
failover and, 8.1
gap resolution, 1.7
initialization parameters
and physical standby database, 3.2.3
monitoring events on, 9.5
network connections
avoiding network hangs, 15
handling network timeouts, 15
Oracle Real Application Clusters and
setting up, D.1.1
preparing for
physical standby database creation, 3.1
prerequisite conditions for
logical standby database creation, 4.1
redo transport services on, 1.2.1
reducing workload on, 1.7
switchover, 8.1.3
tablespaces
adding, 9.3.1
primary databases
ARCHIVELOG mode, 2.3.2
software requirements, 2.3.2
primary key columns
logged with supplemental logging, 4.2.3.2, 10.1.1.4
primary role, 1.1.1
processes
CJQ0, A.4.2
DBSNMP, A.4.2
preventing switchover, A.4.2
QMN0, A.4.2
SQL Apply architecture, 10.1, 10.4.1
protection modes
maximum availability mode, 1.4
maximum performance, 5.1
maximum performance mode, 1.4
maximum protection mode, 1.4
monitoring, 9.5
setting on a primary database, 5.2

Q

QMN0 process, A.4.2
queries
offloading on the standby database, 1.7

R

READER process, 10.1
read-only operations, 1.2.2
physical standby databases and, 9.2
real-time apply
defined, 7.2.1
overview of log apply services, 1.2
starting, 7.3.1
on logical standby, 7.4.1
starting on logical standby databases, 7.4.1
starting on physical standby databases, 7.3.1
stopping
on logical standby, 7.4.2
on physical standby databases, 9.1.2
tracing data with LOG_ARCHIVE_TRACE initialization parameter, F.2
real-time query feature, 9.2
and Oracle Active Data Guard, 9.2, 9.2.1
configuring apply lag tolerance, 9.2.1.2
forcing Redo Apply synchronization, 9.2.1.3
monitoring apply lag, 9.2.1.1
restrictions, 9.2.1.4
using, 9.2.1
RECORD_UNSUPPORTED_OPERATIONS
example, 10.5.1
RECOVER MANAGED STANDBY DATABASE CANCEL clause
aborting, 4.2.4.1
RECOVER MANAGED STANDBY DATABASE clause
canceling the DELAY control option, 7.2.2
of ALTER DATABASE, 3.2.6, 4.2.5, 7.3.1, 16.1, 16.1, 16.1, 16.1
background process, 7.3.1
controlling Redo Apply, 7.3.1, 11.8.2
foreground session, 7.3.1
overriding the delay interval, 7.2.2
starting real time apply, 7.3.1
RECOVER TO LOGICAL STANDBY clause
converting a physical standby database to a logical standby database, 4.2.4.1
recovering
from errors, A.10.1
logical standby databases, 10.6.5
physical standby databases
after an OPEN RESETLOGS, 9.4, 9.4
through resetlogs, 9.4, 10.6.5
Recovery Manager
characteristics complementary to Data Guard, 1.6
commands
DUPLICATE, E.2.1
standby database
creating, E.2.1
LOG_FILE_NAME_CONVERT initialization parameter, E.2.2.4
preparing using RMAN, E.2.2
re-creating
a table on a logical standby database, 10.5.5
Redo Apply
defined, 1.2.2, 7.1
flashing back after failover, 13.2.1
starting, 3.2.6, 7.3.1
stopping, 9.1.2
technology, 1.2.2
tuning the log apply rate, 9.6
redo data
applying
through Redo Apply technology, 1.2.2
through SQL Apply technology, 1.2.2
to standby database, 7.1
to standby databases, 1.1.2
applying during conversion of a physical standby database to a logical standby database, 4.2.4.1
archiving on the standby system, 1.2.2, 7.1
building a dictionary in, 4.2.3.2
cascading, 6.3
manually transferring, 2.3.2
transmitting, 1.1.2, 1.2.1
redo gaps, 6.4.3
manual resolution, 6.4.3.1
reducing resolution time, 6.4.3
redo log files
delaying application, 7.2.2
redo logs
automatic application on physical standby databases, 7.3.1
update standby database tables, 1.7
redo transport services, 6
archive destinations
alternate, A.2
re-archiving to failed destinations, 15
authenticating sessions
using a password file, 6.2.1.2
using SSL, 6.2.1.1
configuring, 6.2
configuring security, 6.2.1
defined, 1.2.1
gap detection, 6.4.3
handling archive failures, 15
monitoring status, 6.4.1
network
tuning, A.7
protection modes
maximum availability mode, 1.4
maximum performance mode, 1.4
maximum protection mode, 1.4
receiving redo data, 6.2.3
sending redo data, 6.2.2
synchronous and asynchronous disk I/O, 15
wait events, 6.4.4
REGISTER LOGFILE clause
of ALTER DATABASE, 16.1, A.4.1
REGISTER LOGICAL LOGFILE clause
of ALTER DATABASE, 8.3.2
registering
archived redo log files
during failover, 8.3.2
RELY constraint
creating, 4.1.2
remote file server process (RFS)
log writer process and, 7.2.1
RENAME FILE clause
of ALTER DATABASE, A.1.1, A.1.1
renaming
datafiles
on the primary database, 9.3.4
setting the STANDBY_FILE_MANAGEMENT parameter, 9.3.4
REOPEN attribute, 15, 15
reporting operations
configuring, 1.1.3
offloading on the standby database, 1.7
performing on a logical standby database, 1.1.2
requirements
of a rolling upgrade, 12.2
restart considerations
SQL Apply, 10.1.1.3
resynchronizing
logical standby databases with a new branch of redo, 10.6.5
physical standby databases with a new branch of redo, 9.4, 9.4
retrieving
missing archived redo log files, 1.2.1, 1.7
RMAN
incremental backups, 11.10
rolling forward physical standby databases, 11.10
RMAN BACKUP INCREMENTAL FROM SCN command, 11.10
RMAN backups
accessibility in Data Guard environment, 11.1.3
association in Data Guard environment, 11.1.2
interchangeability in Data Guard environment, 11.1.1
role management services
defined, 8
role transition triggers, 8.1.5
DB_ROLE_CHANGE system event, 8.1.5
role transitions, 1.2.3, 8.1
choosing a type of, 8.1.1
defined, 1.2.3
flashing back the databases after, 8.4
logical standby database and, 8.3
monitoring, 9.5
physical standby databases and, 8.2
reversals, 1.2.3, 8.1
role-based destinations
setting, 15
rollback
after switchover failures, A.4.5
rolling upgrade
software requirements, 2.3.2
rolling upgrades
benefits, 12.1
patch set releases, 2.3.2
requirements, 12.2
setting the COMPATIBLE initialization parameter, 12.2, 12.5, 12.5
unsupported data types and storage attributes, 12.4
use of KEEP IDENTITY clause, 4.2.4.1
ROWID data types
in logical standby databases, C.1.2

S

scenarios
recovering
after NOLOGGING is specified, 13.4
schemas
identical to primary database, 1.1.2
SCN
using for incremental backups, 11.10
sequences
unsupported on logical standby databases, C.10
SERVICE attribute, 15
SET STANDBY DATABASE clause
of ALTER DATABASE, 16.1
of ALTER DATABASE, 8.1.4, 16.1
shutting down
physical standby database, 9.1.2
simulating
standby database environment, 2.3.2
skip handler
setting up on a logical standby database, 10.5.3
SKIP procedure
of DBMS_LOGSTDBY, A.6
SKIP_ERROR procedure
of the DBMS_LOGSTDBY package, A.3
SKIP_TRANSACTION procedure
of DBMS_LOGSTDBY, A.6
snapshot standby databases, 1.1.2
managing, 9.7
software requirements, 2.3.2
rolling upgrades, 2.3.2, 2.3.2
Spatial data types
in logical standby databases, C.1.2
SQL Apply, 7.4.2, 10.1.1.2
after an OPEN RESETLOGS, 10.6.5
ANALYZER process, 10.1
APPLIER process, 10.1
applying CREATE TABLE AS SELECT (CTAS) statements, 10.1.1.5
applying DDL transactions, 10.1.1.5, 10.1.1.5
applying DML transactions, 10.1.1.4
architecture, 10.1, 10.4.1
BUILDER process, 10.1
COORDINATOR process, 10.1
defined, 1.2.2, 7.1
deleting archived redo log files, 10.4.2
parallel DML (PDML) transactions, 10.1.1.3, 10.1.1.4
performing a rolling upgrade, 12.1
PREPARER process, 10.1
READER process, 10.1
requirements for rolling upgrades, 12.2
restart considerations, 10.1.1.3
rolling upgrades, 2.3.2
starting
real-time apply, 7.4.1
stopping
real-time apply, 7.4.2
support for DDL statements, C
support for PL/SQL supplied packages, C.9.1
supported data types, C.1.1
transaction size considerations, 10.1.1.1
unsupported data types, C.1.2
unsupported PL/SQL supplied packages, C.9.2
viewing current activity, 10.1
of processes, 10.1
what to do if it stops, A.6
SQL sessions
causing switchover failures, A.4.2
SQL statements
executing on logical standby databases, 1.1.2, 1.2.2
skipping on logical standby databases, C.11
standby database
creating logical, 4
standby databases
about creating using RMAN, E.2.1
apply services on, 7.1
applying redo data on, 7
applying redo log files on, 1.2.2, 1.7
ARCn processes using multiple network connections, 15
configuring, 1.1
maximum number of, 2
on Oracle Real Application Clusters, 1.1.2, D.1
on remote locations, 1.1.3
single-instance, 1.1.2
creating, 1.1.2, 3
checklist of tasks, 4.2
directory structure considerations, 2.4
if primary uses ASM or OMF, 13.5
on remote host with same directory structure, E.3
with a time lag, 7.2.2
defined, 2.1
failover
preparing for, 8.1.4
failover to, 8.1.4
LOG_FILE_NAME_CONVERT initialization parameter, E.2.2.4
operational requirements, 2.3, 2.3.2
preparing to use RMAN, E.2.2
recovering through OPEN RESETLOGS, 9.4
resynchronizing with the primary database, 1.7
rolling forward with RMAN incremental backups, 11.10
SET AUXNAME command, E.2.2.4
SET NEWNAME command, E.2.2.4
software requirements, 2.3.2
starting apply services on physical, 7.3.1
See also physical standby databases
standby redo log files
and real-time apply, 7.2.1
standby redo logs
archiving to a fast recovery area, 6.2.3.2.2
archiving to a local file system, 6.2.3.2.3
configuring archival of, 6.2.3.2
creating and managing, 6.2.3.1
standby role, 1.1.2
STANDBY_FILE_MANAGEMENT initialization parameter
when renaming datafiles, 9.3.4
START LOGICAL STANDBY APPLY clause
IMMEDIATE keyword, 7.4.1
of ALTER DATABASE, 4.2.5, 7.4.1, 12.5, A.6
starting
logical standby databases, 4.2.5
physical standby databases, 3.2.6
real-time apply, 7.4.1, 7.4.1
on logical standby databases, 7.4.1, 7.4.1
on physical standby databases, 7.3.1, 7.3.1
Redo Apply, 3.2.6, 7.3.1, 9.1.1
SQL Apply, 4.2.5, 7.4.1
STOP LOGICAL STANDBY APPLY clause
of ALTER DATABASE, 7.4.2, 8.3.2, 16.1
stopping
real-time apply
on logical standby databases, 7.4.2
real-time apply on physical standby databases, 7.3.2
Redo Apply, 7.3.2
SQL Apply, 7.4.2
storage attributes
unsupported during a rolling upgrade, 12.4
streams capture
running on a logical standby, 10.6.6
supplemental logging
setting up to log primary key and unique-index columns, 4.2.3.2, 10.1.1.4
supported data types
for logical standby databases, C, C.12
supported PL/SQL supplied packages, C.9.1
SWITCH LOGFILE clause
of ALTER SYSTEM, 3.2.7
SWITCHOVER_STATUS column
of V$DATABASE view, A.4.1
switchovers, 1.2.3
choosing a target standby database, 8.1.2
defined, 1.2.3, 8.1
displaying history with DBA_LOGSTDBY_HISTORY, 17
fails with ORA-01102, A.4.3
flashing back databases after, 8.4
logical standby databases and, 8.3.1
manual versus automatic, 1.2.3
monitoring, 9.5
no data loss and, 8.1
preparing for, 8.1.3
prevented by
active SQL sessions, A.4.2
CJQ0 process, A.4.2
DBSNMP process, A.4.2
processes, A.4.2
QMN0 process, A.4.2
seeing if the last archived redo log file was transmitted, A.4.1
setting DB_FILE_NAME_CONVERT after, A.4.4
setting LOG_FILE_NAME_CONVERT after, A.4.4
simplifying with Data Guard broker, 1.3, 8
starting over, A.4.5
typical use for, 8.1.3
SYNC attribute, 15
system events
role transitions, 8.1.5
system global area (SGA)
logical change records staged in, 10.1
system resources
efficient utilization of, 1.7

T

tables
logical standby databases
adding on, 10.5.5
re-creating tables on, 10.5.5
unsupported on, C.10
tablespaces
adding
a new datafile, A.10.1.1
to primary database, 9.3.1
monitoring status changes, 9.5
moving between databases, 9.3.3
target standby database
for switchover, 8.1.2
terminating
network connection, 15
text indexes
supported by logical standby databases, C.1.2
throughput
on logical standby databases, 10.1.1.4, 10.1.1.5
time lag
delaying application of archived redo log files, 7.2.2, 15
in standby database, 7.2.2
TIME_COMPUTED column, 8.1.2
TIME_COMPUTED column of the V$DATAGUARD_STATS view, 8.1.2
tnsnames.ora file
redo transport services tuning and, A.7
troubleshooting, A.1.2, A.4.4, A.7
trace files
levels of tracing data, F.2
setting, F.1
tracking real-time apply, F.2
transaction size considerations
SQL Apply, 10.1.1.1
Transparent Data Encryption
support by SQL Apply, C.2
transportable tablespaces
using with a physical standby database, 9.3.3
triggers
handled on a logical standby database, 10.6.3
role transitions, 8.1.5
troubleshooting
if SQL Apply stops, A.6
last redo data was not transmitted, A.4.1
listener.ora file, A.1.2, A.7
logical standby database failures, A.3
processes that prevent switchover, A.4.2
SQL Apply, A.6
switchovers, A.4
active SQL sessions, A.4.2
ORA-01102 message, A.4.3
roll back and start over, A.4.5
tnsnames.ora file, A.1.2, A.4.4, A.7
tuning
log apply rate for Redo Apply, 9.6

U

unique-index columns
logged with supplemental logging, 4.2.3.2, 10.1.1.4
unrecoverable operations, 13.4.2
backing up after, 13.4.3
unsupported data types
during a rolling upgrade, 12.4
unsupported operations
capturing in DBA_LOGSTDBY_EVENTS view, 10.5.1
unsupported PL/SQL supplied packages, C.9.2
upgrading
Oracle Database software, 2.3.2, 12.1, B, B.1
setting the COMPATIBLE initialization parameter, B.4
UROWID data types
in logical standby databases, C.1.2
user-defined data types
in logical standby databases, C.1.2
USING CURRENT LOGFILE clause
starting real time apply, 7.3.1

V

V$ARCHIVE_DEST view, 17, A.1.2
displaying information for all destinations, 17
V$ARCHIVE_DEST_STATUS view, 17
V$ARCHIVE_GAP view, 17
V$ARCHIVED_LOG view, 9.5.1.3, 17, A.4.1
V$DATABASE view, 17
monitoring fast-start failover, 9.5
SWITCHOVER_STATUS column and, A.4.1
V$DATABASE_INCARNATION view, 17
V$DATAFILE view, 13.4.2, 13.4.3, 17
V$DATAGUARD_CONFIG view, 17
listing database names defined with LOG_ARCHIVE_CONFIG, 17
V$DATAGUARD_STATS view, 8.1.2, 17
V$DATAGUARD_STATUS view, 9.5.1.5, 17
V$FS_FAILOVER_STATS view, 17
V$LOG view, 17
V$LOG_HISTORY view, 9.5.1.4, 17
V$LOGFILE view, 17
V$LOGSTDBY_PROCESS view, 10.1, 10.3.4, 10.3.4, 10.4.1, 10.7.3.1, 10.7.3.2, 17
V$LOGSTDBY_PROGRESS view, 10.3.5, 17
RESTART_SCN column, 10.1.1.3
V$LOGSTDBY_STATE view, 8.1.2, 10.3.6, 10.4.1, 17
V$LOGSTDBY_STATS view, 10.1, 10.3.7, 17
failover characteristics, 10.3.3
V$LOGSTDBY_TRANSACTION view, 17
V$MANAGED_STANDBY view, 9.5.1.2, 9.5.1.2, 17
V$REDO_DEST_RESP_HISTOGRAM
using to monitor synchronous redo transport response time, 6.4.2
V$REDO_DEST_RESP_HISTOGRAM view, 17
V$SESSION view, A.4.2
V$STANDBY_EVENT_HISTOGRAM view, 17
V$STANDBY_LOG view, 17
V$THREAD view, 9.5
VALID_FOR attribute, 15
verifying
logical standby databases, 4.2.6
physical standby databases, 3.2.7
versions
upgrading Oracle Database software, 12.1
views
DBA_LOGSTDBY_EVENTS, 10.3.1, 17, A.6
DBA_LOGSTDBY_HISTORY, 17
DBA_LOGSTDBY_LOG, 10.3.2, 17
DBA_LOGSTDBY_NOT_UNIQUE, 17
DBA_LOGSTDBY_PARAMETERS, 17
DBA_LOGSTDBY_SKIP, 17
DBA_LOGSTDBY_SKIP_TRANSACTION, 17
DBA_LOGSTDBY_UNSUPPORTED, 17
GV$INSTANCE, D.3.1
V$ARCHIVE_DEST, 17, A.1.2
V$ARCHIVE_DEST_STATUS, 17
V$ARCHIVE_GAP, 17
V$ARCHIVED_LOG, 9.5.1.3, 17
V$DATABASE, 17
V$DATABASE_INCARNATION, 17
V$DATAFILE, 13.4.2, 13.4.3, 17
V$DATAGUARD_CONFIG, 17
V$DATAGUARD_STATS, 17
V$DATAGUARD_STATUS, 9.5.1.5, 17
V$FS_FAILOVER_STATS, 17
V$LOG, 17
V$LOG_HISTORY, 9.5.1.4, 17
V$LOGFILE, 17
V$LOGSTDBY_PROCESS, 10.1, 10.3.4, 17
V$LOGSTDBY_PROGRESS, 10.3.5, 17
V$LOGSTDBY_STATE, 10.3.6, 17
V$LOGSTDBY_STATS, 10.1, 10.3.7, 17
V$LOGSTDBY_TRANSACTION, 17
V$MANAGED_STANDBY, 9.5.1.2, 9.5.1.2, 17
V$REDO_DEST_RESP_HISTOGRAM, 17
V$SESSION, A.4.2
V$STANDBY_EVENT_HISTOGRAM, 17
V$STANDBY_LOG, 17
V$THREAD, 9.5

W

wait events
for redo transport services, 6.4.4
WAITING FOR DICTIONARY LOGS state, 10.4.1
waiting on gap state, 10.4.1

Z

zero downtime instantiation
logical standby databases, 4.2
E]X ־R#dzy{y~jvH20 l_/ѩet1`)UhU-̀XWMY-Om՟,_y݁Bu tLC2`<S?Oe$^BDAUT4iU@hA!AdUkbIJAe %[A|VP}u~9YSZe[RzY#T2ʟAcҒT&Dex%Q`ehA $@`Xex#*^&jWPmƝs`)&q^^%}qNBTtmf&P>AZ%bbd/USrPgB@U!㥃'$gg&DKB: @{d&Ei6VHAjFRfL%L˂:^ ( lVBip/$A%0,ڌEWv&A:pAȅg[@F),*s8U<&aGeT({^J:'t|.8]|LC:~d) *h|КzZ qVKZh A3$|ZEaA5B 5/T(xg<_"~X2P"pTG~ \ݐn{GArʧ@}n5ЀҔ$g'uV%!Ae*Ѐ&' x@))jz89pARFWxZJ3ٌhx'oѰB[zZXAl / d(( Bȋk ΦA`e8,T%@>i@A>fB:5VV Tl-k$jK>*T˨(,L)PO+|jk±LMCP*8AT00 eyPB:`A gՂ|2W:A (˒@*P—N<Ьj@m1"eC ю_++F%BSO|2Cxljhe 1A(N0Cz%Tg9O/0 f&Qh?%2D®V@<Z"͖oLa* N2~6-JbR?nC Į.Y"4@ PL#8!@0FV4QT34G2:dU`oL00ApA< 8. Ϯ/ү:m門 o5 Ch*@ӟ%zz@Z&te1|&. #J# / 0 }n,ooB%7†(@P@)A:Ȣ)q)P / 3Me!!'"P B$Ss%ې{AԘf tIf&XBL$ܺ1V&3:/ !W!X)s13GT45{-&Cqcis: {s`ff'Yfz;'J;[ U1_%# F3%OV3@D6o2< t ZɦCBC"TFV'8CPtF F+024#/I7.4J3K( K7@ 0HUdޒ~&3`x*z% uE7u0uaSG<3R=7r<2$wV%546& A#Tk 8wAI]ߵikP.^.`#vpbTsU/3VIs{5V3f5LqgZh~ڰ r$ /l#>B=˱z7qw xbotut<3cr3sk3OvJWJ%W5Lkl tDu]*77#!||Y@88`0P"B L\x_4#yqtN";c3mICsOxWWW_xKgx&hfo_.m|`*TB yR/+q3HOc_5dcd>Gw~yXߗOq*9oV9V_8|c( yp:M zrHzVtAJsUYvS&CXw69v:qǶ 9:y cHz#Wy_g9CwKepfofw} \zofsn"(֠g{Ltzc=Gx>':ek{@@C @9{}k&@ ͑'< |K#ϺUC{{Ńy_7krǃ{@ɿ9z9S)[{bHG%0&O'G!R,]0 p%T~1$ya YSY/WR$Tk5$|̬H0Ku`Qi_1F8 $,tD"cI" `U]$آ3{wύ$2atRA`"Qj@q4`04c1Ϝ5s8k[-È}ٕ1I8He^\۟=CfCۙ@;5#{*t4+⋿&HG WpDf[ >< >@s tT"tjQ@2 .` F?z&p@7`y ~B?thؤtЁ x섧+=q:03P9)?T Mb:P$} @xƑ% J7#`@HA P@MD t #Pb "vfB;߫Mpg P6C`}8F- @``V ) \/Xt8@tMqң6]UIk %R)h@Cְ3_W-Qtws gH2{NC8!Dl&(͊Hcd:svq:+>38$NiKg$,@2?T)XdA',0TڠFl&9 ݊FW+5DBz bF4߂e}ѭSe@bYLxxdLGjg{=LKJ}ePPl-X᳤g§.~:n\Ǯ@04~axjɂ iaf!0s ?j-ڤԁ> :T6dh ` HY>(LMТ4`h`H /L $ ,*rR@L  ա , $z*,6n=` !X L@( p ibԢ)@zhתoDKta B$q "aj1,p.ZF+\q !1݀Vϣ   1bsLK q,:Kَ  "۬ϠpX ڀ&15R@K-o-D@V (w!!Q*!qlp~."2$G&"$ҡRB$q$i2d* rpQ! |2(((} r) ))a**2+++r,ɒ,ҝ`-22&KDd@&@&A&&U 0j#1j20 S$&(_ 3)4 #d@#␑d)^OLҐk46C6`H"Nl674D 4V@4>@\H8HJLY` r:U/0 0s11#2'2/243=47X4;4!M5W5E*b6?7u7}8s843 3i&3BQȔ0S11p_3?g3l?w@S@8 8#lz3ABapBE@.a2JC;]12C2D;s=PE4]45S6B?iFs?{3@g 8 Y 4HCB.3 IyI D` JC44FtFߴF64GtGt@}TO:!9@  5Q Q}Dm>; J%UK+<1255=;3?u4YMQ>I5TSNsU{U4VA83kz5Xcp5dr@դ(;U1'CY)3S1Z;SCTUT5Ku[e[sUwUU{8e5B8%`WuIUˀudA^m$>`CUYKYVL VZZKSaaaM6N#['6\+v\1,ax^7]MJvj?$- XK]D/5fI`fW4gtaeaFNG Tԕi4j7jCWZC`tf9fSaZwvm}mSGN .֐Jjo7t=$pOfR_6l7Llg[Tr״gibUUhubK A Pj@tUWR7zWuM|evETfmDm6wvw_ww6x/hn/y_(@zizC ZDAdfkwSquZ7[˗[G9a}wad@7=CĀu{ՀGggݖbnA..4#:8C nCDil |C5m'a+@a ,FDv!`P.^/|x鸎8x9y 8` ./@,K!,y.8a>zaazB*|K4@:yhmq9uyy}9h#`jxg` ^2̉^b@ 19yٹy `)@!R$$ZBb!  $NdS2&ٝ-1:59)3ƴ3) FY>U3CC=ңa:ezimq:uzy}zj7"Ze,l,Ȩ " :a0:r DZf,  ګ=DbP:Jӡzbl s`J:Ju ԡ {Kz,j#/ ([{Ft@.@ Qr[v;O:*  `3`ȺF{ r@ hZۦ.`"ա! ``?|m b;׺3``|"zDBN[ | K{"˄/`[\Z@TMT<|z $u@U` <ަWݭF;5 "<@} 񽩤mу}]+1|ڜ,ފZe0ă`Ł@q/=//s Şzs!v~lҐܼ h9=P.Y|ȉ:aaLDp V{_,ɢl bܺ  zAITrK?U, =-,"ojWeb?Dސ\mg:ڡ۷1ݥK.;򼿭{ >[o* =Ȯ]=[]|'ѫ+qj|n _|??f|D& H@:!H@]t H.A: YD݀BD0ȓ(S\ɲ˗0cʜI͛8sɳϟ@ aŋG0. A2iC)(Ҡ4ѳhӷ]˶۷pi"F sa 7r0`5ѭ!'y˘3k̹gu2i,!0V9RKN G@d#A NC߽Hi &avN<*C][ENӫG|]*fqϿ(hz;PKP]|ڭPKD>>}}}vvv333XXXsss 888ĭͯ,,,yyy///̶tttuuu===oooOOO___<<<---nnn:::JJJ(((777iii555qqqhhh666zzzbbbófffQQQxxx444MMM ***{{{DDDFFFjjjVVVZZZddd+++UUU)))cccLLLwww]]]mmmEEEKKKeeeCCCIII^^^...aaaYYYGGG\\\'''AAA111kkkNNNWWW[[[&&&SSSRRR222BBBHHHlll|||!," H*\ȰÇ#Jd(#3jȱǏ CIɓ(STXʗ0cʜI͛8sflQϟ@ JPF*]ʴӧ2BJիX}!BB֯`ÊKٳhӪ]f Ѐ]A24ж߿_X݅(W!#K %@$r (0`C]4x ަ9RTvi>qr+;w5S(7`'hrBh{p0[kJٕ 54(E%y{⇰ ǰ0dt Ҫ(*2&1a(ڰ*̃rIor+Q0+^-R⻡D!uE) IBn&d4#Шƌ\v]@6V1IFs-᱊{Kb'@*WV$G iHJt$9gP &D$_ \1{&pr7S"RȦ6nzsyR”3XNƽ PABU( Lȃt֯$ծJ%6Z b9hC4)JʨC(L"b*+^\-H]&z܍6U[@mo4 ƥm0_Č ҍrVD %p܇R4Xeje.B66uڻj z`:M~%|O, Ƒb)F o\`r.6xr)yzA&p0< C9bT&CX8\8%&,mc =6Ώ cضkn@ P璭8W,̣QIë!+'a3J! 
-Ԩ>5( ;z泟-@cЅ>"Eͯn 0bU~58\H@P`hD APOX}?*I͘OcFi3lTzi/[@lc#;fbhOڻ&=`; ` \X6f`"ú.\8 pHں³cCͧT MpGiUq3: 7 uIf(V5=};!nIn< dz qc7؀Ixiz LJ  P#| $HqQn~!R!7  `pFG_H$ h'|Q}>a1<51eI8-<;8XxotDNC0B7t ?tIjm` d l'|h~W~~Y~<~'g'xgx_XotfWG~ &p/J`ii0bJpP'00 C{fezI0z`&y%hg~1ȄGiGVZȅ^b8dhglp(CX}wxJWOx78UxY8"`pPP mqXsHu芎tJSч(RXH%A긎xHHd gW6(eҋXǘ،xҨt؊GHȍ 1}Pxɉ(@ƈȌ8B8F}GzȁC'QKCsktG@lDY&H0DДNYJ~8vNȑ(9 y HXחp3 W ~]%(ِ+ eH2y`.ja`B&qAQ[fFFs'ACiECD@jpPU9I"i&),f iYgG Xy$y)ٗȊ1pt%A)M#Ax;@ZC]!IlBٞ5 g jل)ʼnaI|ˉvXg9矖rI!I_)z9-)ٜ :hөMɇ֩K%ϰ2|paGd)!PI%TW gpIBП)w dgyI"ڛ:Ușʜ`wCԹ?$ab5f$]nUn_O ٣ oNHz䰟 Nd:iWʡlڠn `ZjZXo31qEPE f} q0 a 7&}u&5ZrPx<< o@ hJ *zkʠHڪzzڦ {Cvš b ͊xfvkG rp%Q!W p:OP<˳*j|NٔP鈶 [P[\gʯ { ʥ'nYy&9`ƹ[ $A9=EAm`Q"vQ8x]P4p0Xǜ6(b}Ieyѵ z``;TZjk۶ki EdK;"Q;3(Jo0qE lMl "oa!ǸXwyʸm[F @iFpKJд 1 D@rQJW ̋[   bKe0@T P h[evME l nZ! qߗv&ٿ ;[O &=Yu{5^@e ?yL@pM@Év7V"L~а&̻u6F*RHp`Vܹ/I0 pWp tJ@ `Ppsj #HbE!|[vb{Ƌl̿[6  ,Z{p.6?  I :~g Y7 1 Ɓ X LXP 3 P@VJ5 A=B3) ) *@fI ̷fP0 5W@z ;%QlM^@elhk|qJΠ! M  ئ#aʑh`kp0Uu c'MxP*MNҶ 1MK U3ѝU D\u B}UM; M P0Bp_ /\- n Wuƃg# ':Gq8u w 웬V\m@n qZ1 u\Ǝ¸oL,MT12W=۳ Z Q@ r=njXDtţgsPp&P/]˘1M'P GFqp$PWZK@UX9Զ}PBI` {@\] EZX}fr[ӳ#)8G?O \;Z+۽qPܢ\"]ϧmr6 # m7nWI Cꑃi@C?&U~ )\#̆Y X0= ZP F-t> @ӃXf \` #2;בNA4VN<#*${օtfu!شp`*p!^)ޙ=I:J? ;O,3~ ` 0 w"=ߎ@#PPX'  ӼP?_@C] y `Ӄ F0o~i~fI`ٔ8!%0nƲ-n<ٹ76|M /Y{PH/vkMMd 0}b28]`tݔhp YL``Y[Nrp4pu&F . @Pq P,XP@:}AsG!E GNȏVLʒ1eΤ)S\ϟEj%ZtErȁp A X`GD0ab1,P2(xpb :8ⓢy Pܑ&'a!#/fbÇqe̗Lhҥ6xT T9[#gS o6Ԙ& -SP C2ИDU;aĉ/f(JLGuWa-%Ԯ}V R!H",H#4bK$J* ;ɇr!>4D PqE[tqŚV0 *Ԫ*OJ?fK+'`E([& úK0i!6_~ gFsӦ:/=䣯ۏ-JKOq<ӳ*>PB:?&\r.܎LDNlx h‹r(WhQ*+gaCs ǩt\G,!b#z2 8-lW|RL~pD B踃hi6K48 TA; KHrunArШ׈I<4BxTB?65dDS5WSYRS(i:?rKV`6܈ 8 ('6x(`m(?rd--x*\̏ȷR%xcq',{a ;_$ 4&@Jh\oScc~YF֣g%Gia SӤiS.Dȝ-șgFU]HbK(׏'~~6 1B)76D@ l(²7 Mhsެf!)ۊJ 'k,Zܹ$UIț2D$sZ BwCt ;6ÉՅa(Aa<5ϋR*{]~o!)+Η". t0PV+d'Ѕ6lN"PYpY%*@f#b %P,HF$܎@qgH&4 9C` ihC zå V,y-ьhT#8ʑRcM>珁t Y<%rʳ5E5pj&Q$A1%b" ;${Bl<(I6,$pȝ,aռxP\H%O(A>1@MX[ܰ#,e+µRp:-\% '"I1dPZE{=?ЁMBВDthF7QɊe!IFn0B*ȁD#9r gT_ ) a rAcJca wDc i ^ DbD8+^HZ9 7Y[~+_k_'>w<=y>> ;"P;2;; @!^@t Ar,Ll$\@A;RHF']ЅAY0`&544(+OP8+3`\ Je"҈@B Éƶ92CR+EsBk"gF4ٌD3(On0vWaz\>]r ݦ^%"cNE^dFF)~cHJ/qUŬTW`cUZ-^[-eC8ʽaQ؈#0e`QdLJ%ܐ:\AO&NY W&96eTFU\W.uSfի$$`v=>#Xgx&FWqTKYTe)}uf椝:t}mXd%dTHujxgDiAϕギ^y%g5gh6xpFr6ݏ`8ih&j(;)Q Aj.0;KLɂ^iY-_.<@iW^Ax2j X@)L )~8E\jS}iV6q|h@ql$Ŏ^lrŏ0~!XV(P֎dVij%me&Iզ).e찖MDMnj o^l "Pkrޒ\)^$o( X얘v֎Y 0ojAUkѾff0]p0k<H< q@^'^G WqIh #Os_AX ^,3Cn5o_-&qkn1/3&XlneN(8$3?t(rX_,K:UGo80oP`G_ABEtLtNt)y7(G7( Up3߀ZvsPyok?? PV??)#`3r@venZ~h'icMBrf{?YP@*8sx`wx`P@rٞ 튷'~Ewrz'K7x7ԅqpx;mdj^\/"=M Xq ^娧*xׄ[PeOo+z`F%98I0Qh^(T>P.|[oOc{/'d HJݒHK:t;۹uHS((hY8߉}.}k,Gg?SݸƉ)PfD3̏ᰯӊP¾ )C'9BT,P^Ng@1$Tq0@"7~ P‘%{9+p.%1~@ Tbf [bd<Ƚ)mogDR:!qBP8x6( $Wl>Z ~U a dYE,0()>B v'WkA4`iF bWCASpHhƴa*cjyDb1\L% +N#DOBD3*MnDrrYp).իO=Bb >D&UԇI : `}! J@DN2P\ϗ+AVLm2[O q"$\h")A󆚡 V8@љ؈}Z) )ЁJGL:vХ)np٢Bȉ^#XBsČ̓Olh6RacvkszǺE>w9A(U7nxQS=DALdWCO\H\y'gjGWax3x$Br#L8Hgʎ!AV 6W l f>fRu3ܫG ZzMzb}CqoLV1I!3YQy!IGˑ8 eu&aDlDrsg 4X? 
AxK#f@K?3Om&=+ {Nخ^N*u qݪ/3YߓiHQqi ޺YEW¾̨+701L]XS&a1C)~wːIA|@lp"p>>OOOggg999͹ķ}}}±۽ZZZvvv@@@<<<XXXJJJ˭ppp=== 000yyy,,,PPP888``` sss:::***ddd777uuu666---tttmmmnnniiizzzYYY]]]^^^KKK...(((lll444QQQ cccqqq))){{{kkkUUUfffhhh555jjjbbbVVVeeeDDD[[[ FFFGGG\\\+++xxxLLLaaa|||111III&&&CCC222HHHwwwMMMNNNSSS'''WWWRRR333EEETTT~~~###$$$BBB!!!AAA%%%"""!,| 8@Ѽ/#8\qs歾iKrz3W9ѹNuӝd;Nz k{c&")H Pw!y,Vƅ&$p@ *QPa(<.ІbtYD=ь~T'hJCJҁKSxtY5ir Su6iB~ST.ELTB j&* 0 ,u )Y ցyVJֵ5v+So_7]T^paa8<Xhld~PX  !/vxś: w+v7"bQ83xz2 pP;!.Yˢq=Z^oϯ6m `rXXHT 2mXqvK_~OpKe* _8cm*K"@ 8Wm 9w7Ӓ.fޞ b}>ps1E ]VYpvIe.1>&*Z@LuxI69E76 n{*d~#1yǚz/[`}Yߔ#X^I?'xY'r=oRu@*`'罚F #_Rם',J)!;(h\\| D^dc0tpB0`pz6p†%rFyI>P} DH7]a)bbp[ )aFg2qjVZnUvg)چn'饒}ǚkc>P 0Y,pWz> 0&*:0[@|ة;;B@SPaPDap*ij^j8[V)Q |!DW6 "gvkUb@ hc<PeﰐU` c| Sym8 a@:Ч~BZ * "gBJWed2tpz'ʦb3i&H6S> rc*cZIjX-e_j֊S m&1gzZmB*a[`:pb|r&W&( ":a('|Y,Bp ˧SZ6NB ɞ@kI78(&+b #{a ){6Y,j7ˮNeRJF乮A :f6w&.Y'&8`0O;BrF', cP 1*:o$NK'm0թec'pYȑCV5}X6]{&/'KHO;kcSZp 72vKk={bz>9&PjV97qz ([ ?ƼFȒf'D' '/%eڷAU2Dm*Y+K tVB%d襆">~;0 J&/EԴOK 5/Kڶ>[|cf0^~^C0Ŏ*ϐen{gR&v|h!F>0MN Mc-00@/0!` %@c ` Z G r0y<hΧQg#ݒ8`6qp:YO'('*報[pѝ} 68.ڤC^G%OS^A@ atӼ=Ԛ7e< \漎  Ti` Rs.$cI`3יhˈ '_2_՛'E0]G+j@ S npޕߗ.DnHP. @DHp4^>)rJ`Eh~0 PC `~]]F:s黕`o(ߞϮ0U Htj` Ln_nLp0 PPhXkn&jg >` Ϯ&p R^p$.3g22j_:Rz*lm0 { F`>^W/-ot YPـ}}n jgȟʿ̟6@ro৑Z,p BgA&gf|t/8dl(-Ÿ[󩋬Z,j@\PF#XAA0`+h`3`x1"Jt&$($I@*G8 $Z(8N*Sذa1a! 1Ԯe#g ᅇ:d u8\B;7GD@`k8hD twgaV\mlocJ->L OP &w.C)ZĨG"IDK2iyH*2=Q@wAA,J*+&32A0K.;CfI: ;, ?pL5T@l@@<4!cm\0 L 2؂)K/R|؏_,LNk:29b k衈&袌6裐F*餔Vj饘f馜Bwx*b$,AOb`IL_PDZkkkp ^{LXt#+X X* 8#U2I^pB'kXA" Ab!Ç6 T@' )$I٥+: M7N< 5o e/$4 u4/xcd:%W[hbV[[q;wfֺ*"d9jbMIUm[p "xH0DԢB T2^{)7wp<3Ho mݔk fɔWvY K_߹s[mh>h$[˶d9o2!H8$.{d[8N+{`,OP ]QQ"=PTtT2SEdO^Y$` x@VbJ7ӽ0b],vײ? \󝴀'5|(ħ$~> r Pp nh~eg@hw#ΰ+Q#ZG "t {GH)\a Pmқ S+`rw#0aԗSr,f?XS)L[ 01"L]cy Pq( ^[ӸIBp*T$#_<譓n$vDq̓{y30a+Ĕh  ,' @[ QZ jC@ 0K~!\Mq[/f ișs.d IvRϝE#%bRE#:5P ?hrU֫`Dec"(ޚxծC*n?/`YS:Ұ9bQ7]%aQ=!*uD"PY~{AYZ6*kC8 xE @@eo[[KGט7,9؉.;TN3H %?TS S,PPX>, !w=pcX;1!n*$a#(a9mAx&n!Q`s3,ݤ=xTPL-%@E &@V< VY@cj@F|Eɩ d:>+kJo4eM S'=3pf̟'Jz*NC<3VU@DC }w#шX~ G?(b%-jFҨtpCiN.:+joP]ٽQ;`AS>b 8,Ss>]JMv[KK+'`wL``=\P#vT4CN;2@]jͭ.fZ_BWlGek A]Lt#0_6pLA0vA{nW_xG1P#yr;'|4*līIi aH)ys 3'|Ba :cd?\n ,+&< x &{nzOsYXmw3?P*P\(ePI(> H6@7U`$Wj>Pw >5{5)l0uaU*} 5? B)T 3*?d;Pa;??"9o%0@ 0;@x'ЃAxa@'&Bww\T8$A0@DtX,J\>IŸ<<'ѻN2/l]qÂ)2)£22@ hC0Px@xNd4/$P@@3\'4+a@Y*7 #0n4(@8?t/%(H|>M(Ȑ!U@G9[N脴 5A17#BDq =(BC- * ;Vpb)a9"I 4+',k0-+)QS? 0-I,jHp^X` VPQDLx@E:1?>C?k?+H&`ҎQJELj{fyh 9bdLp >r>wHRR;2CYGuQ x(aլ758xX7U `_SW]paqU 0);a<;u)MՖ#՗3U[;J0 @Iu:@Mt{{*X`],uAWfݪ2-[Y)1 IY | 2tY`= &+3w0 ;;"&p> ,V275bmXӰڃ,  UO@ kOp֮:@Tj Z+ۧ[{ٴYyp3!\Q5\Y=z]U`Kt1՟'}l<{bue\Sx 8ځ<]3x 1^z%^0УU y4FxX, 2P0 H0a'XMFEM) NPN$5-H%&?@>b %`ʈ hb(9CM(S`B F6v"- `ξ`owxwIxKނxcpcx"0w UxA@Ax)0wCPPQxRw{U7M`hcLXOp^}M]K٪`(UcE A 9;Hc`c^(i9&Q-~I_|:CE& Dު <@dAxE`G X9PBSgE6 Pe;]f+O`1AWۮ:f.V%<XeVR! O`T^ d\Ng^Qy_a9w:whwX|n(Thx5@XXd<\Ef!ͳQP̓xMhϔf^#ѰMX%[[ˎQ̆H\ -FFU8m9F,̢NցKULgDN`؃=6jHGj hQx44L>hVx)6/@kpuASXf,FiI$@~ G f5Aua;81`&j+P(E;]8B؅lH0N%mZk}4-F4H (a6/؋O~n4ЂϬ/lW.Nx8o`y eO@Zųv3򣥰MhuI6E]*0+3ҠUb /Xs"Po HC^x&C@8K 2):kL;.eT:xB0 e? QP53pr쬳R'K\,rr0G(1'p3ߏS\(L'W x7/4 ( :hs+ԂgP ,+sۻn7 # \Dba7A2V+-YS" 8 ףk3אtUwP㋄nG+ibu.`  eAB_cf5v'Ԫ2؄d}P\HxxJ: n_? yv4+#VHv1+<~_⛟Zvߗ1 p$)*i$zz?љw5  -#^{y{8x @lj x\O<>~WB8O|vx/ gܰٱWUsg/Ϸ L0HP$%H0"e+TiREƻ,2(#'ѱ vac$;N4 $E("PDR; .%t"8pɀpA?ޑ-N$!_ֲm;W@ ,B; V0/= @Ã=PϨR0!ʅ .jX4hh;|0#G$HbĈ%!BXP@G %JXЁ)*T@ J0k ǀ ZY R}Ǘ $hB!JhFs1^*tĂv! 
BG(uL(>`csRPIEUXe5"%Xb|b]Y *1B8 BA -=w;V@I*$a!$Ad VvYfuYhvZjZlv[n[pw\r5Hx ,ƝwGx`G{uB =D]F5 1C `O?e xTz8UUW+(Br gDDE 0W]xQz^,5eYfyhjlنn)q!sAׄ!8~& h߹^i"h U^AA*ߤY_~-O.g* fQbI'3HSlB+-X &@d:.#Ad d\ vi/ɯ)0py6,,I'@y$ۢ'r|W}6x; ;@_6I%Y􇹊4% xu!CК0-Z[{;:(CtޮYe_+efkqVp3!,]u1Be r\(1F}Q|(eK/ @:AhRv>O <J @;b2hZpyHq $F4"s*0$$nY]t0L: N7H4 n@kZx#BjP21=*s4Y*(E&Lf{@thLE)Z_ËA-I!BEwa)^Xx7qokSaQ 5HtE,sGXM.= \q);EI5T)qޢ%# V<,dI"-a\LI)I)2lb+F>- }^I)@,&`x_L$swdfGJd`1ޡks0<F;Ȫ G*r" >PCF0 M DPsMV $j5'^F{  p @5H?*p_v.m}a*UafPW3[JzGǁc#\I5'PBK:@V5z$ 5,fYMyRZ8'[N0*]k*bmVtʷEzkvna:irc[SX`:Nc,@nx 2@L;$lF[EkZ-ʥ EN "^ZGB0B~sZSST+`Mc 988YNV=NP r3g/ƊqtR;|h^c݃u[Q!lNQ23! '@;z/IGQpC0F)ȝ1;R j@;' X1ԀistjHKнһX e~fD w9.LX&Cef!PAO ENWLiS-}s[ ub! Ǣ\]&Q |ǚ6(ގ|[8&?SǓhu5`ed%@}D^ьp%P8]fF^Mr[3&mn`QJfX 02sIT1a]S~`rQTw5f$7֞lzp%P-WAZAw,CL{5zy٥6-kÇv RXHt#f=p>PpA/(k5-Y0/-\7Pw#.@B" `3~ D6,F7E(] ##̍!qLA2C({%4b`tAZTYcc ,xY00Y6AL~(ȁ$A!X($XcH,:$ Id5@ B&H7YA.h%a,IquQR`Zl`(4<3I89 #0lv`YB6#pe4%pd h#T)%Yi6Cj:jAkzl&(B!$.GDmYLmV-^zwEDؖ~DْقuI---ۊmmݚmުm ߖH8u.(n~V+-Ab||.nz~nvꖮ.<욮Үn.. /GnG */2oG8oBH/34O43s22F3?BGs@1;@#tAKA@B3B tCS45[3H3|GH?3>Is>?4 8{frG,ÝT^WwH`dx,*yt2-<ā0@\9y[@;A;āAyya۸YTqÁ9< zGc9:]1`\:Te2-ck>{+>W~_g~c}Cu'>]>ue;``*Ś׿vл3rt+ D,JyuG؀̜2ŨL!I+x5h@N? e  '3hAx8lKpb * H"  vdH#I4yeJ+Y|'%wbִyEx LР* `pfQ(8(؀m$T Scɖ5{-ȗidA@`.Q DE':F-](jAB ; {sf*x!%-gn6m僑 .G'~σ:O7~|(g4r#El m4%&L]e=k16M~5>%L`2ܲA]Ҡ3AĸOl 1x0q$C S4QJ\MI NX^L)v>)hɲahkg P } (1TTi([cUH-k.,2pgÂX |q[ > HHB!` DrL ^@XB v8Ft p w@\.│q N_hp1;.,&  a SQG;x\°8A7}PacG>nI|ԚH4F~r$Uh=M$ A 80*!ǼܥOQA ¹zrS 54`*`;0E\@Іmei )Dw\9pkD@Nwq tC)!wΐT1H@#%萀w:J9BEs'AW$611HrZ&A0t9#Cg, [N4V %3@Q>u ' r0$)u: auӖ"dXwXgRJeV@A9L"U$C\Z3C]Xw$@ zM 18aܤv|lB9]xK6A([ + #zPCѾ6!jNn)PuzA ":02eVH_" JZ{1ŵzրM;.@u㾃j1T`r-IQ;B H9V:ls~n$]R̒LWDլ{ZRYf*{YCr=ɹu#N1H&(hA ht~Q$³YN 8nCL*$Ó,'#NcPckǦy)!:O3^9m:"Y*4& 8pf$}|G Ccec`Gf302Ți All*:ʢ`W^@),7CgA)qa;Y4M.u-l#4y{XNˊ 9eM)UpJ*Ձ[O b0qI@ˠt+5EPJLRsNf*T^MpzoDF@Ȃ ҍ%09&9ەC)~k% Rdl7 @X ulIs UK~%uB MkA)h3P$:GlP$1F:'pb; iVzF%`Om; CXw:~G@lpG:;rrXXm.X p [P9Na $A9pWB/ċiF>_8ܝ $=EZL? 9ϣZ]%:/ {`@`ap|! € @  J| < FKOKpڀ 6:"gfc>⵴D'/@ /%oPfHF  ab!D\a 'NhϝB0Ki ^ȋʀ%[."BF\湒c,J ԀZ_/0 G ,@2:RL P % O'`Z4.0@`)J q'l[;!T`($, L$$t ,Jc @؈ ,-H ߡ@ oKq 0W_1g p w6`オ $%@`d z~ > Ca@@>/:c/ r^'H#7tGwJޡ[7l6 vmbwTcicgoZ]!϶p{r',aFSC@{BKlOz`p˗JaK$IL`? jb8p;L@ EvI-SQYA^;a3f @NԑX;r$%I(rW`vXb͠JFsD&PIfuYhvZjZl#fnpWeőn grIg'$ ,Qgvq7PX J'hHm"|%DiOHA08TQG-ȔSPILxY>`!,F!gG﨑H-(a&B CALhfjl m|8}̵mLp#'gwaDp7 |1LJpDUqII Q*ʝJ`{a( p;Hf"rd@+(ۈ^P㵈 ;NQTr' 0f.[c9B "U v. (Q~7Px? (GQE$Ngr,z~0Ma%Ya hL;^t`8Bb6RB7N1#`;D mߒm6fq AXe^V7L8gӡM/F{,*p($s,,;sS1=nvC *w!sFܥe[!K NA{h`I `p"t x̂<< C;2V@eHY:aY+H爯%@Rԃ>x+g/ J4AKҼ QxFs1r\:$ a-A*xA8D1%/LZB F8BJ3(4U y %U4dPf^@ @#.`AHCJ`K#8 E Ё x@eY43gYpp |i4d+]<đ܎73x` mCPt2P`@(XJ,s_d? /K2fc&4!L/9*.|@4D&BLK,A M C!aDKxvRI]W2%4HBE,bqIBE&AOpЩ&Gщ AI jPCHLpQ\ A%9)h@ILf(3%iȈMK`ꔧ$A2%55D]QԦk1EFDW.:m~>Jw?@@W`&@: ep4Њy#ʩB6,N˲lcUS>V@)a30&npZ.!Mkjkޜ-mۂքRroܸo^_zuK`pJPB';_%8*v! 
(@{^7(DP}kXwxK;ڠ,8~ޑi:YSq,>uFkm[H0\!c$8P %A fTr$'D) M;te>LsAhBD9J\V i :.0f.N0n@#'LP":="qp$ZY-W:έĒ>>U\.[ <0w*k.XgP.jA E@K lK쬰!v C(pw4pœ0N;5zE75M0._ wđqS\_o#@62;!9yHN\tR@I9 |9ܾ #P Jp>;̝ ӣ&(D!)z&Me_ bG &~{>(AƧ)~o~'"*P_j쒟|r^YQJMvb38{  t ;0pN 'u' P A0 px P|av"ivqwt}1}*0]'(h"+ߧwdj4s d R U Հ@ _aX c @FpjXk -Jçwb.y5$#dpfaa3+C`4ɢ @=& ̡hv!hG?|'wA ,p>W 'e(H`` Z XT 15 c ` X' l 'sr9)XyX{ c IC Y K:> V` I0)"R|` Al-p& )oՂ|Eb82pyA+ƶwM㋁5 ;w"4W "r(qHJTrH Oo`xJ0YLqKQASZ9E`=eZ?L5TZTLUaN#fhpk*i\4|ȕ+%w8A)DiGI6)NE%&Et[啰eFKcye i1`0`kYZ MriTt[w#5"P*#y||h-IXPIxe$!$U5&!5R%p!_"7.1 T[pSqYTMvTǙ՗.(˗5iw.Y Pc=G81:) p%0uh^c3WYyYi_e>8ʥ t 0 zp G` u r M jp׹a9&p(vɒ-!(`q5[L@K/jkZF#[@_ga `;/`tf,u0, x=dc@V֜~ ֦l'p6Aħ9#6A 3> 7 Ь1) >9IdE*8p`&v÷gꨏYyei*E|` 9 F1@ p!P 90oz@z= I4%  P z=j VPʂhǫI%{rJ`860b@`c@zq@؊շ{W}K4?` +c = ұе5磪chSlq;  M`,z #;\d ЎK+;++6+q@ h;;:.[q  760>jJPIH_DN6|)d$P4,dAYk{+m prK10 T]F gQk1X | 2f$3 .\+Q+!}/օep2NJ0Yцu@ SBEf<# I4D2nn{ vt$aYGN& w&ppp'SYmLքV ^ >imQm߮!%+Z]Q4h -` 79 )FHnw hR 1u5A.-e*4C@%>=a}[n3io:la-aey f Bc⾃j14;9HMXzǃ 8p`,0f|OH%M$b ,0#BXPD )*T4 A D IFp}躂'^ŚUV X$&DpE+h0 xs' 1sI"#@ :XIF =&ȡ wnHhң ,Њ8PZ $q){«W,z'O2`Q`.9 = dM4pppW!x)M\9\G6d;0@t'*\d9 0C Ej@%\!,b-./H C +&2d =`  H@0@2+xH:ЍĶ2 . .'[lQ;*`"l"?rǁ(Mڀ ecBdX⨅R1`!#B>3TQM"wNd,j뭸뮼c,lMs 2 $DwH Y%*ߩFX]K/SZ̈́\kTތs X?l ajGf:`ePt&E-(3zb 8TLEa%-Uc]VϔF6W[%@$^pPy Jb!aK[2,Mw]ndWܹ0g @kfJKx BkᛁeOUYuqO1eɈ'c%fy1`#q64z~[@y˹ۑUZr̙QO jw&V3trI| x ?%Ќw x2ȱ=x؅݀9\i} C&$c*dƅ$A_-!C U F-|-C`D "1Ѕ*Tp$R(p,!ˁA*VQT !xca3lhU<% D4`b7,a igz6~ iP8 |@`"aZCR$$DHB`8l#E9#V`aZBRSd,YHB2P '4$ ^@,`3@0HTs chd@'0 `zDg:K 7h3`izC+tӛD (Ik9cjm+h@_i@Nf?G#:Rb+ Xg?[g-) H9jP:ԠtDEjRٝ&5iB:UVժVvի_jV:Vb5kZպVuTkTH2u$t]E׆ƒzk\K v+bXxR(c YHv?eYhmc#+ʒ6,jC;Ѳ=-lS+ %mrk[H|[%LE.jHW#ans+&$uu+ծtkn+B{ _}%_w%/Ƿ=p~`W-[ܑH$ps#< _IHH'F1bU(1_>>&&& III TTT!!!ccc!,c H*\ȰÇ#JHŋ3jȱǏ CIɓ(S\ɲ˗0cʜI͔x Jэ[JLDcRA#ad GÊBt9Y etm#Vu-J\J߿/6G_w ,*u.'P*2<#<:z͊`-c}E\AH[gd>(^K9^μ9ǥ~uJq/r6ROWZ}*\K:$G Io5_p]]ePxgUjE( !B !n`eR<(~on%\XBVPK T@ ?dZ9,b B"yYQZkyE]oݷ*~yGRFU݁ڔfDۙRUe]G'YZR 楘UR]w@|ũfr* U`YzGhg[UGX馀l*'fjC%_Sr*P)tҩ?Ztqs'XȭPUt%"vkh:+"3UTvW}N~ D7s_[D0Pi%_}D1Pw8@m T4hrXmaf:HR+0p ݙ-EDGS_K[>2+TpTg1@epjŃ $8^+uIaGG0U `]@}R7ByCRYym=nUn]?&o-4=R箻T|/R3SG/oUWbwRHE-^>d[iDADI@2_H@/KVR3qDbV/A4 Bz8KT 38Hd?$,CXnD dσ8!5G 1=*"ԣ)l{q '9g`hZ dž@@F9@u5Ԭlgb[NI#XA˶o2Hv-l)XY7=n {L0B~c h ^f׻ps$G5`x<>x49KotHA0?"c|NysVn& !@wt`;:y{[# AzEzҗsڞR~;<~ b{~ j_{_.ۯw!aR<ܝyXۯ:Ax^W|Xۯ鑏܁Ay/Xϸ31 (x" KHpdlns&NI`0 &`[(9؁^ae@H5aEl=@AphbHF&eXSb p/ "0\T7Q [c0!hhHX!5`a eHpЇ+f\H T` `*p(h؁c؋b(˜؆Hɸ8# L7 0@,0P @ X8؅TsHHkȌyI ɐ IIYz)+I.1ُY ِYHnД& QSy(48ٕ; >9AiYkT~q)suْwi8Yi[^ɓad)gɑFI)(61ƈW5y\_ٓb e9yH`w(iY0y{})ym`O əYͩi 集y و#1XY闹ٛ)T Pax)y)IhYىg*s*[GHɟɗٙʞzlف "c$Z+i2j !z ygh ӉY؉igz TBZ ԙYMږ=e?ڦ'.zZ𙘨yLȦڥjyvJ7x }g:$QZ1u:6ʤږL*iih\r 3j}iZ&tū+JjHwکځpZ 5ڪJxف}Pj#Z(cWwgڒХ|JbZD U $0 'Zʆ Ō0pEP)'P|[Jur@ q0qYʒ$ xcp  R`|Kq ӰG@QM%h @ V г xX GG[IkE#  KxS`[۵` I  _ d@*O(F;PqP[[ O_73Y i@id9d fx`X`EK;R u OK% țo ۵ K\@ٛk@ns;9f "<$\+UR.U۫0u<̛;@I,`d0>+f*Vj,1Z`vzfMڭ*`3m׌ z9ڰ\[T`0 O., 0 KAr@^sN[8 p,N0dhP&7p!X\ ` Vc۽p k}@o(*[# } ͐ `Fs H&83 q  D)GO_ K[P' @ָbж5%P  !>w_5_ 0Pp ЊP 0֝/'Og Hh//օ٬- ,@7 69 0R )L6t_׾//w_]u$XA .dC%NXq4j `G!E,1I)OTK1e~2iIAng4Qκ)ox{5]}}SԌ 6+)*##304 "=tL#'JZJhb DSZtE%&`@Q; !N>r32$Ȋy":tCJTdF."{I 9QL&{";"T4tHkBB*lSHt`*KBG/UtQŒL/Й 0A)q):.LK5|l'8 !]UBsEPF{KGa"s$&%&,5+#E +m6;G"d- Y]uWtӵ/ؗ {wޘU^|c^~=|x}5`ڌZсvXxb~bb;Cy=6YbGVO %@SHky ЖWY]~驼J2.PziNh$#ӵ{ju$&J$@'"sV{m׎A ,(l@:k{zԖ59W|qI‰96B pݻo̿ܯ <O+C 7c3LJtuW,XO ʰL y.[QLϭ0~{?1*gz]E$t~~ ' ^8}/# 3`$P d`@FHGP? I0-$Q@Fz1A\BP3a m(Cm `07$b `|)d"cVD(FQSbzC,nAL(6QiyxF*V!Hb$,s H2Q{CxF@.. 
:Q?!$!d\<K\d'}HMRaj %M6'`g4'] PEHOysw|K`S+ɓX2$.OA>f5yMl *}Sԅ1 (AQw.4 }h>#*Q420C)<?h;AQ%IQjFv TE_*<!dhjP9m%Opt@:p\T F=*Ht 1pj:Q*WOJŠvC#iβN E) PcCXFV+]{ջj2{݀ $tU~@h|A58lb: SejFqZ`I]jJ╁8K"^IRzMn{8`rD(V1Aq7sWFnv]*̻mxw wDy†28!g ۤ A3Id! /tK1D^,+^<@A Hp/tش Ip(QxELbF׵cpÊS1Ȁ @GB@1pGPZ6qC "@ 2鼽:{::KF?16s9> s-0p@{L@›> ۾;K;X(u@%K%Ę&L:(•P@@þ#;ýk+;824@` $Q5l6=BӾ@E830u8$ѻPt`F̗Gヺ9 ;??N?I;&S|\ETWLXPIE)D?EC@/ l+&Xd4DCcPgCx1l>m,Կ.D@D;b8SDfLwGGz(G+EM~BO DDŽHH[HEtpȇFݢCF G`A?Bɛ488Ȑdő$.1ILI. S&Ɗ?(>@X`FPtɞLʉs>9*PtxǛ%6(t |0HF09 K$˲lo*J:J>Xx%A;3XH4t˼V8(*EEH@IB4̟D0dL;Ā>p$y8,HhkhQͺI6 2E#WHCI#^MKl ,NAԛ@DPx˸ 7@^[#L=0.HtuK4rN+2,HOݔ7<pܘO2CNzS0txK l"$(PW;thjHZKKX2 MHLtPbQ3-Q] MQU1H0tK%Hl K^3RD#OKxF܌R*+U,t>CH,P001 5; P5}]}7 E4 t;қ73?ɈI ԡT-UYUZU[U\U[eP< C7"K0QJyӺ!@4X*3<U׊]VZ*PUV5 W%TD@0N052Q[;LWI4"pMuK^>B=POy+0$Tr-W8WteLEC KHJGG?eX،-ʍmJ}_t G$nuuHp@y$$6P!pńXb&VG-]Zmp_e%?FydaB6u@BEnGvƀ#N@< 89!H6e}`,aNG`_ uf傘a~`bFcdNGfT?N[ Xc!b0S.,i[][M (n߃xg#\$nzg~^}N5&q>}`ata 89 g'fhM.hfONhFYoNci.i8iYLuXiYti:C c f  e@~gd0tH*nIL&ݧٛ>hE;OHn8S F>é&j^F⮞ǯa`"j@^E(_&kgHKH->_f[;DPĻkPl"<^.F>>&lͶ"*Xbiθ2ʼn`~nF#ǝM~kbPU VaFd2J*%hʃthx>q.f~o*@tbdk^..mRhH-o~pv'4V$=|6kf!DHVfݓ])TMw:VMp|$b;6bSq.R(ok$'qT`(h*OG|cWs^S~)2{55}x~9$}""(RyUW!p|їN7XnDFPfr&$M:$Q6MLbX,a\9Œa(GL 1b0 $M^改`%!tډgSgu Je\E',) d䢍6)N(ӑE"٧Ew$| Ez7N٫/ruIQGFIEEp1lu۩3GBGihP;]Iŷע"h4:uF:ӥSj!餫NsIѾPPбI-_A ֞pe;NWA\kitQu๫NłEt1ױZ[mнi|TA:IEtc \|Lk!Q0"EAm"uI#g\:APq6:_}Cfurꘐ6 Q\|Q-1;-REr ĦCoyTGPH]Ҕ4>1푫SxU R%[^ W|%ɁV:)$WޚˉI x5͕;]tNFCl-"Wm;5U}V  ȬyjS c+C7dY\4q h릒$h! ZPCh9QhB笐_:_鈡*RxPa!xW\wӡá.[L8(U _B!BTr{HH 4lfD7;IVIN|lII-heR-2{ `y7 S6fVARȲ`!:j} 4r7 <%4K#ȷ&;"!ʘ[3 *^etq|=b%^+Y(vmc/Nh`WBְ=R*TB,jc7ֵ/-^a+ݎj\q[(QT;PU-rX2}.vBVטݕsw5o9+~Dk]׽y^p /,>03N#_ꗝ03Z0/ތ8&>7Kl.>·6&cN@41l1N]~,%3Y(.-dtШM2Y*7h,dT3VKs1<)+EC=5v "P~C\:3h2O3lR.l㡍46eo~6-iS־6ms~}o>7Mn, h[-yӻ7@ 4`[-@_f/?bxB>#N_ &I8T&B)T k sC )h9t` җ<է)!ܓYlIHn;.ӽv;wגC"{ 3<+!1"1"s` _ K) n: V!%!-aZ!*_F `aaaD_ơ!ja!a! ڡ"!&" Fƞ*J $! Vb$ ALZ:8t$H$Nd `;NXJD^Ez >^FA(@N6MXcE40 $"V` dEt`SX;@dDeRRRXM*TNAPr VF r(W2%]fS*eZ2ba8VcX&fd ^pc%&` LAE dj$ AEZ*Z"["fj&A*@Te&%(#$o.gAJk6hʦc"d& bsnNf:'f&w@RwNlTmm&'}ƀk@lx&}' A:(ta~fb~~ҟiAno'?grg n(Zh=5AhnC0_$(s@<tbbug:''x''$_:@h L(,F5y@:nv*wN  LRvJA!ߐ*.'^^*h**,_EZ聩:fV*fi:"h#`d_E)fAiKAΥX*+m8u2j% tbkj+rkAxkd~Obc:(:廦'Xߩ@ +&MA|B&k`N_:l*$dAz_FtlrlA*k "k,Q:te@ǒ.ZfBj@t9BErboCĀѦõ&m*Am* gb, 8DȭfDvĖ l i^췆k(A֭* n: 1JwB)|.⦃2m1Bl,(ce4xԀ-7 L|/BEA"fYB*D&.ެr+j@ @,4Hnn:dC7PV2Z:-o 8@oEW.w*@́ /,'4..R84@H9l$wdE CX0OG4pzw pA#BR:GrATZ<-# n :nư @d@` l3\!A:ҤHkЊdB:HTD$q 'z62:Nұ1C`B2jF3BF$NƀHB"AYV+& D$q%o:pBqkΎW(1Ѐx:062*ffX#% \r$O/' kmqo(33C3:p@ 46485PB&8 2[2sjp+(߳3C4@  A<KkKGA) D@ 89K2%%& ,u,ss44CH 4,JKGMMN@OB3C C'E3^°Fwt4SuHtIC |<@4 8GL@ HNOs9CPs0#;sR{rSSuTӀ>StI@`a#bc;6dKvYuZPo̾p]gtiGHt@`a'6,vc?vddO_96D{D{a1io92ױiKU5kkClO7mWmcwn't9w/ApWp5Ew3^rsvtvm[dv[vPC+q:u~RC)S,i#8Tj[ukwlK7uu5O~fC1o2^t_37` 6S8v[xe5fk8[5x3F7jO{_5|?wK}8 7ZxCw[8˵+{#|x}xSvg8q'C'.9rv8G8}O}˸YB/A&2x#w7xsy_Cyw8A"$zR!;z=wjkW9k:wz:'¨8Κ::^:#c:;Gyϭ:;':~87߮/$zqru{z :ϭ6CzGǻ9/95QG4.9 38w:紸?|zQ/mGqryonޛoS8}{~:wϭ,=Fps=O:Ǿ;}޳8-cqc~{#~?y;f{l fo_+='y|,-,4E:,@waB 6dx"]\lؠAC vd< 2ΝKÃ74A@& B$@a ^a‚ 8P@8~[ĤCcbf@6ֵun\sֵ{w@(t[QQ#G E<2 )^ʤiN> %AQJ_45ujիYfIG``b `٫!đ0B)ZDǐ#KL˘3k̩ngϟA=tiӧQS[Ūk3Ljw@kַܼDy~\8,#[1lʨ83D#4Ԩ25\C 8 .mLP$n.*P1D"$Tb%,.363R)j늉s10<`>ے.!4lĒc![.N3: ʐIk-d̲8@PM NH%J-tj"8\GvN /tBZmU^}+hb46KRA3H ¤0|JC?;t88\L@FqMv}^uشekd3TQN #N:2<%kč!ކn_%-0d Tƞ}S2i =S=}I@ enxљmnw^d<^PӗG~$UU,e(y^-kSO6`GS4vm>:gNCPc NhRG ( @^U),Fy*6Vk[ꈫ<1YkwxPH9N l!$V@wo[x뮗B1"&5y#p~]i^6'bqUP$ANI{O 9./^f9XA' M/0"PfI珑сua3Uұ vі! 2 jV:V!b`3y"3,@.1y8q@t@:a@(XHD0tDi ӵA~YѨH ,!MX1$) 1 ?Du@ x5b Ǩ AT ,^@t4Bq*.Ҍ]W;Y^Ҹ^ `HRt$FJ-KN`1 !`)xb$ҕJVQ¢Y`E]_LL, /1Xΐ. 
)p.JR I#.t@H``A ~xҁfz(K. (ZQ^hE'A/&F%`AґNWl&u찇(#|@.c i]Y6T6g0YM-rq yVJQmn? 5*4#@E:4(̓ AM .]!(Np0CWQHP2J,0B20S&\42ign\{{k @2(J,,RŊc1@*qFUժê w,_UL8Qt!T˶ +Mmy(DHH@WQ B#wPl 8nC`(^:DH,LAzAvQ2X*!y6hQAB)H,\uXK-%\4"~#H7 xHLx \H@ axH/)H)Gu1`!PĮPh'7Fix!O0  #Ņ]t4 ʪ(={,D .WL4֡qfH%xs\sE!x$? WH O'W0t" I1 lK*rXP웎,ĠE> N&6BLg4lB<0a B0lhpxg:,1H!|IHqi1ۡA PSj [&FkLG ^` 6`: O2BhzA\fwMsM, 0W)jsQe`^Í~E,"hVbf-ġH1Hfv* :;C|bA~q̺k ZK(38! EIj-KX"e/v@D4Psr_ pӱB<+^vg{,7}Wɮ\f"|2|:J!}_Gʗw=Z73'MWj_[ϦG{( =fe(|;~,X/ 蔯Fz(F`I<-Ko, B !Bc`u(lR,NO)zyP!| PPhXpT{ (j0Po . msnmĭObgc0!cPT kq0=p/6p b e#b~% /O O 'Lr7kB_nH&0V\1$ԴNƢ" kHZl/5P@b PQh0 Qo貰8q!F Q߰ndbecpvg/ONO.옱11 ?qcx Kql R .@q"p"#ב# $א'$/z&%[_!,hE'cOEB|WQAT[M""D/ 0 uܜrMq*G|B`TRb.1,/ G1'r1tO]Ԯ((+"6b!R0g>R3@`(2-,b3 !$ 2oZB3$Gr*"q5//r7r$S;Y8)1933rՓ--H@#2[&==?Gs28*s>]r:2;{4s/Ns" %sΑS84 uZ3srR< tEEEs'AtFHsQGG7::T4A?M7&EI75Yw4P[-!gW CyZ['[Vh 4A5?4]OA`>@^5~LMCE\Z_[5`[cE`t-,ZaU1`ac9>@q_A_5eKse#>^VccvR1H  sp6*vg%SWCA hi[M !7 !9d^ h`n(eJ@`I6=ls]Ԗm@00-4n;>VIMP)7o Ap_@h/5Ur3e+1' 6 `ss4tT= DJiOW`v lhm?mqwIu蘠q{Wnǖul[l rTWz?w.V-d UuUux[WD WpQasS|7Evuq)}'7rM s`fnwhn+6:\/{WuMeV&b.\{$Vp) tcXwpi8a餕L'ߏlal.R.XI+x8! a bN4H; d dؐa،l`xy @?yx{8vT 9Y !Rt@ ΃_;`(,a` @`sA9 } F`PA` liW( AwIbqB=ِyV9!> Z !P 2z `F A~A $[6WbJa~ aPS3 ja:XaMK@V(ΠH#o6HKQY c›}yfL +J:ɚ\!`HIT S}!JT8"*`F[Jc)`(DXs-[!:ﺪ @³?`L[(RZ-B&xV3d4H':Tb{ض;`@5N;2k"w[;ϛ|D̿#Ly\/|!<1eC;5(E6OI5 @z <ōv9 Mǩ$gX@,BB}uɟ9v@ b t` -"˃!@ \!\,8o| @ !ȼLł<ə< \zO ͕`N N"[=0<̭@\,  >] qwq <YP}Ǣ??ɼ(] p} (}Ǚ t p ̃ ٍЍKا Ӂҧ|̗xg =ڙ yANoС=!f}bޙ=!u u^i=u@};]Ţ=W毀M> ɧcʯq!p>ؔã "^z=Yם p] >?9߇_ޟ b K9 x U%Ot?y";0n%L=QV=aveQjq`UtzhuMuGA4W{Yo :!:M,A:,Tqh8j#Tw%mCǙh_%[:F(!HEnIRiDeEriV:O8ydfNލfnU^wݔFMQqS9M":_1NiVi\aVIcCVC\S:W$eU*qrZ2:\/(6Tpf M&l¦ɓOo.,Rq9gMKڴKfK;#<*h䖋LȪfγN.Myo2[:Lp). )/ q_f~ LqTw.Lx *,. b̙QLsf`y#̳R!纱B]B#tQDZ=OMTH Mom3qth@@AsrRS OVc"hvP"`-6'@xx/x?yO Gsq=z袏N:g@8z뮿{N{Ԝ{|.||"?}w_=3_~'-~c= {~N{>c@ݏj N hJp/Hu p=+ p$,/'~v  _p4 oA })hdaRa "'O KP6qnLn\r/ `d Wɰ-3 hR#2fhs7 ps gxLdfR4O9 xʓ9ˣ(ۉ]Rie>9K v䧳9ЄN mh G:tߺ'E)ZPGu(C;:Ќjt(#?jtA)iJҗs,GeϘ┖4Kw@u=eO@"ՉEQ?BMSJ?bՀUU>]5WNJ@鬩QκV5++?J5:5+2KX 63*-6s,&!KYM6ֳ,!1m6,AKZ63mQޭ6saK6³-q5m[vSuʭq+6u ]8%׺ӫ.wk߅wfW轘5YB_}+~e.3<+x n6ߣ4  / u x$.Ox,n_ c>(g x<cኾq5F'nE2z,ec6Ͳ,Ŭ>2KF3]6̃u_:7^gyo7> wȊ7?/LX&2qPzԤ.q%JkxլnvZ7~oھ.ޡ6&`{M(#;HFCn鳧]MlغQ-o _rFw{=vc7ɂ\z]~8wEqY.דն#kUGr LnGz.߅o|/wW̃;ּ_vΕsfݜ?'nЗ5=Gu=GydJ}vWm۔tv__}ؕdUcv=ogm3w/' %+~cy~w=uOmvUY_vƟOvK劏d|G/G:=Yo8ـצA6?4_ڶw@&|Pay/F8m}!8@AxwApQigM_)@PtfTCA@ g'H@q6_5h7x8Xgmf@eTvm@f{H'{IvߑȶNwvi7ZWOQkYvMxe\^i`hv[x7'eFGalvVX؆6yk]/x{jl(Fhiw?l8hxmX~X)@Ȋ芯(HhH 苿(hLjɨȌ(Hh׈٨ȍ荚 MV Px|5⸍B+Dn"@ 9q3 P㈎Q9 S&D&I5I+@+ ܣ &@+Y81ɐibɞ8 I9@9D+ ]y򨒵:?q0ɔ 9P㨐I*#ڏ$ZQ9p GI# !AʏP&z 霁) 099@qY¹#Yx Y:[ڥ \C x s:) 4z 4 pI6vڣgJ+90~z  َ $*79 6ygyCC@Ii?jJiIIiUIq B**y6٦Ix!I!z@(2)DAB@xڬ"Z Gɏbɑ:CQ*(Zzz@o۬tj(ZƚE IA+$y+!I )k <jZU2'*1کBJ{E?A:G)+DD鰎X6$3B3IUk "Cy?G0ɑ);JFˑji)xZVy V`˯1KJJ PIKA196 Rk4]  }k˓Z)P;BP㪔ҦXvy:+PZzjJ99 Kٞ+Kط쪲g K'j۹@TۼL+ͩY[90KQY^i*?AP ˱Gi G9٦) i;ڑכ!i9ܢ1]XB) 1oAQD!NOJ<̭ۑ2L  3 99AL:ha|AC0Ø9,9I(Q3  LS+mڭi P~jL+CДh ʓV 0B7)ʏ@PɴȑLB)B爭I:yA7ٕ< ټ, )7+ʮ<t˻ȱwjχ#YʜE }ɜE\;k=.+̢u+& .*«f%O',Gob+PӅɗflɜ[GݙȚ㨬8=R'͘iJڗu܍07쳡zJAJ4+j :ՄtI+11ٶc2h˓Nٸ0*2:ܪMZJFڟIjiɫy ڬȷ _yK(@ۘxkk;*J+LFTژz\ĩ[ɏ z^)!K\ WGaϋ=5)IyoM|O몸y.[@Ҥz ʷ+o( [>I#yuɭJ *|z[+Ω Kͫld<ޣK|[{h9Y8ZƸN? Vl+@ lĪ zܬ+6(/yʒ ]Y̲Lь:C.9<8i+iz\=}:b9=]ȱ.;PKWak\kPKD>>}}}۱;;;___ooo///봴ʹǺƷgggvvvyyyOOO¼<<zuשwƾbÃ'=E"ӛO~y×Ϟ~{뻇|ǟ}؟!B6߃E.azda^n^|XىYmu 4h8<-(h2iH&LdwMHڌQViXfYi`)dihlp)tib@c 硈&z$B(tDEj)0FOv) 4Rűi< A, QJR@*Z~EFFc ʊ)njb O{rʆI$1m&kJjT!DZi[F [,:эscA X*Rȅ R4x0,@[#-IE2d0M6Rtd(A IQ4e*QIU4;YR"d$.o\q%/mLL2f:J  j*) a=DZ1'7p* Mhqzn:TҩI? =63*gh ;3I \*@/zs* eCeEPQ "CM;@}#L"ă?0)a B5t!Kի%5)Z 7u4h@D"! 
BT2UTu5 NxгGQQ`-HJ4 hfGxElfYH p)*kY@*ek@$ T$i\ VEk<&zm[ QkTgr6T(`?n>_2(4Ȩ?_F7Fװ?8¢ .MhW:5lw"o~@"X @|"HNr+AǑ@d%{EfHI    'K6 wAf qMsiH0;HHFh*ѐ5%T!@|'l2kh0 nw-jI!RE3hmk\ԼN}mi$9"&1Zik; )9A`tTQ/+>_Ѳz*mMNi֕vU0{+(w̃'yVmCJ-ѹJhv>$ ȶ̏p| y/*dZVVRpXU3DXG2~6QYz;x]^3fxH[? ~G Zi{A>bN'wz POn=MTwC;G;UjZ$Uoz mbfn wdr UfmKUS%TFTERUTLhTs3z7&@ YY|y(   SJ%v Quu : t@iNuG]P b(jBIF% @b%fir#6y8AWN4`k$ J[pEGorIJfQY Z@wzyUSdUtH#gY!?4pkn i`ڰ @P@ `F P0`PfIY IrZCWB~49)@plTPiv`U@,YYyd(HƹdT`鹞9t29PZEO7c`k p) `+)96z8:8o>H#z&2Ut$O&&Q'@ǠfisMFiB %z B`yY]TС|ڧ~= ?Vi@ jzmqLHv4`PyTZ-JZt s`)^T@`ak>ar0B03iHp*.t0 j)OQ9)Ǡګ zX$EY@4XQة`u ,$0z H@dF`adef)v ZP%`ڨlڞn ʢsJzNJ)9 ۀ$[$~@[ {(3WAJ8Pp r_G D { `+ \૞j7:'9Q˳X ym+iokY\{~+kثkދ !|BPL@t0d0.L 5ZV%Ĭ{[ܽߛT\5f&4k;@h){Pn\X@ k { |gz=7;ă̼ LOR EY ;ʼ;ɜܟ< :L͆ P\b6ʁ,ˁKG|{W\ `DgǼ,9G<ʠmjBBk \ȶUL  ]& j Gm , ZЦ!=#mMȵķȿ\!l'[PR=T-ՃSu 07$9-{FlIιPS\VHJ@M*]@/Սm:u,(Va> ]ˊC̹ H[y{ =x=+±ܜ<*Z֝9L;t7j) _ @2  Vjm^yҡݟ 2 .;Wt]{@"PP5@K<t %~.PF--n=)3NMm@vΣ8;gm= &lÈ: m_ )İ Tij` MpW U RTKZBsޱ(:g[ЦMUޝr]Vm~-`k8i;`kخ9b%.I2f'^@ e PR 8N ŋwɍx`Ä >l!B B4hB=6IebL*[B3xNJ5T B ENJnK ^B)&m TQ)`E @) $- o;/hID ["3F@(l0Ft4 +0C8ABC;Q(/M hPFy +; l @.R%=JM /$`A8I#Pv*31fUW ETQjQw)Jw*#DSN $A] xRp(QrV_R@/ Dʼn%W&lMc 1 "$/62j㎹doE.qMH2L 8 x-x~ e JžOGZEQ<_iA^*i)D GL8#>X;({vɦ+M*  +:Spg xx|@-4F ecI%x7/Y`C>6 PTu7v F&mkC`=e1@i ( +q $ q`e(\ X~A4Jhq)$ 2sp L@"Bg=ن:aP:@BhD# ., KlCvO;apB2#qaW&"p7@,.LA7ՂBK\B-'M!֋J]j[7ђTv\#rxԱ("PA|"Mh #ZeDa b0jh0 , L I@kYՃtl l;p!d`V)[K(9Ιy)`]Dԑ*& J/]~JBjE+L5QM!7u8Y'-G_0%ݖUS3{}g,eт׹]oO[JAł.ƹ/] P&ߞxz$z ]!mFZ0vG`˻<Њhl@S b(R\ ó?y1w"c5%;p)I7Www^ 94YCzJ59X{s\FM+T:Yˇ07W̾#32KTkwu ,VsG;5A;\9KXZֳ/ŌH[p$.̼c4_wI?f#LV&PЭz;?gYBܭDq^ T`+ a w@_}-4zGlY:M/3 ͟KIp@i=m{瞞;WoG[댽 D^:t ? 'DpWQۏb 0|҆d};9JsBC x:i]eR#n*!+8@-8C"F 8?d9/ " C0;;<?;Bxʂ0A8̸s\;{xK-,@$ &"47#x@1;ck>d=hcgx.lk@ 4:!@ 2C@Z-,%|A3HVe%eUguM~R-L>#O _ hb=VeUVQhVgUh݄i݊m-.F?5]] HH*r:?t -X-0l1֢؂=-e[jXނ {|-}0ԕ 05x}(p5%YY} ̓剔\Չ~ڝxڝZڜZ!2Z [M hl[x([`.۠ۿZ[5 Í \h\}ܝŵ\ɝ\Pɍ۝\ -\=ݜH uݿ]]5E]֍\- Ε$U)܉]^ h^xޜ^ ^^u^x^^^ˉ^h_}__ ]^_]^(J___ ``V`5ޜP_>^ F2R Y $`S\]Y> >/=a;cJ}>$ؾ0G nz56X^0^` 6.!bK&v'_b(b,bb/F>h-21&0c4Nc82n5Vc47 7n-:c8=cc68c=d>56>DFcED. EhFޖG^d2xKvdFNOdJQL6TN2uehYZY~e\eZ[^a]vbc`VfFefH\fYxfWA 30anjqi.8gmfgsVtwu#hNfh~ffeg}~ecgi&FVvf{hznopFkhw葶hr.Vi=@@A6d?Ή&隦C雎BFjT.eRNMSΉH~njVUMkj갞&0^c~kkkk kkH2.l 6nl6`0`vlˮe2lN2` l.YlX6mئΘ3Nڟn%2Xb0 8F nnωl1x0 X1`_4Ή4֞8o > ho}mlI'l( g*莇0H`6484 m4`h͉3 noqBI žmx`383 4Ѿf* !nAm^m3V@3(r `ffbpr(s) . 6(s.mx(x~nʞn4I7sHG~ɮl08$%>8X mC' PGtYPt~oX6P1xmFE0Ⱦx@lc d׉x`2ukFnvn2" PnB^'wu7.):_wx($"6x_nz_p|wG9pw7 _ oJ&'GeH\x߷6xWy!GWwyYF\yYfIyz x0zWgwzh|6h}n'Oym y7yz'GW{d7{wy7'Gy,z?O_o7v|6gw'G|o}{5'|ط}_}7G&~|&+x0nΨrwniSn~7a/G{mF8Ԩg0~ ^nɮvI,h „ 2l!Ĉ'R7r#Ȑ"F3D49%̘2g"&Μ:w,ɀx!Ƅ9a~M8gŒ /)PyrաM_ǒ-KɔaB\(t gD`2 2I L %di0a3n8aI˄;0^2„I U Z. 
5Ԫ.^5 xz<6 nƻ/I.s2Ϡa3gܺ9p uwe/HhͿ%Q]:wz`Pfe[o6P^{W`uW3!_uTaPEO*-EP@%UV%5Ћ(҄3Յ޸ct5cL9f$X$p`7dQ"vUV$`%^Q%ayؗcy&1&m њo9qylݹ'XgBj'd pPj>HP>OMZ)F:z1駒r jAj*jjBj<jAJFAbkA+Ak [kΪl@D$J .DC,ZKPO[{민+ppFCpNlP^[q,r1A $l2(ǣr,lP-:z3]F:o?r;mKyt65|ВM>9(ѐHH=QH(1M5tJp3$AD<(VDBݰݸHda"0v0p̡R -$8/Q\HN,g76*:Њ8ax : d4^ ea #niC` Ó0:qJ6K٥EId H:toZH%W搬9 C\&7+Y~L!t4۹KXMj)ALI1e!C%ALq3&լC.o^,7:Lys PɄ=a"I3uKFahwG1C`'НD%ZEK0rE(4 < B+CP+؍7Ci&V3>!2y/Ɯ<\4 Y(B= W3 3dkyXDaπ~D- Dɓt1/[d4 MA{![y !pnCBY.f샟8 2y3Oht}w0~68\A҅*T@Rãym#SS1LAW"`};;> }=MC` auaoK2tcx Bx`[fěLV #d<p-D TX6sxiP2 1o3wp$cmQCBBoƆ| T:ǃN T;ٛ;Do(Bsx kPyl /Bǻ9ChPe9A7d^/s8D}{W$pqx} b;r)"yx=7Dl~|_jzc˿>'RZ|CVLanj$p[σ{phd<_C->1ِZaOG ` 5_yA`)͕!>Qdq`LdiWxɐv}UDV Q<  C0|!LH(_2 !p VO-|}]˰ۍV!K1ĀS`aG!ALWZݽdx)D VAU;Ջ)1D:5_(t]+B@aEN"ٱ -B̀B_%JT NE)"0 d;@C.R!L "8Bb#a6@BS*6j Q+\D>ijcPi#(n<gx) | H$ 1DC"((4HAi( } rI%j6PB1x+@6$0`aNРW@VfSƒl ***@h@l@@ 0B)¨֧B:Ķ @R+>*)n$@2 ؑL X&Hx<%P߿*dSt,<@jhVfv6F봖(hq'd:,2 @A ( F-NrJ@Es͌DUBKL&2|c-F( |,ٚ-ꘪ홲þRbrǂlގ,쟢lDfmhA ..+@&7~( y0A<|@xC8h :ذ!A-&L V,yB<$tIQMV,Ym7paN,iDJ+["  )]sdc `wp~Z;6`pөW~:H#FODapm'PD"U)TTb+",R-.ҋ/0HJl-L2 (2lLۑK HH@ oj**:+8[pk lK,>4 &D$> X<-Z{-j-.]8UqGR W^T^fvXF8"޸$xhu XZkw;s4Z36?:S@= nHjE/~a .qF|& U׹E>3`~/\^4 A oxPC;.؀b xf0 xz/^jmb܊Xٍ `o p(+pc2m-Ẁ!@p`XMǧ X<H/-ҶW5Bb S "W7^Zl[ Wծ< 0A. & Pnm0\ņo| *TE9$efiC cM:fB\f3iV[Xls 'x2̄ 7 MhCw$>-rIօqv9H +]y]eg@C"%Jv Q A %&(E/laSzIhӆދ1<Aȝ=Ӂ PPЄ;'"֟Hh\r:ڹ'I x ӆ}5}E*>x_ajA*}lns/YQ+@ ȴa<¶-Go[G`d&uzhb%\s+C1h '> cD#0+æv"4`9u26*. f %ZM:G~m.;iZMw %8 W , ;ť8 mlgw^@qD: Z˜Ż Gsf`CRl]B $®y.^=MK^oN4@=v% h`|Xs" 0&6pJZ /JA C \6)"(ZX*?o@8^ƼmkP _:`龤jh؎n/ZB~~( r/)6p;ZA @e΃Jrz #'ˬa"2%^   `A~ ba6A a TA~,1t@TRn - Of y Y" +`~o ! Xk QeP=pM M2  @ἠxd `j6`AANad7꘰KNFQ70q%p$ UְٰHj@`k- Sh› [V*:ڀ8ppl  x` $!fꠒ>`6@@"V f.[k##S AFD1H' @$ l0IF`(A2S33 A &'H vY*'r%M'jJZ))D j!|cŨjh+{@I |a,I^8G//5rʦ }QB1`'x ix`%`2ų2㜃:&?kA?d@<! t 4 /8BH NePx m`gԑ1 x,`*s0 a}A^ʯHa ! @:, y/ 7~ S'A~@ =K 22 MANN5APt@8!۸H`V` C-:7CKCaDC:Q 2aE}! 6d4\A C,@!:*Trg `2RsJ0E6j%${5BZ,"`r ڨæ /.% 04 -G 6S ђ  "| Ȁ ^|  _xpb*A,!6@ 7D 41ÜI/0JS`GR>} εZj]]G @_dRIM "7iY5`7U55`Sw ,@: 6 D$!% F ^h`I_##nJsV&7JqNjMb&KdL=ބzN~.].Cn01^fsq`;4ZiB0' [>!ugyhqKOr$s w&̱qI?tv]Av'^ &0Rf0W0JqxqBXW%7y8} 8qs rah뙞 MAy_g@4%cb]k'{K}y|/4@Sg7;Jk;[ iYx[eL'%U"pTYqs@ I ~ :u,evidsm5/mP]o9~W8w9e" ƱtƏW"eW1wq {ԀR+ۆz+r0:oK-a@`'X5 {1/|;ثIj5w1ycbޮ=8qY$ z ۰ ߜ '[cwL.Y6S-EشӸOEBr8fYo7ovwaB%x z; Ŵ'$Ǔ7!9tvg;Xa0 c7|0SY+7ocײ]Ƶm0E75t!z|S;[7%g=ڙm}+Ӈsq19rT`ޏAz.d/b$űg`q.C{IۡۛƩ)zdv  oO>M T ؼ.N A]=}A["í 7͵cz`äya~w?  @&>:u|BQ|}I8{.5}a_Cf:W>Ӂ0$p#aZ _] Z*` __x!$q`*@2KQ?H>%Lm0~"D.4hСdž Jh1a TXb;^0a#F$K<9xAqA»lpCBQ%KatbKʑlL!F{9Os$Am꒔:Ԩf5OjU֠ek֩u_D%!6I=dV=lh[ǦvlP7Ք mo7mtiwimo|ӻ$~o{;&I{G0+^['3S\H>|+/9Kcx*O90l{:}[t!^σ'KGtG ĥ^[[z׳~|29Yns/{njO;v}r:K#d?' n)71xC$o<%oyT>曏<=/yO^|EtgWYzc^m_{~^G}{^_"Q5曯ZΗ>>hhh|||vvvNNN???ˇ888:::sssħƭQQQ777zzz̶OOO{{{iiiqqqǬŴ^^^...EEE}}}xxxzTfc`NKKe}]`]\GEF)%&cP~~|kih_]]611{zy]ZXZXX[ZYE?=caaJEC|nfdcURQyyw]\[JHHECD.**ytiUUUmkj;65v{uп!, HA#fz(\ȰÇHŋ3jȑ C:4Sɓ(S\ɲ%J3| I͛8m0ǥϟ@3>*ѣ7}ӧPN,BQXjʕ F[ O]Ӫ:ٷp x˷o^Th}K!˸^L >ϠCU?SGѰc{.cO?ɪs&ߐײΌz!UKLzٙ7A3_/|; =~}g`w iHY FJ`W!abلڅ~ouX> \$/x2H^<'%(թA" +XbT%Rfez- $_e]&EVz&`pOigFr HjZ'兀d &3 dGA0z 0`(S^ 7&(>0M)m*Av ">4ꎿy*\1H>5L@!EcB$␏;4<[߸_ rJgns:nr:x#tV!+v2/YvMdwƃ ŝ"oM!do܁9I,%׏*8]1,$k%tqFPU`=|cXUL`#S!d-cWMMum>z׭!m}*(p\wH|so7 ! 
P8PO 5xͭ$^M8< D"<1C=f"ݣoH"$X8PC7d{s{Zh6  |B/ a8x'xcNXU].s?3CX:AnvpSNnwc`Ct-yϋ^?]/{7>o"Ǿ 2ۗ=y97I"&PCBY7Ȟ."#3 F&dp: tG>jяW#80 ᕰ,gIWkK,P^4p[!SEX f1f3b3MAd4b<624A.UGm3+$o:u*^>'<_ӄz힚VW\;WDheęCw@?6|,FQj%-K4iZ`JRJ5)N}M>R*唨/Z .զMLK}jT!U0UlU]%Wg*^f}*U)UQq+P WuieZ]Z.p9L[*@e bŢX2֋hA";Y=cfi6v*Uhh8wʹmJjZJvăml2|ֶN3uۖmlŋr$wl&q޾p \R$'vפ]rV=wYWMy(Uo{2ŗLxؖ8WXF|_8#fߤvE2fq0PհvVjS[AsڪJ&oswݘ0RNuyd6(EzVidXP LmH7pݳ$=pYSl?4lIN[2LFd[ &,P[ҖU  (@n9M@!ݍ\A šDA)B,b]/n۰K;./ @"S')+!Ϻ u >z_[^Z;D`ps`d=ߴR}u]p׉ⳆZ1v7kQq L=ݒ:=O?5żZ4 D!JWw_ '.8?p;+>Slc\s`AxYփzЗY6ߓ}7y}ާj5#FEu{ts`wXEP0Ee&`fq6'y]|v7t}{;~}XG[8Y:HуeEb=0r uqw|yJCE(48YH@zny5^Ws䂅I[hU]W`.FSF_j&T@nolzRaxEOs65=rW|10Ga<~P~熳GZq!eE:+=Wߴ{padVG&pa3r!Thw m]u4`rA0g=18N0qpJuMeP '5Y R`h7H7[~Jx{p}nr+p&ypa ySVB3dfftA|VE0Qτ_ mӈFƉw03ᒈ`1 m0_iL7Wьؕr=Vu=~XSC{ap5FscҨf9{02t!qepwSy!P~)_ҙIUc%2 8CctgTyp4>`aQLgGzpCuq6hng_oy{%xHo] }" Wߙ[6Y\[I!ɞJ}𹞧՞9ɒbc9YY!z5eu i"yOzzT9)/]e.¢Y1^& M4T6z&T;*T=75Db:J9="++- B"jDe5R_?ڤY*J%>Ƥ|ɥHF EnzOb#rJu Nw 6# So_JzjjwbAdDTGbMw in&i>cZhƋ"RL0( *&8AgUwz Z&"(! r꺮ڮʮO@lFz93T0r:0VkbhU[ˮA #R(7I` {+:V$!"%7T w8O p_PY 隳Hw0ƪK!cnR;T[V{XRbcf#I;+x2PdƊu z|۷~@O`p۸#Џ@[#`*G!{p?0[{Ƀl; V/2J GK`,%{țNs "o0++=ˠƫ[˼A1`p۾rֻ!'[Wk;K p<\g m [bcJ E`G0qI0 y0P = p 'P9  @(`.@*Rкo=#0Z[ H!<%|)-1<5|9#?C\G*pAAY ]"KȾ;.`e "L&*. 2L6:> BLFJ|ȼ("L{@Ƥpgjm p~P _>InpMNm03d0'-jU> @I_~ +ѡ=NrNjH %JԺ(l>pm[@¾rO>>0p *0` {WO$&~-l>W4NoN׽UPJnh` :%L&̘b&r@d~lB,R$Bȓ'u{%I?1eΤYM9o//Wh%:t AljUQč~h$H7)6!D-bԸG,!$r$-_A $$EgKEBTУIy0CRLEt҈$fB4PXpA@??RT R5K>:*֬V3̚5G)S Nx1ƎC,y2W^_>O@5T)S*j:N; ;s . /C0S1 20ӌ3@4PS5`6p#qg-39l@А:c“0B#D"$TG4L~ jJ*I|@'Hp 20IÔbT4;,{,*,:,J;-Z{-jQxSt(3($9nn;*kJ R=/ s%IabQfKS?6{SVi*:ܵAO-/)C6DUWDT=QR+mS7SC͑TOQ3Yu#Z&mu(=#.=1[Qv]qzvMo9AAIJB@wr =t;VtDL4EJYEMeFPquGS} `3.`X v~2W)yWrdi%#tv[kӿ.d;SVϖ;fAx{v}Fq-u19Eձۅ:H8IɭkņyX/>֟CipVܻ۾# q㢹.HQ qzߤ-iUГZw*~u=byme; xBvD+/=^F\v~9Q >+ёOkwP> }3Vڒ)Y۴B?i`  x9 d!g gQN_KÜ=jjũn[A|^K_^²YbGpDuPZ#S;ě'Ds5!F#}=5n^DܽV}1-s"ץiTBdVUSh=-i:8@CR(kNxDRP`"YE \j#)8@* #  YMA $ ` BA#^ NX})8|ZcYgY\Q>H `"[#EH@s _8E>؃$3=&|X!haC4B  h0%0@yƕ(b]`o}j\$ pu (k'\ gX0 bwIӄ@=ט, >̀1s*aU S V4$`p0?U  xC)2H/,l6m0 +hƚ8˙vPЂvE0H$cm }K*y_LHX9%>z QRߥ  f8A𝵺7 q&4ۑ2/f@j_UE,Aw!|9{đû=o\7yൾa(Cf>jXX6皽_Ǫ"p`$U*0:{{@8AGd@0,0;a7a |{/yoA <6C c肖*C]f`g9 2h]$Rn 7" |VeTg h@" 6@'>?~w 嗱/x  @sjF`b=L禄?0\(8 T}#w dsGUA`{|;?Wk2Fp-;;L@\@\ 0P/8Sd? ڬ.+*( E (G ˏ ,?Ux @ p*+4A 0A8sԺl@0 C1'ԯ !R&'كZ#؂MG3҅X-8AzGA'RRPW:Ax9dG `Cb >)c./CQŖ렗;C/B&*P&CI^# Bj%rڝzĹ1U6;" Ug|FhFiFjFk R .$0V42(4[Ś0D:*_|aEbU~=H N8ȃ<7 Xc٠ `PPK@N H$"Fg)U(xA`1"ljRbZEv q)ɵr4'!z!F@;J :h(8~X B`&hCH`h\`YW0)`4H LBFGͲIO&s%ݪEX]1L,F=JL:|$X̘xJ~}(R]xz83SSH&l@@K2[ { Wr(L4 3~7SPJdXXh2ȃ?LEx00M7$hxr̂N6NS$8\2x M:Xv  Pͮʰ,4KTKĄ˹˻J=ЕT| pBk¼C0Z7@90`0##U:?ŕ:VR7L#MRR& 3)R->-P'\ ,B,kADMTE]B5T*իF5'X1U¬ %Q#HMGR09xQ!YR¨' ϳK)44T8Q-#TeVBE0([5]U>e"@ERLW|@fl=^sTw *0׾Q9kY0(Y˿pWhC=%b(|ۃ~[@U@00WpX(*H$.@uB(ZVZIE[ `2s $ >{ɝ\ʭ\˽\\\ 7@ژSFpcp V# T`o`c HՁ 8K)[3 ]EݰL5׍)fU7^-˛}I_]_m_}__M_ !u.8 C]zzP TX7dX$%^,]L]7ZX(uk0F`. 3_¹,!y _aa_pgb)PUtEH9^ #SðB%_?@Ǵ2``<8IE('$VbabtI)*bƋ<'aSt&4$j*GU ,/4`w !$8tEkPVXL#6x.cJ!KLe @s-R^ff>HTdVOzE-~᪡IJY{ͨ$YhKKff~RvfらU6BɘEZ޾~no#p$cydts&f~hPVfxh眀悮|F}+֑ش%poA^TxģyLhʚr;Dw&cTiNT>KW_ v#.fsf[8 i6qBk<*gM$mjބhVNk^knk~k.&(렾Ane*{T(P`^g_&gLX(#0ɁDP=Hllmm..)L\\zFhqLhN#*8lIqoji|Lg(#x(h&qqqq&pn8P׃44é gO16q.ض6Bg/"P!Op77g%= (vgb.e!^G.n"`n=tLtMtNtOtPtp쉘 %0?W.(CrD7qfz$G5f(\2#dOve_vfovgvhd7}PV/rk07(#p0%=]u`/r2bSӼnmw@mr950}X>iAwnt`GgBх @#h~7M%߸I{m&wZ-X0`x<\epE_z? y#XTY([LWGjAZ1Ѓuurg<ȅbr=ȃe &H#"(]hIO羚g[`zox~moy7gVMz8Tvjc`MP%{{TtuXKς;/ksy|08yvDT0Z ͗2TEyN0IxS@Yh]0k fo𕏴xɿ=,%:t AljUQč~hXg 0kL.lL1„(P,Yb_6m! 
)40g 1pk)ԨRRj5?M@HDyo =_=lKvN N1v=;f@㬔=a^%Shѥ\Lq$;baD3jq 5#KAf0eҴs'/?5( R4b/n8" ܄P'VqcǏ!WD3k̹tR"3@cAP&L@ W`0H?RQ" S!>5>Lz@K 2m~(A 8@+q d&; z/b 9ȁ Ơ5\)@%5 0 HD()(# sz-D P-D(B$^I3W%HA @zB 'vCMR@Pz΅҉ |C@zS7=g %t/1B* =)JS*/42S YҝJHK}L2N}*T*@KS̠038BS*X*VqƮG_; b&μX%,c‚W5f` TU;A~F2:@! /TOU 8%Ik>V>Xt,hD% [[,AeU1z2PUDW9(8d(J%Rjto)A/[`_k|_8;`r>> vvv000*** :::wwwXXXsss^^^gggȔiiixxx==={{{mmměGGG\\\888ddd]]]|||)))íɅ### +++JJJQQQqqq555aaacccDDDYYYnnnSSS hhhVVVIII%%%bbb...CCCEEE444TTT333RRRNNNLLL}}}WWWHHH$$$MMMBBB!!!&&&!,2 HÇ#JHa 3jȱǏ CIɓ(S\ɲJ +ʜI3b.sF @ Jѣ?ɴӧPJJ* ʵ+QZٳhӪ]˶[ʝKݻxz$߿mn^̸)L4LyqaĘ3+|rπ7sM:^ͺgа۸sͻo<\=qS_μ0K΃=iw"HpLAXB;F|i#M3tp;^$AHl a`iɩy(}>!nDJ1ϵ8 [ot>+^yY7^ΈW6ZiJt ýVn,gO zNcU: XG\W2Uj "nk w/c|-Pn)(AHz`Am0lD խK §њ &C6,G#~iy?4Z 7"Έ)[⧔(ƕ5{ӠK*LX<5P~o,83(@+)ka@ʩ0|$G,` nlsl#QEqrL%U*K)%Є]_)hHULaJgtP-:rzL&0IM{[LCiVȓC"EL/BeQ&sp'u,Ú, >3L8 s<'2tq;XJ{N=$?A9fTzDjr;Lc6қY?-KQ]GcBa ljf lSrfF-MAB麠ϥDhRIԭf S7YTJU?*.:T0ի*\Iznjh'ɴޒlu[:$&e֨%}hZkGͬFƲUO%:6%fWGΎ4ݩj{pR4gFϵ8'Yg{v}n|p3, ,jZJ- tW|:u):_Bix"%T g+,WMψ4U["7(kߩԷr1ya{휢̴je?A.<|v!pN{#gtv`*6Ml1=N2u86P61)G5Q2uA.392e/z$1krcM+xγQ> MBЈNt 7FP'MiEc*<7ihSXDZ* Hh#<<ґCaXpjH8ЎMj[[ؐF }m/ *uDPSC5^k($ַ{`CM0>H/  b 5a`ā Ҡ kxb @.`/ ] }0gNۜ93@:нpi /C'|(&Q-H *䠁 "p* 9*F3bfF> +&O~-^8'O[ϼ% \a9ebO/>[ xAQ,=@0S|B/Zc 8;¡` ~@Ix,0D|"  x `z0`}Hڇ}շJ2 ؀2pz^zDz|p؁Xz 0 }P| t  gz PA (0J6;6g,1ǁ \@9b9GЅhx$(QHpVgqfsXZ|_3Sg؇l{nduwX|0iZ(Px@iOyX CVFhX'6x}8XxȘʸp(Y҈#ӊ花(U踅x((F}x ȏ]yh\)^9XШ`ؑ&1IȎ*َ!#I h%:1ْ)=YDy NZx }79k؆h\ه^IH6yT)" 6 YqXɓ4ay,Y`2~9Y 8"1wgii)`lPrprtIvـ}i yn ٛ !Y)} Շ熝 0 @Ȑ͹` )9NyǨ@[ Zz iH3٘I p 97 | XZmXva{3Dg0 z8Z 8hr`;pHJLڤNIRPXY+*ia)T,7.0 XhI!'|ڧ~;PplWZʥ́Z  `P:ZzzVXJf )Ѧ*JEW0HQo7qszX `z:Z*; NFXJDZ}0 zڧ: @T\y# r@ `*Z| pJpn:K4Ш ۰I@*Zj抮@:$Kpp *wɮ"!R#*C0*CDa"HD/i"@ZV{ & @a jlq*PPꊕ8;57NPgж[8*pJ[;뛷mq wxQ,ZyPqMۖ :ټΫ[wc ϻo` ,a[j} `ҔNy@nۿ W05;X0*Fzp'p["X+t @5SV",sX*ѢSpQ yBy'˙Xr gC00PyQ0晠٪:r )\z7hƭ'zR8PQU ,@ ؘXkrpQPcڔ#:7j90j&IGX)l2plVJU kM!  l  \y{(ɵZ @Z}T"h*'J:7#:&L}{K+-y0ŬȬu\<&6@xp@[| P Up>\Pypnpy@9{%ثPzT Qp:hl(|U1lXҖ眇Py ҕxϞ  eMѵ` ]μ@ Mp@ fp{ pv}$a1iI9,:b )0N)J+蹆 ȁ{w-چd* g  PPЛr,qP 0 ZS!My#D]y~:}ZX#ʯ;}5 Y]@;UofMyFνԤ}l-` p֗`̌ 0.*x @d Յ 0 =DYjJP,HjYt 1G?$mۗټìŖJLNL֭plٌܑ]6& hmAp m 00 0 % aT!]SUy<;[]N1P` p s@ [ Ѳ]z~(R``znC1_mz҇^LdNNA-* 0P ހ P/^N}* p*pē;.0^&^> ΄̞sZ OkP U02۹A_ZL "?$_& +! sm A R﫡p]?K%Q 1N_zO/8`^OYh\7kQ|{@Ǧ!aI>KKL]ON``_1pOˢN4M! H"iw *|o&)!O9qj/31C@pOl'!H8i!ֹݘ&_,j?OOģ:-vO $X`A-LqC% rL$O E$YI)IW@?,ZͤYf6f:p$ZhD~/PpԩO YUY`$[ ڶ_d*Uy\w"a!>Gg%OVe?r&?״T~!KAM Q*(=P*4(Ay]C^D`sѥO^ݺ +w Ӯ=]TM9 ʷvJv*40P@4kw:R 9 4@K60P!ԛ2³,˔! QDBN;5bk6v2*5Xk5@ A(H~T% JpI Di$@]"*  <`ݞy 0؉vʂIهh;~0~Œuhۢ` 喕[~|HTx *a`~9ѫ to6dWNuf@T hjNm jUAg?5 퇂S sqȂ$#)ro'_=@W5qX )KN-lLC'Kr'b_5g{n#ɝ-^@zmqLJC`{]zh %;eY)#4/6uT<7m`@h44@U 䔼0)Ҝ #6pE,fQ[bE,#1"5 .] ٥6bP<|"POV }`dd#El8⁃h5z+D6|zDLpI H gĄWMl%6 ꇯr?jzHwLhFSӤf5IM0,L'j}!bɉTPTy -Ty!&~D/4r\S e(C/LKS qS'ęǀJb*C\iXN F eבIYI >$fˢrP˄Ѕ>RH$TDɵ*w$*NbmSÐG? " p:/.J}$.Т/"x 0&١q@XӁJuKA*b4d ڻ-)XUAYVy&?[WͅS{7%x>mre;mD ^6԰# KFa69iT0p 5 p WE:~嫝emT<@V sZs ei:ۣ!NhvK a춾L|b%i;_T#KPF7ȩe{ժfw] c#E mV:NB䈱Y$w3ΑlRؙ^捳E`bLxrADR &eG; m ]$mYdͮ?:K3vJ{BC@ن8sxΩ~Z|k`[ؿCSp޵QIhb]E3jмù-JxWMPJC-7L˼~}o|Zf!:V6l:qky.\:wñylKI l-([Eq;|\eOq:H [*G@DW[<`}@*(9-Ő08U( =ӵ#9rTy ̽b9I&H ڙ8ܡ;:A LQcPj33p ; ᦂ ( 薝ȔXt<@8ibB'71xc {@hK0I2qi 'b>?,Oi&#+9v =rYDiD[>X17L!4 =Kqra#LـS.Cz 2! 
:: 70`¾QK XY籏kLq=X!b#$PnT^j), 'ڇ-c|s 4z;U4Y3WTPڔޠڳEX4}H&a<{38fbFֲZ^d" iHIs <[v:9b%IĭA*'{H!Ƕ5BT i؉ɉZĶ/\5`(=dÎÞ9٫K ۹AFZ,xG,Ťc&Z6.7s)|[dFn|VC}c}H@}7GdOƷbScKe)BdOHeU^eVneW~eXV`RIJe> pYfafY&d]>f1e;Hcd[g4\QXfS9ef?ufm>hk6aq"f.guqg5gE{agOg5HgMhPg$~5T&f`h`hhU67> Li wQx.i6j6s~Jf6kq(j>jNj^jnjM8քcᚾ* k Hi2c`& C*jkzQk~kmknWTENhf䁶k0H`%Į`[lxk.Ah.mhɖgJNmh~`lle`2@mmnn>Йyfm>`vNWnn^>W56`8!p.fH_r"0;PPTo08/[M@rP{X7XȂs{lhlt uDWtFwttItKtMtOvQ'uSG9jhud~oGYo''*,^1/3v6wsys*mfזmeijqTnv|?D_FusJruNR?ucWz_!GW}u/'x`?x5ObO@hxsOmW@7 Ayp/qGHWtto_x<`NkyY_r~)w]uG'_z?P izknzw'Gw{vP'x{OWw[/`{yoTw|pU' zq?Gzǿtyw|OW{_{~|uu7a?}G{l~:hזO|ruwޜ?Or{\xx1x`0^8dC *Pƌ7r1 ,i$ʔ*WDy2gҬi&Μ:wgG~,j(# 9`bE@NdAI# dQLj"zzQ&0 O< n#Ŧ| Ԫ<a`l!ƏEH !\|τj(?Wf;.{.K,.G;NC..?HX)b"X4gl|%-5bN6풛YdmMlpF+,<3mWJ'uZ0w2!rO qv<Ǫ!4k2Ҫ5Ĭ5Gpμ"ga'-7BjǓRMe#L\۽8bf6:m2MJ'j}CzLz" >;~;N"*;|F9js }ОM?K:O? x n&p>髿>b??>?:zT{%)z3Aan|,A0i[Lv­VFL "`-;mrOZރ`$H`Bx2G?b :OtvYqL\(N5$F/k|d;f$7$<|QpvZ_HAeJU'oȏQ1l !Xͩtd!YIBQj& 6=r\M YʘJDJτK.700DWWh r%8)Bs(p\͐Y"wBqӍ\Z&o NpjQ9.s)w6x6 {%@cyQ׌`qYŌ˛CTU=)?`ЕEr#D#+y~|@zK*vKdH{)Leҥ2դ0Gs#EQ0J0yGZ^rBSUH}V!RVӍn|mIVUz+\۪P9t\Y4T WϿ!%eebSoB :Ҫ$̩]ӳ`'Hк,T-r9q (n╸ mX)\n`3(.GZ.?R*[v&t)(q.g?:V+a P]gE$ھw}èPЬ_M]J`}$VMW!JV%bp$5\Ksap5DfV 0\ ȃ@b Lb P(8 cB%_ 1PD p/DZ"ʡ{ ;A+/nGWqaqѬd`6 eO" m%yqAm/я5-qڳi)n-y{7 ׻;B@L_V-qU3 C: S8HEl"[X ~*![ ġ¡ L aGa@.!,N L"%V%^"&f&n"'v$BF ")Q$z"++"%&A-".."//"0Cb)b!W-3>#4F-54^#6b#5(&c? c2. 5f9/n#!#:;7ca;;c>c6#u#={c#@.$4 ($CF/:(A^d $3J$G#? DvH"E#Ab$JF#;dGd?KJ$L<$=LGL,%+F@!cMMG褘.2%Vj) R$P-B@"<0Y%ZZ%[eZNB2%Iz4ʁ)Je-BY%aa$\&e/.%DeVe^&fffn&gR. ^N^"_I3gkf*`@f2e*aoX"?@gTeb$)&!lonabgv'r^rb!ȁ'l'{g) 'p(@r>AJ癀ȁ)g{(^&gp(g~  -0@ewnvZf<0F<4@@?~2s!hBer(V$<hF@*l@0$(6)fJAh~Tl@|&i翽h82"܂ĔV0 !&ĩ)֩)橞Ʃ.HL\l@fB% )6:.KM\6. !n*v~*)Mx@lbM*Fz)(*jAN<*=V\j{^@"4밆]TEg$zcYTr \^?`=eC0@*^b5x)60 $T)^E#߻4?k-hl]Ѓ \%B&4'3U Pn@) ,+r 7 p$@y-!bA7iABH*) L\Bo+2}/mƂh=Bp.lT @CрHA%1 11BI #@ l/8A B%1 Ā.@ -&D.x/!\ r,&@/B'"'"/2#'-l8q/EO1[ k  <@#2)?r'q: ,pjs0&τD )/7&d8q-kK !4 +'3RBL39@%Pr *7{ l3=K3 l*1@| (3AO3:;bl8DrsL2j 7I/O4HtBk; ` tBH4)w5ˁ]jR(?x4LtL;q d4[0r4O"1tj3ROu"k&4j u(Y7 Uu؂ȁ:RQ XS5V/WWADT'15ZqeA45U'Tr͵ص`s\9i vcO5 4r! ]c!|d}hSu%%piKjSu'6@l@muf'춛oS5?6qwR]7rACs?@TK TsgvqqAw?XvxTAycqK ?p#PB|@4}tfK7%d~?RN7H?w#B@z#8=x/(Uc/}3BuBKvsk{cx&4?1 g@U?8vxIlu[C1L}Sˆ+֎@j79#urK98_5 Lx?5;!q"(@W={<ޟB!|G7~=K 2@(#x|K}?7|~E<#$0?$ d+  x>SՓ>B+<g3BͪNn5rb<Ko:p;N$LBI%[Ne)M.*+;C܃O-d l?/@̩(KK50` 7TCEtN)X\ K@ѺM #HSOӍ:MʒkѿV$.T$3*Ϝ0 [+G;lONJI?[V`1BuL TsHs4$m̯0MOQoI=T1؄7g9T4[O~5:CWf|IHe'T?mGWp1\8V״] e3ކ-V{dbݘ暿sCf65RGtRlx1L$4nƎU5ŰݓiMyssW|xf>9zY!{n4kk2öMZl?FjuIdQQe͓W%I-\2& 蹩a%2u~M]m3kŷfnz0օ~']E`hN!VoY Չ>UU6KvCU. A^ҋnYOU7\FC~~tDި !md5Dj qkO PXGx>Naoc F/(`6 Lh`yB+SH .2!=U?CȄ,X9P;Dz ~. @h$b1~#B˄D XGGAWPY 4 iF"PX 9 ~ #/Q-`mBXDb>  ipB.}`!n`@8m,dЀ"Lp시Jk cWx x@!yHD-ATz0BW$X6C@PrJ 5g4dWiV|5PP8?:7d E#X`qC$HCc@K /&VאO 9 !DsQ0 ~PE?3Dʢ 85hNuP8  A4h&`KO@[ Ҝ)?! RܔNMC _"ڿCJe*0UC"D+p5od0HahT͉YHE?(Al[q7<@iIEN~I"X7*e%E [ƈf;kȢ5KXj#1 &K8Rpo,Fl{б~w&u%V4ب]-тZ(f_!Ehahpmw\b脗{/?8?Ϟ7= J HƷ>,ݺ*[~H LL-&QWu ɄHc@,I4lݯ*H\d3c2#IX,\A ^B Tx Q)T@'-`q~۫ N 9f6(*$>P>&: Nt0H T1Qs8dNFRrは`oCX8d,-Z[f%h#>Dž?5}Lո @^{9QvT rYq wݮȒapān-J_sW:S)~̀ˀ!PռxZu|} ^ <'.'CBeB v4*pҬKC/`Ԩ3 <7+i["3P# `jAR 1iy+>pG\>r: )fDCyGcab5 :j-s!*;!n3@34sބFᒮ$\ {NܺkQS>B?T4|0n>!FDBGB5/F&kAG ^D4@7@h>Ϋ F@TxbtDgTkjq 9 B@~JMHEf@HN}ퟘTKY ~aJ3@ 6a|Г|Jaed.`MM,LyHTnOy,AY r޴Da "_`dA `E. 8A$ *X>oORPD4I"͚/YGNE!+! 0<5j!, $iXrp!OCXVoջrs QOJT(Y"6 ! rh  .AVj o qG_I ^UD'Q{IH 4n h kN_qveY\vL5.fLu_9e@ txV+6 S? 
A=D 2p 1cm ^_Ҕ%A 4m n#@ Z6 8 (a^@4P(">|A`@Wms@&rursE'_Q/I< tuR227ے[ 2`~ aj`J$w1az7y#ky3DWs?fYB<@7ېcQJR ;P@A&>@ *!֠f!t\]m AvA^a @hZ]9E`ebG@/WivI XmFJWnqlKsx1T`{sw@T8 `i批-'tOS a6sm9cj֘F_N5tS鎋l8EԀ>:`_p0(4fVG%W@):ضPW6Ui8a RY\<`{`i͍G @Q+rmFtlXfvskA[`*CDyj8gN=Y9euO9H 6 qsx%و(y[րԳ%kAZr+TEَ,$ \Yy'[2`# y yOY3y ` 9V 3ۚ@ z8CZ @jA]-8{A{:ͺQ|{<ϪW{!\:I[Jv`!t ٪k[X{!Zûmg!#8[aS adٹIΠd!"`U4|ÿ%a"An[PI ;q|@j\@#YF"0f iJa!6l Ny\aqA.9 :z< `;{E T8WŽ@ <1 `e@5a-*t a !uS lӽ :  6! ` ,#M=b ATo}L{}cLa=Ёٝ#@ڧ}"&H֗ v]Ml % >!q2`M8 (9 D?1y,֤ @}8 ;PKJVWQWPKD>><<<000???}}}vvv󠠠```ppp;;; ~~~PPP,,,˷ƺgggôJJJyyy::: 777XXX[[[zzzsss+++---wwwiiijjj]]]111555444ZZZ///___ooo***^^^UUU888...===222666ddd\\\eeemmmQQQhhhcccuuulllaaatttbbb333nnnfff|||qqqYYYOOOTTT{{{kkkxxx'''&&&(((RRRKKKHHH)))VVVSSSIIINNN CCCDDDFFFMMMGGGWWWLLLBBBEEE###"""$$$ !,\C$`A4.@ .4P!ĊJ!F 3+pD M2DPA] @Ȓ'qL7Shτ pRJFTjՐ #EvlXYm9ژmᾭ\4[Wџ"ljੇ&.l1UC(@d-O|p ~rtO.!"_g ʴgۮ횷۔sޭw߾  2>?g=t+_/9tBOӫ_Ͼ˟OϿ(=U& 6Q Vhfv ($h(b FD4}$\Fn \A C;>ޑJHXj﹡n]/Lh|zGT\1.']7,LjA ^g}*:K/|?oUI)7G/W/=gpQlAMI@2V^s=^#4@_F`1W )|;G dl۫%8((sLdLUԕs,CZO4@ ] BGAj‡^-,͂i+N cm;6}"x> Lzei Qγiˆ@t2lKa-u)=&0>'dNh$N; { x*@ڷ-4sMJEKf6В`D[Xs2!Ng4a.`hbչ`v* `dʼV' `)v$\g쪗Єf;[^@ k%n:w ]݀ q0d EIJx[[͢JxxXQ?O~6D7D Xfuі0 m^ *0&ukСr8 BG% ԧN[}(9漣[p %9 b LHHxϻ=KXBOCa `lh"ϧ]g{_EXbv_{~hO;m'^d+;-ʖB:AK~F/|Ы s`+~*$EaLaO~Pノd*=_)o]^8*|HA 9h&MptMѧz<*hDFZ>  GEwTe n g" }Z~w~"8 %f` xWc'C0cBg #E0zS$ x[觃 BJz KJ/Ьpj)` @ eZ Yzd:?K5Ȫ"%zrJuzx|ګ FJz p\0/Ҋ) v h0j v zSCǮ Iڦq:tjwz}ꫀZIJj`P ˰*K hХP: γ';j+;j{ʫ~گ8kI@+C۰Xum5 z y0Q >Kc:-:5ˤ0%"`nDw 8V` |+~ <몄ʵ`+J6+X `Prpr{xy`з`(k]k03e{;m{[g ں +%Iy֋*;_bK* ˒ڬʾY[*  1 2 ǜ̶S@ w ]FN B;v[ a挮d鼪p˕Zkcx -`M@ G5 BȌk 3k0U $]W-N\K`b=d]_tƝ4^6^#CP;-v|x7=S|pvP˺`MP yP i'.ѥQ0$N ۟ ʊȈ7AJ@䢺 OG.=|X{o=c`Y0枽ii9LmÒ̤TRSfvèmO慎7%[ mP; W|%3 zE* QPAY!uJN)@F`v}J Q R©ξU,{ӎ@RF+/0#HFΩ`NU {Nî1`@ Y7̤PS'Ѭ~)@n0 U @Сܘ"_P N :pޛ̻^\ 3jMT`#T G #KߪLp+ QV/[ϸ /O0|) ovURsP^yy pPd@  " `ݠ@ '  j w0eP0_!.Ĺ ؟ڿ Eo`Z ,ҠӵǴs_B7ucb'@.wTm*A B]:֡bi" `j qBlp>0~5 CbсK"X͑`d%-IIkILn 0&03.Njl!70!xrG$$q,X`L#l x@"RӬD%nq b8E/,a-$ZPG<)_Ȑstp?YχsYD9 .N DO;v )H ɏd)kR YPNtBD(Ӎ*Q?ӎܡ>k  (}Dt (^P=%P`j.Z2N.a8"3AZUaKVOR>U|5'GE*v`\R]*1.))UF%dG`_13d; XIⴐMW)R”c>j˜J & ]W .v nOx* H@=*Rt[ Ր9 (_IQCW g8,ք,5Sx497~ t1`ӗojME=꛰wJB TcV4ugx9V zj2hl%o{MO@G1̡kc&vURp1&7goٸZs4nbDŽrx3o~ȣmfd2`Y%>g@3NΔ>DP0$vycb>֜7_7n:82d_d!a ]ƣ>`BOÐq 4g7GvqOpiӠ8&(π1t=60!ɋض2_vWy.?SV+ c7@.ݫ]~u!hORc?i <<|9샻ӴC C  x;3śL)nl@t8͓s5s[Kk -@> X \$U?|˳+AO\!5=3 >S@>4P(H> q7 4>46,CPPD0?;%c@3b`D+0) 9 'c@\,x8ɳ(EPhH+8@D5@\EFƙ[ Ve(4'у_H\DGX-5Et`ƙcDL(;P?Mjhl_>Pr/ $0|$;b>ڔ)MM0R `ЃG+L$[s@DNK RqIS,ޡO p8Ol  -,OJ$a2(9-"?L5Rq9=x ͖0 أ=s QR# R!R"-R#=R$MR%]R&mR'}"F(R*$R-}R/U.R1-0S32=S54]S7L }S9ҰB:;E?SӗTAMB> TDB=ԘpDD JSKLGNTTTL%Q UOEUPTeDuTX YW>UUZU6IM` 5 acվ3e%Vbb ,UO ktv־VporsVj teu%wnoחWטWz;{M~fEցUh%؃5guVօE؇U؆5xmWxVuXw؋VqXXtX Y|ה] XU}~e٘RnUFtٝ % ڡYZ-ZUڛ0}ZuZک YZڮZZ)Z [5[} Fڷ5ڸٹu[iۻ9[[[ \\-\=\M\]\m\}\ȍ\ɝ\ʭ\˕,V\̭ 5EMYl]n|]]׵ٵ} ޝe5ޥE٤Uަe U^jY}Y]^}7Ue5Z____e!M%ؗH.NF`u_=_ ._```u65-a.>>aHaVfva``!c#&bY^U_u\Ub'F*+nb+pD}.b>c c2@cPt`pc8c12:><^=~>FM@fEBS<d$\F^wmdH6dJζndLdMdNdO5hQn dHۙ7JPthS+V7x30# 0tX0]NP _e3MS[et0fUF#0ci8pNx`eiSj$"HKxF$Xw6hWN\gt0x{~^/mg!ej t\.=l\^=jǎlɞlʮl˾Ee̶10w R);ql:!XBi@QzdԖ,8!X^#x`|n@kbmga&XٶƮE^xhh!#HReCkh$Pvnچ#0pnp"6fVa oX~jg6fG$p6#e! ostXqeb.r\?- "ag # +#H#8 sf@gm30htxtHHo0@-w`m@i%tDifJ*TXuVoU'TJf"QWR0Ȅ@$Tu`K H0 "Hg.i"-Qkvlvh %unM#e$oM ?f&Og.f}v|OF~|/~% 6@0}@/}?qgڏ #m"sŧ:&|kGs/E+xAtlk!:}[`Hml@A;6 cC LS%̘/YS&Μ0 РB-jhP-n,)Ԩ2A)֬Ze@pL  q :d0>2T4.LĊs0Qm$j1? (BeQiH:!FAt:#BM$ ,w>3/nʕ/B8t ͂Ώ自mcG n- YYw%~HK\"7#C8#AmCB:Xy%Yj7@?m!mS{U"mIVp& 4{}'S9 2mRFq\mfbΙu~1 #cV?H`*B5ޘnDj)H. 
`ޑ@u2+~ƩJ;)&c Ta MVuzwXHEK-b)mpKd 1D7A_.a6(o rF06pbɂݎgyD*pCJo@ٵYZ[ta*CQ/ 0 (lEƸ3>7Tq I,pICMC:4DSo@Xߡ5]Pxsc! tbt yғ56k }BM_f@B`X!X'{P2`W:dATQ/AI&Q/L@x6I'vX?O+Ԝ Ap LӝYz3M']T'" ]( PyEH'G j-X@IOmT*S>r¼BQrM Qoԗ2DPTuǂsW5AV*,`6A0 , [1yC   V٤$A %ղf)9v v˃ꀱg 6<(q^+\ؤ]kU=Pju5-jUK@-V %(Npַ[j\rSsǹ pA J-X pO[%//P>(dz׋h5|Zіm `F.\W\OlFYсWk ËP$dzQ%|!: cr֪ }_$pv2qz\-\$tv0f2{oD Rj>TtfvL۱kd&LN4`*C8Fn=LgXӜ5EMjWX hHs L=@ q5qk\3ͽjL_ {-dEGp% Cq_Czml{ZfU 6;z"=?L{Ӟk![G'/Zn'Mpbyl;pO:U9Ѓ.{u@*< ܑ8?{4~O^|v4,i ׵`yLsszW<0鳎/$uPQן4Nl|-7!s'`XTF}AwgAY>%c4B)z!Ti]:8H:D824A.C1$:":B%@5CP0BC<%TFTN%U Xy! C&+=DYd?f3΋\ՙ(bj)":K X:$P@+:Be-"bM)-[?*@AhlBԀ2diDC4+B b`/(4(Y:PM'tBH)L,$d駁-De%@qBilN(^] [ 45DjC'PA$z\) ,+hT C5C ΚV)5*٪y@УbrUB j 6ܳS ŠX Ȁ t[ \@1Db @ + 3*1WnI$0$72d6i&',H @$1@:dB<28 =5݀ S~,г l42[Bl5$!00++,p(%H6lA$44nBH[]5*P&@(m2R#j65S@-|8Tu*GO *ץ,u&Q7kFd[-nTϚ p3~+ 6 i6d&WskFtO7u7&@ 6qGV? Rn 7ki> sQ@ -01 0̷xOn4v厫hz*9i-fX@}7h8A,|x,'x(x-xA*3Q;…7p{p`z І{K*:Ђ@k8ɂTA$eeZ'/ "-0*gS?‘xAod+{6Rxr @+90LABW*Ke 8QBAw `/G7j@y ESsHR?@/4oG>8rw֢8tk:?;b#6J ;?sy¥U7KlG y'd>˽>M +:~d*L;ԫ>ۺ}Lj &_%T`EW  ;oܷ>4xaB 6t0'Bx#ؐ!C bt„ (b>#@- CLςrƻNFۑ0q[*3@h2?2SL<}fm/G -31$n<42TJm/S)?E'QHD'!^-(z]FtQ"6EU̜V0W54"s[V]r=5A5lY }Yy-s^AU!}5ds T Ym>(fb @AKE6R&~>Ɯ'`gKo 4@D&~{1v'GKT QbR.>ypTU C1 E(B4X XP)L q _LbHҠWx 'XLbVGH A Ѓ^ob"X Ml@@)p 5n," @P 9>?aYT'j|$D|  #e2U0N1$iY%* N+dB& Y&-`S  mjcV@AYO Les@N*/ @ ^ӢF9v!E7孑)Z:R1LiZS9NyS`PZ}ԨIUT>EMT*VYmUU^WYz@Y:n]k\jumkA˽r^Ml\Ƣ,bbXVVA,Xz m\CÖ 5-iWVDe-lTmjq["o}\ND|;J.w}s*R.vknv{dMGzރ E/λ5nq{\׾po򂷻%wL"x fp^6ְ ||k^:u.+g,c˜6kg!7K ǒc#;Cr|$-lge׶YHLڀ;PK=IIPKD>>oooɷʶvvvOOOƼ<<$Сw^hbʼn ?n!H$)3TPfCm9M;khМCUzTN8&Z񢀊bZUծ6:wU;bڵ*|V@l]x-uW{\yreJ|y`浛=wYmhңߕpZuխa[<t {wソǭP7 V~qb!7psϱG>]{uݹLxџGzῗmmc|q`G"H`1`.\$*t!Nȡ@ء#FbȢ'b$8\B9*=A 4dEs$:$O%S2飓WB[Re b9`JVV)tix|矀*蠄Z[o袌6裐F*餏"f馜v駠i)jꩨꪞZ**무$zr< &vʅ;кD ->Nv}Fn@Ůö8Eҋn ;lB+;7;f;;kƶRmW\к L-ml8; r+.Cj70cqDjhrky+qhvȡK2S}tKӮY1unƴ:7N0o!Լ,gpۭ/ \Jm:OK[ťߞsYF+nz> Yo{l;786YLڿ!3uʌ{&4Mf|f.AޡA_B@p!ܠnhJ+Cx tw)@ "|("ե&:PH*ZX̢.zBܜA tc=9Lטz#%6q|Gj az `*Ue X*#$6 l]j& wuWXV 9U&`>0;X WHQ/w! Aw A >}4, Rا3;pHM1+^-n m6Enc)YʒW7`JiP @W6n&_dvً`R+m}ֽaUHg @X oUpT`͙-]z`mϻߒ7ҵ@&+$6.=ps;5!VۈTbV ؚxŦE%|K Ad+e.Iqo6dk'4p@\2O3;"Ǥ(LtS 0q1ߗOuLJE3zt~@p>+mW{t]VQKeTۢphG[Zͪ uf&lz͂Bv@ZTw9ث}DMWڬYr4mZN pmDMWw %Tq@@-fzV%kK_ƻZ܃ɭv@){ f kq;Ֆgr%I%;IAm(wK{c{o }  )`(?@ARyYVfX@\Cίu|}6L?$tR[YRٍ̭(g~FsLq1Wp_Ђ8hpH-CMT`GX}GK~|vLlTmh'b"HˆpXJ{(ȇ!`_ Zj;FЈ"w>*O'''r'Fpv㸏pPH|aT!Hhy3 Qh Yj-D7BR+UG*I >IhXj3>-_`6}HrRhE'$M(#y2%iR)X)_-KF(2) [8b5)*pp>Rti`wgkʴoPx8!h  ( (kYv3vwYВq]ՄUVUvyi9rxgyR]pi6yXw}ғa ( /X-Bw-oYcH@JY*b&:VSfw@@ lI`zB F0=)+bPG`OEWJ)K'Z(8쪹^+`[' 0\`q෮70 MpVY@;BFۓ* h 늸Pk VvuWZ)iۂ߹KYO]'_۹ 盾 *40 ,&@ KN[DwR{|Rȷ B10pIMQ̿[uv{Rԋb\ƮI [ `Gp |U fP6`E\/'$v{-% P ;Pa1\KÌ=A<ĕ|HLPLLb ZZeeʄ6;}ּ̋ @ ۞p{|\qbY^7_F( )|,`gzI ؠ콈+˹kLɖ,ş\'Zl<`{7TRUT.r@uU( ` D+Mo -@JlnkecWW=p:(#l WGpǘ ӆsm29m;,S{|6mHTlWM(Ym"Ѝp`D`=Ilh݇x o}kT6^DqU]aCU('%0@PhlI -p UP=r] 8]ÚÜӓB-ڜ,ϥ IfMk\%}R-0w. [Ke`m@pU@-%''VZ]m pi @-G}s0}'3ٍӐڙ n|vT=]0uiUz)(nݹ a -@^_n<Uխ'U*(C'E~޹K>J Pr@83x|t <#33(P梊oNԤ}t.uRTɗK6,=@ C= NV\xa&m@pTR9VA>q /"e FpVo^4s.;'䅾LkkKF``+ P >$@C0®!,E=Ϧ]-~MP?TWP.z,b}f 'n9) ./?.boࡍ>oGo۠ )P$8 %@X DP!4hE',XC;>|" ИtNA b С#LDP (P[B(J]!hd;0ihhK+HŰ$MX)(TQ Z.fc sq"E2n] ]iԥa^! 
Setting Archive Tracing

F Setting Archive Tracing

The Oracle database uses the LOG_ARCHIVE_TRACE parameter to enable and control the generation of comprehensive trace information for log archiving and redo transport activity. This tracing information is written to the Automatic Diagnostic Repository.
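
The trace output itself is written to trace files in the Automatic Diagnostic Repository on the database host. As a minimal sketch (assuming the standard V$DIAG_INFO view available in Oracle Database 11g), the following query returns the trace directory to examine:

SQL> SELECT VALUE FROM V$DIAG_INFO WHERE NAME = 'Diag Trace';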

This appendix contains the following sections:

  • Section F.1, "Setting the LOG_ARCHIVE_TRACE Initialization Parameter"

  • Section F.2, "Choosing an Integer Value"


See Also:

Oracle Database Administrator's Guide for more information about the Automatic Diagnostic Repository

F.1 Setting the LOG_ARCHIVE_TRACE Initialization Parameter

The format for the archiving trace parameter is as follows, where trace_level is an integer:

LOG_ARCHIVE_TRACE=trace_level

To enable, disable, or modify the LOG_ARCHIVE_TRACE parameter, issue a SQL statement similar to the following:

SQL> ALTER SYSTEM SET LOG_ARCHIVE_TRACE=15;

In the previous example, setting the LOG_ARCHIVE_TRACE parameter to a value of 15 sets trace levels 1, 2, 4, and 8 as described in Section F.2.
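
You can confirm the level currently in effect at any time; for example, from SQL*Plus:

SQL> SHOW PARAMETER LOG_ARCHIVE_TRACE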

F.2 Choosing an Integer Value

The integer values for the LOG_ARCHIVE_TRACE parameter represent levels of tracing data. In general, the higher the level, the more detailed the information. The following integer levels are available:

Level    Meaning
-----    -------
0        Disables archived redo log tracing (default setting)
1        Tracks archiving of log files
2        Tracks archive status by archive log file destination
4        Tracks archive operational phase
8        Tracks archive log destination activity
16       Tracks detailed archive log destination activity
32       Tracks archive log destination parameter modifications
64       Tracks ARCn process state activity
128      Tracks FAL server process activity
256      Tracks RFS logical client
512      Tracks LGWR redo shipping network activity
1024     Tracks RFS physical client
2048     Tracks RFS/ARCn ping heartbeat
4096     Tracks real-time apply activity
8192     Tracks Redo Apply activity (media recovery or physical standby)
16384    Tracks archive I/O buffers
32768    Tracks LogMiner dictionary archiving

You can combine tracing levels by setting the value of the LOG_ARCHIVE_TRACE parameter to the sum of the individual levels. For example, setting the parameter to 6 generates level 2 and level 4 trace output.
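
For instance, to track archive status by destination (level 2) together with the archive operational phase (level 4), you could issue:

SQL> ALTER SYSTEM SET LOG_ARCHIVE_TRACE=6;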

The following are examples of the ARC0 trace data generated on the primary site by the archiving of log file 387 to two different destinations: the service standby1 and the local directory /oracle/dbs.


Note:

The level numbers do not appear in the actual trace output; they are shown here for clarification only.

Level   Corresponding entry content (sample) 
-----   -------------------------------- 
( 1)    ARC0: Begin archiving log# 1 seq# 387 thrd# 1 
( 4)    ARC0: VALIDATE 
( 4)    ARC0: PREPARE 
( 4)    ARC0: INITIALIZE 
( 4)    ARC0: SPOOL 
( 8)    ARC0: Creating archive destination 2 : 'standby1' 
(16)    ARC0:  Issuing standby Create archive destination at 'standby1' 
( 8)    ARC0: Creating archive destination 1 : '/oracle/dbs/d1arc1_387.log' 
(16)    ARC0:  Archiving block 1 count 1 to : 'standby1' 
(16)    ARC0:  Issuing standby Archive of block 1 count 1 to 'standby1' 
(16)    ARC0:  Archiving block 1 count 1 to :  '/oracle/dbs/d1arc1_387.log' 
( 8)    ARC0: Closing archive destination 2  : standby1 
(16)    ARC0:  Issuing standby Close archive destination at 'standby1' 
( 8)    ARC0: Closing archive destination 1  :  /oracle/dbs/d1arc1_387.log 
( 4)    ARC0: FINISH 
( 2)    ARC0: Archival success destination 2 : 'standby1' 
( 2)    ARC0: Archival success destination 1 : '/oracle/dbs/d1arc1_387.log' 
( 4)    ARC0: COMPLETE, all destinations archived 
(16)    ARC0: ArchivedLog entry added: /oracle/dbs/d1arc1_387.log 
(16)    ARC0: ArchivedLog entry added: standby1 
( 4)    ARC0: ARCHIVED 
( 1)    ARC0: Completed archiving log# 1 seq# 387 thrd# 1 
 
(32)  Propagating archive 0 destination version 0 to version 2 
         Propagating archive 0 state version 0 to version 2 
         Propagating archive 1 destination version 0 to version 2 
         Propagating archive 1 state version 0 to version 2 
         Propagating archive 2 destination version 0 to version 1 
         Propagating archive 2 state version 0 to version 1 
         Propagating archive 3 destination version 0 to version 1 
         Propagating archive 3 state version 0 to version 1 
         Propagating archive 4 destination version 0 to version 1 
         Propagating archive 4 state version 0 to version 1 
 
(64) ARCH: changing ARC0 KCRRNOARCH->KCRRSCHED 
        ARCH: STARTING ARCH PROCESSES 
        ARCH: changing ARC0 KCRRSCHED->KCRRSTART 
        ARCH: invoking ARC0 
        ARC0: changing ARC0 KCRRSTART->KCRRACTIVE 
        ARCH: Initializing ARC0 
        ARCH: ARC0 invoked 
        ARCH: STARTING ARCH PROCESSES COMPLETE 
        ARC0 started with pid=8 
        ARC0: Archival started

The following is the trace data generated by the RFS process on the standby site as it receives archived redo log file 387 in directory /stby and applies it to the standby database:

level    trace output (sample) 
----    ------------------ 
( 4)      RFS: Startup received from ARCH pid 9272 
( 4)      RFS: Notifier 
( 4)      RFS: Attaching to standby instance 
( 1)      RFS: Begin archive log# 2 seq# 387 thrd# 1 
(32)      Propagating archive 5 destination version 0 to version 2 
(32)      Propagating archive 5 state version 0 to version 1 
( 8)      RFS: Creating archive destination file: /stby/parc1_387.log 
(16)      RFS:  Archiving block 1 count 11 
( 1)      RFS: Completed archive log# 2 seq# 387 thrd# 1 
( 8)      RFS: Closing archive destination file: /stby/parc1_387.log 
(16)      RFS: ArchivedLog entry added: /stby/parc1_387.log 
( 1)      RFS: Archivelog seq# 387 thrd# 1 available 04/02/99 09:40:53 
( 4)      RFS: Detaching from standby instance 
( 4)      RFS: Shutdown received from ARCH pid 9272
Description of the illustration sbydb045.eps

This illustration shows a Data Guard configuration during a switchover operation. The San Francisco database (originally the primary database) has changed to the standby role, but the Boston database has not yet changed to the primary role. At this point in time, both the San Francisco and Boston databases are operating in the standby role.

No redo logs are being sent or received over the Oracle Net network. Both of the standby databases are capable of operating in read-only mode.

Description of the illustration sbydb058.eps

This illustration shows the three basic configuration options:

The computer system at location 1 (Standby 1) shows a configuration in which the primary and standby databases are located on the same system. The computer system at location 2 shows a standby database (Standby2) located on a separate system that uses the same directory structure as the primary system. The computer system at location 3 shows a standby database (Standby3) located on a separate system that uses a different directory structure from the primary system.

Description of the illustration sbydb026.eps

The illustration shows Database A running release x, and Database B running release y. During the upgrade, redo transport services are not transmitting data from the primary database to the standby database, so redo data is accumulating on the primary system.

Description of the illustration sbydb048.eps

This illustration shows a two-site Data Guard configuration after a system or software failure occurred. In this figure, the primary site (in San Francisco) is crossed out to indicate that the site is no longer operational. The Boston site that was originally a standby site is now operating as the new primary site. The Boston site is writing to online redo logs and local archived redo logs.

Description of the illustration sbydb025.eps

The illustration shows the Data Guard configuration after both databases have been upgraded to release y. SQL Apply has been started so that redo that was accumulating on the primary database (B) is now being sent to the logical standby database (A).

Description of the illustration sbydb042.eps

This illustration shows a Data Guard configuration consisting of a primary database and a physical standby database.

From the primary database, redo is being transmitted and applied to the standby database. Log apply services apply the redo out of the standby redo log files to the standby database.

Description of the illustration sbydb033.eps

This illustration shows a Data Guard configuration after a switchover operation has occurred. The San Francisco database (originally the primary database) is now operating as the standby database and the Boston database is now operating as the primary database.

Description of the illustration sbydb032.eps

This illustration shows a Data Guard configuration consisting of a primary database and a logical standby database.

From the primary database, redo data is archived to the standby database where log apply services transform the archived redo log files into SQL statements, which are then executed on the open logical standby database using SQL Apply technology.

Description of the illustration sbydb055.eps

The text following this graphic describes the processes involved in SQL Apply and the functions each process performs.

Description of the illustration sbydb023.eps

The illustration shows a Data Guard configuration before the upgrade begins, with the primary and logical standby databases both running the same Oracle Database software release. In the figure, the primary and standby databases are both running at version "x" and SQL Apply is actively transporting redo from the primary to logical standby database.

Description of the illustration sbydb056.eps

This illustration shows each instance of a Real Application Clusters multi-instance primary database archiving redo logs to a single-instance standby database. The text following this illustration describes how the standby database correctly applies the multiple archived redo logs that are arriving from each primary database instance.

Description of the illustration sbydb049.eps

The illustration is described in the paragraph preceding the figure.

Description of the illustration sbydb054.eps

This illustration shows a Data Guard configuration consisting of a primary database and a physical standby database. From the primary database, redo data is archived to the standby database where log apply services use Redo Apply technology to apply redo from the archived redo log files to the physical standby database.

Description of the illustration sbydb027.eps

The illustration shows the Data Guard configuration running in mixed-version mode after a switchover has occurred. In the figure, the standby database is running release x and the primary database is running release y. SQL Apply is not running because Database A, which is still running release x, cannot apply redo data from Database B until Database A is upgraded and SQL Apply is started.

Description of the illustration sbydb031.eps

The graphic displays the processing flow for the five states of SQL Apply processing: initializing, loading dictionary, applying, waiting for gap, and the idle state. These states are described in detail in the text following this graphic.

Description of the illustration sbydb044.eps

This illustration shows a Data Guard configuration consisting of a primary database named San Francisco and a standby database named Boston.

On the primary database in San Francisco, online redo logs are being archived locally and over Oracle Net services to the standby database in Boston. At the Boston standby location, the archived redo logs are being applied to the standby database.

Description of the illustration sbydb024.eps

The illustration shows a configuration that is running mixed releases. In this configuration, redo transport services are started and the redo data that was accumulating on the primary system is being transmitted and applied on the newly upgraded logical standby database. The Data Guard configuration is running in a mixed release environment, such that the primary database is running release x and the standby database is running release y.

Data Guard Scenarios

13 Data Guard Scenarios

This chapter describes scenarios you might encounter while administering your Data Guard configuration. Each scenario can be adapted to your specific environment. Table 13-1 lists the scenarios presented in this chapter.

13.1 Configuring Logical Standby Databases After a Failover

This section presents the steps required on a logical standby database after the primary database has failed over to another standby database. After a failover has occurred, a logical standby database cannot act as a standby database for the new primary database until it has applied the final redo from the original primary database. This is similar to the way the new primary database applied the final redo during the failover. The steps you must perform depend on whether the new primary database was a physical standby or a logical standby database prior to the failover:

13.1.1 When the New Primary Database Was Formerly a Physical Standby Database

This scenario demonstrates how to configure a logical standby database to support a new primary database that was a physical standby database before it assumed the primary role. In this scenario, SAT is the logical standby database and NYC is the primary database.

Step 1   Configure the FAL_SERVER parameter to enable automatic recovery of log files.

On the SAT database, issue the following statement:

SQL> ALTER SYSTEM SET FAL_SERVER='<tns_name_to_new_primary>';
Step 2   Verify the logical standby database is capable of serving as a standby database to the new primary database.

Call the PREPARE_FOR_NEW_PRIMARY routine to verify and make ready the local logical standby for configuration with the new primary. During this step, local copies of log files that pose a risk for data divergence are deleted from the local database. These log files are then requested for re-archival directly from the new primary database.

On the SAT database, issue the following statement:

SQL> EXECUTE DBMS_LOGSTDBY.PREPARE_FOR_NEW_PRIMARY( -
>  former_standby_type => 'PHYSICAL', -
>  dblink => 'nyc_link');

Note:

If the ORA-16109 message is returned and the 'LOGSTDBY: prepare_for_new_primary failure -- applied too far, flashback required.' warning is written in the alert.log, perform the following steps:
  1. Flash back the database to the SCN as stated in the warning and then

  2. Repeat this step before continuing.

See Section 13.2.3 for an example of how to flash back a logical standby database to an Apply SCN.


Step 3   Start SQL Apply.

On the SAT database, issue the following statement:

SQL> ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;

13.1.2 When the New Primary Database Was Formerly a Logical Standby Database

This scenario demonstrates how to configure a logical standby database to support a new primary database that was a logical standby database before it assumed the primary role. In this scenario, SAT is the logical standby database and NYC is the primary database.

Step 1   Ensure the new primary database is ready to support logical standby databases.

On the NYC database, ensure that the following query returns a value of READY. Otherwise, the new primary database has not completed the work required to enable support for logical standby databases. For example:

SQL> SELECT VALUE FROM SYSTEM.LOGSTDBY$PARAMETERS - 
>   WHERE NAME = 'REINSTATEMENT_STATUS';

Note:

If the VALUE column contains NOT POSSIBLE it means that no logical standby database may be configured with the new primary database, and you must reinstate the database.

Step 2   Configure the FAL_SERVER parameter to enable automatic recovery of log files.

On the SAT database, issue the following statement:

SQL> ALTER SYSTEM SET FAL_SERVER='<tns_name_to_new_primary>';
Step 3   Verify the logical standby database is capable of being a standby to the new primary.

Call the PREPARE_FOR_NEW_PRIMARY routine to verify and make ready the local logical standby for configuration with the new primary. During this step, local copies of log files which pose a risk for data divergence are deleted from the local database. These log files are then requested for re-archival directly from the new primary database.

On the SAT database, issue the following statement:

SQL> EXECUTE DBMS_LOGSTDBY.PREPARE_FOR_NEW_PRIMARY( -
> former_standby_type => 'LOGICAL', -
> dblink => 'nyc_link');

Note:

If the ORA-16109 message is returned and the 'LOGSTDBY: prepare_for_new_primary failure -- applied too far, flashback required.' warning is written in the alert.log file, perform the following steps:
  1. Flash back the database to the SCN as stated in the warning and then

  2. Repeat this step before continuing.

See Section 13.2.3 for an example of how to flash back a logical standby database to an Apply SCN.


Step 4   Start SQL Apply.

On the SAT database, issue the following statements:

SQL> ALTER DATABASE START LOGICAL STANDBY APPLY NEW PRIMARY nyc_link;

Note that you must always issue this statement without the real-time apply option enabled. If you want to enable real-time apply on the logical standby database, wait for the above statement to complete successfully, and then issue the following statements:

SQL> ALTER DATABASE STOP LOGICAL STANDBY APPLY;
SQL> ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;

13.2 Converting a Failed Primary Into a Standby Database Using Flashback Database

After a failover occurs, the original primary database can no longer participate in the Data Guard configuration until it is repaired and established as a standby database in the new configuration. To do this, you can use the Flashback Database feature to recover the failed primary database to a point in time before the failover occurred, and then convert it into a physical or logical standby database in the new configuration. The following sections describe:

  • Section 13.2.1, "Flashing Back a Failed Primary Database into a Physical Standby Database"

  • Section 13.2.2, "Flashing Back a Failed Primary Database into a Logical Standby Database"

  • Section 13.2.3, "Flashing Back a Logical Standby Database to a Specific Applied SCN"

13.2.1 Flashing Back a Failed Primary Database into a Physical Standby Database

The following steps assume that a failover has been performed to a physical standby database and that Flashback Database was enabled on the old primary database at the time of the failover. This procedure brings the old primary database back into the Data Guard configuration as a physical standby database.

Step 1   Determine the SCN at which the old standby database became the primary database.

On the new primary database, issue the following query to determine the SCN at which the old standby database became the new primary database:

SQL> SELECT TO_CHAR(STANDBY_BECAME_PRIMARY_SCN) FROM V$DATABASE;
Step 2   Flash back the failed primary database.

Shut down the old primary database (if necessary), mount it, and flash it back to the value for STANDBY_BECAME_PRIMARY_SCN that was determined in Step 1.

SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;
SQL> FLASHBACK DATABASE TO SCN standby_became_primary_scn;
Step 3   Convert the database to a physical standby database.

Perform the following steps on the old primary database:

  1. Issue the following statement on the old primary database:

    SQL> ALTER DATABASE CONVERT TO PHYSICAL STANDBY;
    

    This statement will dismount the database after successfully converting the control file to a standby control file.

  2. Shut down and restart the database:

    SQL> SHUTDOWN IMMEDIATE;
    SQL> STARTUP MOUNT;
    
Step 4   Start transporting redo to the new physical standby database.

Perform the following steps on the new primary database:

  1. Issue the following query to see the current state of the archive destinations:

    SQL> SELECT DEST_ID, DEST_NAME, STATUS, PROTECTION_MODE, DESTINATION, -
    > ERROR,SRL FROM V$ARCHIVE_DEST_STATUS;
    
  2. If necessary, enable the destination:

    SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_n=ENABLE;
    
  3. Perform a log switch to ensure the standby database begins receiving redo data from the new primary database, and verify it was sent successfully. Issue the following SQL statements on the new primary database:

    SQL> ALTER SYSTEM SWITCH LOGFILE;
    SQL> SELECT DEST_ID, DEST_NAME, STATUS, PROTECTION_MODE, DESTINATION,- 
    > ERROR,SRL FROM V$ARCHIVE_DEST_STATUS;
    

    On the new standby database, you may also need to change the LOG_ARCHIVE_DEST_n initialization parameters so that redo transport services do not transmit redo data to other databases.

Step 5   Start Redo Apply on the new physical standby database.

Issue the following SQL statement on the new physical standby database:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE -
> USING CURRENT LOGFILE DISCONNECT;

Redo Apply automatically stops each time it encounters a redo record that is generated as the result of a role transition, so Redo Apply will need to be restarted one or more times until it has applied beyond the SCN at which the new primary database became the primary database. Once the failed primary database is restored and is running in the standby role, you can optionally perform a switchover to transition the databases to their original (pre-failure) roles. See Section 8.2.1, "Performing a Switchover to a Physical Standby Database" for more information.
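
One way to judge how far Redo Apply has progressed is to compare SCNs. As an illustrative sketch, note the STANDBY_BECAME_PRIMARY_SCN value obtained in Step 1 and periodically query the SCN reached on the new physical standby database:

SQL> SELECT TO_CHAR(CURRENT_SCN) FROM V$DATABASE;

When CURRENT_SCN on the new standby database exceeds the STANDBY_BECAME_PRIMARY_SCN value, the standby has applied redo beyond the role transition.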

13.2.2 Flashing Back a Failed Primary Database into a Logical Standby Database

These steps assume that the Data Guard configuration has already completed a failover involving a logical standby database and that Flashback Database has been enabled on the old primary database. This procedure brings the old primary database back into the Data Guard configuration as a new logical standby database without having to formally instantiate it from the new primary database.

Step 1   Determine the flashback SCN and the recovery SCN.

The flashback SCN is the SCN to which the failed primary database will be flashed back. The recovery SCN is the SCN to which the failed primary database will be recovered. Issue the following query on the new primary to identify these SCNs:

SQL> SELECT merge_change# AS FLASHBACK_SCN, processed_change# AS RECOVERY_SCN -
> FROM DBA_LOGSTDBY_HISTORY -
> WHERE stream_sequence# = (SELECT MAX(stream_sequence#)-1 -
> FROM DBA_LOGSTDBY_HISTORY);
Step 2   Flash back the failed primary database to the flashback SCN identified in Step 1.
SQL> FLASHBACK DATABASE TO SCN flashback_scn;
Step 3   Convert the failed primary into a physical standby, and remount the standby database in preparation for recovery.
SQL> ALTER DATABASE CONVERT TO PHYSICAL STANDBY;
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;
Step 4   Configure the FAL_SERVER parameter to enable automatic recovery of log files.

On the physical standby (failed primary) issue the following statement:

SQL> ALTER SYSTEM SET FAL_SERVER='<tns_name_to_new_primary>';
Step 5   Remove divergent archive logs from the failed primary database.

Remove any archive logs created at the time of, or after, the failover operation from the failed primary database. If the failed primary database was isolated from the standby, it could have divergent archive logs that are not consistent with the current primary database. To ensure these divergent archive logs are never applied, they must be deleted from backups and the fast recovery area. You can use the following RMAN command to delete the relevant archive logs from the fast recovery area:

RMAN> DELETE FORCE ARCHIVELOG FROM SCN ARCHIVE_SCN;

Once deleted, these divergent logs and subsequent transactions can never be recovered.
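
If you want to review the affected logs before deleting them, you can list them first; this is a sketch that uses the same ARCHIVE_SCN placeholder as the preceding command:

RMAN> LIST ARCHIVELOG FROM SCN ARCHIVE_SCN;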

Step 6   Recover until the recovery SCN identified in Step 1.
SQL> RECOVER MANAGED STANDBY DATABASE UNTIL CHANGE recovery_scn;
Step 7   Enable the database guard.
SQL> ALTER DATABASE GUARD ALL;
Step 8   Activate the physical standby to become a primary database.
SQL> ALTER DATABASE ACTIVATE STANDBY DATABASE;
Step 9   Open the database.
SQL> ALTER DATABASE OPEN;
Step 10   Create a database link to the new primary, and start SQL Apply.
SQL> CREATE PUBLIC DATABASE LINK mylink -
> CONNECT TO system IDENTIFIED BY password -
> USING 'service_name_of_new_primary_database';

SQL> ALTER DATABASE START LOGICAL STANDBY APPLY NEW PRIMARY mylink;

The role reversal is now complete.

13.2.3 Flashing Back a Logical Standby Database to a Specific Applied SCN

One of the benefits of a standby database is that Flashback Database can be performed on the standby database without affecting the primary database service. Flashing back a database to a specific point in time is a straightforward task, however on a logical standby database, you may want to flash back to a time just before a known transaction was committed. Such a need can arise when configuring a logical standby database with a new primary database after a failover.

The following steps describe how to use Flashback Database and SQL Apply to recover to a known applied SCN.

Step 1   Once you have determined the known SCN at the primary (APPLIED_SCN), issue the following query to determine the corresponding SCN at the logical standby database, to use for the flashback operation:
SQL> SELECT DBMS_LOGSTDBY.MAP_PRIMARY_SCN (PRIMARY_SCN => APPLIED_SCN) -
> AS TARGET_SCN FROM DUAL;
Step 2   Flash back the logical standby to the TARGET_SCN returned.

Issue the following SQL statements to flash back the logical standby database to the specified SCN, and open the logical standby database with the RESETLOGS option:

SQL> SHUTDOWN;
SQL> STARTUP MOUNT EXCLUSIVE;
SQL> FLASHBACK DATABASE TO SCN <TARGET_SCN>;
SQL> ALTER DATABASE OPEN RESETLOGS;
Step 3   Confirm SQL Apply has applied less than or up to the APPLIED_SCN.

Issue the following query:

SQL> SELECT APPLIED_SCN FROM V$LOGSTDBY_PROGRESS;

13.3 Using Flashback Database After Issuing an Open Resetlogs Statement

Suppose an error has occurred on the primary database in a Data Guard configuration in which the standby database is using real-time apply. In this situation, the same error will be applied on the standby database.

However, if Flashback Database is enabled, you can revert the primary and standby databases back to their pre-error condition by issuing the FLASHBACK DATABASE and OPEN RESETLOGS statements on the primary database, and then issuing a similar FLASHBACK STANDBY DATABASE statement on the standby database before restarting apply services. (If Flashback Database is not enabled, you need to re-create the standby database, as described in Chapter 3 and Chapter 4, after the point-in-time recovery was performed on the primary database.)
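
For orientation, the corrective sequence on the primary database is sketched below, where flashback_scn is a hypothetical SCN (or point in time) before the error occurred; the sections that follow show the corresponding steps on the standby databases:

SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;
SQL> FLASHBACK DATABASE TO SCN flashback_scn;
SQL> ALTER DATABASE OPEN RESETLOGS;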

13.3.1 Flashing Back a Physical Standby Database to a Specific Point-in-Time

The following steps describe how to avoid re-creating a physical standby database after you issued the OPEN RESETLOGS statement on the primary database.

Step 1   Determine the SCN before the RESETLOGS operation occurred.

On the primary database, use the following query to obtain the value of the system change number (SCN) that is 2 SCNs before the RESETLOGS operation occurred on the primary database:

SQL> SELECT TO_CHAR(RESETLOGS_CHANGE# - 2) FROM V$DATABASE;
Step 2   Obtain the current SCN on the standby database.

On the standby database, obtain the current SCN with the following query:

SQL> SELECT TO_CHAR(CURRENT_SCN) FROM V$DATABASE;
Step 3   Determine if it is necessary to flash back the database.

  • If the value of CURRENT_SCN is larger than the value of resetlogs_change# - 2, issue the following statement to flash back the standby database:

    SQL> FLASHBACK STANDBY DATABASE TO SCN resetlogs_change# - 2;

  • If the value of CURRENT_SCN is less than the value of resetlogs_change# - 2, skip to Step 4.

  • If the standby database's SCN is far enough behind the primary database's SCN, and the new branch of redo from the OPEN RESETLOGS statement has been registered at the standby, apply services will be able to continue through the OPEN RESETLOGS statement without stopping. In this case, flashing back the database is unnecessary because apply services do not stop upon reaching the OPEN RESETLOGS statement in the redo data.

Step 4   Restart Redo Apply.

To start Redo Apply on the physical standby database, issue the following statement:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE -
> USING CURRENT LOGFILE DISCONNECT;

The standby database is now ready to receive and apply redo from the primary database.

13.3.2 Flashing Back a Logical Standby Database to a Specific Point-in-Time

The following steps describe how to avoid re-creating a logical standby database after you have flashed back the primary database and opened it by issuing an OPEN RESETLOGS statement.


Note:

If SQL Apply detects the occurrence of a resetlogs operation at the primary database, it automatically mines the correct branch of redo, if it is possible to do so without having to flash back the logical standby database. Otherwise, SQL Apply stops with error ORA-1346: LogMiner processed redo beyond specified reset log scn. In this section, it is assumed that SQL Apply has already stopped with such an error.

Step 1   Determine the SCN at the primary database.

On the primary database, use the following query to obtain the value of the system change number (SCN) that is 2 SCNs before the RESETLOGS operation occurred on the primary database:

SQL> SELECT TO_CHAR(RESETLOGS_CHANGE# - 2) AS FLASHBACK_SCN FROM V$DATABASE;
Step 2   Determine the target SCN for flashback operation at the logical standby.

In this step, the FLASHBACK_SCN value for PRIMARY_SCN is from Step 1.

SQL> SELECT DBMS_LOGSTDBY.MAP_PRIMARY_SCN (PRIMARY_SCN => FLASHBACK_SCN) -
> AS TARGET_SCN FROM DUAL;
Step 3   Flash back the logical standby to the TARGET_SCN returned.

Issue the following SQL statements to flash back the logical standby database to the specified SCN, and open the logical standby database with the RESETLOGS option:

SQL> SHUTDOWN;
SQL> STARTUP MOUNT EXCLUSIVE;
SQL> FLASHBACK DATABASE TO SCN <TARGET_SCN>;
SQL> ALTER DATABASE OPEN RESETLOGS;
Step 4   Confirm that a log file from the primary's new branch is registered before SQL Apply is started.

Issue the following query on the primary database:

SQL> SELECT resetlogs_id FROM V$DATABASE_INCARNATION WHERE status = 'CURRENT';

Issue the following query on the standby database:

SQL> SELECT * FROM DBA_LOGSTDBY_LOG WHERE resetlogs_id = resetlogs_id_at_primary;

If one or more rows are returned, it confirms that there are registered logfiles from the primary's new branch.
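For example, if the query on the primary returned a RESETLOGS_ID of 720956002 (a hypothetical value), the check on the standby might look like the following sketch:

SQL> SELECT SEQUENCE#, FIRST_CHANGE#, NEXT_CHANGE# -
> FROM DBA_LOGSTDBY_LOG WHERE RESETLOGS_ID = 720956002;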

Step 5   Start SQL Apply.
SQL> ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;

13.4 Recovering After the NOLOGGING Clause Is Specified

In some SQL statements, the user has the option of specifying the NOLOGGING clause, which indicates that the database operation is not logged in the online redo log file. Even though the user specifies the clause, a redo record is still written to the online redo log file. However, there is no data associated with this record. This can result in log application or data access errors at the standby site and manual recovery might be required to resume applying log files.


Note:

To avoid these problems, Oracle recommends that you always specify the FORCE LOGGING clause in the CREATE DATABASE or ALTER DATABASE statements. See the Oracle Database Administrator's Guide.
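For example, force logging can be enabled on an existing primary database, and then verified, with the following statements (a minimal sketch):

SQL> ALTER DATABASE FORCE LOGGING;
SQL> SELECT FORCE_LOGGING FROM V$DATABASE;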

13.4.1 Recovery Steps for Logical Standby Databases

For logical standby databases, when SQL Apply encounters a redo record for an operation performed on an interesting table with the NOLOGGING clause, it stops with the following error: ORA-16211 unsupported record found in the archived redo log.

To recover after the NOLOGGING clause is specified, re-create one or more tables from the primary database, as described in Section 10.5.5.


Note:

In general, use of the NOLOGGING clause is not recommended. Optionally, if you know in advance that operations using the NOLOGGING clause will be performed on certain tables in the primary database, you might want to prevent the application of SQL statements associated with these tables to the logical standby database by using the DBMS_LOGSTDBY.SKIP procedure.
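For example, the following sketch skips DML for a single table; the schema name HR and table name NOLOG_TAB are hypothetical values, and SQL Apply must be stopped before the skip rule is registered:

SQL> ALTER DATABASE STOP LOGICAL STANDBY APPLY;
SQL> EXECUTE DBMS_LOGSTDBY.SKIP(stmt => 'DML', schema_name => 'HR', object_name => 'NOLOG_TAB');
SQL> ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;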

13.4.2 Recovery Steps for Physical Standby Databases

When the archived redo log file is copied to the standby site and applied to the physical standby database, a portion of the datafile is unusable and is marked as being unrecoverable. When you either fail over to the physical standby database, or open the standby database for read-only access, and attempt to read the range of blocks that are marked as UNRECOVERABLE, you will see error messages similar to the following:

ORA-01578: ORACLE data block corrupted (file # 1, block # 2521)
ORA-01110: data file 1: '/oracle/dbs/stdby/tbs_1.dbf'
ORA-26040: Data block was loaded using the NOLOGGING option

To recover after the NOLOGGING clause is specified, you need to copy the datafile that contains the missing redo data from the primary site to the physical standby site. Perform the following steps:

Step 1   Determine which datafiles should be copied.

Follow these steps:

  1. Query the primary database:

    SQL> SELECT NAME, UNRECOVERABLE_CHANGE# FROM V$DATAFILE;
    
    NAME                                                  UNRECOVERABLE
    ----------------------------------------------------- -------------
    /oracle/dbs/tbs_1.dbf                                       5216
    /oracle/dbs/tbs_2.dbf                                          0
    /oracle/dbs/tbs_3.dbf                                          0
    /oracle/dbs/tbs_4.dbf                                          0
    4 rows selected.
    
  2. Query the standby database:

    SQL> SELECT NAME, UNRECOVERABLE_CHANGE# FROM V$DATAFILE;
    
    NAME                                                  UNRECOVERABLE
    ----------------------------------------------------- -------------
    /oracle/dbs/stdby/tbs_1.dbf                                 5186
    /oracle/dbs/stdby/tbs_2.dbf                                    0
    /oracle/dbs/stdby/tbs_3.dbf                                    0
    /oracle/dbs/stdby/tbs_4.dbf                                    0
    4 rows selected.
    
  3. Compare the query results of the primary and standby databases.

    Compare the value of the UNRECOVERABLE_CHANGE# column in both query results. If the value of the UNRECOVERABLE_CHANGE# column in the primary database is greater than the same column in the standby database, then the datafile needs to be copied from the primary site to the standby site.

    In this example, the value of the UNRECOVERABLE_CHANGE# in the primary database for the tbs_1.dbf datafile is greater, so you need to copy the tbs_1.dbf datafile to the standby site.

Step 2   On the primary site, back up the datafile you need to copy to the standby site.

Issue the following SQL statements:

SQL> ALTER TABLESPACE system BEGIN BACKUP;
SQL> EXIT;

Copy the needed datafile to a local directory.

SQL> ALTER TABLESPACE system END BACKUP;
Step 3   Copy the datafile to the standby database.

Copy the datafile that contains the missing redo data from the primary site to a location on the physical standby site where files related to recovery are stored.

Step 4   On the standby database, restart Redo Apply.

Issue the following SQL statement:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;

You might get the following error messages (possibly in the alert log) when you try to restart Redo Apply:

ORA-00308: cannot open archived log 'standby1'
ORA-27037: unable to obtain file status
SVR4 Error: 2: No such file or directory
Additional information: 3
ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
ORA-01152: file 1 was not restored from a sufficiently old backup
ORA-01110: data file 1: '/oracle/dbs/stdby/tbs_1.dbf'

If you get the ORA-00308 error and Redo Apply does not terminate automatically, you can cancel recovery by issuing the following SQL statement from another terminal window:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;

These error messages are returned when one or more log files in the archive gap have not been successfully applied. If you receive these errors, manually resolve the gaps, and repeat Step 4. See Section 6.4.3.1 for information about manually resolving an archive gap.
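As a starting point for manual gap resolution, you can query the V$ARCHIVE_GAP view on the standby database to identify the missing log sequences, for example:

SQL> SELECT THREAD#, LOW_SEQUENCE#, HIGH_SEQUENCE# FROM V$ARCHIVE_GAP;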

13.4.3 Determining If a Backup Is Required After Unrecoverable Operations

If you performed unrecoverable operations on your primary database, determine if a new backup operation is required by following these steps:

  1. Query the V$DATAFILE view on the primary database to determine the system change number (SCN) or the time at which the Oracle database generated the most recent invalidated redo data.

  2. Issue the following SQL statement on the primary database to determine if you need to perform another backup:

    SQL> SELECT UNRECOVERABLE_CHANGE#,-
    > TO_CHAR(UNRECOVERABLE_TIME, 'mm-dd-yyyy hh:mi:ss') -
    > FROM   V$DATAFILE;
    
  3. If the query in the previous step reports an unrecoverable time for a datafile that is more recent than the time when the datafile was last backed up, then make another backup of the datafile in question.

See Oracle Database Reference for more information about the V$DATAFILE view.
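For example, if the query reported a recent unrecoverable time for the /oracle/dbs/tbs_1.dbf datafile shown earlier, a new backup of just that datafile could be taken with RMAN (a minimal sketch):

RMAN> BACKUP DATAFILE '/oracle/dbs/tbs_1.dbf';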

13.5 Creating a Standby Database That Uses OMF or Oracle ASM

Chapter 3 and Chapter 4 described how to create physical and logical standby databases. This section augments the discussions in those chapters with additional steps that must be performed if the primary database uses Oracle Managed Files (OMF) or Oracle Automatic Storage Management (Oracle ASM).


Note:

The discussion in this section is presented at a level of detail that assumes the reader already knows how to create a physical standby database and is an experienced user of the RMAN, OMF, and Oracle ASM features. For more information, see:

Perform the following tasks to prepare for standby database creation:

  1. Enable forced logging on the primary database.

  2. Enable archiving on the primary database.

  3. Set all necessary initialization parameters on the primary database.

  4. Create an initialization parameter file for the standby database.

  5. If the primary database is configured to use OMF, then Oracle recommends that the standby database be configured to use OMF, too. To do this, set the DB_CREATE_FILE_DEST and DB_CREATE_ONLINE_LOG_DEST_n initialization parameters to appropriate values. Maintenance and future role transitions are simplified if the same disk group names are used for both the primary and standby databases.


    Note:

    If OMF parameters are set on the standby, then new files on that standby are always created as OMF, regardless of how they were created on the primary. Therefore, if both the DB_FILE_NAME_CONVERT and DB_CREATE_FILE_DEST parameters are set on the standby, the DB_CREATE_FILE_DEST parameter takes precedence.

  6. Set the STANDBY_FILE_MANAGEMENT initialization parameter to AUTO.

  7. Configure Oracle Net, as required, to allow connections to the standby database.

  8. Configure redo transport authentication as described in Section 3.1.2, "Configure Redo Transport Authentication".

  9. Start the standby database instance without mounting the control file.
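The following sketch illustrates how the OMF-related entries from tasks 5 and 6 above might appear in the standby database's initialization parameter file. The disk group name +DATA is a hypothetical value; substitute the destinations appropriate to your environment:

DB_CREATE_FILE_DEST='+DATA'
DB_CREATE_ONLINE_LOG_DEST_1='+DATA'
STANDBY_FILE_MANAGEMENT=AUTO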

Perform the following tasks to create the standby database:

  1. If the standby database is going to use Oracle ASM, create an Oracle ASM instance if one does not already exist on the standby database system.

  2. Use the RMAN BACKUP command to create a backup set that contains a copy of the primary database's datafiles, archived log files, and a standby control file.

  3. Use the RMAN DUPLICATE FOR STANDBY command to copy the datafiles, archived redo log files and standby control file in the backup set to the standby database's storage area.

    The DUPLICATE FOR STANDBY command performs the actual data movement at the standby instance. If the backup set is on tape, the media manager must be configured so that the standby instance can read the backup set. If the backup set is on disk, the backup pieces must be readable by the standby instance, either by making their primary path names available through NFS, or by copying them to the standby system and using RMAN CATALOG BACKUPPIECE command to catalog the backup pieces before restoring them.
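The following RMAN sketch, run with the standby instance already started without mounting, illustrates these two tasks. The net service names primary_db and standby_db are hypothetical, and the backup pieces must be readable by the standby instance as described above:

> RMAN TARGET sys@primary_db AUXILIARY sys@standby_db

RMAN> BACKUP DEVICE TYPE DISK DATABASE PLUS ARCHIVELOG;
RMAN> BACKUP DEVICE TYPE DISK CURRENT CONTROLFILE FOR STANDBY;
RMAN> DUPLICATE TARGET DATABASE FOR STANDBY;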

After you successfully complete these steps, continue with the steps in Section 3.2.7, to verify the configuration of the physical standby database.

To create a logical standby database, continue with the standby database creation process described in Chapter 4, but with the following modifications:

  1. For a logical standby database, setting the DB_CREATE_FILE_DEST parameter does not force the creation of OMF filenames. However, if this parameter was set on the primary database, it must also be set on the standby database.

  2. After creating a logical standby control file on the primary system, do not use an operating system command to copy this file to the standby system. Instead, use the RMAN RESTORE CONTROLFILE command to restore a copy of the logical standby control file to the standby system.

  3. If the primary database uses OMF files, use RMAN to update the standby database control file to use the new OMF files created on the standby database. To perform this operation, connect only to the standby database, as shown in the following example:

    > RMAN TARGET sys@lstdby
    
    target database Password: password
    
    RMAN> CATALOG START WITH '+stby_diskgroup';
    RMAN> SWITCH DATABASE TO COPY;
    

After you successfully complete these steps, continue with the steps in Section 4.2.5 to start, recover, and verify the logical standby database.

13.6 Recovering From Lost-Write Errors on a Primary Database

During media recovery in a Data Guard configuration, a physical standby database can be used to detect lost-write data corruption errors on the primary database. This is done by comparing SCNs of blocks stored in the redo log on the primary database to SCNs of blocks on the physical standby database. If the SCN of the block on the primary database is lower than the SCN on the standby database, then there was a lost-write error on the primary database.
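This form of lost-write detection is enabled through the DB_LOST_WRITE_PROTECT initialization parameter, typically set on both the primary database and the standby database; for example (a minimal sketch; see the reference at the end of this section for the complete procedure):

SQL> ALTER SYSTEM SET DB_LOST_WRITE_PROTECT=TYPICAL;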


Note:

Because lost-write errors are detected only when a block is read into the cache by the primary and the corresponding redo is later compared to the block on the standby, there may be undetected stale blocks on both the primary and the standby that have not yet been read and verified. These stale blocks do not affect operation of the current database because, until those blocks are read, every block used for queries or updates up to the SCN of the currently applied redo on the standby has been verified by the standby.

When a primary lost-write error is detected on the standby, one or more block error messages similar to the following for each stale block are printed in the alert file of the standby database:

Tue Dec 12 19:09:48 2006
STANDBY REDO APPLICATION HAS DETECTED THAT THE PRIMARY DATABASE
LOST A DISK WRITE OF BLOCK 26, FILE 7
NO REDO AT OR AFTER SCN 389667 CAN BE USED FOR RECOVERY.
.
.
.

The alert file then shows that an ORA-00752 error is raised on the standby database and the managed recovery is cancelled:

Slave exiting with ORA-752 exception
Errors in file /oracle/log/diag/rdbms/dgstwrite2/stwrite2/trace/stwrite2_pr00_23532.trc:
ORA-00752: recovery detected a lost write of a data block
ORA-10567: Redo is inconsistent with data block (file# 7, block# 26)
ORA-10564: tablespace TBS_2
ORA-01110: data file 7: '/oracle/dbs/btbs_21.f'
ORA-10561: block type 'TRANSACTION MANAGED DATA BLOCK', data object# 57503
.
.
.

The standby database is then recovered to a consistent state, without any corruption to its datafiles caused by this error, at the SCN printed in the alert file:

Recovery interrupted!
Recovered data files to a consistent state at change 389569

This last message may appear significantly later in the alert file and it may have a lower SCN than the block error messages. Also, the primary database may operate without visible errors even though its datafiles may already be corrupted.

The recommended procedure to recover from such errors is a failover to the physical standby, as described in the following steps.

Steps to Failover to a Physical Standby After Lost-Writes Are Detected on the Primary

  1. Shut down the primary database. All data at or after the SCN printed in the block error messages will be lost.

  2. Issue the following SQL statement on the standby database to convert it to a primary:

    SQL> ALTER DATABASE ACTIVATE STANDBY DATABASE;
     
    Database altered.
     
    Tue Dec 12 19:15:23 2006
    alter database activate standby database
    ALTER DATABASE ACTIVATE [PHYSICAL] STANDBY DATABASE (stwrite2)
    RESETLOGS after incomplete recovery UNTIL CHANGE 389569
    Resetting resetlogs activation ID 612657558 (0x24846996)
    Online log /oracle/dbs/bt_log1.f: Thread 1 Group 1 was previously cleared
    Online log /oracle/dbs/bt_log2.f: Thread 1 Group 2 was previously cleared
    Standby became primary SCN: 389567
    Tue Dec 12 19:15:23 2006
    Setting recovery target incarnation to 3
    Converting standby mount to primary mount.
    ACTIVATE STANDBY: Complete - Database mounted as primary (stwrite2)
    Completed: alter database activate standby database
    
  3. Back up the new primary. Performing a backup immediately is a necessary safety measure, because you cannot recover changes made after the failover without a complete backup copy of the database. As a result of the failover, the original primary database can no longer participate in the Data Guard configuration, and all other standby databases will now receive and apply redo data from the new primary database.

  4. Open the new primary database.

  5. An optional step is to re-create the failed primary as a physical standby. This can be done using the database backup taken at the new primary in step 3. (You cannot use Flashback Database or the Data Guard broker to reinstantiate the old primary database in this situation.)

    Be aware that a physical standby created using the backup taken from the new primary will have the same datafiles as the old standby. Therefore, any undetected lost writes that the old standby had before it was activated will not be detected by the new standby, since the new standby will be comparing the same blocks. Any new lost writes that happen on either the primary or the standby will be detected.


See Also:

Oracle Database Backup and Recovery User's Guide for more information about enabling lost-write detection

13.7 Converting a Failed Primary into a Standby Database Using RMAN Backups

To convert a failed primary database, Oracle recommends that you enable the Flashback Database feature on the primary and follow the procedure described in either Section 13.2.1 or Section 13.2.2. The procedures in those sections describe the fastest ways to convert a failed primary into either a physical or logical standby. However, if Flashback Database was not enabled on the failed primary, you can still convert the failed primary into either a physical or logical standby using a local backup of the failed primary, as described in the following sections:

13.7.1 Converting a Failed Primary into a Physical Standby Using RMAN Backups

The steps in this section describe how to convert a failed primary into a physical standby by using RMAN backups. This procedure requires that the COMPATIBLE initialization parameter of the old primary be set to at least 11.0.0.

Step 1   Determine the SCN at which the old standby database became the primary database.

On the new primary database, issue the following query to determine the SCN at which the old standby database became the new primary database:

SQL> SELECT TO_CHAR(STANDBY_BECAME_PRIMARY_SCN) FROM V$DATABASE;
Step 2   Restore and recover the entire database.

Restore the database with a backup taken before the old primary had reached the SCN at which the standby became the new primary (standby_became_primary_scn). Then, perform a point-in-time recovery to recover the old primary to that same point.

Issue the following RMAN commands:

RMAN> RUN
    {
      SET UNTIL SCN <standby_became_primary_scn + 1>;
      RESTORE DATABASE;
      RECOVER DATABASE;
     }

With user-managed recovery, you can first restore the database manually. Typically, a backup taken a couple of hours before the failover would be old enough. You can then recover the failed primary using the following command:

SQL> RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL CHANGE -
>  <standby_became_primary_scn + 1>;

Unlike a reinstantiation that uses Flashback Database, this procedure adds one to standby_became_primary_scn. For datafiles, flashing back to an SCN is equivalent to recovering up until that SCN plus one.

Step 3   Convert the database to a physical standby database.

Perform the following steps on the old primary database:

  1. Issue the following statement on the old primary database:

    SQL> ALTER DATABASE CONVERT TO PHYSICAL STANDBY;
    

    This statement will dismount the database after successfully converting the control file to a standby control file.

  2. Shut down and restart the database:

    SQL> SHUTDOWN IMMEDIATE;
    SQL> STARTUP MOUNT;
    
Step 4   Open the database as read-only.

Issue the following command:

SQL> ALTER DATABASE OPEN READ ONLY;

The goal of this step is to synchronize the control file with the database by using a dictionary check. After this command, check the alert log for any actions suggested by the dictionary check. Typically, no user action is needed if the old primary was not in the middle of adding or dropping datafiles during the failover.

Step 5   (Optional) Mount the standby again, if desired

If you have purchased a license for the Active Data Guard option and would like to operate your physical standby database in active query mode, skip this step. Otherwise, bring your standby database to the mount state.

For example:

SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;
Step 6   Restart transporting redo to the new physical standby database.

Before the new standby database was created, the new primary database probably stopped transmitting redo to the remote destination. To restart redo transport services, perform the following steps on the new primary database:

  1. Issue the following query to see the current state of the archive destinations:

    SQL> SELECT DEST_ID, DEST_NAME, STATUS, PROTECTION_MODE, DESTINATION, -
    > ERROR,SRL FROM V$ARCHIVE_DEST_STATUS;
    
  2. If necessary, enable the destination:

    SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_n=ENABLE;
    
  3. Perform a log switch to ensure the standby database begins receiving redo data from the new primary database, and verify it was sent successfully.


    Note:

    This is an important step in order for the old primary to become a new standby following the new primary. If this step is not done, the old primary may recover to an incorrect database branch. The only way to correct the problem then is to convert the old primary again.

    At the SQL prompt, enter the following statements:

    SQL> ALTER SYSTEM SWITCH LOGFILE;
    SQL> SELECT DEST_ID, DEST_NAME, STATUS, PROTECTION_MODE, DESTINATION, -
    > ERROR,SRL FROM V$ARCHIVE_DEST_STATUS;
    

    On the new standby database, you may also need to change the LOG_ARCHIVE_DEST_n initialization parameters so that redo transport services do not transmit redo data to other databases. This step can be skipped if both the primary and standby database roles were set up with the VALID_FOR attribute in one server parameter file (SPFILE). By doing this, the Data Guard configuration operates properly after a role transition.
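    The following sketch shows a destination defined with the VALID_FOR attribute so that it is valid only when the database is in the primary role; the service name and DB_UNIQUE_NAME value chicago are hypothetical:

    SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=chicago ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=chicago';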

Step 7   Start Redo Apply.

Start Redo Apply on the new physical standby database, as follows:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE -
> USING CURRENT LOGFILE DISCONNECT;

Once the failed primary database is restored and is running in the standby role, you can optionally perform a switchover to transition the databases to their original (pre-failure) roles. See Section 8.2.1, "Performing a Switchover to a Physical Standby Database" for more information.

13.7.2 Converting a Failed Primary into a Logical Standby Using RMAN Backups

The steps in this section describe how to convert a failed primary into a logical standby using RMAN backups.

Step 1   Determine the SCN to which to recover the failed primary database.

On the new primary database, issue the following query to determine the SCN to which you want to recover the failed primary database:

SQL> SELECT APPLIED_SCN RECOVERY_SCN FROM V$LOGSTDBY_PROGRESS;

Also on the new primary database, determine the SCN to use in dealing with archive logs, as follows:

  1. Ensure all standby redo logs have been archived. Issue the following query, looking for a value of READY to be returned. Depending on the size of the database and the number of logs needing to be archived, it could take some time before a status of READY is returned.

    SQL> SELECT VALUE FROM SYSTEM.LOGSTDBY$PARAMETERS - 
    > WHERE NAME='REINSTATEMENT_STATUS';
    
  2. After a status of READY has been returned, run the following query to retrieve the SCN for dealing with archive logs as part of this recovery:

    SQL> SELECT VALUE ARCHIVE_SCN FROM SYSTEM.LOGSTDBY$PARAMETERS -
    > WHERE NAME='STANDBY_BECAME_PRIMARY_SCN';
    
Step 2   Remove divergent archive logs from the failed primary database.

Remove any archive logs created at the time of, or after, the failover operation from the failed primary database. If the failed primary database was isolated from the standby, it could have divergent archive logs that are not consistent with the current primary database. To ensure these divergent archive logs are never applied, they must be deleted from backups and the fast recovery area. You can use the following RMAN command to delete the relevant archive logs from the fast recovery area:

RMAN> DELETE ARCHIVELOG FROM SCN ARCHIVE_SCN;

Once deleted, these divergent logs and subsequent transactions can never be recovered.

Step 3   Determine the log files to be copied to the failed primary database.

On the new primary database, issue the following query to determine the minimum set of log files that must be copied to the failed primary database before recovering from a backup:

SQL> SELECT file_name FROM DBA_LOGSTDBY_LOG WHERE next_change# > ARCHIVE_SCN;

Retrieve the required standby logs, copy the backup set to the new standby and restore it to the new standby fast recovery area. Because these logs are coming from standby redo logs, they are not part of the standby's standard archives. The RMAN utility is able to use a partial file name to retrieve the files from the correct location.

The following is a sample use of the RMAN BACKUP command:

RMAN> BACKUP AS COPY DEVICE TYPE DISK FORMAT '/tmp/test/%U'
> ARCHIVELOG LIKE '<partial file names from above>%';

The following is a sample use of the RMAN RESTORE command:

RMAN> CATALOG START WITH '/tmp/test';
RMAN> RESTORE ARCHIVELOG FROM SEQUENCE 33 UNTIL SEQUENCE 35;
Step 4   Restore a backup and recover the database.

Restore a backup of all the original primary's data files and recover to RECOVERY_SCN + 1. Oracle recommends that you leverage the current control file.

  1. Start up the database in restricted mode to protect it from rogue transactions until the GUARD ALL command can be issued after the database has been opened.

  2. Use the backup to restore the data files of the failed primary database.

  3. Turn off flashback database, if it is enabled (necessary for the USING BACKUP CONTROLFILE clause).

  4. Perform point-in-time recovery to RECOVERY_SCN +1 in SQL*Plus.

Whether you are using a current control file or a backup control file, you must specify the USING BACKUP CONTROLFILE clause to allow you to point to the archive logs being restored. Otherwise, the recovery process could attempt to access online redo logs instead of the logs retrieved in Step 3. When prompted for the sequences retrieved in Step 3, ensure you specify the file names of the restored archive log copies, as follows:

SQL> RECOVER DATABASE UNTIL CHANGE RECOVERY_SCN + 1 USING BACKUP CONTROLFILE;
Step 5   Open the database with the RESETLOGS option.
SQL> ALTER DATABASE OPEN RESETLOGS;
Step 6    Enable Database Guard
SQL> ALTER DATABASE GUARD ALL;
Step 7    Create a database link to the new primary database and start SQL Apply.
SQL> CREATE PUBLIC DATABASE LINK myLink -
> CONNECT TO SYSTEM IDENTIFIED BY password -
> USING 'service name of new primary database';
SQL> ALTER DATABASE START LOGICAL STANDBY APPLY NEW PRIMARY myLink;

At this point, you can disable the restricted session (ALTER SYSTEM DISABLE RESTRICTED SESSION) or, if you need to restart the database to re-enable Flashback Database (turned off in Step 4, substep 3), let that restart clear the restricted session.

13.8 Changing the Character Set of a Primary Without Re-Creating Physical Standbys

Oracle Data Guard allows you to change both the database character set and the national character set of a primary database without requiring you to re-create any physical standby databases in the configuration. You can continue to use your physical standby database with minimal disruption while performing character set conversion of a primary database.

The process requires the running of several procedures prior to the actual conversion. The conversion itself requires that the primary database be shut down and opened in restricted mode, and that the CSALTER script be executed. Both the system data and user data are converted to the new character set.

For a detailed description of the steps involved in this process, see My Oracle Support note 1124165.1 at http://support.oracle.com. You will also need to read note 260192.1.


Note:

Similar to the obsolete ALTER DATABASE CHARACTER SET SQL statement, the CSALTER script should be used only by a system administrator. System administrators must run the Database Character Set Scanner first to confirm that the proper conditions exist for running CSALTER. Also, the database must be backed up before running CSALTER.


See Also:

Oracle Database Globalization Support Guide for more general information about using the CSALTER script to migrate character sets

Initialization Parameters

14 Initialization Parameters

This chapter describes the initialization parameters that affect databases in a Data Guard environment.

Table 14-1 lists the initialization parameters and indicates if the parameter applies to the primary database role, the standby database role, or both. The table also includes notes and recommendations specific to setting the parameters in a Data Guard environment. Oracle Database Reference provides complete initialization parameter information, including how to update initialization parameters by issuing the ALTER SYSTEM SET statement (for example, ALTER SYSTEM SET LOG_ARCHIVE_TRACE) or by editing the initialization parameter files. See the Oracle operating system-specific documentation for more information about setting initialization parameters.

Table 14-1 Initialization Parameters for Instances in a Data Guard Configuration

Parameter / Applicable To / Notes and Recommendations

COMPATIBLE = release_number

Primary

Logical Standby

Physical Standby

Snapshot Standby

Specify the same value on the primary and standby databases if you expect to do a switchover. If the values differ, redo transport services may be unable to transmit redo data from the primary database to the standby databases. See Section 3.2.3 for an example.

A logical standby database can have a higher COMPATIBLE setting than the primary database if a switchover is not expected.

For rolling upgrades using SQL Apply, set this parameter according to the guidelines described in Section 12.4, "Performing a Rolling Upgrade By Creating a New Logical Standby Database".

CONTROL_FILE_RECORD_KEEP_TIME = number_of_days

Primary

Logical Standby

Physical Standby

Snapshot Standby

Optional. Use this parameter to avoid overwriting a reusable record in the control file (that contains needed information such as an archived redo log file) for the specified number of days (from 0 to 365).

CONTROL_FILES = 'control_file_name', 'control_file_name', '...'

Primary

Logical Standby

Physical Standby

Snapshot Standby

Required. Specify the path name and filename for one or more control files. The control files must already exist on the database. Oracle recommends using 2 control files. If another copy of the current control file is available, then an instance can be easily restarted after copying the good control file to the location of the bad control file. See Section 3.2.3 for an example.

DB_FILE_NAME_CONVERT = 'location_of_primary_database_datafile','location_of_standby_database_datafile'

Physical Standby

Snapshot Standby

Required if the standby database is on the same system as the primary database or if the directory where the datafiles are located on the standby system is different from the primary system. This parameter must specify paired strings. The first string is a sequence of characters to be looked for in a primary database filename. If that sequence of characters is matched, it is replaced by the second string to construct the standby database filename. You can specify multiple pairs of filenames. See also Example 3-1.

DB_UNIQUE_NAME = Unique name for the database

Primary

Logical Standby

Physical Standby

Snapshot Standby

Recommended, but required if you specify the LOG_ARCHIVE_CONFIG parameter. Specifies a unique name for this database. This name does not change even if the primary and standby databases reverse roles. The DB_UNIQUE_NAME parameter defaults to the value of the DB_NAME parameter.

FAL_CLIENT = Oracle_Net_service_name

Physical Standby

Snapshot Standby

This parameter is no longer required. If it is not set, the fetch archive log (FAL) server will obtain the client's network address from the LOG_ARCHIVE_DEST_n parameter that corresponds to the client's DB_UNIQUE_NAME.

FAL_SERVER = Oracle_Net_service_name

Physical Standby

Snapshot Standby

Specifies one or more Oracle Net service names for the databases from which this standby database can fetch (request) missing archived redo log files.

INSTANCE_NAME

Primary

Logical Standby

Physical Standby

Snapshot Standby

Optional. If this parameter is defined and the primary and standby databases reside on the same host, specify a different name for the standby database than you specify for the primary database. See Section 3.2.3 for an example.

LOG_ARCHIVE_CONFIG ='DG_CONFIG ( db_unique_name, db_unique_name, ... )'

Primary

Logical Standby

Physical Standby

Snapshot Standby

Highly recommended. The DG_CONFIG attribute of this parameter must be explicitly set on each database in a Data Guard configuration to enable full Data Guard functionality. Set DG_CONFIG to a text string that contains the DB_UNIQUE_NAME of each database in the configuration, with each name in this list separated by a comma.

LOG_ARCHIVE_DEST_n = {LOCATION=path_name | SERVICE=service_name, attribute, attribute, ...}

Primary

Logical Standby

Physical Standby

Snapshot Standby

Required. Define up to 31 destinations (where n = 1, 2, 3, ... 31), each of which must specify either the LOCATION or SERVICE attribute. Specify a corresponding LOG_ARCHIVE_DEST_STATE_n parameter for every LOG_ARCHIVE_DEST_n parameter.

LOG_ARCHIVE_DEST_STATE_n = {ENABLE|DEFER|ALTERNATE}

Primary

Logical Standby

Physical Standby

Snapshot Standby

Required. Specify a LOG_ARCHIVE_DEST_STATE_n parameter to enable or disable redo transport services to transmit redo data to the specified (or to an alternate) destination. Define a LOG_ARCHIVE_DEST_STATE_n parameter for every LOG_ARCHIVE_DEST_n parameter. See also Chapter 15.

LOG_ARCHIVE_FORMAT=log%d_%t_%s_%r.arc

Primary

Logical Standby

Physical Standby

Snapshot Standby

The LOG_ARCHIVE_FORMAT and LOG_ARCHIVE_DEST_n parameters are concatenated together to generate fully qualified archived redo log filenames on a database.

LOG_ARCHIVE_LOCAL_FIRST = {TRUE | FALSE}

Primary

Snapshot Standby

This parameter has been deprecated and is maintained for backward compatibility only. If it is set explicitly, Oracle recommends setting it to TRUE.

LOG_ARCHIVE_MAX_PROCESSES =integer

Primary

Logical Standby

Physical Standby

Snapshot Standby

Optional. Specify the number (from 1 to 30) of archiver (ARCn) processes you want Oracle software to invoke initially. The default value is 4.

LOG_ARCHIVE_MIN_SUCCEED_DEST

Primary

Logical Standby

Physical Standby

Snapshot Standby

Optional. This parameter specifies the number of local or remote MANDATORY destinations, or local OPTIONAL destinations, that a logfile group must be archived to before it can be re-used.

LOG_ARCHIVE_TRACE=integer

Primary

Logical Standby

Physical Standby

Snapshot Standby

Optional. Set this parameter to trace the transmission of redo data to the standby site. The valid integer values are described in Appendix F.

LOG_FILE_NAME_CONVERT = 'location_of_primary_database_redo_logs','location_of_standby_database_redo_logs'

Logical Standby

Physical Standby

Snapshot Standby

Required when the standby database is on the same system as the primary database or when the directory structure where the log files are located on the standby site is different from the primary site. This parameter converts the path names of the primary database online redo log file to path names on the standby database. See Section 3.2.3 for an example.

REMOTE_LOGIN_PASSWORDFILE = {EXCLUSIVE|SHARED}

Primary

Logical Standby

Physical Standby

Snapshot Standby

Optional if operating system authentication is used for administrative users and SSL is used for redo transport authentication. Otherwise, this parameter must be set to EXCLUSIVE or SHARED on every database in a Data Guard configuration.

SHARED_POOL_SIZE = bytes

Primary

Logical Standby

Physical Standby

Snapshot Standby

Optional. Use this parameter to specify the amount of system global area (SGA) available to stage the information read from the online redo log files. The more SGA that is available, the more information can be staged.

STANDBY_ARCHIVE_DEST = filespec

Logical Standby

Physical Standby

Snapshot Standby

This parameter has been deprecated and is maintained for backward compatibility only.

STANDBY_FILE_MANAGEMENT = {AUTO | MANUAL}

Primary

Physical Standby

Snapshot Standby

Set the STANDBY_FILE_MANAGEMENT parameter to AUTO so that when datafiles are added to or dropped from the primary database, corresponding changes are made in the standby database without manual intervention. If the directory structures on the primary and standby databases are different, you must also set the DB_FILE_NAME_CONVERT initialization parameter to convert the filenames of one or more sets of datafiles on the primary database to filenames on the (physical) standby database. See Example 3-1 for more information and examples.


Introduction to Oracle Data Guard

1 Introduction to Oracle Data Guard

Oracle Data Guard ensures high availability, data protection, and disaster recovery for enterprise data. Data Guard provides a comprehensive set of services that create, maintain, manage, and monitor one or more standby databases to enable production Oracle databases to survive disasters and data corruptions. Data Guard maintains these standby databases as copies of the production database. Then, if the production database becomes unavailable because of a planned or an unplanned outage, Data Guard can switch any standby database to the production role, minimizing the downtime associated with the outage. Data Guard can be used with traditional backup, restoration, and cluster techniques to provide a high level of data protection and data availability.

With Data Guard, administrators can optionally improve production database performance by offloading resource-intensive backup and reporting operations to standby systems.

This chapter includes the following topics that describe the highlights of Oracle Data Guard:

1.1 Data Guard Configurations

A Data Guard configuration consists of one production database and one or more standby databases. The databases in a Data Guard configuration are connected by Oracle Net and may be dispersed geographically. There are no restrictions on where the databases are located, provided they can communicate with each other. For example, you can have a standby database on the same system as the production database, along with two standby databases on other systems at remote locations.

You can manage primary and standby databases using the SQL command-line interfaces or the Data Guard broker interfaces, including a command-line interface (DGMGRL) and a graphical user interface that is integrated in Oracle Enterprise Manager.

1.1.1 Primary Database

A Data Guard configuration contains one production database, also referred to as the primary database, that functions in the primary role. This is the database that is accessed by most of your applications.

The primary database can be either a single-instance Oracle database or an Oracle Real Application Clusters (Oracle RAC) database.

1.1.2 Standby Databases

A standby database is a transactionally consistent copy of the primary database. Using a backup copy of the primary database, you can create up to thirty standby databases and incorporate them in a Data Guard configuration. Once created, Data Guard automatically maintains each standby database by transmitting redo data from the primary database and then applying the redo to the standby database.

Similar to a primary database, a standby database can be either a single-instance Oracle database or an Oracle RAC database.

The types of standby databases are as follows:

  • Physical standby database

    Provides a physically identical copy of the primary database, with on-disk database structures that are identical to the primary database on a block-for-block basis. The database schema, including indexes, is the same. A physical standby database is kept synchronized with the primary database, through Redo Apply, which recovers the redo data received from the primary database and applies the redo to the physical standby database.

    As of Oracle Database 11g release 1 (11.1), a physical standby database can receive and apply redo while it is open for read-only access. A physical standby database can therefore be used concurrently for data protection and reporting.

  • Logical standby database

    Contains the same logical information as the production database, although the physical organization and structure of the data can be different. The logical standby database is kept synchronized with the primary database through SQL Apply, which transforms the data in the redo received from the primary database into SQL statements and then executes the SQL statements on the standby database.

    A logical standby database can be used for other business purposes in addition to disaster recovery requirements. This allows users to access a logical standby database for queries and reporting purposes at any time. Also, using a logical standby database, you can upgrade Oracle Database software and patch sets with almost no downtime. Thus, a logical standby database can be used concurrently for data protection, reporting, and database upgrades.

  • Snapshot Standby Database

    A snapshot standby database is a fully updatable standby database.

    Like a physical or logical standby database, a snapshot standby database receives and archives redo data from a primary database. Unlike a physical or logical standby database, a snapshot standby database does not apply the redo data that it receives. The redo data received by a snapshot standby database is not applied until the snapshot standby is converted back into a physical standby database, after first discarding any local updates made to the snapshot standby database.

    A snapshot standby database is best used in scenarios that require a temporary, updatable snapshot of a physical standby database. Note that because redo data received by a snapshot standby database is not applied until it is converted back into a physical standby, the time needed to recover from a primary database failure is directly proportional to the amount of redo data that needs to be applied.

1.1.3 Configuration Example

Figure 1-1 shows a typical Data Guard configuration that contains a primary database that transmits redo data to a standby database. The standby database is remotely located from the primary database for disaster recovery and backup operations. You can configure the standby database at the same location as the primary database. However, for disaster recovery purposes, Oracle recommends you configure standby databases at remote locations.

Figure 1-1 Typical Data Guard Configuration

Description of Figure 1-1 follows
Description of "Figure 1-1 Typical Data Guard Configuration"

1.2 Data Guard Services

The following sections explain how Data Guard manages the transmission of redo data, the application of redo data, and changes to the database roles:

  • Redo Transport Services

    Control the automated transfer of redo data from the production database to one or more archival destinations.

  • Apply Services

    Apply redo data on the standby database to maintain transactional synchronization with the primary database. Redo data can be applied either from archived redo log files, or, if real-time apply is enabled, directly from the standby redo log files as they are being filled, without requiring the redo data to be archived first at the standby database.

  • Role Transitions

    Change the role of a database from a standby database to a primary database, or from a primary database to a standby database using either a switchover or a failover operation.

1.2.1 Redo Transport Services

Redo transport services control the automated transfer of redo data from the production database to one or more archival destinations.

Redo transport services perform the following tasks:

  • Transmit redo data from the primary system to the standby systems in the configuration

  • Manage the process of resolving any gaps in the archived redo log files due to a network failure

  • Automatically detect missing or corrupted archived redo log files on a standby system and automatically retrieve replacement archived redo log files from the primary database or another standby database

1.2.2 Apply Services

The redo data transmitted from the primary database is written to the standby redo log on the standby database. Apply services automatically apply the redo data on the standby database to maintain consistency with the primary database. It also allows read-only access to the data.

The main difference between physical and logical standby databases is the manner in which apply services apply the archived redo data:

  • For physical standby databases, Data Guard uses Redo Apply technology, which applies redo data on the standby database using standard recovery techniques of an Oracle database, as shown in Figure 1-2.

Figure 1-2 Automatic Updating of a Physical Standby Database

Description of Figure 1-2 follows
Description of "Figure 1-2 Automatic Updating of a Physical Standby Database "

  • For logical standby databases, Data Guard uses SQL Apply technology, which first transforms the received redo data into SQL statements and then executes the generated SQL statements on the logical standby database, as shown in Figure 1-3.

Figure 1-3 Automatic Updating of a Logical Standby Database

Description of Figure 1-3 follows
Description of "Figure 1-3 Automatic Updating of a Logical Standby Database"

1.2.3 Role Transitions

An Oracle database operates in one of two roles: primary or standby. Using Data Guard, you can change the role of a database using either a switchover or a failover operation.

A switchover is a role reversal between the primary database and one of its standby databases. A switchover ensures no data loss. This is typically done for planned maintenance of the primary system. During a switchover, the primary database transitions to a standby role, and the standby database transitions to the primary role.

A failover is a role transition performed when the primary database becomes unavailable. Failover is performed only in the event of a failure of the primary database, and it results in the transition of a standby database to the primary role. The database administrator can configure Data Guard to ensure no data loss.

The role transitions described in this documentation are invoked manually using SQL statements. You can also use the Oracle Data Guard broker to simplify role transitions and automate failovers using Oracle Enterprise Manager or the DGMGRL command-line interface, as described in Section 1.3.

1.3 Data Guard Broker

The Data Guard broker is a distributed management framework that automates the creation, maintenance, and monitoring of Data Guard configurations. You can use either the Oracle Enterprise Manager graphical user interface (GUI) or the Data Guard command-line interface (DGMGRL) to:

  • Create and enable Data Guard configurations, including setting up redo transport services and apply services

  • Manage an entire Data Guard configuration from any system in the configuration

  • Manage and monitor Data Guard configurations that contain Oracle RAC primary or standby databases

  • Simplify switchovers and failovers by allowing you to invoke them using either a single key click in Oracle Enterprise Manager or a single command in the DGMGRL command-line interface.

  • Enable fast-start failover to fail over automatically when the primary database becomes unavailable. When fast-start failover is enabled, the Data Guard broker determines if a failover is necessary and initiates the failover to the specified target standby database automatically, with no need for DBA intervention.

In addition, Oracle Enterprise Manager automates and simplifies:

  • Creating a physical or logical standby database from a backup copy of the primary database

  • Adding new or existing standby databases to an existing Data Guard configuration

  • Monitoring log apply rates, capturing diagnostic information, and detecting problems quickly with centralized monitoring, testing, and performance tools


See Also:

Oracle Data Guard Broker for more information

1.3.1 Using Oracle Enterprise Manager Grid Control

Oracle Enterprise Manager Grid Control (also referred to as Enterprise Manager in this book) provides a web-based interface for viewing, monitoring, and administering primary and standby databases in a Data Guard configuration. Enterprise Manager's easy-to-use interfaces, combined with the broker's centralized management and monitoring of the Data Guard configuration, enhance the Data Guard solution for high availability, site protection, and data protection of an enterprise.

From the Central Console of Enterprise Manager Grid Control, you can perform all management operations either locally or remotely. You can view home pages for Oracle databases, including primary and standby databases and instances, create or add existing standby databases, start and stop instances, monitor instance performance, view events, schedule jobs, and perform backup and recovery operations.

1.3.2 Using the Data Guard Command-Line Interface

The Data Guard command-line interface (DGMGRL) enables you to control and monitor a Data Guard configuration from the DGMGRL prompt or within scripts. You can perform most of the activities required to manage and monitor the databases in the configuration using DGMGRL. See Oracle Data Guard Broker for complete DGMGRL reference information and examples.

1.4 Data Guard Protection Modes

In some situations, a business cannot afford to lose data regardless of the circumstances. In other situations, the availability of the database may be more important than any potential data loss in the unlikely event of a multiple failure. Finally, some applications require maximum database performance at all times, and can therefore tolerate a small amount of data loss if any component should fail. The following descriptions summarize the three distinct modes of data protection.

Maximum availability This protection mode provides the highest level of data protection that is possible without compromising the availability of a primary database. Transactions do not commit until all redo data needed to recover those transactions has been written to the online redo log and to the standby redo log on at least one synchronized standby database. If the primary database cannot write its redo stream to at least one synchronized standby database, it operates as if it were in maximum performance mode to preserve primary database availability until it is again able to write its redo stream to a synchronized standby database.

This protection mode ensures zero data loss except in the case of certain double faults, such as failure of a primary database after failure of the standby database.

Maximum performance This is the default protection mode. It provides the highest level of data protection that is possible without affecting the performance of a primary database. This is accomplished by allowing transactions to commit as soon as all redo data generated by those transactions has been written to the online redo log. Redo data is also written to one or more standby databases, but this is done asynchronously with respect to transaction commitment, so primary database performance is unaffected by delays in writing redo data to the standby database(s).

This protection mode offers slightly less data protection than maximum availability mode and has minimal impact on primary database performance.

Maximum protection This protection mode ensures that no data loss will occur if the primary database fails. To provide this level of protection, the redo data needed to recover a transaction must be written to both the online redo log and to the standby redo log on at least one synchronized standby database before the transaction commits. To ensure that data loss cannot occur, the primary database will shut down, rather than continue processing transactions, if it cannot write its redo stream to at least one synchronized standby database.

All three protection modes require that specific redo transport options be used to send redo data to at least one standby database.
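For example, assuming that a standby destination has already been configured to use synchronous (SYNC) redo transport, the protection mode could be raised from the default and then verified with statements such as the following (a minimal sketch):

SQL> ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE AVAILABILITY;
SQL> SELECT PROTECTION_MODE, PROTECTION_LEVEL FROM V$DATABASE;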


See Also:


1.5 Client Failover

A high availability architecture requires a fast failover capability for databases and database clients.

Client failover encompasses failure notification, stale connection cleanup, and transparent reconnection to the new primary database. Oracle Database provides the capability to integrate database failover with failover procedures that automatically redirect clients to a new primary database within seconds of a database failover.


See Also:

  • Oracle Data Guard Broker for information about Fast Application Notification (FAN) and Fast Connection Failover (FCF) configuration requirements specific to Data Guard

  • The Maximum Availability Architecture client failover best practices white paper at

    http://www.oracle.com/goto/maa


1.6 Data Guard and Complementary Technologies

Oracle Database provides several unique technologies that complement Data Guard to help keep business critical systems running with greater levels of availability and data protection than when using any one solution by itself. The following list summarizes some Oracle high-availability technologies:

  • Oracle Real Application Clusters (Oracle RAC)

    Oracle RAC enables multiple independent servers that are linked by an interconnect to share access to an Oracle database, providing high availability, scalability, and redundancy during failures. Oracle RAC and Data Guard together provide the benefits of system-level, site-level, and data-level protection, resulting in high levels of availability and disaster recovery without loss of data:

    • Oracle RAC addresses system failures by providing rapid and automatic recovery from failures, such as node failures and instance crashes. It also provides increased scalability for applications.

    • Data Guard addresses site failures and data protection through transactionally consistent primary and standby databases that do not share disks, enabling recovery from site disasters and data corruption.

    Many different architectures using Oracle RAC and Data Guard are possible depending on the use of local and remote sites and the use of nodes and a combination of logical and physical standby databases. See Appendix D, "Data Guard and Oracle Real Application Clusters" and Oracle Database High Availability Overview for Oracle RAC and Data Guard integration.

  • Flashback Database

    The Flashback Database feature provides fast recovery from logical data corruption and user errors. By allowing you to flash back in time, previous versions of business information that might have been erroneously changed or deleted can be accessed once again. This feature:

    • Eliminates the need to restore a backup and roll forward changes up to the time of the error or corruption. Instead, Flashback Database can roll back an Oracle database to a previous point-in-time, without restoring datafiles.

    • Provides an alternative to delaying the application of redo to protect against user errors or logical corruptions. Therefore, standby databases can be more closely synchronized with the primary database, thus reducing failover and switchover times.

    • Avoids the need to completely re-create the original primary database after a failover. The failed primary database can be flashed back to a point in time before the failover and converted to be a standby database for the new primary database.

    See Oracle Database Backup and Recovery User's Guide for information about Flashback Database, and Section 7.2.2 for information describing the application of redo data.

  • Recovery Manager (RMAN)

    RMAN is an Oracle utility that simplifies backing up, restoring, and recovering database files. Like Data Guard, RMAN is a feature of the Oracle database and does not require separate installation. Data Guard is well integrated with RMAN, allowing you to:

    • Use the Recovery Manager DUPLICATE command to create a standby database from backups of your primary database (a sketch follows this list).

    • Take backups on a physical standby database instead of the production database, relieving the load on the production database and enabling efficient use of system resources on the standby site. Moreover, backups can be taken while the physical standby database is applying redo.

    • Help manage archived redo log files by automatically deleting the archived redo log files used for input after performing a backup.

    See Appendix E, "Creating a Standby Database with Recovery Manager" and Oracle Database Backup and Recovery User's Guide.
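
For example, with Oracle Database 11g the DUPLICATE command can create a physical standby database directly over the network, without staging backups. A minimal sketch, assuming chicago and boston are placeholder net service names for the primary and standby instances:

RMAN> CONNECT TARGET sys@chicago
RMAN> CONNECT AUXILIARY sys@boston
RMAN> DUPLICATE TARGET DATABASE FOR STANDBY FROM ACTIVE DATABASE
        SPFILE SET DB_UNIQUE_NAME='boston'
        NOFILENAMECHECK;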

1.7 Summary of Data Guard Benefits

Data Guard offers these benefits:

  • Disaster recovery, data protection, and high availability

    Data Guard provides an efficient and comprehensive disaster recovery and high availability solution. Easy-to-manage switchover and failover capabilities allow role reversals between primary and standby databases, minimizing the downtime of the primary database for planned and unplanned outages.

  • Complete data protection

    Data Guard can ensure zero data loss, even in the face of unforeseen disasters. A standby database provides a safeguard against data corruption and user errors. Because the redo data received from a primary database is validated at a standby database, storage level physical corruptions on the primary database do not propagate to the standby database. Similarly, logical corruptions or user errors that cause the primary database to be permanently damaged can be resolved.

  • Efficient use of system resources

    The standby database tables that are updated with redo data received from the primary database can be used for other tasks such as backups, reporting, summations, and queries, thereby reducing the primary database workload necessary to perform these tasks, saving valuable CPU and I/O cycles.

  • Flexibility in data protection to balance availability against performance requirements

    Oracle Data Guard offers maximum protection, maximum availability, and maximum performance modes to help enterprises balance data availability against system performance requirements.

  • Automatic gap detection and resolution

    If connectivity is lost between the primary and one or more standby databases (for example, due to network problems), redo data being generated on the primary database cannot be sent to those standby databases. Once a connection is reestablished, the missing archived redo log files (referred to as a gap) are automatically detected by Data Guard, which then automatically transmits the missing archived redo log files to the standby databases. The standby databases are synchronized with the primary database, without manual intervention by the DBA.

  • Centralized and simple management

    The Data Guard broker provides a graphical user interface and a command-line interface to automate management and operational tasks across multiple databases in a Data Guard configuration. The broker also monitors all of the systems within a single Data Guard configuration.

  • Integration with Oracle Database

    Data Guard is a feature of Oracle Database Enterprise Edition and does not require separate installation.

  • Automatic role transitions

    When fast-start failover is enabled, the Data Guard broker automatically fails over to a synchronized standby site in the event of a disaster at the primary site, requiring no intervention by the DBA. In addition, applications are automatically notified of the role transition.

Managing a Logical Standby Database

10 Managing a Logical Standby Database

This chapter contains the following topics:

10.1 Overview of the SQL Apply Architecture

SQL Apply uses a collection of background processes to apply changes from the primary database to the logical standby database.

Figure 10-1 shows the flow of information and the role that each process performs.

Figure 10-1 SQL Apply Processing


The different processes involved and their functions during log mining and apply processing are as follows:

During log mining:

  • The READER process reads redo records from the archived redo log files or standby redo log files.

  • The PREPARER process converts block changes contained in redo records into logical change records (LCRs). Multiple PREPARER processes can be active for a given redo log file. The LCRs are staged in the system global area (SGA), known as the LCR cache.

  • The BUILDER process groups LCRs into transactions and performs other tasks, such as memory management in the LCR cache, checkpointing related to SQL Apply restart, and filtering out uninteresting changes.

During apply processing:

  • The ANALYZER process identifies dependencies between different transactions.

  • The COORDINATOR process (LSP) assigns transactions to different appliers and coordinates among them to ensure that dependencies between transactions are honored.

  • The APPLIER processes apply transactions to the logical standby database under the supervision of the coordinator process.

You can query the V$LOGSTDBY_PROCESS view to examine the activity of the SQL Apply processes. Another view that provides information about current activity is the V$LOGSTDBY_STATS view that displays statistics, current state, and status information for the logical standby database during SQL Apply activities. These and other relevant views are discussed in more detail in Section 10.3, "Views Related to Managing and Monitoring a Logical Standby Database".


Note:

All SQL Apply processes (including the coordinator process lsp0) are true background processes. They are not regulated by resource manager. Therefore, creating resource groups at the logical standby database does not affect the SQL Apply processes.

10.1.1 Various Considerations for SQL Apply

This section contains the following topics:

10.1.1.1 Transaction Size Considerations

SQL Apply categorizes transactions into two classes, small and large:

  • Small transactions—SQL Apply starts applying LCRs belonging to a small transaction once it has encountered the commit record for the transaction in the redo log files.

  • Large transactions—SQL Apply breaks large transactions into smaller pieces called transaction chunks, and starts applying the chunks before the commit record for the large transaction is seen in the redo log files. This is done to reduce memory pressure on the LCR cache and to reduce the overall failover time.

    For example, without breaking into smaller pieces, a SQL*Loader load of ten million rows, each 100 bytes in size, would use more than 1 GB of memory in the LCR cache. If the memory allocated to the LCR cache was less than 1 GB, it would result in pageouts from the LCR cache.

    Apart from the memory considerations, if SQL Apply did not start applying the changes related to the ten million row SQL*Loader load until it encountered the COMMIT record for the transaction, it could stall a role transition. A switchover or a failover that is initiated after the transaction commit cannot finish until SQL Apply has applied the transaction on the logical standby database.

    Despite the use of transaction chunks, SQL Apply performance may degrade when processing transactions that modify more than eight million rows. For transactions larger than 8 million rows, SQL Apply uses the temporary segment to stage some of the internal metadata required to process the transaction. You will need to allocate enough space in your temporary segment for SQL Apply to successfully process transactions larger than 8 million rows.

All transactions start out categorized as small transactions. Depending on the amount of memory available for the LCR cache and the amount of memory consumed by LCRs belonging to a transaction, SQL Apply determines when to recategorize a transaction as a large transaction.
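
If large transactions are common in your workload, it may help to size the LCR cache explicitly. A minimal sketch follows, assuming a 1 GB cache is appropriate (the value is in megabytes); stopping and restarting SQL Apply around the change is the conservative approach:

SQL> ALTER DATABASE STOP LOGICAL STANDBY APPLY;
SQL> EXECUTE DBMS_LOGSTDBY.APPLY_SET('MAX_SGA', '1024');
SQL> ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;

You can confirm the setting afterward in the V$LOGSTDBY_STATS view (see the "maximum SGA for LCR cache (MB)" row shown in Section 10.3.7).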

10.1.1.2 Pageout Considerations

Pageouts occur in the context of SQL Apply when memory in the LCR cache is exhausted and space needs to be released for SQL Apply to make progress.

For example, assume the memory allocated to the LCR cache is 100 MB and SQL Apply encounters an INSERT transaction to a table with a LONG column of size 300 MB. In this case, the log-mining component will page out the first part of the LONG data to read the later part of the column modification. In a well-tuned logical standby database, pageout activities will occur occasionally and should not affect the overall throughput of the system.
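
One way to gauge pageout activity is to sample the pageout statistics in the V$LOGSTDBY_STATS view (described in Section 10.3.7) at regular intervals; values that climb steadily during normal operation suggest the LCR cache is undersized:

SQL> SELECT NAME, VALUE FROM V$LOGSTDBY_STATS -
> WHERE NAME IN ('bytes paged out', 'pageout time (seconds)');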


See Also:

See Section 10.5, "Customizing a Logical Standby Database" for more information about how to identify problematic pageouts and perform corrective actions

10.1.1.3 Restart Considerations

Modifications made to the logical standby database do not become persistent until the commit record of the transaction is mined from the redo log files and applied to the logical standby database. Thus, every time SQL Apply is stopped, whether as a result of a user directive or because of a system failure, SQL Apply must go back and mine the earliest uncommitted transaction again.

In cases where a transaction does little work but remains open for a long period of time, restarting SQL Apply from the start could be prohibitively costly because SQL Apply would have to mine a large number of archived redo log files again, just to read the redo data for a few uncommitted transactions. To mitigate this, SQL Apply periodically checkpoints old uncommitted data. The SCN at which the checkpoint is taken is reflected in the RESTART_SCN column of the V$LOGSTDBY_PROGRESS view. Upon restarting, SQL Apply starts mining redo records that are generated at an SCN greater than the value shown by the RESTART_SCN column. Archived redo log files that are not needed for restart are automatically deleted by SQL Apply.

Certain workloads, such as large DDL transactions, parallel DML statements (PDML), and direct-path loads, will prevent the RESTART_SCN from advancing for the duration of the workload.

10.1.1.4 DML Apply Considerations

SQL Apply has the following characteristics when applying DML transactions that affect the throughput and latency on the logical standby database:

  • Batch updates or deletes done on the primary database, where a single statement results in multiple rows being modified, are applied as individual row modifications on the logical standby database. Thus, it is imperative for each maintained table to have a unique index or a primary key. See Section 4.1.2, "Ensure Table Rows in the Primary Database Can Be Uniquely Identified" for more information. A query that identifies tables lacking such an identifier is sketched after this list.

  • Direct path inserts performed on the primary database are applied using a conventional INSERT statement on the logical standby database.

  • Parallel DML (PDML) transactions are not executed in parallel on the logical standby database.
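
The following query, run on the primary database, is one way to list tables that cannot be uniquely identified (a sketch based on the views referenced in Section 4.1.2):

SQL> SELECT OWNER, TABLE_NAME FROM DBA_LOGSTDBY_NOT_UNIQUE -
> WHERE (OWNER, TABLE_NAME) NOT IN -
> (SELECT DISTINCT OWNER, TABLE_NAME FROM DBA_LOGSTDBY_UNSUPPORTED);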

10.1.1.5 DDL Apply Considerations

SQL Apply has the following characteristics when applying DDL transactions that affect the throughput and latency on the logical standby database:

  • DDL transactions are applied serially on the logical standby database. Thus, DDL transactions applied concurrently on the primary database are applied one at a time on the logical standby database.

  • CREATE TABLE AS SELECT (CTAS) statements are executed such that the DML activities (that are part of the CTAS statement) are suppressed on the logical standby database. The rows inserted in the newly created table as part of the CTAS statement are mined from the redo log files and applied to the logical standby database using INSERT statements.

  • SQL Apply reissues the DDL that was performed at the primary database, and ensures that DMLs that occur within the same transaction on the same object that is the target of the DDL operation are not replicated at the logical standby database. Thus, the following two cases will cause the primary and standby sites to diverge from each other:

    • The DDL contains a non-literal value that is derived from the state at the primary database. An example of such a DDL is:

      ALTER TABLE hr.employees ADD (start_date date default sysdate);
      

      Because SQL Apply will reissue the same DDL at the logical standby, the function sysdate() will be reevaluated at the logical standby. Thus, the column start_date will be created with a different default value than at the primary database.

    • The DDL fires DML triggers defined on the target table. Since the triggered DMLs occur in the same transaction as the DDL, and operate on the table that is the target of the DDL, these triggered DMLs will not be replicated at the logical standby.

      For example, assume you create a table as follows:

 create table HR.TEST_EMPLOYEES (
       emp_id       number primary key,
       first_name   varchar2(64),
       last_name    varchar2(64),
       modify_date  timestamp);
      

      Assume you then create a trigger on the table such that any time the table is updated the modify_date is updated to reflect the time of change:

 CREATE OR REPLACE TRIGGER TRG_TEST_MOD_DT
   BEFORE UPDATE ON HR.TEST_EMPLOYEES
   REFERENCING NEW AS NEW_ROW
   FOR EACH ROW
 BEGIN
   :NEW_ROW.MODIFY_DATE := SYSTIMESTAMP;
 END;
 /
      

      This table will be maintained correctly under the usual DML/DDL workload. However, if you add a column with a default value to the table, the ADD COLUMN DDL fires this update trigger and changes the MODIFY_DATE column of all rows in the table to a new timestamp. These changes to the MODIFY_DATE column are not replicated at the logical standby database. Subsequent DMLs to the table will stop SQL Apply because the MODIFY_DATE column data recorded in the redo stream will not match the data that exists at the logical standby database.

10.1.1.6 Password Verification Functions

Password verification functions that check for the complexity of passwords must be created in the SYS schema. Because SQL Apply does not replicate objects created in the SYS schema, such verification functions will not be replicated to the logical standby database. You must create the password verification function manually at the logical standby database, and associate it with the appropriate profiles.
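
A minimal sketch follows, using a hypothetical function name and complexity rule. Connect as SYS at the logical standby so that the database guard does not block the DDL, then associate the function with the appropriate profile:

CREATE OR REPLACE FUNCTION SYS.STANDBY_VERIFY_FUNCTION (
  username      IN VARCHAR2,
  password      IN VARCHAR2,
  old_password  IN VARCHAR2)
RETURN BOOLEAN IS
BEGIN
  -- hypothetical rule: require passwords of at least 8 characters
  RETURN LENGTH(password) >= 8;
END;
/

ALTER PROFILE DEFAULT LIMIT PASSWORD_VERIFY_FUNCTION STANDBY_VERIFY_FUNCTION;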

10.2 Controlling User Access to Tables in a Logical Standby Database

The SQL ALTER DATABASE GUARD statement controls user access to tables in a logical standby database. The database guard is set to ALL by default on a logical standby database.

The ALTER DATABASE GUARD statement allows the following keywords:

  • ALL

    Specify ALL to prevent all users, other than SYS, from making changes to any data in the logical standby database.

  • STANDBY

    Specify STANDBY to prevent all users, other than SYS, from making DML and DDL changes to any table or sequence being maintained through SQL Apply.

  • NONE

    Specify NONE if you want typical security for all data in the database.

For example, use the following statement to enable users to modify tables not maintained by SQL Apply:

SQL> ALTER DATABASE GUARD STANDBY;

Privileged users can temporarily turn the database guard off and on for the current session using the ALTER SESSION DISABLE GUARD and ALTER SESSION ENABLE GUARD statements, respectively. These statements replace the DBMS_LOGSTDBY.GUARD_BYPASS PL/SQL procedure that performed the same function in Oracle9i. The ALTER SESSION [ENABLE|DISABLE] GUARD statement is useful when you want to temporarily disable the database guard to make changes to the database, as described in Section 10.5.4.
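
You can check the current database guard setting at any time by querying the GUARD_STATUS column of V$DATABASE:

SQL> SELECT GUARD_STATUS FROM V$DATABASE;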


Note:

Be careful not to let the primary and logical standby databases diverge while the database guard is disabled.

10.3 Views Related to Managing and Monitoring a Logical Standby Database

Several performance views monitor the behavior of SQL Apply maintaining a logical standby database. The following sections describe the key views that can be used to monitor a logical standby database:


See Also:

Oracle Database Reference for complete reference information about views

10.3.1 DBA_LOGSTDBY_EVENTS View

The DBA_LOGSTDBY_EVENTS view records interesting events that occurred during the operation of SQL Apply. By default, the view records the most recent 10,000 events. However, you can change the number of recorded events by calling the DBMS_LOGSTDBY.APPLY_SET() PL/SQL procedure. If SQL Apply stops unexpectedly, the reason for the problem is also recorded in this view.


Note:

Errors that cause SQL Apply to stop are recorded in the events table. These events are put into the ALERT.LOG file as well, with the LOGSTDBY keyword included in the text. When querying the view, select the columns in order by EVENT_TIME, COMMIT_SCN, and CURRENT_SCN to ensure the desired ordering of events.

The view can be customized to contain other information, such as which DDL transactions were applied and which were skipped. For example:

SQL> ALTER SESSION SET NLS_DATE_FORMAT  = 'DD-MON-YY HH24:MI:SS';
Session altered.
SQL> COLUMN STATUS FORMAT A60
SQL> SELECT EVENT_TIME, STATUS, EVENT FROM DBA_LOGSTDBY_EVENTS -
> ORDER BY EVENT_TIME, COMMIT_SCN, CURRENT_SCN;

EVENT_TIME         STATUS
------------------------------------------------------------------------------
EVENT
-------------------------------------------------------------------------------
23-JUL-02 18:20:12 ORA-16111: log mining and apply setting up
23-JUL-02 18:25:12 ORA-16128: User initiated shut down successfully completed
23-JUL-02 18:27:12 ORA-16112: log mining and apply stopping
23-JUL-02 18:55:12 ORA-16128: User initiated shut down successfully completed
23-JUL-02 18:57:09 ORA-16111: log mining and apply setting up
23-JUL-02 20:21:47 ORA-16204: DDL successfully applied
create table hr.test_emp (empno number, ename varchar2(64))
23-JUL-02 20:22:55 ORA-16205: DDL skipped due to skip setting 
create database link link_to_boston connect to system identified by change_on_inst
7 rows selected.

This query shows that SQL Apply was started and stopped a few times. It also shows what DDL was applied and skipped.

10.3.2 DBA_LOGSTDBY_LOG View

The DBA_LOGSTDBY_LOG view provides dynamic information about archived logs being processed by SQL Apply.

For example:

SQL> COLUMN DICT_BEGIN FORMAT A10;
SQL> SET NUMF 99999999;
SQL> SELECT FILE_NAME, SEQUENCE# AS SEQ#, FIRST_CHANGE# AS F_SCN#, -
>  NEXT_CHANGE# AS N_SCN#, TIMESTAMP, -
>  DICT_BEGIN AS BEG, DICT_END AS END, -
>  THREAD# AS THR#, APPLIED FROM DBA_LOGSTDBY_LOG -
>  ORDER BY SEQUENCE#;

FILE_NAME                 SEQ# F_SCN    N_SCN TIMESTAM BEG END THR# APPLIED
------------------------- ---- ------- ------- -------- --- --- --- ---------
/oracle/dbs/hq_nyc_2.log  2     101579  101588 11:02:58 NO  NO  1     YES
/oracle/dbs/hq_nyc_3.log  3     101588  142065 11:02:02 NO  NO  1     YES
/oracle/dbs/hq_nyc_4.log  4     142065  142307 11:02:10 NO  NO  1     YES
/oracle/dbs/hq_nyc_5.log  5     142307  142739 11:02:48 YES YES 1     YES
/oracle/dbs/hq_nyc_6.log  6     142739  143973 12:02:10 NO  NO  1     YES
/oracle/dbs/hq_nyc_7.log  7     143973  144042 01:02:11 NO  NO  1     YES
/oracle/dbs/hq_nyc_8.log  8     144042  144051 01:02:01 NO  NO  1     YES
/oracle/dbs/hq_nyc_9.log  9     144051  144054 01:02:16 NO  NO  1     YES
/oracle/dbs/hq_nyc_10.log 10    144054  144057 01:02:21 NO  NO  1     YES
/oracle/dbs/hq_nyc_11.log 11    144057  144060 01:02:26 NO  NO  1  CURRENT
/oracle/dbs/hq_nyc_12.log 12    144060  144089 01:02:30 NO  NO  1  CURRENT
/oracle/dbs/hq_nyc_13.log 13    144089  144147 01:02:41 NO  NO  1       NO

The YES entries in the BEG and END columns indicate that a LogMiner dictionary build starts at log file sequence number 5. The most recent archived redo log file is sequence number 13, and it was received at the logical standby database at 01:02:41. The APPLIED column indicates that SQL Apply has applied all redo before SCN 144057. Since transactions can span multiple archived log files, multiple archived log files may show the value CURRENT in the APPLIED column.

10.3.3 V$DATAGUARD_STATS View

This view provides information related to the failover characteristics of the logical standby database, including:

  • The time to failover (apply finish time)

  • How current is the committed data in the logical standby database (apply lag)

  • What the potential data loss will be in the event of a disaster (transport lag).

For example:

SQL> COL NAME FORMAT A20
SQL> COL VALUE FORMAT A12
SQL> COL UNIT FORMAT A30
SQL> SELECT NAME, VALUE, UNIT FROM V$DATAGUARD_STATS;
 
NAME                 VALUE        UNIT
-------------------- ------------ ------------------------------
apply finish time    +00 00:00:00 day(2) to second(1) interval
apply lag            +00 00:00:00 day(2) to second(0) interval
transport lag        +00 00:00:00 day(2) to second(0) interval

This output is from a logical standby database that has received and applied all redo generated from the primary database.

10.3.4 V$LOGSTDBY_PROCESS View

This view provides information about the current state of the various processes involved with SQL Apply, including:

  • Identifying information (sid | serial# | spid)

  • SQL Apply process: COORDINATOR, READER, BUILDER, PREPARER, ANALYZER, or APPLIER (type)

  • Status of the process's current activity (status_code | status)

  • Highest redo record processed by this process (high_scn)

For example:

SQL> COLUMN SERIAL# FORMAT 9999
SQL> COLUMN SID FORMAT 9999
SQL> SELECT SID, SERIAL#, SPID, TYPE, HIGH_SCN FROM V$LOGSTDBY_PROCESS;
 
  SID   SERIAL#   SPID         TYPE            HIGH_SCN
  ----- -------   ----------- ---------------- ----------
   48        6    11074        COORDINATOR     7178242899
   56       56    10858        READER          7178243497
   46        1    10860        BUILDER         7178242901
   45        1    10862        PREPARER        7178243295
   37        1    10864        ANALYZER        7178242900
   36        1    10866        APPLIER         7178239467
   35        3    10868        APPLIER         7178239463
   34        7    10870        APPLIER         7178239461
   33        1    10872        APPLIER         7178239472
 
9 rows selected.

The HIGH_SCN column shows that the reader process is ahead of all other processes, and the PREPARER and BUILDER processes are ahead of the rest.

SQL> COLUMN STATUS FORMAT A40
SQL> SELECT TYPE, STATUS_CODE, STATUS FROM V$LOGSTDBY_PROCESS;
 
TYPE             STATUS_CODE STATUS
---------------- ----------- -----------------------------------------
COORDINATOR            16117 ORA-16117: processing
READER                 16127 ORA-16127: stalled waiting for additional
                             transactions to be applied
BUILDER                16116 ORA-16116: no work available
PREPARER               16117 ORA-16117: processing
ANALYZER               16120 ORA-16120: dependencies being computed for
                             transaction at SCN 0x0001.abdb440a
APPLIER                16124 ORA-16124: transaction 1 13 1427 is waiting
                             on another transaction
APPLIER                16121 ORA-16121: applying transaction with commit
                             SCN 0x0001.abdb4390
APPLIER                16123 ORA-16123: transaction 1 23  1231 is waiting
                             for commit approval
APPLIER                16116 ORA-16116: no work available

The output shows a snapshot of SQL Apply running. On the mining side, the READER process is waiting for additional memory to become available before it can read more, the PREPARER process is processing redo records, and the BUILDER process has no work available. On the apply side, the COORDINATOR is assigning more transactions to APPLIER processes, the ANALYZER is computing dependencies at SCN 7178241034, one APPLIER has no work available, while two have outstanding dependencies that are not yet satisfied.

10.3.5 V$LOGSTDBY_PROGRESS View

This view provides detailed information regarding progress made by SQL Apply, including:

  • SCN and time at which all transactions that have been committed on the primary database have been applied to the logical standby database (applied_scn, applied_time)

  • SCN and time at which SQL Apply would begin reading redo records (restart_scn, restart_time) on restart

  • SCN and time of the latest redo record received on the logical standby database (latest_scn, latest_time)

  • SCN and time of the latest record processed by the BUILDER process (mining_scn, mining_time)

For example:

SQL> SELECT APPLIED_SCN, LATEST_SCN, MINING_SCN, RESTART_SCN -
> FROM V$LOGSTDBY_PROGRESS;
 
APPLIED_SCN  LATEST_SCN MINING_SCN RESTART_SCN
----------- ----------- ---------- -----------
 7178240496  7178240507 7178240507  7178219805

According to the output:

  • SQL Apply has applied all transactions committed on or before SCN of 7178240496

  • The latest redo record received at the logical standby database was generated at SCN 7178240507

  • The mining component has processed all redo records generated on or before SCN 7178240507

  • If SQL Apply stops and restarts for any reason, it will start mining redo records generated on or after SCN 7178219805

SQL> ALTER SESSION SET NLS_DATE_FORMAT='yy-mm-dd hh24:mi:ss';
Session altered
 
SQL> SELECT APPLIED_TIME, LATEST_TIME, MINING_TIME, RESTART_TIME - 
> FROM V$LOGSTDBY_PROGRESS;
 
APPLIED_TIME      LATEST_TIME       MINING_TIME       RESTART_TIME     
----------------- ----------------- ----------------- -----------------
05-05-12 10:38:21 05-05-12 10:41:53 05-05-12 10:41:21 05-05-12 10:09:30

According to the output:

  • SQL Apply has applied all transactions committed on or before the time 05-05-12 10:38:21 (APPLIED_TIME)

  • The last redo was generated at time 05-05-12 10:41:53 at the primary database (LATEST_TIME)

  • The mining engine has processed all redo records generated on or before 05-05-12 10:41:21 (MINING_TIME)

  • In the event of a restart, SQL Apply will start mining redo records generated after the time 05-05-12 10:09:30

10.3.6 V$LOGSTDBY_STATE View

This view provides a synopsis of the current state of SQL Apply, including:

  • The DBID of the primary database (primary_dbid).

  • The LogMiner session ID allocated to SQL Apply (session_id).

  • Whether or not SQL Apply is applying in real time (realtime_apply).

For example:

SQL> COLUMN REALTIME_APPLY FORMAT a15
SQL> COLUMN STATE FORMAT a16
SQL> SELECT * FROM V$LOGSTDBY_STATE;

PRIMARY_DBID SESSION_ID REALTIME_APPLY  STATE
------------ ---------- --------------- ----------------
  1562626987          1 Y               APPLYING

The output shows that SQL Apply is running in real-time apply mode and is currently applying redo data received from the primary database. The primary database's DBID is 1562626987, and the LogMiner session identifier associated with the SQL Apply session is 1.

10.3.7 V$LOGSTDBY_STATS View

The V$LOGSTDBY_STATS view displays statistics, current state, and status information related to SQL Apply. No rows are returned from this view when SQL Apply is not running. This view is only meaningful in the context of a logical standby database.

For example:

 SQL> ALTER SESSION SET NLS_DATE_FORMAT='dd-mm-yyyy hh24:mi:ss';
 Session altered

 SQL> SELECT SUBSTR(name, 1, 40) AS NAME, SUBSTR(value,1,32) AS VALUE FROM V$LOGSTDBY_STATS;
 
 NAME                                     VALUE
 ---------------------------------------- --------------------------------
 logminer session id                      1
 number of preparers                      1
 number of appliers                       5
 server processes in use                  9
 maximum SGA for LCR cache (MB)           30
 maximum events recorded                  10000
 preserve commit order                    TRUE
 transaction consistency                  FULL
 record skipped errors                    Y
 record skipped DDLs                      Y
 record applied DDLs                      N
 record unsupported operations            N
 realtime apply                           Y
 apply delay (minutes)                    0
 coordinator state                        APPLYING
 coordinator startup time                 19-06-2007 09:55:47
 coordinator uptime (seconds)             3593
 txns received from logminer              56
 txns assigned to apply                   23
 txns applied                             22
 txns discarded during restart            33
 large txns waiting to be assigned        2
 rolled back txns mined                   4
 DDL txns mined                           40
 CTAS txns mined                          0
 bytes of redo mined                      60164040
 bytes paged out                          0
 pageout time (seconds)                   0
 bytes checkpointed                       4845
 checkpoint time (seconds)                0
 system idle time (seconds)               2921
 standby redo logs mined                  0
 archived logs mined                      5
 gap fetched logs mined                   0
 standby redo log reuse detected          1
 logfile open failures                    0
 current logfile wait (seconds)           0
 total logfile wait (seconds)             2910
 thread enable mined                      0
 thread disable mined                     0
 .
 40 rows selected. 

10.4 Monitoring a Logical Standby Database

This section contains the following topics:

10.4.1 Monitoring SQL Apply Progress

SQL Apply can be in any of six states of progress: initializing SQL Apply, waiting for dictionary logs, loading the LogMiner dictionary, applying (redo data), waiting for an archive gap to be resolved, and idle. Figure 10-2 shows the flow of these states.

Figure 10-2 Progress States During SQL Apply Processing


The following subsections describe each state in more detail.

Initializing State

When you start SQL Apply by issuing an ALTER DATABASE START LOGICAL STANDBY APPLY statement, it goes into the initializing state.

To determine the current state of SQL Apply, query the V$LOGSTDBY_STATE view. For example:

SQL> SELECT SESSION_ID, STATE FROM V$LOGSTDBY_STATE;

SESSION_ID    STATE
----------    -------------
1             INITIALIZING

The SESSION_ID column identifies the persistent LogMiner session created by SQL Apply to mine the archived redo log files generated by the primary database.

Waiting for Dictionary Logs

The first time SQL Apply is started, it needs to load the LogMiner dictionary captured in the redo log files. SQL Apply remains in the WAITING FOR DICTIONARY LOGS state until it has received all redo data required to load the LogMiner dictionary.

Loading Dictionary State

The loading dictionary state can persist for a while because loading the LogMiner dictionary on a large database can take a long time. Querying the V$LOGSTDBY_STATE view returns the following output when the dictionary is being loaded:

SQL> SELECT SESSION_ID, STATE FROM V$LOGSTDBY_STATE;

SESSION_ID    STATE
----------    ------------------
1             LOADING DICTIONARY

Only the COORDINATOR process and the mining processes are spawned until the LogMiner dictionary is fully loaded. Therefore, if you query the V$LOGSTDBY_PROCESS view at this point, you will not see any of the APPLIER processes. For example:

SQL> SELECT SID, SERIAL#, SPID, TYPE FROM V$LOGSTDBY_PROCESS;

SID     SERIAL#     SPID       TYPE
------  ---------   ---------  ---------------------
47      3           11438      COORDINATOR
50      7           11334      READER
45      1           11336      BUILDER
44      2           11338      PREPARER
43      2           11340      PREPARER

You can get more detailed information about the progress in loading the dictionary by querying the V$LOGMNR_DICTIONARY_LOAD view. The dictionary load happens in three phases:

  1. The relevant archived redo log files or standby redo log files are mined to gather the redo changes relevant to load the LogMiner dictionary.

  2. The changes are processed and loaded in staging tables inside the database.

  3. The LogMiner dictionary tables are loaded by issuing a series of DDL statements.

For example:

SQL> SELECT PERCENT_DONE, COMMAND -
> FROM V$LOGMNR_DICTIONARY_LOAD -
> WHERE SESSION_ID = (SELECT SESSION_ID FROM V$LOGSTDBY_STATE);

PERCENT_DONE     COMMAND
-------------    -------------------------------
40               alter table SYSTEM.LOGMNR_CCOL$ exchange partition 
                 P101 with table SYS.LOGMNRLT_101_CCOL$ excluding
                 indexes without validation

If the PERCENT_DONE or the COMMAND column does not change for a long time, query the V$SESSION_LONGOPS view to monitor the progress of the DDL transaction in question.

Applying State

In this state, SQL Apply has successfully loaded the initial snapshot of the LogMiner dictionary, and is currently applying redo data to the logical standby database.

For detailed information about the SQL Apply progress, query the V$LOGSTDBY_PROGRESS view:

SQL> ALTER SESSION SET NLS_DATE_FORMAT = 'DD-MON-YYYY HH24:MI:SS';
SQL> SELECT APPLIED_TIME, APPLIED_SCN, MINING_TIME, MINING_SCN -
> FROM V$LOGSTDBY_PROGRESS;

APPLIED_TIME            APPLIED_SCN   MINING_TIME           MINING_SCN
--------------------    -----------   --------------------  -----------
10-JAN-2005 12:00:05    346791023     10-JAN-2005 12:10:05  3468810134

All committed transactions seen at or before APPLIED_SCN (or APPLIED_TIME) on the primary database have been applied to the logical standby database. The mining engine has processed all redo records generated at or before MINING_SCN (and MINING_TIME) on the primary database. At steady state, the value of MINING_SCN (and MINING_TIME) will always be ahead of APPLIED_SCN (and APPLIED_TIME).

Waiting On Gap State

This state occurs when SQL Apply has mined and applied all available redo records, and is waiting for a new log file (or a missing log file) to be archived by the RFS process.

SQL> SELECT STATUS FROM V$LOGSTDBY_PROCESS WHERE TYPE = 'READER';

STATUS
------------------------------------------------------------------------
ORA-16240: waiting for log file (thread# 1, sequence# 99)

Idle State

SQL Apply enters this state once it has applied all redo generated by the primary database.

10.4.2 Automatic Deletion of Log Files

Foreign archived logs contain redo that was shipped from the primary database. There are two ways to store foreign archived logs:

  • In the fast recovery area

  • In a directory outside of the fast recovery area

Foreign archived logs stored in the fast recovery area are always managed by SQL Apply. After all redo records contained in a log have been applied at the logical standby database, the log is retained for the time period specified by the DB_FLASHBACK_RETENTION_TARGET parameter (or for 1440 minutes if DB_FLASHBACK_RETENTION_TARGET is not specified). You cannot override automatic management of foreign archived logs that are stored in the fast recovery area.

Foreign archived logs that are not stored in the fast recovery area are managed by SQL Apply by default. Under automatic management, foreign archived logs that are not stored in the fast recovery area are retained for the time period specified by the LOG_AUTO_DEL_RETENTION_TARGET parameter once all redo records contained in the log have been applied at the logical standby database. You can override automatic management of foreign archived logs not stored in the fast recovery area by executing the following PL/SQL procedure:

SQL> EXECUTE DBMS_LOGSTDBY.APPLY_SET('LOG_AUTO_DELETE', 'FALSE');

Note:

Use the DBMS_LOGSTDBY.APPLY_SET procedure to set this parameter. If you do not specify LOG_AUTO_DEL_RETENTION_TARGET explicitly, it defaults to DB_FLASHBACK_RETENTION_TARGET set in the logical standby database, or to 1440 minutes if DB_FLASHBACK_RETENTION_TARGET is not set.
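
For example, to retain applied foreign archived logs outside the fast recovery area for two days before they become eligible for automatic deletion, you might set the following (a sketch; the value is in minutes):

SQL> EXECUTE DBMS_LOGSTDBY.APPLY_SET('LOG_AUTO_DEL_RETENTION_TARGET', '2880');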

If you are overriding the default automatic log deletion capability, periodically perform the following steps to identify and delete archived redo log files that are no longer needed by SQL Apply:

  1. To purge the logical standby session of metadata that is no longer needed, enter the following PL/SQL statement:

    SQL> EXECUTE DBMS_LOGSTDBY.PURGE_SESSION;
    

    This statement also updates the DBA_LOGMNR_PURGED_LOG view that displays the archived redo log files that are no longer needed.

  2. Query the DBA_LOGMNR_PURGED_LOG view to list the archived redo log files that can be removed:

    SQL> SELECT * FROM DBA_LOGMNR_PURGED_LOG;
    
       FILE_NAME
       ------------------------------------
       /boston/arc_dest/arc_1_40_509538672.log
       /boston/arc_dest/arc_1_41_509538672.log
       /boston/arc_dest/arc_1_42_509538672.log
       /boston/arc_dest/arc_1_43_509538672.log
       /boston/arc_dest/arc_1_44_509538672.log
       /boston/arc_dest/arc_1_45_509538672.log
       /boston/arc_dest/arc_1_46_509538672.log
       /boston/arc_dest/arc_1_47_509538672.log
    
  3. Use an operating system-specific command to delete the archived redo log files listed by the query.

10.5 Customizing a Logical Standby Database

This section contains the following topics:

10.5.1 Customizing Logging of Events in the DBA_LOGSTDBY_EVENTS View

The DBA_LOGSTDBY_EVENTS view can be thought of as a circular log containing the most recent interesting events that occurred in the context of SQL Apply. By default the last 10,000 events are remembered in the event view. You can change the number of events logged by invoking the DBMS_LOGSTDBY.APPLY_SET procedure. For example, to ensure that the last 100,000 events are recorded, you can issue the following statement:

SQL> EXECUTE DBMS_LOGSTDBY.APPLY_SET ('MAX_EVENTS_RECORDED', '100000');

Errors that cause SQL Apply to stop are always recorded in the DBA_LOGSTDBY_EVENTS view (unless there is insufficient space in the SYSTEM tablespace). These events are always put into the alert file as well, with the keyword LOGSTDBY included in the text. When querying the view, select the columns in order by EVENT_TIME, COMMIT_SCN, and CURRENT_SCN. This ordering ensures a shutdown failure appears last in the view.

The following examples show DBMS_LOGSTDBY subprograms that specify events to be recorded in the view.

Example 1   Determining If DDL Statements Have Been Applied

For example, to record applied DDL transactions to the DBA_LOGSTDBY_EVENTS view, issue the following statement:

SQL> EXECUTE DBMS_LOGSTDBY.APPLY_SET ('RECORD_APPLIED_DDL', 'TRUE');

Example 2   Checking the DBA_LOGSTDBY_EVENTS View for Unsupported Operations

To capture information about transactions running on the primary database that will not be supported by a logical standby database, issue the following statements:

SQL> ALTER DATABASE STOP LOGICAL STANDBY APPLY;
SQL> EXEC DBMS_LOGSTDBY.APPLY_SET('RECORD_UNSUPPORTED_OPERATIONS', 'TRUE');
SQL> ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;

Then, check the DBA_LOGSTDBY_EVENTS view for any unsupported operations. Usually, an operation on an unsupported table is silently ignored by SQL Apply. However, during rolling upgrade (while the standby database is at a higher version and mining redo generated by a lower versioned primary database), if you performed an unsupported operation on the primary database, the logical standby database may not be the one to which you want to perform a switchover. Data Guard will log at least one unsupported operation per table in the DBA_LOGSTDBY_EVENTS view. Chapter 12, "Using SQL Apply to Upgrade the Oracle Database" provides detailed information about rolling upgrades.

10.5.2 Using DBMS_LOGSTDBY.SKIP to Prevent Changes to Specific Schema Objects

By default, all supported tables in the primary database are replicated in the logical standby database. You can change the default behavior by specifying rules to skip applying modifications to specific tables. For example, to omit changes to the HR.EMPLOYEES table, you can specify rules to prevent application of DML and DDL changes to the specific table. For example:

  1. Stop SQL Apply:

    SQL> ALTER DATABASE STOP LOGICAL STANDBY APPLY;
    
  2. Register the SKIP rules:

    SQL> EXECUTE DBMS_LOGSTDBY.SKIP (stmt => 'DML', schema_name => 'HR', -
    > object_name => 'EMPLOYEES');
    
    SQL> EXECUTE DBMS_LOGSTDBY.SKIP (stmt => 'SCHEMA_DDL', schema_name => 'HR', -
    > object_name => 'EMPLOYEES');
    
  3. Start SQL Apply:

    SQL> ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;
    

10.5.3 Setting up a Skip Handler for a DDL Statement

You can create a procedure to intercept certain DDL statements and replace the original DDL statement with a different one. For example, if the file system organization in the logical standby database is different than that in the primary database, you can write a DBMS_LOGSTDBY.SKIP procedure to transparently handle DDL transactions with file specifications.

The following procedure can handle different file system organization between the primary database and standby database, as long as you use a specific naming convention for your file-specification string.

  1. Create the skip procedure to handle tablespace DDL transactions:

    CREATE OR REPLACE PROCEDURE SYS.HANDLE_TBS_DDL ( 
      OLD_STMT  IN  VARCHAR2, 
      STMT_TYP  IN  VARCHAR2, 
      SCHEMA    IN  VARCHAR2, 
      NAME      IN  VARCHAR2, 
      XIDUSN    IN  NUMBER, 
      XIDSLT    IN  NUMBER, 
      XIDSQN    IN  NUMBER, 
      ACTION    OUT NUMBER, 
      NEW_STMT  OUT VARCHAR2 
    ) AS 
    BEGIN 
      
    -- All primary file specification that contains a directory 
    -- /usr/orcl/primary/dbs 
    -- should go to /usr/orcl/stdby directory specification
     
     
      NEW_STMT := REPLACE(OLD_STMT, 
                         '/usr/orcl/primary/dbs', 
                         '/usr/orcl/stdby');
     
      ACTION := DBMS_LOGSTDBY.SKIP_ACTION_REPLACE;
     
    EXCEPTION
      WHEN OTHERS THEN
        ACTION := DBMS_LOGSTDBY.SKIP_ACTION_ERROR;
        NEW_STMT := NULL;
    END HANDLE_TBS_DDL; 
    
  2. Stop SQL Apply:

    SQL> ALTER DATABASE STOP LOGICAL STANDBY APPLY;
    
  3. Register the skip procedure with SQL Apply:

    SQL> EXECUTE DBMS_LOGSTDBY.SKIP (stmt => 'TABLESPACE', -
    > proc_name => 'sys.handle_tbs_ddl');
    
  4. Start SQL Apply:

    SQL> ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;
    

10.5.4 Modifying a Logical Standby Database

Logical standby databases can be used for reporting activities, even while SQL statements are being applied. The database guard controls user access to tables in a logical standby database, and the ALTER SESSION DISABLE GUARD statement is used to bypass the database guard and allow modifications to the tables in the logical standby database.


Note:

To use a logical standby database to host other applications that process data being replicated from the primary database while creating other tables of their own, the database guard must be set to STANDBY. For such applications to work seamlessly, make sure that you are running with PRESERVE_COMMIT_ORDER set to TRUE (the default setting for SQL Apply). (See Oracle Database PL/SQL Packages and Types Reference for information about the PRESERVE_COMMIT_ORDER parameter in the DBMS_LOGSTDBY PL/SQL package.)

Issue the following SQL statement to set the database guard to STANDBY:

SQL> ALTER DATABASE GUARD STANDBY;

Under this guard setting, tables being replicated from the primary database are protected from user modifications, but tables created on the standby database can be modified by the applications running on the logical standby.


By default, a logical standby database operates with the database guard set to ALL, which is its most restrictive setting, and does not allow any user changes to be performed to the database. You can override the database guard to allow changes to the logical standby database by executing the ALTER SESSION DISABLE GUARD statement. Privileged users can issue this statement to turn the database guard off for the current session.

The following sections provide some examples. The discussions in these sections assume that the database guard is set to ALL or STANDBY.

10.5.4.1 Performing DDL on a Logical Standby Database

This section describes how to add a constraint to a table maintained through SQL Apply.

By default, only accounts with SYS privileges can modify the database while the database guard is set to ALL or STANDBY. If you are logged in as SYSTEM or another privileged account, you will not be able to issue DDL statements on the logical standby database without first bypassing the database guard for the session.

The following example shows how to stop SQL Apply, bypass the database guard, execute SQL statements on the logical standby database, and then reenable the guard. In this example, a soundex index is added to the surname column of SCOTT.EMP in order to speed up partial match queries. A soundex index could be prohibitive to maintain on the primary server.

SQL> ALTER DATABASE STOP LOGICAL STANDBY APPLY;
Database altered.

SQL> ALTER SESSION DISABLE GUARD;
PL/SQL procedure successfully completed.

SQL> CREATE INDEX EMP_SOUNDEX ON SCOTT.EMP(SOUNDEX(ENAME));
Index created.

SQL> ALTER SESSION ENABLE GUARD;
PL/SQL procedure successfully completed.

SQL> ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;
Database altered.

SQL> SELECT ENAME,MGR FROM SCOTT.EMP WHERE SOUNDEX(ENAME) = SOUNDEX('CLARKE');

ENAME            MGR
----------       ----------
CLARK             7839

Oracle recommends that you do not perform DML operations on tables maintained by SQL Apply while the database guard bypass is enabled. Doing so introduces deviations between the primary and standby databases that make it impossible for the logical standby database to be maintained.

10.5.4.2 Modifying Tables That Are Not Maintained by SQL Apply

Sometimes, a reporting application must collect summary results and store them temporarily or track the number of times a report was run. Although the main purpose of the application is to perform reporting activities, the application might need to issue DML (insert, update, and delete) operations on a logical standby database. It might even need to create or drop tables.

You can set up the database guard to allow reporting operations to modify data as long as the data is not being maintained through SQL Apply. To do this, you must:

  • Specify the set of tables on the logical standby database to which an application can write data by executing the DBMS_LOGSTDBY.SKIP procedure. Skipped tables are not maintained through SQL Apply.

  • Set the database guard to protect only standby tables.

In the following example, it is assumed that the tables to which the report is writing are also on the primary database.

The example stops SQL Apply, skips the tables, and then restarts SQL Apply. The reporting application will be able to write to tables matching TESTEMP% in the HR schema, and those tables will no longer be maintained through SQL Apply.

SQL> ALTER DATABASE STOP LOGICAL STANDBY APPLY;
Database altered.

SQL> EXECUTE DBMS_LOGSTDBY.SKIP(stmt => 'SCHEMA_DDL',-
     schema_name => 'HR', -
     object_name => 'TESTEMP%');
PL/SQL procedure successfully completed.

SQL> EXECUTE DBMS_LOGSTDBY.SKIP('DML','HR','TESTEMP%');
PL/SQL procedure successfully completed.

SQL> ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;
Database altered.

Once SQL Apply starts, it needs to update metadata on the standby database for the newly specified tables added in the skip rules. Attempts to modify a newly skipped table will fail until SQL Apply has had a chance to update its metadata. You can find out whether SQL Apply has successfully taken into account the SKIP rule you just added by issuing the following query:

SQL> SELECT VALUE FROM SYSTEM.LOGSTDBY$PARAMETERS WHERE NAME = 'GUARD_STANDBY';

VALUE
---------------
Ready  

When the VALUE column displays Ready, SQL Apply has successfully updated all relevant metadata for the skipped table, and it is safe to modify the table.

10.5.5 Adding or Re-Creating Tables On a Logical Standby Database

Typically, you use the DBMS_LOGSTDBY.INSTANTIATE_TABLE procedure to re-create a table after an unrecoverable operation. You can also use this procedure to enable SQL Apply on a table that was formerly skipped.

Before you can create a table, it must meet the requirements described in Section 4.1.2, "Ensure Table Rows in the Primary Database Can Be Uniquely Identified". Then, you can use the following steps to re-create a table named HR.EMPLOYEES and resume SQL Apply. The directions assume that there is already a database link BOSTON defined to access the primary database.

The following list shows how to re-create a table and restart SQL Apply on that table:

  1. Stop SQL Apply:

    SQL> ALTER DATABASE STOP LOGICAL STANDBY APPLY;
    
  2. Ensure no operations are being skipped for the table in question by querying the DBA_LOGSTDBY_SKIP view:

    SQL> SELECT * FROM DBA_LOGSTDBY_SKIP;
    
    ERROR  STATEMENT_OPT        OWNER          NAME                PROC
    -----  -------------------  -------------  ----------------    -----
    N      SCHEMA_DDL           HR             EMPLOYEES
    N      DML                  HR             EMPLOYEES
    N      SCHEMA_DDL           OE             TEST_ORDER
    N      DML                  OE             TEST_ORDER
    

    Because you already have skip rules associated with the table that you want to re-create on the logical standby database, you must first delete those rules. You can accomplish that by calling the DBMS_LOGSTDBY.UNSKIP procedure. For example:

    SQL> EXECUTE DBMS_LOGSTDBY.UNSKIP(stmt => 'DML', -
    > schema_name => 'HR', -
    > object_name => 'EMPLOYEES');
    
    SQL> EXECUTE DBMS_LOGSTDBY.UNSKIP(stmt => 'SCHEMA_DDL', -
    > schema_name => 'HR', -
    > object_name => 'EMPLOYEES');
    
  3. Re-create the table HR.EMPLOYEES with all its data in the logical standby database by using the DBMS_LOGSTDBY.INSTANTIATE_TABLE procedure. For example:

    SQL> EXECUTE DBMS_LOGSTDBY.INSTANTIATE_TABLE(schema_name => 'HR', -
    > table_name => 'EMPLOYEES', -
    > dblink => 'BOSTON');
    
  4. Start SQL Apply:

    SQL> ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;
    

    See Also:

    Oracle Database PL/SQL Packages and Types Reference for information about the DBMS_LOGSTDBY.UNSKIP and the DBMS_LOGSTDBY.INSTANTIATE_TABLE procedures

To ensure a consistent view across the newly instantiated table and the rest of the database, wait for SQL Apply to catch up with the primary database before querying this table. You can do this by performing the following steps:

  1. On the primary database, determine the current SCN by querying the V$DATABASE view:

    SQL> SELECT CURRENT_SCN FROM V$DATABASE@BOSTON;
    
    CURRENT_SCN
    ---------------------
    345162788
    
  2. Make sure SQL Apply has applied all transactions committed before the CURRENT_SCN returned in the previous query:

    SQL> SELECT APPLIED_SCN FROM V$LOGSTDBY_PROGRESS;
    
    APPLIED_SCN
    --------------------------
    345161345
    

    When the APPLIED_SCN returned in this query is greater than the CURRENT_SCN returned in the first query, it is safe to query the newly re-created table.

10.6 Managing Specific Workloads In the Context of a Logical Standby Database

This section contains the following topics:

10.6.1 Importing a Transportable Tablespace to the Primary Database

Perform the following steps to import a tablespace to the primary database.

  1. Disable the guard setting so that you can modify the logical standby database:

    SQL> ALTER DATABASE GUARD STANDBY;
    
  2. Import the tablespace at the logical standby database (a sketch of this step follows the list).

  3. Enable the database guard setting:

    SQL> ALTER DATABASE GUARD ALL;
    
  4. Import the tablespace at the primary database.
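
The Data Pump import in steps 2 and 4 might look like the following sketch. The directory object, dump file name, and datafile path are placeholders, and the datafile is assumed to have already been copied into place:

impdp system DIRECTORY=DPUMP_DIR DUMPFILE=tbs_sales.dmp \
  TRANSPORT_DATAFILES='/usr/orcl/stdby/sales01.dbf'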

10.6.2 Using Materialized Views

Logical Standby automatically skips DDL statements related to materialized views:

  • CREATE, ALTER, or DROP MATERIALIZED VIEW

  • CREATE, ALTER, or DROP MATERIALIZED VIEW LOG

New materialized views that are created, altered, or dropped on the primary database after the logical standby database has been created will not be created on the logical standby database. However, materialized views created on the primary database prior to the logical standby database being created will be present on the logical standby database.

Logical Standby supports the creation and maintenance of new materialized views locally on the logical standby database in addition to other kinds of auxiliary data structures. For example, online transaction processing (OLTP) systems frequently use highly normalized tables for update performance, but these can lead to slower response times for complex decision support queries. Materialized views that denormalize the replicated data for more efficient query support on the logical standby database can be created, as follows (connect as user SYS before issuing these statements):

SQL> ALTER SESSION DISABLE GUARD; 
 
SQL> CREATE MATERIALIZED VIEW LOG ON SCOTT.EMP -
>  WITH ROWID (EMPNO, ENAME, MGR, DEPTNO) INCLUDING NEW VALUES;

SQL> CREATE MATERIALIZED VIEW LOG ON SCOTT.DEPT -
>  WITH ROWID (DEPTNO, DNAME) INCLUDING NEW VALUES;

SQL> CREATE MATERIALIZED VIEW SCOTT.MANAGED_BY -
>  REFRESH ON DEMAND -
>  ENABLE QUERY REWRITE -
>  AS SELECT  E.ENAME, M.ENAME AS MANAGER -
>  FROM SCOTT.EMP E, SCOTT.EMP M WHERE E.MGR=M.EMPNO;

SQL> CREATE MATERIALIZED VIEW SCOTT.IN_DEPT -
>  REFRESH FAST ON COMMIT -
>  ENABLE QUERY REWRITE -
>  AS SELECT E.ROWID AS ERID, D.ROWID AS DRID, E.ENAME, D.DNAME -
>  FROM SCOTT.EMP E, SCOTT.DEPT D WHERE E.DEPTNO=D.DEPTNO;

On a logical standby database:

  • An ON-COMMIT materialized view is refreshed automatically on the logical standby database when the transaction commit occurs.

  • An ON-DEMAND materialized view is not automatically refreshed: the DBMS_MVIEW.REFRESH procedure must be executed to refresh it.

For example, issuing the following command would refresh the ON-DEMAND materialized view created in the previous example:

SQL> ALTER SESSION DISABLE GUARD; 
 
SQL> EXECUTE DBMS_MVIEW.REFRESH (LIST => 'SCOTT.MANAGED_BY', METHOD => 'C');

If DBMS_SCHEDULER jobs are being used to periodically refresh on-demand materialized views, the database guard must be set to STANDBY. (It is not possible to use the ALTER SESSION DISABLE GUARD statement inside a PL/SQL block and have it take effect.)
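
A minimal sketch of such a job follows, assuming the database guard is set to STANDBY and using a hypothetical job name and refresh interval:

SQL> ALTER DATABASE GUARD STANDBY;

BEGIN
  DBMS_SCHEDULER.CREATE_JOB (
    job_name        => 'REFRESH_MANAGED_BY_MV',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN DBMS_MVIEW.REFRESH(''SCOTT.MANAGED_BY'', ''C''); END;',
    repeat_interval => 'FREQ=HOURLY',
    enabled         => TRUE);
END;
/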

10.6.3 How Triggers and Constraints Are Handled on a Logical Standby Database

By default, triggers and constraints are automatically enabled and handled on logical standby databases.

For triggers and constraints on tables maintained by SQL Apply:

  • Constraints — Check constraints are evaluated on the primary database and do not need to be re-evaluated on the logical standby database.

  • Triggers — The effects of the triggers executed on the primary database are logged and applied on the standby database.

For triggers and constraints on tables not maintained by SQL Apply:

  • Constraints are evaluated

  • Triggers are fired

10.6.4 Using Triggers to Replicate Unsupported Tables

DML triggers created on a table have their DBMS_DDL.SET_TRIGGER_FIRING_PROPERTY fire_once parameter set to TRUE by default. These triggers fire only when the table is modified by a user process. They are automatically disabled inside SQL Apply processes, and thus do not fire when a SQL Apply process modifies the table. There are two ways to fire a trigger as a result of a SQL Apply process making a change to a maintained table:

  • Set the fire_once parameter of a trigger to FALSE, which allows it to fire in either the context of a user process or a SQL Apply process

  • Set the apply_server_only parameter to TRUE which results in the trigger firing only in the context of a SQL Apply process and not in the context of a user process

  • fire_once = TRUE, apply_server_only = FALSE: This is the default property setting for a DML trigger. The trigger fires only when a user process modifies the base table.

  • fire_once = FALSE, apply_server_only = FALSE: The trigger fires both in the context of a user process and in the context of a SQL Apply process modifying the base table. You can distinguish the two contexts by using the DBMS_LOGSTDBY.IS_APPLY_SERVER function.

  • fire_once = TRUE or FALSE, apply_server_only = TRUE: The trigger fires only when a SQL Apply process modifies the base table. The trigger does not fire when a user process modifies the base table. Thus, the apply_server_only property overrides the fire_once parameter of a trigger.

Tables that are unsupported due to simple object type columns can be replicated by creating triggers that fire in the context of a SQL Apply process (either by setting the fire_once parameter of such a trigger to FALSE or by setting the apply_server_only parameter of such a trigger to TRUE). A regular DML trigger can be used on the primary database to flatten the object type into a table that can be supported. The trigger that fires in the context of a SQL Apply process on the logical standby then reconstitutes the object type and updates the unsupported table in a transactional manner.


The following example shows how a table with a simple object type could be replicated using triggers. This example shows how to handle inserts; the same principle can be applied to updating and deleting. Nested tables and VARRAYs can also be replicated using this technique with the additional step of a loop to normalize the nested data.

-- simple object type
create or replace type Person as object
(
  FirstName    varchar2(50),
  LastName     varchar2(50),
  BirthDate    Date
);
/
 
-- unsupported object table
create table employees
(
  IdNumber     varchar2(10) ,
  Department   varchar2(50),
  Info         Person
);
 
-- supported table populated via trigger
create table employees_transfer
(
  t_IdNumber   varchar2(10),
  t_Department varchar2(50),
  t_FirstName  varchar2(50),
  t_LastName   varchar2(50),
  t_BirthDate  Date
);
--
-- create this trigger to flatten object table on the primary
-- this trigger will not fire on the standby
--
create or replace trigger flatten_employees
  after insert on employees for each row
declare
begin
  insert into employees_transfer
    (t_IdNumber, t_Department, t_FirstName, t_LastName, t_BirthDate)
  values
    (:new.IdNumber, :new.Department,
 :new.Info.FirstName,:new.Info.LastName, :new.Info.BirthDate);
end;
/
 
--
-- Option#1 (Better Option: Create a trigger and 
-- set its apply-server-only property to TRUE)
-- create this trigger at the logical standby database
-- to populate object table on the standby
-- this trigger only fires when apply replicates rows 
-- to the standby
--
create or replace trigger reconstruct_employees_aso
  after insert on employees_transfer for each row
begin
  
    insert into employees (IdNumber, Department, Info)
    values (:new.t_IdNumber, :new.t_Department,
Person(:new.t_FirstName, :new.t_LastName,  :new.t_BirthDate));
  
end;
/
 
-- set this trigger to fire from the apply server
execute dbms_ddl.set_trigger_firing_property( -
trig_owner => 'scott', -
trig_name  => 'reconstruct_employees_aso', -
property => dbms_ddl.apply_server_only, -
setting => TRUE);
 
--
-- Option#2 (Create a trigger and set 
--           its fire-once property to FALSE)
-- create this trigger at the logical standby database
-- to populate object table on the standby
-- this trigger will fire when apply replicates rows to
-- the standby, but we will need to make sure we are
-- executing inside a SQL Apply process by invoking the
-- dbms_logstdby.is_apply_server function
--
create or replace trigger reconstruct_employees_nfo
  after insert on employees_transfer for each row
begin
  if dbms_logstdby.is_apply_server() then
    insert into employees (IdNumber, Department, Info)
    values (:new.t_IdNumber, :new.t_Department,
Person(:new.t_FirstName, :new.t_LastName,  :new.t_BirthDate));
  end if;
end;
/
 
-- set this trigger's fire-once property to FALSE so that it
-- also fires when the apply server modifies the table
execute dbms_ddl.set_trigger_firing_property( -
trig_owner => 'scott', -
trig_name  => 'reconstruct_employees_nfo', -
property => dbms_ddl.fire_once, -
setting => FALSE);

10.6.5 Recovering Through the Point-in-Time Recovery Performed at the Primary

When a logical standby database receives a new branch of redo data, SQL Apply automatically takes the new branch. For logical standby databases, no manual intervention is required if the standby database did not apply redo data past the new resetlogs SCN (past the start of the new branch of redo data).

The following list describes how to resynchronize the standby database with the primary database branch, depending on the state of the standby database:

  • If the standby database has not applied redo data past the new resetlogs SCN (past the start of the new branch of redo data), then SQL Apply automatically takes the new branch of redo data. No manual intervention is necessary; SQL Apply automatically resynchronizes the standby database with the new branch of redo data.

  • If the standby database has applied redo data past the new resetlogs SCN and Flashback Database is enabled on the standby database, then the standby database is recovered in the future of the new branch of redo data. Perform these steps:

    1. Follow the procedure in Section 13.3.2, "Flashing Back a Logical Standby Database to a Specific Point-in-Time" to flash back the logical standby database.

    2. Restart SQL Apply to continue application of redo onto the new reset logs branch.

    SQL Apply automatically resynchronizes the standby database with the new branch.

  • If the standby database has applied redo data past the new resetlogs SCN and Flashback Database is not enabled on the standby database, then the primary database has diverged from the standby on the indicated primary database branch. Re-create the logical standby database following the procedures in Chapter 4, "Creating a Logical Standby Database".

  • If the standby database is missing archived redo log files from the end of the previous branch of redo data, then SQL Apply cannot continue until the missing log files are retrieved. Locate and register the missing archived redo log files from the previous branch.
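
For the last case in this list, the following is a sketch of registering one such archived redo log file at the logical standby database; the file name is hypothetical:

SQL> ALTER DATABASE REGISTER LOGICAL LOGFILE '/disk1/oracle/dbs/arc_dest/arc_1_101_888954.arc';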

See Oracle Database Backup and Recovery User's Guide for more information about database incarnations, recovering through an OPEN RESETLOGS operation, and Flashback Database.

10.6.6 Running an Oracle Streams Capture Process on a Logical Standby Database

You can run an Oracle Streams capture process on a logical standby database to capture changes from any table that exists on the logical standby database (whether it is a local table or a maintained table that is being replicated from the primary database). When capturing changes to a maintained table, there is additional latency compared to running an Oracle Streams capture process at the primary database. The additional latency occurs because, when the capture process runs at a logical standby, it must wait for the changes to be shipped from the primary to the logical standby and applied by SQL Apply. In most cases, if you are running real-time apply, this is no more than a few seconds.

It is important to note that the Oracle Streams capture process is associated with the database where it was created; the role of the database is irrelevant. For example, suppose you have a primary database named Boston and a logical standby named London. You cannot move the Oracle Streams capture process from one database to the other as you go through role transitions. For instance, if you created an Oracle Streams capture process on London when it was a logical standby, it will remain on London even when London becomes the primary as a result of a role transition operation such as a switchover or failover. For the Oracle Streams capture process to continue working after a role transition, you must write a role transition trigger such as the following:

create or replace trigger streams_aq_job_role_change1 
after DB_ROLE_CHANGE on database 
declare
cursor capture_aq_jobs is 
  select job_name, database_role 
   from dba_scheduler_job_roles 
   where job_name like 'AQ_JOB%'; 
u capture_aq_jobs%ROWTYPE; 
my_db_role  varchar2(16); 
begin 
 
  if (dbms_logstdby.db_is_logstdby() = 1) then my_db_role := 'LOGICAL STANDBY';
  else my_db_role := 'PRIMARY';
  end if; 
 
 open capture_aq_jobs; 
 loop 
   fetch capture_aq_jobs into u; 
   exit when capture_aq_jobs%NOTFOUND; 
 
   if (u.database_role != my_db_role) then 
     dbms_scheduler.set_attribute(u.job_name, 
              'database_role', 
               my_db_role); 
 
   end if; 
 end loop; 
 close capture_aq_jobs; 
 
exception
 when others then 
 begin 
   raise; 
 end;  
end;
/

10.7 Tuning a Logical Standby Database

This section contains the following topics:

10.7.1 Create a Primary Key RELY Constraint

On the primary database, if a table does not have a primary key or a unique index and you are certain the rows are unique, then create a primary key RELY constraint. On the logical standby database, create an index on the columns that make up the primary key. The following query generates a list of tables for which SQL Apply has no index information that it can use to uniquely identify rows. By creating an index on these tables on the logical standby database, you can improve performance significantly.

SQL> SELECT OWNER, TABLE_NAME FROM DBA_TABLES -
> WHERE OWNER NOT IN (SELECT OWNER FROM DBA_LOGSTDBY_SKIP -
> WHERE STATEMENT_OPT = 'INTERNAL SCHEMA') -
> MINUS -
> SELECT DISTINCT TABLE_OWNER, TABLE_NAME FROM DBA_INDEXES -
> WHERE INDEX_TYPE NOT LIKE ('FUNCTION-BASED%') -
> MINUS -
> SELECT OWNER, TABLE_NAME FROM DBA_LOGSTDBY_UNSUPPORTED;

You can add a rely primary key constraint to a table on the primary database, as follows:

  1. Add the primary key rely constraint at the primary database:

    SQL> ALTER TABLE HR.TEST_EMPLOYEES ADD PRIMARY KEY (EMPNO) RELY DISABLE;
    

    This will ensure that the EMPNO column, which can be used to uniquely identify the rows in the HR.TEST_EMPLOYEES table, will be supplementally logged as part of any updates done on that table.

    Note that the HR.TEST_EMPLOYEES table still does not have any unique index specified on the logical standby database. This may cause UPDATE statements to do full table scans on the logical standby database. You can remedy that by adding a unique index on the EMPNO column on the logical standby database. See Section 4.1.2, "Ensure Table Rows in the Primary Database Can Be Uniquely Identified" and Oracle Database SQL Language Reference for more information about RELY constraints.

    Perform the remaining steps on the logical standby database.

  2. Stop SQL Apply:

    SQL> ALTER DATABASE STOP LOGICAL STANDBY APPLY;
    
  3. Disable the guard so that you can modify a maintained table on the logical standby database:

    SQL> ALTER SESSION DISABLE GUARD;
    
  4. Add a unique index on EMPNO column:

    SQL> CREATE UNIQUE INDEX UI_TEST_EMP ON HR.TEST_EMPLOYEES (EMPNO);
    
  5. Enable the guard:

    SQL> ALTER SESSION ENABLE GUARD;
    
  6. Start SQL Apply:

    SQL> ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;
    

10.7.2 Gather Statistics for the Cost-Based Optimizer

Statistics should be gathered on the standby database because the cost-based optimizer (CBO) uses them to determine the optimal query execution path. New statistics should be gathered after the data or structure of a schema object is modified in ways that make the previous statistics inaccurate. For example, after inserting or deleting a significant number of rows in a table, collect new statistics on the number of rows.

Statistics should also be gathered on the standby database because DML and DDL operations on the primary database are executed as a function of the workload. Although the standby database is logically equivalent to the primary database, SQL Apply might execute the workload in a different way. This is why using Statspack and the V$SYSSTAT view on the logical standby database can be useful in determining which tables are consuming the most resources and which tables are undergoing the most table scans.
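
For example, the following is a minimal sketch of gathering statistics for one schema on the logical standby database; the schema name is illustrative:

SQL> EXECUTE DBMS_STATS.GATHER_SCHEMA_STATS(OWNNAME => 'SCOTT', CASCADE => TRUE);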

10.7.3 Adjust the Number of Processes

The following sections describe how to adjust the number of APPLIER and PREPARER processes allocated to SQL Apply.

There are three parameters that can be modified to control the number of processes allocated to SQL Apply: MAX_SERVERS, APPLY_SERVERS, and PREPARE_SERVERS. The following relationships must always hold true:

  • APPLY_SERVERS + PREPARE_SERVERS = MAX_SERVERS - 3

    This is because SQL Apply always allocates one process for the READER, BUILDER, and ANALYZER roles.

  • By default, MAX_SERVERS is set to 9, PREPARE_SERVERS is set to 1, and APPLY_SERVERS is set to 5.

  • Oracle recommends that you only change the MAX_SERVERS parameter through the DBMS_LOGSTDBY.APPLY_SET procedure, and allow SQL Apply to distribute the server processes appropriately between prepare and apply processes.

  • SQL Apply uses a process allocation algorithm that allocates 1 PREPARE_SERVER for every 20 server processes allocated to SQL Apply as specified by MAX_SERVERS, and limits the number of PREPARE_SERVERS to 5. Thus, if you set MAX_SERVERS to any value between 1 and 20, SQL Apply allocates 1 server process to act as a PREPARER, and allocates the rest of the processes as APPLIERS while satisfying the relationship previously described. Similarly, if you set MAX_SERVERS to a value between 21 and 40, SQL Apply allocates 2 server processes to act as PREPARERS and the rest as APPLIERS, while satisfying the relationship previously described. You can override this internal process allocation algorithm by setting APPLY_SERVERS and PREPARE_SERVERS directly, provided that the previously described relationship is satisfied. You can check the values currently in effect with a query like the one sketched after this list.
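
The following sketch shows such a query; the rows returned can depend on which parameters have been explicitly set:

SQL> SELECT NAME, VALUE FROM DBA_LOGSTDBY_PARAMETERS -
> WHERE NAME IN ('MAX_SERVERS', 'APPLY_SERVERS', 'PREPARE_SERVERS');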

10.7.3.1 Adjusting the Number of APPLIER Processes

Perform the following steps to find out whether adjusting the number of APPLIER processes will help you achieve greater throughput:

  1. Determine if APPLIER processes are busy by issuing the following query:

    SQL> SELECT COUNT(*) AS IDLE_APPLIER -
    > FROM V$LOGSTDBY_PROCESS -
    > WHERE TYPE = 'APPLIER' and status_code = 16116;
    
    IDLE_APPLIER
    -------------------------
    0
    
  2. Once you are sure there are no idle APPLIER processes, issue the following query to ensure there is enough work available for additional APPLIER processes if you choose to adjust the number of APPLIERS:

    SQL> SELECT NAME, VALUE FROM V$LOGSTDBY_STATS WHERE NAME = 'txns applied' OR NAME = 'distinct txns in queue';
    

    These two statistics keep a cumulative total of transactions that are ready to be applied by the APPLIER processes and the number of transactions that have already been applied.

    If the number (distinct txns in queue - txns applied) is higher than twice the number of APPLIER processes available, an improvement in throughput is possible if you increase the number of APPLIER processes.


    Note:

    The number is a rough measure of ready work. The workload may be such that an interdependency between ready transactions will prevent additional available APPLIER processes from applying them. For instance, if the majority of the transactions that are ready to be applied are DDL transactions, adding more APPLIER processes will not result in a higher throughput.

    Suppose you want to adjust the number of APPLIER processes to 20 from the default value of 5, while keeping the number of PREPARER processes at 1. Because you have to satisfy the following equation:

    APPLY_SERVERS + PREPARE_SERVERS = MAX_SERVERS - 3
    

    you will first need to set MAX_SERVERS to 24. Once you have done that, you can then set the number of APPLY_SERVERS to 20, as follows:

    SQL> EXECUTE DBMS_LOGSTDBY.APPLY_SET('MAX_SERVERS', 24);
    SQL> EXECUTE DBMS_LOGSTDBY.APPLY_SET('APPLY_SERVERS', 20);
    

10.7.3.2 Adjusting the Number of PREPARER Processes

In only rare cases do you need to adjust the number of PREPARER processes. Before you decide to increase the number of PREPARER processes, ensure the following conditions are true:

  • All PREPARER processes are busy

  • The number of transactions ready to be applied is less than the number of APPLIER processes available

  • There are idle APPLIER processes

The following steps show how to determine these conditions are true:

  1. Ensure all PREPARER processes are busy:

    SQL> SELECT COUNT(*) AS IDLE_PREPARER -
    > FROM V$LOGSTDBY_PROCESS -
    > WHERE TYPE = 'PREPARER' and status_code = 16116;
    
    IDLE_PREPARER
    -------------
    0
    
  2. Ensure the number of transactions ready to be applied is less than the number of APPLIER processes:

    SQL> SELECT NAME, VALUE FROM V$LOGSTDBY_STATS WHERE NAME = 'txns applied' OR -
    > NAME = 'distinct txns in queue';
    
    NAME                          VALUE
    ---------------------         -------
    txns applied                   27892
    distinct txns in queue         12896
    
    SQL> SELECT COUNT(*) AS APPLIER_COUNT -
    > FROM V$LOGSTDBY_PROCESS WHERE TYPE = 'APPLIER';
    
    APPLIER_COUNT
    -------------
    20
    

    Note: Issue this query several times to ensure this is not a transient event.

  3. Ensure there are idle APPLIER processes:

    SQL> SELECT COUNT(*) AS IDLE_APPLIER -
    > FROM V$LOGSTDBY_PROCESS -
    > WHERE TYPE = 'APPLIER' and status_code = 16116;
    
    IDLE_APPLIER
    -------------------------
    19
    

In the example, all three conditions necessary for increasing the number of PREPARER processes have been satisfied. Suppose you want to keep the number of APPLIER processes set to 20, and increase the number of PREPARER processes from 1 to 3. Because you always have to satisfy the following equation:

APPLY_SERVERS + PREPARE_SERVERS = MAX_SERVERS - 3

you will first need to increase the value of MAX_SERVERS from 24 to 26 to accommodate the increased number of preparers. You can then increase the number of PREPARER processes, as follows:

SQL> EXECUTE DBMS_LOGSTDBY.APPLY_SET('MAX_SERVERS', 26);
SQL> EXECUTE DBMS_LOGSTDBY.APPLY_SET('PREPARE_SERVERS', 3);

10.7.4 Adjust the Memory Used for LCR Cache

For some workloads, SQL Apply may use a large number of pageout operations, thereby reducing the overall throughput of the system. To find out whether increasing memory allocated to LCR cache will be beneficial, perform the following steps:

  1. Issue the following query to obtain a snapshot of pageout activity:

    SQL> SELECT NAME, VALUE FROM V$LOGSTDBY_STATS WHERE NAME LIKE '%page%' -
    > OR NAME LIKE '%uptime%' OR NAME LIKE '%idle%';
    
    NAME                             VALUE
    ----------------------------     --------------
    coordinator uptime (seconds)             894856
    bytes paged out                           20000
    pageout time (seconds)                        2
    system idle time (seconds)                 1000
    
  2. Issue the query again in 5 minutes:

    SQL> SELECT NAME, VALUE FROM V$LOGSTDBY_STATS WHERE NAME LIKE '%page%' -
    > OR NAME LIKE '%uptime%' OR NAME LIKE '%idle%';
    
    NAME                             VALUE
    ----------------------------     --------------
    coordinator uptime (seconds)             895156
    bytes paged out                         1020000
    pageout time (seconds)                      100
    system idle time (seconds)                 1000
    
  3. Compute the normalized pageout activity. For example:

    Change in coordinator uptime (C)= (895156 – 894856) = 300 secs
    Amount of additional idle time (I)= (1000 – 1000) = 0
    Change in time spent in pageout (P) = (100 – 2) = 98 secs
    Pageout time in comparison to uptime = P/(C-I) = 98/300 ~ 32.67%
    

Ideally, the pageout activity should not consume more than 5 percent of the total uptime. If you continue to take snapshots over an extended interval and you find that pageout activities continue to consume a significant portion of the apply time, increasing the memory size may provide some benefit. You can increase the memory allocated to SQL Apply by setting the memory allocated to the LCR cache (in this example, MAX_SGA is set to 1024 MB, that is, 1 GB):

SQL> EXECUTE DBMS_LOGSTDBY.APPLY_SET('MAX_SGA', 1024);
PL/SQL procedure successfully completed

10.7.5 Adjust How Transactions are Applied On the Logical Standby Database

By default, transactions are applied on the logical standby database in the exact order in which they were committed on the primary database. The strict default ordering of transaction commits allows any application to run transparently on the logical standby database.

However, many applications do not require such strict ordering among all transactions. Such applications do not require transactions containing non-overlapping sets of rows to be committed in the same order that they were committed at the primary database. This less strict ordering typically results in higher apply rates at the logical standby database. You can change the default order of committing transactions by performing the following steps:

  1. Stop SQL Apply:

    SQL> ALTER DATABASE STOP LOGICAL STANDBY APPLY;
    Database altered
    
  2. Issue the following statement to allow transactions to be applied out of the order in which they were committed on the primary database:

    SQL> EXECUTE DBMS_LOGSTDBY.APPLY_SET('PRESERVE_COMMIT_ORDER', 'FALSE');
    PL/SQL procedure successfully completed
    
  3. Start SQL Apply:

    SQL> ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;
    Database altered
    

If you later want to revert to the default apply mode, perform the following steps:

  1. Stop SQL Apply:

    SQL> ALTER DATABASE STOP LOGICAL STANDBY APPLY;
    Database altered
    
  2. Restore the default value for the PRESERVE_COMMIT_ORDER parameter:

    SQL> EXECUTE DBMS_LOGSTDBY.APPLY_UNSET('PRESERVE_COMMIT_ORDER');
    PL/SQL procedure successfully completed
    
  3. Start SQL Apply:

    SQL> ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;
    Database altered
    

For a typical online transaction processing (OLTP) workload, the nondefault mode can provide a 50 percent or better throughput improvement over the default apply mode.

10.8 Backup and Recovery in the Context of a Logical Standby Database

You can back up your logical standby database using the traditional methods available and then recover it by restoring the database backup and performing media recovery on the archived logs, in conjunction with the backup. The following items are relevant in the context of a logical standby database.

Considerations When Creating and Using a Local RMAN Recovery Catalog

If you plan to create the RMAN recovery catalog or perform any RMAN activity that modifies the catalog, you must be running with GUARD set to STANDBY at the logical standby database.

You can leave GUARD set to ALL if the local recovery catalog is kept only in the logical standby control file.

Considerations For Control File Backup

Oracle recommends that you take a control file backup immediately after instantiating a logical standby database.
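
For example, the following is a minimal sketch; the file name is hypothetical, and an RMAN control file backup is an equally valid alternative:

SQL> ALTER DATABASE BACKUP CONTROLFILE TO '/disk1/oracle/backup/logical_stdby_control.bkp';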

Considerations For Point-in-Time Recovery

When SQL Apply is started for the first time following point-in-time recovery, it must be able to either find the required archived logs on the local system or to fetch them from the primary database. Use the V$LOGSTDBY_PROCESS view to determine if any archived logs need to be restored on the primary database.
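
For example, a query along the following lines shows what each SQL Apply process is doing; the STATUS text of the READER process typically identifies any log file it is waiting for (a sketch):

SQL> SELECT TYPE, STATUS FROM V$LOGSTDBY_PROCESS;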

Considerations For Tablespace Point-in-Time Recovery

If you perform point-in-time recovery for a tablespace in a logical standby database, you must ensure one of the following:

  • The tablespace contains no tables or partitions that are being maintained by the SQL Apply process

  • If the tablespace contains tables or partitions that are being maintained by the SQL Apply process, you should either use the DBMS_LOGSTDBY.INSTANTIATE_TABLE procedure to reinstantiate all of the maintained tables contained in the recovered tablespace at the logical standby database, or use the DBMS_LOGSTDBY.SKIP procedure to register all tables contained in the recovered tablespace to be skipped from the maintained table list at the logical standby database. Both alternatives are sketched below.
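
The following sketch shows both alternatives for a hypothetical maintained table HR.DEPARTMENTS; the database link name is also hypothetical, and SQL Apply must be stopped before either procedure is called:

SQL> ALTER DATABASE STOP LOGICAL STANDBY APPLY;

-- Alternative 1: reinstantiate the table from the primary database
SQL> EXECUTE DBMS_LOGSTDBY.INSTANTIATE_TABLE (SCHEMA_NAME => 'HR', -
> TABLE_NAME => 'DEPARTMENTS', DBLINK => 'primary_db_link');

-- Alternative 2: skip the table so that SQL Apply no longer maintains it
SQL> EXECUTE DBMS_LOGSTDBY.SKIP (STMT => 'DML', SCHEMA_NAME => 'HR', -
> OBJECT_NAME => 'DEPARTMENTS');

SQL> ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;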

Reference

Part II

Reference

This part provides reference material to be used in conjunction with the Oracle Data Guard standby database features. For more complete reference material, refer to the Oracle Database 11g documentation set.

This part contains the following chapters:

Apply Services

7 Apply Services

This chapter describes how redo data is applied to a standby database. It includes the following topics:

7.1 Introduction to Apply Services

Apply services automatically apply redo to standby databases to maintain synchronization with the primary database and allow transactionally consistent access to the data.

By default, apply services waits for a standby redo log file to be archived before applying the redo that it contains. However, you can enable real-time apply, which allows apply services to apply the redo in the current standby redo log file as it is being filled. Real-time apply is described in more detail in Section 7.2.1.

Apply services use the following methods to maintain physical and logical standby databases:

  • Redo Apply (physical standby databases only)

    Uses media recovery to keep the primary and physical standby databases synchronized.

  • SQL Apply (logical standby databases only)

    Reconstitutes SQL statements from the redo received from the primary database and executes the SQL statements against the logical standby database.

The sections in this chapter describe Redo Apply, SQL Apply, real-time apply, and delayed apply in more detail.

7.2 Apply Services Configuration Options

This section contains the following topics:

7.2.1 Using Real-Time Apply to Apply Redo Data Immediately

If the real-time apply feature is enabled, apply services can apply redo data as it is received, without waiting for the current standby redo log file to be archived. This results in faster switchover and failover times because the standby redo log files have been applied already to the standby database by the time the failover or switchover begins.

Use the ALTER DATABASE statement to enable the real-time apply feature, as follows:

  • For physical standby databases, issue the ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE statement.

  • For logical standby databases, issue the ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE statement.

Real-time apply requires a standby database that is configured with a standby redo log and that is in ARCHIVELOG mode.

Figure 7-1 shows a Data Guard configuration with a local destination and a standby destination. As the remote file server (RFS) process writes the redo data to standby redo log files on the standby database, apply services can recover redo from standby redo log files as they are being filled.

Figure 7-1 Applying Redo Data to a Standby Destination Using Real-Time Apply


7.2.2 Specifying a Time Delay for the Application of Archived Redo Log Files

In some cases, you may want to create a time lag between the time when redo data is received from the primary site and when it is applied to the standby database. You can specify a time interval (in minutes) to protect against the application of corrupted or erroneous data to the standby database. When you set a DELAY interval, it does not delay the transport of the redo data to the standby database. Instead, the time lag you specify begins when the redo data is completely archived at the standby destination.


Note:

If you define a delay for a destination that has real-time apply enabled, the delay is ignored.

Specifying a Time Delay

You can set a time delay on primary and standby databases using the DELAY=minutes attribute of the LOG_ARCHIVE_DEST_n initialization parameter to delay applying archived redo log files to the standby database. By default, there is no time delay. If you specify the DELAY attribute without specifying a value, then the default delay interval is 30 minutes.
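
For example, the following destination setting (with a hypothetical Oracle Net service name) delays the application of archived redo log files at that standby destination by 240 minutes (4 hours):

LOG_ARCHIVE_DEST_2='SERVICE=stdby1 DELAY=240'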

Canceling a Time Delay

You can cancel a specified delay interval as follows:

  • For physical standby databases, use the NODELAY keyword of the RECOVER MANAGED STANDBY DATABASE clause:

    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE NODELAY;
    
  • For logical standby databases, specify the following SQL statement:

    SQL> ALTER DATABASE START LOGICAL STANDBY APPLY NODELAY;
    

These commands result in apply services immediately beginning to apply archived redo log files to the standby database, before the time interval expires.

7.2.2.1 Using Flashback Database as an Alternative to Setting a Time Delay

As an alternative to setting an apply delay, you can use Flashback Database to recover from the application of corrupted or erroneous data to the standby database. Flashback Database can quickly and easily flash back a standby database to an arbitrary point in time.

See Chapter 13 for scenarios showing how to use Data Guard with Flashback Database, and Oracle Database Backup and Recovery User's Guide for more information about enabling and using Flashback Database.

7.3 Applying Redo Data to Physical Standby Databases

By default, the redo data is applied from archived redo log files. When performing Redo Apply, a physical standby database can use the real-time apply feature to apply redo directly from the standby redo log files as they are being written by the RFS process.

This section contains the following topics:

7.3.1 Starting Redo Apply

To start apply services on a physical standby database, ensure the physical standby database is started and mounted and then start Redo Apply using the SQL ALTER DATABASE RECOVER MANAGED STANDBY DATABASE statement.

You can specify that Redo Apply runs as a foreground session or as a background process, and enable it with real-time apply.

  • To start Redo Apply in the foreground, issue the following SQL statement:

    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE;
    

    If you start a foreground session, control is not returned to the command prompt until recovery is canceled by another session.

  • To start Redo Apply in the background, include the DISCONNECT keyword on the SQL statement. For example:

    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT;
    

    This statement starts a detached server process and immediately returns control to the user. While the managed recovery process is performing recovery in the background, the foreground process that issued the RECOVER statement can continue performing other tasks. This does not disconnect the current SQL session.

  • To start real-time apply, include the USING CURRENT LOGFILE clause on the SQL statement. For example:

    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE;
    

7.3.2 Stopping Redo Apply

To stop Redo Apply, issue the following SQL statement:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;

7.3.3 Monitoring Redo Apply on Physical Standby Databases

To monitor the status of apply services on a physical standby database, see Section 9.5.1. You can also monitor the standby database using Oracle Enterprise Manager. Also, see the Oracle Database Reference for complete reference information about views.

7.4 Applying Redo Data to Logical Standby Databases

SQL Apply converts the data from the archived redo log or standby redo log into SQL statements and then executes these SQL statements on the logical standby database. Because the logical standby database remains open, tables that are maintained can be used simultaneously for other tasks such as reporting, summations, and queries.

This section contains the following topics:

7.4.1 Starting SQL Apply

To start SQL Apply, start the logical standby database and issue the following statement:

SQL> ALTER DATABASE START LOGICAL STANDBY APPLY;

To start real-time apply on the logical standby database to immediately apply redo data from the standby redo log files on the logical standby database, include the IMMEDIATE keyword as shown in the following statement:

SQL>  ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;

7.4.2 Stopping SQL Apply on a Logical Standby Database

To stop SQL Apply, issue the following statement on the logical standby database:

SQL> ALTER DATABASE STOP LOGICAL STANDBY APPLY;

When you issue this statement, SQL Apply waits until it has committed all complete transactions that were in the process of being applied. Thus, this command may not stop the SQL Apply processes immediately.

7.4.3 Monitoring SQL Apply on Logical Standby Databases

To monitor SQL Apply, see Section 10.3. You can also monitor the standby database using Oracle Enterprise Manager. See Appendix A, "Troubleshooting Data Guard" and Oracle Data Guard Broker.

Troubleshooting Data Guard

A Troubleshooting Data Guard

This appendix provides help troubleshooting a standby database. This appendix contains the following sections:

A.1 Common Problems

If you encounter a problem when using a standby database, it is probably because of one of the reasons described in the following sections:

A.1.1 Renaming Datafiles with the ALTER DATABASE Statement

You cannot rename the datafile on the standby site when the STANDBY_FILE_MANAGEMENT initialization parameter is set to AUTO. When you set the STANDBY_FILE_MANAGEMENT initialization parameter to AUTO, use of the following SQL statements is not allowed:

  • ALTER DATABASE RENAME

  • ALTER DATABASE ADD/DROP LOGFILE

  • ALTER DATABASE ADD/DROP STANDBY LOGFILE MEMBER

  • ALTER DATABASE CREATE DATAFILE AS

If you attempt to use any of these statements on the standby database, an error is returned. For example:

SQL> ALTER DATABASE RENAME FILE '/disk1/oracle/oradata/payroll/t_db2.log' to 'dummy';

alter database rename file '/disk1/oracle/oradata/payroll/t_db2.log' to 'dummy' 
* 
ERROR at line 1: 
ORA-01511: error in renaming log/datafiles 
ORA-01270: RENAME operation is not allowed if STANDBY_FILE_MANAGEMENT is auto

See Section 9.3.1 to learn how to add datafiles to a physical standby database.

A.1.2 Standby Database Does Not Receive Redo Data from the Primary Database

If the standby site is not receiving redo data, query the V$ARCHIVE_DEST view and check for error messages. For example, enter the following query:

SQL> SELECT DEST_ID "ID", -
> STATUS "DB_status", -
> DESTINATION "Archive_dest", -
> ERROR "Error" -
> FROM V$ARCHIVE_DEST WHERE DEST_ID <=5;

ID DB_status Archive_dest                   Error   
-- --------- ------------------------------ ------------------------------------
 1  VALID    /vobs/oracle/work/arc_dest/arc                          
 2  ERROR    standby1                       ORA-16012: Archivelog standby database identifier mismatch  
 3  INACTIVE                            
 4  INACTIVE                    
 5  INACTIVE                                           
5 rows selected.

If the output of the query does not help you, check the following list of possible issues. If any of the following conditions exist, redo transport services will fail to transmit redo data to the standby database:

  • The service name for the standby instance is not configured correctly in the tnsnames.ora file for the primary database.

  • The Oracle Net service name specified by the LOG_ARCHIVE_DEST_n parameter for the primary database is incorrect.

  • The LOG_ARCHIVE_DEST_STATE_n parameter for the standby database is not set to the value ENABLE.

  • The listener.ora file has not been configured correctly for the standby database.

  • The listener is not started at the standby site.

  • The standby instance is not started.

  • You have added a standby archiving destination to the primary SPFILE or text initialization parameter file, but have not yet enabled the change.

  • Redo transport authentication has not been configured properly. See section 3.1.2 for redo transport authentication configuration requirements.

  • You used an invalid backup as the basis for the standby database (for example, you used a backup from the wrong database, or did not create the standby control file using the correct method).

A.1.3 You Cannot Mount the Physical Standby Database

You cannot mount the standby database if the standby control file was not created with the ALTER DATABASE CREATE [LOGICAL] STANDBY CONTROLFILE ... statement or RMAN command. You cannot use the following types of control file backups:

  • An operating system-created backup

  • A backup created using an ALTER DATABASE statement without the PHYSICAL STANDBY or LOGICAL STANDBY option

A.2 Log File Destination Failures

If you specify REOPEN for a MANDATORY destination, redo transport services stall the primary database when redo data cannot be successfully transmitted.

The REOPEN attribute is required when you use the MAX_FAILURE attribute. Example A-1 shows how to set a retry time of 5 seconds and limit retries to 3 times.

Example A-1 Setting a Retry Time and Limit

LOG_ARCHIVE_DEST_1='LOCATION=/arc_dest REOPEN=5 MAX_FAILURE=3'

Use the ALTERNATE attribute of the LOG_ARCHIVE_DEST_n parameter to specify alternate archive destinations. An alternate archiving destination can be used when the transmission of redo data to a standby database fails. If transmission fails and the REOPEN attribute was not specified or the MAX_FAILURE attribute threshold was exceeded, redo transport services attempt to transmit redo data to the alternate destination on the next archival operation.

Use the NOALTERNATE attribute to prevent the original archive destination from automatically changing to an alternate archive destination when the original archive destination fails.

Example A-2 shows how to set the initialization parameters so that a single, mandatory, local destination will automatically fail over to a different destination if any error occurs.

Example A-2 Specifying an Alternate Destination

LOG_ARCHIVE_DEST_1='LOCATION=/disk1 MANDATORY ALTERNATE=LOG_ARCHIVE_DEST_2'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_2='LOCATION=/disk2 MANDATORY'
LOG_ARCHIVE_DEST_STATE_2=ALTERNATE

If the LOG_ARCHIVE_DEST_1 destination fails, the archiving process will automatically switch to the LOG_ARCHIVE_DEST_2 destination at the next log file switch on the primary database.

A.3 Handling Logical Standby Database Failures

An important tool for handling logical standby database failures is the DBMS_LOGSTDBY.SKIP_ERROR procedure. Depending on how important a table is, you might want to do one of the following:

  • Ignore failures for a table or specific DDL

  • Associate a stored procedure with a filter so at runtime a determination can be made about skipping the statement, executing this statement, or executing a replacement statement

Taking one of these actions prevents SQL Apply from stopping. Later, you can query the DBA_LOGSTDBY_EVENTS view to find and correct any problems that exist. See Oracle Database PL/SQL Packages and Types Reference for more information about using the DBMS_LOGSTDBY package with PL/SQL callout procedures.
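
For example, the following is a minimal sketch that tells SQL Apply to record, rather than stop on, errors raised by schema DDL against a hypothetical table; stop SQL Apply before running it and restart it afterward:

SQL> EXECUTE DBMS_LOGSTDBY.SKIP_ERROR (STMT => 'SCHEMA_DDL', -
> SCHEMA_NAME => 'HR', OBJECT_NAME => 'SCRATCH_TABLE');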

A.4 Problems Switching Over to a Physical Standby Database

In most cases, following the steps described in Chapter 8 will result in a successful switchover. However, if the switchover is unsuccessful, the following sections may help you to resolve the problem:

A.4.1 Switchover Fails Because Redo Data Was Not Transmitted

If the switchover does not complete successfully, you can query the SEQUENCE# column in the V$ARCHIVED_LOG view to see if the last redo data transmitted from the original primary database was applied on the standby database. If the last redo data was not transmitted to the standby database, you can manually copy the archived redo log file containing the redo data from the original primary database to the old standby database and register it with the SQL ALTER DATABASE REGISTER LOGFILE file_specification statement. If you then start apply services, the archived redo log file will be applied automatically. Query the SWITCHOVER_STATUS column in the V$DATABASE view. A switchover to the primary role is now possible if the SWITCHOVER_STATUS column returns TO PRIMARY or SESSIONS ACTIVE.

SQL> SELECT SWITCHOVER_STATUS FROM V$DATABASE;

SWITCHOVER_STATUS 
----------------- 
TO PRIMARY 
1 row selected 

See Chapter 17 for information about other valid values for the SWITCHOVER_STATUS column of the V$DATABASE view.

To continue with the switchover, follow the instructions in Section 8.2.1 for physical standby databases or Section 8.3.1 for logical standby databases, and try again to switch the target standby database to the primary role.

A.4.2 Switchover Fails Because SQL Sessions Are Still Active

If you do not include the WITH SESSION SHUTDOWN clause as a part of the ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY statement, active SQL sessions might prevent a switchover from being processed. Active SQL sessions can include other Oracle Database processes.

When sessions are active, an attempt to switch over fails with the following error message:

SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY;
 
ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY * 
ORA-01093: ALTER DATABASE CLOSE only permitted with no sessions connected

Action: Query the V$SESSION view to determine which processes are causing the error. For example:

SQL> SELECT SID, PROCESS, PROGRAM FROM V$SESSION -   
> WHERE TYPE = 'USER' -
>  AND SID <> (SELECT DISTINCT SID FROM V$MYSTAT);

SID        PROCESS   PROGRAM 
---------  --------  ------------------------------------------------ 
        7      3537  oracle@nhclone2 (CJQ0)
       10
       14
       16
       19
       21
 6 rows selected.

In the previous example, the JOB_QUEUE_PROCESSES parameter corresponds to the CJQ0 process entry. Because the job queue process is a user process, it is counted as a SQL session that prevents switchover from taking place. The entries with no process or program information are threads started by the job queue controller.

Verify the JOB_QUEUE_PROCESSES parameter is set using the following SQL statement:

SQL> SHOW PARAMETER JOB_QUEUE_PROCESSES; 

NAME                           TYPE      VALUE
------------------------------ -------   -------------------- 
job_queue_processes            integer   5

Then, set the parameter to 0. For example:

SQL> ALTER SYSTEM SET JOB_QUEUE_PROCESSES=0; 
Statement processed.

Because JOB_QUEUE_PROCESSES is a dynamic parameter, you can change the value and have the change take effect immediately without having to restart the instance. You can now retry the switchover procedure.

Do not modify the parameter in your initialization parameter file. After you shut down the instance and restart it after the switchover completes, the parameter will be reset to the original value. This applies to both primary and physical standby databases.

Table A-1 summarizes the common processes that prevent switchover and what corrective action you need to take.

Table A-1 Common Processes That Prevent Switchover

  • CJQ0 (Job Queue Scheduler Process): Change the JOB_QUEUE_PROCESSES dynamic parameter to the value 0. The change will take effect immediately without having to restart the instance.

  • QMN0 (Advanced Queue Time Manager): Change the AQ_TM_PROCESSES dynamic parameter to the value 0. The change will take effect immediately without having to restart the instance.

  • DBSNMP (Oracle Enterprise Manager Management Agent): Issue the emctl stop agent command from the operating system prompt.


A.4.3 Switchover Fails with the ORA-01102 Error

Suppose the standby database and the primary database reside on the same site. After both the ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY and the ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY statements are successfully executed, shut down and restart the physical standby database and the primary database.


Note:

It is not necessary to shut down and restart the physical standby database if it has not been opened read-only since the instance was started.

However, the startup of the second database fails with ORA-01102 error "cannot mount database in EXCLUSIVE mode."

This could happen during the switchover if you did not set the DB_UNIQUE_NAME parameter in the initialization parameter file that is used by the standby database (that is, the original primary database). If the DB_UNIQUE_NAME parameter of the standby database is not set, the standby and the primary databases both use the same mount lock and cause the ORA-01102 error during the startup of the second database.

Action: Add DB_UNIQUE_NAME=unique_database_name to the initialization parameter file used by the standby database, and shut down and restart the standby and primary databases.
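
For example, the following is a sketch of setting the parameter; the name is hypothetical, and because DB_UNIQUE_NAME is a static parameter, the change takes effect only after the instance is restarted:

SQL> ALTER SYSTEM SET DB_UNIQUE_NAME='payroll_stdby' SCOPE=SPFILE;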

A.4.4 Redo Data Is Not Applied After Switchover

The archived redo log files are not applied to the new standby database after the switchover.

This might happen because some environment or initialization parameters were not properly set after the switchover.

Action:

  • Check the tnsnames.ora file at the new primary site and the listener.ora file at the new standby site. There should be entries for a listener at the standby site and a corresponding service name at the primary site.

  • Start the listener at the standby site if it has not been started.

  • Check if the LOG_ARCHIVE_DEST_n initialization parameter was set to properly transmit redo data from the primary site to the standby site. For example, query the V$ARCHIVE_DEST fixed view at the primary site as follows:

    SQL> SELECT DEST_ID, STATUS, DESTINATION FROM V$ARCHIVE_DEST;
    

    If you do not see an entry corresponding to the standby site, you need to set LOG_ARCHIVE_DEST_n and LOG_ARCHIVE_DEST_STATE_n initialization parameters.

  • Set the STANDBY_ARCHIVE_DEST and LOG_ARCHIVE_FORMAT initialization parameters correctly at the standby site so that the archived redo log files are applied to the desired location. (Note that the STANDBY_ARCHIVE_DEST parameter has been deprecated and is supported for backward compatibility only.)

  • At the standby site, set the DB_FILE_NAME_CONVERT and LOG_FILE_NAME_CONVERT initialization parameters. Set the STANDBY_FILE_MANAGEMENT initialization parameter to AUTO if you want the standby site to automatically add new datafiles that are created at the primary site. A sketch of these settings follows.
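
For example, the following is a sketch of these standby-side settings with hypothetical directory paths:

DB_FILE_NAME_CONVERT='/disk1/oracle/oradata/payroll/','/disk2/oracle/oradata/payroll/'
LOG_FILE_NAME_CONVERT='/disk1/oracle/oradata/payroll/','/disk2/oracle/oradata/payroll/'
STANDBY_FILE_MANAGEMENT=AUTO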

A.4.5 Roll Back After Unsuccessful Switchover and Start Over

For physical standby databases in situations where an error occurred and it is not possible to continue with the switchover, it might still be possible to revert the new physical standby database back to the primary role by using the following steps. (This functionality is available starting with Oracle Database 11g Release 2 (11.2.0.2).)

  1. Shut down and mount the new standby database (old primary).

  2. Start Redo Apply on the new standby database.

  3. Verify that the new standby database is ready to be switched back to the primary role. Query the SWITCHOVER_STATUS column of the V$DATABASE view on the new standby database. For example:

    SQL> SELECT SWITCHOVER_STATUS FROM V$DATABASE;
     
    SWITCHOVER_STATUS 
    ----------------- 
    TO PRIMARY 
    1 row selected
    

    A value of TO PRIMARY or SESSIONS ACTIVE indicates that the new standby database is ready to be switched to the primary role. Continue to query this column until the value returned is either TO PRIMARY or SESSIONS ACTIVE.

  4. Issue the following statement to convert the new standby database back to the primary role:

    SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY WITH SESSION SHUTDOWN;
    

    If this statement is successful, the database will be running in the primary database role, and you do not need to perform any more steps.

    If this statement is unsuccessful, then continue with Step 5.

  5. When the switchover to change the role from primary to physical standby was initiated, a trace file was written in the log directory. This trace file contains the SQL statements required to re-create the original primary control file. Locate the trace file and extract the SQL statements into a temporary file. Execute the temporary file from SQL*Plus. This will revert the new standby database back to the primary role.

  6. Shut down the original physical standby database.

  7. Create a new standby control file. This is necessary to resynchronize the primary database and physical standby database. Copy the physical standby control file to the original physical standby system. Section 3.2.2 describes how to create a physical standby control file.

  8. Restart the original physical standby instance.

    If this procedure is successful and archive gap management is enabled, the FAL processes will start and re-archive any missing archived redo log files to the physical standby database. Force a log switch on the primary database and examine the alert logs on both the primary database and physical standby database to ensure the archived redo log file sequence numbers are correct.

    See Section 6.4.3.1 for information about archive gap management and Appendix F for information about locating the trace files.

  9. Try the switchover again.

    At this point, the Data Guard configuration has been rolled back to its initial state, and you can try the switchover operation again (after correcting any problems that might have led to the initial unsuccessful switchover).

A.5 Problems Switching Over to a Logical Standby Database

A switchover operation involving a logical standby database usually consists of two phases: preparing and committing. The exceptions to this are for rolling upgrades of Oracle software using a logical standby database or if you are using Data Guard broker. If you experience failures in the context of doing a rolling upgrade using a logical standby database or during a switchover operation initiated by Data Guard broker, you should go directly to Section A.5.2.


Note:

Oracle recommends that Flashback Database be enabled for all databases in a Data Guard configuration. The steps in this section assume that you have Flashback Database enabled on all databases in your Data Guard configuration.

A.5.1 Failures During the Prepare Phase of a Switchover Operation

If a failure occurs during the preparation phase of a switchover operation, you should cancel the switchover and retry the switchover operation from the very beginning.

A.5.1.1 Failure While Preparing the Primary Database

If you encounter failure while executing the ALTER DATABASE PREPARE TO SWITCHOVER TO LOGICAL STANDBY statement, you can cancel the prepare phase of a switchover by issuing the following SQL statement at the primary database:

SQL> ALTER DATABASE PREPARE TO SWITCHOVER TO LOGICAL STANDBY CANCEL;

You can now retry the switchover operation from the beginning.

A.5.1.2 Failure While Preparing the Logical Standby Database

If you encounter failure while executing the ALTER DATABASE PREPARE TO SWITCHOVER TO PRIMARY statement, you will need to cancel the prepare operation at the primary database and at the target standby database. Take the following steps:

  1. At the primary database, cancel the statement you had issued to prepare for the switchover:

    SQL> ALTER DATABASE PREPARE TO SWITCHOVER TO LOGICAL STANDBY CANCEL;
    
  2. At the logical standby database that was the target of the switchover, cancel the statement you had issued to prepare to switch over:

    SQL> ALTER DATABASE PREPARE TO SWITCHOVER TO PRIMARY CANCEL;
    

    You can now retry the switchover operation from the beginning.

A.5.2 Failures During the Commit Phase of a Switchover Operation

Although committing to a switchover involves a single SQL statement, internally a number of operations are performed. The corrective actions that you need to take depend on the state of the commit to switchover operation when the error was encountered.

A.5.2.1 Failure to Convert the Original Primary Database

If you encounter failures while executing the ALTER DATABASE COMMIT TO SWITCHOVER TO LOGICAL STANDBY statement, you can take the following steps:

  1. Check the DATABASE_ROLE column of the V$DATABASE fixed view on the original primary database:

    SQL> SELECT DATABASE_ROLE FROM V$DATABASE;
    
    • If the column contains a value of LOGICAL STANDBY, the switchover operation has completed, but has failed during a post-switchover task. In this situation, Oracle recommends that you shut down and reopen the database.

    • If the column contains a value of PRIMARY, proceed to Step 2.

  2. Perform the following query on the original primary:

    SQL> SELECT COUNT(*) FROM SYSTEM.LOGSTDBY$PARAMETERS -
    > WHERE NAME = 'END_PRIMARY';
    
    • If the query returns a 0, the primary database is in a state identical to the one it was in before the commit to switchover command was issued. You do not need to take any corrective action. You can proceed with the commit to switchover operation or cancel the switchover operation as outlined in Section A.5.1.2.

    • If the query returns a 1, the primary is in an inconsistent state, and you need to proceed to Step 3.

  3. Take corrective action at the original primary database to maintain its ability to be protected by existing or newly instantiated logical standby databases.

    You can either fix the underlying cause of the error raised during the commit to switchover operation and reissue the SQL statement (ALTER DATABASE COMMIT TO SWITCHOVER TO LOGICAL STANDBY) or you can take the following steps:

    1. From the alert log of the instance where you initiated the commit to switchover command, determine the SCN needed to flash back to the original primary. This information is displayed after the ALTER DATABASE COMMIT TO SWITCHOVER TO LOGICAL STANDBY SQL statement:

      LOGSTDBY: Preparing the COMMIT TO SWITCHOVER TO LOGICAL STANDBY DDL at scn [flashback_scn].
      
    2. Shut down all instances of the primary database:

      SQL> SHUTDOWN IMMEDIATE;
      
    3. Mount the primary database in exclusive mode:

      SQL> STARTUP MOUNT;
      
    4. Flash back the database to the SCN taken from the alert log:

      SQL> FLASHBACK DATABASE TO BEFORE SCN <flashback_scn>;
      
    5. Open the primary database:

      SQL> STARTUP;
      
    6. Lower the database guard at the original primary database:

      SQL> ALTER DATABASE GUARD NONE;
      

    At this point, the primary database is in a state identical to the one it was in before the commit to switchover command was issued. You do not need to take any corrective action. You can proceed with the commit to switchover operation or cancel the switchover operation as outlined in Section A.5.1.1.

A.5.2.2 Failure to Convert the Target Logical Standby Database

If you encounter failures while executing the ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY statement, take the following steps:

  1. Check the DATABASE_ROLE column of the V$DATABASE fixed view on the target standby database:

    SQL> SELECT DATABASE_ROLE FROM V$DATABASE;
    
    • If the column contains a value of PRIMARY, the switchover operation has completed, but has failed during a post-switchover task. In this situation, you must perform the following steps:

      1. Shut down and reopen the database.

      2. Issue an ALTER DATABASE GUARD NONE command to remove write restrictions to the database.

    • If the column contains a value of LOGICAL STANDBY, proceed to Step 2.

  2. Perform the following query on the target logical standby:

    SQL> SELECT COUNT(*) FROM SYSTEM.LOGSTDBY$PARAMETERS -
    > WHERE NAME = 'BEGIN_PRIMARY';
    
    • If the query returns a 0, the logical standby is in a state identical to the one it was in before the commit to switchover command was issued. You do not need to take any corrective action. You can proceed with the commit to switchover operation or cancel the switchover operation as outlined in Section A.5.1.2.

    • If the query returns a 1, the logical standby is in an inconsistent state, and you should proceed to Step 3.

  3. Take corrective action at the logical standby to maintain its ability to either become the new primary or become a bystander to a different new primary.

    You can either fix the underlying cause of the error raised during the commit to switchover operation and reissue the SQL statement (ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY) or you can take the following steps to flash back the logical standby database to a point of consistency just prior to the commit to switchover attempt:

    1. From the alert log of the instance where you initiated the commit to switchover command, determine the SCN needed to flash back to the logical standby. This information is displayed after the ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY SQL statement:

      LOGSTDBY: Preparing the COMMIT TO SWITCHOVER TO PRIMARY DDL at scn [flashback_scn].
      
    2. Shut down all instances of the target standby database:

      SQL> SHUTDOWN IMMEDIATE;
      
    3. Mount the target logical standby database:

      SQL> STARTUP MOUNT;
      
    4. Flash back the target logical standby to the desired SCN:

      SQL> FLASHBACK DATABASE TO BEFORE SCN <flashback_scn>;
      
    5. Open the database (in the case of Oracle RAC, open all instances):

      SQL> STARTUP OPEN;
      

At this point the target standby is in a state identical to the one it was in before the commit to switchover command was issued. You do not need to take any further corrective action. You can proceed with the commit to switchover operation.

A.6 What to Do If SQL Apply Stops

Apply services cannot apply unsupported DML statements, DDL statements, and Oracle supplied packages to a logical standby database running SQL Apply.

When an unsupported statement or package is encountered, SQL Apply stops. You can take the actions described in Table A-2 to correct the situation and start SQL Apply on the logical standby database again.

Table A-2 Fixing Typical SQL Apply Errors

If: You suspect an unsupported statement or Oracle supplied package was encountered

Then: Find the last statement in the DBA_LOGSTDBY_EVENTS view. This will indicate the statement and error that caused SQL Apply to fail. If an incorrect SQL statement caused SQL Apply to fail, transaction information, as well as the statement and error information, can be viewed. The transaction information can be used with LogMiner tools to understand the cause of the problem.

If: An error requiring database management occurred, such as running out of space in a particular tablespace

Then: Fix the problem and resume SQL Apply using the ALTER DATABASE START LOGICAL STANDBY APPLY statement.

If: An error occurred because a SQL statement was entered incorrectly, such as an incorrect standby database filename being entered in a tablespace statement

Then: Enter the correct SQL statement and use the DBMS_LOGSTDBY.SKIP_TRANSACTION procedure to ensure the incorrect statement is ignored the next time SQL Apply is run. Then, restart SQL Apply using the ALTER DATABASE START LOGICAL STANDBY APPLY statement.

If: An error occurred because skip parameters were incorrectly set up, such as specifying that all DML for a given table be skipped but CREATE, ALTER, and DROP TABLE statements were not specified to be skipped

Then: Issue the DBMS_LOGSTDBY.SKIP('TABLE','schema_name','table_name',null) procedure, then restart SQL Apply.


See Chapter 17 for information about querying the DBA_LOGSTDBY_EVENTS view to determine the cause of failures.
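
For a quick check from SQL*Plus, a query similar to the following lists the most recent events first so that the failing statement and its error appear at the top. This is only a sketch; it uses columns of the DBA_LOGSTDBY_EVENTS view that are also shown elsewhere in this appendix:

SQL> ALTER SESSION SET NLS_DATE_FORMAT = 'DD-MON-YY HH24:MI:SS';
SQL> SELECT EVENT_TIME, STATUS, EVENT FROM DBA_LOGSTDBY_EVENTS -
> ORDER BY EVENT_TIME DESC;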

A.7 Network Tuning for Redo Data Transmission

For optimal performance, set the Oracle Net SDU parameter to 32 kilobytes in each Oracle Net connect descriptor used by redo transport services.

The following example shows a database initialization parameter file segment that defines a remote destination netserv:

LOG_ARCHIVE_DEST_3='SERVICE=netserv'

The following example shows the definition of that service name in the tnsnames.ora file:

netserv=(DESCRIPTION=(SDU=32768)(ADDRESS=(PROTOCOL=tcp)(HOST=host) (PORT=1521)) (CONNECT_DATA=(SERVICE_NAME=srvc)))

The following example shows the definition in the listener.ora file:

LISTENER=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=tcp)
(HOST=host)(PORT=1521))))

SID_LIST_LISTENER=(SID_LIST=(SID_DESC=(SDU=32768)(SID_NAME=sid)
(GLOBALDBNAME=srvc)(ORACLE_HOME=/oracle)))

If you archive to a remote site using a high-latency or high-bandwidth network link, you can improve performance by using the SQLNET.SEND_BUF_SIZE and SQLNET.RECV_BUF_SIZE Oracle Net profile parameters to increase the size of the network send and receive I/O buffers.
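
For example, assuming a 4 MB buffer is appropriate for the bandwidth-delay product of the network link (the size shown is illustrative only), the connect descriptor for netserv shown earlier might be extended as follows:

netserv=(DESCRIPTION=(SDU=32768)(SEND_BUF_SIZE=4194304)(RECV_BUF_SIZE=4194304)
 (ADDRESS=(PROTOCOL=tcp)(HOST=host)(PORT=1521))
 (CONNECT_DATA=(SERVICE_NAME=srvc)))

The same buffer sizes can also be applied to all connections by setting SQLNET.SEND_BUF_SIZE and SQLNET.RECV_BUF_SIZE in the sqlnet.ora file.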

See Oracle Database Net Services Administrator's Guide for information about other ways to change the Oracle Net SDU parameter.

A.8 Slow Disk Performance on Standby Databases

If asynchronous I/O on the file system itself is showing performance problems, try mounting the file system using the Direct I/O option or setting the FILESYSTEMIO_OPTIONS=SETALL initialization parameter. The maximum I/O size setting is 1 MB.
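
For example, a sketch of enabling both asynchronous and direct I/O through the initialization parameter (the parameter is not dynamic, so the instance must be restarted for the change to take effect):

SQL> ALTER SYSTEM SET FILESYSTEMIO_OPTIONS='SETALL' SCOPE=SPFILE;
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP;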

A.9 Log Files Must Match to Avoid Primary Database Shutdown

If you have configured a standby redo log on one or more standby databases in the configuration, ensure the size of the standby redo log files on each standby database exactly matches the size of the online redo log files on the primary database.

At log switch time, if there are no available standby redo log files that match the size of the new current online redo log file on the primary database:

  • The primary database will shut down if it is operating in maximum protection mode,

    or

  • The RFS process on the standby database will create an archived redo log file on the standby database and write the following message in the alert log:

    No standby log files of size <#> blocks available.
    

For example, if the primary database uses two online redo log groups whose log files are 100K, then the standby database should have 3 standby redo log groups with log file sizes of 100K.

Also, whenever you add a redo log group to the primary database, you must add a corresponding standby redo log group to the standby database. This reduces the probability that the primary database will be adversely affected because a standby redo log file of the required size is not available at log switch time.
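
For example, a standby redo log group sized to match the 100K online redo log files mentioned above could be added to the standby database with a statement such as the following (the group number and file name are illustrative):

SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 ('/oracle/dbs/slog4.rdo') SIZE 100K;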

A.10 Troubleshooting a Logical Standby Database

This section contains the following topics:

  • Recovering from Errors

  • Troubleshooting SQL*Loader Sessions

  • Troubleshooting Long-Running Transactions

  • Troubleshooting ORA-1403 Errors with Flashback Transactions

A.10.1 Recovering from Errors

Logical standby databases maintain user tables, sequences, and jobs. To maintain other objects, you must reissue the DDL statements seen in the redo data stream.

If SQL Apply fails, an error is recorded in the DBA_LOGSTDBY_EVENTS table. The following sections demonstrate how to recover from two such errors.

A.10.1.1 DDL Transactions Containing File Specifications

DDL statements are executed the same way on the primary database and the logical standby database. If the underlying file structure is the same on both databases, the DDL will execute on the standby database as expected.

If an error was caused by a DDL transaction containing a file specification that did not match in the logical standby database environment, perform the following steps to fix the problem:

  1. Use the ALTER SESSION DISABLE GUARD statement to bypass the database guard so you can make modifications to the logical standby database:

    SQL> ALTER SESSION DISABLE GUARD;
    
  2. Execute the DDL statement, using the correct file specification, and then reenable the database guard. For example:

    SQL> ALTER TABLESPACE t_table ADD DATAFILE '/dbs/t_db.f' SIZE 100M REUSE;
    SQL> ALTER SESSION ENABLE GUARD;
    
  3. Start SQL Apply on the logical standby database and skip the failed transaction.

    SQL> ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE -
    > SKIP FAILED TRANSACTION;
    

In some situations, the problem that caused the transaction to fail can be corrected and SQL Apply restarted without skipping the transaction. An example of this might be when available space is exhausted. (Do not let the primary and logical standby databases diverge when skipping DDL transactions. If possible, you should manually execute a compensating transaction in place of the skipped transaction.)

The following example shows SQL Apply stopping, the error being corrected, and then restarting SQL Apply:

SQL> SET LONG 1000
SQL> ALTER SESSION SET NLS_DATE_FORMAT  = 'DD-MON-YY HH24:MI:SS';

Session altered.

SQL> SELECT EVENT_TIME, COMMIT_SCN, EVENT, STATUS FROM DBA_LOGSTDBY_EVENTS;

EVENT_TIME              COMMIT_SCN
------------------ ---------------
EVENT
-------------------------------------------------------------------------------
STATUS
-------------------------------------------------------------------------------
22-OCT-03 15:47:58

ORA-16111: log mining and apply setting up

22-OCT-03 15:48:04          209627
insert into "SCOTT"."EMP"
values
   "EMPNO" = 7900,
   "ENAME" = 'ADAMS',
   "JOB" = 'CLERK',
   "MGR" IS NULL,
   "HIREDATE" = TO_DATE('22-OCT-03', 'DD-MON-RR'),
   "SAL" = 950,
   "COMM" IS NULL,
   "DEPTNO" IS NULL
ORA-01653: unable to extend table SCOTT.EMP by %200 bytes in tablespace T_TABLE

In the example, the ORA-01653 message indicates that the tablespace was full and unable to extend itself. To correct the problem, add a new datafile to the tablespace. For example:

SQL> ALTER TABLESPACE t_table ADD DATAFILE '/dbs/t_db.f' SIZE 60M;
Tablespace altered.

Then, restart SQL Apply:

SQL> ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;
Database altered.

When SQL Apply restarts, the transaction that failed will be reexecuted and applied to the logical standby database.

A.10.1.2 Recovering from DML Failures

Do not use the SKIP_TRANSACTION procedure to filter DML failures. Not only is the DML that is seen in the events table skipped, but so is all of the other DML associated with the transaction. This can cause the data in multiple tables to diverge from the primary database.

DML failures usually indicate a problem with a specific table. For example, assume the failure is an out-of-storage error that you cannot resolve immediately. The following steps demonstrate one way to respond to this problem.

  1. Bypass the table, but not the transaction, by adding the table to the skip list:

    SQL> EXECUTE DBMS_LOGSTDBY.SKIP('DML','SCOTT','EMP');
    SQL> ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;
    

    From this point on, DML activity for the SCOTT.EMP table is not applied. After you correct the storage problem, you can fix the table, provided you set up a database link to the primary database that has administrator privileges to run procedures in the DBMS_LOGSTDBY package (a sample database link definition appears after these steps).

  2. Using the database link to the primary database, drop the local SCOTT.EMP table and then re-create it, and pull the data over to the standby database.

    SQL> ALTER DATABASE STOP LOGICAL STANDBY APPLY;
    SQL> EXECUTE DBMS_LOGSTDBY.INSTANTIATE_TABLE('SCOTT','EMP','PRIMARYDB');
    SQL> ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;
    
  3. To ensure a consistent view across the newly instantiated table and the rest of the database, wait for SQL Apply to catch up with the primary database before querying this table. Refer to Section 10.5.5, "Adding or Re-Creating Tables On a Logical Standby Database" for a detailed example.
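
The database link used in Step 2 must exist on the logical standby database and must connect to an account on the primary database that is able to run procedures in the DBMS_LOGSTDBY package. A minimal sketch, in which the link name, account, and service name are illustrative:

SQL> ALTER SESSION DISABLE GUARD;
SQL> CREATE DATABASE LINK primarydb -
> CONNECT TO system IDENTIFIED BY password -
> USING 'primarydb_service';
SQL> ALTER SESSION ENABLE GUARD;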

A.10.2 Troubleshooting SQL*Loader Sessions

Oracle SQL*Loader provides a method of loading data from different sources into the Oracle Database. This section analyzes some of the features of the SQL*Loader utility as they pertain to SQL Apply.

Regardless of the data load method chosen, the SQL*Loader control file specifies, through the APPEND and REPLACE keywords, what to do with the current contents of the Oracle table into which the new data is to be loaded. The following examples show how to use these keywords on a table named LOAD_STOK:

  • When using the APPEND keyword, the new data to be loaded is appended to the contents of the LOAD_STOK table:

    LOAD DATA
    INTO TABLE LOAD_STOK APPEND
    
  • When using the REPLACE keyword, the contents of the LOAD_STOK table are deleted prior to loading new data. Oracle SQL*Loader uses the DELETE statement to purge the contents of the table, in a single transaction:

    LOAD DATA
    INTO TABLE LOAD_STOK REPLACE
    

Rather than using the REPLACE keyword in the SQL*Loader script, Oracle recommends that you issue the TRUNCATE TABLE SQL statement against the table on the primary database prior to loading the data. This purges the copies of the table on both the primary and standby databases in a manner that is both fast and efficient, because the TRUNCATE TABLE command is recorded in the online redo log files and is issued by SQL Apply on the logical standby database.
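
For example, the recommended sequence on the primary database for reloading the LOAD_STOK table would be to truncate the table and then run SQL*Loader with the APPEND keyword (the control file name and database account are illustrative):

SQL> TRUNCATE TABLE LOAD_STOK;

$ sqlldr userid=scott control=load_stok.ctl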

The SQL*Loader script may continue to contain the REPLACE keyword, but it will now attempt to DELETE zero rows from the object on the primary database. Because no rows were deleted from the primary database, there will be no redo recorded in the redo log files. Therefore, no DELETE statement will be issued against the logical standby database.

Issuing the REPLACE keyword without the TRUNCATE TABLE SQL statement presents the following potential problems for SQL Apply when the transaction is applied to the logical standby database:

  • If the table currently contains a significant number of rows, then these rows need to be deleted from the standby database. Because SQL Apply is not able to determine the original syntax of the statement, SQL Apply must issue a DELETE statement for each row purged from the primary database.

    For example, if the table on the primary database originally had 10,000 rows, then Oracle SQL*Loader will issue a single DELETE statement to purge the 10,000 rows. On the standby database, SQL Apply does not know that all rows are to be purged, and instead must issue 10,000 individual DELETE statements, with each statement purging a single row.

  • If the table on the standby database does not contain an index that can be used by SQL Apply, then the DELETE statement will issue a Full Table Scan to purge the information.

    Continuing with the previous example, because SQL Apply has issued 10,000 individual DELETE statements, this could result in 10,000 Full Table Scans being issued against the standby database.

A.10.3 Troubleshooting Long-Running Transactions

One of the primary causes of long-running transactions in a SQL Apply environment is full table scans. Additionally, long-running transactions could be the result of SQL statements being replicated to the standby database, such as when creating or rebuilding an index.

Identifying Long-Running Transactions

If SQL Apply is executing a single SQL statement for a long period of time, then a warning message similar to the following is reported in the alert log of the SQL Apply instance:

Mon Feb 17 14:40:15 2003
WARNING: the following transaction makes no progress
WARNING: in the last 30 seconds for the given message!
WARNING: xid =
0x0016.007.000017b6 cscn = 1550349, message# = 28, slavid = 1
knacrb: no offending session found (not ITL pressure)

Note the following about the warning message:

  • This warning is similar to the warning message returned for interested transaction list (ITL) pressure, with the exception being the last line that begins with knacrb. The final line indicates:

    • A Full Table Scan may be occurring

    • This issue has nothing to do with interested transaction list (ITL) pressure

  • This warning message is reported only if a single statement takes more than 30 seconds to execute.

It may not be possible to determine the SQL statement being executed by the long-running transaction, but the following SQL statement may help in identifying the database objects on which SQL Apply is operating:

SQL> SELECT SAS.SERVER_ID -
>  , SS.OWNER -
>  , SS.OBJECT_NAME -
>  , SS.STATISTIC_NAME -
>  , SS.VALUE -
>  FROM V$SEGMENT_STATISTICS SS -
>  , V$LOCK L -
>  , V$STREAMS_APPLY_SERVER SAS -
>  WHERE SAS.SERVER_ID = &SLAVE_ID -
>  AND L.SID = SAS.SID -
>  AND L.TYPE = 'TM' -
>  AND SS.OBJ# = L.ID1;

Additionally, you can issue the following SQL statement to identify the SQL statement that has resulted in a large number of disk reads being issued per execution:

SQL> SELECT SUBSTR(SQL_TEXT,1,40) -
>  , DISK_READS -
>  , EXECUTIONS -
>  , DISK_READS/EXECUTIONS -
>  , HASH_VALUE -
>  , ADDRESS -
>  FROM V$SQLAREA -
>  WHERE DISK_READS/GREATEST(EXECUTIONS,1) > 1 -
>  AND ROWNUM < 10 -
>  ORDER BY DISK_READS/GREATEST(EXECUTIONS,1) DESC;

Oracle recommends that all tables have primary key constraints defined, which automatically means that the column is defined as NOT NULL. For any table where a primary-key constraint cannot be defined, an index should be defined on an appropriate column that is defined as NOT NULL. If a suitable column does not exist on the table, then the table should be reviewed and, if possible, skipped by SQL Apply. The following steps describe how to skip all DML statements issued against the FTS table on the SCOTT schema:

  1. Stop SQL Apply:

    SQL> ALTER DATABASE STOP LOGICAL STANDBY APPLY;
    Database altered
    
  2. Configure the skip procedure for the SCOTT.FTS table for all DML transactions:

    SQL> EXECUTE DBMS_LOGSTDBY.SKIP(stmt => 'DML' , -
    >  schema_name => 'SCOTT' , -
    >  object_name => 'FTS');
    PL/SQL procedure successfully completed
    
  3. Start SQL Apply:

    SQL> ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;
    Database altered
    

Troubleshooting ITL Pressure

Interested transaction list (ITL) pressure is reported in the alert log of the SQL Apply instance. Example A-3 shows an example of the warning messages.

Example A-3 Warning Messages Reported for ITL Pressure

Tue Apr 22 15:50:42 2003
WARNING: the following transaction makes no progress
WARNING: in the last 30 seconds for the given message!
WARNING: xid =
0x0006.005.000029fa cscn = 2152982, message# = 2, slavid = 17

Real-Time Analysis

The messages shown in Example A-3 indicate that the SQL Apply process (slavid) #17 has not made any progress in the last 30 seconds. To determine the SQL statement being issued by the Apply process, issue the following query:

SQL> SELECT SA.SQL_TEXT -
>  FROM V$SQLAREA SA -
>  , V$SESSION S -
>  , V$STREAMS_APPLY_SERVER SAS -
>  WHERE SAS.SERVER_ID = &SLAVEID -
>  AND S.SID = SAS.SID -
>  AND SA.ADDRESS = S.SQL_ADDRESS

SQL_TEXT
------------------------------------------------------------
insert into "APP"."LOAD_TAB_1" p("PK","TEXT")values(:1,:2)

An alternative method of identifying ITL pressure is to query the V$LOCK view, as shown in the following example. Any session that has a request value of 4 on a TX lock is waiting for an ITL to become available.

SQL> SELECT SID,TYPE,ID1,ID2,LMODE,REQUEST -
> FROM V$LOCK -
> WHERE TYPE = 'TX'

SID        TY ID1        ID2        LMODE      REQUEST
---------- -- ---------- ---------- ---------- ----------
         8 TX     327688         48          6          0
        10 TX     327688         48          0          4

In this example, SID 10 is waiting for the TX lock held by SID 8.

Post-Incident Review

Pressure for a segment's ITL is unlikely to last for an extended period of time. In addition, ITL pressure that lasts for less than 30 seconds will not be reported in the standby database's alert log. Therefore, to determine which objects have been subjected to ITL pressure, issue the following statement:

SQL> SELECT SEGMENT_OWNER, SEGMENT_NAME, SEGMENT_TYPE -
>  FROM V$SEGMENT_STATISTICS -
>  WHERE STATISTIC_NAME = 'ITL WAITS' -
>  AND VALUE > 0 -
>  ORDER BY VALUE

This statement reports all database segments that have had ITL pressure at some time since the instance was last started.


Note:

This SQL statement is not limited to logical standby databases in a Data Guard environment. It is applicable to any Oracle database.

Resolving ITL Pressure

To increase the INITRANS integer for a particular database object, it is necessary to first stop SQL Apply.


See Also:

Oracle Database SQL Language Reference for more information about specifying the INITRANS integer, which is the initial number of concurrent transaction entries allocated within each data block allocated to the database object

The following example shows the necessary steps to increase the INITRANS for table load_tab_1 in the schema app.

  1. Stop SQL Apply:

    SQL> ALTER DATABASE STOP LOGICAL STANDBY APPLY;
    Database altered.
    
  2. Temporarily bypass the database guard:

    SQL> ALTER SESSION DISABLE GUARD;
    Session altered.
    
  3. Increase the INITRANS on the standby database. For example:

    SQL> ALTER TABLE APP.LOAD_TAB_1 INITRANS 30;
    Table altered
    
  4. Reenable the database guard:

    SQL> ALTER SESSION ENABLE GUARD;
    Session altered
    
  5. Start SQL Apply:

    SQL> ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;
    Database altered.
    

Also, consider modifying the database object on the primary database, so that in the event of a switchover, the error does not occur on the new standby database.
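
For example, a sketch of making the equivalent change on the primary database for the same table (the value is illustrative; because ALTER TABLE statements are replicated by SQL Apply, the change also propagates to the logical standby):

SQL> ALTER TABLE APP.LOAD_TAB_1 INITRANS 30;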

A.10.4 Troubleshooting ORA-1403 Errors with Flashback Transactions

If SQL Apply returns the ORA-1403: No Data Found error, then it may be possible to use Flashback Transaction to reconstruct the missing data. This depends upon the value of the UNDO_RETENTION initialization parameter specified on the standby database instance.

Under normal circumstances, the ORA-1403 error should not be seen in a logical standby database environment. The error occurs when data in a table that is being managed by SQL Apply is modified directly on the standby database and then the same data is modified on the primary database. When the modified data is updated on the primary database and is subsequently received on the logical standby database, SQL Apply verifies the original version of the data is present on the standby database before updating the record. When this verification fails, the ORA-1403: No Data Found error is returned.

The Initial Error

When SQL Apply verification fails, the error message is reported in the alert log of the logical standby database and a record is inserted in the DBA_LOGSTDBY_EVENTS view. The information in the alert log is truncated, while the error is reported in its entirety in the database view. For example:

LOGSTDBY stmt: UPDATE "SCOTT"."MASTER"
  SET
    "NAME" = 'john'
  WHERE 
    "PK" = 1 and 
    "NAME" = 'andrew' and 
    ROWID = 'AAAAAAAAEAAAAAPAAA'
LOGSTDBY status: ORA-01403: no data found
LOGSTDBY PID 1006, oracle@staco03 (P004)
LOGSTDBY XID 0x0006.00e.00000417, Thread 1, RBA 0x02dd.00002221.10

The Investigation

The first step is to analyze the historical data of the table that caused the error. This can be achieved using the VERSIONS clause of the SELECT statement. For example, you can issue the following query on the primary database:

SELECT VERSIONS_XID
      , VERSIONS_STARTSCN
      , VERSIONS_ENDSCN
      , VERSIONS_OPERATION
      , PK
      , NAME
   FROM SCOTT.MASTER
        VERSIONS BETWEEN SCN MINVALUE AND MAXVALUE
  WHERE PK = 1
  ORDER BY NVL(VERSIONS_STARTSCN,0);

VERSIONS_XID     VERSIONS_STARTSCN VERSIONS_ENDSCN V  PK NAME
---------------- ----------------- --------------- - --- -------
03001900EE070000           3492279         3492290 I   1 andrew
02000D00E4070000           3492290                 D   1 andrew

Depending upon the amount of undo retention that the database is configured to retain (UNDO_RETENTION) and the activity on the table, the information returned might be extensive and you may need to change the VERSIONS BETWEEN clause to restrict the amount of information returned. From the information returned, it can be seen that the record was first inserted at SCN 3492279 and then was deleted at SCN 3492290 as part of transaction ID 02000D00E4070000. Using the transaction ID, the database should be queried to find the scope of the transaction. This is achieved by querying the FLASHBACK_TRANSACTION_QUERY view.
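
For example, to restrict the output to the SCN range observed in the example above (the SCN values are taken from that output and are illustrative), the VERSIONS BETWEEN clause can name explicit SCN bounds:

SELECT VERSIONS_XID
      , VERSIONS_OPERATION
      , PK
      , NAME
   FROM SCOTT.MASTER
        VERSIONS BETWEEN SCN 3492270 AND 3492300
  WHERE PK = 1
  ORDER BY NVL(VERSIONS_STARTSCN,0);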

SELECT OPERATION
     , UNDO_SQL
  FROM FLASHBACK_TRANSACTION_QUERY
 WHERE XID = HEXTORAW('02000D00E4070000');

OPERATION  UNDO_SQL
---------- ------------------------------------------------
DELETE     insert into "SCOTT"."MASTER"("PK","NAME") values
           ('1','andrew');
BEGIN

Note that there is always one row returned representing the start of the transaction. In this transaction, only one row was deleted in the master table. Executing the statement in the UNDO_SQL column will restore the original data to the table.

SQL> INSERT INTO "SCOTT"."MASTER"("PK","NAME") VALUES ('1','ANDREW');SQL> COMMIT;

When you restart SQL Apply, the transaction will be applied to the standby database:

SQL> ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;

C Data Type and DDL Support on a Logical Standby Database

When setting up a logical standby database, you must ensure the logical standby database can maintain the datatypes and tables in your primary database. This appendix lists the various database objects, storage types, and PL/SQL supplied packages that are supported and unsupported by logical standby databases. It contains the following topics:

C.1 Datatype Considerations

The following sections list the supported and unsupported database objects:

C.1.1 Supported Datatypes in a Logical Standby Database

Logical standby databases support the following datatypes:

  • BINARY_DOUBLE

  • BINARY_FLOAT

  • BLOB

  • CHAR

  • CLOB and NCLOB

  • DATE

  • INTERVAL YEAR TO MONTH

  • INTERVAL DAY TO SECOND

  • LONG

  • LONG RAW

  • NCHAR

  • NUMBER

  • NVARCHAR2

  • RAW

  • TIMESTAMP

  • TIMESTAMP WITH TIMEZONE

  • TIMESTAMP WITH LOCAL TIMEZONE

  • VARCHAR and VARCHAR2

  • LOBs stored as SecureFiles (requires that the primary database be run at a compatibility of 11.2 or higher. See Section C.14, "Support for SecureFiles LOBs".)

  • XMLType data for all storage models, assuming the following primary database compatibility requirements:

    • XMLType stored in CLOB format requires primary database to run at a compatibility of 11.1 or higher

    • XMLType stored in object-relational format or as binary XML requires that the primary database be running Oracle Database 11g Release 2 (11.2.0.3) or higher with a redo compatibility setting of 11.2.0.3 or higher

C.1.1.1 Compatibility Requirements

SQL Apply support for the following has compatibility requirements on the primary database:

  • Multibyte CLOB support requires primary database to run at a compatibility of 10.1 or higher.

  • IOT support without LOBs and Overflows requires primary database to run at a compatibility of 10.1 or higher.

  • IOT support with LOB and Overflow requires primary database to run at a compatibility of 10.2 or higher.

  • TDE support requires primary database to run at a compatibility of 11.1 or higher.

  • Segment compression requires primary database to run at a compatibility of 11.1 or higher.

  • Hybrid Columnar Compression support is dependent on the underlying storage system.




C.1.2 Unsupported Datatypes in a Logical Standby Database

The following data types are not supported by Logical standby databases. If a table contains columns having any of these unsupported data types, then the entire table is ignored by SQL Apply.


BFILE
Collections (including VARRAYS and nested tables)
Multimedia data types (including Spatial, Image, and Oracle Text)
ROWID, UROWID
User-defined types

C.2 Support for Transparent Data Encryption (TDE)

Data Guard SQL Apply can be used to provide data protection for a primary database with Transparent Data Encryption (TDE) enabled. Consider the following when using a logical standby database to provide data protection for applications with advanced security requirements:

  • Tables with Transparent Data Encryption using server held keys are replicated on a logical standby database when both the primary and the standby databases are running at a compatibility level of 11.1 or higher.

  • Transparent Data Encryption in the context of Hardware Security Modules is not supported for logical standby databases in 11g Release 1.

You must consider the following restrictions when, in the context of a logical standby database, you want to replicate tables that have encrypted columns:

  1. To translate encrypted redo records, SQL Apply must have access to an open wallet containing the Transparent Data Encryption keys. Therefore, you must copy the wallet containing the keys from the primary database to the standby database after it has been created.

  2. The wallet must be copied from the primary database to the logical standby database every time the master key is changed.

  3. Oracle recommends that you not rekey the master key at the logical standby database while the logical standby database is replicating encrypted tables from the primary database. Doing so may cause SQL Apply to halt when it encounters an encrypted redo record.

  4. You can rekey the encryption key of a replicated table at the logical standby database. This requires that you lower the guard setting to NONE before you issue the rekey command (see the example following this list).

  5. Replicated encrypted tables can use a different encryption scheme for columns than the one used in the primary database. For example, if the SALARY column of the HR.EMPLOYEES table is encrypted at the primary database using the AES192 encryption algorithm, it can be encrypted at the logical standby using the AES256 encryption algorithm. Or, the SALARY column can remain unencrypted at the logical standby database.
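
For example, a minimal sketch of rekeying the SALARY column of the replicated HR.EMPLOYEES table at the logical standby database, assuming the guard is currently set to ALL and the wallet containing the TDE keys is open:

SQL> ALTER DATABASE GUARD NONE;
SQL> ALTER TABLE HR.EMPLOYEES REKEY USING 'AES256';
SQL> ALTER DATABASE GUARD ALL;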

C.3 Support for Tablespace Encryption

Data Guard SQL Apply can be used to provide data protection for a primary database that has tablespace encryption enabled. In such a case, restrictions 1, 2, and 3 listed in Section C.2, "Support for Transparent Data Encryption (TDE)" will apply.


Note:

In some cases, when SQL Apply mines and applies redo records for changes made to tables in encrypted tablespaces, records of user data in unencrypted form may be kept for a long period of time. If this is not acceptable, you should issue the following command to move all metadata tables pertaining to the mining component of SQL Apply to an encrypted tablespace:
SQL> EXECUTE DBMS_LOGMNR_D.SET_TABLESPACE(NEW_TABLESPACE => 'ENCRYPTED_LOGMNR_TS');

C.4 Support For Row-level Security and Fine-Grained Auditing

As of Oracle Database 11g, Logical Standby can automatically replicate the security environment provided through the DBMS_RLS and DBMS_FGA PL/SQL packages. This support simplifies management of security considerations when a server fails over to the standby since the security environment will transparently be maintained. It also ensures that access control policies applied to the primary data can be automatically forwarded to the standby, and the standby data transparently given the same level of protection. If a standby server is newly created with 11g, this replication is enabled by default; otherwise it has to be enabled by the DBA at an appropriate time.

Support for the replication of these PL/SQL packages requires that both the primary and the standby be running with a compatibility setting of 11.1 or higher.

It also requires that the table referenced be a Logical Standby maintained object. For example, a table with a rowid column will not have its data maintained by Logical Standby, in which case DBMS_RLS and DBMS_FGA calls referencing that table will also not be maintained.

C.4.1 Row-level Security

Row-Level Security, also known as Virtual Private Database (VPD), is a feature that enforces security at a fine level of granularity, when accessing tables, views, or synonyms. When a user directly or indirectly accesses a table, view, or synonym protected with a VPD policy, the server dynamically modifies the SQL statement of the user. The modification creates a WHERE condition (known as a predicate) returned by a function implementing the security policy. The statement is modified dynamically, transparently to the user, using any condition that can be expressed in, or returned by, a function. VPD policies can be applied to SELECT, INSERT, UPDATE, INDEX, and DELETE statements. VPD is implemented by using the DBMS_RLS package to apply security policies.

When a DBMS_RLS procedure is executed on the primary, additional information is captured in the redo that allows the procedure call to be logically reconstructed and executed on the standby. Logical Standby supports replication of ancillary objects for VPD such as Contexts, Database Logon Triggers, and their supporting packages. You must ensure that these objects are placed in maintained schemas and that no DDL skips have been configured that would stop their replication.

C.4.2 Fine-Grained Auditing

Fine-grained auditing provides a way to audit select statements. The DBMS_FGA package enables all select statements that access a table to be captured, together with what data was accessed. An FGA policy may be applied to a particular column or even to only those select statements that return rows for which a specified predicate returns TRUE.

When a DBMS_FGA procedure is executed on the primary, additional information is captured to the redo that allows the procedure call to be logically reconstructed and executed on the standby.

C.4.3 Skipping and Enabling PL/SQL Replication

PL/SQL can be configured with skip and skip_error rules exactly as DDL statements can be, except that wildcarding on the package and procedure is not supported. For example, to skip all aspects of VPD, do the following:

DBMS_LOGSTDBY.Skip (
stmt => 'PL/SQL',
schema_name => 'SYS',
object_name =>'DBMS_RLS',
use_like => FALSE);

Note that the schema specified is the schema in which the package is defined. To skip an individual procedure in a package, the syntax would be as follows:

DBMS_LOGSTDBY.Skip (
stmt => 'PL/SQL',
schema_name => 'SYS',
object_name =>'DBMS_RLS.Add_Policy',
use_like => FALSE);

In order to skip VPD on certain schemas or tables, a skip procedure must be used. The skip procedure will be passed the fully qualified PL/SQL statement that is to be executed, for example:

DBMS_RLS.Drop_Policy(
object_schema => 'SCOTT',
object_name  => 'EMP',
policy_name => 'MYPOLICY');

The procedure could then parse the statement to decide whether to skip it, to apply it, or to stop apply and let the DBA take a compensating action.

Unlike DDL, skip procedures on PL/SQL do not support returning a replacement statement.

C.5 Oracle Label Security

Logical standby databases do not support Oracle Label Security. If Oracle Label Security is installed on the primary database, SQL Apply fails on the logical standby database with an internal error during startup.

C.6 Oracle E-Business Suite

Logical standby databases do not fully support an Oracle E-Business Suite implementation because there are tables that contain unsupported data types. However, using SKIP rules, it is possible for you to replicate a subset of the E-Business Suite schemas and tables to offload applications to the logical standby.


See Also:

The My Oracle Support note 851603.1 at http://support.oracle.com for additional information about using Logical standby with Oracle E-Business Suite

C.7 Supported Table Storage Types

Logical standby databases support the following table storage types:

  • Cluster tables (including index clusters and heap clusters)

  • Index-organized tables (partitioned and nonpartitioned, including overflow segments)

  • Heap-organized tables (partitioned and nonpartitioned)

  • OLTP table compression (COMPRESS FOR OLTP) and basic table compression (COMPRESS BASIC). OLTP table compression and basic table compression require that the compatibility setting of the primary database be set to 11.1.0 or higher.

  • Tables with virtual columns (provided the table has no other columns or properties not supported by logical standby). This support is available only in Oracle Database 11g Release 2 (11.2.0.3) and higher.

  • Tables using Hybrid Columnar Compression




C.8 Unsupported Table Storage Types

Logical standby databases do not support the following table storage types:

  • Tables containing LOB columns stored as SecureFiles (unless the compatibility level is set to 11.2 or higher)

C.9 PL/SQL Supplied Packages Considerations

This section discusses the following considerations regarding PL/SQL supplied packages:


See Also:

Oracle Database PL/SQL Packages and Types Reference for more information about Oracle PL/SQL supplied packages

C.9.1 Supported PL/SQL Supplied Packages

Oracle PL/SQL supplied packages that do not modify system metadata or user data leave no footprint in the archived redo log files, and hence are safe to use on the primary database. Examples of such packages are DBMS_OUTPUT, DBMS_RANDOM, DBMS_PIPE, DBMS_DESCRIBE, DBMS_OBFUSCATION_TOOLKIT, DBMS_TRACE, DBMS_METADATA, DBMS_CRYPTO.

Oracle PL/SQL supplied packages that do not modify system metadata but may modify user data are supported by SQL Apply, as long as the modified data belongs to the supported data types listed in Section C.1.1. Examples of such packages are DBMS_LOB, DBMS_SQL, and DBMS_TRANSACTION.

Data Guard logical standby supports replication of actions performed through the following packages: DBMS_RLS, DBMS_FGA, and DBMS_REDEFINITION.

C.9.2 Unsupported PL/SQL Supplied Packages

Oracle PL/SQL supplied packages that modify system metadata typically are not supported by SQL Apply, and therefore their effects are not visible on the logical standby database. Examples of such packages are DBMS_JAVA, DBMS_REGISTRY, DBMS_ALERT, DBMS_SPACE_ADMIN, DBMS_REFRESH, and DBMS_AQ.

Specific support for DBMS_JOB has been provided. Jobs created on the primary database are replicated on the standby database, but will not be run as long as the standby maintains its standby role. In the event of a switchover or failover, jobs scheduled on the original primary database will automatically begin running on the new primary database.

You can also create jobs at the logical standby. These jobs will only run as long as the logical standby maintains its standby role.

Specific support for DBMS_SCHEDULER has been provided to allow jobs to be run on a standby database. A new attribute of a scheduler job has been created in 11g called database_role whose contents match the database_role attribute of V$DATABASE. When a scheduler job is created, it defaults to the local role (that is, a job created on the standby defaults to a database_role of LOGICAL STANDBY). The job scheduler executes only jobs specific to the current role. On switchover or failover, the scheduler automatically switches to running jobs specific to the new role.

Scheduler jobs are not replicated to the standby. However, existing jobs can be activated under the new role by using the DBMS_SCHEDULER.Set_Attribute procedure. Alternatively, jobs that should run in both roles can be cloned and the copy made specific to the other role. The DBA_SCHEDULER_JOB_ROLES view shows which jobs are specific to which role.
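
For example, a sketch of making an existing job runnable in the LOGICAL STANDBY role by setting its database_role attribute (the job name is illustrative):

SQL> EXECUTE DBMS_SCHEDULER.SET_ATTRIBUTE( -
> name => 'HR.NIGHTLY_REPORT', -
> attribute => 'database_role', -
> value => 'LOGICAL STANDBY');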

Scheduler jobs obey the database guard when they run on a logical standby database. Thus, in order to run jobs that need to modify unmaintained tables, the database guard should be set to STANDBY. (It is not possible to use the ALTER SESSION DISABLE GUARD statement inside a PL/SQL block and have it take effect.)


See Also:

Oracle Database PL/SQL Packages and Types Reference for details about specific packages

C.9.3 Handling XML and XDB PL/SQL Packages in Logical Standby

In Oracle Database 11g release 1 (11.1), Logical Standby supports XML when it is stored in CLOB format. However, there are several PL/SQL packages used in conjunction with XML that are not fully supported.

The PL/SQL packages and procedures that are supported by Logical Standby only modify in-memory structures; they do not modify data stored in the database. These packages do not generate redo and therefore are not replicated to a Logical Standby.

Certain PL/SQL packages and procedures related to XML and XDB that are not supported by Logical Standby, but that require corresponding invocations at the logical standby database for replication activities to continue, are instrumented such that invocations of these procedures at the primary database will generate additional redo records indicating procedure invocation. When SQL Apply encounters such redo records, it stops and writes an error message in the DBA_LOGSTDBY_EVENTS table, indicating the procedure name. This allows the DBA to invoke the corresponding procedure at the logical standby database at the appropriate time so that subsequent redo records generated at the primary database can be applied successfully at the logical standby database. See Section C.9.3.1 through Section C.9.3.6 for more information about dealing with these unsupported procedures.

The following packages contain unsupported procedures:

  • DBMS_XMLSCHEMA

  • DBMS_XMLINDEX

In addition to these packages, Logical Standby does not support any modifications to the XDB schema. The objects within the XDB schema are considered to be system metadata and direct modifications to them are not replicated.

Tables managed by the Oracle XML DB Repository, also known as hierarchy-enabled tables, are not supported by Logical Standby. These tables are used to store XML data and can be accessed using the FTP and HTTP protocols, as well as the normal SQL access. For more information on these tables, refer to the Oracle XML DB Developer's Guide.

C.9.3.1 The DBMS_XMLSCHEMA Schema

The following procedures within the DBMS_XMLSCHEMA package are unsupported and cannot be replicated by Logical Standby. Logical Standby stops when it encounters calls to these procedures to provide the user an opportunity to take a compensating action for these calls. Section C.9.3.3 through Section C.9.3.6 provide more information on the alternatives available for dealing with these unsupported procedures.

  • REGISTERSCHEMA

  • REGISTERURI

  • DELETESCHEMA

  • PURGESCHEMA

  • COPYEVOLVE

  • INPLACEEVOLVE

  • COMPILESCHEMA

The XDB schema is an Oracle managed schema. Any changes to this schema are automatically skipped by Logical Standby. The following procedure makes changes to the XDB schema which will not be replicated:

  • GENERATEBEAN

The following procedures and functions do not generate redo and therefore do not stop Logical Standby:

  • GENERATESCHEMAS

  • GENERATESCHEMA

C.9.3.2 The DBMS_XMLINDEX Package

The SYNCINDEX procedure within the DBMS_XMLINDEX package is marked as unsupported and cannot be replicated by Logical Standby. Logical Standby stops when it encounters calls to it.

The following functions and procedures do not generate redo and therefore do not stop Logical Standby:

  • NODEREFGETREF

  • NODEREFGETVALUE

  • NODEREFGETPARENTREF

  • NODEREFGETNAME

  • NODEREFGETNAMESPACE

C.9.3.3 Dealing With Unsupported PL/SQL Procedures

There are a couple of options for dealing with unsupported PL/SQL procedures. The first option is to allow the Logical Standby apply process to stop and to manually perform a compensating action. The second option is to take a preemptive action and skip the unsupported PL/SQL by using Logical Standby skip procedures. Each of these options is discussed in the following sections.

C.9.3.4 Manually Compensating for Unsupported PL/SQL

When Logical Standby encounters something that is unsupported, it stops the apply process and records an error in the DBA_LOGSTDBY_EVENTS table. You can query this table to determine what action caused the standby to stop and what action, if any, needs to be taken to compensate.

The following example shows a sample of what this query and its output might look like:

select status, event from dba_logstdby_events
          where commit_scn >= (select applied_scn from dba_logstdby_progress) and
          status_code = 16265
          order by commit_scn desc;
 
STATUS
--------------------------------------------------------------------------------
EVENT
--------------------------------------------------------------------------------
ORA-16265: Unsupported PL/SQL procedure encountered
begin
 "XDB"."DBMS_XMLSCHEMA"."REGISTERSCHEMA" (
   "SCHEMAURL" => 'xmlplsqlsch2
 
ORA-16265: Unsupported PL/SQL procedure encountered
begin
 "XDB"."DBMS_XMLSCHEMA"."REGISTERSCHEMA" (
   "SCHEMAURL" => 'xmlplsqlsch2
 
 
2 rows selected.

Two rows with the same information are returned because Logical Standby automatically retries the failed transaction. The results show that the standby was stopped when a call to DBMS_XMLSCHEMA.REGISTERSCHEMA was encountered for the xmlplsqlsch2 schema. You can use this information to transfer any needed files from the primary and register the schema on the standby.

Once the schema has been successfully registered on the standby, the apply process on the Logical Standby can be restarted. This must be performed using the SKIP FAILED TRANSACTION option, for example:

alter database start logical standby apply skip failed transaction;

Logical Standby skips past the offending transaction and continues applying redo from the primary.

The general procedure for manually replicating unsupported PL/SQL follows these steps:

  1. Some unsupported PL/SQL is executed on the primary database.

  2. The standby database encounters the unsupported PL/SQL and stops Apply.

  3. You examine the DBA_LOGSTDBY_EVENTS table to determine what caused Apply to stop.

  4. You execute some compensating actions on the standby for the unsupported PL/SQL.

  5. You restart apply on the standby.

C.9.3.5 Proactively Compensating for Unsupported PL/SQL

In certain cases, you know that an action you are going to perform on the primary database will cause the standby to halt. In those cases, you may want to take action ahead of time to either minimize or eliminate the time that the standby is not applying redo.

For example, suppose you know that a new application is going to be installed. Part of the installation requires a large number of XML schemas to be registered. You can register these schemas on the standby before they are registered on the primary. You can also install a skip procedure on the standby for the DBMS_XMLSCHEMA.REGISTERSCHEMA procedure which will check to see if the XML schema is registered and if so, it will tell Logical Standby to skip that PL/SQL call.

This approach can also be used for some of the other PL/SQL procedures that are unsupported. For example, DBMS_XMLSCHEMA.DELETESCHEMA can be handled in a similar way. A skip procedure can be written to see if the schema is installed on the standby and if it is not, then that PL/SQL can be safely skipped because it would not have had any meaningful effect on the standby.

C.9.3.6 Compensating for Ordering Sensitive Unsupported PL/SQL

Although the previous approach is useful, it cannot be used in all cases. It can only be safely used when the time that the PL/SQL is executed relative to other transactions is not critical. One case that this should not be used for is that of DBMS_XMLSCHEMA.copyEvolve.

This procedure evolves, or changes, a schema and can modify tables by adding or removing columns, and it can also change whether or not XML documents are valid. The timing of when this procedure should be executed on the Logical Standby is critical. The only time guaranteed to be safe is when apply has stopped on the Logical Standby when it sees that this procedure was executed on the primary database.

Before evolving a schema, it is also important to quiesce any traffic on the primary that may be using the schema. Otherwise, a transaction that is executed close in time to the copyEvolve on the primary may be executed in a different order on the Logical Standby because the dependency between the two transactions is not apparent to the Logical Standby. Therefore, when ordering-sensitive PL/SQL is involved, you should follow these steps:

  1. Quiesce changes to dependent tables on the primary.

  2. Execute the CopyEvolve on the primary.

  3. Wait for the standby to stop on the CopyEvolve PL/SQL.

  4. Apply the compensating CopyEvolve on the standby.

  5. Restart apply on the standby.

Example C-1 shows a sample of the procedures that could be used to determine how to handle RegisterSchema calls.

Example C-1 PL/SQL Skip Procedure for RegisterSchema

-- Procedures to determine how to handle registerSchema calls
 
-- This procedure extracts the schema URL, or name, from the statement
-- string that is passed into the skip procedure.
 
Create or replace procedure sec_mgr.parse_schema_str(
  statement             in varchar2,
  schema_name      out varchar2)
Is
  pos1 number;
  pos2 number;
  workingstr   varchar2(32767);
Begin
 
-- Find the correct argument
pos1 := instr(statement, '"SCHEMAURL" => ''');
workingstr := substr(statement, pos1 + 16);
 
-- Find the end of the schema name
pos1 := instr(workingstr, '''');
 
-- Get just the schema name
workingstr := substr(workingstr, 1, pos1 - 1);
 
schema_name := workingstr;
 
End parse_schema_str;
/
show errors
 
 
-- This procedure checks if a schema is already registered. If so,
-- it returns the value DBMS_LOGSTDBY.SKIP_ACTION_SKIP to indicate that
-- the PL/SQL should be skipped. Otherwise, the value 
-- DBMS_LOGSTDBY.SKIP_ACTION_APPLY is returned and Logical Standby apply 
-- will halt to allow the DBA to deal with the registerSchema call.
 
Create or replace procedure sec_mgr.skip_registerschema(
  statement             in varchar2,
  package_owner            in varchar2,
  package_name             in varchar2,
  procedure_name                 in varchar2,
  current_user                   in varchar2,
  xidusn                in number,
  xidslt                in number,
  xidsqn                in number, 
  exit_status            in number, 
  skip_action            out number)
Is
  schema_exists number;
  schemastr varchar2(2000);
Begin
 
  skip_action := DBMS_LOGSTDBY.SKIP_ACTION_SKIP;
 
  -- get the schema name from the statement
  parse_schema_str(statement, schemastr);
 
  -- see if the schema is already registered
  select count(*) into schema_exists from sys.all_xml_schemas s 
                                     where s.schema_url = schemastr and
                                           s.owner = current_user;
 
  IF schema_exists = 0 THEN
      -- if the schema is not  registered, then we must stop apply
      skip_action := DBMS_LOGSTDBY.SKIP_ACTION_APPLY;     
  ELSE
      -- if the schema is already registered, then we can skip this statement
      skip_action := DBMS_LOGSTDBY.SKIP_ACTION_SKIP;     
  END IF;
 
End skip_registerschema;
/
show errors
 
-- Register the skip procedure to deal with the unsupported registerSchema 
-- PL/SQL.
Begin
   sys.dbms_logstdby.skip(stmt => 'PL/SQL',
                          schema_name => 'XDB',
                          object_name => 'DBMS_XMLSCHEMA.REGISTERSCHEMA',
                          proc_name => 'SEC_MGR.SKIP_REGISTERSCHEMA',
                          use_like => FALSE );
End;
/
show errors

C.10 Unsupported Tables

It is important to identify unsupported database objects on the primary database before you create a logical standby database because changes made to unsupported data types and tables on the primary database will be automatically skipped by SQL Apply on the logical standby database. Moreover, no error message will be returned.

There are three types of objects on a database, from the perspective of logical standby support:

  • Objects that are explicitly maintained by SQL Apply

  • Objects that are implicitly maintained by SQL Apply

  • Objects that are not maintained by SQL Apply

Some schemas that ship with the Oracle database (for example, SYSTEM) contain objects that will be implicitly maintained by SQL Apply. However, if you put a user-defined table in SYSTEM, it will not be maintained even if it has columns of supported data types. To discover which objects are not maintained by SQL Apply, you must run two queries. The first query is as follows:

SQL> SELECT OWNER FROM DBA_LOGSTDBY_SKIP WHERE STATEMENT_OPT = 'INTERNAL SCHEMA';

This will return all schemas that are considered to be internal. User tables placed in these schemas will not be replicated on a logical standby database and will not show up in the DBA_LOGSTDBY_UNSUPPORTED view. Tables in these schemas that are created by Oracle will be maintained on a logical standby, if the feature implemented in the schema is supported in the context of logical standby.

The second query you must run is as follows. It returns tables that do not belong to internal schemas and will not be maintained by SQL Apply because of unsupported data types:

SQL> SELECT DISTINCT OWNER,TABLE_NAME FROM DBA_LOGSTDBY_UNSUPPORTED -
> ORDER BY OWNER,TABLE_NAME;

OWNER        TABLE_NAME
-----------  --------------------------
HR           COUNTRIES
OE           ORDERS
OE           CUSTOMERS
OE           WAREHOUSES

To view the column names and data types for one of the tables listed in the previous query, use a SELECT statement similar to the following:

SQL> SELECT COLUMN_NAME,DATA_TYPE FROM DBA_LOGSTDBY_UNSUPPORTED -
> WHERE OWNER='OE' AND TABLE_NAME = 'CUSTOMERS';

COLUMN_NAME                      DATA_TYPE
-------------------------------  -------------------
CUST_ADDRESS                     CUST_ADDRESS_TYP
PHONE_NUMBERS                    PHONE_LIST_TYP
CUST_GEO_LOCATION                SDO_GEOMETRY

If the primary database contains unsupported tables, SQL Apply automatically excludes these tables when applying redo data to the logical standby database.


Note:

If you determine that the critical tables in your primary database will not be supported on a logical standby database, then you might want to consider using a physical standby database. Physical standby databases do not have any such data type restrictions.

C.11 Skipped SQL Statements on a Logical Standby Database

By default, the following SQL statements are automatically skipped by SQL Apply:


ALTER DATABASE
ALTER MATERIALIZED VIEW
ALTER MATERIALIZED VIEW LOG
ALTER SESSION
ALTER SYSTEM
CREATE CONTROL FILE
CREATE DATABASE
CREATE DATABASE LINK
CREATE PFILE FROM SPFILE
CREATE MATERIALIZED VIEW
CREATE MATERIALIZED VIEW LOG
CREATE SCHEMA AUTHORIZATION
CREATE SPFILE FROM PFILE
DROP DATABASE LINK
DROP MATERIALIZED VIEW
DROP MATERIALIZED VIEW LOG
EXPLAIN
LOCK TABLE
SET CONSTRAINTS
SET ROLE
SET TRANSACTION

All other SQL statements executed on the primary database are applied to the logical standby database.

C.12 DDL Statements Supported by a Logical Standby Database

Table C-1 lists the supported values for the stmt parameter of the DBMS_LOGSTDBY.SKIP procedure. The left column of the table lists the keywords that may be used to identify the set of SQL statements to the right of the keyword. In addition, any of the SQL statements listed in the sys.audit_actions table (shown in the right column of Table 1-13) are also valid values. Note that keywords are generally defined by database object.

Table C-1 Values for stmt Parameter of the DBMS_LOGSTDBY.SKIP procedure

Keyword    Associated SQL Statements

There is no keyword for this group of SQL statements.

GRANT
REVOKE
ANALYZE TABLE
ANALYZE INDEX
ANALYZE CLUSTER

CLUSTER

AUDIT CLUSTER
CREATE CLUSTER
DROP CLUSTER
TRUNCATE CLUSTER

CONTEXT

CREATE CONTEXT
DROP CONTEXT

DATABASE LINK

CREATE DATABASE LINK
CREATE PUBLIC DATABASE LINK
DROP DATABASE LINK
DROP PUBLIC DATABASE LINK

DIMENSION

ALTER DIMENSION
CREATE DIMENSION
DROP DIMENSION

DIRECTORY

CREATE DIRECTORY
DROP DIRECTORY

DML

Includes DML statements on a table (for example: INSERT, UPDATE, and DELETE)

INDEX

ALTER INDEX
CREATE INDEX
DROP INDEX

NON_SCHEMA_DDL

All DDL that does not pertain to a particular schema

Note: SCHEMA_NAME and OBJECT_NAME must be null

PROCEDURE (see Footnote 1)

ALTER FUNCTION
ALTER PACKAGE
ALTER PACKAGE BODY
ALTER PROCEDURE
CREATE FUNCTION
CREATE LIBRARY
CREATE PACKAGE
CREATE PACKAGE BODY
CREATE PROCEDURE
DROP FUNCTION
DROP LIBRARY
DROP PACKAGE
DROP PACKAGE BODY
DROP PROCEDURE

PROFILE

ALTER PROFILE
CREATE PROFILE
DROP PROFILE

PUBLIC DATABASE LINK

CREATE PUBLIC DATABASE LINK
DROP PUBLIC DATABASE LINK

PUBLIC SYNONYM

CREATE PUBLIC SYNONYM
DROP PUBLIC SYNONYM

ROLE

ALTER ROLE
CREATE ROLE
DROP ROLE
SET ROLE

ROLLBACK SEGMENT

ALTER ROLLBACK SEGMENT
CREATE ROLLBACK SEGMENT
DROP ROLLBACK SEGMENT

SCHEMA_DDL

All DDL statements that create, modify, or drop schema objects (for example: tables, indexes, and columns)

Note: SCHEMA_NAME and OBJECT_NAME must not be null

SEQUENCE

ALTER SEQUENCE
CREATE SEQUENCE
DROP SEQUENCE

SYNONYM

CREATE PUBLIC SYNONYM
CREATE SYNONYM
DROP PUBLIC SYNONYM
DROP SYNONYM

SYSTEM AUDIT

AUDIT SQL_statements
NOAUDIT SQL_statements

TABLE

CREATE TABLE
ALTER TABLE
DROP TABLE
TRUNCATE TABLE

TABLESPACE

CREATE TABLESPACE
DROP TABLESPACE
ALTER TABLESPACE

TRIGGER

ALTER TRIGGER
CREATE TRIGGER
DISABLE ALL TRIGGERS
DISABLE TRIGGER
DROP TRIGGER
ENABLE ALL TRIGGERS
ENABLE TRIGGER

TYPE

ALTER TYPE
ALTER TYPE BODY
CREATE TYPE
CREATE TYPE BODY
DROP TYPE
DROP TYPE BODY

USER

ALTER USER
CREATE USER
DROP USER

VIEW

CREATE VIEW
DROP VIEW

Footnote 1 Java schema objects (sources, classes, and resources) are considered the same as procedures for purposes of skipping (ignoring) SQL statements.
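
For example, the following is a minimal sketch of passing one of these keywords to DBMS_LOGSTDBY.SKIP to skip all schema DDL for a single table. The HR.EMPLOYEES table name is purely illustrative; SQL Apply must be stopped before the skip rule is added and restarted afterward:

SQL> ALTER DATABASE STOP LOGICAL STANDBY APPLY;
SQL> EXECUTE DBMS_LOGSTDBY.SKIP('SCHEMA_DDL', 'HR', 'EMPLOYEES');
SQL> ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;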

C.12.1 DDL Statements that Use DBLINKS

SQL Apply may not correctly apply DDL statements that reference a database link, such as the following:

CREATE TABLE tablename AS SELECT * FROM bar@dblink

This is because the dblink at the logical standby database may not point to the same database as the primary database. If SQL Apply fails while executing such a DDL statement, you should use the DBMS_LOGSTDBY.INSTANTIATE_TABLE procedure for the table being created, and then restart SQL Apply operations.
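
For example, the following is a sketch of that recovery procedure, run at the logical standby database. The SCOTT.NEWTAB table and the PRIMARY_DB_LINK database link are hypothetical names; the database link must point to the primary database:

SQL> ALTER DATABASE STOP LOGICAL STANDBY APPLY;
SQL> EXECUTE DBMS_LOGSTDBY.INSTANTIATE_TABLE('SCOTT', 'NEWTAB', 'PRIMARY_DB_LINK');
SQL> ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;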

C.12.2 Replication of AUD$ and FGA_LOG$ on Logical Standbys

Auditing and fine-grained auditing are supported on logical standbys. Changes made to the AUD$ and FGA_LOG$ tables at the primary database are replicated at the logical standby.

Both the AUD$ table and the FGA_LOG$ table have a DBID column. If the DBID value is that of the primary database, then the row was replicated to the logical standby based on activities at the primary. If the DBID value is that of the logical standby database, then the row was inserted as a result of local activities at the logical standby.
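
For example, the following is a sketch, assuming SELECT privilege on the SYS-owned audit table, of how the origin of audit rows might be examined at the logical standby:

SQL> SELECT DBID FROM V$DATABASE;
SQL> SELECT DBID, COUNT(*) FROM SYS.AUD$ GROUP BY DBID;

The first query returns the DBID of the logical standby itself; the second groups the audit rows by the database in which they originated.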

After the logical standby database assumes the primary role as a result of a role transition (either a switchover or failover), the AUD$ and FGA_LOG$ tables at the new primary (originally the logical standby) and at the new logical standby (originally the primary) are not necessarily synchronized. Therefore, it is possible that not all rows in the AUD$ or FGA_LOG$ tables at the new primary database will be present in the new logical standby database. However, all rows in AUD$ and FGA_LOG$ that were inserted while the database was in a primary role are replicated and present in the logical standby database.

C.13 Distributed Transactions and XA Support

You can perform distributed transactions using either of the following methods:

  • Modify tables in multiple databases in a coordinated manner using database links.

  • Use the XA interface, as exposed by the supplied DBMS_XA PL/SQL package, or through the OCI or JDBC libraries. The XA interface implements the X/Open Distributed Transaction Processing (DTP) architecture.

Changes made to the primary database during a distributed transaction using either of these two methods are replicated to the logical standby database.

However, the distributed transaction state is not replicated. The logical standby database does not inherit the in-doubt or prepared state of such a transaction, and it does not replicate the changes using the same global transaction identifier used at the primary database for the XA transactions. As a result, if you fail over to a logical standby database before committing a distributed transaction, the changes are rolled back at the logical standby. This rollback occurs even if the distributed transaction on the primary database is in a prepared state and has successfully completed the first phase of the two-phase commit protocol. Switchover operations wait for all active distributed transactions to complete, and are not affected by this restriction.

XA transactions can be performed in two ways:

  • tightly coupled, where different XA branches share locks

  • loosely coupled, where different XA branches do not share locks

Replication of changes made by loosely coupled XA branches is supported regardless of the COMPATIBLE parameter value. Replication of changes made by tightly coupled branches on an Oracle RAC primary (introduced in 11g Release 1) is supported only with COMPATIBLE=11.2 or higher.
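
For example, one way to confirm the current compatibility setting on the primary database is a query such as the following:

SQL> SELECT VALUE FROM V$PARAMETER WHERE NAME = 'compatible';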

C.14 Support for SecureFiles LOBs

SecureFiles LOBs are supported when the database compatibility level is set to 11.2 or higher.

Transparent data encryption and data compression can be enabled on SecureFiles LOB columns at the primary database. De-duplication of SecureFiles LOB columns is not supported. Also, the following operations contained within the DBMS_LOB PL/SQL package are not supported on SecureFiles LOB columns:

FRAGMENT_DELETE, FRAGMENT_INSERT, FRAGMENT_MOVE, FRAGMENT_REPLACE, COPY_FROM_DBFS_LINK, MOVE_TO_DBFS_LINK, SET_DBFS_LINK, COPY_DBFS_LINK, SETCONTENTTYPE

If SQL Apply encounters redo generated by any of these operations, it stops with an ORA-16211: Unsupported record found in the archived redo log error. To continue, add a skip rule for the affected table using DBMS_LOGSTDBY.SKIP and restart SQL Apply.
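
For example, the following is a sketch of adding such a skip rule at the logical standby. The OE.LOB_DOCS table name is purely illustrative:

SQL> ALTER DATABASE STOP LOGICAL STANDBY APPLY;
SQL> EXECUTE DBMS_LOGSTDBY.SKIP('DML', 'OE', 'LOB_DOCS');
SQL> ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;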

C.15 Character Set Considerations

Configurations are not supported in which the primary database and standby database have different character sets.


Oracle Legal Notices

Copyright Notice

Copyright © 1994-2012, Oracle and/or its affiliates. All rights reserved.

Trademark Notice

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Intel and Intel Xeon are trademarks or registered trademarks of Intel Corporation. All SPARC trademarks are used under license and are trademarks or registered trademarks of SPARC International, Inc. AMD, Opteron, the AMD logo, and the AMD Opteron logo are trademarks or registered trademarks of Advanced Micro Devices. UNIX is a registered trademark of The Open Group.

License Restrictions Warranty/Consequential Damages Disclaimer

This software and related documentation are provided under a license agreement containing restrictions on use and disclosure and are protected by intellectual property laws. Except as expressly permitted in your license agreement or allowed by law, you may not use, copy, reproduce, translate, broadcast, modify, license, transmit, distribute, exhibit, perform, publish, or display any part, in any form, or by any means. Reverse engineering, disassembly, or decompilation of this software, unless required by law for interoperability, is prohibited.

Warranty Disclaimer

The information contained herein is subject to change without notice and is not warranted to be error-free. If you find any errors, please report them to us in writing.

Restricted Rights Notice

If this is software or related documentation that is delivered to the U.S. Government or anyone licensing it on behalf of the U.S. Government, the following notice is applicable:

U.S. GOVERNMENT RIGHTS Programs, software, databases, and related documentation and technical data delivered to U.S. Government customers are "commercial computer software" or "commercial technical data" pursuant to the applicable Federal Acquisition Regulation and agency-specific supplemental regulations. As such, the use, duplication, disclosure, modification, and adaptation shall be subject to the restrictions and license terms set forth in the applicable Government contract, and, to the extent applicable by the terms of the Government contract, the additional rights set forth in FAR 52.227-19, Commercial Computer Software License (December 2007). Oracle America, Inc., 500 Oracle Parkway, Redwood City, CA 94065.

Hazardous Applications Notice

This software or hardware is developed for general use in a variety of information management applications. It is not developed or intended for use in any inherently dangerous applications, including applications that may create a risk of personal injury. If you use this software or hardware in dangerous applications, then you shall be responsible to take all appropriate fail-safe, backup, redundancy, and other measures to ensure its safe use. Oracle Corporation and its affiliates disclaim any liability for any damages caused by use of this software or hardware in dangerous applications.

Third-Party Content, Products, and Services Disclaimer

This software or hardware and documentation may provide access to or information on content, products, and services from third parties. Oracle Corporation and its affiliates are not responsible for and expressly disclaim all warranties of any kind with respect to third-party content, products, and services. Oracle Corporation and its affiliates will not be responsible for any loss, costs, or damages incurred due to your access to or use of third-party content, products, or services.

Alpha and Beta Draft Documentation Notice

If this document is in prerelease status:

This documentation is in prerelease status and is intended for demonstration and preliminary use only. It may not be specific to the hardware on which you are using the software. Oracle Corporation and its affiliates are not responsible for and expressly disclaim all warranties of any kind with respect to this documentation and will not be responsible for any loss, costs, or damages incurred due to the use of this documentation.

Oracle Logo

Oracle Data Guard Concepts and Administration, 11g Release 2 (11.2)

Contents

List of Examples

List of Figures

List of Tables

Title and Copyright Information

Preface

What's New in Oracle Data Guard?

Part I Concepts and Administration

1 Introduction to Oracle Data Guard

2 Getting Started with Data Guard

3 Creating a Physical Standby Database

4 Creating a Logical Standby Database

5 Data Guard Protection Modes

6 Redo Transport Services

7 Apply Services

8 Role Transitions

9 Managing Physical and Snapshot Standby Databases

10 Managing a Logical Standby Database

11 Using RMAN to Back Up and Restore Files

12 Using SQL Apply to Upgrade the Oracle Database

13 Data Guard Scenarios

Part II Reference

14 Initialization Parameters

15 LOG_ARCHIVE_DEST_n Parameter Attributes

16 SQL Statements Relevant to Data Guard

17 Views Relevant to Oracle Data Guard

Part III Appendixes

A Troubleshooting Data Guard

B Upgrading and Downgrading Databases in a Data Guard Configuration

C Data Type and DDL Support on a Logical Standby Database

D Data Guard and Oracle Real Application Clusters

E Creating a Standby Database with Recovery Manager

F Setting Archive Tracing

Index

Data Guard Protection Modes

5 Data Guard Protection Modes

This chapter contains the following sections:

5.1 Data Guard Protection Modes

This section describes the Data Guard protection modes.

In these descriptions, a synchronized standby database is one that meets the minimum requirements of the configured data protection mode and that does not have a redo gap. Redo gaps are discussed in Section 6.4.3.

Maximum Availability

This protection mode provides the highest level of data protection that is possible without compromising the availability of a primary database. Transactions do not commit until all redo data needed to recover those transactions has been written to the online redo log and to the standby redo log on at least one synchronized standby database. If the primary database cannot write its redo stream to at least one synchronized standby database, it operates as if it were in maximum performance mode to preserve primary database availability until it is again able to write its redo stream to a synchronized standby database.

This mode ensures that no data loss will occur if the primary database fails, but only if a second fault does not prevent a complete set of redo data from being sent from the primary database to at least one standby database.

Maximum Performance

This protection mode provides the highest level of data protection that is possible without affecting the performance of a primary database. This is accomplished by allowing transactions to commit as soon as all redo data generated by those transactions has been written to the online redo log. Redo data is also written to one or more standby databases, but this is done asynchronously with respect to transaction commitment, so primary database performance is unaffected by delays in writing redo data to the standby database(s).

This protection mode offers slightly less data protection than maximum availability mode and has minimal impact on primary database performance.

This is the default protection mode.

Maximum Protection

This protection mode ensures that no data loss will occur if the primary database fails. To provide this level of protection, the redo data needed to recover a transaction must be written to both the online redo log and to the standby redo log on at least one synchronized standby database before the transaction commits. To ensure that data loss cannot occur, the primary database will shut down, rather than continue processing transactions, if it cannot write its redo stream to at least one synchronized standby database.

Transactions on the primary are considered protected as soon as Data Guard has written the redo data to persistent storage in a standby redo log file. Once that is done, acknowledgment is quickly made back to the primary database so that it can proceed to the next transaction. This minimizes the impact of synchronous transport on primary database throughput and response time. To fully benefit from complete Data Guard validation at the standby database, be sure to operate in real-time apply mode so that redo changes are applied to the standby database as fast as they are received. Data Guard signals any corruptions that are detected so that immediate corrective action can be taken.

Because this data protection mode prioritizes data protection over primary database availability, Oracle recommends that a minimum of two standby databases be used to protect a primary database that runs in maximum protection mode to prevent a single standby database failure from causing the primary database to shut down.


Note:

Asynchronously committed transactions are not protected by Data Guard against loss until the redo generated by those transactions has been written to the standby redo log of at least one synchronized standby database.

For more information about the asynchronous commit feature, see:


5.2 Setting the Data Protection Mode of a Primary Database

Perform the following steps to set the data protection mode of a primary database:

Step 1   Select a data protection mode that meets your availability, performance, and data protection requirements.

See Section 5.1 for a description of the data protection modes.

Step 2   Verify that at least one standby database meets the redo transport requirements for the desired data protection mode.

The LOG_ARCHIVE_DEST_n database initialization parameter that corresponds to at least one standby database must include the redo transport attributes listed in Table 5-1 for the desired data protection mode.

The standby database must also have a standby redo log.

See Chapter 6, "Redo Transport Services" for more information about configuring redo transport and standby redo logs.

Table 5-1 Required Redo Transport Attributes for Data Protection Modes

Maximum Availability        Maximum Performance         Maximum Protection

AFFIRM                      NOAFFIRM                    AFFIRM
SYNC                        ASYNC                       SYNC
DB_UNIQUE_NAME              DB_UNIQUE_NAME              DB_UNIQUE_NAME


Step 3   Verify that the DB_UNIQUE_NAME database initialization parameter has been set to a unique value on the primary database and on each standby database.
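
For example, the current setting might be checked on each database with a query such as the following:

SQL> SELECT VALUE FROM V$PARAMETER WHERE NAME = 'db_unique_name';
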
Step 4   Verify that the LOG_ARCHIVE_CONFIG database initialization parameter has been defined on the primary database and on each standby database, and that its value includes a DG_CONFIG list that includes the DB_UNIQUE_NAME of the primary database and each standby database.

For example, the following SQL statement might be used to configure the LOG_ARCHIVE_CONFIG parameter:

SQL> ALTER SYSTEM SET LOG_ARCHIVE_CONFIG='DG_CONFIG=(CHICAGO,BOSTON)';
Step 5   Set the data protection mode.

Execute the following SQL statement on the primary database:

SQL> ALTER DATABASE -
> SET STANDBY DATABASE TO MAXIMIZE {AVAILABILITY | PERFORMANCE | PROTECTION};

Note that the data protection mode can be set to MAXIMUM PROTECTION on an open database only if the current data protection mode is MAXIMUM AVAILABILITY and if there is at least one synchronized standby database.

Step 6   Confirm that the primary database is operating in the new protection mode.

Perform the following query on the primary database to confirm that it is operating in the new protection mode:

SQL> SELECT PROTECTION_MODE FROM V$DATABASE;
Redo Transport Services

6 Redo Transport Services

This chapter describes how to configure and monitor Oracle redo transport services. The following topics are discussed:

6.1 Introduction to Redo Transport Services

Redo transport services performs the automated transfer of redo data between Oracle databases. The following redo transport destinations are supported:

  • Oracle Data Guard standby databases

    This guide describes how to create and manage standby databases.

  • Archive Log repository

    This destination type is used for temporary offsite storage of archived redo log files. An archive log repository consists of an Oracle database instance and a physical standby control file. An archive log repository does not contain datafiles, so it cannot support role transitions.

    The procedure used to create an archive log repository is identical to the procedure used to create a physical standby database, except for the copying of datafiles.

  • Oracle Streams downstream capture databases

    See Oracle Streams Concepts and Administration for more information about Oracle Streams downstream capture databases.

  • Oracle Change Data Capture staging databases

    See Oracle Warehouse Builder Sources and Targets Guide for more information about Oracle Change Data Capture staging databases.

An Oracle database can send redo data to up to thirty redo transport destinations. Each redo transport destination is individually configured to receive redo data via one of two redo transport modes:

  • Synchronous

    The synchronous redo transport mode transmits redo data synchronously with respect to transaction commitment. A transaction cannot commit until all redo generated by that transaction has been successfully sent to every enabled redo transport destination that uses the synchronous redo transport mode.

    Note that although there is no limit on the distance between a primary database and a SYNC redo transport destination, transaction commit latency increases as network latency increases between a primary database and a SYNC redo transport destination.

    This transport mode is used by the Maximum Protection and Maximum Availability data protection modes described in Chapter 5, "Data Guard Protection Modes".

  • Asynchronous

    The asynchronous redo transport mode transmits redo data asynchronously with respect to transaction commitment. A transaction can commit without waiting for the redo generated by that transaction to be successfully sent to any redo transport destination that uses the asynchronous redo transport mode.

    This transport mode is used by the Maximum Performance data protection mode described in Chapter 5, "Data Guard Protection Modes".

6.2 Configuring Redo Transport Services

This section describes how to configure redo transport services. The following topics are discussed:

The section is written at a level of detail that assumes that the reader has a thorough understanding of the following topics, which are described in the Oracle Database Administrator's Guide, Oracle Database Backup and Recovery User's Guide, and Oracle Database Net Services Administrator's Guide:

  • Database administrator authentication

  • Database initialization parameters

  • Managing a redo log

  • Managing archived redo logs

  • Fast recovery areas

  • Oracle Net Configuration

6.2.1 Redo Transport Security

Redo transport uses Oracle Net sessions to transport redo data. These redo transport sessions are authenticated using either the Secure Sockets Layer (SSL) protocol or a remote login password file.

6.2.1.1 Redo Transport Authentication Using SSL

Secure Sockets Layer (SSL) is an industry standard protocol for securing network connections. SSL uses RSA public key cryptography and symmetric key cryptography to provide authentication, encryption, and data integrity. SSL is automatically used for redo transport authentication between two Oracle databases if:

  • The databases are members of the same Oracle Internet Directory (OID) enterprise domain and that domain allows the use of current user database links.

  • The LOG_ARCHIVE_DEST_n and FAL_SERVER database initialization parameters that correspond to the databases use Oracle Net connect descriptors configured for SSL.

  • Each database has an Oracle wallet or a supported hardware security module that contains a user certificate with a distinguished name (DN) that matches the DN in the OID entry for the database.


See Also:


6.2.1.2 Redo Transport Authentication Using a Password File

If the SSL authentication requirements are not met, each database must use a remote login password file. In a Data Guard configuration, all physical and snapshot standby databases must use a copy of the password file from the primary database, and that copy must be refreshed whenever the SYSOPER or SYSDBA privilege is granted or revoked, and after the password of any user with these privileges is changed.

When a password file is used for redo transport authentication, the password of the user account used for redo transport authentication is compared between the database initiating a redo transport session and the target database. The password must be the same at both databases to create a redo transport session.

By default, the password of the SYS user is used to authenticate redo transport sessions when a password file is used. The REDO_TRANSPORT_USER database initialization parameter can be used to select a different user password for redo transport authentication by setting this parameter to the name of any user who has been granted the SYSOPER privilege. For administrative ease, Oracle recommends that the REDO_TRANSPORT_USER parameter be set to the same value on the redo source database and at each redo transport destination.
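
For example, the following is a sketch of selecting a non-SYS account for redo transport authentication (assuming an spfile is in use). The DG_TRANSPORT user is a hypothetical account that must already exist, with the same password, at the primary database and at each standby database:

SQL> GRANT SYSOPER TO DG_TRANSPORT;
SQL> ALTER SYSTEM SET REDO_TRANSPORT_USER = DG_TRANSPORT SCOPE = BOTH;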


See Also:

Oracle Database Administrator's Guide for more information about creating and maintaining remote login password files

6.2.2 Configuring an Oracle Database to Send Redo Data

This section describes how to configure an Oracle database to send redo data to a redo transport destination.

The LOG_ARCHIVE_DEST_n database initialization parameter (where n is an integer from 1 to 31) is used to specify the location of a local archive redo log or to specify a redo transport destination. This section describes the latter use of this parameter.

There is a LOG_ARCHIVE_DEST_STATE_n database initialization parameter (where n is an integer from 1 to 31) that corresponds to each LOG_ARCHIVE_DEST_n parameter. This parameter is used to enable or disable the corresponding redo destination. Table 6-1 shows the valid values that can be assigned to this parameter.

Table 6-1 LOG_ARCHIVE_DEST_STATE_n Initialization Parameter Values

Value          Description

ENABLE         Redo transport services can transmit redo data to this destination. This is the default.

DEFER          Redo transport services will not transmit redo data to this destination.

ALTERNATE      This destination will become enabled if communication to its associated destination fails.
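
For example, a destination defined by LOG_ARCHIVE_DEST_2 might be temporarily disabled and later re-enabled as follows (destination number 2 is illustrative):

SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2 = DEFER;
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2 = ENABLE;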


A redo transport destination is configured by setting the LOG_ARCHIVE_DEST_n parameter to a character string that includes one or more attributes. This section briefly describes the most commonly used attributes. See Chapter 15 for a full description of all LOG_ARCHIVE_DEST_n parameter attributes.

The SERVICE attribute, which is a mandatory attribute for a redo transport destination, must be the first attribute specified in the attribute list. The SERVICE attribute is used to specify the Oracle Net service name used to connect to the redo transport destination. The service name must be resolvable through an Oracle Net naming method to an Oracle Net connect descriptor that matches the Oracle Net listener(s) at the redo transport destination. The connect descriptor must specify that a dedicated server connection be used, unless that is the default connection type for the redo transport destination.


See Also:

Oracle Database Net Services Administrator's Guide for information about Oracle Net service names, connect descriptors, listeners, and network security

The SYNC attribute is used to specify that the synchronous redo transport mode be used to send redo data to a redo transport destination.

The ASYNC attribute is used to specify that the asynchronous redo transport mode be used to send redo data to a redo transport destination. The asynchronous redo transport mode will be used if neither the SYNC nor the ASYNC attribute is specified.

The NET_TIMEOUT attribute is used to specify how long the LGWR process will block waiting for an acknowledgement that redo data has been successfully received by a destination that uses the synchronous redo transport mode. If an acknowledgement is not received within NET_TIMEOUT seconds, the redo transport connection is terminated and an error is logged.

Oracle recommends that the NET_TIMEOUT attribute be specified whenever the synchronous redo transport mode is used, so that the maximum duration of a redo source database stall caused by a redo transport fault can be precisely controlled. See Section 6.4.2 for information about monitoring synchronous redo transport mode response time.

The AFFIRM attribute is used to specify that redo received from a redo source database is not acknowledged until it has been written to the standby redo log. The NOAFFIRM attribute is used to specify that received redo is acknowledged without waiting for received redo to be written to the standby redo log.

The DB_UNIQUE_NAME attribute is used to specify the DB_UNIQUE_NAME of a redo transport destination. The DB_UNIQUE_NAME attribute must be specified if the LOG_ARCHIVE_CONFIG database initialization parameter has been defined and its value includes a DG_CONFIG list.

If the DB_UNIQUE_NAME attribute is specified, its value must match one of the DB_UNIQUE_NAME values in the DG_CONFIG list. It must also match the value of the DB_UNIQUE_NAME database initialization parameter at the redo transport destination. If either match fails, an error is logged and redo transport will not be possible to that destination.

The VALID_FOR attribute is used to specify when redo transport services transmits redo data to a redo transport destination. Oracle recommends that the VALID_FOR attribute be specified for each redo transport destination at every site in a Data Guard configuration so that redo transport services will continue to send redo data to all standby databases after a role transition, regardless of which standby database assumes the primary role.

The REOPEN attribute is used to specify the minimum number of seconds between automatic reconnect attempts to a redo transport destination that is inactive because of a previous error.

The COMPRESSION attribute is used to specify that redo data is transmitted to a redo transport destination in compressed form. Redo transport compression can significantly improve redo transport performance on network links with low bandwidth and high latency.

Redo transport compression is a feature of the Oracle Advanced Compression option. You must purchase a license for this option before using the redo transport compression feature.

The following example uses all of the LOG_ARCHIVE_DEST_n attributes described in this section. A DB_UNIQUE_NAME has been specified for both destinations, as has the use of compression. If a redo transport fault occurs at either destination, redo transport will attempt to reconnect to that destination, but not more frequently than once every 60 seconds.

DB_UNIQUE_NAME=BOSTON
LOG_ARCHIVE_CONFIG='DG_CONFIG=(BOSTON,CHICAGO,HARTFORD)' 
LOG_ARCHIVE_DEST_2='SERVICE=CHICAGO ASYNC NOAFFIRM VALID_FOR=(ONLINE_LOGFILE, 
PRIMARY_ROLE) REOPEN=60 COMPRESSION=ENABLE  DB_UNIQUE_NAME=CHICAGO' 
LOG_ARCHIVE_DEST_STATE_2='ENABLE' 
LOG_ARCHIVE_DEST_3='SERVICE=HARTFORD SYNC AFFIRM NET_TIMEOUT=30 
VALID_FOR=(ONLINE_LOGFILE,PRIMARY_ROLE) REOPEN=60 COMPRESSION=ENABLE   
DB_UNIQUE_NAME=HARTFORD' 
LOG_ARCHIVE_DEST_STATE_3='ENABLE'

6.2.2.1 Viewing Attributes With V$ARCHIVE_DEST

The V$ARCHIVE_DEST view can be queried to see the current settings and status for each redo transport destination.
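
For example, a query such as the following might be used to list each configured destination, its state, and any error recorded for it:

SQL> SELECT DEST_ID, STATUS, DESTINATION, ERROR FROM V$ARCHIVE_DEST WHERE STATUS <> 'INACTIVE';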

6.2.3 Configuring an Oracle Database to Receive Redo Data

This section describes how to configure a redo transport destination to receive and to archive redo data from a redo source database.

The following topics are discussed:

6.2.3.1 Creating and Managing a Standby Redo Log

The synchronous and asynchronous redo transport modes require that a redo transport destination have a standby redo log. A standby redo log is used to store redo received from another Oracle database. Standby redo logs are structurally identical to redo logs, and are created and managed using the same SQL statements used to create and manage redo logs.

Redo received from another Oracle database via redo transport is written to the current standby redo log group by an RFS foreground process. When a log switch occurs on the redo source database, incoming redo is then written to the next standby redo log group, and the previously used standby redo log group is archived by an ARCn background process.

The process of sequentially filling and then archiving redo log file groups at a redo source database is mirrored at each redo transport destination by the sequential filling and archiving of standby redo log groups.

Each standby redo log file must be at least as large as the largest redo log file in the redo log of the redo source database. For administrative ease, Oracle recommends that all redo log files in the redo log at the redo source database and the standby redo log at a redo transport destination be of the same size.

The standby redo log must have at least one more redo log group than the redo log at the redo source database, for each redo thread at the redo source database. At the redo source database, query the V$LOG view to determine how many redo log groups are in the redo log at the redo source database and query the V$THREAD view to determine how many redo threads exist at the redo source database.

Perform the following query on a redo source database to determine the size of each log file and the number of log groups in the redo log:

SQL> SELECT GROUP#, BYTES FROM V$LOG;

Perform the following query on a redo destination database to determine the size of each log file and the number of log groups in the standby redo log:

SQL> SELECT GROUP#, BYTES FROM V$STANDBY_LOG;
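
Similarly, the following query might be performed at the redo source database to determine how many redo threads exist:

SQL> SELECT THREAD# FROM V$THREAD;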

Oracle recommends that a standby redo log be created on the primary database in a Data Guard configuration so that it is immediately ready to receive redo data following a switchover to the standby role.

The ALTER DATABASE ADD STANDBY LOGFILE SQL statement is used to create a standby redo log and to add standby redo log groups to an existing standby redo log.

For example, assume that the redo log on the redo source database has two redo log groups and that each of those contains one 500 MB redo log file. In this case, the standby redo log should have at least 3 standby redo log groups to satisfy the requirement that a standby redo log must have at least one more redo log group than the redo log at the redo source database.

The following SQL statements might be used to create a standby redo log that is appropriate for the previous scenario:

SQL> ALTER DATABASE ADD STANDBY LOGFILE ('/oracle/dbs/slog1.rdo') SIZE 500M;
 
SQL> ALTER DATABASE ADD STANDBY LOGFILE ('/oracle/dbs/slog2.rdo') SIZE 500M;
 
SQL> ALTER DATABASE ADD STANDBY LOGFILE ('/oracle/dbs/slog3.rdo') SIZE 500M;

If the redo source database is an Oracle Real Application Clusters (Oracle RAC) or Oracle RAC One Node database, query the V$LOG view at the redo source database to determine how many redo threads exist and specify the corresponding thread numbers when adding redo log groups to the standby redo log.

For example, the following SQL statements might be used to create a standby redo log at a database that is to receive redo from a redo source database that has two redo threads:

SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 SIZE 500M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 SIZE 500M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 SIZE 500M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 2 SIZE 500M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 2 SIZE 500M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 2 SIZE 500M;

Caution:

Whenever a redo log group is added to a primary database, a log group must also be added to the standby redo log of each standby database in the configuration. Otherwise, the standby database may become unsynchronized after a primary log switch, which could temporarily prevent a zero data loss failover or cause a primary database operating in maximum protection mode to shut down.

6.2.3.2 Configuring Standby Redo Log Archival

This section describes how to configure standby redo log archival.


See Also:


6.2.3.2.1 Enable Archiving

If archiving is not enabled, issue the following statements to put the database in ARCHIVELOG mode and to enable automatic archiving:

SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;
SQL> ALTER DATABASE ARCHIVELOG;
SQL> ALTER DATABASE OPEN;

Note that the database must be in ARCHIVELOG mode for standby redo log archival to be performed.

6.2.3.2.2 Standby Redo Log Archival to a Fast Recovery Area

Take the following steps to set up standby redo log archival to a fast recovery area:

  1. Set the LOCATION attribute of a LOG_ARCHIVE_DEST_n parameter to USE_DB_RECOVERY_FILE_DEST.

  2. Set the VALID_FOR attribute of the same LOG_ARCHIVE_DEST_n parameter to a value that allows standby redo log archival.

The following are some sample parameter values that might be used to configure a physical standby database to archive its standby redo log to the fast recovery area:

LOG_ARCHIVE_DEST_2 = 'LOCATION=USE_DB_RECOVERY_FILE_DEST
VALID_FOR=(STANDBY_LOGFILE,STANDBY_ROLE)'
LOG_ARCHIVE_DEST_STATE_2=ENABLE

Oracle recommends the use of a fast recovery area, because it simplifies the management of archived redo log files.

6.2.3.2.3 Standby Redo Log Archival to a Local File System Location

Take the following steps to set up standby redo log archival to a local file system location:

  1. Set the LOCATION attribute of a LOG_ARCHIVE_DEST_n parameter to a valid pathname.

  2. Set the VALID_FOR attribute of the same LOG_ARCHIVE_DEST_n parameter to a value that allows standby redo log archival.

The following are some sample parameter values that might be used to configure a physical standby database to archive its standby redo log to a local file system location:

LOG_ARCHIVE_DEST_2 = 'LOCATION = /disk2/archive
VALID_FOR=(STANDBY_LOGFILE,STANDBY_ROLE)'
LOG_ARCHIVE_DEST_STATE_2=ENABLE

6.2.3.3 Cases Where Redo Is Written Directly To an Archived Redo Log File

Redo received by a standby database is written directly to an archived redo log file if a standby redo log group is not available or if the redo was sent to resolve a redo gap. When this occurs, redo is written to the location specified by the LOCATION attribute of one LOG_ARCHIVE_DEST_n parameter that is valid for archiving redo received from another database. The LOG_ARCHIVE_DEST_n parameter that is used for this purpose is determined when the standby database is mounted, and this choice is reevaluated each time a LOG_ARCHIVE_DEST_n parameter is modified.

6.3 Cascaded Redo Transport Destinations


Note:

To use the Oracle Data Guard cascading redo transport destination feature described in this section, you should be using Oracle Database 11g Release 2 (11.2.0.2). Releases prior to 11.2.0.2 have several limitations for this feature that are not present in release 11.2.0.2.

A cascaded redo transport destination receives primary database redo indirectly from a standby database rather than directly from a primary database.

A standby database that cascades primary database redo to one or more cascaded destinations is known as a cascading standby database.

Cascading offloads the overhead associated with performing redo transport from a primary database to a cascading standby database.

A cascading standby database can cascade primary database redo to up to 30 cascaded destinations, each of which can be a physical, logical, or snapshot standby database.

Primary database redo is written to the standby redo log as it is received at a cascading standby database. The redo is not immediately cascaded, however. It is cascaded after the standby redo log file that it was written to has been archived locally. A cascaded destination will therefore always have a greater redo transport lag, with respect to the primary database, than the cascading standby database.

Cascading has the following restrictions:

  • A physical standby database is the only standby database type that can cascade redo

  • The Data Guard broker does not support cascaded destinations

The rest of this section contains the following information:

6.3.1 Configuring a Cascaded Destination

Perform the following steps to configure a cascaded destination:

  1. Select a physical standby database to configure as a cascading standby database.

  2. On the cascading standby database, configure the FAL_SERVER attribute with the Oracle Net alias of the primary database or of a standby database that receives redo directly from the primary database.

  3. On the cascading standby database, configure a LOG_ARCHIVE_DEST_n database initialization parameter for the cascaded destination. Configure the SERVICE attribute of this destination with the Oracle Net alias of the cascaded destination, and configure the VALID_FOR attribute so that it is valid for standby redo log archival when the database is in the standby role.

    If the SYNC or ASYNC redo transport attributes are specified, they are ignored.

  4. At the cascaded destination, configure the FAL_SERVER database initialization parameter with the Oracle Net alias of the cascading standby database or of another standby database that is directly connected to the primary database. Although it is also possible to specify the primary database, this would defeat the purpose of cascading, which is to reduce the redo transport overhead on the primary database.

  5. Example 6-1 shows some of the database initialization parameters used by the members of a Data Guard configuration that includes a primary database named boston that sends redo to a local physical standby database named boston2, which then cascades primary database redo to a remote physical standby database named denver.

    Note that a LOG_ARCHIVE_DEST_n database initialization parameter could also be configured on database boston that is valid for standby redo log archival to database denver when database boston is in the standby role. This would allow redo cascading to database denver to continue if a switchover is performed between database boston and database boston2. A sketch of such a parameter follows Example 6-1.

Example 6-1 Some of the Initialization Parameters Used When Cascading Redo

Primary Database

DB_UNIQUE_NAME=boston
 
FAL_SERVER=boston2
 
LOG_ARCHIVE_CONFIG='DG_CONFIG=(boston,boston2,denver)'
 
LOG_ARCHIVE_DEST_1='LOCATION=USE_DB_RECOVERY_FILE_DEST
VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=boston'
 
LOG_ARCHIVE_DEST_2='SERVICE=boston2 SYNC
VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=boston2'

Cascading Physical Standby Database

DB_UNIQUE_NAME=boston2
 
FAL_SERVER=boston
 
LOG_ARCHIVE_CONFIG= 'DG_CONFIG=(boston,boston2,denver)'
 
LOG_ARCHIVE_DEST_1='LOCATION= USE_DB_RECOVERY_FILE_DEST
VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=boston2'
 
LOG_ARCHIVE_DEST_2= 'SERVICE=denver
VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE) DB_UNIQUE_NAME=denver'
 

Cascaded Physical Standby Database

DB_UNIQUE_NAME=denver
 
FAL_SERVER=boston2
 
LOG_ARCHIVE_CONFIG='DG_CONFIG=(boston,boston2,denver)'
 
LOG_ARCHIVE_DEST_1='LOCATION= USE_DB_RECOVERY_FILE_DEST
VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=denver'
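
As noted before Example 6-1, redo cascading to denver can continue after a switchover between boston and boston2 if an additional destination, valid only in the standby role, is also defined on boston. The following sketch assumes the destination number LOG_ARCHIVE_DEST_3 is otherwise unused on boston; any free destination number would do.

Additional Destination on Database boston (hypothetical)

LOG_ARCHIVE_DEST_3='SERVICE=denver
VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE) DB_UNIQUE_NAME=denver'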

6.3.2 Data Protection Considerations

Oracle recommends that a standby database primarily intended for disaster recovery purposes receive redo data directly from the primary database. This will result in the highest level of data protection. A cascaded standby database can be used as a second line of defense, but by definition it will always have received less primary database redo than a standby database that is receiving redo directly from the primary.

6.3.3 Cascading Scenarios

This section describes two typical cascading scenarios.

6.3.3.1 Cascading to a Physical Standby

In this scenario, you have a mission-critical primary database. This database has stringent performance and data protection requirements, so you have decided to deploy a local physical standby database to provide zero data loss protection and a remote, cascaded physical standby database to protect against regional disasters at the primary and local standby database sites.

You can achieve the objectives described above by performing the following steps:

  1. Create a physical standby database at a local site.

  2. Create a physical standby database at a site that is sufficiently remote to provide protection against regional disasters at the primary and local standby database sites.

  3. Configure the local standby database as a SYNC redo transport destination of the primary database.

  4. Configure the remote physical standby database as a cascaded destination of the local standby database.
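
As an illustration only, and reusing the hypothetical database names from Example 6-1 (boston as the primary, boston2 as the local standby, and denver as the remote cascaded standby), the key destination settings for this scenario might resemble the following sketch. The SYNC attribute on the primary's destination provides the zero data loss protection called for in step 3.

# On the primary database (boston): synchronous transport to the local standby
LOG_ARCHIVE_DEST_2='SERVICE=boston2 SYNC
VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=boston2'

# On the cascading local standby (boston2): cascade to the remote standby
LOG_ARCHIVE_DEST_2='SERVICE=denver
VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE) DB_UNIQUE_NAME=denver'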

6.3.3.2 Cascading to Multiple Physical Standbys

In this scenario, you have a primary database in North America and you wish to deploy three replicas of this database in Europe to support read-only reporting applications. For cost and performance reasons, you do not wish to maintain network links from North America to each of your European sites.

You can achieve the objectives described above by performing the following steps:

  1. Create a network link between your North American site and one of your European sites.

  2. Create a physical standby database at each of your European sites.

  3. Open your physical standby databases in real-time query mode, as described in Section 9.2.

  4. Configure the physical standby database at the European endpoint of your transatlantic network link to cascade redo to your other European standby databases.

  5. Configure the physical standby database at the European endpoint of your transatlantic network link as a cascaded destination of your primary database.
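
As an illustration only, assume hypothetical database names newyork (the primary), london (the European endpoint of the transatlantic link), and paris and rome (the other European standbys). The destination settings might resemble the following sketch; each database would also list all four names in the DG_CONFIG attribute of its LOG_ARCHIVE_CONFIG parameter.

# On the primary database (newyork): single transatlantic destination
LOG_ARCHIVE_DEST_2='SERVICE=london ASYNC
VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=london'

# On the cascading standby (london): cascade to the other European standbys
LOG_ARCHIVE_DEST_2='SERVICE=paris
VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE) DB_UNIQUE_NAME=paris'
LOG_ARCHIVE_DEST_3='SERVICE=rome
VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE) DB_UNIQUE_NAME=rome'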

6.4 Monitoring Redo Transport Services

This section discusses the following topics:

  • Monitoring Redo Transport Status

  • Monitoring Synchronous Redo Transport Response Time

  • Redo Gap Detection and Resolution

  • Redo Transport Services Wait Events

6.4.1 Monitoring Redo Transport Status

This section describes the steps used to monitor redo transport status on a redo source database.

Step 1   Determine the most recently archived redo log file.

Perform the following query on the redo source database to determine the most recently archived sequence number for each thread:

SQL> SELECT MAX(SEQUENCE#), THREAD# FROM V$ARCHIVED_LOG GROUP BY THREAD#;
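
The output resembles the following (the sequence and thread values shown here are illustrative only):

MAX(SEQUENCE#)    THREAD#
--------------  ---------
           947          1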
Step 2   Determine the most recently archived redo log file at each redo transport destination.

Perform the following query on the redo source database to determine the most recently archived redo log file at each redo transport destination:

SQL> SELECT DESTINATION, STATUS, ARCHIVED_THREAD#, ARCHIVED_SEQ# -
> FROM V$ARCHIVE_DEST_STATUS -
> WHERE STATUS <> 'DEFERRED' AND STATUS <> 'INACTIVE';
 
DESTINATION         STATUS  ARCHIVED_THREAD#  ARCHIVED_SEQ#
------------------  ------  ----------------  -------------
/private1/prmy/lad   VALID                 1            947
standby1             VALID                 1            947

The most recently archived redo log file should be the same for each destination. If it is not, a status other than VALID may identify an error encountered during the archival operation to that destination.

Step 3   Find out if archived redo log files have been received at a redo transport destination.

A query can be performed at a redo source database to find out if an archived redo log file has been received at a particular redo transport destination. Each destination has an ID number associated with it. You can query the DEST_ID column of the V$ARCHIVE_DEST view on a database to identify each destination's ID number.

Assume that destination 1 points to the local archived redo log and that destination 2 points to a redo transport destination. Perform the following query at the redo source database to find out if any log files are missing at the redo transport destination:

SQL> SELECT LOCAL.THREAD#, LOCAL.SEQUENCE# FROM -
> (SELECT THREAD#, SEQUENCE# FROM V$ARCHIVED_LOG WHERE DEST_ID=1) -
> LOCAL WHERE -
> LOCAL.SEQUENCE# NOT IN -
> (SELECT SEQUENCE# FROM V$ARCHIVED_LOG WHERE DEST_ID=2 AND -
> THREAD# = LOCAL.THREAD#);
 
THREAD#    SEQUENCE#
---------  ---------
  1        12
  1        13
  1        14
Step 4   Trace the progression of redo transmitted to a redo transport destination.

Set the LOG_ARCHIVE_TRACE database initialization parameter at a redo source database and at each redo transport destination to trace redo transport progress. See Appendix F for complete details and examples.
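
As a hedged sketch, the parameter can be set dynamically as shown below; the value 128 is only a placeholder, and the trace level you choose should come from the level definitions in Appendix F.

SQL> ALTER SYSTEM SET LOG_ARCHIVE_TRACE=128;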

6.4.2 Monitoring Synchronous Redo Transport Response Time

The V$REDO_DEST_RESP_HISTOGRAM view contains response time data for each redo transport destination. This response time data is maintained for redo transport messages sent via the synchronous redo transport mode.

The data for each destination consists of a series of rows, with one row for each response time. To simplify record keeping, response times are rounded up to the nearest whole second for response times less than 300 seconds. Response times greater than 300 seconds are rounded up to 600, 1200, 2400, 4800, or 9600 seconds.

Each row contains four columns: FREQUENCY, DURATION, DEST_ID, and TIME.

The FREQUENCY column contains the number of times that a given response time has been observed. The DURATION column corresponds to the response time. The DEST_ID column identifies the destination. The TIME column contains a timestamp taken when the row was last updated.

The response time data in this view is useful for identifying synchronous redo transport mode performance issues that can affect transaction throughput on a redo source database. It is also useful for tuning the NET_TIMEOUT attribute.

The next three examples show queries for destination 2, which corresponds to the LOG_ARCHIVE_DEST_2 parameter. To display response time data for a different destination, simply change the DEST_ID in the query.

Perform the following query on a redo source database to display the response time histogram for destination 2:

SQL> SELECT FREQUENCY, DURATION FROM -
> V$REDO_DEST_RESP_HISTOGRAM WHERE DEST_ID=2 AND FREQUENCY>1;
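
The result is one row per observed response time, for example (the values shown here are illustrative only):

 FREQUENCY   DURATION
----------  ---------
       112          1
        41          2
         3          5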

Perform the following query on a redo source database to display the slowest response time for destination 2:

SQL> SELECT max(DURATION) FROM V$REDO_DEST_RESP_HISTOGRAM -
> WHERE DEST_ID=2 AND FREQUENCY>1;

Perform the following query on a redo source database to display the fastest response time for destination 2:

SQL> SELECT min(DURATION) FROM V$REDO_DEST_RESP_HISTOGRAM -
> WHERE DEST_ID=2 AND FREQUENCY>1;

Note:

The highest observed response time for a destination cannot exceed the highest NET_TIMEOUT value specified for that destination, because synchronous redo transport mode sessions are terminated if a redo transport destination does not respond to a redo transport message within NET_TIMEOUT seconds.

6.4.3 Redo Gap Detection and Resolution

A redo gap occurs whenever redo transmission is interrupted. When redo transmission resumes, redo transport services automatically detects the redo gap and resolves it by sending the missing redo to the destination.

The time needed to resolve a redo gap is directly proportional to the size of the gap and inversely proportional to the effective throughput of the network link between the redo source database and the redo transport destination. Redo transport services has two options that may reduce redo gap resolution time when low-performance network links are used:

  • Redo Transport Compression

    The COMPRESSION attribute of the LOG_ARCHIVE_DEST_n parameter is used to specify that redo data be compressed before transmission to the destination.

  • Parallel Redo Transport Network Sessions

    The MAX_CONNECTIONS attribute of the LOG_ARCHIVE_DEST_n parameter can be used to specify that more than one network session be used to send the redo needed to resolve a redo gap.

See Chapter 15, "LOG_ARCHIVE_DEST_n Parameter Attributes" for more information about the COMPRESSION and MAX_CONNECTIONS attributes.
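
As a hedged sketch only, a destination that uses both attributes might be defined as follows; the service name boston is hypothetical, and note that redo transport compression requires a license for the Oracle Advanced Compression option.

LOG_ARCHIVE_DEST_2='SERVICE=boston ASYNC COMPRESSION=ENABLE MAX_CONNECTIONS=3
VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=boston'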

6.4.3.1 Manual Gap Resolution

In some situations, gap resolution cannot be performed automatically and it must be performed manually. For example, redo gap resolution must be performed manually on a logical standby database if the primary database is unavailable.

Perform the following query on the physical standby database to determine whether there is a redo gap:

SQL> SELECT * FROM V$ARCHIVE_GAP;

    THREAD# LOW_SEQUENCE# HIGH_SEQUENCE#
-----------  -------------  --------------
          1              7              10

The output from the previous example indicates that the physical standby database is currently missing log files from sequence 7 to sequence 10 for thread 1.

Perform the following query on the primary database to locate the archived redo log files on the primary database (assuming the local archive destination on the primary database is LOG_ARCHIVE_DEST_1):

SQL> SELECT NAME FROM V$ARCHIVED_LOG WHERE THREAD#=1 AND -
> DEST_ID=1 AND SEQUENCE# BETWEEN 7 AND 10;

NAME
--------------------------------------------------------------------------------
/primary/thread1_dest/arcr_1_7.arc 
/primary/thread1_dest/arcr_1_8.arc 
/primary/thread1_dest/arcr_1_9.arc

Note:

This query may return consecutive sequences for a given thread. In that case, there is no actual gap, but the associated thread was disabled and enabled within the time period of generating these two archived logs. The query also does not identify the gap that may exist at the tail end for a given thread. For instance, if the primary database has generated archived logs up to sequence 100 for thread 1, and the latest archived log that the logical standby database has received for the given thread is the one associated with sequence 77, this query will not return any rows, although we have a gap for the archived logs associated with sequences 78 to 100.

Copy these log files to the physical standby database and register them using the ALTER DATABASE REGISTER LOGFILE statement. For example:

SQL> ALTER DATABASE REGISTER LOGFILE -
> '/physical_standby1/thread1_dest/arcr_1_7.arc';

SQL> ALTER DATABASE REGISTER LOGFILE -
> '/physical_standby1/thread1_dest/arcr_1_8.arc';

SQL> ALTER DATABASE REGISTER LOGFILE -
> '/physical_standby1/thread1_dest/arcr_1_9.arc';

Note:

The V$ARCHIVE_GAP view on a physical standby database only returns the gap that is currently blocking Redo Apply from continuing. After resolving the gap, query the V$ARCHIVE_GAP view again on the physical standby database to determine if there is another gap sequence. Repeat this process until there are no more gaps.

To determine if there is a redo gap on a logical standby database, query the DBA_LOGSTDBY_LOG view on the logical standby database. For example, the following query indicates there is a gap in the sequence of archived redo log files because it displays two files for THREAD 1 on the logical standby database. (If there are no gaps, the query will show only one file for each thread.) The output shows that the highest registered file is sequence number 10, but there is a gap at the file shown as sequence number 6:

SQL> COLUMN FILE_NAME FORMAT a55
SQL> SELECT THREAD#, SEQUENCE#, FILE_NAME FROM DBA_LOGSTDBY_LOG L -
> WHERE NEXT_CHANGE# NOT IN -
> (SELECT FIRST_CHANGE# FROM DBA_LOGSTDBY_LOG WHERE L.THREAD# = THREAD#) -
> ORDER BY THREAD#, SEQUENCE#;
 
   THREAD#  SEQUENCE# FILE_NAME
---------- ---------- -----------------------------------------------
         1          6 /disk1/oracle/dbs/log-1292880008_6.arc
         1         10 /disk1/oracle/dbs/log-1292880008_10.arc

Copy the missing log files, with sequence numbers 7, 8, and 9, to the logical standby system and register them using the ALTER DATABASE REGISTER LOGICAL LOGFILE statement. For example:

SQL> ALTER DATABASE REGISTER LOGICAL LOGFILE -
> '/disk1/oracle/dbs/log-1292880008_7.arc'; 

SQL> ALTER DATABASE REGISTER LOGICAL LOGFILE -
> '/disk1/oracle/dbs/log-1292880008_8.arc';

SQL> ALTER DATABASE REGISTER LOGICAL LOGFILE -
> '/disk1/oracle/dbs/log-1292880008_9.arc';

Note:

A query based on the DBA_LOGSTDBY_LOG view on a logical standby database, as specified above, only returns the gap that is currently blocking SQL Apply from continuing. After resolving the gap, query the DBA_LOGSTDBY_LOG view again on the logical standby database to determine if there is another gap sequence. Repeat this process until there are no more gaps.

6.4.4 Redo Transport Services Wait Events

Table 6-2 lists several of the Oracle wait events used to track redo transport wait time on a redo source database. These wait events are found in the V$SYSTEM_EVENT dynamic performance view.

For a complete list of the Oracle wait events used by redo transport, see the Oracle Data Guard Redo Transport and Network Best Practices white paper on the Oracle Maximum Availability Architecture (MAA) home page at:

http://www.oracle.com/goto/maa

Table 6-2 Redo Transport Wait Events

Wait Event            Description

LNS wait on ATTACH    Total time spent waiting for redo transport sessions to be established to all ASYNC and SYNC redo transport destinations

LNS wait on SENDREQ   Total time spent waiting for redo data to be written to all ASYNC and SYNC redo transport destinations

LNS wait on DETACH    Total time spent waiting for redo transport connections to be terminated to all ASYNC and SYNC redo transport destinations


6.5 Tuning Redo Transport

The Oracle Data Guard Redo Transport and Network Configuration Best Practices white paper describes how to optimize redo transport for best performance. This paper is available on the Oracle Maximum Availability Architecture (MAA) home page at:

http://www.oracle.com/goto/maa

Creating a Physical Standby Database

3 Creating a Physical Standby Database

This chapter steps you through the process of creating a physical standby database. It includes the following main topics:

  • Preparing the Primary Database for Standby Database Creation

  • Step-by-Step Instructions for Creating a Physical Standby Database

  • Post-Creation Steps

The steps described in this chapter configure the standby database for maximum performance mode, which is the default data protection mode. Chapter 5 provides information about configuring the different data protection modes.


See Also:

  • Oracle Database Administrator's Guide for information about creating and using server parameter files

  • Oracle Data Guard Broker and the Enterprise Manager online help system for information about using the graphical user interface to automatically create a physical standby database

  • Appendix E for information about creating a standby database with Recovery Manager (RMAN)


3.1 Preparing the Primary Database for Standby Database Creation

Before you create a standby database you must first ensure the primary database is properly configured.

Table 3-1 provides a checklist of the tasks that you perform on the primary database to prepare for physical standby database creation. There is also a reference to the section that describes the task in more detail.


Note:

Perform these preparatory tasks only once. After you complete these steps, the database is prepared to serve as the primary database for one or more standby databases.

3.1.1 Enable Forced Logging

Place the primary database in FORCE LOGGING mode after database creation using the following SQL statement:

SQL> ALTER DATABASE FORCE LOGGING;

This statement can take a considerable amount of time to complete, because it waits for all unlogged direct write I/O to finish.
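
You can confirm the result afterward by querying V$DATABASE; the query returns YES once force logging is in effect:

SQL> SELECT FORCE_LOGGING FROM V$DATABASE;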

3.1.2 Configure Redo Transport Authentication

Data Guard uses Oracle Net sessions to transport redo data and control messages between the members of a Data Guard configuration. These redo transport sessions are authenticated using either the Secure Sockets Layer (SSL) protocol or a remote login password file.

SSL is used to authenticate redo transport sessions between two databases if:

  • The databases are members of the same Oracle Internet Directory (OID) enterprise domain and it allows the use of current user database links

  • The LOG_ARCHIVE_DEST_n and FAL_SERVER database initialization parameters that correspond to the databases use Oracle Net connect descriptors configured for SSL

  • Each database has an Oracle wallet or supported hardware security module that contains a user certificate with a distinguished name (DN) that matches the DN in the OID entry for the database

If the SSL authentication requirements are not met, each member of a Data Guard configuration must be configured to use a remote login password file and every physical standby database in the configuration must have an up-to-date copy of the password file from the primary database.


Note:

Whenever you grant or revoke the SYSDBA or SYSOPER privileges or change the login password of a user who has these privileges, you must replace the password file at each physical or snapshot standby database in the configuration with a fresh copy of the password file from the primary database.
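
As a hedged illustration, a remote login password file is typically created on the primary system with the orapwd utility and then copied (and re-copied whenever it changes, as noted above) to each standby system. The file name and directory shown below follow a common Linux and UNIX convention and are assumptions, not requirements:

% orapwd file=$ORACLE_HOME/dbs/orapwchicago password=<SYS password>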

3.1.3 Configure the Primary Database to Receive Redo Data

Although this task is optional, Oracle recommends that a primary database be configured to receive redo data when a Data Guard configuration is created. By following this best practice, your primary database will be ready to quickly transition to the standby role and begin receiving redo data.

See Section 6.2.3 for a complete discussion of how to configure a database to receive redo data.

3.1.4 Set Primary Database Initialization Parameters

On the primary database, you define initialization parameters that control redo transport services while the database is in the primary role. There are additional parameters you need to add that control the receipt of the redo data and apply services when the primary database is transitioned to the standby role.

Example 3-1 shows the primary role initialization parameters that you maintain on the primary database. This example represents a Data Guard configuration with a primary database located in Chicago and one physical standby database located in Boston. The parameters shown in Example 3-1 are valid for the Chicago database when it is running in either the primary or the standby database role. The configuration examples use the names shown in the following table:

Database            DB_UNIQUE_NAME    Oracle Net Service Name
Primary             chicago           chicago
Physical standby    boston            boston

Example 3-1 Primary Database: Primary Role Initialization Parameters

DB_NAME=chicago
DB_UNIQUE_NAME=chicago
LOG_ARCHIVE_CONFIG='DG_CONFIG=(chicago,boston)'
CONTROL_FILES='/arch1/chicago/control1.ctl', '/arch2/chicago/control2.ctl'
LOG_ARCHIVE_DEST_1=
 'LOCATION=/arch1/chicago/ 
  VALID_FOR=(ALL_LOGFILES,ALL_ROLES)
  DB_UNIQUE_NAME=chicago'
LOG_ARCHIVE_DEST_2=
 'SERVICE=boston ASYNC
  VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) 
  DB_UNIQUE_NAME=boston'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30

These parameters control how redo transport services transmit redo data to the standby system and the archiving of redo data on the local file system. Note that the example specifies asynchronous (ASYNC) network transmission to transmit redo data on the LOG_ARCHIVE_DEST_2 initialization parameter. These are the recommended settings and require standby redo log files (see Section 3.1.3, "Configure the Primary Database to Receive Redo Data").

Example 3-2 shows the additional standby role initialization parameters on the primary database. These parameters take effect when the primary database is transitioned to the standby role.

Example 3-2 Primary Database: Standby Role Initialization Parameters

FAL_SERVER=boston
DB_FILE_NAME_CONVERT='boston','chicago'
LOG_FILE_NAME_CONVERT=
 '/arch1/boston/','/arch1/chicago/','/arch2/boston/','/arch2/chicago/' 
STANDBY_FILE_MANAGEMENT=AUTO

Specifying the initialization parameters shown in Example 3-2 sets up the primary database to resolve gaps, converts new datafile and log file path names from a new primary database, and archives the incoming redo data when this database is in the standby role. With the initialization parameters for both the primary and standby roles set as described, none of the parameters need to change after a role transition.

The following table provides a brief explanation about each parameter setting shown in Example 3-1 and Example 3-2.

Parameter    Recommended Setting

DB_NAME
On a primary database, specify the name used when the database was created. On a physical standby database, use the DB_NAME of the primary database.

DB_UNIQUE_NAME
Specify a unique name for each database. This name stays with the database and does not change, even if the primary and standby databases reverse roles.

LOG_ARCHIVE_CONFIG
The DG_CONFIG attribute of this parameter must be explicitly set on each database in a Data Guard configuration to enable full Data Guard functionality. Set DG_CONFIG to a text string that contains the DB_UNIQUE_NAME of each database in the configuration, with each name in this list separated by a comma.

CONTROL_FILES
Specify the path name for the control files on the primary database. Example 3-1 shows how to do this for two control files. It is recommended that a second copy of the control file is available so an instance can be easily restarted after copying the good control file to the location of the bad control file.

LOG_ARCHIVE_DEST_n
Specify where the redo data is to be archived on the primary and standby systems. In Example 3-1:
  • LOG_ARCHIVE_DEST_1 archives redo data generated by the primary database from the local online redo log files to the local archived redo log files in /arch1/chicago/.

  • LOG_ARCHIVE_DEST_2 is valid only for the primary role. This destination transmits redo data to the remote physical standby destination boston.

Note: If a fast recovery area was configured (with the DB_RECOVERY_FILE_DEST initialization parameter) and you have not explicitly configured a local archiving destination with the LOCATION attribute, Data Guard automatically uses the LOG_ARCHIVE_DEST_1 initialization parameter (if it has not already been set) as the default destination for local archiving. Also, see Chapter 15 for complete LOG_ARCHIVE_DEST_n information.

LOG_ARCHIVE_DEST_STATE_n
Specify ENABLE to allow redo transport services to transmit redo data to the specified destination.

REMOTE_LOGIN_PASSWORDFILE
This parameter must be set to EXCLUSIVE or SHARED if a remote login password file is used to authenticate administrative users or redo transport sessions.

LOG_ARCHIVE_FORMAT
Specify the format for the archived redo log files using a thread (%t), sequence number (%s), and resetlogs ID (%r).

LOG_ARCHIVE_MAX_PROCESSES=integer
Specify the maximum number (from 1 to 30) of archiver (ARCn) processes you want Oracle software to invoke initially. The default value is 4.

FAL_SERVER
Specify the Oracle Net service name of the FAL server (typically this is the database running in the primary role). When the Chicago database is running in the standby role, it uses the Boston database as the FAL server from which to fetch (request) missing archived redo log files if Boston is unable to automatically send the missing log files.

DB_FILE_NAME_CONVERT
Specify the path name and filename location of the standby database datafiles followed by the primary location. This parameter converts the path names of the primary database datafiles to the standby datafile path names. If the standby database is on the same system as the primary database or if the directory structure where the datafiles are located on the standby site is different from the primary site, then this parameter is required. Note that this parameter is used only to convert path names for physical standby databases. Multiple pairs of paths may be specified by this parameter.

LOG_FILE_NAME_CONVERT
Specify the location of the standby database online redo log files followed by the primary location. This parameter converts the path names of the primary database log files to the path names on the standby database. If the standby database is on the same system as the primary database or if the directory structure where the log files are located on the standby system is different from the primary system, then this parameter is required. Multiple pairs of paths may be specified by this parameter.

STANDBY_FILE_MANAGEMENT
Set to AUTO so when datafiles are added to or dropped from the primary database, corresponding changes are made automatically to the standby database.


Caution:

Review the initialization parameter file for additional parameters that may need to be modified. For example, you may need to modify the dump destination parameters if the directory location on the standby database is different from those specified on the primary database.

3.1.5 Enable Archiving

If archiving is not enabled, issue the following statements to put the primary database in ARCHIVELOG mode and enable automatic archiving:

SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;
SQL> ALTER DATABASE ARCHIVELOG;
SQL> ALTER DATABASE OPEN;
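
You can verify the new log mode by querying V$DATABASE; the query should return ARCHIVELOG:

SQL> SELECT LOG_MODE FROM V$DATABASE;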

See Oracle Database Administrator's Guide for information about archiving.

3.2 Step-by-Step Instructions for Creating a Physical Standby Database

This section describes the tasks you perform to create a physical standby database. It is written at a level of detail that requires you to already have a thorough understanding of the following topics, which are described in the Oracle Database Administrator's Guide, Oracle Database Backup and Recovery User's Guide, and Oracle Database Net Services Administrator's Guide:

  • Database administrator authentication

  • Database initialization parameters

  • Managing redo logs, data files, and control files

  • Managing archived redo logs

  • Fast recovery areas

  • Oracle Net Configuration

Table 3-2 provides a checklist of the tasks that you perform to create a physical standby database and the database or databases on which you perform each task. There is also a reference to the section that describes the task in more detail.

3.2.1 Create a Backup Copy of the Primary Database Datafiles

You can use any backup copy of the primary database to create the physical standby database, as long as you have the necessary archived redo log files to completely recover the database. Oracle recommends that you use the Recovery Manager utility (RMAN).

See Oracle Database Backup and Recovery User's Guide for information about how to perform a database backup operation.

3.2.2 Create a Control File for the Standby Database

If the backup procedure required you to shut down the primary database, issue the following SQL*Plus statement to start the primary database:

SQL> STARTUP MOUNT;

Then, create the control file for the standby database, and open the primary database to user access, as shown in the following example:

SQL> ALTER DATABASE CREATE STANDBY CONTROLFILE AS '/tmp/boston.ctl';
SQL> ALTER DATABASE OPEN;

Note:

You cannot use a single control file for both the primary and standby databases.

3.2.3 Create a Parameter File for the Standby Database

Perform the following steps to create a parameter file for the standby database.

Step 1   Create a parameter file (PFILE) from the server parameter file (SPFILE) used by the primary database.

For example:

SQL> CREATE PFILE='/tmp/initboston.ora' FROM SPFILE;

In Section 3.2.5, you will create a server parameter file from this parameter file, after it has been modified to contain parameter values appropriate for use at the physical standby database.

Step 2   Modify the parameter values in the parameter file created in the previous step.

Although most of the initialization parameter settings in the parameter file are also appropriate for the physical standby database, some modifications must be made.

Example 3-3 shows the parameters from Example 3-1 and Example 3-2 that must be changed.

Example 3-3 Modifying Initialization Parameters for a Physical Standby Database

.
.
.
DB_NAME=chicago
DB_UNIQUE_NAME=boston
LOG_ARCHIVE_CONFIG='DG_CONFIG=(chicago,boston)'
CONTROL_FILES='/arch1/boston/control1.ctl', '/arch2/boston/control2.ctl'
DB_FILE_NAME_CONVERT='chicago','boston'
LOG_FILE_NAME_CONVERT=
 '/arch1/chicago/','/arch1/boston/','/arch2/chicago/','/arch2/boston/'
LOG_ARCHIVE_FORMAT=log%t_%s_%r.arc
LOG_ARCHIVE_DEST_1=
 'LOCATION=/arch1/boston/
  VALID_FOR=(ALL_LOGFILES,ALL_ROLES) 
  DB_UNIQUE_NAME=boston'
LOG_ARCHIVE_DEST_2=
 'SERVICE=chicago ASYNC
  VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) 
  DB_UNIQUE_NAME=chicago'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE
STANDBY_FILE_MANAGEMENT=AUTO
FAL_SERVER=chicago
.
.
.

Ensure the COMPATIBLE initialization parameter is set to the same value on both the primary and standby databases. If the values differ, redo transport services may be unable to transmit redo data from the primary database to the standby databases.

It is always a good practice to use the SHOW PARAMETERS command to verify no other parameters need to be changed.

The following table provides a brief explanation about the parameter settings shown in Example 3-3 that have different settings from the primary database.

Parameter    Recommended Setting

DB_UNIQUE_NAME
Specify a unique name for this database. This name stays with the database and does not change even if the primary and standby databases reverse roles.

CONTROL_FILES
Specify the path name for the control files on the standby database. Example 3-3 shows how to do this for two control files. It is recommended that a second copy of the control file is available so an instance can be easily restarted after copying the good control file to the location of the bad control file.

DB_FILE_NAME_CONVERT
Specify the path name and filename location of the primary database datafiles followed by the standby location. This parameter converts the path names of the primary database datafiles to the standby datafile path names. If the standby database is on the same system as the primary database or if the directory structure where the datafiles are located on the standby site is different from the primary site, then this parameter is required.

LOG_FILE_NAME_CONVERT
Specify the location of the primary database online redo log files followed by the standby location. This parameter converts the path names of the primary database log files to the path names on the standby database. If the standby database is on the same system as the primary database or if the directory structure where the log files are located on the standby system is different from the primary system, then this parameter is required.

LOG_ARCHIVE_DEST_n
Specify where the redo data is to be archived. In Example 3-3:
  • LOG_ARCHIVE_DEST_1 archives redo data received from the primary database to archived redo log files in /arch1/boston/.

  • LOG_ARCHIVE_DEST_2 is currently ignored because this destination is valid only for the primary role. If a switchover occurs and this instance becomes the primary database, then it will transmit redo data to the remote Chicago destination.

Note: If a fast recovery area was configured (with the DB_RECOVERY_FILE_DEST initialization parameter) and you have not explicitly configured a local archiving destination with the LOCATION attribute, Data Guard automatically uses the LOG_ARCHIVE_DEST_1 initialization parameter (if it has not already been set) as the default destination for local archiving. Also, see Chapter 15 for complete information about LOG_ARCHIVE_DEST_n.

FAL_SERVER
Specify the Oracle Net service name of the FAL server (typically this is the database running in the primary role). When the Boston database is running in the standby role, it uses the Chicago database as the FAL server from which to fetch (request) missing archived redo log files if Chicago is unable to automatically send the missing log files.


Caution:

Review the initialization parameter file for additional parameters that may need to be modified. For example, you may need to modify the dump destination parameters if the directory location on the standby database is different from those specified on the primary database.

3.2.4 Copy Files from the Primary System to the Standby System

Use an operating system copy utility to copy the following binary files from the primary system to the standby system:

  • The database backup created in Section 3.2.1

  • The standby control file created in Section 3.2.2

3.2.5 Set Up the Environment to Support the Standby Database

Perform the following steps to create a Windows-based service, create a password file, set up the Oracle Net environment, and create a SPFILE.

Step 1   Create a Windows-based service.

If the standby database will be hosted on a Windows system, use the ORADIM utility to create a Windows service. For example:

WINNT> oradim -NEW -SID boston -STARTMODE manual

See Oracle Database Platform Guide for Microsoft Windows for more information about using the ORADIM utility.

Step 2   Copy the remote login password file from the primary database system to the standby database system

If the primary database has a remote login password file, copy it to the appropriate directory on the physical standby database system. Note that the password file must be re-copied each time the SYSDBA or SYSOPER privilege is granted or revoked and whenever the login password of a user with these privileges is changed.

This step is optional if operating system authentication is used for administrative users and if SSL is used for redo transport authentication.

Step 3   Configure listeners for the primary and standby databases.

On both the primary and standby sites, use Oracle Net Manager to configure a listener for the respective databases.

To restart the listeners (to pick up the new definitions), enter the following LSNRCTL utility commands on both the primary and standby systems:

% lsnrctl stop
% lsnrctl start

See Oracle Database Net Services Administrator's Guide.

Step 4   Create Oracle Net service names.

On both the primary and standby systems, use Oracle Net Manager to create a network service name for the primary and standby databases that will be used by redo transport services.

The Oracle Net service name must resolve to a connect descriptor that uses the same protocol, host address, port, and service that you specified when you configured the listeners for the primary and standby databases. The connect descriptor must also specify that a dedicated server be used.
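
As an illustration only, a tnsnames.ora entry for the boston standby might look like the following sketch; the host name shown is a placeholder, and the port and protocol must match what you configured for the listener:

BOSTON =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = boston-host.example.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = boston)
    )
  )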

See the Oracle Database Net Services Administrator's Guide and the Oracle Database Administrator's Guide.

Step 5   Create a server parameter file for the standby database.

On an idle standby database, use the SQL CREATE SPFILE statement to create a server parameter file for the standby database from the text initialization parameter file that was edited in Section 3.2.3, Step 2. For example:

SQL> CREATE SPFILE FROM PFILE='initboston.ora';
Step 6   Copy the primary database encryption wallet to the standby database system

If the primary database has a database encryption wallet, copy it to the standby database system and configure the standby database to use this wallet.


Note:

The database encryption wallet must be copied from the primary database system to each standby database system whenever the master encryption key is updated.

Encrypted data in a standby database cannot be accessed unless the standby database is configured to point to a database encryption wallet or hardware security module that contains the current master encryption key from the primary database.



See Also:

Oracle Database Advanced Security Administrator's Guide for more information about transparent database encryption

3.2.6 Start the Physical Standby Database

Perform the following steps to start the physical standby database and Redo Apply.

Step 1   Start the physical standby database.

On the standby database, issue the following SQL statement to start and mount the database:

SQL> STARTUP MOUNT;
Step 2   Prepare the standby database to receive redo data.

Prepare the standby database to receive and archive redo data from the primary database, by performing the steps described in Section 6.2.3.

Step 3   Create an online redo log on the standby database.

Although this step is optional, Oracle recommends that an online redo log be created when a standby database is created. By following this best practice, a standby database will be ready to quickly transition to the primary database role.

The size and number of redo log groups in the online redo log of a standby database should be chosen so that the standby database performs well if it transitions to the primary role.

Step 4   Start Redo Apply.

On the standby database, issue the following command to start Redo Apply:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE - 
> DISCONNECT FROM SESSION;

The statement includes the DISCONNECT FROM SESSION option so that Redo Apply runs in a background session. See Section 7.3, "Applying Redo Data to Physical Standby Databases" for more information.

The statement also includes the USING CURRENT LOGFILE clause so that redo can be applied as soon as it has been received. See Section 7.3.1, "Starting Redo Apply" for more information.

3.2.7 Verify the Physical Standby Database Is Performing Properly

Once you create the physical standby database and set up redo transport services, you may want to verify database modifications are being successfully transmitted from the primary database to the standby database.

To see that redo data is being received on the standby database, you should first identify the existing archived redo log files on the standby database, force a log switch and archive a few online redo log files on the primary database, and then check the standby database again. The following steps show how to perform these tasks.

Step 1   Identify the existing archived redo log files.

On the standby database, query the V$ARCHIVED_LOG view to identify existing files in the archived redo log. For example:

SQL> SELECT SEQUENCE#, FIRST_TIME, NEXT_TIME -
>  FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#;

 SEQUENCE# FIRST_TIME         NEXT_TIME
---------- ------------------ ------------------
         8 11-JUL-07 17:50:45 11-JUL-07 17:50:53
         9 11-JUL-07 17:50:53 11-JUL-07 17:50:58
        10 11-JUL-07 17:50:58 11-JUL-07 17:51:03

3 rows selected.
Step 2   Force a log switch to archive the current online redo log file.

On the primary database, issue the ALTER SYSTEM SWITCH LOGFILE statement to force a log switch and archive the current online redo log file group:

SQL> ALTER SYSTEM SWITCH LOGFILE;
Step 3   Verify the new redo data was archived on the standby database.

On the standby database, query the V$ARCHIVED_LOG view to verify the redo data was received and archived on the standby database:

SQL> SELECT SEQUENCE#, FIRST_TIME, NEXT_TIME -
> FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#;

 SEQUENCE# FIRST_TIME         NEXT_TIME
---------- ------------------ ------------------
         8 11-JUL-07 17:50:45 11-JUL-07 17:50:53
         9 11-JUL-07 17:50:53 11-JUL-07 17:50:58
        10 11-JUL-07 17:50:58 11-JUL-07 17:51:03
        11 11-JUL-07 17:51:03 11-JUL-07 18:34:11
4 rows selected.

The archived redo log files are now available to be applied to the physical standby database.

Step 4   Verify that received redo has been applied.

On the standby database, query the V$ARCHIVED_LOG view to verify that received redo has been applied:

SQL> SELECT SEQUENCE#,APPLIED FROM V$ARCHIVED_LOG -
> ORDER BY SEQUENCE#;

SEQUENCE# APP
--------- ---
        8 YES
        9 YES
       10 YES
       11 IN-MEMORY

4 rows selected.


Note:

The value of the APPLIED column for the most recently received log file will be either IN-MEMORY or YES if that log file has been applied.

3.3 Post-Creation Steps

At this point, the physical standby database is running and can provide the maximum performance level of data protection. The following list describes additional preparations you can take on the physical standby database:

  • Upgrade the data protection mode

    The Data Guard configuration is initially set up in the maximum performance mode (the default).

  • Enable Flashback Database

    Flashback Database removes the need to re-create the primary database after a failover. Flashback Database enables you to return a database to its state at a time in the recent past much faster than traditional point-in-time recovery, because it does not require restoring datafiles from backup nor the extensive application of redo data. You can enable Flashback Database on the primary database, the standby database, or both. See Section 13.2 and Section 13.3 for scenarios showing how to use Flashback Database in a Data Guard environment. Also, see Oracle Database Backup and Recovery User's Guide for more information about Flashback Database.
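
    As a hedged illustration, Flashback Database is enabled with the statement shown below. On a standby database, stop Redo Apply before issuing it, and note that a fast recovery area (and, optionally, a DB_FLASHBACK_RETENTION_TARGET setting) should already be configured; depending on the release, the database may need to be mounted rather than open.

    SQL> ALTER DATABASE FLASHBACK ON;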

Views Relevant to Oracle Data Guard

17 Views Relevant to Oracle Data Guard

This chapter describes the views that are especially useful when monitoring a Data Guard environment. The views described in this chapter are a subset of the views that are available for Oracle databases.

Table 17-1 describes the views and indicates if a view applies to physical standby databases, logical standby databases, snapshot standby databases, or primary databases. See Oracle Database Reference for complete information about views.

Table 17-1 Views That Are Pertinent to Data Guard Configurations

View    Database    Description

DBA_LOGSTDBY_EVENTS

Logical only

Contains information about the activity of a logical standby database. It can be used to determine the cause of failures that occur when SQL Apply is applying redo to a logical standby database.

DBA_LOGSTDBY_HISTORY

Logical only

Displays the history of switchovers and failovers for logical standby databases in a Data Guard configuration. It does this by showing the complete sequence of redo log streams processed or created on the local system, across all role transitions. (After a role transition, a new log stream is started and the log stream sequence number is incremented by the new primary database.)

DBA_LOGSTDBY_LOG

Logical only

Shows the log files registered for logical standby databases.

DBA_LOGSTDBY_NOT_UNIQUE

Logical only

Identifies tables that have no primary key and no non-null unique indexes.

DBA_LOGSTDBY_PARAMETERS

Logical only

Contains the list of parameters used by SQL Apply.

DBA_LOGSTDBY_SKIP

Logical only

Lists the tables that will be skipped by SQL Apply.

DBA_LOGSTDBY_SKIP_TRANSACTION

Logical only

Lists the skip settings chosen.

DBA_LOGSTDBY_UNSUPPORTED

Logical only

Identifies the schemas and tables (and columns in those tables) that contain unsupported data types. Use this view when you are preparing to create a logical standby database.

V$ARCHIVE_DEST

Primary, physical, snapshot, and logical

Describes all of the destinations in the Data Guard configuration, including each destination's current value, mode, and status.

Note: The information in this view does not persist across an instance shutdown.

V$ARCHIVE_DEST_STATUS

Primary, physical, snapshot, and logical

Displays runtime and configuration information for the archived redo log destinations.

Note: The information in this view does not persist across an instance shutdown.

V$ARCHIVE_GAP

Physical, snapshot, and logical

Displays information to help you identify a gap in the archived redo log files.

V$ARCHIVED_LOG

Primary, physical, snapshot, and logical

Displays archive redo log information from the control file, including names of the archived redo log files.

V$DATABASE

Primary, physical, snapshot, and logical

Provides database information from the control file. Includes information about fast-start failover (available only with the Data Guard broker).

V$DATABASE_INCARNATION

Primary, physical, snapshot, and logical

Displays information about all database incarnations. Oracle Database creates a new incarnation whenever a database is opened with the RESETLOGS option. Records about the current and the previous incarnation are also contained in the V$DATABASE view.

V$DATAFILE

Primary, physical, snapshot, and logical

Provides datafile information from the control file.

V$DATAGUARD_CONFIG

Primary, physical, snapshot, and logical

Lists the unique database names defined with the DB_UNIQUE_NAME and LOG_ARCHIVE_CONFIG initialization parameters.

V$DATAGUARD_STATS

Primary, physical, snapshot, and logical

Displays various Data Guard statistics, including apply lag and transport lag. This view can be queried on any instance of a standby database. No rows are returned if queried on a primary database. See also Section 8.1.2 for an example and more information.

V$DATAGUARD_STATUS

Primary, physical, snapshot, and logical

Displays and records events that would typically be triggered by any message to the alert log or server process trace files.

V$FS_FAILOVER_STATS

Primary

Displays statistics about fast-start failover occurring on the system.

V$LOG

Primary, physical, snapshot, and logical

Contains log file information from the online redo log files.

V$LOGFILE

Primary, physical, snapshot, and logical

Contains information about the online redo log files and standby redo log files.

V$LOG_HISTORY

Primary, physical, snapshot, and logical

Contains log history information from the control file.

V$LOGSTDBY_PROCESS

Logical only

Provides dynamic information about what is happening with SQL Apply. This view is very helpful when you are diagnosing performance problems during SQL Apply on the logical standby database, and it can be helpful for other problems.

V$LOGSTDBY_PROGRESS

Logical only

Displays the progress of SQL Apply on the logical standby database.

V$LOGSTDBY_STATE

Logical only

Consolidates information from the V$LOGSTDBY_PROCESS and V$LOGSTDBY_STATS views about the running state of SQL Apply and the logical standby database.

V$LOGSTDBY_STATS

Logical only

Displays LogMiner statistics, current state, and status information for a logical standby database during SQL Apply. If SQL Apply is not running, the values for the statistics are cleared.

V$LOGSTDBY_TRANSACTION

Logical only

Displays information about all active transactions being processed by SQL Apply on the logical standby database.

V$MANAGED_STANDBY

Physical and snapshot

Displays current status information for Oracle database processes related to physical standby databases.

Note: The information in this view does not persist across an instance shutdown.

V$REDO_DEST_RESP_HISTOGRAM

Primary

Contains the response time information for destinations that are configured for SYNC transport.

Note: The information in this view does not persist across an instance shutdown.

V$STANDBY_EVENT_HISTOGRAM

Physical

Contains a histogram of apply lag values for the physical standby. An entry is made in the corresponding apply lag bucket by the Redo Apply process every second. (This view returns rows only on a physical standby database that has been open in real-time query mode.)

Note: The information in this view does not persist across an instance shutdown.

V$STANDBY_LOG

Physical, snapshot, and logical

Contains log file information from the standby redo log files.


Using SQL Apply to Upgrade the Oracle Database

12 Using SQL Apply to Upgrade the Oracle Database

Starting with Oracle Database 10g release 1 (10.1.0.3), you can use a logical standby database to perform a rolling upgrade of Oracle Database 10g software. During a rolling upgrade, you can run different releases of an Oracle database on the primary and logical standby databases while you upgrade them, one at a time, incurring minimal downtime on the primary database.


Note:

This chapter describes an alternative to the usual upgrade procedure involving longer downtime, as described in Appendix B, "Upgrading and Downgrading Databases in a Data Guard Configuration". Do not attempt to combine steps from the method described in this chapter with steps from Appendix B.

The instructions in this chapter describe how to minimize downtime while upgrading an Oracle database. This chapter provides the following topics:

12.1 Benefits of a Rolling Upgrade Using SQL Apply

Performing a rolling upgrade with SQL Apply has the following advantages:

  • Your database will incur very little downtime. The overall downtime can be as little as the time it takes to perform a switchover.

  • You eliminate application downtime due to PL/SQL recompilation.

  • You can validate the upgraded database release without affecting the primary database.

12.2 Requirements to Perform a Rolling Upgrade Using SQL Apply

The rolling upgrade procedure requires the following:

  • A primary database that is running Oracle Database release x and a logical standby database that is running Oracle Database release y.

  • The databases must not be part of a Data Guard Broker configuration. See Oracle Data Guard Broker for information about removing databases from a broker configuration.

  • The Data Guard protection mode must be set to either maximum availability or maximum performance. Query the PROTECTION_LEVEL column in the V$DATABASE view to find out the current protection mode setting (a sample query follows this list).

  • To ensure the primary database can proceed while the logical standby database is being upgraded, the LOG_ARCHIVE_DEST_n initialization parameter for the logical standby database destination must not be set to MANDATORY.

  • The COMPATIBLE initialization parameter must match the software release prior to the upgrade. That is, a rolling upgrade from release x to release y requires that the COMPATIBLE initialization parameter be set to release x on both the primary and standby databases.
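
For the protection mode requirement above, a quick check on the primary database might look like the following; the expected result is either MAXIMUM AVAILABILITY or MAXIMUM PERFORMANCE:

SQL> SELECT PROTECTION_LEVEL FROM V$DATABASE;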

12.3 Figures and Conventions Used in the Upgrade Instructions

Figure 12-1 shows a Data Guard configuration before the upgrade begins, with the primary and logical standby databases both running the same Oracle Database software release.

Figure 12-1 Data Guard Configuration Before Upgrade

Description of Figure 12-1 follows
Description of "Figure 12-1 Data Guard Configuration Before Upgrade"

During the upgrade process, the Data Guard configuration operates with mixed database releases at several points in this process. Data protection is not available across releases. During these steps, consider having a second standby database in the Data Guard configuration to provide data protection.

The steps and figures describing the upgrade procedure refer to the databases as Database A and Database B rather than as the primary database and standby database. This is because the databases switch roles during the upgrade procedure. Initially, Database A is the primary database and Database B is the logical standby database, as shown in Figure 12-1.

The following sections describe scenarios in which you can use the SQL Apply rolling upgrade procedure:

12.4 Performing a Rolling Upgrade By Creating a New Logical Standby Database

This scenario assumes that you do not have an existing Data Guard configuration, but you are going to create a logical standby database solely for the purpose of performing a rolling upgrade of the Oracle Database.

Table 12-1 lists the steps to prepare the primary and standby databases for upgrading.

Table 12-1 Steps to Perform a Rolling Upgrade by Creating a New Logical Standby

Step      Description

Step 1    Identify unsupported data types and storage attributes

Step 2    Create a logical standby database

Step 3    Perform a rolling upgrade



Step 1   Identify unsupported data types and storage attributes

To identify unsupported database objects on the primary database and decide how to handle them, follow these steps:

  1. Identify unsupported data types and storage attributes for tables:

    • Review the list of supported data types and storage attributes provided in Appendix C, "Data Type and DDL Support on a Logical Standby Database".

    • Query the DBA_LOGSTDBY_UNSUPPORTED and DBA_LOGSTDBY_SKIP views on the primary database. Changes that are made to the listed tables and schemas on the primary database will not be applied on the logical standby database. Use the following query to see a list of unsupported tables:

      SQL> SELECT DISTINCT OWNER, TABLE_NAME FROM DBA_LOGSTDBY_UNSUPPORTED;
      

      Use the following query to see a list of unsupported internal schemas:

      SQL> SELECT OWNER FROM DBA_LOGSTDBY_SKIP -
      >  WHERE STATEMENT_OPT = 'INTERNAL SCHEMA';
      
  2. Decide how to handle unsupported tables.

    If unsupported objects are being modified on your primary database, it might be possible to perform the upgrade anyway by temporarily suspending changes to the unsupported tables for the period of time it takes to perform the upgrade procedure.

    If you can prevent changes to unsupported data, then using SQL Apply might still be a viable way to perform the upgrade procedure. This method requires that you prevent users from modifying any unsupported tables from the time you create the logical standby control file to the time you complete the upgrade. For example, assume that the Payroll department updates an object table, but that department updates the database only Monday through Friday. However, the Customer Service department requires database access 24 hours a day, 7 days a week, but uses only supported data types and tables. In this scenario, you could perform the upgrade over a weekend. You can monitor transaction activity in the DBA_LOGSTDBY_EVENTS view and discontinue the upgrade (if necessary) up to the time you perform the first switchover.

    If you cannot prevent changes to unsupported tables during the upgrade, any unsupported transactions that occur are recorded in the DBA_LOGSTDBY_EVENTS table on the logical standby database. After the upgrade is completed, you might be able to use the Oracle Data Pump Export/Import utility to import the changed tables to the upgraded databases.

    The size of the changed tables will determine how long database operations will be unavailable, so you must decide if a table is too large to export and import its data into the standby database. For example, a 4-terabyte table is not a good candidate for the export/import process.


Note:

If you cannot use a logical standby database because the data types in your application are unsupported, then perform the upgrade as documented in Oracle Database Upgrade Guide.

Step 2   Create a logical standby database

To create a logical standby database, follow the instructions in Chapter 4.


Note:

Before you start SQL Apply for the first time, make sure you capture information about transactions running on the primary database that will not be supported by a logical standby database. Run the following procedures to capture and record the information as events in the DBA_LOGSTDBY_EVENTS view:
EXECUTE DBMS_LOGSTDBY.APPLY_SET('MAX_EVENTS_RECORDED',
 DBMS_LOGSTDBY.MAX_EVENTS);

EXECUTE DBMS_LOGSTDBY.APPLY_SET('RECORD_UNSUPPORTED_OPERATIONS',
 'TRUE');

Oracle recommends configuring a standby redo log on the logical standby database to minimize downtime.

Step 3   Perform a rolling upgrade

Now that you have created a logical standby database, you can follow the procedure described in Section 12.5, "Performing a Rolling Upgrade With an Existing Logical Standby Database", which assumes that you have a logical standby running the same Oracle software.

12.5 Performing a Rolling Upgrade With an Existing Logical Standby Database

This section provides a step-by-step procedure for upgrading the logical standby database and the primary database. Table 12-2 lists the steps.

Step 1   Prepare for rolling upgrade

Follow these steps to prepare to perform a rolling upgrade of Oracle Software:

  1. Stop SQL Apply by issuing the following statement on the logical standby database (Database B):

    SQL> ALTER DATABASE STOP LOGICAL STANDBY APPLY;
    
  2. Set compatibility, if needed, to the highest value.

    Ensure the COMPATIBLE initialization parameter specifies the release number for the Oracle Database software running on the primary database prior to the upgrade.

    For example, if the primary database is running release 10.1, then set the COMPATIBLE initialization parameter to 10.1 on both databases. Be sure to set the COMPATIBLE initialization parameter on the standby database before you set it on the primary database.
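    You can verify the current setting on each database from SQL*Plus before making any change:

    SQL> SHOW PARAMETER COMPATIBLE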

Step 2   Upgrade the logical standby database

Upgrade Oracle database software on the logical standby database (Database B) to release y. While the logical standby database is being upgraded, it will not accept redo data from the primary database.

To upgrade Oracle Database software, refer to the Oracle Database Upgrade Guide for the applicable Oracle Database release.

Figure 12-2 shows Database A running release x, and Database B running release y. During the upgrade, redo data accumulates on the primary system.

Figure 12-2 Upgrade the Logical Standby Database Release


Step 3   Restart SQL Apply on the upgraded logical standby database

Restart SQL Apply and operate with release x on Database A and release y on Database B. To start SQL Apply, issue the following statement on Database B:

SQL> ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;

The redo data that was accumulating on the primary system is automatically transmitted and applied on the newly upgraded logical standby database. The Data Guard configuration can run the mixed releases shown in Figure 12-3 for an arbitrary period while you verify that the upgraded Oracle Database software release is running properly in the production environment.

Figure 12-3 Running Mixed Releases


To monitor how quickly Database B is catching up to Database A, query the V$LOGSTDBY_PROGRESS view on Database B. For example:

SQL> ALTER SESSION SET NLS_DATE_FORMAT = 'DD-MON-YY HH24:MI:SS';
Session altered.

SQL> SELECT SYSDATE, APPLIED_TIME FROM V$LOGSTDBY_PROGRESS;

SYSDATE            APPLIED_TIME
------------------ ------------------
27-JUN-05 17:07:06 27-JUN-05 17:06:50
Step 4   Monitor events on the upgraded standby database

You should frequently query the DBA_LOGSTDBY_EVENTS view to learn if there are any DDL and DML statements that have not been applied on Database B. Example 12-1 demonstrates how monitoring events can alert you to potential differences in the two databases.

Example 12-1 Monitoring Events with DBA_LOGSTDBY_EVENTS

SQL> SET LONG 1000
SQL> SET PAGESIZE 180
SQL> SET LINESIZE 79
SQL> SELECT EVENT_TIMESTAMP, EVENT, STATUS FROM DBA_LOGSTDBY_EVENTS -
> ORDER BY EVENT_TIMESTAMP;

EVENT_TIMESTAMP
---------------------------------------------------------------------------
EVENT
--------------------------------------------------------------------------------
STATUS
--------------------------------------------------------------------------------
…
24-MAY-05 05.18.29.318912 PM
CREATE TABLE SYSTEM.TST (one number)
ORA-16226: DDL skipped due to lack of support
 
24-MAY-05 05.18.29.379990 PM
"SYSTEM"."TST"
ORA-16129: unsupported dml encountered

In the preceding example:

  • The ORA-16226 error shows a DDL statement that could not be supported. In this case, it could not be supported because it belongs to an internal schema.

  • The ORA-16129 error shows that a DML statement was not applied.

These types of errors indicate that not all of the changes that occurred on Database A have been applied to Database B. At this point, you must decide whether or not to continue with the upgrade procedure. If you are certain that this difference between the logical standby database and the primary database is acceptable, then continue with the upgrade procedure. If not, discontinue and reinstantiate Database B and perform the upgrade procedure at another time.

Step 5   Begin a switchover

When you are satisfied that the upgraded database software is operating properly, perform a switchover to reverse the database roles by issuing the following statement on Database A:

SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO LOGICAL STANDBY;

This statement must wait for existing transactions to complete. To minimize the time it takes to complete the switchover, users still connected to Database A should log off immediately and reconnect to Database B.


Note:

The usual two-phased prepared switchover described in Section 8.3.1 cannot be used because it requires both primary and standby databases to be running the same version of the Oracle software and at this point, the primary database is running a lower version of the Oracle software. Instead, the single-phased unprepared switchover procedure documented above is used. The unprepared switchover should only be used in the context of a rolling upgrade using a logical standby database.


Note:

If you suspended activity to unsupported tables or packages on Database A when it was the primary database, you must continue to suspend the same activities on Database B while it is the primary database if you eventually plan to switch back to Database A.

Step 6   Import any tables that were modified during the upgrade

Step 4 "Monitor events on the upgraded standby database" described how to list unsupported tables that are being modified. If unsupported DML statements were issued on the primary database (as described in Example 12-1), import the latest version of those tables using an import utility such as Oracle Data Pump.

For example, the following import command truncates the scott.emp table and populates it with data matching the former primary database (A):

IMPDP SYSTEM NETWORK_LINK=DATABASEA TABLES=SCOTT.EMP TABLE_EXISTS_ACTION=TRUNCATE

Note that this command prompts you for the password of the SYSTEM user before executing.

Step 7   Complete the switchover and activate user applications

When you are satisfied that the upgraded database software is operating properly, complete the switchover to reverse the database roles:

  1. On Database B, query the SWITCHOVER_STATUS column of the V$DATABASE view, as follows:

    SQL> SELECT SWITCHOVER_STATUS FROM V$DATABASE;
    
    SWITCHOVER_STATUS
    --------------------
    TO PRIMARY
    
  2. When the SWITCHOVER_STATUS column displays TO PRIMARY, complete the switchover by issuing the following statement on Database B:

    SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;
    

    Note:

    The usual two-phased prepared switchover described in Section 8.3.1 cannot be used because it requires both primary and standby databases to be running the same version of the Oracle software and at this point, the primary database is running a lower version of the Oracle software. Instead, the single-phased unprepared switchover procedure documented above is used. The unprepared switchover should only be used in the context of a rolling upgrade using a logical standby database.

  3. Activate the user applications and services on Database B, which is now running in the primary database role.

After the switchover, you cannot send redo data from the new primary database (B) that is running the new database software release to the new standby database (A) that is running an older software release. This means the following:

  • Redo data is accumulating on the new primary database.

  • The new primary database is unprotected at this time.

Figure 12-4 shows that Database B, the former standby database (running release y), is now the primary database, and that Database A, the former primary database (running release x), is now the standby database. The users are connected to Database B.

If Database B can adequately serve as the primary database and your business does not require a logical standby database to support the primary database, then you have completed the rolling upgrade process. Allow users to log in to Database B and begin working there, and discard Database A when it is convenient. Otherwise, continue with step 8.

Figure 12-4 After a Switchover


Step 8   Upgrade the old primary database

Database A is still running release x and cannot apply redo data from Database B until you upgrade it and start SQL Apply.

For more information about upgrading Oracle Database software, see the Oracle Database Upgrade Guide for the applicable Oracle Database release.

Figure 12-5 shows the system after both databases have been upgraded.

Figure 12-5 Both Databases Upgraded


Step 9   Start SQL Apply on the old primary database

Issue the following statement to start SQL Apply on Database A and, if necessary, create a database link to Database B:

SQL> ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE NEW PRIMARY db_link_to_b;

Note:

You will need to create a database link (if one has not already been set up) and to use the NEW PRIMARY clause, because in Step 5 the single-phased unprepared switchover was used to turn Database A into a standby database.

The database link must connect as the SYS user or as an account with a similar level of privileges.
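
For illustration only, the database link might be created on Database A as follows. The link name matches the statement above, but the connect string and credentials are hypothetical; the account used must have the privileges described in this note:

SQL> CREATE DATABASE LINK db_link_to_b
  2  CONNECT TO system IDENTIFIED BY password
  3  USING 'database_b_service';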


When you start SQL Apply on Database A, the redo data that is accumulating on the primary database (B) is sent to the logical standby database (A). The primary database is protected against data loss once all the redo data is available on the standby database.

Step 10   Optionally, raise the compatibility level on both databases

Raise the compatibility level of both databases by setting the COMPATIBLE initialization parameter. You must set the COMPATIBLE parameter on the logical standby database before you set it on the primary database. See Oracle Database Reference for more information about the COMPATIBLE initialization parameter.
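
For example, assuming the upgraded release is 11.2, you might raise the setting as follows on each database (logical standby first, then primary). COMPATIBLE is not dynamically modifiable, so a restart is required for the new value to take effect:

SQL> ALTER SYSTEM SET COMPATIBLE='11.2.0' SCOPE=SPFILE;
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP;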

Step 11   Monitor events on the new logical standby database

To ensure that all changes performed on Database B are properly applied to the logical standby database (A), you should frequently query the DBA_LOGSTDBY_EVENTS view, as you did for Database A in step 4. (See Example 12-1.)

If changes were made that invalidate Database A as a copy of your existing primary database, you can discard Database A and create a new logical standby database in its place. See Chapter 4, "Creating a Logical Standby Database" for complete information.

Step 12   Optionally, perform another switchover

Optionally, perform another switchover of the databases so Database A is once again running in the primary database role (as shown in Figure 12-1).


Note:

You will use the two-phased prepared switchover described in Section 8.3.1 since at this time, both Database A and Database B are running the same version of the Oracle software.

12.6 Performing a Rolling Upgrade With an Existing Physical Standby Database

The steps in this section show you how to perform a rolling upgrade of Oracle software and then get back to your original configuration in which A is the primary database and B is the physical standby database, and both of them are running the upgraded Oracle software.


Note:

The steps in this section assume that you have a primary database (A) and a physical standby database (B) already set up and using Oracle Database release 11.1 or later.

Table 12-3 summarizes the steps involved.

Step 1   Prepare the primary database for a rolling upgrade (perform these steps on Database A)
  1. Enable Flashback Database, if it is not already enabled:

    SQL> SHUTDOWN IMMEDIATE;
    SQL> STARTUP MOUNT;
    SQL> ALTER DATABASE FLASHBACK ON;
    SQL> ALTER DATABASE OPEN;
    
  2. Create a guaranteed restore point:

    SQL> CREATE RESTORE POINT pre_upgrade GUARANTEE FLASHBACK DATABASE;
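    If desired, you can confirm that the restore point exists and is guaranteed by querying V$RESTORE_POINT:

    SQL> SELECT NAME, GUARANTEE_FLASHBACK_DATABASE FROM V$RESTORE_POINT;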
    
Step 2   Convert the physical standby database into a logical standby database (perform these steps on Database B)
  1. Follow the steps outlined in Chapter 4, "Creating a Logical Standby Database" except for the following difference. In Section 4.2.4.1, "Convert to a Logical Standby Database" you must use a different command to convert the logical standby database. Instead of ALTER DATABASE RECOVER TO LOGICAL STANDBY db_name, issue the following command:

    SQL> ALTER DATABASE RECOVER TO LOGICAL STANDBY KEEP IDENTITY;
    SQL> ALTER DATABASE OPEN;
    
  2. You must take the following actions before you start SQL Apply for the first time:

    1. Disable automatic deletion of foreign archived logs at the logical standby, as follows:

      SQL> EXECUTE DBMS_LOGSTDBY.APPLY_SET('LOG_AUTO_DELETE', 'FALSE');
      

      Note:

      You should not delete any remote archived logs processed by the logical standby database (Database B). These remote archived logs are required later during the rolling upgrade process. If you are using the recovery area to store the remote archived logs, you must ensure that it has enough space to accommodate these logs without interfering with the normal operation of the logical standby database.

    2. Make sure you capture information about transactions running on the primary database that will not be supported by a logical standby database. Run the following procedures to capture and record the information as events in the DBA_LOGSTDBY_EVENTS table:

      SQL> EXECUTE DBMS_LOGSTDBY.APPLY_SET('MAX_EVENTS_RECORDED', -
      > DBMS_LOGSTDBY.MAX_EVENTS);
       
      SQL> EXECUTE DBMS_LOGSTDBY.APPLY_SET('RECORD_UNSUPPORTED_OPERATIONS', 'TRUE');
      
    3. Start SQL Apply for the first time, as follows:

      SQL> ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;
      


Step 3    Upgrade the logical standby database and catch up with the primary database (perform these steps on Database B)

You can now follow Steps 1 through 8 as described in Section 12.5, "Performing a Rolling Upgrade With an Existing Logical Standby Database". At the end of these steps, Database B will be your primary database running the upgraded version of the Oracle software, and Database A will be your logical standby database.

Move on to the next step to turn Database A into the physical standby for Database B.

Step 4   Flashback Database A to the guaranteed restore point (perform these steps on Database A)
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;
SQL> FLASHBACK DATABASE TO RESTORE POINT pre_upgrade;
SQL> SHUTDOWN IMMEDIATE;
Step 5   Mount Database A using the new version of Oracle software

At this point, switch the Oracle binaries at Database A to the higher version of the Oracle software. Do not run the upgrade scripts, because Database A will be turned into a physical standby and will be upgraded automatically as it applies the redo data generated by Database B.

Mount Database A, as follows:

SQL> STARTUP MOUNT;
Step 6   Convert Database A to a physical standby
SQL> ALTER DATABASE CONVERT TO PHYSICAL STANDBY;
SQL> SHUTDOWN IMMEDIATE;
Step 7   Start managed recovery on Database A

Database A will be upgraded automatically as it applies the redo data generated by Database B. Managed recovery will wait until the new incarnation branch from the primary is registered before it starts applying redo.

SQL> STARTUP MOUNT;
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE - 
>  DISCONNECT FROM SESSION;

Note:

When Redo Apply restarts, it waits for a new incarnation from the current primary database (Database B) to be registered.

Step 8   Perform a switchover to make Database A the primary database

At this point, Database B is your primary database and Database A is your physical standby, both running the higher version of the Oracle software. To make Database A the primary database, follow the steps described in Section 8.2.1, "Performing a Switchover to a Physical Standby Database".
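
The following is only a condensed sketch of the statements involved; Section 8.2.1 remains the authoritative procedure. Issue the first statement on Database B (the current primary database), and the remaining statements on Database A after its SWITCHOVER_STATUS shows TO PRIMARY (or SESSIONS ACTIVE):

SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;

SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY WITH SESSION SHUTDOWN;
SQL> ALTER DATABASE OPEN;

After the switchover completes, restart Redo Apply on Database B, which is now the physical standby database.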

Step 9   Clean up the guaranteed restore point created in Database A

To preserve disk space, drop the existing guaranteed restore point:

SQL> DROP RESTORE POINT PRE_UPGRADE;

See Also:

The "Database Rolling Upgrade Using Transient Logical Standby: Oracle Data Guard 11g" best practices white paper available on the Oracle Maximum Availability Architecture (MAA) home page at:

http://www.oracle.com/goto/maa



E Creating a Standby Database with Recovery Manager

This appendix describes how to use Oracle Recovery Manager to create a standby database. This appendix contains the following topics:

E.1 Prerequisites

This appendix assumes that you have read the chapter on database duplication in Oracle Database Backup and Recovery User's Guide. Because you use the DUPLICATE command to create a standby database with RMAN, you should familiarize yourself with the DUPLICATE command entry in Oracle Database Backup and Recovery Reference.

Familiarize yourself with how to create a standby database in Chapter 3, "Creating a Physical Standby Database" and Chapter 4, "Creating a Logical Standby Database" before you attempt the RMAN creation procedures described in this appendix.

E.2 Overview of Standby Database Creation with RMAN

This section explains the purpose and basic concepts involved in standby database creation with RMAN.

E.2.1 Purpose of Standby Database Creation with RMAN

You can use either manual techniques or the RMAN DUPLICATE command to create a standby database from backups of your primary database. Creating a standby database with RMAN has the following advantages over manual techniques:

  • RMAN can create a standby database by copying the files currently in use by the primary database. No backups are required.

  • RMAN can create a standby database by restoring backups of the primary database to the standby site. Thus, the primary database is not affected during the creation of the standby database.

  • RMAN automates renaming of files, including Oracle Managed Files (OMF) and directory structures.

  • RMAN restores archived redo log files from backups and performs media recovery so that the standby and primary databases are synchronized.

E.2.2 Basic Concepts of Standby Creation with RMAN

The procedure for creating a standby database with RMAN is almost the same as for creating a duplicate database. You need to amend the duplication procedures described in Oracle Database Backup and Recovery User's Guide to account for issues specific to a standby database.

To create a standby database with the DUPLICATE command you must connect as target to the primary database and specify the FOR STANDBY option. You cannot connect to a standby database and create an additional standby database. RMAN creates the standby database by restoring and mounting a control file. RMAN can use an existing backup of the primary database control file, so you do not need to create a control file backup especially for the standby database.

A standby database, unlike a duplicate database created by DUPLICATE without the FOR STANDBY option, does not get a new DBID. Thus, you should not register the standby database with your recovery catalog.

E.2.2.1 Active Database and Backup-Based Duplication

You must choose between active and backup-based duplication. If you specify FROM ACTIVE DATABASE, then RMAN copies the datafiles directly from the primary database to the standby database. The primary database must be mounted or open.

If you do not specify FROM ACTIVE DATABASE, then RMAN performs backup-based duplication. RMAN restores backups of the primary datafiles to the standby database. All backups and archived redo log files needed for creating and recovering the standby database must be accessible by the server session on the standby host. RMAN restores the most recent datafiles unless you execute the SET UNTIL command.

E.2.2.2 DB_UNIQUE_NAME Values in an RMAN Environment

A standby database, unlike a duplicate database created by DUPLICATE without the FOR STANDBY option, does not get a new DBID. When using RMAN in a Data Guard environment, you should always connect it to a recovery catalog. The recovery catalog can store the metadata for all primary and standby databases in the environment. You should not explicitly register the standby database in the recovery catalog.

A database in a Data Guard environment is uniquely identified by means of the DB_UNIQUE_NAME parameter in the initialization parameter file. The DB_UNIQUE_NAME must be unique across all the databases with the same DBID for RMAN to work correctly in a Data Guard environment.
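
When RMAN is connected to the recovery catalog, you can list the DB_UNIQUE_NAME values known for the current DBID with the following command (available in Oracle Database 11g):

RMAN> LIST DB_UNIQUE_NAME OF DATABASE;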


See Also:

Oracle Database Backup and Recovery User's Guide for a conceptual overview of RMAN operation in a Data Guard environment

E.2.2.3 Recovery of a Standby Database

By default, RMAN does not recover the standby database after creating it. RMAN leaves the standby database mounted, but does not place the standby database in manual or managed recovery mode. RMAN disconnects and does not perform media recovery of the standby database.

If you want RMAN to recover the standby database after creating it, then the standby control file must be usable for the recovery. The following conditions must be met:

  • The end recovery time of the standby database must be greater than or equal to the checkpoint SCN of the standby control file.

  • An archived redo log file containing the checkpoint SCN of the standby control file must be available at the standby site for recovery.

One way to ensure these conditions are met is to issue the ALTER SYSTEM ARCHIVE LOG CURRENT statement after backing up the control file on the primary database. This statement archives the online redo log files of the primary database. Then, either back up the most recent archived redo log file with RMAN or move the archived redo log file to the standby site.
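
For example, with backup-based duplication, one sequence that satisfies these conditions might be the following, run on the primary database (a sketch, not the only valid order):

RMAN> BACKUP CURRENT CONTROLFILE FOR STANDBY;
RMAN> SQL 'ALTER SYSTEM ARCHIVE LOG CURRENT';
RMAN> BACKUP ARCHIVELOG ALL;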

Use the DORECOVER option of the DUPLICATE command to specify that RMAN should recover the standby database. RMAN performs the following steps after creating the standby database files:

  1. RMAN begins media recovery. If recovery requires archived redo log files, and if the log files are not already on disk, then RMAN attempts to restore backups.

  2. RMAN recovers the standby database to the specified time, system change number (SCN), or log file sequence number, or to the latest archived redo log file generated if none of the preceding are specified.

  3. RMAN leaves the standby database mounted after media recovery is complete, but does not place the standby database in manual or managed recovery mode.

E.2.2.4 Standby Database Redo Log Files

RMAN automatically creates the standby redo log files on the standby database. After the log files are created, the standby database maintains and archives them according to the normal rules for log files.

If you use backup-based duplication, then by default the standby redo log files on the standby database are named according to the file names recorded in the standby control file. If the log file names on the standby must be different from the primary file names, then one option is to specify file names for the standby redo logs by setting LOG_FILE_NAME_CONVERT in the standby initialization parameter file (see the sketch following the list below).

Note the following restrictions when specifying file names for the standby redo log files on the standby database:

  • You must use the LOG_FILE_NAME_CONVERT parameter to name the standby redo log files if the primary and standby databases use different naming conventions for the log files.

  • You cannot use the SET NEWNAME or CONFIGURE AUXNAME commands to rename the standby redo log files.

  • You cannot use the LOGFILE clause of the DUPLICATE command to specify file names for the standby redo log files.

  • If you want the standby redo log file names on the standby database to be the same as the primary redo log file names, then you must specify the NOFILENAMECHECK clause of the DUPLICATE command. Otherwise, RMAN signals an error even if the standby database is created on a different host.
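
For illustration, a LOG_FILE_NAME_CONVERT setting in the standby initialization parameter file might look like the following; the directory paths are hypothetical:

LOG_FILE_NAME_CONVERT='/u01/oradata/chicago/','/u01/oradata/boston/'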

E.2.2.5 Password Files for the Standby Database

If you are using active database duplication, then RMAN always copies the password file to the standby host because the password file on the standby database must be an exact copy of the password file on the target database. In this case, the PASSWORD FILE clause is not necessary. RMAN overwrites any existing password file for the auxiliary instance. With backup-based duplication, you must copy the password file used on the primary database to the standby database so that redo transport services can authenticate and Data Guard can ship redo to the standby.

E.3 Using the DUPLICATE Command to Create a Standby Database

The procedure for creating a standby database is basically identical to the duplication procedure described in Oracle Database Backup and Recovery User's Guide.

E.3.1 Creating a Standby Database with Active Database Duplication

To create a standby database from files that are active in the primary database, specify both FOR STANDBY and FROM ACTIVE DATABASE. Optionally, specify the DORECOVER option to recover the database after standby creation.

This scenario assumes that the standby host and primary database host have the same directory structure.

To create a standby database from active database files:

  1. Prepare the auxiliary database instance as explained in Oracle Database Backup and Recovery User's Guide.

    Because you are using active database duplication, you must create a password file for the auxiliary instance and establish Oracle Net connectivity. This is a temporary password file as it will be overwritten during the duplicate operation.

  2. Decide how to provide names for the standby control files, datafiles, online redo logs, and tempfiles. This step is explained in Oracle Database Backup and Recovery User's Guide.

    In this scenario, the standby database files will be named the same as the primary database files.

  3. Start and configure RMAN as explained in Oracle Database Backup and Recovery User's Guide.

  4. Execute the DUPLICATE command.

    The following example illustrates how to use DUPLICATE for active duplication. This example requires the NOFILENAMECHECK option because the primary database files have the same names as the standby database files. The SET clauses for SPFILE are required for log shipping to work properly. The db_unique_name must be set to ensure that the catalog and Data Guard can identify this database as being different from the primary.

    DUPLICATE TARGET DATABASE
      FOR STANDBY
      FROM ACTIVE DATABASE
      DORECOVER
      SPFILE
        SET "db_unique_name"="foou" COMMENT ''Is a duplicate''
        SET LOG_ARCHIVE_DEST_2="service=inst3 ASYNC REGISTER
         VALID_FOR=(online_logfile,primary_role)"
        SET FAL_SERVER="inst1" COMMENT "Is primary"
      NOFILENAMECHECK;
    

    RMAN automatically copies the server parameter file to the standby host, starts the auxiliary instance with the server parameter file, restores a backup control file, and copies all necessary database files and archived redo logs over the network to the standby host. RMAN recovers the standby database, but does not place it in manual or managed recovery mode.
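
    For completeness, the DUPLICATE command is typically run from an RMAN session connected to the primary database as TARGET and to the standby instance as AUXILIARY. A hypothetical invocation (the net service names are placeholders) might be:

    rman TARGET sys@chicago AUXILIARY sys@boston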

E.3.2 Creating a Standby Database with Backup-Based Duplication

To create a standby database from backups, specify FOR STANDBY but do not specify FROM ACTIVE DATABASE. Optionally, specify the DORECOVER option to recover the database after standby creation.

This scenario assumes that the standby host and primary database host have the same directory structure.

To create a standby database from backups:

  1. Make database backups and archived redo logs available to the auxiliary instance on the duplicate host as explained in Oracle Database Backup and Recovery User's Guide.

  2. Prepare the auxiliary database instance as explained in Oracle Database Backup and Recovery User's Guide.

  3. Decide how to provide names for the standby control files, datafiles, online redo logs, and tempfiles. This step is explained in Oracle Database Backup and Recovery User's Guide.

    In this scenario, the standby database files will be named the same as the primary database files.

  4. Start and configure RMAN as explained in Oracle Database Backup and Recovery User's Guide.

  5. Execute the DUPLICATE command.

    The following example illustrates how to use DUPLICATE for backup-based duplication. This example requires the NOFILENAMECHECK option because the primary database files have the same names as the standby database files.

    DUPLICATE TARGET DATABASE
      FOR STANDBY
      DORECOVER
      SPFILE
        SET "db_unique_name"="foou" COMMENT ''Is a duplicate''
        SET LOG_ARCHIVE_DEST_2="service=inst3 ASYNC REGISTER
         VALID_FOR=(online_logfile,primary_role)"
        SET FAL_SERVER="inst1" COMMENT "Is primary"
      NOFILENAMECHECK;
    

    RMAN automatically copies the server parameter file to the standby host, starts the auxiliary instance with the server parameter file, and restores all necessary database files and archived redo logs to the standby host. RMAN recovers the standby database, but does not place it in manual or managed recovery mode.


4 Creating a Logical Standby Database

This chapter steps you through the process of creating a logical standby database. It includes the following main topics:

4.1 Prerequisite Conditions for Creating a Logical Standby Database

Before you create a logical standby database, you must first ensure the primary database is properly configured. Table 4-1 provides a checklist of the tasks that you perform on the primary database to prepare for logical standby database creation.

Note that a logical standby database uses standby redo logs (SRLs) for redo received from the primary database, and also writes to online redo logs (ORLs) as it applies changes to the standby database. Thus, logical standby databases often require additional ARCn processes to simultaneously archive SRLs and ORLs. Additionally, because archiving of ORLs takes precedence over archiving of SRLs, a greater number of SRLs may be needed on a logical standby during periods of very high workload.

4.1.1 Determine Support for Data Types and Storage Attributes for Tables

Before setting up a logical standby database, ensure the logical standby database can maintain the data types and tables in your primary database. See Appendix C for a complete list of data type and storage type considerations.

4.1.2 Ensure Table Rows in the Primary Database Can Be Uniquely Identified

The physical organization in a logical standby database is different from that of the primary database, even though the logical standby database is created from a backup copy of the primary database. Thus, ROWIDs contained in the redo records generated by the primary database cannot be used to identify the corresponding row in the logical standby database.

Oracle uses primary-key or unique-constraint/index supplemental logging to logically identify a modified row in the logical standby database. When database-wide primary-key and unique-constraint/index supplemental logging is enabled, each UPDATE statement also writes the column values necessary in the redo log to uniquely identify the modified row in the logical standby database.

  • If a table has a primary key defined, then the primary key is logged along with the modified columns as part of the UPDATE statement to identify the modified row.

  • If there is no primary key, then the shortest nonnull unique-constraint/index is logged along with the modified columns as part of the UPDATE statement to identify the modified row.

  • If there is no primary key and no nonnull unique constraint/index, then all columns of bounded size are logged as part of the UPDATE statement to identify the modified row. All columns are logged except the following: LONG, LOB, LONG RAW, object type, and collections.

  • A function-based index, even though it is declared as unique, cannot be used to uniquely identify a modified row. However, logical standby databases support replication of tables that have function-based indexes defined, as long as modified rows can be uniquely identified.

Oracle recommends that you add a primary key or a nonnull unique index to tables in the primary database, whenever possible, to ensure that SQL Apply can efficiently apply redo data updates to the logical standby database.

Perform the following steps to ensure SQL Apply can uniquely identify rows of each table being replicated in the logical standby database.

Step 1   Find tables without unique logical identifier in the primary database.

Query the DBA_LOGSTDBY_NOT_UNIQUE view to display a list of tables that SQL Apply may not be able to uniquely identify. For example:

SQL> SELECT OWNER, TABLE_NAME FROM DBA_LOGSTDBY_NOT_UNIQUE
  2> WHERE (OWNER, TABLE_NAME) NOT IN 
  3> (SELECT DISTINCT OWNER, TABLE_NAME FROM DBA_LOGSTDBY_UNSUPPORTED) 
  4> AND BAD_COLUMN = 'Y';

This query may take a few minutes to run.

Step 2   Add a disabled primary-key RELY constraint.

If your application ensures the rows in a table are unique, you can create a disabled primary key RELY constraint on the table. This avoids the overhead of maintaining a primary key on the primary database.

To create a disabled RELY constraint on a primary database table, use the ALTER TABLE statement with a RELY DISABLE clause. The following example creates a disabled RELY constraint on a table named mytab, for which rows can be uniquely identified using the id and name columns:

SQL> ALTER TABLE mytab ADD PRIMARY KEY (id, name) RELY DISABLE;

When you specify the RELY constraint, the system will assume that rows are unique. Because you are telling the system to rely on the information, but are not validating it on every modification done to the table, you must be careful to select columns for the disabled RELY constraint that will uniquely identify each row in the table. If such uniqueness is not present, then SQL Apply will not correctly maintain the table.

To improve the performance of SQL Apply, add a unique-constraint/index on the same columns to the logical standby database so that it can identify each row. Failure to do so results in full table scans during UPDATE or DELETE statements carried out on the table by SQL Apply.
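
For example, after the logical standby database has been created, a unique index could be added there on the same columns used by the disabled RELY constraint. This is only a sketch using the hypothetical mytab table from the previous example; creating the index on the logical standby requires temporarily disabling the database guard for the session:

SQL> ALTER SESSION DISABLE GUARD;
SQL> CREATE UNIQUE INDEX mytab_uid ON mytab (id, name);
SQL> ALTER SESSION ENABLE GUARD;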



4.2 Step-by-Step Instructions for Creating a Logical Standby Database

This section describes the tasks you perform to create a logical standby database.

Table 4-2 provides a checklist of the tasks that you perform to create a logical standby database and specifies on which database you perform each task. There is also a reference to the section that describes the task in more detail.

4.2.1 Create a Physical Standby Database

You create a logical standby database by first creating a physical standby database and then transitioning it to a logical standby database. Follow the instructions in Chapter 3, "Creating a Physical Standby Database" to create a physical standby database.

4.2.2 Stop Redo Apply on the Physical Standby Database

You can run Redo Apply on the new physical standby database for any length of time before converting it to a logical standby database. However, before converting to a logical standby database, stop Redo Apply on the physical standby database. Stopping Redo Apply is necessary to avoid applying changes past the redo that contains the LogMiner dictionary (described in Section 4.2.3.2, "Build a Dictionary in the Redo Data").

To stop Redo Apply, issue the following statement on the physical standby database. If the database is an Oracle RAC database comprised of multiple instances, then you must first stop all Oracle RAC instances except one before issuing this statement:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;

4.2.3 Prepare the Primary Database to Support a Logical Standby Database

This section contains the following topics:

4.2.3.1 Prepare the Primary Database for Role Transitions

In Section 3.1.4, "Set Primary Database Initialization Parameters", you set up several standby role initialization parameters to take effect when the primary database is transitioned to the physical standby role.


Note:

This step is necessary only if you plan to perform switchovers.

If you plan to transition the primary database to the logical standby role, then you must also modify the parameters shown in bold typeface in Example 4-1, so that no parameters need to change after a role transition:

  • Change the VALID_FOR attribute in the original LOG_ARCHIVE_DEST_1 destination to archive redo data only from the online redo log and not from the standby redo log.

  • Include the LOG_ARCHIVE_DEST_3 destination on the primary database. This parameter only takes effect when the primary database is transitioned to the logical standby role.

Example 4-1 Primary Database: Logical Standby Role Initialization Parameters

LOG_ARCHIVE_DEST_1=
 'LOCATION=/arch1/chicago/ 
  VALID_FOR=(ONLINE_LOGFILES,ALL_ROLES)
  DB_UNIQUE_NAME=chicago'
LOG_ARCHIVE_DEST_3=
 'LOCATION=/arch2/chicago/
  VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE) 
  DB_UNIQUE_NAME=chicago'
LOG_ARCHIVE_DEST_STATE_3=ENABLE

To dynamically set these initialization parameters, use the SQL ALTER SYSTEM SET statement and include the SCOPE=BOTH clause so that the changes take effect immediately and persist after the database is shut down and started up again.
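
For example, the LOG_ARCHIVE_DEST_3 destination from Example 4-1 could be set dynamically as follows (the values match the example):

SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_3='LOCATION=/arch2/chicago/ VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE) DB_UNIQUE_NAME=chicago' SCOPE=BOTH;
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_3=ENABLE SCOPE=BOTH;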

The following table describes the archival processing defined by the changed initialization parameters shown in Example 4-1.


LOG_ARCHIVE_DEST_1

  • When the Chicago database is running in the primary role: Directs archiving of redo data generated by the primary database from the local online redo log files to the local archived redo log files in /arch1/chicago/.

  • When the Chicago database is running in the logical standby role: Directs archiving of redo data generated by the logical standby database from the local online redo log files to the local archived redo log files in /arch1/chicago/.

LOG_ARCHIVE_DEST_3

  • When the Chicago database is running in the primary role: Is ignored; LOG_ARCHIVE_DEST_3 is valid only when chicago is running in the standby role.

  • When the Chicago database is running in the logical standby role: Directs archiving of redo data from the standby redo log files to the local archived redo log files in /arch2/chicago/.

4.2.3.2 Build a Dictionary in the Redo Data

A LogMiner dictionary must be built into the redo data so that the LogMiner component of SQL Apply can properly interpret changes it sees in the redo. As part of building the LogMiner dictionary, supplemental logging is automatically set up to log primary key and unique-constraint/index columns. The supplemental logging information ensures each update contains enough information to logically identify each row that is modified by the statement.

To build the LogMiner dictionary, issue the following statement:

SQL> EXECUTE DBMS_LOGSTDBY.BUILD;

The DBMS_LOGSTDBY.BUILD procedure waits for all existing transactions to complete. Long-running transactions executed on the primary database will affect the timeliness of this command.
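
If the build appears to be waiting for a long time, you can check for open transactions on the primary database. The following query, which joins standard dynamic performance views, is one way to do so:

SQL> SELECT S.SID, S.USERNAME, T.START_TIME
  2  FROM V$TRANSACTION T, V$SESSION S
  3  WHERE T.SES_ADDR = S.SADDR;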


Note:

In databases created using Oracle Database 11g release 2 (11.2) or later, supplemental logging information is automatically propagated to any existing physical standby databases. However, for databases in earlier releases, or if the database was created using an earlier release and then upgraded to 11.2, you must check whether supplemental logging is enabled at the physical standby(s) if it is also enabled at the primary database. If it is not enabled at the physical standby(s), then before performing a switchover or failover, you must enable supplemental logging on all existing physical standby databases. To do so, issue the following SQL command on each physical standby:
SQL>  ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY, UNIQUE INDEX) COLUMNS;

If you do not do this, then any logical standby that is also in the same Data Guard configuration will be unusable if a switchover or failover is performed to one of the physical standby databases. If a switchover or failover has already occurred and supplemental logging was not enabled, then you must recreate all logical standby databases.




4.2.4 Transition to a Logical Standby Database

This section describes how to prepare the physical standby database to transition to a logical standby database. It contains the following topics:

4.2.4.1 Convert to a Logical Standby Database

The redo logs contain the information necessary to convert your physical standby database to a logical standby database.


Note:

If you have an Oracle RAC physical standby database, shut down all but one instance, set CLUSTER_DATABASE to FALSE, and start the standby database as a single instance in MOUNT EXCLUSIVE mode, as follows:
SQL> ALTER SYSTEM SET CLUSTER_DATABASE=FALSE SCOPE=SPFILE;
SQL> SHUTDOWN ABORT;
SQL> STARTUP MOUNT EXCLUSIVE; 

To continue applying redo data to the physical standby database until it is ready to convert to a logical standby database, issue the following SQL statement:

SQL> ALTER DATABASE RECOVER TO LOGICAL STANDBY db_name;

For db_name, specify a database name that is different from the primary database to identify the new logical standby database. If you are using a server parameter file (spfile) at the time you issue this statement, then the database will update the file with appropriate information about the new logical standby database. If you are not using an spfile, then the database issues a message reminding you to set the name of the DB_NAME parameter after shutting down the database.


Note:

If you are creating a logical standby database in the context of performing a rolling upgrade of Oracle software with a physical standby database, you should issue the following command instead:
SQL> ALTER DATABASE RECOVER TO LOGICAL STANDBY KEEP IDENTITY;

A logical standby database created with the KEEP IDENTITY clause retains the same DB_NAME and DBID as that of its primary database. Such a logical standby database can only participate in one switchover operation, and thus should only be created in the context of a rolling upgrade with a physical standby database.

Note that the KEEP IDENTITY clause is available only if the database being upgraded is running Oracle Database release 11.1 or later.


The statement waits, applying redo data until the LogMiner dictionary is found in the log files. This may take several minutes, depending on how long it takes redo generated in Section 4.2.3.2, "Build a Dictionary in the Redo Data" to be transmitted to the standby database, and how much redo data needs to be applied. If a dictionary build is not successfully performed on the primary database, this command will never complete. You can cancel the SQL statement by issuing the ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL statement from another SQL session.


Caution:

In releases prior to Oracle Database 11g, you needed to create a new password file before you opened the logical standby database. This is no longer needed. Creating a new password file at the logical standby database will cause redo transport services to not work properly.

4.2.4.2 Adjust Initialization Parameters for the Logical Standby Database


Note:

If you started with an Oracle RAC physical standby database, set CLUSTER_DATABASE back to TRUE, as follows:
SQL> ALTER SYSTEM SET CLUSTER_DATABASE=TRUE SCOPE=SPFILE; 

On the logical standby database, shut down the instance and issue the STARTUP MOUNT statement to start and mount the database. Do not open the database; it should remain closed to user access until later in the creation process. For example:

SQL> SHUTDOWN;
SQL> STARTUP MOUNT;

You need to modify the LOG_ARCHIVE_DEST_n parameters because, unlike physical standby databases, logical standby databases are open databases that generate redo data and have multiple log files (online redo log files, archived redo log files, and standby redo log files). It is good practice to specify separate local destinations for:

  • Archived redo log files that store redo data generated by the logical standby database. In Example 4-2, this is configured as the LOG_ARCHIVE_DEST_1=LOCATION=/arch1/boston destination.

  • Archived redo log files that store redo data received from the primary database. In Example 4-2, this is configured as the LOG_ARCHIVE_DEST_3=LOCATION=/arch2/boston destination.

Example 4-2 shows the initialization parameters that were modified for the logical standby database. The parameters shown are valid for the Boston logical standby database when it is running in either the primary or standby database role.

Example 4-2 Modifying Initialization Parameters for a Logical Standby Database

LOG_ARCHIVE_DEST_1=
  'LOCATION=/arch1/boston/
   VALID_FOR=(ONLINE_LOGFILES,ALL_ROLES)
   DB_UNIQUE_NAME=boston'
LOG_ARCHIVE_DEST_2=
  'SERVICE=chicago ASYNC
   VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
   DB_UNIQUE_NAME=chicago'
LOG_ARCHIVE_DEST_3=
  'LOCATION=/arch2/boston/
   VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE)
   DB_UNIQUE_NAME=boston'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_DEST_STATE_3=ENABLE

Note:

If database compatibility is set to 11.1 or later, you can use the fast recovery area to store remote archived logs. To do this, you need to set only the following parameters (assuming you have already set the DB_RECOVERY_FILE_DEST and DB_RECOVERY_FILE_DEST_SIZE parameters):
LOG_ARCHIVE_DEST_1=
  'LOCATION=USE_DB_RECOVERY_FILE_DEST
   DB_UNIQUE_NAME=boston'

Note that because you are using the fast recovery area, it is not necessary to specify the VALID_FOR parameter. Its default value is (ALL_LOGFILES,ALL_ROLES) and that is the desired behavior in this case. LOG_ARCHIVE_DEST_1 will be used for all log files, both online (primary) and standby.


The following table describes the archival processing defined by the initialization parameters shown in Example 4-2.


LOG_ARCHIVE_DEST_1

  • When the Boston database is running in the primary role: Directs archival of redo data generated by the primary database from the local online redo log files to the local archived redo log files in /arch1/boston/.

  • When the Boston database is running in the logical standby role: Directs archival of redo data generated by the logical standby database from the local online redo log files to the local archived redo log files in /arch1/boston/.

LOG_ARCHIVE_DEST_2

  • When the Boston database is running in the primary role: Directs transmission of redo data to the remote logical standby database chicago.

  • When the Boston database is running in the logical standby role: Is ignored; LOG_ARCHIVE_DEST_2 is valid only when boston is running in the primary role.

LOG_ARCHIVE_DEST_3

  • When the Boston database is running in the primary role: Is ignored; LOG_ARCHIVE_DEST_3 is valid only when boston is running in the standby role.

  • When the Boston database is running in the logical standby role: Directs archival of redo data received from the primary database to the local archived redo log files in /arch2/boston/.


Note:

The DB_FILE_NAME_CONVERT initialization parameter is not honored once a physical standby database is converted to a logical standby database. If necessary, you should register a skip handler and provide SQL Apply with a replacement DDL string to execute by converting the path names of the primary database datafiles to the standby datafile path names. See the DBMS_LOGSTDBY package in Oracle Database PL/SQL Packages and Types Reference for information about the SKIP procedure.

4.2.5 Open the Logical Standby Database

To open the new logical standby database, you must open it with the RESETLOGS option by issuing the following statement:

SQL> ALTER DATABASE OPEN RESETLOGS;

Note:

If you started with an Oracle RAC physical standby database, you can start up all other standby instances at this point.


Caution:

If you are co-locating the logical standby database on the same computer system as the primary database, you must issue the following SQL statement before starting SQL Apply for the first time, so that SQL Apply skips the file operations performed at the primary database. The reason this is necessary is that SQL Apply has access to the same directory structure as the primary database, and datafiles that belong to the primary database could possibly be damaged if SQL Apply attempted to re-execute certain file-specific operations.
SQL> EXECUTE DBMS_LOGSTDBY.SKIP('ALTER TABLESPACE');

The DB_FILE_NAME_CONVERT parameter that you set up while co-locating the physical standby database on the same system as the primary database is ignored by SQL Apply. See Oracle Database PL/SQL Packages and Types Reference for information about DBMS_LOGSTDBY.SKIP and equivalent behavior in the context of a logical standby database.


Because this is the first time the database is being opened, the database's global name is adjusted automatically to match the new DB_NAME initialization parameter.


Note:

If you are creating the logical standby database in order to perform a rolling upgrade of the Oracle Database software, then before you start SQL Apply for the first time, Oracle recommends that you use the DBMS_LOGSTDBY PL/SQL procedure at the logical standby database to capture information about transactions running on the primary database that will not be supported by a logical standby database. Run the following procedures to capture and record the information as events in the DBA_LOGSTDBY_EVENTS table:
EXEC DBMS_LOGSTDBY.APPLY_SET('MAX_EVENTS_RECORDED',
DBMS_LOGSTDBY.MAX_EVENTS);

EXEC DBMS_LOGSTDBY.APPLY_SET('RECORD_UNSUPPORTED_OPERATIONS', 'TRUE');


Issue the following statement to begin applying redo data to the logical standby database:

SQL>  ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;

4.2.6 Verify the Logical Standby Database Is Performing Properly

See the following sections for help verifying that the logical standby database is performing properly:

4.3 Post-Creation Steps


Note:

The conversion of the physical standby database to a logical standby database happens in two phases:
  1. As part of the ALTER DATABASE RECOVER TO LOGICAL STANDBY statement (unless you have specified the KEEP IDENTITY clause), the DBID of the database is changed.

  2. As part of the first successful invocation of ALTER DATABASE START LOGICAL STANDBY APPLY statement, the control file is updated to make it consistent with that of the newly created logical standby database.

    Once you have successfully invoked the ALTER DATABASE START LOGICAL STANDBY APPLY statement, you should take a full backup of the logical standby database, because the backups taken from the primary database cannot be used to restore the logical standby database.


At this point, the logical standby database is running and can provide the maximum performance level of data protection. The following list describes additional preparations you can take on the logical standby database:
