administrator-managed database
An administrator-managed database is a database created on nodes that are not part of a server pool and that is managed by the database or clusterware administrator.
all node patching
A method of applying patches to the nodes in a cluster. When using the all node patching method, all nodes in the Oracle Real Application Clusters environment are first brought down, the patch is applied on every node, and then the nodes are brought back up.
Automatic Workload Repository (AWR)
A built-in repository that exists in every Oracle Database. At regular intervals, the Oracle Database makes a snapshot of all of its vital statistics and workload information and stores them in the AWR.
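For example (an illustrative query; it assumes you are connected to the database as a privileged user), you can list the snapshots AWR has taken by querying the DBA_HIST_SNAPSHOT view:
sqlplus / as sysdba <<'EOF'
SELECT snap_id, begin_interval_time
FROM dba_hist_snapshot
ORDER BY snap_id;
EOF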
cache coherency
The synchronization of data in multiple caches so that reading a memory location through any cache returns the most recent data written to that location through any other cache. Sometimes called cache consistency.
Cache Fusion
A diskless cache coherency mechanism in Oracle Real Application Clusters that provides copies of blocks directly from a holding instance's memory cache to a requesting instance's memory cache.
cluster
Multiple interconnected computers or servers that appear as if they are one server to end users and applications.
cluster file system
A distributed file system in which a cluster of servers collaborates to provide high-performance service to clients. Cluster file system software distributes requests among the storage cluster components.
Cluster Synchronization Services (CSS)
An Oracle Clusterware component that discovers and tracks the membership state of each node by providing a common view of membership across the cluster. CSS also monitors process health, specifically the health of the database instance. The Global Enqueue Service Monitor (LMON), a background process that monitors the health of the cluster database environment, registers with and de-registers from CSS. See also OCSSD.
Cluster Verification Utility (CVU)
A tool that verifies a wide range of Oracle RAC-specific components such as shared storage devices, networking configurations, system requirements, Oracle Clusterware, groups, and users.
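For example (an illustrative run using this guide's node names), the following command checks the preinstallation requirements for Oracle Clusterware on both nodes:
cluvfy stage -pre crsinst -n racnode1,racnode2 -verbose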
CRSD
A Linux or UNIX process that performs high availability recovery and management operations, such as maintaining the OCR. CRSD also manages application resources, runs as the root user (or as a user in the admin group on Mac OS X-based systems), and restarts automatically upon failure.
Distributed Transaction Processing (DTP)
The paradigm of distributed transactions, including both XA-type externally coordinated transactions, and distributed-SQL-type (database links in Oracle) internally coordinated transactions.
Dynamic Host Configuration Protocol (DHCP)
A network application protocol used by devices (DHCP clients) to obtain configuration information for operation in an Internet Protocol network. This protocol reduces system administration workload, allowing devices to be added to the network with little or no manual intervention.
Enterprise Manager Configuration Assistant (EMCA)
A graphical user interface-based configuration assistant that you can use to configure Enterprise Manager features.
Event Manager (EVM)
The background process that publishes Oracle Clusterware events. EVM scans the designated callout directory and runs all scripts in that directory when an event occurs.
Event Manager Daemon (EVMD)
A Linux or UNIX event manager daemon that starts the racgevt process to manage callouts.
Fast Application Notification (FAN)
Applications can use FAN to enable rapid failure detection, balancing of connection pools after failures, and re-balancing of connection pools when failed components are repaired. The FAN notification process uses system events that Oracle publishes when cluster servers become unreachable or if network interfaces fail.
Fast Connection Failover
Fast Connection Failover provides high availability to FAN integrated clients, such as clients that use JDBC, OCI, or ODP.NET. If you configure the client to use fast connection failover, then the client automatically subscribes to FAN events and can react to database UP and DOWN events. In response, Oracle gives the client a connection to an active instance that provides the requested database service.
forced disk write
In Oracle Real Application Clusters, a particular data block can only be modified by one instance at a time. If one instance modifies a data block that another instance needs, then whether a forced disk write is required depends on the type of request submitted for the block.
Free pool
A default server pool used in policy-based cluster and capacity management of Oracle Clusterware resources. The free pool contains servers that are not assigned to any server pool.
General Parallel File System (GPFS)
General Parallel File System (GPFS) is a shared-disk IBM file system product that provides data access from all of the nodes in a homogenous or heterogeneous cluster.
Global Cache Service (GCS)
The Global Cache Service implements Cache Fusion. It maintains the block mode for blocks in the global role and is responsible for block transfers between instances. The Global Cache Service employs various background processes, such as the Global Cache Service Processes (LMSn) and the Global Enqueue Service Daemon (LMD).
Global Cache Service Processes (LMSn)
Processes that manage remote messages. Oracle RAC provides for up to 10 Global Cache Service Processes.
Global Cache Service (GCS) resources
Global resources that coordinate access to data blocks in the buffer caches of multiple Oracle RAC instances to provide cache coherency.
global database name
The full name of the database that uniquely identifies it from any other database. The global database name is of the form database_name.database_domain—for example: TEST.US.EXAMPLE.COM
global dynamic performance views (GV$)
Dynamic performance views that store information about all open instances in an Oracle Real Application Clusters cluster, not only the local instance. In contrast, standard dynamic performance views (V$) store information only about the local instance.
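For example (illustrative; run as a privileged user from any node), the following query uses the INST_ID column that GV$ views add to count sessions on each open instance:
sqlplus / as sysdba <<'EOF'
SELECT inst_id, COUNT(*) AS sessions
FROM gv$session
GROUP BY inst_id;
EOF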
Global Enqueue Service Daemon (LMD)
The resource agent process that manages requests for resources to control access to blocks. The LMD process also handles deadlock detection and remote resource requests. Remote resource requests are requests originating from another instance.
Global Enqueue Service Monitor (LMON)
The background LMON process monitors the entire cluster to manage global resources. LMON manages instance deaths and the associated recovery for any failed instance. In particular, LMON handles the part of recovery associated with global resources. LMON-provided services are also known as Cluster Group Services.
Global Services Daemon (GSD)
A component that receives requests from SRVCTL to execute administrative job tasks, such as startup or shutdown. The command is executed locally on each node, and the results are returned to SRVCTL. GSD is installed on the nodes by default.
Grid home
The Oracle home directory for an Oracle Grid Infrastructure for a cluster software installation, which includes Oracle Clusterware and Oracle ASM.
grid infrastructure
The software that provides the infrastructure for an enterprise grid architecture. Oracle Database 11g release 2 (11.2) combines these infrastructure products into one software bundle called Oracle Grid Infrastructure. In an Oracle cluster, Oracle Grid Infrastructure for a cluster includes Oracle Clusterware and Oracle Automatic Storage Management (Oracle ASM). For a standalone Oracle Database server, Oracle Grid Infrastructure for a standalone server includes Oracle Restart and Oracle ASM.
Grid Naming Service (GNS)
A generic service that resolves the names of hosts in a delegated normal DNS zone by mapping them to IP addresses within the zone. GNS enables the use of Dynamic Host Configuration Protocol (DHCP) addresses for Oracle RAC database nodes, simplifying deployment. GNS also resolves host names passed back from a SCAN listener.
high availability
Systems with redundant components that provide consistent and uninterrupted service, even following hardware or software failures.
High Availability Cluster Multi-Processing (HACMP)
High Availability Cluster Multi-Processing is an IBM AIX-based high availability cluster software product. HACMP has two major components: high availability (HA) and cluster multi-processing (CMP).
instance
For an Oracle RAC database, each node in a cluster usually has one instance of the running Oracle software that references the database. When a database is started, Oracle allocates a memory area called the System Global Area (SGA) and starts one or more Oracle processes. This combination of the SGA and the Oracle processes is called an instance. Each instance has a unique Oracle System Identifier (SID), instance name, rollback segments, and thread ID.
instance membership recovery
The method used by Oracle RAC to guarantee that all cluster members are functional and active. IMR polls and arbitrates the membership. Any member that does not show a heartbeat by way of the control file, or that does not respond to periodic activity inquiry messages, is presumed terminated.
instance name
Represents the name of the instance and is used to uniquely identify a specific instance when clusters share common services names. The instance name is identified by the INSTANCE_NAME parameter in the instance initialization file, initsid.ora. The instance name equals the Oracle System Identifier (SID).
instance number
A number that associates extents of data blocks with particular instances. The instance number enables you to start an instance and ensure that it uses the extents allocated to it for inserts and updates. This ensures that an instance does not use space allocated for other instances.
interconnect
The private network communication link that is used to synchronize the memory cache of the nodes in the cluster.
Inter-Process Communication (IPC)
A high-speed operating system-dependent transport component. The IPC transfers messages between instances on different nodes. Also referred to as the interconnect.
Logical Volume Manager (LVM)
A generic term that describes Linux or UNIX subsystems for online disk storage management.
Master Boot Record (MBR)
A program that executes when a computer starts. Typically, the MBR resides on the first sector of a local hard disk. The program begins the startup process by examining the partition table to determine which partition to use for starting the computer. The MBR program then transfers control to the boot sector of the startup partition, which continues the startup process.
minimum downtime patching
In minimum downtime patching, the nodes are divided into two sets. The first set is shut down and the patch is applied to it. The second set is then shut down. The first set is brought up and then the patch is applied to the second set. After the patch is applied to the second set, those nodes are also brought up, finishing the patching operation.
multicast Domain Name Server (mDNS)
A part of Zero Configuration Networking (Zeroconf), mDNS provides the ability to address hosts using DNS-like names without the need of an existing, managed DNS server.
Network Interface Card (NIC)
A card that you insert into a computer to connect the computer to a network.
Network Time Protocol (NTP)
An Internet standard protocol, built on top of TCP/IP, that ensures the accurate synchronization to the millisecond of the computer clock times in a network of computers.
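On Oracle Linux, for example, the Oracle installation guides recommend running ntpd with the slewing option (-x) so that clock corrections never step time backward on cluster nodes. A typical /etc/sysconfig/ntpd entry (the other options shown are distribution defaults and may differ on your system) is:
OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"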
node
A node is a computer on which the Oracle Clusterware software is installed or will be installed.
Object Link Manager (OLM)
The Oracle interface that maps symbolic links to logical drives and displays them in the OLM graphical user interface.
OCSSD
A Linux or UNIX process that manages the Cluster Synchronization Services (CSS) daemon. OCSSD manages cluster node membership and runs as the oracle user; failure of this process results in cluster restart.
optimal flexible architecture (OFA)
A set of file naming and configuration guidelines created to ensure reliable Oracle installations that require little maintenance.
Oracle Base directory
The mountpoint for all software installations performed by a particular user. An Oracle base directory can contain multiple Oracle homes for Oracle software products, either of the same or different releases, all installed by the same operating system user. The Oracle Base directory is also the directory where the software parameter files, log files, trace files, and so on, associated with a specific installation owner are located.
Oracle Cluster File System (OCFS)
The Oracle proprietary cluster file system software that is available for Linux and Windows platforms.
Oracle Cluster Registry (OCR)
The Oracle RAC configuration information repository that manages information about the cluster node list and instance-to-node mapping information. The OCR also manages information about Oracle Clusterware resource profiles for customized applications.
Oracle Clusterware
This is clusterware that is provided by Oracle to manage cluster database processing including node membership, group services, global resource management, and high availability functions.
Oracle Home directory
The binary location for a particular software installation.
Typically, the Oracle home directory is a subdirectory of the Oracle Base directory for the software installation owner. However, in the case of Oracle Grid Infrastructure for a cluster, the Oracle home directory (in this case, the Grid home) is located outside of the Oracle Base directory for the installation owner, because the path of the Grid home is changed to root ownership.
Oracle Interface Configuration Tool (OIFCFG)
A command-line tool for both single-instance Oracle databases and Oracle RAC databases that enables you to allocate and de-allocate network interfaces to components, direct components to use specific network interfaces, and retrieve component configuration information. The Oracle Universal Installer (OUI) also uses OIFCFG to identify and display available interfaces.
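For example (the interface names and subnets shown are placeholders), you can display the stored network interface configuration from the Grid home:
/u01/app/11.2.0/grid/bin/oifcfg getif
eth0  192.0.2.0  global  public
eth1  192.168.0.0  global  cluster_interconnect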
Oracle Inventory directory
The Oracle Inventory directory is the central inventory location for all Oracle software installed on a server.
Oracle Notification Services (ONS)
A publish and subscribe service for communicating information about all FAN events.
Oracle Universal Installer (OUI)
A tool to install Oracle Clusterware, the Oracle relational database software, and the Oracle Real Application Clusters software. You can also use the Oracle Universal Installer to launch the Database Configuration Assistant (DBCA).
policy-managed database
A policy-managed database is created using a server pool. Oracle Clusterware allocates and reassigns capacity based on policies you define, enabling faster resource failover and dynamic capacity assignment.
raw device
A disk drive that does not yet have a file system set up. Raw devices are used for Oracle Real Application Clusters because they enable the sharing of disks. See also raw partition.
raw partition
A portion of a physical disk that is accessed at the lowest possible level. A raw partition is created when an extended partition is created and logical partitions are assigned to it without any formatting. Once formatting is complete, it is called a cooked partition. See also raw device.
Recovery Manager (RMAN)
An Oracle tool that enables you to back up, copy, restore, and recover data files, control files, and archived redo logs. It is included with the Oracle server and does not require separate installation. You can invoke RMAN as a command line utility from the operating system (O/S) prompt or use the GUI-based Enterprise Manager Backup Manager.
rolling patching
In rolling patching, one node (or group of nodes) is shut down, the patch is applied, and the node is brought back up. This is repeated for each node in the cluster until all nodes are patched.
Run-time Connection Load Balancing
Enables Oracle to make intelligent service connection decisions based on the connection pool that provides the optimal service for the requested application based on current workloads. The JDBC, ODP.NET, and OCI clients are integrated with the load balancing advisory; you can use any of these client environments to provide run-time connection load balancing.
scalability
The ability to add additional nodes to Oracle Real Application Clusters applications and achieve markedly improved scale-up and speed-up.
SCAN
A single name, or network alias, for the cluster. Oracle Database 11g database clients use SCAN to connect to the database. SCAN can resolve to multiple IP addresses, reflecting multiple listeners in the cluster handling public client connections.
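As an illustration (the addresses are placeholders, and the SCAN name docrac is the one used later in this guide), resolving the SCAN in DNS returns the multiple addresses behind the single name:
nslookup docrac
Name: docrac.example.com
Address: 192.0.2.101
Name: docrac.example.com
Address: 192.0.2.102
Name: docrac.example.com
Address: 192.0.2.103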
Secure Shell (SSH)
A program for logging into a remote computer over a network. You can use SSH to execute commands on a remote computer and to move files from one computer to another. SSH uses strong authentication and secure communications over insecure channels.
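OUI can configure passwordless SSH between cluster nodes for you during installation, as this guide does later; if you prefer to configure it manually for the oracle user, a minimal sketch (assuming the standard OpenSSH client tools) is:
ssh-keygen -t rsa
ssh-copy-id oracle@racnode2
ssh racnode2 date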
Server Control (SRVCTL) Utility
Server Management (SRVM) comprises the components required to operate Oracle Enterprise Manager in Oracle Real Application Clusters. The SRVM components, such as the Intelligent Agent, Global Services Daemon, and SRVCTL, enable you to manage cluster databases running in heterogeneous environments through an open client/server architecture using Oracle Enterprise Manager.
server pool
A server pool is a logical division of nodes in a cluster into a group to support policy-managed databases.
services
Entities that you can define in Oracle RAC databases that enable you to group database workloads and route work to the optimal instances that are assigned to offer the service.
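For example (a hedged sketch; the service name reports and the instance names orcl1 and orcl2 are hypothetical, patterned on this guide's orcl database), you can create and start a service with SRVCTL:
srvctl add service -d orcl -s reports -r orcl1 -a orcl2
srvctl start service -d orcl -s reports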
singleton services
Services that run on only one instance at any one time. By defining the Distributed Transaction Processing (DTP) property of a service, you can force the service to be a singleton service.
split brain syndrome
A condition in which two or more instances attempt to control a cluster database. In a two-node environment, for example, both instances attempt to manage updates simultaneously.
system identifier (SID)
The Oracle system identifier (SID) identifies a specific instance of the running Oracle software. For an Oracle Real Application Clusters database, each node within the cluster has an instance referencing the database.
thread
Each Oracle instance has its own set of online redo log groups. These groups are called a thread of online redo. In non-Oracle Real Application Clusters environments, each database has only one thread that belongs to the instance accessing it. In Oracle Real Application Clusters environments, each instance has a separate thread, that is, each instance has its own online redo log. Each thread has its own current log member.
thread number
An identifier for the redo thread to be used by an instance, specified by the INSTANCE_NUMBER initialization parameter. You can use any available redo thread number, but an instance cannot use the same redo thread number as another instance.
transparent application failover (TAF)
A run-time failover for high-availability environments, such as Oracle Real Application Clusters, TAF refers to the failover and re-establishment of application-to-service connections. It enables client applications to automatically reconnect to the database if the connection fails, and optionally resume a SELECT statement that was in progress. This reconnect happens automatically from within the Oracle Call Interface (OCI) library.
2 Day + Real Application Clusters Guide
11g Release 2 (11.2)
E17264-12
May 2012
Copyright © 2006, 2012, Oracle and/or its affiliates. All rights reserved.
Primary Author: Janet Stern
Contributing Authors: Mark Bauer, Vivian Schupmann, Richard Strohm, Douglas Williams
Contributors: David Austin, Eric Belden, David Brower, Jonathan Creighton, Sudip Datta, Venkatadri Ganesan, Shamik Ganguly, Prabhaker Gongloor, Mayumi Hayasaka, William Hodak, Masakazu Ito, Aneesh Khandelwal, Sushil Kumar, Rich Long, Barb Lundhild, Venkat Maddali, Gaurav Manglik, Markus Michalewicz, Mughees Minhas, Tim Misner, Joe Paradise, Srinivas Poovala, Hanlin Qian, Mark Scardina, Laurent Schneider, Uri Shaft, Cathy Shea, Jacqueline Sideri, Vijay Sriram, Vishwanath Subrahmannya Sastry, Mark Townsend, Ara Vagharshakian, Mike Zampiceni
This software and related documentation are provided under a license agreement containing restrictions on use and disclosure and are protected by intellectual property laws. Except as expressly permitted in your license agreement or allowed by law, you may not use, copy, reproduce, translate, broadcast, modify, license, transmit, distribute, exhibit, perform, publish, or display any part, in any form, or by any means. Reverse engineering, disassembly, or decompilation of this software, unless required by law for interoperability, is prohibited.
The information contained herein is subject to change without notice and is not warranted to be error-free. If you find any errors, please report them to us in writing.
If this is software or related documentation that is delivered to the U.S. Government or anyone licensing it on behalf of the U.S. Government, the following notice is applicable:
U.S. GOVERNMENT RIGHTS Programs, software, databases, and related documentation and technical data delivered to U.S. Government customers are "commercial computer software" or "commercial technical data" pursuant to the applicable Federal Acquisition Regulation and agency-specific supplemental regulations. As such, the use, duplication, disclosure, modification, and adaptation shall be subject to the restrictions and license terms set forth in the applicable Government contract, and, to the extent applicable by the terms of the Government contract, the additional rights set forth in FAR 52.227-19, Commercial Computer Software License (December 2007). Oracle America, Inc., 500 Oracle Parkway, Redwood City, CA 94065.
This software or hardware is developed for general use in a variety of information management applications. It is not developed or intended for use in any inherently dangerous applications, including applications that may create a risk of personal injury. If you use this software or hardware in dangerous applications, then you shall be responsible to take all appropriate fail-safe, backup, redundancy, and other measures to ensure its safe use. Oracle Corporation and its affiliates disclaim any liability for any damages caused by use of this software or hardware in dangerous applications.
Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.
Intel and Intel Xeon are trademarks or registered trademarks of Intel Corporation. All SPARC trademarks are used under license and are trademarks or registered trademarks of SPARC International, Inc. AMD, Opteron, the AMD logo, and the AMD Opteron logo are trademarks or registered trademarks of Advanced Micro Devices. UNIX is a registered trademark of The Open Group.
This software or hardware and documentation may provide access to or information on content, products, and services from third parties. Oracle Corporation and its affiliates are not responsible for and expressly disclaim all warranties of any kind with respect to third-party content, products, and services. Oracle Corporation and its affiliates will not be responsible for any loss, costs, or damages incurred due to your access to or use of third-party content, products, or services.
This chapter explains how to install Oracle Real Application Clusters (Oracle RAC) and Oracle RAC One Node using Oracle Universal Installer (OUI). Before installing Oracle RAC or Oracle RAC One Node, you must first install the Oracle Grid Infrastructure for a cluster software, which consists of Oracle Clusterware and Oracle Automatic Storage Management (Oracle ASM), for 11g release 2 (11.2). After Oracle Clusterware and Oracle ASM are operational, you can use OUI to install the Oracle Database software with the Oracle RAC components. Installing Oracle RAC One Node is available starting with Oracle Database 11g Release 2 (11.2.0.2).
This chapter includes the following sections:
Installing the Oracle Grid Infrastructure for a Cluster Software
Installing the Oracle Database Software and Creating a Database
About Converting an Oracle Database to an Oracle RAC Database
Oracle Clusterware is installed as part of the Oracle Grid Infrastructure for a cluster software. OUI installs Oracle Clusterware into a directory structure that is referred to as Grid_home. This home is separate from the home directories of other Oracle software products installed on the same server. Because Oracle Clusterware works closely with the operating system, system administrator privileges are required for some installation tasks. In addition, some Oracle Clusterware processes must run as the special operating system user, root. Oracle Automatic Storage Management (Oracle ASM) is also installed in the Grid home directory.
The Oracle RAC software is installed from the Oracle Database 11g installation media. By default, the standard Oracle Database 11g software installation process installs the Oracle RAC option when OUI recognizes that you are performing the installation on a cluster. OUI installs Oracle RAC into a directory structure that is referred to as Oracle_home. This home is separate from the home directories of other Oracle software products installed on the same server.
To prepare the Oracle Media installation files:
If you have the Oracle software on CDs or DVDs, then insert the disc for Oracle Grid Infrastructure for a cluster into a disk drive on your computer. Make sure the disk drive has been mounted at the operating system level. You must change discs when installing the Oracle Database software.
If you do not have installation disks, but are instead installing from ZIP files, then continue to Step 2.
If the installation software is in one or more ZIP files, then create a staging directory on one node, for example racnode1, to store the unzipped files, as shown here:
mkdir -p /stage/oracle/11.2.0
Copy the ZIP files to this staging directory. For example, if the files were downloaded to a directory named /home/user1, and the ZIP file is named 11200_linux_db.zip, then use the following commands to move the ZIP file to the staging directory:
cd /home/user1
cp 11200_linux_db.zip /stage/oracle/11.2.0
As the oracle user on the first node, unzip the Oracle media, as shown in the following example:
cd /stage/oracle/11.2.0
unzip 11200_linux_db.zip
When you first start OUI, you are prompted to enter your email address and My Oracle Support password. By entering this information, you enable the following features:
Oracle Configuration Manager is installed and configured. This option enables you to associate information about your Oracle RAC configuration with your My Oracle Support (formerly OracleMetalink) account. In the event that you must place a service request with Oracle Support, that configuration information can help provide a more rapid resolution to the service issue.
E-mail notification of security alerts from My Oracle Support
Automatic download and application of the most recent patches to the newly installed Oracle software (with Oracle Grid Infrastructure for a cluster or Oracle Database). The software updates that can be downloaded include patches, critical patch updates, installer updates, and patch sets. This functionality is available starting with Oracle Database 11g Release 2 (11.2.0.2).
If you choose to enable these features, then you must supply your My Oracle Support account name (your email address) and your password. You may have to configure the proxy settings before your computer can connect to My Oracle Support.
If you have downloaded the software updates, then you can enter the directory location where the files are stored on the local server instead of downloading them from My Oracle Support. The software updates are applied to the installed software during the installation process.
The following topics describe the process of installing Oracle Grid Infrastructure for a cluster, which consists of Oracle Clusterware and Oracle Automatic Storage Management (Oracle ASM):
You run OUI from the oracle user account. Before you start OUI to install Oracle Grid Infrastructure for a cluster, you do not have to configure the environment of the oracle user. You can specify the directories to use for the central inventory and the Grid home during the installation process.
However, you can set the ORACLE_BASE environment variable to the directory in which you want the Oracle Inventory files located. For example, if you plan to make the Oracle Database home directory /u01/app/oracle/product/11.2.0/dbhome_1, then you would set ORACLE_BASE to the directory /u01/app/oracle/. If you set the ORACLE_BASE directory before installation, then this becomes the default location for the central inventory displayed by OUI.
You can also set the ORACLE_HOME environment variable to the location chosen for the Grid home. If you set the ORACLE_HOME directory before installation, then this becomes the default location for the Grid home displayed by OUI.
(Optional) To modify the user environment before installing Oracle Grid Infrastructure for a cluster on Oracle Linux:
As the oracle user, execute the following commands:
[oracle]$ unset ORACLE_HOME
[oracle]$ unset ORACLE_SID
[oracle]$ unset ORACLE_BASE
[oracle]$ export ORACLE_BASE=/u01/app/oracle/
[oracle]$ export ORACLE_HOME=/u01/app/11.2.0/grid
Verify the changes have been made by executing the following commands:
[oracle]$ echo $ORACLE_SID
[oracle]$ echo $ORACLE_HOME
/u01/app/11.2.0/grid
[oracle]$ echo $ORACLE_BASE
/u01/app/oracle/
During installation, for certain prerequisite check failures, you can click Fix & Check Again to generate a fixup script (runfixup.sh). You are then prompted by OUI to run the fixup script as the root user in a separate session. You must run the script on all the nodes specified by OUI.
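For example (the script path below is a placeholder; use the exact path that OUI displays), you run the generated script as root on each node:
[root@racnode1 ~]# /tmp/CVU_11.2.0.2.0_oracle/runfixup.sh
[root@racnode2 ~]# /tmp/CVU_11.2.0.2.0_oracle/runfixup.sh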
The Fixup script does the following:
Checks and sets kernel parameters to values required for successful installation, including:
Shared memory parameters
Semaphore parameters
Open file descriptor and UDP send/receive parameters
Sets permissions on the Oracle Inventory (central inventory) directory.
Reconfigures primary and secondary group memberships for the installation owner, if necessary, for the Oracle Inventory directory, and for the operating system privileges groups.
Sets shell limits to required values, if necessary.
Modifying the contents of the generated fixup script is not recommended.
Note: Using fixup scripts does not ensure that all the required prerequisites for installing Oracle Grid Infrastructure for a cluster and Oracle RAC are satisfied. You must still verify that all the requirements listed in Chapter 2, "Preparing Your Cluster" are met to ensure a successful installation.
As the Oracle Grid Infrastructure for a cluster software owner (oracle) user on the first node, install the Oracle Grid Infrastructure for a cluster software. Note that OUI uses Secure Shell (SSH) to copy the binary files from this node to the other nodes during the installation. OUI can configure SSH for you during installation.
Note: If you are installing Oracle Clusterware on a server that has a single-instance Oracle Database 11g installation, then stop the existing Oracle ASM instances, if any. After Oracle Clusterware is installed, start the Oracle ASM instances again. When you restart the single-instance Oracle database and the Oracle ASM instances, the Oracle ASM instances use the Cluster Synchronization Services Daemon (CSSD) instead of the daemon for the single-instance Oracle database.
To install the Oracle Grid Infrastructure for a cluster software:
Use the following command to start OUI, where staging_area is the location of the staging area on disk, or the root level of the installation media:
cd /staging_area/clusterware/Disk1
./runInstaller
After a few minutes, the Select Installation Option window appears.
Choose the Install and Configure Grid Infrastructure for a Cluster option, then click Next.
The Select Installation Type window appears.
Select Typical Installation, then click Next.
The Specify Cluster Configuration window appears.
In the SCAN Name field, enter a name for your cluster that is unique throughout your entire enterprise network. For example, you might choose a name that is based on the node names' common prefix. This guide uses the SCAN name docrac.
In the Hostname column of the table of cluster nodes, you should see your local node, for example racnode1.example.com. Click Add to add another node to the cluster.
The Add Cluster Node Information pop-up window appears.
Note: Specify both nodes during installation even if you plan to use Oracle RAC One Node.
Enter the second node's public name (racnode2) and virtual IP name (racnode2-vip), and then click OK.
You are returned to the Specify Cluster Configuration window.
You should now see both nodes listed in the table of cluster nodes. Click the Identify Network Interfaces button. In the Identify Network Interfaces window, verify that each interface has the correct interface type (Public or Private) associated with it. If you have network interfaces that should not be used by Oracle Clusterware, then set the network interface type to Do Not Use.
Make sure both nodes are selected, then click the SSH Connectivity button at the bottom of the window.
The bottom panel of the window displays the SSH Connectivity information.
Enter the operating system user name and password for the Oracle software owner (oracle). If you have configured SSH connectivity between the nodes, then select the Reuse private and public keys existing in user home option. Click Setup.
A message window appears, indicating that it might take several minutes to configure SSH connectivity between the nodes. After a short period, another message window appears indicating that passwordless SSH connectivity has been established between the cluster nodes. Click OK to continue.
When returned to the Specify Cluster Configuration window, click Next to continue.
After several checks are performed, the Specify Install Locations window appears.
Perform the following actions on this page:
For the Oracle base field, make sure it is set to the location you chose for your Oracle Base directory, for example /u01/app/oracle/. If not, then click Browse. In the Choose Directory window, go up the path until you can select the /u01/app/oracle/ directory, then click Choose Directory.
For the Software Location field, make sure it is set to the location you chose for your Grid home, for example /u01/app/11.2.0/grid. If not, then click Browse. In the Choose Directory window, go up the path until you can select /u01/app/11.2.0/grid, then click Choose Directory.
For the Cluster Registry Storage Type, choose Automatic Storage Management.
Enter a password for a SYSASM user in the SYSASM Password and Confirm Password fields. This password is used for managing Oracle ASM after installation, so make note of it in a secure place.
For the OSASM group, use the drop-down list to choose the operating system group for managing Oracle ASM, for example, dba.
After you have specified information for all the fields on this page, click Next.
The Create ASM Disk Group page appears.
In the Disk Group Name field, enter a name for the disk group, for example DATA. Choose the Redundancy level for this disk group, and then in the Add Disks section, choose the disks to add to this disk group.
In the Add Disks section you should see the disks that you configured using the ASMLIB utility in Chapter 2.
When you have finished selecting the disks for this disk group, click Next.
If you have not installed Oracle software previously on this computer, then the Create Inventory page appears.
Change the path for the inventory directory, if required. If you are using the same directory names as this book, then it should show a value of /u01/app/oraInventory. The oraInventory group name should show oinstall.
Note: The path displayed for the inventory directory should be the oraInventory subdirectory of the directory one level above the Oracle base directory. For example, if you set the ORACLE_BASE environment variable to /u01/app/oracle/ before starting OUI, then the oraInventory path displayed is /u01/app/oraInventory.
Click Next. The Perform Prerequisite Checks window appears.
If any of the checks have a status of Failed and a value of Yes in the Fixable column, then select that check, for example OS Kernel Parameter:file-max, and then click the Fix & Check Again button. This instructs the installer to create a shell script to fix the problem. You must run the specified script on the indicated nodes as the root user to fix the problem. When you have finished running the script on each node, click OK.
Repeat until all the fixable checks have a status of Succeeded.
If there are other checks that failed but do not have a value of Yes in the Fixable field, then you must configure the node to meet these requirements manually. After you have made the necessary adjustments, return to the OUI window and click Check Again. Repeat as needed until all the checks have a status of Succeeded. Click Next.
The Summary window appears.
Review the contents of the Summary window and then click Finish.
OUI displays a progress indicator allowing you to monitor the installation process.
As part of the installation process, you are required to run certain scripts as the root user, as specified in the Execute Configuration Scripts window. Do not click OK until you have run the scripts.
The Execute Configuration Scripts window shows the configuration scripts and the path where they are located. Run the scripts on all nodes as directed, in the order shown. For example, on Oracle Linux you perform the following steps (note that for clarity, the examples show the current user, node, and directory in the prompt):
As the oracle user on racnode1, open a terminal window, and enter the following commands:
[oracle@racnode1 oracle]$ cd /u01/app/oraInventory
[oracle@racnode1 oraInventory]$ su
Enter the password for the root user, and then enter the following command to run the first script on racnode1:
[root@racnode1 oraInventory]# ./orainstRoot.sh
After the orainstRoot.sh script finishes on racnode1, open another terminal window, and as the oracle user, enter the following commands:
[oracle@racnode1 oracle]$ ssh racnode2
[oracle@racnode2 oracle]$ cd /u01/app/oraInventory
[oracle@racnode2 oraInventory]$ su
Enter the password for the root user, and then enter the following command to run the first script on racnode2:
[root@racnode2 oraInventory]# ./orainstRoot.sh
After the orainstRoot.sh script finishes on racnode2, go to the terminal window you opened in Step 15a. As the root user on racnode1, enter the following commands to run the second script, root.sh:
[root@racnode1 oraInventory]# cd /u01/app/11.2.0/grid
[root@racnode1 grid]# ./root.sh
Press Enter at the prompt to accept the default value.
Note: You must run the root.sh script on the first node and wait for it to finish. You can run root.sh scripts concurrently on all other nodes except for the last node on which you run the script. Like the first node, the root.sh script on the last node must be run separately.
After the root.sh script finishes on racnode1, go to the terminal window you opened in Step 15c. As the root user on racnode2, enter the following commands:
[root@racnode2 oraInventory]# cd /u01/app/11.2.0/grid
[root@racnode2 grid]# ./root.sh
Press Enter at the prompt to accept the default value.
After the root.sh script completes, return to the OUI window where the Installer prompted you to run the orainstRoot.sh and root.sh scripts. Click OK.
The software installation monitoring window reappears.
Continue monitoring the installation until the Finish window appears. Then click Close to complete the installation process and exit the installer.
After installation is complete, do not manually remove, or run cron jobs that remove, the /tmp/.oracle or /var/tmp/.oracle directories or their files while Oracle software is running on the server. If you remove these files, then the Oracle software can encounter intermittent hangs. Oracle Clusterware installations can fail with the error:
CRS-0184: Cannot communicate with the CRS daemon.
After you have installed Oracle Clusterware, verify that the node applications are started. Depending on which operating system you use, you may have to perform some postinstallation tasks to configure the Oracle Clusterware components properly.
To complete the Oracle Clusterware configuration on Oracle Linux:
As the oracle user on the first node, check the status of the Oracle Clusterware targets by entering the following command:
/u01/app/11.2.0/grid/bin/crsctl check cluster -all
This command provides output showing if all the important cluster services, such as gsd, ons, and vip, are started on the nodes of your cluster.
In the displayed output, you should see the Oracle Clusterware daemons are online for each node in the cluster.
******************************************************************
racnode1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
******************************************************************
racnode2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
******************************************************************
If you see that one or more Oracle Clusterware resources are offline, or are missing, then the Oracle Clusterware software did not install properly.
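As an additional check (optional; the CRS-46xx messages shown are what a healthy node reports), you can verify the Oracle Clusterware stack on a single node:
/u01/app/11.2.0/grid/bin/crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online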
Now that the Oracle Grid Infrastructure for a cluster software is functional, you can install the Oracle Database software on the nodes of your cluster. OUI installs the software on the local node and then copies the binary files from the local node to all the other nodes in the cluster.
The following topics describe the process of installing the Oracle Database software and creating an Oracle RAC database or an Oracle RAC One Node database:
You run OUI from the oracle user account. Before you start OUI, you do not have to configure the environment of the oracle user.
However, you can set the ORACLE_BASE environment variable to the directory in which you want the Oracle Inventory files located. For example, if you plan to make the Oracle Database home directory /u01/app/oracle/product/11.2.0/dbhome_1, then you would set ORACLE_BASE to the directory /u01/app/oracle/. If you set the ORACLE_BASE directory before installation, then this becomes the default location for the central inventory displayed by OUI.
You can also set the ORACLE_HOME environment variable to the location chosen for the Oracle Database home. If you set the ORACLE_HOME directory before installation, then this becomes the default location for the Oracle home displayed by OUI.
(Optional) To modify the user environment before installing Oracle Database software on Oracle Linux:
As the oracle user, execute the following commands:
[oracle]$ unset ORACLE_SID
[oracle]$ export ORACLE_BASE=/u01/app/oracle/
[oracle]$ export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
Verify the changes have been made by executing the following commands:
[oracle]$ echo $ORACLE_SID
[oracle]$ echo $ORACLE_HOME
/u01/app/oracle/product/11.2.0/dbhome_1
[oracle]$ echo $ORACLE_BASE
/u01/app/oracle/
Install the Oracle Grid Infrastructure for a cluster software.
If you chose to store the Oracle Clusterware files on Oracle ASM during the Oracle Grid Infrastructure for a cluster installation, then a single disk group was created in Oracle ASM. You can use this same disk group to store the data files for your Oracle database.
If you want to use a separate disk group for your Oracle database files or for the fast recovery area, then you must create the additional Oracle ASM disk groups before installing Oracle Database software.
To create an additional disk group using ASMCA:
Prepare the disks or devices for use with Oracle ASM, as described in "Configuring Installation Directories and Shared Storage".
Start the Oracle ASM Configuration Assistant (ASMCA) from the Grid home:
/u01/app/11.2.0/grid/bin/asmca
The Oracle ASM Configuration Assistant starts, and displays the Disk Groups window.
Click the Create button at the bottom left-hand side of the window to create a disk group.
The Create Disk Group window appears.
Provide the following information:
In the Disk Group Name field, enter a name for the new disk group, for example, FRA.
Choose a Redundancy level, for example, Normal.
Select the disks to include in the new disk group.
If you used ASMLIB to configure the disks for use with Oracle ASM, then the available disks are displayed if you have the Show Eligible option selected, and they have a Header Status of PROVISIONED.
After you have provided all the information, click OK. A progress window titled DiskGroup: Creation appears. After a few minutes, a message appears indicating the disk group was created successfully. Click OK to continue.
Repeat Steps 3 and 4 to create additional disk groups, or click Exit, then select Yes to exit the utility.
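If you prefer the command line to ASMCA, you can create the same disk group with SQL*Plus while your environment is set to the Grid home and the Oracle ASM instance (for example, ORACLE_SID=+ASM1). This is a hedged sketch; the ASMLIB disk names DISK4 and DISK5 are assumptions patterned on the disks configured in Chapter 2:
sqlplus / as sysasm <<'EOF'
CREATE DISKGROUP FRA NORMAL REDUNDANCY
  DISK 'ORCL:DISK4', 'ORCL:DISK5';
EOF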
After you have configured the operating system environment, you can use Oracle Universal Installer to install the Oracle RAC software and create an Oracle RAC database. If you plan to use Oracle RAC One Node, then you must first install the Oracle RAC software without creating a database. After the installation completes, you use Database Configuration Assistant (DBCA) to create the Oracle RAC One Node database.
To install Oracle Database software on your cluster and create a database:
As the oracle user, use the following commands to start OUI, where staging_area is the location of the staging area on disk, or the location of the mounted installation disk:
cd /staging_area
./runInstaller
When you start Oracle Universal Installer, the Configure Security Updates window appears.
(Optional) Enter your email address and My Oracle Support password, then click Next to continue.
If you want to receive notification by email of any newly discovered security issues related to the software you are installing, then enter an email address in the Email field. If you also want to receive the security updates through My Oracle Support, then use the same email address that is registered with My Oracle Support, select the I wish to receive security updates via My Oracle Support option, and enter your My Oracle Support login password in the My Oracle Support Password field.
If you provide an email address, then the Oracle Configuration Manager (OCM) tool will also be installed. This utility provides Oracle Support Services with configuration information about your system when creating service requests. You can disable the OCM tool after installation, but Oracle strongly discourages this. OCM does not access, collect, or store any personal information (except for technical support contact information), or any business data files residing in your software environment. For more information about OCM, see http://www.oracle.com/technetwork/documentation/ocm-092152.html.
After you click Next, the Select Installation Option window appears.
If you want to create an Oracle RAC database, then select Create and configure a database. If you want to create an Oracle RAC One Node database, then select Install database software only. Click Next to continue.
The System Class window appears.
Choose Server Class, then click Next.
If you choose the Desktop Class option, then OUI installs a single-instance database, not a clustered database.
The Node Selection screen appears.
Select the Real Application Clusters database installation type.
Select the nodes on which you want to install Oracle Database software and create an Oracle RAC instance. All the available nodes in the cluster are selected by default.
Note: Select both nodes during installation, even if you are creating an Oracle RAC One Node database.
Click the SSH Connectivity button at the bottom of the window. The bottom panel of the window displays the SSH Connectivity information.
Because you configured SSH connectivity between the nodes for the Oracle Grid Infrastructure for a cluster installation, select the Reuse private and public keys existing in user home option. If you are using a network user with home directory on shared storage, then also select the "User home if shared by the selected nodes" option. Click Test.
A message window appears, indicating that passwordless SSH connectivity has been established between the cluster nodes. Click OK to continue.
When returned to the Node Selection window, click Next to continue.
The Select Install Type window appears.
Choose the Typical install option, and click Next.
A typical installation requires minimal input. It installs the software and optionally creates a general-purpose database. If you choose the Advanced installation type (not documented in this guide), then you are prompted to provide more information about how the database should be configured. For example, you could set passwords for each user account individually, choose a different template for the starter database, choose a nondefault language for the database, and so on.
The Typical Install Configuration window appears.
In this window, you must provide the following information:
Oracle base location: The default value is /u01/app/oracle/. If you did not set the ORACLE_BASE environment variable and the default location is different from the directory location you have chosen, then enter the directory for your Oracle base or click the Browse button to change the directory path.
Software location: If you did not set the ORACLE_HOME environment variable before starting the installation, then enter the directory for your Oracle home or click the Browse button to change the directory path.
Storage Type: In this drop-down list, choose Automatic Storage Management (ASM). If you do not want to use Oracle ASM, then choose File System. Because Oracle ASM was installed with the Oracle Grid Infrastructure for a cluster, Oracle Automatic Storage Management is the default value.
Database file location: Choose the disk group to use for storing the database files. You can use the same disk group that Oracle Clusterware uses. If you do not want to use the same disk group that is currently being used to store the Oracle Clusterware files, then you must exit the installation and create a new disk group using Oracle ASM utilities. Refer to "Creating Additional Oracle ASM Disk Groups" for more information on creating a disk group.
If you chose the File System storage type, then enter the directory location of the shared storage where the database files will be created.
ASMSNMP Password: Enter the password for the ASMSNMP user. The ASMSNMP user is used primarily by Oracle Enterprise Manager to monitor Oracle ASM instances. See Oracle Automatic Storage Management Administrator's Guide for more information about the ASMSNMP user.
If you chose File System for the Storage Type, then this field is disabled.
Database edition: From this drop-down list choose either Enterprise Edition or Standard Edition. The number in parentheses next to your choice indicates the amount of disk space required.
OSDBA Group: From this drop-down list select the operating system group used for database administration, for example, dba.
Global database name: Enter the fully qualified global name for your database. The global database name is in the form ORACLE_SID.DB_DOMAIN, for example, orcl.example.com.
Administrative password: Enter the password to be used for the administrative accounts, such as SYS, SYSTEM, and DBSNMP.
Confirm Password: Enter the same password in this field.
After you have provided all the necessary information, click Next. The Perform Prerequisite Checks window appears.
After a short period, the Summary window appears. Review the information on this screen, then click Finish to continue.
If any of the information in the Summary window is incorrect, then use the Back button to return to a previous window and correct it.
After you click Finish, OUI displays a progress indicator to show that the installation has begun. This step takes several minutes to complete.
After the software is installed on each node, if you select the option to create a database, then OUI starts the Database Configuration Assistant (DBCA). This utility creates the database using the global database name specified in Step 9. At the end of the database creation, you see the DBCA window with the database configuration information displayed, including the URL for the Database Control console.
There is also a Password Management button that you can click to unlock the database user accounts, or change their default passwords.
After making note of the information in this window, click OK.
OUI configures Oracle Configuration Management, if you provided the information in Step 2.
If you chose to perform a software-only installation, then the database configuration assistants are not started. You must run DBCA separately to create the Oracle RAC One Node database.
See Also: Oracle Real Application Clusters Installation Guide for Linux and UNIX, or the installation guide for your platform, for information about creating an Oracle RAC One Node database using DBCA.
In the last step of the installation process, you are prompted to run the root.sh script on both nodes, as specified in the Execute Configuration Scripts window. Do not click OK until you have run the scripts on all nodes.
Perform the following steps to run the root.sh script (note that for clarity, the examples show the current user, node, and directory in the prompt):
Open a terminal window as the oracle user on the first node. Change directories to your Oracle home directory, and then switch to the root user by entering the following commands:
[oracle@racnode1 oracle]$ cd /u01/app/oracle/product/11.2.0/dbhome_1
[oracle@racnode1 dbhome_1]$ su
Enter the password for the root user, and then run the script specified in the Execute Configuration scripts window:
[root@racnode1 dbhome_1]# ./root.sh
Note: You can run the root.sh script simultaneously on all nodes in the cluster for Oracle RAC installations or upgrades.
As the root.sh script runs, it prompts you for the path to the local bin directory. The information displayed in the brackets is the information it has obtained from your system configuration. It also writes the dbhome, oraenv, and coraenv files in the /usr/local/bin directory. If these files exist, then you are asked whether to overwrite them. After responding to each prompt, press the Enter key. To accept a default choice, press the Enter key without entering any text.
Enter commands similar to the following to run the script on the other nodes:
[root@racnode1 dbhome_1]# exit
[oracle@racnode1 dbhome_1]$ ssh racnode2
[oracle@racnode2 ~]$ cd /u01/app/oracle/product/11.2.0/dbhome_1
[oracle@racnode2 dbhome_1]$ su
Enter the password for the root user, and then run the script specified in the Execute Configuration scripts window:
[root@racnode2 dbhome_1]# ./root.sh
After responding to each prompt, press the Enter key.
When the root.sh script completes, messages confirming the configuration are displayed.
After you finish executing the script on all nodes, return to the Execute Configuration scripts window and click OK.
The Install Product window is displayed.
Click Next to complete the installation.
The Finish window is displayed, with the URL for Enterprise Manager Database Control displayed.
Click Close to exit the installer.
At this point, if you chose to create an Oracle RAC database during installation, then you should verify that all the database services are up and running.
To verify the Oracle RAC database services are started:
Log in as the oracle user and go to the Grid_home/bin directory:
[oracle] $ cd /u01/app/11.2.0/grid/bin
Run the following command to view the status of the resources managed by Oracle Clusterware that contain the string 'ora':
[oracle] $ ./crsctl status resource -w "TYPE co 'ora'" -t
The output of the command should show that the Oracle Clusterware, Oracle ASM, and Oracle Database resources are available (online) for each host. An example of the output is:
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       racnode1
               ONLINE  ONLINE       racnode2
ora.LISTENER.lsnr
               ONLINE  ONLINE       racnode1
               ONLINE  ONLINE       racnode2
ora.asm
               ONLINE  ONLINE       racnode1                 Started
               ONLINE  ONLINE       racnode2                 Started
ora.eons
               ONLINE  ONLINE       racnode1
               ONLINE  ONLINE       racnode2
ora.gsd
               OFFLINE OFFLINE      racnode1
               OFFLINE OFFLINE      racnode2
ora.net1.network
               ONLINE  ONLINE       racnode1
               ONLINE  ONLINE       racnode2
ora.ons
               ONLINE  ONLINE       racnode1
               ONLINE  ONLINE       racnode2
ora.registry.acfs
               ONLINE  ONLINE       racnode1
               ONLINE  ONLINE       racnode2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       racnode1
ora.oc4j
      1        OFFLINE OFFLINE
ora.orcl.db
      1        ONLINE  ONLINE       racnode1                 Open
      2        ONLINE  ONLINE       racnode2                 Open
ora.racnode1.vip
      1        ONLINE  ONLINE       racnode1
ora.racnode2.vip
      1        ONLINE  ONLINE       racnode2
ora.scan1.vip
      1        ONLINE  ONLINE       racnode1
Caution: After installation is complete, do not manually remove, or run cron jobs that remove, the /tmp/.oracle or /var/tmp/.oracle directories or their files while Oracle software is running on the server. If you remove these files, then the Oracle software can encounter intermittent hangs, and Oracle Clusterware installations will fail with the following error:
CRS-0184: Cannot communicate with the CRS daemon.
After you have installed the Oracle Real Application Clusters (Oracle RAC) software and created an Oracle RAC database, there are additional tasks to perform before your cluster database is ready for use. These steps are recommended, but are not required.
This section contains the following topics:
After the Oracle Clusterware installation is complete, OUI automatically runs Cluster Verification Utility (CVU) as a Configuration Assistant to verify that the Clusterware installation has been completed successfully.
If CVU reports problems with your configuration, then correct these errors before proceeding.
If you did not select the option to create an Oracle RAC database during installation, then you must create one using DBCA after you have verified that the installation of the Oracle RAC software was successful. The steps for creating an Oracle RAC database are documented in Oracle Real Application Clusters Installation Guide for Linux and UNIX.
Certain files used during installation are very important to the operation of the installed software. It is important to back up these files and keep them in a separate location from the installed software in case of hardware failure.
Oracle recommends that you back up the root.sh script after you complete an installation. If you install other products in the same Oracle home directory, then OUI updates the contents of the existing root.sh script during the installation. If you require information contained in the original root.sh script, then you can recover it from the backup copy.
During the installation described in this guide, the Enterprise Manager Database Control management repository is placed in secure mode. All Enterprise Manager data is encrypted using the encryption key stored in the file emkey.ora. If this file is damaged or lost, and cannot be restored from a backup, then you are no longer able to use the existing Enterprise Manager repository.
The emkey.ora file is located in the Oracle_home/<node_name>_<Database_name>/sysman/config directory. For example, on the racnode2 server, the encryption key file for the orcl.example.com database would be /u01/app/oracle/product/11.2.0/dbhome_1/racnode2_orcl/sysman/config/emkey.ora.
When you create an Oracle RAC database and choose Database Control for your database management, the Enterprise Manager Database Control utility is installed and configured automatically.
To verify Oracle Enterprise Manager Database Control has been started in your new Oracle RAC environment:
Make sure the ORACLE_UNQNAME environment variable is set to the unique name of the database to which you want to connect, for example orcl. Also make sure the ORACLE_HOME environment variable is set to the location of the installed Oracle Database software.
$ export ORACLE_UNQNAME=orcl
$ echo $ORACLE_HOME
/u01/app/oracle/product/11.2.0/dbhome_1
$ echo $ORACLE_UNQNAME
orcl
Go to the Oracle_home/bin directory.
Run the following command as the oracle user:
./emctl status dbconsole
The Enterprise Manager Control (EMCTL) utility displays the current status of the Database Control console on the current node.
Oracle Enterprise Manager 11g Database Control Release 11.2.0.1.0
Copyright (c) 1996, 2009 Oracle Corporation. All rights reserved.
https://racnode1.example.com:1158/em/console/aboutApplication
Oracle Enterprise Manager 11g is running.
------------------------------------------------------------------
Logs are generated in directory /u01/app/oracle/product/11.2.0/dbhome_1/racnode1_orcl/sysman/log
If the EMCTL utility reports that Database Control is not started, then use the following command to start it:
./emctl start dbconsole
Following a typical installation, Database Control serves console pages from the node where the database was created. The console also monitors agents on all nodes of the cluster. However, you can use EMCA to configure multiple Database Control consoles within a cluster.
Periodically, Oracle issues bug fixes for its software, called patches. Patch sets are collections of bug fixes produced up to the time of the patch set release. Patch sets are fully tested product fixes. Applying a patch set affects the software residing in your Oracle home. Ensure that you run the latest patch set of the installed software. If you configured access to My Oracle Support during installation, then the latest patches should have been downloaded and applied during installation.
If you did not configure access to My Oracle Support within OUI, then you should apply the latest patch set for your release and any necessary patches that are not included in a patch set. Information about downloading and installing patches and patch sets is covered in Chapter 10, "Managing Oracle Software and Applying Patches".
Note: If you want to install Oracle Database or Oracle RAC releases 11.1.0.7, 11.1.0.6, or 10.2.0.4 after installing Oracle Clusterware 11g release 2, then you must install the one-off patch required for that release. Refer to the My Oracle Support Web site for the required patch updates for your installation.
The oracle user operating system account is the account that you used to install the Oracle software. You can use different operating system accounts for accessing and managing your Oracle RAC database. You can modify the shell configuration file to set environment variables such as ORACLE_HOME whenever you log in as that operating system user.
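For example, the following is a minimal sketch of entries you might add to the oracle user's ~/.bash_profile, assuming the bash shell and the Oracle home path used elsewhere in this guide (the database unique name orcl is illustrative):

# Set the Oracle environment at login (illustrative values)
export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
export ORACLE_UNQNAME=orcl
export PATH=$ORACLE_HOME/bin:$PATH

After adding these lines, the variables are set automatically each time the user logs in.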
You can use rconfig, or Oracle Enterprise Manager, to assist you with the task of converting a single-instance database installation to an Oracle Real Application Clusters (Oracle RAC) database. rconfig is a command-line utility. The Convert to Cluster Database option in Oracle Enterprise Manager Grid Control provides a GUI conversion tool. Additionally, after you have converted your single-instance database to an Oracle RAC database, you can use the srvctl utility to convert the database to an Oracle RAC One Node database.
This section contains the following topics:
Overview of the Database Conversion Process Using Grid Control
Overview of the Database Conversion Process Using rconfig
Converting an Oracle RAC Database into an Oracle RAC One Node Database
Before you start the process of converting your database to a cluster database, your database environment must meet certain prerequisites:
The existing database and the target Oracle RAC database must be on the same release of Oracle Database 11g and must be running on the same platform.
The hardware and operating system software used to implement your Oracle RAC database must be certified for use with the release of the Oracle RAC software you are installing.
You must configure shared storage for your Oracle RAC database.
You must verify that any applications that run against the Oracle RAC database do not need any additional configuration before they can be used successfully with the cluster database. This applies to both Oracle applications and database features, such as Oracle Streams, and applications and products that do not come from Oracle.
Backup procedures should be available before converting from a single-instance Oracle Database to Oracle RAC.
For archiving in Oracle RAC environments, the archive log file format requires a thread number.
The archived redo log files from all instances of an Oracle RAC database are required for media recovery. If you archive to a file and you do not use a cluster file system, or some other means to provide shared file systems, then you require a method of accessing the archived redo log files from all nodes on which the cluster database has instances.
Note: For information about using individual Oracle Database 11g database products or options, refer to the product documentation library, which is available in the DOC directory on the 11g release 2 (11.2) installation media, or on the OTN Web site at http://www.oracle.com/technetwork/indexes/documentation/
This section summarizes the process of converting a single-instance database to an Oracle RAC database using Oracle Enterprise Manager Grid Control:
Complete the prerequisite tasks for converting to an Oracle RAC database:
Oracle Clusterware and Oracle Database software is installed on all target nodes.
Oracle Clusterware is started.
The Oracle Database binary is enabled for Oracle RAC on all target nodes.
Shared storage is configured and accessible from all nodes.
User equivalency is configured for the operating system user performing the conversion.
Enterprise Manager agents are configured and running on all nodes, and are configured with the cluster and host information.
The database being converted has been backed up successfully.
Access the Database Home page for the database you want to convert.
Go to the Server subpage and select Convert to Cluster Database.
Provide the necessary credentials.
Select the host nodes that should contain instances of the new database.
Provide listener and instance configuration information.
Specify the location of the shared storage to be used for the data files.
Submit the job.
Complete the post-conversion tasks.
The resulting Oracle RAC database uses a server pool instead of a fixed configuration.
See Also: Oracle Real Application Clusters Installation Guide for Linux and UNIX, or for a different platform, for a complete description of this process
Overview of the Database Conversion Process Using rconfig
The following list provides an outline of the process of converting a single-instance database to an Oracle RAC database using the rconfig utility:
Complete the prerequisite tasks for converting to an Oracle RAC database.
Oracle Clusterware and Oracle Database software is installed on all target nodes.
Oracle Clusterware is started.
The Oracle Database binary is enabled for Oracle RAC on all target nodes.
Shared storage is configured and accessible from all nodes.
User equivalency is configured for the operating system user performing the conversion.
The database being converted has been backed up successfully.
Using rconfig, you can convert a single-instance database to either an administrator-managed cluster database or a policy-managed cluster database. Modify the parameters in either the ConvertToRAC_AdminManaged.xml or ConvertToRAC_PolicyManaged.xml sample file, as appropriate for your environment, then save the file to a new location. Both files are located in the Oracle_home/assistants/rconfig/sampleXMLs directory.
Run the rconfig command, supplying the name of the modified XML file as input (see the sketch after this list).
Complete the post-conversion tasks.
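The following is a sketch of the rconfig steps outlined above, assuming the Oracle home path used elsewhere in this guide; the copied file name /tmp/convert_orcl.xml is illustrative:

[oracle@racnode1 ~]$ cd /u01/app/oracle/product/11.2.0/dbhome_1/assistants/rconfig/sampleXMLs
[oracle@racnode1 sampleXMLs]$ cp ConvertToRAC_AdminManaged.xml /tmp/convert_orcl.xml
[oracle@racnode1 sampleXMLs]$ vi /tmp/convert_orcl.xml
[oracle@racnode1 sampleXMLs]$ rconfig /tmp/convert_orcl.xml

rconfig validates the parameters in the XML file and then performs the conversion.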
You can also use the rconfig utility to convert single-instance Oracle ASM to clustered Oracle ASM.
See Also: Oracle Real Application Clusters Installation Guide for Linux and UNIX, or for a different platform, for a complete description of this process
After you use the rconfig utility to convert a single-instance Oracle database into a single-node Oracle RAC database, you can use the srvctl utility to convert the database into an Oracle RAC One Node database. This functionality is available starting with Oracle Database 11g release 2 (11.2.0.2).
To convert your database to an Oracle RAC One Node database, use the following command:
srvctl convert database -d <database_name> -c RACONENODE
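For example, to convert the orcl database used elsewhere in this guide (the database name is illustrative):

srvctl convert database -d orcl -c RACONENODE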
An Oracle RAC One Node database must be part of a multi-node cluster to support failover or online database relocation. You must either install Oracle Grid Infrastructure for a cluster and Oracle RAC on at least two nodes, or add a node to your existing single-node Oracle RAC database.
Oracle Database 2 Day + Real Application Clusters Guide describes how to install, configure, and administer Oracle Clusterware, Oracle Automatic Storage Management (Oracle ASM), and Oracle Real Application Clusters (Oracle RAC) on a two-node system running Oracle Linux.
Note: For Linux operating systems other than Oracle Linux, see Oracle Real Application Clusters Installation Guide for Linux and UNIX. For other operating systems, see the platform-specific Oracle RAC installation guide.
This guide covers topics that a reasonably knowledgeable Oracle database administrator (DBA) would need to know when moving from managing a single-instance Oracle Database environment to managing an Oracle RAC environment.
Oracle Database 2 Day + Real Application Clusters Guide is an Oracle RAC database administration guide for DBAs who want to install and use Oracle RAC. This guide assumes you have already read Oracle Database 2 Day DBA. This guide is intended for DBAs who:
Want basic DBA skills for managing an Oracle RAC environment
Manage Oracle databases for small- to medium-sized businesses
To use this guide, you should be familiar with the administrative procedures described in Oracle Database 2 Day DBA.
Note: Some DBAs may be interested in moving the data from their single-instance Oracle Database to their Oracle RAC database. This guide also explains the procedures for doing this.
For information about Oracle's commitment to accessibility, visit the Oracle Accessibility Program website at http://www.oracle.com/pls/topic/lookup?ctx=acc&id=docacc.
Access to Oracle Support
Oracle customers have access to electronic support through My Oracle Support. For information, visit http://www.oracle.com/pls/topic/lookup?ctx=acc&id=info or visit http://www.oracle.com/pls/topic/lookup?ctx=acc&id=trs if you are hearing impaired.
For more information, see the following in the Oracle Database documentation set:
Oracle Real Application Clusters Installation Guide for Linux and UNIX
Oracle Real Application Clusters Administration and Deployment Guide
The following text conventions are used in this guide:
Convention | Meaning
---|---
boldface | Boldface type indicates graphical user interface elements associated with an action, or terms defined in text or the glossary.
italic | Italic type indicates book titles, emphasis, or placeholder variables for which you supply particular values.
monospace | Monospace type indicates commands within a paragraph, URLs, code in examples, text that appears on the screen, or text that you enter.
To protect the database against data loss and reconstruct the database after data loss, you must devise, implement, and manage a backup and recovery strategy. This chapter describes how to back up and recover an Oracle Real Application Clusters (Oracle RAC) database.
This chapter contains the following sections:
Archiving the Oracle Real Application Clusters Database Redo Logs
Performing Backups of Your Oracle Real Application Clusters Database
About Preparing to Restore and Recover Your Oracle RAC Database
Displaying Backup Reports for Your Oracle Real Application Clusters Database
To protect your Oracle Real Application Clusters (Oracle RAC) database from hardware failures or disasters, you must have a physical copy of the database files. The files protected by the backup and recovery facilities built into Oracle Enterprise Manager include data files, control files, server parameter files (SPFILEs), and archived redo log files. Using these files, your database can be reconstructed. The backup mechanisms that work at the physical level protect against damage at the file level, such as the accidental deletion of a data file or the failure of a disk drive. Database recovery involves restoring, or copying, the damaged files from backup and performing media recovery on the restored files. Media recovery is the application of redo logs or incremental backups to a restored data file to update it to the current time or some other specified time.
The Oracle Database flashback features, such as Oracle Flashback Drop and Oracle Flashback Table, provide a range of physical and logical data recovery tools as efficient, easy-to-use alternatives to physical and logical backup operations. The flashback features enable you to reverse the effects of unwanted database changes without restoring data files from backup or performing media recovery.
The Enterprise Manager physical backup and recovery features are built on the Recovery Manager (RMAN) command-line client. Enterprise Manager makes available many of the RMAN features, and provides wizards and automatic strategies to simplify and further automate RMAN-based backup and recovery.
Note: For the RMAN utility to work properly on Linux platforms, the $ORACLE_HOME/bin directory must appear in the PATH variable before the /usr/X11R6/bin directory.
The Enterprise Manager Guided Recovery capability provides a Recovery Wizard that encapsulates the logic required for a wide range of file restoration and recovery scenarios, including the following:
Complete restoration and recovery of the database
Point-in-time recovery of the database or selected tablespaces
Flashback Database
Other flashback features of Oracle Database for logical-level repair of unwanted changes to database objects
Media recovery at the block level for data files with corrupt blocks
If the database files are damaged or need recovery, then Enterprise Manager can determine which parts of the database must be restored from a backup and recovered, including early detection of situations such as corrupted database files. Enterprise Manager guides you through the recovery process, prompting for needed information and performing the required recovery actions.
Using a fast recovery area minimizes the need to manually manage disk space for your backup-related files and balance the use of space among the different types of files. Oracle recommends that you enable a fast recovery area to simplify your backup management.
The larger the fast recovery area is, the more useful it becomes. Ideally, the fast recovery area should be large enough to contain all the following files:
A copy of all data files
Incremental backups
Online redo logs
Archived redo log files that have not yet been backed up
Control files and control file copies
Autobackups of the control file and database initialization parameter file
The fast recovery area for an Oracle RAC database must be placed on an Oracle ASM disk group, a cluster file system, or a shared directory that is configured through a network file system for each Oracle RAC instance. In other words, the fast recovery area must be shared among all of the instances of an Oracle RAC database. The preferred configuration for Oracle RAC is to use Oracle Automatic Storage Management (Oracle ASM) for storing the fast recovery area, using a different disk group for your recovery set than for your data files.
The location and disk quota must be the same on all instances. Oracle recommends that you place the fast recovery area on the shared Oracle ASM disks. In addition, you must set the DB_RECOVERY_FILE_DEST and DB_RECOVERY_FILE_DEST_SIZE parameters to the same values on all instances.
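As a sketch, you could set both parameters for all instances from one node using SQL*Plus; the disk group name +FRA and the 10G quota are illustrative values, not recommendations:

$ sqlplus / as sysdba
SQL> ALTER SYSTEM SET DB_RECOVERY_FILE_DEST_SIZE=10G SCOPE=BOTH SID='*';
SQL> ALTER SYSTEM SET DB_RECOVERY_FILE_DEST='+FRA' SCOPE=BOTH SID='*';

Note that DB_RECOVERY_FILE_DEST_SIZE must be set before DB_RECOVERY_FILE_DEST, and the SID='*' clause applies each setting to all instances.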
To use the fast recovery feature, you must first configure the fast recovery area for each instance in your Oracle RAC database.
To make your data highly available, it is important to configure the database so you can recover your data after a system failure. Redo logs contain a record of changes that were made to datafiles. Redo logs are stored in redo log groups, and you must have at least two redo log groups for your database.
After the redo log files in a group have filled up, the log writer process (LGWR) switches the writing of redo records to a new redo log group. Oracle Database can automatically save the inactive group of redo log files to one or more offline destinations, known collectively as the archived redo log (also called the archive log). The process of turning redo log files into archived redo log files is called archiving.
When you archive your redo log, you write redo log files to another location before they are overwritten. This location is called the archived redo log. These copies of redo log files extend the amount of redo data that can be saved and used for recovery. Archiving can be either enabled or disabled for the database, but Oracle recommends that you enable archiving.
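To check whether archiving is currently enabled, you can query the database directly; a minimal sketch using SQL*Plus:

$ sqlplus / as sysdba
SQL> SELECT LOG_MODE FROM V$DATABASE;

The query returns ARCHIVELOG if archiving is enabled, or NOARCHIVELOG if it is not.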
When you use Oracle Database Configuration Assistant (DBCA) to create your Oracle Real Application Clusters (Oracle RAC) database, each instance is configured with at least two redo log files that are stored in the shared storage area. If you have a two-node Oracle RAC database, then at least four redo logs are created for the database, two for each instance.
If you use a cluster file system to store the archived redo log files for your Oracle RAC database, then the redo log files are shared file system files. If you use Oracle ASM to store the archived redo log files for your Oracle RAC database, then each instance automatically has access to all the archived redo log files generated by the database. If you use shared storage or raw devices to store the archived redo log files on each node, then you must configure the operating system to grant access to those directories for each instance of the cluster database that needs access to them.
The primary consideration when configuring archiving is to ensure that all archived redo logs can be read from every node during recovery, and if possible during backups. During recovery, because the archived log destinations are visible from the node that performs the recovery, Oracle RAC can successfully recover the archived redo log data. For creating backups of your Oracle RAC database, the strategy that you choose depends on how you configure the archiving destinations for each node. Whether only one node or all nodes perform archived redo log backups, you must ensure that the archived redo logs for every instance are backed up.
To back up the archived redo logs from a single node, that node must have access to the archived redo log files of the other instances. The archived redo log naming scheme that you use is important because when a node writes to a log with a specific file name on its file system, the file must be readable by any node that must access this archived redo log. For example, if node1 archives a log to /oracle/arc_dest/log_1_100_23452345.arc, then node2 can back up this archived redo log only if it can read /oracle/arc_dest/log_1_100_23452345.arc on its own file system.
Recovery Manager (RMAN) depends on server sessions, processes that run on the database server, to perform backup and recovery tasks. Each server session in turn corresponds to an RMAN channel, representing one stream of data to or from a backup device. RMAN supports parallelism, which is the use of multiple channels and server sessions to perform the work of a single backup job or file restoration task.
Because the control file, SPFILE, and data files are accessible by any instance, the backup operation of these files is distributed across all the allocated channels. For backups of the archived redo log, the actions performed by RMAN depend on the type of archiving scheme used by your Oracle RAC database.
If you use a local archiving scheme, then each instance writes the archived redo log files to a local directory. When multiple channels are allocated that have access to the archived redo log, for each archived redo log file, RMAN determines which channels have access to that archived redo log file. Then, RMAN groups the archived redo log files that can be accessed by a channel and schedules a backup job using that channel.
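As a sketch of this local archiving scheme, the following RMAN job allocates one disk channel per node so that each channel can read the archived redo log files local to its node; the connect strings (user, password, and EZConnect addresses) are illustrative placeholders, not working credentials:

$ rman target /
RMAN> RUN {
  ALLOCATE CHANNEL ch1 DEVICE TYPE DISK CONNECT 'sys/password@racnode1/orcl';
  ALLOCATE CHANNEL ch2 DEVICE TYPE DISK CONNECT 'sys/password@racnode2/orcl';
  BACKUP ARCHIVELOG ALL;
}

RMAN then assigns each archived redo log file to a channel that can access it, as described above.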
If each node in the cluster writes the archived redo log files to Oracle ASM, a clustered file system, or other type of shared storage, then each instance has access to all the archived redo log files. In this case, the backup of the archived redo log is distributed across all the allocated channels.
For Oracle RAC, each instance has its own thread of redo. The preferred configuration for Oracle RAC is to configure the fast recovery area using an Oracle ASM disk group that is separate from the Oracle ASM disk group used for your data files. Alternatively, you can use a cluster file system archiving scheme.
To configure archiving for your Oracle RAC database:
On the Database Home page of Enterprise Manager Database Control, while logged in as a SYSDBA user, select Availability.
The Availability subpage appears.
In the Backup/Recovery section, under the heading Setup, select Recovery Settings.
The Recovery Settings page appears.
In the Media Recovery section, select the ARCHIVELOG mode option.
In the Log Archive Filename Format field, accept the default value, or enter the desired format.
For clustered databases, the format for the archive log file name should contain the %t modifier, to indicate which redo log thread the archived redo log file belongs to. As a best practice, the format should also include the %s (log sequence number) and %r (resetlogs identifier) modifiers. For example, a format of log_%t_%s_%r.arc produces file names like the log_1_100_23452345.arc example shown earlier in this chapter.
If the archive log destination is the same for all instances, then in the Archive Log Destination field, change the value to the location of the archive log destination for the cluster database. For example, you might set it to +DATA if using Oracle ASM, or to /u01/oradata/arch if you want local archiving on each node.
If you must configure a different archive log destination for any instance, then you must go to the Initialization Parameters page and modify the LOG_ARCHIVE_DEST_1 parameter that corresponds to the instance for which you want to configure the archive log destination. The Instance column should display the name of the instance, for example sales1. Change the Value field to contain the location of the archive log destination for that instance.
If you want to configure multiple archive log destinations for the database, then on the Recovery Settings page, click Add Another Row under the Archive Log Destination field.
After you have finished configuring archiving, click Apply.
When prompted to restart the database, click Yes.
Enter the host and SYSDBA user credentials, then click Continue.
Wait a couple of minutes, then click Refresh.
If the database has been restarted, then you are prompted to enter the login credentials.
Before taking backups of your Oracle Real Application Clusters (Oracle RAC) database using Enterprise Manager, you must configure credentials for the user performing the backups. You can also configure default values for certain backup settings, so they do not have to be specified every time a backup is taken.
When using Enterprise Manager, you must have the proper credentials to perform the configuration tasks for backup and recovery, to schedule backup jobs, and to perform recovery. The following credentials may be required:
The Oracle database administrator user you use when you log in to Enterprise Manager
The host operating system user whose credentials you provide when performing backup and recovery tasks
To perform or schedule RMAN tasks, you must either log in to Enterprise Manager as a user with SYSDBA privileges, or provide host operating system credentials for a user who is a member of the dba group. The host operating system user must also have execute permission for the RMAN command-line client.
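A quick way to confirm the group membership is the id command (a sketch; the user and group IDs shown are illustrative and will vary by system):

$ id oracle
uid=1100(oracle) gid=1000(oinstall) groups=1000(oinstall),1031(dba)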
For tasks requiring host operating system credentials, a Host Credentials form appears at the bottom of the page used to perform the task. Enterprise Manager uses the credentials when it invokes RMAN to perform jobs you requested or scheduled.
The Host Credentials form always includes an option labeled Save as Preferred Credential. If you select this option before performing your action, then the provided credentials are stored persistently for the currently logged-in Oracle database user. The preferred credentials are reused by default whenever you log in as that user and perform operations requiring host credentials.
Assuming you have a fast recovery area configured, you can configure several settings and policies that determine how backups are stored, which data is backed up, and how long backups are retained before being purged from the fast recovery area. You can also configure settings to optimize backup performance for your environment.
When you use Oracle ASM to manage database files, Oracle recommends that you use RMAN for creating backups. You must have both database (SYSDBA) privileges and host operating system (OSDBA) credentials to perform backup and recovery operations.
If you log in to Enterprise Manager with SYSDBA privileges, then any operating system user who has execute permission for the RMAN command-line client can perform backups of an Oracle Real Application Clusters (Oracle RAC) database. However, if you log in as a database user without SYSDBA privileges, then you must provide the name and password of an operating system user that is a member of the OSDBA group before you can perform the backup operation.
To back up an Oracle RAC database:
On the Cluster Database Home page, select Availability.
The Cluster Database Availability page appears.
In the Backup/Recovery section, under the heading Manage, select Schedule Backup.
Follow the backup procedures outlined in Chapter 9, "Performing Backup and Recovery" of Oracle Database 2 Day DBA or click Help on this page.
Whether only one node or all nodes perform archive log backups, ensure that all archived redo log files for all nodes are backed up. If you use a local archiving scheme, then allocate multiple channels to provide RMAN access to all the archived redo log files.
You can configure RMAN to automatically delete the archived redo log files from disk after they have been safely backed up. This feature helps to reduce the disk space used by your Oracle RAC database, and prevent an unnecessary outage that might occur if you run out of available disk space.
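The Enterprise Manager option described in the following procedure corresponds to the DELETE INPUT clause of the RMAN BACKUP command. As a command-line sketch:

RMAN> BACKUP ARCHIVELOG ALL DELETE ALL INPUT;

The DELETE ALL INPUT form removes the archived redo log files from all archiving destinations after they have been backed up.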
To configure RMAN to automatically delete the archived redo log files from disk after they have been safely backed up, when creating or scheduling your database backups:
On the Cluster Database Home page, select Availability.
The Cluster Database Availability page appears.
In the Backup/Recovery section, under the heading Manage, select Schedule Backup.
Choose a backup type and click Schedule Customized Backup.
While specifying the options for your backup, select Also back up all archived logs on disk if you are performing an online backup. There is no need to back up archived redo log files when performing an offline backup because the database is in a consistent state at the time of backup and does not require media recovery if you restore.
Select Delete all archived logs from disk after they are successfully backed up if you are using shared storage for your archived redo log files.
Note: Do not select Delete all archived logs from disk after they are successfully backed up if you are using a fast recovery area as your only archive log destination. In this case, archived redo log files that have been backed up are deleted automatically as space is needed for storage of other files.
The Enterprise Manager Guided Recovery capability provides a Recovery Wizard that encapsulates the logic required for a wide range of restore and recovery scenarios. Enterprise Manager can determine which parts of the database must be restored and recovered, including early detection of situations such as corrupted database files. Enterprise Manager takes you through the recovery process, prompting for information and performing required file restoration and recovery actions.
This section discusses both instance recovery and media recovery. It contains the following topics:
The node that performs the recovery of an Oracle Real Application Clusters (Oracle RAC) database must be able to restore all the required data files. That node must also be able to either read all the required archived redo log files on disk or be able to restore the archived redo log files from backup files.
This section discusses two tasks you must perform before recovering your database:
Because the archived redo log destinations are visible from the node that performs the recovery, Oracle RAC can successfully access the archived redo log files during recovery.
If you do not use shared storage or a clustered file system to store the archived redo log files for your cluster database, then you must make the archived redo log files available to the node performing the recovery.
Recovery of a failed instance in Oracle RAC is automatic. If an Oracle RAC database instance fails, then a surviving database instance processes the online redo logs generated by the failed instance to ensure that the database contents are in a consistent state. When recovery completes, Oracle Clusterware attempts to restart the failed instance automatically.
Media recovery is a manual process that occurs while a database is closed. A media failure is the failure of a read or write operation of a disk file required to run the database, due to a physical problem with the disk such as a head malfunction. Any database file can be vulnerable to a media failure. If a media failure occurs, then you must perform media recovery to restore and recover the damaged database files. Media recovery is always done by one instance in the cluster.
Before starting media recovery, the instance that is performing the recovery should be started in MOUNT mode. The other instances should be started in NOMOUNT mode.
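As a sketch of this configuration using srvctl, assuming the illustrative database name orcl with instances orcl1 and orcl2, where orcl1 performs the recovery:

$ srvctl stop database -d orcl
$ srvctl start instance -d orcl -i orcl1 -o mount
$ srvctl start instance -d orcl -i orcl2 -o nomount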
During a restore operation, RMAN automatically locates the most recent backups of the database that are available. A channel connected to a specific node attempts to restore files that were backed up only to that node. For example, assume that an archived redo log file with the sequence number 1_001 is backed up to a device attached to the node racnode1, while the archived redo log file with sequence number 2_003 is backed up to a device attached to the node racnode2. If you allocate channels that connect to nodes racnode1 and racnode2 for a restore operation, then the channel connected to racnode1 restores log sequence 1_001, but not log sequence 2_003. The channel connected to racnode2 can restore log sequence 2_003, but not log sequence 1_001.
If you use Oracle ASM or a clustered file system for storing the archived redo log files, then any instance can restore the archived redo log files.
Oracle RAC automatically selects the optimum degree of parallelism for instance failure and media recovery.
When using Enterprise Manager and RMAN to perform the recovery, Oracle RAC automatically makes parallel the following three stages of recovery:
Restoring Data files—When restoring data files, the number of channels you allocate in the RMAN recovery script effectively sets the parallelism that RMAN uses. For example, if you allocate five channels, then you can have up to five parallel streams restoring data files.
Applying Incremental Backups—Similarly, when you are applying incremental backups, the number of channels you allocate determines the potential parallelism.
Applying Archived Redo Log Files—Using RMAN, the application of archived redo log files is performed in parallel. Oracle RAC automatically selects the optimum degree of parallelism based on available CPU resources.
When using Enterprise Manager and RMAN, the process of restoring and recovering an Oracle RAC database is essentially the same as for a single-instance Oracle database, except that you access RMAN from the Availability page at the cluster database level, instead of at the instance level.
To use Enterprise Manager and RMAN to restore and recover an Oracle RAC database:
On the Cluster Database Home Page, select Availability.
The Cluster Database Availability page appears.
In the Backup/Recovery section, under the heading Manage, select Perform Recovery.
The Perform Recovery page appears.
Follow the recovery procedures outlined in Chapter 9 of Oracle Database 2 Day DBA.
You can use Enterprise Manager to recover a lost or damaged server parameter file (SPFILE).
To recover an SPFILE for an Oracle RAC database:
Start the database in MOUNT mode.
On the Cluster Database Home page, select Availability.
The Cluster Database Availability page appears.
In the Backup/Recovery section, under the heading Manage, select Perform Recovery.
When the database is not open, the Perform Recovery link takes you to the SPFILE restore page.
Specify the location of the fast recovery area, if configured.
In the Backup Information section, select Use Other Backup Information and Use an Autobackup.
On the Perform Recovery: Restore SPFILE page, specify a different location for the SPFILE to be restored to.
When finished selecting your options, click Restore, then click Yes to confirm you want to restore the SPFILE.
After the SPFILE is restored, you are prompted to log in to the database again.
Managing RMAN backup files, with or without Enterprise Manager, consists of two tasks:
Managing the backup files for your database that are stored on disk or tape
Managing the record of those backup files in the RMAN repository
Enterprise Manager simplifies both backup file management tasks. Some of the other tasks involved in managing backup files include the following (a command-line sketch follows this list):
Searching for backup files
Validating the contents of backup sets or image copies
Cross-checking a backup
Deleting expired or obsolete backup files
Marking backup files as available or unavailable
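The following is a sketch of several of these tasks using standard RMAN maintenance commands; the tag weekly_bkup in the last command is illustrative:

RMAN> CROSSCHECK BACKUP;
RMAN> DELETE EXPIRED BACKUP;
RMAN> DELETE OBSOLETE;
RMAN> CHANGE BACKUP TAG 'weekly_bkup' UNAVAILABLE;

CROSSCHECK compares the RMAN repository with the files on disk or tape, DELETE EXPIRED removes repository records for backup files that no longer exist, DELETE OBSOLETE removes backups no longer needed under the configured retention policy, and CHANGE ... UNAVAILABLE marks a backup as unavailable.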
Backup reports contain summary and detailed information about past backup jobs run by RMAN, including backup jobs run through Enterprise Manager and the RMAN command-line client.
To view backup reports:
On the Cluster Database Home page, select Availability.
The Availability page appears.
In the Backup/Recovery section, under the heading Manage, select Backup Reports.
The View Backup Report page appears, with a list of recent backup jobs.
In the Search section, specify any filter conditions and click Go to restrict the list to backups of interest.
You can use the Search section of the page to restrict the backups listed by the time of the backup, the type of data backed up, and the status of the jobs (whether it succeeded or failed, and whether warnings were generated during the job).
To view detailed information about any backup, click the backup job name in the Backup Name column.
The Backup Report page is displayed for the selected backup job. This page contains summary information about this backup job, such as how many files of each type were backed up, the total size of the data backed up, and the number, size, and type of backup files created.
The Backup Report page also contains a Search section that you can use to quickly run a search for another backup job or backup jobs from a specific date range. The resulting report contains aggregate information for backup jobs matching the search criteria.