This section describes new features as they pertain to the installation and configuration of Oracle Grid Infrastructure (Oracle Clusterware and Oracle Automatic Storage Management), and Oracle Real Application Clusters (Oracle RAC). This guide replaces Oracle Clusterware Installation Guide. The topics in this section are:
Note the following:
With this release, OUI no longer supports installation of Oracle Clusterware files on block or raw devices. Install Oracle Clusterware files either on Oracle Automatic Storage Management disk groups, or in a supported shared file system.
The following is a list of new features for Release 2 (11.2.0.3):
If nodes become unreachable in the middle of an upgrade, starting with release 11.2.0.3, you can run the rootupgrade.sh script with the -force flag to force an upgrade to complete.
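For example, assuming a hypothetical new Grid home of /u01/app/11.2.0.3/grid (substitute your own path), the forced completion would be run as root on the reachable nodes:

# /u01/app/11.2.0.3/grid/rootupgrade.sh -force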
The following is a list of new features for Release 2 (11.2.0.2):
The Oracle Grid Infrastructure Configuration Wizard enables you to configure the Oracle Grid Infrastructure software after performing a software-only installation. You no longer have to manually edit the config_params configuration file, as this wizard takes you through the process step by step.
See Also: Oracle Clusterware Administration and Deployment Guide for more information about the configuration wizard. |
Starting with the release of the 11.2.0.2 patch set for Oracle Grid Infrastructure 11g Release 2 (Oracle Clusterware and Oracle Automatic Storage Management), Oracle Grid Infrastructure patch sets are full installations of the Oracle Grid Infrastructure software. Note the following changes with the new patch set packaging:
Direct upgrades from previous releases (11.x, 10.x) to the most recent patch set are supported.
New installations consist of installing the most recent patch set, rather than installing a base release and then upgrading to a patch release.
Out-of-place patch set upgrades only are supported. An out-of-place upgrade is one in which you install the patch set into a new, separate home.
See Also: My Oracle Support note 1189783.1, "Important Changes to Oracle Database Patch Sets Starting With 11.2.0.2", available from the following URL: |
Oracle ASM 11g release 2 (11.2.0.2) and later for Oracle Solaris provides support for Oracle Automatic Storage Management Cluster File System (Oracle ACFS), including ACFS Snapshots, and Oracle ASM Dynamic Volume Manager (ADVM).
ACFS (including ACFS Snapshots) and ADVM are supported only on Oracle Solaris 10 Update 6, and on later updates to Oracle Solaris 10 (64-bit only).
Cluster Health Monitor gathers operating system metrics in real time, and stores them in its repository for later analysis, so that it can determine the root cause of many Oracle Clusterware and Oracle RAC issues with the assistance of Oracle Support.
Cluster Health Monitor also works in conjunction with Oracle Database Quality of Service Management (QoS) by providing metrics to detect memory over-commitment on a node. QoS Management can shut down services on overloaded nodes to relieve stress and to preserve existing workloads.
To support QoS Management, Oracle Database Resource Manager and metrics have been enhanced to support fine-grained performance metrics, and also can manage workloads with user-defined performance classes.
During installation, in the Privileged Operating System Groups window, it is now optional to designate a group as the OSOPER for ASM group. If you choose to create an OSOPER for ASM group, then you can enter a group name configured on all cluster member nodes for the OSOPER for ASM group. In addition, the Oracle Grid Infrastructure installation owner is no longer required to be a member of this group.
Use the Software Updates feature to dynamically download and apply software updates as part of the Oracle Database installation. You can also download the updates separately using the downloadUpdates option and later apply them during the installation by providing the location where the updates are present.
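As a sketch only (the staging path is an assumption, and the exact flag behavior can vary by installer version), downloading the updates separately might look like the following; you then point the installer at the download location during the later installation:

$ /stage/grid/runInstaller -downloadUpdates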
In previous releases, to make use of redundant networks for the interconnect, bonding, trunking, teaming, or similar technology was required. Oracle Grid Infrastructure and Oracle RAC can now make use of redundant network interconnects, without the use of other network technology, to enhance optimal communication in the cluster. This functionality is available starting with Oracle Database 11g Release 2 (11.2.0.2).
Redundant Interconnect Usage enables load-balancing and high availability across multiple (up to four) private networks (also known as interconnects).
The Database Quality of Service (QoS) Management Server allows system administrators to manage application service levels hosted in Oracle Database clusters by correlating accurate run-time performance and resource metrics, analyzing them with an expert system, and producing recommended resource adjustments to meet policy-based performance objectives.
The following is a list of new features for installation of Oracle Clusterware and Oracle ASM 11g release 2 (11.2):
With Oracle Grid Infrastructure 11g release 2 (11.2), Oracle Automatic Storage Management (Oracle ASM) and Oracle Clusterware are installed into a single home directory, which is referred to as the Grid Infrastructure home. Configuration assistants start after the installer interview process that configures Oracle ASM and Oracle Clusterware.
The installation of the combined products is called Oracle Grid Infrastructure. However, Oracle Clusterware and Oracle Automatic Storage Management remain separate products.
See Also: Oracle Database Installation Guide for Oracle Solaris for information about how to install Oracle Grid Infrastructure (Oracle ASM and Oracle Clusterware binaries) for a standalone server. This feature helps to ensure high availability for single-instance servers |
With this release, Oracle Cluster Registry (OCR) and voting disks can be placed on Oracle Automatic Storage Management (Oracle ASM).
This feature enables Oracle ASM to provide a unified storage solution, storing all the data for the clusterware and the database, without the need for third-party volume managers or cluster filesystems.
For new installations, OCR and voting disk files can be placed either on Oracle ASM, or on a cluster file system or NFS system. Installing Oracle Clusterware files on raw or block devices is no longer supported, unless an existing system is being upgraded.
The SYSASM privilege that was introduced in Oracle ASM 11g release 1 (11.1) is now fully separated from the SYSDBA privilege. If you choose to use this optional feature, and designate different operating system groups as the OSASM and the OSDBA groups, then the SYSASM administrative privilege is available only to members of the OSASM group. The SYSASM privilege also can be granted using password authentication on the Oracle ASM instance.
You can designate OPERATOR privileges (a subset of the SYSASM privileges, including starting and stopping Oracle ASM) to members of the OSOPER for ASM group.
Providing system privileges for the storage tier using the SYSASM privilege instead of the SYSDBA privilege provides a clearer division of responsibility between Oracle ASM administration and database administration, and helps to prevent different databases using the same storage from accidentally overwriting each other's files.
Cluster node times should be synchronized. With this release, Oracle Clusterware provides Cluster Time Synchronization Service (CTSS), which ensures that there is a synchronization service in the cluster. If Network Time Protocol (NTP) is not found during cluster configuration, then CTSS is configured to ensure time synchronization.
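After installation, you can confirm whether CTSS is running in active mode or in observer mode (deferring to NTP) by querying Oracle Clusterware; the Grid home path shown here is only an example:

$ /u01/app/11.2.0/grid/bin/crsctl check ctss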
Oracle Enterprise Manager Database Control 11g provides the capability to automatically provision Oracle Grid Infrastructure and Oracle RAC installations on new nodes, and then extend the existing Oracle Grid Infrastructure and Oracle RAC database to these provisioned nodes. This provisioning procedure requires a successful Oracle RAC installation before you can use this feature.
See Also: Oracle Real Application Clusters Administration and Deployment Guide for information about this feature |
With Oracle Clusterware 11g release 2 (11.2), Oracle Universal Installer (OUI) detects when minimum requirements for installation are not completed, and creates shell script programs, called fixup scripts, to resolve many incomplete system configuration requirements. If OUI detects an incomplete task that is marked "fixable", then you can easily fix the issue by generating the fixup script by clicking the Fix & Check Again button.
The fixup script is generated during installation. You are prompted to run the script as root in a separate terminal session. When you run the script, it raises kernel values to required minimums, if necessary, and completes other operating system configuration tasks.
You also can have Cluster Verification Utility (CVU) generate fixup scripts before installation.
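For example, before a new installation you might run CVU from the installation staging area with the -fixup flag; the node names below are placeholders:

$ ./runcluvfy.sh stage -pre crsinst -n node1,node2 -fixup -verbose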
In the past, adding or removing servers in a cluster required extensive manual preparation. With this release, you can continue to configure server nodes manually, or use Grid Plug and Play to configure them dynamically as nodes are added or removed from the cluster.
Grid Plug and Play reduces the costs of installing, configuring, and managing server nodes by starting a grid naming service within the cluster to allow each node to perform the following tasks dynamically:
Negotiating appropriate network identities for itself
Acquiring additional information it needs to operate from a configuration profile
Configuring or reconfiguring itself using profile data, making host names and addresses resolvable on the network
Because servers perform these tasks dynamically, the number of steps required to add or delete nodes is minimized.
Intelligent Platform Management Interface (IPMI) is an industry standard management protocol that is included with many servers today. IPMI operates independently of the operating system, and can operate even if the system is not powered on. Servers with IPMI contain a baseboard management controller (BMC) which is used to communicate to the server.
If IPMI is configured, then Oracle Clusterware uses IPMI when node fencing is required and the server is not responding.
With this release, you can install a new version of Oracle Clusterware into a separate home from an existing Oracle Clusterware installation. This feature reduces the downtime required to upgrade a node in the cluster. When performing an out-of-place upgrade, the old and new version of the software are present on the nodes at the same time, each in a different home location, but only one version of the software is active.
With this release, you can use Oracle Enterprise Manager Cluster Home page to perform full administrative and monitoring support for both standalone database and Oracle RAC environments, using High Availability Application and Oracle Cluster Resource Management.
When Oracle Enterprise Manager is installed with Oracle Clusterware, it can provide a set of users that have the Oracle Clusterware Administrator role in Oracle Enterprise Manager, and provide full administrative and monitoring support for High Availability application and Oracle Clusterware resource management. After you have completed installation and have Oracle Enterprise Manager deployed, you can provision additional nodes added to the cluster using Oracle Enterprise Manager.
With this release, the Single Client Access Name (SCAN) is the host name to provide for all clients connecting to the cluster. The SCAN is a domain name registered to at least one and up to three IP addresses, either in the domain name service (DNS) or the Grid Naming Service (GNS). The SCAN eliminates the need to change clients when nodes are added to or removed from the cluster. Clients using the SCAN can also access the cluster using EZCONNECT.
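For illustration, a client could connect through the SCAN using the easy connect naming method; the SCAN name, port, and service name shown here are placeholders:

$ sqlplus system@//sales-scan.example.com:1521/sales.example.com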
With this release, you can use the server control utility SRVCTL to shut down all Oracle software running within an Oracle home, in preparation for patching. Oracle Grid Infrastructure patching is automated across all nodes, and patches can be applied in a multi-node, multi-patch fashion.
To streamline cluster installations, especially for those customers who are new to clustering, Oracle introduces the Typical Installation path. Typical installation defaults as many options as possible to those recommended as best practices.
In prior releases, backing up the voting disks using a dd command was a required postinstallation task. With Oracle Clusterware release 11.2 and later, backing up and restoring a voting disk using the dd command is not supported.
Backing up voting disks manually is no longer required, because voting disks are backed up automatically in the OCR as part of any configuration change. Voting disk data is automatically restored to any added voting disks.
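If you want to confirm the current voting disk configuration, you can list the configured voting disks with the following command, run from the Grid home bin directory:

$ crsctl query css votedisk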
The following is a list of new features for release 1 (11.1):
With Oracle Database 11g release 1, Oracle Clusterware can be installed or configured as an independent product, and additional documentation is provided on storage administration. For installation planning, note the following documentation:
This book provides an overview and examples of the procedures to install and configure a two-node Oracle Clusterware and Oracle RAC environment.
This book (the guide that you are reading) provides procedures either to install Oracle Clusterware as a standalone product, or to install Oracle Clusterware with either Oracle Database, or Oracle RAC. It contains system configuration instructions that require system administrator privileges.
This platform-specific book provides procedures to install Oracle RAC after you have completed an Oracle Clusterware installation. It contains database configuration instructions for database administrators.
This book provides information for database and storage administrators who administer and manage storage, or who configure and administer Oracle Automatic Storage Management (Oracle ASM).
This is the administrator's reference for Oracle Clusterware. It contains information about administrative tasks, including those that involve changes to operating system configurations and cloning Oracle Clusterware.
This is the administrator's reference for Oracle RAC. It contains information about administrative tasks. These tasks include database cloning, node addition and deletion, Oracle Cluster Registry (OCR) administration, use of SRVCTL and other database administration utilities, and tuning changes to operating system configurations.
The following is a list of enhancements and new features for Oracle Database 11g release 1 (11.1).
This feature introduces a new SYSASM privilege that is specifically intended for performing Oracle ASM administration tasks. Using the SYSASM privilege instead of the SYSDBA privilege provides a clearer division of responsibility between Oracle ASM administration and database administration.
OSASM is a new operating system group that is used exclusively for Oracle ASM. Members of the OSASM group can connect as SYSASM using operating system authentication and have full access to Oracle ASM.
This appendix describes how to perform Oracle Clusterware and Oracle Automatic Storage Management upgrades.
Oracle Clusterware upgrades can be rolling upgrades, in which a subset of nodes are brought down and upgraded while other nodes remain active. Oracle Automatic Storage Management 11g release 2 (11.2) upgrades can be rolling upgrades. If you upgrade a subset of nodes, then a software-only installation is performed on the existing cluster nodes that you do not select for upgrade.
This appendix contains the following topics:
About Oracle ASM and Oracle Grid Infrastructure Installation and Upgrade
Preparing to Upgrade an Existing Oracle Clusterware Installation
Using CVU to Validate Readiness for Oracle Clusterware Upgrades
Checking Cluster Health Monitor Repository Size After Upgrading
Before you make any changes to the Oracle software, Oracle recommends that you create a backup of the Oracle software and databases.
Unset Oracle environment variables.
If you have set ORA_CRS_HOME as an environment variable, following instructions from Oracle Support, then unset it before starting an installation or upgrade. You should never use ORA_CRS_HOME as an environment variable except under explicit direction from Oracle Support.
Check to ensure that installation owner login shell profiles (for example, .profile or .cshrc) do not have ORA_CRS_HOME set.
If you have had an existing installation on your system, and you are using the same user account to install this installation, then unset the following environment variables: ORA_CRS_HOME; ORACLE_HOME; ORA_NLS10; TNS_ADMIN; and any other environment variable set for the Oracle installation user that is connected with Oracle software homes.
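As a quick check before starting the installer (the variable names are those listed above; egrep is used here only for illustration), you might run the following as the installation owner:

$ env | egrep 'ORA_CRS_HOME|ORACLE_HOME|ORA_NLS10|TNS_ADMIN'
$ unset ORA_CRS_HOME ORACLE_HOME ORA_NLS10 TNS_ADMIN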
In past releases, Oracle Automatic Storage Management (Oracle ASM) was installed as part of the Oracle Database installation. With Oracle Database 11g release 2 (11.2), Oracle ASM is installed when you install the Oracle Grid Infrastructure components and shares an Oracle home with Oracle Clusterware when installed in a cluster such as with Oracle RAC or with Oracle Restart on a standalone server.
If you have an existing Oracle ASM instance, you can either upgrade it at the time that you install Oracle Grid Infrastructure, or you can upgrade it after the installation, using Oracle ASM Configuration Assistant (ASMCA). However, be aware that a number of Oracle ASM features are disabled until you upgrade Oracle ASM, and Oracle Clusterware management of Oracle ASM does not function correctly until Oracle ASM is upgraded, because Oracle Clusterware only manages Oracle ASM when it is running in the Oracle Grid Infrastructure home. For this reason, Oracle recommends that if you do not upgrade Oracle ASM at the same time as you upgrade Oracle Clusterware, then you should upgrade Oracle ASM immediately afterward.
You can perform out-of-place upgrades to an Oracle ASM instance using Oracle ASM Configuration Assistant (ASMCA). In addition to running ASMCA using the graphic user interface, you can run ASMCA in non-interactive (silent) mode.
In prior releases, you could use Database Upgrade Assistant (DBUA) to upgrade either an Oracle Database, or Oracle ASM. That is no longer the case. You can only use DBUA to upgrade an Oracle Database instance. Use Oracle ASM Configuration Assistant (ASMCA) to upgrade Oracle ASM.
See Also: Oracle Database Upgrade Guide and Oracle Database Storage Administrator's Guide for additional information about upgrading existing Oracle ASM installations |
Oracle recommends that you use CVU to check whether there are any patches required for upgrading your existing Oracle Grid Infrastructure 11g release 2 or Oracle RAC database 11g release 2 installations.
Be aware of the following restrictions and changes for upgrades to Oracle Grid Infrastructure installations, which consists of Oracle Clusterware and Oracle Automatic Storage Management (Oracle ASM):
To upgrade existing Oracle Clusterware installations to Oracle Grid Infrastructure 11g, your release must be greater than or equal to 10.1.0.3, 10.2.0.3, 11.1.0.6, or 11.2.
To upgrade existing Oracle Grid Infrastructure from 11.2.0.2 to 11.2.0.3 or later, you must apply patch 11.2.0.2.1 (11.2.0.2 PSU 1) or later.
To upgrade existing 11.1 Oracle Clusterware installations to Oracle Grid Infrastructure 11.2.0.3 or later, you must patch the release 11.1 Oracle Clusterware home with the patch for bug 7308467.
If you have Oracle ACFS file systems on Oracle Grid Infrastructure 11g release 2 (11.2.0.1), you upgrade Oracle Grid Infrastructure to any later version (11.2.0.2 or 11.2.0.3), and you take advantage of Redundant Interconnect Usage and add one or more additional private interfaces to the private network, then you must restart the Oracle ASM instance on each upgraded cluster member node.
Do not delete directories in the Grid home. For example, do not delete Grid_home/OPatch. If you delete the directory, then the Oracle Grid Infrastructure installation owner cannot use OPatch to patch the Grid home, and OPatch displays the error "checkdir error: cannot create Grid_home/OPatch".
To upgrade existing 11.2.0.1 Oracle Grid Infrastructure installations to Oracle Grid Infrastructure 11.2.0.2, you must first verify if you need to apply any mandatory patches for upgrade to succeed. Refer to Section E.6 for steps to check readiness.
See Also: "Oracle 11gR2 Upgrade Companion" Note 785351.1 on My Oracle Support: |
Oracle Clusterware and Oracle ASM upgrades are always out-of-place upgrades. With 11g release 2 (11.2), you cannot perform an in-place upgrade of Oracle Clusterware and Oracle ASM to existing homes.
If the existing Oracle Clusterware home is a shared home, note that you can use a non-shared home for the Oracle Grid Infrastructure for a Cluster home for Oracle Clusterware and Oracle ASM 11g release 2 (11.2).
With Oracle Clusterware 11g release 1 and later releases, the same user that owned the Oracle Clusterware 10g software must perform the Oracle Clusterware 11g upgrade. Before Oracle Database 11g, either all Oracle software installations were owned by the Oracle user, typically oracle, or Oracle Database software was owned by oracle, and Oracle Clusterware software was owned by a separate user, typically crs.
Oracle ASM and Oracle Clusterware both run in the Oracle Grid Infrastructure home.
During a major version upgrade to 11g release 2 (11.2), the software in the 11g release 2 (11.2) Oracle Grid Infrastructure home is not fully functional until the upgrade is completed. Running srvctl, crsctl, and other commands from the 11g release 2 (11.2) home is not supported until the final rootupgrade.sh script is run and the upgrade is complete across all nodes.
To manage databases in the existing earlier version (release 10.x or 11.1) database homes during the Oracle Grid Infrastructure upgrade, use the srvctl utility from the existing database homes.
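For example, to check a database that is still managed from an existing 11.1 home during the upgrade, you would invoke that home's own srvctl; the home path and database name here are hypothetical:

$ /u01/app/oracle/product/11.1.0/db_1/bin/srvctl status database -d sales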
During Oracle Clusterware installation, if there is a single instance Oracle ASM version on the local node, then it is converted to a clustered Oracle ASM 11g release 2 (11.2) installation, and Oracle ASM runs in the Oracle Grid Infrastructure home on all nodes.
If a single instance (non-clustered) Oracle ASM installation is on a remote node, which is a node other than the local node (the node on which the Oracle Grid Infrastructure installation is being performed), then it will remain a single instance Oracle ASM installation. However, during installation, if you select to place the Oracle Cluster Registry (OCR) and voting disk files on Oracle ASM, then a clustered Oracle ASM installation is created on all nodes in the cluster, and the single instance Oracle ASM installation on the remote node will become nonfunctional.
If you have an existing Oracle Clusterware installation, then you upgrade your existing cluster by performing an out-of-place upgrade. You cannot perform an in-place upgrade.
Complete the following tasks before starting an upgrade:
For each node, use Cluster Verification Utility to ensure that you have completed preinstallation steps. It can generate Fixup scripts to help you to prepare servers. In addition, the installer will help you to ensure all required prerequisites are met.
Ensure that you have information you will need during installation, including the following:
An Oracle base location for Oracle Clusterware.
An Oracle Grid Infrastructure home location that is different from your existing Oracle Clusterware location
A SCAN address
Privileged user operating system groups to grant access to Oracle ASM data files (the OSDBA for ASM group), to grant administrative privileges to the Oracle ASM instance (OSASM group), and to grant a subset of administrative privileges to the Oracle ASM instance (OSOPER for ASM group)
root user access, to run scripts as root during installation
For the installation owner running the installation, if you have environment variables set for the existing installation, then unset the environment variables $ORACLE_HOME and $ORACLE_SID, as these environment variables are used during upgrade. For example:

$ unset ORACLE_BASE
$ unset ORACLE_HOME
$ unset ORACLE_SID
Review the contents in this section to validate that your cluster is ready for upgrades.
Navigate to the staging area for the upgrade, where the runcluvfy.sh command is located, and run the command runcluvfy.sh stage -pre crsinst -upgrade to check the readiness of your Oracle Clusterware installation for upgrades. Running runcluvfy.sh with the -pre crsinst -upgrade flags performs system checks to confirm if the cluster is in a correct state for upgrading from an existing clusterware installation.
The command uses the following syntax, where variable content is indicated by italics:
runcluvfy.sh stage -pre crsinst -upgrade [-n node_list] [-rolling] -src_crshome src_Gridhome -dest_crshome dest_Gridhome -dest_version dest_version [-fixup[-fixupdir path]] [-verbose]
The options are:
-n nodelist
The -n flag indicates cluster member nodes, and nodelist is the comma-delimited list of non-domain qualified node names on which you want to run a preupgrade verification. If you do not add the -n flag to the verification command, then all the nodes in the cluster are verified.
-rolling
Use this flag to verify readiness for rolling upgrades.
-src_crshome src_Gridhome
Use this flag to indicate the location of the source Oracle Clusterware or Grid home that you are upgrading, where src_Gridhome is the path to the home that you want to upgrade.
-dest_crshome dest_Gridhome
Use this flag to indicate the location of the upgrade Grid home, where dest_Gridhome is the path to the Grid home.
-dest_version dest_version
Use the -dest_version flag to indicate the release number of the upgrade, including any patchset. The release number must include the five digits designating the release to the level of the platform-specific patch. For example: 11.2.0.2.0.
-fixup [-fixupdir path]
Use the -fixup flag to indicate that you want to generate instructions for any required steps you need to complete to ensure that your cluster is ready for an upgrade. The default location is the CVU work directory. If you want to place the fixup instructions in a different directory, then add the flag -fixupdir, and provide the path to the directory where you want to put the instructions for required fixes.
-verbose
Use the -verbose flag to produce detailed output of individual checks.
You can verify that the permissions required for installing Oracle Clusterware have been configured on the nodes node1 and node2 by running the following command:
$ ./runcluvfy.sh stage -pre crsinst -upgrade -n node1,node2 -rolling -src_crshome /u01/app/grid/11.2.0.1 -dest_crshome /u01/app/grid/11.2.0.2 -dest_version 11.2.0.3.0 -fixup -fixupdir /home/grid/fixup -verbose
Use Cluster Verification Utility to assist you with system checks in preparation for starting a database upgrade. The installer runs the appropriate CVU checks automatically, and either prompts you to fix problems, or provides a fixup script to be run on all nodes in the cluster before proceeding with the upgrade.
Use the following procedures to upgrade Oracle Clusterware or Oracle Automatic Storage Management:
Note: When you upgrade to Oracle Clusterware 11g release 2 (11.2), Oracle Automatic Storage Management (Oracle ASM) is installed in the same home as Oracle Clusterware. In Oracle documentation, this home is called the Oracle Grid Infrastructure home, or Grid home. Also note that Oracle does not support attempting to add additional nodes to a cluster during a rolling upgrade. |
Use the following procedure to upgrade Oracle Clusterware from an earlier release to a later release:
Note: Oracle recommends that you leave Oracle RAC instances running. When you start the root script on each node, that node's instances are shut down and then started up again by the rootupgrade.sh script.
For single instance Oracle Databases on the cluster, only those that use Oracle ASM need to be shut down. Listeners do not need to be shut down. |
Start the installer, and select the option to upgrade an existing Oracle Clusterware and Oracle ASM installation.
On the node selection page, select all nodes.
Note: In contrast with releases prior to Oracle Clusterware 11g release 2, all upgrades are rolling upgrades, even if you select all nodes for the upgrade. Oracle recommends that you select all cluster member nodes for the upgrade, and then shut down database instances on each node before you run the upgrade. |
Select installation options as prompted.
When prompted, run the rootupgrade.sh script on each node in the cluster that you want to upgrade.
Run the script on the local node first. The script shuts down the earlier release installation, replaces it with the new Oracle Clusterware release, and starts the new Oracle Clusterware installation.
After the script completes successfully, you can run the script in parallel on all nodes except for one, which you select as the last node. When the script is run successfully on all the nodes except the last node, run the script on the last node.
After running the rootupgrade.sh script on the last node in the cluster, if you left the check box with ASMCA marked, as is the default, then Oracle ASM Configuration Assistant runs automatically, and the Oracle Clusterware upgrade is complete. If you unchecked the box during the interview stage of the upgrade, then ASMCA is not run automatically.
If an earlier version of Oracle Automatic Storage Management is installed, then the installer starts Oracle ASM Configuration Assistant to upgrade Oracle ASM to 11g release 2 (11.2). You can choose to upgrade Oracle ASM at this time, or upgrade it later.
Oracle recommends that you upgrade Oracle ASM at the same time that you upgrade the Oracle Clusterware binaries. Until Oracle ASM is upgraded, Oracle databases that use Oracle ASM cannot be created. Until Oracle ASM is upgraded, the 11g release 2 (11.2) Oracle ASM management tools in the Grid home (for example, srvctl) will not work.
Because the Oracle Grid Infrastructure home is in a different location than the former Oracle Clusterware and Oracle ASM homes, update any scripts or applications that use utilities, libraries, or other files that reside in the Oracle Clusterware and Oracle ASM homes.
Note: At the end of the upgrade, if you set the OCR backup location manually to the older release Oracle Clusterware home (CRS home), then you must change the OCR backup location to the Oracle Grid Infrastructure home (Grid home). If you did not set the OCR backup location manually, then this issue does not concern you. Because upgrades of Oracle Clusterware are out-of-place upgrades, the previous release Oracle Clusterware home cannot be the location of the OCR backups. Backups in the old Oracle Clusterware home could be deleted. |
After you have completed the Oracle Clusterware 11g release 2 (11.2) upgrade, if you did not choose to upgrade Oracle ASM when you upgraded Oracle Clusterware, then you can do it separately using the Oracle Automatic Storage Management Configuration Assistant (asmca) to perform rolling upgrades.
You can use asmca to complete the upgrade separately, but you should do it soon after you upgrade Oracle Clusterware, as Oracle ASM management tools such as srvctl will not work until Oracle ASM is upgraded.
Note: ASMCA performs a rolling upgrade only if the earlier version of Oracle ASM is either 11.1.0.6 or 11.1.0.7. Otherwise, ASMCA performs a normal upgrade, in which ASMCA brings down all Oracle ASM instances on all nodes of the cluster, and then brings them all up in the new Grid home. |
Note the following if you intend to perform rolling upgrades of Oracle ASM:
The active version of Oracle Clusterware must be 11g release 2 (11.2). To determine the active version, enter the following command:
$ crsctl query crs activeversion
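The output reports the active version in a form similar to the following (the version string shown here is only illustrative):

Oracle Clusterware active version on the cluster is [11.2.0.2.0]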
You can upgrade a single instance Oracle ASM installation to a clustered Oracle ASM installation. However, you can only upgrade an existing single instance Oracle ASM installation if you run the installation from the node on which the Oracle ASM installation is installed. You cannot upgrade a single instance Oracle ASM installation on a remote node.
You must ensure that any rebalance operations on your existing Oracle ASM installation are completed before starting the upgrade process.
During the upgrade process, you alter the Oracle ASM instances to an upgrade state. Because this upgrade state limits Oracle ASM operations, you should complete the upgrade process soon after you begin. The following are the operations allowed when an Oracle ASM instance is in the upgrade state:
Diskgroup mounts and dismounts
Opening, closing, resizing, or deleting database files
Recovering instances
Queries of fixed views and packages: Users are allowed to query fixed views and run anonymous PL/SQL blocks using fixed packages, such as dbms_diskgroup
Complete the following procedure to upgrade Oracle ASM:
On the node you plan to start the upgrade, set the environment variable ASMCA_ROLLING_UPGRADE as true. For example:
$ export ASMCA_ROLLING_UPGRADE=true
From the Oracle Grid Infrastructure 11g release 2 (11.2) home, start ASMCA. For example:
$ cd /u01/11.2/grid/bin
$ ./asmca
Select Upgrade.
ASM Configuration Assistant upgrades Oracle ASM in succession for all nodes in the cluster.
After you complete the upgrade, run the command to unset the ASMCA_ROLLING_UPGRADE environment variable.
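For example, in the same shell session where you exported the variable in step 1:

$ unset ASMCA_ROLLING_UPGRADE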
See Also: Oracle Database Upgrade Guide and Oracle Database Storage Administrator's Guide for additional information about preparing an upgrade plan for Oracle ASM, and for starting, completing, and stopping Oracle ASM upgrades |
Because Oracle Clusterware 11g release 2 (11.2) is an out-of-place upgrade of the Oracle Clusterware home in a new location (the Oracle Grid Infrastructure for a Cluster home, or Grid home), the path for the CRS_HOME parameter in some parameter files must be changed. If you do not change the parameter, then you encounter errors such as "cluster target broken" in DB Control or Grid Control.
Use the following procedure to resolve this issue:
Log in to dbconsole or gridconsole.
Navigate to the Cluster tab.
Click Monitoring Configuration.
Update the value for Oracle Home with the new Grid home path.
After upgrade from previous releases, if you want to deinstall the previous release Oracle Grid Infrastructure Grid home, then you must first change the permission and ownership of the previous release Grid home. Log in as root, and change the permission and ownership of the previous release Grid home using the following command syntax, where oldGH is the previous release Grid home, swowner is the Oracle Grid Infrastructure installation owner, and oldGHParent is the parent directory of the previous release Grid home:
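# chmod -R 755 oldGH
# chown -R swowner oldGH
# chown swowner oldGHParent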
For example:
# chmod -R 755 /u01/app/11.2.0.1/grid
# chown -R grid /u01/app/11.2.0.1/grid
# chown grid /u01/app/11.2.0.1
After a successful or a failed upgrade to Oracle Clusterware 11g release 2 (11.2), you can restore Oracle Clusterware to the previous version.
The restoration procedure in this section restores the Clusterware configuration to the state it was in before the Oracle Clusterware 11g release 2 (11.2) upgrade. Any configuration changes you performed during or after the 11g release 2 (11.2) upgrade are removed and cannot be recovered.
In the following procedure, the local node is the first node on which the rootupgrade script was run. The remote nodes are all other nodes that were upgraded.
To restore Oracle Clusterware to the previous release:
Use the downgrade procedure for the release to which you want to downgrade.
Downgrading to releases prior to 11g release 2 (11.2.0.1):
On all remote nodes, use the command syntax Grid_home/crs/install/rootcrs.pl -downgrade [-force] to stop the 11g release 2 (11.2) resources and shut down the 11g release 2 (11.2) stack.
Note: This command does not reset the OCR, or delete ocr.loc. |
For example:
# /u01/app/11.2.0/grid/crs/install/rootcrs.pl -downgrade
If you want to stop a partial or failed 11g release 2 (11.2) installation and restore the previous release Oracle Clusterware, then use the -force flag with this command.
Downgrading to a release 11.2.0.1 or later release:
Use the command syntax Grid_home/crs/install/rootcrs.pl -downgrade -oldcrshome oldGridHomePath -version oldGridversion, where oldGridHomePath is the path to the previous release Oracle Grid Infrastructure home, and oldGridversion is the release to which you want to downgrade. For example:
./rootcrs.pl -downgrade -oldcrshome /u01/app/11.2.0/grid -version 11.2.0.1
If you want to stop a partial or failed 11g release 2 (11.2) installation and restore the previous release Oracle Clusterware, then use the -force flag with this command.
After the rootcrs.pl -downgrade script has completed on all remote nodes, on the local node use the command syntax Grid_home/crs/install/rootcrs.pl -downgrade -lastnode -oldcrshome pre11.2_crs_home -version pre11.2_crs_version [-force], where pre11.2_crs_home is the home of the earlier Oracle Clusterware installation, and pre11.2_crs_version is the release number of the earlier Oracle Clusterware installation.
For example:
# /u01/app/11.2.0/grid/crs/install/rootcrs.pl -downgrade -lastnode -oldcrshome /u01/app/crs -version 11.1.0.6.0
This script downgrades the OCR. If you want to stop a partial or failed 11g release 2 (11.2) installation and restore the previous release Oracle Clusterware, then use the -force flag with this command.
Log in as the Oracle Grid Infrastructure installation owner, and run the following command, where /u01/app/grid is the location of the new (upgraded) Grid home (11.2):
./runInstaller -nowait -waitforcompletion -ignoreSysPrereqs -updateNodeList -silent CRS=false ORACLE_HOME=/u01/app/grid
As the Grid infrastructure installation owner, run the command ./runInstaller -nowait -waitforcompletion -ignoreSysPrereqs -updateNodeList -silent CRS=true ORACLE_HOME=pre11.2_crs_home, where pre11.2_crs_home represents the home directory of the earlier Oracle Clusterware installation.
For example:
./runInstaller -nowait -waitforcompletion -ignoreSysPrereqs -updateNodeList -silent CRS=true ORACLE_HOME=/u01/app/crs
For downgrades to 11.2 and later releases
On each node, start Oracle Clusterware from the earlier release Oracle Clusterware home using the command crsctl start crs. For example, where the earlier release home is crshome11202, use the following command on each node:
crshome11202/bin/crsctl start crs
For downgrades to 11.1 and earlier releases
You are prompted to run root.sh from the earlier release Oracle Clusterware installation home in sequence on each member node of the cluster. After you complete this task, the downgrade is complete.
Running root.sh from the earlier release Oracle Clusterware installation home restarts the Oracle Clusterware stack, starts up all the resources previously registered with Oracle Clusterware in the older version, and configures the old initialization scripts to run the earlier release Oracle Clusterware stack.
If you are upgrading from a prior release using IPD/OS to Oracle Grid Infrastructure release 2 (11.2.0.2 and later), then you should review the Cluster Health Monitor repository size (the CHM repository). Oracle recommends that you review your CHM repository needs, and enlarge the repository size if you want to maintain a larger CHM repository.
Note: Your previous IPD/OS repository is deleted when you install Oracle Grid Infrastructure, and you run the root.sh script on each node. |
By default, the CHM repository size for release 11.2.0.3 and later is a minimum of either 1 GB or 3600 seconds (1 hour). For release 11.2.0.2, the CHM repository size is 1 GB, regardless of the size of the cluster.
To enlarge the CHM repository, use the following command syntax, where RETENTION_TIME is the size of the CHM repository, expressed as a number of seconds:
oclumon manage -repos resize RETENTION_TIME
The value for RETENTION_TIME must be more than 3600 (one hour) and less than 259200 (three days). If you enlarge the CHM repository size, then you must ensure that there is local space available for the repository size you select on each node of the cluster. If there is not sufficient space available, then you can move the repository to shared storage.
For example, to set the repository size to four hours:
$ oclumon manage -repos resize 14400
Oracle Grid Infrastructure Installation Guide, 11g Release 2 (11.2) for Oracle Solaris
E24616-05
Copyright © 2007, 2012, Oracle and/or its affiliates. All rights reserved.
Primary Author: Douglas Williams
Contributing Authors: Jonathan Creighton, Barb Lundhild, Paul K. Harter, Markus Michalewicz, Balaji Pagadala, Hanlin Qian, Sunil Ravindrachar, Dipak Saggi, Ara Shakian, Janet Stern, Binoy Sukumaran, Kannan Viswanathan
Contributors: Mark Bauer, Barb Glover, Yuki Feng, Aneesh Khandelwal, Saar Maoz, Bo Zhu
This software and related documentation are provided under a license agreement containing restrictions on use and disclosure and are protected by intellectual property laws. Except as expressly permitted in your license agreement or allowed by law, you may not use, copy, reproduce, translate, broadcast, modify, license, transmit, distribute, exhibit, perform, publish, or display any part, in any form, or by any means. Reverse engineering, disassembly, or decompilation of this software, unless required by law for interoperability, is prohibited.
The information contained herein is subject to change without notice and is not warranted to be error-free. If you find any errors, please report them to us in writing.
If this is software or related documentation that is delivered to the U.S. Government or anyone licensing it on behalf of the U.S. Government, the following notice is applicable:
U.S. GOVERNMENT RIGHTS Programs, software, databases, and related documentation and technical data delivered to U.S. Government customers are "commercial computer software" or "commercial technical data" pursuant to the applicable Federal Acquisition Regulation and agency-specific supplemental regulations. As such, the use, duplication, disclosure, modification, and adaptation shall be subject to the restrictions and license terms set forth in the applicable Government contract, and, to the extent applicable by the terms of the Government contract, the additional rights set forth in FAR 52.227-19, Commercial Computer Software License (December 2007). Oracle America, Inc., 500 Oracle Parkway, Redwood City, CA 94065.
This software or hardware is developed for general use in a variety of information management applications. It is not developed or intended for use in any inherently dangerous applications, including applications that may create a risk of personal injury. If you use this software or hardware in dangerous applications, then you shall be responsible to take all appropriate fail-safe, backup, redundancy, and other measures to ensure its safe use. Oracle Corporation and its affiliates disclaim any liability for any damages caused by use of this software or hardware in dangerous applications.
Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.
Intel and Intel Xeon are trademarks or registered trademarks of Intel Corporation. All SPARC trademarks are used under license and are trademarks or registered trademarks of SPARC International, Inc. AMD, Opteron, the AMD logo, and the AMD Opteron logo are trademarks or registered trademarks of Advanced Micro Devices. UNIX is a registered trademark of The Open Group.
This software or hardware and documentation may provide access to or information on content, products, and services from third parties. Oracle Corporation and its affiliates are not responsible for and expressly disclaim all warranties of any kind with respect to third-party content, products, and services. Oracle Corporation and its affiliates will not be responsible for any loss, costs, or damages incurred due to your access to or use of third-party content, products, or services.
This appendix describes how to install and configure Oracle products using response files. It includes information about the following topics:
When you start the installer, you can use a response file to automate the installation and configuration of Oracle software, either fully or partially. The installer uses the values contained in the response file to provide answers to some or all installation prompts.
Typically, the installer runs in interactive mode, which means that it prompts you to provide information in graphical user interface (GUI) screens. When you use response files to provide this information, you run the installer from a command prompt using either of the following modes:
If you include responses for all of the prompts in the response file and specify the -silent option when starting the installer, then it runs in silent mode. During a silent mode installation, the installer does not display any screens. Instead, it displays progress information in the terminal that you used to start it.
If you include responses for some or all of the prompts in the response file and omit the -silent option, then the installer runs in response file mode. During a response file mode installation, the installer displays all the screens: both screens for which you specified information in the response file, and screens for which you did not specify the required information in the response file.
You define the settings for a silent or response file installation by entering values for the variables listed in the response file. For example, to specify the Oracle home name, supply the appropriate value for the ORACLE_HOME variable:
ORACLE_HOME="OraDBHome1"
Another way of specifying the response file variable settings is to pass them as command line arguments when you run the installer. For example:
-silent "ORACLE_HOME=OraDBHome1" ...
This method is particularly useful if you do not want to embed sensitive information, such as passwords, in the response file. For example:
-silent "s_dlgRBOPassword=binks342" ...
Ensure that you enclose the variable and its setting in quotes.
See Also: Oracle Universal Installer and OPatch User's Guide for Windows and UNIX for more information about response files |
The following table provides use cases for running the installer in silent mode or response file mode.
The following are the general steps to install and configure Oracle products using the installer in silent or response file mode:
Note: You must complete all required preinstallation tasks on a system before running the installer in silent or response file mode. |
Prepare a response file.
Run the installer in silent or response file mode.
If you completed a software-only installation, then run Net Configuration Assistant and Database Configuration Assistant in silent or response file mode.
These steps are described in the following sections.
This section describes the following methods to prepare a response file for use during silent mode or response file mode installations:
Oracle provides response file templates for each product and installation type, and for each configuration tool. These files are located in the database/response directory on the installation media.

Note: If you copied the software to a hard disk, then the response files are located in the response directory under the location where you copied the software. |
Table B-1 lists the response files provided with this software:
Table B-1 Response Files for Oracle Database
Response File | Description
---|---
 | Silent installation of Oracle Database 11g
 | Silent installation of Database Configuration Assistant
netca.rsp | Silent installation of Oracle Net Configuration Assistant
Table B-2 Response files for Oracle Grid Infrastructure
Response File | Description
---|---
 | Silent installation of Oracle Grid Infrastructure installations
Caution: When you modify a response file template and save a file for use, the response file may contain plain text passwords. Ownership of the response file should be given to the Oracle software installation owner only, and permissions on the response file should be changed to 600. Oracle strongly recommends that database administrators or other administrators delete or secure response files when they are not in use. |
To copy and modify a response file:
Copy the response file from the response file directory to a directory on your system:
$ cp /directory_path/response/response_file.rsp local_directory
In this example, directory_path is the path to the database directory on the installation media. If you have copied the software to a hard drive, then you can edit the file in the response directory if you prefer.
Open the response file in a text editor:
$ vi /local_dir/response_file.rsp
Remember that you can specify sensitive information, such as passwords, at the command line rather than within the response file. "How Response Files Work" explains this method.
See Also: Oracle Universal Installer and OPatch User's Guide for Windows and UNIX for detailed information on creating response files |
Follow the instructions in the file to edit it.
Note: The installer or configuration assistant fails if you do not correctly configure the response file. |
Change the permissions on the file to 600:
$ chmod 600 /local_dir/response_file.rsp
Note: A fully specified response file for an Oracle Database installation contains the passwords for database administrative accounts and for a user who is a member of the OSDBA group (required for automated backups). Ensure that only the Oracle software owner user can view or modify response files or consider deleting them after the installation succeeds. |
You can use the installer in interactive mode to record a response file, which you can edit and then use to complete silent mode or response file mode installations. This method is useful for custom or software-only installations.
Starting with Oracle Database 11g Release 2 (11.2), you can save all the installation steps into a response file during installation by clicking Save Response File on the Summary page. You can use the generated response file for a silent installation later.
When you record the response file, you can either complete the installation, or you can exit from the installer on the Summary page, before it starts to copy the software to the system.
If you use record mode during a response file mode installation, then the installer records the variable values that were specified in the original source response file into the new response file.
Note: Oracle Universal Installer does not record passwords in the response file. |
To record a response file:
Complete preinstallation tasks as for a normal installation.
Ensure that the Oracle Grid Infrastructure software owner user (typically grid) has permissions to create or write to the Oracle home path that you will specify when you run the installer.
On each installation screen, specify the required information.
When the installer displays the Summary screen, perform the following steps:
Click Save Response File and specify a file name and location to save the values for the response file, and click Save.
Click Finish to create the response file and continue with the installation.
Click Save Response File and Cancel if you only want to create the response file but not continue with the installation. The installation will stop, but the settings you have entered will be recorded in the response file.
Before you use the saved response file on another system, edit the file and make any required changes.
Use the instructions in the file as a guide when editing it.
Run Oracle Universal Installer at the command line, specifying the response file you created. The Oracle Universal Installer executable, runInstaller, provides several options. For help information on the full set of these options, run the runInstaller command with the -help option. For example:

$ directory_path/runInstaller -help
The help information appears in a window after some time.
To run the installer using a response file:
Complete the preinstallation tasks as for a normal installation.
Log in as the software installation owner user.
If you are completing a response file mode installation, set the DISPLAY environment variable.

Note: You do not have to set the DISPLAY environment variable if you are completing a silent mode installation. |
To start the installer in silent or response file mode, enter a command similar to the following:
$ /directory_path/runInstaller [-silent] [-noconfig] \
-responseFile responsefilename
Note: Do not specify a relative path to the response file. If you specify a relative path, then the installer fails. |
In this example:
directory_path is the path of the DVD or the path of the directory on the hard drive where you have copied the installation binaries.
-noconfig suppresses running the configuration assistants during installation, and a software-only installation is performed instead.
responsefilename is the full path and file name of the installation response file that you configured.
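As an illustrative sketch only (the staging path and response file name are assumptions, not values from this guide), a silent installation might be started as follows:

$ /stage/grid/runInstaller -silent -responseFile /home/grid/grid_install.rsp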
When the installation completes, log in as the root
user and run the root.sh
script. For example:
$ su root
password:
# /oracle_home_path/root.sh
You can run Net Configuration Assistant in silent mode to configure and start an Oracle Net listener on the system, configure naming methods, and configure Oracle Net service names. To run Net Configuration Assistant in silent mode, you must copy and edit a response file template. Oracle provides a response file template named netca.rsp in the database/response directory on the DVD.
Note: If you copied the software to a hard disk, then the response file template is located in the database/response directory. |
To run Net Configuration Assistant using a response file:
Copy the netca.rsp response file template from the response file directory to a directory on your system:

$ cp /directory_path/response/netca.rsp local_directory
In this example, directory_path is the path of the database directory on the DVD. If you have copied the software to a hard drive, you can edit the file in the response directory if you prefer.
Open the response file in a text editor:
$ vi /local_dir/netca.rsp
Follow the instructions in the file to edit it.
Note: Net Configuration Assistant fails if you do not correctly configure the response file. |
Log in as the Oracle software owner user, and set the ORACLE_HOME environment variable to specify the correct Oracle home directory.
Enter a command similar to the following to run Net Configuration Assistant in silent mode:
$ $ORACLE_HOME/bin/netca -silent -responsefile /local_dir/netca.rsp
In this command:
The -silent option runs Net Configuration Assistant in silent mode.
local_dir is the full path of the directory where you copied the netca.rsp response file template.
Use the following sections to create and run a response file configuration after installing Oracle software.
When you run a silent or response file installation, you provide information about your servers in a response file that you otherwise provide manually during a graphical user interface installation. However, the response file does not contain passwords for user accounts that configuration assistants require after software installation is complete. The configuration assistants are started with a script called configToolAllCommands. You can run this script in response file mode by creating and using a password response file. The script uses the passwords to run the configuration tools in succession to complete configuration.
If you keep the password file to use for clone installations, then Oracle strongly recommends that you store it in a secure location. In addition, if you have to stop an installation to fix an error, you can run the configuration assistants using configToolAllCommands and a password response file.
The configToolAllCommands password response file consists of the following syntax options:
internal_component_name is the name of the component that the configuration assistant configures
variable_name is the name of the configuration file variable
value is the desired value to use for configuration.
The command syntax is as follows:
internal_component_name|variable_name=value
For example:
oracle.assistants.asm|S_ASMPASSWORD=welcome
Oracle strongly recommends that you maintain security with a password response file:
Permissions on the response file should be set to 600.
The owner of the response file should be the installation owner user, with the group set to the central inventory (oraInventory) group.
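For example, assuming the response file is named cfgrsp.properties (the name used later in this appendix) and was created by the installation owner so that ownership and group are already correct, a command such as the following restricts access to that owner:
$ chmod 600 cfgrsp.properties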
To run configuration assistants with the configToolAllCommands
script:
Create a response file using the syntax filename.properties. For example:
$ touch cfgrsp.properties
Open the file with a text editor, and cut and paste the password template, modifying as needed.
Example B-1 Password response file for Oracle Grid Infrastructure installation for a cluster
Oracle Grid Infrastructure requires passwords for Oracle Automatic Storage Management Configuration Assistant (ASMCA), and for Intelligent Platform Management Interface Configuration Assistant (IPMICA) if you have a BMC card and you want to enable this feature. Provide the following response file:
oracle.assistants.asm|S_ASMPASSWORD=password
oracle.assistants.asm|S_ASMMONITORPASSWORD=password
oracle.crs|S_BMCPASSWORD=password
If you do not have a BMC card, or you do not want to enable IPMI, then leave the S_BMCPASSWORD input field blank.
Example B-2 Password response file for Oracle Real Application Clusters
Oracle Database configuration requires passwords for the SYS, SYSTEM, SYSMAN, and DBSNMP accounts for use with Database Configuration Assistant (DBCA). In addition, if you use Oracle ASM storage, then configure the ASMSNMP password. Also, if you selected to configure Oracle Enterprise Manager, then you must provide the password for the Oracle software installation owner for the S_HOSTUSERPASSWORD response.
oracle.assistants.server|S_SYSPASSWORD=password
oracle.assistants.server|S_SYSTEMPASSWORD=password
oracle.assistants.server|S_SYSMANPASSWORD=password
oracle.assistants.server|S_DBSNMPPASSWORD=password
oracle.assistants.server|S_HOSTUSERPASSWORD=password
oracle.assistants.server|S_ASMSNMPPASSWORD=password
If you do not want to enable Oracle Enterprise Manager or Oracle ASM, then leave those password fields blank.
Change permissions to secure the file. For example:
$ ls -al cfgrsp.properties
-rw------- 1 oracle oinstall 0 Apr 30 17:30 cfgrsp.properties
Change directory to $ORACLE_HOME/cfgtoollogs
, and run the configuration script using the following syntax:
configToolAllCommands RESPONSE_FILE=/path/name.properties
For example:
$ ./configToolAllCommands RESPONSE_FILE=/home/oracle/cfgrsp.properties
This chapter describes the difference between a Typical and Advanced installation for Oracle Grid Infrastructure for a cluster, and describes the steps required to complete a Typical installation.
This chapter contains the following sections:
There are two installation options for Oracle Grid Infrastructure installations:
Typical Installation: The Typical installation option is a simplified installation with a minimal number of manual configuration choices. Oracle recommends that you select this installation type for most cluster implementations.
Advanced Installation: The Advanced Installation option is an advanced procedure that requires a higher degree of system knowledge. It enables you to select particular configuration choices, including additional storage and network choices, use of operating system group authentication for role-based administrative privileges, integration with IPMI, or more granularity in specifying Oracle Automatic Storage Management roles.
With Oracle Clusterware 11g release 2 (11.2), during installation Oracle Universal Installer (OUI) generates Fixup scripts (runfixup.sh
) that you can run to complete required preinstallation steps.
Fixup scripts are generated during installation. You are prompted to run scripts as root
in a separate terminal session. When you run scripts, they complete the following configuration tasks:
If necessary, sets kernel parameters required for installation and runtime to at least the minimum value.
Reconfigures primary and secondary group memberships for the installation owner, if necessary, for the Oracle Inventory directory and the operating system privileges groups.
Sets shell limits if necessary to required values.
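For example, when OUI reports that a Fixup script has been generated, you run it as the root user in a separate terminal session. The path shown below is only illustrative; use the path that the installer displays in the prompt:
# /tmp/CVU_11.2.0.3.0_grid/runfixup.sh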
Note: On Oracle Solaris 10 and later releases, you are not required to make changes to the /etc/system file to implement the System V IPC. Oracle Solaris 10 uses the resource control facility for its implementation. |
Complete the following manual configuration tasks:
See Also: Chapter 2, "Advanced Installation Oracle Grid Infrastructure for a Cluster Preinstallation Tasks" and Chapter 3, "Configuring Storage for Grid Infrastructure for a Cluster and Oracle Real Application Clusters (Oracle RAC)" if you need any information about how to complete these tasks |
Enter the following commands to check available memory:
# /usr/sbin/prtconf | grep "Memory size"
# /usr/sbin/swap -s
The minimum required RAM is 2.5 GB for Oracle Grid Infrastructure for a Cluster installations, including installations where you plan to install Oracle RAC. For systems with 2.5 GB to 16 GB RAM, Oracle recommends that you use swap space equal to RAM. For systems with more than 16 GB RAM, use 0.75 x RAM as the swap space. If you use non-swappable memory, such as ISM, then you should deduct the memory allocated to this space from the available RAM before calculating swap space. If you plan to install Oracle Database or Oracle RAC on systems using DISM, then available swap space must be at least equal to the sum of the SGA sizes of all instances running on the servers.
If the swap space and the Grid home are on the same filesystem, then add together their respective disk space requirements for the total minimum space required.
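As an illustrative sketch only (not part of the installation procedure), the following Bourne shell commands derive a recommended swap size from the installed RAM by applying the sizing rules above; they assume that prtconf reports the memory size in megabytes, as it does on Oracle Solaris:
RAM_MB=`/usr/sbin/prtconf | awk '/Memory size/ {print $3}'`   # installed RAM, in MB
if [ "$RAM_MB" -le 16384 ]; then
  SWAP_MB=$RAM_MB                      # 2.5 GB to 16 GB RAM: swap equal to RAM
else
  SWAP_MB=`expr $RAM_MB \* 3 / 4`      # more than 16 GB RAM: 0.75 x RAM
fi
echo "Recommended swap: $SWAP_MB MB"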
df -h
This command checks the available space on file systems. If you use normal redundancy for Oracle Clusterware files, which is three Oracle Cluster Registry (OCR) locations and three voting disk locations, then you should have at least 2 GB of file space available on shared storage volumes reserved for Oracle Grid Infrastructure files.
Note: You cannot install OCR or voting disk files on raw partitions. You can install only on Oracle ASM, or on supported network-attached storage or cluster file systems. The only use for raw devices is as Oracle ASM disks. |
If you plan to install on Oracle ASM, then to ensure high availability of OCR or voting disk files on Oracle ASM, you need to have at least 2 GB of disk space for Oracle Clusterware files in three separate failure groups, with at least three physical disks. Each disk must have at least 1 GB of capacity to ensure that there is sufficient space to create Oracle Clusterware files.
Ensure you have at least 6.5 GB of space for the Oracle Grid Infrastructure for a Cluster home (Grid home). This includes Oracle Clusterware, Oracle Automatic Storage Management (Oracle ASM), and Oracle ACFS files and log files, and includes the Cluster Health Monitor repository.
df -h /tmp
Ensure that you have at least 1 GB of space in /tmp
. If this space is not available, then increase the size, or delete unnecessary files in /tmp
.
Ensure that you have the following available:
During Typical installation, you are prompted to confirm the default Single Client Access Name (SCAN), which is used to connect to databases within the cluster irrespective of which nodes they are running on. By default, the name used as the SCAN is also the name of the cluster. The default value for the SCAN is based on the local node name. If you change the SCAN from the default, then the name that you use must be globally unique throughout your enterprise.
In a Typical installation, the SCAN is also the name of the cluster. The SCAN and cluster name must be at least one character long and no more than 15 characters in length, must be alphanumeric, and may contain hyphens (-).
For example:
NE-Sa89
If you require a SCAN that is longer than 15 characters, then be aware that the cluster name defaults to the first 15 characters of the SCAN.
Before starting the installation, you must have at least two interfaces configured on each node: One for the private IP address and one for the public IP address.
If you do not enable GNS, then the public and virtual IP addresses for each node must be static IP addresses, configured before installation for each node, but not currently in use. Public and virtual IP addresses must be on the same subnet.
Oracle Clusterware manages private IP addresses in the private subnet on interfaces you identify as private during the installation interview.
The cluster must have the following addresses configured:
A public IP address for each node, with the following characteristics:
Static IP address
Configured before installation for each node, and resolvable to that node before installation
On the same subnet as all other public IP addresses, VIP addresses, and SCAN addresses
A virtual IP address for each node, with the following characteristics:
Static IP address
Configured before installation for each node, but not currently in use
On the same subnet as all other public IP addresses, VIP addresses, and SCAN addresses
A Single Client Access Name (SCAN) for the cluster, with the following characteristics:
Three Static IP addresses configured on the domain name server (DNS) before installation so that the three IP addresses are associated with the name provided as the SCAN, and all three addresses are returned in random order by the DNS to the requestor
Configured before installation in the DNS to resolve to addresses that are not currently in use
Given a name that does not begin with a numeral
On the same subnet as all other public IP addresses, VIP addresses, and SCAN addresses
Conforms with the RFC 952 standard, which allows alphanumeric characters and hyphens ("-"), but does not allow underscores ("_").
A private IP address for each node, with the following characteristics:
Static IP address
Configured before installation, but on a separate, private network, with its own subnet, that is not resolvable except by other cluster member nodes
After installation, when a client sends a request to the cluster, the Oracle Clusterware SCAN listeners redirect client requests to servers in the cluster.
Note: Oracle strongly recommends that you do not configure SCAN VIP addresses in the hosts file. Use DNS resolution for SCAN VIPs. If you use the hosts file to resolve SCANs, then you will only be able to resolve to one IP address and you will have only one SCAN address. |
See Also: Appendix C, "Understanding Network Addresses" for more information about network addresses |
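For example, before installation you can confirm that the SCAN resolves in the DNS to three addresses. The SCAN name and the addresses shown below are hypothetical placeholders; substitute your own values:
$ nslookup mycluster-scan.example.com
Name:    mycluster-scan.example.com
Address: 192.0.2.101
Name:    mycluster-scan.example.com
Address: 192.0.2.102
Name:    mycluster-scan.example.com
Address: 192.0.2.103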
In previous releases, to make use of redundant networks for the interconnect, bonding, trunking, teaming, or similar technology was required. Oracle Grid Infrastructure and Oracle RAC can now make use of redundant network interconnects, without the use of other network technology, to provide optimal communication in the cluster. This functionality is available starting with Oracle Database 11g Release 2 (11.2.0.2).
Redundant Interconnect Usage enables load-balancing and high availability across multiple (up to 4) private networks (also known as interconnects).
During installation, you are asked to identify the planned use for each network interface that OUI detects on your cluster node. You must identify each interface as a public or private interface, or as "do not use." For interfaces that you plan to use for other purposes—for example, an interface dedicated to a network file system—you must identify those interfaces as "do not use" interfaces, so that Oracle Clusterware ignores them.
Redundant Interconnect Usage cannot protect interfaces used for public communication. If you require high availability or load balancing for public interfaces, then use a third party solution. Typically, bonding, trunking or similar technologies can be used for this purpose.
You can enable Redundant Interconnect Usage for the private network by selecting multiple interfaces to use as private interfaces. Redundant Interconnect Usage creates a redundant interconnect when you identify more than one interface as private. This functionality is available starting with Oracle Grid Infrastructure 11g Release 2 (11.2.0.2).
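After installation, you can confirm how the interfaces were stored, for example with the oifcfg utility in the Grid home. The Grid home path, interface names, and subnets shown here are hypothetical; the output is only illustrative:
$ /u01/app/11.2.0/grid/bin/oifcfg getif
e1000g0  192.0.2.0     global  public
e1000g1  192.168.10.0  global  cluster_interconnect
e1000g2  192.168.11.0  global  cluster_interconnect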
Refer to the tables listed in Section 2.7, "Identifying Software Requirements" for the list of required packages for your operating system.
Enter the following commands to create default groups and users:
One system privileges group for all operating system-authenticated administration privileges, including Oracle RAC (if installed):
# groupadd -g 1000 oinstall
# groupadd -g 1031 dba
# useradd -u 1101 -g oinstall -G dba oracle
# mkdir -p /u01/app/11.2.0/grid
# mkdir -p /u01/app/oracle
# chown -R oracle:oinstall /u01
# chmod -R 775 /u01/
This set of commands creates a single installation owner, with required system privileges groups to grant the OraInventory system privileges (oinstall), and to grant the OSASM/SYSASM and OSDBA/SYSDBA system privileges. It also creates the Oracle base for both Oracle Grid Infrastructure and Oracle RAC, /u01/app/oracle
. It creates the Grid home (the location where Oracle Grid Infrastructure binaries are stored), /u01/app/11.2.0/grid
.
You must have space available either on a supported file system or on Oracle ASM for Oracle Clusterware files (voting disk files and Oracle Cluster Registries), and for Oracle Database files, if you install standalone or Oracle Real Application Clusters Databases. Creating Oracle Clusterware files on block or raw devices is no longer supported for new installations.
Review the relevant sections in Chapter 3 for the installation option you want to configure.
Start OUI from the root level of the installation media. For example:
./runInstaller
Select Install and Configure Grid Infrastructure for a Cluster, then select Typical Installation. In the installation screens that follow, enter the configuration information as prompted.
If you receive an installation verification error that cannot be fixed using a fixup script, then review Chapter 2, "Advanced Installation Oracle Grid Infrastructure for a Cluster Preinstallation Tasks" to find the section for configuring cluster nodes. After completing the fix, continue with the installation until it is complete.
Oracle Grid Infrastructure Installation Guide for Oracle Solaris explains how to configure a server in preparation for installing and configuring an Oracle Grid Infrastructure installation (Oracle Clusterware and Oracle Automatic Storage Management). It also explains how to configure a server and storage in preparation for an Oracle Real Application Clusters (Oracle RAC) installation.
Oracle Grid Infrastructure Installation Guide for Oracle Solaris provides configuration information for network and system administrators, and database installation information for database administrators (DBAs) who install and configure Oracle Clusterware and Oracle Automatic Storage Management in an Oracle Grid Infrastructure for a Cluster installation.
For customers with specialized system roles who intend to install Oracle RAC, this book is intended to be used by system administrators, network administrators, or storage administrators to configure a system in preparation for an Oracle Grid Infrastructure for a cluster installation, and complete all configuration tasks that require operating system root
privileges. When Oracle Grid Infrastructure installation and configuration is completed successfully, a system administrator should only need to provide configuration information and to grant access to the database administrator to run scripts as root
during an Oracle RAC installation.
This guide assumes that you are familiar with Oracle Database concepts. For additional information, refer to books in the Related Documents list.
For information about Oracle's commitment to accessibility, visit the Oracle Accessibility Program website at http://www.oracle.com/pls/topic/lookup?ctx=acc&id=docacc
.
Access to Oracle Support
Oracle customers have access to electronic support through My Oracle Support. For information, visit http://www.oracle.com/pls/topic/lookup?ctx=acc&id=info
or visit http://www.oracle.com/pls/topic/lookup?ctx=acc&id=trs
if you are hearing impaired.
For more information, refer to the following Oracle resources:
Oracle Clusterware and Oracle Real Application Clusters Documentation
This installation guide reviews steps required to complete an Oracle Clusterware and Oracle Automatic Storage Management installation, and to perform preinstallation steps for Oracle RAC.
If you intend to install Oracle Database or Oracle RAC, then complete preinstallation tasks as described in this installation guide, complete Oracle Grid Infrastructure installation, and review those installation guides for additional information. You can install either Oracle databases for a standalone server on an Oracle Grid Infrastructure installation, or install an Oracle RAC database. If you want to install an Oracle Restart deployment of Oracle Grid Infrastructure, then refer to Oracle Database Installation Guide for Oracle Solaris.
Most Oracle error message documentation is only available in HTML format. If you only have access to the Oracle Documentation media, then browse the error messages by range. When you find a range, use your browser's "find in page" feature to locate a specific message. When connected to the Internet, you can search for a specific error message using the error message search feature of the Oracle online documentation.
Installation Guides
Oracle Database Installation Guide for Oracle Solaris
Oracle Real Application Clusters Installation Guide for Linux and UNIX
Operating System-Specific Administrative Guides
Oracle Clusterware and Oracle Automatic Storage Management Administrative Guides
Oracle Real Application Clusters Administrative Guides
Generic Documentation
Printed documentation is available for sale in the Oracle Store at the following Web site:
To download free release notes, installation documentation, white papers, or other collateral, please visit the Oracle Technology Network (OTN). You must register online before using OTN; registration is free and can be done at the following Web site:
http://www.oracle.com/technetwork/index.html
If you already have a username and password for OTN, then you can go directly to the documentation section of the OTN Web site:
http://www.oracle.com/technetwork/indexes/documentation/index.html
Oracle error message documentation is available only in HTML. You can browse the error messages by range in the Documentation directory of the installation media. When you find a range, use your browser's search feature to locate a specific message. When connected to the Internet, you can search for a specific error message using the error message search feature of the Oracle online documentation.
The following text conventions are used in this document:
Convention | Meaning |
---|---|
boldface | Boldface type indicates graphical user interface elements associated with an action, or terms defined in text or the glossary. |
italic | Italic type indicates book titles, emphasis, or placeholder variables for which you supply particular values. |
monospace | Monospace type indicates commands within a paragraph, URLs, code in examples, text that appears on the screen, or text that you enter. |
This appendix explains the reasons for preinstallation tasks that you are asked to perform, and other installation concepts.
This appendix contains the following sections:
This section reviews concepts about Oracle Grid Infrastructure for a Cluster preinstallation tasks. It contains the following sections:
This section contains the following topics:
You must have a group whose members are given access to write to the Oracle Inventory (oraInventory
) directory, which is the central inventory record of all Oracle software installations on a server. Members of this group have write privileges to the Oracle central inventory (oraInventory
) directory, and are also granted permissions for various Oracle Clusterware resources, OCR keys, directories in the Oracle Clusterware home to which DBAs need write access, and other necessary privileges. By default, this group is called oinstall
. The Oracle Inventory group must be the primary group for Oracle software installation owners.
The oraInventory
directory contains the following:
A registry of the Oracle home directories (Oracle Grid Infrastructure and Oracle Database) on the system
Installation logs and trace files from installations of Oracle software. These files are also copied to the respective Oracle homes for future reference.
Other metadata inventory information regarding Oracle installations are stored in the individual Oracle home inventory directories, and are separate from the central inventory.
You can configure one group to be the access control group for the Oracle Inventory, for database administrators (OSDBA), and for all other access control groups used by Oracle software for operating system authentication. However, this group then must be the primary group for all users granted administrative privileges.
Note: If Oracle software is already installed on the system, then the existing Oracle Inventory group must be the primary group of the operating system user (oracle or grid ) that you use to install Oracle Grid Infrastructure. Refer to "Determining If the Oracle Inventory and Oracle Inventory Group Exists" to identify an existing Oracle Inventory group. |
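For example, on Oracle Solaris you can check for an existing central inventory and Oracle Inventory group by examining the oraInst.loc file; the output shown is only illustrative:
$ more /var/opt/oracle/oraInst.loc
inventory_loc=/u01/app/oraInventory
inst_group=oinstall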
The Oracle Inventory directory (oraInventory
) is the central inventory location for all Oracle software installed on a server.
The first time you install Oracle software on a system, you are prompted to provide an oraInventory directory path.
When you provide an Oracle base path when prompted during installation, or you have set the environment variable ORACLE_BASE
for the user performing the Oracle Grid Infrastructure installation, OUI creates the Oracle Inventory directory in the path ORACLE_BASE/../oraInventory
. For example, if ORACLE_BASE
is set to /opt/oracle/11
, then the Oracle Inventory directory is created in the path /opt/oracle/oraInventory
, so that the central inventory for all installations is outside of the Oracle base for this particular Oracle installation user.
If you neither enter a path nor set ORACLE_BASE
, then the Oracle Inventory directory is placed in the home directory of the user that is performing the installation. For example:
/home/oracle/oraInventory
As this placement can cause permission errors during subsequent installations with multiple Oracle software owners, Oracle recommends that you do not accept this option, and instead use an OFA-compliant path.
For new installations, Oracle recommends that you either create an Oracle path in compliance with OFA structure, such as /u01/app/oraInventory
, that is owned by an Oracle software owner, or you set the Oracle base environment variable to an OFA-compliant value.
If you set an Oracle base variable to a path such as /u01/app/grid
or /u01/app/oracle
, then the Oracle Inventory defaults to the path /u01/app/oraInventory
using correct permissions to allow all Oracle installation owners to write to this central inventory directory.
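For example, a minimal sequence that prepares an OFA-compliant Oracle base for the grid installation owner and sets the environment variable before starting OUI might look like the following. The paths, user, and group are taken from examples elsewhere in this guide and may differ on your system:
# mkdir -p /u01/app/grid
# chown grid:oinstall /u01/app/grid
# su - grid
$ ORACLE_BASE=/u01/app/grid; export ORACLE_BASE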
By default, the Oracle Inventory directory is not installed under the Oracle base directory for the installation owner. This is because all Oracle software installations share a common Oracle Inventory, so there is only one Oracle Inventory for all users, whereas there is a separate Oracle base for each user.
This section contains information about preparing an Oracle base directory.
During installation, you are prompted to specify an Oracle base location, which is owned by the user performing the installation. You can choose a location with an existing Oracle home, or choose another directory location that does not have the structure for an Oracle base directory.
Using the Oracle base directory path helps to facilitate the organization of Oracle installations, and helps to ensure that installations of multiple databases maintain an Optimal Flexible Architecture (OFA) configuration.
Even if you do not use the same software owner to install Grid Infrastructure (Oracle Clusterware and Oracle ASM) and Oracle Database, be aware that running the root.sh
script during the Oracle Grid Infrastructure installation changes ownership of the home directory where clusterware binaries are placed to root
, and all ancestor directories to the root level (/
) are also changed to root
. For this reason, the Oracle Grid Infrastructure for a cluster home cannot be in the same location as other Oracle software.
However, Oracle Grid Infrastructure for a standalone database (Oracle Restart) can be in the same location as other Oracle software.
See Also: Oracle Database Installation Guide for your platform for more information about Oracle Restart |
During installation, you are asked to identify the planned use for each network interface that OUI detects on your cluster node. Identify each interface as a public or private interface, or as an interface that you do not want Oracle Clusterware to use. Public and virtual IP addresses are configured on public interfaces. Private addresses are configured on private interfaces.
Refer to the following sections for detailed information about each address type:
The public IP address is assigned dynamically using DHCP, or defined statically in a DNS or in a hosts file. It uses the public interface (the interface with access available to clients).
Oracle Clusterware uses interfaces marked as private for internode communication. Each cluster node needs to have an interface that you identify during installation as a private interface. Private interfaces need to have addresses configured for the interface itself, but no additional configuration is required. Oracle Clusterware uses interfaces you identify as private for the cluster interconnect. If you identify multiple interfaces during installation for the private network, then Oracle Clusterware configures them with Redundant Interconnect Usage. Any interface that you identify as private must be on a subnet that connects to every node of the cluster. Oracle Clusterware uses all the interfaces you identify for use as private interfaces.
For the private interconnects, because of Cache Fusion and other traffic between nodes, Oracle strongly recommends using a physically separate, private network. If you configure addresses using a DNS, then you should ensure that the private IP addresses are reachable only by the cluster nodes.
After installation, if you modify interconnects on Oracle RAC with the CLUSTER_INTERCONNECTS
initialization parameter, then you must change it to a private IP address, on a subnet that is not used with a public IP address. Oracle does not support changing the interconnect to an interface using a subnet that you have designated as a public subnet.
You should not use a firewall on the network with the private network IP addresses, as this can block interconnect traffic.
The virtual IP (VIP) address is registered in the GNS, or the DNS. Select an address for your VIP that meets the following requirements:
The IP address and host name are currently unused (it can be registered in a DNS, but should not be accessible by a ping
command)
The VIP is on the same subnet as your public interface
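For example, a quick way to confirm that a planned VIP name resolves but is not yet in use is to attempt to ping it before installation. The host name below is a hypothetical placeholder, and the expected result is that the address does not respond:
$ /usr/sbin/ping node1-vip
no answer from node1-vip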
The GNS virtual IP address is a static IP address configured in the DNS. The DNS delegates queries to the GNS virtual IP address, and the GNS daemon responds to incoming name resolution requests at that address.
Within the subdomain, the GNS uses multicast Domain Name Service (mDNS), included with Oracle Clusterware, to enable the cluster to map host names and IP addresses dynamically as nodes are added and removed from the cluster, without requiring additional host configuration in the DNS.
To enable GNS, you must have your network administrator provide a set of IP addresses for a subdomain assigned to the cluster (for example, grid.example.com
), and delegate DNS requests for that subdomain to the GNS virtual IP address for the cluster, which GNS will serve. The set of IP addresses is provided to the cluster through DHCP, which must be available on the public network for the cluster.
See Also: Oracle Clusterware Administration and Deployment Guide for more information about Grid Naming Service |
Oracle Database 11g release 2 clients connect to the database using SCANs. The SCAN and its associated IP addresses provide a stable name for clients to use for connections, independent of the nodes that make up the cluster. SCAN addresses, virtual IP addresses, and public IP addresses must all be on the same subnet.
The SCAN is a virtual IP name, similar to the names used for virtual IP addresses, such as node1-vip
. However, unlike a virtual IP, the SCAN is associated with the entire cluster, rather than an individual node, and associated with multiple IP addresses, not just one address.
The SCAN works by being able to resolve to multiple IP addresses in the cluster handling public client connections. When a client submits a request, the SCAN listener listening on a SCAN IP address and the SCAN port is made available to a client. Because all services on the cluster are registered with the SCAN listener, the SCAN listener replies with the address of the local listener on the least-loaded node where the service is currently being offered. Finally, the client establishes connection to the service through the listener on the node where service is offered. All of these actions take place transparently to the client without any explicit configuration required in the client.
During installation, SCAN listeners are created on the nodes for the SCAN IP addresses provided. Oracle Net Services routes application requests to the least loaded instance providing the service. Because the SCAN addresses resolve to the cluster, rather than to a node address in the cluster, nodes can be added to or removed from the cluster without affecting the SCAN address configuration.
The SCAN should be configured so that it is resolvable either by using Grid Naming Service (GNS) within the cluster, or by using Domain Name Service (DNS) resolution. For high availability and scalability, Oracle recommends that you configure the SCAN name so that it resolves to three IP addresses. At a minimum, the SCAN must resolve to at least one address.
If you specify a GNS domain, then the SCAN name defaults to clustername-scan.GNS_domain. Otherwise, it defaults to clustername-scan.current_domain. For example, if you start Oracle Grid Infrastructure installation from the server node1
, the cluster name is mycluster
, and the GNS domain is grid.example.com
, then the SCAN Name is mycluster-scan.grid.example.com
.
Clients configured to use IP addresses for Oracle Database releases prior to Oracle Database 11g release 2 can continue to use their existing connection addresses; using SCANs is not required. When you upgrade to Oracle Clusterware 11g release 2 (11.2), the SCAN becomes available, and you should use the SCAN for connections to Oracle Database 11g release 2 or later databases. When an earlier version of Oracle Database is upgraded, it registers with the SCAN listeners, and clients can start using the SCAN to connect to that database. The database registers with the SCAN listener through the remote listener parameter in the init.ora file. The REMOTE_LISTENER parameter must be set to SCAN:PORT. Do not set it to a TNSNAMES alias with a single address with the SCAN as HOST=SCAN.
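For example, using the SCAN name from the preceding example and the default listener port (an assumption; substitute your own SCAN name and port), the remote listener setting would resemble the following initialization parameter entry rather than a TNSNAMES alias:
REMOTE_LISTENER=mycluster-scan.grid.example.com:1521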
The SCAN is optional for most deployments. However, clients using Oracle Database 11g release 2 and later policy-managed databases using server pools should access the database using the SCAN. This is because policy-managed databases can run on different servers at different times, so connecting to a particular node virtual IP address for a policy-managed database is not possible.
Oracle Clusterware 11g release 2 (11.2) is automatically configured with Cluster Time Synchronization Service (CTSS). This service provides automatic synchronization of all cluster nodes using the optimal synchronization strategy for the type of cluster you deploy. If you have an existing cluster time synchronization service, such as NTP, then CTSS starts in observer mode. Otherwise, CTSS starts in active mode to ensure that time is synchronized between cluster nodes. CTSS does not cause compatibility issues.
The CTSS module is installed as a part of Oracle Grid Infrastructure installation. CTSS daemons are started up by the OHAS daemon (ohasd
), and do not require a command-line interface.
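For example, after installation you can check whether CTSS is running in observer or active mode with the crsctl utility; the output shown is illustrative:
$ crsctl check ctss
CRS-4700: The Cluster Time Synchronization Service is in Observer mode.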
Understanding Oracle Automatic Storage Management Cluster File System
About Migrating Existing Oracle ASM Instances
About Converting Standalone Oracle ASM Installations to Clustered Installations
Oracle Automatic Storage Management has been extended to include a general purpose file system, called Oracle Automatic Storage Management Cluster File System (Oracle ACFS). Oracle ACFS is a new multi-platform, scalable file system, and storage management technology that extends Oracle Automatic Storage Management (Oracle ASM) functionality to support customer files maintained outside of the Oracle Database. Files supported by Oracle ACFS include application binaries and application reports. Other supported files are video, audio, text, images, engineering drawings, and other general-purpose application file data.
If you have an Oracle ASM installation from a prior release installed on your server, or in an existing Oracle Clusterware installation, then you can use Oracle Automatic Storage Management Configuration Assistant (ASMCA, located in the path Grid_home/bin) to upgrade the existing Oracle ASM instance to Oracle ASM 11g release 2 (11.2), and subsequently configure failure groups, Oracle ASM volumes, and Oracle Automatic Storage Management Cluster File System (Oracle ACFS).
Note: You must first shut down all database instances and applications on the node with the existing Oracle ASM instance before upgrading it. |
During installation, if you chose to use Oracle ASM and ASMCA detects that there is a prior Oracle ASM version installed in another home, then after installing the Oracle ASM 11g release 2 (11.2) binaries, you can start ASMCA to upgrade the existing Oracle ASM instance. You can then choose to configure an Oracle ACFS deployment by creating Oracle ASM volumes and using the upgraded Oracle ASM to create the Oracle ACFS.
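For example, assuming the Grid home path used elsewhere in this guide (substitute your own Grid home), you could start ASMCA manually as the Oracle Grid Infrastructure installation owner:
$ /u01/app/11.2.0/grid/bin/asmca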
On an existing Oracle Clusterware or Oracle RAC installation, if the prior version of Oracle ASM instances on all nodes is Oracle ASM 11g release 1, then you are provided with the option to perform a rolling upgrade of Oracle ASM instances. If the prior version of Oracle ASM instances on an Oracle RAC installation are from an Oracle ASM release prior to Oracle ASM 11g release 1, then rolling upgrades cannot be performed. Oracle ASM is then upgraded on all nodes to 11g release 2 (11.2).
If you have existing standalone Oracle ASM installations on one or more nodes that are member nodes of the cluster, then OUI proceeds to install Oracle Grid Infrastructure for a cluster.
If you place Oracle Clusterware files (OCR and voting disks) on Oracle ASM, then ASMCA is started at the end of the clusterware installation, and provides prompts for you to migrate and upgrade the Oracle ASM instance on the local node, so that you have an Oracle ASM 11g release 2 (11.2) installation.
On remote nodes, ASMCA identifies any standalone Oracle ASM instances that are running, and prompts you to shut down those Oracle ASM instances, and any database instances that use them. ASMCA then extends clustered Oracle ASM instances to all nodes in the cluster. However, disk group names on the cluster-enabled Oracle ASM instances must be different from existing standalone disk group names.
The following section provides a short overview of server pools. It contains the following topics:
See Also: Oracle Clusterware Administration and Deployment Guide for information about how to configure and administer server pools |
With Oracle Clusterware 11g release 2 (11.2) and later, resources managed by Oracle Clusterware are contained in logical groups of servers called server pools. Resources are hosted on a shared infrastructure and are contained within server pools. The resources are restricted with respect to their hardware resource (such as CPU and memory) consumption by policies, behaving as if they were deployed in a single-system environment.
You can choose to manage resources dynamically using server pools to provide policy-based management of resources in the cluster, or you can choose to manage resources using the traditional method of physically assigning resources to run on particular nodes.
Caution: By default, any named user may create a server pool. To restrict the operating system users that have this privilege, Oracle strongly recommends that you add specific users to the CRS Administrators list. |
See Also: Oracle Clusterware Administration and Deployment Guide for more information about adding users to the CRS Administrator's list. |
The Oracle Grid Infrastructure installation owner has permissions to create and configure server pools, using SRVCTL, Oracle Enterprise Manager Database Control, or Oracle Database Configuration Assistant (DBCA).
Policy-based management:
Enables dynamic capacity assignment when needed to provide server capacity in accordance with the priorities you set with policies
Enables allocation of resources by importance, so that applications obtain the required minimum resources, whenever possible, and so that lower priority applications do not take resources from more important applications
Ensures isolation where necessary, so that you can provide dedicated servers in a cluster for applications and databases
Applications and databases running in server pools do not share resources. Because of this, server pools isolate resources where necessary, but enable dynamic capacity assignments as required. Together with role-separated management, this capability addresses the needs of organizations that have standardized cluster environments but need to allow multiple administrator groups to share the common cluster infrastructure.
Server pools divide the cluster into groups of servers hosting the same or similar resources. They distribute a uniform workload (a set of Oracle Clusterware resources) over several servers in the cluster. For example, you can restrict Oracle databases to run only in a particular server pool. When you enable role-separated management, you can explicitly grant permission to operating system users to change attributes of certain server pools.
Top-level server pools:
Logically divide the cluster
Are always exclusive, meaning that one server can only reside in one particular server pool at a certain point in time
Each server pool has three attributes that are assigned when the server pool is created (see the example after this list):
MIN_SIZE: The minimum number of servers the server pool should contain. If the number of servers in a server pool is below the value of this attribute, then Oracle Clusterware automatically moves servers from elsewhere into the server pool until the number of servers reaches the attribute value, or until there are no free servers available from less important pools.
MAX_SIZE: The maximum number of servers the server pool may contain.
IMPORTANCE: A number from 0 to 1000 (0 being least important) that ranks a server pool among all other server pools in a cluster.
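For example (as referenced in the list above), a hypothetical server pool that must contain at least two servers, may contain at most four servers, and has an importance of 100 could be created and verified with commands similar to the following; the pool name is a placeholder:
$ srvctl add srvpool -g mypool -l 2 -u 4 -i 100
$ srvctl config srvpool -g mypool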
When Oracle Clusterware is installed, two server pools are created automatically: Generic and Free. All servers in a new installation are assigned to the Free server pool, initially. Servers move from Free to newly defined server pools automatically. When you upgrade Oracle Clusterware, all nodes are assigned to the Generic server pool, to ensure compatibility with database releases before Oracle Database 11g release 2 (11.2).
The Free server pool contains servers that are not assigned to any other server pools. The attributes of the Free server pool are restricted, as follows:
SERVER_NAMES, MIN_SIZE, and MAX_SIZE cannot be edited by the user
IMPORTANCE and ACL can be edited by the user
The Generic server pool stores pre-11g release 2 (11.2) databases and administrator-managed databases that have fixed configurations. Additionally, the Generic server pool contains servers that match either of the following:
Servers that you specified in the HOSTING_MEMBERS attribute of all resources of the application resource type
Servers with names you specified in the SERVER_NAMES attribute of the server pools that list the Generic server pool as a parent server pool
The Generic server pool's attributes are restricted, as follows:
No one can modify configuration attributes of the Generic server pool (all attributes are read-only)
When you specify a server name in the HOSTING_MEMBERS attribute, Oracle Clusterware only allows it if the server is:
Online and exists in the Generic server pool
Online and exists in the Free server pool, in which case Oracle Clusterware moves the server into the Generic server pool
Online and exists in any other server pool and the client is either a CRS Administrator (the user role that controls resource administration for server pools) or is allowed to use the server pool's servers, in which case, the server is moved into the Generic server pool
Offline and the client is a CRS Administrator
When you register a child server pool with the Generic server pool, Oracle Clusterware only allows it if the server names pass the same requirements as previously specified for the resources.
Servers are initially considered for assignment into the Generic server pool at cluster startup time or when a server is added to the cluster, and only after that to other server pools.
With an out-of-place upgrade, the installer installs the newer version in a separate Oracle Clusterware home. Both versions of Oracle Clusterware are on each cluster member node, but only one version is active.
A rolling upgrade avoids downtime and ensures continuous availability while the software is upgraded to a new version.
If you have separate Oracle Clusterware homes on each node, then you can perform an out-of-place upgrade on all nodes, or perform an out-of-place rolling upgrade, so that some nodes are running Oracle Clusterware from the earlier version Oracle Clusterware home, and other nodes are running Oracle Clusterware from the new Oracle Clusterware home.
An in-place upgrade of Oracle Clusterware 11g release 2 is not supported.
See Also: Appendix E, "How to Upgrade to Oracle Grid Infrastructure 11g Release 2" for instructions on completing rolling upgrades |
This appendix provides instructions for how to complete configuration tasks manually that Cluster Verification Utility (CVU) and the installer (OUI) normally complete during installation. Use this appendix as a guide if you cannot use the fixup script.
This appendix contains the following information:
Passwordless SSH configuration is a mandatory installation requirement. SSH is used during installation to configure cluster member nodes, and SSH is used after installation by configuration assistants, Oracle Enterprise Manager, Opatch, and other features.
Automatic Passwordless SSH configuration using OUI creates RSA encryption keys on all nodes of the cluster. If you have system restrictions that require you to set up SSH manually, such as using DSA keys, then use this procedure as a guide to set up passwordless SSH.
In the examples that follow, the Oracle software owner listed is the grid
user.
This section contains the following:
To determine if SSH is running, enter the following command:
$ pgrep sshd
If SSH is running, then the response to this command is one or more process ID numbers. In the home directory of the installation software owner (grid
, oracle
), use the command ls -al
to ensure that the .ssh
directory is owned and writable only by the user.
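For example, a listing similar to the following (illustrative output only) shows a .ssh directory owned by the grid user with owner-only permissions:
$ ls -al ~/.ssh
drwx------   2 grid     oinstall     512 Jan 15 10:05 .
drwxr-xr-x   9 grid     oinstall     512 Jan 15 10:02 ..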
You need either an RSA or a DSA key for the SSH protocol. RSA is used with the SSH 1.5 protocol, while DSA is the default for the SSH 2.0 protocol. With OpenSSH, you can use either RSA or DSA. The instructions that follow are for SSH1. If you have an SSH2 installation, and you cannot use SSH1, then refer to your SSH distribution documentation to configure SSH1 compatibility or to configure SSH2 with DSA.
To configure SSH, you must first create RSA or DSA keys on each cluster node, and then copy all the keys generated on all cluster node members into an authorized keys file that is identical on each node. Note that the SSH files must be readable only by root
and by the software installation user (oracle
, grid
), as SSH ignores a private key file if it is accessible by others. In the examples that follow, the DSA key is used.
You must configure SSH separately for each Oracle software installation owner that you intend to use for installation.
To configure SSH, complete the following:
Complete the following steps on each node:
Log in as the software owner (in this example, the grid
user).
To ensure that you are logged in as grid
, and to verify that the user ID matches the expected user ID you have assigned to the grid
user, enter the commands id
and id grid
. Ensure that the user and group IDs of the Oracle software owner and of the user terminal window process you are using are identical. For example:
$ id
uid=502(grid) gid=501(oinstall) groups=501(oinstall),502(grid,asmadmin,asmdba)
$ id grid
uid=502(grid) gid=501(oinstall) groups=501(oinstall),502(grid,asmadmin,asmdba)
If necessary, create the .ssh
directory in the grid
user's home directory, and set permissions on it to ensure that only the grid user has read and write permissions:
$ mkdir ~/.ssh
$ chmod 700 ~/.ssh
Enter the following command:
$ /usr/bin/ssh-keygen -t dsa
At the prompts, accept the default location for the key file (press Enter).
Note: SSH with passphrase is not supported for Oracle Clusterware 11g release 2 and later releases. |
This command writes the DSA public key to the ~/.ssh/id_dsa.pub
file and the private key to the ~/.ssh/id_dsa
file.
Never distribute the private key to anyone not authorized to perform Oracle software installations.
Repeat steps 1 through 4 on each node that you intend to make a member of the cluster, using the DSA key.
Complete the following steps:
On the local node, change directories to the .ssh
directory in the Oracle Grid Infrastructure owner's home directory (typically, either grid
or oracle
).
Then, add the DSA key to the authorized_keys
file using the following commands:
$ cat id_dsa.pub >> authorized_keys
$ ls
In the .ssh directory, you should see the id_dsa.pub
keys that you have created, and the file authorized_keys
.
On the local node, use SCP (Secure Copy) or SFTP (Secure FTP) to copy the authorized_keys
file to the oracle
user .ssh
directory on a remote node. The following example is with SCP, on a node called node2, with the Oracle Grid Infrastructure owner grid
, where the grid
user path is /home/grid
:
[grid@node1 .ssh]$ scp authorized_keys node2:/home/grid/.ssh/
You are prompted to accept a DSA key. Enter Yes, and you see that the node you are copying to is added to the known_hosts
file.
When prompted, provide the password for the grid user, which should be the same on all nodes in the cluster. The authorized_keys
file is copied to the remote node.
Your output should be similar to the following, where xxx
represents parts of a valid IP address:
[grid@node1 .ssh]$ scp authorized_keys node2:/home/grid/.ssh/
The authenticity of host 'node2 (xxx.xxx.173.152) can't be established.
DSA key fingerprint is 7e:60:60:ae:40:40:d1:a6:f7:4e:zz:me:a7:48:ae:f6:7e.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node1,xxx.xxx.173.152' (dsa) to the list of known hosts
grid@node2's password:
authorized_keys       100%     828     7.5MB/s     00:00
Using SSH, log in to the node where you copied the authorized_keys
file. Then change to the .ssh
directory, and using the cat
command, add the DSA keys for the second node to the authorized_keys
file, pressing Enter when you are prompted for a password, so that passwordless SSH is set up:
[grid@node1 .ssh]$ ssh node2
[grid@node2 grid]$ cd .ssh
[grid@node2 ssh]$ cat id_dsa.pub >> authorized_keys
Repeat steps 2 and 3 from each node to each other member node in the cluster.
When you have added keys from each cluster node member to the authorized_keys
file on the last node you want to have as a cluster node member, then use scp
to copy the authorized_keys
file with the keys from all nodes back to each cluster node member, overwriting the existing version on the other nodes.
To confirm that you have all nodes in the authorized_keys
file, enter the command more authorized_keys
, and determine if there is a DSA key for each member node. The file lists the type of key (ssh-dsa
), followed by the key, and then followed by the user and server. For example:
ssh-dsa AAAABBBB . . . = grid@node1
Note: Thegrid user's /.ssh/authorized_keys file on every node must contain the contents from all of the /.ssh/id_dsa.pub files that you generated on all cluster nodes. |
After you have copied the authorized_keys
file that contains all keys to each node in the cluster, complete the following procedure, in the order listed. In this example, the Oracle Grid Infrastructure software owner is named grid
:
On the system where you want to run OUI, log in as the grid
user.
Use the following command syntax, where hostname1
, hostname2
, and so on, are the public host names (alias and fully qualified domain name) of nodes in the cluster to run SSH from the local node to each node, including from the local node to itself, and from each node to each other node:
[grid@nodename]$ ssh hostname1 date
[grid@nodename]$ ssh hostname2 date
.
.
.
For example:
[grid@node1 grid]$ ssh node1 date
The authenticity of host 'node1 (xxx.xxx.100.101)' can't be established.
DSA key fingerprint is 7z:60:60:zz:48:48:z1:a0:f7:4e.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node1,xxx.xxx.100.101' (DSA) to the list of known hosts.
Mon Dec 4 11:08:13 PST 2006
[grid@node1 grid]$ ssh node1.example.com date
The authenticity of host 'node1.example.com (xxx.xxx.100.101)' can't be established.
DSA key fingerprint is 7z:60:60:zz:48:48:z1:a0:f7:4e.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node1.example.com,xxx.xxx.100.101' (DSA) to the list of known hosts.
Mon Dec 4 11:08:13 PST 2006
[grid@node1 grid]$ ssh node2 date
Mon Dec 4 11:08:35 PST 2006
.
.
.
At the end of this process, the public host name for each member node should be registered in the known_hosts
file for all other cluster nodes.
If you are using a remote client to connect to the local node, and you see a message similar to "Warning: No xauth data; using fake authentication data for X11 forwarding," then this means that your authorized keys file is configured correctly, but your SSH configuration has X11 forwarding enabled. To correct this issue, proceed to "Setting Display and X11 Forwarding Configuration".
Repeat step 2 on each cluster node member.
If you have configured SSH correctly, then you can now use the ssh
or scp
commands without being prompted for a password. For example:
[grid@node1 ~]$ ssh node2 date
Mon Feb 26 23:34:42 UTC 2009
[grid@node1 ~]$ ssh node1 date
Mon Feb 26 23:34:48 UTC 2009
If any node prompts for a password, then verify that the ~/.ssh/authorized_keys
file on that node contains the correct public keys, and that you have created an Oracle software owner with identical group membership and IDs.
This section contains the following:
Note: The kernel parameter and shell limit values shown in the following section are recommended values only. For production database systems, Oracle recommends that you tune kernel resources to optimize the performance of the system. Refer to your operating system documentation for more information about kernel resource management. |
On Oracle Solaris 10 and later operating systems, verify that the project.max parameters shown in the following table are set to values greater than or equal to the recommended value shown.
The procedure following the table describes how to verify and set the values.
Note: In Oracle Solaris 10, you are not required to make changes to the/etc/system file to implement the System V IPC. The /etc/system parameters are provided here only for reference. |
On Oracle Solaris 10 and later releases, use the following procedure to view the current value specified for resource controls, and to change them if necessary:
To view the current values of the resource control, enter the following commands:
$ id -p           // to verify the project id
uid=100(oracle) gid=100(dba) projid=1 (group.dba)
$ prctl -n project.max-shm-memory -i project group.dba
$ prctl -n project.max-sem-ids -i project group.dba
If you must change any of the current values, then:
To modify the value of max-shm-memory to 6 GB:
# prctl -n project.max-shm-memory -v 6gb -r -i project group.dba
To modify the value of max-sem-ids to 256:
# prctl -n project.max-sem-ids -v 256 -r -i project group.dba
Note: When you use the commandprctl (Resource Control) to change system parameters, you do not need to restart the system for these parameter changes to take effect. However, the changed parameters do not persist after a system restart. |
Use the following procedure to modify the resource control project settings, so that they persist after a system restart:
By default, Oracle instances are run as the oracle
user of the dba
group. A project with the name group.dba
is created to serve as the default project for the oracle user. Run the command id
to verify the default project for the oracle
user:
# su - oracle
$ id -p
uid=100(oracle) gid=100(dba) projid=100(group.dba)
$ exit
To set the maximum shared memory size to 4 GB, run the projmod
command:
# projmod -sK "project.max-shm-memory=(privileged,4G,deny)" group.dba
Alternatively, add the resource control value project.max-shm-memory=(privileged,4294967295,deny)
to the last field of the project entries for the Oracle project.
After these steps are complete, check the values for the /etc/project
file using the following command:
# cat /etc/project
The output should be similar to the following:
system:0::::
user.root:1::::
noproject:2::::
default:3::::
group.staff:10::::
group.dba:100:Oracle default project:::project.max-shm-memory=(privileged,4294967295,deny)
To verify that the resource control is active, check process ownership, and run the commands id
and prctl
, as in the following example:
# su - oracle
$ id -p
uid=100(oracle) gid=100(dba) projid=100(group.dba)
$ prctl -n project.max-shm-memory -i process $$
process: 5754: -bash
NAME                      PRIVILEGE    VALUE    FLAG    ACTION    RECIPIENT
project.max-shm-memory    privileged   4.00GB   -       deny
Note: The value for the maximum shared memory depends on the SGA requirements and should be set to a value greater than the SGA size. For additional information, refer to the Solaris Tunable Parameters Reference Manual. |