Adding or Deleting a Node

One of the jobs of a DBA is adding and removing nodes from a RAC environment as capacity demands change. Although you should add a node of a similar specification, it is possible to add a node of a higher or lower specification.

The first stage is to configure the operating system and make sure any necessary drivers are installed. Also make sure that the new node can see the shared disks available to the existing RAC.
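
A quick way to confirm that the new node can see the shared disks is the Cluster Verification Utility's shared storage check. A minimal sketch, run as the oracle user from one of the existing nodes (the node names follow this example, and the ASMLib check only applies if you use ASMLib):

      # check shared storage accessibility across all three nodes
      cluvfy comp ssa -n rac1,rac2,rac3 -verbose

      # if ASMLib is in use, confirm the new node lists the same disks (run as root on rac3)
      /etc/init.d/oracleasm listdisks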

I am going to presume we have a two-node RAC environment already set up, and we are going to add a third node.

Pre-Install Checking

You used the Cluster Verification Utility when installing the RAC environment; the tool checks that the node has been properly prepared for a RAC deployment. You can run the command either from the new node or from any of the existing nodes in the cluster

pre-install check run from the new node:        runcluvfy.sh stage -pre crsinst -n rac1,rac2,rac3 -r 10gR2
pre-install check run from an existing node:    cluvfy stage -pre crsinst -n rac1,rac2,rac3 -r 10gR2

Make sure that you fix any highlighted problems before continuing.

Install CRS

Cluster Ready Services (CRS) should be installed first; this allows the node to become part of the cluster. Adding the new node can be started from any of the existing nodes

  1. Log into any of the existing nodes as user oracle and run the command below. The script starts the OUI GUI tool, which should already see the existing cluster and fill in the details for you

      $ORA_CRS_HOME/oui/bin/addnode.sh
  2. In the specify cluster nodes to add to installation screen, enter the public, private and virtual hostnames for the new node
  3. Click next to see a summary page
  4. Click install; the installer will copy the files from the existing node to the new node. Once copied, you will be asked to run orainstRoot.sh and root.sh as user root
  5. Run orainstRoot.sh and root.sh on the new node and rootaddnode.sh on the node that you are running the installation from.

      
      orainstRoot.sh  - sets up the Oracle inventory on the new node and sets the ownership and permissions of the inventory
      root.sh         - checks whether the Oracle CRS stack is already configured on the new node, creates the /etc/oracle directory, adds the relevant OCR keys to the cluster registry, then adds the CRS daemons and starts CRS on the new node
      rootaddnode.sh  - configures the OCR to include the new node as part of the cluster

  6. Click next to complete the installation. Now you need to configure Oracle Notification Services (ONS); the remote port can be identified with the below command

      cat $ORA_CRS_HOME/opmn/conf/ons.config
  7. Now run the racgons utility, supplying the <remote_port> number obtained above

      $ORA_CRS_HOME/bin/racgons add_config rac3:<remote_port>
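
Before moving on, it is worth confirming that the new node really has joined the cluster. A minimal check from any node, using the same utilities this article uses later when removing a node:

      # list the cluster members and their node numbers, rac3 should now appear
      olsnodes -n

      # the CRS resources for rac3 (VIP, GSD, ONS) should be visible
      crs_stat -t

      # optionally run the post-install clusterware verification against all three nodes
      cluvfy stage -post crsinst -n rac1,rac2,rac3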

Installing Oracle DB Software

Once the CRS has been installed and the new node is in the cluster, it is time to install the Oracle DB software. Again you can use any of the existing nodes to install the software.

  1. Log into any of the existing nodes as user oracle and run the command below. The script starts the OUI GUI tool, which should already see the existing cluster and fill in the details for you

      $ORACLE_HOME/oui/bin/addnode.sh
  2. Click next on the welcome screen to open the specify cluster nodes to add to installation screen. You should see a list of all the existing nodes in the cluster; select the new node and click next
  3. Check the summary page then click install to start the installation
  4. The files will be copied to the new node; the installer will then ask you to run root.sh on the new node as user root. Do so, then click OK to finish off the installation
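
A simple sanity check is to confirm that the Oracle binaries now exist on the new node (this assumes the same $ORACLE_HOME path on every node):

      # from any existing node, the database binaries should now be present on rac3
      ssh rac3 ls -l $ORACLE_HOME/bin/oracle $ORACLE_HOME/bin/sqlplus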

Configuring the Listener

Now it is time to configure the listener on the new node

  1. Log in as user oracle, set your DISPLAY environment variable, then start the Network Configuration Assistant

    $ORACLE_HOME/bin/netca
  2. Choose cluster management
  3. Choose listener
  4. Choose add
  5. Choose LISTENER as the name

These steps will add a listener on rac3 as LISTENER_rac3
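
To confirm the listener is up on the new node, run the following (the listener name follows the LISTENER_rac3 convention mentioned above):

      # on rac3, the new listener should be running
      lsnrctl status LISTENER_rac3

      # the rac3 nodeapps (which include the listener) can also be checked from any node
      srvctl status nodeapps -n rac3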

Create the Database Instance

Run the below to create the database instance on the new node

  1. Login as oracle on the new node, set the environment to database home and then run the database creation assistant (DBCA)

    $ORACLE_HOME/bin/dbca
  2. In the welcome screen choose Oracle Real Application Clusters database to create the instance and click next
  3. Choose instance management and click next
  4. Choose add instance and click next
  5. Select RACDB (or whatever name you gave your RAC environment) as the database, enter the SYSDBA username and password, then click next
  6. You should see a list of existing instances; click next, then on the following screen enter ORARAC3 as the instance name and choose rac3 as the node name (substitute the names above for your own environment's naming convention)
  7. The database instance will now be created. Click next in the database storage screen, then choose yes when asked to extend ASM
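
Once DBCA finishes, the new instance can be confirmed from any node (using the RACDB and ORARAC3 names from this example):

      # all three instances should now be reported as running
      srvctl status database -d RACDB

      # or, with your environment set for one of the RACDB instances, confirm from inside the database
      echo "select instance_name, status from gv\$instance;" | sqlplus -S / as sysdba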

Removing a Node

Removing a node is similar to above but in reverse order

  1. Delete the instance on the node to be removed
  2. Clean up ASM
  3. Remove the listener from the node to be removed
  4. Remove the node from the database
  5. Remove the node from the clusterware

You can delete the instance by using the database creation assistant (DBCA): invoke the program, choose the RAC database, choose instance management and then delete instance, enter the SYSDBA username and password, then choose the instance to delete.
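
After DBCA completes, check that the instance really is gone (again using the RACDB and ORARAC3 names from this example):

      # only the remaining instances should be reported
      srvctl status database -d RACDB

      # no CRS resource for the deleted instance should be listed
      crs_stat | grep -i orarac3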

To clean up ASM follow the below steps

  1. From node 1 run the below commands to stop and remove ASM on the node to be removed

      srvctl stop asm -n rac3
      srvctl remove asm -n rac3
  2. Now run the following on the node to be removed

      cd $ORACLE_HOME/admin
      rm -rf +ASM
      
      cd $ORACLE_HOME/dbs
      rm -f *ASM*
  3. Check that the /etc/oratab file has no ASM entries; if there are any, remove them
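
A minimal sketch of that check on the node being removed (GNU sed assumed; back the file up before editing):

      # any remaining ASM entry in /etc/oratab will show up here
      grep "^+ASM" /etc/oratab

      # remove the entry with an editor, or with sed
      cp /etc/oratab /etc/oratab.bak
      sed -i "/^+ASM/d" /etc/oratab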

Now remove the listener for the node to be removed

  1. Log in as user oracle, set your DISPLAY environment variable, then start the Network Configuration Assistant

    $ORACLE_HOME/bin/netca
  2. Choose cluster management
  3. Choose listener
  4. Choose Remove
  5. Choose LISTENER as the name
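
A quick check that the listener really has been removed from the node:

      # no tnslsnr process should remain on rac3
      ps -ef | grep tnslsnr | grep -v grep

      # from any node, no listener resource for rac3 should be listed
      crs_stat | grep -i rac3 | grep -i lsnr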

Next we remove the node from the database

  1. Run the below commands from the node to be removed

    cd $ORACLE_HOME/bin
    ./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={rac3}" -local
    ./runInstaller
  2. Choose to deinstall products and select the dbhome
  3. Run the following from node 1

      cd $ORACLE_HOME/oui/bin
      ./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={rac1,rac2}"

Lastly we remove the clusterware software

  1. Run the following from node 1; you obtain the port number from the remoteport entry in the ons.config file in $ORA_CRS_HOME/opmn/conf

      $CRS_HOME/bin/racgons remove_config rac3:6200
  2. Run the following from the node to be removed as user root

      cd $CRS_HOME/install
      ./rootdelete.sh
  3. Now run the following from node 1 as user root. Obtain the node number with the first command, then pass the node name and number to rootdeletenode.sh

      $CRS_HOME/bin/olsnodes -n
      cd $CRS_HOME/install
      ./rootdeletenode.sh rac3,3
  4. Now run the below from the node to be removed as user oracle

      cd $CRS_HOME/oui/bin
      ./runInstaller -updateNodeList ORACLE_HOME=$CRS_HOME "CLUSTER_NODES={rac3}" CRS=TRUE -local
      ./runInstaller
  5. Choose to deinstall software and remove the CRS_HOME
  6. Run the following from node 1 as user oracle

      cd $CRS_HOME/oui/bin
      ./runInstaller -updateNodeList ORACLE_HOME=$CRS_HOME "CLUSTER_NODES={rac1,rac2}" CRS=TRUE
  7. Check that the node has been removed. The first command should report "invalid node", the second should return no output, and the last should list only nodes rac1 and rac2

      srvctl status nodeapps -n rac3
      crs_stat |grep -i rac3
      olsnodes -n