Sun Cluster 3.1 Common Tasks

Shut down a node within the cluster

 

#scswitch -S -h <host>
#shutdown -g0 -y -i5

Note:
-S     evacuates all device services and resource groups from the specified node

Shut down all nodes in the cluster

 

#scshutdown -g0 -y      # This command will bring all nodes down.

Boot a node without entering the cluster

  ok> boot -x

Put a node into maintenance mode

  1. Evacuate all resource groups and device services from the node
    # scswitch -S -h <host>
 
2. Shut down the node
    # shutdown -i5 -g0 -y
 
3. Put the node into maintenance mode (run this command from another node)
    # scconf -c -q node=sun2,maintstate
    # scstat -q

    check that the node is reported in maintenance state (maintstate)
 
4. You can now "boot -x" and work on the node
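
To bring the node back out of maintenance mode afterwards, boot it back into the cluster and reset its quorum configuration. A minimal sketch, assuming node sun2 as in step 3 (run from another cluster node; check your release's documentation for the exact reset syntax):
    # scconf -c -q node=sun2,reset
    # scstat -q

    check that the node's quorum vote has been restored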

Add and remove a quorum device

 
  1. list the devices available to obtain the device number
    #scdidadm -L
  2. create the quorum device
    #scconf -a -q globaldev=d8

    note: if you get the error message "unable to scrub disk", use the scgdevs command to add the device to the global device namespace.
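
    For example, a minimal sketch (scgdevs is run with no arguments on the node that can see the new disk; it populates the global device namespace, after which the DID devices can be listed again):
    #scgdevs
    #scdidadm -L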

 
  1. list the quorum devices
    #scstat -q
  2. remove the quorum device
    #scconf -r -q globaldev=d8

Replace a faulty disk which is a quorum device

 

1. Create a new temporary quorum device on another shared disk
2. Remove the quorum device that uses the faulty disk
3. Replace the disk
4. Create a quorum device on the new disk
5. Remove the temporary quorum device (see the worked sketch below)
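
A worked sketch of the steps above, assuming the faulty quorum device is DID device d8 and d9 is a spare shared disk (both device IDs are placeholders; list the real IDs with scdidadm -L):

  1. create a temporary quorum device on the spare disk
    #scconf -a -q globaldev=d9
  2. remove the quorum device on the faulty disk
    #scconf -r -q globaldev=d8
  3. physically replace the disk (run scgdevs if the new disk needs adding to the global device namespace)
  4. create the quorum device on the new disk
    #scconf -a -q globaldev=d8
  5. remove the temporary quorum device
    #scconf -r -q globaldev=d9
  6. verify the quorum configuration
    #scstat -q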

Adding a new Veritas volume

The assigned device driver major numbers must match: the vxio entry in /etc/name_to_major should be the same on all nodes (a quick check is shown below). Be careful if the boot disks are encapsulated; this case is more complex, so consult the user guide.
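
A quick way to make this check (run on every node and compare the major numbers):
    #grep vxio /etc/name_to_major
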
After creating a Veritas volume you must sync the new volume with the cluster. If you get an error where you cannot newfs/mkfs a newly created volume, the cluster does not yet know about it and you must run the following command:
               
# scconf -c -D name=<disk group>,sync

 

1. Create a volume
    #vxassist -g nfsdg make vol-01 500m
 
2. resync the cluster with volume manager
    #scconf -c -D name=<disk group>,sync
 
3. Create filesystem on new volume
    #mkfs -F vxfs -o largefiles /dev/vx/rdsk/nfsdg/vol-01
 
Note: if you get an error where the device cannot be found, the cluster is not synced with the volume manager; use the above command to resync
 
4. Create the mount point (on both nodes)
    #mkdir /global/nfs
 
5. Create the vfstab entry
    /dev/vx/dsk/nfsdg/vol-01 /dev/vx/rdsk/nfsdg/vol-01 /global/nfs vxfs 2 yes global,log
 
6. Mount the filesystem on both nodes
    #mount /global/nfs
 
7. Switch the disk group to another node (remember that only the disk group switches)
    #scswitch -z -D nfsdg -h sun2
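
To confirm which node masters the disk group after the switch, the device group status can be checked, for example:
    #scstat -D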

Remove and unregister a disk group

 

1. Make sure that all resources using the disk group are offline and removed

2. Take the device group offline
    #scswitch -F -D appdg
 
3. Unregister the disk group
    #scconf -r -D name=appdg

Note: if you have problems with the device being busy, import the disk group temporarily with the vxdg -t import command, then follow from step 1 (see the sketch below)
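
A minimal sketch of that temporary import, assuming the disk group is appdg (the -t flag makes the import non-persistent, so it does not survive a reboot):
    #vxdg -t import appdg
    then repeat steps 1 to 3 above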
 

Choosing Between HAStorage and HAStoragePlus

To determine whether to create HAStorage or HAStoragePlus resources within a data service resource group, consider the following criteria.

Disabling a Resource and Moving the Resource group into UNMANAGED State

 

Disable the resource
# scswitch -n -j <resource>

Offline the resource group
# scswitch -F -g <resource-group>

Move the resource group to the unmanaged state
# scswitch -u -g <resource-group>
 

Create a failover resource group with HAStoragePlus and LogicalHostname resources

 

register resource types
# scrgadm -a -t SUNW.HAStoragePlus
# scrgadm -a -t SUNW.LogicalHostname

create the FAILOVER resource group
# scrgadm -a -g rg_oracle -h sun1,sun2

create the LogicalHostname failover resource
# scrgadm -a -L -g rg_oracle -j oraserver -l oraserver -n ipmp0@sun1,ipmp0@sun2

create the HAStoragePlus failover resource
# scrgadm -a -g rg_oracle -j hasp_data01 -t SUNW.HAStoragePlus \
> -x FileSystemMountPoints=/oracle/data01 \
> -x AffinityOn=true

enable the resources (disabled by default)
# scswitch -e -j oraserver
# scswitch -e -j hasp_data01

online the resource group
# scswitch -Z -g rg_oracle

Note: the IPMP network group was created as described in the IP Networking groups section, and the mount point entry was added to the /etc/vfstab file
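
To verify that the group and its resources are online on the expected node, and to test a switchover to the other node, for example:
# scstat -g
# scswitch -z -g rg_oracle -h sun2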

Create a scalable resource group with HAStoragePlus and SharedAddress resources

 

register resource types
# scrgadm -a -t SUNW.HAStoragePlus
# scrgadm -a -t SUNW.SharedAddress

create the SCALABLE resource group
# scrgadm -a -g rg_oracle \
> -y maximum_primaries=? \
> -y desired_primaries=? \
> -y RG_dependencies=<depend-resource-group> \
> -h sun1,sun2

create the SharedAddress resource
# scrgadm -a -S -j oraserver -g rg_oracle -l oraserver -n ipmp0@sun1,ipmp0@sun2

create the HAStoragePlus resource
# scrgadm -a -g rg_oracle -j hasp_data01 -t SUNW.HAStoragePlus \
> -x FileSystemMountPoints=/oracle/data01 \
> -x AffinityOn=true

enable the resources (disabled by default)
# scswitch -e -j oraserver
# scswitch -e -j hasp_data01

online the resource group
# scswitch -Z -g rg_oracle

Note: the IPMP network group was created as described in the IP Networking groups section, and the mount point entry was added to the /etc/vfstab file

Remove a resource type

 

Disable each resource of the resource type
# scswitch -n -j <resource>

remove each resource of that resource type
# scrgadm -r -j <resource>

remove the resource type
# scrgadm -r -t <resource-type>
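
A worked sketch using the Apache data service as an example (the resource name apache-res is a placeholder; repeat the first two commands for every resource of the type):
# scswitch -n -j apache-res
# scrgadm -r -j apache-res
# scrgadm -r -t SUNW.apache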

Installing Agents

 

#scinstall -ik -s apache -d <path to where the agent is>
or
#scinstall (select option 3)

IP Networking groups
 
Use the below entry in the /etc/hostname.??? file to create an automatic IPMP network group

 

#cat /etc/hostname.qfe0
<hostname> group ipmp0 deprecated -failover
 

note: deprecated means the address will not be used as a source address for outbound packets; clients talk to the logical (virtual) IP address instead. -failover marks it as a test address that stays on this adapter and does not fail over.
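
For comparison, a common two-adapter IPMP group puts deprecated -failover on dedicated test addresses and leaves the data address able to fail over. A sketch, assuming host sun1 with test-address hostnames sun1-qfe0 and sun1-qfe1 defined in /etc/hosts (all names are placeholders):

#cat /etc/hostname.qfe0
sun1 netmask + broadcast + group ipmp0 up addif sun1-qfe0 deprecated -failover netmask + broadcast + up

#cat /etc/hostname.qfe1
sun1-qfe1 netmask + broadcast + group ipmp0 deprecated -failover up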