Setup Oracle Solaris Cluster (part2)

May 23, 2016 11:20 AM

_________ .____     ____ ___  _________________________________________ 
\_   ___ \|    |   |    |   \/   _____/\__    ___/\_   _____/\______   \
/    \  \/|    |   |    |   /\_____  \   |    |    |    __)_  |       _/
\     \___|    |___|    |  / /        \  |    |    |        \ |    |   \
 \______  /_______ \______/ /_______  /  |____|   /_______  / |____|_  /
        \/        \/                \/                    \/         \/ 





This document only covers the technical setup of Oracle Solaris Cluster; installation planning and preparation are out of scope. This tutorial continues from the previous setup (part 1). The installation is interactive, with step-by-step prompts that configure the cluster on all nodes. Typically, you must prepare the cluster name, the cluster nodes, the cluster transport and adapters, and the quorum configuration and check.
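If scinstall is not already on your PATH, here is a minimal sketch to pick up the cluster binaries and confirm the framework packages are installed (the SUNWsc* package names are what I remember from the media and may differ on yours):

    # add the cluster binaries to the PATH for this shell
    PATH=$PATH:/usr/cluster/bin; export PATH

    # confirm the Solaris Cluster framework packages are present
    pkginfo | grep SUNWsc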
Below is the step-by-step cluster configuration. Run scinstall from the /usr/cluster/bin directory. I marked my comments with the '##' characters:

bash-3.2# scinstall

## Choose 1 to create a cluster
  *** Main Menu ***

    Please select from one of the following (*) options:

      * 1) Create a new cluster or add a cluster node
        2) Configure a cluster to be JumpStarted from this install server
        3) Manage a dual-partition upgrade
        4) Upgrade this cluster node
      * 5) Print release information for this cluster node

      * ?) Help with menu options
      * q) Quit

    Option:  1

## Select option 1, Create a new cluster
  *** New Cluster and Cluster Node Menu ***

    Please select from any one of the following options:

        1) Create a new cluster
        2) Create just the first node of a new cluster on this machine
        3) Add this machine as a node in an existing cluster

        ?) Help with menu options
        q) Return to the Main Menu

    Option:  1
   
## You must configure rsh / ssh on every node to allow incoming commands
## Before continuing with this section, test all nodes with the rsh command, as sketched below
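Here is a minimal sketch of that test, assuming rsh with root access granted through /.rhosts on each node (a default Solaris 10 setup usually allows this, but PAM or login restrictions can still block it; adapt the idea for ssh if you use that instead):

    # on every node, make sure the rsh service is running
    svcadm enable svc:/network/shell:default

    # on every node, allow root in from the other node(s), one entry per line
    echo "sol10u11A root" >> /.rhosts
    echo "sol10u11B root" >> /.rhosts

    # from the node that will run scinstall, this must print the remote hostname
    # without asking for a password
    rsh sol10u11B hostname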

  *** Create a New Cluster ***

    This option creates and configures a new cluster.

    You must use the Oracle Solaris Cluster installation media to install
    the Oracle Solaris Cluster framework software on each machine in the
    new cluster before you select this option.

    If the "remote configuration" option is unselected from the Oracle
    Solaris Cluster installer when you install the Oracle Solaris Cluster
    framework on any of the new nodes, then you must configure either the
    remote shell (see rsh(1)) or the secure shell (see ssh(1)) before you
    select this option. If rsh or ssh is used, you must enable root access
    to all of the new member nodes from this node.

    Press Control-D at any time to return to the Main Menu.

## Select yes
    Do you want to continue (yes/no) [yes]? 

## Select Typical mode
  >>> Typical or Custom Mode <<<

    This tool supports two modes of operation, Typical mode and Custom
    mode. For most clusters, you can use Typical mode. However, you might
    need to select the Custom mode option if not all of the Typical mode
    defaults can be applied to your cluster.

    For more information about the differences between Typical and Custom
    modes, select the Help option from the menu.

    Please select from one of the following options:

        1) Typical
        2) Custom

        ?) Help
        q) Return to the Main Menu

    Option [1]:  1

   
## Enter the cluster name
  >>> Cluster Name <<<

    Each cluster has a name assigned to it. The name can be made up of any
    characters other than whitespace. Each cluster name should be unique
    within the namespace of your enterprise.

    What is the name of the cluster you want to establish ? CLUSTERLAB 

## Enter all the nodes (I use only 2 nodes here); press Ctrl+D when done
## Then confirm with 'yes'
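scinstall contacts the other nodes by name, so every node name you type here must resolve from the installing node. In my lab this is just /etc/hosts entries; the addresses below are placeholders:

    # /etc/hosts on both nodes, example only
    # 192.168.10.11   sol10u11A
    # 192.168.10.12   sol10u11B

    # verify resolution and reachability before continuing
    getent hosts sol10u11B
    ping sol10u11B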

  >>> Cluster Nodes <<<

    This Oracle Solaris Cluster release supports a total of up to 16
    nodes.

    List the names of the other nodes planned for the initial cluster
    configuration. List one node name per line. When finished, type
    Control-D:

    Node name:  sol10u11A
    Node name:  sol10u11B
    Node name (Control-D to finish):  ^D


    This is the complete list of nodes:

        sol10u11A
        sol10u11B

    Is it correct (yes/no) [yes]? 


    Attempting to contact "sol10u11B" ... done

    Searching for a remote configuration method ... done

    The Oracle Solaris Cluster framework is able to complete the
    configuration process without remote shell access.

## Enter the cluster transport adapters; I use e1000g1 and e1000g2.
## The target node uses the same adapter names (in my case).
## You can check the adapters with 'dladm show-dev' before the installation, as shown below.
## Confirm with 'yes' after registering the adapters.
## The installer will plumb the adapters for you.
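The pre-checks I mean look roughly like this (the output will of course differ on your hardware):

    # list the network devices seen by the system
    dladm show-dev

    # make sure the interconnect adapters are not already plumbed for public traffic
    ifconfig -a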

  >>> Cluster Transport Adapters and Cables <<<

    You must identify the cluster transport adapters which attach this
    node to the private cluster interconnect.

 For node "sol10u11A",
    What is the name of the first cluster transport adapter (help) [e1000g3]?  e1000g1

    Will this be a dedicated cluster transport adapter (yes/no) [yes]? 

    Searching for any unexpected network traffic on "e1000g1" ... done
Unexpected network traffic was seen on "e1000g1".
"e1000g1" may be cabled to a public network.

    Do you want to use "e1000g1" anyway (yes/no) [no]?  yes

 For node "sol10u11A",
    What is the name of the second cluster transport adapter (help) [e1000g4]?  e1000g2

    Will this be a dedicated cluster transport adapter (yes/no) [yes]? 

    Searching for any unexpected network traffic on "e1000g2" ... done
    Verification completed. No traffic was detected over a 10 second
    sample period.

    Plumbing network address 172.16.0.0 on adapter e1000g1 >> NOT DUPLICATE ... done
    Plumbing network address 172.16.0.0 on adapter e1000g2 >> NOT DUPLICATE ... done

  >>> Resource Security Configuration <<<

    The execution of a cluster resource is controlled by the setting of a
    global cluster property called resource_security. When the cluster is
    booted, this property is set to SECURE.

    Resource methods such as Start and Validate always run as root. If
    resource_security is set to SECURE and the resource method executable
    file has non-root ownership or group or world write permissions,
    execution of the resource method fails at run time and an error is
    returned.

    Resource types that declare the Application_user resource property
    perform additional checks on the executable file ownership and
    permissions of application programs. If the resource_security property
    is set to SECURE and the application program executable is not owned
    by root or by the configured Application_user of that resource, or the
    executable has group or world write permissions, execution of the
    application program fails at run time and an error is returned.

    Resource types that declare the Application_user property execute
    application programs according to the setting of the resource_security
    cluster property. If resource_security is set to SECURE, the
    application user will be the value of the Application_user resource
    property; however, if there is no Application_user property, or it is
    unset or empty, the application user will be the owner of the
    application program executable file. The resource will attempt to
    execute the application program as the application user; however a
    non-root process cannot execute as root (regardless of property
    settings and file ownership) and will execute programs as the
    effective non-root user ID.

    You can use the "clsetup" command to change the value of the
    resource_security property after the cluster is running.

   
Press Enter to continue: 
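For reference, if you later want to inspect or change resource_security, clsetup has a menu entry for it; I believe the property can also be read and set directly with the cluster command, but treat the exact syntax below as an assumption and check the man pages on your release:

    # show the current global cluster properties, including resource_security
    cluster show | grep -i resource_security

    # example only: relax the policy to WARN (SECURE is the value set at install time)
    cluster set -p resource_security=WARN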

## Answer 'no' here; the quorum device will be added later, after the installation (see the sketch below)
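Adding the quorum device later can be done interactively with clsetup, or with clquorum; the DID name d4 below is only a placeholder, pick a disk that both nodes can see:

    # list the DID devices and find a shared disk visible from both nodes
    cldevice list -v

    # add the shared disk as the quorum device (d4 is a placeholder name)
    clquorum add d4

    # verify
    clquorum status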

  >>> Quorum Configuration <<<

    Every two-node cluster requires at least one quorum device. By
    default, scinstall selects and configures a shared disk quorum device
    for you.

    This screen allows you to disable the automatic selection and
    configuration of a quorum device.

    You have chosen to turn on the global fencing. If your shared storage
    devices do not support SCSI, such as Serial Advanced Technology
    Attachment (SATA) disks, or if your shared disks do not support
    SCSI-2, you must disable this feature.

    If you disable automatic quorum device selection now, or if you intend
    to use a quorum device that is not a shared disk, you must instead use
    clsetup(1M) to manually configure quorum once both nodes have joined
    the cluster for the first time.

    Do you want to disable automatic quorum device selection (yes/no) [no]? 

## Confirm to create the configuration and proceed with the cluster creation

    Is it okay to create the new cluster (yes/no) [yes]? 

    During the cluster creation process, cluster check is run on each of
    the new cluster nodes. If cluster check detects problems, you can
    either interrupt the process or check the log files after the cluster
    has been established.

    Interrupt cluster creation for cluster check errors (yes/no) [no]? 

  Cluster Creation

    Log file - /var/cluster/logs/install/scinstall.log.1353

    Configuring global device using lofi on sol10u11B: done

    Starting discovery of the cluster transport configuration.

    Probes were sent out from all transport adapters configured for this
    node ("sol10u11A"). But, they were not seen by any of the other nodes.
    This may be due to any number of reasons, including improper cabling
    or a switch which was confused by the probes.

    You can either attempt to correct the problem and try the probes again
    or manually configure the transport. To correct the problem might
    involve re-cabling, changing the configuration, or fixing hardware.
    You must configure the transport manually to configure tagged VLAN
    adapters and non tagged VLAN adapters on the same private interconnect
    VLAN.

## Don't worry about this; just answer 'yes' to retry until the transport adapters are discovered
    Do you want to try again (yes/no) [yes]? 


    Probes were sent out from all transport adapters configured for this
    node ("sol10u11A"). But, they were not seen by any of the other nodes.
    This may be due to any number of reasons, including improper cabling
    or a switch which was confused by the probes.

    You can either attempt to correct the problem and try the probes again
    or manually configure the transport. To correct the problem might
    involve re-cabling, changing the configuration, or fixing hardware.
    You must configure the transport manually to configure tagged VLAN
    adapters and non tagged VLAN adapters on the same private interconnect
    VLAN.

    Do you want to try again (yes/no) [yes]? 


    The following connections were discovered:

        sol10u11A:e1000g1  switch1  sol10u11B:e1000g1
        sol10u11A:e1000g2  switch2  sol10u11B:e1000g2

    Completed discovery of the cluster transport configuration.

    Started cluster check on "sol10u11A".
    Started cluster check on "sol10u11B".

    cluster check failed for "sol10u11A".
    cluster check failed for "sol10u11B".

The cluster check command failed on both of the nodes.

Refer to the log file for details.
The name of the log file is /var/cluster/logs/install/scinstall.log.1353.


    Configuring "sol10u11B" ... done
    Rebooting "sol10u11B" ... done

## Node sol10u11B will be rebooted and will come up in cluster mode
   
    Configuring "sol10u11A" ... done
    Rebooting "sol10u11A" ...

Log file - /var/cluster/logs/install/scinstall.log.1353

Rebooting ...

updating /platform/i86pc/boot_archive

Broadcast Message from root (???) on sol10u11A Sab Mei 21 13:11:40...
THE SYSTEM sol10u11A IS BEING SHUT DOWN NOW ! ! !

Log off now or risk your files being damaged

## Node sol10u11A will reboot and come up in cluster mode
## The post-installation checks will be covered in another tutorial
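That said, a quick sanity check once both nodes are back up looks something like this:

    # both nodes should be listed as cluster members and Online
    /usr/cluster/bin/clnode status

    # overall view of nodes, transport paths and quorum
    /usr/cluster/bin/cluster status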

#EOF
