Create and join cluster on Cluster Data ONTAP

Jan 31, 2019 8:23 PM

NetApp is a powerful storage platform that comes with many features. ONTAP is NetApp's proprietary operating system for storage disk arrays. ONTAP includes code from Berkeley Net/2 BSD Unix, Spinnaker Networks technology, and other operating systems. ONTAP originally supported only NFS, but later added support for SMB, iSCSI, and Fibre Channel Protocol (including Fibre Channel over Ethernet and FC-NVMe). 

On June 16, 2006, NetApp released two variants of Data ONTAP: Data ONTAP 7G and, with a nearly complete rewrite[1], Data ONTAP GX. Data ONTAP GX was based on grid technology acquired from Spinnaker Networks. In 2010 these product lines merged into one OS, Data ONTAP 8, which folded Data ONTAP 7G onto the Data ONTAP GX cluster platform [source: Wikipedia]. 

ONTAP can be managed through a web GUI (NetApp System Manager) or through the command-line interface (CLI). The CLI is the more powerful of the two: you can manage, configure, and troubleshoot the entire system from the command line. In this article we will demonstrate cluster creation and node join.


[1] Create Cluster

Creating a cluster requires a cluster name and a base license key. Please note that the license key below is not a real license; it is just a sample :)

testlab::> cluster create -clustername wardilabz -license ABCDEFGHIJKLMNWWWWWWWWWWWWWW
Network set up .........
Starting replication service .
Creating cluster
System start up .........
Starting cluster support services


Cluster wardilabz has been created.

wardilabz::> cluster show
Node                  Health  Eligibility
--------------------- ------- ------------
testnode1             true    true

To complete the cluster creation, run the cluster setup command, which configures the cluster network interfaces and network addressing:

wardilabz::> cluster setup 

Welcome to the cluster setup wizard.
You can enter the following commands at any time:
"help" or "?" - if you want to have a question clarified,
"back" - if you want to change previously answered questions, and
"exit" or "quit" - if you want to quit the cluster setup wizard.
Any changes you made before quitting will be saved.
You can return to cluster setup at any time by typing "cluster setup".
To accept a default or omit a question, do not enter a value.
Do you want to create a new cluster or join an existing cluster? {create, join}: create

Do you intend for this node to be used as a single node cluster? {yes, no} [no]: 
Will the cluster network be configured to use network switches? [yes]:

Existing cluster interface configuration found:
(Note: The Existing cluster interface IP addresses shown here are autogenerated and may vary in your instance of the lab.)
Port MTU IP Netmask
e0a 1500 169.254.207.173 255.255.0.0
e0b 1500 169.254.250.79 255.255.0.0
Do you want to use this configuration? {yes, no} [yes]:


Step 1 of 5: Create a Cluster
You can type "back", "exit", or "help" at any question.
Enter the cluster name: wardilabz
Enter the cluster base license key:
Creating cluster wardilabz
Network set up .........
Starting replication service ..
Creating cluster
System start up ...........
Updating volume location database
Flexcache Management
Starting cluster support services
Cluster wardilabz has been created.
Step 2 of 5: Add Feature License Keys
You can type "back", "exit", or "help" at any question.
Enter an additional license key []: <enter license> 
CIFS License was added.
Enter an additional license key []: <enter license> 
iSCSI License was added.
Enter an additional license key []: <enter license> 
NFS License was added.
Enter an additional license key []: <enter license> 
SnapManagerSuite License was added.
Enter an additional license key []: <enter license>   
SnapRestore License was added.
Enter an additional license key []: <enter license> 

Step 3 of 5: Set Up a Vserver for Cluster Administration
You can type "back", "exit", or "help" at any question.
Enter the cluster management interface port [e0c]: 
Enter the cluster management interface IP address: 192.168.0.101
Enter the cluster management interface netmask: 255.255.255.0
Enter the cluster management interface default gateway: 192.168.0.1
A cluster management interface on port e0c with IP address 192.168.0.101 has been created. You can use this address to connect to and manage the cluster.

Enter the DNS domain names: dns.wardilabz.com
Enter the name server IP addresses: 192.168.0.253
DNS lookup for the admin Vserver will use the dns.wardilabz.com domain.
Step 4 of 5: Configure Storage Failover (SFO)
You can type "back", "exit", or "help" at any question.
SFO will not be enabled on a non-HA system.
Step 5 of 5: Set Up the Node
You can type "back", "exit", or "help" at any question.
Where is the controller located []: My Home
Enter the node management interface port [e0c]:  <press enter to accept default value>  
Enter the node management interface IP address [192.168.0.111]: <press enter to accept default value> 
Enter the node management interface netmask [255.255.255.0]: <press enter to accept default value> 
Enter the node management interface default gateway [192.168.0.1]: <press enter to accept default value> 
Cluster setup is now complete.
To begin storing and serving data on this cluster, log in to the command-line
interface (for example, ssh admin@192.168.0.101) and complete the following
additional tasks if they have not already been completed:
- Join additional nodes to the cluster by running "cluster setup" on
those nodes.
- For HA configurations, verify that storage failover is enabled by
running the "storage failover show" command.
- Create a Vserver by running the "vserver setup" command.
In addition to using the CLI to perform cluster management tasks, you can manage
your cluster using OnCommand System Manager, which features a graphical user
interface that simplifies many cluster management tasks. This software is
available from the NetApp Support Site.

Exiting the cluster setup wizard.

wardilabz::> cluster show
Node                  Health  Eligibility
--------------------- ------- ------------
testnode1             true    true


[2] Install license

Install licenses to enable ONTAP features such as NFS, CIFS, iSCSI, and the Snap products. The syntax is:

::> license add <license-code>

wardilabz::> license add ABCDEFGHIJKLMNOPQRSTUVWXYZ 
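To verify which licenses have been installed, you can list them with the license show command (the prompt below assumes the same lab cluster; the package list will depend on the keys you added):

wardilabz::> license show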



[3] Rename node

A node can be renamed with the node rename command:

::> node rename -node <old-name> -newname <new-name>

wardilabz::> node rename -node testnode1 -newname war01
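To confirm that the rename took effect, list the cluster's nodes:

wardilabz::> node show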


[4] Join cluster

For this tutorial we use a switchless cluster, with both nodes directly connected. The cabling depends on the NetApp model you are setting up; refer to the Installation Manual for the physical connections. On a NetApp FAS2552, for example, a switchless cluster is cabled from port e0e of node1 to e0e of node2 and from port e0f of node1 to e0f of node2, identified below by the orange cables:


(Image source: Installation and setup instruction FAS2552 document)

For a switched cluster, interfaces e0e and e0f of both nodes are connected to the cluster switches:



(Image source: Installation and setup instruction FAS2552 document)
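Whichever topology you use, you can sanity-check which ports carry the cluster role before joining (port names depend on your platform):

testnode2::> network port show -role cluster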


To join a new node to an existing cluster, run the cluster setup command on the second node and answer 'join' at the prompt:

testnode2::> cluster setup

Welcome to the cluster setup wizard.

You can enter the following commands at any time:
  "help" or "?" - if you want to have a question clarified,
  "back" - if you want to change previously answered questions, and
  "exit" or "quit" - if you want to quit the cluster setup wizard.
     Any changes you made before quitting will be saved.

You can return to cluster setup at any time by typing "cluster setup".
To accept a default or omit a question, do not enter a value.

Do you want to create a new cluster or join an existing cluster? {create, join}: join

Existing cluster interface configuration found:

Port MTU IP Netmask
e0a 1500 169.254.254.105 255.255.0.0
e0b 1500 169.254.111.119 255.255.0.0

Do you want to use this configuration? {yes, no} [yes]: 

Step 1 of 3: Join an Existing Cluster
You can type "back", "exit", or "help" at any question.

Enter the name of the cluster you would like to join [wardilabz]: 

Joining cluster wardilabz

Network set up ..........
Node check ...
Joining cluster
System start up ..............................
Starting cluster support services ...

This node has joined the cluster wardilabz.


Step 2 of 3: Configure Storage Failover (SFO)
You can type "back", "exit", or "help" at any question.

SFO will not be enabled on a non-HA system.

Step 3 of 3: Set Up the Node
You can type "back", "exit", or "help" at any question.

Enter the node management interface port [e0c]: 
Enter the node management interface IP address [192.168.0.112]: 
Enter the node management interface netmask [255.255.255.0]: 
Enter the node management interface default gateway [192.168.0.1]: 

Cluster setup is now complete.

To begin storing and serving data on this cluster, log in to the command-line
interface (for example, ssh admin@192.168.0.101) and complete the following
additional tasks if they have not already been completed:

- Join additional nodes to the cluster by running "cluster setup" on 
  those nodes.
- For HA configurations, verify that storage failover is enabled by 
  running the "storage failover show" command.
- Create a Vserver by running the "vserver setup" command.


In addition to using the CLI to perform cluster management tasks, you can manage
your cluster using OnCommand System Manager, which features a graphical user
interface that simplifies many cluster management tasks. This software is
available from the NetApp Support Site.

Exiting the cluster setup wizard.

wardilabz::> cluster show
Node                  Health  Eligibility
--------------------- ------- ------------
testnode1             true    true
testnode2             true    true
2 entries were displayed.


[5] Check aggregates on both nodes

wardilabz::> aggr show
Aggregate       Size Available Used% State   #Vols  Nodes      RAID Status
----------- -------- --------- ----- ------- ------ ---------- ---------------
aggr0_node1   7.98GB   381.6MB   95% online       1 testnode1  raid_dp, normal
aggr0_node2   7.98GB   381.7MB   95% online       1 testnode2  raid_dp, normal

2 entries were displayed.
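On an HA pair, the setup wizard also recommends verifying that storage failover is enabled. This lab is non-HA, so failover stays disabled, but the check is:

wardilabz::> storage failover show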


[Wardilee]
