
How to Install Veritas Cluster Server 8.0 in RHEL

Veritas Cluster Server, commonly known as VCS, is used by organizations around the world to host their mission-critical applications and to keep those applications highly available at all times.

This ensures that when a node or an application fails, the other nodes can take predefined actions to take over and bring the services up elsewhere in the cluster.

A VCS setup can be done in two ways, depending on the application requirements.

  • VCS Cluster Active-Active (AA) setup – Uses CVM (Cluster Volume Manager), so the file system is available on all nodes simultaneously.
  • VCS Cluster Active-Passive (AP) setup – The file system can be mounted only on the active system, not on a passive one.

In this article, we’ll demonstrate how to install Veritas Cluster Server (VCS) Active-Active (AA) 8.0 on Linux (RHEL 8.8).

Our lab setup:

  • Two node Active-Active VCS cluster with RHEL 8.8
  • Node1 – 2gvcsnode1 – 192.168.10.110
  • Node2 – 2gvcsnode2 – 192.168.10.111
  • Storage Foundation Cluster File System HA (SFCFSHA)

Prerequisites

  • Each node must have three network interfaces: one public interface (also used as a low-priority heartbeat link) and two private interfaces used for VCS inter-node communication to share resources across both nodes.
  • Three 1 GB disks must be mapped to both nodes in shared mode for the fencing setup.
  • Password-less SSH login set up between the systems.
  • DNS configuration and local host entries.
  • SELinux disabled.
  • firewalld disabled; if it must stay enabled, allow the required ports.
  • NTP/Chrony configured.
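The SELinux, firewalld, and time-sync items above can be prepared with a short root script on each node. This is a minimal sketch, assuming the default RHEL file locations; every step is guarded so it is skipped where the tool is absent:

```shell
# A minimal prerequisite-prep sketch. Run as root on both nodes.
if command -v setenforce >/dev/null 2>&1; then
  setenforce 0 2>/dev/null || true                      # disable SELinux now
  [ -f /etc/selinux/config ] && \
    sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
fi
if command -v systemctl >/dev/null 2>&1; then
  systemctl disable --now firewalld 2>/dev/null || true # or open needed ports
  systemctl enable  --now chronyd   2>/dev/null || true # time sync
fi
PREP_MSG="prerequisite preparation attempted"
echo "$PREP_MSG"
```

A reboot is the cleanest way to make the SELinux change fully effective before the installation.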

Adding Local Host entry

Even if you have DNS entries, it is safer to add local host entries on all nodes. Run the commands below on both nodes.

cp -p /etc/hosts /etc/hosts_bkp-$(date +%d-%m-%Y)

echo "
192.168.10.110 vcs1.2gvcsnode1.local       vcs1
192.168.10.111 vcs2.2gvcsnode2.local       vcs2" >> /etc/hosts
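A quick check that the new entries resolve and that password-less SSH works can save a failed installer run later. A sketch using the short names from our hosts file above (substitute your own; `BatchMode` makes ssh fail instead of prompting when keys are missing):

```shell
# Verify host resolution and password-less SSH between the nodes.
resolves() { getent hosts "$1" >/dev/null 2>&1; }

for h in vcs1 vcs2; do
  resolves "$h" && echo "resolves: $h" || echo "NO host entry for: $h"
done

ssh -o BatchMode=yes -o ConnectTimeout=5 vcs2 true 2>/dev/null \
  && echo "password-less SSH to vcs2 works" \
  || echo "password-less SSH to vcs2 is NOT working"
```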

Downloading Veritas InfoScale

Veritas InfoScale can be downloaded from the portal if you have an active account with Veritas support. If not, a trial version (free for 60 days) can be downloaded.

Also, visit the Veritas SORT portal (sort.veritas.com) and perform a compatibility check for supported kernels. As of today (24-June-2023), Veritas InfoScale 8.0.2 supports only the 4.18.0-372.32.1 kernel, which cannot be used on RHEL 8.8, so we are going with Veritas InfoScale 8.0.

As part of this installation, you must include a list of patches (CPI, hotfixes, and/or cumulative patches), because for some modules the base package does not support the latest kernel.

We checked the Veritas site and found that the three patches below must be combined with the base package for a successful installation of Veritas InfoScale 8.0 on RHEL 8.8. Download the following four packages and upload them to the target servers.

Veritas_InfoScale_8.0_RHEL.tar.gz			#Base Package
infoscale-rhel8_x86_64-Patch-8.0.0.1800.tar.gz		#Cumulative Patch
infoscale-rhel8.7_x86_64-Patch-8.0.0.2600.tar.gz	#Common Updates Patch
infoscale-rhel8.8_x86_64-Patch-8.0.0.2700.tar.gz	#RHEL8.8 Patch
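Before extracting, it is worth confirming each archive arrived intact (a truncated upload is a common cause of installer failures). A sketch that lists each archive's contents as an integrity check, assuming the files sit in the current directory:

```shell
# Sanity-check each downloaded archive before extraction.
check_tarball() {
  if [ ! -f "$1" ]; then
    echo "missing: $1"
  elif tar -tzf "$1" >/dev/null 2>&1; then
    echo "OK: $1"
  else
    echo "corrupt: $1"
  fi
}

for f in Veritas_InfoScale_8.0_RHEL.tar.gz \
         infoscale-rhel8_x86_64-Patch-8.0.0.1800.tar.gz \
         infoscale-rhel8.7_x86_64-Patch-8.0.0.2600.tar.gz \
         infoscale-rhel8.8_x86_64-Patch-8.0.0.2700.tar.gz; do
  check_tarball "$f"
done
```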

Creating directory structure

Create a proper directory structure and move the files into their respective directories to avoid confusion. I created the following directories for my convenience.

/backup/vcs8			#Base Package Path
/backup/vcs8_patch_1800		#Patch_1 Path
/backup/vcs8_patch_2600		#Patch_2 Path
/backup/vcs8_patch_2700		#Patch_3 Path
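The directories can be created and the archives staged in one go. A sketch, assuming the four tarballs were uploaded to the current working directory (the `mv` lines are harmless no-ops if a file has already been moved):

```shell
# Create the staging directories used throughout this article.
mkdir -p /backup/vcs8 /backup/vcs8_patch_1800 \
         /backup/vcs8_patch_2600 /backup/vcs8_patch_2700

# Move each archive into its directory; ignore files already moved.
mv Veritas_InfoScale_8.0_RHEL.tar.gz                /backup/vcs8/            2>/dev/null || true
mv infoscale-rhel8_x86_64-Patch-8.0.0.1800.tar.gz   /backup/vcs8_patch_1800/ 2>/dev/null || true
mv infoscale-rhel8.7_x86_64-Patch-8.0.0.2600.tar.gz /backup/vcs8_patch_2600/ 2>/dev/null || true
mv infoscale-rhel8.8_x86_64-Patch-8.0.0.2700.tar.gz /backup/vcs8_patch_2700/ 2>/dev/null || true
```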

Extracting the Packages

Extract the packages into the respective locations created above.

tar -xf Veritas_InfoScale_8.0_RHEL.tar.gz -C /backup/vcs8
tar -xf infoscale-rhel8_x86_64-Patch-8.0.0.1800.tar.gz -C /backup/vcs8_patch_1800
tar -xf infoscale-rhel8.7_x86_64-Patch-8.0.0.2600.tar.gz -C /backup/vcs8_patch_2600
tar -xf infoscale-rhel8.8_x86_64-Patch-8.0.0.2700.tar.gz -C /backup/vcs8_patch_2700

Performing Pre-Installation Check

It is always recommended to run the pre-installation check, which verifies that all required RPMs are already installed on the given systems. Any RPMs found missing can be installed on the fly.

Syntax:

./installer -patch_path [Path_to_the_patch 1] -patch2_path [Path_to_the_patch 2] -patch3_path [Path_to_the_patch 3]

Navigate to the VCS 8.0 base directory and run the installer as shown below (use your own package locations instead of ours).

cd /backup/vcs8/dvd1-redhatlinux/rhel8_x86_64

./installer -patch_path /backup/vcs8_patch_1800 -patch2_path /backup/vcs8_patch_2600 -patch3_path /backup/vcs8_patch_2700

When you run it, the installer prompts you with the options shown below. Enter 'P' and hit 'Enter' to perform the pre-installation check.

		Veritas InfoScale Storage and Availability Solutions 8.0 Install Program

Copyright (c) 2020 Veritas Technologies LLC.  All rights reserved.  Veritas and the Veritas Logo are trademarks or
registered trademarks of Veritas Technologies LLC or its affiliates in the U.S. and other countries. Other names may
be trademarks of their respective owners.
 
The Licensed Software and Documentation are deemed to be "commercial computer software" and "commercial computer
software documentation" as defined in FAR Sections 12.212 and DFARS Section 227.7202.

Logs are being written to /var/tmp/installer-202306231445nLK while installer is in progress

		Veritas InfoScale Storage and Availability Solutions 8.0 Install Program

Task Menu:

    P) Perform a Pre-Installation Check		I) Install a Product
    C) Configure a Product Component		G) Upgrade a Product
    O) Perform a Post-Installation Check	U) Uninstall a Product
    L) License a Product			S) Start a Product
    D) View Product Descriptions		X) Stop a Product
    R) View Product Requirements		?) Help

Enter a Task: [P,I,C,G,O,U,L,S,D,X,R,?] P

Now, select the product you want to check. In our case it's Veritas InfoScale Enterprise, so enter '4' and hit 'Enter'.

		Veritas InfoScale Storage and Availability Solutions 8.0 Precheck Program

    1) Veritas InfoScale Foundation
    2) Veritas InfoScale Availability
    3) Veritas InfoScale Storage
    4) Veritas InfoScale Enterprise
    b) Back to previous menu

Select a product to perform pre-installation check for: [1-4,b,q] 4

As we are planning to install the Storage Foundation Cluster File System HA (SFCFSHA) component, enter '4' and hit 'Enter'. You also need to enter the list of systems on which to perform the pre-checks.

		Veritas InfoScale Storage and Availability Solutions 8.0 Precheck Program

    1) Cluster Server (VCS)
    2) Storage Foundation (SF)
    3) Storage Foundation and High Availability (SFHA)
    4) Storage Foundation Cluster File System HA (SFCFSHA)
    5) Storage Foundation for Oracle RAC (SF Oracle RAC)

Select a component to perform pre-installation check for: [1-5,q] 4

Enter the system names separated by spaces: [q,?] 2gvcsnode1 2gvcsnode2

Now the installer performs the following checks and reports anything that fails. It may report a failure due to missing RPMs, in which case it offers the option to install them via yum or manually. If missing RPMs are found, enter '1' and hit 'Enter' to install them.

			Veritas InfoScale Enterprise 8.0 Precheck Program
				      2gvcsnode1 2gvcsnode2

Logs are being written to /var/tmp/installer-202306231445nLK while installer is in progress
 
Verifying systems: 100%
 
Estimated time remaining: (mm:ss) 0:00                                                                     8 of 8

    Checking system communication .......................................................................... Done
    Checking release compatibility ......................................................................... Done
    Checking installed product ............................................................................. Done
    Checking platform version .............................................................................. Done
    Checking prerequisite patches and rpms ....................................................... Partially Done
    Checking file system free space ........................................................................ Done
    Checking configured component .......................................................................... Done
    Performing product prechecks ........................................................................... Done

The following required OS rpms were not found on vcsnode1:
	net-tools.x86_64 bc.x86_64 ksh.x86_64

The following required OS rpms were not found on vcsnode2:
	net-tools.x86_64 bc.x86_64 ksh.x86_64

The installer provides some guidance about how to install OS rpms using native methods, like yum, or how to manually install the required OS rpms.

    1)  Install the missing required OS rpms with yum, if yum is configured on the systems
    2)  Install the missing required OS rpms manually, (detailed steps are provided)
    3)  Do not install the missing required OS rpms

How would you like to install the missing required OS rpms? [1-3,q,?] (1)

The installation may take a few minutes; be patient.

    Install the missing OS rpms with yum on vcsnode1 ................................................. Done 
    Install the missing OS rpms with yum on vcsnode2 ................................................. Done

Press [Enter] to continue:

Once the RPM installation is done, the precheck is re-run and you will get output similar to the one below.

			Veritas InfoScale Enterprise 8.0 Precheck Program
				      2gvcsnode1 2gvcsnode2

Logs are being written to /var/tmp/installer-202306231445nLK while installer is in progress
 
Verifying systems: 100%
 
Estimated time remaining: (mm:ss) 0:00                                                                     8 of 8

    Checking system communication .......................................................................... Done
    Checking release compatibility ......................................................................... Done
    Checking installed product ............................................................................. Done
    Checking platform version .............................................................................. Done
    Checking prerequisite patches and rpms ................................................................. Done
    Checking file system free space ........................................................................ Done
    Checking configured component .......................................................................... Done
    Performing product prechecks ........................................................................... Done

Precheck report completed

System verification checks completed successfully
 
The following notes were discovered on the systems:

CPI NOTE V-9-30-1021: The system information on 2gvcsnode1:
	Operating system: Linux RHEL 8.8 x86_64
	CPU number: 4
	CPU speed: 2693 MHz
	Memory size: 7963 MB
	Swap size: 9207 MB

CPI NOTE V-9-30-1021: The system information on 2gvcsnode2:
	Operating system: Linux RHEL 8.8 x86_64
	CPU number: 4
	CPU speed: 2693 MHz
	Memory size: 7963 MB
	Swap size: 9207 MB

The following warnings were discovered on the systems:

CPI WARNING V-9-40-1400 vmware-tools is not running on vcsnode1, installer attempted to start it but failed. 
Please start the tool before installing Veritas InfoScale Enterprise

CPI WARNING V-9-40-1418 Kernel Release 4.18.0-477.13.1.el8_8.x86_64 is detected on vcsnode1, which is not 
recognizable by the installer. It is strongly recommended to check it on SORT (https://sort.veritas.com) before
continue.
 
CPI WARNING V-9-40-1401 vmware-tools is not running on vcsnode2, installer attempted to start it but failed. 
Please start the tool before installing Veritas InfoScale Enterprise

CPI WARNING V-9-40-1418 Kernel Release 4.18.0-477.13.1.el8_8.x86_64 is detected on vcsnode2, which is not 
recognizable by the installer. It is strongly recommended to check it on SORT (https://sort.veritas.com) before
continue.
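The vmware-tools warning above can usually be cleared by starting open-vm-tools before re-running the installer. A hedged sketch, applicable only when the nodes are VMware guests with the open-vm-tools package installed; elsewhere the unit is simply reported absent:

```shell
# Start open-vm-tools so the installer's vmware-tools check passes.
if command -v systemctl >/dev/null 2>&1 \
   && systemctl cat vmtoolsd >/dev/null 2>&1; then
  systemctl enable --now vmtoolsd
  VMT_STATE=$(systemctl is-active vmtoolsd)
else
  VMT_STATE="vmtoolsd unit not present (not a VMware guest?)"
fi
echo "$VMT_STATE"
```

The kernel warning is expected here, since the RHEL 8.8 kernel is newer than what the base installer recognizes; that is exactly why the three patches are included.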

Installing Veritas InfoScale Enterprise

As the pre-installation checks completed successfully, it's time to install Veritas InfoScale Enterprise. Enter 'y' at the prompts below to begin the installation.

Would you like to install InfoScale Enterprise on 2gvcsnode1 2gvcsnode2? [y,n,q] (n) y

This product may contain open source and other third party materials that are subject to a separate license. See the
applicable Third-Party Notice at https://www.veritas.com/about/legal/license-agreements
 
Do you agree with the terms of the End User License Agreement as specified in the EULA/en/EULA.pdf file present on media? [y,n,q,?] y

Veritas InfoScale Enterprise installation is in progress.

			Veritas InfoScale Enterprise 8.0 Install Program
				      2gvcsnode1 2gvcsnode2

Logs are being written to /var/tmp/installer-202306231445nLK while installer is in progress
 
    Installing InfoScale Enterprise: 100%
 
    Estimated time remaining: (mm:ss) 0:00                                                               31 of 31

    Performing InfoScale Enterprise preinstall tasks ....................................................... Done 
    Installing VRTSperl rpm ................................................................................ Done
    Installing VRTSpython rpm .............................................................................. Done
    Installing VRTSvlic rpm ................................................................................ Done
    Installing VRTSspt rpm ................................................................................. Done
    Installing VRTSveki rpm ................................................................................ Done
    Installing VRTSvxvm rpm ................................................................................ Done
    Installing VRTSaslapm rpm .............................................................................. Done
    Installing VRTSvxfs rpm ................................................................................ Done
    Installing VRTSfsadv rpm ............................................................................... Done
    Installing VRTSllt rpm ................................................................................. Done
    Installing VRTSgab rpm ................................................................................. Done
    Installing VRTSvxfen rpm ............................................................................... Done
    Installing VRTSamf rpm ................................................................................. Done
    Installing VRTSvcs rpm ................................................................................. Done
    Installing VRTScps rpm ................................................................................. Done
    Installing VRTSvcsag rpm ............................................................................... Done
    Installing VRTSvcsea rpm ............................................................................... Done
    Installing VRTSrest rpm ................................................................................ Done
    Installing VRTScsi rpm ................................................................................. Done
    Installing VRTSdbed rpm ................................................................................ Done
    Installing VRTSglm rpm ................................................................................. Done
    Installing VRTScavf rpm ................................................................................ Done
    Installing VRTSgms rpm ................................................................................. Done
    Installing VRTSodm rpm ................................................................................. Done
    Installing VRTSdbac rpm ................................................................................ Done
    Installing VRTSsfmh rpm ................................................................................ Done
    Installing VRTSvbs rpm ................................................................................. Done
    Installing VRTSsfcpi rpm ............................................................................... Done
    Installing VRTSvcswiz rpm .............................................................................. Done
    Performing InfoScale Enterprise postinstall tasks ...................................................... Done
 
Veritas InfoScale Enterprise Install completed successfully

Veritas License Activation

The VCS installation is complete, so activate the license as shown below.

To comply with the terms of our End User License Agreement, you have 60 days to either:

 * Enter a valid license key matching the functionality in use on the systems
 * Enable keyless licensing and manage the systems with a Management Server. For more details visit 
http://www.veritas.com/community/blogs/introducing-keyless-feature-enablement-storage-foundation-ha-51. The product is fully functional during these 60 days.

    1)  Enter a valid license key(Key file path needed)
    2)  Enable keyless licensing and complete system licensing later

How would you like to license the systems? [1-2,q] (2)

    1) Veritas Infoscale Foundation
    2) Veritas Infoscale Availability
    3) Veritas Infoscale Storage
    4) Veritas Infoscale Enterprise
    b) Back to previous menu

Which product would you like to register? [1-4,b,q] (4)

Registering keyless key ENTERPRISE on Veritas InfoScale Enterprise
Successfully registered ENTERPRISE keyless key on 2gvcsnode1
Successfully registered ENTERPRISE keyless key on 2gvcsnode2
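To confirm the keyless key actually registered, the license report can be pulled with vxkeyless, which ships in the VRTSvlic rpm installed earlier; the path below is its usual install location (run this on both nodes):

```shell
# Display the licenses registered on this node.
VXKEYLESS=/opt/VRTSvlic/bin/vxkeyless
if [ -x "$VXKEYLESS" ]; then
  LIC_OUT=$("$VXKEYLESS" display)
else
  LIC_OUT="vxkeyless not found: InfoScale is not installed on this host"
fi
echo "$LIC_OUT"
```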

Veritas InfoScale Configuration

Would you like to configure InfoScale Enterprise on 2gvcsnode1 2gvcsnode2? [y,n,q] (n) y

The Veritas Cloud Receiver (VCR) is a preconfigured, cloud-based edge server deployed by Veritas. Enter telemetry.veritas.com to use the Veritas Cloud Receiver as an edge server for your environment.
Enter the hostname or IP address of the edge server: [q,?] 2gvcsnode1
Enter the edge server's port number: [q,?] 2023

I/O Fencing

At this stage you need to decide whether to configure I/O Fencing in enabled or disabled mode; this also helps determine the number of network interconnects (NICs) required on your systems. If you configure I/O Fencing in enabled mode, only a single NIC is required, though at least two are recommended.

A split brain can occur if servers within the cluster become unable to communicate for any number of reasons. If I/O Fencing is not enabled, you run the risk of data corruption should a split brain occur. Therefore, to avoid data corruption due to split brain in CFS environments, I/O Fencing has to be enabled.

If you do not enable I/O Fencing, you do so at your own risk

See the Administrator's Guide for more information on I/O Fencing

Do you want to configure I/O Fencing in enabled mode? [y,n,q,?] (y)

Please read the instructions below (for information purposes only).

To configure VCS, answer the set of questions on the next screen.

When [b] is presented after a question, 'b' may be entered to go back to the first question of the configuration set.

When [?] is presented after a question, '?' may be entered for help or additional information about the question.

Following each set of questions, the information you have entered will be presented for confirmation. To repeat the set of questions and correct any previous errors, enter 'n' at the confirmation prompt.

No configuration changes are made to the systems until all configuration questions are completed and confirmed.

Press [Enter] to continue:

Enter a cluster name based on your requirements.

To configure VCS the following information is required:
	
	A unique cluster name
	One or more NICs per system used for heartbeat links
	A unique cluster ID number between 0-65535

	One or more heartbeat links are configured as private links
	You can configure one heartbeat link as a low-priority link

All systems are being configured to create one cluster.

Enter the unique cluster name: [q,?] 2gcluster01

Next, configure LLT for heartbeat communication.

    1)  Configure the heartbeat links using LLT over Ethernet
    2)  Configure the heartbeat links using LLT over UDP
    3)  Configure the heartbeat links using LLT over TCP
    4)  Configure the heartbeat links using LLT over RDMA
    5)  Automatically detect configuration for LLT over Ethernet
    b)  Back to previous menu

How would you like to configure heartbeat links? [1-5,b,q,?] (5) 

On Linux systems, only activated NICs can be detected and configured automatically.

Press [Enter] to continue:

At this point, the installer checks whether both systems have three NICs and working network links. If both checks pass, it sets the link priority. You also need to enter a unique cluster ID, as shown below.

Logs are being written to /var/tmp/installer-202306231445nLK while installer is in progress
 
    Configuring LLT links: 100%    

    Estimated time remaining: (mm:ss) 0:00                                                                 4 of 4

    Checking system NICs on 2gvcsnode1 ............................................................. 3 NICs found 
    Checking system NICs on 2gvcsnode2 ............................................................. 3 NICs found 
    Checking network links ........................................................................ 3 links found
    Setting link priority .................................................................................. Done

Enter a unique cluster ID number between 0-65535: [b,q,?] (45289)

The cluster cannot be configured if the cluster ID 45289 is in use by another cluster. Installer can perform a check to determine
 if the cluster ID is duplicate. The check will take less than a minute to complete.

Would you like to check if the cluster ID is in use by another cluster? [y,n,q] (y)

    Checking cluster ID .................................................................................. Done

Duplicated cluster ID detection passed. The cluster ID 45289 can be used for the cluster.

Press [Enter] to continue:

Here you can see the cluster summary information.

Cluster information verification:
	
	Cluster Name: 2gcluster01
	Cluster ID Number: 45289

	Private Heartbeat NICs for 2gvcsnode1:
		link1=ens224
		link2=ens256
	Low-Priority Heartbeat NIC for 2gvcsnode1:
		link-lowpri1=ens192

	Private Heartbeat NICs for 2gvcsnode2:
		link1=ens224
		link2=ens256
	Low-Priority Heartbeat NIC for 2gvcsnode2:
		link-lowpri1=ens192

Is this information correct? [y,n,q,?] (y)

VCS secure mode configuration screen.

We recommend that you run Cluster Server in secure mode.

Would you like to configure the VCS cluster in secure mode? [y,n,q,?] (y) n

Are you sure that you want to proceed with non-secure installation? [y,n,q] (n) y

VCS user addition.

The following information is required to add VCS users:

	A user name
	A password for the user
	User privileges (Administrator, Operator, or Guest)

Do you wish to accept the default cluster credentials of 'admin/password'? [y,n,q] (y)

Do you want to add another user to the cluster? [y,n,q] (n)

VCS user information verification.

VCS User verification:

	User: admin	Privilege: Administrator
	Passwords are not displayed

Is this information correct? [y,n,q] (y)

Next is the SMTP configuration screen; SMTP notification alerts you if any error is detected in the cluster.

The following information is required to configure SMTP notification:

	The domain-based hostname of the SMTP server
	The email address of each SMTP recipient
	A minimum severity level of messages to send to each recipient

Do you want to configure SMTP notification? [y,n,q,?] (n) y

Active NIC devices discovered on 2gvcsnode1: ens192

Enter the NIC for the VCS Notifier to use on 2gvcsnode1: [b,q,?] (ens192)
Is ens192 to be the public NIC used by all systems? [y,n,q,b,?] (y)
Enter the domain-based hostname of the SMTP server
(example: smtp.yourcompany.com): [b,q,?] smtp.2daygeek.com
Enter the full email address of the SMTP recipient
(example:user@yourcompany.com): [b,q,?] admin@2daygeek.com
Enter the minimum severity of events for which mail should be sent to admin@2daygeek.com [I=Information, W=Warning, E=Error, S=ServerError]: [b,q,?] E
Would you like to add another SMTP recipient? [y,n,q,b] (n)

SMTP configuration verification.

SMTP notification verification:

	NIC: ens192
	SMTP Address: smtp.2daygeek.com
	Recipient: admin@2daygeek.com receives email for Error or higher events

Is this information correct? [y,n,q] (y)

SNMP configuration.

Do you want to configure SNMP notification? [y,n,q,?] (n)

Global cluster configuration.

Do you want to configure the Global Cluster Option? [y,n,q,?] (n)

All InfoScale Enterprise processes that are currently running must be stopped

Do you want to stop InfoScale Enterprise processes now? [y,n,q,?] (y)

To configure Storage Foundation Cluster File System HA, Veritas first stops the cluster services.

Logs are being written to /var/tmp/installer-202306231445nLK while installer is in progress
 
    Stopping InfoScale Enterprise: 100%
 
    Estimated time remaining: (mm:ss) 0:00                                                               11 of 11

    Performing InfoScale Enterprise prestop tasks .......................................................... Done 
    Stopping vcsmm ......................................................................................... Done
    Stopping vxgms ......................................................................................... Done
    Stopping vxglm ......................................................................................... Done
    Stopping vxcpserv ...................................................................................... Done
    Stopping had ........................................................................................... Done
    Stopping amf ........................................................................................... Done
    Stopping vxfen ......................................................................................... Done
    Stopping gab ........................................................................................... Done
    Stopping llt ........................................................................................... Done
    Performing InfoScale Enterprise poststop tasks ......................................................... Done 

Veritas InfoScale Enterprise Shutdown completed successfully

Next, the Storage Foundation Cluster File System HA (SFCFSHA) configuration is applied and its services are started.

Logs are being written to /var/tmp/installer-202306231445nLK while installer is in progress
 
    Starting SFCFSHA: 100%
 
    Estimated time remaining: (mm:ss) 0:00                                                                 25 of 25

    Performing SFCFSHA configuration ......................................................................... Done
    Starting CollectorService ................................................................................ Done
    Starting veki ............................................................................................ Done
    Starting vxdmp ........................................................................................... Done
    Starting vxio ............................................................................................ Done
    Starting vxspac .......................................................................................... Done
    Starting vxconfigd ....................................................................................... Done
    Starting vxvm-recover .................................................................................... Done
    Starting vxencryptd ...................................................................................... Done
    Starting vvr ............................................................................................. Done
    Starting vxcloud ......................................................................................... Done
    Starting xprtld .......................................................................................... Done
    Starting vxfs ............................................................................................ Done
    Starting vxportal ........................................................................................ Done
    Starting fdd ............................................................................................. Done
    Starting vxcafs .......................................................................................... Done
    Starting llt ............................................................................................. Done
    Starting gab ............................................................................................. Done
    Starting vxfen ........................................................................................... Done
    Starting amf ............................................................................................. Done
    Starting vxglm ........................................................................................... Done
    Starting had ............................................................................................. Done
    Starting vxgms ........................................................................................... Done
    Starting vxodm ........................................................................................... Done
    Performing SFCFSHA poststart tasks ....................................................................... Done 

Storage Foundation Cluster File System HA Startup completed successfully
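Once SFCFSHA is up, the LLT heartbeat links configured earlier can be inspected on each node. A sketch using lltstat, which ships with the VRTSllt rpm; the check is skipped gracefully on hosts where InfoScale is not installed:

```shell
# Show LLT node state and per-link status (first 20 lines).
if command -v lltstat >/dev/null 2>&1; then
  LLT_OUT=$(lltstat -nvv 2>&1 | head -20)
else
  LLT_OUT="lltstat not available on this host"
fi
echo "$LLT_OUT"
```

Both private links should show as UP for each peer node; a DOWN link at this point usually means a cabling or NIC selection mistake in the LLT configuration.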

Fencing Configuration

We are configuring disk-based fencing and have already added three 1 GB disks from EMC storage in shared mode on both nodes.

Fencing configuration
     1)  Configure Coordination Point client based fencing
     2)  Configure disk based fencing
     3)  Configure majority based fencing

Select the fencing mechanism to be configured in this Application Cluster: [1-3,q,?] 2

This I/O fencing configuration option requires a restart of VCS. Installer will stop VCS at a later stage in this run. Note that the service groups will be online only on the systems that are in the 'AutoStartList' after restarting VCS. Do you want to continue? [y,n,q,b,?] y

Do you have SCSI3 PR enabled? [y,n,q,b,?] (y)

Since you have selected to configure disk-based fencing, you need to provide an existing disk group to be used as the coordinator, or create a new disk group for it.

Select one of the options below for fencing disk group:
     1)  Create a new disk group
     2)  Using an existing disk group
     b)  Back to previous menu

Enter the choice for a disk group: [1-2,b,q] 1

List of available disks to create a new disk group
A new disk group cannot be created as the number of available free VxVM CDS disks is 0 which is less than three. If there are disks available which are not under VxVM control, use the command vxdisksetup or use the installer to initialize them as VxVM disks.

Do you want to initialize more disks as VxVM disks? [y,n,q,b] (y)

List of disks which can be initialized as VxVM disks:
     1)  emc0_0cde	1025.62m
     2)  emc0_0cdf	1025.62m
     3)  emc0_0cdg	1025.62m
     b)  Back to previous menu

Enter the disk options, separated by spaces: [1-3,b,q] 1 2 3
    Initializing disk emc0_0cde on 2gvcsnode1 .............................................................. Done
    Initializing disk emc0_0cdf on 2gvcsnode1 .............................................................. Done
    Initializing disk emc0_0cdg on 2gvcsnode1 .............................................................. Done

     1)  emc0_0cde	1025.62m
     2)  emc0_0cdf	1025.62m
     3)  emc0_0cdg	1025.62m
     b)  Back to previous menu

Select odd number of disks and at least three disks to form a disk group. Enter the disk options, separated by spaces: [1-3,b,q] 1 2 3

Enter the new disk group name: [b] fencedg
Created disk group fencedg

Before you continue with configuration, we recommend that you run the vxfentsthdw utility (I/O fencing test hardware utility), in a separate console, to test whether the shared storage supports I/O fencing. You can access the utility at '/opt/VRTSvcs/vxfen/bin/vxfentsthdw'.
As per the 'vxfentsthdw' run you performed, do you want to continue with this disk group? [y,n,q] (y)
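Before answering yes here, you can run the `vxfentsthdw` check the installer just recommended. A sketch of a non-destructive run against the new disk group, assuming the `-r` (read-only testing) and `-g` (test all disks in a disk group) options of `vxfentsthdw`:

```shell
# Sketch: non-destructive SCSI-3 PR check of the coordinator disk group
# before committing it. -r performs read-only testing; -g tests every disk
# in the named disk group. Run from one node with password-less SSH to the
# other, in a separate console.
VXFEN_TEST=/opt/VRTSvcs/vxfen/bin/vxfentsthdw
if [ -x "$VXFEN_TEST" ]; then
  "$VXFEN_TEST" -r -g fencedg
else
  echo "vxfentsthdw not found at $VXFEN_TEST"
fi
```

Without `-r`, `vxfentsthdw` writes to the disks under test, so only omit it on disks that carry no data.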

Using disk group fencedg

Fencing configuration information. In this section, you can simply press 'Enter' for each question, as the default answers are appropriate.

I/O fencing configuration verification
 	Disk Group: fencedg
	Fencing disk policy: dmp

Is this information correct? [y,n,q] (y)

Installer will stop VCS before applying fencing configuration. To make sure VCS shuts down successfully, unfreeze any frozen service group and unmount the mounted file systems in the cluster.

HAD and all the applications will be stopped. Do you want to stop VCS and all its applications and apply fencing configuration on all nodes at this point? [y,n,q] (y)

    Stopping VCS on 2gvcsnode1 ........................................................................ Done
    Stopping VCS on 2gvcsnode2 ........................................................................ Done
    Starting vxfen on 2gvcsnode1 ...................................................................... Done
    Starting vxfen on 2gvcsnode2 ...................................................................... Done
    Updating main.cf with fencing ..................................................................... Done
    Starting VCS on 2gvcsnode1 ........................................................................ Done
    Starting VCS on 2gvcsnode2 ........................................................................ Done
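
Once `vxfen` has started on both nodes, the fencing state can be verified with the `vxfenadm` and `vxfenconfig` utilities:

```shell
# Check I/O fencing state after the vxfen driver starts on both nodes.
# vxfenadm -d shows the fencing mode, disk policy, and cluster membership;
# vxfenconfig -l lists the coordinator disks in use.
fence_status() {
  if command -v vxfenadm >/dev/null 2>&1; then
    vxfenadm -d
    vxfenconfig -l
  else
    echo "vxfen utilities not in PATH - try /sbin or /opt/VRTS/bin"
  fi
}
fence_status
```

Both nodes should report the SCSI3 fencing mode with the dmp disk policy configured earlier.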

The Coordination Point Agent monitors the registrations on the coordination points.
Do you want to configure Coordination Point Agent on the client cluster? [y,n,q] (y)
Enter a non-existing name for the service group for Coordination Point Agent: [b] (vxfen)

Additionally the Coordination Point Agent can also monitor changes to the Coordinator Disk Group constitution such as a disk being accidentally deleted from the Coordinator Disk Group. The frequency of this detailed monitoring can be tuned with the LevelTwoMonitorFreq attribute. For example, if you set this attribute to 5, the agent will monitor the Coordinator Disk Group constitution every five monitor cycles. If LevelTwoMonitorFreq attribute is not set, the agent will not monitor changes to the Coordinator Disk Group.

Do you want to set LevelTwoMonitorFreq? [y,n,q] (y)
Enter the value of the LevelTwoMonitorFreq attribute(0 to 65535): [b,q,?] (5)
Do you want to enable auto refresh of coordination points if registration keys are missing on any of them? [y,n,q,b,?] (n)

    Adding Coordination Point Agent via 2gvcsnode1 ............................................................... Done

    I/O Fencing configuration .................................................................................... Done

I/O Fencing configuration completed successfully
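The installer created a service group (named 'vxfen' at the prompt above) for the Coordination Point Agent. Its state can be confirmed with the standard VCS `ha*` commands; the group name below matches what was entered at the prompt:

```shell
# Confirm the Coordination Point Agent service group and its resources.
# 'vxfen' is the service group name accepted at the installer prompt above.
cp_agent_status() {
  if command -v hagrp >/dev/null 2>&1; then
    hagrp -state vxfen    # the group should be ONLINE on both nodes
    hares -state          # lists every resource, including the coordpoint agent
  else
    echo "VCS commands not found - ensure /opt/VRTS/bin is in PATH"
  fi
}
cp_agent_status
```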

installer log files, summary file, and response file are saved at:

	/opt/VRTS/install/logs/installer-202106231932NcY

Would you like to view the summary file? [y,n,q] (n)

We have successfully installed and configured Veritas Cluster Server 8.0. Since VCS is third-party software, its binaries are not placed in a standard system path, so you need to add '/opt/VRTS/bin' to the PATH in the '/etc/profile' file to access the VCS commands.

echo 'export PATH=$PATH:/opt/VRTS/bin' >> /etc/profile
source /etc/profile

Use the 'hastatus -sum' command to check the Veritas Cluster (VCS) status.

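A quick health-check sketch once the PATH update is in place:

```shell
# Quick cluster health checks once /opt/VRTS/bin is in PATH.
export PATH=$PATH:/opt/VRTS/bin
if command -v hastatus >/dev/null 2>&1; then
  hastatus -sum    # summary: system states, service group states, frozen groups
  hasys -state     # per-node state (RUNNING expected on both nodes)
else
  echo "hastatus not found - verify the VCS installation"
fi
```

Both nodes should show as RUNNING, with the 'vxfen' service group online on each.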

Wrapping Up

In this tutorial, we’ve shown you how to install Veritas Cluster Server (VCS) on Linux (RHEL 8.8).

In the next post, we’ll show you how to configure Veritas Cluster Server 8.0 on RHEL 8.8, including service groups, resources, etc.

If you have any questions or feedback, feel free to comment below.

The post How to Install Veritas Cluster Server 8.0 in RHEL first appeared on 2DayGeek.


