Deploy VM ISO Repo
NOTE: We recommend not using the `auto_deploy_vm` CLI command for deployments, as it is no longer actively maintained.
Checklist:
- For single-node VM deployment, ensure the VM is created with 2+ attached disks; for multi-node VM deployment, ensure each VM is created with 6+ attached disks.
- Do you see the devices on execution of this command: `lsblk`?
- Do all the systems in your setup have valid hostnames, and are the hostnames reachable: `ping`?
- Do you have IPs assigned to all NICs (eth0, eth1 and eth2)?
- Identify the primary node and run the commands below on it.

NOTE: For a single-node VM, the VM node itself is treated as the primary node.
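The disk requirement above can also be checked mechanically. `min_disks_ok` below is a hypothetical helper (not part of the provisioner tooling), sketched assuming the OS disk is sda and the attached data disks are visible to `lsblk`:

```shell
# Hypothetical helper: succeed when a comma-separated device list
# contains at least the required number of disks.
min_disks_ok() {
    list="$1"; required="$2"
    if [ -z "$list" ]; then
        count=0
    else
        count=$(printf '%s\n' "$list" | tr ',' '\n' | wc -l)
    fi
    [ "$count" -ge "$required" ]
}

# Single-node VM needs 2+ non-OS disks; multi-node needs 6+ per VM, e.g.:
#   min_disks_ok "$(lsblk -nd -o NAME -e 11 | grep -v sda | paste -s -d, -)" 2
```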
- Set the root user password on all nodes:

```shell
sudo passwd root
```
- Refresh the machine-id on the VM:

```shell
rm /etc/machine-id
systemd-machine-id-setup
cat /etc/machine-id
```
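As a quick, illustrative sanity check (not part of the official procedure): a freshly generated machine-id is a single line of 32 lowercase hex characters, which can be validated like this:

```shell
# Illustrative check: a valid machine-id is 32 lowercase hex characters.
valid_machine_id() {
    printf '%s' "$1" | grep -Eq '^[0-9a-f]{32}$'
}

# Usage after regenerating:
#   valid_machine_id "$(cat /etc/machine-id)" && echo "machine-id looks valid"
```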
- Download ISOs and shell scripts

NOTE: Contact the Cortx RE team for the latest ISO.

```shell
mkdir /opt/isos
```

NOTE: If you are outside the Seagate corporate network, download these files manually to the /opt/isos directory and skip to the "Prepare cortx-prvsnr API" step.

Get the ISO download URL details from the Cortx RE team. The URL should host the following files:
- cortx-2.0.0-*-single.iso
- cortx-os-1.0.0-*.iso or centos-7.8-minimal.iso
- cortx-prep-2.0.0-*.sh or download https://github.com/Seagate/cortx-prvsnr/blob/stable/cli/src/cortx_prep.sh
```shell
# Set source URL. It should host: cortx-2.0.0-*-single.iso,
# cortx-os-1.0.0-*.iso and cortx-prep-2.0.0-*.sh
CORTX_RELEASE_REPO=<URL to Cortx ISO hosting>

pushd /opt/isos

# Download single ISO
SINGLE_ISO=$(curl -s ${CORTX_RELEASE_REPO} | sed 's/<\/*[^>]*>//g' | cut -f1 -d' ' | grep 'single.iso')
curl -O ${CORTX_RELEASE_REPO}/iso/${SINGLE_ISO}

# Download OS ISO
OS_ISO=$(curl -s ${CORTX_RELEASE_REPO} | sed 's/<\/*[^>]*>//g' | cut -f1 -d' ' | grep "cortx-os")
curl -O ${CORTX_RELEASE_REPO}/iso/${OS_ISO}

# Download cortx_prep script
CORTX_PREP=$(curl -s ${CORTX_RELEASE_REPO} | sed 's/<\/*[^>]*>//g' | cut -f1 -d' ' | grep ".sh")
curl -O ${CORTX_RELEASE_REPO}/iso/${CORTX_PREP}

popd
```
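The `SINGLE_ISO=...` lines above scrape the repo's HTML directory index: `sed` strips the tags, `cut` keeps the first whitespace-delimited field, and `grep` picks the matching filename. Here is the same pipeline run offline against a made-up index line (the filename and date are hypothetical):

```shell
# A made-up fragment of an HTTP directory listing.
sample='<a href="cortx-2.0.0-777-single.iso">cortx-2.0.0-777-single.iso</a> 01-Jan-2021 1.2G'

# Same filter chain as the download step above.
printf '%s\n' "$sample" | sed 's/<\/*[^>]*>//g' | cut -f1 -d' ' | grep 'single.iso'
# → cortx-2.0.0-777-single.iso
```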
- Prepare cortx-prvsnr API

```shell
pushd /opt/isos
# Execute the cortx-prep script
sh /opt/isos/cortx-prep*.sh
popd
```
- Verify the provisioner version (0.36.0 and above):

```shell
provisioner --version
```
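If you want to enforce the 0.36.0 minimum in a script, one option (an illustrative helper, assuming GNU `sort -V` as shipped on CentOS 7, and assuming `provisioner --version` prints a bare version string) is:

```shell
# Illustrative helper: succeed when version $1 is >= version $2.
version_ge() {
    [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$2" ]
}

# e.g.: version_ge "$(provisioner --version)" "0.36.0" || echo "provisioner too old"
```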
- Create a config.ini file:

IMPORTANT NOTE: Please check that every detail in this file is correct for your node, and verify that the interface names match your node. Update the required details in ~/config.ini, using the sample config.ini below as a reference:

```shell
vi ~/config.ini
```

Sample config.ini for a single-node VM:
```ini
[cluster]
mgmt_vip=

[srvnode_default]
network.data.private_interfaces=eth3,eth4
network.data.public_interfaces=eth1,eth2
network.mgmt.interfaces=eth0
bmc.user=None
bmc.secret=None
storage.cvg.0.data_devices=/dev/sdc
storage.cvg.0.metadata_devices=/dev/sdb
network.data.private_ip=None

[srvnode-1]
hostname=srvnode-1.localdomain
roles=primary,openldap_server

[enclosure_default]
type=virtual

[enclosure-1]
```
Note: Find the devices on each node separately using the commands below, to fill into the respective config.ini sections.

Complete list of attached devices:

```shell
device_list=$(lsblk -nd -o NAME -e 11 | grep -v sda | sed 's|sd|/dev/sd|g' | paste -s -d, -)
```

Values for storage.cvg.0.metadata_devices:

```shell
echo ${device_list%%,*}
```

Values for storage.cvg.0.data_devices:

```shell
echo ${device_list#*,}
```
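The two parameter expansions above split `device_list` at its first comma: `%%,*` removes everything from the first comma onward (leaving the metadata device), while `#*,` removes everything up to and including the first comma (leaving the data devices). A worked example with hypothetical device names:

```shell
device_list="/dev/sdb,/dev/sdc,/dev/sdd"

echo "${device_list%%,*}"   # → /dev/sdb (metadata_devices)
echo "${device_list#*,}"    # → /dev/sdc,/dev/sdd (data_devices)
```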
Sample config.ini for a multi-node (3-node) VM:

```ini
[cluster]
mgmt_vip=

[srvnode_default]
network.data.private_interfaces=eth3,eth4
network.data.public_interfaces=eth1,eth2
network.mgmt.interfaces=eth0
bmc.user=None
bmc.secret=None
network.data.private_ip=None

[srvnode-1]
hostname=srvnode-1.localdomain
roles=primary,openldap_server,kafka_server
storage.cvg.0.data_devices=<Values for storage.cvg.0.data_devices from command executed earlier>
storage.cvg.0.metadata_devices=<Values for storage.cvg.0.metadata_devices from command executed earlier>

[srvnode-2]
hostname=srvnode-2.localdomain
roles=secondary,openldap_server,kafka_server
storage.cvg.0.data_devices=<Values for storage.cvg.0.data_devices from command executed earlier>
storage.cvg.0.metadata_devices=<Values for storage.cvg.0.metadata_devices from command executed earlier>

[srvnode-3]
hostname=srvnode-3.localdomain
roles=secondary,openldap_server,kafka_server
storage.cvg.0.data_devices=<Values for storage.cvg.0.data_devices from command executed earlier>
storage.cvg.0.metadata_devices=<Values for storage.cvg.0.metadata_devices from command executed earlier>

[enclosure_default]
type=virtual

[enclosure-1]
[enclosure-2]
[enclosure-3]
```
NOTE: `private_ip`, `bmc_secret` and `bmc_user` should be None for a VM.
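Before deploying, a hypothetical pre-flight check (not part of the provisioner CLI) can confirm those fields were indeed left as None in your config.ini:

```shell
# Hypothetical check: warn about any VM-only field that is not set to None.
check_vm_fields() {
    cfg="$1"; rc=0
    for key in network.data.private_ip bmc.user bmc.secret; do
        grep -q "^${key}=None" "$cfg" || { echo "WARN: ${key} should be None in ${cfg}"; rc=1; }
    done
    return $rc
}

# Usage: check_vm_fields ~/config.ini && echo "VM fields look correct"
```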
Manual deployment of a VM consists of the following steps from Auto-Deploy, which can be executed individually:

NOTE: Ensure VM Preparation for Deployment has completed successfully before proceeding.
- Bootstrap the VM(s): run the `setup_provisioner` provisioner CLI command.

Single-node VM (using remote hosted repos):

```shell
provisioner setup_provisioner \
    --logfile --logfile-filename /var/log/seagate/provisioner/setup.log \
    --source iso --config-path ~/config.ini \
    --iso-cortx ${SINGLE_ISO} --iso-os ${OS_ISO} \
    srvnode-1:$(hostname -f)
```

Multi-node VM (using remote hosted repos):

```shell
provisioner setup_provisioner \
    --logfile --logfile-filename /var/log/seagate/provisioner/setup.log \
    --ha --source iso --config-path ~/config.ini \
    --iso-cortx ${SINGLE_ISO} --iso-os ${OS_ISO} \
    srvnode-1:<fqdn:primary_hostname> \
    srvnode-2:<fqdn:secondary_hostname> \
    srvnode-3:<fqdn:secondary_hostname>
```

Example:

```shell
provisioner setup_provisioner \
    --logfile --logfile-filename /var/log/seagate/provisioner/setup.log \
    --ha --source iso --config-path ~/config.ini \
    --iso-cortx ${SINGLE_ISO} --iso-os ${OS_ISO} \
    srvnode-1:host1.localdomain srvnode-2:host2.localdomain srvnode-3:host3.localdomain
```
NOTE:
- This command will ask for each node's root password during initial cluster setup. This is a one-time activity required to set up passwordless SSH across the nodes.
- [OPTIONAL] For setting up a cluster of more than 3 nodes, append --name <setup_profile_name> to the auto_deploy_vm command input parameters.
- Update pillar and export pillar data for confstore:

```shell
provisioner configure_setup /root/config.ini <number of nodes in cluster>
salt-call state.apply components.system.config.pillar_encrypt
provisioner confstore_export
```
- Bootstrap validation

Once the deployment bootstrap command (auto_deploy or setup_provisioner) has executed successfully, verify the salt master setup on all nodes (setup verification checklist):

```shell
salt '*' test.ping
salt "*" service.stop puppet
salt "*" service.disable puppet
salt '*' pillar.get release
salt '*' grains.get node_id
salt '*' grains.get cluster_id
salt '*' grains.get roles
```
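As an illustrative follow-up (assuming the usual `salt '*' test.ping` output, where each responding minion prints its id on one line and an indented `True` on the next), you can count responders and compare against your node count:

```shell
# Count minions that answered True in salt test.ping output.
count_responders() {
    grep -c '^ *True$'
}

# Hypothetical output from a healthy 3-node cluster:
sample='srvnode-1:
    True
srvnode-2:
    True
srvnode-3:
    True'
printf '%s\n' "$sample" | count_responders   # → 3
```

In practice: `salt '*' test.ping | count_responders`.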
- Deployment based on component groups

If the provisioner setup is complete and you want to deploy in stages based on component groups:

NOTE: At any stage, if there is a failure, it is advised to run destroy for that particular group. For help on destroy commands, refer to https://github.com/Seagate/cortx-prvsnr/wiki/Teardown-Node(s)#targeted-teardown
- System component group

Single Node:

```shell
provisioner deploy --setup-type single --states system
```

Multi Node:

```shell
provisioner deploy --states system
```
- Prereq component group

Single Node:

```shell
provisioner deploy --setup-type single --states prereq
```

Multi Node:

```shell
provisioner deploy --states prereq
```
- Utils component

Single Node:

```shell
provisioner deploy --setup-type single --states utils
```

Multi Node:

```shell
provisioner deploy --setup-type 3_node --states utils
```
- IO path component group

Single Node:

```shell
provisioner deploy --setup-type single --states iopath
```

Multi Node:

```shell
provisioner deploy --states iopath
```
- Control path component group

Single Node:

```shell
provisioner deploy --setup-type single --states controlpath
```

Multi Node:

```shell
provisioner deploy --states controlpath
```
- Execute the following command on the primary node to start the cluster:

```shell
cortx cluster start
```
- Verify the Cortx cluster status:

```shell
hctl status
```
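To spot trouble quickly in that output, here is an illustrative filter. It assumes `hctl status` tags healthy services with a `[started]` state; verify the exact state strings against your hctl version:

```shell
# Illustrative filter: print service lines whose state is not [started].
not_started() {
    grep '^[[:space:]]*\[' | grep -v '\[started\]'
}

# Usage: hctl status | not_started    # empty output suggests all services are up
```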