
Deploy VM ISO Repo


Deploy VM: ISO Repo Method

Provisioner CLI Commands for Single and Multi-node VM

NOTE: We recommend not using the auto_deploy_vm CLI command for deployments, as it is no longer actively maintained.

Before You Start

Checklist:

  • For single-node VM deployment, ensure the VM is created with 2+ attached disks.
    OR
  • For multi-node VM deployment, ensure each VM is created with 6+ attached disks.
  • Do you see the devices when you run lsblk?
  • Do all the systems in your setup have valid hostnames, and are those hostnames reachable with ping?
  • Do you have IPs assigned to all NICs eth0, eth1 and eth2?
  • Identify the primary node and run the commands below on the primary node. (A quick verification sketch follows this checklist.)
    NOTE: For a single-node VM, the VM node itself is treated as the primary node.
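
A quick way to verify these prerequisites on each node (a sketch; srvnode-1/2/3 are placeholder hostnames, replace them with your actual node hostnames and drop the extras for a single-node setup):

    # Attached disks should be visible (2+ for single-node, 6+ for multi-node)
    lsblk
    # A valid hostname should be set
    hostnamectl status
    # Every node should be reachable by hostname
    for node in srvnode-1 srvnode-2 srvnode-3; do ping -c 2 "$node"; done
    # Each of eth0, eth1 and eth2 should have an IP assigned
    ip addr show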

VM Preparation for Deployment

  1. Set root user password on all nodes:

    sudo passwd root
    
  2. Refresh the machine_id on VM:

    rm /etc/machine-id
    systemd-machine-id-setup
    cat /etc/machine-id
    
  3. Download ISOs and shell-scripts
    NOTE: Contact the Cortx RE team for the latest ISO.

    mkdir /opt/isos
    

    NOTE: If you are outside the Seagate corporate network, download these files manually to the /opt/isos directory and skip to the Prepare cortx-prvsnr API step.
    Get the ISO download URL from the Cortx RE team. The URL should host the following files:

    # Set source URL
    # It should have the following: cortx-2.0.0-*-single.iso, cortx-os-1.0.0-*.iso, cortx-prep-2.0.0-*.sh
    CORTX_RELEASE_REPO=<URL to Cortx ISO hosting>
    
    # Download Single ISO
    pushd /opt/isos
    SINGLE_ISO=$(curl -s ${CORTX_RELEASE_REPO} | sed 's/<\/*[^>]*>//g' | cut -f1 -d' ' | grep 'single.iso')
    curl -O ${CORTX_RELEASE_REPO}/iso/${SINGLE_ISO}
    # Download OS ISO
    OS_ISO=$(curl -s ${CORTX_RELEASE_REPO} | sed 's/<\/*[^>]*>//g' | cut -f1 -d' '|grep  "cortx-os")
    curl -O ${CORTX_RELEASE_REPO}/iso/${OS_ISO}
    # Download cortx_prep script
    CORTX_PREP=$(curl -s ${CORTX_RELEASE_REPO} | sed 's/<\/*[^>]*>//g' | cut -f1 -d' '|grep  ".sh")
    curl -O ${CORTX_RELEASE_REPO}/iso/${CORTX_PREP}
    popd    
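
    Optionally verify that all three files landed in /opt/isos before proceeding (the names below are illustrative; they follow the pattern listed above):

    ls -lh /opt/isos
    # Expected contents (versions/build numbers will differ):
    #   cortx-2.0.0-*-single.iso
    #   cortx-os-1.0.0-*.iso
    #   cortx-prep-2.0.0-*.sh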
    
  4. Prepare cortx-prvsnr API

    pushd /opt/isos
    # Execute cortx-prep script
    sh /opt/isos/cortx-prep*.sh
    popd
    
  5. Verify the provisioner version (should be 0.36.0 or above):

    provisioner --version
    
  6. Create a config.ini file (the examples below use ~/config.ini):
    IMPORTANT NOTE: Please check that every detail in this file is correct for your node.
    Verify that the interface names match those on your node.

    Update the required details in ~/config.ini using the sample config.ini below:

    vi ~/config.ini
    

    Sample config.ini for single node VM

    [cluster]
    mgmt_vip=
    
    [srvnode_default]
    network.data.private_interfaces=eth3,eth4
    network.data.public_interfaces=eth1,eth2
    network.mgmt.interfaces=eth0
    bmc.user=None
    bmc.secret=None
    storage.cvg.0.data_devices=/dev/sdc
    storage.cvg.0.metadata_devices=/dev/sdb
    network.data.private_ip=None
    
    [srvnode-1]
    hostname=srvnode-1.localdomain
    roles=primary,openldap_server
    
    [enclosure_default]
    type=virtual
    
    [enclosure-1]
    

    Note: Find the devices on each node separately using the commands provided below, and fill them into the respective config.ini sections.
    Complete list of attached devices:
    device_list=$(lsblk -nd -o NAME -e 11|grep -v sda|sed 's|sd|/dev/sd|g'|paste -s -d, -)
    Values for storage.cvg.0.metadata_devices:
    echo ${device_list%%,*}
    Values for storage.cvg.0.data_devices:
    echo ${device_list#*,}
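
    For example, on a node with six attached disks sdb through sdg (illustrative names; sda holds the OS and is excluded by the grep), the commands above would produce:

    # device_list expands to: /dev/sdb,/dev/sdc,/dev/sdd,/dev/sde,/dev/sdf,/dev/sdg
    echo ${device_list%%,*}   # prints /dev/sdb (metadata_devices)
    echo ${device_list#*,}    # prints /dev/sdc,/dev/sdd,/dev/sde,/dev/sdf,/dev/sdg (data_devices)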

    Sample config.ini for multi-node (3-node) VM

    [cluster]
    mgmt_vip=
    
    [srvnode_default]
    network.data.private_interfaces=eth3,eth4
    network.data.public_interfaces=eth1,eth2
    network.mgmt.interfaces=eth0
    bmc.user=None
    bmc.secret=None
    network.data.private_ip=None
    
    [srvnode-1]
    hostname=srvnode-1.localdomain
    roles=primary,openldap_server,kafka_server
    storage.cvg.0.data_devices=<Values for storage.cvg.0.data_devices from command executed earlier>
    storage.cvg.0.metadata_devices=<Values for storage.cvg.0.metadata_devices from command executed earlier>
    
    [srvnode-2]
    hostname=srvnode-2.localdomain
    roles=secondary,openldap_server,kafka_server
    storage.cvg.0.data_devices=<Values for storage.cvg.0.data_devices from command executed earlier>
    storage.cvg.0.metadata_devices=<Values for storage.cvg.0.metadata_devices from command executed earlier>
    
    [srvnode-3]
    hostname=srvnode-3.localdomain
    roles=secondary,openldap_server,kafka_server
    storage.cvg.0.data_devices=<Values for storage.cvg.0.data_devices from command executed earlier>
    storage.cvg.0.metadata_devices=<Values for storage.cvg.0.metadata_devices from command executed earlier>
    
    [enclosure_default]
    type=virtual
    
    [enclosure-1]
    
    [enclosure-2]
    
    [enclosure-3]
    

    NOTE: network.data.private_ip, bmc.secret, and bmc.user should be set to None for VMs.

Deploy VM Manually:

Manual deployment of the VM consists of the following steps from Auto-Deploy, which can be executed individually:
NOTE: Ensure the VM Preparation for Deployment steps have completed successfully before proceeding.
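
The bootstrap commands in step 1 reference ${SINGLE_ISO} and ${OS_ISO}, which were set during the download step. If you are running from a new shell session, a minimal sketch to re-derive them from /opt/isos (the sketch uses full paths; the download step stored bare file names, so adjust to whatever your provisioner version expects):

    SINGLE_ISO=$(ls /opt/isos/*single.iso)
    OS_ISO=$(ls /opt/isos/cortx-os-*.iso)
    echo "${SINGLE_ISO} ${OS_ISO}"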

  1. Bootstrap VM(s): Run the provisioner setup_provisioner CLI command:

    Single Node VM: Bootstrap

    Using the ISOs downloaded during VM preparation:

    provisioner setup_provisioner \
        --logfile --logfile-filename /var/log/seagate/provisioner/setup.log \
        --source iso --config-path ~/config.ini \
        --iso-cortx ${SINGLE_ISO} --iso-os ${OS_ISO} \
        srvnode-1:$(hostname -f)
    

    Multi Node VM: Bootstrap

    Using the ISOs downloaded during VM preparation:

    provisioner setup_provisioner  \
        --logfile --logfile-filename /var/log/seagate/provisioner/setup.log \
        --ha --source iso --config-path ~/config.ini \
        --iso-cortx ${SINGLE_ISO} --iso-os ${OS_ISO} \
        srvnode-1:<fqdn:primary_hostname> \
        srvnode-2:<fqdn:secondary_hostname> \
        srvnode-3:<fqdn:secondary_hostname>
    

    Example:

    provisioner setup_provisioner \
            --logfile --logfile-filename /var/log/seagate/provisioner/setup.log \
            --ha --source iso --config-path ~/config.ini \
            --iso-cortx ${SINGLE_ISO} --iso-os ${OS_ISO} \
            srvnode-1:host1.localdomain srvnode-2:host2.localdomain srvnode-3:host3.localdomain
    

    NOTE:

    1. This command will prompt for each node's root password during the initial cluster setup.
      This is a one-time activity required to set up passwordless SSH across the nodes.
    2. [OPTIONAL] To set up a cluster of more than 3 nodes, append --name <setup_profile_name> to the setup_provisioner command's input parameters.
  2. Update pillar and export pillar data for confstore:

    provisioner configure_setup /root/config.ini <number of nodes in cluster>
    salt-call state.apply components.system.config.pillar_encrypt
    provisioner confstore_export
    
  3. Bootstrap Validation: Once the bootstrap (auto_deploy or setup_provisioner) command has executed successfully, verify the Salt master setup on all nodes (setup verification checklist):

    salt '*' test.ping  
    salt "*" service.stop puppet
    salt "*" service.disable puppet
    salt '*' pillar.get release  
    salt '*' grains.get node_id  
    salt '*' grains.get cluster_id  
    salt '*' grains.get roles  
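
    All nodes should respond to test.ping with True; on a 3-node setup the output looks roughly like this (minion IDs depend on your node names):

    srvnode-1:
        True
    srvnode-2:
        True
    srvnode-3:
        True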
    
  4. Deployment Based On Component Groups:
    If the provisioner setup is complete and you want to deploy in stages based on component groups, run the commands below. A consolidated sketch of the full multi-node sequence follows this list.

    NOTE: At any stage, if there is a failure, it is advised to run destroy for that particular group. For help on destroy commands, refer to https://github.com/Seagate/cortx-prvsnr/wiki/Teardown-Node(s)#targeted-teardown

    Non-Cortx Group: System & 3rd-Party Software

    1. System component group
      Single Node

      provisioner deploy --setup-type single --states system

      Multi Node

      provisioner deploy --states system
    2. Prereq component group
      Single Node

      provisioner deploy --setup-type single --states prereq

      Multi Node

      provisioner deploy --states prereq

    Cortx Group: Utils

    1. Utils component group
      Single Node
      provisioner deploy --setup-type single --states utils
      Multi Node
      provisioner deploy --setup-type 3_node --states utils

    Cortx Group: IO Path

    1. IO path component group
      Single Node
      provisioner deploy --setup-type single --states iopath
      Multi Node
      provisioner deploy --states iopath

    Cortx Group: Control Path

    1. Control path component group
      Single Node
      provisioner deploy --setup-type single --states controlpath
      Multi Node
      provisioner deploy --states controlpath

    Start Cluster (irrespective of number of nodes):

    1. Execute the following command on the primary node to start the cluster:

      cortx cluster start
    2. Verify Cortx cluster status:

      hctl status
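
    Putting the groups together, a consolidated sketch of the full multi-node sequence (the same commands as above, run in order; if any state fails, stop and run the targeted destroy for that group before retrying):

    provisioner deploy --states system
    provisioner deploy --states prereq
    provisioner deploy --setup-type 3_node --states utils
    provisioner deploy --states iopath
    provisioner deploy --states controlpath
    cortx cluster start
    hctl status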

Known issues:

  1. Known Issue 19: LVM issue - auto-deploy fails during provisioning of storage component (EOS-12289)