This is a Vagrant 1.7.2+ plugin that adds vagrant commands and provisioners to build and test OpenShift Origin.
Note
|
This plugin requires Vagrant 1.7.2+ |
Note
|
Instructions below generally assume a Linux-like command line and may require modifications for other environments. |
-
Compatible with VMs run via VirtualBox, AWS, libvirt, OpenStack, or managed providers.
-
Provides commands to install build dependencies, sync repositories, and run tests.
v2 (aka M4) is no longer supported on the master branch or in the published rubygems.org version of this plugin. To use the plugin with v2, check out the v2 branch of this repository and follow the documentation there.
Like other plugins, this gem will be installed in your ~/.vagrant.d/gems/
directory.
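If you just want the published plugin rather than a development build, installing from rubygems.org should suffice (a sketch; the exact published version may vary):
$ vagrant plugin install vagrant-openshift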
To work on the vagrant-openshift plugin, clone this repository and use Bundler to get the dependencies:
$ bundle
Compile and install using Rake:
$ rake
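To confirm the locally built gem landed in ~/.vagrant.d/gems/, list the installed plugins; vagrant-openshift should appear in the output:
$ vagrant plugin list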
This plugin works in concert with the OpenShift Origin Vagrantfile to build and update OpenShift development environments.
The upstream OpenShift Origin projects will be git-cloned locally under your GOPATH (you don’t need to have golang or the other build requirements installed locally), and each clone will have the upstream project set as its git "upstream" remote. Projects you have forked into your GitHub account will also have that account set as the git "origin" remote.
Set up a GOPATH if you don’t already have one for golang:
$ echo "export GOPATH=~/code" >> ~/.bash_profile # ~/code can be any dir
$ source ~/.bash_profile
$ mkdir -p $GOPATH
Then clone the repositories into your GOPATH.
$ cd $GOPATH
$ vagrant openshift-local-checkout -u <github username>
$ cd src/github.com/openshift/origin
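To confirm the remote layout described above (assuming you forked origin under your GitHub account), inspect the remotes of the freshly cloned repository:
$ git remote -v   # "origin" should point at your fork, "upstream" at the openshift organization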
Generate a .vagrant-openshift.json in your OpenShift Origin repo; you can modify it later to match your vagrant requirements:
$ vagrant origin-init --stage inst --os (fedora|centos7|rhel7|rhelatomic7) <instance name>
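For example, a hypothetical invocation targeting CentOS 7 (the instance name here is only an illustration; pick something that identifies you):
$ vagrant origin-init --stage inst --os centos7 openshift-dev-myuser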
Running with the default VirtualBox provider:
Note
|
If you are trying to refresh an existing image, first remove the current one: find its name with vagrant box list and remove it with vagrant box remove <box_name>.
|
$ vagrant up
Note
|
See Other Providers below for launching VMs from other providers. |
-
Building updated code from edits in your local repository clones:
$ vagrant sync-openshift
For some providers, your local repositories are automatically synchronized to the remote VM. If not, the --source option can be used to push them to the VM before building.
In addition to the OpenShift binary itself, a number of component Docker images are built by default, which can take a long time. To rebuild only the OpenShift binary, use the --no-images option.
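For example, a sketch combining the options described above (the behavior of --source depends on your provider):
$ vagrant sync-openshift --source      # also push your local repository clones to the VM
$ vagrant sync-openshift --no-images   # rebuild only the OpenShift binary, skipping the Docker images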
To enable easy customization of the build environment, any files placed under ~/.openshiftdev/home.d will be copied to the vagrant user's home directory. For example, ~/.openshiftdev/home.d/.bash_profile will be copied to .bash_profile on the vagrant VM.
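For example, to carry a personal dotfile into the VM (the .gitconfig here is purely illustrative):
$ mkdir -p ~/.openshiftdev/home.d
$ cp ~/.gitconfig ~/.openshiftdev/home.d/.gitconfig   # will be copied to ~/.gitconfig on the vagrant VM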
Your origin repo Vagrantfile can use providers other than the default VirtualBox provider for creating VMs. Provider configuration is read from outside configuration files, so the repository Vagrantfile does not have to be modified in most cases. See the relevant provider section in the Vagrantfile to learn what parameters are available.
If you are starting with a plain operating system host image (which is likely to be the case) then you have a bit of setup to do to prepare your host for building and running after creation. Consult the Initial Setup section for details.
-
Install the latest vagrant-aws plugin.
Install the plugin from rubygems:
$ vagrant plugin install vagrant-aws
Or follow the build steps to build from source.
You now need some AWS-specific configuration to specify which AMI to use.
-
Ensure your AWS credentials file is present at ~/.awscred; it should have the following entries filled in:
AWSAccessKeyId=<AWS API Key>
AWSSecretKey=<AWS API Secret>
AWSKeyPairName=<Keypair name>
AWSPrivateKeyPath=<SSH Private key>
-
Re-create your .vagrant-openshift.json file with updated AWS settings:
$ vagrant origin-init --stage inst --os (fedora|centos7|rhel7|rhelatomic7) <instance name>
The instance name will be applied as a tag and should generally be specific to you and OpenShift so that you can identify the VM among any others in your account. It will be stored in the config file.
The Red Hat OpenShift team shares an account that provides pre-built AMIs for the quickest startup possible, so this command will search for the latest version of that AMI. If your account doesn’t have this AMI, you’ll need to supply a base AMI in your repository’s .vagrant-openshift.json file under the aws.ami key.
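A minimal sketch of that entry (the AMI id is a placeholder; leave the rest of the generated aws section unchanged):
"aws": {
    "ami": "ami-xxxxxxxx"
}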
-
Start the AWS machine
vagrant up --provider=aws
Tip
|
Be sure to rerun origin-init before each subsequent run of vagrant up --provider=aws to pick up the last built AMI.
|
Note
|
Requires latest AWS provider. |
Note
|
You can use the Vagrant-AMI plugin to create an AMI from a running AWS machine. |
-
Install the latest vagrant-openstack-plugin. See: https://github.com/cloudbau/vagrant-openstack-plugin.
Install the plugin from rubygems:
$ vagrant plugin install vagrant-openstack-plugin
Note
|
On some systems (e.g. macOS), setting export NOKOGIRI_USE_SYSTEM_LIBRARIES=1 can help make the above command work.
|
-
Edit ~/.openstackcred and update your OpenStack credentials, endpoint and tenant name:
OSEndpoint=<OpenStack Endpoint URL, e.g. http://openshift.example.com:5000/v2.0/tokens>
OSUsername=<OpenStack Username>
OSAPIKey=<OpenStack Password>
OSKeyPairName=<Keypair name as registered in OpenStack>
OSPrivateKeyPath=<path to that SSH Private key>
OSTenant=<OpenStack Tenant/Project Name, see it at the top in OpenStack web UI>
OSFloatingIP=<specific floating ip or ':auto' if floating ip is desired>
OSFloatingIPPool=<specific pool or 'false' (to use first found) if floating ip is desired>
-
Edit .vagrant-openshift.json and update the openstack provider section. You’ll need to indicate at least the base image you’d like to start from, as well as the SSH user to access it with:
"openstack": {
    "image": "Fedora-Cloud-Base-20141203-21.x86_64",
    "ssh_user": "fedora"
}
-
Start the OpenStack machine
vagrant up --provider=openstack
Note
|
Requires latest OpenStack provider. |
-
Install the vagrant-libvirt plugin dependencies
yum install libxslt-devel libxml2-devel libvirt-devel ruby-devel rubygems
-
Install the vagrant-libvirt plugin
vagrant plugin install vagrant-libvirt
Note
|
This may require modifying the system linker as described in this issue: |
# alternatives --set ld /usr/bin/ld.gold
-
Configure LibVirt to allow remote TLS connections
-
Create TLS certificates and key pairs. Example commands for creating a self-signed certificate are provided below; for the full procedure, follow the guide at http://libvirt.org/remote.html#Remote_certificates
-
mkdir -p /etc/pki/libvirt/private
#CA Cert
certtool --generate-privkey > cakey.pem
cat <<EOF> ca.info
cn = MyOrg
ca
cert_signing_key
EOF
certtool --generate-self-signed --load-privkey cakey.pem --template ca.info --outfile cacert.pem
/bin/cp -f cacert.pem /etc/pki/CA/cacert.pem
#Server cert
certtool --generate-privkey > serverkey.pem
cat <<EOF> server.info
organization = MyOrg
cn = oirase
tls_www_server
encryption_key
signing_key
EOF
certtool --generate-certificate --load-privkey serverkey.pem \
--load-ca-certificate cacert.pem --load-ca-privkey cakey.pem \
--template server.info --outfile servercert.pem
/bin/cp -f serverkey.pem /etc/pki/libvirt/private/serverkey.pem
/bin/cp -f servercert.pem /etc/pki/libvirt/servercert.pem
#Client cert
certtool --generate-privkey > clientkey.pem
cat <<EOF> client.info
country = US
state = California
locality = Mountain View
organization = MyOrg
cn = client1
tls_www_client
encryption_key
signing_key
EOF
certtool --generate-certificate --load-privkey clientkey.pem \
--load-ca-certificate cacert.pem --load-ca-privkey cakey.pem \
--template client.info --outfile clientcert.pem
/bin/cp -f clientkey.pem /etc/pki/libvirt/private/clientkey.pem
/bin/cp -f clientcert.pem /etc/pki/libvirt/clientcert.pem
-
Modify /etc/sysconfig/libvirtd to enable listening for connections
LIBVIRTD_ARGS="--listen"
-
Restart libvirtd
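For example, on a systemd-based host such as Fedora or CentOS 7 (the virsh line is an optional sanity check; the host name must match the server certificate CN):
# systemctl restart libvirtd
# virsh -c qemu+tls://<libvirt host>/system list --all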
-
Start the LibVirt machine
vagrant up --provider=libvirt
Note
|
Requires latest LibVirt provider |
Running on other environments which are not managed by Vagrant directly.
-
Install the vagrant-managed-servers plugin
vagrant plugin install vagrant-managed-servers
-
Edit the Vagrantfile and update the managed section with the host address, user name, and SSH key.
managed.server = "HOST or IP of machine"
override.ssh.username = "root"
override.ssh.private_key_path = "~/.ssh/id_rsa"
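In context, these settings typically live inside the managed provider block of the Vagrantfile; a sketch (the actual block in the origin repo Vagrantfile may differ):
Vagrant.configure("2") do |config|
  config.vm.provider :managed do |managed, override|
    managed.server = "HOST or IP of machine"
    override.ssh.username = "root"
    override.ssh.private_key_path = "~/.ssh/id_rsa"
  end
end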
-
Connect to the manually managed machine
vagrant up --provider=managed
Note
|
Requires latest Managed provider |
Ideally you would be able to use an image with the operating system, dependencies, and OpenShift already installed so you can just start hacking. But at this time that is not available for all providers.
Images may be thought of as being at one of four stages:
-
"os" - The base OS image (use a "minimal" one).
-
"deps" - OpenShift runtime dependencies and build requirements are installed.
-
"inst" - OpenShift code, images, and binaries are built and installed
-
"bootstrap" - OpenShift installed and running
You may want to create images that snapshot the output at each of these stages, as the rate of change and amount of time to create each is different.
After using vagrant up --provider=<provider>
to start a host with only
a basic operating system on it (Fedora 21+ or CentOS 7 should suffice),
you will need to install the build tools and other dependencies for
building and running OpenShift. The following vagrant commands should
help with this:
$ vagrant build-openshift-base
$ vagrant build-openshift-base-images
$ vagrant install-openshift-assets-base
Given this base foundation, you may want to vagrant package
the result before proceeding to install OpenShift code.
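For example, with the VirtualBox provider (the output box name is just an illustration):
$ vagrant package --output openshift-deps.box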
$ vagrant install-openshift
$ vagrant build-openshift-base-images # pick up updates if older "deps" base reused
$ vagrant build-openshift --images
$ vagrant build-sti --binary-only
This software distribution includes cryptographic software that is subject to the U.S. Export Administration Regulations (the "EAR") and other U.S. and foreign laws and may not be exported, re-exported or transferred (a) to any country listed in Country Group E:1 in Supplement No. 1 to part 740 of the EAR (currently, Cuba, Iran, North Korea, Sudan & Syria); (b) to any prohibited destination or to any end user who has been prohibited from participating in U.S. export transactions by any federal agency of the U.S. government; or (c) for use in connection with the design, development or production of nuclear, chemical or biological weapons, or rocket systems, space launch vehicles, or sounding rockets, or unmanned air vehicle systems. You may not download this software or technical information if you are located in one of these countries or otherwise subject to these restrictions. You may not provide this software or technical information to individuals or entities located in one of these countries or otherwise subject to these restrictions. You are also responsible for compliance with foreign law requirements applicable to the import, export and use of this software and technical information.