The OpenShift Migration Operator (mig-operator) assists with installation of app migration tooling on OpenShift 3.x and 4.x clusters:
- Migration Controller (mig-controller)
- Migration UI (mig-ui)
- Velero
The above suite of tools is collectively referred to as CAM (Cluster Application Migration) in parts of this guide.
Before using mig-operator to attempt a migration, verify that the OpenShift clusters you'll be migrating apps between meet version requirements:
Product | Versions |
---|---|
OpenShift 3.x | v3.7+ |
OpenShift 4.x | v4.1+ |
In addition to temporary object storage (covered at the end of this guide), you will need the following to perform a migration:
- SSH client (PuTTY is recommended for Microsoft Windows users)
- Recent version of Firefox or Chrome
- oc client (download from try.openshift.com)
For migrations in disconnected environments, you may want to pull the required images ahead of time.
Required Images - CAM Tool
quay.io/ocpmigrate/mig-controller:release-1.0
quay.io/ocpmigrate/mig-operator:release-1.0
quay.io/ocpmigrate/mig-ui:release-1.0
quay.io/ocpmigrate/velero:fusor-1.1
quay.io/ocpmigrate/migration-plugin:release-1.0
quay.io/ocpmigrate/velero-restic-restore-helper:fusor-1.1
registry.access.redhat.com/rhel7
docker.io/registry:2
Required Images - NooBaa
docker.io/noobaa/noobaa-core:5
docker.io/noobaa/noobaa-operator:1.1.0
docker.io/centos/mongodb-36-centos7
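If you need to mirror these images into a registry reachable from the disconnected environment, a tool such as skopeo can copy them directly between registries. The snippet below is a minimal sketch, not part of the official procedure; it assumes skopeo is installed and uses my-mirror-registry.example.com as a placeholder for your internal registry:
# Copy one of the required images into a local mirror registry (hypothetical hostname).
# Repeat for each image listed above, preserving the tag.
skopeo copy \
  docker://quay.io/ocpmigrate/mig-controller:release-1.0 \
  docker://my-mirror-registry.example.com/ocpmigrate/mig-controller:release-1.0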
On both the source and the destination cluster, we'll use mig-operator to install components needed to perform a migration.
A manifest for deploying mig-operator to OpenShift 3.x and 4.x is available as operator.yml.
# Download the operator manifest and, if needed, edit its image references to match the images listed above.
# https://raw.githubusercontent.com/konveyor/mig-operator/master/deploy/non-olm/v1.1.0/operator.yml
--------------------------------------------------------------------------------------
# Login to your source cluster (apps will be migrated FROM here)
oc login https://my-source-cluster:8443
# Create mig-operator (requires cluster-admin privilege)
oc create -f operator.yml
--------------------------------------------------------------------------------------
# Login to your destination cluster (apps will be migrated TO here)
oc login https://my-destination-cluster:8443
# Create mig-operator (requires cluster-admin privilege).
oc create -f operator.yml
Next up, we'll tell mig-operator which components it should install for us by creating a MigrationController resource. When you install mig-operator, it introduces a MigrationController CRD to your cluster.
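Before creating the resource, you can confirm the CRD was registered by the operator. A quick check (the CRD name below is the standard plural form of the MigrationController kind in the migration.openshift.io group):
$ oc get crd migrationcontrollers.migration.openshift.io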
The sample controller-3.yml and controller-4.yml are provided with recommended defaults for:
- Using OpenShift 3.x as the source cluster (only Velero installed)
- Using OpenShift 4.x as the destination cluster (all components: Velero, mig-controller, and mig-ui installed)
If you wish to run app migrations with a different configuration, simply change values within the spec of each sample MigrationController yaml definition. Note that it is critical that the mig-controller component run on only one of the two clusters involved in a migration.
For each OpenShift 3.x cluster that will be involved in the migration, use controller-3.yml as a starting point.
https://raw.githubusercontent.com/konveyor/mig-operator/master/deploy/non-olm/v1.0.0/controller-3.yml
For each OpenShift 4.x cluster that will be involved in the migration, use controller-4.yml as a starting point:
https://raw.githubusercontent.com/konveyor/mig-operator/master/deploy/non-olm/v1.0.0/controller-4.yml
The snippet below shows the contents of controller-3.yml, a starting point for installing app migration components on OpenShift 3.x.
apiVersion: migration.openshift.io/v1alpha1
kind: MigrationController
spec:
  [...]
  # 'snapshot_tag' should be release-1.0 to follow along with this guide
  snapshot_tag: release-1.0
  # 'migration_velero' should always be 'true'
  migration_velero: true
  # 'migration_controller' and 'migration_ui' should be 'true' only on the destination cluster
  migration_controller: false
  migration_ui: false
  [...]
  # To install the controller on OpenShift 3 you will need to configure the API endpoint:
  # mig_ui_cluster_api_endpoint: https://replace-with-openshift-cluster-hostname:8443
Sample source cluster MigrationController config:
[...]
snapshot_tag: release-1.0
migration_velero: true
migration_controller: false
migration_ui: false
[...]
Sample destination cluster MigrationController config:
[...]
snapshot_tag: release-1.0
migration_velero: true
migration_controller: true
migration_ui: true
[...]
You may have also noticed the commented-out option for specifying an API endpoint.
[...]
# To install the controller on OpenShift 3 you will need to configure the API endpoint:
# mig_ui_cluster_api_endpoint: https://replace-with-openshift-cluster-hostname:8443
[...]
Default limits allow 10 namespaces, 100 PVs, and 100 Pods to be migrated by a single MigPlan. Resource limits can be adjusted by configuring the MigrationController resource responsible for deploying mig-controller.
[...]
migration_controller: true
# This configuration is loaded into mig-controller, and should be set on the
# cluster where `migration_controller: true`
mig_pv_limit: 100
mig_pod_limit: 100
mig_namespace_limit: 10
[...]
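If you need to change these limits after the components are deployed, you can update the MigrationController resource and let the operator reconcile the change. A minimal sketch, assuming the CR is named migration-controller and lives in the openshift-migration namespace (the values shown are examples only):
# Raise the per-plan limits on an existing MigrationController
oc -n openshift-migration patch migrationcontroller migration-controller --type=merge \
  -p '{"spec":{"mig_pv_limit":200,"mig_pod_limit":200,"mig_namespace_limit":20}}'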
The mig_ui_cluster_api_endpoint value should be set to the API endpoint of the OpenShift cluster where the UI will run.
After making the necessary edits to the provided starter MigrationController yaml, log in to each cluster and create the MigrationController resource. This kicks off creation of the CAM components. (In the example below, the edited files have been saved as migrationcontroller-src.yml and migrationcontroller-dest.yml.)
# Source Cluster
oc login https://my-source-cluster:8443
oc create -f migrationcontroller-src.yml
# Destination Cluster
oc login https://my-destination-cluster:8443
oc create -f migrationcontroller-dest.yml
Once you have created the MigrationController CR, mig-operator will start working on your request. After a few minutes have passed, the complete results should be visible by looking at the running Pods in the openshift-migration namespace.
# Resulting Pods if you installed all components (destination cluster)
$ oc get pods -n openshift-migration
NAME READY STATUS RESTARTS AGE
migration-controller-89b4f85bb-gtrb8 1/1 Running 0 8m
migration-operator-69546687dd-8trfq 2/2 Running 0 8m
migration-ui-65cc6c6f9f-vmqf8 1/1 Running 0 8m
restic-mh5lb 1/1 Running 0 7m
restic-p849t 1/1 Running 0 7m
velero-84b8d9878b-gklfb 1/1 Running 0 8m
# Resulting Pods if you installed only Velero (source cluster)
$ oc get pods -n openshift-migration
NAME READY STATUS RESTARTS AGE
migration-operator-69546687dd-8trfq 2/2 Running 0 8m
restic-mh5lb 1/1 Running 0 7m
restic-p849t 1/1 Running 0 7m
velero-84b8d9878b-gklfb 1/1 Running 0 8m
If you need to limit the nodes the restic daemonset runs on:
- Add the daemonset_node_selector to the MigrationController CR, for example:

  spec:
    daemonset_node_selector:
      node-role.kubernetes.io/worker: ""
      topology.kubernetes.io/zone: us-east-1b

- Values for selectors can be updated normally, e.g. us-east-1b can be changed to us-east-1a in the example above.
- If a selector needs to be removed completely, do one of the following:
  - Update the MigrationController CR with the new desired values, then manually remove the unwanted selectors from the daemonset.
  - Set daemonset_node_selector: null to clear all selectors. Once the daemonset updates, add a new set of selectors if desired.
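After updating the CR, you can confirm that the restic daemonset picked up the selector and that its pods landed on the expected nodes. A quick check, assuming the openshift-migration namespace shown earlier and the name=restic pod label typically applied by Velero's restic daemonset:
$ oc -n openshift-migration get daemonset restic -o jsonpath='{.spec.template.spec.nodeSelector}'
$ oc -n openshift-migration get pods -l name=restic -o wide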
Please note that CORS configuration is no longer required in CAM 1.1 and later versions. This section only applies to CAM 1.0.
After following the steps above, you should have installed the migration UI on one of your clusters.
CAM's UI is served out of the cluster behind its own route, so there are 3 distinct origins at play:
- The UI - (Ex: https://mig-ui-mig.apps.examplecluster.com)
- The OAuth Server - (Ex: https://openshift-authentication-openshift-authentication.apps.examplecluster.com)
- The API Server - (Ex: https://api.examplecluster.com:6443)
When the UI is served to the browser through its route, the browser recognizes that origin and blocks AJAX requests to alternative origins unless those origins explicitly permit them via Cross-Origin Resource Sharing (CORS), a deliberate browser security measure.
Without configuring the non-UI origin servers, these requests will be blocked. To enable them, the UI's origin should be whitelisted in a corsAllowedOrigins list on each alternative origin. The servers recognize this list and inspect the origin of incoming requests; if it matches one of the CORS-whitelisted origins (the UI), a set of headers is returned in the response informing the browser that the CORS request is accepted and valid.
Additionally, for the same reasons described above, any requests that the UI may make to source 3.x clusters will also have to be whitelisted by configuring the same field in the 3.x cluster master-config.yaml. This causes the 3.x API servers to accept the CORS requests incoming from the UI that was served out of its OCP4 cluster's route.
A new corsAllowedOrigins entry will need to be added on each OpenShift cluster involved in a migration for the Migration Web UI to function properly.
In order to enable the UI to talk to an OpenShift 3 cluster (whether local or remote) it is necessary to edit the master-config.yaml and restart the OpenShift master nodes.
To determine the CORS URL that needs to be added, retrieve the route URL after installing the controller:
$ oc get -n openshift-migration route/migration -o go-template='{{ .spec.host }}{{ println }}'
SSH into the OpenShift master nodes and add the hostname noted above to /etc/origin/master/master-config.yaml under corsAllowedOrigins:
corsAllowedOrigins:
- //$output-from-previous-command
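Before restarting anything, you can confirm the entry is present (a simple check against the standard 3.x master config path referenced above):
$ grep -A5 corsAllowedOrigins /etc/origin/master/master-config.yaml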
After making these changes on 3.x you'll need to restart OpenShift components to pick up the changed config values. The process for restarting 3.x control plane components differs based on the OpenShift version.
# In OpenShift 3.7-3.9, the control plane runs within systemd services
$ systemctl restart atomic-openshift-master-api
$ systemctl restart atomic-openshift-master-controllers
# In OpenShift 3.10-3.11, the control plane runs in 'Static Pods'
$ /usr/local/bin/master-restart api
$ /usr/local/bin/master-restart controller
On OpenShift 4 clusters these resources are modified by the operator if the controller is installed there, and you can skip these steps. If you chose not to install the controller on your OpenShift 4 cluster, you will need to perform these steps manually.
If you haven't already, determine the CORS URL that needs to be added by retrieving the route URL:
$ oc get -n openshift-migration route/migration -o go-template='{{ .spec.host }}{{ println }}'
$ oc edit authentication.operator cluster
Ensure the following entries exist:
spec:
  unsupportedConfigOverrides:
    corsAllowedOrigins:
    - //localhost(:|$)
    - //127.0.0.1(:|$)
    - //$output-from-previous-command
$ oc edit kubeapiserver.operator cluster
Ensure the following entry exists:
spec:
  unsupportedConfigOverrides:
    corsAllowedOrigins:
    - //$output-from-previous-command
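To confirm the overrides were accepted, a quick check using the same operator resources edited above:
$ oc get authentication.operator cluster -o jsonpath='{.spec.unsupportedConfigOverrides}'
$ oc get kubeapiserver.operator cluster -o jsonpath='{.spec.unsupportedConfigOverrides}'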
To visit the UI, look at the route on the cluster where you specified mig-ui should be installed.
$ oc get routes migration -n mig -o jsonpath='{.spec.host}'
migration-mig.apps.samplecluster.com
Looking at the sample output above, we'd visit https://migration-mig.apps.samplecluster.com from our browser.
Before you can log in, you will need to accept the certificates with your browser on both the source and destination cluster. To do this:
- Visit the link displayed by the web UI for .well-known/oauth-authorization-server.
- Refresh the page.
- Get redirected to the login page.
- Log in with your credentials for the cluster.
CAM components use S3 object storage as temporary scratch space when performing migrations. This storage can be any object storage that presents an S3-like interface. Currently, we have tested AWS S3, NooBaa, and Minio.
To see examples of deploying your own local S3-like interface with NooBaa, or examples of how to leverage the AWS S3 service, see ObjectStorage.md