Merge pull request openshift#10 from alexander-demichev/readme
Update README
openshift-merge-robot authored Dec 15, 2021
2 parents 6bd8211 + a292b34 commit 77e73b9
Showing 1 changed file (README.md) with 19 additions and 197 deletions.
@@ -1,212 +1,34 @@
# OpenShift cluster-api-provider-aws
# Machine API Provider AWS

This repository hosts an implementation of a provider for AWS for the
OpenShift [machine-api](https://github.com/openshift/cluster-api).
This repository contains an implementation of the AWS provider for the [Machine API](https://github.com/openshift/machine-api-operator).

This provider runs as a machine-controller deployed by the
[machine-api-operator](https://github.com/openshift/machine-api-operator)
## What is the Machine API

### How to build the images in the RH infrastructure
The Dockerfiles use `as builder` in the `FROM` instruction, which is not currently supported
by the RH docker fork (see [kubernetes-sigs/kubebuilder#268](https://github.com/kubernetes-sigs/kubebuilder/issues/268)).
One needs to run `imagebuilder` instead of `docker build`.
A declarative API for creating and managing machines in an OpenShift cluster. The project is based on the v1alpha2 version of [Cluster API](https://github.com/kubernetes-sigs/cluster-api).
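
The declarative model can be illustrated with a deliberately simplified sketch (the `Machine` type and fields here are hypothetical stand-ins, not the project's actual API): a controller compares desired state with observed state and reconciles the difference.

```go
package main

import "fmt"

// Machine is a deliberately simplified, hypothetical stand-in for the real
// Machine API type; the actual CRD carries a provider-specific spec and status.
type Machine struct {
	Name     string
	Desired  string // desired state, e.g. "Running"
	Observed string // state observed at the cloud provider
}

// reconcile drives observed state toward desired state, which is the core
// idea behind a declarative machine API.
func reconcile(m *Machine) {
	if m.Observed != m.Desired {
		fmt.Printf("reconciling %s: %q -> %q\n", m.Name, m.Observed, m.Desired)
		m.Observed = m.Desired // in reality this would call the cloud provider API
	}
}

func main() {
	m := &Machine{Name: "worker-0", Desired: "Running"}
	reconcile(m)
	fmt.Println(m.Observed)
}
```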

Note: this info is RH-only; it needs to be backported every time `README.md` is synced with the upstream one.
## Documentation

## Deploy machine API plane with minikube
- [Overview](https://github.com/openshift/machine-api-operator/blob/master/docs/user/machine-api-operator-overview.md)
- [Hacking Guide](https://github.com/openshift/machine-api-operator/blob/master/docs/dev/hacking-guide.md)

1. **Install kvm**
## Architecture

Depending on your virtualization manager, you can choose a different [driver](https://github.com/kubernetes/minikube/blob/master/docs/drivers.md).
To install kvm, you can run (as described in the [drivers](https://github.com/kubernetes/minikube/blob/master/docs/drivers.md#kvm2-driver) documentation):
The provider imports the [Machine controller](https://github.com/openshift/machine-api-operator/tree/master/pkg/controller/machine) from `machine-api-operator` and provides an implementation of the Actuator interface. The Actuator implementation is responsible for CRUD operations against the AWS API.

```sh
$ sudo yum install libvirt-daemon-kvm qemu-kvm libvirt-daemon-config-network
$ sudo systemctl start libvirtd
$ sudo usermod -a -G libvirt $(whoami)
$ newgrp libvirt
```
## Building and running controller locally

To install the kvm2 driver:
```sh
NO_DOCKER=1 make build && ./bin/machine-controller-manager
```

```sh
curl -Lo docker-machine-driver-kvm2 https://storage.googleapis.com/minikube/releases/latest/docker-machine-driver-kvm2 \
&& chmod +x docker-machine-driver-kvm2 \
&& sudo cp docker-machine-driver-kvm2 /usr/local/bin/ \
&& rm docker-machine-driver-kvm2
```
By default, we run make tasks in a container. To run the controller locally, set `NO_DOCKER=1`.

2. **Deploying the cluster**
## Running tests

To install minikube `v1.1.0`, you can run:
### Unit

```sh
$ curl -Lo minikube https://storage.googleapis.com/minikube/releases/v1.1.0/minikube-linux-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/
```
In order to run unit tests, use `make test`.

To deploy the cluster:
### E2E Tests

```sh
$ minikube start --vm-driver kvm2 --kubernetes-version v1.13.1 --v 5
$ eval $(minikube docker-env)
```

3. **Deploying machine API controllers**

For development purposes, the AWS machine controller itself will run outside of the machine API stack.
Otherwise, Docker images need to be built, pushed to a Docker registry, and deployed within the stack.

To deploy the stack:
```sh
kustomize build config | kubectl apply -f -
```

4. **Deploy secret with AWS credentials**

The AWS actuator assumes the existence of a secret (referenced in the machine object) with base64-encoded credentials:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: aws-credentials-secret
  namespace: default
type: Opaque
data:
  aws_access_key_id: FILLIN
  aws_secret_access_key: FILLIN
```

You can use the `examples/render-aws-secrets.sh` script to generate the secret:
```sh
./examples/render-aws-secrets.sh examples/addons.yaml | kubectl apply -f -
```

5. **Provision AWS resource**

The actuator expects the existence of certain resources in AWS, such as:
- vpc
- subnets
- security groups
- etc.

To create them, you can run:

```sh
$ ENVIRONMENT_ID=aws-actuator-k8s ./hack/aws-provision.sh install
```

To delete the resources, you can run:

```sh
$ ENVIRONMENT_ID=aws-actuator-k8s ./hack/aws-provision.sh destroy
```

All machine manifests expect `ENVIRONMENT_ID` to be set to `aws-actuator-k8s`.

## Test locally built aws actuator

1. **Tear down machine-controller**

The deployed machine API plane (the `machine-api-controllers` deployment) runs, among other
controllers, the `machine-controller`. In order to run a locally built one,
simply edit the `machine-api-controllers` deployment and remove the `machine-controller` container from it.

1. **Build and run aws actuator outside of the cluster**

```sh
$ go build -o bin/machine-controller-manager sigs.k8s.io/cluster-api-provider-aws/cmd/manager
```

```sh
$ ./bin/machine-controller-manager --kubeconfig ~/.kube/config --logtostderr -v 5 -alsologtostderr
```
If running in container with `podman`, or locally without `docker` installed, and encountering issues, see [hacking-guide](https://github.com/openshift/machine-api-operator/blob/master/docs/dev/hacking-guide.md#troubleshooting-make-targets).


1. **Deploy k8s apiserver through machine manifest**:

To deploy user data secret with kubernetes apiserver initialization (under [config/master-user-data-secret.yaml](config/master-user-data-secret.yaml)):

```sh
$ kubectl apply -f config/master-user-data-secret.yaml
```

To deploy kubernetes master machine (under [config/master-machine.yaml](config/master-machine.yaml)):

```sh
$ kubectl apply -f config/master-machine.yaml
```

1. **Pull kubeconfig from created master machine**

The master's public IP can be found in the AWS console. Once you have it, you
can collect the kubeconfig by running:

```sh
$ ssh -i SSHPMKEY ec2-user@PUBLICIP 'sudo cat /root/.kube/config' > kubeconfig
$ kubectl --kubeconfig=kubeconfig config set-cluster kubernetes --server=https://PUBLICIP:8443
```
Once done, you can access the cluster via `kubectl`. E.g.
```sh
$ kubectl --kubeconfig=kubeconfig get nodes
```

## Deploy k8s cluster in AWS with machine API plane deployed

1. **Generate bootstrap user data**

To generate the bootstrap script for the machine API plane, run:

```sh
$ ./config/generate-bootstrap.sh
```

The script requires `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` environment variables to be set.
It generates the `config/bootstrap.yaml` secret for the master machine defined in
`config/master-machine.yaml`.

The generated bootstrap secret contains user data responsible for:
- deployment of kube-apiserver
- deployment of machine API plane with aws machine controllers
- generating worker machine user data script secret deploying a node
- deployment of worker machineset

1. **Deploy machine API plane through machine manifest**:

First, deploy generated bootstrap secret:

```sh
$ kubectl apply -f config/bootstrap.yaml
```

Then, deploy master machine (under [config/master-machine.yaml](config/master-machine.yaml)):

```sh
$ kubectl apply -f config/master-machine.yaml
```

1. **Pull kubeconfig from created master machine**

The master's public IP can be found in the AWS console. Once you have it, you
can collect the kubeconfig by running:

```sh
$ ssh -i SSHPMKEY ec2-user@PUBLICIP 'sudo cat /root/.kube/config' > kubeconfig
$ kubectl --kubeconfig=kubeconfig config set-cluster kubernetes --server=https://PUBLICIP:8443
```

Once done, you can access the cluster via `kubectl`. E.g.

```sh
$ kubectl --kubeconfig=kubeconfig get nodes
```

# Upstream Implementation
Other branches of this repository may choose to track the upstream
Kubernetes [Cluster-API AWS provider](https://github.com/kubernetes-sigs/cluster-api-provider-aws/).

In the future, we may align the master branch with the upstream project as it
stabilizes within the community.
If you wish to run E2E tests, you can use `make e2e`. Make sure you have a running OpenShift cluster on AWS.
