We recommend using an external Mongo service, such as Atlas. If you'd like to run Mongo in the same cluster as your other NxCloud services, then please continue following this guide.
> helm repo add mongodb https://mongodb.github.io/helm-charts
> helm install community-operator mongodb/community-operator
If you'd like to encrypt your Mongo volumes, apply the encrypted storage class first:
> kubectl apply -f encrypted-storage-class.yml
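If you don't have the encrypted-storage-class.yml file from your installation package handy, a minimal sketch for an AWS EBS CSI setup might look like the following. The provisioner, parameters, and name here are assumptions for illustration; prefer the file shipped with your installation.

```yaml
# Hypothetical example of an encrypted StorageClass (AWS EBS CSI driver).
# Names and parameters are illustrative, not the shipped file.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: encrypted-storage
provisioner: ebs.csi.aws.com
parameters:
  encrypted: "true"   # encrypt volumes at rest
  type: gp3
reclaimPolicy: Retain
allowVolumeExpansion: true
```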
# NO ENCRYPTION (if you skipped the above step)
> kubectl apply -f examples/mongodb.yml
# WITH ENCRYPTION (if you followed step 2 above)
> kubectl apply -f examples/encrypted-mongodb.yml
This will create a secret. You can get the value of the secret as follows:
> kubectl get secret cloud-mongodb-nrwl-api-admin-user -o go-template='{{range $k,$v := .data}}{{"### "}}{{$k}}{{"\n"}}{{$v|base64decode}}{{"\n\n"}}{{end}}'
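The base64decode template function simply reverses the base64 encoding Kubernetes applies to secret values, so you can also decode an individual field with the standard base64 tool. A quick local illustration of the round-trip, using the sample connection string from this guide:

```shell
# Kubernetes stores secret values base64-encoded; base64decode reverses that.
encoded=$(printf '%s' 'mongodb+srv://admin-user:[email protected]' | base64)
printf '%s' "$encoded" | base64 -d
```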
You might need to wait a bit for the Pods to be created before this secret will be available.
The result should look like this: mongodb+srv://admin-user:[email protected]/nrwl-api?replicaSet=cloud-mongodb&ssl=false
Extract the connection string and remember it. You'll need to use it in your secrets.yml file.
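As a sketch of what that might look like in a Kubernetes Secret: the key name below (NX_CLOUD_MONGO_SERVER_ENDPOINT) and the secret name are assumptions for illustration; follow the structure and naming your actual secrets.yml template uses.

```yaml
# Hypothetical sketch; match the structure of your actual secrets.yml file.
apiVersion: v1
kind: Secret
metadata:
  name: nx-cloud-secrets   # example name
stringData:
  NX_CLOUD_MONGO_SERVER_ENDPOINT: 'mongodb+srv://admin-user:[email protected]/nrwl-api?replicaSet=cloud-mongodb&ssl=false'
```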
Next steps: continue from step 2 onwards.
If you can't run Mongo in Kubernetes, you can also set it up manually using our approved Docker image.
- Deploy this Docker image on ECS, or on a custom VM: nxprivatecloud/nx-cloud-mongo:latest
- Here is an example of how to run it, and which options you should pass to it. Please adapt the command to the deployment environment of your choice (we will provide an example ECS set-up below):
  docker run -t -i -p 27017:27017 \
    -e MONGO_INITDB_ROOT_USERNAME=some-admin-username \
    -e MONGO_INITDB_ROOT_PASSWORD=someAdminPassword123 \
    -v $PWD/mongo-data:/data/db \
    --rm --name nx-cloud-mongo nxprivatecloud/nx-cloud-mongo:latest \
    --wiredTigerCacheSizeGB 2 --bind_ip_all \
    --replSet rs0 \
    --keyFile /mongo-security/keyfile.txt
- There are a few important options we need to pass:
  - MONGO_INITDB_ROOT_USERNAME and MONGO_INITDB_ROOT_PASSWORD - these enable auth on our DB
  - -v $PWD/mongo-data:/data/db - this creates a persistent volume mapping to the host, so the DB doesn't disappear when the Docker container gets recreated
  - --wiredTigerCacheSizeGB 2 - this sets the memory limit of the DB. Try increasing it if you are having a lot of activity
  - --replSet rs0 --keyFile /mongo-security/keyfile.txt - this sets up a simple replica set. You can ignore this; it's an implementation detail needed for NxCloud connections
This is a sample connection string you can use when connecting to your instance above (note the extra URL params at the end): mongodb://<your-server-ip-or-address>:27017/?authSource=admin&directConnection=true
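One gotcha when building the connection string yourself: if the password contains characters such as @, /, or :, they must be percent-encoded before being embedded in the URL. A small sketch (the credentials here are made up):

```shell
# Percent-encode a password before embedding it in a Mongo connection string.
password='p@ss/word'
encoded=$(python3 -c 'import urllib.parse, sys; print(urllib.parse.quote(sys.argv[1], safe=""))' "$password")
echo "mongodb://some-admin-user:${encoded}@<your-server-ip-or-address>:27017/?authSource=admin&directConnection=true"
```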
⤵️ Here is an example ECS implementation
{
"family": "nx-cloud-mongo-standalone",
"containerDefinitions": [
{
"name": "NxCloudMongo",
"image": "nxprivatecloud/nx-cloud-mongo:latest",
"cpu": 1024,
"memory": 3072,
"portMappings": [
{
"name": "nxcloudmongo-27017-tcp",
"containerPort": 27017,
"hostPort": 27017,
"protocol": "tcp"
}
],
"essential": true,
"command": [
"--wiredTigerCacheSizeGB",
"3",
"--bind_ip_all",
"--replSet",
"rs0",
"--keyFile",
"/mongo-security/keyfile.txt"
],
"environment": [
{
"name": "MONGO_INITDB_ROOT_USERNAME",
"value": "some-admin-user"
},
{
"name": "MONGO_INITDB_ROOT_PASSWORD",
"value": "adminPass123"
}
],
"mountPoints": [
{
"sourceVolume": "data",
"containerPath": "/data/db",
"readOnly": false
}
],
"volumesFrom": [],
"logConfiguration": {
"logDriver": "awslogs",
"options": {
"awslogs-group": "/ecs/DeployCloud",
"awslogs-region": "us-east-1",
"awslogs-stream-prefix": "ecs"
}
}
}
],
"executionRoleArn": "arn:aws:iam::623002322076:role/ecsTaskExecutionRole",
"volumes": [
{
"name": "data",
"dockerVolumeConfiguration": {
"scope": "shared",
"autoprovision": true,
"driver": "local"
}
}
],
"requiresCompatibilities": [
"EC2"
],
"cpu": "1024",
"memory": "3072"
}
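Before registering the task definition, it can save a round-trip to validate the JSON locally; registration itself is a single AWS CLI call (assuming the CLI is installed and configured). A sketch, with a stub file standing in for the full definition above:

```shell
# Write your task definition to a file (stub shown here; use the full JSON above).
cat > /tmp/nx-cloud-mongo-task.json <<'EOF'
{
  "family": "nx-cloud-mongo-standalone",
  "requiresCompatibilities": ["EC2"],
  "cpu": "1024",
  "memory": "3072"
}
EOF

# Catch syntax errors locally before calling AWS:
python3 -m json.tool /tmp/nx-cloud-mongo-task.json > /dev/null && echo 'valid JSON'

# Then register it with ECS:
# aws ecs register-task-definition --cli-input-json file:///tmp/nx-cloud-mongo-task.json
```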
As of version latest-11-05-2023T11-28-04 and higher (including all new calver-based versions similar to 2305.11.3), we strongly recommend you upgrade from Mongo 4.2 to Mongo 6 for your NxCloud installation.
Mongo upgrades need to be done incrementally between major versions. In both scenarios below, we'll upgrade from Mongo 4.2 to 4.4, then to 5.0, then to 6.0.
If at any point you run into any issues, or you'd like us to assist you with the upgrade, please send an email to [email protected]
If you're using Mongo Atlas, you can do a rolling upgrade by editing your cluster's configuration: first select Mongo 4.4, then Mongo 5.0, and finally Mongo 6.0 (notice that Atlas doesn't let you upgrade from 4.4 straight to 6.0).
If you're using the Mongo Operator, you'll need to edit your Mongo ReplicaSet and apply the version changes incrementally, as Mongo doesn't allow a direct 4.2 to 6.0 upgrade.
During the upgrade, we'll need to continually update our feature compatibility version as well. Mongo requires that this featureCompatibilityVersion always trails the version we want to upgrade to. For example, if our pods are on Mongo 4.4 and we want to upgrade them to Mongo 5, our featureCompatibilityVersion needs to be set to 4.4 first.
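Put together, the full sequence of edits you'll apply during the upgrade (each detailed in the steps in this guide) is:

```text
version: '4.4.20'  + featureCompatibilityVersion: '4.2'   # upgrade pods to 4.4
featureCompatibilityVersion: '4.4'                        # prepare for 5.0
version: '5.0.17'                                         # upgrade pods to 5.0
featureCompatibilityVersion: '5.0'                        # prepare for 6.0
version: '6.0.5'                                          # upgrade pods to 6.0
featureCompatibilityVersion: '6.0'                        # finish
```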
It's important you follow the exact steps in order:
- Backup your database first using a tool like mongodump. You can ssh into any one of your pods, as they should all have the same data.
- Edit your mongo.yml file.
- Change to Mongo version 4.4: version: '4.4.20'
- Then, we'll also set this option right below it: featureCompatibilityVersion: '4.2'
- Your mongo.yml file should now contain these 2 lines:
  version: '4.4.20'
  featureCompatibilityVersion: '4.2'
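For orientation, both fields live under spec in the MongoDBCommunity resource managed by the operator. A trimmed-down sketch (the metadata name and member count are examples; keep your existing file's values):

```yaml
apiVersion: mongodbcommunity.mongodb.com/v1
kind: MongoDBCommunity
metadata:
  name: cloud-mongodb        # example name
spec:
  members: 3
  type: ReplicaSet
  version: '4.4.20'
  featureCompatibilityVersion: '4.2'
```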
- Apply your changes: kubectl apply -f mongo.yml
- Your replica pods will slowly start upgrading now - use kubectl get pods to monitor this. The process will take around 10-20 minutes to complete.
- It's very important to let all the pods completely upgrade before moving on to the next step.
- Once all pods have upgraded and are in a healthy state, the AGE of the pods should be a very low number. This means they have recently been re-created with the new Mongo version.
- Now set featureCompatibilityVersion: '4.4' and apply it: kubectl apply -f mongo.yml
- This will set the feature compatibility version of the Mongo pods to the version we need for the next step of the upgrade.
- Do not move on to the next step until you've applied the change above to your cluster, otherwise your upgrade will fail. Now set version: '5.0.17'
- Apply the change.
- Leave it to upgrade for the next 20-30 minutes.
- Your kubectl get pods output should show healthy pods, with 2/2 availability, and the AGE of the pods should now also be reset.
- Set featureCompatibilityVersion: '5.0'
- Apply the changes: kubectl apply -f mongo.yml. You don't need to wait for pod upgrades, as this change will be applied very quickly.
- Again, do not move on until you've applied the change in the previous step (kubectl apply -f ...).
- Change to version: '6.0.5' and apply it.
- Wait for 20 minutes.
- Then check your pods are in a healthy state: kubectl get pods
- Finally, set featureCompatibilityVersion: '6.0' and apply the change. This change will be very quick.
- All your pods should now be on Mongo 6. Congratulations!
If you are unsure if your pods have finished upgrading between Mongo versions (as it takes a while for the Mongo operator to slowly do a rolling upgrade of your replicas) and you'd like to check what Mongo version your pods are on, you can exec into a pod and:
mongo --version # you can do this for each pod in your replica set
To check the featureCompatibilityVersion, exec into the pod and run:
mongo -u admin-user -p <password> --authenticationDatabase nrwl-api
use nrwl-api
db.adminCommand({getParameter: 1, featureCompatibilityVersion: 1})
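The adminCommand call returns the currently active compatibility version; for example, after setting featureCompatibilityVersion to '4.4' you'd expect output along these lines:

```text
{ "featureCompatibilityVersion" : { "version" : "4.4" }, "ok" : 1 }
```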
Although not strictly necessary, you can use the above commands to check that each of your 3 replicas is on the correct Mongo version and featureCompatibilityVersion between the upgrade steps above.