Commit

Update and Fix Autoscaler Docs (#655)
Signed-off-by: Neaj Morshad <[email protected]>
Neaj-Morshad-101 authored Aug 21, 2024
1 parent c0f94ab commit 6a2fa55
Showing 12 changed files with 31 additions and 28 deletions.
@@ -117,7 +117,7 @@ $ kubectl get pod -n demo es-combined-0 -o json | jq '.spec.containers[].resourc
Let's check the Elasticsearch resources,

```json
-$ kubectl get elasticsearch -n demo es-combined -o json | jq '.spec.podTemplate.spec.resources'
+$ kubectl get elasticsearch -n demo es-combined -o json | jq '.spec.podTemplate.spec.containers[] | select(.name == "elasticsearch") | .resources'
{
"limits": {
"cpu": "500m",
@@ -492,7 +492,7 @@ $ kubectl get pod -n demo es-combined-0 -o json | jq '.spec.containers[].resourc
}
}
-$ kubectl get elasticsearch -n demo es-combined -o json | jq '.spec.podTemplate.spec.resources'
+$ kubectl get elasticsearch -n demo es-combined -o json | jq '.spec.podTemplate.spec.containers[] | select(.name == "elasticsearch") | .resources'
{
"limits": {
"cpu": "1",
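The jq change repeated across these files is the substance of the commit: in the current CRD layout, `resources` lives on each entry of `spec.podTemplate.spec.containers`, not on the pod template spec itself, so the old path yields `null`. A minimal sketch of the difference against an inline fragment (the fragment and container list are illustrative, not taken from a live cluster):

```shell
# Illustrative podTemplate fragment, standing in for `kubectl get elasticsearch ... -o json`.
spec='{"spec":{"podTemplate":{"spec":{"containers":[
  {"name":"exporter","resources":{}},
  {"name":"elasticsearch","resources":{"limits":{"cpu":"500m"}}}]}}}}'

# Old filter: there is no pod-level .resources field, so jq prints null.
echo "$spec" | jq '.spec.podTemplate.spec.resources'

# New filter: select the named container, then read its resources.
echo "$spec" | jq '.spec.podTemplate.spec.containers[] | select(.name == "elasticsearch") | .resources'
```

The same `select(.name == "...")` pattern applies to every database kind touched below; only the container name changes.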
4 changes: 2 additions & 2 deletions docs/guides/mariadb/autoscaler/compute/cluster/index.md
@@ -109,7 +109,7 @@ $ kubectl get pod -n demo sample-mariadb-0 -o json | jq '.spec.containers[].reso

Let's check the MariaDB resources,
```bash
-$ kubectl get mariadb -n demo sample-mariadb -o json | jq '.spec.podTemplate.spec.resources'
+$ kubectl get mariadb -n demo sample-mariadb -o json | jq '.spec.podTemplate.spec.containers[] | select(.name == "mariadb") | .resources'
{
"limits": {
"cpu": "200m",
@@ -509,7 +509,7 @@ $ kubectl get pod -n demo sample-mariadb-0 -o json | jq '.spec.containers[].reso
}
}
-$ kubectl get mariadb -n demo sample-mariadb -o json | jq '.spec.podTemplate.spec.resources'
+$ kubectl get mariadb -n demo sample-mariadb -o json | jq '.spec.podTemplate.spec.containers[] | select(.name == "mariadb") | .resources'
{
"limits": {
"cpu": "250m",
4 changes: 2 additions & 2 deletions docs/guides/mongodb/autoscaler/compute/replicaset.md
@@ -111,7 +111,7 @@ $ kubectl get pod -n demo mg-rs-0 -o json | jq '.spec.containers[].resources'

Let's check the MongoDB resources,
```bash
-$ kubectl get mongodb -n demo mg-rs -o json | jq '.spec.podTemplate.spec.resources'
+$ kubectl get mongodb -n demo mg-rs -o json | jq '.spec.podTemplate.spec.containers[] | select(.name == "mongodb") | .resources'
{
"limits": {
"cpu": "200m",
@@ -509,7 +509,7 @@ $ kubectl get pod -n demo mg-rs-0 -o json | jq '.spec.containers[].resources'
}
}
-$ kubectl get mongodb -n demo mg-rs -o json | jq '.spec.podTemplate.spec.resources'
+$ kubectl get mongodb -n demo mg-rs -o json | jq '.spec.podTemplate.spec.containers[] | select(.name == "mongodb") | .resources'
{
"limits": {
"cpu": "400m",
4 changes: 2 additions & 2 deletions docs/guides/mongodb/autoscaler/compute/standalone.md
@@ -107,7 +107,7 @@ $ kubectl get pod -n demo mg-standalone-0 -o json | jq '.spec.containers[].resou

Let's check the MongoDB resources,
```bash
-$ kubectl get mongodb -n demo mg-standalone -o json | jq '.spec.podTemplate.spec.resources'
+$ kubectl get mongodb -n demo mg-standalone -o json | jq '.spec.podTemplate.spec.containers[] | select(.name == "mongodb") | .resources'
{
"limits": {
"cpu": "200m",
@@ -487,7 +487,7 @@ $ kubectl get pod -n demo mg-standalone-0 -o json | jq '.spec.containers[].resou
}
}
-$ kubectl get mongodb -n demo mg-standalone -o json | jq '.spec.podTemplate.spec.resources'
+$ kubectl get mongodb -n demo mg-standalone -o json | jq '.spec.podTemplate.spec.containers[] | select(.name == "mongodb") | .resources'
{
"limits": {
"cpu": "400m",
4 changes: 2 additions & 2 deletions docs/guides/mysql/autoscaler/compute/cluster/index.md
@@ -111,7 +111,7 @@ $ kubectl get pod -n demo sample-mysql-0 -o json | jq '.spec.containers[].resour

Let's check the MySQL resources,
```bash
-$ kubectl get mysql -n demo sample-mysql -o json | jq '.spec.podTemplate.spec.resources'
+$ kubectl get mysql -n demo sample-mysql -o json | jq '.spec.podTemplate.spec.containers[] | select(.name == "mysql") | .resources'
{
"limits": {
"cpu": "200m",
@@ -421,7 +421,7 @@ $ kubectl get pod -n demo sample-mysql-0 -o json | jq '.spec.containers[].resour
}
}
-$ kubectl get mysql -n demo sample-mysql -o json | jq '.spec.podTemplate.spec.resources'
+$ kubectl get mysql -n demo sample-mysql -o json | jq '.spec.podTemplate.spec.containers[] | select(.name == "mysql") | .resources'
{
"limits": {
"cpu": "250m",
@@ -109,7 +109,7 @@ $ kubectl get pod -n demo sample-pxc-0 -o json | jq '.spec.containers[].resource

Let's check the PerconaXtraDB resources,
```bash
-$ kubectl get perconaxtradb -n demo sample-pxc -o json | jq '.spec.podTemplate.spec.resources'
+$ kubectl get perconaxtradb -n demo sample-pxc -o json | jq '.spec.podTemplate.spec.containers[] | select(.name == "perconaxtradb") | .resources'
{
"limits": {
"cpu": "200m",
@@ -462,7 +462,7 @@ $ kubectl get pod -n demo sample-pxc-0 -o json | jq '.spec.containers[].resource
}
}
-$ kubectl get perconaxtradb -n demo sample-pxc -o json | jq '.spec.podTemplate.spec.resources'
+$ kubectl get perconaxtradb -n demo sample-pxc -o json | jq '.spec.podTemplate.spec.containers[] | select(.name == "perconaxtradb") | .resources'
{
"limits": {
"cpu": "250m",
4 changes: 2 additions & 2 deletions docs/guides/proxysql/autoscaler/compute/cluster/index.md
@@ -142,7 +142,7 @@ $ kubectl get pod -n demo proxy-server-0 -o json | jq '.spec.containers[].resour

Let's check the ProxySQL resources,
```bash
-$ kubectl get proxysql -n demo proxy-server -o json | jq '.spec.podTemplate.spec.resources'
+$ kubectl get proxysql -n demo proxy-server -o json | jq '.spec.podTemplate.spec.containers[] | select(.name == "proxysql") | .resources'
{
"limits": {
"cpu": "200m",
@@ -542,7 +542,7 @@ $ kubectl get pod -n demo proxy-server-0 -o json | jq '.spec.containers[].resour
}
}
-$ kubectl get proxysql -n demo proxy-server -o json | jq '.spec.podTemplate.spec.resources'
+$ kubectl get proxysql -n demo proxy-server -o json | jq '.spec.podTemplate.spec.containers[] | select(.name == "proxysql") | .resources'
{
"limits": {
"cpu": "250m",
8 changes: 4 additions & 4 deletions docs/guides/redis/autoscaler/compute/redis.md
@@ -86,7 +86,7 @@ redis.kubedb.com/rd-standalone created
Now, wait until `rd-standalone` has status `Ready`. i.e,

```bash
-$ kubectl get mg -n demo
+$ kubectl get rd -n demo
NAME VERSION STATUS AGE
rd-standalone 6.2.14 Ready 2m53s
```
@@ -109,7 +109,7 @@ $ kubectl get pod -n demo rd-standalone-0 -o json | jq '.spec.containers[].resou

Let's check the Redis resources,
```bash
-$ kubectl get redis -n demo rd-standalone -o json | jq '.spec.podTemplate.spec.resources'
+$ kubectl get redis -n demo rd-standalone -o json | jq '.spec.podTemplate.spec.containers[] | select(.name == "redis") | .resources'
{
"limits": {
"cpu": "200m",
@@ -168,7 +168,7 @@ Here,
- `spec.databaseRef.name` specifies that we are performing compute resource autoscaling on `rd-standalone` database.
- `spec.compute.standalone.trigger` specifies that compute resource autoscaling is enabled for this database.
- `spec.compute.standalone.podLifeTimeThreshold` specifies the minimum lifetime for at least one of the pod to initiate a vertical scaling.
-- `spec.compute.replicaset.resourceDiffPercentage` specifies the minimum resource difference in percentage. The default is 10%.
+- `spec.compute.standalone.resourceDiffPercentage` specifies the minimum resource difference in percentage. The default is 10%.
If the difference between current & recommended resource is less than ResourceDiffPercentage, Autoscaler Operator will ignore the updating.
- `spec.compute.standalone.minAllowed` specifies the minimum allowed resources for the database.
- `spec.compute.standalone.maxAllowed` specifies the maximum allowed resources for the database.
Expand Down Expand Up @@ -329,7 +329,7 @@ $ kubectl get pod -n demo rd-standalone-0 -o json | jq '.spec.containers[].resou
}
}
-$ kubectl get redis -n demo rd-standalone -o json | jq '.spec.podTemplate.spec.resources'
+$ kubectl get redis -n demo rd-standalone -o json | jq '.spec.podTemplate.spec.containers[] | select(.name == "redis") | .resources'
{
"limits": {
"cpu": "400m",
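The `resourceDiffPercentage` behavior documented in the redis.md hunks above reduces to simple arithmetic: the operator compares the relative gap between current and recommended resources with the threshold and skips the update when the gap is smaller. A sketch of that check under assumed semantics (millicore values and integer rounding are illustrative, not the operator's exact code):

```shell
# Illustrative resourceDiffPercentage check (assumed semantics, not the operator's exact code).
current=400        # current CPU request, millicores
recommended=420    # recommended CPU, millicores
threshold=10       # resourceDiffPercentage from the autoscaler spec

diff=$(( recommended > current ? recommended - current : current - recommended ))
pct=$(( diff * 100 / current ))    # 20 * 100 / 400 = 5

if [ "$pct" -lt "$threshold" ]; then
  echo "skip: ${pct}% < ${threshold}%"    # gap below threshold, no ops request generated
else
  echo "scale: ${pct}% >= ${threshold}%"
fi
```

With these numbers the script prints `skip: 5% < 10%`, matching the doc's statement that updates below the threshold are ignored.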
4 changes: 2 additions & 2 deletions docs/guides/redis/autoscaler/compute/sentinel.md
@@ -108,7 +108,7 @@ $ kubectl get pod -n demo sen-demo-0 -o json | jq '.spec.containers[].resources'

Let's check the RedisSentinel resources,
```bash
-$ kubectl get redissentinel -n demo sen-demo -o json | jq '.spec.podTemplate.spec.resources'
+$ kubectl get redissentinel -n demo sen-demo -o json | jq '.spec.podTemplate.spec.containers[] | select(.name == "redissentinel") | .resources'
{
"limits": {
"cpu": "200m",
@@ -358,7 +358,7 @@ $ kubectl get pod -n demo sen-demo-0 -o json | jq '.spec.containers[].resources'
}
}
-$ kubectl get redis -n demo sen-demo -o json | jq '.spec.podTemplate.spec.resources'
+$ kubectl get redis -n demo sen-demo -o json | jq '.spec.podTemplate.spec.containers[] | select(.name == "redissentinel") | .resources'
{
"limits": {
"cpu": "400m",
6 changes: 3 additions & 3 deletions docs/guides/redis/autoscaler/storage/redis.md
@@ -100,7 +100,7 @@ rd-standalone   6.2.14    Ready    2m53s
Let's check volume size from petset, and from the persistent volume,

```bash
-$ kubectl get sts -n demo rd-standalone -o json | jq '.spec.volumeClaimTemplates[].spec.resources.requests.storage'
+$ kubectl get petset -n demo rd-standalone -o json | jq '.spec.volumeClaimTemplates[].spec.resources.requests.storage'
"1Gi"
$ kubectl get pv -n demo
@@ -146,7 +146,7 @@ Here,
- `spec.storage.standalone.trigger` specifies that storage autoscaling is enabled for this database.
- `spec.storage.standalone.usageThreshold` specifies storage usage threshold, if storage usage exceeds `60%` then storage autoscaling will be triggered.
- `spec.storage.standalone.scalingThreshold` specifies the scaling threshold. Storage will be scaled to `50%` of the current amount.
-- It has another field `spec.storage.replicaSet.expansionMode` to set the opsRequest volumeExpansionMode, which support two values: `Online` & `Offline`. Default value is `Online`.
+- It has another field `spec.storage.standalone.expansionMode` to set the opsRequest volumeExpansionMode, which support two values: `Online` & `Offline`. Default value is `Online`.

Let's create the `RedisAutoscaler` CR we have shown above,

@@ -255,7 +255,7 @@ We can see from the above output that the `RedisOpsRequest` has succeeded.
Now, we are going to verify from the `Petset`, and the `Persistent Volume` whether the volume of the standalone database has expanded to meet the desired state, Let's check,

```bash
-$ kubectl get sts -n demo rd-standalone -o json | jq '.spec.volumeClaimTemplates[].spec.resources.requests.storage'
+$ kubectl get petset -n demo rd-standalone -o json | jq '.spec.volumeClaimTemplates[].spec.resources.requests.storage'
"1594884096"
$ kubectl get pv -n demo
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
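A back-of-the-envelope check of the `scalingThreshold: 50` math in this section: the new size is roughly the current size plus 50%. This is an assumed simplification; the value the operator actually applied (`1594884096` bytes above) reflects its own rounding, so the sketch only approximates it:

```shell
# Rough scalingThreshold math (assumed simplification of the operator's behavior).
current=$(( 1024 * 1024 * 1024 ))              # 1Gi in bytes
scaling=50                                     # spec.storage.standalone.scalingThreshold
new=$(( current + current * scaling / 100 ))
echo "$new"                                    # 1610612736, i.e. 1.5Gi
```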
7 changes: 5 additions & 2 deletions docs/guides/redis/monitoring/using-prometheus-operator.md
@@ -172,15 +172,18 @@ Notice the `Labels` and `Port` fields. `ServiceMonitor` will use these informati
KubeDB will also create a `ServiceMonitor` crd in `monitoring` namespace that select the endpoints of `coreos-prom-redis-stats` service. Verify that the `ServiceMonitor` crd has been created.

```bash
-$ kubectl get servicemonitor -n monitoring
+$ kubectl get servicemonitor -n demo
NAME AGE
kubedb-demo-coreos-prom-redis 1m
```

Let's verify that the `ServiceMonitor` has the label that we had specified in `spec.monitor` section of Redis crd.

```bash
+$ kubectl get servicemonitor -n demo kubedb-demo-coreos-prom-redis -o yaml
+```
+
+```yaml
-$ kubectl get servicemonitor -n monitoring kubedb-demo-coreos-prom-redis -o yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
6 changes: 3 additions & 3 deletions docs/guides/redis/volume-expansion/volume-expansion.md
@@ -101,7 +101,7 @@ sample-redis   6.2.14    Ready    5m4s
Let's check volume size from petset, and from the persistent volume,

```bash
-$ kubectl get sts -n demo sample-redis-shard0 -o json | jq '.spec.volumeClaimTemplates[].spec.resources.requests.storage'
+$ kubectl get petset -n demo sample-redis-shard0 -o json | jq '.spec.volumeClaimTemplates[].spec.resources.requests.storage'
"1Gi"
$ kubectl get pv -n demo
@@ -178,10 +178,10 @@ We can see from the above output that the `RedisOpsRequest` has succeeded.
Now, we are going to verify from the `Petset`, and the `Persistent Volumes` whether the volume of the database has expanded to meet the desired state, Let's check,

```bash
-$ kubectl get sts -n demo sample-redis-shard0 -o json | jq '.spec.volumeClaimTemplates[].spec.resources.requests.storage'
+$ kubectl get petset -n demo sample-redis-shard0 -o json | jq '.spec.volumeClaimTemplates[].spec.resources.requests.storage'
"2Gi"
-$ kubectl get sts -n demo sample-redis-shard1 -o json | jq '.spec.volumeClaimTemplates[].spec.resources.requests.storage'
+$ kubectl get petset -n demo sample-redis-shard1 -o json | jq '.spec.volumeClaimTemplates[].spec.resources.requests.storage'
"2Gi"
$ kubectl get pv -n demo
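The storage queries in the last two files all read the same `volumeClaimTemplates` path; a minimal sketch against an inline fragment (illustrative, not from a live cluster):

```shell
# Illustrative PetSet fragment, standing in for `kubectl get petset ... -o json`.
petset='{"spec":{"volumeClaimTemplates":[{"spec":{"resources":{"requests":{"storage":"2Gi"}}}}]}}'
echo "$petset" | jq '.spec.volumeClaimTemplates[].spec.resources.requests.storage'   # "2Gi"
```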
