Update and Fix Autoscaler Docs
Signed-off-by: Neaj Morshad <[email protected]>
Neaj-Morshad-101 committed Aug 16, 2024
1 parent 2312b2d commit afd6f81
Showing 9 changed files with 20 additions and 20 deletions.
@@ -117,7 +117,7 @@ $ kubectl get pod -n demo es-combined-0 -o json | jq '.spec.containers[].resourc
Let's check the Elasticsearch resources,

```json
- $ kubectl get elasticsearch -n demo es-combined -o json | jq '.spec.podTemplate.spec.resources'
+ $ kubectl get elasticsearch -n demo es-combined -o json | jq '.spec.podTemplate.spec.containers[] | select(.name == "elasticsearch") | .resources'
{
"limits": {
"cpu": "500m",
@@ -492,7 +492,7 @@ $ kubectl get pod -n demo es-combined-0 -o json | jq '.spec.containers[].resourc
}
}
- $ kubectl get elasticsearch -n demo es-combined -o json | jq '.spec.podTemplate.spec.resources'
+ $ kubectl get elasticsearch -n demo es-combined -o json | jq '.spec.podTemplate.spec.containers[] | select(.name == "elasticsearch") | .resources'
{
"limits": {
"cpu": "1",
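The corrected filter matters because `spec.podTemplate.spec` holds a *list* of containers rather than a single top-level `resources` field, so jq must first select the database container by name. A minimal Python sketch of the same selection logic (the container names and resource values here are illustrative, not taken from a live cluster):

```python
import json

# Hypothetical podTemplate fragment, shaped like the output of
# `kubectl get elasticsearch ... -o json` (values invented for illustration).
doc = json.loads("""
{
  "spec": {
    "podTemplate": {
      "spec": {
        "containers": [
          {"name": "exporter", "resources": {}},
          {"name": "elasticsearch",
           "resources": {"limits": {"cpu": "500m", "memory": "1Gi"}}}
        ]
      }
    }
  }
}
""")

# Equivalent of the jq filter:
#   .spec.podTemplate.spec.containers[] | select(.name == "elasticsearch") | .resources
resources = next(
    c["resources"]
    for c in doc["spec"]["podTemplate"]["spec"]["containers"]
    if c["name"] == "elasticsearch"
)
print(json.dumps(resources, indent=2))
```

Without the `select`, jq would emit the resources of every container (exporters and sidecars included), which is why the old one-liner gave misleading or empty output.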
4 changes: 2 additions & 2 deletions docs/guides/mariadb/autoscaler/compute/cluster/index.md
@@ -109,7 +109,7 @@ $ kubectl get pod -n demo sample-mariadb-0 -o json | jq '.spec.containers[].reso

Let's check the MariaDB resources,
```bash
- $ kubectl get mariadb -n demo sample-mariadb -o json | jq '.spec.podTemplate.spec.resources'
+ $ kubectl get mariadb -n demo sample-mariadb -o json | jq '.spec.podTemplate.spec.containers[] | select(.name == "mariadb") | .resources'
{
"limits": {
"cpu": "200m",
@@ -509,7 +509,7 @@ $ kubectl get pod -n demo sample-mariadb-0 -o json | jq '.spec.containers[].reso
}
}
- $ kubectl get mariadb -n demo sample-mariadb -o json | jq '.spec.podTemplate.spec.resources'
+ $ kubectl get mariadb -n demo sample-mariadb -o json | jq '.spec.podTemplate.spec.containers[] | select(.name == "mariadb") | .resources'
{
"limits": {
"cpu": "250m",
4 changes: 2 additions & 2 deletions docs/guides/mongodb/autoscaler/compute/replicaset.md
@@ -111,7 +111,7 @@ $ kubectl get pod -n demo mg-rs-0 -o json | jq '.spec.containers[].resources'

Let's check the MongoDB resources,
```bash
- $ kubectl get mongodb -n demo mg-rs -o json | jq '.spec.podTemplate.spec.resources'
+ $ kubectl get mongodb -n demo mg-rs -o json | jq '.spec.podTemplate.spec.containers[] | select(.name == "mongodb") | .resources'
{
"limits": {
"cpu": "200m",
@@ -509,7 +509,7 @@ $ kubectl get pod -n demo mg-rs-0 -o json | jq '.spec.containers[].resources'
}
}
- $ kubectl get mongodb -n demo mg-rs -o json | jq '.spec.podTemplate.spec.resources'
+ $ kubectl get mongodb -n demo mg-rs -o json | jq '.spec.podTemplate.spec.containers[] | select(.name == "mongodb") | .resources'
{
"limits": {
"cpu": "400m",
4 changes: 2 additions & 2 deletions docs/guides/mongodb/autoscaler/compute/standalone.md
@@ -107,7 +107,7 @@ $ kubectl get pod -n demo mg-standalone-0 -o json | jq '.spec.containers[].resou

Let's check the MongoDB resources,
```bash
- $ kubectl get mongodb -n demo mg-standalone -o json | jq '.spec.podTemplate.spec.resources'
+ $ kubectl get mongodb -n demo mg-standalone -o json | jq '.spec.podTemplate.spec.containers[] | select(.name == "mongodb") | .resources'
{
"limits": {
"cpu": "200m",
@@ -487,7 +487,7 @@ $ kubectl get pod -n demo mg-standalone-0 -o json | jq '.spec.containers[].resou
}
}
- $ kubectl get mongodb -n demo mg-standalone -o json | jq '.spec.podTemplate.spec.resources'
+ $ kubectl get mongodb -n demo mg-standalone -o json | jq '.spec.podTemplate.spec.containers[] | select(.name == "mongodb") | .resources'
{
"limits": {
"cpu": "400m",
4 changes: 2 additions & 2 deletions docs/guides/mysql/autoscaler/compute/cluster/index.md
@@ -111,7 +111,7 @@ $ kubectl get pod -n demo sample-mysql-0 -o json | jq '.spec.containers[].resour

Let's check the MySQL resources,
```bash
- $ kubectl get mysql -n demo sample-mysql -o json | jq '.spec.podTemplate.spec.resources'
+ $ kubectl get mysql -n demo sample-mysql -o json | jq '.spec.podTemplate.spec.containers[] | select(.name == "mysql") | .resources'
{
"limits": {
"cpu": "200m",
@@ -421,7 +421,7 @@ $ kubectl get pod -n demo sample-mysql-0 -o json | jq '.spec.containers[].resour
}
}
- $ kubectl get mysql -n demo sample-mysql -o json | jq '.spec.podTemplate.spec.resources'
+ $ kubectl get mysql -n demo sample-mysql -o json | jq '.spec.podTemplate.spec.containers[] | select(.name == "mysql") | .resources'
{
"limits": {
"cpu": "250m",
@@ -109,7 +109,7 @@ $ kubectl get pod -n demo sample-pxc-0 -o json | jq '.spec.containers[].resource

Let's check the PerconaXtraDB resources,
```bash
- $ kubectl get perconaxtradb -n demo sample-pxc -o json | jq '.spec.podTemplate.spec.resources'
+ $ kubectl get perconaxtradb -n demo sample-pxc -o json | jq '.spec.podTemplate.spec.containers[] | select(.name == "perconaxtradb") | .resources'
{
"limits": {
"cpu": "200m",
@@ -462,7 +462,7 @@ $ kubectl get pod -n demo sample-pxc-0 -o json | jq '.spec.containers[].resource
}
}
- $ kubectl get perconaxtradb -n demo sample-pxc -o json | jq '.spec.podTemplate.spec.resources'
+ $ kubectl get perconaxtradb -n demo sample-pxc -o json | jq '.spec.podTemplate.spec.containers[] | select(.name == "perconaxtradb") | .resources'
{
"limits": {
"cpu": "250m",
4 changes: 2 additions & 2 deletions docs/guides/proxysql/autoscaler/compute/cluster/index.md
@@ -142,7 +142,7 @@ $ kubectl get pod -n demo proxy-server-0 -o json | jq '.spec.containers[].resour

Let's check the ProxySQL resources,
```bash
- $ kubectl get proxysql -n demo proxy-server -o json | jq '.spec.podTemplate.spec.resources'
+ $ kubectl get proxysql -n demo proxy-server -o json | jq '.spec.podTemplate.spec.containers[] | select(.name == "proxysql") | .resources'
{
"limits": {
"cpu": "200m",
@@ -542,7 +542,7 @@ $ kubectl get pod -n demo proxy-server-0 -o json | jq '.spec.containers[].resour
}
}
- $ kubectl get proxysql -n demo proxy-server -o json | jq '.spec.podTemplate.spec.resources'
+ $ kubectl get proxysql -n demo proxy-server -o json | jq '.spec.podTemplate.spec.containers[] | select(.name == "proxysql") | .resources'
{
"limits": {
"cpu": "250m",
6 changes: 3 additions & 3 deletions docs/guides/redis/autoscaler/storage/redis.md
@@ -100,7 +100,7 @@ rd-standalone 6.2.14 Ready 2m53s
Let's check volume size from petset, and from the persistent volume,

```bash
- $ kubectl get sts -n demo rd-standalone -o json | jq '.spec.volumeClaimTemplates[].spec.resources.requests.storage'
+ $ kubectl get petset -n demo rd-standalone -o json | jq '.spec.volumeClaimTemplates[].spec.resources.requests.storage'
"1Gi"
$ kubectl get pv -n demo
@@ -146,7 +146,7 @@ Here,
- `spec.storage.standalone.trigger` specifies that storage autoscaling is enabled for this database.
- `spec.storage.standalone.usageThreshold` specifies storage usage threshold, if storage usage exceeds `60%` then storage autoscaling will be triggered.
- `spec.storage.standalone.scalingThreshold` specifies the scaling threshold. Storage will be scaled to `50%` of the current amount.
- - It has another field `spec.storage.replicaSet.expansionMode` to set the opsRequest volumeExpansionMode, which support two values: `Online` & `Offline`. Default value is `Online`.
+ - It has another field `spec.storage.standalone.expansionMode` to set the opsRequest volumeExpansionMode, which supports two values: `Online` & `Offline`. The default value is `Online`.

Let's create the `RedisAutoscaler` CR we have shown above,

@@ -255,7 +255,7 @@ We can see from the above output that the `RedisOpsRequest` has succeeded.
Now, we are going to verify from the `Petset`, and the `Persistent Volume` whether the volume of the standalone database has expanded to meet the desired state, Let's check,

```bash
- $ kubectl get sts -n demo rd-standalone -o json | jq '.spec.volumeClaimTemplates[].spec.resources.requests.storage'
+ $ kubectl get petset -n demo rd-standalone -o json | jq '.spec.volumeClaimTemplates[].spec.resources.requests.storage'
"1594884096"
$ kubectl get pv -n demo
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
6 changes: 3 additions & 3 deletions docs/guides/redis/volume-expansion/volume-expansion.md
@@ -101,7 +101,7 @@ sample-redis 6.2.14 Ready 5m4s
Let's check volume size from petset, and from the persistent volume,

```bash
- $ kubectl get sts -n demo sample-redis-shard0 -o json | jq '.spec.volumeClaimTemplates[].spec.resources.requests.storage'
+ $ kubectl get petset -n demo sample-redis-shard0 -o json | jq '.spec.volumeClaimTemplates[].spec.resources.requests.storage'
"1Gi"
$ kubectl get pv -n demo
@@ -178,10 +178,10 @@ We can see from the above output that the `RedisOpsRequest` has succeeded.
Now, we are going to verify from the `Petset`, and the `Persistent Volumes` whether the volume of the database has expanded to meet the desired state, Let's check,

```bash
- $ kubectl get sts -n demo sample-redis-shard0 -o json | jq '.spec.volumeClaimTemplates[].spec.resources.requests.storage'
+ $ kubectl get petset -n demo sample-redis-shard0 -o json | jq '.spec.volumeClaimTemplates[].spec.resources.requests.storage'
"2Gi"
- $ kubectl get sts -n demo sample-redis-shard1 -o json | jq '.spec.volumeClaimTemplates[].spec.resources.requests.storage'
+ $ kubectl get petset -n demo sample-redis-shard1 -o json | jq '.spec.volumeClaimTemplates[].spec.resources.requests.storage'
"2Gi"
$ kubectl get pv -n demo
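Note that the petset reports the storage request either as a binary quantity string (`"2Gi"`) or as a plain byte count (`"1594884096"`), depending on how it was last set. A small helper for normalizing both forms to bytes (a sketch covering binary suffixes only, not the full Kubernetes quantity grammar):

```python
def quantity_to_bytes(q: str) -> int:
    """Convert a Kubernetes binary quantity ('1Gi', '512Mi') or a
    plain byte count ('1594884096') to an integer number of bytes."""
    suffixes = {"Ki": 1024, "Mi": 1024 ** 2, "Gi": 1024 ** 3, "Ti": 1024 ** 4}
    for suffix, factor in suffixes.items():
        if q.endswith(suffix):
            return int(q[: -len(suffix)]) * factor
    return int(q)  # no recognized suffix: assume raw bytes

print(quantity_to_bytes("2Gi"))         # 2147483648
print(quantity_to_bytes("1594884096"))  # 1594884096
```

This makes it easy to compare the petset's request against the capacity shown by `kubectl get pv` regardless of which representation each one uses.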
