Only an odd number of control plane nodes is supported. This provides the best balance of fault tolerance and cost for etcd, which needs a quorum of more than half of its members: an even number of nodes adds cost without increasing fault tolerance.
If auto scaling is enabled, the minimum and maximum number of worker nodes must be specified.
Auto scaling requires CPU and memory limits to be set in the resource definition of each Pod (see the example at the end of this section).
If Kubernetes is unable to schedule a Pod due to insufficient CPU or memory in the cluster, a worker node will be added, as long as the maximum number of worker nodes has not been reached.
If the aggregate resource limits of all existing Pods are lower than 50% of the cluster capacity, a worker node will be removed, as long as the minimum number of worker nodes has not been reached.
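As a reference, the minimal sketch below shows a Deployment whose containers declare the CPU and memory limits that the scaling rules above evaluate. The name `web`, the image `nginx:1.25`, and the specific values are illustrative only, not requirements of the platform.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                  # illustrative name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25  # illustrative image
          resources:
            requests:        # what the scheduler reserves per replica
              cpu: "250m"
              memory: "128Mi"
            limits:          # the limits referred to by the scaling rules
              cpu: "500m"
              memory: "256Mi"
```

With these values, the three replicas account for an aggregate limit of 1.5 CPU cores and 768 MiB of memory. If the total across all Pods falls below 50% of the capacity provided by the current worker nodes, a node becomes a candidate for removal, down to the configured minimum; if an additional replica cannot be scheduled for lack of CPU or memory, a node is added, up to the configured maximum.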