VPA stands for Vertical Pod Autoscaler, which frees you from having to figure out what values to specify for a container's CPU and memory requests. The autoscaler can either recommend values for those requests, or update them on running workloads automatically.
Before using VPA, we need to install Metrics Server first as follows:
Prepare the installation of Metrics Server
$ git clone https://github.com/kubernetes-incubator/metrics-server.git
$ vim metrics-server/deploy/1.8+/metrics-server-deployment.yaml
==> Add the following args at the end:
...
...
args:
- --kubelet-insecure-tls
- --kubelet-preferred-address-types=InternalIP
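For context, the patched container section of metrics-server-deployment.yaml should end up looking roughly like this (the image tag and surrounding fields are illustrative and may differ in your copy of the repo; only the two args are what we added):

```yaml
containers:
- name: metrics-server
  image: k8s.gcr.io/metrics-server-amd64:v0.3.3     # tag is an assumption
  args:
  - --kubelet-insecure-tls                          # skip kubelet TLS verification (fine for a lab cluster)
  - --kubelet-preferred-address-types=InternalIP    # reach kubelets via the node-internal IP
```

The first flag avoids certificate errors on clusters whose kubelets use self-signed certs; the second makes Metrics Server contact kubelets by internal IP instead of a possibly unresolvable hostname.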
Deploy the YAML files
$ kubectl create -f deploy/1.8+/
$ kubectl -n kube-system get pods | grep metrics
metrics-server-7c9d76cf84-49qzp 1/1 Running 0 4h37m
Verification
$ kubectl top nodes
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
ndc73-0541 2032m 4% 12572Mi 1%
$ kubectl -n kube-system top pods
NAME CPU(cores) MEMORY(bytes)
calico-node-6kx7t 39m 345Mi
coredns-858b8fd7d-5b4lb 4m 21Mi
coredns-858b8fd7d-5h2b6 9m 21Mi
etcd-ndc73-0541 29m 95Mi
gpu-upt-n7h6g 1m 22Mi
kube-apiserver-ndc73-0541 46m 568Mi
kube-controller-manager-ndc73-0541 51m 78Mi
kube-proxy-55qrg 11m 53Mi
kube-scheduler-ndc73-0541 15m 24Mi
metrics-server-7c9d76cf84-49qzp 2m 18Mi
my-scheduler-8d6b544c7-z529w 15m 47Mi
vpa-admission-controller-9c9c8c97d-htnj2 11m 15Mi
vpa-recommender-77c99558b5-b8kzd 11m 16Mi
vpa-updater-59c66946d5-xm49n 13m 16Mi
After finishing the installation of Metrics Server, we can start testing VPA. (Note that the VPA components themselves, namely the recommender, updater, and admission controller, must also be deployed; they already show up in the pod listing above.)
Here I follow the official document: Configuring vertical pod autoscaling
https://cloud.google.com/kubernetes-engine/docs/how-to/vertical-pod-autoscaling
1. Getting resource recommendations
my-rec-vpa.yaml
apiVersion: autoscaling.k8s.io/v1beta2
kind: VerticalPodAutoscaler
metadata:
  name: my-rec-vpa
spec:
  targetRef:
    apiVersion: "extensions/v1beta1"
    kind: Deployment
    name: my-rec-deployment
  updatePolicy:
    updateMode: "Off"
my-rec-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-rec-deployment
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: my-rec-deployment
    spec:
      containers:
      - name: my-rec-container
        image: nginx
Deploy them and see the resource recommendation
$ kubectl create -f my-rec-vpa.yaml
$ kubectl create -f my-rec-deployment.yaml
$ kubectl get vpa my-rec-vpa --output yaml
apiVersion: autoscaling.k8s.io/v1beta2
kind: VerticalPodAutoscaler
metadata:
  creationTimestamp: "2019-09-19T07:21:28Z"
  generation: 1
  name: my-rec-vpa
  namespace: default
  resourceVersion: "12429624"
  selfLink: /apis/autoscaling.k8s.io/v1beta2/namespaces/default/verticalpodautoscalers/my-rec-vpa
  uid: 18f042d5-daae-11e9-b99c-ac1f6ba464ec
spec:
  targetRef:
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: my-rec-deployment
  updatePolicy:
    updateMode: "Off"
status:
  conditions:
  - lastTransitionTime: "2019-09-19T07:23:22Z"
    status: "True"
    type: RecommendationProvided
  recommendation:
    containerRecommendations:
    - containerName: my-rec-container
      lowerBound:
        cpu: 25m
        memory: 262144k
      target:
        cpu: 25m
        memory: 262144k
      uncappedTarget:
        cpu: 25m
        memory: 262144k
      upperBound:
        cpu: 7931m
        memory: 8291500k
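VPA reports CPU as milliCPU strings ("25m" is 0.025 of a core) and memory as raw decimal byte quantities ("262144k" is 262,144,000 bytes, i.e. 250 MiB). To make the units concrete, here is a small Python sketch (not part of the tutorial) that converts the quantity strings seen in this output:

```python
# Convert the Kubernetes resource quantity strings that appear in VPA
# output into plain numbers. This only handles the suffixes used in
# this post ("m" for milliCPU; "k"/"M" decimal and "Ki"/"Mi" binary
# for memory), not the full Kubernetes quantity grammar.

def cpu_to_cores(q: str) -> float:
    """'25m' -> 0.025 cores, '2' -> 2.0 cores."""
    return int(q[:-1]) / 1000 if q.endswith("m") else float(q)

def mem_to_mib(q: str) -> float:
    """'262144k' -> 262,144,000 bytes -> 250.0 MiB."""
    units = {"Ki": 2**10, "Mi": 2**20, "k": 10**3, "M": 10**6}
    for suffix in ("Ki", "Mi", "k", "M"):  # check two-letter suffixes first
        if q.endswith(suffix):
            return int(q[: -len(suffix)]) * units[suffix] / 2**20
    return int(q) / 2**20  # bare number means bytes

print(cpu_to_cores("25m"))    # 0.025
print(mem_to_mib("262144k"))  # 250.0
```

So the lower bound and target above both amount to a fortieth of a core and about 250 MiB, while the upper bound is nearly 8 cores.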
2. Updating resource requests automatically
my-vpa.yaml
apiVersion: autoscaling.k8s.io/v1beta2
kind: VerticalPodAutoscaler
metadata:
  name: my-vpa
spec:
  targetRef:
    apiVersion: "extensions/v1beta1"
    kind: Deployment
    name: my-auto-deployment
  updatePolicy:
    updateMode: "Auto"
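As an aside, if you want to keep the updater from setting requests outside a range you trust, the VerticalPodAutoscaler spec also accepts a resourcePolicy section. A sketch of what that could look like (the minAllowed/maxAllowed values here are arbitrary examples, not recommendations):

```yaml
apiVersion: autoscaling.k8s.io/v1beta2
kind: VerticalPodAutoscaler
metadata:
  name: my-vpa
spec:
  targetRef:
    apiVersion: "extensions/v1beta1"
    kind: Deployment
    name: my-auto-deployment
  updatePolicy:
    updateMode: "Auto"
  resourcePolicy:
    containerPolicies:
    - containerName: my-container
      minAllowed:
        cpu: 100m     # example floor; pick values for your workload
        memory: 50Mi
      maxAllowed:
        cpu: "1"      # example ceiling
        memory: 500Mi
```

Recommendations are then clamped to this range before being applied (the uncappedTarget field in the status still shows the unclamped value).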
my-auto-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-auto-deployment
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: my-auto-deployment
    spec:
      containers:
      - name: my-container
        image: k8s.gcr.io/ubuntu-slim:0.1
        resources:
          requests:
            cpu: 100m
            memory: 50Mi
        command: ["/bin/sh"]
        args: ["-c", "while true; do timeout 0.5s yes >/dev/null; sleep 0.5s; done"]
Deploy them
$ kubectl create -f my-vpa.yaml
$ kubectl create -f my-auto-deployment.yaml
$ kubectl get vpa my-vpa --output yaml
apiVersion: autoscaling.k8s.io/v1beta2
kind: VerticalPodAutoscaler
metadata:
  creationTimestamp: "2019-09-19T07:30:14Z"
  generation: 1
  name: my-vpa
  namespace: default
  resourceVersion: "12430493"
  selfLink: /apis/autoscaling.k8s.io/v1beta2/namespaces/default/verticalpodautoscalers/my-vpa
  uid: 52370870-daaf-11e9-b99c-ac1f6ba464ec
spec:
  targetRef:
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: my-auto-deployment
  updatePolicy:
    updateMode: Auto
status:
  conditions:
  - lastTransitionTime: "2019-09-19T07:31:22Z"
    status: "True"
    type: RecommendationProvided
  recommendation:
    containerRecommendations:
    - containerName: my-container
      lowerBound:
        cpu: 586m
        memory: 262144k
      target:
        cpu: 627m
        memory: 262144k
      uncappedTarget:
        cpu: 627m
        memory: 262144k
      upperBound:
        cpu: 777m
        memory: 262144k
The output shows three sets of recommendations for CPU and memory requests: lower bound, target, and upper bound. The target recommendation says that the container will run optimally if it requests 627 milliCPU and 262144 kilobytes (roughly 250 MiB) of memory.
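If you need the target in a script rather than by eyeballing the YAML, one option is to dump the object as JSON (kubectl get vpa my-vpa --output json) and pick out the field. A minimal Python sketch, run here against a pasted, trimmed copy of the status rather than a live cluster:

```python
import json

# A trimmed copy of the output of `kubectl get vpa my-vpa --output json`;
# in a real script you would read this from kubectl's stdout instead.
vpa = json.loads("""
{
  "status": {
    "recommendation": {
      "containerRecommendations": [
        {
          "containerName": "my-container",
          "target": {"cpu": "627m", "memory": "262144k"}
        }
      ]
    }
  }
}
""")

# Print the target recommendation for every container the VPA tracks.
for rec in vpa["status"]["recommendation"]["containerRecommendations"]:
    target = rec["target"]
    print(f'{rec["containerName"]}: cpu={target["cpu"]} memory={target["memory"]}')
# -> my-container: cpu=627m memory=262144k
```

The same field can also be pulled straight from kubectl with a JSONPath expression, e.g. kubectl get vpa my-vpa -o jsonpath='{.status.recommendation.containerRecommendations[0].target}'.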
Reference:
https://blog.csdn.net/networken/article/details/92830036