I have a three-node PostgreSQL cluster managed by Patroni. The cluster handles a very heavy workload, so in production it runs on bare-metal machines. We need to migrate this infrastructure to Kubernetes (for several reasons), and I am running performance tests with pgbench. First I compared bare metal vs. a virtual machine and measured only a very small degradation. Then I compared VSI vs. Kubernetes to understand the overhead added by K8s.
Now I am trying to fine-tune CPU and memory. K8s runs on worker nodes with 48 vCPUs and 192 GB of RAM. However, after deploying PostgreSQL I still see:
```
NAME                                     CPU(cores)   MEMORY(bytes)
postgresql-deployment-5c98f5c949-q758d   2m           243Mi
```
even though I allocated the following resources to the PostgreSQL container:
```yaml
resources:
  requests:
    memory: 64Gi
  limits:
    memory: 64Gi
```
If I run:

```
kubectl top pod <pod name> -n <namespace>
```

I get the following:

```
NAME                                     CPU(cores)   MEMORY(bytes)
postgresql-deployment-5c98f5c949-q758d   2m           244Mi
```
The same values appear in the K8s dashboard, even though the output of:

```
kubectl describe pod <pod name> -n <namespace>
```

shows that the Pod runs with Guaranteed QoS and 64Gi of RAM for both request and limit.
How is this supposed to work?
Another thing I don't understand is the CPU request and limit. I expected to be able to enter something like this:
```yaml
resources:
  requests:
    cpu: 40
    memory: 64Gi
  limits:
    cpu: 40
    memory: 64Gi
```
I expected this to reserve 40 vCPUs for my container, but during deployment `kubectl describe pod <pod name> -n <namespace>` reports insufficient CPU on the node. The maximum value I can use is 1.
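For reference, this is the only shape of the spec that actually schedules on my nodes (a sketch; the concrete value of 1 is simply the highest CPU value the scheduler accepts for me):

```yaml
resources:
  requests:
    cpu: "1"        # anything above 1 leaves the Pod Pending with "Insufficient cpu"
    memory: 64Gi
  limits:
    cpu: "1"
    memory: 64Gi
```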
How is this supposed to work?
Obviously, I have read the documentation and looked at various examples, but when I put things into practice the test results differ from the theory. I know I am missing something.