
I'm having some issues with permissions and I'm really hoping someone can point me to where I'm going wrong...

I've got a Kubernetes cluster set up and functioning (for example, I'm running a MySQL pod and connecting to it without issue), and I've been trying to get a PostgreSQL pod running with TLS support. The service that will connect to this pod requires TLS, so going without it is unfortunately not an option.

Here's where things get a bit messy: everything functions, except that Postgres init can't seem to read the certificate files I've stored in Kubernetes Secrets. Whatever options I choose, Postgres init returns the following:

$ kubectl logs data-server-97469df55-8wd6q
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.

The database cluster will be initialized with locale "en_US.utf8".
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".

Data page checksums are disabled.

fixing permissions on existing directory /var/lib/postgres ... ok
creating subdirectories ... ok
selecting dynamic shared memory implementation ... posix
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting default time zone ... UTC
creating configuration files ... ok
running bootstrap script ... ok
sh: locale: not found
2021-09-11 20:03:54.323 UTC [32] WARNING:  no usable system locales were found
performing post-bootstrap initialization ... ok
initdb: warning: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.
syncing data to disk ... ok


Success. You can now start the database server using:

    pg_ctl -D /var/lib/postgres -l logfile start

waiting for server to start....2021-09-11 20:04:01.882 GMT [37] FATAL:  could not load server certificate file "/var/lib/postgres-secrets/server.crt": Permission denied
2021-09-11 20:04:01.882 GMT [37] LOG:  database system is shut down
pg_ctl: could not start server
Examine the log output.
 stopped waiting

I HIGHLY suspect my issue is the very first line, but I'm not sure how to resolve this in Kubernetes. How do I tell Kubernetes to mount my secrets so that the postgres user can read them? (Being lazy and doing a chmod 0777 does not work.)

These are my configs:

apiVersion: v1
kind: Service
metadata:
  name: data-server
  labels:
    app: data-server
spec:
  ports:
  - name: data-server
    targetPort: 5432
    protocol: TCP
    port: 5432
  selector:
    app: data-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: data-server
spec:
  selector:
    matchLabels:
      app: data-server
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: data-server
    spec:
      serviceAccountName: default
      containers:
      - name: postgres
        image: postgres:alpine
        imagePullPolicy: IfNotPresent
        args:
          - -c
          - hba_file=/var/lib/postgres-config/pg_hba.conf
          - -c
          - config_file=/var/lib/postgres-config/postgresql.conf
        env:
          - name: PGDATA
            value: /var/lib/postgres 
          - name: POSTGRES_PASSWORD_FILE
            value: /var/lib/postgres-secrets/postgres-pwd.txt
        ports:
        - name: data-server
          containerPort: 5432
          hostPort: 5432
          protocol: TCP
        volumeMounts:
        - name: postgres-config
          mountPath: /var/lib/postgres-config
        - name: postgres-storage
          mountPath: /var/lib/postgres-data
        - name: postgres-secrets
          mountPath: /var/lib/postgres-secrets
      volumes:
      - name: postgres-config
        configMap:
          name: data-server        
      - name: postgres-storage
        persistentVolumeClaim:
          claimName: gluster-claim
      - name: postgres-secrets
        secret:
          secretName: data-server
          defaultMode: 0640

Secrets:

$ kubectl get secret
NAME                  TYPE                                  DATA   AGE
data-server           Opaque                                5      131m
default-token-nq7pv   kubernetes.io/service-account-token   3      5d5h

PV / PVC

$ kubectl describe pv,pvc
Name:            gluster-pv
Labels:          <none>
Annotations:     pv.kubernetes.io/bound-by-controller: yes
Finalizers:      [kubernetes.io/pv-protection]
StorageClass:    
Status:          Bound
Claim:           default/gluster-claim
Reclaim Policy:  Retain
Access Modes:    RWX
VolumeMode:      Filesystem
Capacity:        50Gi
Node Affinity:   <none>
Message:         
Source:
    Type:                Glusterfs (a Glusterfs mount on the host that shares a pod's lifetime)
    EndpointsName:       gluster-cluster
    EndpointsNamespace:  <unset>
    Path:                /gv0
    ReadOnly:            false
Events:                  <none>


Name:          gluster-claim
Namespace:     default
StorageClass:  
Status:        Bound
Volume:        gluster-pv
Labels:        <none>
Annotations:   pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      50Gi
Access Modes:  RWX
VolumeMode:    Filesystem
Used By:       data-server-97469df55-8wd6q
               dnsutils
               mysql-6f47967858-xngbr
  • One thing you can try is configuring your pod to run something other than postgres (like sleep inf, for example); then you can kubectl exec into it and examine the filesystem -- are the secrets mounted where you expect? What do the permissions look like? Commented Sep 11, 2021 at 23:21
  • Shoot, I knew I forgot to add something. I actually have tried that: I spun up an Alpine container with the same permissions and mounts. The secrets are there and I can see them, but the problem is I'm root.. Commented Sep 11, 2021 at 23:43
  • Maybe you just need an initContainer to set permissions on the secret so that postgres can read it? Commented Sep 12, 2021 at 0:12
  • Hmm. Definitely worth a try, though it seems kinda hacky.. Commented Sep 12, 2021 at 0:47
  • So... sadly this doesn't work. I'm not sure what I'm doing wrong, but I can't seem to access the files in the initContainer: `kubectl logs data-lake-844c4864b5-bcdf4 -c postgres-init` gives `chown: /var/lib/postgres-secrets/.*: No such file or directory` Commented Sep 12, 2021 at 1:37

1 Answer


Figured it out... it turns out all that was needed was a securityContext block in the template/spec:

securityContext:
  runAsUser: 70
  fsGroup: 70

Took way too long to find a reference to this via Google. It seems a bit odd, too: what happens if I want to switch off Alpine to something else? The UID/GID won't be the same, so I'll have to look them up and change them here as well. Seems silly to use IDs rather than names.
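For context, here's roughly where that block lands in the Deployment from the question (a sketch, not a full manifest; the 70/70 values are simply what the postgres user happens to map to in postgres:alpine -- you can verify with `id postgres` inside the image):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: data-server
spec:
  selector:
    matchLabels:
      app: data-server
  template:
    metadata:
      labels:
        app: data-server
    spec:
      securityContext:
        runAsUser: 70   # UID of the postgres user in postgres:alpine (assumption: unchanged in your image)
        fsGroup: 70     # kubelet applies this group to volume mounts, so with
                        # defaultMode: 0640 the Secret files become group-readable
      containers:
      - name: postgres
        image: postgres:alpine
        # ...rest of the container spec as in the question
```

The key interaction is between fsGroup and the Secret volume's defaultMode: 0640 means owner read/write plus group read, and fsGroup is what puts the container's user into a group that can actually use that group-read bit.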
