
I'm having trouble establishing an SSL connection between a web service and a remotely hosted Postgres database. Using the same cert and key files that the web service uses, I can connect to the database with tools such as pgAdmin and DataGrip. These files were downloaded from the Postgres instance in the Google Cloud Console.

Issue:

When the Spring Boot service starts up, the following error occurs:

org.postgresql.util.PSQLException: Could not read SSL key file /tls/tls.key

When I look at the Postgres server logs, the error is recorded as:

LOG: could not accept SSL connection: UNEXPECTED_RECORD

Setup:

A Spring Boot service running on Minikube (locally) and on GKE, connecting to a Google Cloud SQL Postgres instance.

Actions Taken:

I downloaded the client cert & key and created a K8s TLS Secret from them. I also made sure the files can be read from the volume mount by running the following command in the K8s deployment config:

command: ["bin/sh", "-c", "cat /tls/tls.key"]

Here is the datasource URL, which is fed in via an environment variable (DATASOURCE_URL):

"jdbc:postgresql://[Database-Address]:5432/[database]?ssl=true&sslmode=require&sslcert=/tls/tls.crt&sslkey=/tls/tls.key"

Here is the K8s deployment YAML; any idea where I'm going wrong?

---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: {{ template "service.name" . }}
  labels:
    release: {{ template "release.name" . }}
    chart: {{ template "chart.name" . }}
    chart-version: {{ template "chart.version" . }}
  release: {{ template "service.fullname" . }}
spec: 
  replicas: {{ $.Values.image.replicaCount }}
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1 
  template:
    metadata:
      labels:
        app: {{ template "service.name" . }}
        release: {{ template "release.name" . }}
        env: {{ $.Values.environment }}
    spec:
      imagePullSecrets:
        - name: {{ $.Values.image.pullSecretsName }}
      containers:
        - name: {{ template "service.name" . }}
          image: {{ $.Values.image.repo }}:{{ $.Values.image.tag }}
          # command: ["bin/sh", "-c", "cat /tls/tls.key"]
          imagePullPolicy: {{ $.Values.image.pullPolicy }}
          volumeMounts:
            - name: tls-cert
              mountPath: "/tls"
              readOnly: true
          ports:
            - containerPort: 80
          env:
            - name: DATASOURCE_URL
              valueFrom:
                secretKeyRef:
                  name: service
                  key: DATASOURCE_URL
            - name: DATASOURCE_USER
              valueFrom:
                secretKeyRef:
                  name: service
                  key: DATASOURCE_USER
            - name: DATASOURCE_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: service
                  key: DATASOURCE_PASSWORD
      volumes:
        - name: tls-cert
          projected:
            sources:
              - secret:
                  name: postgres-tls
                  items:
                    - key: tls.crt
                      path: tls.crt
                    - key: tls.key
                      path: tls.key
  • Is the container running as root or as a different account? If it's a different account, please ensure that account has read permissions on /tls/tls.key (a sketch of one way to do that follows after these comments). Commented Apr 27, 2018 at 21:07
  • @NitinMidha, I have not configured any permissions explicitly. Do you have a link to a reference where I can learn how to do this? Commented Apr 27, 2018 at 21:09
  • Per guidelines it is not recommended to run a container as root, so many Docker images take steps to use a non-root account. I did that in one of my NGINX images and had to assign explicit permissions in my Dockerfile. So please check the base image you are using and see whether it uses a non-root account. Check the link below. Commented Apr 27, 2018 at 21:12
  • medium.com/@mccode/… Commented Apr 27, 2018 at 21:14
  • @NitinMidha, thank you for this. Here is my Dockerfile: FROM openjdk:8 EXPOSE 80 ADD /target/service.jar service.jar ENTRYPOINT exec java $JAVA_OPTS -jar service.jar Commented Apr 27, 2018 at 21:14
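Following up on the comment thread: if the base image does run as a non-root user, one way to make the mounted key readable is a pod-level fsGroup, which makes the kubelet set group ownership on the Secret volume's files. A sketch under that assumption; the GID is a placeholder and would need to match whatever user the image actually runs as:

# Sketch: pod-level securityContext so a non-root container user can
# read /tls/tls.key; 1000 is an assumed GID, not taken from this post.
spec:
  securityContext:
    fsGroup: 1000

(Mounted Secrets default to mode 0644, so plain world-read often already suffices; fsGroup matters when a stricter defaultMode such as 0440 is set on the volume.)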

1 Answer


So I figured it out: I was asking the wrong question!

Google Cloud SQL has a proxy component for the Postgres database, so the problem I was trying to solve (connecting the traditional way) went away once I implemented the proxy. Instead of dealing with whitelisting IPs, SSL certs, and such, you just spin up the proxy, point it at a GCP credential file, then update your database URI to access via localhost.

To set up the proxy, you can find directions here. There is a good example of a k8s deployment file here.
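For reference, the sidecar pattern from those examples looks roughly like this (the image tag, instance connection name, and secret name below are placeholders following Google's documented pattern, not values from this post):

# Hypothetical Cloud SQL proxy sidecar, added to the Deployment's
# containers list next to the application container.
- name: cloudsql-proxy
  image: gcr.io/cloudsql-docker/gce-proxy:1.11
  command: ["/cloud_sql_proxy",
            "-instances=<PROJECT>:<REGION>:<INSTANCE>=tcp:5432",
            "-credential_file=/secrets/cloudsql/credentials.json"]
  volumeMounts:
    - name: cloudsql-credentials
      mountPath: /secrets/cloudsql
      readOnly: true

# ...with a matching volume built from a Secret that holds the GCP
# service-account key file:
volumes:
  - name: cloudsql-credentials
    secret:
      secretName: cloudsql-credentials

The datasource URL then drops the SSL parameters entirely and points at the proxy: jdbc:postgresql://127.0.0.1:5432/[database].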

One snag I did come across was with the GCP service account. Make sure to add both the Cloud SQL Client AND Cloud SQL Editor roles; I only added Cloud SQL Client to start with and kept getting a 403 error.
