
Is there anything wrong with how I am trying to configure my Minikube cluster so that the pods can access the PostgreSQL instance running on the same machine?

I've checked /etc/hosts inside the Minikube VM via minikube ssh, and it contains:

127.0.0.1       localhost
127.0.1.1       minikube
192.168.99.1    host.minikube.internal
192.168.99.110  control-plane.minikube.internal

database-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: service-database
spec:
  type: ExternalName
  externalName: host.minikube.internal
  ports:
    - port: 5432
      targetPort: 5432

pod-deployment.yaml

apiVersion: apps/v1
kind: Deployment
spec:
  ...
  template:
    ...
    spec:
      containers:
        - name: <container_alias>
          image: <container_name>
          env:
            - name: DB_URL
              value: "jdbc:postgresql://service-database/<database_name>"
          ports:
            - containerPort: 8080

Note: the DB_URL environment variable maps to spring.datasource.url in application.properties in Spring Boot.
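For reference, a minimal sketch of what that wiring might look like on the Spring Boot side (the exact property names beyond spring.datasource.url, and the <database_name> placeholder, are assumptions based on the question, not confirmed configuration):

```properties
# application.properties (sketch)
# DB_URL comes from the Deployment's env block, e.g.
# jdbc:postgresql://service-database/<database_name>
spring.datasource.url=${DB_URL}
spring.datasource.driver-class-name=org.postgresql.Driver
```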

Then, when I check the pod logs, I am getting this exception:

Caused by: java.net.UnknownHostException: service-database

1 Answer
I've checked /etc/hosts inside the Minikube VM via minikube ssh, and it contains

That may be true, but just as Kubernetes does not expose the /etc/hosts of its Nodes to Pods, neither does Minikube. Kubernetes has its own DNS resolver, and thus its own idea of what should be in a Pod's /etc/hosts (Docker does the same thing -- it does not simply expose the host's /etc, but rather lets the user customize that behavior at container launch).

There is a formal mechanism for telling Kubernetes that you wish to manage the DNS resolution endpoints manually -- that's what a Headless Service does. Usually the "manual" part is handled by the StatefulSet controller, but nothing stops other mechanisms from grooming that list:

apiVersion: v1
kind: Service
metadata:
  name: service-database
spec:
  type: ClusterIP
  # yes, literally the word "None"
  clusterIP: None
  ports:
    - name: 5432-5432
      port: 5432
      targetPort: 5432
---
apiVersion: v1
kind: Endpoints
metadata:
  name: service-database
subsets:
- addresses:
  - ip: 192.168.99.1
  ports:
  - name: 5432-5432
    port: 5432
    protocol: TCP

and now the internal DNS will resolve service-database to the answer 192.168.99.1, and will also populate the SRV records just like normal.
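As a sketch of how you might verify the resolution (the kubectl commands are standard; the busybox image tag and throwaway pod name are assumptions):

```shell
# apply the headless Service + Endpoints manifests
kubectl apply -f database-service.yaml

# run a throwaway pod and resolve the Service name through the cluster DNS;
# a headless Service backed by manual Endpoints should answer with 192.168.99.1
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 \
  -- nslookup service-database
```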


2 Comments

This would mean I need a separate configuration when I move the infrastructure from Minikube to a hosted Kubernetes cluster. Is there another way to keep the service-database configuration isolated?
Well, you can't have it both ways: unchanged YAML that works in Minikube "but with a Postgres that lives outside the cluster", unless your production PostgreSQL also lives outside the cluster via the same mechanism. That said, I'll edit the answer to try to make the change smaller.
