
So, I've got some Python code running inside a Docker container. I started up my local env using Google's gcloud script. I'm seeing basic access-style logs and health-check info, but I'm not sure how to pass the log messages my Python app writes through to the console. Is there a parameter I can set to accomplish this with my gcloud script, or is there something I can set in the Dockerfile that can help?

  • You can attach yourself to the running container, or you can use docker logs. You can also attach when starting a container with docker run -a. I hope this information helps you. Commented Dec 4, 2014 at 20:16
  • Share the Dockerfile to get more support. Where are the logs inside the container now? Generally, print logs to the console (stdout/stderr) inside the container; then you can use docker logs outside. You can always use docker exec to jump inside and check logs like a normal app. Commented Dec 8, 2014 at 0:52
  • Thanks for the help, guys. "docker logs" was what I was looking for. The part I was missing was how to get the running container IDs (docker ps) so I could feed one to the logs command. If either of you writes out your answer, I'll mark it as correct. Commented Dec 8, 2014 at 21:29
  • Possible duplicate of Python app does not print anything when running detached in docker Commented Nov 24, 2016 at 14:08
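The workflow from the comments above can be sketched in a couple of commands (the container name my_app is a placeholder for whatever docker ps shows for your container):

```shell
# List running containers to find the ID or name of yours
docker ps

# Stream that container's stdout/stderr (whatever your Python app prints)
docker logs -f my_app

# Or open a shell inside the container to inspect log files directly
docker exec -it my_app /bin/sh
```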

2 Answers


For Python to log to your terminal/command line/console when it runs from a Docker container, you should have this variable set in your docker-compose.yml:

  environment:
    - PYTHONUNBUFFERED=1

(Any non-empty value works; it tells Python not to buffer stdout/stderr.) This is also a valid solution if you're using print to debug.
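If you can't change the environment, the same effect can be achieved from inside the script itself; a minimal sketch:

```python
import sys

# Inside a container, stdout is usually a pipe rather than a TTY, so Python
# block-buffers it and print() output may not reach `docker logs` until the
# buffer fills. One workaround is flushing explicitly:
print("starting up...", flush=True)

# Another (Python 3.7+) is to switch the stream to line buffering once:
sys.stdout.reconfigure(line_buffering=True)
print("every newline now flushes this stream")
```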

Sign up to request clarification or add additional context in comments.

5 Comments

Glad I could be of help :)
I'm using docker-compose, and one of the containers runs Django. Although the messages from "manage.py runserver" were being printed (all GET and POST requests were showing), my print messages did not show up until I added this environment variable. Thanks!!
Thanks man, very helpful. You'd think that when you run docker-compose up it would default to the same output your code produced pre-Dockerization, but maybe I'm missing something.
@Sentient07 And if you don't use a docker-compose file, just a single run command?
Without a compose file, pass it directly on the run command: docker run -e PYTHONUNBUFFERED=1 <your-image>

(Answer based on the comments)

You don't need to know the container ID if you wrap the app in docker-compose. Just add a docker-compose.yml alongside your Dockerfile. It might sound like an extra level of indirection, but for a simple app it's as trivial as this:

version: "3.3"

services:
  python_app:
    build: .
    environment:
      - PYTHONUNBUFFERED=1

That's it. The benefit is that you don't need to pass all the flags docker run would otherwise require; docker-compose adds them automatically. It also simplifies working with volumes and env vars if they become necessary later.

You can then view logs by service name:

docker-compose logs python_app

By the way, I'd rather set PYTHONUNBUFFERED=1 when testing something locally. It disables buffering, which makes logging more deterministic. For example, I had a lot of logging problems when I tried to spin up a gRPC server in my Python app: the logs flushed before the server started were not all the init logs I wanted to see, and once the server starts you no longer see them, because logging reattaches to a different/spawned process.

