
I am running a Python application as a Docker container, and in it I use Python's logging module to log execution steps with logger.info, logger.debug and logger.error. The problem is that the log file is only persistent within the Docker container: if the container goes away, the log file is lost, and every time I want to view the log I have to manually copy the container's log file to the local system. What I want is for whatever log is written to the container's log file to be persistent on the local system - either write to a local system log file directly, or auto-mount the container's log file to the local system.

A few things about my host machine:

  1. I run multiple docker containers on the machine.
  2. My sample docker-core file is:

     ```
     FROM server-base-v1
     ADD . /app
     WORKDIR /app
     ENV PATH /app:$PATH
     CMD ["python","-u","app.py"]
     ```

  3. My sample docker-base file is:

     ```
     FROM python:3
     ADD ./setup /app/setup
     WORKDIR /app
     RUN pip install -r setup/requirements.txt
     ```

  4. A sample of my docker-compose.yml file is:

```yaml
version: "2"
networks:
  server-net:

services:
  mongo:
    container_name: mongodb
    image: mongodb
    hostname: mongodb
    networks:
      - server-net
    volumes:
      - /dockerdata/mongodb:/data/db
    ports:
      - "27017:27017"
      - "28017:28017"

  server-core-v1:
    container_name: server-core-v1
    image: server-core-v1:latest
    depends_on:
      - mongo
    networks:
      - server-net
    ports:
      - "8000:8000"
    volumes:
      - /etc/localtime:/etc/localtime:ro
```

The yml sample above is just a part of my actual yml file. I have multiple server-core-v1 containers (with different names) running in parallel, each with its own log file.

I would also appreciate better strategies for logging in Python with Docker that keep the logs persistent. I read a few articles that mentioned using sys.stderr.write() and sys.stdout.write(), but I am not sure how to use that, especially with multiple containers running and logging.
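For reference, the stdout/stderr approach mentioned above usually means pointing the logging module at a StreamHandler instead of a file: Docker then captures everything each container writes to stdout/stderr, and `docker logs <container-name>` (or a logging driver) handles viewing and persistence per container. A minimal sketch, where the logger name and format are illustrative, not taken from the question's code:

```python
import logging
import sys

# Send all log records to stdout so the Docker logging driver captures them;
# each container's output stays separate under its own container name.
handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(logging.Formatter(
    "%(asctime)s %(name)s %(levelname)s %(message)s"))

logger = logging.getLogger("server-core-v1")  # illustrative name
logger.setLevel(logging.DEBUG)
logger.addHandler(handler)

logger.info("application started")
logger.debug("debug details go to stdout too")
logger.error("errors as well")
```

With `python -u` in the Dockerfile's CMD (as above), output is unbuffered, so records show up in `docker logs` immediately.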

  • Attach a volume to the container and store the log files in the volume. Even if the container gets deleted, you still have the log files in the volume. – Commented Feb 26, 2019 at 10:07

2 Answers


Bind-mounts are what you need.

(Diagram: where bind mounts, volumes, and tmpfs mounts live on a Docker host.)

As you can see, bind mounts are accessible from your host file system, much like shared folders in a VM architecture. You can achieve this simply by mounting a host path directly into the container. In your case:

```yaml
version: "2"
networks:
  server-net:

services:
  mongo:
    container_name: mongodb
    image: mongodb
    hostname: mongodb
    networks:
      - server-net
    volumes:
      - /dockerdata/mongodb:/data/db
    ports:
      - "27017:27017"
      - "28017:28017"

  server-core-v1:
    container_name: server-core-v1
    image: server-core-v1:latest
    depends_on:
      - mongo
    networks:
      - server-net
    ports:
      - "8000:8000"
    volumes:
      - /etc/localtime:/etc/localtime:ro
      # host log dir : container dir the app writes its logs into
      - ./yours/example/host/path:/app/logs
```

Just replace ./yours/example/host/path with the target directory on your host; the container-side path should be the directory your application writes its log files to. In this scenario, I believe the logger is on the server side.

If you are working on Windows, remember to bind mount a path under your current user directory!
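On the Python side, the application then just needs to write its log file somewhere under the bind-mounted directory. A minimal sketch, assuming /app/logs is the container-side path of the mount; the function name and default paths are illustrative, not from the question:

```python
import logging
import os

def make_file_logger(log_dir="/app/logs", name="server-core-v1"):
    """Build a logger that writes into log_dir.

    log_dir is meant to be the container-side path of a bind mount,
    so the file also appears (and persists) on the host.
    """
    os.makedirs(log_dir, exist_ok=True)
    handler = logging.FileHandler(os.path.join(log_dir, name + ".log"))
    handler.setFormatter(
        logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
    logger = logging.getLogger(name)
    logger.setLevel(logging.INFO)
    logger.addHandler(handler)
    return logger
```

With several server-core-v1 containers running in parallel, giving each one its own name (and hence its own file) keeps the logs separated on the host.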




Volumes are what you need.

You can create volumes to map an internal container folder to a local system folder. That way you can store your logs in a logs folder inside the container and map it as a volume to any folder on your local system.

You can specify a volume in the docker-compose.yml file for each service you are creating. See the docs.
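Applied to the compose file from the question, the per-service mapping this answer describes might look like the fragment below; the host and container paths are illustrative, and the container path should match wherever the application writes its logs:

```yaml
services:
  server-core-v1:
    image: server-core-v1:latest
    volumes:
      # host folder : container folder the app logs into
      - ./logs/server-core-v1:/app/logs
```

Each parallel server-core container can get its own host subfolder the same way, so the log files stay separated.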

