
I would like to run Python scripts in various stages of Jenkins (pipeline) jobs, across a wide range of agents. I want the same Python environment for all of them, so I'm considering using Docker for this purpose.

I'm considering using Docker to build an image that contains the Python environment (with installed packages, etc.) and then runs an external Python script given as an argument:

docker run my_image my_python_file.py
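One way to make that invocation work is to bake the interpreter in as the ENTRYPOINT, so that whatever follows the image name on the command line is handed to python3. A minimal sketch, assuming requirements.txt lists the needed packages:

```dockerfile
FROM python:3.6-slim-jessie

COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt

# With ENTRYPOINT (rather than CMD), arguments to `docker run`
# are appended, so `docker run my_image my_python_file.py`
# effectively runs `python3 my_python_file.py` in the container.
ENTRYPOINT ["python3"]
```

Note that the script itself still has to be visible inside the container, e.g. by mounting the host directory that contains it: docker run -v $(pwd):/src my_image /src/my_python_file.py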

My question is now: how should the infrastructure look? I see that the official Python Docker image is 688 MB, and transferring this image to all agents would surely add overhead. However, they are all on the same network, so perhaps it wouldn't be a big issue.

Update: my Dockerfile now looks like this:

FROM python:3.6-slim-jessie

COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt

CMD ["python3"]

Then I build the image using

> docker build ./ -t my-app

which successfully builds the image and installs my requirements. Then I want to start the image as a daemon using

> docker run -dit my-app

Then I execute the process using

> docker exec -d {DAEMON_ID} my-script.py

2 Answers


Run your Docker container as a daemon process, and every time you need to run your Python script, call docker exec.

docker exec -d <your-container> <your-python-file.py>

4 Comments

You ran the command python3 with no input, so it exited. Try to tell Docker that the command is interactive so that it keeps stdin open: docker run -dit my-app
I tried that as well. I get the following error when I then execute: Error response from daemon: OCI runtime exec failed: exec failed: container_linux.go:348: starting container process caused "exec: \"my-script.py\": executable file not found in $PATH": unknown.
This happens because my-script.py is outside the container. Start your container by mounting a volume (so that the host and the container can share files) and put my-script.py in that volume.
OK, so now I got it to work, but I'm not sure this is the optimal way to do it. Run the daemon with docker run -v $(pwd)/:/src -dit my-app, then execute the script with docker exec {ID} python src/my_script.py
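For reference, a trivial stand-in for my_script.py (the name is just the one used in the commands above) that makes it easy to verify which interpreter actually ran it, host or container:

```python
# my_script.py - placed in the host directory mounted at /src
import sys


def banner():
    """Return a short line identifying the interpreter that ran the script."""
    return "Running under Python %d.%d" % sys.version_info[:2]


if __name__ == "__main__":
    print(banner())
```

Running docker exec {ID} python /src/my_script.py should then report the container's Python version (3.6 for the image above), regardless of what is installed on the host.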

Using Docker agents for builds is an effective way to get distributed and reproducible builds.

I see that the Python docker distribution is 688MB, and transferring this image to all steps would surely be an overhead?

You should consider using smaller Docker images first; there are alpine and slim variants of the official Python image. The alpine Python image is 89.2 MB, for example. Also, most of the image layers will be cached by Docker, so on subsequent pulls you will only fetch a few layers with significantly smaller sizes.
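Switching base images is a one-line change in the Dockerfile. One caveat worth knowing (a general observation, not specific to this setup): alpine uses musl libc, so pip packages with C extensions often have to be compiled from source there, which needs extra build tools:

```dockerfile
# slim variant: Debian-based, small, prebuilt pip wheels usually work
FROM python:3.6-slim-jessie

# alpine variant: smaller still, but packages with C extensions
# (numpy, pandas, ...) may require compilers to build, e.g.:
#   FROM python:3.6-alpine
#   RUN apk add --no-cache build-base
```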

