232

According to this official document, you can run a command from a YAML config file:

https://kubernetes.io/docs/tasks/configure-pod-container/

apiVersion: v1
kind: Pod
metadata:
  name: hello-world
spec:  # specification of the pod’s contents
  restartPolicy: Never
  containers:
  - name: hello
    image: "ubuntu:14.04"
    env:
    - name: MESSAGE
      value: "hello world"
    command: ["/bin/sh","-c"]
    args: ["/bin/echo \"${MESSAGE}\""]

If I want to run more than one command, how do I do that?

12 Answers

301
command: ["/bin/sh","-c"]
args: ["command one; command two && command three"]

Explanation: The command ["/bin/sh", "-c"] says "run a shell, and execute the following instructions". The args string is then passed to that shell as the script to run. In shell scripting, a semicolon separates commands, and && runs the following command only if the preceding one succeeds. In the example above, command one and command two always run in sequence, and command three runs only if command two succeeded.
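For example, applied to the pod from the question, the spec might look like the sketch below (the extra commands, date and uname -a, are just illustrative placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: hello-world
spec:
  restartPolicy: Never
  containers:
  - name: hello
    image: "ubuntu:14.04"
    env:
    - name: MESSAGE
      value: "hello world"
    command: ["/bin/sh","-c"]
    # echo always runs, date runs after it regardless, and uname -a runs only if date succeeded
    args: ["/bin/echo \"${MESSAGE}\"; date && uname -a"]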

Alternative: In many cases, some of the commands you want to run are probably setting up the final command to run. In this case, building your own Dockerfile is the way to go. Look at the RUN directive in particular.

10 Comments

Yes, very valid, however, I think there are also good use cases to extend command as it overrides the Dockerfile's Entrypoint ;)
Any idea on how to do this with container lifecycle? It has no args
@aclokay you can just specify the arguments as additional command strings. The separation between command & args in the Container is just to make overriding the arguments easier. They are functionally equivalent.
@Abdul it means run the script provided as an argument, rather than starting an interactive shell or loading the script from a file.
This makes no sense to me... Isn't -c an argument? Then why would you put it inside command and not args? And why not put everything inside command, since command doesn't have to contain only the executable, it can carry arguments too, so why bother separating the two? Why wouldn't command: ["/bin/sh", "-c", "command one; command two && command three"] work exactly like your example? This would make sense if you had to write command: "/bin/sh" and then args: ["-c", "..."]
200

My preference is to multiline the args; this is the simplest and easiest to read. Also, the script can be changed without touching the image; you only need to restart the pod. For example, for a mysql dump, the container spec could be something like this:

containers:
  - name: mysqldump
    image: mysql
    command: ["/bin/sh", "-c"]
    args:
      - echo starting;
        ls -la /backups;
        mysqldump --host=... -r /backups/file.sql db_name;
        ls -la /backups;
        echo done;
    volumeMounts:
      - ...

The reason this works is that YAML folds all the lines after the "-" into one string, and sh runs that one long string: "echo starting; ls... ; echo done;".
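To illustrate, the folded args above end up roughly equivalent to this single-line form (same commands, just written out as one string):

args: ["echo starting; ls -la /backups; mysqldump --host=... -r /backups/file.sql db_name; ls -la /backups; echo done;"]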

8 Comments

Nice, but when you request an edit with kubectl, it will be in one line again. :)
@sekrett oh no ! :(
This worked quite nicely - the key is the semicolon on each line. This is a particularly good solution when the commands are many and would be multiline with the solution above. Makes git diff a breeze
+1 Beautiful, plus multi-line commands work perfectly: command: ['/bin/bash', '-c'] args: - exec &> /path/to/redirected/program.output; python /program.py --key1=val1 --key2=val2 --key3=val3
But what if one command is so long (many parameters) that for formatting purposes one wants to break it into multiple lines using `? This seems to fail for us, and the next line after ` is interpreted as a new command instead of as a continuation.
78

If you're willing to use a Volume and a ConfigMap, you can mount ConfigMap data as a script, and then run that script:

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-configmap
data:
  entrypoint.sh: |-
    #!/bin/bash
    echo "Do this"

    echo "Do that"
---
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-container
    image: "ubuntu:14.04"
    command:
    - /bin/entrypoint.sh
    volumeMounts:
    - name: configmap-volume
      mountPath: /bin/entrypoint.sh
      readOnly: true
      subPath: entrypoint.sh
  volumes:
  - name: configmap-volume
    configMap:
      defaultMode: 0700
      name: my-configmap

This cleans up your pod spec a little and allows for more complex scripting.

$ kubectl logs my-pod
Do this
Do that

4 Comments

Very cool, but I think it is simpler to have the script inline, just use multiline syntax. I show this in a separate answer.
What about when I need to pass double quotes? For example, imagine this command: printf '%s @%s\n' "$(echo 'user')" "$(echo 'host')"
This is the most flexible solution
I think this way is much better, because the scripts can share env and envFrom configurations, which sh -c can't.
61

If you want to avoid concatenating all the commands into a single command with ; or &&, you can also get a true multi-line script using a heredoc:

command: 
 - sh
 - "-c"
 - |
   /bin/bash <<'EOF'

   # Normal script content possible here
   echo "Hello world"
   ls -l
   exit 123

   EOF

This is handy for running existing bash scripts, but has the downside of requiring both an inner and an outer shell instance for setting up the heredoc.
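If the extra shell bothers you and you don't need the heredoc itself, the same commands can be handed to a single shell via a literal block scalar instead, which is the pattern several of the answers below use. A minimal sketch with the same commands:

command:
  - /bin/bash
  - "-c"
  - |
    # runs in a single bash instance, no heredoc required
    echo "Hello world"
    ls -l
    exit 123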

3 Comments

How can we set up the heredoc?
By doing that, running $SHELL gives the output: \bin\ash
It seems like the nested shells prevent the sysout from spooling to the kubernetes logs
30

I am not sure whether the question is still active, but since I did not find this solution in the answers above, I decided to write it down.

I use the following approach:

readinessProbe:
  exec:
    command:
    - sh
    - -c
    - |
      command1
      command2 && command3

I know my example relates to readinessProbe, livenessProbe, etc., but I suspect the same applies to container commands. This provides flexibility, as it mirrors writing a standard script in Bash.
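For container commands, the same pattern would look roughly like this (the container name and image are illustrative):

containers:
- name: app
  image: busybox
  command:
  - sh
  - -c
  - |
    # command1 runs first; command3 runs only if command2 succeeds
    command1
    command2 && command3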

Comments

14

IMHO the best option is to use YAML's native block scalars, specifically (in this case) the folded block style.

By invoking sh -c you can pass arguments to your container as commands, but if you want to separate them elegantly with newlines, use the folded block style, so that YAML converts the newlines to spaces, effectively concatenating the commands.

A full working example:

apiVersion: v1
kind: Pod
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  containers:
  - name: busy
    image: busybox:1.28
    command: ["/bin/sh", "-c"]
    args:
    - >
      command_1 &&
      command_2 &&
      ... 
      command_n

3 Comments

But then if you go to edit the deployment yaml, it will be in one line, unreadable again
Editing a live resource with kubectl edit and the likes is an anti-pattern and a bad idea anyway. Your running configuration should always match what's written on your manifests. That's what declarative configuration and version control systems are for.
Seemed like a great solution, but it doesn't work if you use indentation in the listed commands. It's even better if you use single quotes ''. But thanks for referencing the YAML docs, that was very helpful.
13

Here is my successful run

apiVersion: v1
kind: Pod
metadata:
  labels:
    run: busybox
  name: busybox
spec:
  containers:
  - command:
    - /bin/sh
    - -c
    - |
      echo "running below scripts"
      i=0; 
      while true; 
      do 
        echo "$i: $(date)"; 
        i=$((i+1)); 
        sleep 1; 
      done
    name: busybox
    image: busybox

Comments

11

Here is another way to run multi-line commands.

apiVersion: batch/v1
kind: Job
metadata:
  name: multiline
spec:
  template:
    spec:
      containers:
      - command:
        - /bin/bash
        - -exc
        - |
          set +x
          echo "running below scripts"
          if [[ -f "if-condition.sh" ]]; then
            echo "Running if success"
          else
            echo "Running if failed"
          fi
        name: ubuntu
        image: ubuntu
      restartPolicy: Never
  backoffLimit: 1

Comments

10

Here is one more way to do it, with output logging.

apiVersion: v1
kind: Pod
metadata:
  labels:
    type: test
  name: nginx
spec:
  containers:
  - image: nginx
    name: nginx
    volumeMounts:
      - name: log-vol
        mountPath: /var/mylog
    command:
        - /bin/sh
        - -c
        - >
            i=0;
            while [ $i -lt 100 ];
            do
             echo "hello $i";
             echo "$i :  $(date)" >> /var/mylog/1.log;
             echo "$(date)" >> /var/mylog/2.log;
             i=$((i+1));
             sleep 1;
            done

  dnsPolicy: ClusterFirst
  restartPolicy: Always
  volumes:
    - name: log-vol
      emptyDir: {}

Comments

4

Personally, I would do it like this because it's cleaner, and I have seen this a lot:

apiVersion: v1
kind: Pod
metadata:
  name: almalinux
spec:
  containers:
  - name: almalinux
    image: almalinux
    command: ["/bin/sh", "-c"]
    args:
      - |
        echo "Hello, World!"
        echo "This is a test."
        ip a
        cat /etc/os-release
        echo "This is how you can run multiple commands in a single container and keep it running."
        tail -f /dev/null

The tail at the end keeps the pod running (it's quite useful).

The result: [console output screenshot]

P.S.: I use k9s btw.

Comments

1

Just to bring up another possible option: Secrets can be used, since they are presented to the pod as volumes:

Secret example:

apiVersion: v1
kind: Secret 
metadata:
  name: secret-script
type: Opaque
data:
  script_text: <<your script in b64>>

YAML extract:

....
containers:
    - name: container-name
      image: image-name
      command: ["/bin/bash", "/your_script.sh"]
      volumeMounts:
        - name: vsecret-script
          mountPath: /your_script.sh
          subPath: script_text
....
  volumes:
    - name: vsecret-script
      secret:
        secretName: secret-script

I know many will argue this is not what Secrets are meant to be used for, but it is an option.
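If you would rather not base64-encode the script by hand, the Secret can also be written with stringData, where the API server does the encoding for you. A minimal sketch (the script body is illustrative):

apiVersion: v1
kind: Secret
metadata:
  name: secret-script
type: Opaque
stringData:
  script_text: |
    #!/bin/bash
    echo "Do this"
    echo "Do that"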

Comments

0

migration-testing:
  - step:
      name: Run migration
      script:
        - pipe: atlassian/aws-eks-kubectl-run:2.2.1
          variables:
            AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
            AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
            AWS_DEFAULT_REGION: $AWS_DEFAULT_REGION
            CLUSTER_NAME: $CLUSTER_NAME
            KUBECTL_COMMAND: exec
            KUBECTL_ARGS:
              - '-n'
              - 'test-namespace'
              - 'test-podname'
              - '-it'
              - '--'
              - '/bin/sh'
              - '-c'
              - 'npm run migrate'

In place of test-podname I want to run another kubectl command to get the current pod in the namespace, because the pod name is not fixed: it gets a random suffix appended to the name given in the YAML file, so that I don't have to worry about the exact pod name.

1 Comment

As it’s currently written, your answer is unclear. Please edit to add additional details that will help others understand how this addresses the question asked. You can find more information on how to write good answers in the help center.
