
I am looking for an alternative to something like ssh user@node1 uptime && ssh user@node2 uptime, where both SSH commands are run simultaneously. Since each ssh call blocks until its command returns, chaining them with && or ; does not run them at the same time.

My goal is to run infinite while loops on both nodes via SSH, so the first one would never return and the second one would never be started. After terminating the loops with Ctrl+C, I would like to save their output to a log file and read it with Python.
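For concreteness, what I would like to run on each node is something along these lines (the loop body and sleep interval are just placeholders):

ssh user@node1 'while true; do uptime; sleep 1; done'
ssh user@node2 'while true; do uptime; sleep 1; done'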

Is there an easy solution to this?

Thanks in advance!

  • Use & to run the first command in the background. You can't stop it with Ctrl+C then, though. Commented May 14, 2021 at 6:27
  • @StefanWobbe: No blocking in the proper sense occurs here. They are just run in sequence (and, if you use &&, the second command is run only if the first one succeeded). "Blocking each other" would mean that the programs communicate somehow, maybe over a common semaphore. Of course ssh user@node1 uptime & ssh user@node2 uptime would run them in parallel, but I don't see much gain from it, since uptime is supposed to return quickly anyway. Commented May 14, 2021 at 6:36
  • Use & instead of &&. Commented May 14, 2021 at 7:04

1 Answer


Capturing SSH output

On the one hand, you need to capture the ssh output/error and store it in a file so that you can process it afterwards with Python. For this purpose you can:

1- Store output and error directly into a file

ssh user@node cmd > session.log 2>&1

2- Show output/error in the console while storing it in a file (I would recommend this one)

ssh user@node cmd 2>&1 | tee session.log

See the tee man page for further information about the tee command.

Running commands in parallel

On the other hand, you want to run both commands in parallel and block the current bash process. You can achieve this by:

1- Blocking the current bash process until its child processes are done.

cmd1 & cmd2 & wait

See help wait in bash for further information about the wait command.

2- Spawning the child processes and freeing the current bash process. Notice that the processes will be kept alive even after the main process ends.

nohup cmd1 & nohup cmd2 &
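If you go the nohup route, you will probably also want to give each command its own log file (otherwise nohup appends everything it catches to nohup.out) and keep a way to stop the loops later. A sketch, with file names and the pkill pattern chosen only for illustration:

nohup ssh user@node1 uptime > session1.log 2>&1 &
nohup ssh user@node2 uptime > session2.log 2>&1 &
# stop them later, e.g. with: pkill -f 'ssh user@node'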

The whole thing

I would recommend combining both approaches using tee (so you can still see the ssh outputs on your terminal) and blocking the current process until everything is done (so that when you kill the main process all the processes are killed too).

ssh user@node1 uptime 2>&1 | tee session1.log & ssh user@node2 uptime 2>&1 | tee session2.log & wait
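For the long-running while loops from the question, the uptime commands above would be replaced by the loops, and it can help to wrap everything in a small script with a trap so that a single Ctrl+C terminates both background pipelines. This is only a sketch; the loop body, sleep interval and log file names are placeholders:

#!/usr/bin/env bash
# Run both remote loops in the background, logging each one to its own file,
# and terminate the whole process group when Ctrl+C (SIGINT) is received.
trap 'kill 0' INT

ssh user@node1 'while true; do uptime; sleep 1; done' 2>&1 | tee session1.log &
ssh user@node2 'while true; do uptime; sleep 1; done' 2>&1 | tee session2.log &

wait   # block until both background pipelines have exited

After the script exits, session1.log and session2.log contain everything the loops printed and can be read from Python.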