
I have a Python script that calls a bash script to rename a file. I then need the new name of the file so Python can do some further processing on it. I'm using subprocess.Popen to call the shell script. The shell script echoes the new file name, so I can use stdout=subprocess.PIPE to read it.

The problem is that sometimes, depending on the circumstances, the bash script tries to rename the file to its old name, so the mv command prints a message that the two files are the same. I have cut out all the other stuff and included a basic example below.

$ ls -1
test.sh
test.txt

This shell script is just an example to force the error message.

$ cat test.sh
#!/bin/bash
mv "test.txt" "test.txt"
echo "test"

In python:

$ python
>>> import subprocess
>>> p = subprocess.Popen(['/bin/bash', '-c', './test.sh'], stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
>>> p.stdout.read()
"mv: `test.txt' and `test.txt' are the same file\ntest\n"

How can I ignore the message from the mv command and only get the output of the echo command? If all goes well the only output of the shell script would be the result of the echo so really I just need to ignore the mv error message.

Thanks,

Geraint

2 Comments
    Don't send stderr to stdout if you don't want to see error messages on stdout. Commented Jan 27, 2015 at 17:41
  • So you don't want to know if the command was unsuccessful? Commented Jan 27, 2015 at 17:56
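Following the first comment's suggestion, a minimal sketch that keeps stderr on its own pipe so the mv warning never mixes into the file-name output. The inline script here is a stand-in for ./test.sh:

```python
import subprocess

# Stand-in for ./test.sh: the mv warning goes to stderr,
# the new file name to stdout.
script = 'echo "mv: same file" 1>&2; echo "test"'

# Separate pipes keep the mv warning out of the file-name output.
p = subprocess.Popen(['/bin/bash', '-c', script],
                     stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = p.communicate()   # reads both pipes, so neither can deadlock
new_name = out.strip()       # b'test'
```

communicate() drains stdout and stderr concurrently, which is the safe way to read two pipes from one child process.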

2 Answers


Direct stderr to the null device, thusly:

$ python
>>> import os
>>> from subprocess import *
>>> p = Popen(['/bin/bash', '-c', './test.sh'], stdout=PIPE, stderr=open(os.devnull, 'w'))
>>> p.stdout.read()
'test\n'
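On Python 3.3+, subprocess.DEVNULL gives the same effect without opening os.devnull yourself; a sketch using an inline stand-in for ./test.sh:

```python
import subprocess

# subprocess.DEVNULL (Python 3.3+) discards stderr with no file handle to manage
p = subprocess.Popen(['/bin/bash', '-c', 'echo "mv: same file" 1>&2; echo test'],
                     stdout=subprocess.PIPE, stderr=subprocess.DEVNULL)
out = p.stdout.read()   # b'test\n' -- the mv message is gone
p.wait()
```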

4 Comments

Do not set stderr=PIPE unless you read from the pipe; os.devnull is portable.
I see no problem with allowing the GC to reap the unused PIPE. But the os.devnull is a nice trick I didn't know.
unread PIPE is a deadlock waiting to happen. Count number of warnings about it in the subprocess docs.
I thought of that, but my take was that OP was talking about a single line of error from his script. Any deadlock would not take place until a buffer was full (typically 4k). But you're right -- it's bad form and should be avoided. I've revised my answer

To get the subprocess's output and ignore its error messages:

#!/usr/bin/env python
from subprocess import check_output
import os

with open(os.devnull, 'wb', 0) as DEVNULL:
    output = check_output("./test.sh", stderr=DEVNULL)

check_output() raises an exception if the script returns with non-zero status.
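If a non-zero exit status should also be ignored, the exception can be caught explicitly; a sketch where an inline 'exit 1' stands in for a failing rename script:

```python
import os
from subprocess import check_output, CalledProcessError

with open(os.devnull, 'wb', 0) as DEVNULL:
    try:
        output = check_output(['/bin/bash', '-c', 'echo test; exit 1'],
                              stderr=DEVNULL)
    except CalledProcessError as e:
        output = e.output   # stdout captured before the non-zero exit
```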

See How to hide output of subprocess in Python 2.7.

3 Comments

I assume if the OP wanted to ignore stderr, he doesn't want an exception either, no?
I don't think so. If OP wants to ignore errors completely, an explicit try/except could be added.
Thanks for this. Although in this case I don't want an exception, this is something that I am sure will be useful where it matters whether there has been an error.
