5

Consider the following sample script:

#!/bin/sh

do_something() {
    echo "$@"
    return 1
}

cat <<EOF > sample.text
This is a sample text
It serves no other purpose
EOF

cat sample.text | while read arg1 arg2 arg3 arg4 arg5; do
    ret=0
    do_something "$arg1" "$arg2" "$arg3" "$arg4" "$arg5" <&3 || ret=$?
done 3<&1

What is the purpose of redirecting stdout as input on file descriptor 3? At least in Bash, it does not seem to make any difference if omitted. Does it have any effect in any shell other than bash?

UPDATE

For those wondering where this is from, it is a simplified sample from Debian's cryptdisks_start script.

  • Where's this script from? I'm curious what the author intended. Since do_something() doesn't use its standard input, <&3 == 0<&3 doesn't make any difference, as you observed. Commented Jan 14, 2017 at 14:38
  • Your usage of file descriptors has no impact on the overall logic you are going to do! Commented Jan 14, 2017 at 14:53
  • There's a tricky deadlock I don't quite understand yet if do_something does try to read from its standard input. Commented Jan 14, 2017 at 15:28
  • Basically, nothing has been written to the loop's standard output yet when do_something is called, so there is nothing on its standard input to read. Commented Jan 14, 2017 at 15:43
  • @nautical, in the days of web-based SCM interfaces (sourceforge, github, launchpad, etc), most projects have somewhere an individual line of source can be linked to directly on the web -- in light of which, "download this tarball, unpack it, and find the file containing [...]" isn't a very reasonable request. Commented Jan 23, 2017 at 14:50

2 Answers

12

The clear intent here is to prevent do_something from reading from the sample.text stream, by ensuring that its stdin is coming from elsewhere. If you're not seeing differences in behavior with or without the redirection, that's because do_something isn't actually reading from stdin in your tests.

If you had both read and do_something reading from the same stream, then any content consumed by do_something wouldn't be available to a subsequent instance of read -- and, of course, you'd have illegitimate contents fed on input to do_something, resulting in consequences such as a bad encryption key being attempted (if the real-world use case were something like cryptmount), &c.

cat sample.text | while read arg1 arg2 arg3 arg4 arg5; do
    ret=0
    do_something "$arg1" "$arg2" "$arg3" "$arg4" "$arg5" <&3 || ret=$?
done 3<&1

Now, it's buggy -- 3<&1 is bad practice compared to 3<&0, inasmuch as it assumes without foundation that stdout is something that can also be used as input -- but it does succeed in that goal.


By the way, I would write this more as follows:

exec 3</dev/tty || exec 3<&0     ## make FD 3 point to the TTY or stdin (as fallback)

while read -a args; do           ## |- loop over lines read from FD 0
  do_something "${args[@]}" <&3  ## |- run do_something with its stdin copied from FD 3
done <sample.text                ## \-> ...while the loop is run with sample.text on FD 0

exec 3<&-                        ## close FD 3 when done.

It's a little more verbose, needing to explicitly close FD 3, but it means that our code is no longer broken if we're run with stdout attached to the write-only side of a FIFO (or any other write-only interface) rather than directly to a TTY.
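A quick way to see the failure mode described above (a purely illustrative sketch, not from the original script): inside a pipeline, stdout is the write-only end of a pipe, and a descriptor duplicated from it cannot be read.

```shell
# Attempt to read from a descriptor duplicated from stdout while
# stdout is the write-only end of a pipe (as in `script | other`).
# The read() fails with EBADF, so "readable" is never echoed.
status=$( { read -r line <&1 && echo "readable"; } 2>/dev/null | cat )
[ -n "$status" ] || status="not readable"

echo "stdout as input: $status"   # prints "stdout as input: not readable"
```

The same script run with stdout on a read/write device (such as a TTY) would behave differently, which is exactly why `3<&1` works in casual testing but breaks when the script's output is piped.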


As for the bug that this practice prevents, it's a very common one -- StackOverflow has many questions from users whose while read loops lost input because a command in the loop body read from the same stream.
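To make the failure concrete, here is a small illustrative demo (the data and the use of `head` are my own, not from the original script): a command in the loop body that reads stdin swallows lines meant for `read`, and giving the body its own stdin on FD 3 fixes it.

```shell
# Buggy: `head` shares stdin with `read`, so it swallows at least
# the next line of the loop's own input.
buggy=$(printf 'one\ntwo\nthree\n' | while read -r line; do
    head -n 1 > /dev/null      # reads from the same pipe as `read`
    echo "saw: $line"
done)

# Fixed: the loop body reads FD 3 (here /dev/null) instead,
# leaving the pipe entirely to `read`; all three lines survive.
fixed=$(printf 'one\ntwo\nthree\n' | while read -r line; do
    head -n 1 <&3 > /dev/null
    echo "saw: $line"
done 3< /dev/null)

echo "buggy: $buggy"
echo "fixed: $fixed"
```

In the buggy run, "saw: two" never appears, because `head` consumed that line before `read` could see it.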


3 Comments

Sorry for the late reply. I finally had some time to circle back to this problem. I was aware that read can yield "funny" results if stdin is not properly redirected/protected. What had me stumped was the redirection of stdout as input; it seemed like the author was trying to enforce some pipe-like behavior, which I had never seen before. Is there ever actually a valid reason to redirect stdout as done in my question?
@nautical, "ever" covers a lot of cases. Maybe stdin is opened to /dev/null, stdout is open read/write (so you can also read from it), and you don't have a controlling TTY. But that's a corner case, and someone has to know that their script is going to be used in that corner case for it to make sense.
@CharlesDuffy: I had trouble understanding the solution earlier before you added the code comments. The comments help me understand it now. Thanks.
2

As explained before, this is about separating streams for reading, but there is a simpler general format for it.

First, get rid of the useless cat to keep the while loop in the scope of the main script. This lets the contents of the loop interact with stdin, stdout and stderr as usual, without interference from the arg reading from the text file. E.g.:

while read arg1 arg2 arg3 arg4 arg5 <&3; do
    ret=0
    do_something "$arg1" "$arg2" "$arg3" "$arg4" "$arg5" || ret=$?
done 3< sample.text

The read probably needs the -r flag to prevent backslash escape sequence interpretation. do_something then receives the script's normal stdin (or stdout via <&1 as in the question, if that is even possible or needed). If sample.text should be a command's output, use process substitution: done 3< <(arg_generating_command).
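As a runnable sketch of that layout (the file contents and field names are illustrative, not from the question):

```shell
# Illustrative input file with whitespace-separated fields.
printf 'alice 30\nbob 25\n' > sample.text

# Fields arrive on FD 3; stdin is untouched, so the loop body could
# still prompt the user with a plain `read` if it wanted to.
out=$(while read -r name age <&3; do
    echo "$name is $age"
done 3< sample.text)

echo "$out"
rm -f sample.text
```

Because `read` pulls from FD 3 rather than FD 0, a `read -r answer` inside the loop body would still talk to the script's original stdin (e.g. the terminal) instead of eating lines from sample.text.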

2 Comments

This seems a simpler and safer solution than the accepted answer — and it works in a case I just hit.
Concise, easier to remember, avoids treading into BashFAQ/024 territory, and allows you to use read as normal within the while loop, for example, to prompt the user over a list of things generated from a file or other subprocess.
