
I am trying to redirect all output from a command-line programme to a file. I am using Bash. Some of the output is directed to the file, but some still appears in the terminal and is not stored in the file.

Similar symptoms are described here:

Redirect all output to file in Bash

However, I have tried the proposed solution (capture standard error) without success:

<command> <arguments>   > stdout.txt 2> stderr.txt

The file stderr.txt is created, but it is empty.

A possible clue is that the command-line programme is a client communicating with a server on the same machine. It may be that some of the output is coming from the server.

Is there a way to capture all the output from the terminal, irrespective of its origin?

I've confirmed that the missing output is generated by the server. When I run the command in a separate terminal, some output appears in both terminals; I can redirect all the output in the command's terminal to a file. This raises the question of how to capture the server's output, but that's a different question.

  • How would the server write to your terminal without going through the client? Can you be more specific about what the command is? What you have there should be working fine. Commented May 30, 2013 at 17:06
  • If the program directly writes to e.g. /dev/tty, instead of one of the standard output streams, there's no (simple) way to capture that. There's also the possibility that it might be duplicating the stdout/stderr file descriptors to another file descriptor and writing there, which you could capture (e.g. ... 3> somefile), but you would have to know what file descriptor is being used... Commented Aug 1, 2014 at 16:45
  • The actual problem of missing or partial output may be due to output buffering (see the stdbuf sketch below this list). See this answer: stackoverflow.com/a/70209496/8428146 Commented Nov 17, 2022 at 5:51
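
On the buffering point above, a minimal sketch, assuming GNU coreutils' stdbuf is available, which forces line-buffered output so nothing is left behind in a buffer:

stdbuf -oL -eL <command> <arguments> > stdout.txt 2> stderr.txt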

6 Answers


Though not POSIX, Bash 4 has the &> operator:

command &> alloutput.txt
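
For instance, a quick hypothetical check that both streams land in the file, using a compound command that writes to stdout and to stderr:

{ echo "to stdout"; echo "to stderr" >&2; } &> alloutput.txt
cat alloutput.txt    # both lines appear in the file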


7 Comments

Totally didn't know about this. Finally an easy solution that doesn't redirect only one log level but all!
Can this be used with a pipe?
Quickest fix that works for me; all I needed to do was add the ampersand.
This is the best answer.
@Paul yes, via command |& command
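
To illustrate the |& shorthand mentioned in the last comment (Bash 4+; it is equivalent to 2>&1 |), a minimal sketch:

{ echo out; echo err >&2; } |& tee alloutput.txt    # both lines reach the pipe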

You can use this syntax to redirect all output, standard error and standard output, to the file allout.txt:

<command> <arguments> > allout.txt 2>&1

6 Comments

This naive approach will only capture standard output and standard error. There can be others, for instance password prompts. For this reason the selected answer is more correct. I won't downvote you though :)
I think it's important to add that this is all related to the broad topic of I/O Redirection in BASH
For this solution also the order of redirection is important. Doing the 2>&1 first does not seem to work.
This might be the "naive approach" but the accepted answer missed critical stderr output when I tried it on Ubuntu. This answer worked for me so it gets my naive upvote.
@a1an: the order is important because n>&m means that n now points to the same thing that m currently points to. By default, file descriptors 1 and 2 both point to the terminal. If you do 2>&1 >somefile, 2 now (still) points to the terminal (since 1 pointed to the terminal at that stage), and then 1 points to the file, so only stdout (1) will point to the file.
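
A minimal sketch of that ordering rule (illustrative commands only):

{ echo out; echo err >&2; } > both.txt 2>&1      # both lines end up in both.txt
{ echo out; echo err >&2; } 2>&1 > onlyout.txt   # "err" still reaches the terminal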

If the server is started on the same terminal, then it's the server's stderr that is presumably being written to the terminal and which you are not capturing.

The best way to capture everything would be to run:

script output.txt

before starting up either the server or the client. This will launch a new shell with all terminal output copied to output.txt as well as shown on the terminal. Then start the server from within that new shell, and then the client. Everything that you see on the screen (both your input and the output of everything writing to the terminal from within that shell) will be written to the file.

When you are done, type "exit" to exit the shell run by the script command.
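
As a rough sketch of such a session (the server and client names are placeholders):

script output.txt
./server &
./client
exit
# output.txt now contains everything that appeared on the terminal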

3 Comments

@yanbellavance Usually, the way to indicate that is to simply upvote the answer that you prefer. Downvoting is only supposed to be used for answers that are not helpful at all. Furthermore, for this question, linuxcdeveloper's answer (linuxcdeveloper is the person who answered, Urda just edited the answer) did not actually work, as Stefan's problem was that the output was coming from two different processes; using "script" is the best way to capture all of the output for a session, no matter which process it comes from.
Hear, hear: "script" is the best way to answer the OP's specific question. It will literally capture everything!
or script -c yourcommand output.txt

I had trouble with a crashing program cough PHP cough. Upon crashing, the shell it was run in reported the crash reason, "Segmentation fault (core dumped)".

To make sure this output gets logged, the command can be run in a subshell that will capture and redirect this kind of output:

sh -c 'your_command' > your_stdout.log 2> your_stderr.err
# or
sh -c 'your_command' > your_stdout.log 2>&1
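
A minimal way to see this in action, assuming a hypothetical programme crash_me that segfaults (the trailing ; true keeps sh from exec'ing the command directly, so sh stays around to report the crash on its own, redirected, stderr):

sh -c './crash_me; true' > your_stdout.log 2> your_stderr.err
grep -i segmentation your_stderr.err    # the crash message was captured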

1 Comment

Lol, upvoted because *cough PHP cough*

You can execute a subshell and redirect all output while still putting the process in the background:

( ./script.sh blah > ~/log/blah.log 2>&1 ) &
echo $! > ~/pids/blah.pid
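
The saved PID can later be used to check on or stop the background job, for example (paths as above):

tail -f ~/log/blah.log          # follow the captured output
kill "$(cat ~/pids/blah.pid)"   # stop the background process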

1 Comment

How does this help? The problem in the question is that some output is bypassing FD 1 and 2.

The proper answer for that case is in ssh - How to pipe output from local to remote server:

your_command | ssh username@server "cat > filename.txt"
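
If standard error should end up in the remote file as well, a sketch combining this with the 2>&1 redirection from the earlier answers:

your_command 2>&1 | ssh username@server "cat > filename.txt"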

1 Comment

There's nothing in the question about wanting to write to a file on the server.
