
I need a way to track the last successfully executed command in a bash script. For example, I have this script:

#!/bin/bash
command1
command2

command1 and command2 will both return some exit code. What I need is to save the last successfully executed command, so that when I re-run this script it starts from that point. So if command1 executed correctly and command2 failed, the next time I run the script it will start from command2.

The simplest approach would be to store execution information in an additional file, which the script reads before execution. Before executing a command, the script would check whether that command had already been executed successfully.

Is there a better approach for this? Are there any examples of this implementation? I think I saw something similar in configure scripts.
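The state-file idea described above can be sketched roughly like this. The names `run_once` and `.script-progress` are invented for illustration, not a standard tool:

```shell
#!/bin/bash
# Minimal sketch of the state-file approach: record each step's name in a
# progress file on success, and skip steps already recorded on a re-run.
STATE_FILE=.script-progress

run_once() {
    local name=$1; shift
    # Skip steps recorded as done by a previous run
    grep -qx "$name" "$STATE_FILE" 2>/dev/null && return 0
    "$@" || return 1              # propagate failure to the caller
    echo "$name" >> "$STATE_FILE" # record success
}

run_once step1 echo "running command1" || exit 1  # a failed step stops the
run_once step2 echo "running command2" || exit 1  # script; the next run resumes there
```

Deleting `.script-progress` starts the script over from the first step.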

  • If you really need to track the status of every command and support re-running from any arbitrary point, then you are going to need to manually record each exit status to a file and check before each line whether the command on that line worked correctly last time (or create a temporary copy of the script with the successful lines removed, with sed or similar). But this sounds like the wrong sort of solution to me. What's the real problem here? (Something like make might actually be useful for this, too.) Commented Jan 1, 2015 at 13:09
  • Does it have to be bash? For this I would use perl. Granted, my perl skills are better than my bash, and I'm sure there are bash geeks who could be more helpful, but if you like I can show you in perl. Commented Jan 1, 2015 at 13:15
  • @Etan Reisner, Any implementation of this solution? I already have checking of exit codes of the commands. I was looking for convenient way to return to specific function. I need this for update/set-up scripts which I want to run on new VMs. Commented Jan 1, 2015 at 13:16
  • @terary, I guess I can use any language. Would gladly check your solution Commented Jan 1, 2015 at 13:18
  • There isn't a convenient way to return to a specific function. That's the problem. You need to store status of each command in a loadable file and then check the status of each command before running it. And no, what you need is to write your script such that it is safe to run more than once. Which means it needs to test for things that have already been done and not do them again and/or only do the things in a way that is safe to do more than once. Commented Jan 1, 2015 at 13:19
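The "safe to run more than once" advice in the last comment amounts to making every step idempotent: check for the step's effect before performing it. A hypothetical sketch, where `APP_DIR`, `PROFILE`, and `MYAPP_HOME` are invented names:

```shell
#!/bin/bash
# Hypothetical idempotent set-up steps: each action first checks whether its
# effect already exists, so the whole script is safe to run repeatedly.
set -e

APP_DIR=${APP_DIR:-/opt/myapp}
PROFILE=${PROFILE:-$HOME/.profile}

# Create the directory only if it is missing
[ -d "$APP_DIR" ] || mkdir -p "$APP_DIR"

# Append a config line only if it is not already present
line="export MYAPP_HOME=$APP_DIR"
grep -qxF "$line" "$PROFILE" 2>/dev/null || echo "$line" >> "$PROFILE"
```

With this style there is no progress file at all: the system's actual state decides what still needs doing.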

4 Answers

#!/bin/bash -e
#
#  run those commands that did not yet run
#  breaks upon failed command due to -e
#
#  keeps the line number of the last successful command in lastline
#
#  first, some preparation:
#

lastfile=$(readlink -f lastline) # inspired by ton1c: get absolute path
test -f "$lastfile" && lastline=$(cat "$lastfile")
test -z "$lastline" && lastline=0

e()
{
   thisline=$BASH_LINENO                          # line number of the caller
   test "$lastline" -lt "$thisline" || return 0   # skip lines already done
   "$@"
   echo "$thisline" > "$lastfile"                 # record success
}

#
# and now prepend every command by e
# for example run this, interrupt the sleep 5 and run again
# to restart ALL commands, remove the file "lastline"
#

echo running sleep 3
e sleep 3
echo running sleep 5
e sleep 5
echo running sleep 7
e sleep 7

4 Comments

This is very clever. It would be improved if there were some way to reset it. However, I can't help thinking that it is starting to reach the point where make would be a more robust and flexible solution.
reset it by just deleting the file "lastline".
Yes, I got that, but it leaks the implementation detail. And what if you have several such scripts? What is the correct filename? You could, for example, do something like: if [[ $1 == --reset ]]; then shift; rm -f lastline; fi to make the script reset itself if invoked with --reset as the first command-line argument. (A more robust solution doesn't fit in a comment :) )
If your script or a command changes the working directory, specify the full path to the lastline file. I lost some time figuring that out.
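The `--reset` idea from the comment above, spelled out as a self-contained preamble. The `--reset` flag and the `lastline` file name are taken from the comments; everything else is illustrative:

```shell
#!/bin/bash
# Optional first argument clears the checkpoint file so every command runs
# again on the next invocation.
lastfile=$PWD/lastline    # absolute path, so a later cd cannot break it

if [[ ${1-} == --reset ]]; then
    shift
    rm -f "$lastfile"     # forget all recorded progress
fi

test -f "$lastfile" && lastline=$(cat "$lastfile")
test -z "$lastline" && lastline=0
echo "resuming after line $lastline"
```

Running the script as `./script --reset` then behaves exactly like a first run.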
#!/usr/bin/perl

use strict;

open(COMMANDS, "commands.txt") || die "Couldn't open command file\n";
my @cmd = <COMMANDS>;    # read the whole file into an array
close(COMMANDS);

# reopen for writing (truncates the file)
open(COMMANDS, ">commands.txt") || die "Couldn't open command file\n";

foreach my $cmd (@cmd) {
    chomp($cmd);
    my ($do, $status) = split(/:/, $cmd);

    if ($status =~ /pending/i) {
        my $return = qx($do);

        if (!$return) {
            $status = 'failed';
        }
        else {
            print "$do returned $return\n";
            $status = 'completed';
        }
    }
    else {
        $status = 'pending';
    }
    print COMMANDS "$do:$status\n";
}
close(COMMANDS);

contents of commands.txt:

echo "fish":pending
date +%D:pending
whoamix:failed
whoami:pending

You'll have to work out the logic some. qx returns the command's output, which is empty (false) when the command fails silently; for a reliable check, test $? after the call. Secondly, you will need to fix it so that once the list is all 'completed', the script changes them all back to 'pending'.

9 Comments

This doesn't share shell state between the commands and so limits the things that can be done somewhat severely. That may not be a problem but is worth pointing out. It also prevents multi-line commands from being used and, as written, cannot handle commands with : anywhere in them.
My apologies. My intent was to demonstrate how to do what was specified in perl as opposed to bash. I am not sure what you mean by 'shell state'? As for multi-line line and the use of ":", that is easily fixable. I guess I wasn't trying to solve your problem but to help you solve it.
Each command is run in a new shell. So you can't set variables/etc. and use them between commands like you can in a normal shell script. And yes, the : parsing issue is easily fixable.
hmmm. Can you set an environment variable and then export it? If the variables are not applicable outside the scope of your project, I would use a config file. Anyway, I hope this helps. Perl is my best friend and I would recommend it to anyone doing geek stuff.
Perhaps it should be pointed out that this pretty much amounts to a Makefile without the versatility and utility of a proven, documented, flexible, standard framework.

Do you want to restart the script at some arbitrary time in the future (that will be hard), or do you just need to temporarily run a shell to execute some commands and then come back to the script? If the latter, you could do something like:

#!/bin/sh
cmd1 || ${SHELL-bash} || exit 1
cmd2 || ${SHELL-bash} || exit 1
cmd3 || ${SHELL-bash} || exit 1

If cmd2 fails, you should get a shell prompt (${SHELL-bash} expands to $SHELL, falling back to bash). Do whatever you want. When you are done, if you want the script to pick up at cmd3, exit the shell with exit 0. If you want to abort, exit the shell with exit 1.

1 Comment

This is actually a fairly clever "breakpoint" style script. The one "problem" it has is that it doesn't let you easily re-run the given command. A function with a select loop and an array of commands would be better (but also harder to do with sh/bash).
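The select-loop variant suggested in the comment might look roughly like this. `run_step` and the menu choices are invented for illustration; it extends the answer's approach so the failed command itself can be retried:

```shell
#!/bin/bash
# Sketch: on failure, offer to retry the failed command, drop to an
# interactive shell, skip the step, or abort the whole script.
run_step() {
    while ! "$@"; do
        echo "step failed: $*" >&2
        select choice in retry shell skip abort; do
            case $choice in
                retry) break ;;                 # leave select; while runs "$@" again
                shell) ${SHELL:-bash}; break ;; # inspect interactively, then retry
                skip)  return 0 ;;              # pretend the step succeeded
                abort) exit 1 ;;
            esac
        done
    done
}

run_step echo "running cmd1"
run_step echo "running cmd2"
```

Unlike the plain `cmd || ${SHELL-bash}` form, the `retry` choice re-runs the original command after you have fixed whatever broke.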

This may not be the answer you are looking for, but a simple Makefile would seem to satisfy your requirements. Make terminates execution on error out of the box, and its purpose is to avoid repeating processing steps. Especially if (at least most of) your commands generate an output file, it would be a rather natural fit. If not, keeping a simple state file is a common Make idiom to mark a command sequence as successfully completed.

(If you are new to Make, note well that each command in a target needs to have a literal tab for indentation.)

.PHONY: all
all: .command1-done .command2-done
.command1-done:
    command1
    touch $@
# Comment out the dependency to avoid forcing sequential execution
.command2-done: .command1-done
    command2
    touch $@

.PHONY: clean
clean:
    rm -f .*-done

So, this runs command1 and, if it succeeds, proceeds to the next command in the recipe, which runs touch on the semaphore file (the Make variable $@ expands to the current target name). If this succeeds, too, it runs command2, and similarly creates its "done" file if it succeeds. Running make again will tell you that no commands need to be run:

$ make
command1
touch .command1-done
command2
touch .command2-done

$ make
make: Nothing to be done for `all'.

If you actually care about the execution order of targets, that is usually because there is a dependency (command2 fails or, worse, produces incorrect results unless it is run after command1). Such dependencies need to be declared -- I put in an example of how to declare such a relationship. If there is no dependency, you should probably not declare one; then, Make will run your targets in arbitrary order (though the versions I have worked with are generally predictable); or you can have Make run them in parallel with make -j.

(The state file usually starts with a dot, to keep it hidden from a regular directory listing.)

More realistically, perhaps your commands actually generate some output:

.PHONY: all
all: grep.out wc.out
grep.out:
    grep -Fw Debian /etc/motd >$@
wc.out: grep.out
    wc -l <$< >$@

(Unfortunately, shell redirection will create the output file even if the target fails. A workaround is to use a temporary output file, then move it into place only on success:

.PHONY: all
all: grep.out wc.out
grep.out:
    grep -Fw Debian /etc/motd >[email protected]
    mv [email protected] $@
wc.out: grep.out
    wc -l <$< >[email protected]
    mv [email protected] $@

.PHONY: clean
clean:
    rm -f .*.tmp

But anyway, this is already getting too long.)
