
I am trying to generate a badge from PyLint output in a Gitlab CI script. Eventually, the job should fail if PyLint has a non-zero exit code. But before it does so, I want the badge to be created. So I have tried the following:

before_script:
    - [...]
    - mkdir -p public
script:
    - pylint lib --disable R,missing-docstring,wrong-import-order --reports=y | tee public/pylint-report.txt
    - export SUCCESS=${PIPESTATUS[0]}
    - SCORE=$(tail -n 2 public/pylint-report.txt | grep -o -P "\d\d?\.\d+\/\d*" | head -1)
    - echo "PyLint score ${SCORE}"
    - python3.6 -m pybadges --left-text=PyLint --right-text=${SCORE} > public/pylint.svg
    - exit ${SUCCESS}
artifacts:
    when: always
    [...]

This works fine if the PyLint exit code is 0:

$ mkdir -p public
$ pylint lib --disable R,missing-docstring,wrong-import-order --reports=y | tee public/pylint-report.txt; export SUCCESS=${PIPESTATUS[0]}
[Pylint report output]
$ SCORE=$(tail -n 2 public/pylint-report.txt | grep -o -P "\d\d?\.\d+\/\d*" | head -1)
$ echo "PyLint score ${SCORE}"
PyLint score 10.00/10
$ python3.6 -m pybadges --left-text=PyLint --right-text=${SCORE} > public/pylint.svg
$ exit ${SUCCESS}
Uploading artifacts...
public/pylint-report.txt: found 1 matching files   
public/pylint.svg: found 1 matching files          
Uploading artifacts to coordinator... ok            id=XXX responseStatus=201 Created token=XXX
Job succeeded

However, when PyLint exits with non-zero, the script is aborted after the first line:

$ mkdir -p public
$ pylint lib --disable R,missing-docstring,wrong-import-order --reports=y | tee public/pylint-report.txt
[Pylint report output]
Uploading artifacts...
public/pylint-report.txt: found 1 matching files   
WARNING: public/pylint.svg: no matching files      
Uploading artifacts to coordinator... ok            id=XXX responseStatus=201 Created token=XXX
ERROR: Job failed: exit code 1

To clarify: I want the job to fail, but I want to make sure the script always runs all the lines. Only the exit command in the last line should determine the job status.

This runs in a container that uses Bash.

I expected the tee command to always exit with 0, so that the first script line would never fail. But that does not seem to be the case.
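
For what it's worth, in plain bash outside the runner (with default shell options), that expectation does hold: the pipeline's exit status is tee's, while PIPESTATUS still records each stage separately. Here false stands in for a failing pylint:

$ false | tee /dev/null
$ echo "pipeline: $? / first stage: ${PIPESTATUS[0]}"
pipeline: 0 / first stage: 1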

I have tried appending || true to the first line, but then SUCCESS=${PIPESTATUS[0]} on the following line is always 0; perhaps this hints at the root cause.
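
One plausible explanation, reproducible in plain bash once pipefail is enabled (which, as the answers below note, the GitLab runner does): when the pipeline fails, || true runs true as its own single-command pipeline, which overwrites PIPESTATUS before the next line can read it:

$ set -o pipefail
$ false | tee /dev/null || true
$ echo "${PIPESTATUS[0]}"
0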

I have also tried appending the export call (currently the second line) to the first line, separated by a semicolon. Again, no difference, even though I also expected the export call to always exit with 0.

My question is hence: why can the first line of the script exit with a non-zero code? How do I prevent this?

Or maybe: is there an easier way to achieve the same goal?

2 Comments
  • grumbles about a "script" being specified as a list rather than a multi-line string -- the multi-line-string approach makes it much clearer that your code is passed to the shell exactly as-given, without things being done behind your back; whereas if they're assembling the list into a script themselves, what's to say they aren't adding logging or other instrumentation that impacts $?/PIPESTATUS/etc. behind your back too? Commented May 10, 2019 at 15:28
  • BTW, why use export at all, vs setting a regular shell variable stored in heap memory? Commented May 10, 2019 at 16:52

3 Answers


Gitlab sets a bunch of "helpful" shell options you don't actually want. Among these are errexit, aka set -e, and pipefail (which is generally a good idea on its own, but in conjunction with set -e means your script exits if any component of a pipeline fails).
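
A minimal reproduction of that combination, independent of the runner (assuming bash, with false standing in for the failing pylint):

set -o errexit -o pipefail   # effectively what the runner does before your script lines
false | tee /dev/null        # pipefail makes the pipeline's status 1 ...
echo "never reached"         # ... and errexit aborts the shell before this line runs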

To work around this one:

{ SUCCESS=0; pylint lib ...args... || SUCCESS=$?; } > >(tee public/pylint-report.txt)

We're setting SUCCESS directly here (no need for export), so you don't need to refer to PIPESTATUS later. Branching on the return value of a command marks that command as "checked", so it isn't treated as a failure for purposes of errexit.
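
For context, a sketch of how the question's script section might look with that applied (untested; it reuses the paths and pylint options from the question, and a YAML block scalar avoids having to quote the process substitution):

script:
    - |
      # Run pylint under process substitution instead of a pipeline, and mark
      # the command as "checked" so errexit doesn't abort on a nonzero score.
      SUCCESS=0
      pylint lib --disable R,missing-docstring,wrong-import-order --reports=y \
        > >(tee public/pylint-report.txt) || SUCCESS=$?
    - SCORE=$(tail -n 2 public/pylint-report.txt | grep -o -P "\d\d?\.\d+\/\d*" | head -1)
    - echo "PyLint score ${SCORE}"
    - python3.6 -m pybadges --left-text=PyLint --right-text=${SCORE} > public/pylint.svg
    - exit ${SUCCESS}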


BTW, for background on set -e and why it's something you really don't want, see BashFAQ #105.

As another aside, all-caps variable names are used for variables meaningful to the shell or POSIX-specified tools, whereas names with at least one lowercase character are reserved for application use and guaranteed not to collide. See https://pubs.opengroup.org/onlinepubs/9699919799/basedefs/V1_chap08.html, keeping in mind that setting a shell variable will overwrite any preexisting like-named environment variable.


3 Comments

Not sure what the > > are supposed to do, but they result in a YAML parsing error. Apart from that, this seems like a good solution.
> alone is a redirection. >(...) is a process substitution; the shell replaces it with the name of a FIFO which, when written to, redirects its output to a process running .... Combine the two, and you're redirecting your stdout to a FIFO that writes to a copy of tee, without using a pipeline (a short illustration follows these comments). If it were a shell error I'd tell you to force bash to be used instead of sh, but for a YAML error I just suggest getting Python or something else that knows how to generate valid YAML to tell you how to quote it.
The whole string just needs to be double-quoted. Otherwise, works like a charm, thanks!
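
To illustrate the distinction drawn in the comment above (plain bash; the file names are made up):

$ echo hello > >(tee via-procsub.txt)    # redirection into a FIFO read by tee; no pipeline involved
hello
$ echo hello | tee via-pipe.txt          # ordinary pipeline; its status is affected by pipefail
hello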

I found this after I had already found a workaround, and I was very, very mad about what was causing my scripts not to work ...

Added to before_script in .gitlab-ci.yml:

    - |
      # Gitlab trying to be smarter than you ... they have single-handedly defeated the whole purpose of a Docker container.
      # This is a revert of their "awesome" idea.
      # These settings should only ever be applied on a per-script basis:
      # behavior is unexpected and inconsistent when a particular script was not written with them in mind.
      # Scripts are written by amateurs, by ops, by somebody who has been a developer rather than operations, so:
      # - if I had built everything with these settings, it would be "safer" - sometimes ...
      # - but ... not really, @read: https://mywiki.wooledge.org/BashFAQ/105#So-called_strict_mode
      # - but! the per-script-basis rule still applies nevertheless!
      # - NO runtime should ever change the Docker environment like this
      # - there is NO documentation FOR IT IN THE GITLAB DOCS!
      # - seriously, Gitlab!?
      # - @see https://elder.dev/posts/safer-bash/ -- for writing scripts in that style,
      #   which Gitlab REQUIRES YOU TO DO WITHOUT DISCLOSING IT!
      #   - example of errexit: https://www.newline.co/courses/newline-guide-to-bash-scripting/errexit
      # - @see https://www.gnu.org/software/bash/manual/html_node/The-Set-Builtin.html - for all the set options
      # - @see the frustrated developers who have run into this: https://gitlab.com/gitlab-org/gitlab-runner/-/issues/27668, https://www.adoclib.com/blog/gitlab-ci-exit-1-even-if-it-is-successful.html, https://stackoverflow.com/questions/56079993/avoid-early-exit-from-command-in-gitlab-ci-script-pipeline-while-still-capturing, ...
      set +o errexit
      set +o pipefail

You can find this "helpful" (as @charlesdufy put it) thing and my reaction to it here - gitlab code comment & rant - and you can upvote or comment on it to get it removed. I have not seen anything as insane as this in any other big app like this.
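
If you want to confirm what the runner has actually set before turning anything off, a one-line diagnostic in the job's script will show the on/off state of each option (assuming bash is the shell, as in the question):

    - set -o | grep -E 'errexit|pipefail'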


Depending on which version of Gitlab you are using, you might have more luck with an after_script section as an alternate way to run arbitrary code whether or not a process fails.

"after_script is used to define the command that will be run after all jobs, including failed ones. This has to be an array or a multi-line string."

Link
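
For the badge use case, that might look roughly like this (a sketch only; after_script runs in its own shell, so it re-reads the report from disk rather than relying on variables set in script):

script:
    - mkdir -p public
    - pylint lib --disable R,missing-docstring,wrong-import-order --reports=y | tee public/pylint-report.txt
after_script:
    # Runs even when the script above failed; rebuild the badge from the saved report.
    - SCORE=$(tail -n 2 public/pylint-report.txt | grep -o -P "\d\d?\.\d+\/\d*" | head -1)
    - python3.6 -m pybadges --left-text=PyLint --right-text=${SCORE} > public/pylint.svg

Combined with the question's artifacts: when: always, the badge would still be uploaded even when the pylint line fails the job.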

1 Comment

Yeah, I wasn't sure why it was downvoted either. Feel free to upvote it. I agree it's not a complete answer, but thought it might be worth mentioning. Let us know if you're able to resolve your issue.
