I have a script that finds test names and is widely used in our company. It operates on the command line like so:
find_test.py --type <type> --name <testname>
Inside the script is the equivalent of:
import argparse
parser = argparse.ArgumentParser(description='Get Test Path')
parser.add_argument('--type', dest='TYPE', type=str, default=None, help="TYPE [REQUIRED]")
parser.add_argument('--name', dest='test_name', type=str, default=None, help="Test Name (Slow)")
parser.add_argument('--id', dest='test_id', type=str, default=None, help="Test ID (Fast)")
parser.add_argument('--normalise', dest='normalise', action="store_true", default=False, help="Replace '/' with '.' in Test Name")
args = parser.parse_args()
(I'm not sure what all of these arguments do; I personally only use the first two.) These lines are then followed by the code that uses the parsed arguments.
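For what it's worth, from the argparse docs I gather that parse_args() can also be given an explicit list of strings instead of reading sys.argv, which seems relevant here (the values below are just made up for illustration):

# parse_args() reads sys.argv[1:] by default, but it also accepts a list,
# e.g. (illustrative values only):
args = parser.parse_args(['--type', 'unit', '--name', 'some_test'])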
I want to refactor this script so that I can import it as a module while preserving its command-line behaviour, since lots of people use the script and it is also called from some of our csh scripts.
So far I have refactored it like this:
def _main():
    <all the former code that was in find_test.py>

if __name__ == "__main__":
    _main()
This still runs fine from the command line, but I don't know how to pass arguments with the relevant switches into it from a parent script.
How do I refactor this further so that I can call it from a parent script? Is this possible?
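To make it concrete, something along these lines is what I'm hoping to end up with in the parent script (the function name and signatures below are just what I imagine, nothing that exists yet):

# hypothetical usage from a parent Python script
import find_test

# either by passing the same switches through to the existing parser...
find_test._main(['--type', 'unit', '--name', 'some_test'])

# ...or via a plain function that the command-line entry point also calls
path = find_test.get_test_path(test_type='unit', test_name='some_test')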
I'd also rather not use docopt (which I've read is meant to be the new argparse) unless it's really necessary, i.e. this can't be done with argparse, since docopt isn't installed company-wide and getting it installed can be an arduous procedure.