I was reading up on the Python interpreter because I couldn't understand why some things had compiled Python bytecode files (.pyc), but others didn't.
I got the answer to my question, but now I'm confused. So okay, the interpreter compiles a script to a module, which is 'sort of' like an object file in C if I'm understanding this correctly (C programmer here, new to Python), or I guess more like a .class in Java, since it's compiled bytecode rather than native instructions. Anyway, it does this either when you import a script, or when you explicitly ask for it to be compiled (which for some reason is less favored).
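To make the "explicitly compiled" path concrete, here's a minimal sketch using the stdlib `py_compile` module (the file and module names are made up for the demo):

```python
import os
import py_compile
import tempfile

# Write a throwaway module somewhere writable.
src_dir = tempfile.mkdtemp()
src_path = os.path.join(src_dir, "example_mod.py")
with open(src_path, "w") as f:
    f.write("x = 1 + 1\n")

# Explicitly byte-compile it; the return value is the path to the
# cached .pyc, which lands in a __pycache__ directory next to the source.
pyc_path = py_compile.compile(src_path)
print(pyc_path)
print(os.path.exists(pyc_path))  # True
```

Running `python -m py_compile example_mod.py` (or `python -m compileall .` for a whole tree) does the same thing from the command line.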
So, under that understanding, is there any runtime difference between compiled bytecode and not? Assuming there's only one interpreter (a bytecode interpreter), that would mean that if the module isn't already compiled, the lexing/parsing/compiling has to happen right before interpretation. Won't that lead to higher execution time?
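My mental model of that "compile right before interpretation" step, sketched with the builtin `compile()` and `exec()` (the source string is just an example):

```python
# Source text is first turned into a code object (lexing/parsing/compiling)...
source = "total = sum(range(10))"
code_obj = compile(source, "<string>", "exec")

# ...and only then does the bytecode interpreter actually execute it.
namespace = {}
exec(code_obj, namespace)
print(namespace["total"])  # 45
```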
So if you take the above to be true, then it's obviously best if the modules are compiled into .pyc rather than run as a standard .py script on the fly.
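For reference, the stdlib even exposes where the cached bytecode for a given source file would go; this is where I'd expect the .pyc for my hypothetical file.py to end up:

```python
import importlib.util

# Maps a source path to its bytecode cache location,
# e.g. __pycache__/file.cpython-311.pyc (exact tag depends on your Python).
cache_path = importlib.util.cache_from_source("file.py")
print(cache_path)
```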
Would that mean it's best to have as little execution in your main script as possible?
I would think that if your entry point has any heavy logic (e.g. mine has a couple of tree traversals and other expensive comparisons), shouldn't that entry point itself be wrapped so that it gets compiled?
That is, instead of:
# file.py:
def main():
    # setup, whatever shared resources different modules need, etc.
    ...

main()
Would it be better to do:
# wrapper.py:
from file import *

main()
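If it helps, here's a runnable version of that wrapper idea; importing the module is what triggers byte-compilation and caching, whereas running it directly as a script doesn't write a .pyc for it (the file and function names here are made up for the demo):

```python
import importlib
import os
import sys
import tempfile

# Stand-in for my file.py, written somewhere importable.
pkg_dir = tempfile.mkdtemp()
with open(os.path.join(pkg_dir, "file_mod.py"), "w") as f:
    f.write("def main():\n    return 'did the heavy work'\n")

sys.path.insert(0, pkg_dir)

# The wrapper's job: import (which compiles and caches), then call main().
mod = importlib.import_module("file_mod")
print(mod.main())  # did the heavy work
# The bytecode cache usually appears on import (unless writing is disabled):
print(os.path.isdir(os.path.join(pkg_dir, "__pycache__")))
```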
Hope I explained what I'm asking well enough. It's quite possible that I've misunderstood how the interpreter/compiler is used in Python and this question isn't even reasonable to ask; I'm quite new to Python.
TIA