1

This question came to mind when someone pointed out to me that importing a package with import package makes the code more readable. Is that actually true? And compared to from package import x, y, z, isn't there some overhead in importing the entire package?

2 Comments

I think you misunderstood what he meant; I think he meant to compare import package to from package import *. Wildcard imports should be avoided, as they "pollute" the namespace. Commented Aug 6, 2011 at 14:03
Okay... that clarifies it further. Commented Aug 6, 2011 at 14:09

4 Answers

7

I don't expect any performance difference. The whole module gets loaded either way.

For example:

# import only the dirname() function from the os.path module
>>> from os.path import dirname

# os.path.basename() was not bound to a name here:
>>> basename('/foo/bar.txt')
Traceback (most recent call last):
  ...
NameError: name 'basename' is not defined

# but the whole module was loaded, so basename() is reachable anyway:
>>> dirname.__globals__['basename']('/foo/bar.txt')
'bar.txt'
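
Another way to see that the whole module gets loaded (a small sketch, not from the original answer; json is just an arbitrary stdlib module):

>>> import sys
>>> from json import dumps            # binds only dumps in this namespace
>>> 'json' in sys.modules             # but the whole json module was loaded and cached
True
>>> sys.modules['json'].loads         # loads() exists too, even though it was never imported
<function loads at ...>

That cached module object is also what makes the dirname.__globals__ trick above work.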

2 Comments

In both cases the whole package is loaded anyway. The difference is only in which names end up in your namespace.
So if I use from ... import ... and could somehow get around that namespace restriction, I would (theoretically speaking) be able to access everything in the package?
3

Using dotted attribute access is always slower than importing a function directly and calling it by name, because the function has to be looked up in the module's dictionary on every call. The same cost applies to every getattr operation.

For example when appending items to a list:

lst     = []
for i in xrange(5000):
    lst.append(i ** .5 * 2)    # lst.append is looked up on every iteration

This is faster:

lst     = []
append  = lst.append           # look up the bound method just once
for i in xrange(5000):
    append(i ** .5 * 2)

This can make a really heavy difference:

>>> def bad():
...     lst     = []
...     for i in xrange(500):
...         lst.append(i ** .5 * 2)
>>> def good():
...     lst     = []
...     append  = lst.append
...     for i in xrange(500):
...         append(i ** .5 * 2)
>>> from timeit import timeit
>>> timeit("bad()", "from __main__ import bad", number = 1000)
0.175249130875
>>> timeit("good()", "from __main__ import good", number = 1000)
0.146750989286
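
The same kind of lookup cost shows up for module attributes, which is closer to the original question. Here is a minimal sketch of how you could measure it (math.sqrt is just an arbitrary example, not part of the original answer):

from timeit import timeit

# dotted access: math.sqrt is looked up on every call
dotted = timeit("math.sqrt(1234)", "import math", number = 1000000)

# direct name: the attribute lookup happened once, at import time
direct = timeit("sqrt(1234)", "from math import sqrt", number = 1000000)

print(dotted, direct)   # expect the direct version to come out somewhat faster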

3 Comments

I think he meant a performance penalty to import, not to access.
Let me guess, gentoo user? 0.06µs per call is not what many people would consider a heavy difference. Certainly not, when you consider how much more readable my_well_named_variable.append(item) is.
The absolute difference is irrelevant; the percentage difference is what matters, and that is significant. If you had a program like the one above but with 5 billion iterations instead of 500, you'd care about a 25% speed increase.
3

The performance will be the same either way. The entire module is compiled (if needed) and its code executed the first time you import it, no matter how you import it.
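
A quick way to convince yourself of this (a hedged sketch; mymod.py is a hypothetical file, not something from the answer):

# mymod.py (hypothetical) contains module-level code:
#     print("executing mymod")
#     def f():
#         return 42

# In a fresh interpreter:
import mymod             # prints "executing mymod" - the module's code runs now
from mymod import f      # prints nothing - mymod is already cached in sys.modules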

1 Comment

+1. The different forms of the import statement influence only what names end up in the importing module's namespace, not how much of the module gets loaded into memory.
1

Which is more readable:

from os.path import split, join

followed by a bunch of bare split and join calls that could easily be misread as the string methods of the same name, or

import os.path

and then referring to them as os.path.split and os.path.join? Here it's immediately clear they're functions dealing with paths.

Either way, the whole module has to be loaded. Otherwise, the things you did import that depend on other parts of the module you didn't import wouldn't work.
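
As an illustration of the readability point (a small sketch; the paths are arbitrary):

from os.path import split, join
head, tail = split('/usr/local/bin')            # easy to misread as str.split
path = join('/usr', 'local', 'bin')             # easy to misread as str.join

import os.path
head, tail = os.path.split('/usr/local/bin')    # clearly a path operation
path = os.path.join('/usr', 'local', 'bin')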

