Python has to create a new list on each entry to myfunction1() and bind it to 'biglist'.
In myfunction2(), you are passing a reference to the global-scoped 'biglist', so there's no copying to be done.
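One way to see that no copying happens is to compare object identities inside the function. A minimal sketch (the helper name same_object is mine, not one of the functions above):

```python
biglist = [1, 2, 3, 4, 5, 6, 7, 8, 9]

def same_object(mylist):
    # The parameter is just another name bound to the same list object,
    # so id() reports the same identity for both names.
    return id(mylist) == id(biglist)

print(same_object(biglist))  # True -- no copy was made
```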
There are other, subtle differences between the two. Passing in that reference leaves the global data open to (possibly unwanted) interference:
>>> biglist = [1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> def myfunction3(mylist):
...     mylist[2] = 99
...
>>> biglist
[1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> myfunction3(biglist)
>>> biglist
[1, 2, 99, 4, 5, 6, 7, 8, 9]
...whereas declaring it in function scope means it's created anew every time. So, for instance:
>>> def myfunction4():
...     mylist = [1, 2, 3, 4, 5]
...     print(mylist)
...     mylist[2] = 99
...
>>> myfunction4()
[1, 2, 3, 4, 5]
>>> myfunction4()
[1, 2, 3, 4, 5]
Each time the function's called, you've got a fresh, clean, unadulterated copy of the list to play with.
So how do you get the best of both worlds? Try this:
>>> def myfunction5():
...     mylist = biglist + []  # Make a private copy
...     mylist[4] = 99
...
>>> biglist
[1, 2, 99, 4, 5, 6, 7, 8, 9]
>>> myfunction5()
>>> biglist
[1, 2, 99, 4, 5, 6, 7, 8, 9]
You can see that the global-scope list is unchanged. Your new function, based on this method, would be:
def myfunction1a(number):
    mylist = biglist + []  # Copy-safe version
    print(number * mylist)
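One caveat worth flagging: biglist+[] (like biglist[:] or list(biglist)) makes a shallow copy, so if the list holds mutable elements such as nested lists, those inner objects are still shared between the copy and the original. A small sketch using the standard copy module:

```python
import copy

nested = [[1, 2], [3, 4]]

shallow = nested + []         # shallow copy: inner lists are still shared
shallow[0][0] = 99
print(nested[0][0])           # 99 -- the change leaked into the original

nested = [[1, 2], [3, 4]]
deep = copy.deepcopy(nested)  # deep copy: inner lists are duplicated too
deep[0][0] = 99
print(nested[0][0])           # 1 -- the original is untouched
```

For a flat list of numbers, as here, the shallow copy is all you need.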
How does that compare using your benchmark timings? I know that in this case you're not actually modifying "biglist" in your function, but if you must have shared global data it's not a bad paradigm to get used to. And since the list is only constructed from scratch once (and then copied), it might even give a small performance improvement.
Remember to tell timeit to time execution as if it's run a single time!
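To run that comparison yourself, something along these lines should work. This is only a sketch: the two function bodies stand in for your myfunction1/myfunction1a, and timeit.timeit reports the total seconds for all "number" repetitions, not the time of a single run:

```python
import timeit

biglist = list(range(1000))  # built once, at module scope

def build_each_call():
    # Rebuilds the list from scratch on every call
    return list(range(1000))

def copy_global():
    # The global list already exists; each call only copies it
    return biglist + []

# Each result is the total time for 10000 calls, in seconds
print(timeit.timeit(build_each_call, number=10000))
print(timeit.timeit(copy_global, number=10000))
```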