Your solution works nicely and is probably one of the fastest ways to do this for small to medium sized lists, but it creates an unnecessary intermediate list (in python2.x). Usually that's not a problem, but in a few cases, depending on the objects in b, it could be an issue. An alternative, which is lazy in python2 as well as python3, is:
':'.join(str(x) for x in b)
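For reference, all three variants being timed below produce the same string; they differ only in how the str-converted items are produced:

```python
b = ["x", 2, "y"]

# map: returns a list in python2, an iterator in python3
s1 = ":".join(map(str, b))

# list comprehension: always builds an intermediate list
s2 = ":".join([str(x) for x in b])

# generator expression: yields converted items one at a time
s3 = ":".join(str(x) for x in b)

assert s1 == s2 == s3 == "x:2:y"
```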
Some timings for python 2.7.3:
$ python -m timeit -s 'b = ["x", 2, "y"]' '":".join(map(str,b))'
1000000 loops, best of 3: 1.66 usec per loop
$ python -m timeit -s 'b = ["x", 2, "y"]' '":".join([str(x) for x in b])'
1000000 loops, best of 3: 1.49 usec per loop
$ python -m timeit -s 'b = ["x", 2, "y"]' '":".join(str(x) for x in b)'
100000 loops, best of 3: 3.26 usec per loop
$ python -m timeit -s 'from itertools import imap; b = ["x", 2, "y"]' '":".join(imap(str,b))'
100000 loops, best of 3: 2.83 usec per loop
Some timings for python3.2:
$ python3 -m timeit -s 'b = ["x", 2, "y"]' '":".join(map(str,b))'
100000 loops, best of 3: 2.6 usec per loop
$ python3 -m timeit -s 'b = ["x", 2, "y"]' '":".join([str(x) for x in b])'
100000 loops, best of 3: 2.08 usec per loop
$ python3 -m timeit -s 'b = ["x", 2, "y"]' '":".join(str(x) for x in b)'
100000 loops, best of 3: 3.39 usec per loop
Note that if you let the list get a lot bigger, the differences become less important:
python2.7.3:
$ python -m timeit -s 'b = list(range(10000))' '":".join(str(x) for x in b)'
100 loops, best of 3: 4.83 msec per loop
$ python -m timeit -s 'b = list(range(10000))' '":".join([str(x) for x in b])'
100 loops, best of 3: 4.33 msec per loop
$ python -m timeit -s 'b = list(range(10000))' '":".join(map(str,b))'
100 loops, best of 3: 3.29 msec per loop
python 3.2.0
$ python3 -m timeit -s 'b = list(range(10000))' '":".join(str(x) for x in b)'
100 loops, best of 3: 6.42 msec per loop
$ python3 -m timeit -s 'b = list(range(10000))' '":".join([str(x) for x in b])'
100 loops, best of 3: 5.51 msec per loop
$ python3 -m timeit -s 'b = list(range(10000))' '":".join(map(str,b))'
100 loops, best of 3: 4.55 msec per loop
*All timings were done on my MacBook Pro, OS X 10.5.8, Intel Core 2 Duo.
Notes:
- python2.x is faster than python3.x in all cases (for me).
- The list comprehension turns out to be fastest for your example list, but map is faster for the larger list. map is probably slower for the small list because the function has to be looked up, whereas the list comprehension cannot be "shadowed", so no lookup needs to be performed. There may be another crossover point for huge lists, where the time it takes to build the intermediate list becomes significant.
- The generator expression is always the slowest (but it is lazy in both versions).
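If you want to rerun the comparison yourself from Python code rather than the command line, a minimal sketch using the timeit module (mimicking the command-line tool's best-of-3 behavior) might look like:

```python
import timeit

setup = 'b = ["x", 2, "y"]'
candidates = {
    "map":      '":".join(map(str, b))',
    "listcomp": '":".join([str(x) for x in b])',
    "genexp":   '":".join(str(x) for x in b)',
}

N = 100000
for name, stmt in candidates.items():
    # take the best of 3 repeats, like `python -m timeit` reports
    best = min(timeit.repeat(stmt, setup=setup, number=N, repeat=3))
    print("%-8s %.2f usec per loop" % (name, best / N * 1e6))
```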