
I have a list of lists with 1,200 rows and 500,000 columns. How do I convert it into a numpy array?

I've read the solutions on Bypass "Array is too big" python error, but they are not helping.

I tried to put them into a numpy array:

import random
import numpy as np
lol = [[random.uniform(0,1) for j in range(500000)] for i in range(1200)]
np.array(lol)

[Error]:

ValueError: array is too big.

Then I tried pandas:

import random
import pandas as pd
lol = [[random.uniform(0,1) for j in range(500000)] for i in range(1200)]
pd.lib.to_object_array(lol).astype(float)

[Error]:

ValueError: array is too big.

I've also tried hdf5 as @askewchan suggested:

import h5py
filearray = h5py.File('project.data','w')
# `data` is presumably the list of lists built above; this line rebinds the
# name `data` to the new on-disk dataset...
data = filearray.create_dataset('tocluster',(len(data),len(data[0])),dtype='f')
# ...so this assigns the dataset to itself, which makes h5py try to build the
# whole thing as one in-memory numpy array (see the traceback below)
data[...] = data

[Error]:

    data[...] = data
  File "/usr/lib/python2.7/dist-packages/h5py/_hl/dataset.py", line 367, in __setitem__
    val = numpy.asarray(val, order='C')
  File "/usr/local/lib/python2.7/dist-packages/numpy/core/numeric.py", line 460, in asarray
    return array(a, dtype, copy=False, order=order)
  File "/usr/lib/python2.7/dist-packages/h5py/_hl/dataset.py", line 455, in __array__
    arr = numpy.empty(self.shape, dtype=self.dtype if dtype is None else dtype)
ValueError: array is too big.

This post shows that I can store a huge numpy array on disk: Python: how to store a numpy multidimensional array in PyTables?. But I can't even get my list of lists into a numpy array =(

7 Comments
  • How is the list (of lists) currently stored? Is it on disk, or calculated in memory? Commented Mar 18, 2014 at 1:11
  • Calculated in memory. Commented Mar 18, 2014 at 1:11
  • Can't reproduce; even on a 32-bit system you should not get "array is too big", since the array simply isn't too big. So what numpy version are you using? Can you reproduce it with lol = [[0] * 500000] * 1200; a = np.array(lol) (much more memory-efficient for testing)? Commented Apr 3, 2014 at 15:05
  • Nvm. It simply is too large if you are using a 32-bit numpy/python (I forgot the itemsize factor). You should upgrade; since you apparently can use a huge amount of RAM, the system can hardly really be 32-bit? Commented Apr 3, 2014 at 15:33
  • More succinctly: does np.zeros((500000, 1200)) work? (See the sketch below.) Commented Apr 4, 2014 at 12:27
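
A quick way to run that last test without building the full list of lists (a sketch):

import numpy as np

# same total size as the list of lists, allocated directly; on a 32-bit
# build this raises ValueError because the array exceeds the address space
a = np.zeros((500000, 1200))
print a.nbytes / 1e9   # ~4.8 GB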

5 Answers

Answer 1 (score 4)

On a system with 32 GB of RAM and 64-bit Python, your code:

import random
import numpy as np
lol = [[random.uniform(0,1) for j in range(500000)] for i in range(1200)]
np.array(lol)

works just fine for me, but it's probably not the best route to take. This is the kind of thing PyTables was built for. Since you're dealing with homogeneous data, you can use the Array class or, better yet, the CArray class (which supports compression). This can be done as follows:

import numpy as np
import tables as pt

# Create container
h5 = pt.open_file('myarray.h5', 'w')
filters = pt.Filters(complevel=6, complib='blosc')
carr = h5.create_carray('/', 'carray', atom=pt.Float32Atom(), shape=(1200, 500000), filters=filters)

# Fill the array
m, n = carr.shape
for j in xrange(m):
    carr[j,:] = np.random.randn(n) 

h5.close() # "myarray.h5" (~2.2 GB)

# Open file
h5 = pt.open_file('myarray.h5', 'r')
carr = h5.root.carray
# Display some numbers from array
print carr[973:975, :4]
print carr.dtype    

If you print carr.flavor it will return 'numpy'. You can use this carr in the same way you can use a NumPy array. The data is stored on disk, but access is still quite fast.
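
Slicing a CArray hands back an ordinary in-memory NumPy array, so you can pull blocks into RAM on demand; a minimal sketch, assuming the 'myarray.h5' file created above:

import tables as pt

h5 = pt.open_file('myarray.h5', 'r')
carr = h5.root.carray

chunk = carr[0:100, :]   # reads just these 100 rows from disk
print type(chunk)        # <type 'numpy.ndarray'>
print chunk.mean()       # any numpy operation works on the slice
h5.close()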



Answer 2 (score 2)

With h5py / hdf5:

import numpy as np
import h5py

# a smaller 1200 x 5000 example (my machine can't handle the full size);
# np.zeros (rather than np.empty) guarantees the zero values shown below
lol = np.zeros((1200, 5000)).tolist()

f = h5py.File('big.hdf5', 'w')
bd = f.create_dataset('big_dataset', (len(lol), len(lol[0])), dtype='f')
bd[...] = lol

Then, I believe you can access your big dataset bd as if it were an array, but it is stored and accessed from disk, not memory:

In [14]: bd[0, 1:10]
Out[14]:
array([ 0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.], dtype=float32)

And you can have several datasets in the same file (multiple arrays):

abd = f.create_dataset('another_big_dataset', (len(lol), len(lol[0])), dtype='f')
abd[...] = lol
# read-modify-write through the slice; the dataset object itself
# does not support in-place arithmetic
abd[...] += 10

Then:

In [24]: abd[:3, :10]
Out[24]: 
array([[ 10.,  10.,  10.,  10.,  10.,  10.,  10.,  10.,  10.,  10.],
       [ 10.,  10.,  10.,  10.,  10.,  10.,  10.,  10.,  10.,  10.],
       [ 10.,  10.,  10.,  10.,  10.,  10.,  10.,  10.,  10.,  10.]], dtype=float32)

In [25]: bd[:3, :10]
Out[25]: 
array([[ 0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.],
       [ 0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.],
       [ 0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.]], dtype=float32)

My computer can't handle your example, so I can't test this with an array your size, but I hope it works!
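
If building the full 1200 x 500000 list of lists is itself what exhausts your memory, you can also fill the dataset one row at a time; a sketch along the same lines, untested at the full size:

import random
import h5py

f = h5py.File('big.hdf5', 'w')
bd = f.create_dataset('big_dataset', (1200, 500000), dtype='f')
for i in range(1200):
    # only one 500,000-element row exists in memory at a time
    bd[i, :] = [random.uniform(0, 1) for j in range(500000)]
f.close()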

Depending on what you want to do with your array, you might have more luck with pytables, which does a lot more than h5py.

See also:
  • Python Numpy Very Large Matrices
  • exporting from/importing to numpy, scipy in SQLite and HDF5 formats

2 Comments

  • too bad, it's still too big for numpy
  • I don't understand. This doesn't make a numpy array.
Answer 3 (score 1)

Have you tried assigning a dtype? This works for me.

import random
import numpy as np
lol = [[random.uniform(0,1) for j in range(500000)] for i in range(1200)]
ar = np.array(lol, dtype=np.float64)
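
For scale, 1200 x 500000 float64 values take roughly 4.8 GB; np.array(lol, dtype=np.float32) would halve that, at some cost in precision. A quick back-of-the-envelope check:

n = 1200 * 500000
print n * 8 / 1e9   # ~4.8 GB as float64
print n * 4 / 1e9   # ~2.4 GB as float32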

Another option is to use blaze. http://blaze.pydata.org/

import random
import blaze
lol = [[random.uniform(0,1) for j in range(500000)] for i in range(1200)]
ar = blaze.array(lol)

1 Comment

  • How much memory do you have?
Answer 4 (score 0, bounty +50)

The problem seems to be that something in your stack (either the OS or Python/NumPy) is 32-bit, which is the source of the size limitation: a 32-bit build cannot address an array this large no matter how much RAM the machine has. The solution is to upgrade to 64-bit.
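
A quick way to confirm which build is actually running (a sketch):

import sys
import numpy as np

print sys.maxsize                      # 2147483647 on a 32-bit build of Python
print np.dtype(np.intp).itemsize * 8   # 32 or 64: the pointer width numpy was compiled for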


Answer 5 (score -2)

An alternative is the following:

import random
import numpy as np

lol = np.empty((1200, 500000))
for i in range(lol.shape[0]):
    # build each row as a Python list and let numpy copy it into the array
    lol[i] = [random.uniform(0, 1) for j in range(lol.shape[1])]

This is reasonably close to your initial form, so I hope it can fit into your code. I cannot test with your numbers, as I don't have enough RAM to handle an array that size.
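
A variant of the same idea that skips the per-row Python list entirely and should be noticeably faster (a sketch):

import numpy as np

lol = np.empty((1200, 500000))
for i in range(lol.shape[0]):
    # fill each row in place with uniform floats in [0, 1)
    lol[i] = np.random.uniform(0, 1, lol.shape[1])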

