I have a program that trains an Artificial Neural Network, and it takes a 2-D numpy array as training data. The array I want to use is around 300,000 x 400 floats. I can't chunk the data here because the library I am using (DeepLearningTutorials) expects a single numpy array as training data.
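For reference, here is a minimal sketch of the kind of allocation that fails for me (the exact shape is approximate, and I am assuming float64, numpy's default dtype):

    import numpy as np

    # Roughly the training array I am trying to build:
    # 300,000 samples x 400 features, float64 by default.
    # Raw buffer size: 300000 * 400 * 8 bytes ~= 0.96 GB (about 0.89 GiB).
    train_x = np.zeros((300000, 400), dtype=np.float64)
    print(train_x.nbytes / float(1024 ** 3))  # prints ~0.894 (GiB)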
The code raises a MemoryError when the RAM usage of this process is around 1.6 GB (I checked it in the system monitor), but the machine has 8 GB of RAM in total. The system is Ubuntu 12.04, 32-bit.
I looked at the answers to other similar questions, but some say there is no such thing as allocating more memory to your Python program, and others are not clear about how to increase the memory available to the process.
One interesting thing is that I run the same code on a different machine, and there it can handle a numpy array of almost 1,500,000 x 400 floats without any problem. The basic configurations are similar, except that the other machine is 64-bit and this one is 32-bit.
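In case it matters, this is how I checked the bitness of the Python interpreter on each machine (just a quick check, not part of the training code):

    import platform
    import sys

    # True on a 64-bit Python build, False on a 32-bit one.
    print(sys.maxsize > 2 ** 32)
    # e.g. ('32bit', 'ELF') on the failing machine, ('64bit', 'ELF') on the other.
    print(platform.architecture())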
Could someone please give a theoretical explanation of why there is such a large difference between the two machines, and whether this (32-bit vs 64-bit) is the only reason for my problem?