I have a 60 MB file with lots of lines.

Each line has the following format:

(x,y)

Each line will be parsed as a numpy vector of shape (1, 2).

At the end they should be concatenated into a big numpy array of shape (N, 2), where N is the number of lines.

What is the fastest way to do that? Right now it takes too much time (more than 30 minutes).

My Code:

import numpy as np

points = None
with open(fname) as f:
    for line in f:
        point = parse_vector_string_to_array(line)
        if points is None:
            points = point
        else:
            points = np.vstack((points, point))

Where the parser is:

def parse_vector_string_to_array(string):
    x, y = eval(string)
    array = np.array([[x, y]])
    return array
  • Definitely do not do this: points = np.vstack((points, point)). That results in points being copied for every new line. Instead, make points a Python list and append to it. Don't convert it to a numpy array until you have finished reading the file. Commented Aug 20, 2015 at 19:40
  • If you can change the format of the file, get rid of the parentheses. Those are unusual to have in a text file, and will require special processing. (Of course, if you have control over the format, and you care about performance, you should consider a binary format instead of text; see the second sketch after these comments.) Commented Aug 20, 2015 at 19:46
  • @member555: See the Numpy documentation on input and output. The first block of routines deals with Numpy's custom binary format (.npy and .npz files), but there are also routines to read raw binary files. Commented Aug 20, 2015 at 19:54
  • @member555: This question is closely related; you can get some insight from it. The best way I found is to create a temporary array and populate it while you go through the file (see the first sketch after these comments). Commented Aug 20, 2015 at 19:56
  • @member555: No, you actually don't. Where does the data come from? It must be written by some other program. If that other program is written in Python, it could write the data in .npy format. If it's in a different programming language, you could write raw binary files, or use a more portable format like netCDF or HDF5. Commented Aug 20, 2015 at 21:14
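The comments above suggest two alternatives worth sketching. First, the temporary-array idea: count the lines, preallocate the result, and fill it in place instead of growing it. This is a hedged sketch rather than code from the thread, and the parenthesis-stripping parse is an assumption about the file format:

    import numpy as np

    # First pass: count the lines so the result can be preallocated.
    with open(fname) as f:
        n = sum(1 for _ in f)

    # Second pass: fill the preallocated array in place -- no copying.
    points = np.empty((n, 2))
    with open(fname) as f:
        for i, line in enumerate(f):
            x, y = line.strip().strip('()').split(',')   # '(1,2)' -> '1', '2'
            points[i, 0] = float(x)
            points[i, 1] = float(y)

Second, the binary-format idea: if the program that produces the file can use numpy, it can write the points straight to numpy's .npy format, which the consumer loads in one call with no text parsing at all. A minimal sketch; the file name points.npy and the stand-in data are made up for the example:

    import numpy as np

    # Producer side: write an (N, 2) array once, in binary form.
    points = np.array([(1.0, 2.0), (3.0, 4.0)])   # stand-in for the real data
    np.save('points.npy', points)

    # Consumer side: one call, no per-line parsing.
    points = np.load('points.npy')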

1 Answer

One thing that would improve speed is to imitate genfromtxt and accumulate each line's values in a list of tuples, then make one np.array call at the end.

For example (roughly):

points = []
with open(fname) as f:
    for line in f:
        x, y = eval(line)
        points.append((x, y))
result = np.array(points)

Since your file lines look like tuples, I'll leave your eval parsing in place. We don't usually recommend eval, but in this limited case it might be the simplest option.

You could try to make genfromtxt read this, but the () on each line will give some headaches.
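As a hedged aside (not part of the original answer): one way around those headaches is to feed numpy a generator that strips the parentheses first, so loadtxt only ever sees plain comma-separated values. This assumes every line really has the (x,y) form:

    import numpy as np

    with open(fname) as f:
        # Remove the surrounding parentheses before numpy parses each line.
        stripped = (line.strip().strip('()') for line in f)
        points = np.loadtxt(stripped, delimiter=',')

loadtxt accepts any iterable of lines, so the generator streams through the file without building an intermediate list.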

pandas is supposed to have a faster csv reader, but I don't know if it can be configured to handle this format or not.
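For what it's worth, here is one way it might be coaxed into it, as an untested sketch: let read_csv split on the comma, then strip the leftover parentheses from the two resulting string columns (the column names x and y are invented for the sketch):

    import pandas as pd

    # read_csv splits '(x,y)' on the comma, leaving '(x' and 'y)' as strings.
    df = pd.read_csv(fname, header=None, names=['x', 'y'])
    df['x'] = df['x'].str.lstrip('(').astype(float)
    df['y'] = df['y'].str.rstrip(')').astype(float)
    points = df.values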

1 Comment

If anything, use ast.literal_eval() – it's always better not to execute arbitrary code from an input file if we don't have to.
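A quick sketch of that substitution, dropped into the loop from the answer above (everything except the parser is unchanged):

    import ast

    import numpy as np

    points = []
    with open(fname) as f:
        for line in f:
            # literal_eval parses '(x,y)' as a tuple without executing code
            x, y = ast.literal_eval(line)
            points.append((x, y))
    result = np.array(points)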
