
I am currently using struct.unpack to read a binary file. I frequently need to read different types of values, so I might read a few longs, then 8 floats, then 2 shorts, a couple of bytes, etc.

But they are generally grouped nicely so you might get a bunch of longs, and then a bunch of floats, and then a bunch of shorts, etc.

I've read a couple of posts suggesting that arrays perform much faster than unpack, but I'm not sure whether there will be a significant difference if I am constantly calling fromfile with different array objects (one for each type I might come across).

Has anyone done any performance tests to compare the two in this situation?
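For concreteness, here is a minimal sketch of the two approaches being compared. The function names and the 4-int/8-float/2-short layout are my own illustration, not code from the question; both readers use native byte order and sizes so the layouts match.

```python
import array
import struct
from io import BytesIO

def read_with_struct(f):
    # Unpack each group with a native-format string.
    ints = struct.unpack("4i", f.read(struct.calcsize("4i")))
    floats = struct.unpack("8f", f.read(struct.calcsize("8f")))
    shorts = struct.unpack("2h", f.read(struct.calcsize("2h")))
    return ints, floats, shorts

def read_with_array(f):
    # Read each group into a fresh array of the matching typecode.
    ints = array.array("i")
    ints.fromfile(f, 4)
    floats = array.array("f")
    floats.fromfile(f, 8)
    shorts = array.array("h")
    shorts.fromfile(f, 2)
    return ints, floats, shorts

# Sample data packed with the same native format string.
data = struct.pack("4i8f2h", 1, 2, 3, 4, *map(float, range(8)), 7, 8)
```

Both functions return the same values; the question is purely about which is faster when the groups alternate.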

1 Answer


Sounds like you are in the best position to do the time trials. You already have the struct.unpack version, so make an array.fromfile version and then use the timeit module to do some benchmarks. Something like this:

python -m timeit -s "import struct_version" "struct_version.main()"

python -m timeit -s "import array_version" "array_version.main()"

where struct_version and array_version are your two different versions, and main is the function that does all the processing.
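The same comparison can also be run in-process with the timeit module. This is my own sketch, not the poster's code: it times unpacking 1000 floats with struct.unpack against reading them with array.fromfile.

```python
import array
import struct
import timeit
from io import BytesIO

# Sample data: 1000 native-format floats.
data = struct.pack("1000f", *map(float, range(1000)))

def with_struct():
    # Read the raw bytes, then unpack all 1000 floats in one call.
    return struct.unpack("1000f", BytesIO(data).read(4 * 1000))

def with_array():
    # Read the same 1000 floats directly into an array.
    a = array.array("f")
    a.fromfile(BytesIO(data), 1000)
    return a

t_struct = timeit.timeit(with_struct, number=1000)
t_array = timeit.timeit(with_array, number=1000)
print(f"struct: {t_struct:.4f}s  array: {t_array:.4f}s")
```

The absolute numbers depend on your machine and data layout, which is why benchmarking your own two versions, as suggested above, is the reliable answer.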
