I have this code to read 64MB of binary data into memory:
#include &lt;cstdio&gt;

#define SIZE 8192

char* readFromFile(FILE* fp)
{
    // Allocate a 64 MB buffer (8192 * 8192 bytes) and fill it with a single read.
    char* memBlk = new char[SIZE*SIZE];
    fread(memBlk, 1, SIZE*SIZE, fp);
    return memBlk;
}

int main()
{
    FILE* fp = fopen("/some_path/file.bin", "rb+");
    char* read_data = readFromFile(fp);
    // do something on read_data
    // EDIT: It is a matrix, so I would be reading row-wise.
    delete[] read_data;
    fclose(fp);
    return 0;
}
When I run this code on its own, the runtime is less than 1 second. However, when I put the exact same code (just to benchmark it) into one of our applications, the runtime is 146 seconds. The application is quite bulky, with up to 5 GB of memory usage.
Some of the slowdown can be explained by the application's existing memory usage, cache misses, and other factors, but a difference by a factor of 146 sounds unreasonable to me.
Can someone explain this?
Memory mapping may improve performance. Any other suggestions are also welcome.
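For reference, this is a minimal sketch of what I mean by memory mapping, using POSIX open/fstat/mmap on Linux; the file path is a placeholder and the row-wise walk is just an example access pattern, not our real processing:

#include &lt;sys/mman.h&gt;
#include &lt;sys/stat.h&gt;
#include &lt;fcntl.h&gt;
#include &lt;unistd.h&gt;
#include &lt;cstdio&gt;

int main()
{
    int fd = open("/some_path/file.bin", O_RDONLY);
    if (fd &lt; 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &amp;st) != 0) { perror("fstat"); close(fd); return 1; }

    // Map the whole file; pages are faulted in lazily as they are touched,
    // instead of copying all 64 MB up front as fread() does.
    char* data = static_cast&lt;char*&gt;(
        mmap(nullptr, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0));
    if (data == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

    // Example: walk the data sequentially (row-wise for the matrix case).
    long sum = 0;
    for (off_t i = 0; i &lt; st.st_size; ++i)
        sum += data[i];
    printf("checksum: %ld\n", sum);

    munmap(data, st.st_size);
    close(fd);
    return 0;
}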
Thanks.
Machine info:
Linux my_mach 2.6.9-67.ELsmp #1 SMP Wed Nov 7 13:56:44 EST 2007 x86_64 x86_64 x86_64 GNU/Linux
EDIT:
Thanks for your answers. However, I had missed the fact that the place where I inserted the code was itself being called 25 times, so the slowdown is not exactly a factor of 146.
Anyway, the answers were helpful. Thanks for your time.