



How to PRECISELY measure the execution time of a piece of code?

In order to compare the speed of algorithms, I measure the execution time of a piece of code with a method similar to the one described in this question.

I am not working on Windows but on Linux, so I use the gettimeofday function to measure time with microsecond precision.
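
(For context, gettimeofday fills a struct timeval with a whole-second part and a microsecond part, which is why the elapsed time further down has to be reconstructed from both fields. A minimal sketch of a single call, with illustrative variable names:)

#include <sys/time.h>

struct timeval tv;
gettimeofday(&tv, nullptr);  // tv.tv_sec:  whole seconds since the Epoch
                             // tv.tv_usec: microseconds within the current second
long long nowUs = tv.tv_sec * 1000000LL + tv.tv_usec;  // combined microsecond timestamp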

My test program has a simple architecture:

int main()
{
    test_algorithm1(); // Measure execution time of algorithm 1
    test_algorithm2(); // Measure execution time of algorithm 2

    return 0;
}

My functions test_algorithm1 and test_algorithm2 have exactly the same structure:

void test_algorithmX()
{
    struct timeval before, after;
    time_t         elapsedUs;
    int            i;

    gettimeofday(&before, nullptr);
    for (i = 0; i < 1000000; i++) // Repeated 1 000 000 times for more precision.
    { 
        // Code to measure here
    }
    gettimeofday(&after, nullptr);

    elapsedUs = after.tv_usec - before.tv_usec;            // Microsecond part of the difference
    elapsedUs += (after.tv_sec - before.tv_sec) * 1000000; // Plus the second part, converted to microseconds
    std::cout << "Elapsed time for algorithm X: " << elapsedUs << " us" << std::endl;
}
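
(For completeness, here is a minimal self-contained version of this harness that should compile as-is; the volatile accumulator is only a hypothetical placeholder workload so that the measured loop is not optimized away.)

#include <sys/time.h>   // gettimeofday, struct timeval
#include <iostream>

// Hypothetical placeholder workload; the real algorithms go here instead.
static volatile long sink = 0;

static void test_algorithmX()
{
    struct timeval before, after;
    time_t         elapsedUs;

    gettimeofday(&before, nullptr);
    for (int i = 0; i < 1000000; i++)
    {
        sink += i;  // dummy work standing in for the algorithm under test
    }
    gettimeofday(&after, nullptr);

    elapsedUs = after.tv_usec - before.tv_usec;            // microsecond part
    elapsedUs += (after.tv_sec - before.tv_sec) * 1000000; // second part, in microseconds
    std::cout << "Elapsed time for algorithm X: " << elapsedUs << " us" << std::endl;
}

int main()
{
    test_algorithmX();
    return 0;
}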

However, I face two problems:

  • The measured time seems very random, especially for short algorithms. If I run the program many times, I get results anywhere between 25 000 and 35 000 us.
  • If I swap the order of the algorithms in the main function (running test_algorithm2 before test_algorithm1), the values change. The first one to be executed appears to be slower: values like 50 000/15 000 us may become 40 000/20 000 us when I swap them.

This is very problematic, because I sometimes compare algorithms whose speeds are very close, and I cannot draw any conclusion with this much randomness.

Is there a better way to measure execution time? Or am I doing something wrong, such as using the gettimeofday function in the first place?