If you swap pointers to the mutexes after a failed iteration, you can get better performance than either your original code, or the ordered solution shown in the currently accepted answer.
void lock(pthread_mutex_t* m1, pthread_mutex_t* m2) {
    while (1) {
        pthread_mutex_lock(m1);
        if (pthread_mutex_trylock(m2) == 0) { // if the trylock was successful
            break;                            // both mutexes are now held
        }
        pthread_mutex_unlock(m1);
        sched_yield();
        pthread_mutex_t* tmp = m1;            // swap the order for the next attempt
        m1 = m2;
        m2 = tmp;
    }
}
Without the swap, the code still works, but it burns CPU cycles in highly contended situations.
The ordered solution also works and doesn't burn CPU cycles, but it can block on one mutex while holding the lock on another. That can reduce parallel execution on a multicore platform.
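For comparison, the ordered solution can be sketched like this. Ordering by the mutexes' addresses (via a `uintptr_t` cast) is my assumption for the ordering key; any total order agreed on by all threads works the same way.

```c
#include <pthread.h>
#include <stdint.h>

/* Ordered-locking sketch: every thread acquires the lower-addressed
 * mutex first, so all threads lock in the same global order and no
 * cycle of waiters can form. */
void lock_ordered(pthread_mutex_t* m1, pthread_mutex_t* m2) {
    if ((uintptr_t)m1 > (uintptr_t)m2) { /* normalize to a global order */
        pthread_mutex_t* tmp = m1;
        m1 = m2;
        m2 = tmp;
    }
    pthread_mutex_lock(m1);  /* may block, but holds nothing yet */
    pthread_mutex_lock(m2);  /* may block while holding m1 */
}
```

The second `pthread_mutex_lock` call is exactly where this scheme can stall other threads: the caller sleeps while keeping `m1` locked.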
By inserting the swap you eliminate both of those problems:
- You never hold a locked mutex while blocking on another.
- You don't "spin", because on every iteration after the first you attempt a full lock on the mutex that just failed a trylock. You are therefore likely to block (while holding no mutex), allowing other threads to run with both mutexes available.
I've published a paper that goes into more detail and supplies C++ code demonstrating these algorithms.
The paper calls the original solution "Persistent", the ordered solution "Ordered", and the swapping variant "Smart & Polite".