
I'm building a queuing mechanism of sorts. There are rows of data that need processing, and a status flag. I'm using an update .. returning clause to manage it:

UPDATE stuff
SET computed = 'working'
WHERE id = (SELECT id FROM stuff WHERE computed IS NULL LIMIT 1)
RETURNING *;

Does the nested SELECT run under the same lock as the UPDATE, or do I have a race condition here? If so, does the inner SELECT need to be a SELECT ... FOR UPDATE?

  • If you're trying to build a message queue in SQL, and this is any reasonably large volume of tasks, I'm assuming you're eventually going to want to delete the rows for completed jobs. These deletions will hose your indexes, so be sure to VACUUM ANALYZE whenever you clean up the completed tasks, or your performance will crawl. You might want to consider just using an actual message queue (RabbitMQ, ZeroMQ, ActiveMQ, etc.). Commented Jun 30, 2014 at 17:00

2 Answers


While Erwin's suggestion is possibly the simplest way to get correct behavior (as long as you retry your transaction when you get an exception with SQLSTATE 40001), queuing applications by their nature tend to work better with requests that block waiting for their turn at the queue. PostgreSQL's implementation of SERIALIZABLE transactions allows higher concurrency and is somewhat more "optimistic" about the chances of collision, which makes it a poor fit for a queue.

The example query in the question, as it stands, in the default READ COMMITTED transaction isolation level would allow two (or more) concurrent connections to both "claim" the same row from the queue. What will happen is this:

  • T1 starts and gets as far as locking the row in the UPDATE phase.
  • T2 overlaps T1 in execution time and attempts to update that row. It blocks pending the COMMIT or ROLLBACK of T1.
  • T1 commits, having successfully "claimed" the row.
  • T2 proceeds with its UPDATE, finds that T1 has already updated the row, follows the update chain to the new version, finds that it still satisfies the selection criterion (which is just that id matches), and also "claims" the row.

It can be modified to work correctly (if you are using a version of PostgreSQL which allows the FOR UPDATE clause in a subquery). Just add FOR UPDATE to the end of the subquery which selects the id, and this will happen:

  • T1 starts and now locks the row before selecting the id.
  • T2 overlaps T1 in execution time and blocks while trying to select an id, pending the COMMIT or ROLLBACK of T1.
  • T1 commits, having successfully "claimed" the row.
  • By the time T2 is able to read the row to see the id, it sees that it has been claimed, so it finds the next available id.
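Applied to the query from the question (same table and column names), the corrected statement looks like this:

```sql
-- Lock the candidate row inside the subquery, so a concurrent worker
-- cannot select the same id before this UPDATE claims it.
UPDATE stuff
SET    computed = 'working'
WHERE  id = (SELECT id
             FROM   stuff
             WHERE  computed IS NULL
             LIMIT  1
             FOR UPDATE)
RETURNING *;
```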

At the REPEATABLE READ or SERIALIZABLE transaction isolation level, the write conflict would instead throw an error, which you could catch, identify as a serialization failure by its SQLSTATE (40001), and retry.

If you generally want SERIALIZABLE transactions but you want to avoid retries in the queuing area, you might be able to accomplish that by using an advisory lock.
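A sketch of that approach, under the assumption that every queue worker agrees on one lock key (the key 42 here is an arbitrary value chosen for illustration): take a transaction-scoped advisory lock before touching the queue, so claims serialize on the lock instead of producing SSI conflicts.

```sql
BEGIN ISOLATION LEVEL SERIALIZABLE;

-- All queue workers take the same advisory lock first, so their queue
-- claims run one at a time. The lock is released automatically at
-- COMMIT/ROLLBACK because it is a transaction-scoped advisory lock.
SELECT pg_advisory_xact_lock(42);

UPDATE stuff
SET    computed = 'working'
WHERE  id = (SELECT id FROM stuff WHERE computed IS NULL LIMIT 1)
RETURNING *;

COMMIT;
```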


11 Comments

In READ COMMITTED it would prevent double-assignment, but in the case of overlapping transactions would return an empty result set, even if there were other rows available. At stricter isolation levels it wouldn't make any real difference, but might be good form anyway.
Note that applications using this strategy must have a way to discover when someone has claimed a row, then crashed or otherwise failed to do the work.
@kgrittn does the statement need to be wrapped in a BEGIN...COMMIT to hold the FOR UPDATE lock? there's some reports of it not working otherwise: github.com/collectiveidea/delayed_job_active_record/pull/79
@jtomson, most locks are released at the end of the transaction which acquires them (the exception being some advisory locks). Without a BEGIN/COMMIT block (or the equivalent using other syntax or an enclosing function), each statement creates its own transaction which is automatically committed or rolled back upon completion of the statement.
@Andy FOR UPDATE by itself would not prevent the serialization failure, but if you use Erwin's example (for 9.5 and later) that uses FOR UPDATE SKIP LOCKED with the REPEATABLE READ isolation level, then you get exactly what you want without the serialization failure. There are some ideas floating around to get similar performance for SERIALIZABLE transactions if the update has no ORDER BY clause and includes a LIMIT, but that is just talk at this point.

If you are the only user, the query should be fine. In particular, there is no race condition or deadlock within the query itself (between the outer query and the subquery). The manual:

However, a transaction never conflicts with itself.

For concurrent use, the matter may be more complicated. You would be on the safe side with SERIALIZABLE transaction mode:

BEGIN ISOLATION LEVEL SERIALIZABLE;
UPDATE stuff
SET    computed = 'working'
WHERE  id = (SELECT id FROM stuff WHERE computed IS NULL LIMIT 1)
RETURNING *;
COMMIT;

You need to prepare for serialization failures and retry your query in such a case.

But I am not entirely sure whether this is overkill. I'll ask @kgrittn to stop by .. he is the expert on concurrency and serializable transactions ..

And he did. :)

Best of both worlds

Run the query in default transaction mode READ COMMITTED.

For Postgres 9.5 or later, use FOR UPDATE SKIP LOCKED.
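For example, the query from the question with the subquery skipping rows that other workers have already locked:

```sql
UPDATE stuff
SET    computed = 'working'
WHERE  id = (SELECT id
             FROM   stuff
             WHERE  computed IS NULL
             LIMIT  1
             FOR UPDATE SKIP LOCKED)  -- ignore rows claimed by concurrent workers
RETURNING *;
```

A concurrent worker never blocks on another worker's candidate row; it simply takes the next unlocked row, or comes up empty if none is left.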

For older versions recheck the condition computed IS NULL explicitly in the outer UPDATE:

UPDATE stuff
SET    computed = 'working'
WHERE  id = (SELECT id FROM stuff WHERE computed IS NULL LIMIT 1)
AND   computed IS NULL;

As @kgrittn advised in the comment on his answer, this query can come up empty, without having done anything, in the (unlikely) case that it gets intertwined with a concurrent transaction.

Therefore, it works much like the first variant in transaction mode SERIALIZABLE: you would have to retry - just without the performance penalty.

The only problem: while the conflict is very unlikely (the window of opportunity is tiny), it can happen under heavy load, and you cannot tell for sure whether any rows are actually left.

  • If that does not matter (as in your case), you are done here.
  • If it does, to be absolutely sure, run one more query with explicit locking after you get an empty result. If that also comes up empty, you are done. If not, continue.
In plpgsql it could look like this:

LOOP
   UPDATE stuff
   SET    computed = 'working'
   WHERE  id = (SELECT id FROM stuff WHERE computed IS NULL
                LIMIT 1 FOR UPDATE SKIP LOCKED);  -- pg 9.5+
   -- WHERE  id = (SELECT id FROM stuff WHERE computed IS NULL LIMIT 1)
   -- AND    computed IS NULL; -- pg 9.4-

   CONTINUE WHEN FOUND;  -- continue outside loop, may be a nested loop

   UPDATE stuff
   SET    computed = 'working'
   WHERE  id = (SELECT id FROM stuff WHERE computed IS NULL
                LIMIT 1 FOR UPDATE);

   EXIT WHEN NOT FOUND;  -- exit function (end)
END LOOP;

That should give you the best of both worlds: performance and reliability.
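To make the loop runnable it has to live inside a plpgsql function. A minimal wrapper for the common 9.5+ path could look like this (the function name claim_next_job is my invention, not from the original answer):

```sql
CREATE OR REPLACE FUNCTION claim_next_job()
  RETURNS SETOF stuff
  LANGUAGE plpgsql AS
$func$
BEGIN
   -- Claim one unprocessed row; SKIP LOCKED ignores rows
   -- already locked by concurrent workers.
   RETURN QUERY
   UPDATE stuff
   SET    computed = 'working'
   WHERE  id = (SELECT id FROM stuff WHERE computed IS NULL
                LIMIT 1 FOR UPDATE SKIP LOCKED)
   RETURNING *;
END
$func$;
```

Call it with `SELECT * FROM claim_next_job();` - it returns the claimed row, or no rows when the queue is empty (or every remaining row is locked by another worker).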

5 Comments

SERIALIZABLE transactions will indeed produce correct behavior, but feedback on the new implementation in PostgreSQL 9.1 (and later) suggests that queuing applications are a "worst case" scenario. While the technique (called Serializable Snapshot Isolation, or SSI) is much faster than using blocking locks for most workloads, I have a report of a 20% performance hit for one particular queuing application, and the reporter (a major PostgreSQL contributor) was able to intentionally engineer worse. So you could try it, but you might find that explicit locking works better.
I hesitate to make any statement about performance without actually benchmarking it, but I would expect marginally better performance (probably small enough to be hard to measure) by just starting with the UPDATE in the second block. There are optimizations for a connection re-locking a row it already has locked, and I don't think it would block unless you're in a situation where you'd need the retry anyway.
In my specific case, this isn't necessary. When the process reaches the end of the queue (as far as it's concerned), it sleeps for 10 minutes and tries again. If a record falls into the next window, it's not an issue. Thanks for compiling this, though.
@kolosy: The simple version with repeated AND computed IS NULL in the outer query should be the optimal solution for such a case.
For 9.5 and later is FOR UPDATE SKIP LOCKED all that is needed or is the rest of the paragraph relevant still? The end of the "For older versions" block is not 100% clear.
