
Suppose I have a function which is used to increment a column called project_id by 1 in the database under certain circumstances. (Assume auto increment will not satisfy this use case, due to logic that has to be run before determining whether to increment).

When a user clicks "New Project", the function finds the current largest project_id in the database, does SOME OTHER STUFF, and adds a row by incrementing project_id by 1.

Is there a possibility that, when multiple users connect, the time the server takes to process SOME OTHER STUFF causes multiple users to pick up the same project_id? If so, how would you go about avoiding this problem? Is it just a bad idea to increment in this way?

Here is an illustration. This example is based on Laravel, but the question applies generally.

function ReallyLongFunction() {
    // Figure out what the current max project_id is and increment by 1
    $newProjectId = Project::max("project_id") + 1;

    // Some other stuff that the server has to do. I am exaggerating the
    // length of time here. If another user triggers this function during
    // the sleep, will both users end up with the same project id?
    sleep(2);

    // Insert the new row into the database
    $project = new Project;
    $project->project_id = $newProjectId;
    $project->save();
}
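The race the question describes can be reproduced generically (the question notes it applies beyond Laravel). The following is a minimal sketch with hypothetical names: an in-memory list stands in for the projects table, a Python thread stands in for each user's request, and the `sleep` plays the role of SOME OTHER STUFF. Both "sessions" read the same maximum before either inserts:

```python
import threading
import time

# In-memory stand-in for the projects table: a list of project_id values.
projects = [1, 2, 3]

def really_long_function(results):
    # Read the current max and increment by 1 (Project::max("project_id") + 1)
    new_project_id = max(projects) + 1

    # "Some other stuff" the server does before inserting
    time.sleep(0.1)

    # Insert the new row
    projects.append(new_project_id)
    results.append(new_project_id)

results = []
threads = [threading.Thread(target=really_long_function, args=(results,))
           for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(results)  # both threads read max() == 3, so both insert 4
```

Both requests compute the same new id, so the table ends up with a duplicate project_id, which is exactly the hazard the question is asking about.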
  • I'm not a database expert but I would say, yes this is bad practice. Why doesn't auto increment work if you are trying to exactly perform an auto increment in your code? Commented Sep 11, 2015 at 0:14
  • You could create a "draft" record in your database each time the user presses the "new project" button and just burn the new auto increment ID if they cancel. Commented Sep 11, 2015 at 0:14

1 Answer


It is always a bad idea to do in your own code what the SQL server can do better internally. $newProjectId = Project::max("project_id") + 1; sounds very much like a substitute for auto_increment. However, if your requirement is more complex than that, you need locks.

[LOW_PRIORITY] WRITE lock:

The session that holds the lock can read and write the table.

Only the session that holds the lock can access the table. No other session can access it until the lock is released.

Lock requests for the table by other sessions block while the WRITE lock is held.

The LOW_PRIORITY modifier affects lock scheduling if the WRITE lock request must wait, as described later.

If the LOCK TABLES statement must wait due to locks held by other sessions on any of the tables, it blocks until all locks can be acquired.
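The effect of a WRITE lock can be sketched with the same generic simulation as above: if the read–wait–insert sequence runs under a mutual-exclusion lock, the second session blocks until the first has released it, so it sees the updated maximum. This is a minimal sketch in which a `threading.Lock` stands in for the table-level WRITE lock; in Laravel the same serialization could likely be achieved with `LOCK TABLES`, or with `SELECT ... FOR UPDATE` via `lockForUpdate()` inside a transaction:

```python
import threading
import time

projects = [1, 2, 3]
table_lock = threading.Lock()  # stands in for the MySQL WRITE table lock

def really_long_function(results):
    # Acquire the "table lock" before reading; the second session blocks
    # here until the first one releases it, just like LOCK TABLES.
    with table_lock:
        new_project_id = max(projects) + 1
        time.sleep(0.1)  # "some other stuff"
        projects.append(new_project_id)
        results.append(new_project_id)

results = []
threads = [threading.Thread(target=really_long_function, args=(results,))
           for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(results))  # [4, 5] -- each session sees the other's insert
```

The duplicate disappears, but at the cost of serializing the requests, which is the slowdown described below.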

If a lot of users are accessing the same page concurrently, responses will start to slow down, because the second request cannot be served until the first one has completed.


1 Comment

Thanks for the quick replies... you've confirmed my suspicion that it wasn't the best way to go about things.
