
I have an application written in Java with Spring, Hibernate, and PostgreSQL.

It tracks users' login attempts. If a user makes more than 5 invalid attempts from the same IP address in less than 1 hour, that IP address gets blocked.

I have the following entity class for storing information about login attempts:

import java.util.Date;
import javax.persistence.Entity;
import javax.persistence.Id;

@Entity
public class BlockedIp {

    @Id // JPA requires an identifier; here the IP address serves as the natural key
    private String ipAddress;
    private Date blockDate;
    private Date unblockDate;
    private Date lastAttemptDate;
    private Integer wrongAttemptsCount;
    // ...
}

First, when the app gets a login request, it checks whether the IP address is already blocked (blockDate != null). If it is, it returns a special response code to the user.

If the app gets a login request and the credentials are wrong:

  • if the last attempt was less than 1 hour ago, it increments wrongAttemptsCount, and if wrongAttemptsCount == 5, it sets blockDate;
  • if the last attempt was more than 1 hour ago, it resets wrongAttemptsCount to 1.

If the app gets a login request and the credentials are correct, it resets wrongAttemptsCount to zero, so the user can make up to 5 mistakes again :)
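
To make the flow concrete, here is a simplified sketch of the wrong-credentials branch (blockedIpRepository, the threshold, and the getters/setters are placeholder names, not my actual code):

// Sketch of the failure-handling logic; names are placeholders.
public void handleFailedLogin(String ipAddress) {
    BlockedIp entry = blockedIpRepository.findByIpAddress(ipAddress);
    Date now = new Date();
    long oneHourMillis = 60L * 60 * 1000;

    if (entry.getLastAttemptDate() != null
            && now.getTime() - entry.getLastAttemptDate().getTime() < oneHourMillis) {
        // last attempt was less than 1 hour ago: count this failure
        entry.setWrongAttemptsCount(entry.getWrongAttemptsCount() + 1);
        if (entry.getWrongAttemptsCount() == 5) {
            entry.setBlockDate(now); // 5th failure within the hour: block the IP
        }
    } else {
        // last attempt was more than 1 hour ago: start counting again
        entry.setWrongAttemptsCount(1);
    }
    entry.setLastAttemptDate(now);
    blockedIpRepository.save(entry);
}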

The issue arises when several users try to log in simultaneously from the same IP. For instance, wrongAttemptsCount = 4, so only one more failed attempt is allowed. Now three requests arrive, all with wrong credentials. In theory, only the first should pass and the other two should be blocked. In practice, of course, they all read wrongAttemptsCount = 4 from the database, so all three are processed as non-blocked.

So, which options do I have to solve this issue, with minimal performance loss if possible? I thought about a SELECT FOR UPDATE statement for my query, but my colleague said that this would seriously hurt performance. He also suggested looking at the @Version annotation. Is it really worth it? Is it much faster? Maybe someone can offer other options?

1 Answer


Optimistic locking yields the best performance: it prevents lost updates in multi-request conversations, and it stops multiple concurrent transactions from updating the same row without being notified of a new row version.

Only one transaction will pass; the others get a stale-state exception. Keep in mind that locks are acquired even if you don't explicitly request them. Every modified row takes a lock; it's just that the transactional write-behind cache postpones the entity state transition until the end of the current transaction.
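
In practice this means adding a version attribute to the entity. A minimal sketch, assuming the mapping from the question otherwise stays the same:

import java.util.Date;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Version;

@Entity
public class BlockedIp {

    @Id
    private String ipAddress;

    @Version // Hibernate adds this column to the UPDATE's WHERE clause, so a
             // concurrent stale write updates zero rows and throws an exception
    private long version;

    private Date blockDate;
    private Date unblockDate;
    private Date lastAttemptDate;
    private Integer wrongAttemptsCount;

    // getters and setters omitted
}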

SELECT FOR UPDATE, like any pessimistic locking mechanism, takes explicit locks. You can also use PESSIMISTIC_READ to obtain a shared lock, since PostgreSQL supports that too.

PESSIMISTIC_READ will prevent other transactions from acquiring an exclusive (write) lock on your BlockedIp row, but without optimistic locking you can still end up with more than 5 failed attempts: once the current transaction releases the lock, a competing transaction acquires it and saves its own failed login attempt anyway. This happens under READ_COMMITTED; REPEATABLE_READ or SERIALIZABLE prevent it, but raising the transaction isolation level can lower your application's scalability.
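
For reference, this is how a pessimistic lock is requested through the JPA API (a minimal sketch; on PostgreSQL, Hibernate renders PESSIMISTIC_READ as SELECT ... FOR SHARE and PESSIMISTIC_WRITE as SELECT ... FOR UPDATE):

import javax.persistence.EntityManager;
import javax.persistence.LockModeType;

// Acquire a shared lock on the row while reading it; other transactions
// requesting an exclusive lock on the same row block until this one commits.
BlockedIp entry = entityManager.find(
        BlockedIp.class, ipAddress, LockModeType.PESSIMISTIC_READ);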

All in all, use optimistic locking and handle stale-state exceptions accordingly.


2 Comments

Let's summarize. In terms of my question, I should: 1) use the @Version annotation (introduce a special field in my entity for this), 2) catch StaleStateException in all methods which save/update this entity, 3) if such an exception occurs, retrieve the entity again to get its new state and repeat all steps of my algorithm. Am I correct?
You are correct. Step 3 should run in a new transaction/Session.
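
A sketch of that retry flow, assuming Spring transaction management; the service and method names are hypothetical, and Spring surfaces Hibernate's stale-state failures as ObjectOptimisticLockingFailureException:

import org.springframework.orm.ObjectOptimisticLockingFailureException;

private static final int MAX_RETRIES = 3;

public void recordFailedLogin(String ipAddress) {
    for (int i = 0; i < MAX_RETRIES; i++) {
        try {
            // handleFailedLogin is annotated with
            // @Transactional(propagation = Propagation.REQUIRES_NEW),
            // so each retry re-reads the entity in a fresh transaction/Session
            loginAttemptService.handleFailedLogin(ipAddress);
            return;
        } catch (ObjectOptimisticLockingFailureException e) {
            // another request won the race; loop and re-run the algorithm
        }
    }
}

Note that the retried call must go through the Spring proxy (a separate bean, as above) for REQUIRES_NEW to take effect.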
