
We are using Spring Batch to process a file which has 10,000 records, and our database is PostgreSQL. In our job we read the file using a FlatFileItemReader and process each record as follows:

Loop Record

    Insert Table 1;

    Insert Table 2;

    Insert Table 3, 4, 5;

End Loop

At the end of the process we get a PSQLException which says:

Out of shared memory. Hint: Increase max_locks_per_transaction.

Is there a way to resolve it?

2 Answers


As the error message already states, you need to increase max_locks_per_transaction in postgresql.conf, as documented in the PostgreSQL documentation (18.12. Lock Management) or in this Stack Overflow question: How to increase max_locks_per_transaction
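
As a rough sketch (128 is an arbitrary value here; the right number depends on your workload, and the change only takes effect after a server restart), the setting can be raised either by editing postgresql.conf:

max_locks_per_transaction = 128    # default is 64

...or from a superuser session:

ALTER SYSTEM SET max_locks_per_transaction = 128;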


1 Comment

I second this answer.

We also encountered the "Out of shared memory exception".

Increasing the max_locks_per_transaction should not have been necessary, as we were not adding a lot of data to the database on each operation (a few thousand rows per transaction).

We were inserting a batch of records into a table using saveAll(), inside a method marked as @Transactional.
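
For context, the insert path looked roughly like the sketch below (class, entity and repository names are illustrative; the repository is an ordinary Spring Data JpaRepository):

import java.util.List;

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class MyTableBatchInserter {

    private final MyTableRepository repository;  // extends JpaRepository<MyTableRow, Long>

    public MyTableBatchInserter(MyTableRepository repository) {
        this.repository = repository;
    }

    // One transaction around the whole batch of INSERTs
    @Transactional
    public void insertBatch(List<MyTableRow> rows) {
        repository.saveAll(rows);
    }
}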

Examining the locks that were being created during the process (by querying the pg_locks table) revealed hundreds of entries with a locktype of transactionid.
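
A query along these lines shows the breakdown by lock type (pg_locks is a standard system view, so this needs no special setup):

SELECT locktype, count(*) AS locks
FROM pg_locks
GROUP BY locktype
ORDER BY locks DESC;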

It turned out that these were being created by SAVEPOINT calls made to the database between each insert. Specifically, after enabling statement logging in Postgres, we saw these after each insert:

LOG: statement: SAVEPOINT PGJDBC_AUTOSAVE
LOG: execute S_2: select row_create_date from my_table ... etc
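
(For reference, statement logging can be switched on temporarily with something like the following, run as a superuser; log_statement = 'all' is very verbose, so it is worth turning it back off once you have the output you need.)

ALTER SYSTEM SET log_statement = 'all';
SELECT pg_reload_conf();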

We had two database-generated columns defined on each row: the primary key, and a "created-date" field, which was configured using Hibernate's @Generated annotation, like so:

@Generated
@Column(
    name = "row_create_dt",
    insertable = false,
    updatable = false
)
private LocalDateTime rowCreateDate;  // field name illustrative; value is populated by the database on insert

These fields were causing SELECT statements in between each INSERT. These seemed to prompt JPA to request "sub-transactions" around each INSERT, which caused the SAVEPOINT calls (and hence the locks) that were ultimately behind the Postgres "out of shared memory" error.

We were able to stop the SAVEPOINT calls by generating the created-date value in code, and switching the primary key to be created via the GenerationType.SEQUENCE approach (rather than using GenerationType.IDENTITY). We also needed to set a larger allocationSize for our use-case:

@SequenceGenerator(name = "my_table_id_seq", sequenceName = "my_table_seq", allocationSize = 100)

...and therefore also increased the step size for the Postgres sequence accordingly:

alter sequence my_table_seq increment by 100;
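
Putting the two changes together, the mapping ended up looking roughly like the sketch below. Entity and field names are illustrative, a @PrePersist callback is just one way to set the created-date in code, and the jakarta.persistence imports assume a JPA 3 / Spring Boot 3 setup (older stacks use javax.persistence):

import java.time.LocalDateTime;

import jakarta.persistence.Column;
import jakarta.persistence.Entity;
import jakarta.persistence.GeneratedValue;
import jakarta.persistence.GenerationType;
import jakarta.persistence.Id;
import jakarta.persistence.PrePersist;
import jakarta.persistence.SequenceGenerator;
import jakarta.persistence.Table;

@Entity
@Table(name = "my_table")
public class MyTableRow {

    @Id
    @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "my_table_id_seq")
    @SequenceGenerator(name = "my_table_id_seq", sequenceName = "my_table_seq", allocationSize = 100)
    private Long id;

    // No longer database-generated, so Hibernate has no reason to SELECT it back after each INSERT
    @Column(name = "row_create_dt", updatable = false)
    private LocalDateTime rowCreateDate;

    @PrePersist
    void setCreateDate() {
        this.rowCreateDate = LocalDateTime.now();
    }
}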

This fixed the out of shared memory issue for us.


PS: Also note this issue reported in PGJDBC in 2019:

The fix for this introduced the cleanupSavepoints option in PGJDBC (which defaults to false). Enabling this might also reduce the lock pressure from any legitimate SAVEPOINT calls, and so help avoid the out of shared memory error.
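
If you go that route, it is an ordinary PGJDBC connection property, so it can be set on the JDBC URL (host and database name are placeholders):

jdbc:postgresql://localhost:5432/mydb?cleanupSavepoints=true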
