ssoftware

Apex batch job - record lock

Hi All,

 

I have a batch job running quite frequently (every hour) processing Case records. While the job is running, users sometimes have difficulty saving Case records that they were editing at the same time. I understand this is because the batch job locks those records while it is processing them.
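
To give a sense of the setup, here is a stripped-down sketch of what the job looks like (the class name and query are simplified placeholders, not the real code):

global class CaseHourlyBatch implements Database.Batchable<SObject> {

    global Database.QueryLocator start(Database.BatchableContext bc) {
        // Placeholder criteria - the real job has its own WHERE clause
        return Database.getQueryLocator(
            'SELECT Id, Status FROM Case WHERE Status != \'Closed\''
        );
    }

    global void execute(Database.BatchableContext bc, List<Case> scope) {
        for (Case c : scope) {
            c.Status = 'Working'; // placeholder field update
        }
        update scope; // the records in this chunk are locked while this DML commits
    }

    global void finish(Database.BatchableContext bc) {}
}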

 

Couple of questions:

1. If I reduce the batch size, will it minimise this issue, or does it have no effect? For instance, if there were 1000 Cases in total matching the query criteria, the batch job by default processes 200 records at a time. When the batch process starts, does it lock all 1000 records that the QueryLocator returned, or only 200 at a time?

 

2. I also have a master-detail relationship between the Case object (master) and a custom detail object. If a batch job runs only on the detail object, does Salesforce also lock the related master Case records? If it does not, this could be a potential workaround for me: the batch makes all the needed changes in the custom detail object while users remain free to edit the Cases.

 

Kind Regards

Madhav


All Answers

bob_buzzard

The parent is locked when the child record is updated.  For this reason you should try to group your records by parent.

 

Take a look at this blog post for more information:

 

http://blogs.developerforce.com/engineering/2013/04/managing-lookup-skew-to-avoid-record-lock-exceptions.html
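
As a rough sketch of what I mean by grouping (the object and field names here are made up), ordering the query locator by the master field keeps children of the same Case together, so fewer chunks end up contending for the same parent:

global class DetailObjectBatch implements Database.Batchable<SObject> {

    global Database.QueryLocator start(Database.BatchableContext bc) {
        // ORDER BY the master-detail field so detail records sharing a parent
        // tend to land in the same execute() chunk
        return Database.getQueryLocator(
            'SELECT Id, Case__c FROM Case_Detail__c ORDER BY Case__c'
        );
    }

    global void execute(Database.BatchableContext bc, List<Case_Detail__c> scope) {
        // process the chunk; the parent Cases are only locked while this
        // transaction commits
        update scope;
    }

    global void finish(Database.BatchableContext bc) {}
}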

ssoftware

Thanks Bob for your response. I appreciate it. I will check out the link you sent. Could you also please comment on my first question? Does reducing the batch size mitigate the locking issue (in this case there are no parent-child relationships - just the Case object)?

 

> If I reduce the batch size, will it minimise this issue, or does it have no effect? For instance, if there were 1000 Cases in total matching the query criteria, the batch job by default processes 200 records at a time. When the batch process starts, does it lock all 1000 records that the QueryLocator returned, or only 200 at a time?

bob_buzzard

I would expect that reducing the batch size would reduce the contention, as the number of records locked for the duration of the transaction will be reduced.  The batch process won't lock all records in the query locator, as that could be up to 50 million records which would make your org pretty unusable!
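
If it helps, the chunk size is the optional second argument to Database.executeBatch, so you can drop it from the default 200 to something smaller (the class name below is just an example):

// Only 50 Case records are locked per execute() transaction instead of 200
Id jobId = Database.executeBatch(new CaseHourlyBatch(), 50);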

This was selected as the best answer
ssoftware

Hi Bob, that is good to know. I think I will follow this approach and see how it goes instead of creating a parent-child / lookup relationship. Thanks for your quick response. I appreciate it.