davehilary

Unexpected "UNABLE_TO_LOCK_ROW" on simple update

Hello,

 

We have 2 custom objects: 'Property' and 'Marketed_Property'. They are related by a lookup field on Marketed_Property, so Property is the parent object and Marketed_Property is the child. We have a trigger on the Marketed_Property object called 'updateStatusOnProperty' which updates all parent Property records when fields on the child Marketed_Property records are updated. There are also 2 triggers on Property which amend fields on the Property object when it is updated. The Marketed_Property object is populated by 10 @future calls carrying out a bulk update, fed by an input file of approx 2000 rows of data.
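For illustration, each @future call is shaped roughly like the sketch below (the class, method and parameter names are simplified stand-ins, not our actual code):

public with sharing class MarketedPropertyLoader { // hypothetical name
    // Hypothetical sketch - the real method name, parameters and field mapping differ.
    // Each of the 10 @future calls receives one slice of the ~2000 input rows.
    @future
    public static void updateMarketedProperties(Set<Id> marketedPropertyIds) {
        List<Marketed_Property__c> mps = [
            SELECT Id, Property_Status__c, Asking_Price__c, Date_Marketed__c
            FROM Marketed_Property__c
            WHERE Id IN :marketedPropertyIds
        ];
        // ...apply the values from the input file to each record...
        // This update fires the updateStatusOnProperty trigger, which in turn
        // updates the parent Property__c records.
        update mps;
    }
}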

 

While processing the data in the @future jobs, we get an "UNABLE_TO_LOCK_ROW" error on one of the Property record updates in one of the @future jobs; the other 9 @future jobs complete successfully. The error is reproducible, but only in our live org and only sporadically, with the lock occurring against a different single record each time. We have cloned our live environment in a full-size test org but cannot recreate the problem there, nor in any sandbox or DE org.

 

The trigger code is 1) doing a select on Property for all records that have a child Marketed_Property record, 2) doing some comparisons on the Marketed_Property data to determine which Property rows/fields should be updated, and 3) updating the relevant Property records - and it's this last step that's failing.

 

The code is below:

 

// Bail out if there are no parent Ids to process
if (mpIds.size() == 0) { return; }

List<Property__c> RecordsBatch = new List<Property__c>();

// Pull each parent Property together with its child Marketed_Property records, newest first
List<Property__c> props = [
    SELECT Id, Property_Status__c, Asking_Price__c, Estate_Agent__c, Beds__c,
           Weeks_On_Market__c, Date_Marketed__c, Property_Type__c, Type__c,
           Last_Update_Date__c, Matched__c,
           (SELECT Id, Property_Status__c, Asking_Price__c, Estate_Agent__c, Beds__c,
                   Weeks_On_Market__c, Date_Marketed__c, Property_Type__c, Type__c,
                   Last_Updated__c
            FROM Properties__r
            ORDER BY LastModifiedDate DESC)
    FROM Property__c
    WHERE Id IN :mpIds
];

for (Property__c p : props) {
    Property__c p1 = new Property__c(Id = p.Id);
    List<Marketed_Property__c> listMP = p.Properties__r;
    if (listMP.size() > 0) {
        if (listMP.size() == 2) {
            // Two marketed records: take the lower asking price
            if (listMP[0].Asking_Price__c < listMP[1].Asking_Price__c) {
                p1.Asking_Price__c = listMP[0].Asking_Price__c;
            } else {
                p1.Asking_Price__c = listMP[1].Asking_Price__c;
            }
            // Derive the combined status from the two child statuses
            if (listMP[0].Property_Status__c == 'For Sale' && listMP[1].Property_Status__c == 'For Sale') {
                p1.Property_Status__c = 'For Sale';
            } else if ((listMP[0].Property_Status__c == 'For Sale' && listMP[1].Property_Status__c == 'Sold STC')
                    || (listMP[0].Property_Status__c == 'Sold STC' && listMP[1].Property_Status__c == 'For Sale')) {
                p1.Property_Status__c = 'Sold STC';
            } else if ((listMP[0].Property_Status__c == 'For Sale' && listMP[1].Property_Status__c == 'Sold')
                    || (listMP[0].Property_Status__c == 'Sold' && listMP[1].Property_Status__c == 'For Sale')) {
                p1.Property_Status__c = 'Sold';
            } else if (listMP[0].Property_Status__c == 'Withdrawn' && listMP[1].Property_Status__c == 'Withdrawn') {
                p1.Property_Status__c = 'Withdrawn';
            }
            // Pick which child supplies the remaining fields: the earlier Date_Marketed__c
            // wins; on a tie, the Estate_Agent__c that sorts first alphabetically
            Marketed_Property__c MP = null;
            if (listMP[0].Date_Marketed__c == listMP[1].Date_Marketed__c) {
                List<String> forEA = new List<String>();
                forEA.add(listMP[0].Estate_Agent__c);
                forEA.add(listMP[1].Estate_Agent__c);
                forEA.sort();
                if (forEA[0] == listMP[0].Estate_Agent__c) {
                    MP = listMP[0];
                } else {
                    MP = listMP[1];
                }
            } else if (listMP[0].Date_Marketed__c > listMP[1].Date_Marketed__c) {
                MP = listMP[1];
            } else {
                MP = listMP[0];
            }

            p1.Estate_Agent__c = MP.Estate_Agent__c;
            p1.Beds__c = MP.Beds__c;
            p1.Weeks_On_Market__c = MP.Weeks_On_Market__c;
            p1.Date_Marketed__c = MP.Date_Marketed__c;
            p1.Property_Type__c = MP.Property_Type__c;
            p1.Type__c = MP.Type__c;
            p1.Last_Update_Date__c = MP.Last_Updated__c;
        } else {
            // Single marketed record: copy its fields straight across
            p1.Property_Status__c = listMP[0].Property_Status__c;
            p1.Asking_Price__c = listMP[0].Asking_Price__c;
            p1.Estate_Agent__c = listMP[0].Estate_Agent__c;
            p1.Beds__c = listMP[0].Beds__c;
            p1.Weeks_On_Market__c = listMP[0].Weeks_On_Market__c;
            p1.Date_Marketed__c = listMP[0].Date_Marketed__c;
            p1.Property_Type__c = listMP[0].Property_Type__c;
            p1.Type__c = listMP[0].Type__c;
            p1.Last_Update_Date__c = listMP[0].Last_Updated__c;
        }
    }
    if (p.Matched__c == false) {
        //p.Matched__c = true;
        p1.Matched__c = true;
    }
    RecordsBatch.add(p1);
    // Flush the parent updates in chunks of 1000 records
    if (RecordsBatch.size() == 1000) {
        update RecordsBatch;
        RecordsBatch.clear();
    }
}
// Update whatever is left - this is the DML that fails
if (RecordsBatch.size() > 0) {
    update RecordsBatch;
    RecordsBatch.clear();
}

 

 

 

The error message is below:

18:3:38.13|CODE_UNIT_FINISHED
18:3:38.631|CODE_UNIT_STARTED|[EXTERNAL]updateStatusOnProperty on Marketed_Property trigger event AfterUpdate for a0DA0000000ukYY, a0DA0000000ukYZ, <snip> 186 IDs </snip>
18:3:38.719|DML_BEGIN|[62,2]|Op:Insert|Type:MatchingProHistory__c|Rows:187
18:3:39.338|DML_END|[62,2]|
18:3:39.339|SOQL_EXECUTE_BEGIN|[112,28]|Aggregations:1|Select id,Property_Status__c,Asking_Price__c,Estate_Agent__c,Beds__c,Weeks_On_Market__c,Date_Marketed__c,Property_Type__c,Type__c,Last_Update_Date__c,Matched__c,(Select id,Property_Status__c,Asking_Price__c,Estate_Agent__c,Beds__c,Weeks_On_Market__c,Date_Marketed__c,Property_Type__c,Type__c,Last_Updated__c from Properties__r order by LastModifiedDate desc)from Property__c where Id IN : mpIds
18:3:39.427|SOQL_EXECUTE_END|[112,28]|Rows:139|Duration:88
18:3:39.605|DML_BEGIN|[280,5]|Op:Update|Type:Property__c|Rows:139
18:3:49.7|DML_END|[280,5]|
18:3:49.8|EXCEPTION_THROWN|[280,5]|System.DmlException: Update failed. First exception on row 0 with id a0CA0000000Y27WMAS; first error: UNABLE_TO_LOCK_ROW, unable to obtain exclusive access to this record: []
18:3:49.13|FATAL_ERROR|System.DmlException: Update failed. First exception on row 0 with id a0CA0000000Y27WMAS; first error: UNABLE_TO_LOCK_ROW, unable to obtain exclusive access to this record: []

We've been trying to rewrite this query but it isn't improving things. Our current theory is that because the asynchronous @future calls run in parallel across the Marketed_Property records, and some of those child records share the same parent, the same Property record is being updated by more than one job at once and is locked as a result. However, opinion is divided, as some of the team believe Salesforce controls execution and prevents such a situation from occurring.
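One mitigation we have not tried, mentioned here purely as a speculative sketch, would be to lock the parent rows explicitly with FOR UPDATE at the top of the trigger, so that a competing transaction waits for the lock (and, we believe, throws a QueryException if it can't obtain it in time) rather than failing the update outright. As far as we know FOR UPDATE can't be combined with ORDER BY, so the child ordering would have to move into Apex:

// Speculative, untried sketch: lock the parent Property__c rows up front so a
// competing transaction waits for the lock instead of failing immediately.
List<Property__c> lockedProps = [
    SELECT Id
    FROM Property__c
    WHERE Id IN :mpIds
    FOR UPDATE
];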

 

Has anyone seen this before, or can anyone spot something we're missing?

 

Thanks.

Message Edited by davehilary on 03-17-2010 08:32 AM
Best Answer chosen by Admin (Salesforce Developers) 
davehilary

We think we've solved this intermittent problem now - at least to the extent that it hasn't happened on 6 successive runs across multiple full volume environments - so posting our fix in case it's of use to others.

 

The root cause of the problem in our configuration was that the @future jobs were asynchronously updating a child object which had a trigger to update the parent object, which in turn had a couple of triggers updating reference data on the parent object and elsewhere.

 

The 'aha' moment came when we set up and checked debug logs for the @future jobs and saw most of them running for 200-300 seconds! It turned out that one of the triggers on the Property object was carrying out updates and inserts on two separate objects, of which the insert was taking considerably longer than the update. We think that because the insert step in the multi-step trigger was taking longer relative to every other step, it was locking the parent object record for longer than necessary for any further updates by the other triggers, and hence a race condition was occurring where the @future jobs would continually retry updating the parent object and eventually give up with the UNABLE_TO_LOCK_ROW error we saw.
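To illustrate the pattern (the trigger and object names below are made up, not our real ones):

// Made-up names, illustrating the pattern only: an after-update trigger on
// Property__c that performs DML on two further objects. The Property__c rows
// stay locked until the whole transaction commits, so a slow insert at the
// end keeps them locked far longer than the quick update needed.
trigger SlowPropertyTrigger on Property__c (after update) {
    List<Reference_Data__c> refUpdates = new List<Reference_Data__c>();   // hypothetical object
    List<History_Entry__c> historyInserts = new List<History_Entry__c>(); // hypothetical object
    // ...populate both lists from Trigger.new...
    update refUpdates;      // fast
    insert historyInserts;  // slow - the parent rows remain locked until this completes
}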

 

Our solution was to deactivate two of these triggers as they only need to run when some of the reference data changes (annually), and to split the logic in the 'slow' trigger out into separate triggers. In subsequent runs the @future jobs all complete processing in around 1-10 seconds.

 

Our trigger configuration had become overly complex and interdependent because it evolved that way across a number of development phases. It's worth reviewing this periodically, as increased volumes can cause emergent instability in what was previously a stable configuration (which is what happened to us).

All Answers

Naishadh
I think your code may be going into a loop. Are you sure that every time you create a new Property the Id is unique and not a duplicate? i.e. Property__c p1 = new Property__c(ID = p.Id);
Sanj
Hi, mpIds is a set of Property Ids, so these are unique.
anschoewe

We experienced something similar.  However, after turning off all triggers, we still had problems.  Only after a lot of searching did I stumble on this bit of documentation:

 

http://www.salesforce.com/us/developer/docs/api/index_Left.htm#StartTopic=Content%2Fwhats_new.htm|SkinName=webhelp

 

The important section says:

 

Exercise caution with multi-threaded processes. Some client applications partition large transactions into separate sets of records. Multiple threads run the same integration process on the different data sets. However, do not use multi-threaded processes if you are operating on records that have different parent objects, because an UNABLE_TO_LOCK_ROW error can occur, causing the entire transaction to fail.

For example, suppose you update contacts every night. You run a multi-threaded process that handles all the contacts beginning with "A" in one thread, and those beginning with "B" in another thread. But one account has a contact beginning with "A" and another with "B", so those two contacts are handled by different threads. When the second thread tries to lock the parent account, it hits a lock already placed by the first thread, and a deadlock occurs. The second thread eventually times out and the transaction fails. Instead, if you put every contact of accounts that begin with "A" in one thread and the contacts of accounts that begin with "B" in another thread, there would be no such contention.

 

-----------------------------

 

In our situation, we were trying to update a lot of Opportunity records in parallel (using the Bulk API). It would randomly fail because I wasn't grouping opportunities by Account in my batch jobs. Meaning, of my 10 parallel-running batch jobs, there were opportunities that shared the same parent Account in different batch jobs. This is a problem because when the Bulk API tries to update an opportunity in one batch job, it locks the parent Account. When another batch job simultaneously tries to update another opportunity with the same account, it can't, because the Account is locked. So I ended up dividing the opportunities into buckets, making sure to place all of the opportunities with the same parent Account in the same batch job (bucket), roughly as sketched below. Problem solved.
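A rough Apex sketch of the grouping idea (we actually built the buckets client-side before submitting the Bulk API batches; oppsToUpdate here is just an illustrative stand-in for the records being processed):

// Illustrative only: group the records by their parent Account Id.
Map<Id, List<Opportunity>> oppsByAccount = new Map<Id, List<Opportunity>>();
for (Opportunity opp : oppsToUpdate) {
    List<Opportunity> sameAccount = oppsByAccount.get(opp.AccountId);
    if (sameAccount == null) {
        sameAccount = new List<Opportunity>();
        oppsByAccount.put(opp.AccountId, sameAccount);
    }
    sameAccount.add(opp);
}
// Each map value is then placed whole into a single batch (bucket), so two
// parallel batches never update opportunities under the same parent Account.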

 

Hopefully this saves other people some time. This was not documented in the Bulk API documentation.

nagalakshmi

Hi,

 

I am also facing the same issue when I insert records from a parent to a child object through a batch Apex class. I get the error 'First error: Insert failed. First exception on row 0; first error: UNABLE_TO_LOCK_ROW, unable to obtain exclusive access to this record: []'. It works fine in the full sandbox, which is a copy of live, but I get the error in live. How can I solve this? Please help me.

 

 

Thanks,

Lakshmi

LVS

Just to clarify things, what davehilary faced was a 'deadlock' and not a 'race condition'.