earlbarby

Trigger data chunking on large updates

Hi, it appears that when a trigger runs on a large update, Salesforce chunks the data into 200-record batches and executes the trigger once per chunk, causing any queries in that trigger to run multiple times. Even with our triggers fully bulkified (a single query), we're hitting the query governor limits on a 6000-record update because that single query runs 30 times. Is this chunking behavior new, or has it always been like this? Is there a workaround of some sort? A simplified example follows.

 

trigger CheckBusinessHours on Case (before update) {
    // This query runs for each of the 200-record chunks
    BusinessHours DefaultBH = [SELECT Id FROM BusinessHours WHERE IsDefault = true];
    /*
    ... other code, no more queries ...
    */
}

 




Shashikant Sharma

Try using a static variable.

 

Create a class like

 

public class Constants
{
    // Flag to block repeat execution of the SOQL query
    public static Boolean runSOQLQuery = true;

    // Cached query result; static, so it persists for the whole transaction
    public static BusinessHours DefaultBH { get; set; }
}

 

Use it in your trigger like this:

 

trigger CheckBusinessHours on Case (before update) {

    BusinessHours DefaultBH;

    // The query runs only for the first 200-record chunk;
    // later chunks read the cached static instead
    if (Constants.runSOQLQuery)
    {
        DefaultBH = [SELECT Id FROM BusinessHours WHERE IsDefault = true];
        Constants.DefaultBH = DefaultBH;
        Constants.runSOQLQuery = false;
    }
    else
    {
        DefaultBH = Constants.DefaultBH;
    }

    /*
    ... other code, no more queries ...
    */
}

 

Now the query will run only once per transaction: Apex static variables persist for the duration of the transaction, so the cached value survives across each 200-record chunk.

 

I hope this works for you; please ask if you run into any issues.
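
For example, you can verify the caching with a quick anonymous Apex run (a sketch, assuming the org already has some Case records and the trigger above is deployed):

// Anonymous Apex: update enough Cases to force multiple 200-record chunks
List<Case> cases = [SELECT Id, Subject FROM Case LIMIT 1000];
for (Case c : cases) {
    c.Subject = 'bulk touch';
}
update cases; // fires CheckBusinessHours once per 200-record chunk

// Expect one query for the SELECT above plus one from the first trigger
// chunk; without the static cache it would be one query per chunk
System.debug('Total queries used: ' + Limits.getQueries());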

earlbarby

Hello Shashikant. Thanks for the reply. I considered that, but I'd prefer to avoid using a separate class to store variables that only need to be local, both because it makes for confusing, sloppy code and because the trigger already complies with Salesforce's own best practices on bulkification. I'm having trouble even finding a mention of data chunking in this context, though. Is this behavior documented anywhere? Can it be disabled? Or is it really necessary to write all triggers on the assumption that their dataset is going to be broken up into 200-record batches?

Shashikant Sharma

I don't think there is any solution for you other than a static class. Yes, it's true that your trigger is written per the best practices for bulkification, but if you look at the trigger execution model, the trigger will fire again for the next chunk of 200 records, and your query will be executed unnecessarily each time. I don't see any option other than the static class.

David81

To my knowledge, this is the way it has always been. I've written tons of triggers with SOQL queries in them and never run into this issue when doing bulk updates. Are you sure it's this trigger that is causing the issue and not another trigger down the line?

earlbarby

I'm positive; the trigger is definitely executing multiple times for batches larger than 200 records, each time rerunning its queries. I probably wouldn't even have noticed this if a client hadn't done something non-standard that caused a trigger to run on 6000 records when it typically runs on only a few hundred.

I made a quick example in a clean dev org with only this trigger and a test class:

trigger ChunkTest on Case (before insert) {
    System.debug('Starting trigger with ' + Trigger.new.size() + ' records. Queries already executed: ' + Limits.getQueries());
    Account a = [SELECT Id FROM Account LIMIT 1];
}

Inserting a batch of 1050 cases produces a log that looks like this:

12:35:02.429 (429690000)|DML_BEGIN|[10]|Op:Insert|Type:Case|Rows:1050
12:35:02.712 (712614000)|USER_DEBUG|[2]|DEBUG|Starting trigger with 200 records. Queries already executed: 0
12:35:03.974 (1974067000)|USER_DEBUG|[2]|DEBUG|Starting trigger with 200 records. Queries already executed: 1
12:35:05.636 (3636440000)|USER_DEBUG|[2]|DEBUG|Starting trigger with 200 records. Queries already executed: 2
12:35:07.357 (5357336000)|USER_DEBUG|[2]|DEBUG|Starting trigger with 200 records. Queries already executed: 3
12:35:09.051 (7051698000)|USER_DEBUG|[2]|DEBUG|Starting trigger with 200 records. Queries already executed: 4
12:35:10.615 (8615785000)|USER_DEBUG|[2]|DEBUG|Starting trigger with 50 records. Queries already executed: 5
12:35:11.005 (9005701000)|DML_END|[10]

As you can see, it's pretty easy to exhaust the 100 query governor limit if you have a batch of several thousand records and a trigger that needs to pull data from a few separate objects. 
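
Incidentally, applying the static-variable caching suggested above to this test trigger keeps the count at a single query no matter how many chunks run. A minimal sketch (ChunkTestCache is a hypothetical helper class, not part of my original example):

public class ChunkTestCache {
    // Static, so it persists across all 200-record chunks in one transaction
    public static Account cachedAccount;
}

trigger ChunkTest on Case (before insert) {
    System.debug('Starting trigger with ' + Trigger.new.size() + ' records. Queries already executed: ' + Limits.getQueries());
    if (ChunkTestCache.cachedAccount == null) {
        // Only the first chunk pays for the query
        ChunkTestCache.cachedAccount = [SELECT Id FROM Account LIMIT 1];
    }
    Account a = ChunkTestCache.cachedAccount;
}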

David81

Interesting. Perhaps this is a change that came with the increased governor limits for triggers? The query limit used to be much smaller than 100, so I can't imagine it behaved the same way in those days.