Starz26

Trigger Recursion Static Variable stops second batch for Bulk updates

Ok, I give up.

 

I am updating 359 records. I call an @future method in my trigger. Since the trigger runs on after insert and after update, I added a static variable that is set before calling the @future method.
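
For reference, a minimal sketch of the kind of setup being described (object, class, and method names are placeholders, not the actual code) might look like:

public class AccountTriggerHelper {
    // Static guard intended to stop re-entry when the @future work
    // updates the records and fires the trigger again.
    public static Boolean hasRun = false;

    @future
    public static void doAsyncWork(Set<Id> recordIds) {
        // ... follow-up work on the passed IDs ...
    }
}

trigger AccountTrigger on Account (after insert, after update) {
    if (!AccountTriggerHelper.hasRun) {
        AccountTriggerHelper.hasRun = true;
        AccountTriggerHelper.doAsyncWork(Trigger.newMap.keySet());
    }
}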

 

Now, when workflows fire for the first 200 records, the second after update does not call the @future method.

 

HOWEVER, when the next batch of 159 records gets updated, it seems the transaction is not reset: the static variable is still true, and thus the second set of records does not call the @future method.

 

Are the transactions no longer reset?

 

For what it's worth, I am simply querying for 359 records and then issuing an update call from the Developer Console.
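
Roughly, that update from the Developer Console would be something like the following (the object name is an assumption for illustration):

List<Account> recs = [SELECT Id FROM Account LIMIT 359];
update recs; // one transaction; the trigger fires in chunks of 200 records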

 

Anyone have any ideas?

Jerun Jose
Shashikanth Sharma from these boards suggested the idea of using a static set of IDs instead of static booleans.

If a particular record has been processed by a trigger, you add its ID to that set. In the trigger body you check whether this set contains the incoming record ID, and if it does, you skip the processing. I've found it handier than the boolean technique, because it is smarter at figuring out whether it is an actual recursive call you are making or an invocation for another record.
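
A minimal sketch of that approach (the names are illustrative, not the original suggestion's code):

public class TriggerGuard {
    // IDs already handled in this transaction.
    public static Set<Id> processedIds = new Set<Id>();
}

trigger AccountTrigger on Account (after insert, after update) {
    List<Account> toProcess = new List<Account>();
    for (Account acc : Trigger.new) {
        // Skip records that have already gone through this trigger.
        if (!TriggerGuard.processedIds.contains(acc.Id)) {
            TriggerGuard.processedIds.add(acc.Id);
            toProcess.add(acc);
        }
    }
    if (!toProcess.isEmpty()) {
        // ... process only the records not seen before ...
    }
}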

Jerun Jose
As for the static variable not being reset for the second batch of records, I can see why it would not work in a test method, but for a normal execution, I have no clue.
sfdcfox

A "transaction" in salesforce.com is one complete execution of code from start to finish, which may affect a number of records up to the governor limits for number of records per transaction. If you're using the Developer Console, it is easy to see each transaction, as one (and only one) debug log will be generated per transaction.

 

When you use the API, such as with the Apex Data Loader, it automatically breaks the operation into batches of 200 records, and so every 200 records will generate a separate debug log. This is why you could query 359 records, issue the update call against the query results, and see 4 total transactions: 2 for the Apex Data Loader loading the data, and 2 for the future methods.

 

However, a different story is laid out when you talk about Execute Anonymous. Here, you are not using the API, but instead are executing code. This means that your trigger would be called twice, just as in the API example above, but both "transactions" are wrapped up in an atomic "super-transaction" that must pass or fail as a whole. Consequently, your governor limits are now shared across the total number of records being processed, instead of each batch having its own limit, as it would in the API.

 

This subtle difference in how Apex Code and the API process batches of 200 records means that you need to take it into consideration both when developing code and when performing quick fixes through Execute Anonymous. As a quick example, consider the following scenario:

 

Execute Anonymous:

delete [select id from account];

Now, were you a System Administrator, this would completely wipe out your database of all accounts, non-private contacts, opportunities, cases, contracts, and other account-driven data, right? The correct answer here is "maybe." Assuming you have no triggers in the system at all, the total query size is less than 50,000 records, and no validation rules would stop you, then you might have a decent chance of it working.

 

However, let us say, for the sake of argument, that you have a single trigger that fires on deletion and calls an @future method. At 50,000 records, the absolute maximum you could process using the code above, your method would be called 250 times against a limit of 10 calls per transaction; the entire transaction fails and is rolled back. In fact, you would be limited to deleting a mere 2,000 records given that limitation. The Apex Data Loader would happily chug along and delete 50,000 records with that same trigger in place, since an Apex Data Loader operation is broken into separate transactions.
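
Not from this thread, but a common defensive pattern is to check the remaining @future allowance before enqueueing, so the transaction degrades gracefully instead of failing (the handler name below is made up):

if (!System.isFuture() && !System.isBatch()
        && Limits.getFutureCalls() < Limits.getLimitFutureCalls()) {
    // Only enqueue while this transaction still has @future headroom.
    MyHandler.doAsyncWork(Trigger.newMap.keySet());
}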

 

My point is, while Execute Anonymous is great for small tasks, you must always remember that those tasks will have diminished resources compared to an Apex Data Loader transaction of the same size, and will always be limited to 50,000 records (or whatever the current Execute Anonymous governor limit is for DML rows) when you perform DML on a query or a collection derived from a query, compared to the API's "unlimited" transaction size.

Arnt mongoDB
Maybe the problem is not between the batches of 200, but between the smaller chunks of 100: each batch may be broken down into two chunks of 100 records, and triggers run on these smaller chunks. Static variables are not reset between these smaller chunks. See the documentation here:
https://help.salesforce.com/apex/HTViewSolution?id=000003793&language=en_US