JWikk

after update triggers and workflow field updates

We have trigger code (after update) running twice because of a workflow field update. The trigger logic works with values unrelated to the workflow rule's field update. We do check that the Trigger.old field values != the Trigger.new field values (i.e., the values actually changed). However, the code can't tell that it already did the work on the first run, so it doesn't exit before performing the same work again. We have experimented with static variables and static methods, and it looks promising so far.

 

if (TriggerTransaction.isPosted(payments[i].Id) && payments[i].BGBK__Amount__c != null) {
    continue;
} else {
    TriggerTransaction.post(payments[i].Id);
}
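The static-variable experiment mentioned above might look something like this sketch. Class and method names here are illustrative, and `BGBK__Payment__c` is a guess at the object's API name based on the `BGBK__Amount__c` field:

```apex
// Illustrative sketch only: a static collection lives for the whole
// transaction, so the second (workflow-triggered) firing of the trigger
// can skip records already handled by the first firing.
public class PaymentTriggerGuard {
    // Ids already processed in this transaction
    // (static variables in Apex are per-transaction, not per-session)
    private static Set<Id> processedIds = new Set<Id>();

    public static Boolean alreadyProcessed(Id recordId) {
        return processedIds.contains(recordId);
    }

    public static void markProcessed(Id recordId) {
        processedIds.add(recordId);
    }
}
```

Used inside the trigger loop:

```apex
for (BGBK__Payment__c p : Trigger.new) {
    if (PaymentTriggerGuard.alreadyProcessed(p.Id)) {
        continue; // second firing caused by the workflow field update
    }
    PaymentTriggerGuard.markProcessed(p.Id);
    // ... do the real work, e.g. TriggerTransaction.post(p.Id);
}
```

Tracking individual record Ids (rather than a single static Boolean) avoids skipping records that genuinely weren't touched on the first pass.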

 

 

Does anyone have best practice code snippets to handle this situation?

 

Note:

As per the Salesforce order of execution, triggers run first, then workflow rules; if a workflow field update occurs, the triggers run again.

 

James

BasicGov Systems

Ispita

Why don't you use a field on the record to mark that the processing has already been done? For example:

  • Let's say the field is Flag__c
  • If payments[i].Flag__c == true, skip; otherwise do the processing and set payments[i].Flag__c = true
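As a sketch, the flag-field idea could be written like this. `Flag__c` is a hypothetical checkbox field, and `BGBK__Payment__c` is a guess at the object name:

```apex
// Sketch of the flag-field approach suggested above
for (BGBK__Payment__c p : payments) {
    if (p.Flag__c == true) {
        continue; // already processed on an earlier run
    }
    // ... do the processing ...
    p.Flag__c = true; // in a before trigger this change is saved automatically
}
```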
JWikk

Yes, I could set an object field; however, once set, the field stays set from then on - it won't work the second time the user makes an update, and there is no guarantee that the code will always run (the workflow criteria may be false). The code itself needs to know whether it already ran in the current transaction.

 

Has anyone else used static variables or methods to do this? (assuming this is the only way to handle this)

stcforce

I tried to build a solution for more or less the same problem using an adaptation of the cookbook recipe for detecting multiple trigger executions. In the end, I ignored the problem and just reduced the code so it was lean enough to run twice. Unless you keep a complete set of deep copies of your variables, you can't tell whether the differences from Trigger.old were introduced by the previous trigger run or by the initial update. A possible option is to use an asynchronous method on the second call. As I understand it, this creates a second execution context with its own, separate governor limits, which can absorb the inefficiency.
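The asynchronous idea might be sketched like this, using the standard `@future` annotation. Class and method names are illustrative, and `BGBK__Payment__c` is an assumed object name; `@future` methods only accept primitives or collections of primitives, which is why Ids are passed and the records re-queried:

```apex
// Sketch: moving heavy processing into an asynchronous method.
// The @future call runs later in its own execution context,
// with its own fresh set of governor limits.
public class PaymentAsyncProcessor {
    @future
    public static void postPayments(Set<Id> paymentIds) {
        // Re-query the records inside the async context
        for (BGBK__Payment__c p : [SELECT Id, BGBK__Amount__c
                                   FROM BGBK__Payment__c
                                   WHERE Id IN :paymentIds]) {
            // ... expensive processing, e.g. TriggerTransaction.post(p.Id);
        }
    }
}
```

The trigger would collect the Ids of records needing work and make a single `PaymentAsyncProcessor.postPayments(ids)` call, rather than calling it per record.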

 

Other options are available if the processing is not critical. For instance, some of the examples Salesforce uses in its instructional material favor an approach where the limits are checked via the Limits class and processing of records beyond a certain point is simply cut off. Another approach used occasionally, when the data doesn't matter much, is to move the processing out of the trigger into a regularly scheduled class.
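The Limits-class cutoff could look like this sketch (the threshold of 10 is an arbitrary example, and `BGBK__Payment__c` is an assumed object name):

```apex
// Sketch: stop processing when the SOQL query limit gets close,
// using the standard Limits class methods
for (BGBK__Payment__c p : payments) {
    // Bail out when within 10 queries of the per-transaction SOQL limit
    if (Limits.getQueries() > Limits.getLimitQueries() - 10) {
        break; // defer or drop the remaining records
    }
    // ... processing that may consume SOQL queries ...
}
```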

 

Anyhow, good luck.