JWS

Help with error -- System.LimitException: Too many script statements: 200001

I've seen several questions arise about this and ran into it too when trying to import data via the Data Loader, and wanted to see if I could get some help.  First...

 

Business Context:

--------------------------

I have a cascading roll-up that needs to happen within the following hierarchy of objects:

 

Organization (account object)

Bill Account (custom)
Premise (custom)

 

We have premises (think "your house") and bill accounts (you living AT a particular premise).  There is a many-to-one relationship between premises and bill accounts -- you can have multiple electrical connection points as a customer.  However, we may also have a need to roll several bill accounts up to what we call an "organization".  So an organization of "McDonalds" may have many bill accounts.


We want to be able to see revenue aggregated all the way up the chain.  A trigger on Premise__c updates revenue, kilowatt hour usage, etc. on its parent Bill_Account__c.  From there, a trigger on Bill_Account__c fires and updates the Organization (Account) that it belongs to.

 

The Code

---------------

On Premise__c, I have this trigger:

 

trigger updateBillAccountSummedUsage on Premise__c (after insert, after update, after delete, after undelete) {
// This trigger will update a billAccount record with values found on the related Bill Accounts

// First, we create a Set to hold the Premise IDs in the trigger and a List to hold the Bill Accounts which will be updated
    Set<Id> premiseIdSet = new Set<Id>();
    List<Bill_Account__c> billAccountUpdateList = new List<Bill_Account__c>();

// Next, we go through either trigger.old (in the case of deletes) or trigger.new
// (for inserts, updates, or undeletes).  When going through the trigger, we build the set of Premise IDs
// found within the trigger.

// If this is a delete or update trigger, pull them from trigger.old, otherwise from trigger.new
    if(Trigger.isDelete || Trigger.isUpdate)
        for(Premise__c p : Trigger.old){
            premiseIdSet.add(p.Id);
        }
    if(Trigger.isInsert || Trigger.isUpdate || Trigger.isUndelete)
        for(Premise__c p : Trigger.new){
            premiseIdSet.add(p.Id);
        }
   
// Now we query for all of the Bill Accounts associated with the Premises in the trigger, using the set of Ids created above
    List<Bill_Account__c> billAccountList = [Select b.Premise_Number__r.Rolling_12_Month_kWh_Total__c, b.Premise_Number__r.Rolling_12_Month_Revenue_Total__c, b.Premise_Number__r.Rolling_12_Month_Peak_Demand__c, b.Premise_Number__r.Id, b.Premise_Number__c, b.Name, b.Id From Bill_Account__c b where b.Premise_Number__c IN :premiseIdSet];

// We iterate through the Bill Accounts and copy over the rolled-up values from each one's related Premise.
    for(Bill_Account__c b : billAccountList){

    // Set the billAccount record's fields with the newly summed totals.
    
    b.Rolling_12_Month_kWh_Total__c = b.Premise_Number__r.Rolling_12_Month_kWh_Total__c;
    b.Rolling_12_Month_Revenue_Total__c = b.Premise_Number__r.Rolling_12_Month_Revenue_Total__c;
    b.Rolling_12_Month_Peak_Demand__c = b.Premise_Number__r.Rolling_12_Month_Peak_Demand__c;
        
    billAccountUpdateList.add(b);
    }
    
    if(!billAccountUpdateList.isEmpty()) update billAccountUpdateList;

}

 

On Bill_Account__c, I have this trigger to go off and update the Organization (Account) record:

trigger updateOrganization on Bill_Account__c (after insert, after update, after delete, after undelete) {

// This trigger will update an Organization (Account) record with values found on the related Bill Accounts

// First, we create a Set to hold the Organization IDs from the Bill Accounts in the trigger and a List to hold the Organizations which will be updated
    Set<Id> organizationIdSet = new Set<Id>();
    List<Account> organizationUpdateList = new List<Account>();

// Next, we go through either trigger.old (in the case of deletes) or trigger.new
// (for inserts, updates, or undeletes).  When going through the trigger, we build the set of Organization IDs
// found within the trigger.

// If this is a delete or update trigger, pull them from trigger.old, otherwise from trigger.new
    if(Trigger.isDelete || Trigger.isUpdate)
        for(Bill_Account__c ba : Trigger.old){
            if(ba.Organization__c != null)
                organizationIdSet.add(ba.Organization__c);
        }
    if(Trigger.isInsert || Trigger.isUpdate || Trigger.isUndelete)
        for(Bill_Account__c ba : Trigger.new){
            if(ba.Organization__c != null)
                organizationIdSet.add(ba.Organization__c);
        }
   
    // Now we query for all of the organizations associated with the Bill Accounts in the trigger using the above created set of Ids
    List<Account> organizationList = [select o.Name, o.Id, (select Bill_Account_Status__c, Rolling_12_Month_kWh_Total__c, Rolling_12_Month_Revenue_Total__c, Rolling_12_Month_Peak_Demand__c, Name, Id, Division__r.External_Division_ID__c, Division__r.Name, Premise_City__c From Bill_Accounts__r where Bill_Account_Status__r.External_Status_ID__c = '02') From Account o where o.Id IN :organizationIdSet];

    for(Account o : organizationList){
        Decimal totAnnualkWh = 0;
        Decimal totAnnualRev = 0;
        Decimal peakDemand = 0;
        String divList, locList;
        Integer i=0,j=0;
        Set<String> uniqueDivisionSet = new Set<String>();
        Set<String> uniqueLocationSet = new Set<String>();
        Map<String, String> divMap = new Map<String,String>();
        dataClean dc = new dataClean();        

        // Sum the revenue and kWh and check to see if the current record has the highest peak demand of all of the Bill Accounts        
        for(Bill_Account__c ba : o.Bill_Accounts__r){
            totAnnualRev += ba.Rolling_12_Month_Revenue_Total__c;
            totAnnualkWh += ba.Rolling_12_Month_kWh_Total__c;
            if (ba.Rolling_12_Month_Peak_Demand__c >= peakDemand)
                peakDemand = ba.Rolling_12_Month_Peak_Demand__c;

            // We have a List of Division Names and IDs, but need to put them in a Map structure to sort by Division number.
            if (ba.Division__r.Name!=null){
                divMap.put(ba.Division__r.External_Division_ID__c,ba.Division__r.Name);
                uniqueDivisionSet.add(ba.Division__r.Name);
            }
            
            uniqueLocationSet.add(dc.toTitleCase(ba.Premise_City__c));
        }           
        List<String> sortedLocList = new List<String>();
        sortedLocList.addAll(uniqueLocationSet);
        sortedLocList.sort();
        for (string locName:sortedLocList){
            if (sortedLocList.size()>7){
                locList = sortedLocList.size() + ' locations around the state.';
            }else{ 
                if (j==0){ 
                    locList = locName;
                }else if (j==sortedLocList.size()-1){
                    locList = locList + ' and ' + locName;
                }else {
                    locList = locList + ', ' + locName;
                }
            }
            j++;
        }
        
        //We'll add all of the Map keys to a list so we can sort them.
        List<String> sortedDivIDList = new List<String>();
        sortedDivIDList.addAll(divMap.keySet());
        sortedDivIDList.sort();

        //Run through the sorted list and grab the Value associated with the newly-sorted Keys.
        i=0;
        for(string divName : sortedDivIDList){
            if (i==0){ 
                divList = divMap.get(divName);
            }else if (i==(sortedDivIDList.size()-1)){ // compare against the list we're iterating, not the name set
                divList = divList + ' and ' + divMap.get(divName);
            }else {
                divList = divList + ', ' + divMap.get(divName);
           }
            i++;
        }

  // Set the organization record's fields with the new values.
    o.Divisions__c = divList;        
    o.Bill_Account_Locations__c = locList;
    o.Rolling_12_Month_kWh_Total__c = totAnnualkWh;
    o.Rolling_12_Month_Revenue_Total__c = totAnnualRev;
    o.Rolling_12_Month_Peak_Demand__c = peakDemand;
        
    organizationUpdateList.add(o);
    }
    
    if(!organizationUpdateList.isEmpty()) update organizationUpdateList;

}

 

(Note that in the above code there are a few extra bits which, for every Organization being updated, build a string out of values found in its child records, notably a list of our divisions and a list of the cities that the bill accounts are in.)

 

When I do a 12k record premise upsert through the Data Loader, I get the "System.LimitException: Too many script statements: 200001" error.  I have a feeling that it has to do with the 3-level deep trigger firing that is going on, but can't really pinpoint where to optimize the "for" loops or SOQL.


I'd greatly appreciate any tips or thoughts on lines I could tune up to get past what is obviously "scripting gone wild".

 

Jamie

 

SeAlVa

Have you tried reducing the Bulk-size on your dataloader settings? It will make the load slower, but it might work.

 

Regards

JWS

Sergio, thanks for the reply.  I set it down to 50 from 200, and out of the 12k or so records, it gave the same error on about 400 records.  When I look at these premise records, it seems that the parent organizations (two levels up, remember) for these records have a fairly large number (150+) of bill accounts associated with them.

 

So when the premise updates the bill account, it's fine, since that's usually a 1- or 2-to-1 relationship size.  But when the bill account trigger fires to update the organization upstream, it's failing because there are a bunch of child records for that organization that it's updating, and I'm hitting the limit.

 

This upserting process was going to run nightly, but it seems that I'd have to back down the bulk-size so much that it might take a while every single night.


Any other thoughts would be appreciated.

Jamie

sfdcfox

I would suggest that you could vastly reduce the script usage by making one small change. Instead of depending on the child records to update their parents, the parents should calculate from their children. This will reduce the overall code size and the queries/DML required. I also suspect that you'll want to use aggregate functions when possible, to offload the heavy lifting onto the database server and out of your code. Something like this:

 

trigger updatefrompremise on premise__c (after insert, after update, after delete, after undelete) {
    set<id> keys = trigger.old==null?trigger.newmap.keyset():trigger.oldmap.keyset();
    update [select id from bill_account__c where id in (select bill_account__c from premise__c where id in :keys)];
}

 

trigger updatefrombillaccount on bill_account__c (after insert, before update, after update, after delete, after undelete) {
set<id> keys = trigger.old==null?trigger.newmap.keyset():trigger.oldmap.keyset();
  if(trigger.isbefore) {
    for(aggregateresult ar:[select bill_account__c, sum(kilowatt_hours__c) sumkh, ... from premise__c where bill_account__c in :keys group by bill_account__c]) {
         // plug totals into bill account records.
    }
  }
  update [select id from account where id in (select account__c from bill_account__c where id in :keys)];
}

 

trigger updatefromaccount on account (before update) {
   // aggregate results from bill_account__c records...
}

You'll need to change some filters, etc, etc, but as you can see, you should ideally be able to update all of your data in one fell swoop just by pulling the calculations from the other direction. If AggregateResult won't work for you, you can still do a normal child query and update the values in the before update event. Make sure you filter correctly, and don't use intermediate variables when possible.

 

For example, it looks like you could probably avoid a number of script statements by eliminating the variables declared immediately inside the for loops. Remember to take advantage of rvalues, such as:

 

        // Three script statements
        Decimal totAnnualkWh = 0;
        Decimal totAnnualRev = 0;
        Decimal peakDemand = 0;
        // One script statement
        Decimal totAnnualkWh = 0, totAnnualRev = 0, peakDemand = 0;

AggregateResult can let you find peakDemand, totalAnnualRev and totAnnualkWh instantly without looping through each individual premise, etc.
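As a sketch only (the field and relationship names are taken from the triggers above; the Bill_Account_Status filter and null handling are omitted for brevity), the per-Organization summing loop could collapse into a single grouped query:

```apex
// Sketch: one aggregate query replaces the manual summing/max loop.
// Add back your status filter as needed.
for (AggregateResult ar : [
        SELECT Organization__c org,
               SUM(Rolling_12_Month_Revenue_Total__c) totRev,
               SUM(Rolling_12_Month_kWh_Total__c) totKwh,
               MAX(Rolling_12_Month_Peak_Demand__c) peak
        FROM Bill_Account__c
        WHERE Organization__c IN :organizationIdSet
        GROUP BY Organization__c]) {
    // One row comes back per Organization; read each value by its alias.
    Id orgId = (Id) ar.get('org');
    Decimal totAnnualRev = (Decimal) ar.get('totRev');
}
```

The database does the arithmetic, so the loop runs once per Organization instead of once per Bill Account.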

 

Use ternary operators sparingly, but for best effect (such as the Trigger.new/Trigger.old bit demonstrated above). Take advantage of Map.putAll, Set.addAll, and List.addAll whenever possible; they count only as one script statement instead of potentially hundreds.
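For instance, using the location set from the trigger above (newCities here is a hypothetical source collection):

```apex
// Adding elements one at a time costs a script statement per iteration...
for (String city : newCities) {
    uniqueLocationSet.add(city);
}
// ...whereas addAll counts as a single statement regardless of collection size.
uniqueLocationSet.addAll(newCities);
```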

 

You can use queries in filters, as shown above. Use sparingly, because each does count towards the limit of 100, but when it means you can knock out a for loop in the process, it might be worth it.

 

Check your dataClean class, and make sure that toTitleCase isn't doing something silly, like looping through each character individually (hint: lowercase the entire string, split by spaces, uppercase the first character with substring, and join them back together; sprinkle in other logic, like not capitalizing "a", "the", etc., using a stop-word list).
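A rough sketch of that approach (the stop-word set is an assumed example, and String.join is assumed to be available in your API version; otherwise concatenate inside the loop):

```apex
// Sketch: title-case a phrase without per-character looping.
public static String toTitleCase(String phrase) {
    // Assumed stop-word list; extend with whatever filler words you skip.
    Set<String> stopWords = new Set<String>{'a', 'an', 'the', 'of', 'and', 'at'};
    List<String> words = new List<String>();
    for (String w : phrase.toLowerCase().split(' ')) {
        // Capitalize the first character unless the word is empty or a stop word.
        words.add(w.length() > 0 && !stopWords.contains(w)
            ? w.substring(0, 1).toUpperCase() + w.substring(1)
            : w);
    }
    return String.join(words, ' ');
}
```

This loops once per word rather than once per character, which matters when it runs inside a loop over every Bill Account.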

 

Since you're already at the maximum limit, by far, try using the profiler in the Debug Logs (Your Name > System Console) to determine the offending functions, and optimize them.

 

Finally, feel free to ask us, the community, for any snags that you run into that you can't seem to resolve yourself. We're here for you.

JWS

Thanks, sfdcfox.  Incredibly helpful post.  I need to get in the Sandbox and check this out in earnest.  It certainly is a shift in thinking for me to consider it backward.  I do worry, however, that the upper-level Organization might not have the most updated values by setting it top-down.

 

Question: If I have Organization O, then associate Bill Accounts BA1, BA2, and BA3 on those respective records, O will never get updated until something is edited on the O record, correct?  This was part of my thinking in always looking upward -- you're promoting the lower, newly-associated values upward to the parent.  Otherwise, what's the event to cause O to update values on associated records of BA?

 

I'll take a look and see if there are aggregate opportunities that might make things easier as well.  I think that might be my biggest optimization here if I think through it.

 

Curious: why use ternary operators sparingly?

 

The "toTitleCase" function that you mention does exactly as you described: splits a phrase, does a "toUpperCase" or "toLowerCase", and checks for the filler words (it also has some logic for "McDonald" and "O'Neal", for instance).

 

Anxious now to see if I can better this beast.  Thanks again for the input!

 

Jamie

 

sfdcfox

Question: If I have Organization O, then associate Bill Accounts BA1, BA2, and BA3 on those respective records, O will never get updated until something is edited on the O record, correct?  This was part of my thinking in always looking upward -- you're promoting the lower, newly-associated values upward to the parent.  Otherwise, what's the event to cause O to update values on associated records of BA?

 

The solution to that is in the example above. What happens is that BA1, BA2, and BA3, which may be in the same or separate trigger invocations (e.g. possibly with the Apex Data Loader, Import Wizard, or manual creation one at a time), will simulate an update on O, so even though no user manually edited the record, the trigger will be called regardless. This is known as a recursive trigger, where each trigger in the chain invokes the next trigger by calling a DML operation on the parent record. The example I gave above shows how it would work. The event that actually causes O to update is the event on BA1, BA2, or BA3.

 

Curious: why use ternary operators sparingly?


They're really condensed if-then-else statements, and if used sparingly, makes it really easy to condense code. However, consider the following:

 

Integer x = a==0?b==1?2:c==3?4:5:d==6?7:e==8?f==9?10:11:12;

This is the (relatively) legible version of the above:

 

if(a==0) {
  if(b==1) {
    x = 2;
  } else if(c==3) {
    x = 4;
  } else {
    x = 5;
  }
} else if(d==6) {
  x = 7;
} else if(e==8) {
  if(f==9) {
    x = 10;
  } else {
    x = 11;
  }
} else { 
  x = 12;
}

You can create very compact code with ternary operators, but legibility suffers. Generally, if you need one or sometimes two if-then-else statements in a row, consider a ternary operator, otherwise use a bulkier if-then-else statement to represent complex logic. One common idiom I use with ternary operators is as follows:

 

for(... :trigger.old==null?trigger.new:trigger.old) {
 ...
}

This allows me to use one list or the other, depending on the type of trigger that runs (e.g. a before insert or after delete). You can use this to remove an unsightly "if(trigger.old==null) ... else ..." where the body of each branch is the exact same loop run over a different list. It's a nifty language feature, but the potential for abuse (both against yourself six months later and whoever replaces you when you're promoted) means that you should use it only on an as-needed basis.

 

Again, I think aggregate queries might help you reduce your script statements significantly. You might not need them on both Bill Accounts and Organization (Account); I would focus on the lowest-hanging fruit first and see if that single optimization is sufficient.

JWS

sfdcfox wrote:

Question: If I have Organization O, then associate Bill Accounts BA1, BA2, and BA3 on those respective records, O will never get updated until something is edited on the O record, correct?  This was part of my thinking in always looking upward -- you're promoting the lower, newly-associated values upward to the parent.  Otherwise, what's the event to cause O to update values on associated records of BA?

 

The solution to that is in the example above. What happens is that BA1, BA2, and BA3, which may be in the same or separate trigger invocations (e.g. possibly with the Apex Data Loader, Import Wizard, or manual creation one at a time), will simulate an update on O, so even though no user manually edited the record, the trigger will be called regardless. This is known as a recursive trigger, where each trigger in the chain invokes the next trigger by calling a DML operation on the parent record. The example I gave above shows how it would work. The event that actually causes O to update is the event on BA1, BA2, or BA3.


But isn't this still upward-looking trigger firing?  Premise is the lowest object.  As I do inserts via the Data Loader, the Premise trigger fires and updates the Bill Account upstream.  The Bill Account update DML causes the Bill Account trigger to fire, which updates the Organization upstream from it.  This is essentially what I'm doing now, though.  Were you suggesting something else directionally, or just an optimization of the *way* I was going about it?

 

I really like your ternary use of trigger.old and trigger.new.  I'll start making that a standard practice.

sfdcfox

I tend to think of it as a matter of where to place the business logic. Ideally, each record should be responsible for its own data, following in the pattern of object-oriented programming.

 

Cons of using the "parent does the tallying" model include:

 

1) Unnecessary query at the parent level when no children are updated.

 

Pros of using the "parent does the tallying" model include:

 

1) Re-tallying all child records can be done simply by updating the parent records.

2) Memory usage reduced because the records are already in memory.

3) Script statement count reduced because records are already in memory.

4) Query statement count can be reduced because you don't need a query to update the parents.

5) Parent records are more likely to be updated consistently and correctly, since the logic is simplified.

6) Locating the logic for the tallying is simplified: one needs only look as far as the parent record (important for maintenance).

 

Generally speaking, making each record responsible for its own data is a good practice, and the pros of following this design are pretty major. Unless you're planning on having a large number of child records that are very infrequently updated, and parent records that are disproportionately modified more frequently than any of the children, it's probably an idea worth implementing.