thunksalot

Suddenly getting "Too many script statements: 200001" without having changed anything

I deployed a class *successfully* on 1/14/2012.  Seven days later, I discovered that when I try to deploy anything at all, even a class with only whitespace changes, the deployment fails with the message that the test for the class I deployed successfully on 1/14 has "Too many script statements: 200001".  How can that be?  How can it have passed its test and deployed successfully ten days ago, yet now I can't deploy anything because that same test is failing, when I haven't changed it in any way?!

The worst part is that, aside from blocking me from deploying anything (which I discovered on Friday), one of my users discovered yesterday that the button that calls this class now produces the same "Too many script statements: 200001" error.  So I now have business processes being interrupted.  Which is really, really strange, because right after I *successfully* deployed that class on 1/14, I used that button to *successfully* process far more records (180) than my users are processing when they hit the error (1-2).

 

Anybody have any idea what could have changed between when it was deploying and working fine and now?  I'm totally at a loss to explain what could have changed.  I have all the info about how to deal with this error, in general, such as using @future and reducing for loops, but I want to know how the code could have become broken *after* deploying successfully and running correctly under much greater stress than it is failing under now.  That is the most disconcerting thing to me.

Thanks for any clues!

 

Best Answer chosen by Admin (Salesforce Developers) 
sfdcfox

There are a number of possible solutions, but you'd almost need a debug log to figure it out for sure. The way I see it, you might have run into one of these conditions:

 

1) A new trigger was added afterwards that deployed okay on its own, but pushes you over the limit when it fires as a result of your class running. As an example, given triggers A and B: A deploys fine and does its heavy work only when certain conditions are met, and B does the same under its own conditions. As long as both sets of conditions aren't met at once, the transaction stays below 200,000 script statements; when both are met, it fails.

 

2) A workflow rule with a field update was added, or one already existed but was never exercised before deployment. It may be that a workflow rule only fires under certain conditions and you never tested a situation that invoked it. This is important, because a field update causes triggers to run a second time on the affected records (a recursive call); see the re-entry guard sketch further down.

 

3) You have a very limited amount of related data in your tests (say, a test account with virtually no data), while your live data contains significantly more. Or perhaps a large amount of data was imported by a user later and that caused the issue. Try running your code against different sample records and see if it makes a difference. This is especially true when you have loops inside loops.

 

For example:

 

for (Integer x = 0; x < 100000; x++)
  for (Integer y = 0; y < 2; y++)
    // one line of code here runs 200,000 times.

 

If you read this, you'll see that you'd run right up to 200,000 script statements in two lines of code. It's obvious here, but consider this:

 

for (SObject a : alist)
  for (SObject b : blist)
    if (a.get('somefield__c') == b.get('somefield__c'))
      // ...

 

If alist contains 1,000 elements and blist contains 1,000 elements, this loop requires at minimum 1,000,000 executed statements, five times the limit. You may have tested your code on 180 records with only 10-20 related items each (180 * 20 is only 3,600 statements), while your users might have invoked a different multiplier, such as 2 records times 25,000 rows, with 8 statements of executed logic per iteration (400,000 script statements, double the limit). This is usually my first suspect when I see something like this crop up.
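The usual fix for that multiplication is to index one of the lists by the matching field before you loop, so the inner scan goes away. A rough sketch using the same hypothetical somefield__c from above (String keys just to keep it generic):

Map<String, List<SObject>> byField = new Map<String, List<SObject>>();
for (SObject b : blist) {
    String key = String.valueOf(b.get('somefield__c'));
    if (!byField.containsKey(key)) {
        byField.put(key, new List<SObject>());
    }
    byField.get(key).add(b);
}
// Each 'a' now costs a few statements instead of a full scan of blist.
for (SObject a : alist) {
    List<SObject> matches = byField.get(String.valueOf(a.get('somefield__c')));
    if (matches != null) {
        // process only the rows that actually match
    }
}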

 

Finally, you might have a rare condition where a loop simply doesn't terminate in some situations, so you might need to see if you have a possibly-infinite loop in there.
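On point 2 above, the usual way to keep a workflow-driven field update from making a trigger do its heavy work twice is a static re-entry flag checked at the top of the trigger. A rough sketch (the class, trigger, and object names here are made up for illustration, not taken from your org):

public class TriggerGuard {
    // Static variables last for the whole transaction, so the second,
    // workflow-driven invocation of the trigger will see this flag already set.
    public static Boolean alreadyRan = false;
}

trigger OpportunityHeavyWork on Opportunity (after update) {
    if (TriggerGuard.alreadyRan) {
        return; // second pass caused by the field update: skip the expensive logic
    }
    TriggerGuard.alreadyRan = true;
    // ...expensive processing runs only once per transaction...
}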

 

In short, we don't have enough information here to pin it down for you, but your debug logs should.
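One more trick while you're reading those logs: drop debug checkpoints around the suspect loops so the log shows how many statements each block consumes. On API versions from that era the Limits class exposed script-statement counters; if yours doesn't, the same pattern works with the other Limits methods (getQueries, getDmlStatements, and so on). A sketch:

// Hypothetical checkpoints: put one before and one after the loop you suspect.
System.debug(LoggingLevel.ERROR,
    'Before loop: ' + Limits.getScriptStatements() + ' of ' +
    Limits.getLimitScriptStatements() + ' script statements used');

// ... the suspect loop goes here ...

System.debug(LoggingLevel.ERROR,
    'After loop: ' + Limits.getScriptStatements() + ' of ' +
    Limits.getLimitScriptStatements() + ' script statements used');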

All Answers

thunksalot

Thanks for replying, sivaext, but I'm afraid neither of those links answers my question: what could have changed between when it was deploying and working fine and now?

 

I already have all the info you linked to about how to reduce script statements in general, but I want to know how the code could have become broken *after* deploying successfully and running correctly under much greater stress than it is failing under now.  The fact that that could happen - when I haven't deployed any other code or even added any workflow rules - is what is most disconcerting to me.

thunksalot

Holy cow!  You rock my socks, sfdcfox!!  Your answer is *exactly* what I was looking for.  

 

I'm going to run through and investigate all four of those.  Thank you, thank you!

sfdcfox
If you come across a possible cause, or you have any further questions, just go ahead and let us know, and we'll see if we can't help you diagnose the problem.
thunksalot

Well, I think I finally found the cause, and it might be helpful for others to hear about...

 

I recently added dozens of new products and added each of them to several pricebooks.  That resulted in 600 new pricebook entries.  It turns out that a piece of code that loops over a map of all the pricebook entry records is eating up all my statements now that there are so many of them.  Clearly, that code needs to be refactored.

 

What's interesting is that I wasn't seeing this looping in the log file, which is why I was having so much trouble isolating the problem.  After reading sfdcfox's message and sleeping on it for a night, it finally occurred to me that it might have to do with the new products and pricebook entries.  This morning, I looked in the log file for the place where the pricebook entry processing code gets executed.  This is what I found!

 

10:50:13.212 (212517000)|SOQL_EXECUTE_END|[53]|Rows:680
*** Skipped 186942184 bytes of detailed log
10:50:40.702 (27702600000)|SYSTEM_METHOD_EXIT|[64]|Id.compareTo(Id, Boolean)

So, moral of the story is that if you are ever tearing your hair out looking for excessive looping in your log files but aren't seeing anything, try searching the log file for "*** Skipped"!

sfdcfox
That's a new trick (truncating in the middle of the log, instead of the end). I'm sure we've all learned a moral here. That said, just remember that maps are your friend. A map can contain other maps, and that's a very useful structure for storing product/pricebook entry values. You can move the mapping process outside the main loop and solve the problem.
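For what it's worth, here's roughly what that looks like for your pricebook case: build the nested map once, outside the main loop, then do cheap lookups inside it instead of re-scanning all 600+ entries every time. Object and variable names here are illustrative, since I haven't seen your class:

// Pricebook2Id -> (Product2Id -> PricebookEntry), built once before the main loop
Map<Id, Map<Id, PricebookEntry>> entriesByPricebook = new Map<Id, Map<Id, PricebookEntry>>();
for (PricebookEntry pbe : [SELECT Pricebook2Id, Product2Id, UnitPrice FROM PricebookEntry WHERE IsActive = true]) {
    if (!entriesByPricebook.containsKey(pbe.Pricebook2Id)) {
        entriesByPricebook.put(pbe.Pricebook2Id, new Map<Id, PricebookEntry>());
    }
    entriesByPricebook.get(pbe.Pricebook2Id).put(pbe.Product2Id, pbe);
}

// Inside the main loop, one lookup per record instead of a pass over every entry.
Id somePricebookId;   // stand-in: whatever pricebook your class is working on
Id someProductId;     // stand-in: whatever product it is matching
Map<Id, PricebookEntry> forPricebook = entriesByPricebook.get(somePricebookId);
PricebookEntry match = (forPricebook == null) ? null : forPricebook.get(someProductId);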
thunksalot
Thanks again for all your help.