EJW

Summer '14 bug - Limits.getScriptStatements() always reporting you've executed 200,000 script statements.

I've opened a case for this (10197829), but wanted to report it here as well in hopes of getting it noticed and resolved faster. I realize this limit is no longer enforced, but older code that checks whether the limit has been exceeded is now erroring out because of this issue. Yes, the code can be fixed, but in this case it is part of a managed package with several major versions in the wild, and fixing it would require patching and pushing updates to almost 30 different major releases.
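For illustration, this is the general shape of the guard code in question. It is a sketch, not the actual package code, and the class and method names are hypothetical; the point is that any check comparing getScriptStatements() against getLimitScriptStatements() now trips immediately, because both return 200,000:

```apex
// Hypothetical pre-Summer '14 guard pattern (names are illustrative).
public class LegacyGuard {
    // Leave ~10% headroom before the (formerly enforced) statement limit.
    public static Boolean nearStatementLimit() {
        return Limits.getScriptStatements() >
               (Limits.getLimitScriptStatements() * 9) / 10;
    }
}
```

In Summer '14, nearStatementLimit() returns true before a single statement of real work has run, so code that bails out or defers work based on this check stops processing entirely.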

To reproduce the issue, simply execute this in the developer console:

System.debug( Limits.getScriptStatements() );

You'll see that it returns 200,000, and the cumulative limit stats report that 200,000 of 200,000 script statements have been executed. See this log:

30.0 APEX_CODE,DEBUG;APEX_PROFILING,FINEST;CALLOUT,ERROR;DB,DEBUG;SYSTEM,DEBUG;VALIDATION,ERROR;VISUALFORCE,ERROR;WORKFLOW,ERROR
Execute Anonymous: System.debug( Limits.getScriptStatements() );
Execute Anonymous: System.debug( Limits.getLimitScriptStatements() );
09:04:43.038 (38268000)|EXECUTION_STARTED
09:04:43.038 (38284000)|CODE_UNIT_STARTED|[EXTERNAL]|execute_anonymous_apex
09:04:43.038 (38842000)|SYSTEM_METHOD_ENTRY|[1]|Limit.getScriptStatements()
09:04:43.038 (38906000)|SYSTEM_METHOD_EXIT|[1]|Limit.getScriptStatements()
09:04:43.038 (38930000)|SYSTEM_METHOD_ENTRY|[1]|System.debug(ANY)
09:04:43.038 (38945000)|USER_DEBUG|[1]|DEBUG|200000
09:04:43.038 (38953000)|SYSTEM_METHOD_EXIT|[1]|System.debug(ANY)
09:04:43.038 (38964000)|SYSTEM_METHOD_ENTRY|[2]|Limit.getLimitScriptStatements()
09:04:43.038 (38996000)|SYSTEM_METHOD_EXIT|[2]|Limit.getLimitScriptStatements()
09:04:43.039 (39008000)|SYSTEM_METHOD_ENTRY|[2]|System.debug(ANY)
09:04:43.039 (39019000)|USER_DEBUG|[2]|DEBUG|200000
09:04:43.039 (39027000)|SYSTEM_METHOD_EXIT|[2]|System.debug(ANY)
09:04:43.129 (39069000)|CUMULATIVE_LIMIT_USAGE
09:04:43.129|LIMIT_USAGE_FOR_NS|(default)|
  Number of SOQL queries: 0 out of 100
  Number of query rows: 0 out of 50000
  Number of SOSL queries: 0 out of 20
  Number of DML statements: 0 out of 150
  Number of DML rows: 0 out of 10000
  Number of code statements: 200000 out of 200000 ******* CLOSE TO LIMIT
  Maximum CPU time: 0 out of 10000
  Maximum heap size: 0 out of 6000000
  Number of callouts: 0 out of 10
  Number of Email Invocations: 0 out of 10
  Number of fields describes: 0 out of 100
  Number of record type describes: 0 out of 100
  Number of child relationships describes: 0 out of 100
  Number of picklist describes: 0 out of 100
  Number of future calls: 0 out of 10
Best Answer chosen by EJW
kaplanjosh
This was not a bug; this was the original design. We switched it to buy you time to change your code.  More details here: http://blogs.developerforce.com/engineering/2014/02/script-statement-hangover.html

All Answers

EJW
According to the documentation, this method (Limits.getScriptStatements()) should now always return 0, but instead it's returning 200,000.

Docs: http://www.salesforce.com/us/developer/docs/apexcode/index_Left.htm#StartTopic=Content/apex_System_Limits_getScriptStatements.htm?SearchType=Stem
Maros Sitko
I have this issue on my CS17 instance too. Has anyone contacted Salesforce?
EJW
Just the case I mentioned in the original post. I did get a generic response from SFDC support yesterday; hopefully I'll get an actual response today.
EJW
Just talked to support, and the rep confirmed it's a bug and is going to escalate it to R&D to be fixed. I asked him for an ETA if possible, but in general they don't tend to give out ETAs. If this isn't fixed before they start upgrading production orgs, this is going to become a major problem for us. If I get an ETA, I'll post it here.
EJW
So apparently this change was intentional, meant to force us to switch from checking script statement limits to checking CPU time limits, and it will not be reverted. Why they didn't make the change apply only to the new API version rather than to all code is beyond me. We fixed this quite a while ago in our current codebase, but we still have a large number of customers running old code, and pushing a major upgrade to them isn't an acceptable option: many customers have testing procedures they want to run before any upgrade reaches production, especially a major release upgrade.
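For anyone updating their code, the replacement check Salesforce is steering us toward looks like this. A minimal sketch, using the documented Limits.getCpuTime() and Limits.getLimitCpuTime() methods; the class name and the 90% threshold are my own choices, not anything prescribed:

```apex
// Sketch of the CPU-time-based equivalent of the old statement check.
public class CpuGuard {
    // Leave ~10% headroom before the per-transaction CPU time limit.
    public static Boolean nearCpuLimit() {
        return Limits.getCpuTime() > (Limits.getLimitCpuTime() * 9) / 10;
    }
}
```

Unlike getScriptStatements(), these two methods still return meaningful, distinct values, so the headroom check behaves as intended.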
kaplanjosh
This was not a bug; this was the original design. We switched it to buy you time to change your code.  More details here: http://blogs.developerforce.com/engineering/2014/02/script-statement-hangover.html
This was selected as the best answer
k.n.hage
FYI, the release notes have been updated with the new behavior of getScriptStatements (see "Behavior Change of Limits.getScriptStatements": https://help.salesforce.com/help/pdfs/en/salesforce_spring14_release_notes.pdf).

This change is for all API versions because script statements are no longer counted on the platform for all versions.
kibitzerkibitzer
I thought the solution they came up with was very good, but they really dropped the ball on the sudden change from getScriptStatements returning zero to returning the same value as getLimitScriptStatements, with no notice or warning. If they want us to stop using a deprecated function in a prior API version, they need to send out warnings and critical update notices over a long period, clearly explaining the change, so that people have time to update apps and audit their codebases. They did clickjack protection right, rolling it out over the course of more than a year, because auditing and updating orgs and applications takes time, and everyone knows it. The warnings need to be clear ("Your code will stop working if you don't do this; this is not a versioned change") and widespread (critical update warnings, emails, etc.).
The fact that the solution they came up with is excellent does not negate the fact that somebody really screwed up on this one. I hope they are reviewing their internal processes to prevent this from happening again.