JayNic

Architectural discussion - Apex Limits during TEST runs

Hey all,

 

I'm posting here hoping to learn a great deal and to find some expert feedback and ideas.

 

It's always been in the back of my mind that the more complex an application gets, the more "setup" data has to be populated during test classes. It seems to me that no matter how efficiently I code, follow best practices, and come up with interesting solutions, I will eventually hit a wall when it comes to getting (what Salesforce deems) the necessary code coverage.

 

My team and I have been building an application for Supply Chain Management. It's a complex beast with many objects and a great deal of automation and Apex-driven business logic. The two main application flows facilitate the purchasing and selling of items.

 

This means we have complex systems surrounding pricebooks for vendors and customers, warehouse/location validation, item databases, bill of materials management, sales order/purchase order management, and inventory ins/outs. A great deal has to happen for a user to reach the final stage of one of the two main application flows. None of it is meant to happen all at once; it happens over time through a number of different user interfaces - this is basic application design for complex systems 101.

 

My issue comes when covering my code. In order to cover classes that run at the end of one of the two flows, I essentially have to set up the entire organization. That means huge numbers of DML statements and queries that all have to occur in a single transaction, because tests require this!

 

My question to you all is this: what secrets/tips do you have for developing complex applications and avoiding governor limits inside test classes? How do you avoid or mitigate the unnecessary overhead of refactoring your architecture just for the needs of silly test classes that are ignored by clients during installation anyway?

 

Before you consider referring me to articles about "how to make your triggers batchable" and so on - know that I already do. There is not a single method in my entire application that uses DML or queries and is not batchable. I use maps and sets everywhere, to the point of obsession.
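For anyone unfamiliar with the term, this is the shape of "batchable" (bulkified) code being described - a minimal sketch with illustrative object and field names, as it would appear inside a Contact trigger:

Set<Id> accountIds = new Set<Id>();
for (Contact c : Trigger.new) {
    if (c.AccountId != null) accountIds.add(c.AccountId);
}
// One query serves the whole batch; a Map replaces per-record queries.
Map<Id, Account> accountsById = new Map<Id, Account>(
    [SELECT Id, Name FROM Account WHERE Id IN :accountIds]);
for (Contact c : Trigger.new) {
    Account parent = accountsById.get(c.AccountId);
    // per-record logic here; no queries or DML inside the loop
}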

 

I look forward to what you all have to offer.

zachbarkley

Hi Jay,

 

We LOVE code coverage: it helps us identify breaks when development in one area breaks another, which is the intention of test methods. So the code coverage "forced" upon the community is really pushing the community toward best practices in development.

 

However, I hear your grief, as we have the same issue, and unfortunately I cannot give you a solution today.

 

However, one hack is to take the test code that sets up data for your whole solution (it might then go on to test a Visualforce page, for example), copy the test method a few times, and test just a few sections at a time using Test.startTest and Test.stopTest:

 

private static testMethod void test1() {
    Test.startTest();
    insertTestAccounts();
    Test.stopTest();
    insertTestObject1();
    insertTestObject2();
}

private static testMethod void test2() {
    insertTestAccounts();
    Test.startTest();
    insertTestObject1();
    Test.stopTest();
    insertTestObject2();
}

private static testMethod void test3() {
    insertTestAccounts();
    insertTestObject1();
    Test.startTest();
    insertTestObject2();
    Test.stopTest();
}

 

I hope this helps. It's a bit of duplication, but at least the governor limits are counted separately for the statements between start and stop, and the logic of the entire test method is still being exercised.

 

 

JayNic

Interesting. I knew that Test.startTest/stopTest essentially "doubled" your governor limits: outside the start/stop block you have one set of governor limits, and inside the block you have another set.

 

When you separate your tests into multiple methods like in your example, do the limits apply per method, or per execution of the entire class?

zachbarkley

So each test method has its own governor limits. If you use helper methods, like I have, to insert data, that data is only inserted once per test method, so do that just to keep your methods tidy and manageable.
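A minimal sketch of what such a data-insertion helper might look like (insertTestAccounts matches the name used in the test methods above, but the body here is illustrative):

// Illustrative helper: each test method calls this once, and the DML
// it performs counts against that test method's own limits.
private static void insertTestAccounts() {
    List<Account> accounts = new List<Account>();
    for (Integer i = 0; i < 5; i++) {
        accounts.add(new Account(Name = 'Test Account ' + i));
    }
    insert accounts; // one bulk DML statement, not one per record
}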

 

Yes, you must ensure your code is "batchified"... but what I generally do is:

 

1) Run the tests.

2) Find the first "Too many SOQL queries: 101" error; it might be, say, on the 6th object you're inserting data for.

3) Copy the test method, then in your original test method put a start and stop around the 1st through 6th inserts.

4) In the copied test method, put a start before the 7th object and a stop at the end.

5) Run the tests again.

6) You'll find that, say, the 11th object insert gives you another 101, so put your start and stop around the 7th through 11th.

 

and so on.

 

Why do we get so many SOQL statements when we're only inserting data into 6 objects? Probably because you have a whole lot of triggers that do other things on insert and update, so inserting data into, say, the Account object might burn 20 SOQL statements across the triggers it fires.

 

Future

Also have a look at future methods. If a trigger does work that doesn't have to run synchronously, run it async.

 

Example: I want to insert an account, and every time my account is inserted I want to create an opportunity, but I don't really need any info back from the opportunity. Well, put that in a future method.

 

Put this in your trigger or trigger handler (not inside the for statement!):

MyCustomObject_ASYNC.PR(Trigger.newMap.keySet());

 

 

And create this class

 

public class MyCustomObject_ASYNC {
    @future
    public static void PR(Set<Id> RI) {
        // Re-query the records in the async context.
        List<MyCustomObject__c> upRec = [SELECT Id, MyCustomField__c
                                         FROM MyCustomObject__c
                                         WHERE Id IN :RI];
        for (MyCustomObject__c R : upRec) {
            // do the async work here, e.g. create the opportunity
        }
    }
}

 

The only drawback is that you don't have Trigger.oldMap in this example, but you could bring old-map data in as required.
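Since @future methods can accept collections of primitives, one way to bring old-map data along is to pass just the field values you need. A hedged sketch (assuming MyCustomField__c is a number field; PRWithOld is an illustrative name, not part of the class above):

// In an after-update trigger: collect the old values...
Map<Id, Decimal> oldValues = new Map<Id, Decimal>();
for (MyCustomObject__c rec : Trigger.old) {
    oldValues.put(rec.Id, rec.MyCustomField__c);
}
MyCustomObject_ASYNC.PRWithOld(oldValues);

// ...and in the class: re-query and compare old versus new, async.
@future
public static void PRWithOld(Map<Id, Decimal> oldValues) {
    Set<Id> ids = oldValues.keySet();
    for (MyCustomObject__c rec : [SELECT Id, MyCustomField__c
                                  FROM MyCustomObject__c
                                  WHERE Id IN :ids]) {
        Decimal oldValue = oldValues.get(rec.Id);
        // compare oldValue to rec.MyCustomField__c and act on changes
    }
}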

 

This reduces the knock-on effect of limits building up within your triggers, which helps overall system performance and also helps break apart your governor limits in your production org... and it gives you a bigger section to put a start and stop around in test classes.

zachbarkley

One last thing... if you have time, spend 5 minutes putting debug statements at the beginning and end of your triggers, then look in the debug console to see just which triggers are causing so much headache. You just might find one or two are causing you grief, and you can work on moving some of that code into future methods.

 

System.debug('MyCustomObject START: # of Queries: ' + Limits.getQueries());

// my code in trigger

System.debug('MyCustomObject END: # of Queries: ' + Limits.getQueries());

 

Please note you only get 10 future method calls in a single Apex code execution, so use them wisely!

 

 

Jake Gmerek

One thing I like to do is strictly have one trigger per object and then use a handler class to control program flow. One advantage is that you can sometimes combine and reuse queries. For example, if you have 5 triggers on Contact and 3 of them need to query all the accounts, then with one trigger you can make the list of accounts a property of the handler class and save yourself two queries. You can also update that list once at the end of the trigger and save DML statements. You can see where it goes from there. Just wanted to throw that out there, as I am running into this big time in my current org.
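A minimal sketch of that handler pattern (all names are illustrative): the account map is queried once in the constructor and shared by every step, and changed records could be collected for a single update at the end.

public class ContactTriggerHandler {
    // Queried once per transaction and reused by every handler step.
    private Map<Id, Account> accountsById;
    private List<Contact> contacts;

    public ContactTriggerHandler(List<Contact> triggerNew) {
        contacts = triggerNew;
        Set<Id> accountIds = new Set<Id>();
        for (Contact c : contacts) {
            if (c.AccountId != null) accountIds.add(c.AccountId);
        }
        accountsById = new Map<Id, Account>(
            [SELECT Id, Name FROM Account WHERE Id IN :accountIds]);
    }

    public void run() {
        doFirstThing();  // formerly trigger #1, reuses accountsById
        doSecondThing(); // formerly trigger #2, reuses accountsById
        // update accountsById.values() once here if steps changed them
    }

    private void doFirstThing() { /* uses accountsById */ }
    private void doSecondThing() { /* uses accountsById */ }
}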

JayNic

Great tips guys!

 

@Jake Gmerek

Yes, I quickly realized that I needed a centralized business-logic layer per object that would let me intelligently grab related records no more than once, and update the same list to keep memory clean and up to date. I went a step further and built my own serializable "trigger" class that I call "recordContext", which holds all the trigger context variable information plus some helper functions. It can also be used with a standard controller: it clones the record passed in on instantiation and places the clone in the "trigger.old" property, so I can run all of my old-versus-new logic in memory without issuing DML on an overridden page. (I'm pretty proud of that one.)
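The post doesn't include the actual class, but a minimal sketch of the recordContext idea as described might look like this (names and details are guesses, not the author's code):

public class RecordContext {
    public SObject newRecord { get; private set; }
    public SObject oldRecord { get; private set; }

    // Trigger entry point: wrap the real old/new pair.
    public RecordContext(SObject oldRec, SObject newRec) {
        oldRecord = oldRec;
        newRecord = newRec;
    }

    // Standard-controller entry point: clone the record on
    // instantiation so old-versus-new logic runs in memory, no DML.
    public RecordContext(SObject current) {
        newRecord = current;
        oldRecord = current.clone(true, true, true, true);
    }

    // Helper: has this field changed between old and new?
    public Boolean changed(Schema.SObjectField field) {
        return oldRecord == null || oldRecord.get(field) != newRecord.get(field);
    }
}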

 

Having one trigger also makes it immeasurably easier to control my order of execution.

When it comes to managed packages: we had to write our own validation rule engine in Apex, because clients are permitted to simply turn off a managed package's validation rules (same with workflow rules)... so clearly seeing the execution order is a must with all that compounded code now needed.

 


zachbarkley

Guys, do you use one trigger per object covering both before and after events? I split mine into separate before and after triggers.

sfdcfox

Here are some tips to help you get started:

 

1) Don't put your logic directly in triggers. A better solution is to call utility classes from your triggers. It looks like this:

 

trigger X on Y (events) {
  Y_Utility.handleTriggerEvent(Trigger.old, Trigger.new);
}

This lets you isolate the actual logic in classes that can be tested independently. For bonus points, you can bypass the trigger's logic when running a DML-level test (while still making sure the trigger itself gets the required 1% coverage). Such a trigger can look like this:

 

trigger X on Y (events) {
 if(!Test.isRunningTest())
   Y_Utility.handleTriggerEvent(Trigger.old, Trigger.new);
}

This design lets you shortcut the normal flow so you can set the system into a known test state with as few calls as possible.

 

2) Use caching to improve performance. Static variables persist across sub-transactions within a transaction, so you can "cheat" by caching query results. Combined with the previous tip, you can sometimes simulate entire databases in memory without resorting to other methods. For example, say you load accounts in most of your functions. At the cost of a small performance penalty, you can reduce the number of query calls by storing the results the first time they are queried. A simple version looks like this:

 

public with sharing class Cache {
    // One shared cache per transaction: static state persists across
    // sub-transactions within the same request.
    static Map<Id, SObject> records = new Map<Id, SObject>();
    // Register one query class per SObject type you want cached.
    static Map<SObjectType, Type> types = new Map<SObjectType, Type> {
        Account.SObjectType => AccountQuery.class
    };

    public abstract class SObjectQuery {
        public abstract SObject[] results(Set<Id> ids);
    }

    public class AccountQuery extends SObjectQuery {
        public override SObject[] results(Set<Id> ids) {
            return [SELECT Id, Name, Industry FROM Account WHERE Id IN :ids];
        }
    }

    public static SObject[] load(Set<Id> recordIds) {
        SObject[] results = new SObject[0];
        // Only query the ids we haven't already cached.
        Set<Id> missingIds = new Set<Id>(recordIds);
        missingIds.removeAll(records.keySet());
        while (!missingIds.isEmpty()) {
            Id[] tempIds = new List<Id>(missingIds);
            // Instantiate the query class registered for the first
            // missing Id's type; loop again if other types remain.
            SObjectQuery query =
                (SObjectQuery) types.get(tempIds[0].getSObjectType()).newInstance();
            records.putAll(query.results(tempIds));
            missingIds.removeAll(records.keySet());
        }
        for (Id recordId : recordIds) {
            results.add(records.get(recordId));
        }
        return results;
    }
}

Using this design consistently results in fewer queries, because each record is only queried once per transaction. Even better performance gains are possible, but that's really outside the scope of this response (it's fun, though). Just remember that a cache can grow stale, so you might need a way to purge records mid-transaction if you know that a certain piece of code will update data that is already in the cache (watch out for caching errors; they can cause unexpected behavior).
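A quick usage sketch (the account ids are illustrative): the second call is served entirely from the static map, so it costs no additional queries.

Set<Id> accountIds = new Map<Id, Account>(
    [SELECT Id FROM Account LIMIT 10]).keySet();

SObject[] first = Cache.load(accountIds);  // runs 1 query
SObject[] second = Cache.load(accountIds); // 0 queries - cache hit
for (SObject so : second) {
    Account acct = (Account) so; // cast each element back as needed
}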

 

3) Use Test.startTest to double your limits. Make sure you're using this method to your advantage.

 

4) Test.loadData. This method creates test records from a CSV file stored as a static resource. It can save you execution time and possibly DML statements (need to test that part).
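A sketch of how it's used, assuming a static resource named testAccounts that contains a CSV of Account rows (the resource name and row count are illustrative):

@isTest
static void loadsAccountsFromStaticResource() {
    // Creates the records in one call from the CSV static resource.
    List<SObject> accounts = Test.loadData(Account.sObjectType, 'testAccounts');
    System.assertEquals(3, accounts.size()); // 3 = rows in the CSV
}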

 

5) Test individual units as much as practical without using DML statements. This ties back into the first point: if you can get to a stable point with as few queries/statements as possible, you'll have more time/queries left to test the actual logic.
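To make that concrete, a hedged sketch of a DML-free unit test, using Account to stand in for Y and the Y_Utility class from point 1:

@isTest
static void handlesFieldChangeInMemory() {
    // Build old/new versions in memory; no insert means no trigger
    // stack, no SOQL, and none of the org-wide setup described above.
    Account oldVersion = new Account(Name = 'Before');
    Account newVersion = new Account(Name = 'After');
    Y_Utility.handleTriggerEvent(
        new List<Account>{ oldVersion },
        new List<Account>{ newVersion });
    // assert on the in-memory results here
}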