Hi All,

I've recently started having issues with production code whereby an overnight batch job began failing with "Batchable instance is too big". I suspected the heap size might at some point have exceeded its limit, since the batch holds a stateful shared map that accumulates data across execution blocks. On inspection, however, the error was thrown when the heap had reached only about 42% (roughly 5.4 million bytes) of its capacity. Confused, I called all of the Limits class methods at the end of the execute block to see whether anything had exceeded its limit, yet the only thing even remotely alarming was the heap size, which, again, was not over the limit.
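
For reference, the diagnostics at the end of the execute block were along these lines (a minimal sketch, not the exact production code):

  // Log the governor limits that seemed most relevant at the end of execute().
  System.debug(LoggingLevel.Error, 'Heap: ' + Limits.getHeapSize() + ' / ' + Limits.getLimitHeapSize());
  System.debug(LoggingLevel.Error, 'CPU: ' + Limits.getCpuTime() + ' ms / ' + Limits.getLimitCpuTime() + ' ms');
  System.debug(LoggingLevel.Error, 'Queries: ' + Limits.getQueries() + ' / ' + Limits.getLimitQueries());
  System.debug(LoggingLevel.Error, 'DML rows: ' + Limits.getDmlRows() + ' / ' + Limits.getLimitDmlRows());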

In response I tried to replicate the issue with a simple example. The class I created is shown below:

  global class TestBatch implements Database.Batchable<sObject>, Database.Stateful
  {
      // Stateful map that keeps growing across execution blocks.
      global Map<Integer, Integer> globalMap = new Map<Integer, Integer>();

      global Database.QueryLocator start(Database.BatchableContext bc)
      {
          return Database.getQueryLocator('SELECT Id FROM Account');
      }

      global void execute(Database.BatchableContext bc, List<sObject> objects)
      {
          Integer i = globalMap.size();
          final Integer interval = 1000;
          // Add 100,000 entries per execution block, logging heap usage periodically.
          for (Integer x = 0; x < 100000; ++x)
          {
              globalMap.put(i, i);

              if (Math.mod(x, interval) == 0)
              {
                  System.debug(LoggingLevel.Error, 'Limits.getHeapSize(): ' + Limits.getHeapSize());
                  System.debug(LoggingLevel.Error, 'Limits.getLimitHeapSize(): ' + Limits.getLimitHeapSize());
              }

              ++i;
          }
      }

      global void finish(Database.BatchableContext bc)
      {
      }
  }

This code should compile without issue in any sandbox. If you then run the following statement as anonymous Apex (for example from the Developer Console):

  Database.executeBatch(new TestBatch(), 1);

It should iterate through a few times, assuming there are a few Accounts in the org. In my case, after two successful execution blocks the system started complaining with the same error, even though the heap size had only reached approximately 2.7 million bytes.
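
For anyone trying to reproduce the numbers, one way to gauge how much state the instance carries between execution blocks is to serialize the stateful map (just a sketch; the JSON length is only a rough proxy for whatever the platform measures internally when it raises this error):

  // Rough estimate of the state held by the batch instance between execute() calls.
  // The JSON length is only an approximation of the internal serialized size.
  System.debug(LoggingLevel.Error,
      'globalMap entries: ' + globalMap.size()
      + ', approx. serialized size: ' + JSON.serialize(globalMap).length() + ' characters');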

Does anyone have any thoughts as to why Salesforce complains about a large instance when the heap hasn't even reached halfway to its enforced limit?