I have an issue executing a report from Apex. I am getting the fatal error "Too many query rows: 50001" when I execute the method Reports.ReportManager.runReport(reportId, reportMetadata, includeDetails).

The report returns 13,000 contacts, but when I execute it from Apex it only returns the first 2,000 contacts. To get all the records, I execute the same report repeatedly, each time adding a new filter to exclude the contacts I have already retrieved: new Reports.ReportFilter('Contact.Id', 'greaterThan', lastContactId);
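
For reference, this is a simplified sketch of the loop I am using (reportId is the Id of the report; names like lastContactId are my own, the report is assumed to be tabular and sorted ascending by Contact.Id, and any pre-existing report filters are omitted for brevity):

Reports.ReportDescribeResult describe = Reports.ReportManager.describeReport(reportId);
Reports.ReportMetadata reportMetadata = describe.getReportMetadata();

String lastContactId = '';
Boolean moreRows = true;
while (moreRows) {
    // Exclude the contacts already retrieved in previous runs.
    List<Reports.ReportFilter> filters = new List<Reports.ReportFilter>();
    if (lastContactId != '') {
        filters.add(new Reports.ReportFilter('Contact.Id', 'greaterThan', lastContactId));
    }
    reportMetadata.setReportFilters(filters);

    Reports.ReportResults results =
        Reports.ReportManager.runReport(reportId, reportMetadata, true);
    System.debug('Query rows used: ' + Limits.getQueryRows()); // always prints 3

    // 'T!T' is the fact map key holding the detail rows of a tabular report.
    Reports.ReportFactWithDetails details =
        (Reports.ReportFactWithDetails) results.getFactMap().get('T!T');
    List<Reports.ReportDetailRow> rows = details.getRows();

    if (rows.isEmpty()) {
        moreRows = false;
    } else {
        // Assumes Contact.Id is the first column of the report.
        lastContactId = String.valueOf(rows[rows.size() - 1].getDataCells()[0].getValue());
        moreRows = (rows.size() == 2000); // synchronous runs return at most 2,000 detail rows
    }
}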

So if the report must return 13,000 records, I have to run it 8 times. Every time I run the report I check the limits, and I always get 3 from Limits.getQueryRows(); but when the report executes for the 5th time I get this exception: "System.LimitException: reports:Too many query rows: 50001".

My conclusion is that because the report has 13,000 records, it adds 13,000 rows to the query row limit every time the report is executed, so if the report had 50,000 records I would not be able to execute it even once. Yet if I check the limit with Limits.getQueryRows() I always get 3 rows, never 50,001.

Can anyone help me with this issue? Has anyone launched a report with more than 50,000 records from Apex?

Thank you
I have implemented a Visualforce page to upload a CSV file and a batch Apex class that reads the CSV and loads the data into multiple related custom objects. The Visualforce page allows me to upload a CSV of up to 10 MB, but when I try to parse this large file in Apex I run into the heap size governor limit.

I have implemented the parsing of the CSV file in batch Apex exactly as per the FinancialForce customization from the link below:

The post mentions:
You can create a batch process to read the file in chunks and define the number of lines to read for each chunk.
To do this, create a batch apex process where the scope size defines the number of records to read for each chunk. In the sample code that follows, lines from a CSV file are to be read rather than records. The start method returns an Iterable<String> that contains the lines to be processed in the execute method. Afterwards, the process reads the list of lines using the CSVReader in the same way as an online process.
global with sharing class ReadAndPopulateBatch implements Database.Batchable<String>, Database.Stateful {
    private String m_csvFile;
    private Integer m_startRow;
    private CSVParser m_parser;
    private static final Integer SCOPE_SIZE = 100;

    public ReadAndPopulateBatch() { .... }

    public static ID run() { .... }

    global Iterable<String> start(Database.BatchableContext batchableContext) {
        return new CSVIterator(m_csvFile, m_parser.crlf);
    }

    global void execute(Database.BatchableContext batchableContext, List<String> scope) {
        //TODO: Create a map with the column name and the position.
        String csvFile = '';
        for (String row : scope) {
            csvFile += row + m_parser.crlf;
        }
        List<List<String>> csvLines = CSVReader.readCSVFile(csvFile, m_parser);
        //TODO: csvLines contains a list with the values of the CSV file.
        //This information will be used to create a custom object to process it.
    }

    global void finish(Database.BatchableContext batchableContext) { ...... }
}
Although this post recommends reading the file in chunks, it doesn't explain how to do so. It defines a variable private static final Integer SCOPE_SIZE = 100; but doesn't actually use it in the example provided.
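
My assumption is that SCOPE_SIZE was meant to be passed as the scope parameter of Database.executeBatch inside the elided run() method, so that each execute() call receives 100 lines, something like this (my own guess, not code from the post):

public static ID run() {
    // Assumption: the scope size caps the number of CSV lines
    // handed to each execute() invocation.
    return Database.executeBatch(new ReadAndPopulateBatch(), SCOPE_SIZE);
}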

The input my batch class constructor gets is a Blob of size 10 MB. How do I read this file in chunks in my Apex class so that the user doesn't have to split the file for the data load to work?
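
As far as I understand it, the chunking itself is supposed to come from the Iterable<String> returned by start(): the platform walks the iterator and hands execute() one scope-sized list of lines at a time. The CSVIterator class referenced in the sample is not shown in the post; a minimal implementation along those lines might look like this (my own sketch under those assumptions, not the post's actual code):

global with sharing class CSVIterator implements Iterator<String>, Iterable<String> {
    private String m_csvData;      // unread remainder of the file
    private String m_rowDelimiter; // row separator, e.g. m_parser.crlf

    public CSVIterator(String csvData, String rowDelimiter) {
        m_csvData = csvData;
        m_rowDelimiter = rowDelimiter;
    }

    // More lines remain while unread characters are left.
    global Boolean hasNext() {
        return m_csvData.length() > 0;
    }

    // Return the next line and drop it (plus its delimiter) from the buffer.
    global String next() {
        Integer endOfRow = m_csvData.indexOf(m_rowDelimiter);
        if (endOfRow == -1) { // last line with no trailing delimiter
            String lastRow = m_csvData;
            m_csvData = '';
            return lastRow;
        }
        String row = m_csvData.substring(0, endOfRow);
        m_csvData = m_csvData.substring(endOfRow + m_rowDelimiter.length());
        return row;
    }

    global Iterator<String> iterator() {
        return this;
    }
}

Even with this, the batch constructor still has to hold the whole file as a String (m_csvFile = blobFile.toString()), so the full 10 MB sits on the heap when the job starts; as far as I know that only fits because the asynchronous heap limit is 12 MB, which is why I am asking whether there is a better way to chunk the Blob itself.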

Any advice will be really helpful. Thanks!