• J Bengel
  • Member since 2020
  • Application Developer
  • NC Community College System

This is coming up when creating a LightningMessageChannel. The problem appears to be that "LightningMessageChannel" is not defined in the namespace schema (or that's the going theory among the answers I've come across at least).
<LightningMessageChannel xmlns="http://soap.sforce.com/2006/04/metadata">

The advertised "solution" is to turn off XML validation in VS Code, which makes the message go away, but is kind of like turning up the radio so that you don't hear the crank-end rod knock your engine is making. You can get away with that for a while, but if you don't fix it, sooner or later you're going to end up with a hole in your oil pan and smoke billowing everywhere.
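For context, the file that trips the validator is an ordinary channel definition along these lines (the label and field names here are placeholders, not my real ones):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<LightningMessageChannel xmlns="http://soap.sforce.com/2006/04/metadata">
    <masterLabel>SampleChannel</masterLabel>
    <isExposed>false</isExposed>
    <description>Example channel; the validator flags the root element above.</description>
    <lightningMessageFields>
        <fieldName>recordId</fieldName>
        <description>Id of the record being broadcast</description>
    </lightningMessageFields>
</LightningMessageChannel>
```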
I've been using the Dataloader desktop app since we got on Salesforce with no problems and to exceptionally good effect. Everything was fine until we brought up MFA. (We opted for MS Authenticator since we use Azure SSO.)

Now? Not so much. MFA has been problematic for us since we adopted it, but so far we've always been able to eventually beat it into submission (SFDX doesn't like it much either).

This is going to bring a bunch of stuff to a grinding halt if we don't find a solution for it, so any wisdom on the topic would be greatly appreciated.
For all their similarity to Excel formulas, there appears to be no equivalent to the INT() function in Salesforce. I have a long list of "Things I Can't Believe Salesforce Doesn't Do That Have Been On the Idea List for 14 Years", but I can usually work around the things on that list. In this case, though, the output from this formula stubbornly refuses to return a value with no decimal places (even though it is defined as such).
Data Type: Formula
Return type: Number
Decimal Places: 0
FLOOR((TODAY() - hed__Contact__r.Birthdate) / 365.2425)
Given an input of 8/1/1980 and a current date of 7/9/2021, the formula returns 40.00. Which is correct, since the result of the core calculation is 40.93718557, and I'm only concerning myself with the integer portion of the number. But two things jump out at me here. First, the documentation of the FLOOR() function is at best misleading, because it's described as "rounding down to the nearest integer", which is only half right. It is technically rounding down, but it's not returning an integer -- or at least it's not returning something that's displayed as an integer. I suspect that this is related to the fact that Salesforce doesn't distinguish between various numeric data types when defining a custom field. Everything is lumped into "Number", which appears to be defined as Decimal, because defining it with 0 decimal places does not make it an integer.
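To sanity-check the arithmetic, here's the same calculation in anonymous Apex (dates taken from the example above; a sketch I haven't run beyond eyeballing):

```apex
Date birth = Date.newInstance(1980, 8, 1);
Date asOf  = Date.newInstance(2021, 7, 9);
Decimal age = birth.daysBetween(asOf) / 365.2425;
System.debug(age);            // the core calculation, 40.937...
System.debug(age.intValue()); // 40 -- a genuine Integer in Apex, which the formula field can't give me
```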
Which is fine, until you need an integer value. In the example, if you see 40.00 on the screen, you don't get the idea that this contact is mere days away from being 41 -- you're displaying his age as exactly 40 years, to two digits of precision.
Creating the field with a return type of Text and wedging the number in there to strip off the trailing decimals is a workaround that fixes that problem (at least I assume you can do that -- I haven't actually tried it), and it's a fine solution until you need to use the numeric value in a different calculation, say one that displays a warning if the contact is under 18 years of age. At that point you're not only converting the initial calculation result to text, but you have to convert it back to a number on the other end.
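For the record, the Text-return-type version I have in mind would be something like this (untested; my understanding is that TEXT() drops a number's display formatting, trailing zeros included):

```
TEXT(FLOOR((TODAY() - hed__Contact__r.Birthdate) / 365.2425))
```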

For all its sophistication, this platform has some pretty glaring omissions of what should be very common, basic functionality.
It's complicated. But I can't believe that this hasn't been solved before, so I'm hoping to find one of the Apex Jedi who has seen and conquered it.

We converted a legacy system to a custom app in SF, but only converted the data that was "active" (for a couple of reasons that would take too long to enumerate here). The rest was archive data that we would never need to reference in day-to-day operations, and in fact if not for this one reporting requirement, we wouldn't have to reference it regularly at all. But we're talking about a federal agency here, and apparently they need to get the entire data set for all time every quarter. So we send them 2 CSVs every quarter, one that is a reasonable size, and one that is enormous. And as you have probably guessed ('cause you're smart like that), it's the second one that's causing the weeping and gnashing of teeth.

Since we only converted the data that we were likely to need, the older archive data is stored as static resources. It took two CSVs to store the big table, but that was the only way to get around the 5 MB size limit on static resources. So the idea was to select all of the records in the object that contains the relevant data for the extract, create the CSV records as an array, then read the static resource, convert that to an array, and .addAll() the second array to the first one. Then the result is flattened and written out to the final CSV for delivery. In the smaller of the extracts this works brilliantly.

But you see what's coming, right?

The "live" data for the second extraction presents no problem. It's bulky, but it still gets in under the limit fairly comfortably. The first of the two static resources gets tacked on without a problem too -- 30,000 or so rows worth. But when the second static resource gets into the act, we hit the wall with a heap size limit exception.

What I'm trying to do, in the abstract, is something like: 
create file1
cat file2 >> file1
cat file3 >> file1
 
And it's the last bit that's failing.

The "create file1" step amasses the results of a query using a typical
for (record : selection) loop, happily humming along adding the relevant parts of each record to an array called "enrollments" in the execute method of a batch Apex job.

When all that's done, the finish method starts with this:
List<String> allLines = NCRAN_Utilities.serializeStaticCSV('USDOLStaticParticipants1');
enrollments.addAll(allLines);

allLines.clear();   // clear the previous contents

allLines = NCRAN_Utilities.serializeStaticCSV('USDOLStaticParticipants2');
enrollments.addAll(allLines);
NCRAN_Utilities.serializeStaticCSV('USDOLStaticParticipants1') is a recursive method that parses the static resource named in the argument (USDOLStaticParticipants1) into a list of lines, which don't need to be broken down into fields because they're already correctly formatted. This is the "cat file2 >> file1". The logs tell me that this works out fine too, and we don't run into trouble until we get to the "cat file3 >> file1" part, seen here as
NCRAN_Utilities.serializeStaticCSV('USDOLStaticParticipants2');

I suspect that the problem doesn't arise until we try to append the results of the second call onto the main array, because the error doesn't occur in the utility class.

What would be ideal would be for the agency in question to use their own resources to store all of this history rather than getting ever-larger uploads from us (or at least give us a web service to send to rather than forcing us to party like it's 1991). Since that's... unlikely to happen, the next best thing would be a way that the static resources themselves could simply be grafted onto the main file (literally cat file2 >> file1). But I have seen nothing in any of my research so far to suggest that this is an available solution without going outside of Apex. Absent either of those possibilities, I have to find some way of appeasing Apex so that I can join these three bloated arrays together into one and commit that to an attachment.
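For what it's worth, the closest I can get to a literal cat file2 >> file1 inside Apex would be appending the raw resource bodies and skipping the per-line List<String> intermediaries entirely (an untested sketch; the final string still has to fit in the heap, so this may just move the wall):

```apex
// append the raw static resource bodies to the already-joined live data
String csv = String.join(enrollments, '\n') + '\n';
for (String resName : new List<String>{'USDOLStaticParticipants1', 'USDOLStaticParticipants2'}) {
    StaticResource sr = [SELECT Body FROM StaticResource WHERE Name = :resName LIMIT 1];
    csv += sr.Body.toString();   // bodies are already correctly formatted CSV lines
}
// csv then becomes the attachment body, as before
```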

Any ideas are welcome. (Though ideas that work are preferred.)

Thanks!
 

I'm getting a "Too many query rows: 50001" error in a queueable class that executes 5 separate SOQL queries, using data captured in the first one to filter the others. None of these queries individually returns anywhere near 50,000 records (the largest of them is under 19,000), but collectively, they retrieve around 53,000.

This is oversimplifying, but to illustrate what I'm talking about, we have:
Object0, which is only a vehicle to hold all the parts together. It stores the working parameters for the query, and serves as a record to attach CSV files to. (It's a long story.)

The live data is stored in several custom objects, with the starting point being Object1, which has lookups on Object2 and Object3, which in turn have lookups back from Object4, Object5, and Object6. I query Object1, and while processing the data from that, squirrel away the lookup fields in a set of Ids on Object2 and Object3. Every other query can use those unduplicated sets as a filter on either the Id field or a lookup field I can constrain on. Then I process each recordset, turning the selected data into a CSV that I attach to the Object0 "base" record. That way the only DML operations I'm doing happen when I'm updating the data fields in the base record, and when I'm inserting the ContentVersion record. This is a stripped-down version for illustration.

public Set<Id> obj2Ids = new Set<Id>(); // unduplicated sets of record Ids
public Set<Id> obj3Ids = new Set<Id>(); // on Object2 and Object3
public List<String> obj1Recs = new List<String>(); // arrays of strings that will become
public List<String> obj2Recs = new List<String>(); // the Version data on records in
public List<String> obj3Recs = new List<String>(); // ContentVersion
public List<String> obj4Recs = new List<String>();
public List<String> obj5Recs = new List<String>();
public List<String> obj6Recs = new List<String>();

public void execute(QueueableContext context){
	buildObj1();
	buildObj2();
	buildObj4();
}

public void buildObj1(){

List<Object1> termEnr = [Select field1, field2, Object2Lookup, Object3Lookup, field3, fieldn
                         From Object1
                         Where field1 <= :parm1
                         AND statusField NOT IN ('In Progress','Submitted')
                         AND (field2 = null OR field2 >= :parm2)];

// then I'm saving the lookups on Object2 and Object3 in an unduplicated collection like:

	for (Object1 enr : termEnr) {
		String obj1Rec = enr.field1 + '|' + enr.field2 + '|' + enr.field3 + '|' + enr.fieldn;
		obj1Recs.add(obj1Rec);
		obj2Ids.add(enr.Object2Lookup);
		obj3Ids.add(enr.Object3Lookup);
	}
	attachCSV(obj1Recs);
}

public void buildObj2(){
	List<Object2> termOcc = [Select field1, field2, field3, fieldn From Object2 Where Id IN :obj2Ids];

	for(Object2 occ:termOcc){
		String obj2Rec = occ.field1+'|'+occ.field2+'|'+occ.field3+'|'+occ.fieldn;
		obj2Recs.add(obj2Rec);
		}
		
	attachCSV(obj2Recs);
}

public void buildObj4(){
// Object4 has a lookup back to Object2, which allows me to filter Object4 based on the list of unique Ids I compiled way back when I was processing Object1
	List<Object4> termWS = [Select field1, field2, field3, Object2Lookup, fieldn From Object4 Where Object2Lookup IN :obj2Ids];
	for(Object4 ws:termWs){
		String obj4Rec = ws.field1+'|'+ws.field2+'|'+ws.field3+'|'+ws.fieldn;
		obj4Recs.add(obj4Rec);
		}
	attachCSV(obj4Recs);
}
There's more, but you get the idea. I select a bunch of records, loop over them putting together a delimited string that gets added to a list of such strings, then I join them all, turn the result into a blob, and attach the result as a file on the Object0 record.

So the query that extracts from Object1 returns 12,041 records.
The unduplicated list of Ids on Object2 is 838, so that query returns 838 records.
Object3 returns 857.
Object4 returns 2,832.
Object5 returns 18,755.
And Object6 returns 18,069.

In total, my manual queries add up to 53,393 records across all queries. But none of the 6 alone is remotely close to the limit.

I ran across the article "Working with very large SOQL Queries (https://developer.salesforce.com/docs/atlas.en-us.apexcode.meta/apexcode/langCon_apex_SOQL_VLSQ.htm)", but none of these queries is even approaching the limit on its own, so I don't see how that helps.

I've considered taking the methods that generate the largest queries and turning them into their own classes and chaining them as new jobs, passing the filter sets along from the main class. I'm not crazy about the idea, but if it will work, it would probably beat having to refactor the whole enterprise as a batch job. And I'm not entirely sure that that would solve my problem anyway, since the issue is in the size of the recordset returned by the queries.
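The chaining version I'm picturing would look roughly like this (class names invented, untested -- the appeal being that a chained job gets a fresh set of governor limits):

```apex
public class BuildObj5Job implements Queueable {
    private Set<Id> obj2Ids;
    public BuildObj5Job(Set<Id> obj2Ids) { this.obj2Ids = obj2Ids; }
    public void execute(QueueableContext context) {
        // Object5's ~18,755 rows now count against their own 50,000-row limit
        List<Object5> rows = [Select field1, field2, fieldn From Object5 Where Object2Lookup IN :obj2Ids];
        // ...build the delimited strings and attachCSV() as before...
        System.enqueueJob(new BuildObj6Job(obj2Ids)); // then hand off to the next extract
    }
}
```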

This may not even be possible, since SOQL for this type of field is pretty skeletal, and the type itself can't really decide what kind of data it wants to be.
All I want my query to tell me is whether more than one selection has been made for the field. I'm not concerned with what selections were made, just that there's more than one. LIKE is a bust, and I thought maybe INCLUDES (';') might be a workaround, but that gets interpreted as me looking for NO values selected (I guess it's seeing the pattern as null AND null?).
The problem is mainly that the type is caught between string and an array of strings, and isn't really either of these. If it was actually a string then LIKE '%;%' would give me my answer. If it was actually an array, then one would assume that SOQL would allow for the use of .size() to identify when more than one selection exists. But in that limbo state where you're not really either species, there isn't really a good solution. Once I've gotten it into Apex, I should be able to split it and create a List<String> from it, and then test for the size() of that. But I was hoping to be able to query for this without having to do it in code.
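Once it's in Apex, the check itself is trivial -- something like this sketch (rec and Multi_Select__c are stand-ins for the real record and field names):

```apex
// raw comes back as a semicolon-delimited string, e.g. 'A;B;C', or null when nothing is selected
String raw = rec.Multi_Select__c;
Integer selections = String.isBlank(raw) ? 0 : raw.split(';').size();
Boolean hasMultiple = selections > 1;
```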
Pretty new to Apex in general, and completely new to Batch Apex, but that's what I'll need to use in order to do the job before me, so I'm trying to get my head around the differences. The Trailhead module on Async Apex has a good example, but doesn't answer my question. The developer guide (https://developer.salesforce.com/docs/atlas.en-us.apexcode.meta/apexcode/apex_batch_interface.htm) shows a couple of different examples, but still doesn't address the question.

So in its shortest form, my question is: what's the difference between these two class declarations? Besides the obvious, I mean.
public class SearchAndReplace implements Database.Batchable<sObject>{}
and 
public class batchClass implements Database.batchable{}
One specifies <sObject> in the prototype, the other doesn't. That suggests it's optional, but I've yet to find any documentation that outlines when to use it and when not to.

The difference in the classes themselves is mostly in the start method -- the first returns a QueryLocator and the second an Iterator (the other difference being that I mostly understand what the first one is doing). But that doesn't explain why the <sObject> exists in one prototype and not the other. There may be a use case where you'd use Batch Apex for something besides processing large numbers of database records, but neither of these examples does that -- they just differ in how (and presumably how many) they process the records in question.
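My working guess is that the type parameter just declares what execute() receives, which the two start flavors make concrete -- condensed from the doc examples like so (untested):

```apex
// QueryLocator flavor: execute() receives List<sObject>
public class SearchAndReplace implements Database.Batchable<sObject> {
    public Database.QueryLocator start(Database.BatchableContext bc) {
        return Database.getQueryLocator('SELECT Id FROM Account');
    }
    public void execute(Database.BatchableContext bc, List<sObject> scope) { /* ... */ }
    public void finish(Database.BatchableContext bc) { }
}

// Iterable flavor: the parameter can be any type, e.g. String
public class BatchOverStrings implements Database.Batchable<String> {
    public Iterable<String> start(Database.BatchableContext bc) {
        return new List<String>{'a', 'b', 'c'};
    }
    public void execute(Database.BatchableContext bc, List<String> scope) { /* ... */ }
    public void finish(Database.BatchableContext bc) { }
}
```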

That's not the only question these examples raise, but trying to sneak in another one would be cheating. So I'll start with this one and see if it helps with the others.

 
We have two custom objects, Trade and Program Trade, that we are using in an apprenticeship development app.

Trade is a lookup table containing various codes, descriptions, and training details.
Program Trade is an instance of a trade within an apprenticeship program, and (in theory) sets some of its fields to default values using the corresponding fields in Trade. Those values may be modified by the end user after the fact, but by giving them default values based on the Trade, we hope to limit the number of missing and invalid values entered on the record.
Program Trade contains a lookup on the Name field in Trade to establish the reference between the two objects.
If I execute the following query in the Developer Console:
SELECT Name,Sponsor__r.Name, Sponsored_Program__r.Name,Training_Type__c,Trades__r.Name, Trades__r.Training_Type__c
FROM Program_Trade__c
WHERE Sponsored_Program__r.Name != Null AND Sponsor__r.Name != Null
LIMIT 10

Name               | Sponsor__r.Name    | Sponsored_Program__r.Name         | Training_Type__c | Trades__r.Name                     | Trades__r.Training_Type__c
Why?               | Really Kool Gizmos | Really Kool Gizmos-Apprenticeship | Time Based       | Computer Programmer                | Time Based
Painter            | Really Kool Gizmos | Really Kool Gizmos-Apprenticeship | Time Based       | Painter (Professional and Kindred) | Time Based
Combination Welder | Really Kool Gizmos | Really Kool Gizmos-Apprenticeship | Competency Based | Welder, Combination                | Competency Based
Reactor Fixer      | Duke Harris        | Chernobyl Avoidance               |                  | Powerhouse Mechanic                | Time Based
The first three were set manually, which is why they have values in them. The fourth was the test case I used for this trigger:
trigger setTradeDefaults on Program_Trade__c(before insert, before update) {
    for (Program_Trade__c pt : Trigger.new){
		System.debug('Trigger setTradeDefaults initial value of pt.Training_Type__c:' + pt.Training_Type__c);
		System.debug('Trigger setTradeDefaults initial value of pt.Trades__r.Training_Type__c:' + pt.Trades__r.Training_Type__c);
        if (String.isBlank(pt.Training_Type__c)){
            pt.Training_Type__c = pt.Trades__r.Training_Type__c;
            System.debug('Trigger setTradeDefaults assigned pt.Training_Type__c the value :' + pt.Training_Type__c);
        }
    }
}
And the debug log shows these three entries when I saved the program trade with the Training Type field left empty:
12:47:20:033 USER_DEBUG [3]|DEBUG|Trigger setTradeDefaults initial value of pt.Training_Type__c:null
12:47:20:034 USER_DEBUG [4]|DEBUG|Trigger setTradeDefaults initial value of pt.Trades__r.Training_Type__c:null
12:47:20:034 USER_DEBUG [7]|DEBUG|Trigger setTradeDefaults assigned pt.Training_Type__c the value :null
This happened at both insert and update, and the debug trace tells me that (a) the trigger fired and (b) the if condition passed. But what it doesn't tell me is why pt.Trades__r.Training_Type__c is empty. My best guess (which I fear is the case) is that in order to have visibility into the lookup record beyond the fields expressly translated on the Program Trade record, I either need to run a SOQL query within the body of the trigger (which seems excessive for the purpose) OR create formula field(s) on Program Trade to make those value(s) accessible directly on the Program Trade record (which doesn't seem like it would be any more efficient than the query solution) and use those to set the default value(s) of the actual data fields.

Neither solution is especially appealing, so I'm hoping that in my novice-ness there's just something I haven't learned yet that would be a better idea. In a perfect world, I'd do this translation client side and display the default values in real time, but that doesn't appear to be an option -- at least in the Lightning UI. I've seen recommendations for using Workflow Rules for tasks like this, but there seems to be a divide on whether those are a good idea these days. It sounds like Workflow in general is being phased out, which doesn't bode well for future-proofing my app.
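For reference, the query-in-trigger version I'm contemplating would at least be bulkified -- one query per transaction, not per record (a sketch; Trades__c and Trade__c are my guesses at the lookup and object API names):

```apex
trigger setTradeDefaults on Program_Trade__c (before insert, before update) {
    // collect the Trade lookups across the whole batch of records
    Set<Id> tradeIds = new Set<Id>();
    for (Program_Trade__c pt : Trigger.new) {
        if (pt.Trades__c != null) { tradeIds.add(pt.Trades__c); }
    }
    // one query for all of them, keyed by Id
    Map<Id, Trade__c> trades = new Map<Id, Trade__c>(
        [SELECT Training_Type__c FROM Trade__c WHERE Id IN :tradeIds]);
    // then default the blank fields from the parent Trade
    for (Program_Trade__c pt : Trigger.new) {
        if (String.isBlank(pt.Training_Type__c) && trades.containsKey(pt.Trades__c)) {
            pt.Training_Type__c = trades.get(pt.Trades__c).Training_Type__c;
        }
    }
}
```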


 
This is coming up when creating a LightningMessageChannel. The problem appears to be that "LightningMessageChannel" is not defined in the namespace schema (or that's the going theory among the answers I've come across at least).
<LightningMessageChannel xmlns="http://soap.sforce.com/2006/04/metadata">

The advertised "solution" is to turn off XML validation in VS Code, which makes the message go away, but is kind of like turning up the radio so that you don't hear the crank-end rod knock your engine is making. You can get away with that for a while, but if you don't fix it, sooner or later you're going to end up with a hole in your oil pan and smoke billowing everywhere.
I've been using the Dataloader desktop app since we got on Salesforce with no problems and to exceptionally good effect. Everything was fine until we brought up MFA. (We opted for MS Authenticator since we use Azure SSO.)

Now? Not so much. MFA has been problematic for us since we adopted it, but so far we've always been able to eventually beat it into submission (SFDX doesn't like it much either).

This is going to bring a bunch of stuff to a grinding halt if we don't find a solution for it, so any wisdom on the topic would be greatly appreciated.
For all of their similarity to Excel formulas, there appears to be no equivalent to the INT() function in Salesforce. I have a long list of "Things I Can't Believe Salesfore Doesn't Do and Have Been On the Idea List for 14 Years", but I can usually work around the things on that list. In this case, though, the output from this formula stubbornly refuses to return a value with no decimal places (even though it is defined as such).
Data Type Formula
Return type Number
Decimal Places 0  
FLOOR((TODAY() - hed__Contact__r.Birthdate) / 365.2425)
Given an input of 8/1/1980 and a current date of 7/9/2021, the formula returns 40.00. Which is correct, since the result of the core calculation is 40.93718557, and I'm only concernign myself with the interger portion of the number. But two things jump out at me here. First the documentation of the FLOOR() function is at best misleading, because it's described as "rounding down to the nearest integer", which is only half right. It is technically rounding down, but it's not returning an integer -- or at least it's not returning something that's displayed as integer.  I suspect that this is related to the fact that Salesforce doesn't distinguish between various numeric data types when defining a custom field. Everything is lumped into "Number" which appears to be defined as Decimal, because defining it with 0 decimal places does not make it an integer.
Which is fine, until you need an integer value. In the example, if you see 40.00 on the screen, you don't get the idea hthat this contact is mere days away from being 41 -- you're displaying his age as exactly 40 years, to two digits of precision.
Creating the field with a return type of Text and wedging the number in there to strip off the trailing decimals is a workaround that fixes that problem (at least I assume you can do that -- but I haven't actually tried it), and it's a fine solution until you need to use the numeric value in a different calculation where some warning is displayed if the contact is under 18 years of age. At that opint you're not only convertign the initial calculation result to text, but you have to convert it back to a number on the other end.

For all its sophistication, this platform has some pretty glaring omissions of what should be very common, basic functionality.
It's complicated. But I can't believe that this hasn't been solved before, so I'm hoping to find one of the Apex Jedi who has seen and conquered it.

We converted a legacy system to a costom app in SF but only convetred the data that was "active" (for a couple of reasons, that woudl take to long to enumerate here). The rest was archive data that we would never need to reference in day to day operations, and in fact if not for this one reporting requirement, we wouldn't have to reference it regularly at all. But we're talking about a federal agency here and apparnelty they need to get the entire data set for all time every quarter. So we send them 2 CSV's every quarter, one that is a reasonable size, and one that is enormous. And as you have probably guessed ('cause you're smart like that) ti's the second one that's causing the weeping and gnashing of teeth.

Since we only converted the data taht we were likely to need, the older archive data is stored as static resources. It took two CSV's to store the big table, but that was the only way to get around the 5Mb size limit on static resources. So the idea was to select all of the records in the object that contains the relevant data for the extract create the CSV records as an array, tehn read the static resource and convert that to an array and .addAll() the second array to the first one. Then the result is flattened and written out to the final CSV for delivery. In the smaller of the extracts this works brilliantly.

But you see what's coming, right?

The "live" data for the second extraction presents no problem. It's bulky, but it still gets in under the limit fairly comfortably. The first of the two static resources gets tacked on without a problem too -- 30,000 or so rows worth. But when the second static resource gets into the act, we hit the wall with a heap size limit exception.

What I'm trying to do, in the abstract, is something like: 
create file1
cat file2 >> file1
cat file3 >> file1
 
And it's the last bit that's failing.

The "create file1" is amassing the results of a query using a typical
for(record:selection) loop happlily hummig along adding the relevant parts of each record to an array called "enrollments" in the execute method of a batch apex job.

When all that's done, the finish method starts with this:
List<String> allLines = NCRAN_Utilities.serializeStaticCSV('USDOLStaticParticipants1');
        enrollments.addAll(allLines);

        allLines.clear();   // clear the previous contents

        allLines = NCRAN_Utilities.serializeStaticCSV('USDOLStaticParticipants2');
        enrollments.addAll(allLines);
NCRAN_Utilities.serializeStaticCSV('USDOLStaticParticipants1'); is a recursive method that parses out the static resource named in the argument (USDOLStaticParticipants1) into a list of lines, which don't need to be broken down into fields because they're already correctly formatted. This is the "cat file2 >> file1". The logs tell me that this works out fine too, and we don't run into trouble until we get to the "cat file 3 >> file 1" part, seen here as 
NCRAN_Utilities.serializeStaticCSV('USDOLStaticParticipants2');

I suspect that the proble doesn't arise until we try to paste the results of the second call to the main array, because the error doesn't occur in the utility class.

What would be ideal would be for the agency in quesiton to use their own resources to store all of this history rather than getting ever-larger uploads from us (or at least give us a web service to send to rather than forcing us to party like it's 1991). Since that's... unlikely to happen, teh next best thing would be a way that the static resources themselves could simply be grafted onto the main file (literally cat file2 >> file1). But I have seen nothing in any of my research so far to suggest that this is an available solution without going outside of Apex. Absent either of those possibilities, I have to find some way of appeasing Apex so that I can join these three bloated arrays topgehter into one and commit that to an attachment.

Any ideas are welcome. (Though ideas that work are preferred.)

​​​​​​​Thanks!
 

I'm getting a "Too many query rows: 50001" error in a queueable class that executes 5 separate SOQL queries, using data captured in the first one to filter the others. None of these quereis individually returns anywhere near 50,000 records (the largest of them is under 19,000), but collectively, they retrieve around 53,000.

This is oversimplifying, but to illustrate what I'm talking about, we have:
Object0, which is only a vehicle to hold lall the parts together. It stores the working parms for the query, and serves as a recod to attach CSV files to. (It's a long story.)

The live data is stored in several custom objects, with  the starting point being Object1, which has lookups on Object2 and Object 3, which in turn have lookups back from Object4, Object5, and Object6. I query Object1, and while processing the data from that, squirrel away the lookup fields in a set of Ids on Object2 and Object3. Every other query can use those unduplciated lists as a filter on either the Id field or a lookup field I can constrain on. Then I process each recordset turning the selected data into a CSV that I attach to the Object0 "base" record. That way the only DML operations I'm doing happen as I'm updating the data fields in the base record, and when I'm inserting the ContentVersion record. This is a stripped down version for illustration.

public Set<Id> obj2Ids = new Set<Id>(); // unduplicated sets of record Ids
public Set<Id> obj3Ids = new Set<Id>(); // on Object2 and Object3
public List<String> obj1Recs = new List<String>(); // arrays of strings that will become
public List<String> obj2Recs = new List<String>(); // the Version data on records in
public List<String> obj3Recs = new List<String>(); // ContenVersion
public List<String> obj4Recs = new List<String>();
public List<String> obj5Recs = new List<String>();
public List<String> obj6Recs = new List<String>();

public void execute(QueueableContext context){
	buildObj1();
	buildObj2();
	buildObj4();
}

public void buildObj1(){

List<Object1> termEnr = [Select field1, field2, Object2Lookup, Object3Lookup, field3, fieldn From Object1 Where field1 <= :parm1 AND statusField Not IN ('In Progress','Submitted') AND
(field2 = null or field2 >= :parm2]

// then I'm saving the lookups on Object2 and Object 3 in an unduplicated collection like:

	for(Object1 enr:ternEnr){.add(
		String obj1Rec = enr.field1+'|'+enr.field2+'|'+enr.field3+'|'+enr.fieldn;
		obj1Recs.add(obj1Rec);
		obj2Ids.add(enr.Object2Lookup);
		obj3Ids.add(enr.Object3Lookup);
	}
	attachCSV(obj1Recs);
}

public void buildObj2(){
	List<Object2> termOcc = [Select field1, field2, field3, fieldn From Object2 Where Id IN :obj2Ids];

	for(Object2 occ:termOcc){
		String obj2Rec = occ.field1+'|'+occ.field2+'|'+occ.field3+'|'+occ.fieldn;
		obj2Recs.add(obj2Rec);
		}
		
	attachCSV(obj2Recs);
}

public void buildObj4(){
// Object4 has a lookup back Object2, whic allows me to filter Object4 based on the list of unique Ids I compiled way back when I was processing Object1
	List<Object4> termWS = [Select field1, field2, field3, Object2Lookup, fieldn From Object4 Where Object2Lookup IN :obj2Ids];
	for(Object4 ws:termWs){
		String obj4Rec = ws.field1+'|'+ws.field2+'|'+ws.field3+'|'+ws.fieldn;
		obj4Recs.add(obj4Rec);
		}
	attachCSV(obj4Recs);
}
 There's more, but you get the idea. I select a bunch of records loop over them putting together a delimited string that gets added ot a list of such strings, then I join the all, turn the result into a blob, and attach the result as a file on the Object0 record. 

So the query that extracts from Object1 reutrns 12,041 records.
The unduplicated list list of Ids on Object2 is 838, so that query reutrns 838 records.
Object3 returns 857.
Object4 returns 2832
Object5 returns 18,755
and Object 6 returns 18,069.

In total, my manual queries add up to 53,393 records across all queries. But none of the 6 alone is remotely close to the limit.

I ran across the article "Working with very large SOQL Queries (https://developer.salesforce.com/docs/atlas.en-us.apexcode.meta/apexcode/langCon_apex_SOQL_VLSQ.htm)", but none of these queries is even approaching the limit on its own, so I don't see how that helps.

I've considered taking the methods that generate the largest queries and turning them into their own classes and chaining them as new jobs, passing the filter sets along from the main class. I'm not crazy about the idea, but if it will work, it would probably beat having to refactor the whole enterprise as a batch job. And I'm not enrtirely sure that that would solve my problem anyway, since the issue is in the size of the recordset returned by the queries.

This may not even be possible, since SOQL support for this type of field (a multi-select picklist) is pretty skeletal, and the type itself can't really decide what kind of data it wants to be.
All I want my query to tell me is whether more than one selection has been made for the field. I'm not concerned with which selections were made, just that there's more than one. LIKE is a bust, and I thought maybe Includes(';') might be a workaround, but that gets interpreted as me looking for NO values selected (I guess it's seeing the pattern as null AND null?).
The problem is mainly that the type is caught between a string and an array of strings, and isn't really either. If it were actually a string, then LIKE '%;%' would give me my answer. If it were actually an array, then one would assume SOQL would allow the use of .size() to identify when more than one selection exists. But in that limbo state where you're not really either species, there isn't a good solution. Once I've gotten it into Apex, I should be able to split it, create a List<String> from it, and test the size() of that. But I was hoping to query for this without having to do it in code.
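The in-Apex fallback I'm describing would be something like this (a sketch; My_Object__c and Multi_Pick__c are hypothetical names for the object and its multi-select picklist field):

```apex
// Multi-select picklist values come back from SOQL as one
// semicolon-delimited string, so filter after the query.
List<My_Object__c> multiSelected = new List<My_Object__c>();
for (My_Object__c rec : [SELECT Id, Multi_Pick__c
                         FROM My_Object__c
                         WHERE Multi_Pick__c != null]) {
    // More than one element after splitting means more than one selection.
    if (rec.Multi_Pick__c.split(';').size() > 1) {
        multiSelected.add(rec);
    }
}
```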
Pretty new to Apex in general, and completely new to Batch Apex, but that's what I'll need to use for the job before me, so I'm trying to get my head around the differences. The Trailhead module on Async Apex has a good example, but doesn't answer my question. The developer guide (https://developer.salesforce.com/docs/atlas.en-us.apexcode.meta/apexcode/apex_batch_interface.htm) shows a couple of different examples, but still doesn't address the question.

So in its shortest form, my question is: what's the difference between these two class declarations? Besides the obvious, I mean.
public class SearchAndReplace implements Database.Batchable<sObject>{}
and 
public class batchClass implements Database.batchable{}
One specifies <sObject> in the prototype, the other doesn't. That suggests it's optional, but I've yet to find any documentation that outlines when to use it and when not to.

The difference in the classes themselves is mostly in the start method -- the first returns a QueryLocator and the second an Iterator (the other difference being that I mostly understand what the first one is doing). But that doesn't explain why the <sObject> exists in one prototype and not the other. There may be a use case where you'd use Batch Apex for something besides processing large numbers of database records, but neither of these examples does that -- they just differ in how (and presumably how many) they process the records in question.
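To make the contrast concrete, here's my own sketch of the two shapes as I currently read the docs: the type parameter declares what start() hands to execute(). The String-typed variant is hedged -- I'm assuming batching over non-sObject types via an Iterable is supported, which I haven't confirmed:

```apex
// Typed over sObject: start() returns a QueryLocator, execute() gets
// a List<sObject> chunk of the query results.
public class SObjectBatch implements Database.Batchable<sObject> {
    public Database.QueryLocator start(Database.BatchableContext bc) {
        return Database.getQueryLocator('SELECT Id FROM Account');
    }
    public void execute(Database.BatchableContext bc, List<sObject> scope) {
        // process records
    }
    public void finish(Database.BatchableContext bc) {}
}

// Typed over String: start() returns an Iterable, and execute()
// receives chunks of whatever that Iterable yields.
public class StringBatch implements Database.Batchable<String> {
    public Iterable<String> start(Database.BatchableContext bc) {
        return new List<String>{'a', 'b', 'c'};
    }
    public void execute(Database.BatchableContext bc, List<String> scope) {
        // process strings
    }
    public void finish(Database.BatchableContext bc) {}
}
```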

That's not the only question these examples raise, but trying to sneak in another one would be cheating. So I'll start with this one and see if it helps with the others.

 
We have two custom objects, Trade and Program Trade, that we are using in an apprenticeship development app.

Trade is a lookup table containing various codes, descriptions, and training details.
Program Trade is an instance of a trade within an apprenticeship program, and (in theory) sets some of its fields to default values using the corresponding fields in Trade. Those values may be modified by the end user after the fact, but by giving them default values based on the Trade, we hope to limit the number of missing and invalid values entered on the record.
Program Trade contains a lookup on the Name field in Trade to establish the reference between the two objects.
If I execute the following query in the Developer Console:
SELECT Name,Sponsor__r.Name, Sponsored_Program__r.Name,Training_Type__c,Trades__r.Name, Trades__r.Training_Type__c
FROM Program_Trade__c
WHERE Sponsored_Program__r.Name != Null AND Sponsor__r.Name != Null
LIMIT 10

Name               | Sponsor__r.Name    | Sponsored_Program__r.Name         | Training_Type__c | Trades__r.Name                     | Trades__r.Training_Type__c
Why?               | Really Kool Gizmos | Really Kool Gizmos-Apprenticeship | Time Based       | Computer Programmer                | Time Based
Painter            | Really Kool Gizmos | Really Kool Gizmos-Apprenticeship | Time Based       | Painter (Professional and Kindred) | Time Based
Combination Welder | Really Kool Gizmos | Really Kool Gizmos-Apprenticeship | Competency Based | Welder, Combination                | Competency Based
Reactor Fixer      | Duke Harris        | Chernobyl Avoidance               |                  | Powerhouse Mechanic                | Time Based
The first three were set manually, which is why they have values in them. The fourth was the test case I used for this trigger:
trigger setTradeDefaults on Program_Trade__c(before insert, before update) {
    for (Program_Trade__c pt : Trigger.new){
		System.debug('Trigger setTradeDefaults initial value of pt.Training_Type__c:' + pt.Training_Type__c);
		System.debug('Trigger setTradeDefaults initial value of pt.Trades__r.Training_Type__c:' + pt.Trades__r.Training_Type__c);
        if (String.isBlank(pt.Training_Type__c)){
            pt.Training_Type__c = pt.Trades__r.Training_Type__c;
            System.debug('Trigger setTradeDefaults assigned pt.Training_Type__c the value :' + pt.Training_Type__c);
        }
    }
}
And the debug log shows these three entries when I saved the program trade with the Training Type field left empty:
12:47:20:033 USER_DEBUG [3]|DEBUG|Trigger setTradeDefaults initial value of pt.Training_Type__c:null
12:47:20:034 USER_DEBUG [4]|DEBUG|Trigger setTradeDefaults initial value of pt.Trades__r.Training_Type__c:null
12:47:20:034 USER_DEBUG [7]|DEBUG|Trigger setTradeDefaults assigned pt.Training_Type__c the value :null
This happened at both insert and update, and the debug trace tells me that (a) the trigger fired and (b) the if condition passed. But what it doesn't tell me is why pt.Trades__r.Training_Type__c is empty. My best guess (which I fear is the case) is that in order to have visibility into the lookup record beyond the fields expressly translated on the Program Trade record, I either need to run a SOQL query within the body of the trigger (which seems excessive for the purpose) OR create formula field(s) on Program Trade to make those value(s) accessible directly on the Program Trade record (which doesn't seem like it would be any more efficient than the query solution) and use those to set the default value(s) of the actual data fields.

Neither solution is especially appealing, so I'm hoping that in my novice-ness there's just something I haven't learned yet that would be a better idea. In a perfect world, I'd do this translation client side and display the default values in real time, but that doesn't appear to be an option -- at least in the Lightning UI. I've seen recommendations for using Workflow rules for tasks like this, but there seems to be a divide on whether those are a good idea these days. Sounds like Workflow in general is being phased out, which doesn't bode well for future-proofing my app.
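The query-within-the-trigger version I'm contemplating would look roughly like this (a sketch; I'm assuming the lookup field's API name is Trades__c and the parent object is Trade__c -- one bulk query per transaction rather than one per record):

```apex
trigger setTradeDefaults on Program_Trade__c (before insert, before update) {
    // Trigger.new only carries the fields on Program_Trade__c itself;
    // parent fields reached through Trades__r are not populated.
    Set<Id> tradeIds = new Set<Id>();
    for (Program_Trade__c pt : Trigger.new) {
        if (pt.Trades__c != null) {
            tradeIds.add(pt.Trades__c);
        }
    }
    // One bulk query fetches the defaults for every record in the batch.
    Map<Id, Trade__c> trades = new Map<Id, Trade__c>(
        [SELECT Training_Type__c FROM Trade__c WHERE Id IN :tradeIds]);
    for (Program_Trade__c pt : Trigger.new) {
        if (String.isBlank(pt.Training_Type__c) && trades.containsKey(pt.Trades__c)) {
            pt.Training_Type__c = trades.get(pt.Trades__c).Training_Type__c;
        }
    }
}
```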


 
I have been trying to deploy apex classes to a Trailhead Playgound as well as a developer org, but I receive the error "SFDX: Deploy Source to Org failed to run" in VS Code. 

Did I miss something when I set up VS Code?
  • Salesforce CLI is up to date
  • All extensions are up to date
  • The orgs in question have been authorized successfully
  • The CLI appears to be installed correctly (when I run "sfdx" in the Terminal, I receive the Salesforce CLI menu)

This is the Salesforce CLI output:
Starting SFDX: Deploy Source to Org
11:35:46.868 sfdx force:source:deploy --sourcepath c:\Salesforce\VSCodeQuickStart\force-app --json --loglevel fatal
11:35:48.630 sfdx force:source:deploy --sourcepath c:\Salesforce\VSCodeQuickStart\force-app --json --loglevel fatal ended with exit code 1

Any help would be much appreciated!
Hello,

I'm doing a 'Visualforce Basics' module and I'm stuck at 'Use Standard Controllers' Unit (link here).
I created a page with the following code:
 
<apex:page>
    <apex:pageBlock title="Account Summary">
        <apex:pageBlockSection>

        </apex:pageBlockSection>
    </apex:pageBlock>
</apex:page>

Then I opened the page via the "Preview" button in the Developer Console and opened a JavaScript console in Chrome, where I typed:

$A.get("e.force:navigateToURL").setParams({"url": "/apex/AccSum"}).fire();
And I got the following error:
Uncaught ReferenceError: $A is not defined at <anonymous>:1:1
 

Both snippets are copied from the Unit's sections; I didn't change anything except the page's name, 'AccSum'. I tried all of the above in Firefox, which also did not work.

Does anyone know what's going on?

I'm trying to verify that my code is permanently deleting records. For some reason, the test still finds the newly inserted task when I query ALL ROWS, even though I just purged it from the recycle bin. Any ideas how I can test that a record was purged from the recycle bin successfully?

 

Any help is appreciated,

 

Andrew

 

Here's my test code:

 

	static testMethod void testPermanentDelete()
	{
		Task t = new Task(
			Subject = 'subject',
			Priority = 'Normal',
			Status = 'Completed',
			ActivityDate = Date.today());
		insert t;
		Id taskId = t.Id;
		
		//Verify the task was inserted
		List<Task> foundTasks = [Select Id From Task Where Id = :taskId ALL ROWS];
		System.assertEquals(1, foundTasks.size());
		
		Test.startTest();		
		Database.DeleteResult[] deleteResults = Database.delete(foundTasks, false);
		Database.EmptyRecycleBinResult[] emptyRecycleBinResults = Database.emptyRecycleBin(foundTasks);
		Test.stopTest();
		
		//Verify the task was permanently deleted
		foundTasks = [Select Id From Task Where Id = :taskId ALL ROWS];
		System.assertEquals(0, foundTasks.size());
	}

 

Ok, I know there are a lot of posts on this topic, and I am familiar with the two Visualforce techniques for doing this (using an outputField bound to an SObject currency field, and using the outputText value="{0,number,###,##0.00}" ). However, in my use case, I'm trying to display a currency value in the title of a pageBlock:

 

 

<apex:pageBlock title="Refund Amount: {!refundAmount}" >

 

I can't really use the outputText or outputField options here, so I think I need to do the formatting in my controller. The Apex documentation states that String.format(String, List<String>) works "in the same manner as apex:outputText." Has anyone actually used this method to format a Decimal value into a properly formatted currency String in Apex code?
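Two things I'd try in the controller before reaching for String.format (this is a sketch, and the trailing-zero behavior of Decimal.format() is worth verifying in your org):

```apex
Decimal refund = 1234.5;

// Option 1: setScale fixes the decimal places, and the scale survives
// into the String form -- but there's no thousands grouping.
String fixedScale = String.valueOf(refund.setScale(2));  // '1234.50'

// Option 2: Decimal.format() applies the running user's locale for
// grouping, though it may drop trailing zeros (e.g. '1,234.5').
String grouped = refund.format();

// Either result can then be dropped into the title expression, e.g.
// a controller property read by title="Refund Amount: {!refundAmount}".
```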

I am using the Pattern and Matcher classes to search text from an email. Sometimes I get an exception that says "Regex too complicated". I can't find any information on this. Does anyone know what can cause it? I get the premise of the exception, but I don't know what to do to fix it. If I put my regular expression and sample text into the tester at http://www.fileformat.info/tool/regex.htm, it works fine and returns what I want. From what I understand, Salesforce uses functionality similar to Java's, which is what that site is using. Any ideas? Thanks.
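Since Apex regex follows java.util.regex semantics, a plain Java sketch can illustrate the usual suspect: nested greedy quantifiers that backtrack exponentially on inputs that almost match. The class and method names here are hypothetical, and the "too complicated" mapping is my assumption about what trips Apex's limit:

```java
import java.util.regex.Pattern;

public class RegexDemo {
    // Returns whether the whole input matches the pattern.
    static boolean tryMatch(String regex, String input) {
        return Pattern.matches(regex, input);
    }

    public static void main(String[] args) {
        String input = "aaaaaaaaaa!"; // ten a's, then a char that breaks the match
        // Nested greedy quantifiers retry every way of splitting the a's
        // between the inner and outer + before giving up -- up to 2^n
        // attempts on n characters. Engines with a work limit (as Apex
        // appears to have) abort runs like this.
        System.out.println(tryMatch("(a+)+X", input));  // false, after backtracking
        // A possessive quantifier (++) consumes the a's once and never
        // gives them back, so the failure is immediate.
        System.out.println(tryMatch("(a++)X", input));  // false, fails fast
    }
}
```

On a short input both variants return the same answer; the difference is how much work the engine does before failing, which is what matters on long email bodies.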