• Phil W
  • NEWBIE
  • 40 Points
  • Member since 2017

There's documentation and a number of articles about how you can have an LWC component open a page, including with query parameters (such as this one (https://developer.salesforce.com/docs/component-library/documentation/lwc/lwc.use_navigate_add_params_url) from the official documentation). There's also documentation about how you open different types of page, though the NavigationMixin doesn't support direct VF page access.

It appears that, while we can set up a custom tab that wraps the VF page, the VF page is then embedded in an iframe and, due to the lightning and VF pages residing on different domains, there's no communication possible across the iframe boundary (so the query parameters are not visible).

I need to be able to open a VF page from an LWC and pass it several query parameters. My issue is that none of the mechanisms explain how I can get the base URL for a VF page that will work from my LWC (I need to do this for standard and community portal usages).

There's even an article, here (https://developer.salesforce.com/blogs/developer-relations/2017/01/lightning-visualforce-communication.html), that discusses the basic issue (at least in the Aura context). Unfortunately it entirely glosses over how to get the URL for the VF page, simply saying:
 
"vfHost is the host Visualforce pages are loaded from in your environment. In a real-life application, you should obtain this value dynamically instead of hardcoding it"

Do you know how I can obtain the VF page URL in a robust and automatic way from Salesforce (since this component will be part of a package I can't hard-code the access)?
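For anyone stuck on the same point, a minimal sketch of one possible approach (the page name MyVFPage and the class name are illustrative, not from the original post): resolve the URL server-side via a Page reference, which includes the namespace prefix for packaged pages, and hand it to the LWC.

```apex
public with sharing class VfUrlProvider {
    @AuraEnabled(cacheable=true)
    public static String getPageUrl() {
        // Page.<name> is a compile-time reference to the Visualforce page;
        // getUrl() returns the relative /apex/... URL, namespace included.
        return Page.MyVFPage.getUrl();
    }
}
```

The LWC can then append its query parameters to the returned relative URL before navigating.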
  • August 19, 2019
I am writing a custom LWC, one aspect of which is the ability to create a new record based on the context of that custom LWC instance. The LWC component holds the ID of a parent object. The child has a Master Detail relationship back to the parent.

When my component creates the new record, I want to use lightning-record-form. The reason is that I want the form to include all fields that the admin has added to the full layout - my LWC component is part of a package and I cannot make any assumptions about the fields that should or should not be in the form over and above the master-detail field. Use of lightning-record-edit-form therefore doesn't make sense (since I would have to explicitly define the list of fields to be shown).

I am trying to work out how to use the parent ID from the LWC component to pre-populate the Master-Detail field in the new record, but all my research leads me to the answer that I cannot.

There is an event, "load", generated by the lightning-record-form component, and this provides access to the record data. However, according to what I've read on Stack Exchange, setting the field values in here does NOT cause the form to update its Master-Detail field.

There is another event, "submit", generated when the user tries to save the new record, and according to the documentation (https://developer.salesforce.com/docs/component-library/bundle/lightning-record-form/documentation, in the "Editing a Record" section) this event can intercept the data to explicitly override values for certain fields. However, since I'm trying to set what is effectively a mandatory field, I suspect my submit handler won't be invoked before the form decides it is missing a mandatory value.

Has anyone actually got this working? How did you do it?
  • August 07, 2019
Most of the time my Visualforce page with embedded Lightning components (via Lightning Out in Visualforce) works fine. However, intermittently the Lightning components fail to appear and the quoted error above shows up in the footer of the page.

I'm wondering if this is a browser/desktop performance issue causing mis-timing of DOM manipulation, or something else.

Anyone got any ideas?
The documentation specifically states that managed packages have their own governor limits that don't contribute towards the org's limits (meaning the limits are effectively doubled on that org by having those for the package counted separately to those for the org's own code) with the caveat that there are cumulative limits too.

What I want to understand is whether this separation of distinct limits applies to all versions of the App Exchange package or just to the version currently on the App Exchange. We have an older version currently certified on the App Exchange and are working towards certification of our latest version but we already have customers using versions newer than that on the App Exchange. Are the limits still separated for these customers?
  • April 18, 2018
We have some CPU-intensive processing we have to apply, and as part of this we have to perform a temporary DB upsert of some data (so we know that formulae, triggers and workflows are evaluated for the temporary data, as we need to be able to leverage the results of that extra processing). We deal with this by setting a DB savepoint and, after evaluation, restoring to that savepoint.

Now, one of our customers has too much processing in some workflows/custom triggers and this is pushing us over the CPU limits (we have a way to avoid this by splitting the data domain into pieces, which we are doing). However, the observed behaviour was that our temporary updates to the DB were actually being persisted after the CPU limits were exceeded.

We rather expected that exceeding CPU limits would roll back our updates, but the customer appears to see the temporary data being persisted. Thus the question in the title of this posting: does Salesforce roll back a transaction when CPU limits are exceeded, or does it commit the transaction instead? If the latter, is there a way to control this behaviour?
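For reference, a sketch of the savepoint pattern described above (names illustrative), with a proactive CPU check; the check matters because a LimitException cannot be caught, so the rollback must run before the governor terminates the transaction:

```apex
public void evaluateTemporaryData(List<Account> temporaryRecords) {
    // Leave headroom so the cleanup itself still has CPU budget.
    if (Limits.getCpuTime() > (Limits.getLimitCpuTime() * 8) / 10) {
        return; // defer the remainder, e.g. re-queue a smaller chunk
    }
    Savepoint sp = Database.setSavepoint();
    try {
        upsert temporaryRecords; // formulae, triggers and workflows fire here
        // ... read back the computed results ...
    } finally {
        Database.rollback(sp);   // discard the temporary data
    }
}
```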
  • February 13, 2018
We accidentally created a global Apex class with a name starting with a lower-case letter, and once this was put in a managed package and released, we could not correct it. So:
 
global class myApexClass {
    ...
}

Cannot be corrected to:
global class MyApexClass {
    ...
}

Because during package update we get an error.

Salesforce, please be consistent with case-insensitivity! You are forcing ugliness on us!​
  • January 27, 2018

I have a batch process that prevents two different "concurrent" executions of the batch for a given user by maintaining some state (in the database) that is set in start and cleared in finish (actually using the user for the created AsyncApexJob instances). During start processing, the state is queried from the database; if there is an entry already for the current user, the batch is aborted using System.abortJob, and an empty query locator is returned.

I am trying to test that two different users can successfully execute the batch via the use of the following:

Id profileId = UserInfo.getProfileId();

List<User> fakeUsers = new List<User> {
        new User(Alias = 'X', Email='X@testorg.com', EmailEncodingKey='UTF-8', FirstName = 'Jim', LastName='Testing', LanguageLocaleKey='en_US', LocaleSidKey='en_US', ProfileId = profileId, TimeZoneSidKey='Europe/London', UserName='X@testorg.com'),
        new User(Alias = 'Y', Email='Y@testorg.com', EmailEncodingKey='UTF-8', FirstName = 'Fred', LastName='Testing', LanguageLocaleKey='en_US', LocaleSidKey='en_US', ProfileId = profileId, TimeZoneSidKey='Europe/London', UserName='Y@testorg.com')
};

insert fakeUsers;

Test.startTest();

// Simulate running both together under different user accounts
Id id1;
Id id2;

System.runAs(fakeUsers[0]) {
    MyBatch b1 = new MyBatch();

    id1 = Database.executeBatch(b1);
}

System.runAs(fakeUsers[1]) {
    MyBatch b2 = new MyBatch();

    id2 = Database.executeBatch(b2);
}

Test.stopTest();

Unfortunately it seems that both of the batches still get run with the same user (the actual user running the test rather than either of the fake users). I suspect this is because of the way batches are actually executed during the Test.stopTest method invocation, rather than at some asynchronous time.

Have I come across a bug in the way batches are run in a given user context during testing? Is there a workaround I can use?
  • January 18, 2018

I have a global class in my managed package's "API". It is currently "with sharing". However, we realized that it should be "without sharing". Is it possible to change this to "without sharing" in a new version, even though the package has been installed on orgs with it set to "with sharing"?

I ask because I have found that once something is global it is pretty difficult to change it.
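One pattern that avoids changing the published global signature, sketched with illustrative names: keep the global class's declaration as released and delegate the privileged work to a private without sharing inner class, since an inner class enforces its own sharing declaration.

```apex
global with sharing class ApiFacade {
    global List<Account> fetchAll() {
        // Delegate to the inner class so its sharing mode applies.
        return new Elevated().query();
    }

    private without sharing class Elevated {
        List<Account> query() {
            return [SELECT Id, Name FROM Account];
        }
    }
}
```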

  • January 16, 2018
I have a global class, A, in my managed package that must only be instantiated by my managed package code. The instance is intended to then be passed to a method in a (global) interface, I, that is to be implemented outside the managed package.

Now, in order to test the interface implementation, E, I need the test to be able to create an instance of A to pass into E's implementation of I's method.

I tried creating a @TestVisible private constructor for A that accepts the required test setup parameters and using that in E's test class, but that didn't work.

Has anyone got any suggestions?
  • January 15, 2018

Our Salesforce app is structured so we have CRUD/FLS tests in our custom page controllers where needed. We have some utility methods that verify that the user has appropriate access to the objects (for CRUD) and fields (for FLS). These methods throw exceptions/return messages that explain the missing permission(s).

In the FLS test we pass in an array of necessary describe field results, obtained through the Schema namespace, e.g. Schema.SObjectType.Booking__c.fields.Client__c. The message should ideally explain that the user doesn't have the required permission (e.g. update) for the given field in the given object. However, it seems that the DescribeFieldResult doesn't provide a means to access the DescribeSObjectResult representing the containing object, so I can't get hold of the label for the object type to add that into the message.

Other than a massive refactoring to pass both the DescribeFieldResult and the DescribeSObjectResult around (which would be really ugly too), does anyone know a way to navigate from a DescribeFieldResult to the "containing" DescribeSObjectResult?

  • November 13, 2017
I have a Visualforce controller extension class that is working fine. However, I now need a second controller extension that shares some of the behaviour of the first. As such I have been trying to refactor the original controller extension, but have hit a problem.

The new base class provides some nested classes, public properties and some virtual (overridable) methods, something like:
public with sharing virtual class MyExtensionBase {
    public class A {
        public String something { get; set; }
        public String somethingElse { get; set; }
    }

    public A myA { get; set; }

    public MyExtensionBase(ApexPages.StandardController ctrl) {
        myA = getA(ctrl);
    }

    protected virtual A getA(ApexPages.StandardController ctrl) {
        A anA = new A();
        ...
        return anA;
    }
}
I have then refactored the original controller extension to remove the nested class, the property and the initialization of the property, and to make it now extend the new base class, something like:
public with sharing class MyExtension extends MyExtensionBase {
    ...
    public MyExtension(ApexPages.StandardController ctrl) {
        // Allow the base class to perform its initialization
        super(ctrl);
        ...
    }
}
I can successfully deploy the base class. However, the refactored original controller extension will not deploy, with the following error:

ERROR deploying ApexClass classes/MyExtension.cls: An unexpected error occurred. Please include this ErrorId if you contact support: 515167826-28209 (-2057717022)

I cannot see what I've done wrong and cannot believe that this sort of code structure (and refactoring exercise) isn't common.

Has anyone else seen this sort of GACK raised in this sort of scenario?
  • September 21, 2017
Hi,

We used to be able to do this in Aura:
String labelName = 'mylabel';
$A.getReference("$Label.c."+ labelName);



However, $A is not accessible in LWC.

Following the documentation, I can only see a way to get access to a label's value through an import. As far as I know, imports don't work dynamically: you can only write an import if you already know the name of the label.

I was hoping for a solution involving Apex and/or SOQL but could not find anything.

Any idea?
 
When I query string columns on custom metadata objects, I get decent performance.  However, if I query a column that points to a FieldDefinition, the query becomes much slower.

Is this a known system bug?  Is there a patch on the horizon?

It makes these relationships all but unusable.

Here are profiling logs on a sandbox after priming the pump (I'll just include the SOQL profiling):

Querying a String takes 19ms.

    for (My_Metadata__mdt m: [SELECT Id, String__c FROM My_Metadata__mdt]) {
        System.debug(m.id);
    }

    AnonymousBlock: line 1, column 1: [SELECT Id, String__c FROM My_Metadata__mdt]: executed 273 times in 19 ms

Querying a FieldDefinition takes about half a second!

    for (My_Metadata__mdt m: [SELECT Id, Field__c FROM My_Metadata__mdt]) {
        System.debug(m.id);
    }

    AnonymousBlock: line 1, column 1: [SELECT Id, Field__c FROM My_Metadata__mdt]: executed 273 times in 511 ms
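A mitigation rather than a fix, sketched with the illustrative names above: query the FieldDefinition-typed column once per transaction and serve later reads from a static cache.

```apex
public class MetadataCache {
    private static List<My_Metadata__mdt> rows;

    public static List<My_Metadata__mdt> getRows() {
        if (rows == null) {
            // Pay the slow FieldDefinition lookup only once per transaction.
            rows = [SELECT Id, Field__c FROM My_Metadata__mdt];
        }
        return rows;
    }
}
```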
Hi,
     How can I create a multiple-record selection from a lookup field in Salesforce, in a way that is supported in Lightning?
Thanks.
 
Hi Experts,
What is the difference between clone() and deepClone() in Apex? When should we use which? Can someone explain with an example?

Thanks!
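By way of illustration (a sketch, not from the original post): for lists of sObjects, clone() copies the list but the elements still reference the same records, while deepClone() also duplicates each record.

```apex
List<Account> accts = new List<Account>{ new Account(Name = 'Acme') };

// Shallow: the new list points at the same Account instance.
List<Account> shallow = accts.clone();
shallow[0].Name = 'Changed';
System.assertEquals('Changed', accts[0].Name);

// Deep: each sObject is duplicated, so the original is untouched.
List<Account> deep = accts.deepClone();
deep[0].Name = 'Changed again';
System.assertEquals('Changed', accts[0].Name);
```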

I'm sending a query to Salesforce (using the SOAP API) that includes an IN clause; however, I keep getting a MALFORMED_QUERY error.  Could someone point me in the right direction on the query syntax when using the IN clause?  I've tried the following without success (the IDs are made up in these examples):

 

SELECT Id FROM Lead WHERE Id IN {'000000000000000','111111111111111'}

SELECT Id FROM Lead WHERE Id IN '0000000000000','111111111111111'

 

Thanks.
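For reference, SOQL expects the IN list to be a parenthesized, comma-separated list of quoted values (the IDs below are placeholders like those above); curly braces and a bare list are both rejected as MALFORMED_QUERY:

```
SELECT Id FROM Lead WHERE Id IN ('000000000000000', '111111111111111')
```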

Hello everyone,

I want to write a switch-case block. The convention in Java is:

switch (variable) {
    case value:
        statement;
        break;
    case value:
        statement;
        break;
    default:
        statement;
        break;
}

or is it different in Apex? I ask because I get a compile error:

Save error: unexpected token: '{' after switch(variable).
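For context, a brief sketch: Apex originally had no switch statement at all, which is why the Java form fails to parse. Later API versions (Winter '18 onwards) added Apex's own `switch on` form, which looks like this (values are illustrative):

```apex
switch on variable {
    when 1 {
        // statement
    }
    when 2 {
        // statement
    }
    when else {
        // statement
    }
}
```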

 

  • April 23, 2012

I can schedule a job via Apex code:

 

System.schedule('test', '0 0 0 * * ?', new SchedulableClass());

 

The CronTrigger job doesn't have a "Name" field, so I can't query for the job I just created.  This means I can't check whether my job already exists before calling System.schedule(); instead I just have to call schedule() and silently eat the exception it throws if the job already exists.

 

The only way you can figure out which CronTrigger is yours is to cache the return value of System.schedule(), which (it so happens) is the ID of the CronTrigger that is created.  However, you can't delete them from Apex:

 

 

Id jobid = System.schedule('test', '0 0 0 * * ?', new SchedulableClass());
delete new CronTrigger(Id = jobid);
// 'delete' throws 'DML not allowed on CronTrigger'

 

 

So the current state of Scheduled Jobs is:

 

You can create them from Apex Code, but not from the UI

You can delete them from the UI, but not from Apex Code

 

I guess that just seems odd to me.  Why did Salesforce create this whole new API (System.schedule()), with a seemingly random assortment of ways to manipulate it, instead of just exposing the CronTrigger table directly to the full range of DML operations?

 

Placing new functionality into new core objects, rather than new APIs, seems easier on everyone (the whole describe/global describe suite of API calls are an example of something that seems a natural fit for a set of read-only custom objects).
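A sketch of how this gap is commonly bridged nowadays: the job's name lives on the related CronJobDetail record, and System.abortJob() (rather than DML) removes a scheduled job. The job name 'test' matches the example above; the rest is illustrative.

```apex
// Look up an existing job by name via the CronJobDetail relationship.
List<CronTrigger> existing = [
    SELECT Id FROM CronTrigger WHERE CronJobDetail.Name = 'test'
];
if (existing.isEmpty()) {
    System.schedule('test', '0 0 0 * * ?', new SchedulableClass());
} else {
    // abortJob is the supported way to remove a scheduled job from Apex.
    System.abortJob(existing[0].Id);
}
```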

  • April 22, 2010

How can I determine the URL of a static resource from my Apex code?

 

I'm looking for the equivalent of the Visualforce $Resource global variable, but one that I can use from within Apex code.

 

Thanks!
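A commonly used workaround, sketched under the assumption of a resource named 'myResource': query the StaticResource object and build the same /resource/&lt;timestamp&gt;/&lt;name&gt; path that $Resource resolves to.

```apex
StaticResource sr = [
    SELECT Name, SystemModstamp
    FROM StaticResource
    WHERE Name = 'myResource'
    LIMIT 1
];
// The timestamp segment busts the cache when the resource is updated.
String url = '/resource/' + sr.SystemModstamp.getTime() + '/' + sr.Name;
```

Newer API versions also expose PageReference.forResource('myResource'), which avoids the manual query.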

  • February 28, 2009

Hello,

 

I'm trying to format a date's month in the user's language but keep getting it in English.

Here's the code I have:

 

Datetime d = System.now();
Datetime thisDate = DateTime.newInstance(d.year(), d.month(), d.day());
String format = thisDate.format('MMMM');
System.debug(format);

 Any ideas?

Many thanks, Dan