• jhart
  • SMARTIE · 685 Points · Member since 2007
  • 27 Best Answers · 5 Likes Received · 0 Likes Given · 141 Questions · 354 Replies

Hi 

 

I have written a trigger on FeedComment. It works fine in our sandbox, but when we move it into production, a new exception appears:

System.LimitException: Query of LOB fields caused heap usage to exceed limit.

 

The error is coming on this SOQL query:

List<UserFeed> usefd = [SELECT Id, Type, Body, ContentFileName, CreatedById, InsertedById, ParentId, Title, ContentData, FeedItemId, (SELECT CommentBody, InsertedById, FeedItemId FROM FeedComments) FROM UserFeed WHERE CreatedById = :feedcommentid ORDER BY CreatedDate];

 

Does anyone know what kind of error this is?
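For reference, this limit is triggered by pulling LOB fields (here Body and ContentData) into the heap for every row; a sketch of the same query with those fields dropped and the result set bounded:

List<UserFeed> usefd = [
    SELECT Id, Type, ContentFileName, CreatedById, InsertedById, ParentId, Title, FeedItemId,
        (SELECT CommentBody, InsertedById, FeedItemId FROM FeedComments)
    FROM UserFeed
    WHERE CreatedById = :feedcommentid
    ORDER BY CreatedDate
    LIMIT 200
];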

 

Thanks,

Rajiv

 


  • October 14, 2011
  • Like
  • 0

Hi,

My Chatter feed has a file that was uploaded by someone. I have a remote application from which I am making web service calls to get the FeedPost.

What I want to do is create a link to this file so that the user can click on it to download it directly from my remote application. I can get the file's name, id, etc. from the FeedPost, but I am not sure how to construct the URL.

 

SELECT Id, Type,CreatedBy.FirstName, CreatedBy.LastName,ParentId, Parent.Name,FeedPost.Id, FeedPost.Type, FeedPost.Body, FeedPost.Title,(SELECT Id, FieldName, OldValue, NewValue FROM FeedTrackedChanges ORDER BY Id DESC),(SELECT Id, CommentBody, CreatedDate, CreatedById,CreatedBy.FirstName, CreatedBy.LastName FROM FeedComments ORDER BY CreatedDate DESC LIMIT 4)FROM NewsFeed

 

Does anyone know how to do that?

I'm recreating a detail page and I'm trying to add some related lists that are currently displayed on the existing page.  I'm unable to figure out what the list names are supposed to be, though.  The names that are used in the page layout editor are not working.  Some example lists are "Notes & Attachments", "Approval History", and "Activity History".  How do you find out the names to use in Visualforce for related lists?
I am getting the following error in my unit tests. I was unable to find any information on this error in the documentation. I would like to know whether this is an undocumented limitation in Apex, or whether I am missing something.

This works fine in pre-Summer '08 orgs, but is failing in my Summer '08-enabled sandboxes.

Here is my code

Code:
static testMethod void InsertInvestmentAccountBulkTriggerBypass(){

  User user = [Select Id From User Where Id = :UserInfo.getUserId()];
  user.Bypass_Triggers__c = 'SCS_Investment_Account_Time_Series__c;Asset';
  update user;

  System.assert(GlobalSettings.bypassTriggers('Asset'));
  System.assert(GlobalSettings.bypassTriggers('SCS_Investment_Account_Time_Series__c'));

  Test.startTest();
  Map<Id, Asset> IAMap = unitTests.createTestAsset(200);
  System.assertEquals(200, IAMap.size());
  Test.stopTest();
}


public static Map<Id, Asset> createTestAsset(Integer numberOfRecords){

  Map<Id, Asset> IAMap = new Map<Id, Asset>();
  Asset[] IAs = new List<Asset>();

  Account client = unitTests.createTestAccount();
  Product2 fund = [Select Id from Product2 LIMIT 1]; //TODO: extract out as a helper method

  for (Integer i=0; i<numberOfRecords; i++){
    Asset IA = new Asset();

    IA.Name = 'unitTest_IA';
    IA.AccountId = client.Id;
    IA.Product2Id = fund.Id;

    IAs.add(IA);
  }

  insert IAs;

  for(Asset ia : IAs){
    IAMap.put(ia.Id, ia);
  }

  return IAMap;
}


public static Account createTestAccount(){

  Account a = new Account();
  a.Name = 'Test Account';
  a.Employee_Id__c = '123456';
  insert a;

  System.assertEquals(1, [Select count() from Account where Id = :a.Id]);

  return a;
}

Full error message:

System.DmlException: Insert failed. First exception on row 0; first error: MIXED_DML_OPERATION, DML operation on non-setup object is not supported after you have updated a setup object: Account

Stack trace:
Class.unitTests.createTestAccount: line 94, column 9
Class.unitTests.createTestAsset: line 146, column 26
Class.TimeSeriesUnitTests.InsertInvestmentAccountBulkTriggerBypass: line 27, column 32



Basically, I need to update a current user record, prior to testing triggers. Problem is that I am not allowed to update a user record and then execute DML statements on other objects.
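For anyone hitting this later: in subsequent releases, the documented workaround for MIXED_DML_OPERATION in tests is to wrap the setup-object (User) DML in System.runAs, separating it from the non-setup DML that follows. A sketch against the test above (assuming the same GlobalSettings and unitTests helpers are available):

static testMethod void InsertInvestmentAccountBulkTriggerBypass(){

  User u = [Select Id From User Where Id = :UserInfo.getUserId()];

  // wrap the setup-object DML in runAs so it doesn't mix with the Asset/Account DML below
  System.runAs(u) {
    u.Bypass_Triggers__c = 'SCS_Investment_Account_Time_Series__c;Asset';
    update u;
  }

  System.assert(GlobalSettings.bypassTriggers('Asset'));

  Test.startTest();
  Map<Id, Asset> IAMap = unitTests.createTestAsset(200);
  System.assertEquals(200, IAMap.size());
  Test.stopTest();
}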


Message Edited by AxxxV on 05-28-2008 02:29 PM
  • May 28, 2008
  • Like
  • 0
According to the Spring '14 release notes, ISVs will soon be able to delete fields from managed packages.  That is great news.

I have two questions about this.

1.  Will deleted fields be available to InstallHandlers during upgrade?

For performance reasons, sometimes it's necessary to replace a field with an identical field that has "External ID" turned on.  When we do this, our InstallHandler recognizes the upgrade & launches a batch job to copy all of the old values into the new field.

We'd then like to delete the old field.  Is that ever possible?  If I stagger my upgrades like so:

Version 1 - uses old field
Version 2 - adds new indexed field; upgrade script copies old values into new field
Version 3 - deletes old field.

What happens if a customer upgrades directly from Version 1 to Version 3?  Will it work, or will the old field be deleted before the Version 2 upgrade code runs?


2.  We recently added some lookup fields that are now preventing some customers from upgrading (b/c they are running into a "too many custom relationship fields" error).  We've decided that we'd rather remove the feature so that all customers can continue upgrading.

Now, the release notes state that deleted fields will actually remain in the subscriber org:

Subscribers who upgrade to the new package version will still have the deleted components available in their organization. They’re displayed in the Unused Components section of the Package Details page.

Does that also apply to customers whose upgrade path "skips" the deleted field entirely, eg:

Version 1 - baseline
Version 2 - introduces field X
Version 3 - deletes field X

If a customer on Version 1 cannot upgrade to Version 2 due to a "too many custom fields" limit, will they be able to upgrade directly to Version 3?  Or will the "unused components" still kinda exist and thus prevent upgrade?
  • April 04, 2014
  • Like
  • 0

The built-in "back" button in salesforce1 does not work properly when viewing a custom visualforce page; instead it brings up the Publisher "+" icon.

 

Here's our clickpath; detailed image to follow:

 

1. In our case, we start with a mobile card.

 

2.  Tapping on the mobile card ignores any href within the card and reloads it fullscreen (this is arguably a bug - see this forum post).

 

3.  On that reloaded frame, the "back" arrow works fine.

 

4.  We then tap from the reloaded mobile card into a (custom visualforce) detail view of the desired object.

 

5.  It is on this page that the "back" arrow totally breaks down.  At first tap, it brings up the Publisher "+" icon.  At second tap, it jumps navigation back 3 steps.

 

 

Update 12/16/13:  partner support has recognized the bug & escalated to dev, case # 10009288.

 

 

Here's a walkthrough of the issue which is getting scaled mercilessly; click through for a readable version:

 

 

 

  • December 14, 2013
  • Like
  • 0

Our product, Absolute Automation, updates custom fields on standard objects (Contacts, Leads, Accounts, etc).

 

Some of our customers have custom validation rules defined for those standard objects.

 

But custom validation rules have a serious problem: they prevent *any* update to an object that fails validation, even if the update in question has nothing to do with the validated fields. Because objects may pre-date the custom validation rules, this creates a number of "minefield" objects which cannot be updated until the validation error is fixed.

 

While this may make sense in a user interface context, it makes no sense in an API or code context, and even less sense in a managed package.


Let's look at a simplified example.


Starting with a standard test org with sample Contacts, let's define a new Custom Validation which requires that the Contact.FirstName field must be equal to "foo". Of course, all of our existing Contacts don't validate under that rule, but they pre-date the rule so they are already in the database (and SFDC does not prevent me from creating the rule, nor tell me that I have non-conforming objects in my system).

 

If, in the user interface, I go to update an unrelated field - say, Contact.Title - the validation failure does not let me update the Contact:

 

UI validation failure

 

 

Whether or not this makes sense is debatable - the user, after all, is not touching the FirstName field at all, so does it make sense to run the validation against it? If you think the answer is "yes", then it seems we should *also* require validation of all existing records when a Custom Validation rule is created.

 

But that's not the problem. The real problem is that this same error also prevents updates to unrelated fields in Apex Code.

 

Let's run an apex code snippet to update the Title of an existing Contact:

 

update new Contact(Id = '00370000016yOlu', Title = 'test');

 

Oops! Can't do it! The custom validation rule - which, again, is defined on a field that we are not even touching - prevents the update:

 

System.DmlException: Update failed. First exception on row 0 with id 00370000016yOluAAE; first error: FIELD_CUSTOM_VALIDATION_EXCEPTION, First name must be &quot;foo&quot;!: []

 

It becomes much harder to argue this is a good idea. If we are so militant about enforcing Custom Validations that we prevent *any* update to a non-conforming object until the validation is fixed, then SFDC shouldn't let me create Custom Validations unless every existing record conforms.

 

Now let's take it to the next level, and suppose that the Apex update is running in managed package code, and updating a custom field defined within that package. Now you're really getting crazy. Your lovely little app is now being sabotaged by validation rules that have nothing to do with your custom field update, and ... what are you supposed to do about it? You certainly don't know what the right values are to "fix" the validation failure. Instead, your updates just fail. Full stop. Did you want those updates? Sorry!


Essentially every single app on the AppExchange can assume very little about standard objects being updateable. Actually, at scale, you have to assume that you CANNOT update custom fields on standard objects, because at least one of your customers will have objects that don't pass their own Custom Validation rules.

 

Thus, any such update must be coded defensively: how important is this update? If I can't update this field, do I need to cancel the entire transaction? Or can I log the error & skip the field update, knowing that my data model is now somewhat inconsistent? If I let the entire transaction cancel - how can I ever retry it? It will just fail again with the same error. So before I retry my transaction, I have to contact my customer and ask them do either (a) turn off their validation rule, or (b) update their entire database to ensure legacy objects pass the newly defined validation.
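A minimal sketch of that defensive pattern, using partial-success DML and logging-and-skipping the rows a subscriber's validation rules block (contactsToUpdate stands in for whatever list the package is updating):

Database.SaveResult[] results = Database.update(contactsToUpdate, false);  // allOrNone = false
for (Integer i = 0; i < results.size(); i++) {
  if (results[i].isSuccess()) continue;
  for (Database.Error e : results[i].getErrors()) {
    if (e.getStatusCode() == StatusCode.FIELD_CUSTOM_VALIDATION_EXCEPTION) {
      // blocked by a subscriber validation rule we can't satisfy; log & skip rather than
      // failing the whole transaction
      System.debug('Skipped ' + contactsToUpdate[i].Id + ': ' + e.getMessage());
    }
  }
}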


Finally: what is the benefit of preventing my update? I'm not touching a field that is being validated (if I were, fine, it makes sense to prevent me from saving new, non-conforming data). Preventing the update doesn't help the object validate - fixing the object presumably will take human intervention, and that human is nowhere to be found when my code is executing. The only consequence of preventing these unrelated updates is that 3rd party apps and integrations are either much less reliable, or much less consistent. Zero upside, tons of downside.

 

 

Salesforce support - I have opened case 09799365 to track this issue.

  • October 16, 2013
  • Like
  • 0

Global search does not respect field-level security for Profiles.

In other words, I can completely hide a field from a given profile.  But, a user in that profile can search through that field using Global Search.  All objects whose hidden field matches that search text will be displayed in the Global Search results.

This seems like a serious bug.

 

Consider salesforce's classic "HR/Recruiting app" example.  A user from whom a "Salary" field is hidden can nonetheless simply enter dollar amounts into Global Search to figure out the field value for every record.

I'm a bit shocked that such a large security hole is present in Global Search, and I think it can't possibly be by design.  I'm so surprised by this behavior that I keep double-checking it, but each time I'm able to search for values in fields that are hidden from my profile.

 

 

Salesforce support- I have created case 09471736 to track this issue.

  • July 15, 2013
  • Like
  • 0

Our application uses embedded Visualforce pages within standard object pages (contact, lead, account, opportunity).

 

Since the Summer '13 upgrade, a couple users (in separate orgs) are reporting that the embedded page will fail to load with this error message:

 

The page you submitted was invalid for your session.  Please click Save again to confirm your change.

 

 

Note that "Save" has not been clicked, nor any changes made.  This is on the initial load of the parent page.

 

The only fix we've discovered so far is to manually delete all salesforce.com and visual.force.com cookies.

 

This is not a bug in our code; these people are running the exact same version they have been running for years.  The only new thing in the mix is Summer '13.

 

Salesforce support - I have created case 09431887 to track this issue.

  • July 02, 2013
  • Like
  • 0

Our "Absolute Automation" app logs emails and their attachments to salesforce.

One of our configurable features is the ability to skip attachments based on extension or file size.

As there are multiple ways that Attachments are logged (Email Services or the normal CRUD API), we implemented this "attachment skipping" feature via a "before insert" trigger on Attachments.

The trigger, simplified, looks like this:

 

trigger AttachmentFilter on Attachment bulk (before insert) {
  for (Attachment a : Trigger.new) {
    if (a.BodyLength < CONFIGURED_SIZE) a.addError('Skipping attachment per SIZE_TOO_SMALL');
    }
  }

This has worked great for a while now.

However, as of Summer '13, it no longer works.  Why?  Because the "BodyLength" field is now _NULL_ in the trigger context.  This happens both when inserting Attachments via the SOAP API as well as in Apex Tests that insert them directly.

If we modify our trigger like so:

 

trigger AttachmentFilter on Attachment bulk (before insert) {
  System.debug('AttachmentFilter, input is: ' + Trigger.new);
  for (Attachment a : Trigger.new) {
    if (a.BodyLength < CONFIGURED_SIZE) a.addError('Skipping attachment per SIZE_TOO_SMALL');
    }
  }

 We can verify that BodyLength is null for all Attachments in Trigger.new, even though "Body" is not.

Here's the debug output taken from a run of Apex testMethods:

 

16:07:26.074 (7074859000)|USER_DEBUG|[2]|DEBUG|
AttachmentFilter, input is: (
Attachment:{Body=Blob[2],    ... Name=oktxt.yada,        ... Id=null, BodyLength=null, ContentType=text/plain},
Attachment:{Body=Blob[2048], ... Name=okbig.yada,        ... Id=null, BodyLength=null, ContentType=image/yada},
Attachment:{Body=Blob[2],    ... Name=no_small.yada,     ... Id=null, BodyLength=null, ContentType=image/yada},
Attachment:{Body=Blob[2],    ... Name=no_small.gif,      ... Id=null, BodyLength=null, ContentType=IMAGE/gif},
Attachment:{Body=Blob[2],    ... Name=no_isfoo.FOO,      ... Id=null, BodyLength=null, ContentType=text/plain},
Attachment:{Body=Blob[2],    ... Name=no_isfoo.yada.foo, ... Id=null, BodyLength=null, ContentType=text/plain},
Attachment:{Body=Blob[2048], ... Name=no_isjpg.jPg,      ... Id=null, BodyLength=null, ContentType=image/yada})

16:07:26.309 (7309998000)|USER_DEBUG|[2]|DEBUG|
AttachmentFilter, input is: (
Attachment:{Body=Blob[2],    ... Name=oktxt.yada,        ... Id=null, BodyLength=null, ContentType=text/plain},
Attachment:{Body=Blob[2048], ... Name=okbig.yada,        ... Id=null, BodyLength=null, ContentType=image/yada},
Attachment:{Body=Blob[2],    ... Name=no_small.yada,     ... Id=null, BodyLength=null, ContentType=image/yada},
Attachment:{Body=Blob[2],    ... Name=no_small.gif,      ... Id=null, BodyLength=null, ContentType=IMAGE/gif})

16:07:30.363 (11363893000)|USER_DEBUG|[2]|DEBUG|
AttachmentFilter, input is: (
Attachment:{Body=Blob[2],    ... Name=oktxt.yada,        ... Id=null, BodyLength=null, ContentType=text/plain},
Attachment:{Body=Blob[2048], ... Name=okbig.yada,        ... Id=null, BodyLength=null, ContentType=image/yada})

16:07:30.493 (11493729000)|USER_DEBUG|[2]|DEBUG|
AttachmentFilter, input is: (
Attachment:{Body=Blob[2],    ... Name=no_isfoo.FOO,      ... Id=null, BodyLength=null, ContentType=text/plain},
Attachment:{Body=Blob[2],    ... Name=no_isfoo.yada.foo, ... Id=null, BodyLength=null, ContentType=text/plain},
Attachment:{Body=Blob[2048], ... Name=no_isjpg.jPg,      ... Id=null, BodyLength=null, ContentType=image/yada})

16:07:30.565 (11565822000)|USER_DEBUG|[2]|DEBUG|
AttachmentFilter, input is: (
Attachment:{Body=Blob[2],    ... Name=no_small.yada,     ... Id=null, BodyLength=null, ContentType=image/yada},
Attachment:{Body=Blob[2],    ... Name=no_small.gif,      ... Id=null, BodyLength=null, ContentType=IMAGE/gif})

16:07:30.626 (11626327000)|USER_DEBUG|[2]|DEBUG|
AttachmentFilter, input is: (
Attachment:{Body=Blob[2],    ... Name=oktxt.yada,        ... Id=null, BodyLength=null, ContentType=text/plain},
Attachment:{Body=Blob[2048], ... Name=okbig.yada,        ... Id=null, BodyLength=null, ContentType=image/yada})

16:07:30.710 (11710629000)|USER_DEBUG|[2]|DEBUG|
AttachmentFilter, input is: (
Attachment:{Body=Blob[2],    ... Name=no_isfoo.FOO,      ... Id=null, BodyLength=null, ContentType=text/plain},
Attachment:{Body=Blob[2],    ... Name=no_isfoo.yada.foo, ... Id=null, BodyLength=null, ContentType=text/plain},
Attachment:{Body=Blob[2048], ... Name=no_isjpg.jPg,      ... Id=null, BodyLength=null, ContentType=image/yada})

16:07:30.719 (11719401000)|USER_DEBUG|[2]|DEBUG|
AttachmentFilter, input is: (
Attachment:{Body=Blob[2],    ... Name=no_small.yada,     ... Id=null, BodyLength=null, ContentType=image/yada},
Attachment:{Body=Blob[2],    ... Name=no_small.gif,      ... Id=null, BodyLength=null, ContentType=IMAGE/gif})
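One possible workaround - a sketch only, assuming Body stays populated in the before-insert context as the debug output above suggests - is to fall back to Blob.size() when BodyLength is null:

trigger AttachmentFilter on Attachment bulk (before insert) {
  for (Attachment a : Trigger.new) {
    // BodyLength arrives null as of Summer '13, so compute the size from the Blob instead
    Integer len = (a.BodyLength != null)
        ? a.BodyLength
        : ((a.Body != null) ? a.Body.size() : 0);
    if (len < CONFIGURED_SIZE) a.addError('Skipping attachment per SIZE_TOO_SMALL');
    }
  }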

 

 

 

Salesforce support: I have opened case 09406443 to track this issue.

  • June 25, 2013
  • Like
  • 0

We have two customers who have reported this issue in the past 2 days, and I fear there are more to come.

 

When they have "Data.com Clean" enabled, doing a simple Contact creation from one of our Visualforce pages (with custom controller) fails with the following error:

 

Visualforce Page: /apex/i__aaPendingAddrs

caused by: System.DmlException: Insert failed. First exception on row 0; first error: UNKNOWN_EXCEPTION, INVALID_TYPE: sObject type 'DataDotComEntitySetting' is not supported.: []

Class.i.CtlPendingAddrs.insertC: line 228, column 1
Class.i.CtlPendingAddrs.doSave: line 190, column 1
Class.i.CtlPendingAddrs.saveActions: line 162, column 1

 

The code where the failure occurs is bog standard, simple Contact creation.  This code works perfectly in thousands of installs of our managed package.  Our code does not reference "DataDotComEntitySetting" in any way.  Here's the code that the stack trace leads to:

  private static void insertC(PagerPending.Item[] items) {
    Contact[] objs = new Contact[0];
    for (PagerPending.Item i : items) {
      if (i.action == 'nc') {
        if (i.addr.LastName__c == null) { i.error = NOLASTNAME; continue; }
        i.newcontact.Email = i.addr.FullAddr__c;
        i.newcontact.FirstName = i.addr.FirstName__c;
        i.newcontact.LastName = i.addr.LastName__c;      
        if (SFDC.hasRecTypes() && i.getCRecordType() != null) i.newcontact.put('RecordTypeId', i.getCRecordType());
        objs.add(i.newcontact);
        }
      }
    if (objs.size() > 0) insert objs;  // THIS IS LINE 228 PER THE STACK TRACE
    }


As you can tell it is pretty darn simple.

 

The error message - with its reference to "DataDotComEntitySetting", which does not exist in our code - shows that this is an internal salesforce/data.com bug.  Our customers who have encountered this issue note that it reproduces with 100% certainty if Data.com Clean is enabled, and disappears when Data.com Clean is disabled.

 

I'm guessing that Data.com Clean has a trigger on Contact creation that is buggy and throws an error, but somehow the call stack is lost and it, instead, unwinds to the contact insert statement itself.  However, our customers report that contact creation from within the normal Salesforce user interface still works, so it's some complicated interaction between Data.com Clean and managed packages.

 

 

Salesforce support - I have opened case 09191025 to track this issue.

  • April 26, 2013
  • Like
  • 0

We are seeing intermittent error bounces from Email Services with this error message:

 

The attached message was sent to the Email Service address <(redacted)> but could not be processed because the following error occurred:

554 Failed to get next element

 

 

Manually re-sending the offending message back to Email Services - in exactly the same format as was sent originally - works fine, so the error is not in the email content but instead is a transient bug within Email Services.

 

Edit: given that this is a transient error that can be fixed by retrying the message, this should be grouped with Email Services timeout should retry.

  • April 09, 2013
  • Like
  • 0

We post emails into Salesforce both over the API as well as using Email Services.

 

In our experience, Email Services is less reliable than the API. In particular, we see error messages like this:

 

554 Request timed out waiting for connection: [config 200ms, actual 209ms] to ConnPool_2, Num waiting=5, Thread=/sfdc/Soap/classInstance/..., Current OrgId=..., Current UserId=..., Current url=/sfdc/Soap/classInstance/...

 

If we get a timeout error on the API, we can easily retry the transaction.

 

But with Email Services, the error above is fatal. Rather than retrying after a timeout, Email Services simply throws up its hands and bounces the message.


The lack of a retry-on-error capability within Email Services seems like a bug.  This is a purely internal-to-salesforce error, out of the hands of the user, and retrying the offending transaction takes work (you have to strip the original email out of the error bounce and then repost it to the Email Services address).  

 

Note that Email Services *will* requeue a message after "Over Rate Limit" failures, so there is a retry queue available.  Please use it after transient errors like the above!

  • April 09, 2013
  • Like
  • 0

We are seeing some real-world emails that Email Services cannot handle, instead generating an error message:

 

554 Error processing email content Error ID=1753825505-222 (1527040629)

 

Emails that cause this error have a blank Content-Transfer-Encoding header in the text & html parts:

 

Date: Thu, 4 Apr 2013 19:32:24 -0400
MIME-Version: 1.0
Message-ID: <456cff6ec6ae70ac0bae49733e680ea3@dp1.example.com>
To: <to@example.com>
From: <from@example.com>
Subject: test subject
Content-Type: multipart/alternative;
	boundary="Alexandria=_d516e2a0981b50e1fbf77c61de0bd050"

--Alexandria=_d516e2a0981b50e1fbf77c61de0bd050
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 

text body

--Alexandria=_d516e2a0981b50e1fbf77c61de0bd050
Content-Type: text/html; charset="utf-8"
Content-Transfer-Encoding: 

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<body>
html body
</body>
</html>

--Alexandria=_d516e2a0981b50e1fbf77c61de0bd050--

 

 

Email Services handles the email just fine if you simply remove the header:

 

Date: Thu, 4 Apr 2013 19:32:24 -0400
MIME-Version: 1.0
Message-ID: <456cff6ec6ae70ac0bae49733e680ea3@dp1.example.com>
To: <to@example.com>
From: <from@example.com>
Subject: test subject
Content-Type: multipart/alternative;
	boundary="Alexandria=_d516e2a0981b50e1fbf77c61de0bd050"

--Alexandria=_d516e2a0981b50e1fbf77c61de0bd050
Content-Type: text/plain; charset="utf-8"

text body

--Alexandria=_d516e2a0981b50e1fbf77c61de0bd050
Content-Type: text/html; charset="utf-8"

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<body>
html body
</body>
</html>

--Alexandria=_d516e2a0981b50e1fbf77c61de0bd050--

 

A minor bug, but Email Services should be able to handle almost any email that is "roughly valid".  The world of email is wild & wooly and full of messages that are nonconforming in minor ways like this, and Email Services should be ready to handle them.

 

 

Salesforce support: I have created case 09011561 to track this issue.

 

 

  • April 09, 2013
  • Like
  • 0

This post talks about OAuth, but the exact same issues apply for username + password
interactions.

Salesforce has multiple instances on the backend, but uses a single "front door" for
logins (login.salesforce.com).  After successful login, you are sent to the right instance
to connect to (na15.salesforce.com, cs1.salesforce.com, etc).  At the user level
this is handled with a simple redirect; at the API level this is information passed
back in the login response.

HOWEVER, that top level unified domain only works for production logins.  Sandbox
logins have to use a different domain (test.salesforce.com), even though there
are *many* situations in which the calling code cannot possibly know which domain
to use.


As an example, let's say you want OAuth support in your API client app.

The first step in the OAuth flow is sending the user to a special page at which
they can OK access by your app.  You need to be able to determine if the user
is sandbox or not, because if they are, the initial "approve access" URL is
different:

 

normal: https://login.salesforce.com/services/oauth2/authorize
sandbox: https://test.salesforce.com/services/oauth2/authorize

This is annoying but surmountable - given that you are already interacting with the user,
you can probably figure out some hamfisted way of determining which URL to send them to.

But the next step is totally opaque.

The user authorizes access, and is redirect to your redirect_uri.  Your redirect_uri
is given a single parameter ("code") for the next call to "authorization_code".

In other words, your redirect_uri page is given zero indication of whether the given
code is used in the sandbox or not.

The only solution is to try the "authorization_code" call on the production URL, and if
that fails, cache the error & try the sandbox URL.  If that works, great, you know
it's a sandbox situation.  If try #2 fails, you now have two errors, and you need
to arbitrate amongst them to decide which to bubble up to the user.

 

(Note there is a totally nasty hack to guess sandbox vs. production - you could
look at the HTTP "Referrer" header, and hope that it's present & correct, and
then perform some mapping by which you hardcode that "cs4.salesforce.com" is
sandbox but "na3.salesforce.com" is production...  I do not count this as a
robust solution)

Given that the entire point of "login.salesforce.com" is to provide a single
"front door" that then tells you which actual domain to use, why don't sandbox
logins use the exact same front door?  Using "test.salesforce.com" for logins
means that client apps that wish to support the sandbox are needlessly complex.



Edit: it is possible to pass a "state" parameter to the oauth login page, and that will get passed back
to the redirect_uri, so one can detect sandbox prior to sending the user to the login page, and then
encode that knowledge into the "state" parameter so the redirect_uri knows.  It's a solution for this
particular case, but in general having to decide ahead of time on the "front door" endpoint means
that everyone who does an integration has to cart around extra data about each oauth token
(ie, whether it is sandbox or not).  Still seems like a design bug.
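For concreteness, the "state" approach might look something like this (written in Apex purely for illustration - the client app could be in any language - and the client id, redirect URI, and up-front sandbox detection are all stand-ins):

public static String buildAuthorizeUrl(Boolean isSandbox, String clientId, String redirectUri) {
  // pick the front door up front, and remember the choice in "state"
  String loginHost = isSandbox ? 'https://test.salesforce.com' : 'https://login.salesforce.com';
  return loginHost + '/services/oauth2/authorize'
    + '?response_type=code'
    + '&client_id=' + EncodingUtil.urlEncode(clientId, 'UTF-8')
    + '&redirect_uri=' + EncodingUtil.urlEncode(redirectUri, 'UTF-8')
    + '&state=' + (isSandbox ? 'sandbox' : 'production');
  }

// the redirect_uri handler then reads "state" and exchanges the code at the matching
// token endpoint (test.salesforce.com vs. login.salesforce.com /services/oauth2/token)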

  • March 19, 2013
  • Like
  • 1

I have a test email service with the "Advanced Email Security" check disabled.

 

Nonetheless, I am getting an "SPF validation failure" response from SFDC's mail servers.

 

 

[220] 'mx1-sjl.mta.salesforce.com ESMTP'
> EHLO localhost
[250] 'mx1-sjl.mta.salesforce.com says EHLO to ...:61009'
[250] 'SIZE 20971520'
[250] 'STARTTLS'
[250] 'PIPELINING'
[250] 'ENHANCEDSTATUSCODES'
[250] '8BITMIME'
> MAIL FROM: <test@REDACTED.com>
[250] 'SPF validation failure'
...

 

This exact same pattern occurs regardless of whether the "Advanced Email Security" box is checked or not.

 

It's a requirement for our use case that Advanced Email Security not be enabled for this particular Email Service.

 

Anybody else seeing this?

  • November 15, 2012
  • Like
  • 0

Writing some InstallHandler code for upgrade scripts.

 

Using Database.executeBatch, naturally, for large-scale data changes.

 

One of our classes that implements Database.Batchable has an instance field of type System.Version (to track the version that this particular batch job is doing the upgrade work for).

 

Turns out the System.Version class - which is just 3 integers - isn't serializable ... and as a result the call to executeBatch fails:

 

System.SerializationException: Not Serializable: system.Version

 

Now, System.Version is just three integers, so I expect this lack of serializability was a simple oversight in the underlying code.

 

 

For the moment we're working around it with our own "Version" class:

 

// We can't have "Version" as an instance field b/c it's not serializable (wut)
// So we have to roll our own here
public class V {
  public final integer major, minor, patch;
  public V(Version v) {
    this.major = v.major();
    this.minor = v.minor();
    this.patch = v.patch();
    }
  public Version version() {
    return new Version(major, minor, patch);
    }
  }

 

That *is* serializable, so it can happily be referenced by a Database.Batchable implementation.

  • October 12, 2012
  • Like
  • 0

The new-in-Summer-12 "InstallHandler" interface is a huge addition to Apex - thanks salesforce devs & PMs.

 

Question:

 

I note that the InstallHandler script runs as "special system user that represents your package".

 

Can I call System.schedule() within an InstallHandler to create a new Scheduled Job w/ that same user context?  If so, will it run successfully, or will it fail b/c that special app user no longer exists?
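For concreteness, the call in question would look something like this (the schedulable class name and cron expression are hypothetical):

public class MyInstallHandler implements InstallHandler {
  public void onInstall(InstallContext ctx) {
    // this runs as the special package install user; the question is whether a job
    // scheduled here keeps running after the install completes
    // (NightlyCleanup is a hypothetical Schedulable in the package)
    System.schedule('Nightly cleanup', '0 0 3 * * ?', new NightlyCleanup());
    }
  }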

 

I recognize that I can find this out on my own, but rolling a package & installing it, etc, will take quite some time ... does anyone just know the answer?

 

Note - I really hope the answer is "you can create Scheduled Jobs with that User, and they work great!" because that would solve a major pain-point of applications like ours that require Scheduled Apex and are installed at the customer for years:  eventually, the User account associated with our Scheduled Job is deactivated, and then our scheduled job fails, and then we need to reschedule it with a new user ID ... until it is deactivated, etc.

  • October 11, 2012
  • Like
  • 0

Our application stores username & pwd/token credentials to execute API calls via the SOAP and WSDL apis.

For some customers, we see incorrect INVALID_LOGIN responses occasionally, and we'd like to understand why. 
This is a particular problem for us b/c, when we get an INVALID_LOGIN response, we assume the password has been changed and we pause integration. We also start sending "nag" emails to the customer to get them to give us the new pwd + token combo.

As you can imagine, our customers get very annoyed when they get this "nag" email but their username & password has not, in fact, changed.

Most recently this happened two days ago - on October 2nd at 12:29:12 pm (PDT), one of our API calls received this response:

INVALID_LOGIN: Invalid username, password, security token; or user locked out.

However, yesterday on the 4th (two days after this response), I verified that the username & password + token continued to work ... so the INVALID_LOGIN response was incorrect. (note that, because the token is unique for each new password, we can be certain that the password did not change - the same set of credentials worked before the 2nd, and after the 4th,  but that single "freak" response is enough to make our system think that the pwd is bad & to pause the integration.)


Please shed light on why we received that API response, and whether this sort of behavior is expected with some frequency. If so, we will need to change our algorithms to be on guard against false INVALID_LOGIN responses.

 

As far as we can tell it is somewhat rare, but it does affect a small percentage (<1%) of customers at any given time; however, it is quite disruptive for those it affects.

 

Salesforce support - we have opened case 08225908 to track this issue.

  • October 05, 2012
  • Like
  • 0

Hi!

 

We here at iHance are looking for a great developer to do some contract work for us.  Our headline salesforce app is Absolute Automation Email Logger, one of the first apex+visualforce apps written and among the top 5 paid apps on the AppExchange.


Who we are looking for:

A highly skilled Apex/Visualforce developer who knows the ins & outs of Salesforce's different versions and permission structures.

NOTE - We are looking for a direct contact with a single individual.  We are not interested in working with a consulting group nor offshore development shops; please do not contact us if you fit that description.


About the project:

Absolute Automation was written for Enterprise Edition and above, but it should be possible to get it running under Professional Edition as well.  Your job will be to figure out all the incompatible spots and then develop solutions for them.

The project scope is small and likely can be completed within a month.  We are open to flexible time commitments - if you want to do this on nights & weekends because you've got a weekday job, no problem.  We have no geographical requirements; contractors from all over the world are welcome.


Should everything go well with this project, we will have additional projects that we'd be happy to have your help with.


Who will I be working with?


During the project, you'll be working with our CTO, John Hart.  John, who can be found on the Force.com Boards as "jhart", is a Dreamforce "Developer Hero" award winner and member of the "Native Platform Council" advisory group (woot!).  He knows a lot about some parts of the platform, less about others - hopefully you'll both learn from each other.
 
You'll get the chance to work on one of the largest Apex codebases out there.  Your work will be deployed into thousands of installs.  If, during the project, you run into platform bugs that really *should* be fixed, we'll throw our weight behind you and try to get resolution through our ISV channels.


Please contact apex.dev.wanted@ihance.com if you're interested.

  • April 06, 2012
  • Like
  • 0

If you send an email to a Contact or Lead from the Activity History "Send an Email" interface, and the Contact/Lead has a blank Email address, you are prompted to enter an Email address:


 

 

 

If you click "Save Address" or "Save Address and Send", the Contact/Lead will be updated with the given address.


However, no Contact or Lead triggers fire for this update (this is the bug).

 

Any app logic that depends on Contact/Lead triggers to detect Email address changes will therefore be left in an inconsistent state.


To verify this, I wrote a quick trigger for the Contact object:

 

trigger logEmailUpdate on Contact bulk (after delete, after insert, after update, after undelete) {
  string addrs = '\n';
    for (integer i=0; i<Trigger.size; i++) {
      addrs += string.format('[{0}]: Email changing from "{1}" to "{2}"\n', new string[] {
        '' + i,
        Trigger.isInsert ? '(isInsert - no old record)' : Trigger.old[i].Email,
        Trigger.isDelete ? '(isDelete - no new record)' : Trigger.new[i].Email
        });
      }

  AALog.log('DEBUG', 'Contact trigger fired '
    + (Trigger.isBefore ? 'before' : '')
    + (Trigger.isAfter  ? 'after'  : '')
    + ' '
    + (Trigger.isInsert ? 'insert' : '')
    + (Trigger.isUpdate ? 'update' : '')
    + (Trigger.isDelete ? 'delete' : '')
    + (Trigger.isUndelete ? 'undelete' : '')
    + ', email changes are '
    + addrs
    );
  }

 

The "AALog.log" method here writes a row to a simple logging object.


I then created a new Contact with a null address via the UI:


Contact trigger fired after insert, email changes are 
[0]: Email changing from "(isInsert - no old record)" to "null"

 I then went through the "Send Email" interface as shown above & updated the Email address that way.

The trigger was not called, however the Contact was updated.

The very next thing I did was edit the Contact to delete the (silently updated) Email address.  As expected, the trigger fired:


Contact trigger fired after update, email changes are 
[0]: Email changing from "test@ihance.com" to "null"

 

Note that the Email addr is already set, even though our trigger was never called for that update.

I then updated the Contact via the normal UI to set its Email field, to verify the trigger output is as expected:


Contact trigger fired after update, email changes are 
[0]: Email changing from "null" to "test@ihance.com"

 

This is what we should have seen when the Contact was updated from the "Send Email" dialog.

 

Finally I deleted the Contact altogether:


Contact trigger fired after delete, email changes are 
[0]: Email changing from "test@ihance.com" to "(isDelete - no new record)"

 Note that all actions invoke the trigger except the Email update done via the Send Email interface.


Leads have the exact same bug.

 

 

Salesforce support - I created case 05987983 to track this issue.

  • August 12, 2011
  • Like
  • 0

During setup of Absolute Automation, we provide the admin with a link directly to the User Profiles page so they can rapidly add the right permissions to their Profiles.

 

However, the profile URL changes depending on whether or not the "enhanced" profile view has been enabled - and, if you choose the wrong one, the user is given an ugly error screen ("Insufficient privileges") instead of being redirected to the right one.

 

The two possible paths are:

 

/setup/ui/profilelist.jsp?setupid=Profiles

 

or

 

/00e?setupid=EnhancedProfiles

 

If there's no way to determine which link is the right one, we'll have to remove the hyperlink altogether and just tell the admin to navigate there...

  • August 11, 2011
  • Like
  • 0

Some quick background on merges and Apex triggers:

When a User merges Leads or Contacts, the merge losers are deleted and the
merge winners are updated with the fields chosen by the User in the merge UI.

The only way to detect a merge in Apex triggers is to trigger "after delete" on
the loser records and check them for the "MasterRecordId" field.  If present,
the record being deleted is a merge loser, and the "MasterRecordId" points to
the merge winner.

(this is all covered in the docs)
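For reference, the detection pattern looks like this (sketched for Lead; Contact is the same):

trigger DetectLeadMerge on Lead (after delete) {
  for (Lead loser : Trigger.old) {
    if (loser.MasterRecordId != null) {
      // this record was deleted as part of a merge; MasterRecordId points at the winner
      System.debug('Lead ' + loser.Id + ' was merged into ' + loser.MasterRecordId);
      }
    }
  }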

As stated in the docs, the losers are deleted before the merge winner is
updated with the fields chosen by the User in the UI.

So, let's say that I merge two Leads: Lead A ("a@test.com") and Lead B
("b@test.com").  In the UI I choose Lead A as the master record (winner), but
in the "decide between conflicting fields" UI I choose "b@test.com" as the
Email address to use for the winner.

Two DML ops happen:

 

DELETE loser (via merge)
Trigger.old = { LastName = "B", Email = "b@test.com" }

UPDATE winner (via merge)
Trigger.old = { LastName = "A", Email = "a@test.com" }
Trigger.new = { LastName = "A", Email = "b@test.com" }


However, if we update the winner during the loser delete trigger (the only time
we can detect a merge, remember) ... then something buggy happens.

Our application does exactly this, by detecting merges and copying the loser's
Email address into a custom "OtherEmails" field of the winner.  (this isn't just
arbitrary, there's a good reason for it).


So, during the "DELETE loser" trigger, we update the winner like so:

DELETE loser (via merge)
Trigger.old = { LastName = "B", Email = "b@test.com" }
{
// our custom trigger code
Lead winner = [select Id, Email, OtherEmails from Lead where Id = '<winnerId>']
winner.OtherEmails += loser.Email
update winner;
// this update of course fires triggers too, which would look like this:
UPDATE winner (via standard update)
Trigger.old = { LastName = "A", Email = "a@test.com", OtherEmails = null }
Trigger.new = { LastName = "A", Email = "a@test.com", OtherEmails = "b@test.com" }
}

 

The bug happens in the merge-driven winner update, where SFDC should be
applying the fields chosen by the User during conflict resolution.

The fields chosen by the User are simply gone.  They never get updated into the
winner.  Instead, an update fires that looks like this:

UPDATE winner (via merge)
Trigger.old = { LastName = "A", Email = "a@test.com", OtherEmails = null }
Trigger.new = { LastName = "A", Email = "a@test.com", OtherEmails = "b@test.com" }

 
The User's choice of "Email = b@test.com" is simply gone ... instead this
merge-driven update is a duplicate of the update that happened in the loser's
delete trigger.


What do I expect to happen?

This is a tricky situation, hence the title of this post.  With the present
order of operations - with the loser delete happening before the winner update,
and with the merge only being detectable in the loser delete, I can't think of
any good way to resolve conflicts between trigger-driven winner updates and the
user-selected winner updates.  A couple other changes may fix the issue:

1.  Update the winner before deleting the loser.

This way, custom merge logic (in loser-delete triggers) would be working with a
Winner that's already been updated with the User-selected fields.

Of course, this is a breaking change for implementations that rely on the
current behavior (though I don't see how they could), and there are probably
good reasons for the current order of operations that I can't think of but
which are obvious to SFDC's devs.

2.  Provide an actual "after merge" trigger that provides the losers & winners at the same time.

This "after merge" trigger would be just like an "after update" trigger (ie,
Trigger.old/new contain the pre- and post-update state of the Winners), plus a
new context variable Trigger.mergeLosers that contains what you would expect.

 

 

 

Salesforce support - I have created case 05650893 to track this issue.

  • June 21, 2011
  • Like
  • 1

At certain clients, we are getting the dreaded "data skew" error for the following query:

[select Id from Task where EmailId__c in :msgIds]


This is called in a trigger, and msgIds will typically have only a single member (& definitely no more than 200).

Similarly, this query will generally return only a single Task - in other words, filtering by the EmailId__c field is highly selective.


However, at certain clients (those who have many Task objects), we are getting this error:

Error on create: CANNOT_INSERT_UPDATE_ACTIVATE_ENTITY: i.tAA_EmailHasAttachments: execution of AfterInsert

caused by: System.QueryException: Non-selective query against large object type (more than 100000 rows). Consider an indexed filter or contact salesforce.com about custom indexing.
Even if a field is indexed a filter might still not be selective when:
1. The filter value includes null (for instance binding with a list that contains null)
2. Data skew exists whereby the number of matching rows is very large (for instance, filtering for a particular foreign key value that occurs many times)

 

Our filter list does not include the value null, and the number of matching rows is quite small, so it appears to be an indexing issue.

A couple things of note:

a.  This field holds IDs, but is defined as a simple text field, because Tasks cannot have custom lookup fields (why?)

We would certainly prefer it be a lookup field.  Perhaps lookup fields are auto-indexed and wouldn't have this problem.


b.  Our packaged application cannot require this field be indexed.

When defining the custom field, the only way to index it is to mark it as an External ID, but that requires a unique value, which this field is not.


 


Questions:

 

Is our only option contacting each of our affected customers and asking them to ask salesforce to add a custom index?

Are there any plans for letting us define custom fields which are indexed but not unique?

Are there any plans for letting Tasks have custom lookup fields?



Salesforce support - I have created case 05569974 to track this issue.

  • June 08, 2011
  • Like
  • 0

The Inbound Email object doesn't decode MIME-Header encoded subject lines if they use spaces (instead of underscores, as the spec demands).

So this subject line:

=?UTF-8?Q?My_Company=E2=84=A2?=

Will be received by Email Services and parsed correctly into "My Company™"

But the same subject line is not decoded if it uses a space instead of a _:

=?UTF-8?Q?My Company=E2=84=A2?=

So the Apex code that processes the email is presented with the encoded format, instead of the proper decoded content.


While this is strictly correct behavior per the spec (RFC 2047), it doesn't fit the real world.  In particular, GMail uses spaces instead of underscores ... so any email sent from GMail (or Google Apps) that has an encoded subject line won't be decoded by Email Services.

Many other RFC 2047 decoders (eg, Perl's Encode::MIME::Header) are tolerant of spaces in the encoded-text.


(Salesforce support - I created case 02325915 to track this issue).


Message Edited by jhart on 12-25-2008 01:05 PM
  • December 25, 2008
  • Like
  • 1
Let's say I have an unpackaged page that wants to point at a packaged page via a standard <a> tag:

Code:
<a href="/apex/i__packagedPage">Go from unpackaged to packaged</a>

 
That works fine, as "https://na1.salesforce.com/apex/i__packagedPage" redirects to "https://i.na1.visual.force.com/apex/packagedPage".


However, I cannot do the reverse.  If I'm on a packaged page, this:

Code:
<a href="/apex/unpackagedPage">Go from packaged to unpackaged</a>

 
Does not work, because "https://i.na1.visual.force.com/apex/unpackagedPage" does not redirect back to the unpackaged domain; instead the sub-domain host just throws an error.


An understandable workaround would be if you had to use a controller method to get the right URL via a PageReference:

Code:
// packaged page:
<a href="{!unpackagedUrl}">Go from packaged to unpackaged</a>

// packaged controller:
public string getUnpackagedUrl() {
  PageReference p = new PageReference('/apex/unpackagedPage');
  return p.getUrl();
  }


But that doesn't work either - the code above generates the same relative URL, just like our HREF example above (ie, PageReference doesn't have any additional domain smarts).


So whatever is generating these URLs from the "https://i.na1.visual.force.com" domain has to know that "https://na1.salesforce.com" is the appropriate unpackaged domain for the current user.

On the client side, this could be done via javascript.


But - I need to do this on the *server* side, because the actual use case is redirecting from a packaged page to an unpackaged version of that page (in order to support client customizations of our packaged UI):

Code:
// page:
<apex:page controller="PackagedController" action="{!checkForUnpackagedOverride}">


// controller:
public PageReference checkForUnpackagedOverride() {
  ApexPage[] override = [select Name from ApexPage where ...];
  if (override.size() == 0) return null;
  PageReference p = new PageReference('/apex/' + override[0].Name);
  p.setRedirect(true);
  return p;
  }

 
But this doesn't work.  What can I do on the server side to figure out my salesforce instance URL?

I'll post what I find, if anything.
  • December 15, 2008
  • Like
  • 1
I've got an object defined with an "external ID" field (in this case, email address).

We've got code that takes real-world values and upserts them into this object.  We don't bother querying for matches first - we just upsert the given values, knowing that they will either insert (if new) or update (thus resolving to the correct ID).  Either way my SObjects are guaranteed to have a valid ID post-upsert.

However, if I run enough processes in parallel, I'll get an upsert error:

EXCEPTION: Upsert failed.  First exception on row 0; first error:
DUPLICATE_VALUE, duplicate value found: i__FullAddr__c duplicates value on record with id: a0T700000003bX4


I'll investigate using the Database.upsert() method to retry the upsert on failure; hopefully that will fix the problem.
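The retry might look something like this - a sketch only, where Addr__c and FullAddr__c stand in for the actual object and external-ID field, and records is the list being upserted:

// first pass: partial-success upsert keyed on the external ID field
Database.UpsertResult[] results = Database.upsert(records, Addr__c.FullAddr__c, false);

// second pass: retry just the rows that collided with a concurrent upsert
List<Addr__c> retries = new List<Addr__c>();
for (Integer i = 0; i < results.size(); i++) {
  if (!results[i].isSuccess()) retries.add(records[i]);
  }
if (!retries.isEmpty()) Database.upsert(retries, Addr__c.FullAddr__c, true);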

Am I smoking crack to think that upsert should be an atomic operation?  This is empirically not the case - we've clearly got interleaved upserts - but is this a bug or a "feature"?
  • May 01, 2008
  • Like
  • 1
According to the Spring '14 release notes, ISVs will soon be able to delete fields from managed packages.  That is great news.

I have two questions about this.

1.  Will deleted fields be available to InstallHandlers during upgrade?

For performance reasons, sometimes it's necessary to replace a field with an identical field that has "External ID" turned on.  When we do this, our InstallHandler recognizes the upgrade & launches a batch job to copy all of the old values into the new field.
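
To make question 1 concrete, our upgrade hook looks roughly like this (a sketch - the class names and the "2.0" version cutoff are illustrative, not our actual code):

// Sketch of the upgrade pattern described above.
global class PostInstall implements InstallHandler {
  global void onInstall(InstallContext ctx) {
    // Only on upgrades from versions that still used the old, un-indexed field.
    if (ctx.isUpgrade() && ctx.previousVersion().compareTo(new Version(2, 0)) < 0) {
      Database.executeBatch(new CopyOldFieldBatch());   // batch copies old values into the new External ID field
      }
    }
  }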

We'd then like to delete the old field.  Is that ever possible?  If I stagger my upgrades like so:

Version 1 - uses old field
Version 2 - adds new indexed field; upgrade script copies old values into new field
Version 3 - deletes old field.

What happens if a customer upgrades directly from Version 1 to Version 3?  Will it work, or will the old field be deleted before the Version 2 upgrade code runs?


2.  We recently added some lookup fields that are now preventing some customers from upgrading (b/c they are running into a "too many custom relationship fields" error).  We've decided that we'd rather remove the feature so that all customers can continue upgrading.

Now, the release notes state that deleted fields will actually remain in the subscriber org:

Subscribers who upgrade to the new package version will still have the deleted components available in their organization. They’re displayed in the Unused Components section of the Package Details page.

Does that also apply to customers whose upgrade path "skips" the deleted field entirely, eg:

Version 1 - baseline
Version 2 - introduces field X
Version 3 - deletes field X

If a customer on Version 1 cannot upgrade to Version 2 due to a "too many custom fields" limit, will they be able to upgrade directly to Version 3?  Or will the "unused components" still kinda exist and thus prevent upgrade?
  • April 04, 2014
  • Like
  • 0

I noticed that when adding Visualforce to Salesforce1 (in either the page layout, a mobile card, or a publisher action), any interaction on the page causes it to reload in its own frame before allowing the interaction.

 

For example, on a page with a link, the link is rendered, but tapping it (or anywhere else on the embedded page) causes the Visualforce page to load full screen; only then am I able to tap the link and navigate.

 

Is that behaviour intended, or is there a workaround?

Our product, Absolute Automation, updates custom fields on standard objects (Contacts, Leads, Accounts, etc).

 

Some of our customers have custom validation rules defined for those standard objects.

 

But custom validation rules have a serious problem: they prevent *any* update to an object that fails validation, even if the update in question has nothing to do with the validated fields. Because objects may pre-date the custom validation rules, this creates a number of "minefield" objects which cannot be updated until the validation error is fixed.

 

While this may make sense in a user interface context, it makes no sense in an API or code context, and even less sense in a managed package.


Let's look at a simplified example.


Starting with a standard test org with sample Contacts, let's define a new Custom Validation rule requiring that the Contact.FirstName field equal "foo". Of course, none of our existing Contacts validate under that rule, but they pre-date the rule so they are already in the database (and SFDC does not prevent me from creating the rule, nor tell me that I have non-conforming records in my system).

 

If, in the user interface, I go to update an unrelated field - say, Contact.Title - the validation failure does not let me update the Contact:

 

(Screenshot: UI validation failure)

 

 

Whether or not this makes sense is debatable - the user, after all, is not touching the FirstName field at all, so does it make sense to run the validation against it? If you think the answer is "yes", then it seems we should *also* require validation of all existing records when a Custom Validation rule is created.

 

But that's not the problem. The real problem is that this same error also prevents updates to unrelated fields in Apex Code.

 

Let's run an apex code snippet to update the Title of an existing Contact:

 

update new Contact(Id = '00370000016yOlu', Title = 'test');

 

Oops! Can't do it! The custom validation rule - which, again, is defined on a field that we are not even touching - prevents the update:

 

System.DmlException: Update failed. First exception on row 0 with id 00370000016yOluAAE; first error: FIELD_CUSTOM_VALIDATION_EXCEPTION, First name must be "foo"!: []

 

It becomes much harder to argue this is a good idea. If we are so militant about enforcing Custom Validations that we prevent *any* update to a non-conforming object until the validation is fixed, then SFDC shouldn't let me create Custom Validations unless every existing record conforms.

 

Now let's take it to the next level, and suppose that the Apex update is running in managed package code, and updating a custom field defined within that package. Now you're really getting crazy. Your lovely little app is now being sabotaged by validation rules that have nothing to do with your custom field update, and ... what are you supposed to do about it? You certainly don't know what the right values are to "fix" the validation failure. Instead, your updates just fail. Full stop. Did you want those updates? Sorry!


The upshot: every app on the AppExchange can assume very little about standard objects being updateable. At scale, you have to assume that you CANNOT update custom fields on standard objects, because at least one of your customers will have records that fail their own Custom Validation rules.

 

Thus, any such update must be coded defensively: how important is this update? If I can't update this field, do I need to cancel the entire transaction? Or can I log the error and skip the field update, knowing that my data model is now somewhat inconsistent? If I let the entire transaction cancel, how can I ever retry it? It will just fail again with the same error. So before I retry my transaction, I have to contact my customer and ask them to either (a) turn off their validation rule, or (b) update their entire database to ensure legacy records pass the newly defined validation.
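
In practice the defensive version ends up looking something like this (a sketch only; whether to skip or abort is a per-feature judgment call, and the logging is illustrative):

// Sketch of the defensive pattern - partial update, log & skip validation failures.
public static void defensiveUpdate(Contact[] contactsToUpdate) {
  Database.SaveResult[] results = Database.update(contactsToUpdate, false);   // allOrNone = false
  for (Integer i = 0; i < results.size(); i++) {
    if (results[i].isSuccess()) continue;
    for (Database.Error e : results[i].getErrors()) {
      if (e.getStatusCode() == StatusCode.FIELD_CUSTOM_VALIDATION_EXCEPTION) {
        // We choose to log & move on; the record is now out of sync with our data model.
        System.debug(LoggingLevel.WARN, 'Skipped ' + contactsToUpdate[i].Id + ': ' + e.getMessage());
        }
      }
    }
  }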


Finally: what is the benefit of preventing my update? I'm not touching a field that is being validated (if I were, fine, it makes sense to prevent me from saving new, non-conforming data). Preventing the update doesn't help the object validate - fixing the object presumably will take human intervention, and that human is nowhere to be found when my code is executing. The only consequence of preventing these unrelated updates is that 3rd party apps and integrations are either much less reliable, or much less consistent. Zero upside, tons of downside.

 

 

Salesforce support - I have opened case 09799365 to track this issue.

  • October 16, 2013
  • Like
  • 0

Global search does not respect field-level security for Profiles.

In other words, I can completely hide a field from a given profile.  But, a user in that profile can search through that field using Global Search.  All objects whose hidden field matches that search text will be displayed in the Global Search results.

This seems like a serious bug.

 

Consider salesforce's classic "HR/Recruiting app" example.  A user from whom a "Salary" field is hidden can nonetheless simply enter dollar amounts into Global Search to figure out the field value for every record.

I'm a bit shocked that such a large security hole is present in Global Search, and I think it can't possibly be by design.  I'm so surprised by this behavior that I keep double-checking it, but each time I'm able to search for values in fields that are hidden from my profile.

 

 

Salesforce support- I have created case 09471736 to track this issue.

  • July 15, 2013
  • Like
  • 0

Our application uses embedded Visualforce pages within standard object pages (contact, lead, account, opportunity).

 

Since the Summer '13 upgrade, a couple users (in separate orgs) are reporting that the embedded page will fail to load with this error message:

 

The page you submitted was invalid for your session.  Please click Save again to confirm your change.

 

 

Note that "Save" has not been clicked, nor any changes made.  This is on the initial load of the parent page.

 

The only fix we've discovered so far is to manually delete all salesforce.com and visual.force.com cookies.

 

This is not a bug in our code; these people are running the exact same version they have been running for years.  The only new thing in the mix is Summer '13.

 

Salesforce support - I have created case 09431887 to track this issue.

  • July 02, 2013
  • Like
  • 0

Our "Absolute Automation" app logs emails and their attachments to salesforce.

One of our configurable features is the ability to skip attachments based on extension or file size.

As there are multiple ways that Attachments are logged (Email Services or the normal CRUD API), we implemented this "attachment skipping" feature via a "before insert" trigger on Attachments.

The trigger, simplified, looks like this:

 

trigger AttachmentFilter on Attachment bulk (before insert) {
  for (Attachment a : Trigger.new) {
    if (a.BodyLength < CONFIGURED_SIZE) a.addError('Skipping attachment per SIZE_TOO_SMALL');
    }
  }

This has worked great for a while now.

However, as of Summer '13, it no longer works.  Why?  Because the "BodyLength" field is now _NULL_ in the trigger context.  This happens both when inserting Attachments via the SOAP API and in Apex tests that insert them directly.

If we modify our trigger like so:

 

trigger AttachmentFilter on Attachment bulk (before insert) {
  System.debug('AttachmentFilter, input is: ' + Trigger.new);
  for (Attachment a : Trigger.new) {
    if (a.BodyLength < CONFIGURED_SIZE) a.addError('Skipping attachment per SIZE_TOO_SMALL');
    }
  }

 We can verify that BodyLength is null for all Attachments in Trigger.new, even though "Body" is not.

Here's the debug output taken from a run of Apex testMethods:

 

16:07:26.074 (7074859000)|USER_DEBUG|[2]|DEBUG|
AttachmentFilter, input is: (
Attachment:{Body=Blob[2],    ... Name=oktxt.yada,        ... Id=null, BodyLength=null, ContentType=text/plain},
Attachment:{Body=Blob[2048], ... Name=okbig.yada,        ... Id=null, BodyLength=null, ContentType=image/yada},
Attachment:{Body=Blob[2],    ... Name=no_small.yada,     ... Id=null, BodyLength=null, ContentType=image/yada},
Attachment:{Body=Blob[2],    ... Name=no_small.gif,      ... Id=null, BodyLength=null, ContentType=IMAGE/gif},
Attachment:{Body=Blob[2],    ... Name=no_isfoo.FOO,      ... Id=null, BodyLength=null, ContentType=text/plain},
Attachment:{Body=Blob[2],    ... Name=no_isfoo.yada.foo, ... Id=null, BodyLength=null, ContentType=text/plain},
Attachment:{Body=Blob[2048], ... Name=no_isjpg.jPg,      ... Id=null, BodyLength=null, ContentType=image/yada})

16:07:26.309 (7309998000)|USER_DEBUG|[2]|DEBUG|
AttachmentFilter, input is: (
Attachment:{Body=Blob[2],    ... Name=oktxt.yada,        ... Id=null, BodyLength=null, ContentType=text/plain},
Attachment:{Body=Blob[2048], ... Name=okbig.yada,        ... Id=null, BodyLength=null, ContentType=image/yada},
Attachment:{Body=Blob[2],    ... Name=no_small.yada,     ... Id=null, BodyLength=null, ContentType=image/yada},
Attachment:{Body=Blob[2],    ... Name=no_small.gif,      ... Id=null, BodyLength=null, ContentType=IMAGE/gif})

16:07:30.363 (11363893000)|USER_DEBUG|[2]|DEBUG|
AttachmentFilter, input is: (
Attachment:{Body=Blob[2],    ... Name=oktxt.yada,        ... Id=null, BodyLength=null, ContentType=text/plain},
Attachment:{Body=Blob[2048], ... Name=okbig.yada,        ... Id=null, BodyLength=null, ContentType=image/yada})

16:07:30.493 (11493729000)|USER_DEBUG|[2]|DEBUG|
AttachmentFilter, input is: (
Attachment:{Body=Blob[2],    ... Name=no_isfoo.FOO,      ... Id=null, BodyLength=null, ContentType=text/plain},
Attachment:{Body=Blob[2],    ... Name=no_isfoo.yada.foo, ... Id=null, BodyLength=null, ContentType=text/plain},
Attachment:{Body=Blob[2048], ... Name=no_isjpg.jPg,      ... Id=null, BodyLength=null, ContentType=image/yada})

16:07:30.565 (11565822000)|USER_DEBUG|[2]|DEBUG|
AttachmentFilter, input is: (
Attachment:{Body=Blob[2],    ... Name=no_small.yada,     ... Id=null, BodyLength=null, ContentType=image/yada},
Attachment:{Body=Blob[2],    ... Name=no_small.gif,      ... Id=null, BodyLength=null, ContentType=IMAGE/gif})

16:07:30.626 (11626327000)|USER_DEBUG|[2]|DEBUG|
AttachmentFilter, input is: (
Attachment:{Body=Blob[2],    ... Name=oktxt.yada,        ... Id=null, BodyLength=null, ContentType=text/plain},
Attachment:{Body=Blob[2048], ... Name=okbig.yada,        ... Id=null, BodyLength=null, ContentType=image/yada})

16:07:30.710 (11710629000)|USER_DEBUG|[2]|DEBUG|
AttachmentFilter, input is: (
Attachment:{Body=Blob[2],    ... Name=no_isfoo.FOO,      ... Id=null, BodyLength=null, ContentType=text/plain},
Attachment:{Body=Blob[2],    ... Name=no_isfoo.yada.foo, ... Id=null, BodyLength=null, ContentType=text/plain},
Attachment:{Body=Blob[2048], ... Name=no_isjpg.jPg,      ... Id=null, BodyLength=null, ContentType=image/yada})

16:07:30.719 (11719401000)|USER_DEBUG|[2]|DEBUG|
AttachmentFilter, input is: (
Attachment:{Body=Blob[2],    ... Name=no_small.yada,     ... Id=null, BodyLength=null, ContentType=image/yada},
Attachment:{Body=Blob[2],    ... Name=no_small.gif,      ... Id=null, BodyLength=null, ContentType=IMAGE/gif})
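
For now, the workaround we're evaluating is to fall back to the Blob itself - a sketch only, since relying on Body.size() in the trigger context is an assumption we haven't fully verified:

trigger AttachmentFilter on Attachment bulk (before insert) {
  for (Attachment a : Trigger.new) {
    // Fall back to the Blob's size when BodyLength isn't populated in the trigger context.
    Integer len = (a.BodyLength != null) ? a.BodyLength : (a.Body != null ? a.Body.size() : 0);
    if (len < CONFIGURED_SIZE) a.addError('Skipping attachment per SIZE_TOO_SMALL');
    }
  }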

 

 

 

Salesforce support: I have opened case 09406443 to track this issue.

  • June 25, 2013
  • Like
  • 0

We have two customers who have reported this issue in the past 2 days, and I fear there are more to come.

 

When they have "Data.com Clean" enabled, doing a simple Contact creation from one of our Visualforce pages (with custom controller) fails with the following error:

 

Visualforce Page: /apex/i__aaPendingAddrs

caused by: System.DmlException: Insert failed. First exception on row 0; first error: UNKNOWN_EXCEPTION, INVALID_TYPE: sObject type 'DataDotComEntitySetting' is not supported.: []

Class.i.CtlPendingAddrs.insertC: line 228, column 1
Class.i.CtlPendingAddrs.doSave: line 190, column 1
Class.i.CtlPendingAddrs.saveActions: line 162, column 1

 

The code where the failure occurs is bog standard, simple Contact creation.  This code works perfectly in thousands of installs of our managed package.  Our code does not reference "DataDotComEntitySetting" in any way.  Here's the code that the stack trace leads to:

  private static void insertC(PagerPending.Item[] items) {
    Contact[] objs = new Contact[0];
    for (PagerPending.Item i : items) {
      if (i.action == 'nc') {
        if (i.addr.LastName__c == null) { i.error = NOLASTNAME; continue; }
        i.newcontact.Email = i.addr.FullAddr__c;
        i.newcontact.FirstName = i.addr.FirstName__c;
        i.newcontact.LastName = i.addr.LastName__c;      
        if (SFDC.hasRecTypes() && i.getCRecordType() != null) i.newcontact.put('RecordTypeId', i.getCRecordType());
        objs.add(i.newcontact);
        }
      }
    if (objs.size() > 0) insert objs;  // THIS IS LINE 228 PER THE STACK TRACE ABOVE
    }


As you can tell it is pretty darn simple.

 

The error message - with its reference to "DataDotComEntitySetting", which does not exist in our code - shows that this is an internal salesforce/data.com bug.  Our customers who have encountered this issue note that it reproduces with 100% certainty if Data.com Clean is enabled, and disappears when Data.com Clean is disabled.

 

I'm guessing that Data.com Clean has a trigger on Contact creation that is buggy and throws an error, but somehow the call stack is lost and it, instead, unwinds to the contact insert statement itself.  However, our customers report that contact creation from within the normal Salesforce user interface still works, so it's some complicated interaction between Data.com Clean and managed packages.

 

 

Salesforce support - I have opened case 09191025 to track this issue.

  • April 26, 2013
  • Like
  • 0

We are seeing intermittent error bounces from Email Services with this error message:

 

The attached message was sent to the Email Service address <(redacted)> but could not be processed because the following error occurred:

554 Failed to get next element

 

 

Manually re-sending the offending message back to Email Services - in exactly the same format as was sent originally - works fine, so the error is not in the email content but instead is a transient bug within Email Services.

 

Edit: given that this is a transient error that can be fixed by retrying the message, this should be grouped with "Email Services timeout should retry".

  • April 09, 2013
  • Like
  • 0

We post emails into Salesforce both over the API as well as using Email Services.

 

In our experience, Email Services is less reliable than the API. In particular, we see error messages like this:

 

554 Request timed out waiting for connection: [config 200ms, actual 209ms] to ConnPool_2, Num waiting=5, Thread=/sfdc/Soap/classInstance/..., Current OrgId=..., Current UserId=..., Current url=/sfdc/Soap/classInstance/...

 

If we get a timeout error on the API, we can easily retry the transaction.

 

But with Email Services, the error above is fatal. Rather than retrying after a timeout, Email Services simply throws up its hands and bounces the message.


The lack of a retry-on-error capability within Email Services seems like a bug.  This is a purely internal-to-salesforce error, out of the hands of the user, and retrying the offending transaction takes work (you have to strip the original email out of the error bounce and then repost it to the Email Services address).  

 

Note that Email Services *will* requeue a message after "Over Rate Limit" failures, so there is a retry queue available.  Please use it after transient errors like the above!

  • April 09, 2013
  • Like
  • 0

We are seeing some real-world emails that Email Services cannot handle, instead generating an error message:

 

554 Error processing email content Error ID=1753825505-222 (1527040629)

 

Emails that cause this error have a blank Content-Transfer-Encoding header in the text & html parts:

 

Date: Thu, 4 Apr 2013 19:32:24 -0400
MIME-Version: 1.0
Message-ID: <456cff6ec6ae70ac0bae49733e680ea3@dp1.example.com>
To: <to@example.com>
From: <from@example.com>
Subject: test subject
Content-Type: multipart/alternative;
	boundary="Alexandria=_d516e2a0981b50e1fbf77c61de0bd050"

--Alexandria=_d516e2a0981b50e1fbf77c61de0bd050
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 

text body

--Alexandria=_d516e2a0981b50e1fbf77c61de0bd050
Content-Type: text/html; charset="utf-8"
Content-Transfer-Encoding: 

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<body>
html body
</body>
</html>

--Alexandria=_d516e2a0981b50e1fbf77c61de0bd050--

 

 

Email Services handles the email just fine if you simply remove the header:

 

Date: Thu, 4 Apr 2013 19:32:24 -0400
MIME-Version: 1.0
Message-ID: <456cff6ec6ae70ac0bae49733e680ea3@dp1.example.com>
To: <to@example.com>
From: <from@example.com>
Subject: test subject
Content-Type: multipart/alternative;
	boundary="Alexandria=_d516e2a0981b50e1fbf77c61de0bd050"

--Alexandria=_d516e2a0981b50e1fbf77c61de0bd050
Content-Type: text/plain; charset="utf-8"

text body

--Alexandria=_d516e2a0981b50e1fbf77c61de0bd050
Content-Type: text/html; charset="utf-8"

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<body>
html body
</body>
</html>

--Alexandria=_d516e2a0981b50e1fbf77c61de0bd050--

 

A minor bug, but Email Services should be able to handle almost any email that is "roughly valid".  The world of email is wild & wooly and full of messages that are nonconforming in minor ways like this, and Email Services should be ready to handle them.

 

 

Salesforce support: I have created case 09011561 to track this issue.

 

 

  • April 09, 2013
  • Like
  • 0

The new-in-Summer-12 "InstallHandler" interface is a huge addition to Apex - thanks salesforce devs & PMs.

 

Question:

 

I note that the InstallHandler script runs as a "special system user that represents your package".

 

Can I call System.schedule() within an InstallHandler to create a new Scheduled Job w/ that same user context?  If so, will it run successfully, or will it fail b/c that special app user no longer exists?
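
Concretely, I'd like to do something like this from the install script (a sketch; "ScheduledCleanup" and the job name are placeholders for one of our Schedulable classes):

// Sketch of the install-time scheduling I have in mind.
global class PostInstall implements InstallHandler {
  global void onInstall(InstallContext ctx) {
    // Question: does the resulting job keep running under the package's "special system user"?
    System.schedule('i Nightly Cleanup', '0 0 3 * * ?', new ScheduledCleanup());
    }
  }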

 

I recognize that I can find this out on my own, but rolling a package & installing it, etc, will take quite some time ... does anyone just know the answer?

 

Note - I really hope the answer is "you can create Scheduled Jobs with that User, and they work great!" because that would solve a major pain-point of applications like ours that require Scheduled Apex and are installed at the customer for years:  eventually, the User account associated with our Scheduled Job is deactivated, and then our scheduled job fails, and then we need to reschedule it with a new user ID ... until it is deactivated, etc.

  • October 11, 2012
  • Like
  • 0

I can schedule a job via Apex code:

 

System.schedule('test', '0 0 0 * * ?', new SchedulableClass());

 

The CronTrigger job doesn't have a "Name" field, so I can't query for the job I just created.  This means I can't check whether my job already exists before calling System.schedule(); instead I just have to call "schedule()" and silently eat the exception it throws if the job already exists.
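
In other words, the best I can do today is something like this (a sketch; I catch the broad Exception type because I'm not certain which exception class a duplicate job throws):

try {
  System.schedule('test', '0 0 0 * * ?', new SchedulableClass());
  } catch (Exception e) {
  // Most likely "the job already exists"; without a queryable Name there's no way to check first.
  System.debug('schedule() failed, assuming job already exists: ' + e.getMessage());
  }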

 

The only way you can figure out which CronTrigger is yours is to cache the return value of System.schedule(), which (it so happens) is the ID of the CronTrigger that is created.  However, you can't delete them from Apex:

 

 

Id jobid = System.schedule('test', '0 0 0 * * ?', new SchedulableClass());
delete new CronTrigger(Id = jobid);

// 'delete' throws 'DML not allowed on CronTrigger'

 

 

So the current state of Scheduled Jobs is:

 

You can create them from Apex Code, but not from the UI

You can delete them from the UI, but not from Apex Code

 

I guess that just seems odd to me.  Why did Salesforce create this whole new API (System.schedule()), with a seemingly random assortment of ways to manipulate it, instead of just exposing the CronTrigger table directly to the full range of DML operations?

 

Placing new functionality into new core objects, rather than new APIs, seems easier on everyone (the whole describe/global describe suite of API calls are an example of something that seems a natural fit for a set of read-only custom objects).

  • April 22, 2010
  • Like
  • 0
Am I missing something? There appears to be no way to do this. I need to delete a tracking number field from an Order custom object. It allows me to edit the field but does not allow me to delete it. The problem is that it's not a one-to-one relationship, so I created a tracking object and made it a related list. That works fine, but it's not giving me any option to delete the old custom field that existed before and is now useless to the application.
Apex code should run in system context (except when running executeAnonymous, or using "with sharing", or testing with "runAs").

Specifically, @future calls should run in system context (or, at least, in the same context as their caller).

Instead, they are running in the user context.

Here's a quick example of an api method that queries for a private contact belonging to 'user1':

Code:
 1  Webservice static void now_and_later() {
 2    queryPrivateContact();
 3    queryLater();
 4   }
 5  public static void queryPrivateContact() {
 6    Contact c = [select Name from Contact where Id = '0037000000cWgTXAA0'];
 7    System.debug(c);
 8    }
 9  @future static void queryLater() {
10    queryPrivateContact();
11    }

 If I call this api method as 'user1', everything works fine (here's the debug output, from the non-future call at line 2):

Code:
$ perl api.pl -u user1
--------------------------------------------------------------------------------
Calling 'i.aa_api.now_and_later' as user1
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
DEBUG LOG:
20081118002023.813:Class.i.aa_api.queryPrivateContact: line 7, column 9:
        Contact:{Name=Carrie Sloan, Id=0037000000cWgTXAA0}
--------------------------------------------------------------------------------

 
If I call it as 'user2', who is a standard user & thus isn't normally able to see the private contact, the line 2 call to queryPrivateContact works fine (because the code is running in system context).  As a result, the debug output is what you would expect:

Code:
$ perl api.pl -u user2
--------------------------------------------------------------------------------
Calling 'i.aa_api.now_and_later' as user2
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
DEBUG LOG:
20081118002029.727:Class.i.aa_api.queryPrivateContact: line 7, column 9:
        Contact:{Name=Carrie Sloan, Id=0037000000cWgTXAA0}
--------------------------------------------------------------------------------

 However, an exception is thrown when the @future method queryLater runs, and the developer org gets an error email:

Code:
Apex script unhandled exception by user/organization: 00570000001IdET/00D70000000Jqpv

Failed to invoke future method 'static void queryLater()'

Debug Log:
System.QueryException: List has no rows for assignment to SObject

Class.i.aa_api.queryPrivateContact: line 6, column 21
Class.i.aa_api.queryLater: line 10, column 9

 
This shows that the @future method is running in the user context, rather than the system context.  As a result, the SOQL in "queryPrivateContact" returns empty, and thus we get the "List has no rows for assignment" error.



Message Edited by jhart on 11-17-2008 04:32 PM

  • November 18, 2008
  • Like
  • 0