• Neil_S

We have a problem with duplicate records being created by triggers.


This is our scenario.


In an after update trigger on Opportunity we are checking the before and after value of FieldA (an Account reference).  If it has changed we create a Chatter Feed against that new Account.  So, if the field is changed from AccountA to AccountB, AccountB receives a Chatter update.


Additionally, we have a workflow on the Opportunity object that copies a value from a field on the FieldA Account to the Opportunity for audit purposes. Let's call this Opportunity field FieldX.


I've referred to the Order of Execution documentation to try to understand what's going on, but am struggling.


Step 5 in the list states that the record is saved to the database before the after triggers are fired.

Step 6 runs as expected and creates a Chatter record, as FieldA has changed from AccountA to AccountB.

Steps 7, 8 and 9 run.

A field update has been made (step 10), so the before and after triggers fire again (step 11).


This is where it goes awry.

As the after trigger is processed a 2nd time, we still see the change from AccountA to AccountB, as well as the new FieldX update, even though this second update was triggered purely by the change to FieldX.  Looking at step 1, this new update should have "loaded the original record from the database", in this case the saved, but not yet committed, update to AccountB from step 5 of the 1st update.


So, is this by design?  And if so, how on earth do you manage additional processing in after triggers when you also have workflow field updates on the same object?


Interestingly, or perhaps more confusingly, if the Opportunity is created with a value in FieldA, the secondary update triggered by the workflow does not detect a FieldA change (i.e. it sees AccountA and AccountA, rather than null and AccountA).
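One pattern we're considering is a static "already processed" guard: static variables in Apex live for the duration of the transaction, so the first pass of the after trigger can record which Opportunities it has handled and the workflow-driven re-fire can skip them.  A rough sketch (FieldA__c, the class name and the Chatter body text are invented for illustration; the class and trigger would of course live in separate files):

```apex
// Hypothetical helper class — statics persist for the whole transaction,
// so state set on the first trigger pass survives the workflow re-fire.
public class OppTriggerGuard {
    public static Set<Id> processedIds = new Set<Id>();
}

trigger OpportunityAfter on Opportunity (after update) {
    List<FeedItem> posts = new List<FeedItem>();
    for (Opportunity opp : Trigger.new) {
        // Skip records already handled in an earlier pass of this transaction
        if (OppTriggerGuard.processedIds.contains(opp.Id)) continue;
        Opportunity oldOpp = Trigger.oldMap.get(opp.Id);
        if (opp.FieldA__c != oldOpp.FieldA__c && opp.FieldA__c != null) {
            // Post to the newly referenced Account's feed
            posts.add(new FeedItem(
                ParentId = opp.FieldA__c,
                Body = 'Opportunity now references this Account'));
            OppTriggerGuard.processedIds.add(opp.Id);
        }
    }
    insert posts;
}
```

This way the Chatter post is created once even though the trigger fires twice.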


Any guidance greatly appreciated,




  • November 30, 2011

Hi All,


We've just gone live with SalesForce and we have our first user-raised query.  I've managed to replicate it, but I can't believe this is how it's meant to work!  Any advice would be appreciated.



We have a custom object that is initially assigned to a queue.

Users who are members of the queue can monitor the queue using a view and 'Accept' the objects for them to work on.

If 2 users are monitoring the queue at the same time and see the same object, they can each tick it and say 'Accept', with the ownership passing from the queue, to user 1, and then to user 2.


We were not expecting the ownership to pass from user 1 to user 2....


Is this expected behaviour?  If so, can anyone suggest a fix to prevent the reassignment from user 1 to user 2 when user 2 presses the 'Accept' button?
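One option we've been sketching is a before update trigger that blocks a direct user-to-user ownership change, on the basis that a legitimate 'Accept' always moves ownership from the queue to a user (queue owner Ids begin with '00G', user Ids with '005').  A rough sketch, with My_Object__c standing in for our custom object:

```apex
trigger PreventReAccept on My_Object__c (before update) {
    for (My_Object__c rec : Trigger.new) {
        Id oldOwner = Trigger.oldMap.get(rec.Id).OwnerId;
        // Block owner changes where both old and new owners are users;
        // queue-to-user transfers (the normal Accept) still go through.
        if (rec.OwnerId != oldOwner
                && String.valueOf(oldOwner).startsWith('005')
                && String.valueOf(rec.OwnerId).startsWith('005')) {
            rec.addError('This record has already been accepted by another user.');
        }
    }
}
```

The addError call rejects the save, so the second user gets a message instead of silently taking the record over.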


Thanks in advance


  • April 04, 2011



We use SalesForce to maintain Account and Contact details, and keep our other internal systems in sync by using Outbound Messages fired on create / modify.  These messages are picked up by an internal process that fetches the data out of SalesForce via the API and updates it internally.


The problem we have is that we want a delete in SalesForce to flag the record as deleted in our internal systems.


The issues we're hitting are:

Workflows aren't fired on delete (doesn't count as a modify?)

Can't manually trigger an Outbound Message, say in a before delete trigger (this is my assumption, please advise if wrong)

Even if we did manually trigger an OM, once the object is deleted, my testing shows it is removed from the OM queue if not already delivered (=> unreliable)


So, from what I can see we have a couple of options:

1) Allow the delete, but have an additional scheduled process that scans SF for records deleted in the last x hours / minutes and updates the internal systems.

2) Implement some sort of Approval process for deletes, requiring the setting of a non-SF delete flag to tell our internal systems, before allowing delete after manual confirmation of the internal update.


Both of these are quite clunky given the existing processing of creation / updates.  Has anyone been here before?  Got a neat solution?
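A third option we've sketched, but not yet tried: in a before delete trigger, insert a record into a small custom "deletion log" object, and hang the Outbound Message workflow off that log object rather than the record being deleted, so the message isn't pulled from the queue when the original record goes.  Roughly (Deletion_Log__c and its fields are invented names):

```apex
trigger ContactBeforeDelete on Contact (before delete) {
    List<Deletion_Log__c> logs = new List<Deletion_Log__c>();
    for (Contact c : Trigger.old) {
        // Capture just enough for the internal system to flag the record
        logs.add(new Deletion_Log__c(
            Deleted_Record_Id__c = c.Id,
            Object_Name__c = 'Contact'));
    }
    // A workflow + Outbound Message on Deletion_Log__c notifies the
    // internal process; the log record itself is never deleted here.
    insert logs;
}
```

Since the log record survives the delete, the Outbound Message should deliver (and retry) normally.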





  • March 23, 2011

Hello All,


We're integrating SF with our internal systems using Outbound Messages when certain conditions are met on object save.  We're considering the impact / contingencies required due to system failures / interruption to service issues.


When dealing with Outbound Messages, I'm aware that delivery failures can be tracked by looking at the O/B Message Delivery Status screen.  However, this relies on a user proactively checking that page.


Is there any mechanism available to automatically raise an alert (by email?) if a message delivery has failed after X attempts or Y minutes?  Can these delivery status queues be accessed via the API?


Thanks in advance