• Joseph Bleau
According to the documentation, it is considered best practice to maintain a single trigger per object, even for Events:
 
... However, having multiple triggers on the same object isn’t a best practice because we can't guarantee the order of execution, so we recommend that you add only one trigger per object.

https://developer.salesforce.com/docs/atlas.en-us.platform_events.meta/platform_events/platform_events_subscribe_batch_resume.htm

But it seems to me less obvious that this is as important for events as it is for objects. Where an Object trigger is directly tied back to a singular record in Salesforce, concurrency issues are obvious. Events, on the other hand, don't necessarily have to be underpinned by any specific record, nor do they even need to be contextually related to existing records at all.

It seems to me somewhat antithetical to the notion of pub/sub to capitulate at the point of subscription and re-couple subscriber logic in one place in the Org. Wouldn't it be better if all interested consumers implemented their own event listeners? There would then be no single point of failure and conflict, which is a common ailment of Object triggers (granted, there are patterns that help).

I'd like to see this footnote in the documentation expanded on a bit more. It doesn't seem obvious to me that this advice is as applicable to events as it is to objects, and I'd be curious to hear others' thoughts as well.
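
To make the trade-off concrete, here is a minimal sketch of the single "dispatcher" trigger pattern the documentation recommends. All names are hypothetical (Order_Event__e is an invented platform event, and the handler classes are placeholders):

```apex
// Hypothetical single dispatcher trigger on a platform event, Order_Event__e.
// This is the documented recommendation: one trigger per event object, with
// subscriber logic re-coupled here so that execution order is explicit.
trigger OrderEventDispatcher on Order_Event__e (after insert) {
    // Each consumer is factored into its own handler class, but they now
    // share one subscription, one failure point, and one trigger's limits.
    InventoryHandler.handle(Trigger.new);
    NotificationHandler.handle(Trigger.new);
}
```

The alternative argued for above would be separate triggers, one per interested consumer, at the cost of an unspecified relative execution order between them.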
Hello all,

I am reviewing Platform Events for an upcoming project, and based on my understanding of how High Volume events work, "publishing" them in effect simply queues them to be published; that is, publishing is an asynchronous operation. Previously, with the now-deprecated Standard Volume events, publishing was a synchronous operation, and publishing failures could be made immediately available to the publisher.

The documentation concedes that a failure might occur after a successful queueing of an event, but it does not give any reason or explanation for why this might happen. It even references an in-beta feature for subscribing to a dedicated channel to receive these events (which apparently is no longer available for new orgs).

My question is pretty straightforward: under what circumstances might an event be successfully queued but ultimately fail to be published?
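
As a concrete illustration of the asymmetry (Order_Event__e and its field are invented for the sketch), the Database.SaveResult returned by EventBus.publish for a high-volume event reports only the outcome of the enqueue step:

```apex
// For high-volume platform events, a "successful" SaveResult from
// EventBus.publish means the event was accepted into the queue, not that
// it has been (or ever will be) delivered to the event bus.
Order_Event__e evt = new Order_Event__e(Order_Number__c = '00042');
Database.SaveResult sr = EventBus.publish(evt);
if (sr.isSuccess()) {
    System.debug('Event queued; the final publish happens asynchronously.');
} else {
    // Synchronous failures (e.g. validation) do surface here, immediately.
    for (Database.Error err : sr.getErrors()) {
        System.debug('Queueing failed: ' + err.getStatusCode()
            + ' - ' + err.getMessage());
    }
}
```

Anything that goes wrong after that isSuccess() check returns true is exactly the failure mode the documentation mentions but does not explain.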
Hi all,

I'm hoping to leverage SF's existing fuzzy matching capability, specifically with regard to addresses. Before I just implement their solution myself, I'm hoping the functionality is exposed somewhere.

See: https://help.salesforce.com/HTViewHelpDoc?id=matching_rules_matching_methods.htm&language=en_US#matching_rules_matching_methods

Specifically, I'm interested in their fuzzy street matching algorithm (which weights the street number, street name, and suffix differently).
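
To illustrate the kind of weighting I mean, a homegrown approximation might look like the following sketch. The weights, parsing, and class name are all invented; Salesforce has not published the actual algorithm, so this is only an illustration of the weighted-parts idea:

```apex
// Sketch only: a homegrown weighted street comparison. Salesforce's real
// fuzzy matcher is not public; the weights and parsing here are invented.
public class StreetMatchSketch {
    // Score two street strings as (number, name, suffix), weighted differently.
    public static Decimal score(String a, String b) {
        List<String> pa = parse(a);
        List<String> pb = parse(b);
        return 0.5 * partScore(pa[0], pb[0])   // street number
             + 0.3 * partScore(pa[1], pb[1])   // street name
             + 0.2 * partScore(pa[2], pb[2]);  // suffix (St, Ave, ...)
    }

    // 1 for an exact match, scaled down by edit distance otherwise.
    static Decimal partScore(String x, String y) {
        if (x == y) return 1;
        Integer len = Math.max(x.length(), y.length());
        if (len == 0) return 1;
        Decimal dist = x.getLevenshteinDistance(y);
        return 1 - dist / len;
    }

    // Naive split: first token = number, last token = suffix, middle = name.
    static List<String> parse(String s) {
        List<String> t = s.toLowerCase().trim().split('\\s+');
        String num = t.isEmpty() ? '' : t[0];
        String suf = t.size() > 2 ? t[t.size() - 1] : '';
        List<String> nameParts = new List<String>();
        for (Integer i = 1; i < t.size() - (t.size() > 2 ? 1 : 0); i++) {
            nameParts.add(t[i]);
        }
        return new List<String>{ num, String.join(nameParts, ' '), suf };
    }
}
```

I'd much rather call whatever Salesforce already uses than maintain something like this myself.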

Is this possible?

Thanks,
Joe 