chriscwharris

Sharing records in complex object hierarchy

I developed a complex app with 25 objects all related to a single person. I initially built the relationships as a Master-Detail hierarchy, but it became apparent that not being able to modify the security of the related objects, since they inherit sharing from the master, was a problem.

So I figured that I need to change every relationship to a Lookup instead, but that means I need to set the security on each record in code or rely on users manually setting sharing. To minimise the risk of security not being set correctly, I am looking to use Apex Managed Sharing, which I have tested on a single relationship.
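For reference, what I tested on the single relationship is essentially the standard manual-share insert; the object and variable names below are placeholders:

// Minimal sketch of Apex managed sharing for one child object.
// Consultation__c is a placeholder; Consultation__Share is the share
// object Salesforce creates automatically for the custom object.
Consultation__Share share = new Consultation__Share();
share.ParentId = consultationId;    // the child record being shared
share.UserOrGroupId = userId;       // user from the top-level lookup field
share.AccessLevel = 'Read';         // or 'Edit'
// Manual is the default row cause; a custom Apex sharing reason keeps
// the share from being dropped when record ownership changes.
share.RowCause = Schema.Consultation__Share.RowCause.Manual;
insert share;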

So my question is this: on the top-level record I will have two user lookup fields, and those users need varying levels of access to the related objects. At the moment it looks like I will need a trigger that updates the sharing on all 24 related objects each time the user lookup fields change. Is a trigger the best solution? Is there a better approach? Any examples anyone can direct me to? Thanks
Nagendra (Salesforce Developers)
Hi Chriscwharris,

A naive implementation of this approach could be very costly. Consider, for example, inserting and deleting shares on each object separately (I will assume you don't update or undelete the custom share records, since that is usually not necessary). In the worst case, that is 48 DML statements (24 inserts plus 24 deletes) consumed by your trigger logic, almost a third of the 150 DML statements available per transaction!
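Spelled out (the share list names are placeholders), the naive version looks like this:

// One insert and one delete per child object's share records:
// 24 objects x 2 DML = 48 DML statements out of the 150 allowed.
insert newObjectAShares;   // List<ObjectA__Share>
delete oldObjectAShares;
insert newObjectBShares;   // List<ObjectB__Share>
delete oldObjectBShares;
// ... repeated for the remaining 22 objects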

True, you can combine up to ten separate object types into one DML Statement. So you could combine many of the above operations and reduce 24 insert calls and 24 delete calls to 3 of each. But this all takes time, and you will chew through CPU and Heap limits in addition to DML Statements.
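As a sketch, the combining trick relies on a generic List<SObject>, which a single insert or delete accepts with up to ten different sObject types per call (the share object names are again placeholders):

// One DML statement covering share records of several types.
// Group records of the same type together; up to ten types per call,
// so 24 share types fit in three combined inserts and three deletes.
List<SObject> shares = new List<SObject>();
shares.add(new ObjectA__Share(ParentId = recordAId,
    UserOrGroupId = userId, AccessLevel = 'Read', RowCause = 'Manual'));
shares.add(new ObjectB__Share(ParentId = recordBId,
    UserOrGroupId = userId, AccessLevel = 'Edit', RowCause = 'Manual'));
// ... more share types, grouped by type ...
insert shares;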

You might consider processing these shares asynchronously to reduce strain on the system. It can be a lot of work to figure out an asynchronous processing framework that performs well across the various conditions you may find in a customer org. If you really want to get some good ideas about how to build an asynchronous framework, Advanced Apex Programming by Dan Appleman would be a good buy.

As a basic example, it isn't too hard to wrap your head around how to dump this logic in a @future method, but then what if it gets kicked off from within a @future method? You cannot call a @future method from inside a @future method!

You can get around this limitation by processing synchronously when you are already in a @future context:
public class ShareService
{
    // The maps pair each user (from the lookup fields on the top-level
    // record) with the record Ids they should gain or lose access to:
    // it seems best to map the user added to all the records they were
    // added to, and use the same structure for the old lookup values
    // (where you want to remove shares).
    public static void process(Map<Id, Set<Id>> toCreate, Map<Id, Set<Id>> toDelete)
    {
        if (!System.isFuture())
        {
            processAsync(JSON.serialize(toCreate), JSON.serialize(toDelete));
            return;
        }
        // insert new shares, delete old shares
    }

    // A @future method must be static and only accepts primitives or
    // collections of primitives, so the nested maps travel as JSON
    // strings (one common workaround for that constraint).
    @future
    public static void processAsync(String toCreateJson, String toDeleteJson)
    {
        process((Map<Id, Set<Id>>) JSON.deserialize(toCreateJson, Map<Id, Set<Id>>.class),
                (Map<Id, Set<Id>>) JSON.deserialize(toDeleteJson, Map<Id, Set<Id>>.class));
    }
}
The above might work out for you. You could also consider chained jobs (likely using the Queueable interface), which are a lot more work to set up but may be more robust; Appleman lays this out in a lot more detail, and I don't have much experience setting up such a framework myself. My main point is that if you have this much logic, trying to process it all synchronously might result in a lot of CPU timeouts.
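For completeness, here is a minimal sketch of the chained-job idea; the class name and the share-processing step are illustrative, not from any particular framework:

// Queueable that handles one child object's shares per execution and
// chains itself for the next, getting fresh governor limits each time.
public class ShareRecalcJob implements Queueable
{
    private List<String> remainingShareTypes;

    public ShareRecalcJob(List<String> shareTypes)
    {
        remainingShareTypes = shareTypes;
    }

    public void execute(QueueableContext context)
    {
        String currentType = remainingShareTypes.remove(0);
        // ... insert/delete the share records for currentType here ...

        if (!remainingShareTypes.isEmpty())
        {
            // One chained job per execute() is allowed.
            System.enqueueJob(new ShareRecalcJob(remainingShareTypes));
        }
    }
}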

Kindly mark this post as solved if the information helps, so that it gets removed from the unanswered queue and becomes a proper solution, which helps others who are really in need of it.

Best Regards,
Nagendra.P