
API Best Practices -- Concurrent Connections...

I have a growing number of sForce controls that need access to several different objects in the system. Currently my largest touches about 12 objects. In order to build the "Select ... From" SOQL text correctly, I need to be able to determine which fields, if any, each user has access to. That necessitates about 12 calls to describeSObject() to get the metadata.

There is (almost) nothing more painful than sitting and watching the status bar crawl through those 1/12... 2/12... 3/12... Of course, hearing the users complain is actually more painful, but I digress.

Working in .NET, I am now making those calls asynchronously to help the problem along.
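(Editorial sketch: the poster's code is .NET, but the idea of fanning out the describe calls over a bounded pool of connections looks roughly like this in Python. `client.describe_sobject` is a hypothetical stand-in for whatever method the generated SOAP binding exposes, and `max_workers` is the knob the thread is asking about.)

```python
from concurrent.futures import ThreadPoolExecutor

def describe_all(client, object_names, max_workers=4):
    """Issue one describeSObject-style call per object name, in parallel,
    with at most max_workers requests outstanding at any time."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        # pool.map preserves the input order of object_names
        return list(pool.map(client.describe_sobject, object_names))
```

With 12 objects and 4 workers, roughly three "rounds" of calls run instead of twelve sequential ones, which matches the poster's observed drop from ~7.5 s to ~3.5 s.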

My question is this:

Is there a Best Practices document that covers the maximum number of concurrent requests we should have outstanding?

Message Edited by daroz on 11-03-2004 04:22 PM


Hi daroz,

I think you are limited to 3 concurrent connections from the same IP, but I'll try to verify this.


Thanks Dave,

I know that's definitely not the case currently: up until my lynching earlier by my client, I was running 4 async connections from our .NET integration. (It's most noticeable when running 12 back-to-back describeSObject() calls: about 7.5 s on average synchronously vs. 3.5 s async.)

The reason I asked is that in a test earlier today from the development machine (which also runs a sync application averaging about 15-20 operations/min), the same app ran without hitting any connection limits. The connection limit was set to 100, and the app leaves at most 12 requests outstanding (the describeSObject() calls); profiling showed a significant speed improvement in that section. In other words, I don't think concurrency is limited at all.

(As an aside: for those of us with PCs and servers behind NAT, if a limit is ever imposed, please don't restrict on IP address alone; perhaps sessionId plus IP would be more realistic.)

I'm trying to find a balance between the delay in the describe calls and the number of async connections.

We are working on a call to allow you to pass an array of strings to a describeSObject call and get an array of DescribeSObjectResults.

I think a call like this would help your performance?

benjasik wrote:
We are working on a call to allow you to pass an array of strings to a describeSObject call and get an array of DescribeSObjectResults.

I think a call like this would help your performance?

Absolutely! For our user-facing controls, that could effectively cut our ramp-up time (the time from loading the control until it is ready for interaction) by 33-50%. The only other bottleneck is dealing with the inability of SOQL to effectively 'join' tables/objects.

If I could 'wave my magic wand' I would be able to execute 1 API call for my describe calls, 1 API call for my retrieve calls (across objects), and n calls for my queries (with joins). In the ideal world I'd be looking at 3-4 calls. Right now, to print an invoice, I interface with 12 objects and make over 25 calls to the API. (It'll become 11 and 22 after Winter '05, when I can relate Activities to Custom Objects.)

The unknown quantity in assessing the performance is the ratio of time spent on connection overhead vs. actual operation time (query, search, insert, etc.). Right now the only control we (as users) have over the former is through parallelization.

Can you tell me a bit more about what you're looking for in joins?

Say you had the ability to get an account and all of its contacts, opportunities, etc., and set a where clause on the children?
What if you could retrieve a contact and pull account data back as well?

How many queries would that get you down to?


I can do you one better - Drop me an email or give me a call (I use the same forum name on the crmsuccess.com forums) and I'll give you access to our EE account and let you see what I'm trying to do.

For the benefit of others: We have invoices integrated into Salesforce. The invoice itself is a child of the Account object, and has lookup relations to Contact, Opportunity, and our custom Installed Product object (asset tracking).

A single SQL-like query would look like this:

Select Invoice.*, Account.Name, Account.Address,
       Contact.Name, Contact.Address, Contact.Phone,
       Opportunity.Name, InstalledProduct__c.*
From Invoice
Join Account On Account.Id = Invoice.AccountId
Join Contact On Contact.Id = Invoice.ContactId
Join Opportunity On Opportunity.Id = Invoice.OpportunityId
Join InstalledProduct__c On InstalledProduct__c.Id = Invoice.InstalledProductId
Where Invoice.Id = '<>'

I can see a few other situations where 'simple' child-to-parent joins would save quite a few calls. Think of the times you would want to show a Contact record to a user but also want to include the Account.Name field... anyway, back on topic. That SQL-like statement, if executed via the API, would return all the bottom-up information I need for the invoice: what uses 5 calls right now would become 1. I would guess the biggest question is how that information would be returned; I would presume an array of sObjects. Then it comes down to being able to optimize the query on your side so as not to kill performance on the DB end.

There are also several child objects, mostly what you'd expect: Line Items, Payments, Notes, and Tasks and Events. (The latter two being the biggest kludge... I can hardly wait the 8 days to get rid of it.)

In the case of these invoices I have to pull down all the related child data for the invoice. So if I could make a hypothetical call like

sObject[] records = sfBinding.RetrieveWithChildren("");

and get the invoice object with all its first-tier children (typecasting can be done via the Id prefix and the describeSObject results), I could roll another 5 queries up into 1 retrieve call right there. I'd still need to do some other lookups. (Our line items contain links to the Product2 object, which would be second-tier off the invoice.)
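(Editorial sketch: the "typecasting by Id prefix" step mentioned above can be illustrated like this, in Python rather than .NET. Salesforce record IDs begin with a 3-character key prefix identifying the object type, and the describe metadata supplies the prefix-to-type mapping; the function and dictionary names here are invented for illustration.)

```python
def group_by_prefix(records, prefix_to_type):
    """Bucket a mixed list of sObject records (dicts with an 'Id' key)
    by object type, using the 3-character key prefix of each Id."""
    grouped = {}
    for record in records:
        sobject_type = prefix_to_type.get(record["Id"][:3], "Unknown")
        grouped.setdefault(sobject_type, []).append(record)
    return grouped
```

A hypothetical RetrieveWithChildren result containing the invoice plus all its first-tier children could then be split back into per-object lists with one pass.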

The remaining API calls are either unoptimized (the current kludge to attach activities isn't optimized because it's going away in 8 days) or unoptimizable, e.g. I call getUserInfo() and do a retrieve() on that user to get his/her first and last name (we use initials, not full names).

So I guess the totals would be like this:

Bottom-up queries: now 5, could be 1.

Top-down queries: now 5, could be 1.

Other (Product2, getUserInfo, and retrieve of the User object): 3.

When we include the 12 describeSObject calls becoming 1, here's what we get:

25 calls now could become 6. If each call has about 1/3 of a second of overhead (setup, SSL, teardown) and runs in series, that's roughly 8 1/3 seconds of overhead now vs. what could be 2 seconds: a savings of about 6 1/3 seconds, or 76% of the overhead.
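(Editorial check of the arithmetic above. The 1/3-second per-call overhead is the poster's estimate, not a measured figure; the numbers below just follow it through.)

```python
PER_CALL_OVERHEAD = 1 / 3  # seconds; estimated setup/SSL/teardown per call

calls_now, calls_then = 25, 6
overhead_now = calls_now * PER_CALL_OVERHEAD    # ~8.33 s in series
overhead_then = calls_then * PER_CALL_OVERHEAD  # ~2 s in series
saving = overhead_now - overhead_then           # ~6.33 s
saving_pct = saving / overhead_now              # 19/25 = 0.76
```

So the saving is 6 1/3 seconds, or exactly 76% of the serial overhead, under the stated assumptions.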


I have a similar situation ...

In an sforce control for an opportunity, I'm accessing:

     account (retrieve), opportunity (retrieve), account roles (query), opportunity contact roles (query),
     tasks (query), opportunity line items (query), pricebookentries (retrieve with array), product2
     (retrieve with array), stages (query), opportunity again (query for name uniqueness),
     opportunityCompetitor (query), opportunityHistory (query)

Like the previous example, most of these could be handled by a single query that returned a parent and its children. Of course, with multiple child types, we would not want a typical SQL join with a flat SQL answer set, or the cross-product "explosion" of each child against the others would be huge.


John Saunders


I personally wish I had some help in caching describeSObject results. For instance, if I could quickly determine the last schema change date, then I could keep results around for a while. I could do even better if I knew the last schema change for each object type (perhaps in the result from describeGlobal?)

Is this something that would be useful to you, and if so, perhaps salesforce.com would be kind enough to implement that for us.
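(Editorial sketch of the cache John describes, in Python for illustration. The `last_schema_change` call is exactly the hypothetical API he is asking salesforce.com for; `describe_sobject` stands in for the real describeSObject call. Neither name is from the actual API.)

```python
class DescribeCache:
    """Cache describeSObject results, invalidated by a (hypothetical)
    per-object last-schema-change timestamp."""

    def __init__(self, client):
        self.client = client
        self._cache = {}  # object name -> (timestamp, describe result)

    def describe(self, name):
        # One cheap call to check validity; only re-describe on change.
        stamp = self.client.last_schema_change(name)  # hypothetical API
        cached = self._cache.get(name)
        if cached is not None and cached[0] == stamp:
            return cached[1]
        result = self.client.describe_sobject(name)
        self._cache[name] = (stamp, result)
        return result
```

As the follow-up posts note, this only pays off if the timestamp check is much cheaper on the server side than returning the full describe result.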

John Saunders



John Saunders wrote:

I personally wish I had some help in caching describeSObject results. For instance, if I could quickly determine the last schema change date, then I could keep results around for a while. I could do even better if I knew the last schema change for each object type (perhaps in the result from describeGlobal?)

That's not half-bad thinking... I would think the answer to that question is strictly performance related.

In order to determine whether your cached copy is still valid, you need to run at least one API call to get that information. At that point, given that you have to make the call anyway, the API setup/teardown connection overhead is moot. That leaves the question:

Is there a way to efficiently determine that value on the SFDC side that presents a significant enough performance boost, compared to returning the full describe result, to warrant its implementation?

The only other factor is bandwidth/transmission time for the full SOAP response to the describe query vs. just the last update time... Thankfully that's not a factor for us, as we get good throughput using compression.

Given what I see on the UI side (individual elements in a picklist get their own timestamps that don't seem to roll up to the parent object/table), I'm not sure it would present sufficient computational or I/O savings on their side. But hey... I could be wrong.

Excellent question though.

John Saunders

No deep thought here. Just thinking about how frequently my code calls describeSObject vs. how often the returned data change. Speed up the call enough and I won't mind calling it even when the data haven't changed; I'd rather do that than maintain the caching code.

Maybe 5.0 will be fast enough that I won't care, but in the meantime, I thought I'd stick my nose in...

The perf work we did in 5.0 was around cursor creation times (the initial time for a query to return).

I think you'll have to wait for the next release, when we can provide a bulk describeSObject call for this, and also a syntax where you don't have to make as many calls to get data from multiple tables (we are working on this).
John Saunders
Thanks! I guess I'll wait.