Jon Keener

Questions Regarding Eclipse around a warning and maximum timeout value

1. Since upgrading to the latest version that supports v13, I am getting the following warning after doing a "refresh from server":
 
Severity and Description Path Resource Location Creation Time Id
Refresh error: Unable to retrieve file for id Case of type Workflow due to an internal error:1659875132-10 (370857700) Sandbox/src/unpackaged package.xml line 1 1214436401828 129
 
Considering "Case" can be a reserved word (I remember the workarounds that were needed with vb.net previously), I'm wondering if it's something to do with that.
 
 
2. At least half of the time at work, I end up timing out when performing a "refresh from server", creating a new sforce project, or trying to deploy something to production.  I've got the timeout value set to 600 (the max), but it still happens frequently.  At work, we are going through a proxy that is probably slowing things down a bit, combined with a rather large/complex salesforce configuration (supporting 3000+ users).  I have tried this at home, without a proxy, and it succeeds much more often than at work, but I have still gotten timeouts once in a while.  If I edit the "com.salesforce.toolkit.prefs" file directly, can I increase the timeout value to something greater than 600, and will that work?  If so, what is the true maximum value I could use?  My assumption is that with the continued additions to metadata, which is a great thing, this is only going to get slower for me.
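For reference, the kind of edit I have in mind is just bumping a numeric value in that prefs file, which I'm assuming is a standard Eclipse-style properties file. The key name below is purely a guess for illustration; the actual entry in your workspace's copy of "com.salesforce.toolkit.prefs" may be named differently:

```
# com.salesforce.toolkit.prefs -- sketch only; key name is hypothetical.
# The IDE UI caps this field at 600; the open question is whether a
# hand-edited larger value would be honored or clamped back down.
readTimeout=900
```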
 
Thanks!
 
Jon Keener
Bill Eidson
  Jon -

On #1, that's a problem specific to one of the alert recipients on the Case object in your org; we'll have it resolved soon in the next patch release (usually middle of next week).  It's not related to Case generically.

  Thanks

  - Bill

Jon Keener
I've done some additional digging into the issues I'm having with Eclipse and the timeouts.
 
#1, It does not appear that you can currently exceed the 600-second limit.
 
#2, The current uncompressed size of a complete metadata download for our org is around 42 MB.  Approximately 36.5 MB of that is tied to two items: some documents (16.5 MB), where I'm unsure why just a couple of folders are being downloaded (UPDATE: the documents were in a package we had created a couple of years ago for a migration from a previous org; I was able to clear this out and eliminate the 16.5 MB), and secondly, profiles (20 MB).  We currently have 150+ profiles.
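One workaround along these lines is to retrieve in slices: package.xml only pulls the metadata types it names, so a project manifest that omits Profile keeps that 20 MB out of the retrieve entirely. A sketch of such a manifest, assuming API version 13.0 to match the release discussed above (the types listed are just examples; add whichever ones you actually work with):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Example package.xml that deliberately omits the Profile type so the
     bulky profile metadata is not part of this retrieve. -->
<Package xmlns="http://soap.sforce.com/2006/04/metadata">
    <types>
        <members>*</members>
        <name>ApexClass</name>
    </types>
    <types>
        <members>*</members>
        <name>CustomObject</name>
    </types>
    <version>13.0</version>
</Package>
```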
 
I'm including some screenshots below for reference.
 
I've managed to mitigate some of the issue (as a temporary fix) by not downloading the larger items into Eclipse when I create the project.  Even so, it is still an issue with the "Deploy to Server" functionality when moving from Step 2 to Step 3.  I'm not sure whether it's the fact that I'm choosing "Destination Archive", or whether it is retrieving all information from the destination for comparison in the next step.  This times out about 75% of the time.  (Update: even with the documents cleared as above, the 20 MB of profile information continues to cause timeouts as described here.  I'm assuming a separate "retrieve" metadata API call happens per item, and that the call for the profiles item is causing the timeout.)
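To show why a single 20 MB item can blow through the ceiling on its own, here's a back-of-the-envelope check. The throughput figure is purely an assumption for illustration (proxied connections like ours could plausibly be in this range), not a measurement:

```python
# Illustrative only: can a metadata payload finish inside the IDE's
# 600-second ceiling at a given effective throughput?  The 30 KB/s
# figure below is an assumption, not a measured number.

TIMEOUT_S = 600  # the IDE's current maximum timeout setting


def transfer_time_s(payload_mb: float, throughput_kbs: float) -> float:
    """Seconds to move payload_mb megabytes at throughput_kbs KB/s."""
    return (payload_mb * 1024) / throughput_kbs


# ~20 MB of profile metadata at an assumed 30 KB/s through a proxy:
t = transfer_time_s(20, 30)
print(f"{t:.0f} s vs. a {TIMEOUT_S} s limit -> "
      f"{'times out' if t > TIMEOUT_S else 'ok'}")
```

So even a modest slowdown makes the profiles retrieve alone exceed 600 seconds, which matches the behavior I'm seeing.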
 
Jon Keener
 
 
Total File: [screenshot]
 
Unpackaged Details: [screenshot]


Message Edited by Jon Keener on 07-01-2008 05:44 PM