I'm trying to sort a list of my custom Apex class objects. I always receive this error message: 
System.ListException: One or more of the items in this list is not Comparable

To help troubleshoot, I'm running this Salesforce example code verbatim in the Developer Console and still receiving the same error:
https://developer.salesforce.com/docs/atlas.en-us.apexcode.meta/apexcode/apex_comparable.htm#apex_comparable_example

Was a bug introduced or is there something wrong with my Salesforce org?

 
List<Employee> empList = new List<Employee>();
empList.add(new Employee(101,'Joe Smith', '4155551212'));
empList.add(new Employee(101,'J. Smith', '4155551212'));
empList.add(new Employee(25,'Caragh Smith', '4155551000'));
empList.add(new Employee(105,'Mario Ruiz', '4155551099'));

// Sort using the custom compareTo() method
empList.sort();

// Write list contents to the debug log
System.debug(empList);

// Verify list sort order.
System.assertEquals('Caragh Smith', empList[0].Name);
System.assertEquals('Joe Smith', empList[1].Name); 
System.assertEquals('J. Smith', empList[2].Name);
System.assertEquals('Mario Ruiz', empList[3].Name);



public class Employee implements Comparable {

    public Long id;
    public String name;
    public String phone;
    
    // Constructor
    public Employee(Long i, String n, String p) {
        id = i;
        name = n;
        phone = p;
    }
    
    // Implement the compareTo() method
    public Integer compareTo(Object compareTo) {
        Employee compareToEmp = (Employee)compareTo;
        if (id == compareToEmp.id) return 0;
        if (id > compareToEmp.id) return 1;
        return -1;        
    }
}

 
I'm using the SOAP API to retrieve all of my Tasks, archived and unarchived.  First, I download all of the Task Ids using SOQL.  Then, I would like to use the very fast EnterpriseConnection.retrieve(...) call to fetch those Tasks in batches of 2,000.  Sadly, it looks like the retrieve method does NOT return archived tasks older than 12 months.  Is this true?  While the documentation for 'retrieve' states it does not return deleted tasks, I was hoping it would return archived tasks.  After all, I'm giving it the exact Id(s).  Can anyone confirm this behavior?  I guess what I really want is the equivalent of queryAll(), like a retrieveAll().

I suspect I'll have to stick with SOQL.  I thought using retrieve() would be faster.

Andrew
I'm trying to download, locally, all of our Tasks.  We have over 3.5 million tasks.  I'm using the SOAP API to download these (ALL ROWS).  I'm having trouble with query timeouts, so I'm going to download in batches.  I've looked over the documentation and I'm trying my best to rely on the standard indexes.

I'm relying only on the Id field to order and define my batches.  Right now I'm trying to find the largest batch size that will use the index and not produce timeouts.  So, imagine I've downloaded the first 99,999 Tasks. That limit was NOT chosen arbitrarily.  There is SF documentation (http://www.salesforce.com/docs/en/cce/ldv_deployments/salesforce_large_data_volumes_bp.pdf) stating an index will typically be used if you limit your query to under 100,000 rows.

Select Id from Task Order By Id ASC Limit 99999

Then, I look at the last Task.Id downloaded and get all Tasks greater than that one (sort order is important here).

Select Id from Task Where Id > '00T7000000cr8SzEAI' Order By Id ASC Limit 24999

I'd then repeat this until I've downloaded all my tasks.  I can't understand why these subsequent queries cannot use the 99,999 limit.  24,999 is as large as I can go; if I enter 25,000, it will NOT use the index and instead times out after attempting a full table scan.  I realize the difference between the first and second query is the WHERE clause, but I still thought the index would be used if we were returning < 30% of the first million records.  Is this 25,000 limit some undocumented characteristic of the query optimizer?  The Query Optimizer tool (https://help.salesforce.com/apex/HTViewSolution?id=000199003&language=en_US) is really helpful here; I just can't understand where this 25,000 limit is coming from.

Any insight is appreciated.

Andrew

I'm using the Ant Migration Tool to manage deployments.  Frequently I'm asked by team members to help them with their deployments after they've failed.  It's very useful to look at the log output, in particular the error messages.  However, I often don't have the asyncRequestId of their deployment, or they run another deployment and lose the Ant output that showed the asyncRequestId when the deployment started.

 

Is it possible to find the asyncRequestId from the deploymentId?  The deploymentId is the only information visible on the Setup -> Monitor Deployments page.  I would love to use it to somehow download the log for a given deployment.  I'm using the Ant Migration Tool with the following command:

 

ant -Dsf.asyncRequestId=<04s.......> <some target>

 

Any help is appreciated.

 

Andrew

We have a lot of Salesforce data, and I would like to automate a weekly export.  The native Data Export UI is too clunky with 30 GB of data, and it's difficult to automate.  It's still an option, but I'm also exploring the option of using the Bulk API to query for the data.

 

There are some limitations associated with the Bulk API: only 10 files of 1 GB each can be returned by one batch.  This means I'll need to break my queries into smaller sets.  I did some testing and tried to download 1 million Cases.  I received errors about exceeding the 10-retry limit ("InternalServerError : Retried more than 10 times."), I think because it was simply too slow to return all of the data.  My query specifies every field on the Case object, so I'm trying to break my batches into sets of 100,000 records.

 

In order to do this, I basically need to paginate through the Case rows.  Unfortunately, OFFSET is not available in Bulk API queries, and it has a limit of 2,000 rows anyway, so it wouldn't work.  My plan is to do some up-front queries (non-Bulk API).  I want to order the Cases by CreatedDate and limit my Bulk API query to the first 100k records.  Then, knowing the maximum CreatedDate of the first 100k records, I would do a second Bulk API query for the next 100k records where CreatedDate > previousBatchMaxCreatedDate.  And so on.
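Concretely, the Bulk API batches would look something like this (the datetime literal is a placeholder, and the real queries list every Case field):

Select Id, <every Case field> from Case Order By CreatedDate ASC Limit 100000

Select Id, <every Case field> from Case Where CreatedDate > 2012-06-01T00:00:00Z Order By CreatedDate ASC Limit 100000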

 

It sounds a little clunky, but I thought it was doable.  The problem is, I can't find the SOQL query that will tell me the max CreatedDate in a list of ordered Cases.  This does not work: Select Max(CreatedDate) from Case Order By CreatedDate ASC Limit 100000.  I receive an error: "Ordered field must be grouped or aggregated: CreatedDate."

 

Is there an easy way around this problem?  I don't want to bring in all of the records.  I just need the max CreatedDate in each set of 100k records. Then I'll use that for my Bulk API query. Thoughts?

 

I'm also open to other ideas about how to retrieve this data reliably.  We use DBAmp, but it seems to have issues and takes ~2 days to complete when calling Replicate_All.  Alternatively, we could schedule Data Exports and then write some code to authenticate into SF and scan the Export page for all of the zip files.  The program would follow those links and download the files.  That might be the easiest solution in the end, but scraping HTML just seems brittle.

 

Your thoughts are appreciated,

Andrew

We send numerous emails to numerous internal Salesforce users for things like job failures in Apex.  We always assumed that setting the toAddresses field to the email address of an internal user (e.g., first.last@mycompany.com) would qualify as an internal email.  After hitting some limits, I'm questioning that assumption.  It appears that sending an email this way still qualifies as an external email, even though there is an internal Salesforce user with that address.  Can someone please confirm?

 

Should we instead use setTargetObjectId(internalUserId) to guarantee the email is counted as internal and will not count against our limit?  If so, how do we send to numerous internal users at the same time?  Messaging.SingleEmailMessage only has the singular setTargetObjectId() and not a plural setTargetObjectIds().
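For illustration, here's the kind of loop I have in mind: one message per internal user, all sent in a single sendEmail() call (internalUserIds is a hypothetical list of internal User Ids):

// Sketch: one SingleEmailMessage per internal user, sent together.
List<Messaging.SingleEmailMessage> messages = new List<Messaging.SingleEmailMessage>();
for (Id userId : internalUserIds) {
    Messaging.SingleEmailMessage msg = new Messaging.SingleEmailMessage();
    msg.setTargetObjectId(userId);
    msg.setSaveAsActivity(false); // must be false when targeting a User
    msg.setSubject('some subject');
    msg.setPlainTextBody('email body');
    messages.add(msg);
}
Messaging.sendEmail(messages);

Is building one message per user like this the intended approach?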

 

Any clarification is appreciated.

 

Andrew

 

Messaging.SingleEmailMessage emailMsg =
     new Messaging.SingleEmailMessage();
emailMsg.setTargetObjectId(internalUserId);
// saveAsActivity must be false when sending to an internal user
emailMsg.setSaveAsActivity(false);
emailMsg.setSenderDisplayName('sender name');
emailMsg.setSubject('some subject');
emailMsg.setBccSender(false);
emailMsg.setUseSignature(false);
emailMsg.setPlainTextBody('email body');
Messaging.sendEmail(new Messaging.SingleEmailMessage[] { emailMsg });

 

We're trying to embrace best practices by externalizing some of our constants and configuration.  We've adopted Custom Settings in that vein.  Similarly, we're looking to transition our unit tests to the newest API version (v24) since, by default, it isolates test data from existing data in Salesforce.  This is desirable since it ensures our tests remain portable between orgs, especially development sandboxes that have no data by default.

 

This is where we've hit a problem.  Using v24, none of our custom settings are available when the test starts.  Fine. Just like Accounts or Contacts, we'll create the data required by our tests.  So, I create the custom setting (in the example below, that's My_Custom_Setting__c).

 

Now the custom setting is available.  Then, when I test my actual code later in the test method, I make calls to My_Custom_Setting__c.getInstance(...), to get the custom setting.  Unfortunately, it returns null.  This is unfortunate since using getInstance(...) is the recommended way to access custom settings in code.  Why?  Because it's cached in the application cache and therefore very fast and efficient.

 

I'm going to guess that the test below fails because, when the application cache loads at the beginning of the test, there are no custom settings to cache.  Later, when I insert the custom setting, the cache is not updated.  For this reason, future calls to getInstance() don't return the custom setting I just inserted.  I would love it if someone could verify this; I'm just guessing.

 

If that is the case, what's a developer to do? 

 

  1. Is there some way to refresh the application cache when custom settings are inserted?  I had hoped that inserting a custom setting would update the cache.
  2. Do I need to switch all of my code away from calling My_Custom_Setting__c.getInstance(...) to something like [Select ... from My_Custom_Setting__c Where Name = 'Standard Setting']?  If you run the test below, you'll see that querying for the custom setting with SOQL DOES return my just-inserted custom setting.  There's no caching there.
  3. Should I introduce a utility method to be called whenever a piece of code needs my custom settings?  The method could check whether it's running in a test context (Test.isRunningTest).  If so, it would do the SELECT; if not, it would call getInstance().  A rough sketch follows this list.
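Here's what I mean by option 3.  This is only a sketch; the setting object is the one from my example, and the fields in the fallback query are illustrative:

public class CustomSettingUtil {
    // In a test context the application cache was empty when the test started,
    // so fall back to SOQL; otherwise use the cached getInstance() lookup.
    public static My_Custom_Setting__c getSetting(String name) {
        if (Test.isRunningTest()) {
            List<My_Custom_Setting__c> rows =
                [Select Id, Name from My_Custom_Setting__c Where Name = :name Limit 1];
            return rows.isEmpty() ? null : rows[0];
        }
        return My_Custom_Setting__c.getInstance(name);
    }
}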

Thoughts?  Either I'm missing something, or this really is a current deficiency in how Custom Settings are implemented.  I really want to migrate my tests to v24, but I would prefer not to refactor non-test code to do it.

 

Andrew

 

//Test with api v24    
@isTest static void testAumConfig()
{
        My_Custom_Setting__c c = new My_Custom_Setting__c();
        c.Name = 'Standard Configuration';
        insert c;
       
        List<My_Custom_Setting__c> cs = [Select Id from My_Custom_Setting__c Where Name = 'Standard Configuration']; 
        System.assert(cs.size() == 1);
        
        c = My_Custom_Setting__c.getInstance('Standard Configuration');
        System.assert(c != null); //this fails.  It doesn't find the custom setting, even though I just inserted it above!!!
}

 

I'm trying to verify that my code is permanently deleting records.  For some reason, the test still finds the newly inserted task when I query ALL ROWS, even though I just purged it from the recycle bin.  Any ideas how I can test that a record was purged from the recycle bin successfully?

 

Any help is appreciated,

 

Andrew

 

Here's my test code:

 

	static testMethod void testPermanentDelete()
	{
		Task t = new Task(
			Subject = 'subject',
			Priority = 'Normal',
			Status = 'Completed',
			ActivityDate = Date.today());
		insert t;
		Id taskId = t.Id;
		
		//Verify the task was inserted
		List<Task> foundTasks = [Select Id From Task Where Id = :taskId ALL ROWS];
		System.assertEquals(1, foundTasks.size());
		
		Test.startTest();		
		Database.DeleteResult[] deleteResults = Database.delete(foundTasks, false);
		Database.EmptyRecycleBinResult[] emptyRecycleBinResults = Database.emptyRecycleBin(foundTasks);
		Test.stopTest();
		
		//Verify the task was permanently deleted
		foundTasks = [Select Id From Task Where Id = :taskId ALL ROWS];
		System.assertEquals(0, foundTasks.size());
	}
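One thing I still plan to try is asserting on the emptyRecycleBin results themselves, to rule out a silent failure; a small sketch to add after the Database.emptyRecycleBin(...) call:

for (Database.EmptyRecycleBinResult r : emptyRecycleBinResults) {
    // Surface the per-record errors if the purge silently failed
    System.assert(r.isSuccess(), r.getErrors());
}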

 

We are calling a web service that returns its response in XML.  We've discovered that when the text of an XML tag contains an apostrophe, the text is truncated.  This is causing us problems.  Here's an example showing the problem:

 

String xmlString = '<txtProEmail>paul.o&apos;connor@company.com</txtProEmail>'; //paul.o'connor@company.com
XmlStreamReader xsr = new XmlStreamReader(xmlString);
while (xsr.hasNext())
{
    if (xsr.getEventType() == XmlTag.START_ELEMENT && xsr.getLocalName() == 'txtProEmail') {
        xsr.next();
        if (xsr.getEventType() == XmlTag.CHARACTERS) {
            System.debug('******** Value: ' + String.escapeSingleQuotes(xsr.getText()));
        }
    }
    xsr.next();
}

 

 

---------------

 

The debug statement always shows: paul.o

This is a problem since we validate that the emails are well-formed, and the truncated value is not.  Any thoughts?  The String.escapeSingleQuotes() call doesn't seem to affect the output.
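As an experiment, I'm going to try accumulating the text across successive CHARACTERS events, on the (unconfirmed) theory that the parser splits the text node at the entity rather than dropping the remainder.  A minimal sketch:

String xmlString = '<txtProEmail>paul.o&apos;connor@company.com</txtProEmail>';
XmlStreamReader xsr = new XmlStreamReader(xmlString);
while (xsr.hasNext()) {
    if (xsr.getEventType() == XmlTag.START_ELEMENT && xsr.getLocalName() == 'txtProEmail') {
        xsr.next();
        String value = '';
        // Concatenate every CHARACTERS event until the element ends,
        // in case the text is delivered in more than one chunk.
        while (xsr.getEventType() == XmlTag.CHARACTERS) {
            value += xsr.getText();
            xsr.next();
        }
        System.debug('******** Value: ' + value);
    }
    xsr.next();
}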

 

Any help is appreciated,

 

Andrew

We have a time-based workflow rule that updates a field on an Opportunity.  When the field is updated by the rule, a trigger fires.  In the trigger logic, we do different work depending on the user executing the trigger.  Who is the user running the time-based workflow rule?  Is there any way to control who that user is?  We would like to dictate the path taken in our logic (solely user based), but it appears that the user executing the workflow rule is the user who last modified the opportunity.  This is causing problems for us and seems extremely arbitrary.  Is there a workaround?

 

Andrew

We recently started working with Schedulable Apex and Batchable classes.  We rely heavily on the Ant Migration Tool to enable our continuous integration tool, Hudson, to work with our Apex code; behind the scenes, Hudson calls the Salesforce Ant Migration Tool.  We also use the migration tool for deployments.

 

This was all working fine until we started writing Schedulable classes.  Now, whenever we try to do a validate deployment using the migration tool (without tests running), I always get failures for a few classes.  The error message is always:

 

Batchable class has jobs pending or in progress; Schedulable class has jobs pending or in progress

 

I get this error message next to classes that aren't even schedulable.  I can, however, deploy fine from Eclipse.  I've also heard that change sets will work.  Has anyone else experienced this?  Is this a bug in the current Salesforce Ant Migration Tool?  Is there a workaround?  This is a huge problem for our team.  We rely on Hudson to validate our build every time we check code in to SVN, and each time it tells us the build failed.  Help please!

 

Andrew

 

FYI: I can run all the tests fine through the Salesforce UI and don't receive any errors.  This seems to indicate that no scheduled Apex jobs are actually running.

 

Our sandbox was recently updated to the Winter '11 release.  After upgrading, I tried to generate a new Enterprise WSDL.  However, I now receive lots of errors in Eclipse when it tries to parse it.  I'm also unable to parse it to generate my proxy classes (Java) using the built-in JDK tool 'wsimport'.  This used to work great.

 

Both tools complain that the schema is invalid because of the casing (upper/lower case) of some of the schema elements.  In particular, I've noticed the following changes between my old and new WSDL that are causing errors:

 

Old version -> New version:

<portType name="Soap"> -> <porttype name="Soap">

<complexType name="sObject"> -> <complextype name="sObject">

<complexContent> -> <complexcontent>

 

 

I'm sure there are more.  Is this still considered a valid WSDL?  My Java parsers don't think so.  It's unclear why this changed between versions.

 

Any help in resolving this problem is appreciated,

 

Andrew

I'm exploring the possibility of moving some of our cron jobs (Java) into an Apex Schedulable class.  For the most part, I've been able to make this work.  However, I've noticed that if I continue development of the Schedulable class, I am not allowed to save or deploy it to my sandbox until I've deleted the scheduled job.  I'm using System.schedule() to schedule it in the 'Execute Anonymous' view in Eclipse.

 

This becomes a real problem because I recently adopted the Salesforce Ant Migration Tool to ease our deployment process.  Now, if we make lots of code changes, we can deploy all of our triggers and classes just by running our Ant script.  However, I fear this won't be possible anymore if the Schedulable classes are scheduled in the target Salesforce instance.  Is there a way around this?  Is there something wrong with my deployment process?  I like to keep deployment a one-click operation, so I really thought an Ant script was the way to go.
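The only workaround I've found so far is to unschedule the job before deploying and reschedule it afterward, roughly like this in anonymous Apex (the job name, cron expression, and class name are placeholders):

// Before deploying: abort the scheduled jobs so the classes can be overwritten.
for (CronTrigger ct : [Select Id from CronTrigger]) {
    System.abortJob(ct.Id);
}
// After deploying: reschedule.
System.schedule('Nightly Job', '0 0 2 * * ?', new MySchedulableClass());

But that's a manual step that breaks the one-click deployment I'm after.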

 

Any help is appreciated,

 

Andrew

I've created a Batchable class that also implements Schedulable.  I can get it to work, but I'm concerned about how I'll receive errors when there are problems.  I can send an email in the finish() method, but that only gives me a summary of the total batch jobs and the first abbreviated error message (if there was one).  How can I see the actual exception message for each of the batches in the job (i.e., if 2 of the 6 total batches failed, I would like 2 emails with exception.getMessage(), or one large summary email with each exception message)?

 

I tried wrapping the code in my 'execute(Database.BatchableContext bc, List<sObject> objects)' method in a try/catch, but it doesn't seem to catch anything.  In the catch block, I wrote code to email me exception.getMessage().  Looking at the Apex Jobs screen, I can see that some of the batches are failing, but I don't receive any emails.

 

Is there any way I can see the exception for each failed batch in a scheduled job?  It's very hard to debug if I can't see the actual exceptions being thrown for a scheduled batch job.
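One pattern I'm considering, sketched below: implement Database.Stateful so that messages caught in execute() accumulate across batches and can be emailed from finish().  This assumes the failures are catchable exceptions (uncatchable limit errors would presumably still slip through); the class name and query are placeholders:

global class MyBatchJob implements Database.Batchable<sObject>, Database.Stateful {
    // Database.Stateful preserves this member across execute() invocations
    global List<String> errorMessages = new List<String>();

    global Database.QueryLocator start(Database.BatchableContext bc) {
        return Database.getQueryLocator('Select Id from Task');
    }

    global void execute(Database.BatchableContext bc, List<sObject> scope) {
        try {
            // ... real per-batch work here ...
        } catch (Exception e) {
            errorMessages.add(e.getMessage());
        }
    }

    global void finish(Database.BatchableContext bc) {
        // Build one summary email from errorMessages and send it here.
    }
}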

 

Andrew

Hello,

 

I'm trying to avoid duplicate sObjects in the lists of sObjects that I submit for update/delete/insert.  I realized that Sets are great for this.  The documentation states the following concerning the uniqueness of sObjects in Sets:

 

http://www.salesforce.com/us/developer/docs/apexcode/index.htm

 

"Uniqueness of sObjects is determined by IDs, if provided. If not, uniqueness is determined by comparing fields. For example, if you try to add two accounts with the same name to a set, only one is added"

 

However, in my experience, this is not the case.  In the following example, I have provided the ID of the opportunity, yet after changing one field, the same record ends up in the Set twice.  This is not the expected behavior, because the ID supplied in both add() calls is identical.

 

 

Opportunity opp1 = [Select Id from Opportunity Where  Id = '006Q00000054J7u'];
Set<Opportunity> opps = new Set<Opportunity>(); 
opps.add(opp1); 
opp1.Name = 'Something new';
opps.add(opp1);
System.debug('SIZE: ' + opps.size()); //prints 2, expect 1

 

 

 

What am I doing wrong?  Is this an API version issue?  I believe I'm using API version 19.0.

 

I will need to rewrite a lot of code if the Set uniqueness does not work as advertised.
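In the meantime, I may switch to deduplicating with a Map keyed by record Id, which sidesteps the field-based hashing entirely (this assumes every record in the list already has an Id):

Opportunity opp1 = [Select Id from Opportunity Where Id = '006Q00000054J7u'];
Map<Id, Opportunity> oppsById = new Map<Id, Opportunity>();
oppsById.put(opp1.Id, opp1);
opp1.Name = 'Something new';
oppsById.put(opp1.Id, opp1); // same key: overwrites instead of duplicating
System.debug('SIZE: ' + oppsById.size()); // prints 1
// oppsById.values() is then safe to pass to update/delete/insert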

 

Thanks for any help you might provide,

 

Andrew


Hello folks,

 

I hope you all are doing well.

Can anyone tell me what the difference is between an Aloha app and a native app in Salesforce?

Also, which features does Salesforce provide in the various Force.com editions?

 

Thanks,

Minkesh Patel


 

I'm puzzled.

 

Running Eclipse Helios and Force.com IDE plug-in 20.0.1

 

  • On 2011-05-20, our Sandbox was on Spring 11.  If I used the IDE and did Run Tests on class Foo, I got a full debug log of the entire class execution in the Apex Test Runner view
  • On 2011-05-23, after our Sandbox was upgraded to Summer 11, running the exact same test in the IDE on class Foo yields a vastly truncated debug log in the Apex Test Runner view. The log is only about 130KB.

 

I know the code executes to completion because running the same tests in the browser (Apex Classes | Run Tests) yields a debug log of 3300 KB using the same log filters.

 

As far as I can tell, it is something about Summer 11 and how the Eclipse IDE version 20.0.1 obtains the debug log.

 

Any ideas greatly appreciated (I'm already filtering the log to LoggingLevel.INFO and thus avoiding the noise; but I need more than 130KB of debug log to analyze my execution).

 

 


Hello,

 

We have 2 custom objects, 'Property' and 'Marketed_Property'.  These objects are related via a lookup field on Marketed_Property, so in this relationship Property is the parent object and Marketed_Property is the child.  We have a trigger on the Marketed_Property object called 'updateStatusOnProperty' which updates the parent Property records when fields on the child Marketed_Property object are updated.  There are also 2 triggers on Property which amend fields on the Property object when it is updated.  The Marketed_Property object is populated by 10 @future calls carrying out a bulk update, fed by an input file of approximately 2000 rows of data.

 

While processing the data using the @future jobs, we get an "UNABLE_TO_LOCK_ROW" error for one of the Property record updates in one of the @future jobs.  The other 9 @future jobs complete successfully.  The error is reproducible, but only on our live org and only sporadically, with the lock occurring against a different single record each time.  We have cloned our live environment in a full-size test org but cannot recreate the problem there, nor in any sandbox or DE org.

 

The trigger code is 1) doing a select on Property for all records where there is a child Marketed_Property record, 2) doing some comparisons on the Marketed_Property data to determine which Property rows/fields should be updated, and 3) updating the relevant Property records, and it's this last step that's failing.

 

The code is below:

 

if (mpIds.size() == 0) { return; }

List<Property__c> recordsBatch = new List<Property__c>();
List<Property__c> props = [
    Select Id, Property_Status__c, Asking_Price__c, Estate_Agent__c, Beds__c,
           Weeks_On_Market__c, Date_Marketed__c, Property_Type__c, Type__c,
           Last_Update_Date__c, Matched__c,
           (Select Id, Property_Status__c, Asking_Price__c, Estate_Agent__c, Beds__c,
                   Weeks_On_Market__c, Date_Marketed__c, Property_Type__c, Type__c,
                   Last_Updated__c
            from Properties__r
            order by LastModifiedDate desc)
    from Property__c
    where Id IN :mpIds];

for (Property__c p : props) {
    Property__c p1 = new Property__c(Id = p.Id);
    List<Marketed_Property__c> listMP = p.Properties__r;
    if (listMP.size() > 0) {
        if (listMP.size() == 2) {
            // Take the lower of the two asking prices
            if (listMP[0].Asking_Price__c < listMP[1].Asking_Price__c) {
                p1.Asking_Price__c = listMP[0].Asking_Price__c;
            } else {
                p1.Asking_Price__c = listMP[1].Asking_Price__c;
            }
            // Derive the combined status from the two marketed records
            if (listMP[0].Property_Status__c == 'For Sale' && listMP[1].Property_Status__c == 'For Sale') {
                p1.Property_Status__c = 'For Sale';
            } else if ((listMP[0].Property_Status__c == 'For Sale' && listMP[1].Property_Status__c == 'Sold STC')
                    || (listMP[0].Property_Status__c == 'Sold STC' && listMP[1].Property_Status__c == 'For Sale')) {
                p1.Property_Status__c = 'Sold STC';
            } else if ((listMP[0].Property_Status__c == 'For Sale' && listMP[1].Property_Status__c == 'Sold')
                    || (listMP[0].Property_Status__c == 'Sold' && listMP[1].Property_Status__c == 'For Sale')) {
                p1.Property_Status__c = 'Sold';
            } else if (listMP[0].Property_Status__c == 'Withdrawn' && listMP[1].Property_Status__c == 'Withdrawn') {
                p1.Property_Status__c = 'Withdrawn';
            }
            // Pick the record with the earlier Date_Marketed__c to supply the
            // remaining fields; on a tie, the Estate_Agent__c that sorts first wins
            Marketed_Property__c mp = null;
            if (listMP[0].Date_Marketed__c == listMP[1].Date_Marketed__c) {
                List<String> forEA = new List<String>();
                forEA.add(listMP[0].Estate_Agent__c);
                forEA.add(listMP[1].Estate_Agent__c);
                forEA.sort();
                if (forEA[0] == listMP[0].Estate_Agent__c) {
                    mp = listMP[0];
                } else {
                    mp = listMP[1];
                }
            } else if (listMP[0].Date_Marketed__c > listMP[1].Date_Marketed__c) {
                mp = listMP[1];
            } else {
                mp = listMP[0];
            }

            p1.Estate_Agent__c = mp.Estate_Agent__c;
            p1.Beds__c = mp.Beds__c;
            p1.Weeks_On_Market__c = mp.Weeks_On_Market__c;
            p1.Date_Marketed__c = mp.Date_Marketed__c;
            p1.Property_Type__c = mp.Property_Type__c;
            p1.Type__c = mp.Type__c;
            p1.Last_Update_Date__c = mp.Last_Updated__c;
        } else {
            // Only one marketed record: copy its fields directly
            p1.Property_Status__c = listMP[0].Property_Status__c;
            p1.Asking_Price__c = listMP[0].Asking_Price__c;
            p1.Estate_Agent__c = listMP[0].Estate_Agent__c;
            p1.Beds__c = listMP[0].Beds__c;
            p1.Weeks_On_Market__c = listMP[0].Weeks_On_Market__c;
            p1.Date_Marketed__c = listMP[0].Date_Marketed__c;
            p1.Property_Type__c = listMP[0].Property_Type__c;
            p1.Type__c = listMP[0].Type__c;
            p1.Last_Update_Date__c = listMP[0].Last_Updated__c;
        }
    }
    if (p.Matched__c == false) {
        //p.Matched__c = true;
        p1.Matched__c = true;
    }
    recordsBatch.add(p1);
    if (recordsBatch.size() == 1000) {
        update recordsBatch;
        recordsBatch.clear();
    }
}
if (recordsBatch.size() > 0) {
    update recordsBatch;
    recordsBatch.clear();
}

 

 

 

The error message is below:

18:3:38.13|CODE_UNIT_FINISHED
18:3:38.631|CODE_UNIT_STARTED|[EXTERNAL]updateStatusOnProperty on Marketed_Property trigger event AfterUpdate for a0DA0000000ukYY, a0DA0000000ukYZ, <snip> 186 IDs </snip>
18:3:38.719|DML_BEGIN|[62,2]|Op:Insert|Type:MatchingProHistory__c|Rows:187
18:3:39.338|DML_END|[62,2]|
18:3:39.339|SOQL_EXECUTE_BEGIN|[112,28]|Aggregations:1|Select id,Property_Status__c,Asking_Price__c,Estate_Agent__c,Beds__c,Weeks_On_Market__c,Date_Marketed__c,Property_Type__c,Type__c,Last_Update_Date__c,Matched__c,(Select id,Property_Status__c,Asking_Price__c,Estate_Agent__c,Beds__c,Weeks_On_Market__c,Date_Marketed__c,Property_Type__c,Type__c,Last_Updated__c from Properties__r order by LastModifiedDate desc)from Property__c where Id IN : mpIds
18:3:39.427|SOQL_EXECUTE_END|[112,28]|Rows:139|Duration:88
18:3:39.605|DML_BEGIN|[280,5]|Op:Update|Type:Property__c|Rows:139
18:3:49.7|DML_END|[280,5]|
18:3:49.8|EXCEPTION_THROWN|[280,5]|System.DmlException: Update failed. First exception on row 0 with id a0CA0000000Y27WMAS; first error: UNABLE_TO_LOCK_ROW, unable to obtain exclusive access to this record: []
18:3:49.13|FATAL_ERROR|System.DmlException: Update failed. First exception on row 0 with id a0CA0000000Y27WMAS; first error: UNABLE_TO_LOCK_ROW, unable to obtain exclusive access to this record: []

We've been trying to rewrite this query, but it isn't improving things.  Our current theory is that since the asynchronous @future calls run in parallel across the Marketed_Property records, and some of those child records share the same parent, the same Property record is being updated by multiple jobs at once and locked as a result.  However, opinion is divided: some of the team think Salesforce's execution controls prevent such a situation from occurring.
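To test that theory, one idea we're considering is partitioning the input so that all Marketed_Property rows sharing a parent end up in the same @future call, so no two parallel jobs ever update the same Property record.  A rough sketch (assuming the lookup field on Marketed_Property__c is named Property__c, and allRows is the full input set):

// Group child rows by parent Property so whole groups can be assigned
// to a single @future payload.
Map<Id, List<Marketed_Property__c>> byParent = new Map<Id, List<Marketed_Property__c>>();
for (Marketed_Property__c mp : allRows) {
    if (!byParent.containsKey(mp.Property__c)) {
        byParent.put(mp.Property__c, new List<Marketed_Property__c>());
    }
    byParent.get(mp.Property__c).add(mp);
}
// Distribute whole groups across the 10 @future payloads; a parent never
// spans two concurrently running jobs, so its row can't be locked twice.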

 

Has anyone seen this before, or can anyone spot something we're missing?

 

Thanks.

Message Edited by davehilary on 03-17-2010 08:32 AM

So, I tried to use a Map with an enum as the key type (the documentation says an enum can be used as a datatype), but it does not work.

 

 

Declaration in the class definition:

 

public enum Season {WINTER, SPRING}

 

Definition of Map in the constructor:

 

 

Map<Season,String> heatMap = new Map<Season,String> ();

 

Error: Compile Error: Map by CustomCon.Season not allowed at line 27 column 37
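As a workaround, keying the Map by the enum's name() String seems to work; a sketch based on my example above:

// Workaround sketch: use the enum's name() String as the Map key instead.
Map<String, String> heatMap = new Map<String, String>();
heatMap.put(Season.WINTER.name(), 'cold');
heatMap.put(Season.SPRING.name(), 'mild');
System.debug(heatMap.get(Season.WINTER.name()));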

 






 

I keep getting this error when I try to validate a deployment.  The classes in question are not Schedulable classes or related to scheduling in any way.  One is a controller for a VF page, another is the tests for that controller, and the third is a utility class with some static methods.

I have a completely separate and unrelated scheduled job, but it was not running when I got this error.

 

 

BUILD FAILED
C:\Documents and Settings\venable\Desktop\deploy\build.xml:7: Failures:
classes/configAddProductsVF.cls(1,8):Schedulable class has jobs pending or in progress
classes/configAddProductsVFTEST.cls(1,8):Schedulable class has jobs pending or in progress
classes/utility.cls(1,8):Schedulable class has jobs pending or in progress

Anyone seen this before?

Thanks,
Jason
