irfan aziz (Member since 2017)
I just tested the outbound message feature of Salesforce with an endpoint URL from putsreq.com, but I don't know how to set up the listener URL on my own. Do I need to install some HTTP server, or what?
So far we have been using the Bulk API via Python scripts to extract data as CSV. Now I am aiming for a real-time case using outbound messaging. Can anyone give a link to a guide, especially on setting up a listener URL to which the outbound messages will be sent (a rough sketch of what I mean is below)? I think I'll manage the rest.
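For illustration, a minimal listener sketch using only the Python standard library; the port, handler name, and print-based processing are placeholders, and in production the endpoint would need to be internet-reachable over HTTPS (e.g. behind a reverse proxy). Salesforce keeps retrying a message until the listener answers with an Ack:

from http.server import BaseHTTPRequestHandler, HTTPServer

ACK = b"""<?xml version="1.0" encoding="UTF-8"?>
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
  <soapenv:Body>
    <notificationsResponse xmlns="http://soap.sforce.com/2005/09/outbound">
      <Ack>true</Ack>
    </notificationsResponse>
  </soapenv:Body>
</soapenv:Envelope>"""

class OutboundMessageHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)   # the SOAP notification from Salesforce
        print(body.decode("utf-8"))      # parse/process the message here
        self.send_response(200)
        self.send_header("Content-Type", "text/xml; charset=utf-8")
        self.end_headers()
        self.wfile.write(ACK)            # acknowledge so Salesforce stops retrying

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), OutboundMessageHandler).serve_forever()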

Thank you.
Hi,
Can anyone tell me how to write this login.txt content in JSON format, so I can send requests with Content-Type application/json?
<?xml version="1.0" encoding="utf-8" ?>
<env:Envelope xmlns:xsd="http://www.w3.org/2001/XMLSchema"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:env="http://schemas.xmlsoap.org/soap/envelope/">
<env:Body>
<n1:login xmlns:n1="urn:partner.soap.sforce.com">
<n1:username>your_username</n1:username>
<n1:password>your_password</n1:password>
</n1:login>
</env:Body>
</env:Envelope>
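As far as I know the SOAP login endpoint only accepts XML, so the envelope itself has no JSON equivalent; the JSON-returning alternative is the OAuth username-password token flow. A sketch, assuming a connected app exists and its consumer key and secret are to hand:

import requests

resp = requests.post(
    'https://login.salesforce.com/services/oauth2/token',
    data={
        'grant_type': 'password',
        'client_id': 'your_consumer_key',         # from the connected app
        'client_secret': 'your_consumer_secret',  # from the connected app
        'username': 'your_username',
        'password': 'your_password',  # append the security token if one is required
    },
)
auth = resp.json()  # e.g. {'access_token': ..., 'instance_url': ...}
print(auth['access_token'], auth['instance_url'])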

 
Trying to do the cURL bulk steps in Python. I can do the login via Postman, which returns the session ID, but when I try to run the code in Python it keeps executing without finishing and without giving any error. However, I am able to create a job using the session ID from the Postman POST.
Here is the code. Any suggestions?
import requests as r

url1 = 'https://login.salesforce.com/services/Soap/u/39.0'
header1 = {'Content-Type': 'text/xml; charset=UTF-8', 'SOAPAction': 'login'}
with open('login.txt', 'r') as f:
    body1 = f.read()  # read the envelope into a string rather than posting the open file object
resp1 = r.post(url1, headers=header1, data=body1)
print(resp1.status_code)
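Once the POST returns, the session ID still has to be pulled out of the SOAP response. A small sketch, assuming the standard partner login response body with a <sessionId> element:

import re

match = re.search(r'<sessionId>(.*?)</sessionId>', resp1.text)
if match:
    session_id = match.group(1)
    print(session_id)  # goes in the X-SFDC-Session header for Bulk API calls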

 
There is an option to run large datasets with the PK chunking option in cURL, which divides the large data set into smaller chunks. I am wondering how to achieve the same thing in Java. I have managed to run this sample code, but it fails for large tables with around 1 million rows due to an out-of-memory error.
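PK chunking is enabled with a request header at job creation, so it is client-agnostic; a sketch of the raw REST call, assuming `instance_url` and `session_id` from a prior login (in Java the same header would be added on the Bulk connection, e.g. via BulkConnection.addHeader):

import requests

job_xml = """<?xml version="1.0" encoding="UTF-8"?>
<jobInfo xmlns="http://www.force.com/2009/06/asyncapi/dataload">
  <operation>query</operation>
  <object>Account</object>
  <contentType>CSV</contentType>
</jobInfo>"""

resp = requests.post(
    instance_url + '/services/async/39.0/job',
    headers={
        'X-SFDC-Session': session_id,
        'Content-Type': 'application/xml; charset=UTF-8',
        # Salesforce splits the extract into batches of at most chunkSize rows
        'Sforce-Enable-PKChunking': 'chunkSize=100000',
    },
    data=job_xml,
)
print(resp.status_code, resp.text)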
Hi, I managed to set up the Java Bulk API client, which connects to and queries the SF org. However, I was using a direct internet connection to the Salesforce org. Now I will run it via a proxy server, and I am not sure whether the small code snippet below works. I have put in these two lines and it works on my system, which has an active internet connection. How do I check whether my proxy is being used or not?
 System.setProperty("http.proxyHost", proxyHostName);
 System.setProperty("http.proxyPort", ""+proxyPort);
The other way is to check it on the server.
public boolean login() {
    boolean success = false;
    // Change "login" to "test" in the host to move to a sandbox; the last
    // number is the API version being used.
    String serverurl = "https://eu1.salesforce.com/services/Soap/u/39.0/00E58000110cvU";
    String soapAuthEndPoint = serverurl;
    // "https://login.salesforce.com/services/Soap/u/39.0";
    // The first part is the host of the org you are pointing this tool at.
    String bulkAuthEndPoint = "https://eu1.salesforce.com/services/async/39.0";
    try {
        ConnectorConfig config = new ConnectorConfig();
        config.setUsername(userId);
        config.setPassword(passwd);
        config.setProxy("proxy.abc.fi", 808);
        System.setProperty("http.proxyHost", "proxy.abc.fi");
        System.setProperty("http.proxyPort", "" + 808);
        config.setAuthEndpoint(soapAuthEndPoint);
        config.setCompression(true);
        config.setTraceFile("C:\\Users\\uz0343\\workspace\\BulkApi\\src\\traceLogs.txt");
        config.setTraceMessage(false);
        config.setPrettyPrintXml(true);
        config.setRestEndpoint(bulkAuthEndPoint);
        System.out.println("AuthEndpoint: " + config.getRestEndpoint());
        PartnerConnection connection = new PartnerConnection(config);
        System.out.println("SessionID: " + config.getSessionId());
        bulkConnection = new BulkConnection(config);
        success = true;
    } catch (Exception e) {
        e.printStackTrace();
    }
    return success;
}

 
Hi,
I managed to run the example code shared here, but my Account table in SF has more than 1 million rows and I am extracting all the columns. When I ran the extraction, it threw a heap-space error at around 619,000 rows. How can I split the result into multiple files? I expected the program to do the split, as Bulk is meant to, but the given code did not.

Any idea?
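For reference, the pattern that avoids holding everything in memory is to stream each result set of each batch straight to its own file. A sketch of the REST side, assuming `instance`, `session_id`, `job_id`, and `batch_id` from earlier steps; in Java the analogue is reading the InputStream from BulkConnection.getQueryResultStream in chunks:

import re
import requests

base = 'https://' + instance + '/services/async/39.0'
headers = {'X-SFDC-Session': session_id}

# The result endpoint lists the result ids produced for the batch.
listing = requests.get(base + '/job/' + job_id + '/batch/' + batch_id + '/result',
                       headers=headers)
result_ids = re.findall(r'<result>(.*?)</result>', listing.text)

for i, rid in enumerate(result_ids):
    resp = requests.get(base + '/job/' + job_id + '/batch/' + batch_id +
                        '/result/' + rid, headers=headers, stream=True)
    with open('accounts_part_%d.csv' % i, 'wb') as out:
        for chunk in resp.iter_content(chunk_size=65536):  # stream, don't buffer
            out.write(chunk)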
I have been trying to implement the Java Bulk API example given here (https://salesforce.stackexchange.com/questions/24081/exporting-data-to-csv-file-via-bulk-api), but I am getting the error "URL not reset" (screenshot attached).
As you can see, I am able to get the session ID, but it fails after that. I don't know if I am using the bulkAuthEndPoint value correctly.
I am following the Java code walkthrough (https://developer.salesforce.com/docs/atlas.en-us.api_asynch.meta/api_asynch/asynch_api_code_walkthrough.htm).
I am able to connect and create a job, but now I want createBatchesFromCSVFile() to query the Salesforce object and write the result to a CSV file. Where can I get documentation or an example for this?
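As far as I can tell, createBatchesFromCSVFile() in that walkthrough is for loading a CSV, not extracting; for extraction the SOQL query itself becomes the batch body of a job created with operation=query. A sketch of the underlying REST call, assuming `instance`, `session_id`, and `job_id` from earlier steps (the Java equivalent is BulkConnection.createBatchFromStream):

import requests

batch = requests.post(
    'https://' + instance + '/services/async/39.0/job/' + job_id + '/batch',
    headers={'X-SFDC-Session': session_id,
             'Content-Type': 'text/csv; charset=UTF-8'},
    data='SELECT Id, Name FROM Account',  # the query is the batch body
)
print(batch.status_code, batch.text)      # poll this batch until it is Completed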
I am using SOQL and trying to get the data in the local time zone. I found one link, but it does not give a complete answer. The query below gives the date in the local time zone, but for time there is only the HOUR_IN_DAY() function; there is no mention of how to retrieve the minutes in local time. Since I am using SOQL in a Python script to extract data from SF, I don't see it as feasible to declare some complex logic to do the timestamp conversion.
SELECT DAY_ONLY(convertTimezone(CreatedDate)) FROM Opportunity
GROUP BY DAY_ONLY(convertTimezone(CreatedDate))
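Since convertTimezone() only pairs with date functions, one workaround is to extract CreatedDate in UTC and convert client-side; minutes then come for free. A sketch in Python 3, assuming the ISO-8601 timestamps Salesforce returns (e.g. "2017-04-03T08:15:30.000+0000") and Europe/Helsinki as an example zone:

from datetime import datetime
import pytz

def to_local(created_date, tz_name='Europe/Helsinki'):
    utc_dt = datetime.strptime(created_date, '%Y-%m-%dT%H:%M:%S.%f%z')
    return utc_dt.astimezone(pytz.timezone(tz_name))

local = to_local('2017-04-03T08:15:30.000+0000')
print(local.hour, local.minute)  # minutes survive, unlike HOUR_IN_DAY()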
I am using the simple_salesforce Python library for data extraction. I plan to use it in a production environment, but I am a bit sceptical about its reliability and could not find its limitations documented. I would like to know your expert opinion about this library. There is another Python library, salesforce-bulk, which implements the Bulk API, but I am not sure how good it will be.
I have used simple_salesforce, which turned out to be very useful, especially in terms of the way records are accessed.
 
The select below, with the condition createddate=last_n_days:1, filters the last day's data. But I expect that we will need to query multiple times in a day. Is there a function I can use to filter based on hours (like getting the data for the last 3 hours or 1 hour)?
SELECT AccountId, Amount, CampaignId, CloseDate, CreatedById, CreatedDate, Description, Fiscal, FiscalQuarter, FiscalYear, ForecastCategory, ForecastCategoryName,
HasOpportunityLineItem, Id, InCurrentHalfYear__c, IsClosed, IsDeleted, IsWon, LastActivityDate, LastModifiedById, LastModifiedDate
FROM Opportunity WHERE CreatedDate = LAST_N_DAYS:1
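As far as I know there is no LAST_N_HOURS date literal at this API version, but SOQL compares datetime literals directly, so the cutoff can be computed client-side. A sketch:

from datetime import datetime, timedelta

cutoff = datetime.utcnow() - timedelta(hours=3)
soql = ('SELECT Id, Amount, CreatedDate FROM Opportunity '
        'WHERE CreatedDate > ' + cutoff.strftime('%Y-%m-%dT%H:%M:%SZ'))
print(soql)  # e.g. ... WHERE CreatedDate > 2017-04-03T05:15:30Z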
 
I have a field Owner (data type Lookup(User)) on the Opportunity object.
SELECT createddate, closedate, owner FROM Opportunity LIMIT 10
The above select query fails and says there is no such column owner. Can anyone help?
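For what it's worth, a lookup field is exposed in SOQL as its Id column (OwnerId) or through the relationship name (Owner.Name), not as "owner". A sketch with simple_salesforce, assuming credentials and a security token are available:

from simple_salesforce import Salesforce

sf = Salesforce(username='user@abc.com', password='passwd',
                security_token='token')
rows = sf.query("SELECT CreatedDate, CloseDate, OwnerId, Owner.Name "
                "FROM Opportunity LIMIT 10")
for rec in rows['records']:
    print(rec['OwnerId'], rec['Owner']['Name'])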

I managed to do all the steps mentioned in this process except generating stubs from the WSDLs.

java -jar target/force-wsc-39.0.1-uber.jar <inputwsdlfile> <outputjarfile>
java -jar target/force-wsc-39.0.0.jar C:\Users\upg022\Downloads\partner.wsdl partner.jar
Here I get the error:
no main manifest attribute, in target/target/force-wsc-39.0.0.jar
I have downloaded the force-wsc*.jar file and put it at different locations, but it always fails. I don't know what could be wrong with it.

I even tried this one:
java -classpath "${JAVA_HOME}lib/tools.jar:target/force-wsc-39.0.1-uber.jar" com.sforce.ws.tools.wsdlc <inputwsdlfile>  <outputjarfile>
java -classpath "${JAVA_HOME}lib/tools.jar:target/force-wsc-39.0.0.jar" com.sforce.ws.tools.wsdlc C:\Users\upg022\Downloads\partner.wsdl  partner.jar

Again I get an error: Could not find or load main class com.sforce.ws.tools.wsdlc

What could be the issue?

Here is the code:
from salesforce_bulk import SalesforceBulk
bulk = SalesforceBulk(username='user@abc.com', password='passwd')
After executing this line I get the error:
Traceback (most recent call last):
  File "<pyshell#7>", line 2, in <module>
    cid,sls,sru = SalesforceBulk(username='user@agc.com', password='passwd')
  File "C:\Python27\lib\site-packages\salesforce_bulk\salesforce_bulk.py", line 60, in __init__
    username, password)
  File "C:\Python27\lib\site-packages\salesforce_bulk\salesforce_bulk.py", line 87, in login_to_salesforce
    ', '.join(missing_env_vars)))
RuntimeError: You must set SALESFORCE_CLIENT_ID, SALESFORCE_CLIENT_SECRET, SALESFORCE_REDIRECT_URI to use username/pass login
What could be wrong here? I am following the simple examples.
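Judging by the traceback, this version of salesforce_bulk expects connected-app OAuth environment variables for username/password login. The alternative the library supports is passing a sessionId and host obtained elsewhere; a sketch, assuming simple_salesforce is installed to do the login (the security token is needed unless the IP is trusted):

from simple_salesforce import SalesforceLogin
from salesforce_bulk import SalesforceBulk

session_id, instance = SalesforceLogin(
    username='user@abc.com', password='passwd', security_token='token')
bulk = SalesforceBulk(sessionId=session_id, host=instance)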
 
I know there are some docs where this is explained, but I haven't found a complete example.
I have a CSV for a custom object with 10,000 records.
I want to upload this data to my org using the Bulk API.
I know there are tools like dataloader.io and the Apex Data Loader, but I want a custom tool in Python.
Thanks.
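A sketch of what such a tool boils down to over the raw Bulk API, assuming `instance` and `session_id` from a prior login; MyObject__c and records.csv are placeholders:

import re
import requests

base = 'https://' + instance + '/services/async/39.0'
headers_xml = {'X-SFDC-Session': session_id,
               'Content-Type': 'application/xml; charset=UTF-8'}

job_xml = """<?xml version="1.0" encoding="UTF-8"?>
<jobInfo xmlns="http://www.force.com/2009/06/asyncapi/dataload">
  <operation>insert</operation>
  <object>MyObject__c</object>
  <contentType>CSV</contentType>
</jobInfo>"""

job = requests.post(base + '/job', headers=headers_xml, data=job_xml)
job_id = re.search(r'<id>(.*?)</id>', job.text).group(1)

# 10,000 rows fit in a single batch (the per-batch limit is 10,000 records).
with open('records.csv', 'rb') as f:
    requests.post(base + '/job/' + job_id + '/batch',
                  headers={'X-SFDC-Session': session_id,
                           'Content-Type': 'text/csv; charset=UTF-8'},
                  data=f)

# Close the job so Salesforce finishes processing and reports results.
requests.post(base + '/job/' + job_id, headers=headers_xml,
              data='<?xml version="1.0" encoding="UTF-8"?>'
                   '<jobInfo xmlns="http://www.force.com/2009/06/asyncapi/dataload">'
                   '<state>Closed</state></jobInfo>')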

Hi,

I am trying to use a StandardSetController in conjunction with a standard list view. Since my database has more than 10,000 objects, if I do not LIMIT the number of records returned by the Database.getQueryLocator method, I get an error: Too many query locator rows: 10001

Here's the code snippet that returns the record set:
public ApexPages.StandardSetController setCon {
    get {
        if (setCon == null) {
            setCon = new ApexPages.StandardSetController(Database.getQueryLocator(
                [SELECT name, Home_Phone__c, Work_Phone__c, Mobile_Phone__c,
                        email__c, Email2__c, City__c
                 FROM Candidate__c LIMIT 10000]));
        }
        return setCon;
    }
    set;
}

public List<Candidate__c> getCandidates() {
    setCon.setPageSize(25);
    return (List<Candidate__c>) setCon.getRecords();
}

However, if I add a LIMIT clause to my SOQL query, then when I change the list view filter to give me a list of objects where the state is set to "CA", I get the following error:

ORDER BY Name ASC LIMIT 10000 WHERE ((State__c = 'CA')) ^ ERROR at Row:1:Column:133 unexpected token: WHERE

Message Edited by trish on 03-16-2009 01:52 PM
I'm sure this is something silly...
 
I deleted a contact and ran the following SOQL, but get no results, even though I have read access on that Contact.
 
Code:
SELECT Id, isDeleted FROM Contact WHERE isDeleted = true

Any clues?
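For reference: deleted rows are excluded from the plain query resource and are only visible through queryAll (or ALL ROWS in Apex), while the record is still in the Recycle Bin. A sketch over REST, assuming `instance_url` and `session_id` from a login:

import requests

resp = requests.get(
    instance_url + '/services/data/v39.0/queryAll',
    headers={'Authorization': 'Bearer ' + session_id},
    params={'q': "SELECT Id, IsDeleted FROM Contact WHERE IsDeleted = true"},
)
print(resp.json()['records'])  # deleted contacts still in the Recycle Bin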