Dataloader CLI timestamped log file creation and import from folder rather than file
As someone who often has to import new assets into our system via the Data Loader tool, I've decided to try to automate the process a bit, as the GUI is long-winded for importing multiple files a day.
I've been reading the documentation and have created my config files and batch file. I finally got it working, but there are a couple of limitations I'm hoping to work around.
Here is my process-conf.xml file:
<!DOCTYPE beans PUBLIC "-//SPRING//DTD BEAN//EN" "http://www.springframework.org/dtd/spring-beans.dtd">
<beans>
    <bean id="AssetUpsertProcess"
          class="com.salesforce.dataloader.process.ProcessRunner"
          singleton="false">
        <description>Asset Upsert job gets the inventory asset record updates from a CSV file and uploads them to salesforce using 'upsert'.</description>
        <property name="name" value="AssetUpsertProcess"/>
        <property name="configOverrideMap">
            <map>
                <entry key="sfdc.endpoint" value="https://login.salesforce.com"/>
                <entry key="sfdc.username" value=""/>
                <entry key="sfdc.password" value=""/>
                <entry key="sfdc.timeoutSecs" value="600"/>
                <entry key="sfdc.loadBatchSize" value="200"/>
                <entry key="sfdc.externalIdField" value="Oracle_Id__c"/>
                <entry key="sfdc.entity" value="Asset"/>
                <entry key="process.operation" value="upsert"/>
                <entry key="process.mappingFile" value="C:\Salesforce Imports\Mapping\1.3.sdl"/>
                <entry key="dataAccess.name" value="C:\Salesforce Imports\Inventory Imports\Ready to import\import.csv"/>
                <entry key="dataAccess.type" value="csvRead"/>
                <entry key="process.initialLastRunDate" value="2006-12-01T00:00:00.000-0800"/>
                <entry key="process.outputError" value="C:\Salesforce Imports\errors\error.csv"/>
                <entry key="process.outputSuccess" value="C:\Salesforce Imports\errors\success.csv"/>
                <entry key="process.useEuropeanDates" value="true"/>
            </map>
        </property>
    </bean>
</beans>
My queries relate to three settings: dataAccess.name, process.outputError, and process.outputSuccess.
For dataAccess.name you have to specify a file name. Is there a way to specify a folder name instead, so that it imports all .csv files in that folder? This would let me import multiple files at once, rather than having to save each file into the specified folder under the same file name after every run.
Regarding the output error/success files, I'd like the results files to be timestamped rather than constantly overwritten. The GUI does this, so I assume there is a way to do it via the command line. I keep an audit of all imports, and having these log files is crucial to that.
A bean in process-conf.xml is specific to a single operation on one object.
To perform a data load for multiple objects, you'll need multiple beans in your process-conf.xml, one for each object, each with its own data file and error/success files.
To keep a history of success/error files, have your batch file move them into a folder stamped with the date and time after each run.
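A minimal sketch of that archive step (POSIX shell; the original setup uses a Windows batch file, where `%date%`/`%time%` and the `move` command would do the same job). The `results` folder here stands in for `C:\Salesforce Imports\errors`, and the dummy result files are just for illustration:

```shell
RESULTS_DIR="results"              # stand-in for C:\Salesforce Imports\errors
mkdir -p "$RESULTS_DIR"
printf 'ID,STATUS\n' > "$RESULTS_DIR/success.csv"   # pretend a run just finished
printf 'ID,ERROR\n'  > "$RESULTS_DIR/error.csv"

STAMP=$(date +%Y%m%d_%H%M%S)       # e.g. 20240131_142530
ARCHIVE_DIR="$RESULTS_DIR/$STAMP"
mkdir -p "$ARCHIVE_DIR"

# Move the fresh results aside so the next run cannot overwrite them.
for f in success.csv error.csv; do
  if [ -f "$RESULTS_DIR/$f" ]; then
    mv "$RESULTS_DIR/$f" "$ARCHIVE_DIR/$f"
  fi
done
```

Run this at the end of your batch file, after the Data Loader job has written its results, and each run's success/error files end up in their own timestamped folder.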
Hope this helps.
Just put all the .csv files in a single INPUT location.
Use a script (schedule it for automation) that looks into INPUT, takes the files one by one, copies each into /read, and runs the Data Loader job.
After each job run, clean /read and move the input file to a /Processed location.
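The loop above can be sketched like this (POSIX shell; on Windows the same logic would live in a scheduled .bat file). The folder names `input`, `read`, and `processed` follow the suggestion above, the fixed file name `import.csv` matches the dataAccess.name in the question's config, and the commented-out Data Loader call is the standard `process.bat <config dir> <process name>` CLI invocation with placeholder paths:

```shell
INPUT="input"
READ="read"
PROCESSED="processed"
mkdir -p "$INPUT" "$READ" "$PROCESSED"

process_inputs() {
  for csv in "$INPUT"/*.csv; do
    [ -e "$csv" ] || continue                # nothing to process
    cp "$csv" "$READ/import.csv"             # fixed name expected by dataAccess.name
    # process.bat "<config dir>" AssetUpsertProcess   # run the Data Loader job here
    rm -f "$READ/import.csv"                 # clean /read after the run
    mv "$csv" "$PROCESSED/"                  # archive the original input file
  done
}

# Demo: one dummy input file, just for illustration.
printf 'Name,Oracle_Id__c\n' > "$INPUT/assets.csv"
process_inputs
```

After the run, each input file has been loaded once and sits in /processed, so re-running the script never imports the same file twice.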
Enjoy IT!