SoCal Admin

98% Wasted Compilation Time - Need Deployment Enhancement Desperately

 

Dear Salesforce Development Team:   Please fix the Deployment / Unit Testing process.

 

We do NOT need to recompile every line of Unit Testing code for a single class update.

 

This results in 98% wasted compiler time and a 15-minute wait for every minor change - one that could be completed in 30 seconds.

 

Contact me if you need ideas - I have many.

 

Why do we need to re-run unit tests for classes that were compiled yesterday, and the day before that, and the day before that, and every day for the last 2 months - when the class being deployed doesn't even reference them?

 

This is the biggest waste of time by **far**.  There are client-side and server-side solutions to address this.

 

This is a nightmare.

admintrmp

You can always click on the class name and then click Run Test. That runs the tests for that particular class only.
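For illustration, here is a minimal, hypothetical test class (the names are made up); clicking Run Test on its detail page executes only the methods it contains, not every other test in the org:

```
@isTest
private class AccountNamingTest {
    // "Run Test" on this class's detail page runs only the methods below,
    // not the whole org's test suite.
    static testMethod void testAccountInsert() {
        Account a = new Account(Name = 'Test Account');
        insert a;
        System.assertNotEquals(null, a.Id);
    }
}
```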

 

What I would like to see is the ability to run a test on a specific method in a class. That would be cool.

 

Personally, I don't see this as a catastrophe, but I think some improvements could be made in places.

sfdcfox
It's not a recompile, by the way, it's just a run-through. The idea is that the entire system should be tested during a deployment to a mission-critical system. A typical person will use their sandbox or a developer environment, make as many tests and adjustments as they need to there, then deploy to their production environment.
SoCal Admin

 

It doesn't matter what you call it.  It's "Running through" things that don't need to be run through.

 

Here's the scenario:

1) People do testing in sandbox

2) It works

3) People agree

4) People sign in blood

5) Enemies are made due to code that is slashed in this "one-time" update

6) Code is escalated to production

7) Code is tested in production (minimally because no one has time to do full testing)

8) Changes are demanded after 1-2 days

9) After one or two iterations of Step 8, Steps 1-5 are eliminated

10) Step 5a (a minor code change) is inserted

11) Steps 5a-8 are repeated indefinitely.

 

Please note:

1) Step 1, above, is great for testing, but it doesn't solve much, for reasons discussed elsewhere

2) Step 8 will recur ***REGULARLY*** because end-users don't have the time or skillset to think it through and Management has a "make-it-happen" mentality

3) This leaves us with "completed, accepted, tested and unit-tested code"  -> *with* *unending* *minor* *updates*.

 

Yes - we understand the *idea* - and that's what it is - an *idea*.  Now we've found that the idea is flawed.  And I think you will agree that optimized ideas are better, faster, and more streamlined than inefficient ones -> hence the Feature Enhancement Request.

I'm not arguing that the idea is incorrect - only that it is not implemented correctly.  The "RUN THROUGH" is checking classes, objects, styles, whatever - things that are **unrelated** to the code.

 

Think about Linux.  It forces a Disk-Check **periodically** (and when requested) - not *every* time you add a user.

SoCal Admin

 

I do "Run Test".  Regularly.   It doesn't change the user factor. (Changes after production).

sfdcfox
I'm not entirely against the idea that Run All Tests should be optional. After all, it's no skin off salesforce.com's back if someone deploys a broken update that fubars their organization's business process and causes endless delays.

The fact is, though, that once step 11 is reached, there is a problem with usability and stability, and adoption of the CRM is likely to fall. Often, a seemingly innocuous update will cause severe damage that, without a testing phase, would not surface until later.

As an actual, live example, one of our developers added a validation rule to our project. One. Single. Validation. Rule. Now, you'd like to think that everything would have been all fine and dandy. After all, no code at all was changed. Unfortunately, it actually caused 78 test failures during the pre-upload Run All Tests. Had this validation rule gone into effect in a customer's database without any warning, they would have immediately lost about 25% of the total functionality of our package, and that 25% covered about 80% of what makes our package useful.
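As a made-up sketch of that failure mode (not our actual package code), a test like the one below breaks the moment such a rule is added, with no Apex changes at all:

```
@isTest
private class OpportunityStageTest {
    static testMethod void testCreateOpportunity() {
        // This insert passed for months. If someone later adds a validation rule
        // that, say, requires a Description on every Opportunity, the same DML
        // throws a FIELD_CUSTOM_VALIDATION_EXCEPTION and the test fails,
        // even though not a single line of Apex was touched.
        Opportunity opp = new Opportunity(
            Name = 'Renewal',
            StageName = 'Prospecting',
            CloseDate = Date.today().addDays(30)
        );
        insert opp;
        System.assertNotEquals(null, opp.Id);
    }
}
```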

So, while I agree that it would be nice to skip the tests altogether, especially for seemingly small updates, salesforce.com has decided to mitigate the risk of lost productivity from a botched update rather than make it easier for developers to deploy lousy code.

Personally, I applaud their brazen attempt to make developers produce better code. Not many companies would choose to go that route, and I believe, in the end, that developers who regularly develop on Force.com will have better programming practices in general, even if they move on to another field of programming (such as PHP or C++). Someone should do a study on that subject.

And your analogy on Linux is comparing apples to oranges. A Linux user is like a salesforce.com record, and a Linux package is like Apex Code. So, when you modify records in salesforce.com, it doesn't Run All Tests, just like Linux doesn't do any strenuous testing when a user is added. But when you install a package in Linux, it runs a test, checks version dependencies, runs pre-install triggers, modifies the system, runs post-install triggers, cleans up other packages, etc. Linux and salesforce.com are very much alike in design in that regard. Even a Microsoft Update creates a restore point so you can roll back a failed update.

So, while it seems like overkill, I feel like they've almost got it right... you are correct in saying that the system should isolate tests to just the suspect items, but that truth is actually reasonably obscure when you consider how everything ties together. Even the "security scanner" takes 6 hours to fully map a decent-sized project for flaws. Imagine having to go through that period for each "major" upgrade in exchange for shorter "minor" updates. I don't think the trade-off would be fair.
admintrmp
I think I misunderstood the original post in regards to when this occurs.

I have to agree with sfdcfox: the whole process helps a lot, and for me it has shaped how I am as a developer. I will quite happily go to another language and run through the same processes every time I make a deployment/release, as it makes life so much easier.

The thing to note is, Salesforce is catering for all types of deployments, whether it's minor changes or major changes, and whether a minor change is just a typo fix or a method refactor. At the end of the day, a typo fix could change the way an entire application works. This needs to be accounted for.
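As a purely hypothetical illustration of that last point, a one-character "typo fix" can change behaviour for every caller:

```
public with sharing class DiscountCalculator {
    // A one-character "typo fix" here (>= changed to >, or 0.10 mistyped as 0.01)
    // silently changes what every caller pays, which is exactly the kind of
    // "minor" edit the full test run is meant to catch.
    public static Decimal rateFor(Decimal orderTotal) {
        if (orderTotal >= 1000) {
            return 0.10;
        }
        return 0;
    }
}
```
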
SoCal Admin

 

Yes, I was aware of the limits of the Linux analogy as I wrote it.  I used a simplistic example because it *does* fit -> I'm talking about wasting time repeatedly.

 

You're still missing the point.  I'm not talking about not running tests.  I'm talking about not running tests that *don't relate* to the modified code. 

 

I know the effect of the change I made.  Perhaps it was a string change from 'XYZ' to 'ABC'.  Perhaps I added a variable and produced a formula -> it's not going to affect anything in Accounts, Contacts, Opportunities, [Insert 60 objects here] because it is a custom, isolated object.  Perhaps, in a different scenario, I *do* customize something related to Accounts and Contacts -> it won't be related to 48 other tables.  In the low-probability-but-eventual scenario that I make a typo, I expect all the tests **related** to my code to be run to catch that typo.  But you don't have to run tests for ***UNRELATED OBJECTS AND CLASSES***.
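Purely for the sake of argument, here is what such an isolated change might look like (Widget__c and the class name are invented):

```
public with sharing class WidgetLabelService {
    // The only change in this deployment: the prefix 'XYZ' became 'ABC'.
    // The class touches a single custom object and never references Account,
    // Contact, Opportunity or anything else, yet deploying it still runs
    // every test in the org.
    private static final String LABEL_PREFIX = 'ABC';

    public static String buildLabel(Widget__c w) {
        return LABEL_PREFIX + '-' + w.Name;
    }
}
```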

 

It sounds like you have the luxury to work in a static environment where a Project is scoped, designed, coded, tested and promoted.  That's great for people with a full sandbox and no daily Change-Orders - but it doesn't work so well for the absolute minimum Config Sandbox where most of the world works and anyone in the company can request changes.

 

Here's the Solution

Instead of spending all that time running tests that don't relate to the single class being promoted, perhaps SFDC Development could focus on *identifying* the change sets, running tests only on the changed objects/classes/triggers, and then -> reclaiming the Salesforce server capacity that is no longer needed thanks to the 98% reduction in wasted processing time.  That extra storage could then be offered back to customers to grow their Config Sandboxes - because the current allocation isn't even close to reasonable.
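As a rough sketch of the request (this is not an existing Salesforce feature, and every name below is invented): given the classes in a change set and a map of which test classes reference them, only the matching tests would need to run.

```
public class SelectiveTestPicker {
    // changedClasses: the class names included in the deployment.
    // testsByReferencedClass: hypothetical map from a class name to the test
    // classes that reference it (assuming dependency data the platform could expose).
    public static Set<String> testsToRun(Set<String> changedClasses,
                                         Map<String, Set<String>> testsByReferencedClass) {
        Set<String> selected = new Set<String>();
        for (String cls : changedClasses) {
            if (testsByReferencedClass.containsKey(cls)) {
                selected.addAll(testsByReferencedClass.get(cls));
            }
        }
        return selected;
    }
}
```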

 

 

 

sfdcfox
As an ISV partner, I strictly work in a managed package in a Developer Organization. But I do agree there could be some streamlining in there. It takes up to an hour to install an update into our demo org each time we want to have updates to show potential clients. That is our biggest time waster. We make dozens if not hundreds of changes daily, and I personally am an incremental coder (write 5-10 lines of code, save, test, repeat). If I had to wait 15 minutes per 10 lines of code, I'd have gone crazy by now.

But, from my perspective, we consider the time well spent-- our product is more polished because of this feature, but development takes longer. The last eleven months here have been mostly shoot-first-ask-questions-later instead of formal project documentation, though.
sfdcfox
As an aside, a better place to post this type of request is the IdeaExchange. Here are a few ideas you might want to Promote:

http://success.salesforce.com/ideaView?id=08730000000ZhOFAA0
http://success.salesforce.com/ideaView?id=087300000007TLtAAM
http://success.salesforce.com/ideaView?id=08730000000KPkXAAW
http://success.salesforce.com/ideaView?id=08730000000hcgWAAQ
http://success.salesforce.com/ideaView?id=08730000000jt1mAAA
http://success.salesforce.com/ideaView?id=08730000000jBoXAAU

All of which basically state "make testing better" - whether by not running tests for non-code deployments, targeting specific failed tests, and so on.