And there are probably seven other things I forgot to mention.
How do you know when one of these changes has occurred? Salesforce has a Setup Audit Trail that provides a lot of clues. Or, if you use some sort of configuration management tool on your systems, you can do a diff every week to see what's changed. Great.
Here's my formula for the probability that a bug or anomalous behavior has crept into your cloud system somewhere, even if it hasn't been noticed yet:
Probability of bug = NC * # custom objects * # SysAdmins / 100
"NC" is the number of individual changes made from the above list since the last time somebody ran all the test code. Of course, the probability is capped at 1, and you reach that ceiling pretty quickly.
The above formula is just a wild approximation, but the point is that it won't take many weeks at all before some serious bugs have invisibly evolved.
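To make the wild approximation concrete, here's a minimal sketch of that formula in Python. The function and parameter names are mine, not part of any Salesforce API; the divisor of 100 and the cap at 1 come straight from the formula above.

```python
def bug_probability(num_changes, num_custom_objects, num_sysadmins):
    """Rough odds that an unnoticed bug has crept into the system.

    num_changes: "NC" -- individual config changes since the test
    suite last ran. The other two arguments scale the blast radius.
    """
    raw = num_changes * num_custom_objects * num_sysadmins / 100
    return min(raw, 1.0)  # probability is capped at 1

# Two changes, three custom objects, one admin: still fairly safe.
print(bug_probability(2, 3, 1))   # -> 0.06
# Five changes, ten custom objects, two admins: ceiling reached.
print(bug_probability(5, 10, 2))  # -> 1.0
```

Note how quickly a realistic org hits the ceiling: the product grows multiplicatively, so a few unreviewed changes in a large org are effectively a guarantee of a latent bug.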
The first line of defense, of course, is some sort of configuration control board that actually thinks through the consequences of any changes to the system metadata before they are made, then applies other changes to accommodate them. Fat chance, I know.
The next line of defense is to run all your unit tests every week and record the results. Salesforce.com can actually do this at the touch of a button. It's not painful at all. What will be painful, though, is the realization that someone, somewhere, has set up a bunch of sand-traps for your developers. Fix those errors as early as you can, so you can fix them in a relaxed and productive way. Stressed-out developers make more mistakes, driving up costs.
What's That 'Nothing' Code Change Really Going to Cost?
Salesforce requires your internal developers to pass all unit tests and cover at least 75 percent of the code as a prerequisite for deploying anything. While it's easy to scam the system and do only the most basic of unit tests, that turns out to be a false economy: You want to do positive and negative result testing, not just blind exercising of all the code paths.
Of course, the more thorough the test, the more likely that you'll find a bug introduced by the evolving system configuration. This is a good thing, because it traps the errors before the user finds them. Paying this tax early helps you avoid penalties later.
Here's a simple (and therefore inaccurate) model for the cost of that "nothing" code change:
(# outstanding execution error bugs * 200 + # outstanding unexpected results bugs * 400) * average age of bugs (in months) * # of development teams
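The cost model can be sketched the same way. This is just the formula above transcribed into Python; the function name and the interpretation of the constants 200 and 400 as per-bug cost weights are my assumptions, not something the model specifies beyond the arithmetic.

```python
def nothing_change_cost(exec_error_bugs, unexpected_result_bugs,
                        avg_bug_age_months, num_dev_teams):
    """Simple (and therefore inaccurate) cost of a "nothing" change.

    Unexpected-results bugs are weighted at twice the cost of
    execution-error bugs, per the formula's 200 vs. 400 weights.
    """
    per_bug_cost = (exec_error_bugs * 200
                    + unexpected_result_bugs * 400)
    return per_bug_cost * avg_bug_age_months * num_dev_teams

# Three crash bugs and two wrong-answer bugs, four months old,
# across two dev teams: (600 + 800) * 4 * 2
print(nothing_change_cost(3, 2, 4, 2))  # -> 11200
```

The multiplicative age term is the model's real lesson: the same bug count costs twice as much if it sits unfixed for twice as long, which is the argument for running the full test suite weekly rather than quarterly.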