Cloud applications, on the other hand, impose a cost for all resource consumption. Resources that aren't performing useful work still cost money, so waste is penalized directly. Your infrastructure won't be static.
I've heard some people pooh-pooh the need for highly variable public cloud environments, based on the fact that most IT applications run with predictable loads and therefore can leverage static infrastructure environments. Don't use this breezy assumption as a crutch for avoiding the hard work of architecting applications for cloud computing.
The fact is, traditional infrastructure is inflexible, extremely difficult to modify, and impossible to modify quickly. Traditional IT environments therefore perform as Procrustean beds: fixed environments in which applications are "right-sized" through stretching or lopping, without ever adjusting the size of the bed to fit the need.
That approach won't be acceptable for next-generation applications. Once it's obvious that these artificial limitations are no longer necessary, developers will insist that whatever infrastructure is used must support flexibility and elasticity. Critically, once developers internalize the assumption that infrastructure is easily available and malleable, they'll discover new application needs that require cloud infrastructure environments. The once-tenable assumption that application infrastructure requirements are highly stable will then be outmoded.
As the saying goes, past experience is no guarantee of future performance. Simply put: Future applications are all cloud applications and need to be designed and operated as such.
The Need for Better Application Management
With this in mind, these four assumptions and practices should guide you as you design and implement future applications:
Assume a dynamic application topology. You'll have virtual machines joining and leaving the application pool frequently, so be sure your application can gracefully accept and release resources. One way to enable dynamic application topology is to ...
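As a minimal sketch of accepting and releasing resources gracefully, consider a worker that stops taking new jobs on a shutdown signal and drains in-flight work before exiting. The `Worker` class and its methods are hypothetical names for illustration; a real deployment would also deregister the node from its load balancer or service registry at the same point.

```python
import signal
import threading

class Worker:
    """Toy worker that drains in-flight jobs before leaving the pool.

    Hypothetical sketch: a production node would also deregister itself
    from the load balancer so no new sessions are routed to it.
    """
    def __init__(self):
        self.accepting = True
        self.in_flight = 0
        self.lock = threading.Lock()
        self.drained = threading.Event()

    def start_job(self):
        with self.lock:
            if not self.accepting:
                return False           # refused: another node should take it
            self.in_flight += 1
            return True

    def finish_job(self):
        with self.lock:
            self.in_flight -= 1
            if not self.accepting and self.in_flight == 0:
                self.drained.set()     # safe to terminate the VM now

    def shutdown(self, *_):
        with self.lock:
            self.accepting = False     # stop taking new work immediately
            if self.in_flight == 0:
                self.drained.set()

worker = Worker()
# Treat SIGTERM (e.g. from the orchestrator reclaiming the VM) as "drain and leave".
signal.signal(signal.SIGTERM, worker.shutdown)
```

The key design point is the separation of "stop accepting" from "terminate": the pool can shrink without dropping work that is already in flight.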
Separate code and state. It's tempting to use sticky-session settings in the load balancer to direct all of a session's interactions to a single server. However, that can cause unbalanced server loads. Worse, if a server crashes, user state can be lost; that can be disastrous.
The right approach is to move state into a separate storage location, such as some kind of database, which has built-in redundancy and allows any server to pick up the state and continue the session interaction. Of course, this can make the database a bottleneck, so prepare for the next step …
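A minimal sketch of externalized session state, using an in-memory SQLite table as a stand-in for a shared, redundant database (the table name, session ID, and helper functions are all hypothetical). Because the state lives outside any one app server, whichever server handles the next request can load it and continue.

```python
import json
import sqlite3

# Stand-in for a shared, redundant database; in a real deployment this
# would be a networked store reachable from every app server.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE sessions (id TEXT PRIMARY KEY, state TEXT)")

def save_state(session_id, state):
    db.execute(
        "INSERT INTO sessions (id, state) VALUES (?, ?) "
        "ON CONFLICT(id) DO UPDATE SET state = excluded.state",
        (session_id, json.dumps(state)),
    )

def load_state(session_id):
    row = db.execute(
        "SELECT state FROM sessions WHERE id = ?", (session_id,)
    ).fetchone()
    return json.loads(row[0]) if row else {}

# "Server A" writes the session; "server B" (any other server, in a real
# deployment) reads it back and continues the interaction.
save_state("sess-42", {"cart": ["book"], "step": 2})
state = load_state("sess-42")
state["step"] += 1
save_state("sess-42", state)
```

With state serialized to a shared store like this, the load balancer is free to route each request to the least-loaded server, and a crashed server loses nothing the session needs.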
Move state into cache. Cache tiers keep session data in fast RAM, obviating the need for time-consuming disk access and speeding session data retrieval, thereby improving overall application performance. Cache solutions typically incorporate redundant infrastructure, protecting against data loss from resource failure. It's not uncommon to have two or more caching tiers in a highly dynamic app.