
In the Software Defined Data Center, application response time trumps infrastructure capacity management

Gary Kaiser, Dynatrace | April 25, 2016
With applications consisting of a plethora of services delivered from a range of resources, End User Experience (EUE) is key

This vendor-written tech primer has been edited by Executive Networks Media to eliminate product promotion, but readers should note it will likely favor the submitter’s approach.

The adoption of software-defined data center (SDDC) technologies is driven by their tremendous potential for dynamic scalability and business agility, but the transition brings complexities that need to be carefully considered.

This ecosystem relies on the abstraction or pooling of physical resources (primarily compute, network and storage) by means of virtualization. With software orchestrating new or updated services, the promise is that these resources can be provisioned in real time, without human intervention. In essence, this is the technology response to the agility demands of the modern digital business.

The term SDDC can be applied to today’s public clouds (the Amazon, Google and Microsoft clouds certainly qualify) and to tomorrow’s private and hybrid clouds as organizations accelerate their transition toward providing data center infrastructure as a service (IaaS). And as in today’s enterprise data centers, tomorrow’s SDDCs will likely support a mix of packaged applications and applications you develop and maintain.

One of the tenets of the SDDC is that capacity is dynamically scalable; not unlimited, of course, but that’s not necessarily a bad way to think of it. This means capacity is treated differently than in the past. The ability to spin up new servers to meet spikes in demand, automatically connect these based on pre-defined policies, and then destroy them as demand wanes, will become the new SDDC paradigm. Instead of being used for alert-generating thresholds, resource utilization becomes an input to the scale-out algorithm.
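
To make that concrete, here is a minimal Python sketch of such a scale-out loop, offered as an illustration under stated assumptions rather than any particular orchestrator’s API: the Pool class, the simulated utilization telemetry and the 75%/30% policy bounds are all hypothetical.

    from dataclasses import dataclass
    from statistics import mean
    import random

    @dataclass
    class Pool:
        """A toy pool of identical service instances."""
        instances: int = 2
        min_instances: int = 2
        max_instances: int = 20

        def utilization_samples(self) -> list[float]:
            # Stand-in for real telemetry: one CPU% reading per instance.
            return [random.uniform(20, 95) for _ in range(self.instances)]

    def reconcile(pool: Pool, scale_out_at: float = 75.0, scale_in_at: float = 30.0) -> int:
        """Return the instance delta implied by current average utilization."""
        avg = mean(pool.utilization_samples())
        if avg > scale_out_at and pool.instances < pool.max_instances:
            return +1   # demand spike: provision another instance
        if avg < scale_in_at and pool.instances > pool.min_instances:
            return -1   # demand has waned: destroy an instance
        return 0        # within policy bounds: leave the pool alone

    if __name__ == "__main__":
        pool = Pool()
        for tick in range(5):
            pool.instances += reconcile(pool)
            print(f"tick {tick}: instances={pool.instances}")

The point is the inversion: utilization no longer trips an alert for a human to act on; it feeds the decision that adds or removes capacity automatically.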

The infrastructure may be self-regulating, elastic and automated, but that doesn’t absolve us of the requirement for performance monitoring. Instead, it shifts the emphasis from infrastructure capacity management to application (or service) response time. The applications served can be made up of a medley of components and services relying on different stacks and platforms, requiring at least a few different approaches to performance monitoring. In fact, the adoption of multiple monitoring solutions – while a practical necessity – can lead to some operational challenges, including:

  • Inconsistent depth of monitoring insight
  • Limited end-to-end performance visibility
  • Service-focus instead of user-focus

And with dozens, hundreds or even thousands of services required to deliver an application, service quality can no longer be defined by the performance or health of individual components. Virtualization can obscure critical performance visibility at the same time that complex service dependencies challenge even the best performance analysts and the most effective war rooms. Attempting to define quality component by component invites avalanches of false positives: “internal” alerts that are more informational than actionable.
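
To illustrate the alternative – judging service quality by application response time rather than by component health – here is a small Python sketch, an assumption-laden example rather than a prescribed method: the checkout() transaction and the 500 ms objective are hypothetical stand-ins.

    import time
    from functools import wraps
    from statistics import quantiles

    RESPONSE_TIMES_MS: list[float] = []

    def measured(fn):
        """Record the wall-clock response time of every call to fn."""
        @wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                RESPONSE_TIMES_MS.append((time.perf_counter() - start) * 1000)
        return wrapper

    @measured
    def checkout(order_id: int) -> str:
        # Stand-in for a real transaction that fans out across many services.
        time.sleep(0.05)
        return f"order {order_id} confirmed"

    if __name__ == "__main__":
        for i in range(50):
            checkout(i)
        p95 = quantiles(RESPONSE_TIMES_MS, n=20)[18]   # 95th percentile
        slo_ms = 500.0
        verdict = "within" if p95 <= slo_ms else "breaching"
        print(f"p95 response time: {p95:.1f} ms ({verdict} the {slo_ms:.0f} ms objective)")

A percentile of end-to-end response time says something a per-component CPU alert cannot: whether users are actually being served within the agreed objective.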

Consider for a minute a popular SaaS application – Salesforce. It’s delivered as a service from a cloud, just as your own internally-built applications might someday be delivered as services from your private SDDC cloud. How do you, as a member of an IT team responsible for your organization’s application services, evaluate Salesforce service quality?
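
One plausible starting point, sketched below with Python’s standard library purely as an illustration (the URL and the two-second threshold are placeholder assumptions, not a recommended way to monitor Salesforce): measure the service from the outside, the way your users experience it.

    import time
    import urllib.request

    def synthetic_check(url: str, timeout_s: float = 10.0) -> tuple[int, float]:
        """Return (HTTP status, elapsed milliseconds) for one synthetic request."""
        start = time.perf_counter()
        with urllib.request.urlopen(url, timeout=timeout_s) as resp:
            resp.read()               # include payload transfer in the timing
            status = resp.status
        return status, (time.perf_counter() - start) * 1000

    if __name__ == "__main__":
        status, elapsed_ms = synthetic_check("https://login.salesforce.com/")
        verdict = "acceptable" if status == 200 and elapsed_ms < 2000 else "degraded"
        print(f"status={status} response={elapsed_ms:.0f} ms -> {verdict}")

A real end-user experience approach would of course sample continuously, from the locations your users actually occupy, and tie those measurements back to business transactions – but the vantage point is the same.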

 

