Maximising the performance of systems through parallel processing is another way to drive efficiencies, Goerke says. SAP is using Intel's latest-generation Ivy Bridge processors to distribute more data onto cores that process it simultaneously, making it "eight times faster" than with older central processing units (CPUs).
"From a performance perspective, the new CPU is much more energy efficient and more powerful than the previous generation," he says.
"But there are certain workloads that cannot be done in parallel because they depend on something else. So the problem is that some tasks have to be done serially if you've got an algorithm or a calculation like that.
"The good thing with the in-memory platform is the way we have stored the data in there and the way we have structured it, so you can really divide the workload across many CPUs in parallel.
"For example, I could have 200 billion data entries and scan them in a single second. So there's enormous computing power, because you can split up the set of data you look at, distribute it across the cores, and get the result back together again."
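The split/distribute/merge pattern Goerke describes can be sketched in a few lines. This is a minimal illustration, not SAP's implementation: the data, partition count, and match predicate are all assumptions made for the example.

```python
# Hedged sketch: scan a large in-memory data set by splitting it into
# partitions, scanning them on separate cores, and merging the partial
# results back together.
from multiprocessing import Pool

def count_matches(chunk):
    """Scan one partition and count entries above an illustrative threshold."""
    return sum(1 for value in chunk if value > 50)

def parallel_scan(entries, workers=4):
    # Split the data set into one partition per worker.
    size = max(1, len(entries) // workers)
    chunks = [entries[i:i + size] for i in range(0, len(entries), size)]
    # Scan the partitions simultaneously on separate cores...
    with Pool(workers) as pool:
        partials = pool.map(count_matches, chunks)
    # ...and merge the partial results back together.
    return sum(partials)

if __name__ == "__main__":
    data = list(range(100))
    print(parallel_scan(data))  # same answer as a serial scan
```

Because each partition is scanned independently, the result is identical to a serial scan; the serial-dependency caveat Goerke raises applies only to workloads where one step needs the output of another.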
One of the most important ways to reduce energy consumption, increase efficiencies and drive down costs is to monitor the energy usage of your infrastructure, Goerke says.
"We have only a single data centre we've built where only the concrete, the bunker is owned by us and that's the one we have at our headquarters in Germany. What we put in is our infrastructure, and... we connect that infrastructure to our central management hub where we monitor server availability, heat maps to give a sense of how the servers are utilised and the utilisation level, etc."
SAP uses a range of analytical capabilities that collect the CPU and memory utilisation of the servers so it can save on energy during idle time and power up when there's a sudden demand.
"Take time recording, for example. People do time recording at companies usually on Friday afternoon, before they leave work for the weekend. So everybody logs on to the same server within a very short window of time.
"So what we do is measure in the system how much load is on it, and we have software that then allows you to automatically bring new computing power in, spinning up new servers, where we can then start additional instances of our software and distribute the load to those new instances.
"We have some smart algorithms that tell us 'now is the time to spin up a new system' so we can deal with the load. And if the load goes down after a while, we can actually stop that instance and go back to a lower workload as the system idles on Saturday and Sunday, because nobody is logging on."
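The load-based scaling Goerke outlines reduces to a simple feedback loop: measure load per instance, add an instance above a high-water mark, remove one below a low-water mark. The sketch below is a toy version under assumed thresholds; the class name, threshold values, and instance counting are all illustrative, not SAP's actual algorithms.

```python
# Hedged sketch of threshold-based autoscaling: spin instances up under
# load spikes (Friday-afternoon time recording) and back down when idle.
class Autoscaler:
    def __init__(self, scale_up_at=0.8, scale_down_at=0.3, min_instances=1):
        self.scale_up_at = scale_up_at      # illustrative high-water mark
        self.scale_down_at = scale_down_at  # illustrative low-water mark
        self.min_instances = min_instances
        self.instances = min_instances

    def observe(self, load_per_instance):
        """Measure the load, then spin instances up or down accordingly."""
        if load_per_instance > self.scale_up_at:
            self.instances += 1             # start an additional instance
        elif (load_per_instance < self.scale_down_at
              and self.instances > self.min_instances):
            self.instances -= 1             # stop an idle instance
        return self.instances

# Friday-afternoon spike, then the idle weekend:
scaler = Autoscaler()
for load in [0.9, 0.95, 0.5, 0.2, 0.1]:
    print(scaler.observe(load))  # grows to 3, then shrinks back to 1
```

A real controller would also smooth the measurements and wait out short spikes before acting, but the measure-then-react structure is the same.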