Understanding the variation of a process over time provides a bigger pay-off than taking small snapshots, according to Enlogic co-founder Mike Jansma.
Jansma made the observation at the Datacentre Dynamics conference in Sydney, where he said that a datacentre operator cannot hope to control something unless it understands the variation over time.
To illustrate his point, Jansma spoke of a problem encountered by a customer of the power distribution unit (PDU) vendor.
"Every two weeks, a guy went out and measured the temperature of every rack because they wanted to optimise the airflow," he said.
"After each inspection, he writes the result down into an Excel spreadsheet to find the hot and cool spaces in the datacentre."
Following the inspection, Jansma said, the person went out and adjusted the airflow to reduce the number of hot and cold spots in the facility.
"A fortnight later, he does the same thing, making approximately the same amount of changes, but there is not much effect," he said.
A bigger picture
Jansma said the issue with this approach is that the person does not know what is going on.
"They're trying to do the right thing by measuring and analysing the data, and trying to implement improvements," he said.
"The problem is that they are taking a snapshot in time."
So even though the person may have always measured the temperature at the same time every two weeks, Jansma said each reading captured only one specific condition at that time of day.
"He did not look at the average and variation, so he kept making changes that did not matter," he said.
Jansma said a single snapshot also failed to represent the average behaviour over time, and as a result the person was unable to make any real optimisations.
"By understanding variation over time and monitoring the data over that period, he would be able to reduce that variability and improve the cooling at the datacentre," he said.
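The difference Jansma describes can be sketched in a few lines of Python. The readings below are made up for illustration (the article cites no figures); the point is that a one-off spot check sees a single value, while logging over time exposes the mean and the spread that actually drive cooling decisions.

```python
import statistics

# Hypothetical rack temperatures (degrees C), logged hourly over a day,
# rather than captured in a single fortnightly spot check.
readings = [22.1, 22.4, 23.0, 24.8, 26.5, 27.2, 26.9, 25.1,
            23.6, 22.9, 22.5, 22.2]

snapshot = readings[0]                # what a one-off inspection would see
mean = statistics.mean(readings)      # average over the whole period
spread = statistics.pstdev(readings)  # variation over the whole period

print(f"snapshot: {snapshot:.1f} C")
print(f"mean:     {mean:.1f} C")
print(f"stdev:    {spread:.1f} C")
```

A snapshot taken in the morning reports 22.1 degrees and suggests no problem, while the full series shows an average of 24.1 degrees with swings of almost two degrees either way, which is the variability Jansma argues the operator should be working to reduce.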