
Sandy wounded servers, some grievously, say services firms

Patrick Thibodeau | Nov. 8, 2012
Companies that specialize in data recovery are still getting many calls for help from businesses and institutions whose equipment was damaged by the effects of Hurricane Sandy.

"Using a baseline of 68 degrees Fahrenheit as the benchmark for failures, a temperature of 104 degrees represents a 66% increase in failures. This seems like a big increase. However, if the average failure rate is 4%, then operating at 104 degrees would result in the failure rate rising from 4% to about 7%," said Beaty.

However, he noted that the failure rate is also based on duration.

There are 8,760 hours in a year, he said. "If 10%, or 876 hours, were at 104 degrees and the remainder at 68 degrees, the total failure rate for the year would be a ratio or weighted average. This means the 66% rise would be a 6.6% rise in failures. At a 4% failure base, this means 4.26% failures rather than 4%," said Beaty.
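Beaty's weighted-average arithmetic can be sketched in a few lines of Python. This is an illustration of his stated numbers (4% baseline rate, 66% increase at 104 degrees, 8,760 hours per year); the function name is ours, not his.

```python
# Weighted-average annual failure rate, per Beaty's example figures.
BASELINE_RATE = 0.04       # assumed 4% annual failure rate at 68 degrees F
RELATIVE_INCREASE = 0.66   # 66% more failures at 104 degrees F than at 68
HOURS_PER_YEAR = 8760

def weighted_failure_rate(hot_hours):
    """Blend baseline and elevated failure rates by time spent at each temperature."""
    hot_fraction = hot_hours / HOURS_PER_YEAR
    hot_rate = BASELINE_RATE * (1 + RELATIVE_INCREASE)  # ~6.64% at 104 degrees
    return (1 - hot_fraction) * BASELINE_RATE + hot_fraction * hot_rate

# 10% of the year (876 hours) at 104 degrees:
print(f"{weighted_failure_rate(876):.2%}")  # 4.26%
```

Running it reproduces the roughly 4.26% annual figure: a short excursion to high temperature raises the yearly failure rate only modestly, because the rate is weighted by duration.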

Vendors can also build equipment to withstand much higher temperatures. All equipment is manufactured to at least Class A1 standards, which have an upper limit of 89.6 degrees, and increasingly equipment is being made to meet Class A2 standards, which allow up to 95 degrees.

There has been a trend to increase data center temperatures as part of a push to use less energy, and it is becoming more common for data centers to operate at 72 to 75 degrees, said Beaty.

Some IT managers have experimented with putting servers in sheds and tents, exposed to temperature and humidity extremes. More often than not, these limited efforts have surprised people with the equipment's durability.

Nonetheless, equipment operating at higher than recommended temperatures, such as in a hot spot in a data center, could see failures, said Scott Kinka, the CTO of cloud services company Evolve IP.

"Heat equals age in the computer component world," said Kinka. Equipment that has been exposed to high temperatures, such as in a data center hot spot, may be at a higher risk of component failure down the road, and a manager may see an uptick in component problems.

But it may be hard to trace the root cause of such a failure exactly, because it could happen months later, said Kinka. "The hard part about this one is you are just not going to know," he said.
