Each storage node has just four fans, down from the usual six. Power supplies, which are concentrated on their own shelves in the rack, are also cut down: There's just one power shelf in a rack instead of three, and that rack has just five power supplies instead of seven.
Facebook also found a way to protect against data loss without having to keep multiple extra copies of each file. Instead of storing full duplicates of all the data, Facebook created the redundancy mathematically. It used Reed-Solomon coding, a decades-old error-correction technique also found in RAID systems, which breaks data into pieces and can reconstruct the whole from just a subset of those pieces.
Facebook implemented this across multiple systems, so a failure that takes a drive offline in one part of the facility can be corrected with data from another area. Reconstructing the data takes compute cycles, and doing it across many systems takes network capacity, but Facebook wanted those options in its toolkit in addition to just adding more storage, Patiejunas said.
With this so-called erasure coding, the company gets durability equivalent to keeping seven or eight extra copies of each bit of data while using just 1.4 times the capacity a single copy would occupy. In other words, without storing even one full extra copy to fall back on in case of failure, Facebook calculates it can protect the content as well as it could with multiple backups of backups.
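The idea behind that math can be sketched with a toy Reed-Solomon-style code over a prime field: treat k data bytes as the coefficients of a polynomial, evaluate it at n > k points, and any k surviving evaluations recover the original via Lagrange interpolation. This is a simplified illustration, not Facebook's production code, and the parameters (4 data pieces, 6 shares) are invented for the example; a real 1.4x overhead would come from something like 10 data pieces and 4 parity pieces.

```python
# Toy Reed-Solomon-style erasure code over GF(257), a prime field
# large enough to hold any byte value. Illustrative only; real
# systems use optimized GF(2^8) arithmetic.
P = 257

def encode(data, n):
    """Treat the k data bytes as polynomial coefficients and
    evaluate at n distinct points; any k shares reconstruct them."""
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(data)) % P)
            for x in range(1, n + 1)]

def decode(shares, k):
    """Lagrange-interpolate the degree-(k-1) polynomial from any k
    shares and read back its coefficients (the original bytes)."""
    shares = shares[:k]
    coeffs = [0] * k
    for j, (xj, yj) in enumerate(shares):
        basis = [1]   # coefficients of prod_{m != j} (x - xm)
        denom = 1
        for m, (xm, _) in enumerate(shares):
            if m == j:
                continue
            denom = denom * (xj - xm) % P
            new = [0] * (len(basis) + 1)   # multiply basis by (x - xm)
            for d, b in enumerate(basis):
                new[d + 1] = (new[d + 1] + b) % P
                new[d] = (new[d] - xm * b) % P
            basis = new
        scale = yj * pow(denom, P - 2, P) % P   # divide by denom mod P
        for d in range(k):
            coeffs[d] = (coeffs[d] + scale * basis[d]) % P
    return coeffs

data = [70, 97, 99, 101]                 # 4 data bytes
shares = encode(data, 6)                 # 6 shares: 1.5x raw size here
surviving = [shares[0], shares[2], shares[4], shares[5]]   # 2 lost
assert decode(surviving, len(data)) == data
```

The key property is visible in the last lines: two of the six shares are discarded, yet the original bytes come back intact, which is how a failed drive in one part of a facility can be rebuilt from pieces stored elsewhere.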
Meanwhile, because cold storage holds older content that users may not be looking at much anymore, Facebook runs software in the background to scan all data for "bit rot," a kind of corruption that can happen while bits sit unused.
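A background scan like that boils down to recording a checksum when each object is written and periodically re-reading and re-hashing it. The sketch below shows the pattern under that assumption; the object names and the scrub function are hypothetical, not Facebook's actual anti-corruption service.

```python
# Minimal sketch of a bit-rot "scrubber": keep a checksum per object,
# then re-hash stored bytes in the background and flag mismatches.
import hashlib

def checksum(blob: bytes) -> str:
    return hashlib.sha256(blob).hexdigest()

# Hypothetical object store and its write-time checksum manifest.
store = {"photo_123": b"\x89PNG...old vacation photo..."}
manifest = {name: checksum(blob) for name, blob in store.items()}

def scrub(store, manifest):
    """Re-hash every stored object; return the names whose bytes no
    longer match the checksum recorded when they were written."""
    return [name for name, blob in store.items()
            if checksum(blob) != manifest[name]]

assert scrub(store, manifest) == []                       # healthy
store["photo_123"] = b"\x88PNG...old vacation photo..."   # a bit flips
assert scrub(store, manifest) == ["photo_123"]            # rot detected
```

Once a corrupted object is flagged, the erasure-coded pieces described above provide the material to rebuild it.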
The scale of all this is big, and getting bigger. The two cold storage centers already hold hundreds of petabytes of data, and just one "data hall" -- one of the big rooms within each center -- ultimately can hold as much as one exabyte. The system is designed to stay just as efficient as it grows to that scale, Facebook says.