Benchmarking Amazon EC2: The wacky world of cloud performance

Peter Wayner | May 2, 2013
Before turning to the world of cloud computing, let's pause to remember the crazy days of the 1970s when the science of the assembly line wasn't well-understood and consumers discovered that each purchase was something of a gamble. This was perhaps most true at the car dealer, where the quality of new cars was so random that buyers began to demand to know the day a car rolled off the assembly line.

In both cases, Micro and Medium alike, I started with the stock 64-bit Amazon Linux machine image that comes preloaded with a customized version of OpenJDK 1.6.0_24. After updating all of the packages with yum, I downloaded the DaCapo benchmarks and started them running immediately.
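Reproducing a run like that takes little more than a loop over the benchmark names. Here is a minimal sketch in Python, assuming java is on the path and the DaCapo jar has been downloaded to the working directory; the jar file name is an assumption, and the benchmark list is simply drawn from the tests the article mentions, not from the author's actual harness.

    # Minimal sketch: repeat a few DaCapo benchmarks and record wall-clock times.
    # Assumes java is on the PATH and the DaCapo jar is in the working directory;
    # the jar name and benchmark list are illustrative assumptions.
    import subprocess
    import time

    DACAPO_JAR = "dacapo-9.12-bach.jar"   # assumed jar file name
    BENCHMARKS = ["luindex", "eclipse", "tradebeans", "tradesoap"]
    RUNS = 3                              # three runs per benchmark

    for bench in BENCHMARKS:
        for run in range(1, RUNS + 1):
            start = time.time()
            # DaCapo prints its own timing; wall-clock time is kept as a cross-check.
            result = subprocess.run(["java", "-jar", DACAPO_JAR, bench],
                                    stdout=subprocess.DEVNULL,
                                    stderr=subprocess.DEVNULL)
            elapsed = time.time() - start
            status = "ok" if result.returncode == 0 else "failed"
            print("%s run %d: %.1f s (%s)" % (bench, run, elapsed, status))

Run repeatedly across instances and at different hours of the day, a log like this makes the variability described below easy to see.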

Micros, Mediums, lemons

In all, I ran the benchmarks 36 times on 10 different machines (five Micro and five Medium). In general, the Micro instances were much slower than the Mediums, but a Micro actually beat a Medium more than a few times. This generally happened soon after the machine started, probably because EC2 was allowing the machine to "burst" faster and run free. That generosity would usually fade, and the next run would often be dramatically slower, sometimes 10 times slower -- yes, 10 times slower.

The performance of the Micro machines varied dramatically. One instance indexed files with Lucene in three runs of 3.4, 4.0, and 4.1 seconds; it was as predictable as a watch. But another instance started at 3.4 seconds, then took 39 seconds for the second run and 34 seconds for the third. Yet another took 14, 47, and 18 seconds to build the same Lucene index with the same file. The Micros' results were all over the map.

In the DaCapo tests, the performance of Amazon's T1 Micro instances was anything but consistent.

To make matters worse, the Micro instances would sometimes fail to finish. Sometimes there was a 404 error caused by a broken request for a Web page in the Web services simulations (Tradesoap and Tradebeans). Sometimes the bigger jobs just died with a one-line error message: "killed." The deaths, though, weren't predictable. It's not like one benchmark would always crash the machine.

Well, there was one case where I could start to feel that death was imminent. One instance was a definite lemon, and its sluggishness was apparent from the beginning. The Eclipse test is one of the more demanding DaCapo benchmarks. The other Micro machines would usually finish it in 500 to 600 seconds, but the lemon started off at more than 900 seconds and got worse. By the third run, it took 2,476 seconds, almost five times slower than its brethren.

This wasn't necessarily surprising. The machine started up on a Thursday just after lunch, East Coast time, probably one of the moments when the largest share of America is wide awake and browsing the Web. Some of the faster machines started up at 6:30 in the morning, East Coast time.

While I normally shut down the machines after the tests were over, I kept the lemon around to play with it. It didn't get better. By late in the afternoon, it was crashing. I would come back to find messages that the machine had dropped communications, leaving my desktop complaining about a "broken pipe." Several times the lemon couldn't finish more than a few of the different tests.

 
