
Benchmarking Amazon EC2: The wacky world of cloud performance

Peter Wayner | May 2, 2013
Before turning to the world of cloud computing, let's pause to remember the crazy days of the 1970s, when the science of the assembly line wasn't well understood and consumers discovered that each purchase was something of a gamble. This was perhaps most true at the car dealer, where the quality of new cars was so unpredictable that buyers began demanding to know what day a car had rolled off the assembly line.

For grins, I fired up the same tests on the same machine a few days later on Sunday morning on the East Coast. The first test went well. The lemon powered through Avrora in 18 seconds, a time comparable to the results often reported by the Medium machines. But after that the lemon slowed down dramatically, taking 3,120 seconds to finish the Eclipse test.
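The tests here are workloads from the DaCapo Java benchmark suite, launched over and over and timed from the outside. A minimal sketch of that kind of loop, in Python, might look like the following; the jar name, path, and run count are placeholders, not the exact harness used for the article.

    import subprocess
    import time

    # DaCapo workloads mentioned in the article; the jar name is an assumption
    BENCHMARKS = ["avrora", "eclipse", "tomcat"]
    RUNS = 3

    for bench in BENCHMARKS:
        for run in range(1, RUNS + 1):
            start = time.time()
            # DaCapo runs as an ordinary Java jar, with the workload name as the argument
            result = subprocess.run(["java", "-jar", "dacapo.jar", bench],
                                    capture_output=True, text=True)
            elapsed = time.time() - start
            status = "ok" if result.returncode == 0 else "FAILED"
            print(f"{bench} run {run}: {elapsed:.1f} s ({status})")

Timing from outside the JVM this way captures what a customer actually experiences on the instance, including any slowdown caused by noisy neighbors.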

Up from Micro

The Medium machines were much more consistent. They never failed to run a benchmark, and the times they reported didn't vary as much. But even these numbers were far from Swiss-watch precision. One Medium machine reported times of 16.7, 16.3, and 17.5 seconds for the Avrora test, while another reported 14.9, 14.8, and 14.8. Yet another machine ran it in 13.3 seconds.

Some Medium machines were more than 10 percent faster than others, and which one you got seemed to be the luck of the draw. The speed of the Medium machines was consistent across most of the benchmarks, though, and didn't seem as tied to the time of day.
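Using the Avrora numbers above, the machine-to-machine spread is easy to quantify. Here's a quick sketch that computes each machine's average and the gap between the fastest and slowest of the three; the machine labels are made up, and the times are simply the figures reported above.

    from statistics import mean, stdev

    # Avrora times (seconds) reported above for three different Medium machines
    machines = {
        "medium-1": [16.7, 16.3, 17.5],
        "medium-2": [14.9, 14.8, 14.8],
        "medium-3": [13.3],
    }

    for name, times in machines.items():
        spread = stdev(times) if len(times) > 1 else 0.0
        print(f"{name}: mean {mean(times):.1f} s, stdev {spread:.2f} s")

    # Gap between the slowest and fastest machine, comparing their averages
    averages = [mean(t) for t in machines.values()]
    gap = max(averages) / min(averages) - 1
    print(f"slowest machine's average is {gap:.0%} higher than the fastest's")

Run on those figures, the script shows the slowest of the three averaging roughly a quarter longer than the fastest, even though each machine's own runs stay within a second or so of each other.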

The performance of the Medium machines also suggests that RAM may be just as important as the CPU cores, but only if the workload needs it. The Medium has roughly six times more RAM than the Micro and, not surprisingly, costs six times as much. But on the Avrora benchmark, the Micro machine often ran faster than the Medium or only a few times slower. On the Tomcat benchmark, it never ran faster and routinely finished between four and six times slower.
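Those ratios make for simple price-performance arithmetic: if the Micro costs one-sixth as much per hour but takes four to six times as long on Tomcat, the cost of a single run is roughly a wash. A back-of-the-envelope sketch, using normalized units rather than actual AWS prices, bears that out.

    # Cost per run using only the ratios given above: the Medium costs six
    # times as much per hour, and the Micro finishes Tomcat four to six
    # times slower. Units are normalized, not real hourly rates.
    micro_price = 1.0      # Micro cost per hour, normalized
    medium_price = 6.0     # six times the Micro's hourly cost
    medium_time = 1.0      # one Medium run = one time unit

    for slowdown in (4, 5, 6):
        micro_cost = medium_time * slowdown * micro_price
        medium_cost = medium_time * medium_price
        if micro_cost < medium_cost:
            verdict = "Micro is cheaper per run"
        elif micro_cost > medium_cost:
            verdict = "Medium is cheaper per run"
        else:
            verdict = "dead heat"
        print(f"Micro {slowdown}x slower: {micro_cost:.0f} vs {medium_cost:.0f} -> {verdict}")

At a four- or five-fold slowdown the Micro still wins on cost per run; at six-fold the two come out even, and the only thing the Medium buys you is time.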

Performance of Amazon's M1 Medium instances was much more consistent. Unlike the Micros, the Mediums never failed to complete a test run.

In other cases, the Micro just melted down. On the Eclipse test, the Micro was sometimes only about five times slower than the Medium, but it was often eight to ten times slower. Once on the Eclipse test and several times on other tests, the Micro failed completely; the lack of RAM left the machine unstable. (Note that these failures don't include the dozens of failures of the lemon, which crashed far more consistently than the other Micro instances I tested.)

These experiments, while still basic, show that packaging performance in the cloud is much trickier than with stand-alone machines. Not every machine will behave the same way. The effect of other users on the hardware and the network can and will distort any measurement.

The M1 Medium turned in consistent numbers on some DaCapo tests (such as avrora, above), but not-so-consistent on others (such as the eclipse test, below).

Between a rock and a slow place

To make matters worse, cloud companies are in a strange predicament. If they have spare cycles sitting around, it would be a shame to waste them. Yet if they give them out, the customers might get accustomed to better performance. No one notices when the machines run faster, but everyone starts getting angry if the machines slow down.

 
