The top machine in the latest listing (June 2014) was the Tianhe-2 (MilkyWay-2) at the National Super Computer Center in Guangzhou, China. A Linux machine based on Intel Xeon clusters, it used 3,120,000 cores to achieve 33,862,700 gigaFLOPS (33,862.7 teraFLOPS, or almost 34 petaFLOPS).
Number one on the first list, in June 1993, was a 1,024-core machine at the Los Alamos National Laboratory that achieved 59.7 gigaFLOPS, so the list reflects improvements approaching six orders of magnitude in 21 years.
Linpack was originally a library of Fortran subroutines for solving various systems of linear equations. The benchmark originated in the appendix of the Linpack Users' Guide in 1979 as a way to estimate execution times. Now downloadable in Fortran, C and Java, it times the solution of dense systems of linear equations. The operation count is fixed by the rules, so implementations cannot shrink the workload with faster algorithms, and the run is dominated by matrix-matrix multiplication.
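The idea behind the benchmark can be illustrated in miniature: time a dense solve and convert the fixed operation count for an LU-based solve (2/3·n³ + 2·n² floating-point operations) into a FLOPS rate. This is a hedged sketch, not the official HPL code; the matrix size and the use of NumPy's `linalg.solve` are this example's own choices.

```python
import time
import numpy as np

# Miniature of what a Linpack-style benchmark measures: time the
# solution of a dense n x n linear system, then divide the fixed
# flop count for an LU-based solve by the elapsed time.
n = 500
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)

start = time.perf_counter()
x = np.linalg.solve(A, b)   # LU factorization with partial pivoting
elapsed = time.perf_counter() - start

# Standard operation count for solving via LU: 2/3 n^3 + 2 n^2
flops = (2.0 / 3.0) * n**3 + 2.0 * n**2
print(f"{flops / elapsed / 1e9:.2f} GFLOPS")
```

Because the flop count is fixed regardless of implementation, two machines running this workload can be compared directly on the reported rate.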
Results are submitted to Dongarra, who reviews the claims before posting them. He explains that the Linpack benchmark has evolved over time; the list now relies on a high-performance version aimed at parallel processors, called the High-Performance Linpack (HPL) benchmark.
But Dongarra also notes that the Top 500 list is planning to move beyond HPL to a new benchmark based on the conjugate gradient method, an iterative technique for solving certain systems of linear equations. To explain further, he cites a Sandia report that discusses how today's high-performance computers emphasize data access instead of calculation.
Thus, reliance on the old benchmarks "can actually lead to design changes that are wrong for the real application mix or add unnecessary components or complexity to the system," Dongarra says. The new benchmark will be called HPCG, for High Performance Conjugate Gradients.
"This will augment the Top500 list by having an alternate benchmark to compare," he says. "We do not intend to eliminate HPL. We expect that HPCG will take several years to both mature and emerge as a widely visible metric."
The plea from IBM
Meanwhile, at IBM, researchers are proposing a new approach to computer architecture as a whole.
Costas Bekas, head of IBM Research's Foundations of Cognitive Computing Group in Zurich and winner of the ACM's Gordon Bell Prize in 2013, agrees with Dongarra that today's high-performance computers have moved from being compute-centric to being data-centric. "This changes everything," he says.
"We need to be designing machines for the problems they will be solving, but if we continue to use benchmarks that focus on one kind of application there will be pitfalls," he warns.
Bekas says that his team is therefore advocating conjugate gradients benchmarking, because the conjugate gradient method stresses the movement of data through large matrices rather than dense calculation.