Beyond that, Bekas says his team is also pushing for a new computing design that combines inexact and exact calculations, an approach in which the new conjugate gradient benchmarks have already demonstrated enormous advantages.
Basically, double-precision floating-point calculations are needed in only a tiny minority of cases, he explains. The rest of the time the computer is performing rough sorting or simple comparisons, where full precision is irrelevant.
IBM's prototypes "show that the results can be really game-changing," he says, because the energy required to reach a solution with a combination of exact and inexact computation is reduced by a factor of almost 300. With minimal use of full precision, the processors require much less energy and the overall solution is reached faster, further cutting energy consumption, he explains.
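The article doesn't describe IBM's prototype in detail, but the principle it alludes to is well established as mixed-precision iterative refinement: run the expensive inner solve in cheap, inexact single precision and reserve full double precision for the residual and the convergence test. A minimal sketch (the function name and tolerances are illustrative, not IBM's):

```python
import numpy as np

def mixed_precision_solve(A, b, tol=1e-10, max_iter=50):
    """Solve Ax = b, doing the heavy lifting in float32 and
    reserving float64 for residuals and the convergence test."""
    A32 = A.astype(np.float32)            # inexact copy of the system
    x = np.zeros_like(b, dtype=np.float64)
    for _ in range(max_iter):
        r = b - A @ x                     # exact (float64) residual
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break                         # converged to full accuracy
        # Cheap, inexact correction step computed in single precision.
        dx = np.linalg.solve(A32, r.astype(np.float32))
        x += dx.astype(np.float64)
    return x
```

Each inexact correction roughly halves the data moved and the arithmetic cost, yet the double-precision residual loop still drives the answer to full accuracy, which is the combination Bekas credits for the energy savings.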
Taking advantage of the new architecture will require action by application programmers. "But it will take only one command to do it," once system software modules are aware of the new computing methodology, Bekas adds.
If Bekas' suggestions catch on, with benchmarks pushing machine design and machine design pushing benchmarks, it will actually be a continuation of the age-old computing and benchmarking pattern, says Smith.
"I can't give you a formula saying 'This is the way to do a benchmark,'" Smith says. "But it must be complex enough to showcase the entire machine, it must be interesting on the technical side and it must have something marketing can use." When several firms use it for predictions "it feeds on itself, as you build new hardware or software based on the benchmark.
"A result gets published, it pushes the competitive market up a notch, other vendors must respond and the cycle continues," he explains.