The U.S. today has a clear lead in supercomputing, both in the number of systems deployed and in the capability of its vendors.
But building an exascale system, one capable of 1,000 petaflops, poses many challenges, led by power usage. Extrapolating from present technologies, such a system would draw about 1 GW of power, roughly the total output of a single power plant.
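The 1 GW figure is consistent with a simple back-of-envelope calculation. As a sketch, assume (for illustration only; this is not a figure from the article) an efficiency of roughly 1 gigaflop per watt for hardware of that era:

```python
# Back-of-envelope check of the ~1 GW estimate.
# FLOPS_PER_WATT is an assumed efficiency (~1 GFLOPS/W),
# chosen for illustration; real systems vary.
EXAFLOP = 1e18            # floating-point operations per second
FLOPS_PER_WATT = 1e9      # assumed efficiency: ~1 GFLOPS/W

power_watts = EXAFLOP / FLOPS_PER_WATT
print(power_watts / 1e9, "GW")  # → 1.0 GW
```

Under that assumption, an exaflop of sustained compute works out to about a gigawatt, which is why power efficiency dominates the exascale research agenda.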
An exascale system will need processors, memory and network components that use considerably less power, as well as better programming models so applications can scale across millions of cores. Resiliency, the ability to operate without interruption as components fail, is also a key research issue.
The U.S. national laboratories run by the Department of Energy use the largest systems, in part to meet their mission of maintaining the nation's nuclear weapons stockpile. Instead of underground testing, the U.S. uses supercomputers to simulate its weapons and to assess how they are faring and wearing in storage.
"It's very important that the United States maintain the key intellectual property" for supercomputers, said Dona Crawford, associate director for computation at Lawrence Livermore National Laboratory, at the hearing "If we control that, we have the high ground for the standards space."
"I would not want to cede that to another country," said Crawford. "I cannot trust U.S. nuclear weapons technology to a system built in China, say. That's untenable."
For now, there is no budget proposal in Congress to push exascale ahead. The White House did not include an exascale-specific spending request in the recently released 2014 budget.
"The U.S. research community has repeatedly warned of the potential and actuality of eroding U.S. leadership in computing and in high performance computing," said Daniel Reed, who served on the White House science advisory committee during President George W. Bush's administration.
"And many of these warnings have been largely unheeded," Reed told lawmakers at the hearing.