Five reasons the U.S. tech lead is in danger

There is a worldwide race to build the next generation of supercomputers, but U.S. efforts have stalled.

China and Europe, in particular, are moving ahead with exascale programs, and Japan is increasingly picking up the pace.

The U.S. government, meanwhile, has yet to put in place a plan for achieving exascale computing. Exascale programs aren’t just about building supercomputers. Development of exascale platforms will also seed new processor, storage and networking technologies. Breakthroughs in these areas in other countries may give rise to new challenges to U.S. tech dominance.

Why are exascale systems important? Such systems, capable of 1 quintillion (1 million trillion) floating point operations per second — one thousand times more powerful than a petaflop system — could solve some of the world’s greatest scientific problems. If the U.S. falls behind, that research will increasingly be done in other countries.
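To put those numbers in perspective, here is a quick back-of-the-envelope sketch in Python (purely illustrative; the figures come straight from the definitions above):

    # The scale of exascale, spelled out
    petaflop = 10**15              # 1 quadrillion floating point operations/sec
    exaflop = 10**18               # 1 quintillion floating point operations/sec
    print(exaflop // petaflop)     # 1000 -- an exaflop is a thousand petaflops
    print(f"{exaflop:,}")          # 1,000,000,000,000,000,000 operations/sec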

In sum, the world has awakened to the need for high performance computing. The U.S., for now, is dozing. Here are five reasons the U.S. lead in high performance computing is in danger.

1: The U.S. doesn’t have an exascale plan.

An exascale development project would cost the U.S. billions. Europe has estimated that its own exascale effort will cost US$4.7 billion over ten years. China is putting untold amounts of money into its effort. In 2008, China had 15 systems on the Top 500 list of the world’s most powerful systems; in the latest list, released this month, 74 Chinese-built systems, or 14.8% of the world’s total, appeared. In 2010, a China-built system topped the list. Japan now owns the top spot on the supercomputing list as its government shows renewed interest in high performance computing development.

The U.S. continues to fund big projects such as IBM’s planned 20 petaflop computer for Lawrence Livermore National Laboratory that’s due next year. That system may put the U.S. back in first place on the Top 500 list. But despite what’s going on in Europe and China, the U.S. has yet to set a budget for exascale development.

The Department of Energy is due to deliver to Congress, no later than Feb. 10, a timetable and cost estimate for building an exascale system. The delivery couldn’t come at a worse time, particularly after this week’s failure of the Congressional Super Committee to reach a budget agreement, which will trigger mandated cuts. U.S. scientists have been warning for a year that Europe and China are on a faster exascale development path.

[Photo: Alex Ramirez, computer architecture research manager at the Barcelona Supercomputing Center, shows an ARM and Nvidia processor card.]

“The EU effort is more organized at this stage with respect to exascale with strong backing from the European Commission,” said Jack Dongarra, a professor of computer science at the University of Tennessee, a distinguished research staff member at Oak Ridge National Laboratory, and an organizer of the Top 500 list.

“The Europeans see this as an opportunity to work together on a software stack and be competitive on the world stage,” Dongarra said. “The bottom line is that the US appears stalled and the EU, China, and Japan are gearing up for the next generation.”

2: It’s mistakenly assumed the U.S. will win the exascale race.

Although China’s supercomputing development effort gets much attention, the Europeans are focused on developing a technology infrastructure to rival that of the U.S.

The Large Hadron Collider (LHC), a 16.8-mile circular tunnel straddling the French-Swiss border, is establishing Europe as the world’s center for high energy physics research. Physicists who once wanted to work in the U.S. may now find Europe more attractive, which may help seed the creation of new industries there.

The U.S. once had plans to build a 54-mile supercollider tunnel in Texas, but Congress pulled the funding and abandoned the partially constructed project after its projected cost increased from about US$5 billion in the late 1980s to US$11 billion in 1993.

European nations are also acting jointly to build Galileo, a US$20 billion satellite navigation system of their own, analogous to the U.S. GPS. The LHC and Galileo illustrate that European nations are willing to pool resources and work together on technology. They see a similar opportunity in exascale, especially in software development.

“The U.S., Europe, China and Japan all have the potential to realize the first exascale system,” concluded the European Exascale Software Initiative, the group that’s leading Europe’s effort, in a report last month.

3: The path to exascale is uncharted, which opens the door to challengers.

Although the U.S. has not produced a plan for exascale development, it has outlined some requirements for a system: it must be ready by 2019-2020 and can’t draw more than 20 MW of power, a small budget for a system that may have millions of processors.
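A rough calculation shows how tight that budget is. Assuming a machine with exactly one million processors (the actual count is unspecified; “millions” is all that has been outlined), a quick sketch in Python:

    # Per-processor power budget under a 20 MW cap (processor count is assumed)
    total_power_watts = 20 * 10**6       # the 20 MW system-wide limit
    processor_count = 1_000_000          # assumption: one million processors
    watts_each = total_power_watts / processor_count
    print(watts_each)                    # 20.0 watts per processor, which must
                                         # also cover memory and interconnect

Twenty watts is roughly the draw of a single laptop chip, which is why low-power designs dominate the exascale conversation.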

The need for low-power systems is prompting new approaches to development. The Barcelona Supercomputing Center in Spain, as part of Europe’s exascale initiative, is working with UK-based ARM Holdings, the smartphone chip designer, on technology that combines ARM processors with Nvidia graphics processors. The project may also adopt forthcoming ARM co-processors.

Alex Ramirez, computer architecture research manager at the Barcelona center, said the project is demonstrating that a high performance computing cluster can be built on the ARM architecture. The team is also building a complete software stack for the cluster.

“There are a big number of challenges ahead,” said Ramirez, mostly in getting the software to work in an environment that differs from both servers and mobile computing. “The human effort and investment in software development is going to be significant,” he added.

Europe has other exascale developments in progress, including one using Intel technology. Ramirez said the Barcelona effort is now two years old, and the ultimate goal is to build a system that can reach exascale performance at reasonable power levels. But he also sees Europe-wide goals in this effort.

“There is an opportunity to keep embedded and high performance industry in Europe in the front line,” said Ramirez. “There is a clear convergence between embedded technology and high performance computing technology.”

4: If the U.S. doesn’t lead in exascale, what happens when planning for zettascale begins?

A computer science freshman starting today should, within four years, know the pathway to an exascale system. By the time that same student completes his or her graduate work, the discussion will have turned to zettascale systems, one thousand times more powerful still.

If high performance computing maintains its historic development pattern, a zettascale system can be expected around 2030. But no one knows what a zettascale system will look like, or whether it’s even possible. Zettascale computing may require entirely new approaches, such as quantum computing.
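The 2030 estimate follows from the field’s rough historic pattern of a thousandfold performance jump about every decade. A simple projection in Python, starting from the first petaflop system in 2008 (the dates are approximations, not a roadmap):

    # Projecting the ~1,000x-per-decade pace of supercomputing (approximate)
    petascale_year = 2008                  # first petaflop system appeared in 2008
    exascale_year = petascale_year + 11    # ~2019, matching the 2019-2020 target
    zettascale_year = exascale_year + 11   # ~2030, the estimate cited above
    print(exascale_year, zettascale_year)  # -> 2019 2030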

The White House says it doesn’t want to be in an “arms race” in building ever faster computers, and warned in a report a year ago this month that a focus on speed “could divert resources away from basic research aimed at developing the fundamentally new approaches to HPC that could ultimately allow us to ‘leapfrog’ other nations.”

But the U.S. is in a computing arms race whether it wants to be or not. To develop technology that leapfrogs other nations, the U.S. will need sustained funding for basic research as well as for building an exascale system.

“A lot of countries have realized that one of the reasons the U.S. became so great was because of things like federally funded research,” said Luis von Ahn, an associate professor of computer science at Carnegie Mellon University and a staff research scientist at Google, in an earlier interview. “There are a lot of countries that are trying to really invest in science and technology. I think it’s important to continue funding that in the U.S. Otherwise it is just going to lose the edge — it’s as simple as that.”

5: The U.S. hasn’t explained what’s at stake.

President Barack Obama was the first U.S. president to mention exascale computing, but he didn’t really explain the potential of such systems. Supercomputers can help scientists create models, at an atomic level, of human cells and how a virus may attack them. They can be used to model earthquakes and help find ways to predict them, as well as to design structures that can withstand them. They are increasingly used by industry to create products and test them in virtual environments.

Supercomputers can be used in almost any way imaginable, and the more power – the more compute capability – the more precise the science. Today, the U.S. dominates the market. IBM alone accounts for nearly 45 per cent of the systems on the Top 500 list, followed by HP at 28 per cent. Nearly 53 per cent of the most powerful systems on the list are in the U.S.

At the SC11 supercomputing conference held earlier this month in Seattle, there were 11,000 attendees, more than double the number from five years ago. A key reason: the growing importance of visualization and modeling. The conference draws people from around the globe because the U.S. today is the center of high performance computing, a position the world is beginning to challenge on the path to exascale.
