U.S. to use climate to help cool exascale systems

In a picturesque spot overlooking San Francisco Bay, the U.S. Department of Energy's Berkeley Lab has begun building a new computing center that will one day house exascale systems.

The DOE doesn't yet know what an exascale system will look like. The types of chips, storage, networking and programming methods that will go into these systems are all works in progress.

The DOE is expected to deliver a report to Congress by the end of this week outlining a plan for reaching exascale computing by 2019-2020, along with its expected cost.

But the DOE does have an idea about how to cool these systems.


The Computational Research and Theory (CRT) Facility at Berkeley will use outside air cooling. It can rely on the Bay Area's cool temperatures to meet its needs about 95 per cent of the time, said Katherine Yelick, associate lab director for computing sciences at the lab. If computer makers raise the temperature standards of systems, "we can use outside cooling all year round," she said.

The 140,000-square-foot building will be nestled in a hillside with an expansive and unobstructed view of the Bay. It will allow Berkeley Lab to combine offices that are split between two sites. It will also be large enough to house two supercomputers, including exascale-sized systems. “We think we can actually house two exaflop systems in it,” said Yelick. The building will be completed in 2014.

Supercomputers use liquid cooling, and this building will also use evaporative cooling. Under this process, hot water goes up into a cooling tower, where evaporation helps to cool it. The lowest level of the Berkeley building is a mechanical area that will be covered by a grate used to pull in outside air, said Yelick.

An exascale system will be able to reach 1 quintillion (1 million trillion) floating point operations per second, roughly 1,000 times more powerful than a petaflop system. The government has already told vendors that an exascale system won't be able to use more than 20 megawatts of power. To put that in perspective, a 20-petaflop system today is expected to use somewhere in the range of 7 MW. There are large commercial data centers, with multiple tenants, that are now being built to support 100 MW or more.
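
To put those figures side by side, here is a rough back-of-the-envelope comparison in Python (a sketch only; the 7 MW and 20 MW numbers are the targets cited above, not measured values):

    # Rough energy-efficiency comparison based on the figures cited above.
    # These are planning targets, not measurements.
    PETAFLOP = 1e15   # floating point operations per second
    EXAFLOP = 1e18    # 1,000 petaflops

    # A 20-petaflop system at roughly 7 MW (today's expectation)
    today_flops_per_watt = (20 * PETAFLOP) / 7e6

    # An exascale system capped at 20 MW (the government's stated limit)
    exascale_flops_per_watt = EXAFLOP / 20e6

    print(f"20 PF at 7 MW: {today_flops_per_watt / 1e9:.1f} gigaflops per watt")
    print(f"1 EF at 20 MW: {exascale_flops_per_watt / 1e9:.1f} gigaflops per watt")
    print(f"Efficiency gain needed: {exascale_flops_per_watt / today_flops_per_watt:.1f}x")

In other words, staying under the 20 MW cap means building machines well over an order of magnitude more energy efficient per operation than today's expected 20-petaflop systems.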

A rendering of the Berkeley computational research center planned for the San Francisco Bay area.

The idea of using climate, or what is often called free cooling, is a major trend in data centre design.

Google built a data center in Hamina, Finland, using Baltic Sea water to cool systems instead of chillers. Last October, Facebook announced that it had begun construction of a data center in Lulea, Sweden, near the Arctic Circle, to take advantage of the cool air. Hewlett-Packard built a facility on the North Sea coast in the U.K. that relies on cold sea air.

One project that is entirely carbon-free is a data center built by Verne Global in Keflavik, Iceland. Its power supply comes from a combination of hydro and geothermal sources.

The cool temperatures in Keflavik allow the data center to make use of outside air for cooling. The company has two modes of operation. One is direct free cooling, in which air is taken directly from the outside and put into the data center. The company can "remix" the returning hot air to maintain "tight temperature controls," said Tate Cantrell, the chief technology officer. The air is also filtered.

The data center also has the ability to switch to a recirculation mode where no outside air goes into the data center. Instead, a heat exchanger with a cold coil and a hot coil is used. The cold coil cools the air in the data center air stream, and the hot coil is cooled by the direct outside air, Cantrell said.

The Keflavik data center will use the heat exchanger in two situations. The first is to conserve moisture in the air when the dew point is low, meaning there is a low percentage of water in the airstream; the data center also has humidifiers, because below a certain level of humidity there is a risk of introducing static into the environment. The other reason for switching to the heat exchanger is to protect the filters in the event that a strong storm kicks up a lot of dust, said Cantrell.
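
As an illustration of the mode-switching logic Cantrell describes, here is a hypothetical sketch in Python. The function, variable names and threshold values are illustrative assumptions, not Verne Global's actual control system:

    # Hypothetical sketch of the cooling-mode selection described above.
    # Thresholds and names are illustrative assumptions, not Verne Global's
    # actual control parameters.
    def choose_cooling_mode(outside_dew_point_c, outside_dust_level,
                            dew_point_threshold_c=2.0, dust_threshold=0.8):
        """Return either 'recirculation' or 'direct free cooling'."""
        # Very dry outside air would pull moisture out of the data hall and
        # raise the risk of static, so recirculate and let the humidifiers work.
        if outside_dew_point_c < dew_point_threshold_c:
            return "recirculation"
        # A dust storm would clog the intake filters, so close the intake and
        # cool the internal air stream through the heat exchanger instead.
        if outside_dust_level > dust_threshold:
            return "recirculation"
        # Otherwise, pull outside air straight into the data center.
        return "direct free cooling"

    # Example: dry outside air forces recirculation mode.
    print(choose_cooling_mode(outside_dew_point_c=-5.0, outside_dust_level=0.1))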

The groundbreaking of the Berkeley facility last week included Steve Chu, the U.S. energy secretary and a former Berkeley Lab director. He said the computational facility "is very representative of what we have that's best in the United States in research, in innovation." Computation will be "a key element in helping further the innovation and the industrial competitiveness of the United States," he said.
