Roadrunner was a supercomputer built by IBM for the Los Alamos National Laboratory in New Mexico, USA. The US$100-million Roadrunner was designed for a peak performance of 1.7 petaflops. It achieved 1.026 petaflops on May 25, 2008, becoming the world's first TOP500 system to sustain 1.0 petaflops on the LINPACK benchmark. In November 2008, it reached a top performance of 1.456 petaflops, retaining its top spot on the TOP500 list. It was also the fourth-most energy-efficient supercomputer in the world on the Supermicro Green500 list, with an operational efficiency of 444.94 megaflops per watt. The hybrid Roadrunner design was later reused for several other energy-efficient supercomputers. Roadrunner was decommissioned by Los Alamos on March 31, 2013. In its place, Los Alamos commissioned a supercomputer called Cielo, which was installed in 2010; Cielo was smaller and more energy efficient than Roadrunner, and cost $54 million.
IBM built the computer for the U.S. Department of Energy's (DOE) National Nuclear Security Administration.[1][2] It was a hybrid design with 12,960 IBM PowerXCell 8i[3] and 6,480 AMD Opteron dual-core processors[4] in specially designed blade servers connected by InfiniBand. The Roadrunner used Red Hat Enterprise Linux along with Fedora[5] as its operating systems and was managed with xCAT distributed computing software. It also used the Open MPI Message Passing Interface implementation.[6]
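The source does not include any of Roadrunner's application code; purely as an illustration of the message-passing style that Open MPI exposes on such a cluster, the following is a minimal, self-contained C sketch (not Roadrunner-specific, and the reduction it performs is an arbitrary example):

```c
/* Minimal Open MPI sketch: every rank contributes a value and rank 0
 * reports the sum. Illustrative only; compile with mpicc and launch
 * with mpirun -np <ranks>. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each rank contributes its own id; rank 0 collects the sum. */
    int token = rank, sum = 0;
    MPI_Reduce(&token, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("%d ranks, sum of rank ids = %d\n", size, sum);

    MPI_Finalize();
    return 0;
}
```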
Roadrunner occupied approximately 296 server racks[7] covering 560 square metres (6,000 sq ft)[8] and became operational in 2008. It was decommissioned on March 31, 2013.[7] The DOE used the computer to simulate how nuclear materials age, in order to predict whether the USA's aging arsenal of nuclear weapons is both safe and reliable. Other uses for Roadrunner included applications in science and in the financial, automotive and aerospace industries.
Roadrunner differed from other contemporary supercomputers because it continued the hybrid approach[7] to supercomputer design introduced by Seymour Cray in 1964 with the Control Data Corporation CDC 6600 and continued with the CDC 7600, an order of magnitude faster, in 1969. In that earlier architecture, however, the peripheral processors were used only for operating system functions and all applications ran on the single central processor. Most previous supercomputers had used only one processor architecture, since that was thought to be easier to design and program for. To realize the full potential of Roadrunner, all software had to be written specially for its hybrid architecture. The hybrid design consisted of dual-core Opteron server processors manufactured by AMD using the standard AMD64 architecture. Attached to each Opteron core was a PowerXCell 8i processor manufactured by IBM using Power Architecture and Cell technology. As a supercomputer, Roadrunner was considered an Opteron cluster with Cell accelerators, as each node consisted of a Cell attached to an Opteron core, with the Opterons connected to each other.[9]
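Roadrunner applications had to split their work between the Opteron hosts and the Cell accelerators; the source does not show how this was done in practice. Purely as an illustration of the Cell side of such a hybrid code, the sketch below uses the standard libspe2 API of the Cell Linux toolchain to run an SPU program from the PowerXCell's PPE core. The program name spu_kernel is a hypothetical placeholder, and the Opteron-to-Cell coupling on Roadrunner itself went over the PCIe links of the expansion blades described later, which this sketch does not cover.

```c
/* Illustrative libspe2 sketch (PPE side): load a hypothetical SPU
 * program and run it to completion on one of the eight SPEs.
 * Not taken from Roadrunner's code base. */
#include <stdio.h>
#include <stdlib.h>
#include <libspe2.h>

int main(void)
{
    /* "spu_kernel" is an assumed embedded SPU image name. */
    spe_program_handle_t *prog = spe_image_open("spu_kernel");
    if (!prog) { perror("spe_image_open"); return EXIT_FAILURE; }

    spe_context_ptr_t ctx = spe_context_create(0, NULL);
    if (!ctx) { perror("spe_context_create"); return EXIT_FAILURE; }

    if (spe_program_load(ctx, prog) != 0) {
        perror("spe_program_load");
        return EXIT_FAILURE;
    }

    /* Blocks until the SPU program stops. */
    unsigned int entry = SPE_DEFAULT_ENTRY;
    spe_stop_info_t stop;
    if (spe_context_run(ctx, &entry, 0, NULL, NULL, &stop) < 0)
        perror("spe_context_run");

    spe_context_destroy(ctx);
    spe_image_close(prog);
    return 0;
}
```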
Roadrunner was in development from 2002 and went online in 2006. Due to its novel design and complexity, it was constructed in three phases and became fully operational in 2008. Its predecessor was a machine also developed at Los Alamos, named Dark Horse.[10] This machine was one of the earliest hybrid-architecture systems, originally based on ARM processors and later moved to the Cell processor. It was an entirely 3D design that integrated 3D memory, networking, processors and a number of other technologies.
The first phase of Roadrunner consisted of building a standard Opteron-based cluster while evaluating the feasibility of constructing and programming the future hybrid version. This Phase 1 Roadrunner reached 71 teraflops and was in full operation at Los Alamos National Laboratory in 2006.
Phase 2, known as AAIS (Advanced Architecture Initial System), involved building a small hybrid version of the final system using an older version of the Cell processor. This phase was used to build prototype applications for the hybrid architecture. It went online in January 2007.
The goal of Phase 3 was to reach sustained performance in excess of 1 petaflops. Additional Opteron nodes and new PowerXCell processors were added to the design; these PowerXCell processors were five times as powerful as the Cell processors used in Phase 2. The system was built to full scale at IBM’s Poughkeepsie, New York facility,[11] where it broke the 1 petaflops barrier during its fourth attempt on May 25, 2008. The complete system was moved to its permanent location in New Mexico in the summer of 2008.[11]
Roadrunner used two different models of processor. The first was the AMD Opteron 2210, running at 1.8 GHz. Opterons were used both in the computational nodes, feeding the Cells with data, and in the system operations and communication nodes, passing data between computing nodes and helping the operators run the system. Roadrunner had a total of 6,912 Opteron processors, with 6,480 used for computation and 432 for operation. The Opterons were connected together by HyperTransport links. Each Opteron had two cores, for a total of 13,824 Opteron cores.
The second processor was the IBM PowerXCell 8i, running at 3.2 GHz. These processors had one general-purpose core (PPE) and eight special performance cores (SPEs) for floating-point operations. Roadrunner had a total of 12,960 PowerXCell processors, with 12,960 PPE cores and 103,680 SPE cores, for a total of 116,640 Cell cores.
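The core totals above follow directly from the processor counts; the short check below uses only figures stated in the text.

```c
#include <stdio.h>

int main(void)
{
    /* Processor counts as stated in the text. */
    const int opterons = 6912;                 /* 6,480 compute + 432 operations */
    const int cells    = 12960;

    printf("Opteron cores: %d\n", opterons * 2);     /* dual-core -> 13,824        */
    printf("Cell cores:    %d\n", cells * (1 + 8));  /* 1 PPE + 8 SPEs -> 116,640  */
    return 0;
}
```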
Logically, a TriBlade consisted of two dual-core Opterons with 16 GB of RAM and four PowerXCell 8i CPUs with 16 GB of Cell RAM.[4]
Physically, a TriBlade consisted of one LS21 Opteron blade, an expansion blade, and two QS22 Cell blades. The LS21 had two 1.8 GHz dual-core Opterons with 16 GB of memory for the whole blade, giving 8 GB per CPU. Each QS22 had two PowerXCell 8i CPUs running at 3.2 GHz and 8 GB of memory, or 4 GB per CPU. The expansion blade connected the two QS22s to the LS21 via four PCIe x8 links, two links for each QS22, and was itself connected to the Opteron blade via HyperTransport. It also provided outside connectivity via an InfiniBand 4x DDR adapter. A single TriBlade was therefore four slots wide, and three TriBlades fit into one BladeCenter H chassis.
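Purely as a schematic, the TriBlade's composition can be written down as a small record; the struct below is an illustration using the figures above, not anything from Roadrunner's software stack.

```c
#include <stdio.h>

/* Schematic description of one TriBlade, using the figures above. */
struct triblade {
    int opteron_cpus;     /* dual-core Opterons on the LS21 blade     */
    int opteron_ram_gb;   /* shared across the whole LS21             */
    int cell_cpus;        /* PowerXCell 8i CPUs across the two QS22s  */
    int cell_ram_gb;      /* 8 GB per QS22                            */
    int pcie_x8_links;    /* expansion blade to the QS22s             */
    int ib_4x_ddr_ports;  /* outside connectivity                     */
};

int main(void)
{
    const struct triblade tb = { 2, 16, 4, 16, 4, 1 };

    printf("RAM per Opteron CPU: %d GB\n", tb.opteron_ram_gb / tb.opteron_cpus); /* 8 GB */
    printf("RAM per Cell CPU:    %d GB\n", tb.cell_ram_gb / tb.cell_cpus);       /* 4 GB */
    /* One Cell per Opteron core, matching the node description earlier. */
    printf("Cells per Opteron core: %d\n", tb.cell_cpus / (tb.opteron_cpus * 2));
    return 0;
}
```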
A Connected Unit (CU) consisted of 60 BladeCenter H chassis full of TriBlades, i.e. 180 TriBlades. All TriBlades were connected to a 288-port Voltaire ISR2012 InfiniBand switch. Each CU also had access to the Panasas file system through twelve System x3755 servers.[4]
In total, each Connected Unit therefore contained 360 dual-core Opterons and 720 PowerXCell 8i processors, with 2.88 TB (180 × 16 GB) of Opteron RAM and the same amount of Cell RAM.[4]
The final cluster was made up of 18 Connected Units, which were connected via eight additional (second-stage) InfiniBand ISR2012 switches. Each CU had twelve uplinks to each second-stage switch, for a total of 96 uplink connections per CU.[4]
Overall, the complete system therefore comprised 3,240 TriBlades with 6,480 dual-core Opterons and 12,960 PowerXCell 8i processors, roughly 51.8 TB of Opteron RAM and the same amount of Cell RAM, 216 System x3755 I/O servers, and 26 ISR2012 InfiniBand switches.[4]
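The per-CU and system-wide totals can be rederived from the per-TriBlade figures; the check below uses only numbers given in the text.

```c
#include <stdio.h>

int main(void)
{
    /* Topology figures from the text. */
    const int triblades_per_chassis = 3;
    const int chassis_per_cu        = 60;
    const int cus                   = 18;
    const int second_stage_switches = 8;
    const int uplinks_per_switch    = 12;

    const int triblades_per_cu = triblades_per_chassis * chassis_per_cu;  /* 180   */
    const int triblades_total  = triblades_per_cu * cus;                  /* 3,240 */

    printf("Opterons:          %d\n", triblades_total * 2);   /* 6,480  */
    printf("PowerXCell 8i:     %d\n", triblades_total * 4);   /* 12,960 */
    printf("Opteron RAM (GB):  %d\n", triblades_total * 16);  /* ~51,840 GB, same again on the Cell side */
    printf("Uplinks per CU:    %d\n", uplinks_per_switch * second_stage_switches); /* 96 */
    return 0;
}
```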
IBM Roadrunner was shut down on March 31, 2013.[7] While the supercomputer was one of the fastest in the world, its energy efficiency was relatively low: it delivered 444 megaflops per watt, versus the 886 megaflops per watt of a comparable supercomputer.[12] Before the machine was dismantled, researchers were given one month to perform memory and data-routing experiments intended to aid the design of future supercomputers.[7]
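As a rough cross-check of the efficiency figure, dividing the sustained LINPACK performance by the megaflops-per-watt rating gives an approximate power draw of about 2.3 MW; this is a derived estimate, not a number quoted in the source.

```c
#include <stdio.h>

int main(void)
{
    /* Figures from the text; the resulting power draw is derived, not quoted. */
    const double linpack_flops  = 1.026e15;   /* sustained LINPACK, May 2008 */
    const double flops_per_watt = 444.94e6;   /* Green500 efficiency rating  */

    printf("Approximate power draw: %.2f MW\n",
           linpack_flops / flops_per_watt / 1e6);   /* ~2.31 MW */
    return 0;
}
```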
After IBM Roadrunner was dismantled, its electronics were to be shredded.[13] Los Alamos planned to perform the majority of the supercomputer's destruction itself, citing the classified nature of its calculations, while retaining some of its parts for historical purposes.[13]
The content is sourced from: https://handwiki.org/wiki/IBM_Roadrunner