Tokyo Tech is in the process of building its next-generation TSUBAME supercomputer featuring NVIDIA GPU technology and the company’s Accelerated Computing Platform. TSUBAME 3.0, as the system will be known, will ultimately be used in tandem with the existing TSUBAME 2.5 system to deliver an estimated 64.3 PFLOPS of aggregate AI computing horsepower.
On its own, TSUBAME 3.0 is expected to offer roughly twice the performance of its predecessor. TSUBAME 3.0 will be built around NVIDIA’s Pascal-based Tesla P100 GPUs, which outperform the previous-generation Maxwell GPUs in both performance per watt and performance per die area. TSUBAME 3.0 is estimated to deliver roughly 12.2 petaflops of double-precision compute performance, which would place it among the world’s 10 fastest systems according to the most recent TOP500 list.
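For a sense of where a figure like 12.2 petaflops could come from, here is a back-of-the-envelope sketch. The per-GPU number below is the published peak FP64 rate of the NVLink-class Tesla P100 (about 5.3 TFLOPS); the GPU count is a hypothetical assumption for illustration, not a configuration stated in this article, and real systems also add CPU flops and never sustain full peak.

```python
# Back-of-the-envelope peak FP64 estimate. The GPU count here is an
# illustrative assumption, not an official system specification.
P100_FP64_TFLOPS = 5.3   # peak double-precision rate of one Tesla P100 (NVLink variant)
GPU_COUNT = 2160         # hypothetical, e.g. 540 nodes x 4 GPUs per node

# Convert the summed per-GPU TFLOPS into PFLOPS (1 PFLOPS = 1000 TFLOPS).
peak_pflops = P100_FP64_TFLOPS * GPU_COUNT / 1000
print(f"Estimated GPU-only peak FP64: {peak_pflops:.2f} PFLOPS")
```

Under these assumptions the GPUs alone land near 11.4 PFLOPS; host CPUs would account for the remaining gap to a headline figure like 12.2 PFLOPS.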
A Rendering Of The Tokyo Tech Supercomputer. Image Credit: NVIDIA
The system’s architect, Tokyo Tech’s Satoshi Matsuoka, said, “NVIDIA’s broad AI ecosystem, including thousands of deep learning and inference applications, will enable Tokyo Tech to begin training TSUBAME 3.0 immediately to help us more quickly solve some of the world’s once unsolvable problems.”
TSUBAME 3.0 is being designed with AI computation in mind, and is expected to deliver more than 47 PFLOPS of AI horsepower on its own.
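The aggregate figure from the opening paragraph is simple addition of the two systems’ AI throughput. A minimal sketch, assuming TSUBAME 3.0 contributes roughly 47.2 AI PFLOPS (the article says only “more than 47”) and attributing the remainder to TSUBAME 2.5:

```python
# Figures from the article; 47.2 is an assumed reading of "more than 47 PFLOPS".
AGGREGATE_AI_PFLOPS = 64.3   # combined TSUBAME 3.0 + TSUBAME 2.5 estimate
TSUBAME3_AI_PFLOPS = 47.2    # assumed TSUBAME 3.0 contribution

# The difference is the implied TSUBAME 2.5 share of the aggregate.
tsubame25_ai_pflops = AGGREGATE_AI_PFLOPS - TSUBAME3_AI_PFLOPS
print(f"Implied TSUBAME 2.5 contribution: {tsubame25_ai_pflops:.1f} AI PFLOPS")
```

That puts TSUBAME 2.5’s implied contribution at about 17 AI PFLOPS, consistent with TSUBAME 3.0 providing roughly twice its predecessor’s performance on its own.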
“Artificial intelligence is rapidly becoming a key application for supercomputing,” said Ian Buck, vice president and general manager of Accelerated Computing at NVIDIA. “NVIDIA’s GPU computing platform merges AI with HPC, accelerating computation so that scientists and researchers can drive life-changing advances in such fields as healthcare, energy and transportation.”
TSUBAME 3.0 is expected to be completed this summer. It will be used for education and research at Tokyo Tech and as information infrastructure for top Japanese universities, and there are plans to make the system accessible to private-sector researchers as well.