Although the data handled by current "Big Data" infrastructure is often not so "big" by HPC standards, it is expected to grow by several orders of magnitude in both volume and complexity. This poses immense problems for existing IDC/Cloud "Big Data" infrastructures, which lack system bandwidth and processing capacity, as well as for HPC/supercomputers, which lack real-time processing capabilities. Our work focuses on the next-generation "Extreme Big Data" infrastructure, developing a set of technologies, and the resulting system, that achieve the convergence of the two through co-design with representative future big data applications, aiming for up to a 100,000-fold improvement in data processing capability over the next 10 years. [Link]

There is an urgent need to develop technology that realizes larger, finer, and faster simulations in meteorology, bioinformatics, disaster mitigation, and other fields toward the post-petascale era. However, the "memory wall" will be one of the largest obstacles: memory bandwidth and capacity will grow even more slowly than processor throughput. We therefore assume a system architecture with a hybrid memory hierarchy that includes non-volatile RAM (NVRAM), and develop new software technology that uses this hierarchy efficiently. Our research covers new compiler technology, memory management, and application algorithms. [Link]
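As a rough illustration of what "efficiently utilizing a hybrid memory hierarchy" can mean, the sketch below is a minimal, hypothetical two-tier block store in Python, not the project's compiler or runtime: hot blocks live in a small fast tier standing in for DRAM, and cold blocks are written back to a slower tier emulated with a memory-mapped file standing in for NVRAM. The class, the block size, and the tier capacity are all assumptions made for the example.

```python
# Minimal sketch (not the project's actual software): a two-tier block store
# that keeps hot blocks in a small fast tier (standing in for DRAM) and
# spills cold blocks to a slow tier (standing in for NVRAM, emulated here
# with a memory-mapped file). Capacities and block size are arbitrary.
import numpy as np
from collections import OrderedDict

BLOCK_ELEMS = 1024              # elements per block (assumed)
FAST_CAPACITY = 4               # blocks that fit in the fast tier (assumed)
SLOW_FILE = "nvram_tier.bin"    # hypothetical backing file for the slow tier

class HybridStore:
    def __init__(self, n_blocks):
        self.fast = OrderedDict()   # block_id -> ndarray, kept in LRU order
        self.slow = np.memmap(SLOW_FILE, dtype=np.float64, mode="w+",
                              shape=(n_blocks, BLOCK_ELEMS))

    def get(self, block_id):
        if block_id in self.fast:               # hit in the fast tier
            self.fast.move_to_end(block_id)
            return self.fast[block_id]
        block = np.array(self.slow[block_id])   # fetch from the slow tier
        self._admit(block_id, block)
        return block

    def put(self, block_id, data):
        self._admit(block_id, data)

    def _admit(self, block_id, block):
        self.fast[block_id] = block
        self.fast.move_to_end(block_id)
        if len(self.fast) > FAST_CAPACITY:      # evict least recently used
            victim, data = self.fast.popitem(last=False)
            self.slow[victim] = data            # write back to the slow tier

# Example: write more blocks than fit in the fast tier; cold blocks are
# spilled to the slow tier and fetched back on demand.
store = HybridStore(n_blocks=16)
for i in range(16):
    store.put(i, np.full(BLOCK_ELEMS, float(i)))
print(store.get(3)[0], store.get(15)[0])        # 3.0 from slow, 15.0 from fast
```

Real compiler and runtime support would decide such placement automatically from access patterns rather than through an explicit cache class, but the trade-off being managed is the same.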

When a large disaster occurs, all sorts of emergencies arise amid rapidly changing circumstances. In such a situation, evacuation and reconstruction plans must be drawn up quickly. For these tasks, high-speed processing is essential: gathering massive amounts of information, modeling it as large-scale graph data, and applying optimization algorithms to those graphs. However, existing methods have limitations, and high-speed processing of data at this scale is difficult. The CREST team will contribute to realizing a safe and secure society by creating an advanced computing and optimization infrastructure for extremely large-scale graphs on post-petascale supercomputers. [Link]
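The kind of graph computation involved can be illustrated with a toy example. The sketch below, which is an illustration and not the project's optimization infrastructure, models a road network as a graph and finds shortest hop-count evacuation routes to the nearest shelter with a multi-source breadth-first search; the tiny network and node names are made up.

```python
# Minimal sketch (illustration only, not the project's software): multi-source
# BFS over a road-network graph to find, for every intersection, the distance
# to the nearest shelter and the next node to move toward.
from collections import deque

def evacuation_bfs(adjacency, shelters):
    dist = {s: 0 for s in shelters}
    next_hop = {s: s for s in shelters}
    queue = deque(shelters)
    while queue:
        u = queue.popleft()
        for v in adjacency.get(u, []):
            if v not in dist:            # first visit = shortest hop count
                dist[v] = dist[u] + 1
                next_hop[v] = u          # step toward the shelter side
                queue.append(v)
    return dist, next_hop

# Hypothetical road network: intersections A-F, shelters at A and F.
roads = {
    "A": ["B"], "B": ["A", "C", "D"], "C": ["B", "E"],
    "D": ["B", "E"], "E": ["C", "D", "F"], "F": ["E"],
}
dist, next_hop = evacuation_bfs(roads, shelters=["A", "F"])
print(dist["C"], next_hop["C"])   # -> 2 B
```

At the scale this project targets, the same traversal has to run over graphs with billions of vertices distributed across a supercomputer, which is where existing single-node methods break down.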

Grant-in-Aid for Scientific Research S (Kiban S): "Fault-Tolerant Infrastructure Toward Billion-Way Parallelization and Exascale Supercomputers" (Adopted FY2011)

Simulation is becoming an attractive third methodology, alongside theory and experiment, and supercomputers make such large-scale simulations possible. The performance of supercomputers has been increasing exponentially, driven by growing demand for computational power, and exaflops (10^18 flops) supercomputers are expected to emerge around 2018. However, the ever-increasing number of nodes and components will lead to very frequent failures in exascale systems. Even in an optimistic scenario in which the reliability of each component improves severalfold, the failure frequency will still be dozens of times higher than today, so the system's mean time between failures (MTBF) will be no more than tens of minutes, leaving little effective compute time. Many fault-tolerance techniques have been proposed, but current techniques cannot accommodate exascale systems. We will seek a solution to this problem using the post-petascale TSUBAME3.0, the successor to TSUBAME2.0 expected to arrive in 2014. [Link]
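A back-of-the-envelope calculation shows why an MTBF of tens of minutes defeats plain checkpoint/restart. The sketch below assumes independent component failures, so the system MTBF is roughly the per-node MTBF divided by the node count, and uses Young's approximation for the optimal checkpoint interval; the per-node MTBF and checkpoint cost are illustrative assumptions, not TSUBAME measurements.

```python
# Back-of-the-envelope sketch of why exascale fault tolerance is hard.
# Assumptions (illustrative, not measured): one failure per node per 10 years,
# independent failures, and a 5-minute global checkpoint.
import math

node_mtbf_min = 10 * 365 * 24 * 60   # per-node MTBF in minutes (assumed)
checkpoint_cost_min = 5.0            # time to write one global checkpoint (assumed)

for nodes in (10_000, 100_000, 1_000_000):
    system_mtbf_min = node_mtbf_min / nodes
    # Young's approximation for the optimal checkpoint interval:
    # T_opt = sqrt(2 * C * MTBF)
    t_opt_min = math.sqrt(2 * checkpoint_cost_min * system_mtbf_min)
    # Rough fraction of time lost to checkpointing plus lost work after failures
    overhead = checkpoint_cost_min / t_opt_min + t_opt_min / (2 * system_mtbf_min)
    print(f"{nodes:>9} nodes: MTBF ~{system_mtbf_min:6.1f} min, "
          f"checkpoint every ~{t_opt_min:5.1f} min, overhead ~{overhead:4.0%}")
```

With these assumptions the overhead of simple global checkpoint/restart climbs from around 14% at ten thousand nodes to well over 100% at a million nodes, which is precisely why new fault-tolerance infrastructure is needed.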

⇒ [ More Info ]

Concurrency, reliability, and power are the most critical challenges for future high performance computing. We address these problems in a highly productive way by developing a vertically integrated high-performance software stack that transparently implements advanced HPC technologies such as automatic parallelization, automatic tuning, fault tolerance, and power optimization. This project will present two instantiations of this vision: a framework for computational fluid dynamics and another for molecular dynamics. Our results will represent an important step toward a future software architecture for exascale computing. [Link]
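As one small example of a mechanism such a stack can hide from the application programmer, the sketch below performs empirical auto-tuning: it times a blocked computation for several candidate block sizes and keeps the fastest. This is a generic illustration of auto-tuning, not the project's CFD or MD framework; the kernel and candidate sizes are arbitrary.

```python
# Minimal sketch of empirical auto-tuning (generic illustration, not the
# project's frameworks): time a blocked matrix multiply for several candidate
# block sizes and select the fastest for subsequent calls.
import time
import numpy as np

def blocked_matmul(a, b, block):
    n = a.shape[0]
    c = np.zeros((n, n))
    for i in range(0, n, block):
        for j in range(0, n, block):
            for k in range(0, n, block):
                c[i:i+block, j:j+block] += (
                    a[i:i+block, k:k+block] @ b[k:k+block, j:j+block])
    return c

def autotune(n=256, candidates=(16, 32, 64, 128)):
    a = np.random.rand(n, n)
    b = np.random.rand(n, n)
    best_block, best_time = None, float("inf")
    for block in candidates:                 # empirical search over block sizes
        start = time.perf_counter()
        blocked_matmul(a, b, block)
        elapsed = time.perf_counter() - start
        if elapsed < best_time:
            best_block, best_time = block, elapsed
    return best_block

print("selected block size:", autotune())
```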

Although the importance of high performance computing (HPC) is widely recognized, the power consumption of HPC systems is unacceptably high and still increasing. We propose to develop ultra-low-power HPC (ULP-HPC) systems that will be 1000 times more power efficient than current systems within 10 years. Our aims are to (1) develop novel performance models and auto-tuning technologies based on mathematical theory; (2) exploit next-generation HPC technologies such as many-core CPUs, vector accelerators, low-power memory, and low-power network hardware; and (3) develop power-efficient algorithms for real large-scale HPC applications. ULP-HPC will scale the supercomputer TSUBAME down to desk-side systems within 10 years, and we believe such systems will revolutionize scientific computing. [Link]
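To put the 1000-fold goal in perspective, the short calculation below shows the compound annual improvement it implies, roughly a doubling of power efficiency every year; the baseline efficiency figure is an assumption for the example, not a measured TSUBAME number.

```python
# Illustrative arithmetic for the 1000x power-efficiency goal over 10 years
# (the baseline figure is assumed, not a measured system value).
baseline_gflops_per_watt = 1.0        # assumed efficiency of a current system
target_factor = 1000.0                # goal: 1000x better in 10 years
years = 10

annual_improvement = target_factor ** (1.0 / years)   # ~2x every year
print(f"required improvement per year: {annual_improvement:.2f}x")
for year in range(0, years + 1, 2):
    eff = baseline_gflops_per_watt * annual_improvement ** year
    print(f"year {year:2d}: ~{eff:7.1f} GFLOPS/W")
```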
