### JST CREST: "Fast and cost-effective deep learning algorithm platform for video processing in social infrastructure" (Adopted FY2016)

The overarching aim of the project is "to establish a high-performance real-time deep learning algorithm basis for detecting objects and anomalies from a large amount of high-definition videos recorded by drive recorders, surveillance cameras, and the like. Computer science researchers specializing in four different levels from architectures to applications, including GPU fast computation, parallel computation, machine learning, and data mining, collaborate to realize 1000 times faster processing with 0.1% the amount of memory compared to conventional platforms." [Link]

The primary objective of Matsuoka Laboratory within this project is to investigate methods of preventing performance deterioration when scaling the training of deep neural networks to multiple GPUs. At present, we aim at a 10-fold improvement in training time. To achieve that goal, we develop communication-avoiding algorithms, investigate parallelization of the training process, and study scheduling as a means of optimizing resource usage in that context.
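As a rough illustration of why communication becomes the bottleneck at scale, the sketch below simulates one step of synchronous data-parallel SGD: each worker computes a gradient on its own data shard, and the gradients are then averaged with an allreduce-style reduction before the weight update. This is a generic textbook pattern, not the laboratory's actual code; `allreduce_mean` is a hypothetical software stand-in for a collective such as MPI_Allreduce or NCCL's allreduce, whose cost grows with the number of workers.

```python
from typing import List

def allreduce_mean(grads: List[List[float]]) -> List[float]:
    """Average per-worker gradients element-wise (a stand-in for an
    allreduce collective; on a real cluster this step is the main
    communication cost that communication-avoiding methods target)."""
    n_workers = len(grads)
    return [sum(g[i] for g in grads) / n_workers
            for i in range(len(grads[0]))]

def sgd_step(weights: List[float],
             per_worker_grads: List[List[float]],
             lr: float = 0.1) -> List[float]:
    """One synchronous data-parallel SGD step."""
    avg = allreduce_mean(per_worker_grads)
    return [w - lr * g for w, g in zip(weights, avg)]

# Two workers with hypothetical gradients on a 2-parameter model.
weights = [1.0, 2.0]
grads = [[1.0, 0.0], [3.0, 2.0]]   # averaged gradient: [2.0, 1.0]
print(sgd_step(weights, grads, lr=0.5))  # [0.0, 1.5]
```

Every step requires a full gradient exchange, so as the worker count grows, the allreduce increasingly dominates the step time unless the communication volume or frequency is reduced.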

### JST CREST: "EBD: Extreme Big Data - Convergence of Big Data and HPC for Yottabyte Processing" (Adopted FY2013)

Although the data handled by the current "Big Data" infrastructure is often not actually so "big" by HPC standards, in the future it is expected to explode by several orders of magnitude, both in capacity and in complexity. This poses immense problems for the existing IDC/Cloud "Big Data" infrastructures, which lack system bandwidth and processing capacity, as well as for HPC/supercomputers, which lack real-time processing capabilities. Our work focuses on the next-generation "Extreme Big Data" infrastructure: we will develop a set of technologies, and the resulting system, to attain this convergence through co-design with representative future big data applications, aiming for up to a 100,000-fold improvement in data processing capability over the next 10 years. [Link]

### JST CREST: "Software Technology that Deals with Deeper Memory Hierarchy in Post-petascale Era" (Adopted FY2012)

There is an urgent need to develop technology that realizes larger, finer, and faster simulations in meteorology, bioinformatics, disaster countermeasures, and so on, toward the post-petascale era. However, the "memory wall" problem will be one of the largest obstacles: memory bandwidth and capacity will grow even more slowly than processor throughput. To address this, we assume a system architecture with a memory hierarchy built from hybrid memory devices, including non-volatile RAM (NVRAM), and develop new software technology that efficiently utilizes this hybrid memory hierarchy. Our research areas include new compiler technology, memory management, and application algorithms. [Link]
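To make the hybrid-hierarchy idea concrete, here is a deliberately simplified sketch (not the project's software) of one classic placement policy: rank data objects by access frequency per byte, keep the hottest ones in fast-but-small DRAM, and spill the rest to slower-but-larger NVRAM. The object names and capacities are invented for illustration.

```python
from typing import Dict, List, Tuple

def place(objects: List[Tuple[str, int, int]],
          dram_capacity: int) -> Dict[str, str]:
    """Greedy two-tier placement.

    objects: list of (name, size_bytes, access_count).
    Hottest objects (accesses per byte) fill DRAM first;
    everything that does not fit goes to NVRAM.
    """
    ranked = sorted(objects, key=lambda o: o[2] / o[1], reverse=True)
    placement, used = {}, 0
    for name, size, _ in ranked:
        if used + size <= dram_capacity:
            placement[name] = "DRAM"
            used += size
        else:
            placement[name] = "NVRAM"
    return placement

# Hypothetical objects: (name, size in bytes, access count).
objs = [("index", 10, 1000), ("log", 80, 40), ("cache", 30, 900)]
print(place(objs, dram_capacity=48))
# {'index': 'DRAM', 'cache': 'DRAM', 'log': 'NVRAM'}
```

Real systems must also handle migration over time, write endurance of NVRAM, and compiler or runtime hints, which is where the project's compiler and memory-management research comes in.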

### JST CREST: "Advanced Computing & Optimization Infrastructure for Extremely Large-Scale Graphs on Post Peta-Scale Supercomputers" (Adopted FY2011)

When large disasters occur, all sorts of emergencies arise amid rapidly changing circumstances. In such situations, we need to quickly draw up plans for evacuation and for reconstruction. For such tasks, high-speed processing that gathers massive information, builds large-scale graph data as mathematical models, and applies optimization algorithms to them is very important. However, existing methods have limitations, and high-speed processing of massive amounts of data remains difficult. The CREST team will contribute to realizing a safe and secure social structure by creating an advanced computing and optimization infrastructure for extremely large-scale graphs on post peta-scale supercomputers. [Link]
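A representative kernel in this area is breadth-first search (BFS), the core of the Graph500 benchmark used to rank large-scale graph processing systems. The sketch below is a minimal single-node, level-synchronous BFS for illustration only; the project targets distributed graphs many orders of magnitude larger, where partitioning and communication dominate.

```python
from collections import deque
from typing import Dict, List

def bfs_levels(adj: Dict[int, List[int]], source: int) -> Dict[int, int]:
    """Return the BFS level (hop distance) of every vertex reachable
    from `source` in an adjacency-list graph."""
    level = {source: 0}
    frontier = deque([source])
    while frontier:
        v = frontier.popleft()
        for w in adj.get(v, []):
            if w not in level:          # first visit fixes the level
                level[w] = level[v] + 1
                frontier.append(w)
    return level

# Tiny example graph: path 0-1-2 plus edge 0-3.
adj = {0: [1, 3], 1: [0, 2], 2: [1], 3: [0]}
print(bfs_levels(adj, 0))  # {0: 0, 1: 1, 3: 1, 2: 2}
```

On disaster-scale graphs (road networks, social networks) the same traversal becomes communication-bound, which motivates the specialized infrastructure this project builds.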

### Grant-in-Aid for Scientific Research S (Kiban S): "Fault Tolerant Infrastructure Toward Billion of Parallelization and Exa-scale Supercomputer" (Adopted FY2011)

"Simulation" is becoming an attractive tool as the third methodology, following the theoretical and experimental methodologies. Thanks to supercomputers, large-scale simulations can be achieved. The performance of supercomputers has been increasing exponentially every year with growing demands for computational power, and exa-scale (10¹⁸ flops) supercomputers are expected to emerge around 2018. However, the constantly increasing number of nodes and components will lead to a very high failure frequency on exa-scale supercomputers. Even in an optimistic scenario where the reliability of each component improves several-fold, the failure frequency will still be dozens of times higher than today, so the mean time between failures will be no more than tens of minutes, meaning that a computing node effectively cannot make progress. Many fault-tolerance techniques have been proposed, but current techniques cannot accommodate exa-scale systems. We will seek a solution to this problem using the post-petascale TSUBAME3.0, the successor to TSUBAME2.0, which is expected to emerge in 2014. [Link]
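The "tens of minutes" figure follows from simple arithmetic: if node failures are independent, system MTBF shrinks roughly linearly with node count. The numbers below are illustrative placeholders, not measurements from any particular machine.

```python
def system_mtbf_hours(node_mtbf_years: float, n_nodes: int) -> float:
    """System MTBF under independent node failures:
    MTBF_system ≈ MTBF_node / N."""
    return node_mtbf_years * 365 * 24 / n_nodes

# Example: nodes with a (hypothetical) 10-year MTBF, at 100,000 nodes.
mtbf_h = system_mtbf_hours(10, 100_000)
print(f"{mtbf_h * 60:.1f} minutes")  # ~52.6 minutes
```

With a checkpoint/restart cost of even a few minutes, a sub-hour MTBF leaves little useful compute time, which is why conventional global checkpointing does not scale to such systems.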

⇒ [ More Info ]

### JST CREST: "Highly Productive, High Performance Application Frameworks for Post Petascale Computing" (Adopted FY2010)

Concurrency, reliability, and power are the most critical challenges for future high performance computing. We solve these problems in a highly productive way by developing a vertically-integrated high performance software stack that transparently implements advanced HPC technologies such as automatic parallelization, automatic tuning, fault tolerance, and power optimization. This project will present two instantiations of the vision, namely a framework for computational fluid dynamics and another for molecular dynamics. Our research outcome will represent an important step towards future software architecture for exascale computing. [Link]

### JST CREST: "ULP-HPC: Ultra Low-Power, High Performance Computing via Modeling and Optimization of Next Generation HPC Technologies" (Adopted FY 2007)

Although the importance of high performance computing (HPC) is widely recognized, the power consumption of HPC systems is unacceptably high and still increasing. We propose to develop an ultra low-power HPC (ULP-HPC) that will be 1000 times more power efficient than current systems in 10 years. Our aims are to (1) develop novel performance models and auto-tuning technologies based on mathematical theory; (2) utilize next generation HPC technologies such as many-core CPUs, vector accelerators, low-power memory, and low-power network hardware; and (3) develop power-efficient algorithms for actual large-scale HPC applications. The ULP-HPC will scale down the supercomputer TSUBAME to desk-side systems in 10 years, and we believe these systems will revolutionize scientific computing. [Link]
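One trade-off that such performance/power models capture can be sketched with a toy DVFS (frequency scaling) example: dynamic power falls roughly with the cube of frequency, but runtime grows as frequency drops, so with static power included, energy-to-solution has a sweet spot. All constants below are invented for illustration; they are not measured TSUBAME data or the project's actual model.

```python
def energy_to_solution(freq_ghz: float,
                       work_gcycles: float = 1000.0,
                       p_static_w: float = 20.0,
                       c: float = 10.0) -> float:
    """Energy (joules) for a fixed amount of work at a given frequency.

    Runtime ~ work / f; power ~ P_static + c * f^3 (capacitive
    switching model). Too high a frequency wastes dynamic power;
    too low a frequency burns static power over a long runtime.
    """
    runtime_s = work_gcycles / freq_ghz
    power_w = p_static_w + c * freq_ghz ** 3
    return power_w * runtime_s

for f in (0.5, 1.0, 2.0):
    print(f"{f} GHz: {energy_to_solution(f):.0f} J")
# 0.5 GHz: 42500 J   (static power dominates the long runtime)
# 1.0 GHz: 30000 J   (sweet spot for these toy constants)
# 2.0 GHz: 50000 J   (cubic dynamic power dominates)
```

Auto-tuning in this setting means searching for such sweet spots automatically, per application and per hardware configuration, using calibrated models instead of toy constants.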