//!!edit-lock!!
[[MatsuLab. Lecture Note]]
 
*ハイパフォーマンスコンピューティング High Performance Computing [#tee749b0]
:Time|Mondays, 10:45–12:15 (Periods 3–4)
:Place|West Building 8, Room 832
:Contact|
|Prof. S. Matsuoka (松岡教授) | matsu [at] is. |
|H. Kanezashi, TA (金刺)      | kanezashi [at] matsulab.is. |
&color(red,white){Please email Kanezashi (TA) as soon as possible so that we can add you to the mailing list.};

**目次 Contents [#p6791f79]
#contents

**休講予定日 Cancelled Lectures [#h582bdca]
10/19, 11/16, 02/08 (no supplementary lectures will be held)

**授業概要と参考資料 Guidance and References [#af4d5298]
-Guidance materials: &ref("2015年度ハイパフォーマンスコンピューティング授業内容.pdf");

**発表スケジュール Presentation Schedule [#p1183393]
&color(red,white){The tentative assignments are listed below; if a date does not work for you, please email the TA with your preferred date.};
|CENTER:|CENTER:|CENTER:|CENTER:|LEFT:|c
|No.|Date|Presenter|Slides|Paper(s)|
| 1 | 10/05 | (Guidance) |  |  |
| 2 | 10/26 (W7-302) | 本山 | &ref("hpc_Motoyama.pdf"); | &ref("06735232.pdf"); |
| 3 | 11/02 | 本山 | &ref("HPC_Motoyama2.pdf"); |  |
| 4 | 11/09 (W7-302) | 上原 | &ref("hpc_uehara.pdf"); | &ref("Hyperspectral.pdf"); |
| 5 | 11/30 | 金刺 | &ref("hpc_Kanezashi1.pdf"); | &ref("DaDianNao.pdf"); &ref("DianNao.pdf"); |
| 6 | 12/07 | 寺西 | &ref("hpc_teranishi.pdf"); | &ref("a1-keuper.pdf"); |
| 7 | 12/14 | Jian | &ref("HPC15_2015-12-14_Jian_FireCaffe.pdf"); | &ref("FireCaffe.pdf"); |
| 8 | 12/21 | 寺西 | &ref("hpc_teranishi2.pdf"); |  |
| 9 | 01/04 | Jian | &ref("HPC15_2015-12-14&21_Jian_FireCaffe v2.pdf"); |  |
| 10 | 01/12 | Piyawath | &ref("Optimizing FPGA-based Accelerator Design for Deep Convolutional Neural Networks.pdf"); | &ref("p161-zhang-small.pdf"); |
| 11 | 01/18 | 黒田 | &ref("HPC_kuroda.pdf"); | &ref("fpga2014-wjun.pdf"); |
| 12 | 01/25 | 黒田 |  |  |
| 13 | 02/01 | Hamid | &ref("HPC_Class_Presentation_Hamid.pdf"); | &ref("06853195.pdf"); |



**禁止リスト Prohibited List [#f836aabb]
- Training Large Scale Deep Neural Networks on the Intel Xeon Phi Many-Core Coprocessor
- Memory fast-forward: A low cost special function unit to enhance energy efficiency in GPU for big data processing
- Optimized Deep Learning Architectures with Fast Matrix Operation Kernels on Parallel Platform
- Real-time anomaly detection in hyperspectral images using multivariate normal mixture models and GPU processing
- Hyperspectral Unmixing on GPUs and Multi-Core Processors: A Comparison
- Performance versus energy consumption of hyperspectral unmixing algorithms on multi-core platforms
- Optimizing communication and cooling costs in HPC data centers via intelligent job allocation
- Cost Minimization for Big Data Processing in Geo-Distributed Data Centers
- On Characterization of Performance and Energy Efficiency in Heterogeneous HPC Cloud Data Centers
- DaDianNao: A Machine-Learning Supercomputer
- Mariana: tencent deep learning platform and its applications
- Performance Modeling and Scalability Optimization of Distributed Deep Learning Systems
- Asynchronous parallel stochastic gradient descent: a numeric core for scalable distributed machine learning algorithms
- FireCaffe: near-linear acceleration of deep neural network training on compute clusters
- CA-SVM: Communication-Avoiding Support Vector Machines on Distributed Systems
- Large Scale Distributed Deep Networks
- Optimizing FPGA-based Accelerator Design for Deep Convolutional Neural Networks
- 24.77 Pflops on a Gravitational Tree-Code to Simulate the Milky Way Galaxy with 18600 GPUs
- Massively Parallel Models of the Human Circulatory System
- Moving to memoryland: in-memory computation for existing applications
- Intelligent SSD: A Turbo for Big Data Mining
- Scalable Multi-Access Flash Store for Big Data Analytics
- An FPGA-Based Tightly Coupled Accelerator for Data-Intensive Applications
- A reconfigurable fabric for accelerating large-scale datacenter services
- An FPGA-based In-Line Accelerator for Memcached
- Scaling Up the Training of Deep CNNs for Human Action Recognition


**期末レポート Final Report [#y71cc4e1]
- &color(red,white){Due date: 02/08};
- &color(red,white){Due date: 02/17 (extended)};
- Summarize the general topic, covering and including ALL THREE PAPERS, with regard to the state of the art in the convergence of HPC and Big Data.
- The report should be 10 pages in the [[IEEE conference paper format>http://www.ieee.org/conferences_events/conferences/publishing/templates.html]].
- Please submit it to the TA by email &color(red,white){(NOT to the mailing list)};

**リンク Links [#s10b4a99]
-[[ACM/IEEE Supercomputing>http://www.supercomp.org]]
-[[IEEE IPDPS>http://www.ipdps.org]]
-[[IEEE HPDC>http://www.hpdc.org/]]
-[[ACM International Conference on Supercomputing (ICS)>http://www.ics-conference.org/]]
-[[ISC>http://www.isc-events.com/]]
-[[IEEE Cluster Computing>http://www.clustercomp.org/]]
-[[IEEE/ACM Grid Computing>http://www.gridcomputing.org/]]
-[[IEEE/ACM CCGrid>http://www.buyya.com/ccgrid/]]
-[[IEEE Big Data>http://cci.drexel.edu/bigdata/bigdata2015/]]
-[[CiteSeer.IST>http://citeseer.ist.psu.edu]]
-[[Google Scholar>http://scholar.google.com]]
-[[Windows Live Academic>http://academic.live.com]]
-[[The ACM Digital Library>http://dl.acm.org/]]
