CSE 240D: Accelerator Design for Deep Learning (Fall 2019)
Instructor:
Hadi Esmaeilzadeh
Email: hadi [AT] eng [DOT] ucsd [DOT] edu
Office: CSE 3228
TA:
FatemehSadat Mireshghallah: fmireshg [AT] eng [DOT] ucsd [DOT] edu
Office hours: Thursdays 8-9AM, B240A
Schedule
Lecture | Paper | Date | Presenters
0 | Dark Silicon and the End of Multicore Scaling | | Hadi Esmaeilzadeh
1 | TABLA: A Unified Template-based Framework for Accelerating Statistical Machine Learning | 10/09 | Jonathan Lam
2 | DianNao: A Small-Footprint High-Throughput Accelerator for Ubiquitous Machine-Learning | 10/21 | Utkarsh Singh
3 | Eyeriss: A Spatial Architecture for Energy-Efficient Dataflow for Convolutional Neural Networks | 10/23 | Karthikeyan Sugumaran
4 | Stripes: Bit-serial Deep Neural Network Computing | 10/28 | Yanzhe Su
5 | Bit Fusion: Bit-Level Dynamically Composable Architecture for Accelerating Deep Neural Network | 10/30 | Tanaya Pradeep Kolankar
6 | DNNWEAVER: From High-Level Deep Network Models to FPGA Acceleration | 11/04 | Brahmendra
7 | SCNN: An Accelerator for Compressed-sparse Convolutional Neural Networks | 11/06 | Swapnil Aggarwal
8 | In-Datacenter Performance Analysis of a Tensor Processing Unit | 11/18 | Imtiaz Ahamed Ameerudeen
9 | EIE: Efficient Inference Engine on Compressed Deep Neural Network | 11/20 | Sumiran Shubhi
10 | PRIME: A Novel Processing-in-memory Architecture for Neural Network Computation in ReRAM-based Main Memory | 11/25 | Akash Boghani
11 | ISAAC: A Convolutional Neural Network Accelerator with In-Situ Analog Arithmetic in Crossbars | 11/27 | Srinithya Nagiri
12 | RedEye: Analog ConvNet Image Sensor Architecture for Continuous Mobile Vision | 12/02 | Sahand Salamat
14 | Cambricon: An Instruction Set Architecture for Neural Networks | 12/02 | Ritika Prasad
13 | Neurocube: A Programmable Digital Neuromorphic Architecture with High-Density 3D Memory | |
15 | Minerva: Enabling Low-Power, Highly-Accurate Deep Neural Network Accelerators | |