Intelligent Architectures 5LIL0

2019 - 2020 (1st semester, Q1)

Code : 5LIL0
Credits : 5 ECTS
Lecturer : Prof. dr. Henk Corporaal
Tel. : +31-40-247 5195 (secr.) 5462 (office)
Email:  H.Corporaal at tue.nl
Project assistance: Berk Ulker (b.ulker at tue.nl), Savvas Sioutas (s.sioutas at tue.nl), Kanishkan Vadivel (k.vadivel at tue.nl), and Martin Roa Villescas (m.roa.villescas at tue.nl)
Material: check oncourse.tue.nl/2019 and below.

News

Information on the course:


Description

Machine learning, and in particular deep learning, has dramatically improved the state of the art in object detection, speech recognition, robotics, and many other domains. Whether it is superhuman performance in object recognition or beating human players in Go, the astonishing success of deep learning is achieved by deep neural networks trained with huge amounts of training examples and massive computing resources. Although deep learning is already applied successfully in academic use cases and several consumer products (e.g. machine translation), these data and computing requirements pose challenges for further market penetration.

 

This course on Intelligent Architectures first treats the most important Deep Learning Networks. In particular, we treat how they operate, their implementation, and how they perform learning. We will use standard frameworks, like TensorFlow or PyTorch, for building these networks.
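As an illustration (not part of the official lab material), a small convolutional network could be defined in PyTorch as sketched below; the layer sizes and the 32x32 input are example choices only.

    import torch
    import torch.nn as nn

    class SmallCNN(nn.Module):
        """Tiny convolutional network for 32x32 RGB images (e.g. CIFAR-10)."""
        def __init__(self, num_classes=10):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),                                 # 32x32 -> 16x16
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),                                 # 16x16 -> 8x8
            )
            self.classifier = nn.Linear(32 * 8 * 8, num_classes)

        def forward(self, x):
            x = self.features(x)
            return self.classifier(torch.flatten(x, 1))

    model = SmallCNN()
    print(model(torch.randn(1, 3, 32, 32)).shape)   # torch.Size([1, 10])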

These networks require lots of computation and memory accesses, making them costly and very energy-consuming. Therefore, this Intelligent Architectures course gives an in-depth treatment of several network and implementation optimization steps, like network pruning, quantization, and loop-nest transformations, which can drastically reduce the computation and memory traffic requirements.
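As a minimal sketch of two of these optimization steps, the snippet below uses PyTorch's built-in pruning and dynamic quantization utilities; the 50% sparsity and int8 settings are illustrative choices, not course requirements.

    import torch
    import torch.nn as nn
    import torch.nn.utils.prune as prune

    model = nn.Sequential(nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, 10))

    # Pruning: zero out the 50% smallest-magnitude weights of the first layer.
    prune.l1_unstructured(model[0], name="weight", amount=0.5)
    prune.remove(model[0], "weight")        # make the pruned weights permanent

    # Dynamic quantization: replace the float32 Linear layers by int8 versions,
    # reducing weight storage (and memory traffic) roughly 4x.
    qmodel = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)
    print(qmodel)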

We also treat various processing and accelerator platforms tuned for deep learning algorithms, including (embedded) GPUs, the Tensor Processing Unit (TPU), and TTAs (Transport Triggered Architectures tuned for DNNs). Specific hardware can lead to huge cost savings.

Finally, we will look into the future and hint at what other high-potential machine learning approaches can offer, like Bayesian learning and Neuromorphic computing.

 

The course includes 2-3 lab assignments covering the above topics. The labs give you real hands-on experience in designing and implementing DNNs.


You will learn:

-          understanding deep learning, including network architectures, inference, and learning methods.

-          how to design Deep Neural Networks (DNNs).

-          how to implement and optimize DNNs using various optimization methods.

-          state-of-the-art DNNs, including the newest types of operators.

-          special processing architectures and hardware efficiently supporting Deep Learning.

-          alternative approaches to the 'classical' DNNs, like Bayesian learning and Neuromorphic computing.

Topics:

The main emphasis is on Deep Learning, in particular on DNNs (Deep Neural Networks), their algorithms, and their efficient implementation using custom and off-the-shelf processors and accelerators.
In this course we treat, among others, the following topics:

Most of the topics will be supplemented by very elaborate hands-on exercises.
For a preliminary lecture overview see: schedule.

Handouts

The lecture slides will be made available during the course; see also below.
Mandatory reading material:

Suggested background material:
Check YouTube presentation: Design for Highly Flexible and Energy-Efficient Deep Neural Network Accelerators [Yu-Hsin Chen]


Slides (per topic; see also the course description)

See also oncourse Intelligent Architectures 5LIL0


Student presentations guidelines

As part of this course you have to study a hot topic related to the course material and give a short slide presentation about it. Details will be announced during the lecture.

Guidelines are as follows:

Hands-on lab work

Becoming an expert in Deep Learning and Deep Neural Networks requires that you get your hands dirty and do practical assignments. Therefore, as part of this course, we have put a lot of effort into preparing three very interesting lab assignments. Details will be presented during the course, and the material will be placed on the oncourse 5LIL0 site.
** labs will be put online during the course **

Hands-on 1: DNN design

You will design a Deep Neural Network (DNN) using one of the well-known frameworks. After training, the network will be tuned.
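As an illustration only (not the lab code itself), a basic training loop in PyTorch could look as follows; here model and train_loader stand for whatever network and dataset you choose, and the hyperparameters are placeholders.

    import torch
    import torch.nn as nn

    def train(model, train_loader, epochs=5, lr=1e-3, device="cpu"):
        model.to(device)
        criterion = nn.CrossEntropyLoss()
        optimizer = torch.optim.Adam(model.parameters(), lr=lr)
        for epoch in range(epochs):
            for images, labels in train_loader:
                images, labels = images.to(device), labels.to(device)
                optimizer.zero_grad()
                loss = criterion(model(images), labels)
                loss.backward()
                optimizer.step()
            print(f"epoch {epoch}: last-batch loss {loss.item():.3f}")

Tuning can then mean, for example, adjusting the learning rate, the number of epochs, or the network architecture based on validation accuracy.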

Hands-on 2: DNN implementation on GPUs

Graphics processing units (GPUs) can contain up to thousands of Processing Engines (PEs) and achieve performance levels of TeraFLOPs (10^12 floating-point operations per second). In the past, GPUs were very dedicated, not generally programmable, and could only be used to speed up graphics processing. Today they are becoming more and more general purpose. Lately they also support Deep Neural Networks (DNNs) through smaller data types (e.g. 16-bit floating point) and special units that speed up learning and inference.
In this lab you are asked to map a DNN efficiently onto a GPU, using all the tricks you can play.
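As a sketch of two common GPU tricks hinted at above (moving the model to the GPU and using 16-bit mixed precision), the snippet below uses PyTorch's automatic mixed precision support and assumes a CUDA-capable GPU; it is illustrative, not the required lab approach, and the layer and batch sizes are arbitrary.

    import torch
    import torch.nn as nn

    device = torch.device("cuda")              # assumes a CUDA-capable GPU
    model = nn.Sequential(nn.Linear(1024, 1024), nn.ReLU(),
                          nn.Linear(1024, 10)).to(device)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    scaler = torch.cuda.amp.GradScaler()       # loss scaling against FP16 underflow

    x = torch.randn(64, 1024, device=device)
    y = torch.randint(0, 10, (64,), device=device)

    with torch.cuda.amp.autocast():            # run matmuls in float16 where safe
        loss = nn.functional.cross_entropy(model(x), y)
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()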

Hands-on 3: DNN implementation on Embedded ASIP

In this lab we will map a Deep Neural Network (DNN) to an Application Specific Instruction-set Processor (ASIP).
We will use the AivoTTA from Tampere University as a target platform. You can tune the platform by adding specific function units.
See the lab3 assignment.
Further files and details are on oncourse.tue.nl

Examination

The examination will be oral, covering the treated course theory, the lab report(s), and the studied articles.
Likely week: 4th week of January. We will discuss the dates with you.
Grading depends on your results on the theory, the lab exercises and their defense, and your presentation.

Related material and other links



Back to homepage of Henk Corporaal