91259 - Architecture and Platforms for Artificial Intelligence

Academic Year 2021/2022

  • Lecturer: Luca Benini
  • Credits: 6
  • SSD: INF/01
  • Language: English

Learning outcomes

At the end of the course, the student has a deep understanding of the computational requirements that machine-learning workloads place on computing systems, of the main architectures for accelerating machine-learning workloads and the heterogeneous architectures for embedded machine learning, and of the most popular platforms made available by cloud providers to specifically support machine/deep learning applications.

Course contents


Module 1 (for students of 93398 and 91259, by Prof. L. Benini)

  1. From ML to DNNs - a computational perspective
    1. Introduction to key computational kernels (dot-product, matrix multiply...); a minimal sketch follows this list
    2. Inference vs. training: workload analysis and characterization
    3. The NN computational zoo: DNNs, CNNs, RNNs, GNNs, Attention-based Networks
  2. Running ML workloads on programmable processors
    1. Recap of processor instruction set architectures (ISAs), with a focus on data processing
    2. Improving processor ISAs for ML: RISC-V and ARM use cases
    3. Fundamentals of parallel processor architecture and parallelization of ML workloads
  3. Algorithmic optimizations for ML
    1. Key bottlenecks and a taxonomy of optimization techniques
    2. Algorithmic techniques: Strassen, Winograd, FFT
    3. Topology optimization: efficient NN models - depthwise convolutions, inverted bottlenecks, introduction to Neural Architecture Search
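
As a taste of the computational kernels in item 1.1, here is a minimal C sketch; the function names and the row-major layout are illustrative assumptions, not course material. A dense layer, a convolution (via im2col), and an attention score all reduce to variants of these two loops.

  #include <stddef.h>

  /* Dot product: the innermost kernel of most NN layers. */
  float dot(const float *x, const float *y, size_t n) {
      float acc = 0.0f;
      for (size_t i = 0; i < n; i++)
          acc += x[i] * y[i];          /* one multiply-accumulate per element */
      return acc;
  }

  /* Naive matrix multiply C = A * B with A (M x K), B (K x N), C (M x N),
   * all row-major. Every output element is itself a dot product. */
  void matmul(const float *A, const float *B, float *C,
              size_t M, size_t N, size_t K) {
      for (size_t i = 0; i < M; i++)
          for (size_t j = 0; j < N; j++) {
              float acc = 0.0f;
              for (size_t k = 0; k < K; k++)
                  acc += A[i * K + k] * B[k * N + j];
              C[i * N + j] = acc;
          }
  }

The triple loop also makes the workload analysis of item 1.2 concrete: matmul performs 2*M*N*K floating-point operations on only M*K + K*N + M*N operands, and this ratio of compute to data is the starting point for the discussion of accelerator architectures.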

Module 2 (for students of 93398, by Prof. F. Conti)

  1. Representing data in Deep Neural Networks
    1. Recap of canonical DNN loops – a tensor-centric view
    2. Data quantization in Deep Neural Networks (see the quantization sketch after this list)
    3. Brief notes on data pruning
  2. From training to software-based deployment
    1. High-performance embedded systems (NVIDIA Xavier, Huawei Ascend)
    2. Microcontroller-based systems (STM32)
  3. From software to hardware acceleration
    1. Principles of DNN acceleration: spatial and temporal data reuse; dataflow loop nests and taxonomy; data tiling (see the tiled loop nest after this list)
    2. The Neural Engine zoo: convolvers, matrix product accelerators, systolic arrays – examples from the state-of-the-art
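
To make item 1.2 (data quantization) concrete, here is a minimal sketch of symmetric linear int8 quantization; the per-tensor scale, the clipping range, and the function name are illustrative assumptions, since the course may cover other schemes.

  #include <math.h>
  #include <stdint.h>
  #include <stddef.h>

  /* Symmetric linear quantization: a real value r is approximated by s * q,
   * where q is an int8 and s = max|r| / 127 is a per-tensor scale. */
  float quantize_symmetric(const float *r, int8_t *q, size_t n) {
      float amax = 0.0f;
      for (size_t i = 0; i < n; i++)
          if (fabsf(r[i]) > amax) amax = fabsf(r[i]);
      float s = (amax > 0.0f) ? amax / 127.0f : 1.0f;
      for (size_t i = 0; i < n; i++) {
          float t = roundf(r[i] / s);      /* round to nearest integer */
          if (t >  127.0f) t =  127.0f;    /* clip to the int8 range */
          if (t < -127.0f) t = -127.0f;
          q[i] = (int8_t)t;
      }
      return s;  /* keep the scale: the dequantized value is s * q[i] */
  }

Storing q and s instead of r cuts memory traffic by 4x and lets the multiply-accumulates run on cheap integer units, which is the main motivation for quantization on embedded targets.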
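
The temporal-reuse and tiling ideas of item 3.1 can be previewed on the matrix multiply sketched after Module 1; the tile size T and the assumption that M, N, K are multiples of T are illustrative simplifications.

  #include <stddef.h>

  #define T 32  /* tile size; in practice tuned to the target's local memory */

  /* Tiled matrix multiply C += A * B (row-major; M, N, K multiples of T for
   * brevity). Each T x T tile of A and B is reused T times while it sits in
   * fast local memory instead of being re-fetched from DRAM: temporal reuse. */
  void matmul_tiled(const float *A, const float *B, float *C,
                    size_t M, size_t N, size_t K) {
      for (size_t ii = 0; ii < M; ii += T)
          for (size_t jj = 0; jj < N; jj += T)
              for (size_t kk = 0; kk < K; kk += T)
                  /* the inner loops touch only three T x T tiles */
                  for (size_t i = ii; i < ii + T; i++)
                      for (size_t j = jj; j < jj + T; j++) {
                          float acc = C[i * N + j];
                          for (size_t k = kk; k < kk + T; k++)
                              acc += A[i * K + k] * B[k * N + j];
                          C[i * N + j] = acc;
                      }
  }

Reordering and splitting the loops this way changes no arithmetic, only the schedule of memory accesses; the dataflow taxonomies mentioned in item 3.1 classify accelerators by exactly such loop-nest transformations.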

Module 2 (for students of 91259, by Prof. G. Zavattaro)

Introduction to parallel programming.

Parallel programming patterns: embarrassingly parallel, decomposition, master/worker, scan, reduce, ...

Shared-Memory programming with OpenMP.

OpenMP programming model: the “omp parallel” construct, data-scoping clauses, other work-sharing constructs.
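
A minimal, illustrative example of the “omp parallel” model combined with the reduce pattern listed above (not an official course example; compile with, e.g., gcc -fopenmp):

  #include <omp.h>
  #include <stdio.h>

  int main(void) {
      const int n = 1000000;
      double sum = 0.0;

      /* "omp parallel for" splits the iterations across a team of threads;
       * the reduction clause gives each thread a private copy of sum and
       * combines the copies at the end: the classic reduce pattern. */
      #pragma omp parallel for reduction(+:sum)
      for (int i = 0; i < n; i++)
          sum += 1.0 / (double)(i + 1);   /* partial sums of the harmonic series */

      printf("H(%d) = %f (max threads: %d)\n", n, sum, omp_get_max_threads());
      return 0;
  }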

Some examples of applications.

Readings/Bibliography

Refer to Virtuale

Teaching methods

Classroom lectures for theory. In addition, both Module 1 and Module 2 will include hands-on sessions, for which students need to bring their own laptop.

Assessment methods

Written exam with oral discussion

Teaching tools

Refer to Virtuale

Office hours

See the website of Luca Benini

See the website of Gianluigi Zavattaro