93398 - ARCHITECTURES FOR ARTIFICIAL INTELLIGENCE M

Academic Year 2021/2022

  • Lecturer: Luca Benini
  • Credits: 6
  • SSD: ING-INF/01
  • Teaching language: English
  • Modules: Luca Benini (Module 1), Francesco Conti (Module 2)
  • Teaching mode: Traditional, in-person lectures (Module 1); Traditional, in-person lectures (Module 2)
  • Campus: Bologna
  • Degree programme: Second-cycle degree programme (LM) in Electronic Engineering (cod. 0934)

    Also valid for the Second-cycle degree programme (LM) in Artificial Intelligence (cod. 9063)

Learning outcomes

The main goal of the class is to enable students to specify, configure, program and verify complex embedded electronic systems for the Internet of Things and for Artificial Intelligence. The importance of hardware-software interaction will be emphasized, as all practical IoT and AI systems are programmable. The class will provide working knowledge of state-of-the-art hardware platforms used in embedded AI and IoT applications, spanning a wide range of power and cost vs. performance tradeoffs. Detailed coverage will be given of the software abstractions and methodologies for developing applications that leverage the capabilities of the above-mentioned platforms. Design automation tools and flows will also be covered.

Course contents

Module 1 (for students of 93398 and 91259, by Prof. L. Benini)

  1. From ML to DNNs - a computational perspective
    1. Introduction to key computational kernels (dot-product, matrix multiply, ...); see the sketch after this list
    2. Inference vs. training: workload analysis and characterization
    3. The NN computational zoo: DNNs, CNNs, RNNs, GNNs, Attention-based Networks
  2. Running ML workloads on programmable processors
    1. recap of processor instruction set architecture (ISA) with focus on data processing
    2. improving processor ISAs for ML: RISC-V and ARM use cases
    3. fundamentals of parallel processor architecture and parallelization of ML workloads
  3. Algorithmic optimizations for ML
    1. Key bottlenecks and a taxonomy of optimization techniques
    2. Algorithmic techniques: Strassen, Winograd, FFT
    3. Topology optimization: efficient NN models - depthwise convolutions, inverted bottlenecks, introduction to Neural Architecture Search
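
As a concrete reference for the kernels listed in point 1.1, here is a minimal sketch of a naive single-precision matrix multiply in plain C; the function name and data layout are illustrative assumptions rather than course material, and real ML kernels add blocking, vectorization (e.g. RISC-V "V" or ARM NEON) and parallelization on top of this structure.

```c
#include <stddef.h>

/* Naive dense matrix multiply: C[M][N] += A[M][K] * B[K][N].
 * Matrices are stored row-major in flat arrays.
 * The innermost loop is the dot-product kernel that dominates
 * both inference and training workloads. */
void matmul_f32(const float *A, const float *B, float *C,
                size_t M, size_t N, size_t K)
{
    for (size_t m = 0; m < M; m++) {
        for (size_t n = 0; n < N; n++) {
            float acc = C[m * N + n];
            for (size_t k = 0; k < K; k++) {
                acc += A[m * K + k] * B[k * N + n];
            }
            C[m * N + n] = acc;
        }
    }
}
```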

Module 2 (for students of 93398, by Prof. F. Conti)

  1. Representing data in Deep Neural Networks
    1. Recap of canonical DNN loops – a tensor-centric view
    2. Data quantization in Deep Neural Networks; see the sketch after this list
    3. Brief notes on data pruning
  2. From training to software-based deployment
    1. High-performance embedded systems (NVIDIA Xavier, Huawei Ascend)
    2. Microcontroller-based systems (STM32)
  3. From software to hardware acceleration
    1. Principles of DNN acceleration: spatial and temporal data reuse; dataflow loop nests and taxonomy; data tiling
    2. The Neural Engine zoo: convolvers, matrix product accelerators, systolic arrays – examples from the state-of-the-art
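
As an informal illustration of the data quantization topic in point 1.2, the sketch below applies symmetric, per-tensor 8-bit quantization to a float array; the function name and the max-abs scale choice are assumptions for illustration, and actual deployment flows may instead use asymmetric or per-channel schemes.

```c
#include <math.h>
#include <stddef.h>
#include <stdint.h>

/* Symmetric per-tensor quantization of float data to signed 8-bit.
 * The scale maps the largest absolute value onto the range [-127, 127];
 * original values are recovered approximately as q[i] * scale. */
float quantize_int8(const float *x, int8_t *q, size_t n)
{
    float max_abs = 0.0f;
    for (size_t i = 0; i < n; i++) {
        float a = fabsf(x[i]);
        if (a > max_abs) max_abs = a;
    }
    float scale = (max_abs > 0.0f) ? max_abs / 127.0f : 1.0f;
    for (size_t i = 0; i < n; i++) {
        float v = roundf(x[i] / scale);
        if (v > 127.0f)  v = 127.0f;
        if (v < -127.0f) v = -127.0f;
        q[i] = (int8_t)v;
    }
    return scale;
}
```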

Module 2 (for students of 91259, by Prof. G. Zavattaro)

Introduction to parallel programming.

Parallel programming patterns: embarrassingly parallel, decomposition, master/worker, scan, reduce, ...

Shared-Memory programming with OpenMP.

OpenMP programming model: the “omp parallel” construct, scoping constructs, and other work-sharing constructs.
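
As an informal illustration of the “omp parallel” construct and of the reduce pattern listed above, here is a minimal sketch of a parallel reduction in C with OpenMP (compile with an OpenMP-enabled compiler, e.g. gcc -fopenmp); it is an assumed example in the spirit of the module, not code taken from the lectures.

```c
#include <stdio.h>
#include <omp.h>

int main(void)
{
    const int n = 1000000;
    double sum = 0.0;

    /* Loop iterations are shared among threads; the reduction clause
     * gives each thread a private partial sum and combines them at the end. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < n; i++) {
        sum += 1.0 / (double)(i + 1);
    }

    printf("partial harmonic sum = %f (max threads: %d)\n",
           sum, omp_get_max_threads());
    return 0;
}
```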

Some examples of applications.

Readings/Bibliography

Refer to Virtuale

Teaching methods

Classroom lectures for the theory. In addition, both Module 1 and Module 2 will include hands-on sessions requiring a student laptop.

Assessment methods

Written exam with oral discussion

Teaching tools

Refer to Virtuale

Office hours

See the website of Luca Benini

See the website of Francesco Conti