Dissertation topics suggested by the teacher.
Examples of available projects and theses. Students are nonetheless advised to ask the teacher for information, as proposals may be updated or added, including collaborations with external partners.
Design of Digital Hardware for Deep Learning
For Master students:
- Precision-Reconfigurable Tensor Processing Units for Ultra-Low Power Inference & Learning
- Programmable Dispatching for Flexible Stationarity Neural Processing Unit
- Integrating SRAM-Based Analog In-Memory Computing into a Digital Neural Processing Unit
- Configurable Ultra-Low-Latency High-Bandwidth Memory Interconnection for Heterogeneous Accelerator Support
- Emulation of Error-Injectable Memories for High-Resilience, Low-Power Deep Neural Networks
For Bachelor (major effort) and Master students:
- Definition of a Testbench for High-Bandwidth, Ultra-Low-Latency On-Chip Interconnects for Hardware Accelerator Integration
- Exploration and Comparison of Systolic and High-Fanout Non-Systolic Digital Architectures for Neural Network Acceleration in Neural Processing Units
Embedded Systems / Microcontrollers
For Master students:
- Dynamic Linking of Code to Enable Highly-Scalable Deployment of Deep Neural Networks on Memory-Constrained Platforms
- Automatic Generation and Tuning of Inference and Training Code Targeted at Hardware-Accelerated RISC-V Platforms
- sEMG Analysis Based on Embedded Deep Learning (in cooperation with Prof. Benatti, UNIMORE)
- Ultra-Low-Power Autonomous Deep Learning-Based Nano-Drones (in cooperation with Dr. Palossi, IDSIA Lugano)
For Bachelor (major effort) and Master students:
- Integration of Hardware Accelerators into Neural Network Deployment Flows
Deep Learning / Artificial Intelligence
For Master students:
- Compression Techniques for Latent Representations of Data in Continual Learning Settings
- Integration of Quantized Continual Learning in the Avalanche Framework
- Automatic Generation and Tuning of Inference and Training Code Targeted at Hardware-Accelerated RISC-V Platforms
- Noisy Learning towards Deployment in Analog In-Memory Computing Scenarios
For Bachelor (major effort) and Master students:
- Automated Benchmarking Suite for Testing New Neural Architectures
- Mixed-Precision DNN Inference (FP32/16/8 and INT8)
- Training Setup Based on the Teacher/Student Methodology
Recent dissertations supervised by the teacher.
First cycle degree programmes dissertations
- Implementation of Machine Learning Models for Monocular Depth Estimation on Embedded Systems
- PULP-Llama2: A Language Model on an Energy-Efficient Parallel-Computing Embedded Architecture
Second cycle degree programmes dissertations
- End-to-End Unsupervised Self-Learning of a Neural Network for Object Detection
- Design and Analysis of PID-Based Control Systems for Motor Applications
- Federated Neural Radiance Fields on a Swarm of Miniaturized Robots
- Integration of a PULP-based heterogeneous cluster into an ESP system-on-chip
- Latent Replay-Based On-Device Continual Learning using Transformers on Edge Ultra-Low-Power IoT Platforms
- Neural Networks-Based MVDR Beamforming for Real-Time Speech Enhancement on an Ultra-Low Power Microcontroller
- Optimizing Small Language Models: An Experimental Investigation in Compressing Distilled LLaMA Architectures
- Self-learning AI-driven FOC for ARCP based traction inverter
- Simulation and Deployment Support for Transformers on Snitch-based RISC-V Hardware Accelerator
- Softex: Softmax Computing Engine for Fast Exponential Acceleration