29443 - Computer Vision

Course Unit Page


This teaching activity contributes to the achievement of the Sustainable Development Goals of the UN 2030 Agenda.

Quality education · Industry, innovation and infrastructure

Academic Year 2019/2020

Learning outcomes

The course aims at providing the notions and tools necessary for the design and implementation of automatic systems able to analyze digital images for object detection and recognition. In particular, the course focuses mainly on techniques for feature extraction from digital images and on the application of these techniques to typical computer vision problems such as segmentation, localization, classification and similarity search. Both traditional approaches and deep-learning-based solutions will be analyzed, with examples from real-life applications.


Course contents

  • Basic techniques for digital image processing and filtering 
  • Feature extraction
    • Color features:

      - color histograms and similarity metrics;

      - color moments

    • Texture features:

      - gray-level co-occurrence matrix and related measures (entropy, contrast, homogeneity, etc.);

      - Gabor filters: filter banks;

      - Haar features: integral image and efficient feature extraction;

      - Local Binary Patterns;

    • Shape features:

      - Object contour extraction and one-dimensional shape representations;

      - Shape descriptors, Fourier descriptors;

      - Invariant moments.

    • Handcrafted features vs Representation learning
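
To make the color-histogram material above concrete, here is a minimal numpy-only sketch of a global color histogram with histogram-intersection similarity (the function names and the 8-bins-per-channel quantization are illustrative choices, not the course's reference implementation):

```python
import numpy as np

def color_histogram(img, bins=8):
    """Quantize each RGB channel into `bins` levels and count joint occurrences."""
    q = (img.astype(np.int64) * bins) // 256           # per-pixel quantized channels
    idx = (q[..., 0] * bins + q[..., 1]) * bins + q[..., 2]
    hist = np.bincount(idx.ravel(), minlength=bins ** 3).astype(float)
    return hist / hist.sum()                           # normalize to a distribution

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]; 1 means identical distributions."""
    return np.minimum(h1, h2).sum()

rng = np.random.default_rng(0)
a = rng.integers(0, 256, (32, 32, 3), dtype=np.uint8)
sim = histogram_intersection(color_histogram(a), color_histogram(a))
```

Because the histogram discards spatial layout, two images with the same colors in different positions compare as identical; this is exactly the limitation that texture and shape features address.
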
  • Image stitching and 2D image registration
    • Keypoints and local descriptors:

      - Keypoint detection: Harris corner detector;

      - Scale-invariant detectors: Harris-Laplace, Laplacian of Gaussian, Difference of Gaussians;

      - Keypoints and descriptors: SIFT, SURF, BRIEF, Histogram of Oriented Gradients;

      - RANSAC algorithm for robust feature matching.
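
The RANSAC loop mentioned above (sample a minimal set, fit a model, count inliers, keep the best) is the same whether the model is a homography between matched keypoints or, as in this deliberately simplified sketch, a 2D line; all names are illustrative:

```python
import numpy as np

def ransac_line(points, n_iters=200, thresh=0.1, rng=None):
    """Fit y = a*x + b robustly: sample 2 points, count inliers, keep the best."""
    rng = rng or np.random.default_rng(0)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iters):
        i, j = rng.choice(len(points), size=2, replace=False)
        (x1, y1), (x2, y2) = points[i], points[j]
        if x1 == x2:                       # degenerate sample: vertical line
            continue
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = np.abs(points[:, 1] - (a * points[:, 0] + b)) < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # refit on the inlier set with least squares
    x, y = points[best_inliers, 0], points[best_inliers, 1]
    a, b = np.polyfit(x, y, 1)
    return a, b, best_inliers

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 100)
pts = np.stack([x, 2 * x + 1], axis=1)     # 100 points on y = 2x + 1
pts[:20, 1] += rng.uniform(5, 10, 20)      # 20 gross outliers
a, b, inl = ransac_line(pts)
```

In image stitching the same loop runs over putative keypoint matches, the minimal sample is four correspondences, and the model is the homography that registers one image onto the other.
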

  • Semantic segmentation in digital images
    • Color-based segmentation techniques, Mean Shift algorithm;
    • Deep learning based techniques with applications in satellite and medical images analysis.
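
The Mean Shift idea above (iteratively move each point to the mean of its neighbourhood until it settles on a density mode) can be sketched in a few lines; this toy version clusters scalar values rather than the joint color/position feature vectors used for image segmentation:

```python
import numpy as np

def mean_shift_1d(data, bandwidth=1.0, n_iters=50):
    """Move every point to the mean of its neighbours within `bandwidth`."""
    modes = data.astype(float).copy()
    for _ in range(n_iters):
        for i, m in enumerate(modes):
            neighbours = data[np.abs(data - m) < bandwidth]
            modes[i] = neighbours.mean()   # shift toward the local density mean
    return modes

data = np.array([1.0, 1.2, 0.9, 5.0, 5.1, 4.8])
modes = mean_shift_1d(data)
# points sharing a final mode belong to the same cluster/segment
```

The number of clusters is not fixed in advance; it emerges from the bandwidth, which is one reason Mean Shift is popular for color-based segmentation.
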
  • Recognition “in the wild”
    • Object detection/classification:

      - Color, texture and shape features for content-based image retrieval;

      - Bag of Visual Words;

      - Feature-based rigid template matching and applications to object detection and recognition (e.g. grocery products);

      - Hough Transform;

      - Deep learning techniques for object detection and recognition (e.g. pedestrian and road sign recognition, object recognition for robotic vision, face detection and recognition).
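
A minimal sketch of the Hough Transform for line detection named above: every edge point votes for all (rho, theta) parameter pairs of lines passing through it, and peaks in the accumulator reveal the lines (the discretization choices here are illustrative):

```python
import numpy as np

def hough_lines(edge_points, img_size, n_theta=180, n_rho=None):
    """Vote in (rho, theta) space; each edge point votes in every theta column."""
    diag = int(np.ceil(np.hypot(*img_size)))
    n_rho = n_rho or 2 * diag + 1
    thetas = np.linspace(0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((n_rho, n_theta), dtype=int)
    for y, x in edge_points:
        rhos = x * np.cos(thetas) + y * np.sin(thetas)  # rho for every theta
        rho_idx = np.round(rhos).astype(int) + diag     # shift to non-negative bins
        acc[rho_idx, np.arange(n_theta)] += 1
    return acc, thetas, diag

# synthetic edge map: 50 points on the horizontal line y = 10
points = [(10, x) for x in range(50)]
acc, thetas, diag = hough_lines(points, (64, 64))
rho_i, theta_i = np.unravel_index(acc.argmax(), acc.shape)
```

The accumulator peak sits at theta = pi/2 and rho = 10, i.e. exactly the line the points were drawn from; the same voting scheme generalizes to circles and other parametric shapes.
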

  • Video surveillance and video analysis
    • Basic techniques for frame subtraction and background modeling
    • Deep learning approaches for object/person tracking and crowd analysis;
    • Human activity detection and recognition.
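
The frame subtraction and background modeling topic above can be illustrated with a running-average background model: the background adapts slowly to the scene, and pixels far from it are flagged as foreground (the threshold and learning rate below are illustrative values, not course-prescribed ones):

```python
import numpy as np

def update_background(bg, frame, alpha=0.05):
    """Exponential running average: slow adaptation to scene changes."""
    return (1 - alpha) * bg + alpha * frame

def foreground_mask(bg, frame, thresh=30):
    """Pixels far from the background model are flagged as foreground."""
    return np.abs(frame.astype(float) - bg) > thresh

# static 8x8 scene with a bright 'object' entering in the last frame
frames = [np.full((8, 8), 50, dtype=np.uint8) for _ in range(10)]
moving = frames[-1].copy()
moving[2:4, 2:4] = 200                     # 2x2 foreground object
frames[-1] = moving

bg = frames[0].astype(float)
for f in frames[1:-1]:
    bg = update_background(bg, f)
mask = foreground_mask(bg, frames[-1])
```

Only the four pixels of the new object end up in the mask; a small `alpha` keeps slow illumination changes in the background while still detecting fast-moving objects.
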


Readings/Bibliography

  • Forsyth and Ponce, Computer Vision: A Modern Approach, Pearson, 2012.
  • Kaehler and Bradski, Learning OpenCV 3, O'Reilly, 2017.
  • Shi, Emgu CV Essentials, Packt Publishing, 2013.
  • Gonzalez and Woods, Elaborazioni delle immagini digitali (Digital Image Processing), Prentice Hall, 3rd edition, 2008.

Teaching methods


Laboratory exercises based on public multi-platform computer vision libraries (e.g. OpenCV) and deep learning frameworks (e.g. TensorFlow, PyTorch).

Assessment methods

The final exam aims to evaluate the achievement of the educational objectives:

  • Knowing the main techniques for extracting shape, color, and texture features;
  • Understanding the image representation techniques based on keypoints and local descriptors;
  • Reaching the ability to design and implement object detection/recognition applications, based both on handcrafted features and on deep learning techniques;
  • Understanding the main techniques for video analysis, tracking, and human activity detection and recognition.

The examination consists of the development and discussion of a homework project, carried out individually or in a group, and of an oral test. The project discussion takes place at the same time as the oral exam, and an overall evaluation is formulated.

Teaching tools

  • Teacher's slides
  • Code traces for laboratory exercises
  • Emgu CV library (C# wrapper for OpenCV)
  • Deep learning frameworks


Office hours

See the website of Annalisa Franco

See the website of Matteo Ferrara