29443 - Computer Vision

Course Unit Page

  • Teacher Annalisa Franco

  • Learning modules Annalisa Franco (Module 1)
    Matteo Ferrara (Module 2)

  • Credits 6

  • SSD ING-INF/05

  • Teaching Mode Traditional lectures (Module 1)
    Traditional lectures (Module 2)

  • Language Italian

  • Campus of Cesena

  • Degree Programme Second cycle degree programme (LM) in Computer Science and Engineering (cod. 8614)

  • Course Timetable from Sep 21, 2021 to Dec 21, 2021

    Course Timetable from Sep 28, 2021 to Dec 13, 2021


This teaching activity contributes to the achievement of the Sustainable Development Goals of the UN 2030 Agenda.

Quality education; Industry, innovation and infrastructure

Academic Year 2021/2022

Learning outcomes

The course aims at providing the notions and tools necessary for the design and implementation of automatic systems able to analyze digital images for object detection and recognition. In particular, the course focuses mainly on techniques for feature extraction from digital images and on the application of these techniques to typical computer vision problems such as segmentation, localization, classification and similarity search. Both traditional approaches and deep-learning-based solutions will be analyzed, with examples from real-life applications.


Course contents

  • Basic techniques for digital image processing and filtering 
  • Feature extraction
    • Color features:

      - color histograms and similarity metrics;

      - color moments

    • Texture features:

      - gray-level co-occurrence matrix and related measures (entropy, contrast, homogeneity, etc.);

      - Gabor filters: filter banks;

      - Haar features: integral image and efficient feature extraction;

      - Local Binary Pattern;


    • Shape features:

      - Object contour extraction and one-dimensional shape representations;

      - Shape descriptors, Fourier descriptors;

      - Invariant moments.

    • Handcrafted features vs Representation learning
  • Image stitching, 2D image registration and Visual SLAM
    • Keypoints and local descriptors:

      - Keypoint detection: Harris corner detector;

      - Scale-invariant detectors: Harris-Laplace, Laplacian of Gaussian, Difference of Gaussians;

      - Keypoints and descriptors: SIFT, SURF, BRIEF, Histogram of Oriented Gradients;

      - RANSAC algorithm for feature matching.

    • Application to robotics: Visual SLAM (Simultaneous Localization and Mapping)
  • Semantic segmentation in digital images
    • Color-based segmentation techniques, Mean Shift algorithm;
    • Deep learning based techniques with applications in satellite and medical images analysis.
  • Recognition “in the wild”
    • Object detection/classification:

      - Color, texture and shape features for content-based image retrieval;

      - Bag of Visual Words;

      - Feature-based rigid template matching and applications to object detection and recognition (e.g. grocery products);

      - Hough Transform;

      - Deep learning techniques for object detection and recognition (e.g. pedestrian and road-sign recognition, object recognition for robotic vision, face detection and recognition).

  • Video surveillance and video analysis
    • Basic techniques for frame subtraction and background modeling;
    • Approaches for object/person tracking and crowd analysis;
    • Human activity detection and recognition.
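To make the color-histogram item in the contents above concrete, here is a minimal NumPy sketch of a joint RGB histogram compared with histogram intersection. The function names and the 8-bins-per-channel choice are illustrative, not taken from the course material:

```python
import numpy as np

def color_histogram(img, bins=8):
    """Quantize each RGB channel into `bins` levels, count joint
    occurrences, and return an L1-normalised histogram."""
    q = (img.astype(np.int64) * bins) // 256            # per-channel bin index
    idx = q[..., 0] * bins * bins + q[..., 1] * bins + q[..., 2]
    hist = np.bincount(idx.ravel(), minlength=bins ** 3).astype(float)
    return hist / hist.sum()

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]; 1 means identical normalised histograms."""
    return np.minimum(h1, h2).sum()
```

Histogram intersection returns 1 for identical normalized histograms and decreases as the color distributions of the two images diverge.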
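Similarly, the integral image listed under Haar features can be sketched as follows; once the table is built, `rect_sum` evaluates any rectangle sum in four lookups, which is what makes Haar feature extraction efficient. Again an illustrative sketch, not course code:

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero row/column prepended,
    so ii[r, c] is the sum of img[:r, :c]."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = img.cumsum(0).cumsum(1)
    return ii

def rect_sum(ii, r, c, h, w):
    """Sum of the h x w rectangle with top-left corner (r, c),
    in O(1) via four table lookups."""
    return ii[r + h, c + w] - ii[r, c + w] - ii[r + h, c] + ii[r, c]

def haar_two_rect_vertical(ii, r, c, h, w):
    """Two-rectangle Haar-like feature: top half minus bottom half."""
    half = h // 2
    return rect_sum(ii, r, c, half, w) - rect_sum(ii, r + half, c, half, w)
```

A uniform image yields a zero response for the two-rectangle feature, while a horizontal edge inside the window yields a large positive or negative value.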
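Finally, the basic 3x3 Local Binary Pattern operator from the texture-features item can be written as below; this is an illustrative sketch, and the clockwise bit ordering of the neighbours is a free choice:

```python
import numpy as np

def lbp_8neighbours(img):
    """Basic 3x3 Local Binary Pattern: each interior pixel is encoded
    as an 8-bit code, one bit per neighbour that is >= the centre."""
    c = img[1:-1, 1:-1]
    # neighbour offsets, clockwise from the top-left corner
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dr, dc) in enumerate(shifts):
        neigh = img[1 + dr: img.shape[0] - 1 + dr,
                    1 + dc: img.shape[1] - 1 + dc]
        code |= (neigh >= c).astype(np.uint8) << bit
    return code
```

A histogram of the resulting codes over an image region is the usual LBP texture descriptor.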


Readings/Bibliography

  • Zhang, Lipton, Li, Smola, Dive into Deep Learning, https://d2l.ai, 2020.
  • Elgendy, "Deep Learning for Vision Systems", Manning, 2020.
  • Forsyth and Ponce, Computer Vision a modern approach, Pearson 2012.
  • Kaehler and Bradski, Learning OpenCV 3, O'Reilly 2017.
  • Shi, "Emgu CV Essentials", Packt publishing 2013.
  • Gonzalez and Woods, Elaborazione delle immagini digitali, Prentice Hall, 3rd edition, 2008.

Teaching methods


Laboratory exercises based on public multi-platform computer vision libraries (e.g. OpenCV) and deep learning frameworks (e.g. TensorFlow, PyTorch).

As concerns the teaching methods of this course unit, all students must attend Modules 1 and 2 on Health and Safety online.

Assessment methods

The final exam aims to evaluate the achievement of the educational objectives:

  • Knowing the main techniques for the extraction of shape, color, and texture features;
  • Understanding the image representation techniques based on keypoints and local descriptors;
  • Being able to design and implement object detection/recognition applications, based both on handcrafted features and on deep learning techniques;
  • Understanding the main techniques for video analysis and tracking and for human activity detection and recognition.

The examination consists of the development and discussion of a project, carried out individually or in a group, and of an oral test. The project is discussed during the oral exam, and an overall mark is given.

Teaching tools

  • Teacher's slides
  • Code traces for laboratory exercises
  • Emgu CV library (C# wrapper for OpenCV)
  • Deep learning frameworks


Office hours

See the website of Annalisa Franco

See the website of Matteo Ferrara