Academic Year 2018/2019

  • Modules: Enrico Gallinucci (Module 1), Andrea Mordenti (Module 2)
  • Teaching Mode: Traditional lectures (Module 1), Traditional lectures (Module 2)
  • Campus: Cesena
  • Degree Programme: Second cycle degree programme (LM) in Computer Science and Engineering (cod. 8614)

Learning outcomes

At the end of the course, the student:

  • Knows the applications of Big Data technologies and the respective challenges
  • Knows the available hardware and software architectures to handle Big Data
  • Knows the techniques to store the data, and the programming languages and paradigms generally adopted in this kind of system
  • Knows the design methodologies for the different kinds of applications in the area of Big Data
  • Acquires practical expertise in using the different technologies through laboratory sessions and projects

In particular, the main technologies used in practical exercises will be NoSQL databases and the Hadoop platform: Hive, Spark, Tez, Dremel, Giraph, Storm, Mahout, and Open R.

Course contents

For real-time updates on the course's activities, please subscribe to the distribution list enrico.gallinucci.bigdata

1. Introduction to the course and to Big Data: what they are and how to use them

2. Cluster computing to handle Big Data

  • Parallel computing architectures
  • The Apache Hadoop framework and its modules (HDFS, Yarn)
  • Hadoop-specific data structures (Apache Parquet; a short sketch follows this list)
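
A minimal PySpark sketch of writing and reading Apache Parquet, assuming a local Spark installation (`pip install pyspark`); the file path and data are illustrative, not course material:

```python
# Hypothetical sketch: Apache Parquet with PySpark (path and data are made up).
from pyspark.sql import SparkSession

spark = SparkSession.builder.master("local[2]").appName("parquet-demo").getOrCreate()

df = spark.createDataFrame(
    [(1, "sensor-a", 21.5), (2, "sensor-b", 19.8)],
    ["id", "source", "value"],
)

# Parquet stores data column by column, which enables compression and
# column pruning when only a subset of the columns is read back.
df.write.mode("overwrite").parquet("/tmp/readings.parquet")

spark.read.parquet("/tmp/readings.parquet").select("source", "value").show()

spark.stop()
```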

3. The MapReduce paradigm: basic principles, limitations, design of algorithms
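
To give a flavour of the paradigm, here is a minimal word-count sketch that simulates the three phases (map, shuffle, reduce) locally in plain Python; it illustrates the programming model only and does not use the Hadoop API:

```python
# Word count in the MapReduce style, simulated locally (not the Hadoop API).
from itertools import groupby
from operator import itemgetter

def map_phase(line):
    # Map: emit one (word, 1) pair per word in the input line.
    for word in line.split():
        yield (word.lower(), 1)

def reduce_phase(word, counts):
    # Reduce: sum all partial counts associated with a single key.
    return (word, sum(counts))

lines = ["big data is big", "data needs a cluster"]

# Map over the whole input, collecting the intermediate pairs.
intermediate = [pair for line in lines for pair in map_phase(line)]

# Shuffle: bring together all pairs sharing the same key
# (a simple sort + groupby here; Hadoop does this across the cluster).
intermediate.sort(key=itemgetter(0))

# Reduce: one call per distinct key.
result = [reduce_phase(word, (count for _, count in group))
          for word, group in groupby(intermediate, key=itemgetter(0))]

print(result)  # [('a', 1), ('big', 2), ('cluster', 1), ('data', 2), ('is', 1), ('needs', 1)]
```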

4. The Apache Spark system

  • Architecture, data structures, basic principles
  • Data partitioning and shuffling (a short sketch follows this list)
  • Optimization of the computation
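
The following PySpark sketch, assuming a local `pyspark` installation (not part of the official course material), illustrates the shuffling point: reduceByKey is a wide transformation, so records sharing the same key must end up in the same partition, which triggers a shuffle.

```python
# Hypothetical sketch: a wide transformation (reduceByKey) causing a shuffle.
from pyspark.sql import SparkSession

spark = SparkSession.builder.master("local[2]").appName("shuffle-demo").getOrCreate()
sc = spark.sparkContext

# An RDD of (key, value) pairs spread over 4 partitions.
pairs = sc.parallelize([("a", 1), ("b", 1), ("a", 1), ("c", 1)], numSlices=4)

# reduceByKey moves all values with the same key to one partition (shuffle)
# and lets us choose the number of partitions of the result.
counts = pairs.reduceByKey(lambda x, y: x + y, numPartitions=2)

print(counts.getNumPartitions())  # 2
print(counts.collect())           # [('a', 2), ('b', 1), ('c', 1)] (order may vary)

spark.stop()
```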

5. SQL on Big Data with Spark SQL
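
A minimal Spark SQL sketch (table and column names are purely illustrative): a DataFrame is registered as a temporary view and then queried with standard SQL.

```python
# Hypothetical sketch: querying a DataFrame with SQL via Spark SQL.
from pyspark.sql import SparkSession

spark = SparkSession.builder.master("local[2]").appName("sparksql-demo").getOrCreate()

df = spark.createDataFrame(
    [("Cesena", 97000), ("Bologna", 390000)],
    ["city", "population"],
)

# Register the DataFrame as a temporary view so it can be queried with SQL.
df.createOrReplaceTempView("cities")

spark.sql("SELECT city FROM cities WHERE population > 100000").show()

spark.stop()
```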

6. Data streaming

  • The architecture to handle data streaming
  • Approximate algorithms in the streaming context (a classic example is sketched below)
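
As one classic example of a streaming technique (the algorithms actually covered in class may differ), the sketch below implements reservoir sampling, which maintains a fixed-size uniform sample over a stream of unknown length using constant memory.

```python
# Reservoir sampling (Algorithm R): keep k items uniformly at random
# from a stream whose length is not known in advance.
import random

def reservoir_sample(stream, k):
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)      # fill the reservoir with the first k items
        else:
            j = random.randint(0, i)    # each item survives with probability k/(i+1)
            if j < k:
                reservoir[j] = item
    return reservoir

print(reservoir_sample(range(1_000_000), 5))
```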

7. Big Data Analysis: a complete case study

8. Taking a Data Mining problem to a Big Data platform

Readings/Bibliography

Main readings:

  • Tom White. Hadoop - The Definitive Guide (4th edition). O'Reilly, 2015
  • Matei Zaharia, Holden Karau, Andy Konwinski, Patrick Wendell. Learning Spark. O'Reilly, 2015
  • Andrew G. Psaltis. Streaming Data - Understanding the real-time pipeline. Manning, 2017

Further readings will be mentioned during the course.

Teaching methods

Lessons and practical exercises

Assessment methods

The exam consists of an oral examination on all the covered topics and the discussion of a project.

The goal of the project (to be arranged with the lecturer) is to identify a sufficiently large dataset, define an application that analyzes the data (using the techniques and tools learned throughout the course), and write a short report. Groups of up to 2 people can be formed. The project is worth 0 to 4 points, which will be added to the grade obtained in the oral examination. Alternative projects (e.g., the implementation of a data mining algorithm on a Big Data platform, or the experimental evaluation of a new tool within the Hadoop framework) can be discussed with the lecturer upon request.

Teaching tools

Practical exercises will rely on a virtual cluster of 10 nodes, pre-configured with the Cloudera Express distribution. An SSH client will be used to connect to the cluster and interact with the available software tools (mainly Apache Hadoop and Apache Spark).

Office hours

See the website of Enrico Gallinucci

See the website of Andrea Mordenti