93918 - Neural Systems

Academic Year 2021/2022

  • Lecturer: Mauro Ursino
  • Credits: 9
  • SSD: ING-INF/06
  • Language: English
  • Modules: Mauro Ursino (Module 1), Mauro Ursino (Module 2)
  • Teaching Mode: Traditional lectures (Module 1), Traditional lectures (Module 2)
  • Campus: Cesena
  • Degree programme: Second cycle degree programme (LM) in Biomedical Engineering (cod. 9266)

Learning outcomes

At the end of the course the student has acquired theoretical and practical knowledge of the main models of neurons, of neural networks (both artificial and physiologically inspired), of learning techniques for neural systems, and of the problems that each type of network can address. In particular, the student:

- knows the main types of biologically inspired networks;

- knows some fundamental elements of deep learning techniques;

- is able to simulate the behavior of simple neural networks on a computer and to critically evaluate the results;

- is able to link modeling knowledge with aspects of neurophysiology;

- has basic knowledge of brain organization and of the recent problems of cognitive neuroscience.

The student is also able to critically examine the function and role of neural networks in various application fields related to medicine and biology.

Course contents

Models of neural cells

Review of the neuronal cell. The membrane potential and the role of the main ionic species. The concept of the excitable cell and the action potential. The electrical equivalent circuit of the cell.
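
As an illustrative aside in the spirit of the computer exercises (the concentration values below are typical textbook figures, not course data), the equilibrium potential of each ionic species can be computed from the Nernst equation in MATLAB:

    % Nernst equation: E = (R*T)/(z*F) * ln([ion]_out / [ion]_in)
    R = 8.314;            % gas constant, J/(mol*K)
    T = 310;              % body temperature, K
    F = 96485;            % Faraday constant, C/mol
    % Typical mammalian concentrations in mM (illustrative values only)
    c_out = [5 145];      % extracellular: [K+, Na+]
    c_in  = [140 15];     % intracellular: [K+, Na+]
    z     = [1 1];        % valence of each ion
    E_mV  = 1000 * (R*T ./ (z*F)) .* log(c_out ./ c_in);
    fprintf('E_K = %.1f mV, E_Na = %.1f mV\n', E_mV(1), E_mV(2));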

A short review of the Hodgkin-Huxley model. The simplified Hodgkin-Huxley model with two state variables (phase-plane analysis of the fast and slow variables). The FitzHugh-Nagumo model.
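
A minimal MATLAB sketch of the FitzHugh-Nagumo model, assuming the classical parameter values from the literature (a = 0.7, b = 0.8, phi = 0.08) and an illustrative stimulating current:

    % FitzHugh-Nagumo: dv/dt = v - v^3/3 - w + I,  dw/dt = phi*(v + a - b*w)
    a = 0.7; b = 0.8; phi = 0.08;    % classical parameter values
    I = 0.5;                         % constant stimulating current
    fhn = @(t, y) [y(1) - y(1)^3/3 - y(2) + I; phi*(y(1) + a - b*y(2))];
    [t, y] = ode45(fhn, [0 200], [-1; 1]);
    plot(t, y(:, 1));                % v(t) shows repetitive firing for this I
    xlabel('time'); ylabel('fast variable v');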

Introduction to “integrate and fire” models and their advantages. Analysis of the discharge rate of an integrate-and-fire model stimulated with a constant current. Insertion of synaptic conductances into the integrate-and-fire model. Model of a network of interconnected integrate-and-fire neurons. Advantages and limitations of such models.
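
A minimal sketch of a leaky integrate-and-fire neuron driven by a constant current; all parameter values are illustrative, not course-specified. Repeating the loop for several values of I reproduces the discharge-rate curve discussed in class.

    % Leaky integrate-and-fire: tau*dV/dt = -(V - Vrest) + R*I, reset at threshold
    tau = 10e-3; R = 10e6;                  % membrane time constant (s), resistance (Ohm)
    Vrest = -70e-3; Vth = -54e-3; Vreset = -80e-3;   % potentials (V)
    I = 1.8e-9;                             % constant input current (A)
    dt = 1e-4; T = 0.5; n = round(T/dt);    % Euler step and total duration (s)
    V = Vrest; nspikes = 0;
    for k = 1:n
        V = V + dt/tau * (-(V - Vrest) + R*I);
        if V >= Vth                         % threshold crossing: spike and reset
            V = Vreset; nspikes = nspikes + 1;
        end
    end
    fprintf('mean firing rate = %.1f Hz\n', nspikes / T);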

Models of neuronal networks

Simplification of the model: from the “spiking neuron” model to the “firing rate” model. Advantages and limitations of the simplified model. Guidelines for choosing the most appropriate model.

General characteristics of a neural network, and considerations on its different properties. Examples of simple neural networks: the pure feedforward model, the feedforward + feedback model, excitatory and inhibitory neurons. Synaptic learning, connectionism and Hebb's rule. Experimental evidence: homosynaptic and heterosynaptic strengthening and weakening.
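
A sketch of a simple firing-rate network with feedforward and lateral (feedback) connections; the weights and input values are illustrative, and the sigmoidal static characteristic replaces the detailed spiking dynamics:

    % Firing-rate model: tau * dy/dt = -y + g(W*x + L*y)
    g = @(u) 1 ./ (1 + exp(-u));     % sigmoidal activation
    W = [2.0 0.0; 0.0 2.0];          % feedforward (excitatory) weights
    L = [0.0 -1.5; -1.5 0.0];        % lateral (inhibitory feedback) weights
    x = [1.0; 0.8];                  % constant external input
    tau = 10e-3; dt = 1e-4;
    y = zeros(2, 1);
    for k = 1:5000                   % Euler integration for 0.5 s
        y = y + dt/tau * (-y + g(W*x + L*y));
    end
    disp(y')   % the unit receiving the stronger input settles at a higher rate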

Associative memories

Introduction to hetero-associative memories. The conditioned stimulus and the unconditioned stimulus. An example of a hetero-associative memory trained with Hebb's rule. The storage of orthogonal patterns and the interference between non-orthogonal patterns. Advantages of such memories (robustness, insensitivity to disturbances).
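
A minimal sketch of a hetero-associative memory trained with Hebb's rule (the patterns below are illustrative): with orthonormal inputs the recall is exact, while a non-orthogonal input produces crosstalk.

    % Hebbian hetero-associative memory: W = sum_k y_k * x_k'
    X = [1 0; 0 1; 0 0; 0 0];   % two orthonormal input patterns (columns)
    Y = [1 0; 0 1; 1 1];        % associated output patterns (columns)
    W = Y * X';                 % one-shot Hebbian learning (sum of outer products)
    disp((W * X(:, 1))')        % presenting input 1 retrieves Y(:,1) exactly
    x_mix = [1; 1; 0; 0] / sqrt(2);   % non-orthogonal probe
    disp((W * x_mix)')          % recall is a mixture of Y(:,1) and Y(:,2)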

Introduction to autoassociative memories. The Hopfield model. Energy of the Hopfield network: the convergence theorem. The concept of content-addressable memory. Analysis of the storage capacity of a Hopfield network. The Hopfield network as a model of the hippocampus. Main anatomical and functional characteristics of the hippocampus. Short-term episodic memory.
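
A sketch of Hopfield-style storage and recall with asynchronous updates (network size, pattern count and noise level are illustrative); with the number of patterns well below the capacity limit, the corrupted pattern is restored:

    % Hopfield autoassociative memory with Hebbian storage
    rng(1);
    N = 100; P = 5;
    Xi = sign(randn(N, P));        % P random bipolar (+1/-1) patterns
    W = (Xi * Xi') / N;            % Hebbian weight matrix
    W(1:N+1:end) = 0;              % no self-connections (needed by the theorem)
    s = Xi(:, 1);                  % start from pattern 1 ...
    flip = randperm(N, 10);        % ... corrupted in 10 random positions
    s(flip) = -s(flip);
    for sweep = 1:5                % asynchronous updates: energy never increases
        for i = randperm(N)
            u = W(i, :) * s;
            if u ~= 0, s(i) = sign(u); end
        end
    end
    fprintf('overlap with stored pattern: %.2f\n', (s' * Xi(:, 1)) / N);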

Supervised networks (error-correction learning)

Introduction to supervised networks. The Rosenblatt perceptron. The perceptron training rule and the convergence theorem. The perceptron as a linear classifier: strengths and limits. The exclusive-OR (XOR) problem. Extension of the perceptron to networks with continuous and differentiable activation functions. The delta rule.
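
A sketch of the perceptron learning rule on the (linearly separable) OR function; the same loop never converges on XOR, which motivates the multilayer networks of the next section:

    % Rosenblatt perceptron trained on OR
    X = [0 0; 0 1; 1 0; 1 1];      % inputs, one example per row
    t = [0; 1; 1; 1];              % targets (OR function)
    w = zeros(2, 1); b = 0; eta = 0.5;
    for epoch = 1:20
        for k = 1:4
            y = double(X(k, :) * w + b > 0);      % threshold unit
            w = w + eta * (t(k) - y) * X(k, :)';  % perceptron learning rule
            b = b + eta * (t(k) - y);
        end
    end
    disp(double(X * w + b > 0)')   % -> [0 1 1 1]: the OR function is learned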

Multilayer feedforward neural networks. The backpropagation algorithm: the training of output neurons and hidden neurons. Advantages and limitations of networks trained with backpropagation. Biological relevance of error-correction networks. The anatomical structure and function of the cerebellum. The cerebellum as a perceptron.
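
A compact backpropagation sketch on the XOR problem (one hidden layer, sigmoid units, batch gradient descent on the mean squared error). The seed, layer size and learning rate are illustrative, a different initialization may need more iterations, and the code uses implicit expansion (MATLAB R2016b or later):

    % Backpropagation on XOR with one hidden layer of 3 sigmoid neurons
    rng(1);
    X = [0 0; 0 1; 1 0; 1 1]';    % inputs as columns (2x4)
    T = [0 1 1 0];                % XOR targets
    g = @(u) 1 ./ (1 + exp(-u));  % sigmoid activation
    W1 = randn(3, 2); b1 = randn(3, 1);
    W2 = randn(1, 3); b2 = randn;
    eta = 0.5;
    for it = 1:20000
        H = g(W1 * X + b1);                % forward pass, hidden layer
        Y = g(W2 * H + b2);                % forward pass, output layer
        d2 = (Y - T) .* Y .* (1 - Y);      % output delta (MSE + sigmoid)
        d1 = (W2' * d2) .* H .* (1 - H);   % hidden delta, backpropagated
        W2 = W2 - eta * d2 * H';  b2 = b2 - eta * sum(d2);
        W1 = W1 - eta * d1 * X';  b1 = b1 - eta * sum(d1, 2);
    end
    disp(round(g(W2 * g(W1 * X + b1) + b2)))   % -> [0 1 1 0]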

Reinforcement learning (or learning with a critic) and the subject-environment interaction. A reinforcement learning algorithm in a stochastic network.
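
One simple possibility for reward-based learning in a stochastic unit is a REINFORCE-style rule; the formulation below, the OR task and all parameters are illustrative assumptions, not necessarily the algorithm presented in class. The unit fires probabilistically, and the weight change is modulated by the reward:

    % Reward-modulated learning in a single stochastic binary unit
    rng(1);
    g = @(u) 1 ./ (1 + exp(-u));
    X = [0 0; 0 1; 1 0; 1 1]'; T = [0 1 1 1];   % task: learn OR from reward alone
    w = zeros(2, 1); b = 0; eta = 0.2;
    for it = 1:20000
        k = randi(4); x = X(:, k);
        p = g(w' * x + b);              % firing probability
        y = double(rand < p);           % stochastic action (fire / not fire)
        r = double(y == T(k));          % reward: 1 if the action was correct
        w = w + eta * (r - 0.5) * (y - p) * x;  % REINFORCE-style update (baseline 0.5)
        b = b + eta * (r - 0.5) * (y - p);
    end
    disp(double(g(w' * X + b) > 0.5))   % greedy behavior after learning -> [0 1 1 1]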

Elements of deep learning

The concept of deep learning. Training techniques for deep networks:

- regularization for deep learning;

- optimization for training deep models; the choice of hyperparameters.

Convolutional networks, with examples of applications in the field of neuroscience. Recurrent networks.
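
The elementary operation of a convolutional layer can be illustrated in base MATLAB with a fixed (rather than learned) kernel applied to a toy image; the image and kernel are illustrative:

    % Core operation of a convolutional layer: 2-D convolution + nonlinearity
    I = zeros(16); I(:, 9:end) = 1;   % toy image: dark left half, bright right half
    K = [1 0 -1; 2 0 -2; 1 0 -1];     % Sobel-like vertical-edge kernel
    F = conv2(I, K, 'valid');         % feature map (no padding)
    A = max(F, 0);                    % ReLU: keeps one polarity of the edge response
    imagesc(A); colorbar;             % the edge column is strongly activated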

Self-organized networks

Introduction to unsupervised learning: its purpose and essential characteristics. The extraction of the principal components of a random vector, and networks that compute them. The rules of Oja and Sanger. The concept of lateral inhibition and its role in sensory systems. Competitive networks. Contrast enhancement in a compound-eye model. The formation of categories through self-organized neural networks. “Winner takes all” networks and their main limitations. Kohonen's algorithm and the formation of topological maps. Examples of topological maps on the cerebral cortex for sensory perception.
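
A sketch of Oja's rule extracting the first principal component of synthetic zero-mean data (the covariance matrix, seed and learning rate are illustrative); the learned weight vector aligns, up to sign, with the principal eigenvector:

    % Oja's rule: w <- w + eta * y * (x - y*w), with y = w'*x
    rng(1);
    C = [3 1; 1 1];                          % illustrative covariance matrix
    X = chol(C, 'lower') * randn(2, 5000);   % zero-mean correlated data
    w = randn(2, 1); eta = 0.01;
    for k = 1:size(X, 2)
        x = X(:, k);
        y = w' * x;                          % linear unit output
        w = w + eta * y * (x - y * w);       % normalized Hebbian update
    end
    [V, D] = eig(C);                         % compare with the true eigenvector
    [~, imax] = max(diag(D));
    disp([w / norm(w), V(:, imax)])          % equal up to sign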

Large-scale organization of the brain

Hypotheses on the large-scale organization of the brain. Elements of processing in the posterior cortex: unimodal visual processing (the “what” and “where” pathways) and somatosensory processing. Association between different sensory modalities: the role of the amygdala and the orbitofrontal cortex. The need for different types of memory and the related neural networks. The role of the hippocampus. Integrative, episodic and working memory (prefrontal cortex).

Readings/Bibliography

Notes provided by the teacher. This material will be uploaded to the teaching-material repository platform made available by the University.

The following texts are not strictly required for the exam preparation, but may be used for further in-depth study after the exam:

For a comprehensive discussion of the various neuron models:

P. Dayan, L.F. Abbott. “Theoretical Neuroscience. Computational and Mathematical Modeling of Neural Systems”. The MIT Press, London, England, 2001.

For the mathematical aspects of neural networks and rigorous proofs of some theorems:

J. Hertz, A. Krogh, R. G. Palmer. “Introduction to the Theory of Neural Computation”. Addison Wesley, New York, 1991.

S. Haykin. “Neural Networks. A Comprehensive Foundation”. IEEE Press, New York, 1994.

For some elements of deep learning:

I. Goodfellow, Y. Bengio, A. Courville. “Deep Learning”. MIT Press, 2016.

C. C. Aggarwal. “Neural Networks and Deep Learning”. Springer, 2018.

For the links with neuroscience and cognitive neuroscience:

J.A. Anderson. “An Introduction to Neural Networks”. The MIT Press, Cambridge, MA, 1995.

E.T. Rolls, A. Treves. “Neural Networks and Brain Function”. Oxford University Press, Oxford, 1998.

R. C. O'Reilly, Y. Munakata. “Computational Explorations in Cognitive Neuroscience”. The MIT Press, Cambridge, MA, 2000.

For the physiological aspects of neuroscience:

E.R. Kandel, J.H. Schwartz, T.M. Jessell. “Principles of Neural Science”. McGraw-Hill, 2005.

Teaching methods

The course is divided into ex-cathedra lectures and computer exercises using the MATLAB package. The lectures aim to provide the student with theoretical knowledge of the models of neurons and neural networks, and to make them aware of the merits and limitations of each technique. The exercises aim to train the student to solve simple problems with neural networks, and to show them in practice the possibilities and limits of the models and networks proposed during the course.

As concerns the teaching methods of this course unit, all students must attend Modules 1 and 2 of the online course on Health and Safety [https://www.unibo.it/en/services-and-opportunities/health-and-assistance/health-and-safety/online-course-on-health-and-safety-in-study-and-internship-areas].

Attendance at lectures and exercises is strongly recommended: although the information provided in the teaching materials is complete, it is expanded upon and commented on in detail by the teacher in the classroom.

Assessment methods

The final exam consists of an interview with the student (lasting 45-50 minutes). During the interview, the student is asked three questions on different aspects of the course concerning neural modeling (neuron models, associative networks, error-correction networks, deep learning, self-organized networks, etc.), including a discussion of the exercises.

The interview aims to evaluate the achievement of the educational objectives and in particular:

- knowledge of the main neuron models;

- knowledge of the main types of neural networks and their application;

- knowledge of the basic problems of computational neuroscience;

- the ability to apply the techniques used during the course.

The student's analytical and synthesis skills, language skills, and clarity of exposition also contribute to the final grade.

To obtain the grade with honors (lode), the student must show an excellent knowledge of the subject on each of the three questions asked. The grade is then scaled according to the number and severity of the errors made.

Teaching tools

Blackboard, video projector.

Notes provided by the teacher. Photocopies of images related to neuroscience and cognitive science.

Personal computer laboratory.

MATLAB software, in the personal computer laboratory, for carrying out the computer exercises.

Office hours

See the website of Mauro Ursino