Marco Billi

PhD Student and Research Fellow

Department of Legal Studies

Academic discipline: IUS/20 Philosophy of Law

Research

The right to obtain an explanation is a staple of all legal systems, whether common law or civil law, and European regulation has consistently stressed the importance of transparency and explainability in AI. Yet many machine-learning systems, including predictive AI, cannot explain the internal reasoning behind their decisions or the grounds on which a given decision is reached. This is the issue commonly referred to as the opacity of AI.

The aim of my project is to study how automated legal decisions and predictions can be explained through legal reasoning. In particular, I will consider whether the outcomes of predictive systems in the legal domain can be explained or justified by combining methods for rule-based and case-based reasoning. This will be done by examining both precedents and statutory law and connecting them in reasoning schemes through factor- and dimension-based reasoning. The objective is a set of guidelines ensuring that a justification combines the step-by-step character of logical rules with the explanation-by-example of case-based reasoning, as sketched below.
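To make factor-based case comparison concrete, the following is a minimal Python sketch of the a fortiori precedential constraint familiar from HYPO/CATO-style factor models: a precedent forces the same outcome in a new case when the new case is at least as strong for the side that won the precedent. All case names and factor names here are hypothetical, and the sketch illustrates the general technique rather than the project's actual method.

```python
from dataclasses import dataclass

PLAINTIFF, DEFENDANT = "plaintiff", "defendant"

@dataclass(frozen=True)
class Case:
    name: str
    pro_plaintiff: frozenset  # factors favouring the plaintiff
    pro_defendant: frozenset  # factors favouring the defendant
    outcome: str              # side that won the case

def forced_by_precedent(new_pp, new_pd, precedent):
    """A fortiori test: a plaintiff-won precedent forces the same outcome
    if the new case has at least the precedent's pro-plaintiff factors
    and at most its pro-defendant factors; symmetrically for the defendant."""
    if precedent.outcome == PLAINTIFF:
        return (precedent.pro_plaintiff <= new_pp
                and new_pd <= precedent.pro_defendant)
    return (precedent.pro_defendant <= new_pd
            and new_pp <= precedent.pro_plaintiff)

def explain(new_pp, new_pd, case_base):
    """Return (outcome, precedent) pairs that justify a decision,
    citing each precedent as an explanation by example."""
    return [(c.outcome, c.name) for c in case_base
            if forced_by_precedent(new_pp, new_pd, c)]

# Hypothetical case base with illustrative factor names.
cb = [Case("Smith v Jones",
           frozenset({"disclosed_secret", "breach_of_trust"}),
           frozenset({"info_public"}),
           PLAINTIFF)]

# A new case that is stronger for the plaintiff and introduces
# no new pro-defendant factors is forced to the same outcome.
print(explain(frozenset({"disclosed_secret", "breach_of_trust", "deception"}),
              frozenset(),
              cb))
# -> [('plaintiff', 'Smith v Jones')]
```

Such a case-based check could be combined with rule-based reasoning by letting statutory rules supply the factors and letting precedents constrain how conflicts between them are resolved; the explanation then cites both the applicable rule and the governing precedent.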
