Four semantic technology seminars on May 8th, 9th, 15th, and 16th
Four important seminars are scheduled this week and next, given by leading scholars in Natural Language Processing, Data Science, Hybrid Artificial Intelligence, and Deep Learning.
The seminars will be held in Aula Affreschi, Via Zamboni 34, Piano Terra, at 2:30 PM, and are supported by the DHDK master's degree programme and DHARC (Digital Humanities Advanced Research Center).
Wednesday May 8th, 2:30 PM
Marieke van Erp
Why language technology can’t handle Game of Thrones (yet)
Natural language processing (NLP) tools are commonly used in many day-to-day applications such as Siri and Google, but the effectiveness of these technologies is not thoroughly understood. I will present joint work with colleagues from the Vrije Universiteit Amsterdam in which we perform a thorough evaluation of four different named entity recognition tools on 40 popular novels (including A Game of Thrones). I will highlight why literary texts are so difficult for NLP tools, as well as solutions for improving their performance.
Marieke van Erp is a researcher and team leader of the Digital Humanities Lab at the Royal Netherlands Academy of Arts and Sciences Humanities Cluster in Amsterdam, the Netherlands. Her research focuses on applying natural language processing in semantic web applications, with a particular interest in digital humanities. She previously worked on the European NewsReader project, which aimed to build structured indexes of events from large volumes of financial news, and on the CLARIAH project, a large Dutch project to develop infrastructure for humanities research.
Thursday May 9th, 2:30 PM
Paul Groth
Thinking about the Making of Data
(This will be an interactive session. Please do me a favor and come prepared with examples of your favorite datasets and recipes for how they were made.)
A central challenge in our modern information environment is how to use, integrate, and repurpose data that stem from a multitude of diverse sources. Within data science, roughly 60-70% of the time is spent gathering, preparing, integrating, and munging data. In science, for instance, there is the need to know which of the thousands of prior experimental records are reliable, applicable, and reusable for an experiment. In this talk, I discuss the goal of developing intelligent systems that work with people to combine and reuse data. I give examples from my work on flexible knowledge graph construction and situate this in the context of recent work on how scientists search for data.
Paul Groth is Professor of Algorithmic Data Science at the University of Amsterdam where he leads the Intelligent Data Engineering Lab (INDElab). His research focuses on intelligent systems for dealing with large amounts of diverse contextualized knowledge with a particular focus on web and science applications. This includes research in data provenance, data integration and knowledge sharing.
Paul led the design of a number of large-scale data integration and knowledge graph construction efforts in the biomedical domain. He was co-chair of the W3C Provenance Working Group, which created a standard for provenance interchange. He has also contributed to the emergence of community initiatives to build a better scholarly ecosystem, including altmetrics and the FAIR data principles.
Wednesday May 15th, 2:30 PM
Fabio Massimo Zanzotto
Hey, Merry Men! Robin-Hood Artificial Intelligence is calling you!
Artificial Intelligence may accelerate the third Industrial Revolution by exploiting the human knowledge stored in personal data. Will the job market, and your future job, survive this third Industrial Revolution?
Thursday May 16th, 2:30 PM
Fabio Massimo Zanzotto
Symbolic, Distributed and Distributional Representations in the Era of Deep Learning
Natural and artificial languages are inherently discrete symbolic representations of human knowledge. Recent advances in machine learning (ML) and natural language processing (NLP) seem to contradict this intuition: discrete symbols are fading away, replaced by vectors or tensors called distributed and distributional representations. However, there is a strict link between distributed/distributional representations and discrete symbols, the former being an approximation of the latter. A clearer understanding of this link may lead to radically new deep learning networks. In this talk I present a survey that aims to renew the connection between symbolic representations and distributed/distributional representations. This is the right time to revitalize the study of how discrete symbols are represented inside neural networks.
Fabio Massimo Zanzotto is Associate Professor at the Department of Enterprise Engineering, University of Rome "Tor Vergata".
Published: 6 May 2019