- Docente: Sebastiano Moruzzi
- Credits: 6
- SSD: M-FIL/05
- Language: Italian
- Teaching Mode: Traditional lectures
- Campus: Bologna
Course:
- Second cycle degree programme (LM) in Italian Studies and European Literary Cultures (cod. 6689)
- Also valid for Second cycle degree programme (LM) in Data, Methods and Theoretical Models For Linguistics (cod. 5946)

From Apr 13, 2026 to May 18, 2026
Learning outcomes
At the end of the course, the student will have achieved an intermediate competence in contemporary philosophy of language, through the in-depth study of a specific topic and the guided reading of a classic.
Course contents
Topic: Semantics and Pragmatics with AI and LLMs
This course addresses the semantic and pragmatic aspects of language, focusing on how Artificial Intelligence and Large Language Models (LLMs) affect our understanding of these phenomena.
The course will revolve around the following questions:
- How does our understanding of semantics change when it is applied to systems that lack worldly experience and genuine intentions?
- How do artificial language models handle pragmatic phenomena such as implicatures, deixis, or conversational context?
- Can language models be said to possess a form of semantic competence, or are they merely displaying statistical abilities?
- What limits emerge in LLMs’ ability to refer to real-world entities in a stable and coherent way?
- Which criteria can we use to distinguish genuine understanding from the simulation of understanding in an artificial system?
- What role does context play in attributing meaning to an output generated by an LLM, and how much does it depend on human interpretation?
- To what extent are the syntactic and distributional structures learned by an LLM sufficient to generate correct pragmatic inferences?
- What does the use of LLMs imply for our traditional conception of meaning as something tied to intentionality, use, and communication?
Course Schedule
The following is a tentative schedule indicating the articles or books to be discussed. It may change depending on how much material can be covered in class and on the interests of attending students.
- Lessons 1–4 — Introduction to deep learning and Large Language Models: Buckner (2019); Wolfram (2023)
- Lessons 5–8 — Millière and Buckner (2024), A Philosophical Introduction to Language Models – Part I: Continuity With Classic Debates & Part II: The Way Forward
- Lesson 9 — Ma et al. (2025), Pragmatics in the Era of Large Language Models: A Survey on Datasets, Evaluation, Opportunities and Challenges
- Lesson 10 — Mandelkern and Linzen (2024), Language Models and Semantic Competence
- Lesson 11 — Baggio and Murphy (2024), Referential Capacity of Language Models
- Lesson 12 — Mitchell and Krakauer (2023), The Debate over Understanding in AI’s Large Language Models
- Lesson 13 — Cuskley et al. (2024), The Limitations of Large Language Models for Understanding Human Language and Cognition
- Lesson 14 — Johnson and Dupré (forthcoming), Uncanny Performance, Divergent Competence
- Lesson 15 — Borg (2025), LLMs, Turing Tests and Chinese Rooms: The Prospects for Meaning in Large Language Models
Additional articles may be made available so the syllabus can be adapted to students’ interests.
Other relevant works for the course include: Boisseau (2024); Budding (forthcoming); Cappelen and Dever (2021); Cappelen and Dever (forthcoming); Cappelen et al. (2025); Grindrod (2024); Grindrod et al. (forthcoming); Havlík (2024); Johnson and Dupré (forthcoming); Koch (2025); Mitchell (2021); Ostertag (2023); Pepp (2025); Queloz (2025); Rothschild (forthcoming); Schwitzgebel et al. (2023).
All readings will be available on Virtuale.
Readings/Bibliography
NOTE 1: All texts will be available online on Virtuale.
NOTE 2: All the selected texts are in English because, as in other scientifically mature disciplines, the best contemporary literature in philosophy is published in English in specialised international journals.
*****
References
(Required readings are those listed in the tentative schedule in the previous section.)
Baggio, G. and Murphy, E. (2024). On the referential capacity of language models: An internalist rejoinder to Mandelkern & Linzen.
Boisseau, E. (2024). Imitation and large language models. Minds and Machines, 34(4):1–24.
Borg, E. (2025). LLMs, Turing tests and Chinese rooms: The prospects for meaning in large language models. Inquiry.
Buckner, C. (2019). Deep learning: A philosophical introduction. Philosophy Compass, 14(10): e12625.
Budding, C. (forthcoming). What do large language models know? Tacit knowledge as a potential causal-explanatory structure. Philosophy of Science.
Cappelen, H. and Dever, J. (2021). Making AI Intelligible: Philosophical Foundations. Oxford University Press, New York, USA.
Cappelen, H. and Dever, J. (forthcoming). A hyper-externalist manifesto for LLMs. In Cappelen, H. and Sterken, R., eds., Communicating with AI: Philosophical Perspectives. Oxford University Press.
Cappelen, H., Goldstein, S., and Hawthorne, J. (2025). AI survival stories: A taxonomic analysis of AI existential risk. Philosophy of AI.
Cuskley, C., Woods, R., and Flaherty, M. (2024). The limitations of large language models for understanding human language and cognition. Open Mind, 8:1058–1083.
Grindrod, J. (2024). Large language models and linguistic intentionality. Synthese.
Grindrod, J., Porter, J. D., and Hansen, N. (forthcoming). Distributional semantics, holism, and the instability of meaning. In Cappelen, H. and Sterken, R., eds., Communicating with AI: Philosophical Perspectives. Oxford University Press.
Havlík, V. (2024). Meaning and understanding in large language models. Synthese, 205(1):1–21.
Johnson, G. and Dupré, G. (forthcoming). Uncanny performance, divergent competence. In Cappelen, H. and Sterken, R., eds., Communicating with AI: Philosophical Perspectives. Oxford University Press.
Koch, S. (2025). Babbling stochastic parrots? A Kripkean argument for reference in large language models. Philosophy of AI, 1.
Ma, B., Li, Y., Zhou, W., Gong, Z., Liu, Y. J., Jasinskaja, K., Friedrich, A., Hirschberg, J., Kreuter, F., and Plank, B. (2025). Pragmatics in the era of large language models: A survey on datasets, evaluation, opportunities and challenges.
Mandelkern, M. and Linzen, T. (2024). Do language models’ words refer? Computational Linguistics, 50(3):1191–1200.
Millière, R. and Buckner, C. (2024). A philosophical introduction to language models (Part I): Continuity with classic debates. arXiv preprint.
Millière, R. and Buckner, C. (2024). A philosophical introduction to language models (Part II): The way forward. arXiv preprint.
Mitchell, M. (2021). L’intelligenza artificiale. Una guida per esseri umani pensanti [Italian translation of Artificial Intelligence: A Guide for Thinking Humans]. Giulio Einaudi Editore, Torino.
Mitchell, M. and Krakauer, D. C. (2023). The debate over understanding in AI’s large language models. Proceedings of the National Academy of Sciences. Available at: https://pmc.ncbi.nlm.nih.gov/articles/PMC10068812/ .
Ostertag, G. (2023). Meaning by courtesy: LLM-generated texts and the illusion of content. American Journal of Bioethics, 23(10):91–93.
Pepp, J. (2025). Reference without intentions in large language models. Inquiry.
Queloz, M. (2025). Can AI rely on the systematicity of truth? The challenge of modelling normative domains. Philosophy and Technology, 38(34):1–27.
Rothschild, D. (forthcoming). Language and thought: The view from LLMs. In Sosa, D. and Lepore, E., eds., Oxford Studies in Philosophy of Language. Oxford University Press.
Schwitzgebel, E., Schwitzgebel, D., and Strasser, A. (2023). Creating a large language model of a philosopher. Mind & Language.
Wolfram, S. (2023). What Is ChatGPT Doing ... and Why Does It Work? Wolfram Media, Inc.
Teaching methods
Lessons
A detailed syllabus, scheduling each lecture together with its accompanying readings, will be posted on Virtuale.
Methodologies
Each lesson will consist of a short introduction to the topic followed by a discussion involving all students.
Depending on the size of the class, I will use either the peer instruction method (if the class is large; see also the explanation in the Teaching methods section) or the community of inquiry method (if the class is small) to involve the whole class group directly.
For these active teaching methods to work, students must read the compulsory readings assigned for each lesson in advance.
All texts will also be made available online on Perusall so that they can be discussed asynchronously before the lesson.
Assessment methods
Assessment during the course:
- collective reading of texts during the course via the social reading platform perusall.com
- comprehension questions using the peer instruction method during class.
NOTE: these tests will not count toward the final grade, but they will help attending students consolidate their learning of the course topics.
Assessment during exam:
- essay writing
- oral test.
The required length of the essay depends on whether you are an attending or a non-attending student.
ATTENDING STUDENTS — short essay: at least 1500 words and no more than 3000 words (everything included: first name, surname, course of study, title, bibliography).
NON-ATTENDING STUDENTS — long essay: at least 3000 words and no more than 4000 words (everything included: first name, surname, student ID number, course of study, title, bibliography).
Attending status will be determined by how consistently students participate in the activities during the course: an attending student must complete at least 70% of the assignments on Perusall (the web address will be given at the beginning of the course).
ASSESSMENT CRITERIA FOR THE EXAMINATION
The following evaluation thresholds will be used:
30 cum laude: excellent performance, both in knowledge and in critical and expressive articulation.
30: excellent performance; complete knowledge, well articulated and correctly expressed, with some critical insights.
27–29: good performance; comprehensive and satisfactory knowledge, substantially correct expression.
24–26: fair performance; knowledge of the essential points, but not exhaustive and not always correctly articulated.
21–23: sufficient performance; knowledge at times superficial, but the general thread is grasped; expression and articulation are often brief, inappropriate, or incomplete.
18–20: barely sufficient performance; superficial knowledge, with the general thread not grasped consistently; expression and articulation of the argument also show significant gaps.
<18: insufficient performance; absent or very incomplete knowledge, lack of orientation in the discipline, defective and inappropriate expression. Examination not passed.
Students with disabilities and Specific Learning Disorders (SLD)
Students with disabilities or Specific Learning Disorders are entitled to special adjustments according to their condition, subject to assessment by the University Service for Students with Disabilities and SLD. Please do not contact teachers or Department staff, but make an appointment with the Service. The Service will then determine what adjustments are specifically appropriate, and get in touch with the teacher. For more information, please visit the page:
https://site.unibo.it/studenti-con-disabilita-e-dsa/en/for-students
Teaching tools
Virtuale, slides and handouts, plus Wooclap and Perusall (http://perusall.com) software for peer instruction.
Office hours
See the website of Sebastiano Moruzzi