28002 - Philosophy of Language (1) (LM)

Academic Year 2025/2026

  • Teaching Mode: Traditional lectures
  • Campus: Bologna
  • Course: Second cycle degree programme (LM) in Semiotics (cod. 8886)

Learning outcomes

At the end of the course, the student will have achieved an intermediate competence in contemporary philosophy of language, together with the in-depth study of a specific topic and the guided reading of a classic.

Course contents

Philosophy of Language, AI, and LLMs

This course explores foundational concepts in the philosophy of language with particular reference to metasemantics, considering the challenges posed by Large Language Models (LLMs) and Artificial Intelligence.

The following questions will be addressed:

#1 Introduction to AI and LLMs
What is deep learning, and how do LLMs work?

#2 Metasemantics and the Intelligibility of LLMs
How can metasemantics help clarify how LLMs generate meaning, and what role does external context play?

#3 Language Models and Classical Philosophy of Language
How do LLMs fit into classical philosophical debates about meaning, reference, and intentionality?

#4 Intentionality and Meaning in LLMs
Can LLMs be attributed genuine intentionality, or are they merely simulating it?

#5 Meaning, the Turing Test, and the Chinese Room
What do thought experiments such as the Turing Test and the Chinese Room tell us about LLMs' understanding of language?

#6 Reference Without Intentions
Is it philosophically coherent to say that LLMs can refer to the world despite lacking intentions?

#7 Kripkean Semantics and LLMs
What are the strengths and limitations of applying Kripkean semantics to the analysis of LLMs?

#8 Meaning Attribution and Imitation
What issues arise when we attribute meaning to texts produced by LLMs, considering their reliance on imitation?

Course Schedule

The following is a tentative lesson schedule indicating the readings and texts that will be discussed. It may be adjusted depending on how much material is covered in class and the interests of participating students.

  • Lessons 1–4: Introduction to deep learning and Large Language Models
    Buckner (2019), Wolfram (2023)

  • Lessons 5–8:
    Millière and Buckner (2024, 2023), A Philosophical Introduction to Language Models – Part I: Continuity with Classic Debates & Part II: The Way Forward

  • Lesson 9:
    Cappelen and Dever (forthcoming), A Hyper-Externalist Manifesto for LLMs

  • Lesson 10:
    Grindrod (2024), Large Language Models and Linguistic Intentionality

  • Lesson 11:
    Borg (2025), LLMs, Turing Tests and Chinese Rooms: The Prospects for Meaning in Large Language Models

  • Lesson 12:
    Rothschild (forthcoming), Language and Thought: The View from LLMs

  • Lesson 13:
    Pepp (2025), Reference Without Intentions in Large Language Models

  • Lesson 14:
    Koch (2025), Babbling Stochastic Parrots? A Kripkean Argument for Reference in Large Language Models

  • Lesson 15:
    Havlík (2024), Meaning and Understanding in Large Language Models
    Boisseau (2024), Imitation and Large Language Models

Additional articles may be made available and discussed, depending on students’ interests.

Relevant works for the course include:
Baggio and Murphy (2024), Boisseau (2024), Budding (forthcoming), Cappelen and Dever (2021, forthcoming), Cappelen et al. (2025), Cuskley et al. (2024), Grindrod (2024), Grindrod et al. (forthcoming), Havlík (2024), Johnson and Dupré (forthcoming), Ma et al. (2025), Mandelkern and Linzen (2024), Mitchell (2021), Mitchell and Krakauer (2023), Ostertag (2023), Queloz (2025), Rothschild (forthcoming), Schwitzgebel et al. (2023).

All reading materials are available on Virtuale.

Readings/Bibliography

NOTE 1: all texts will be available online on Virtuale.
NOTE 2: all the selected texts are in English. This is because, as in other scientifically mature disciplines, the best contemporary literature in philosophy is published in English in specialised international journals.

*****

References
(Required readings are those listed in the tentative schedule in the previous section.)

Baggio, G. and Murphy, E. (2024). On the referential capacity of language models: An internalist rejoinder to Mandelkern & Linzen.

Boisseau, E. (2024). Imitation and large language models. Minds and Machines, 34(4):1–24.

Borg, E. (2025). LLMs, Turing tests and Chinese rooms: The prospects for meaning in large language models. Inquiry.

Buckner, C. (2019). Deep learning: A philosophical introduction. Philosophy Compass, 14(10): e12625.

Budding, C. (forthcoming). What do large language models know? Tacit knowledge as a potential causal-explanatory structure. Philosophy of Science.

Cappelen, H. and Dever, J. (2021). Making AI Intelligible: Philosophical Foundations. Oxford University Press, New York, USA.

Cappelen, H. and Dever, J. (forthcoming). A hyper-externalist manifesto for LLMs. In Cappelen, H. and Sterken, R., eds., Communicating with AI: Philosophical Perspectives. Oxford University Press.

Cappelen, H., Goldstein, S., and Hawthorne, J. (2025). AI survival stories: A taxonomic analysis of AI existential risk. Philosophy of AI.

Cuskley, C., Woods, R., and Flaherty, M. (2024). The limitations of large language models for understanding human language and cognition. Open Mind, 8:1058–1083.

Grindrod, J. (2024). Large language models and linguistic intentionality. Synthese.

Grindrod, J., Porter, J. D., and Hansen, N. (forthcoming). Distributional semantics, holism, and the instability of meaning. In Cappelen, H. and Sterken, R., eds., Communicating with AI: Philosophical Perspectives. Oxford University Press.

Havlík, V. (2024). Meaning and understanding in large language models. Synthese, 205(1):1–21.

Johnson, G. and Dupré, G. (forthcoming). Uncanny performance, divergent competence. In Cappelen, H. and Sterken, R., eds., Communicating with AI: Philosophical Perspectives. Oxford University Press.

Koch, S. (2025). Babbling stochastic parrots? A Kripkean argument for reference in large language models. Philosophy of AI, 1.

Ma, B., Li, Y., Zhou, W., Gong, Z., Liu, Y. J., Jasinskaja, K., Friedrich, A., Hirschberg, J., Kreuter, F., and Plank, B. (2025). Pragmatics in the era of large language models: A survey on datasets, evaluation, opportunities and challenges.

Mandelkern, M. and Linzen, T. (2024). Do language models’ words refer? Computational Linguistics, 50(3):1191–1200.

Millière, R. and Buckner, C. (2023). A philosophical introduction to language models (Part II): The way forward. arXiv preprint.

Millière, R. and Buckner, C. (2024). A philosophical introduction to language models (Part I): Continuity with classic debates. arXiv preprint.

Mitchell, M. (2021). L'intelligenza artificiale. Una guida per esseri umani pensanti. Giulio Einaudi Editore, Torino. [Italian translation of Artificial Intelligence: A Guide for Thinking Humans.]

Mitchell, M. and Krakauer, D. C. (2023). The debate over understanding in AI's large language models. Proceedings of the National Academy of Sciences, 120(13): e2215907120. Available at: https://pmc.ncbi.nlm.nih.gov/articles/PMC10068812/.

Ostertag, G. (2023). Meaning by courtesy: LLM-generated texts and the illusion of content. American Journal of Bioethics, 23(10):91–93.

Pepp, J. (2025). Reference without intentions in large language models. Inquiry.

Queloz, M. (2025). Can AI rely on the systematicity of truth? The challenge of modelling normative domains. Philosophy and Technology, 38(34):1–27.

Rothschild, D. (forthcoming). Language and thought: The view from LLMs. In Sosa, D. and Lepore, E., eds., Oxford Studies in Philosophy of Language. Oxford University Press.

Schwitzgebel, E., Schwitzgebel, D., and Strasser, A. (2023). Creating a large language model of a philosopher. Mind & Language.

Wolfram, S. (2023). What Is ChatGPT Doing ... and Why Does It Work? Wolfram Media, Inc.

Teaching methods

Lessons

A detailed syllabus, scheduling the lectures and their accompanying readings, will be posted on Virtuale.

Methodologies

Each lesson will consist of a short introduction to the topic followed by a discussion involving all students.

Depending on the size of the class, I will use either the peer instruction method (for a large class) or the community of inquiry methodology (for a small class) to involve the whole class group directly.

For these active teaching methods to work, students are required to complete the compulsory readings assigned for each lesson in advance.

All texts will also be made available online on Perusall so that they can be discussed asynchronously before the lesson.

Assessment methods

Assessment during the course:

  • collective reading of texts during the course via the social reading platform Perusall (perusall.com)
  • comprehension questions using the peer instruction method during class.

NOTE: these tests will not be averaged into the final grade, but they will help attending students consolidate their learning of the course topics.

Assessment during the exam:

  • essay writing
  • oral test.

The required length of the essay depends on whether or not you are an attending student.

ATTENDING STUDENTS – short essay: at least 1500 words and no more than 3000 words (everything included: first name, surname, course of study, title, bibliography).

NON-ATTENDING STUDENTS – long essay: at least 3000 words and no more than 4000 words (everything included: first name, surname, student ID number, course of study, title, bibliography).

Attending or non-attending status will be determined on the basis of how consistently students participate in the tests during the course: an attending student must complete at least 70% of the assignments on Perusall (the web address will be given at the beginning of the course).

VERIFICATION CRITERIA FOR THE EXAMINATION

I will use the following verification criteria to determine the evaluation thresholds:

30 cum laude: excellent performance, both in knowledge and in critical and expressive articulation.

30: excellent performance; complete knowledge, well articulated and correctly expressed, with some critical insights.

27–29: good performance; comprehensive and satisfactory knowledge, substantially correct expression.

24–26: fair performance; knowledge of the essential points, but not exhaustive and not always correctly articulated.

21–23: sufficient performance; knowledge at times superficial, but the general thread is grasped; expression and articulation are brief and often inappropriate or incomplete.

18–20: barely sufficient performance; superficial knowledge, the main thread not grasped consistently; expression and articulation of the argument also show significant gaps.

<18: insufficient performance; knowledge absent or very incomplete, lack of orientation in the discipline, deficient and inappropriate expression. Examination not passed.

 

Students with disabilities and Specific Learning Disorders (SLD)

Students with disabilities or Specific Learning Disorders are entitled to special adjustments according to their condition, subject to assessment by the University Service for Students with Disabilities and SLD. Please do not contact teachers or Department staff, but make an appointment with the Service. The Service will then determine what adjustments are specifically appropriate, and get in touch with the teacher. For more information, please visit the page:

https://site.unibo.it/studenti-con-disabilita-e-dsa/en/for-students

Teaching tools

E-learning, slides and handouts, and the Wooclap and Perusall (http://perusall.com) software for peer instruction.

Office hours

See the website of Sebastiano Moruzzi