References

This essay was first presented as a keynote address in May 2023, at the Technolinguistics in Practice: Socially Situating Language in AI Systems conference held in Siegen, Germany. Siri Lamoureaux, the key organizer, as well as Michael Castelle, Evan Donahue, Ilana Gershon, Yarden Skop, Alicia Fuentes-Calle, and Mark Dingemanse, offered very helpful feedback. For the stochastic parrot critique of language models, herein critiqued, see the important work of Emily Bender and colleagues. For more on discursive scaffolding, see Co-Operative Action by Charles Goodwin. For classic work on entextualization, see the essays in Natural Histories of Discourse, edited by Michael Silverstein and Greg Urban, as well as the article by Richard Bauman and Charles Briggs, "Poetics and Performance as Critical Perspectives on Language and Social Life." Regarding the projection of agency in relation to discursive interaction, see The Concept of Action by Nicholas J. Enfield and Jack Sidnell, and "Agency in Language" by Alessandro Duranti. Regarding reflexivity, semiotics, and subjectivity, see Talking Heads: Language, Metalanguage, and the Semiotics of Subjectivity by Benjamin Lee. For more on magic, see the work of Graham M. Jones. For a very different, but arguably allied, take on gens, see the jointly authored work of Laura Bear, Karen Ho, Anna Lowenhaupt Tsing, and Sylvia Yanagisako. For a resonant work on generativity, see "Cultural Poesis: The Generativity of Emergent Things" by Kathleen Stewart. For more on alignment, see The Alignment Problem by Brian Christian. The originary work on transformers was "Attention Is All You Need" (2017), jointly authored by researchers at Google. For a deep dive into large language models, and especially the GPT series, see the following papers produced by researchers at OpenAI: "Deep Reinforcement Learning from Human Preferences" (2017), "Improving Language Understanding by Generative Pre-Training" (2018), "Language Models are Unsupervised Multitask Learners" (2019), "Language Models are Few-Shot Learners" (2020), and "GPT-4 Technical Report" (2023). For a step-by-step guide to building your own language models, the machine learning guru Andrej Karpathy has a wonderful YouTube series, Building Makemore (and much more besides). Many thanks to Matthew Engelke and Connor Martini for their illuminating and transformative feedback, as well as to Kamala Russell, Robert Meister, Terra Edwards, Jonathan Beller, Andrew Carruthers, and Julia Zrihen for inspiring suggestions.

Paul Kockelman

Paul Kockelman is Professor of Anthropology at Yale University. He has undertaken extensive ethnographic and linguistic fieldwork among speakers of Q'eqchi' (Maya) living in the cloud forests of Highland Guatemala, working on topics ranging from poultry husbandry and landslides to inalienable possessions and interjections. He has also long engaged in more speculative inquiry at the intersection of artificial intelligence, new media technologies, cognitive science, and critical theory. His books include The Anthropology of Intensity, The Art of Interpretation in the Age of Computation, Mathematical Models of Meaning, and The Chicken and the Quetzal.
