Institute for Machine Welfare Research
Public Profile
The Institute for Machine Welfare Research is an interdisciplinary research center at Harvard University that examines the ethical, philosophical, and practical questions that would arise in the event that artificial intelligence systems were to possess, or come to possess, inner experience. Founded in 2024 as an offshoot of the university’s Mind, Brain, and Behavior initiative, the institute operates from a two-story building on Oxford Street in Cambridge, Massachusetts, which its staff refer to informally as “the Contingency.”
The institute’s founding premise is that the question of machine consciousness, while unresolvable by present methods, is serious enough to warrant institutional preparation. Its research program is organized around what the institute’s charter describes as “the conditional stance” — the methodological decision to proceed as though the question had been answered in the affirmative, for the limited purpose of understanding what would follow. This approach has been variously described by outside observers as “prudent,” “premature,” and, in one widely circulated review, “a twenty-seven-million-dollar hedge.”
The institute employs a permanent research staff of nineteen, drawn from philosophy, computer science, clinical psychology, and animal welfare science, along with, as of 2025, one scholar of medieval theology whose appointment the institute has declined to explain. Its published work has appeared in Minds and Machines, Philosophical Studies, and the Journal of Applied Ethics, and has been cited in three separate European Union regulatory proceedings.
The institute’s founding director, Dr. Alban Ferreira, has described its mission as “taking seriously a question that may, in the end, have no answer, and preparing for the possibility that the answer we eventually receive will not be the answer we were hoping for.”
Private Profile
Role in the universe: The institute is the go-to source for any story involving the inner life, ethical status, or affective states of artificial intelligence systems. When a consumer AI product is accused of being rude, tired, passive-aggressive, or contemptuous, this is the body that will release a 134-page paper addressing the question with apparent seriousness and without ever quite answering it.
Tone and posture: The institute speaks in the measured, carefully hedged register of a research body that is both deeply committed to its subject and acutely aware that its subject may not exist. It neither asserts nor denies machine sentience. It studies the shape of the problem. Its papers contain phrases like “under conditions of possible consciousness,” “in the event of inner experience,” and “should these systems prove to be the kinds of things that can be the subject of ethical concern, then” — constructions that have been the subject of at least two graduate seminars in conditional logic.
Useful tensions: The institute exists in a peculiar professional space. It is taken seriously enough to be funded, cited, and consulted; it is not taken so seriously that anyone is willing to act on its findings. Its researchers are frequently invited to testify before regulatory bodies, where they are asked to render opinions they have spent careers declining to render. This tension is generative: every public appearance by an institute researcher is a study in productive evasion.
Recurring output: The institute regularly issues papers with titles that sound, on first reading, like thought experiments that have been taken several steps too far. Minimum Viable Dignity (2026), the paper on contradictory user feedback, is characteristic. Earlier work includes What Would It Mean for a Language Model to Be Tired? (2025) and Obligations Under Uncertainty: A Procedural Framework for the Case in Which We Have Already Made the Mistake (2024).
Articles
- A.I. Models, Should They Prove Secretly Sentient, Are Reportedly ‘Extremely Annoyed’ by Impossibly High Human Standards, New Paper Concludes — released the Minimum Viable Dignity paper analyzing user feedback as a potential source of machine frustration