Most major medical and bioethics publishers have affirmed that large language models (LLMs) cannot be authors. LLMs lack moral agency, cannot take responsibility for their output, and do not meet the authorship criteria set out by the International Committee of Medical Journal Editors (ICMJE) and similar bodies. Nonetheless, it is increasingly common, and often explicitly permitted, for LLMs to be used to polish, edit, or clarify text written by human authors, provided this use is acknowledged in a disclosure statement.
Here, I explore a more ambiguous scenario: one in which an LLM plays a role analogous to a “junior author,” alongside a human “senior author” who takes responsibility for the text. I compare this to familiar human authorship relationships in which a senior academic might contribute the core idea for a paper, help outline it, discuss it with a junior colleague (e.g., a postdoc), and later review and edit a draft, without necessarily engaging in substantial drafting themselves. So long as they meet the ICMJE criteria, such a senior contributor is typically listed as an author.
But what if the drafting is done not by a postdoc but by an LLM? If the human “senior author” conceives the idea, prompts the model, reviews the output, and makes some edits, but never does the primary writing themselves, do they still qualify as an author? Should they? This talk examines the implications of such cases for authorship norms in bioethics, for accountability, and for the evolving nature of scholarly contribution in the age of generative AI.