Recent developments in computing technology present new possibilities for using artificial intelligence (AI) to predict the preferences of incapacitated patients. While the possibility of using a computer to predict patient preferences from demographic information has already been considered, large language models (LLMs) now make it possible to personalize these predictions using text written by the patient, such as emails or social media activity. This hypothetical technology has been named the personalized patient preference predictor (P4). Advocates of the P4 argue that it will better respect patient autonomy by predicting patient preferences more accurately than human surrogates do. I present a critique of this argument. First, I argue that human autonomy is valuable not only because it allows individuals to satisfy their preferences, but because it allows individuals to live integrally by identifying which of their preferences are more intrinsically valuable. Second, I turn to the existentialist tradition in philosophy to argue that the contemplation and experience of death, or being-towards-death, encourage humans to live integrally by reminding them of their finite existence. From these premises, I argue that the P4 has the potential to diminish the autonomy qua integrity of both patients and surrogates by offloading their being-towards-death to AI. Designers and implementers of preference predictor technologies like the P4 should be aware that their more accurate prediction of patient preferences may come at a cost to the integrity of patients and their surrogates.