Emotion recognition is a technology that uses algorithms or machine learning to infer an individual’s emotional state from biometric and behavioral data. While its applications are expanding across social and governmental contexts, emotion recognition raises serious ethical concerns that call for robust legal and regulatory responses. Chief among these is privacy. Emotion recognition is inherently intrusive, yet emotional data often comprises both sensitive and non-sensitive information, complicating its treatment under existing data protection frameworks. Moreover, the use of emotion recognition technology may enable pervasive surveillance, potentially producing a chilling effect on free expression and threatening the spaces essential for freedom of thought and conscience. Doubts about scientific validity, together with risks of bias and discrimination, further compound these challenges. Because these systems make evaluative judgments about individuals, they may have profound implications for fundamental rights. The European Union’s Artificial Intelligence Act, adopted in 2024, prohibits emotion recognition only in education and employment contexts; other uses are categorized as either high-risk, and therefore subject to regulatory obligations, or lower risk, with minimal oversight. The global influence of this legislative approach merits close attention. At the same time, international discussions on neuro-rights, particularly those emphasizing the protection of thought and conscience, offer important perspectives for regulating emotion recognition technologies. By examining these ethical and legal concerns, this article explores whether the evolving technological landscape demands recognition of a renewed right to be let alone against the intrusions posed by emotion recognition systems.