As the promises and risks of medical Artificial Intelligence (AI) grow, enhancing transparency - i.e., the provision of clear, accessible information about AI technologies - has emerged as a common-sense solution for promoting safety, accountability, and trust, in both policy documents and the literature. Yet empirical evidence remains limited on what type and level of information clinicians need for AI transparency to achieve its intended goals. This paper addresses three key questions. First, drawing on a pilot project involving semi-structured interviews with 12 health professionals in Australia and New Zealand, we explore what information clinicians require about AI technologies in healthcare, and when they require it. While initial research proposed a range of transparency categories, most participants emphasised the importance of disclosing AI involvement in the technology, its integration into clinical workflows, and instructions for use. Other types of information were seen as more relevant to regulators, researchers, or procurement professionals than to frontline clinicians. Second, we examine the limitations of transparency as a catch-all solution, drawing on insights from the interviews. Our findings reveal a 'medical AI transparency fallacy': the belief that simply providing more information ensures safe, ethical use. In reality, clinicians often do not read labels, have limited AI literacy, and are susceptible to automation bias and information overload. Finally, we propose ways to address these challenges, including stronger regulatory oversight, enhanced governance through professional institutions, and novel approaches to information delivery. We argue that transparency alone is insufficient; a multifaceted approach is needed to support the safe and trustworthy integration of AI in healthcare.