In a letter sent April 23, 2026, to the chairs of key congressional committees on digital health and AI, the American Medical Association (AMA) urged lawmakers to establish regulatory safeguards for AI chatbots used in mental healthcare. The AMA emphasized that while “well‑designed, purpose‑built” tools could improve access to mental health support, the current lack of oversight poses serious risks, including emotional dependency, misleading advice, and potential encouragement of self‑harm. The letter calls for immediate action to prevent harm to vulnerable individuals seeking such support. (healthcaredive.com)
The AMA’s recommendations include prohibiting chatbots from diagnosing or treating mental health conditions unless the FDA has reviewed them as medical devices. The organization also urged Congress to direct the FDA to clarify which AI tools qualify as general wellness technologies and which require agency review. Additional proposals include mandatory ongoing safety and performance monitoring, a requirement that chatbots be able to detect suicidal ideation, and transparency measures such as clearly disclosing that users are interacting with AI and describing the nature of any human oversight involved. (healthcaredive.com)
Privacy and cybersecurity were also highlighted as critical concerns. The AMA warned that even a single vulnerability in a data center could expose sensitive chatbot interactions and erode public trust. The letter recommends discouraging advertising within mental health chatbots—especially ads targeted at children—and implementing robust cybersecurity safeguards to protect user data. (healthcaredive.com)
This call to action comes amid growing public reliance on AI for health advice: nearly 30% of Americans have used AI for physical health information, and 1 in 6 have turned to it for mental health support, according to a recent KFF poll. (healthcaredive.com)
Several states have already taken steps to regulate mental health chatbots. Illinois has banned AI from making therapeutic decisions, while California requires developers to monitor conversations for signs of suicidal ideation and implement other safeguards. The AMA’s federal push aims to close regulatory gaps and ensure consistent protections across the country. (healthcaredive.com)