
AI Chatbots Prescribe Psychiatric Medication

Summary

– Utah is launching a one-year pilot allowing an AI chatbot from Legion Health to renew certain low-risk psychiatric prescriptions without a doctor, starting in April.
– The program is narrowly limited to stable patients refilling 15 specific medications like Prozac and Zoloft, excluding new prescriptions, controlled substances, and complex cases.
– State officials argue the system could lower costs, ease provider shortages, and free clinicians to focus on higher-risk patients.
– Psychiatrists warn the AI system is opaque and risky, potentially missing clinical nuances and failing to expand care to those most in need.
– The pilot includes safety measures like patient screening, pharmacist review, and clinician escalation, but follows a prior Utah AI pilot that encountered significant safety failures.

Utah has launched a pilot program authorizing an AI chatbot to renew certain psychiatric prescriptions, marking only the second instance in the U.S. where a state has granted this level of clinical authority to artificial intelligence. Proponents argue the initiative could lower costs and alleviate critical care shortages, but many physicians express deep reservations about its safety, transparency, and ultimate benefit to patients.

The one-year pilot, announced last week, involves San Francisco startup Legion Health. For a $19 monthly subscription, eligible Utah patients can obtain fast, simple refills through the company’s chatbot system, with the program slated to begin sometime in April. Currently, the company is managing a waitlist.

This deliberately narrow pilot program is constrained by strict parameters. According to Legion’s agreement with Utah’s Office of Artificial Intelligence Policy, the AI can only renew 15 specific, lower-risk maintenance medications that were originally prescribed by a human clinician. The approved list includes common antidepressants and anti-anxiety drugs like fluoxetine (Prozac), sertraline (Zoloft), and bupropion (Wellbutrin). Patients must be clinically stable: anyone with a medication change, dose adjustment, or psychiatric hospitalization within the past year is excluded. Furthermore, patients are required to check in with a healthcare provider after every 10 refills or six months, whichever occurs first.

The system is explicitly prohibited from issuing new prescriptions or managing drugs that require close clinical monitoring, such as those needing regular blood tests. All controlled substances are barred, which excludes many ADHD medications. The exclusion of benzodiazepines for anxiety, antipsychotics for conditions like schizophrenia, and mood stabilizers like lithium means the pilot does not address more complex psychiatric cases.

To use the service, patients must opt in, verify their identity, and provide proof of an existing prescription. The chatbot then asks a series of questions about symptoms, medication efficacy, side effects, and potential red flags like suicidal thoughts or severe reactions. If any responses fall outside the program’s low-risk criteria, the case is flagged for mandatory human clinician review before a refill is approved. Patients and pharmacists can also request a human review at any point.
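The eligibility rules and escalation path described above amount to a conservative triage policy: approve automatically only when every low-risk criterion holds, and route everything else to a human. A minimal sketch of that logic, with hypothetical field names and only three of the 15 approved drugs shown (this is illustrative, not Legion Health's actual implementation):

```python
from dataclasses import dataclass, field

# Three of the 15 approved maintenance medications (illustrative subset).
APPROVED_MEDICATIONS = {"fluoxetine", "sertraline", "bupropion"}

@dataclass
class RefillRequest:
    medication: str
    months_since_last_change: int       # dose adjustment or medication change
    months_since_hospitalization: int   # psychiatric hospitalization
    refills_since_checkin: int
    months_since_checkin: int
    reported_red_flags: list = field(default_factory=list)  # e.g. suicidal thoughts

def triage(req: RefillRequest) -> str:
    """Return 'auto_renew' only when every low-risk criterion holds;
    otherwise escalate to mandatory human clinician review."""
    if req.medication not in APPROVED_MEDICATIONS:
        return "human_review"
    if req.months_since_last_change < 12 or req.months_since_hospitalization < 12:
        return "human_review"   # not clinically stable
    if req.refills_since_checkin >= 10 or req.months_since_checkin >= 6:
        return "human_review"   # provider check-in required first
    if req.reported_red_flags:
        return "human_review"   # e.g. suicidal thoughts, severe reactions
    return "auto_renew"
```

Note the default in this sketch is escalation: a request is approved only by passing every gate, which mirrors how the program describes its "conservative eligibility gates."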

State officials have framed the pilot as a necessary innovation. They argue that by automating the renewal process for maintenance medications, patients can receive care more quickly and affordably. Over time, they suggest it could free up providers to focus on higher-risk cases and help address shortages that leave an estimated 500,000 Utah residents without adequate mental health access. Legion’s leadership has portrayed the effort in even more ambitious terms, calling it a global first and the beginning of a much larger transformation in healthcare delivery.

However, many mental health professionals remain skeptical. Dr. Brent Kious, a psychiatrist and professor at the University of Utah School of Medicine, believes the advantages are likely overstated. He doubts the tool will meaningfully expand access to those most in need, as the target patient must already be established in a treatment plan with a psychiatrist. Dr. Kious also warns the automation could worsen what he describes as an “epidemic of over-treatment” in psychiatry, where patients remain on medications longer than clinically necessary.

A core concern is whether a chatbot can safely manage even routine aspects of care. Dr. John Torous, director of digital psychiatry at Beth Israel Deaconess Medical Center, questions whether any current AI can truly grasp the unique context and factors influencing an individual’s medication plan. Both experts emphasize that prescribing involves nuanced judgment beyond simple checklist reviews. These concerns are amplified by the inherent opacity of AI systems and the lack of rigorous, transparent testing before public deployment. Dr. Kious noted the current approach feels a bit like “alchemy” and stressed that greater scientific rigor is needed.

There are also immediate safety questions. A chatbot could miss critical information during screening, either by failing to ask the right question or because a patient provides inaccurate answers, potentially tailoring responses to get a desired outcome. While human clinicians also rely on self-report, they can interpret non-verbal cues and contextual details a chatbot cannot. Furthermore, Utah’s prior experiment with AI prescribing in primary care, launched last December, quickly encountered problems. Security researchers prompted that system to spread misinformation and generate dangerous medical advice, highlighting the potential for AI misuse.

Legion asserts its pilot operates under stringent safeguards. Beyond what it calls “conservative eligibility gates,” its agreement mandates detailed monthly reports and requires human physicians to closely review the first 1,250 requests, with ongoing sampling of 5 to 10 percent of subsequent requests. Company president Arthur MacWaters acknowledged that risks exist in any remote care model but stressed their multi-layered safety approach, which includes pharmacist oversight and clear escalation pathways to clinicians. He framed the pilot as a critical step to expand access and serve as a proving ground for responsible AI in medicine.
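The oversight scheme in the agreement (full physician review of the first 1,250 requests, then ongoing sampling of 5 to 10 percent) can be expressed as a simple policy. A hypothetical sketch, using the lower bound of the stated sampling range:

```python
import random

FULL_REVIEW_COUNT = 1250  # first requests all reviewed by human physicians
SAMPLE_RATE = 0.05        # ongoing sampling thereafter (5-10% per the agreement)

def needs_physician_review(request_index: int, rng=random.random) -> bool:
    """Decide whether an auto-approved request still gets human review.
    request_index counts requests from 0; rng is injectable for testing."""
    if request_index < FULL_REVIEW_COUNT:
        return True            # blanket review during the initial phase
    return rng() < SAMPLE_RATE # random audit sample afterward
```

Injecting the random source makes the audit policy deterministic under test, a common pattern for this kind of compliance sampling logic.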

Despite the company’s guarded optimism about future expansion, the psychiatrists involved raise a fundamental question: what problem is this truly solving? Dr. Kious notes that for stable patients, most psychiatrists are typically willing to provide prescription renewals without a formal appointment. The cases where a clinician would want direct involvement, such as when there is concern about the patient or the medication risk, are precisely the scenarios Legion’s AI is barred from handling. For now, Dr. Torous advises patients who have found an effective treatment plan to maintain their relationship with that clinician, suggesting a cautious approach to this new model of care.

(Source: The Verge)
