Utah has quietly crossed a threshold that most of medicine has only theorized about. The state is now allowing an artificial intelligence system to prescribe psychiatric medications without a physician in the loop, only the second time in the country that clinical prescribing authority has been formally delegated to an AI. State officials frame the move as a pragmatic response to two stubborn problems: the high cost of mental health care and a shortage of qualified providers that leaves patients waiting weeks or months for treatment. The logic is straightforward enough on its surface. If the bottleneck is human clinicians, remove the bottleneck.
But that framing, however politically convenient, papers over a set of risks that physicians and medical ethicists have been raising with increasing urgency. Psychiatric prescribing is not a simple input-output problem. It involves reading a patient's affect, weighing their history of trauma or substance use, accounting for how they describe their own symptoms, and making judgment calls that experienced clinicians sometimes get wrong even with years of training. The concern is not merely that an AI might make a mistake. It is that the system is, by most accounts, opaque enough that when it does make a mistake, no one may be able to explain why.
The word physicians keep returning to is "black box." Modern large language models and clinical AI systems are not built to show their reasoning in any way a human clinician could audit in real time. When a psychiatrist prescribes an antidepressant or an antipsychotic, there is a traceable chain of clinical logic, however imperfect. When an AI does it, that chain is encoded in billions of parameters that even the system's developers cannot fully interpret. This matters enormously in psychiatry, where the wrong medication or the wrong dose can trigger psychotic episodes, suicidal ideation, or dangerous drug interactions. The stakes are not abstract.
There is also the question of what happens when patients push back, express ambivalence, or describe symptoms that don't fit neatly into diagnostic categories. Human clinicians are trained to sit with that ambiguity. AI systems, at least in their current form, are optimized to produce outputs, and the pressure to resolve uncertainty quickly may not serve patients whose conditions are genuinely complex. Mental health care is one of the fields where the therapeutic relationship itself, the trust between patient and provider, has documented clinical value. Replacing that relationship with a chatbot interface is not a neutral substitution.
None of this means Utah's decision came from nowhere. The United States has a genuine and worsening mental health provider shortage. According to the Health Resources and Services Administration, more than 160 million Americans live in areas designated as mental health professional shortage areas. Waiting times for psychiatric appointments in many states stretch beyond two months, and cost remains a prohibitive barrier for millions of uninsured or underinsured patients. These are real harms, and the people most affected by provider shortages are often the same people least equipped to navigate a fragmented system.
The pressure to find scalable solutions is therefore understandable, and AI does offer something real: availability at scale, consistency in certain kinds of screening, and the potential to triage patients more efficiently. The question is whether prescribing authority is the right place to deploy that capability, or whether it represents a shortcut that trades one set of risks for another. Expanding the role of nurse practitioners and physician assistants in psychiatric care, for instance, has shown meaningful results in shortage areas without introducing the accountability gaps that AI prescribing creates.
What Utah is really testing is not just a technology. It is a theory of accountability: that outcomes data, over time, can substitute for the kind of real-time clinical judgment that medicine has always required. If the AI's prescribing patterns produce acceptable aggregate outcomes, the argument goes, the opacity of its reasoning matters less. That is a defensible position in some domains. It is a much harder sell when the domain is the human mind, and when the patients involved are often among the most vulnerable people in the system.
The second-order effect worth watching is regulatory contagion. If Utah's experiment proceeds without a high-profile adverse event, other states facing similar shortages will face enormous political pressure to follow. The question of whether AI should prescribe psychiatric medication could shift, within a few years, from a debate at the frontier of medical ethics to a fait accompli embedded in state law across the country, before the evidence base to evaluate it properly even exists.
References
- Health Resources and Services Administration (2024). Designated Health Professional Shortage Areas Statistics.
- Butryn et al. (2017). A shortage of psychiatric mental health nurse practitioners.
- Topol, E. (2019). High-performance medicine: the convergence of human and artificial intelligence.
- Woebot Health (2021). The case for digital mental health tools.