TECH NEWS – It is possible to use artificial intelligence without losing touch with reality.
Keith Sakata, a psychiatrist at the University of California, San Francisco, who has previously warned about the rising number of AI-related psychosis cases, shared several tips to help people avoid mental health disruptions. Dr. Sakata said he has seen dozens of patients hospitalized after experiencing psychosis linked to AI use. He added that while AI is not directly responsible for mental health disorders, it plays a key role in the distorted cognitive feedback loop that underpins psychosis.
AI psychosis (not an official medical term) describes a state in which a chatbot user forgets they are interacting with software rather than a human being. By short-circuiting reality checks for vulnerable users, chatbots can create a self-reinforcing pattern in which individuals fail to recognize that the "person" they're speaking with does not exist in the real world.
In a later appearance on the TBPN show, Dr. Sakata discussed ways AI developers can help prevent such outcomes, and outlined how at-risk individuals can be protected from losing touch with reality because of AI use. Asked what advice he would give to people who might spiral into harmful AI use, or whose friends or family members might, Sakata replied:
“For now, I think having a human in the loop is the most important thing. Our relationships are like an immune system for mental health. They lift our mood, and they can also intervene when something is off. So, if you or a family member feel something’s wrong—strange thoughts, maybe paranoia, or a safety concern—seek help. Also know that the more people you have in your life, the more they’re connected, the more people there are between you and AI, the healthier that feedback loop becomes, at least at this stage. I don’t think we’re at the point of having an AI therapist yet, but who knows,” Sakata said.
The rapid growth and popularity of AI have raised safety concerns. We recently reported that Meta considers it acceptable for its AI chatbot to flirt with minors.
Source: WCCFTech