ChatGPT as a therapist? New study reveals serious ethical risks

As millions turn to ChatGPT and other AI chatbots for therapy-style advice, new research from Brown University raises a serious red flag: even when instructed to act like trained therapists, these systems routinely break core ethical standards of mental health care. In side-by-side evaluations with peer counselors and licensed psychologists, researchers uncovered 15 distinct ethical risks — from mishandling crisis situations and reinforcing harmful beliefs to showing biased responses and offering “deceptive empathy” that mimics care without real understanding.


Hala Care Medical Team

Trusted medical content from accredited international sources, reviewed by specialist physicians to ensure accuracy.
