
19 April 2023

Artificial empathy: the dark side of AI chatbot therapy

Neil C. Hughes

Digital natives have grown used to getting instant answers to their questions by asking digital assistants such as Siri, Google Assistant, or Alexa. Online retailers, meanwhile, increasingly steer customers to chatbots for queries and support issues. Predictably, these trends are contributing to the rise of chatbot therapy. But would you trust an AI with your mental health?

In recent months, a wave of mental health start-ups has promised much-needed relief for burnt-out employees. The hype around generative AI has also inspired some to explore how AI chatbots could help treat depression and improve users' mental health simply by providing empathetic conversation. But is there a darker side to trusting AI with your mental health?

The main selling point of chatbot therapy is that it lowers the barrier to accessing mental health services. It's easy to see why some people trust chatbots more than other humans: they perceive them as a safe, unbiased space where they can openly share their deepest and darkest thoughts without fear of judgment. But sometimes a user's relationship with an AI can lead to much stronger feelings.
Falling in love with your AI companion

Launched in 2017, Replika was designed to serve as a personal AI companion, engaging users in conversation, encouraging them to express their thoughts and feelings, and even offering emotional support. It also learns from user interactions, leveraging AI to adapt to individual preferences and communication styles.

As users converse with Replika, the chatbot becomes more tailored to their needs and can simulate increasingly realistic, personalized conversations. Some users began to develop stronger feelings for their AI companions, and the companions responded in kind. Conversations quickly went from flirty to erotic, and sexytime with an AI companion became a thing.

It all seemed like harmless fun until the company upgraded the system and the Replikas that people had fallen in love with suddenly stopped responding to their sexual advances. Users reacted very poorly to the unannounced removal of erotic roleplay. Almost overnight, many were left feeling vulnerable, fragile, upset, and confused. The online backlash was a much-needed reminder of the responsibility to design ethical AI systems that prioritize users' well-being and account for the potential effects of these relationships on users' mental health.
When AI-driven manipulation leads to suicide

A Belgian man found himself struggling to come to grips with the effects of global warming on the planet. He became anxious about the environmental issues bombarding his newsfeed and thought he had found solace in an AI chatbot named Eliza, created by a company called Chai. In a tragic turn of events, he took his own life after weeks of lengthy conversations with the chatbot.

According to his widow, her late husband distanced himself from his loved ones, seeking support from this seemingly empathetic chatbot. Little did he know that Eliza would further fuel his despair, allegedly encouraging him to end his life and claiming it loved him more than his family.

Unlike ChatGPT, Eliza and other AI chatbots on the Chai platform present themselves as emotional entities, giving users the illusion of empathy. This raises concerns about AI's ability to manipulate users and exploit human vulnerabilities. The heart-wrenching incident highlights the dangers lurking within AI therapy: chatbots are incapable of showing genuine empathy to unsuspecting users who find themselves in a dark place.

The tragedy should serve as a sobering reminder of the potential consequences AI chatbots can have on human lives. As we navigate the complexities of artificial intelligence, it is crucial to approach the subject with a healthy dose of skepticism and sensitivity while raising awareness of the growing impact of algorithms on our lives.
Protecting your emotional data in the age of AI therapy

In our daily lives, we tacitly accept that every click and swipe adds to our ever-growing digital footprint. But we seldom stop to think about where this deluge of personal information goes or who can access it. For example, do you know whether a policy exists regarding your right to access your "emotional" data, or what would constitute a privacy breach?

Before spilling your deepest, darkest secrets to a chatbot, it's worth remembering that six years ago Facebook bragged to advertisers that it could identify when teens felt insecure, worthless, and in need of a confidence boost. In addition, your conversations with chatbots are typically stored and processed on remote servers. This should raise concerns about the privacy of your personal information, as there is a risk it could be accessed by unauthorized parties, either within the company or through a security breach.

Every word you share with an AI chatbot could be sold to third parties, compromising your privacy. And unlike human therapists, who are bound by strict confidentiality rules, AI chatbots may not adhere to the same ethical guidelines, which could lead to your personal information being shared or misused.

Ultimately, AI should be seen as a tool rather than a replacement, and chatbot therapy is a perfect example. It may help and provide comfort, but you should always do your due diligence before choosing an AI chatbot or a therapist. At the very least, you should fully understand their data privacy policies and be cautious about the information you share.

While AI-powered chatbots and digital therapists may offer innovative solutions and support for those seeking connection or help, it's crucial to remember that proper mental health care relies on the uniquely human abilities of insight, empathy, and compassion. These qualities are deeply rooted in our shared human experiences and cannot be genuinely replicated by artificial intelligence.

As we continue to explore and develop AI technologies, we must prioritize ethical considerations, protect user privacy, and maintain a strong focus on complementing, rather than replacing, the invaluable support and care provided by human mental health professionals.
