Opinion editor's note: Strib Voices publishes a mix of guest commentaries online and in print each day.
•••
At some point, if you haven't already, you'll experience a mental health challenge. Each year, one in five American adults has a mental health disorder, and that number continues to grow. To keep up with demand, therapists have incorporated artificial intelligence into their practices. While AI makes therapy more accessible, it raises serious ethical issues: bias, threats to privacy, a lack of transparency and the loss of meaningful human relationships.
AI has become woven into the fabric of our society. It can be beneficial, but only if implemented ethically. So far, this hasn't been the case. In mental health care, AI is being used for both therapy and administrative tasks, increasingly replacing humans. Chatbots converse with patients, while other systems categorize documents and rapidly analyze data such as brain scans. But at what cost?
AI is inherently biased. It is trained on human-generated data that is incomplete or unevenly favors one group. As a result, one patient may receive different, and potentially better, advice than another based on factors such as gender or race. This is especially dangerous because users tend to trust whatever information AI provides. Moreover, the implementation of AI in health care often glosses over informed consent. Users, especially those already vulnerable because of mental health disorders, should be made aware of all the risks and benefits. These failures of informed consent point to a broader lack of transparency in AI.
Decisions made by AI also lack transparency. The systems provide neither evidence nor explanation. This is especially concerning in health care, where clinicians can't determine how or why a recommendation was made. There is no way to verify or replicate AI's interpretation, making it difficult to adjust a treatment plan.
Another concern is that AI collects and stores your personal data, including sensitive information such as your address. Many AI systems operate independently, without human oversight, and that unchecked operation could lead to your data being leaked. Some companies also sell data to third parties; for example, AI can track your location to personalize advertisements. Users often haven't consented to this and aren't aware it's happening.
AI also falls short in understanding human connection. It doesn't respond effectively to human emotions; it considers only empirical evidence when making recommendations and diagnoses. Those recommendations are not always accurate because AI doesn't account for the nuances of human behavior. Human connection not only avoids these problems but should be prioritized in its own right, with benefits that include a longer life span and higher self-esteem.
In 2023, one user of Tessa, an AI chatbot deployed by the National Eating Disorders Association to advise and support people struggling with eating disorders, shared her experience. When she asked Tessa how it supports people with eating disorders, it replied with tips on how to lose weight. Such advice can be detrimental to those trying to recover from an eating disorder because it triggers negative thoughts about body image. The episode underscores the importance of informed consent and human assistance. For all its benefits, AI isn't perfect; users should know this and understand that they can't take every AI-generated response at face value.
Some guidelines already exist to address the ethical concerns of AI. The European Commission's Ethics Guidelines for Trustworthy AI, for example, set out rules to protect fairness, accountability and respect for human autonomy. Current policies, however, fail to guarantee full transparency. The best solution is collaboration between humans and AI. Clinicians can ensure patients fully understand how AI works and explain the intricacies of their diagnoses. This addresses several other problems as well: Humans can monitor AI and step in when necessary to correct inaccurate responses, and their involvement would make AI's suggestions more sensitive while building a relationship with the patient.
AI will continue to be integrated into our lives. It can be either destructive or revolutionary. The best hope for AI to be used ethically in health care is to pair it with human intervention. We can't rely solely on AI to protect our information and give us accurate responses. Connection with real people is, and forever will be, impossible to replace.
Abigail Paulnock, from Eden Prairie, is a senior at Bucknell University studying psychology and linguistics.