Artificial intelligence is expanding business opportunities, but it can also lead to harmful experiences when used improperly.
The topic of AI has swallowed up the news cycle in 2023, and the development and popularity of these tools will likely stand as one of the biggest stories of the year when all is said and done, for good reason. There is a lot that AI can do, and much of it is very useful, especially for business owners and the healthcare industry as a whole. But it's not all good news. Plenty of people have pointed out the potential risks and downsides of using AI on a broad scale, and there are even concerns that the way these tools work could harm the mental health of people who are already in a vulnerable position. Making sure that AI works for us, and that its dangers are kept at bay, should be one of the primary goals and responsibilities of this movement.
One of the pressing concerns at the intersection of AI and mental health is that many of these tools generate responses based solely on the user's input, with no regard for what that user should be seeing given their individual mental state and history. That is understandable from a design standpoint, but it can also be dangerous.
For instance, if someone with an eating disorder, or someone at high risk of developing one, asks an AI tool for images related to body image ideals, exposure to those images could push them further toward a negative outcome. While such images are already available on the open web, encountering them through an AI tool that feels like an actual conversation could encourage more engagement and amplify the danger.
Aware of these dangers, some AI companies have put restrictions and limitations in place to avoid negative outcomes. So far, however, the disclaimers and limits on what information can be accessed appear easy to get around for anyone with even limited experience using the tools. There is a long way to go before it is genuinely difficult to access harmful information.
There is also the problem of perceived authority. When people ask an AI tool a question, they tend to automatically believe the response. Sometimes that response is accurate, but that is certainly not always the case. When it comes to something like an eating disorder, AI is not a doctor or any other kind of trained professional; it is a model that has been "trained" on documents from many different sources, some more legitimate and accurate than others.
The example in this article of how AI could harm mental health relates to eating disorders and body image issues, but that is just one of potentially countless problems that could arise from using these tools to retrieve information. Improved testing of AI tools before they hit the market could help address this issue, along with responsiveness from AI companies when users discover problems. There is no doubt that AI will play an even larger role in the future as it continues to be developed and fine-tuned; it is important to ensure that role is as positive as possible.