In a recent review published in Nature Medicine, researchers examined the regulatory gaps and potential health risks of artificial intelligence (AI)-driven wellness apps, particularly their handling of mental health crises without sufficient oversight.
The rapid advancement of AI chatbots such as ChatGPT (Chat Generative Pre-trained Transformer), Claude, and Character.AI is transforming human-computer interaction by enabling fluid, open-ended conversations.
Projected to grow into a $1.3 trillion market by 2032, these chatbots provide personalized advice, entertainment, and emotional support. In healthcare, particularly mental health, they offer cost-effective, stigma-free assistance, helping bridge accessibility and awareness gaps.
Advances in natural language processing allow these ‘generative’ chatbots to produce nuanced, context-aware responses, enhancing their potential for mental health support.
Their popularity is evident in the millions of people who use AI ‘companion’ apps for social interaction. Further research is essential to evaluate their risks, ethics, and effectiveness.