Does this sound familiar? You woke up with a headache. Thirty minutes of searching later, you were convinced it was a brain tumor. Some websites said it could be. A handful of people on social media said that's how it started for them. But a doctor's visit later revealed it was only sinus pressure.
This pattern is so common that researchers have given it a name: cyberchondria, the cascade of anxiety that often accompanies excessive searching for health information online.
Cyberchondria is nothing new. Its harmful effects on mental health have been recognized for as long as people have enjoyed widespread access to the internet. But today it's easier than ever to tumble down a cyber rabbit hole.
Thanks to interactive media such as social media and generative AI, today's digital health tools don't just point you to a few tangentially related articles. They "chat" with you, analyze your symptoms and offer plausible (but often alarming) possibilities.
No longer do we simply scan the internet for a generic list of potential causes, symptoms or treatments. More and more, we engage in active, real-time conversations that offer something like diagnosis and treatment from afar.
A recent survey conducted by researchers at USF and FAU measured how much Floridians rely on these media when seeking health information and whether those behaviors are associated with heightened health anxiety. The findings revealed how often Floridians fall into these anxious feedback loops while searching for health information online.
Thirty-four percent of Floridians say they "frequently" or "occasionally" feel the need to repeatedly search online for the same symptoms. Additionally, around 20% of respondents admitted to going down the cyberchondria rabbit hole, spending more time than they intended looking for health information online.
But do these online searches leave Floridians feeling safer or better informed about their health? In many cases, the answer is no.
After these searches, one in four (25%) say they actually experienced increased anxiety and distress, while 31% of Floridians report lingering feelings of uncertainty despite their continued online health searches.
These findings suggest that a significant share of online health searches produce emotional experiences that can harm mental health rather than deliver accurate health information. That matters especially in an age when search engines and artificial intelligence are evolving rapidly and misinformation can sit alongside reliable medical guidance on the same platforms.
To be clear, recent advances in technology could revolutionize health care for the better. For example, data suggest that AI tools may already rival or exceed human practitioners in some domains, including important tasks such as reading medical images and diagnosing skin cancer.
Additionally, online health communities on platforms such as Facebook and Twitter can provide insight, clarity and support to people living with chronic illnesses. And many patients with mental health conditions feel more comfortable sharing their experiences and seeking guidance from AI tools such as ChatGPT and Woebot.
For those suffering from generalized health anxiety, however, the unregulated use of this technology can become a double-edged sword. Not only can these tools and platforms make mistakes, they can amplify unfounded concerns, exacerbate health-related panic and overburden the health care system with unnecessary appointments, tests and referrals driven more by digitally induced anxiety than by genuine medical need.
While some have called for a pause on AI development and integration, it is unlikely that Americans will become less dependent on these platforms. As these tools continue to grow, so does the need for greater digital self-awareness. Among other things, that means teaching users how to evaluate sources, building safeguards into chatbots and apps, and helping patients recognize when curiosity is tipping into anxiety before the shift becomes unhealthy.
AI and other emerging technologies certainly could improve health outcomes, but unless digital literacy improves alongside them, these tools may inflame our health fears as often as they alleviate them.
Stephen Neely is an associate professor in the University of South Florida's School of Public Affairs, specializing in public opinion and survey research. Grace Mercer is a recent graduate of USF's Health Sciences program, focusing on social and behavioral health care. She will begin pursuing her bachelor's degree in nursing in January.