Opinion: The WHO’s unhealthy approach to artificial intelligence

The World Health Organization's new genAI chatbot highlights the need for caution and responsibility when deploying and using such technologies.
By Brian R. Spisak


The World Health Organization recently went live with Sarah, its generative AI chatbot tasked with advising the public on leading healthier lifestyles.

According to the WHO, Sarah, which stands for Smart AI Resource Assistant for Health, is a “digital health promoter, available 24/7 in eight languages via video or text. She can provide tips to de-stress, eat right, quit tobacco and e-cigarettes, be safer on the roads as well as give information on several other areas of health.”

At first glance, Sarah presents as an innovative use of technology for the greater good – an AI-powered assistant capable of offering tailored advice anytime, anywhere, with the potential to help billions.

But upon closer inspection, Sarah is arguably as much a product of hype and AI FOMO as it is a tool for positive change.

Generative AI, the technology behind Sarah, brings with it an enormous amount of risk. Bots powered by this technology are known to provide inaccurate, incomplete, biased and generally bad advice.

A recent and infamous case is the now-defunct chatbot Tessa. Developed for the National Eating Disorders Association, Tessa was meant to replace the organization's long-standing human-staffed hotline.

But just days before it was due to take over, Tessa went rogue. The bot started recommending that people with eating disorders restrict their calories, weigh themselves frequently and set strict weight-loss goals. Fortunately, NEDA pulled the plug on Tessa and a crisis was averted – but the episode highlights the pressing need for caution and responsibility in the use of such technologies.

This worrying output emphasizes the unpredictable – and at times dangerous – nature of generative AI. It's a sobering illustration that, without stringent safeguards, the potential for harm is immense.

Against this backdrop, one might expect large public health organizations to proceed with extra caution. Yet this appears not to be the case with the WHO and its chatbot: despite being clearly aware of the risks associated with generative AI, the organization has released Sarah to the public.

The WHO's disclaimer reads as follows:

WHO Sarah is a prototype using Generative AI to deliver health messages based on available information. However, the answers may not always be accurate because they are based on patterns and probabilities in the available data. The digital health promoter is not designed to give medical advice. WHO takes no responsibility for any conversation content created by Generative AI.

Furthermore, the conversation content created by Generative AI in no way represents or comprises the views or beliefs of WHO, and WHO does not warrant or guarantee the accuracy of any conversation content. Please check the WHO website for the most accurate information. By using WHO Sarah, you understand and agree that you should not rely on the answers generated as the sole source of truth or factual information, or as a substitute for professional advice.

Put simply, the WHO appears aware that Sarah might disseminate convincing misinformation widely, and this disclaimer is its approach to mitigating the risk. Tucked away at the bottom of the webpage, it essentially communicates: “Here’s our new tool. You shouldn’t rely on it entirely. You’re better off visiting our website.”

That said, the WHO is safeguarding Sarah by implementing heavily restricted responses aimed at reducing the risks of misinformation. However, this approach is not foolproof. Recent findings indicate that the bot doesn’t always provide up-to-date information.

Moreover, when the safeguards are effective, they can make the chatbot impractically generic and devoid of valuable substance, ultimately diminishing its usefulness as a dynamic informational tool.

So what role does Sarah play? If the WHO explicitly recommends that people visit its website for accurate information, then Sarah's deployment appears driven more by hype than by utility.

Obviously, the WHO is an extremely important organization for advancing public health on a global scale, and I’m not questioning its immense value. But is this the embodiment of responsible AI? Certainly not! This scenario epitomizes the preference for speed over safety.

It’s an approach that must not become the norm for integrating generative AI into business and society. The stakes are simply too high.

What happens if a chatbot from a well-respected institution starts propagating misinformation during a future public health emergency, or promoting harmful dietary practices, as the Tessa chatbot did?

Considering the ambitious rollout of Sarah, one might wonder whether the organization is heeding its own counsel. In May 2023, the WHO published a statement emphasizing the need for safe and ethical AI use – guidance it ought to revisit.

WHO reiterates the importance of applying ethical principles and appropriate governance, as enumerated in the WHO guidance on the ethics and governance of AI for health, when designing, developing and deploying AI for health.

The six core principles identified by WHO are: (1) protect autonomy; (2) promote human well-being, human safety and the public interest; (3) ensure transparency, explainability and intelligibility; (4) foster responsibility and accountability; (5) ensure inclusiveness and equity; (6) promote AI that is responsive and sustainable.

The WHO’s own principles for the safe and ethical use of AI should clearly guide its decision-making, yet they don’t appear to have done so with Sarah. This raises critical questions about the organization’s ability to usher in a responsible AI revolution.

If the WHO is using this tech in such a way, then what chance is there for the prudent use of AI in contexts where financial incentives might compete with or overshadow the importance of public health and safety?

The response to this challenge necessitates responsible leadership. We need leaders who prioritize people and ethical considerations above the hype of technological advancement. Only through responsible leadership can we ensure the use of AI in a way that truly serves the public interest and upholds the imperative to do no harm.

Brian R. Spisak is an independent consultant focusing on digital transformation in healthcare. He's also a research associate at the National Preparedness Leadership Initiative at Harvard T.H. Chan School of Public Health, a faculty member at the American College of Healthcare Executives and the author of the book Computational Leadership.
