As artificial intelligence shows no signs of slowing down — some studies predict the AI industry will grow by more than 37% annually between now and 2030 — the World Health Organization (WHO) has issued a recommendation calling for "safe and ethical AI for health."
The agency called for caution in the use of "AI-generated large language model tools (LLMs) to protect and promote human well-being, human safety and autonomy, and preserve public health."
ChatGPT, Bard and BERT are currently among the most popular LLMs.
In some cases, chatbots have been shown to rival real doctors in terms of the quality of their responses to medical questions.
The WHO acknowledges there is “great excitement” about the potential use of these chatbots for health-related needs, but stresses the need to carefully consider the risks.
"This includes broad adherence to key values of transparency, inclusiveness, public engagement, expert supervision and rigorous evaluation," the organization said.
The agency warned that premature deployment of AI systems without thorough testing could lead to "errors by health care workers" and "cause harm to patients."
WHO outlines specific concerns
In its recommendation, the WHO warned that LLMs like ChatGPT may be trained on biased data and could "generate misleading or inaccurate information that could pose risks to health equity and inclusiveness."
And while these AI models may appear confident and authoritative, they can still generate false answers to health questions, officials said.
"LLMs can be misused to generate and disseminate highly convincing disinformation in the form of text, audio or video content that is difficult for the public to distinguish from reliable health content," the WHO said.

Another concern is that LLMs may be trained on data without the consent of the people who originally provided it, and that sensitive data patients enter when seeking advice may not be adequately protected.
"LLMs generate data that appear accurate and definitive but may be completely incorrect," the WHO cautioned.
"While WHO is committed to leveraging new technologies, including AI and digital health, to improve human health, we recommend that policymakers ensure patient safety and protection while technology companies work to commercialize LLMs," the group said.
AI experts weigh the risks and benefits
Manny Krakaris, CEO of the San Francisco-based health technology company Augmedix, said he supports the WHO's recommendations.
"This is a rapidly evolving topic, and treading carefully is paramount to patient safety and privacy," he told Fox News Digital in an email.
Augmedix leverages LLMs and other technologies to create medical documentation and data solutions.
"LLMs can bring significant efficiencies when used with the proper guardrails and human oversight for quality assurance," Krakaris said. "For example, they can be used to provide summaries or to quickly organize large amounts of data."

He also highlighted some potential risks, however.
"LLMs can be used as a supporting tool, but doctors and patients cannot rely on them as a standalone solution," Krakaris said.
"As the WHO noted in its recommendation, LLMs produce data that appear to be accurate and definitive but may be completely wrong," he continued. "This could have devastating consequences, especially in medicine."
In building its ambient medical documentation service, Augmedix pairs LLMs with automatic speech recognition (ASR), natural language processing (NLP) and structured data models to ensure the output is accurate and relevant, Krakaris said.
AI has 'promise,' but needs caution and testing
Krakaris said he sees promise for AI in health care, as long as these technologies are used carefully, properly tested and guided by human involvement.
"AI will never completely replace humans, but when used with the right parameters so that quality of care is not compromised, it can create efficiencies and ultimately help address some of the biggest problems plaguing the health care industry today, such as clinician shortages and burnout," he said.