If health is a fundamental human right, then healthcare delivery must improve globally to achieve universal access. However, the limited number of medical practitioners is a barrier for every health system.
Artificial intelligence (AI)-powered approaches to healthcare delivery are poised to fill this gap. Whether in urban hospitals or in rural and remote homes, AI promises to extend care beyond what medical professionals alone can achieve, letting people seeking health information get it quickly and conveniently. But patient safety must remain a priority if that care is to be effective.
The news is full of novel applications of AI. Riding the recent wave of interest in conversational agents, researchers at Google have developed an experimental diagnostic AI, the Articulate Medical Intelligence Explorer (AMIE). When people seeking health information describe their symptoms through a text chat interface, AMIE asks follow-up questions and offers recommendations, much as a human clinician would. Its researchers claim that AMIE outperforms human clinicians in both diagnostic accuracy and conversational performance.

The potential of large language models (LLMs) like AMIE is clear. Trained on vast text corpora, LLMs can generate text, capture underlying meaning, and respond in a human-like manner. Anyone with Internet access could receive personalized health advice quickly and easily, freeing medical professionals to focus on the cases best served by their expertise.
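To make the interaction pattern concrete, here is a minimal sketch of a conversational triage loop in Python. It is illustrative only: `llm_generate` is a hypothetical placeholder, not AMIE's actual model or prompts (which have not been released in this form); here it simply cycles through canned follow-up questions so the script runs end to end.

```python
# Minimal sketch of a conversational triage loop in the style of AMIE.
# `llm_generate` is a hypothetical placeholder, NOT AMIE's actual model:
# it cycles through canned follow-up questions so the sketch is runnable.

SYSTEM_PROMPT = (
    "You are a diagnostic assistant. Ask one clarifying question at a time; "
    "when you have enough information, suggest next steps and advise the "
    "patient to confirm them with a human clinician."
)

FOLLOW_UPS = [
    "How long have you had these symptoms?",
    "Do you have a fever or any pain?",
    "Are you taking any medication?",
]

def llm_generate(history: list[dict]) -> str:
    """Placeholder for a real LLM call conditioned on the dialogue history."""
    patient_turns = sum(1 for turn in history if turn["role"] == "patient")
    if patient_turns <= len(FOLLOW_UPS):
        return FOLLOW_UPS[patient_turns - 1]
    return "Thank you. Based on what you've shared, please see a clinician to confirm."

def triage_chat() -> None:
    # The dialogue history accumulates so each reply can use the full context.
    history = [{"role": "system", "content": SYSTEM_PROMPT}]
    print("Describe your symptoms (type 'quit' to stop).")
    while True:
        patient_turn = input("Patient: ").strip()
        if patient_turn.lower() == "quit":
            break
        history.append({"role": "patient", "content": patient_turn})
        reply = llm_generate(history)
        history.append({"role": "assistant", "content": reply})
        print(f"Assistant: {reply}")

if __name__ == "__main__":
    triage_chat()
```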
However, these tools are still experimental and have limitations. AMIE's own researchers say more research is needed to “envision a future where conversational, empathetic, and diagnostic AI systems are safe, useful, and accessible.”
Precautions must be taken: providing healthcare is a complex task. Left unregulated, both professionally and nationally, these systems pose challenges to quality of care, privacy, and security.
Medical decision-making
Medical decision-making is among the most complex and consequential of human activities. While it may seem unlikely that AI could perform as effectively as human clinicians, decades of research show that algorithmic approaches to decision-making perform as well as clinical intuition, and sometimes exceed it.
Pattern recognition lies at the core of medical expertise. Like other forms of expertise, it requires extensive training: medical professionals must learn to recognize diagnostic patterns, recommend treatments, and provide care. Effective instruction teaches learners to focus their attention on diagnostic features and to ignore non-diagnostic ones, as the sketch below illustrates.
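The same principle can be seen in miniature in a statistical classifier. The sketch below uses entirely synthetic, invented data and a fictional condition: a simple decision tree learns to put nearly all of its weight on the features that actually carry diagnostic signal and to ignore the noise features.

```python
# Illustrative sketch with synthetic, invented data: a simple classifier
# learns to weight diagnostic features and ignore non-diagnostic ones,
# mirroring in miniature what training does for human diagnosticians.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(seed=0)
n = 1000
fever = rng.integers(0, 2, n)        # diagnostic: drives the label below
rash = rng.integers(0, 2, n)         # diagnostic
hair_color = rng.integers(0, 3, n)   # non-diagnostic noise
shoe_size = rng.integers(35, 48, n)  # non-diagnostic noise

# Fictional ground truth: the condition depends only on fever and rash,
# plus a little random noise.
condition = ((fever & rash) | (rng.random(n) < 0.05)).astype(int)

X = np.column_stack([fever, rash, hair_color, shoe_size])
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, condition)

for name, weight in zip(["fever", "rash", "hair_color", "shoe_size"],
                        clf.feature_importances_):
    print(f"{name:>10}: {weight:.2f}")  # the noise features get ~0 weight
```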
But effective healthcare delivery requires more than the ability to recognize patterns: healthcare professionals must also communicate what they know to patients. Beyond the difficulty of conveying expertise to patients with varying levels of health literacy, health information is often emotionally charged, creating communication traps in which doctors and patients alike withhold information. Healthcare professionals can bridge these gaps by building strong relationships with their patients.
Conversational LLMs such as ChatGPT have attracted considerable public interest. While claims that ChatGPT has “broken the Turing test” are exaggerated, its human-like responses make LLMs more appealing than earlier chatbots. LLMs like AMIE may yet fill gaps in healthcare delivery, but they must be implemented with caution.
The promise of accurate and explainable AI in healthcare

AMIE isn’t Google’s first healthcare technology. Launched in 2008, Google Flu Trends (GFT) estimated the prevalence of influenza in a population from aggregated search queries. Its developers assumed that users’ search behavior would track influenza prevalence, and that past search trends could predict future influenza outbreaks.
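The logic can be illustrated with a toy regression. All numbers below are invented, and the real GFT screened millions of candidate queries with far more elaborate modeling; this sketch only shows the core idea of estimating prevalence from search volume before official surveillance numbers arrive.

```python
# Toy illustration of the idea behind Google Flu Trends: fit a model
# linking search volume to reported flu prevalence, then use fresh search
# data to estimate prevalence ahead of official surveillance reports.
# All numbers are invented.
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical weekly history: relative flu-related search volume and the
# prevalence later reported by public health surveillance (% of visits).
search_volume = np.array([0.2, 0.4, 0.5, 0.9, 1.3, 1.8, 1.1, 0.6]).reshape(-1, 1)
reported_prevalence = np.array([0.8, 1.1, 1.4, 2.3, 3.1, 4.2, 2.8, 1.6])

model = LinearRegression().fit(search_volume, reported_prevalence)

# Surveillance reports lag; search data are available immediately, so this
# week's searches give an early ("nowcast") estimate of prevalence.
this_week = np.array([[1.5]])
print(f"Estimated prevalence this week: {model.predict(this_week)[0]:.1f}%")

# The catch, as GFT showed: if search behavior drifts (news coverage,
# interface changes) and the model isn't retrained, estimates go stale.
```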
GFT’s early predictions were very promising, until the model failed and stale data was identified as a source of bias. Subsequent efforts to retrain the model on updated search trends ultimately fell short as well, and the service was discontinued.
IBM’s Watson offers another warning. IBM invested significant capital in developing Watson and launched more than 50 healthcare projects with it. Yet the system failed to engender trust among clinicians, and that distrust was warranted: Watson recommended treatments that were “unsafe and inaccurate.” Its potential was never realized, and the underlying technology was quietly sold off.
AI developed to diagnose, triage, and predict the progression of COVID-19 perhaps best illustrates how AI in healthcare has been positioned to address public health challenges. Yet extensive reviews of these efforts cast doubt on their results: the validity and accuracy of the models and their predictions were generally lacking, mainly because of the quality of the underlying data.
One lesson from the use of AI during the COVID-19 pandemic is that there is no shortage of researchers or algorithms; what is direly needed is human quality control over both. This has created a need for human-centered design.
The same applies to expert review of the technology itself. As with Google’s AMIE, many of the publications evaluating these technologies are released as preprints before or during peer review, and there can be significant delays between a preprint and its final publication. Research has shown that the number of social media mentions predicts a publication’s download rate better than its quality does.
Unless the adequacy of training and implementation methods is verified, medical technologies may be introduced without any formal quality control.
Technology as folk medicine
The problem with AI in healthcare becomes clearer when we recognize that multiple healthcare ecosystems can exist in parallel. Medical pluralism describes settings in which two or more medical systems are available to health consumers, usually in the form of traditional medicine alongside Western biomedical approaches.
As direct-to-consumer health technologies, apps represent a new form of folk medicine. Users adopt them on the basis of trust rather than an understanding of how they work. Lacking medical knowledge or technical insight into how AI operates, users have little choice but to look elsewhere for clues to a technology’s effectiveness, and app store ratings and recommendations come to replace the expert judgment of medical professionals.
Users may prefer AI-enabled technology over humans when their health concerns involve stigma or chronic psychological distress. However, the accuracy of these systems can degrade when their data are not kept up to date.
User data poses challenges of its own. As with 23andMe, when users reveal personal information about themselves, they can also reveal clues about others in their social networks.
Left unregulated, these technologies will put quality of care at risk. Professional and national regulation is needed to ensure they truly benefit the public.