Artificial intelligence (AI) is becoming an integral part of healthcare, with systems designed to predict patient risks, suggest diagnoses, and streamline administrative tasks. However, experts warn that these systems require constant monitoring to remain effective. A recent example comes from the University of Pennsylvania Health System, where an AI algorithm used to help oncologists discuss treatment and end-of-life preferences with cancer patients declined in accuracy during the COVID-19 pandemic. The algorithm’s mortality predictions worsened by 7 percentage points, leading to missed opportunities for conversations that could have helped patients avoid unnecessary treatments such as chemotherapy.
This issue isn’t isolated. Many AI tools in healthcare have shown signs of performance decline over time, and experts argue that hospitals and healthcare providers aren’t always equipped to monitor these systems properly. AI systems need ongoing oversight to ensure they continue to function as intended. While AI’s potential in healthcare is vast, with tools designed to enhance care and reduce costs, its effectiveness depends on the ability to continually evaluate these systems and correct problems as they emerge.
AI tools are already widespread in healthcare, but as more hospitals implement them, there are concerns about the lack of standardized methods to assess their performance. Without clear standards or tools for evaluation, it’s difficult to determine which algorithms are truly working. The FDA, for example, has approved many AI products, but the challenge remains in validating them over time. Experts like Nigam Shah from Stanford Health Care highlight that while AI promises improved care, the costs and resources required to maintain these systems might make their widespread adoption challenging.
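In practice, the kind of ongoing evaluation experts call for often amounts to comparing a model’s recent accuracy against a historical baseline and flagging when the gap grows too large. The following is a minimal sketch of that idea, with entirely hypothetical data and an arbitrary alert threshold; real clinical monitoring systems are far more involved (calibration checks, subgroup analyses, delayed outcome labels).

```python
# Minimal drift-monitoring sketch. All data and thresholds here are
# hypothetical, chosen only to illustrate the idea of flagging an
# accuracy drop between a baseline window and a recent window.

def accuracy(predictions, outcomes):
    """Fraction of binary predictions that match observed outcomes."""
    correct = sum(p == o for p, o in zip(predictions, outcomes))
    return correct / len(predictions)

def drift_alert(baseline_acc, recent_acc, max_drop_pp=5.0):
    """True if accuracy fell by more than max_drop_pp percentage points."""
    return (baseline_acc - recent_acc) * 100 > max_drop_pp

# Hypothetical example: the same model scored against outcomes from
# two periods. Baseline window: 9 of 10 correct (90%).
baseline = accuracy([1, 0, 1, 1, 0, 1, 0, 1, 1, 1],
                    [1, 0, 1, 1, 0, 1, 0, 1, 1, 0])
# Recent window: 7 of 10 correct (70%), a 20-point drop.
recent = accuracy([1, 0, 1, 1, 0, 1, 0, 1, 1, 1],
                  [0, 0, 1, 0, 0, 1, 0, 1, 1, 0])

print(drift_alert(baseline, recent, max_drop_pp=5.0))  # prints True
```

The hard part in a hospital setting is not this arithmetic but obtaining timely ground-truth outcomes to score against, which is one reason such monitoring is resource-intensive.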
Additionally, even minor errors in AI-generated outputs can have significant consequences in healthcare. For example, a Stanford University study showed that large language models, like the ones behind tools such as ChatGPT, had a 35% error rate when used to summarize patient medical histories. In healthcare, even a small mistake, like omitting a crucial symptom, can be catastrophic.
Despite these challenges, AI’s potential in healthcare remains promising. The key issue is that healthcare institutions need to invest more resources in monitoring and improving AI tools, which raises the question of whether hospitals can shoulder this additional responsibility given already strained budgets and a limited supply of AI specialists.
Ultimately, AI in healthcare requires consistent oversight to ensure it continues to deliver on its promises without compromising patient safety. The question remains whether the necessary infrastructure and resources will be put in place to achieve this.