Artificial Intelligence for COVID-19: saviour or menace?
The global health emergency has spurred a new wave of transformative technologies proposed as ways to contain the COVID-19 pandemic. Offering an array of fresh opportunities to tackle critical challenges, artificial intelligence (AI) tools are being cast as a prospective saviour of the day.
An unexamined AI algorithm has even received emergency use authorisation from the United States Food and Drug Administration. But a question arises: with new technologies and concepts taking shape every day, is it safe to assume that AI will take centre stage in controlling such pandemics?
The lax regulatory landscape for COVID-19 AI algorithms has raised substantial concern among medical researchers. A living systematic review published in the BMJ reveals that COVID-19 AI models are poorly reported and trained on small or low-quality datasets, leaving them at high risk of bias. Reporting all important details of the development and evaluation of prediction models for COVID-19 is vital. Failure to report those details contributes to research waste; more importantly, it can allow a poorly developed and poorly reviewed model to be used in clinical decision making, where it could cause more harm than benefit.
Source code and deidentified patient datasets for COVID-19 AI algorithms should be open and accessible to the research community to support transparent and reproducible reporting. A report in The Lancet Digital Health describes a new AI COVID-19 screening test named CURIAL AI that uses routinely collected clinical data from patients presenting to hospital. Hoping that AI can keep patients and health workers safe, Andrew Soltan and his fellow researchers state that the AI test could allow rapid exclusion of patients who do not have COVID-19 and ensure that patients who do have it receive treatment quickly. This is one of the largest AI studies to date, drawing on clinical data from more than a hundred thousand cases in the United Kingdom. Prospective validation showed the AI screening test delivered accurate results faster than gold-standard RT-PCR testing.
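To make the screening idea concrete, here is a minimal sketch of a rule-out classifier built on routinely collected measurements. It is not the CURIAL AI model: the features, synthetic data, and sensitivity target below are all hypothetical, chosen only to illustrate how such a test could exclude patients unlikely to have COVID-19.

```python
# Illustrative sketch only: NOT the CURIAL AI implementation.
# Features, data, and thresholds are hypothetical stand-ins for
# "routinely collected clinical data" (vital signs and blood tests).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import recall_score

rng = np.random.default_rng(0)

# Synthetic cohort: rows are hospital presentations, columns are routine
# measurements; y stands in for the RT-PCR result used as the reference label.
n = 5000
X = rng.normal(size=(n, 4))  # e.g. temperature, heart rate, lymphocytes, CRP
y = (X @ np.array([0.8, 0.3, -0.9, 0.6]) + rng.normal(size=n) > 1.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

model = GradientBoostingClassifier().fit(X_train, y_train)

# For a rule-out test, pick a probability threshold that keeps sensitivity
# high, so that a negative prediction safely excludes COVID-19.
probs = model.predict_proba(X_test)[:, 1]
threshold = np.quantile(probs[y_test == 1], 0.05)  # ~95% sensitivity target
preds = (probs >= threshold).astype(int)
print("sensitivity:", recall_score(y_test, preds))
print("specificity:", recall_score(y_test, preds, pos_label=0))
```

Note the design choice: the threshold is tuned for high sensitivity rather than overall accuracy, because the clinical cost of sending home an infected patient far exceeds that of an extra confirmatory RT-PCR test.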
However, like other COVID-19 AI models, CURIAL AI requires validation across geographically and ethnically diverse populations to assess its real-world performance. “We also still don’t know if the AI model would generalise to patient cohorts in different countries, where patients may come to the hospital with a different spectrum of medical problems,” Soltan emphasised.
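As a hedged illustration of the external validation the researchers call for, the sketch below trains a model at one hypothetical site and evaluates it unchanged at a second site where the relationship between routine measurements and COVID-19 status differs. Every cohort, weight, and model here is invented for illustration, not taken from the study.

```python
# Illustrative sketch of external validation: train at one site, then
# evaluate unchanged on a second site whose case mix differs.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)

def make_cohort(n, weights):
    """Hypothetical cohort; `weights` mimic a site-specific relationship
    between routine measurements and COVID-19 status."""
    X = rng.normal(size=(n, 4))
    y = (X @ weights + rng.normal(size=n) > 0.5).astype(int)
    return X, y

X_dev, y_dev = make_cohort(4000, np.array([0.8, 0.3, -0.9, 0.6]))  # development site
X_ext, y_ext = make_cohort(1000, np.array([0.2, 0.8, -0.1, 0.9]))  # external site

model = LogisticRegression().fit(X_dev, y_dev)

# Report discrimination on both cohorts; a gap between the two AUCs
# signals that the model does not generalise beyond its home population.
print("internal AUC:", roc_auc_score(y_dev, model.predict_proba(X_dev)[:, 1]))
print("external AUC:", roc_auc_score(y_ext, model.predict_proba(X_ext)[:, 1]))
```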
The Alphabet subsidiary X developed an AI to identify features of electroencephalography data that might be useful for diagnosing depression and anxiety, but found that experts were not convinced of the diagnostic aid’s clinical value. AI developers do not always understand how AI tools for diagnosing health conditions can enhance medical care. COVID-19 AI models must therefore be developed in close partnership with healthcare workers, to gain insight into how the models’ output could be applied in patient care.
If AI tools cannot be shown to distinguish one pneumonia from another, premature deployment could increase misdiagnosis and undermine clinical care. If such mistakes are allowed to scale, they will slow the future adoption of potentially life-saving technologies and erode clinician and patient trust in artificial intelligence. Clinical trials are essential to establish the true accuracy of AI tools for COVID-19 and to understand how they can support patients in the real world.