An algorithm used by hundreds of US hospitals to predict whether patients will develop sepsis is less accurate than its maker claims, according to a study.
Sepsis is one of the leading causes of death in US hospitals – it is involved in more than a third of in-hospital deaths. Epic Systems, a leading healthcare software provider whose products are used in the majority of US hospitals, has developed a tool to identify whether a patient is at risk of sepsis.
The idea is that if hospitals can predict cases of sepsis, they will be able to provide care to patients before their condition worsens. But not only does the algorithm tend to predict that patients will get sepsis when they don’t, it’s also less accurate at correctly identifying the cases where people actually do develop the potentially fatal condition, the study claims.
Epic chose to use billing codes to define sepsis outcome
Epic estimates that its algorithm is accurate up to 83 per cent of the time, yet the paper, published in JAMA this week, alleges it is actually only right about 63 per cent of the time.
The problem lies in how the model arrives at its predictions. “Epic chose to use billing codes to define the sepsis outcome,” Karandeep Singh, study co-author and associate professor specializing in machine learning and healthcare at the University of Michigan, told The Register. The model analyzes the drugs or medical procedures a patient is billed for to determine whether that patient is at risk of sepsis, we are told.
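As a loose sketch of the distinction Singh draws, the two labelling strategies might look like this in Python. The ICD-10 codes, function names, and organ-dysfunction threshold below are illustrative assumptions, not Epic’s model or the CDC’s actual definitions:

```python
# Hypothetical sketch: labelling a hospital stay as a sepsis case from
# billing codes versus from clinical criteria. All codes and thresholds
# here are illustrative assumptions, not Epic's or the CDC's definitions.

SEPSIS_BILLING_CODES = {"A41.9", "R65.20"}  # example ICD-10 sepsis codes

def label_by_billing(billed_codes):
    # Positive only if clinicians already recognised sepsis and billed for
    # it; this is Singh's objection: the label presupposes recognition.
    return bool(set(billed_codes) & SEPSIS_BILLING_CODES)

def label_by_clinical_criteria(suspected_infection, organ_dysfunction_score):
    # Surveillance-style definitions rest on clinical signs instead;
    # the >= 2 threshold is a stand-in, not an official criterion.
    return suspected_infection and organ_dysfunction_score >= 2

# A never-billed case: billing-based label says no, clinical signs say yes.
print(label_by_billing({"J18.9"}))          # False
print(label_by_clinical_criteria(True, 3))  # True
```

A model trained against the first label can only learn to predict sepsis that clinicians already caught, which is the crux of the study’s critique.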
This is often not very helpful, for example in cases where a patient is already being given antibiotics for sepsis. “Essentially, they developed the model to predict sepsis that was recognized by clinicians at the time it was recognized by clinicians. However, we know clinicians miss sepsis,” Singh added.
Singh said the research team raised their concerns with the company in April. Epic, however, disagrees with the academics.
“The authors used a hypothetical approach,” a company spokesperson told El Reg. “They didn’t take into account the analysis and tuning required prior to real-world deployment to achieve optimal results. In order to predict who might develop sepsis, the model is trained on past patients who were clinically diagnosed with sepsis.”
The academics tested the model on 27,697 patients across 38,455 hospitalizations. Most did not develop sepsis during their stay. Epic’s algorithm predicted the onset of sepsis before doctors did only seven per cent of the time, and had a false positive rate of 18 per cent, the researchers said.
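To see why an 18 per cent false positive rate matters in a population where most stays never involve sepsis, here is a back-of-the-envelope sketch. The split of the 38,455 hospitalizations into sepsis and non-sepsis stays is an assumption for illustration, not a figure from the study:

```python
# Back-of-the-envelope: alert volume implied by an 18 per cent false
# positive rate. The sepsis/non-sepsis split below is assumed for
# illustration ("most did not develop sepsis"), not data from the paper.

total_stays = 38_455
sepsis_stays = 2_552                 # assumed minority of stays
non_sepsis_stays = total_stays - sepsis_stays
fpr = 0.18                           # rate reported by the researchers

false_alerts = non_sepsis_stays * fpr
print(round(false_alerts))           # thousands of spurious sepsis warnings
```

Even with generous assumptions, an 18 per cent false positive rate across tens of thousands of non-sepsis stays produces thousands of spurious alerts, the kind of alarm burden that can erode clinicians’ trust in the tool.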
A better approach would be for the software to use a model built on clinical criteria for sepsis defined by health agencies, such as the US Centers for Disease Control and Prevention, rather than relying solely on billing codes, it seems.
“This is not generally how sepsis is defined for the purposes of quality measurement or model development. There are multiple consensus criteria. Any of them would have been better than just using billing codes,” Singh told us. ®