Home

In the news this week:

The American Medical Association finds AI to be interesting… maybe even compelling… but too expensive, and too prone to attack and error, to be something it would outright recommend.

While the press briefing almost sounds positive, the report itself, and the transcript of the board discussion held while developing the recommendations, were far less cordial to the concept of AI in healthcare environments. A lot of the discussion seemed to be FUD (Fear, Uncertainty, and Doubt), with references made to a number of analyses and research reports published this year and last, especially the paper by Finlayson et al. from April this year (linked below on arXiv).


It seems one of the most powerful potential proponents of improved healthcare systems has made its position felt.


In fairness… anyone suggesting the implementation of a medical AI operating on its own, as some large organisations have, should be taken out the back and quietly disposed of. The biggest complaint seems to be that the majority of attempted AI solutions either lack any input from those with clinical expertise (some of the DeepMind attempts), or use such a limited degree of clinical input (IBM Watson) that they become useless when even the most basic parameters change: treatment location, patient ethnicity or gender (in one trial, merely changing the age of the patient led Watson to make erroneous treatment suggestions), local CPGs, local medication guidelines, location-specific research publications, and so on.

The other factor the AMA focused on was the idea of AI as the sole treatment decision-maker. This is patently ridiculous. We should not be considering AI as something to supplant our medical brethren, replacing years of medical training and decades of experience. Rather, anyone pitching diagnostic or treatment-recommending systems should be pitching them as augmentation to clinical expertise… something to give the clinician access to the latest relevant information and to support and scaffold his or her treatment choices for this particular patient. I love the way that, after hours of argument on the point, the AMA chose to define AI as AUGMENTED intelligence rather than artificial intelligence, suggesting that they too would rather see it positioned as a support tool alongside clinical expertise, not a replacement for it.


This is exactly how we see Pambayesian. It relies on the input of clinicians to understand causal factors and to support effective decision-making. We do not see our efforts as pushing the doctor out… rather, we see them as elevating the individual patient.

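To make that concrete, here is a minimal sketch of the kind of clinician-elicited Bayesian network reasoning we mean. The variables and probabilities below are invented for illustration; they are not Pambayesian's actual model, just a toy three-node network with inference by exhaustive enumeration.

```python
# Toy decision-support sketch: a three-node Bayesian network whose structure
# and probabilities would, in practice, be elicited from clinicians.
# RiskFactor -> Condition -> Symptom. All numbers here are illustrative.

p_risk = {True: 0.3, False: 0.7}                      # P(RiskFactor)
p_cond_given_risk = {True:  {True: 0.40, False: 0.60},  # P(Condition | RiskFactor)
                     False: {True: 0.05, False: 0.95}}
p_sym_given_cond = {True:  {True: 0.80, False: 0.20},   # P(Symptom | Condition)
                    False: {True: 0.10, False: 0.90}}

def posterior_condition(symptom_observed: bool) -> float:
    """P(Condition=True | Symptom=symptom_observed), by summing out RiskFactor."""
    joint = {True: 0.0, False: 0.0}  # joint[condition] = P(Condition, Symptom=obs)
    for risk in (True, False):
        for cond in (True, False):
            joint[cond] += (p_risk[risk]
                            * p_cond_given_risk[risk][cond]
                            * p_sym_given_cond[cond][symptom_observed])
    return joint[True] / (joint[True] + joint[False])

if __name__ == "__main__":
    # The clinician stays in the loop: the output is a probability to weigh
    # against expertise, not a treatment decision.
    print(f"P(condition | symptom present) = {posterior_condition(True):.3f}")
    print(f"P(condition | symptom absent)  = {posterior_condition(False):.3f}")
```

The point of the sketch is the division of labour: clinicians supply the causal structure and the conditional probabilities, and the system does the bookkeeping of combining them with the evidence for one particular patient.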

The Register: AI is cool and all, but doctors and patients don’t really need it (according to the American Medical Assoc. at least)

https://www.theregister.co.uk/2018/06/15/ama_medical_ai/


AMA Press Briefing on AI report

https://www.ama-assn.org/ama-passes-first-policy-recommendations-augmented-intelligence


AMA Report 41 - Augmented Intelligence in Healthcare

https://www.ama-assn.org/sites/default/files/media-browser/public/hod/a18-refcomm-b.pdf#page=64


Research (Finlayson et al.) finds medical AI may be easily corrupted through adversarial attacks, opening the door to both error and fraud

https://arxiv.org/pdf/1804.05296.pdf
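
For context on what an "attack" means here: Finlayson et al. describe adversarial examples, inputs nudged by tiny, near-imperceptible perturbations that flip a model's output. Below is a minimal sketch of the fast gradient sign method (FGSM), one such attack, in PyTorch. The model, input, and label are placeholder assumptions, not anything from the paper; with a real trained classifier, the same tiny perturbation routinely flips the prediction.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Placeholder classifier and input: stand-ins for a trained medical imaging
# model and a real scan. Everything here is illustrative.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 2))
model.eval()

x = torch.rand(1, 1, 28, 28, requires_grad=True)  # "image" with values in [0, 1]
label = torch.tensor([0])                         # assumed true class

# FGSM: take one step in the input direction that most increases the loss.
loss = F.cross_entropy(model(x), label)
loss.backward()
epsilon = 0.05  # perturbation budget; small enough to be near-invisible
x_adv = (x + epsilon * x.grad.sign()).clamp(0.0, 1.0).detach()

with torch.no_grad():
    print("clean prediction:     ", model(x).argmax(dim=1).item())
    print("perturbed prediction: ", model(x_adv).argmax(dim=1).item())
```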