A.I. Doctors Have a Trust Problem

Ethicists argue that A.I.-based medical services need to be evaluated and regulated in the same way as new drugs

Kim Thomas
OneZero



Imagine you’re a 59-year-old man, and you go to your doctor with chest pains. The doctor thinks it might be a heart attack and orders further tests. Now, imagine you’re a 59-year-old woman with the same symptoms. The doctor tells you that you’re probably having a panic attack.

These strikingly different suggestions, however, didn’t come from a doctor, but from a popular health care app called GP at Hand, which uses artificial intelligence to tell you what might be wrong with you based on your symptoms. Babylon Health, which makes the app, is careful not to use the word “diagnose,” describing the app instead as a triage tool. The company batted away concerns about sexism by arguing that its suggestions are based on “epidemiological data from a huge number of research studies.” Because women are much less likely than men to suffer heart attacks but twice as likely to suffer from anxiety disorders, it argued, the app’s suggestions were correct.

Nonetheless, the story raises difficult questions about how A.I. should be used in health care. The promise of A.I. is that, by analyzing large quantities of data — from patient health care records, laboratory…


Kim Thomas

Freelance journalist since 1999. Specialises in education, health care and technology. Read my work at www.kimthomas.co.uk.