An X-ray of the author’s chest, overlaid with the text “Body on the Grid.” All photos courtesy of the author.

‘Like Being Grilled Alive’: The Fear of Living With a Hackable Heart

A device connected to my heart could save my life. It could also be hacked.

Three nights before Christmas 2016, I was standing in my bathroom when a gallop broke out across my chest. It was ventricular tachycardia, a dangerous kind of arrhythmia where only one side of the heart pumps, and does so at such high speed that it prevents blood from moving through it. At the age of 23, I’d had arrhythmias all my life, but had never felt anything like this. Twenty minutes later, with the arrhythmia still going, I was in the back of a parked ambulance. Alone with the EMTs, I braced for the shock of a defibrillator.

The pain was overwhelming, like being grilled alive. It ran out from a center point in my chest and flowed into every organ, every limb, into my fingers and toes. Later, waiting in the trauma section of the Mount Sinai emergency room, doctors shocked me again.

Months of testing followed. I started taking drugs that would help reduce my arrhythmias, but in addition, my doctors suggested they replace my pacemaker with something called an ICD. The ICD would be a fail-safe, a tiny defibrillator inside my body that could go everywhere that I went.

When I came across an FDA safety notice warning that some ICDs, namely those made by a company called St. Jude, could be hacked, I was only days away from surgery. The vulnerability could allow an external actor to gain control of the ICD, reprogram its functions, and inflict all kinds of damage—even trigger death.

The week before surgery, I texted my nurse practitioner about the FDA warning. She responded quickly, “Don’t worry. We’re using a different brand,” as if the issue was settled. In the blur of acute disease, I ignored the instinct to dig further into what exactly these cybersecurity concerns might mean or what other concerns might be hiding just below the surface.

When they first came to market in the 1980s, ICDs (implantable cardioverter-defibrillators) were implanted rarely, mostly in patients who had already experienced a life-threatening episode of ventricular tachycardia or even cardiac arrest. They were often called “secondary prevention” tools — meaning a patient had already experienced a life-threatening event and the device had the potential to stop a second one. In the 40 years since, clinical guidelines have changed dramatically, and the use case for ICDs has broadened. The United States has become the biggest market in the world for ICDs, with new ICD implantations increasing almost ninefold from 1993 to 2006. Doctors now implant at least 10,000 new devices each month in the United States. Many of these devices are now used for “primary prevention,” meaning a patient hasn’t yet experienced an event that could be stopped by an ICD, but they might be at risk for one.

“The pain was overwhelming, like being grilled alive.”

In the past 13 years, these devices have also been fully integrated into the so-called Internet of Things—millions of everyday consumer items being programmed for and connected to the internet. Once connected to the internet, the devices ease the work of physicians and hospitals, who can now manage the device and monitor the patient’s condition remotely. Patients are typically charged each time their device sends data to the hospital. Think of it as a subscription—for your heart.

ICDs are just one increasingly popular medical gadget in a rising sea of clinical and commercial wireless health devices. Whether it is the growing suite of cardiac-monitoring devices available at home and on the go or an Apple Watch outfitted with diagnostic software, we are outsourcing more and more of our health to internet-enabled machines.

Having now lived with an ICD for more than three years and a pacemaker for the preceding 14, I understand intimately the consequences of being a body paired to the grid. If your smart fridge loses connectivity, maybe your food goes bad a few days early. But if a wireless ICD experiences a failure, the result could be lethal. I am stalked by the fear of the device misfiring and have wondered endlessly whether the documented security risks posed by these devices could end up harming me.

The first cardiac device I had was a pacemaker, implanted when I was nine years old. Though pacemakers and ICDs have overlapping patient demographics and are sometimes bundled in the same device, they have drastically different functions. Pacemakers help keep a patient’s heart rhythm cycling normally, while ICDs are tiny defibrillators meant to terminate dangerous arrhythmias and prevent cardiac arrest. In everyday life, defibrillators wait in hospitals and public spaces (gyms, churches, movie theaters) for disaster to strike — they are tools you seek out in an emergency. But an ICD brings the emergency response to you. It is watchful, an active listener. I think of a pacemaker as a heartbeat assistant; an ICD is an arrhythmia assassin.

For as long as I’ve had one, I’ve been acutely aware that a pacemaker is a sensitive machine and can be derailed by plenty of things: airport security; laser tag vests; the seats in 4D amusement park rides; store security towers; cellphones; and still, somehow, microwaves. All of these things could disrupt the pacemaker, reprogram it, even stop it cold. As a child in the grocery store, I ran through the theft towers quickly, like I was trying to shoplift. I sat on the sidelines while friends ripped through laser tag arenas at birthday parties. Less than two years into post-9/11 hysteria, I panicked as a nine-year-old when a TSA agent came toward me with a security wand. I bolted, running farther into the terminal at Boston’s Logan Airport. I only made it a few yards before I was stopped by a knee to my chest, a muscled agent pulling me to the ground. My panic had made me into an apparent security threat.

Doctors also posed a risk to my new device. During regular office checkups, ominously called “interrogations,” they would place a large magnetic wand over the pacemaker to take control of it. Between in-office interrogations, every three months, my physicians mandated that I do “home monitoring,” a complicated and archaic process. I would hook myself up to a transmitter box that screeched a dial-up tone through the receiver of a landline phone to a stranger sitting in a call center somewhere. And just as in the office, I needed to place a heavy round magnet over the device. Because a heavy magnet disrupts a pacemaker, I would sit in a wave of dizziness and nausea while a distant tech received the information. The whole process often lasted 15 or 20 minutes. When it was done, I would sit back in the kitchen chair, spent, waiting for the blood to return to my head.

I don’t have to do this anymore. Remote-monitoring pacemakers were first sold to the general public around 2007; currently, the industry standard for remote monitoring involves routers paired via Bluetooth to wireless-enabled cardiac devices. These routers sit in a patient’s bedroom and run constantly, pulling data at regular intervals and transmitting it straight to their doctor via the internet. No phone calls and no magnets involved. Ideally, a patient never even knows their data is being collected.

“We’ve yet to find a device that we’ve looked at that we haven’t been able to hack.”

Clinically, the benefits of remote monitoring are twofold: The patient doesn’t have to enter a medical setting to be monitored, which reduces the likelihood of iatrogenic disease — illness caused by the interference of the medical system. At the same time, doctors get more data than they’ve ever had access to, allowing them, ideally, a window to disease prevention. (I, along with many other patients, take issue with the second proposition, given that we cannot access our own data; there’s a substantial activist movement toward data liberation that includes cardiac patients who have fought for more than a decade to gain access to the information generated by wireless-enabled pacemakers and ICDs.)

“[The benefits of remote monitoring have] been held up over the years with just being able to diagnose something early,” said Dr. Leslie Saxon, a cardiologist and electrophysiologist who runs the Center for Body Computing at the University of Southern California. In 2010, Saxon led a study in partnership with device manufacturer Boston Scientific that found improved survival rates for patients followed with remote monitoring, as compared with patients who were only followed with periodic in-clinic visits. “We also learned that we could learn how to program and make these devices a lot better if we were looking at all this data all the time,” she said.

But as remote monitoring has become more widespread, concerns about the cybersecurity of the practice have only grown. Since 2011, the FDA has issued at least 11 warnings and many recalls on pacemakers and ICDs over concerns relating to cybersecurity and safety. This includes the 2017 notice for St. Jude devices that I found just before my surgery. The security defect affected at least a half-million patients and was ultimately resolved by a software patch sent directly to their remote monitors.

Manufacturers like Medtronic often advise that patients keep their monitors turned on and connected so this sort of patch or upgrade can be delivered. But patches, often quietly sent to the devices, can leave patients in the dark: There is no streamlined process to let patients know when a vulnerability has been identified in their specific device or when a patch might be on its way. And researchers have argued that retroactive patches are no replacement for baked-in security. “The main concern is if vendors continuously rely on reactively resorting to pushing patches instead of securing their devices by design,” Fotis Chantzis, a security engineer who used to hack medical devices for a major health care institution and the lead author of Practical IoT Hacking: The Definitive Guide to Attacking the Internet of Things, told OneZero. “Usually these patches fix a particular vulnerability,” he continued, “but keep in mind that there is also this view of the security community that every bug can potentially be exploited given the right circumstances.”

Device companies and doctors are often quick to insist that the cybersecurity concern is overblown. For years, they’ve maintained that while the routers can communicate with and gather data from patient devices, they can’t actually control the devices or deliver reprogramming directives. Dr. Rob Kowal, chief medical officer for cardiac rhythm and heart failure at Medtronic, told OneZero, “[Remote programming is] not possible,” at least with his company’s current home routers.

There are two kinds of connections involved in remote monitoring: the connection from the patient’s implanted device to the router, which is often Bluetooth, and the connection from the router back to the data portal seen by the physician, which can use anything from a home Wi-Fi network to a wired Ethernet cable or a phone line. Manufacturers insist that these channels have now been made secure.

But several of those FDA notices have warned that hackers could, in fact, assume control of and reprogram a patient’s device. Researchers and white hat hackers have demonstrated that the connections from the device to the router and from the router to the data portal are exploitable. Hackers have made headlines over the past decade-plus by exposing vulnerabilities in pacemakers and ICDs from every major developer, including St. Jude (now Abbott), Medtronic, and Boston Scientific.

In 2018, researchers Billy Rios and Jonathan Butts from cybersecurity firm Whitescope demonstrated that they could hack into both cardiac devices and insulin pumps built by Medtronic, with potentially deadly results: They could shock a patient’s heart into cardiac arrest or administer a lethal amount of insulin. They told Wired that the devices lacked basic security functions: Medtronic’s MiniMed line of insulin pumps used radio frequencies that were easy to figure out, and there was no encryption on communications between the pumps and their remote controls. Rios and Butts also discovered that the company’s pacemakers didn’t use code signing, a standard security function that authenticates the legitimacy of things like software updates.
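Code signing is simple to illustrate. In a minimal sketch (the names and key below are hypothetical placeholders, not Medtronic’s actual update format), a device would accept new firmware only if it carries a valid signature made with the manufacturer’s private key:

```python
# Hypothetical sketch of the code-signing check Rios and Butts found missing.
# A real implementation would live in device firmware; this Python version
# only illustrates the idea.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

# A public key baked into the device at manufacture (placeholder value).
MANUFACTURER_PUBLIC_KEY = bytes.fromhex(
    "d75a980182b10ab7d54bfed3c964073a0ee172f3daa62325af021a68f707511a"
)

def firmware_is_authentic(firmware_image: bytes, signature: bytes) -> bool:
    """Accept an update only if the manufacturer's signature verifies."""
    public_key = Ed25519PublicKey.from_public_bytes(MANUFACTURER_PUBLIC_KEY)
    try:
        # Raises InvalidSignature if the image was forged or altered in transit.
        public_key.verify(signature, firmware_image)
        return True
    except InvalidSignature:
        return False
```

Without a check like this, a device has no way to tell a legitimate update from an attacker’s, which is why security researchers treat code signing as a baseline requirement rather than an extra feature.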

Bill Aerts, Medtronic’s director of product security until 2016, is now the executive director at the Archimedes Center for Healthcare and Device Security at the University of Michigan, which was founded by the researcher who, in 2008, co-authored the first major paper on cardiac device security. “Like anything else,” Aerts told me, the level of security built into such devices “was a matter of demand and costs.” He went on to say, “It took a while to educate the engineering community about these risks… Then the boss says, ‘No, that’s going to cost too much to add that extra functionality [security features].’ And so that took a while to get people to believe that, yes, it’s worth investing in.”

The company took more than a year and a half to respond to the security concerns flagged by Rios and Butts and was apparently reluctant to offer solutions. “They are more interested in protecting their brand than their patients,” Rios told CNBC at the time. In an article from CBS News, Butts put it bluntly: “We’ve yet to find a device that we’ve looked at that we haven’t been able to hack.”

The author (left); the author’s previous cardiac devices.

Have you heard the one about Dick Cheney? Talk to a cardiac device patient long enough and they’re bound to bring it up. The former vice president first got an ICD in 2001. In 2007, as the battery ran down, he needed to have it replaced. At the time, Cheney was a candidate to be one of the first patients to wear an ICD with wireless monitoring. But there was a problem: national security. Even before independent hackers raised the alarm, his doctors were worried that a potential terrorist could gain access and trigger the ICD to shock him to death. Cheney and his doctor decided to disable the wireless function before implantation, which required a custom adjustment from Medtronic.

Cheney’s special treatment wasn’t disclosed until 2013, when he and his cardiologist collaborated on a book about his heart saga titled — what else — Heart. The sensationalism of the claim, that the vice president could invite a terror attack through his own heart, made it somehow easier for doctors and manufacturers to dismiss the concerns of average patients. Who are you to worry? You’re not Dick Cheney, after all.

For the general public, concerns around medical device cybersecurity first emerged in 2008, not long after the debut of remote monitoring. But I and hundreds of thousands of other patients were never given the option of a custom ICD with the wireless function disabled. Instead, we live with the knowledge that it could be hacked, with few people taking our concerns seriously.

To be clear: There is no documented evidence that a patient’s ICD or pacemaker has ever been hacked for malicious purposes. But the potential for hacking is hardly theoretical. What exists for now are two parts of an equation that have been proven independently: 1) Devices can be hacked; and 2) devices can cause unintentional and catastrophic harm. Put together, they would equal an opportunity for direct control over a patient’s life and safety in a way never previously seen in medicine.

Devices misfire, sans hacking, all the time. A 2017 study published by the American Heart Association found that during a two-year period, about 10% of ICD patients experienced shocks of some kind. But within that population, 38% of shocks delivered by the device were inappropriate—meaning patients were cardioverted or defibrillated when they didn’t need to be.

Some patients who experience a needless shock to the heart will suffer no obvious or immediate side effects. Some may have psychological side effects, like anxiety or depression related to the fear the device will shock them again. And then there are the more serious consequences. In 2017, Boston Scientific disclosed that a patient had died when their ICD malfunctioned. The device’s memory had been corrupted after exposure to radiation similar to what someone might be exposed to in radiation treatment for cancer. But the patient hadn’t received any such treatment, and Boston Scientific wasn’t able to establish where this exposure might’ve come from. The FDA’s public database of medical device reports contains pages of entries regarding the deaths of ICD and pacemaker patients, citing everything from lead fracture to memory failure, but these reports often decline to cite a device problem as firmly causal in a patient’s death. It is hard to pin down a number of deaths related directly to ICDs because autopsies are rarely performed, and U.S. law requires family consent for device removal after death.

“At first, my fear of the device infected every moment of the day.”

A 2015 study published in the Journal of the American Medical Association investigated a small sample of pacemaker and ICD patients who had died suddenly. The researchers found that about half of the sudden-death cases with ICDs that they observed had some form of “device concern” present, highlighting issues including hardware failures, undersensing, improper programming, and lead fracture.

I was told that if my device ever fired, a shock would come with a short warning sound ringing out from below my skin. I don’t know what the alarm sounds like, but I imagine it as something just loud enough to fill the seconds before defibrillation with an appropriate sense of panic.

When I woke up from surgery in January 2017, the world was sideways — literally. The operation had lasted so long, nearly 10 hours, that I developed acute vertigo from how my body was arranged on the table; the ICD had actually knocked me off balance. For the first several months of living with the ICD, I kept the remote monitor plugged in and turned on. It sat on the dresser in the corner of my bedroom, constantly lit up green.

My fear of the device infected every moment of the day. In the months after implantation, I walked around in a state of gloom like I was awaiting an invitation to my own funeral. I sank quickly and comfortably into a depression that turned me almost completely housebound, worried about every potential action that could invite the device to fire: Could a coffee send me into an arrhythmia? Would exercise trigger cardiac arrest? Was I opening myself up to invisible risks by keeping myself constantly paired to the internet? Rather than a savior on my side, the ICD was both a permanent reminder of how close to death I had been, and still remained, and a constant threat that the pain I experienced in defibrillation could return at any moment. Any illusion of the freedom the device afforded melted away like the cheap marketing it was. I felt the least free I’d ever been, a permanent inmate in the prison of my own skeleton.

I was perversely relieved when I learned, years later, that the psychic effects I felt were not unprecedented. In 2013, a comprehensive review of 25 separate studies into the psychological effects of living with an ICD found that at least 30% of patients showed “signs and symptoms of depression and poor quality of life” after having an ICD implanted, with most of that population meeting the criteria for post-traumatic stress disorder, and many experiencing worse symptoms as time went on. For months after its implantation, I told almost no one about the device. I wasn’t sure if I actually believed that my safety depended on keeping it a secret or if I was searching desperately for some form of control.

My anxiety has lessened with time, but still the question nags at me, settling into a more ambient concern. I no longer spend every second of the day worrying and waiting for another shock, mostly because I’ve learned not to think about it. What bothers me more now is the cavalier way the medical community has decided unilaterally that the threat of hacking does not matter for the average person, and that the side effects are outweighed by the lifesaving nature of the device. Their counter, when it comes to hacking, is not that harm is impossible, but rather that it’s unthinkable: Who would even want to hack a patient? And if nobody comes to mind, is the problem worth fixing?

The author’s remote monitor.

When I moved to a new apartment last summer, I neglected to unpack my remote monitor from its box, letting it collect dust on a high closet shelf for months.

My hesitance to engage with it retreats and returns in cycles like this. Some doctors have argued that overpublicizing negative information about medical devices unnecessarily adds to hysteria in a way that affects patient compliance. “You have to make sure that the reaction to [a problem like hacking] doesn’t introduce more risk than the potential for a problem,” Dr. Saxon told me. Removing a problematic device, for example, reintroduces all the standard risks of surgery.

But it’s hardly just the threat of hacking that worries me. It’s the feeling of living as a guinea pig for an opaque set of private interests, and the feeling that I can’t trust an industry that would put insecure devices inside patients in the first place.

It’s tacitly accepted, especially in health care, that this continued development naturally means improvement; that the further we get from the early 20th-century days of unsterile surgery, from the time before anesthesia and antibiotics, the safer our medicine becomes. But by entangling medicine with the Wild West–like tech industry, the medical IoT has instead produced a new suite of therapies meant to save lives that can also be hacked and sabotaged to endanger them.

“The longer it sits inside my body without having saved my life, the more I think about the device’s ability to end it.”

“There is a rising challenge in terms of making sure it’s secure, because some of these devices are, you know, it’s a huge marketplace, there’s a rush to get products out,” said Aerts of commercial medical tech. “Companies either aren’t always willing to invest the time and money it takes to build security into those products or they are new to the marketplace and just don’t understand it.”

What’s worse is that the rapid normalization of people willing to pair their medical data, health monitoring, and disease management to the internet in some ways perpetuates itself. Connected medical devices like pacemakers, ICDs, and continuous glucose monitors for diabetes management, as well as commercial wellness products like fitness wearables, create a flywheel effect that amplifies further use on both sides. Before medical and health IoT devices took off, people generally went to the doctor when they felt unwell. Now people are also going to the doctor when they feel fine but a machine is telling them something is wrong. Diagnostic cardiac tools, for example, like the Apple Watch’s arrhythmia detection, have the potential to drive more people into the health care system and could end up helping to increase the population of cardiac device users.

There’s already some evidence that connected cardiac devices are being overprescribed, perhaps due to the positive bias the devices can engender in doctors and manufacturers. A 2011 study in JAMA found that in a population of more than 100,000 ICD patients, more than one-fifth received an ICD without meeting the standard clinical guidelines for implantation. Many of them had never experienced any kind of ventricular arrhythmia.

I have had my ICD for nearly four years now. In that time, it has never once shocked me. I should feel relieved that the device has not yet gone off, but instead, the longer it sits inside my body without having saved my life, the more I think about the device’s ability to end it. Much of my experience of the health care system has been one of escalating calamity: The solution to one problem invariably causes an even bigger one. Even my very first pacemaker seems linked inextricably to the fact that I later became eligible for an ICD: About 10 months into living with it, I developed an arrhythmia that my doctors later determined was somehow caused by interference from the pacemaker itself. The heart surgeries I’ve had over the years have contributed to a layer of scar tissue on my heart that predisposed me to, and likely contributed to, the incident that led to the ICD. When it comes to the security of the ICD itself, it’s less an absolute question of whether the costs outweigh the benefits and more a philosophical one. Not only do I feel less safe, but I am now also acutely aware of the ways in which I might have been destined to end up at this point, tracing the steps that led me here back through the maze of treatment.

The timeworn credo of medicine is “first, do no harm.” It’s hardly a hard-and-fast rule anymore, but the principle comes into sharp relief when discussing the danger posed by medical devices. Doctors and manufacturers often take a statistical approach when discussing the cost-benefit relationship of these devices: Real harm is minuscule, comparatively. ICDs save lives all the time, so that justifies their use. But this gets the principle completely backwards. The question is not supposed to be will this patient get sicker if you don’t intervene, but rather, will someone get sicker if you do?

This year, my doctor mentioned that their office hadn’t received a transmission from my ICD in a while. When I got home, I reluctantly pulled the monitor out from my closet and plugged it in by my dresser. I began the process of pairing the monitor with the device in my chest, but the monitor was having trouble connecting to my Wi-Fi. I tried it a second time, and then a third: A startup sequence ended in a flashing red light. I stopped trying to pair the devices. I breathed a sigh of relief, feeling as if, even momentarily, I’d regained control over myself, and put the monitor back into the closet.
