Do Algorithms Know Your Body Better Than You?

The diagnostic regime of targeted advertising has much to teach us about how we categorize and label disability

Credit: Franki Chamaki/Unsplash

Earlier this year, a survey appeared on my Facebook feed: “We need your expert insight on mobility aids.” Intrigued, I clicked through and found myself being vetted for my preferred wheelchair style. A week later, my feed served up an advertisement for an extra-loud alarm clock for people with hearing impairments, followed by ads for vision aid devices, postpartum depression support, and nutritional counseling.

I do identify as disabled, and I research disability studies as part of my work. So it makes sense that I might be targeted for disability-related products. But I don’t have any expert insight on wheelchairs or any other type of mobility aid because I’ve never used them. Ironically for a website that documents every event and relationship in my life, neither my body nor my mind corresponds to what these online advertisements have crudely judged me to be. Is the assumption that all people researching disability are disabled, and that all disabled people use a wheelchair?

I wasn’t offended; I’m actually attracted to the boldness of these algorithms, which implicitly diagnose me and offer a “prescription” of sorts: a product they suppose might help me. But whether I need the product or not, these algorithms generate the perception of a deficiency; there is something wrong with my body or mind that these products can fix. We can use this kind of “automated diagnosis” to expand the conversation on how our real selves and the data versions of ourselves inform one another. The way algorithms perceive us, and the ways they fail to perceive us accurately, tells us a lot about how we categorize disability and how our bodies are socially organized. And those categories inhibit us from thinking about the varying needs of our bodies and minds in more profound and compassionate ways.

In programmatic, or targeted, advertising, online services collect, process, and synthesize data about users’ behavior: the decisions and information we share on certain sites, data from tracking cookies, and other strands of identifying data. Where traditional advertisers bought specific ad spots, advertisers today buy into automated systems that match a product with a suitable audience and constantly reevaluate the effectiveness of each placement.
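To make that mechanism concrete, here is a deliberately simplified sketch, written in Python with invented segment names, signals, and thresholds, of the kind of matching such a system might perform. It illustrates the idea only; real programmatic pipelines involve auctions, bidding, and far more elaborate models.

```python
# A minimal, purely illustrative sketch of how an ad system might match users
# to an advertiser's audience segment. All names, signals, and thresholds here
# are hypothetical; real platforms use far more data and far more complex models.

from dataclasses import dataclass, field


@dataclass
class UserProfile:
    user_id: str
    # Tracked signals: pages visited, topics engaged with, cookie-derived interests.
    signals: set[str] = field(default_factory=set)


def audience_score(user: UserProfile, segment_keywords: set[str]) -> float:
    """Fraction of the segment's keywords present in the user's tracked signals."""
    if not segment_keywords:
        return 0.0
    return len(user.signals & segment_keywords) / len(segment_keywords)


# Hypothetical segment an advertiser might buy: "mobility aid shoppers".
segment = {"wheelchair", "mobility", "accessibility", "disability"}

users = [
    UserProfile("researcher", {"disability", "accessibility", "poetry"}),
    UserProfile("shopper", {"wheelchair", "mobility", "cushions"}),
]

for u in users:
    score = audience_score(u, segment)
    if score >= 0.5:  # threshold above which the ad would be served
        print(f"Serve mobility-aid ad to {u.user_id} (score={score:.2f})")
```

In this toy version, a disability researcher and a wheelchair shopper end up with the same score and receive the same ad, which is exactly the kind of conflation my news feed made.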

Much has been said about programmatic advertising’s impact on privacy and the manipulation of our online experience. Less explored is how algorithmically targeted advertisements take the form of an implicit diagnosis and a corresponding prescription.

The medical process of diagnosis collects information about a body and its history, then interprets those symptoms and that history through the accumulated knowledge of medical science. In that sense, diagnosis resembles an algorithm: a step-by-step process for solving a problem or achieving a goal. Programmatic advertising is yet another example of machine learning replacing human decision-making, and with it, developers would say, human bias and error.

These algorithms are not only trying to reach their ideal customers, but are virtually and partially creating them. All of your collected data forms a data double of yourself, a double that is not exactly “you” and not exactly “not-you.” A data double synthesizes our individual digital footprint: our purchases, likes, search history, and even our online connections. Data doubles are our shadows, but they are not mirrors. Rather, they are distorted projections that influence how we see ourselves and how we are seen by others.

The collecting and organizing of data to design certain audience groups is an example of what surveillance studies scholar David Lyon calls “social sorting.” The primary goal of digital surveillance is not simply to identify troublemakers, but to use data to classify individuals into groups that ultimately correspond to the degree of monitoring they require. Part of advertising has always been to make us believe that we need something, which is why ads imply a deficiency of some kind; but the surveillance aspect of this personalized process takes on the valence of a clinical, unwilled intimacy, one that divides us into socioeconomic types based on categories such as gender, income, race, and nationality.

Our search history can reveal more about our private lives than we would typically share. Many of us describe an eeriness in how algorithms seem to know more about our private lives than our closest friends and family do. In 2012, a teenager tried to hide her pregnancy from her family, but Target sent her coupons for baby products. Her father marched into the store to complain to a manager about the coupon “mess up,” when in fact his daughter’s tracked shopping activity meant Target knew her body better than her family did.

Others have reported feeling unsettled by automatic diagnosis, like the author Seth Stephens-Davidowitz, who wondered, “How did the internet know I was balding?” after being served ads for hair-loss cream despite never posting about his baldness. The accuracy of advertising algorithms does feel creepy, not least because of the pervasive, silent, automated analysis of our bodily and mental states.

When algorithms seem to fail (imagine the young woman had not been pregnant, just as I am not a wheelchair user), some might relish these misrecognitions as a sign that data tracking and algorithms are not as precise and accurate as technologists claim. In other words, the surveillance effort has failed. We have unwittingly cheated the system, and our “true selves” remain unseen.

But what exactly do these algorithms know about us? Every social media platform has different rules about what kinds of data advertisers can use to produce tailored product recommendations. After much criticism, and facing potential lawsuits, Facebook announced changes to its advertising standards in May 2019, changes that particularly affected medical advertisers. To prevent discrimination, advertisers on Facebook are no longer allowed to target campaigns based on race, income, sexual orientation, disability, or purchasing history. But even though these attributes can’t be used directly, an algorithm can still infer them from other data, treating correlated interests and behaviors as proxies, as in the sketch below.
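A toy example, again in Python and built on entirely invented interests and labels, can show how this proxy logic works in principle: the system never sees a “disability” field, yet interests that historically co-occur with disability-related ad clicks end up standing in for one.

```python
# A hypothetical illustration of "proxy" inference: even when a sensitive
# attribute is never used directly, other tracked interests that correlate
# with it can stand in for it. The data and labels below are invented.

from collections import Counter

# Hypothetical historical records: (tracked interests, whether the user later
# clicked on a disability-related product ad).
history = [
    ({"accessibility_blog", "adaptive_sports"}, True),
    ({"accessibility_blog", "screen_reader_tips"}, True),
    ({"fantasy_football", "grilling"}, False),
    ({"grilling", "travel_deals"}, False),
]

# Count how often each interest appears among users who clicked.
clicked_counts: Counter = Counter()
total_counts: Counter = Counter()
for interests, clicked in history:
    for interest in interests:
        total_counts[interest] += 1
        if clicked:
            clicked_counts[interest] += 1


def proxy_score(interests: set[str]) -> float:
    """Average historical click rate of the user's interests -- a crude proxy label."""
    rates = [clicked_counts[i] / total_counts[i] for i in interests if i in total_counts]
    return sum(rates) / len(rates) if rates else 0.0


# A new user who has never disclosed anything about disability.
new_user = {"accessibility_blog", "adaptive_sports"}
print(f"Inferred likelihood of targeting: {proxy_score(new_user):.2f}")  # -> 1.00
```

Nothing in this sketch targets disability directly, and yet its output would route a disability-related ad to someone who never disclosed anything about their body; that is the gap such policy changes leave open.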

The universal regulation of digital surveillance is years away, and for now it remains unclear how much information, and what kind, feeds these algorithms’ decisions. The rise of programmatic advertising corresponds to the digitization and automation of healthcare, and to the treatment of health maintenance as an individual responsibility, a responsibility that shames people into desiring a constant regime of self-improvement. Health becomes an ideal, something always to strive toward, which encourages us to buy preventive and curative products and services, including the ones advertised on our news feeds.

Programmatic advertising is part of a larger shift toward digital automation in medicine. Medical professionals now use diagnostic algorithms to streamline and standardize the diagnostic process, and the rise of online symptom checkers and fitness watches has introduced processes that arguably distance the doctor from the patient. Advertisers’ use of algorithms perpetuates the same logic as these medical trends: our data speaks louder than we do. Virtual, automated diagnosis privileges the standardized processing of a person’s data, abstracted from their personhood. Critically, neither a doctor nor a salesperson needs to physically interact with their patients or patrons in order to make decisions about them. Decisions about our bodies are made without anyone ever seeing our bodies.

In disability studies, disability is recognized as a social, physiological, and mental condition, but it is also an identity. Our bodies and minds are not problems to be solved, but simply different. We all need to understand how social perceptions of disability emerge and proliferate. In the case of automated diagnosis, they spread through promoted products that implicitly claim to cure or repair a corresponding problem. “Disability” is identified algorithmically, and “solved” algorithmically, too. It becomes simply another data pattern, a pattern of human design that unconsciously reflects social biases and perceptions of disability.

The automatic detection of disability implies that there is a standardized, generic set of data points that defines disability. That reduces the experience and identity of disability to a generalization and reinforces the division between disabled and nondisabled. This binary is yet another way to identify and determine the value of our bodies. And as we know, binary thinking simplifies our own and others’ identities, confining each of us to boxes as small as online advertisements.

When I see these incorrect diagnoses on my news feed, it feels automatic to dismiss them. But while I might not be a wheelchair user, the difference between me and that “accurate” target audience is not that easy to define. Maybe my online activity and behavior data crosses over with that of wheelchair users. Maybe the disability/ability binary makes us seem more different than we really are. And maybe our data knows something about us we don’t, because even in their clumsy flaws, algorithms have something to teach us after all.

Ph.D. candidate studying disability and feminist approaches to tech. Lover of sports, poetry, robots, & crip life. Avid Tweeter. Always online.
