
This article is a preview from the Spring 2019 edition of New Humanist

As of this year, the US insurer John Hancock will only offer life policies that include monitoring of health and fitness data using wearable devices such as the Fitbit or Apple Watch. When the announcement was made last September, it created headlines around the world. This was likely its purpose. However, the company’s public statements and the reactions that followed illuminate a much broader contemporary phenomenon. The real story is how the renewed pursuit of health aided by personal devices and mobile technologies is giving rise to new forms of commercial exploitation, as well as new forms of discrimination against those whose bodies and habits are found wanting.

“John Hancock Leaves Traditional Life Insurance Model Behind to Incentivise Longer, Healthier Lives,” the company’s press release confidently intoned, before boasting that people enrolled in its Vitality programme have been shown to “live 13-21 years longer than the rest of the insured population”. The key to this extraordinary result is apparently a shift from traditional models of insurance to a new paradigm founded on “behavioural-based wellness”. This was presented in turn as the answer to lifestyle diseases that have become the leading cause of death among Americans. Armed with an array of self-monitoring devices and apps, John Hancock’s customers would hold the key to reversing this trend. After all, they could already lay claim to taking “nearly twice as many steps as the average American” as well as “logging more than three million healthy activities including walking, swimming, and biking”. And because a healthy lifestyle is evidently not its own reward, the insurer resolved to drive these behaviours through financial incentives: from customer discounts at more than 400,000 hotels around the world to a one-year Amazon Prime membership. As for the benefit to the insurer, John Hancock CEO Brooks Tingle supplied the money quote to the New York Times: “The longer people live, the more money we make.”

All of these statements and claims deserve to be carefully unpacked, but for the moment I want to set out the premise of this novel discourse: namely, that a combination of state-of-the-art wearable tech and innovative financial incentives can help consumers live longer, healthier lives, and nobody stands to lose. What John Hancock claims to have discovered is in fact nothing less than the philosopher’s stone of insurance: a way to improve “mortality experience”, in the parlance of the industry – that is, to shift the actuarial tables as opposed to merely using them to assess the risk of insuring its customers.

No part of this premise would be possible were it not for the mass adoption of wearable technologies by consumers in the developed world. These range from the most common wrist-worn devices to glasses, jewellery, clothing and shoes – all of them “smart”, all of them sending the user’s biometric data to the cloud. There are over 80 million such devices in active use in the US, while in Britain roughly one in five people uses one. Critically, these statistics don’t include the most ubiquitous “wearable” device of all: the smartphone.

Wearables have always been connected primarily with personal well-being. Coined in 2007 by Wired editors Gary Wolf and Kevin Kelly, the term “quantified self” developed into a movement whose members used first-generation devices and the apps that quickly flooded the market to track their individual paths toward physical and spiritual self-improvement. Since 2012, the movement has had a dedicated institute at the Hanze University of Applied Sciences, Groningen. Its motto – “self-knowledge through numbers” – exemplifies the uncritical trust in the inherent validity and value of the data that underpin this superficially humanistic project.

In the interest of full disclosure, I do one of those things. For the past year, I have been tracking my daily steps through the basic and probably fairly inaccurate counter in my smartphone. What’s worse, I have been aiming to reach the goal of 10,000 steps per day promoted by the wearables industry, despite being aware that it’s an arbitrary number, first popularised in the lead-up to the 1964 Tokyo Olympics by the marketers of an earlier, mechanical step-counter. There is no medical basis for suggesting that this is a desirable health goal for a person of my age and body type, nor is there any evidence that devices such as the Fitbit have improved the fitness levels of their users. Yet I still refer to that number most days, possibly because what I do know is that being active is better than being sedentary, and because my smartphone – which I carry around anyway – readily supplies me with information that it would be too bothersome for me to record myself.

This is fairly harmless, right? But here’s a key fact: my smartphone was recording my daily steps months before I started caring that it did. It was a built-in feature managed by a pre-installed app. This app isn’t called Step Counter or Distance Travelled or anything like that. It’s called Health. And because it’s called Health, the number of steps I take comes with an estimate of the number of calories I burned in the process. This is likely even more inaccurate than the step count itself: partly because I live among the hills of Wellington, and the app cannot differentiate between my walking on the flat and climbing a steep road; but even more so because the app doesn’t know how much I weigh. Yet it’s difficult not to trust both of those numbers instinctively. Such is the aura of our personal devices and the power of a good-looking graph.
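To see why that second number is guesswork, consider a rough sketch of how a calorie figure might be derived from a step count. The formula and constants below are illustrative assumptions only, not the method used by the Health app or any other real product; the point is simply that body weight and gradient dominate the result, and an app that doesn’t know them is, at best, estimating.

```python
# Illustrative sketch only: a naive way to turn a step count into a calorie
# figure. The constants and the formula are assumptions for demonstration,
# not the method used by any real health app.

def naive_calories(steps: int, weight_kg: float, stride_m: float = 0.75,
                   uphill_fraction: float = 0.0) -> float:
    """Estimate energy burned while walking a given number of steps.

    Assumes a flat-walking cost of roughly 0.5 kcal per kg of body weight
    per km, with an arbitrary 60% surcharge on the uphill portion.
    """
    distance_km = steps * stride_m / 1000.0
    flat_cost = 0.5 * weight_kg * distance_km            # kcal on level ground
    hill_surcharge = 0.6 * flat_cost * uphill_fraction   # extra cost of climbing
    return flat_cost + hill_surcharge

# The same 10,000 steps produce very different numbers depending on
# inputs the phone does not actually know:
print(naive_calories(10_000, weight_kg=60))                       # ~225 kcal
print(naive_calories(10_000, weight_kg=90, uphill_fraction=0.5))  # ~439 kcal
```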

The step-counter built into my smartphone illustrates that the quantified self movement has long since stopped being a movement. It is a mainstream practice now, a cultural default for anyone who owns the most basic tool of everyday living – whether they know it or not.

The recorded data goes into two baskets: one is your personal, self-improvement basket. In my case, a rough count of how many steps I’ve taken every day, so I can make sure I maintain a reasonable level of physical activity. The other is the general basket where all of your personal digital information goes, from the places you visit – both physical and virtual – to the purchases you make or are thinking of making, to the social connections you establish. All of this is the quantified self, only a fraction of which is quantified by us.

* * *

Back to the business of insurance. “Consider the possibilities in health care,” observed Randy Rieland of Smithsonian.com in early 2012 – three years before the release of the Apple Watch and arguably some time before wearable technologies achieved critical mass. “In the past, anyone analysing who gets ill and why had to rely on data skewed heavily toward sick people – statistics from hospitals, info from doctors. But now [...] there’s potentially a trove of new health data that could reshape what experts analyse.”

To which the cultural critic Rob Horning responded: “What a dream come true! We can collect enough data to create the profile of the ultimate superbeing: the perfectly average human. And then we can use health-insurance protocols to force everyone to become this or else.”

Both insights are crucial here. The first hints at the fact that turning the clinical gaze onto healthy bodies can lead to pathologising them. What do I do when I don’t reach my daily 10,000-step quota? Is a 9,000-step day a “bad health day” for me now? Does going out for a walk even count, if I leave my phone at home? The second is an exact picture of the dynamic at play in the insurance industry’s recent moves to integrate wearable technologies more broadly.

John Hancock drew attention because in headline form (roughly, “US insurer to stop selling traditional life insurance and only market policies that record the exercise and health data of its customers”) its announcement played on common and well-founded anxieties about the unintended use of our personal data. It is one thing when we choose to quantify ourselves; it is another to be compelled to in order to access a basic service such as life insurance, which is not merely a nice-to-have for the wealthy: in many countries, it is necessary in order to secure a mortgage.

In these situations, it is preferable not to succumb to a feeling of vague dread but to try instead to understand what specific dystopia is being prefigured. Upon closer inspection, all that John Hancock did was require future customers to enrol in its Vitality self-monitoring programme. This is not the same as forcing them to record or turn in the data. However, this is not a case of dystopia avoided. It is rather the case, as Horning intuited, of a second-order attempt to redefine the healthy subject and the ideal customer – with direct consequences for those who no longer fit the picture.

Two terms which have been used by the insurance industry for well over a century are relevant here: “substandard lives” and “preferred lives”. The former denotes people who are assessed as representing a high degree of risk for life insurers (much like “subprime” mortgage holders), while the latter denotes individuals who sit at the opposite end of the spectrum and are assessed as having the highest chances of living to a very old age. The criteria for assigning potential customers to either category (and the ones in between) have changed over time and differ at any one time from insurer to insurer, but their relationship is always dynamic: meaning that the presence of a given quality in one category (say, being a smoker) usually implies its absence in the other, and vice versa.

A study published last year by the reinsurer Munich Re and the health analytics company Vivametrica encouraged insurers to make greater use of wearables not in order to “shift the mortality tables” (as claimed by John Hancock) but rather precisely to gain an edge in the assessment of these risk profiles. This has the double effect of attracting the good risks (“preferred lives”) and reducing the profitability of competitors by sending the bad risks (“substandard lives”) their way.

Ostensibly, the paper’s key finding is that steps per day can effectively segment mortality risk even after controlling for other, more traditional indicators. But this is only superficially true. What seems far more relevant is that – as a number of studies in behavioural economics have shown – it is overwhelmingly wealthy, healthy people who engage in self-monitoring or are likely to sign up to programmes with financial incentives.

The real benefit of wearables to insurers, as another industry publication more plausibly puts it, is that they provide “a signal to identify super-fit, active people – encouraging life insurance purchase from consumers who feel invincible”. This explains John Hancock’s staggering claim that customers of its Vitality programme live 13 to 21 years longer than everyone else. It is not the case that they live longer because they take part in the programme. They take part in the programme because they live longer.
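The arithmetic of this selection effect is easy to demonstrate. The sketch below uses entirely invented numbers and is not drawn from any insurer’s data; it simply shows how a programme that does nothing whatsoever to anyone’s lifespan, but disproportionately attracts people who were already healthier, still appears to confer a large longevity advantage on its enrollees.

```python
# Illustrative simulation of a pure selection effect, using made-up numbers.
# The "programme" below does nothing at all to anyone's lifespan; it simply
# attracts people who were already healthier.
import random

random.seed(0)

population = []
for _ in range(100_000):
    healthy = random.random() < 0.5                        # half the population is "healthy"
    lifespan = random.gauss(86 if healthy else 68, 8)      # healthier people live longer anyway
    enrols = random.random() < (0.5 if healthy else 0.02)  # and are far more likely to sign up
    population.append((lifespan, enrols))

enrolled = [life for life, joined in population if joined]
others = [life for life, joined in population if not joined]

gap = sum(enrolled) / len(enrolled) - sum(others) / len(others)
print(f"Apparent longevity advantage of enrollees: {gap:.1f} years")
# With these invented parameters the gap comes out at roughly eleven years,
# even though enrolment had no causal effect on lifespan whatsoever.
```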

* * *

In the US – where the provision of health cover is often left to employers – a number of companies have enrolled their entire workforce in programmes that require some form of self-monitoring, using incentives to save on their insurance bill. Such arrangements may seem fairly benign, but remember that risk profiles exist in a dynamic relationship, and as soon as we describe the healthy subject as one who meets certain measurable, monitored targets of exercise or nutrition, we automatically define what the unhealthy, undesirable subject looks like as well.

In the era of the quantified self, description seamlessly turns into prescription. If my phone tells me that I’ve taken 9,000 steps today, it’s not just a piece of information: it’s also an implicit directive to take at least 1,000 more. Equally, in the insurance industry any criteria for inclusion can easily turn into criteria for exclusion. On the one hand, offering discounted policies to one group of customers is simply another way of making insurance less affordable for others. On the other, the introduction of new and ever more sophisticated and intimate monitoring mechanisms paves the way for future decisions on who is insurable and who isn’t. At this moment, this is especially relevant in the US, where the fight to repeal Obamacare on constitutional grounds rages on. As James Purtill puts it, following a repeal “insurers might look to wearable devices for evidence they could use to refuse to pay for patients’ health care.”

These issues, however, are fundamental to the politics of life itself, well beyond the insurance industry alone. Activity and fitness data – such as the vast repository owned by Fitbit – along with the model of the virtuous, healthy subject they construct, are firmly inscribed within the personal information that has become the lifeblood of the digital economy. It has been estimated that companies place 10 billion bids per day on the browsing data of UK users. The information thus obtained goes into building personal profiles that are used predominantly for advertising purposes in a market dominated by Facebook and Google – two giants that understand perhaps better than any other the value of building pictures of people based not on their stated preferences and ideologies, but rather on their recorded consumption and online behaviour.

Recently Google in particular, or rather its parent company Alphabet, has moved directly into the health industry through Verily, a life sciences company devoted to helping people “enjoy longer and healthier lives” – the same slogan deployed by John Hancock in its announcement. The company is busy developing its own line of next-generation wearables. It holds patents for smart contact lenses that can measure the blood sugar of diabetics, and for a magnetic wristband designed to detect cancerous cells in the blood. While it’s hard to predict where these efforts might lead, it is not unreasonable to be concerned that a company that already controls one of the two largest databases of personal information in the world is showing an interest in the contents of our bloodstreams.

In the background of these developments is the continuing, staggering failure of governments to regulate the handful of technology giants that are redefining notions of citizenship down to the molecular level. Even the much-touted EU data protection regime that came into force in 2018 is largely ineffective with regard to these practices, for it is designed to protect the individual as opposed to the social body and has no mechanisms to curb the commodification of personal information that underlies the global economy. However, the dynamic of substandard and preferred lives reminds us that privacy is not only a personal but a collective good, for we are all affected – sometimes to an existential degree – by the data relinquished by others. It’s time our politics recognised as much.