Tech by VICE

Courts and Insurance Companies Need to Realize Fitness Data Can Be Spoofed

It took a couple of hours to build a program to trick the system.

by Jordan Pearson
Feb 2 2016, 7:34pm

A Fitbit band, which was found to be the most secure of the tested devices. Image: Teresa Timms

The "quantified self" movement—people who track everything from their sleep to the steps they take in a day with wearables—has made its way from Palo Alto office complexes into our courts and insurance system, as evidence and the basis for incentives, respectively.

But can fitness data be trusted? According to a new report from researchers at security think tanks Citizen Lab and Open Effect, some of these devices can be hacked by users to display false information—for example, that the user was active and walking around during a certain time of day.

"The attack scenario in this case is a motivated, technically savvy user who wishes to artificially inflate his or her level of reported exercise, or otherwise insert false fitness events into their historical timeline," Andrew Hilts, the lead researcher behind the report, told me in an encrypted message.

"This could have implications for insurance fraud," he added, "as fitness data is being increasingly used as a measure of people's health in that industry. It could also have implications on the use of fitness data in court proceedings, which we've also seen happening."

"The wearable industry must be vigilant about using strong encryption and other security protocols"

In at least one court case, fitness data has been used as evidence to back up a personal trainer's claim that an accident had hindered their ability to work. Some insurance companies offer financial incentives, such as discounts, to customers whose data shows they make healthy choices.

"For that interaction within the healthcare system, security is clearly important as well as trust," Michelle De Mooy, deputy director of consumer privacy at the Center for Democracy and Technology, wrote me in an email. "Outside of a provider relationship, in an employee wellness program, for example, the implications of a wearable device being spoofed or hacked is concerning because the data may be shared more widely and there is no accountability for such violations."

"As more devices related to personal health are hacked and the data they generate otherwise exploited, the wearable industry must be vigilant about using strong encryption and other security protocols, internally and externally," she added.

The method that Hilts used to create false data wasn't exactly easy, but with the proper expertise, it only took a couple of hours, he said. What it came down to was a lack of proper encryption across all aspects of these services. While every tested device's app used HTTPS encryption to send data to remote servers, some didn't use it at all times. Withings, for example, doesn't encrypt the user's session ID and user ID when the "share my dashboard" feature is in use. According to the report, an attacker who controls the wireless network can capture these IDs and use them to pull user information from Withings' servers. This is known as a "man in the middle" attack.
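To see why sending identifiers outside HTTPS matters, here is a minimal sketch of what an eavesdropper on the same network could do with a captured plaintext request. The parameter names, path, and host below are invented for illustration; they are not Withings' actual API.

```python
from urllib.parse import urlparse, parse_qs

# Hypothetical plaintext HTTP request, as an attacker controlling the
# wireless network would see it. Because it is not wrapped in HTTPS,
# everything here is readable on the wire.
captured_request = (
    "GET /dashboard/share?sessionid=abc123&userid=42 HTTP/1.1\r\n"
    "Host: example-fitness-service.com\r\n\r\n"
)

def extract_ids(raw_request: str) -> dict:
    """Pull query-string identifiers out of a captured HTTP request line."""
    request_line = raw_request.split("\r\n", 1)[0]   # "GET /path?... HTTP/1.1"
    path = request_line.split(" ")[1]                # "/dashboard/share?..."
    params = parse_qs(urlparse(path).query)
    return {key: values[0] for key, values in params.items()}

ids = extract_ids(captured_request)
# With the session ID and user ID in hand, the attacker can replay them
# in their own requests to the service's servers.
```

The point of the sketch is that no cryptographic attack is needed: if the IDs travel in the clear, extracting them is a few lines of parsing.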

Just as importantly, while HTTPS encrypts communication from the user to the company's servers, it doesn't protect against users generating fake data for their own accounts. In the case of Jawbone and Withings, the researchers created programs that exploited a vulnerability wherein servers don't verify whether the data they're receiving is coming from the device via the app, or from a user hijacking the app's credentials. In one case, the researchers were able to fool Jawbone's servers into thinking one user walked one billion steps over five hours.
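The spoofing technique the researchers describe boils down to the server trusting whatever arrives with the app's credentials. A hedged sketch of that idea, with an invented token, field names, and endpoint (none of these are Jawbone's or Withings' real API):

```python
import json
import time

# Assumption for illustration: the user has extracted the credential the
# official app uses, and the server does not check where payloads originate.
APP_TOKEN = "token-recovered-from-the-official-app"

def forge_activity(steps: int, duration_hours: int) -> dict:
    """Build a fake activity upload shaped like one the app would send."""
    now = int(time.time())
    return {
        "auth_token": APP_TOKEN,
        "start_time": now - duration_hours * 3600,
        "end_time": now,
        # Nothing server-side checks physical plausibility of this number.
        "steps": steps,
    }

# Mirrors the reported result: a billion steps claimed over five hours.
payload = json.dumps(forge_activity(steps=1_000_000_000, duration_hours=5))
```

POSTing such a payload to the (hypothetical) upload endpoint would be accepted as genuine, because the server has no way to distinguish it from data relayed from a real wearable.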

One notable exception was Fitbit, a wearable that encrypts data on the device itself and routes it through the app and to the servers, where it's presumably decrypted. This effectively means the company's servers don't "trust" the app the way Jawbone's and Withings' do—they're set up to only pay attention to data coming from the device.

What can companies do to make sure that health data can be trusted more, especially when it comes to important matters before the courts?

"Not trusting the mobile application is a good start, as is practiced by Fitbit," said Hilts. "It would also be great to see an industry-wide dialogue or working group about anti-fraud measures."