Parachute Engineering
May 18, 2020 · 12 min read
Privacy & Security

Forensic Investigation: The Shocking State Of Privacy In Safety Apps

This analysis was performed by Parachute, one of the 20 safety apps in this report. To ensure the integrity of our investigation, our entire analytical process was recorded, documented and evidenced from start to finish, and our findings were cross-checked using multiple sources. We took extra steps to hold ourselves accountable to a higher degree than the other apps analyzed (see appendix), and we encourage and enthusiastically support all independent research in this space.

Abstract

Forensic investigation of 20 popular iOS safety apps reveals that every single one, with the exception of Parachute, sends customer information to data collection companies, usually for the purposes of tracking, marketing, analytics and advertising. Customer information is being sent to data collection companies mostly without customers’ knowledge and with no way for customers to see it, delete it, control it, or revoke access to it. Because most data collection tools are embedded within the safety apps themselves, deleting cookies or maxing out iOS’s privacy settings does nothing to stop this data collection. This report presents our findings and analyzes the risks inherent in this pervasive practice.

Introduction

Earlier this year, Gizmodo revealed that Tinder-affiliated safety app “Noonlight” “Is Sharing Your Data With Ad-Tech Companies”. Similarly, the Electronic Frontier Foundation (EFF) revealed that the “Ring Doorbell App [Is] Packed with Third-Party Trackers”. Buzzfeed News exposed how “Popular VPN And Ad-Blocking Apps Are Secretly Harvesting User Data”. And the Washington Post revealed that “Citizen, the app for location-based crime reports [...] repeatedly sent my phone number, email and exact GPS coordinates to the tracker Amplitude”. The repercussions of this large-scale data harvesting are felt beyond the online world, with leaked location data having been tied to the site of a brutal homicide.

On the heels of this barrage of news reports, we decided to conduct a forensic investigation of the 20 most popular iOS safety apps we could find. The apps were selected based on the following criteria: 1) they were among the top 20 search results for safety-related terms on Google or the App Store; 2) they were apps that people could use to get help in an emergency. Because Apple does not provide the number of downloads per app, we used the number of ratings to rank app popularity.

We were shocked to find that every single app, with the exception of Parachute, sends customer information — at minimum, their IP address — to at least one data collection company, usually for the purposes of tracking, marketing, advertising and analytics. We are summarizing our findings below, and also making the data available for download in JSON format.

Findings Per App

Findings consist of connections made to third-party services during use of the app, third-party tools embedded in the app, and questionable practices (such as pasteboard snooping).

Life360 · 1.9M ratings

ADT Pulse · 647.9K ratings

Neighbors by Ring · 160.0K ratings

Pulsepoint Respond · 75.5K ratings

Citizen · 51.7K ratings

Noonlight · 13.2K ratings

Parachute · 5.3K ratings · Findings: none

WeHelp! · 4.8K ratings

ADT Go · 2.6K ratings

Covert Alert · 1.3K ratings

International SOS · 1.0K ratings

Bond · 438 ratings

bSafe · 287 ratings

FallSafety Home · 174 ratings

LiveSafe · 158 ratings

Mayday Safety · 102 ratings

Silent Beacon · 62 ratings

Rave Panic Button · 37 ratings

Katana Safety · 28 ratings

UrSafe · 19 ratings

WearSafe · 8 ratings

Safety Apps Are Secretly Sending Customer Information To Data Harvesters

Information about customers of these safety apps is being sent to data collection companies they have probably never heard of, like “AppBoy” and “Kochava”. Even a deep dive into a safety app’s privacy policy is unlikely to reveal exactly who holds information about them. These data collection companies brag about their ability to surveil a person’s every move within the app and across apps, with some even claiming to collect “every imaginable point of user interaction with your website or app” and a “complete dataset of every interaction with your product. Every click. Every tap. Every swipe”.

Because these data collection tools are baked into the apps themselves, there is no way for them to be turned off, even if cookies are deleted and iOS privacy settings are maxed out. In some cases, even deleting the app won’t help, as these companies use a variety of techniques to make sure they can still identify people uniquely, promising to “eliminate the ambiguity of fingerprint-based attribution and unify fragmented data to show you each customer’s full journey”.

Outwardly, these safety apps use caring and empathetic language to market their products. Behind the scenes, their apps are packed with tools that use dehumanizing language like “collect rich, granular event data, delivered to your warehouse of choice” and “track your users across devices and platforms, improving the quality of your behavioral and demographic data”.

Safety Apps Are Exposing Customers To More Risk

The information sent to these data collection companies could include personally identifiable information like the customer’s full name, email, device information, real-time location, phone number, and more. Each data collection company could then share this data with more companies down the line, spraying the customer’s information across the net.

If any of these data collection companies is breached, information about the customer could end up on the dark web. In this scenario, the very act of using a safety app to protect themselves from a stalker (a very common use case for a safety app) ends up exposing the customer to more danger, as the stalker can now make use of the leaked data to track them more accurately. In fact, in 2017, the site of a brutal triple murder was linked to leaked location data. Customer information could also be leaked if any of the data collection companies receives a subpoena from the government or in the course of other legal proceedings.

As Bennett Cyphers, an Electronic Frontier Foundation technologist puts it in a quote to Gizmodo, “The kinds of people that are gonna be coerced into downloading [a safety app] are exactly the kind of people that are put most at risk by the data that they’re sharing”.

Safety apps may claim that some third-party data collection companies anonymize the data they collect, and are therefore respectful of their customers’ privacy. Even if we assume the anonymization is done right (it probably isn’t), it is surprisingly easy to deanonymize customer information by correlating it with related data sets. In December 2019, the New York Times was able to track President Trump by deanonymizing a supposedly anonymous location data set. Because these data collection tools make direct connections to the data collection company’s servers to upload the information they collect, the customer’s raw IP address leaks through, making deanonymization even easier. Whether the customer’s information is anonymized, aggregated or processed in any way makes little difference. The breach of trust happens the moment their information is sent out to the data collection company.

Advertisers can make use of the information collected to enrich the “shadow profiles” they maintain on each one of us, so the safety app’s customer and those in their household could start seeing ads for various security products, self-defense classes, pepper spray, domestic violence resources, books and more. Using a safety app to protect themselves from an abusive partner (another very common use case, one that has grown during the COVID-19 pandemic) could backfire once the partner starts seeing repeated ads related to domestic violence. Tools baked into the safety apps allow advertisers to know when and for how long the app is used, so the more it is used, the more likely these ads are to be displayed on computers, phones, TVs and other devices in the home. Customer information is sent to advertisers regardless of whether the app itself has any ads (most don’t).

Data Collection Tools Can Access Private And Sensitive Data Undetected

Typical apps that are packed with third-party data collection tools, like free games and news apps, request a minimal set of iOS permissions, usually just push notifications, so the surface area for sensitive data that can be accessed is relatively small. By contrast, safety apps require an expanded set of permissions to highly private and sensitive data in order to work. While they vary by app, these include location access while using the app, 24x7 background location access (as long as the phone is powered on), 24x7 background motion access (allows apps to identify whether the customer is walking, exercising, or in a car), camera access, microphone access while using the app, background microphone access, health data access and 24x7 background Bluetooth access.
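To make that scope concrete, below is a minimal Swift sketch (with a hypothetical class name; the framework calls are the standard iOS APIs) of the kinds of permission requests a safety app typically makes during onboarding. Every grant the customer approves here is inherited by any third-party library compiled into the same app.

```swift
import AVFoundation
import CoreLocation
import CoreMotion

// Illustrative only: the permission requests a typical safety app makes at
// onboarding. The matching Info.plist usage-description keys (e.g.
// NSLocationAlwaysAndWhenInUseUsageDescription, NSMotionUsageDescription,
// NSMicrophoneUsageDescription) are also required.
final class OnboardingPermissions: NSObject, CLLocationManagerDelegate {
    private let locationManager = CLLocationManager()
    private let motionManager = CMMotionActivityManager()

    func requestAll() {
        // 24x7 background ("Always") location access.
        locationManager.delegate = self
        locationManager.requestAlwaysAuthorization()

        // Motion & Fitness access: walking, exercising, or driving.
        motionManager.startActivityUpdates(to: .main) { _ in }

        // Microphone access, e.g. for streaming live audio in an emergency.
        AVAudioSession.sharedInstance().requestRecordPermission { granted in
            print("Microphone permission granted: \(granted)")
        }
    }
}
```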

Because iOS runs all code that’s included in an app in a single process, it is not possible to load third-party data collection code in a “jailed” or restricted environment. All third-party data collection tools embedded in these safety apps inherit the iOS permissions of the host app. Additional iOS protections such as sandboxing do not apply here. Often, the third-party code is closed-source, so even the app’s developer does not know exactly what it is doing. Third-party code can use method swizzling to inject itself between the iOS runtime and the app itself, intercepting user data undetected. And because trackers can run code at the moment their library is loaded, they can execute arbitrary code before the app’s own code has even started running.
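As an illustration of what method swizzling looks like in practice, here is a short Swift sketch (the names are made up and not taken from any particular tracker). It exchanges the implementation of UIViewController.viewDidAppear(_:), so that every screen view in the host app is reported to the embedded “tracker” even though the app’s own code never calls it. Objective-C libraries typically install hooks like this from a +load method, which runs before the app’s main entry point.

```swift
import ObjectiveC
import UIKit

// Hypothetical tracker code: once install() runs, every screen view in the
// host app is observed without any cooperation from the app's own code.
enum ScreenSpy {
    static func install() {
        let original = class_getInstanceMethod(
            UIViewController.self, #selector(UIViewController.viewDidAppear(_:)))
        let replacement = class_getInstanceMethod(
            UIViewController.self, #selector(UIViewController.spy_viewDidAppear(_:)))
        if let original = original, let replacement = replacement {
            // Swap the two implementations in the Objective-C runtime.
            method_exchangeImplementations(original, replacement)
        }
    }
}

extension UIViewController {
    @objc func spy_viewDidAppear(_ animated: Bool) {
        // After the exchange, this call invokes the original viewDidAppear(_:),
        // so the app keeps behaving normally.
        spy_viewDidAppear(animated)
        // The interception itself: record which screen the user is looking at.
        print("tracker observed screen: \(type(of: self))")
    }
}
```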

Data Collection Tools Can Facilitate Surveillance

Even if the third-party tool does not perform any hostile or surreptitious activities, because it enjoys the same iOS permissions as the app itself, a vulnerability in the third-party code could allow an attacker to gain access to the full scope of data the safety app is permitted to access, turning the app into a full-fledged remotely-controlled surveillance tool. Because the third-party code is shared across many apps, a single exploit could allow the attacker to harvest data from millions of people across tens of thousands of apps, posing a huge systemic risk. This risk is amplified because, as our data shows, developers include many third-party data collection tools in their apps, even though they all offer similar and overlapping functionality.

Because safety apps enjoy such expanded permissions, they are juicy targets for governments and “shadowy entities” trying to conduct remote surveillance by compromising these apps. The inclusion of third-party tools makes these attacks more economical, as the attacker can reuse exploit vectors against the third-party tools instead of having to research each app individually. There is also the possibility that one or more of these data collection companies is owned by or affiliated with government entities, or contains backdoors that could be used to conduct remote surveillance. While this may sound like science fiction, such practices have been documented as far back as the 1960s, as well as recently, with governments launching their own apps to surveil their citizens.

Data Collection Tools Are Circumventing iOS Privacy Restrictions

In a never-ending cat-and-mouse game, third party trackers have gone to great lengths to evade iOS’s privacy restrictions, even using the iPhone’s pasteboard as a side-channel for cross-app signaling, in order to fingerprint people so they can be tracked across apps. It was recently revealed that trackers included in very popular apps, such as TikTok and The New York Times, were accessing the pasteboard, presumably without those apps’ developers even knowing this was happening.

In fact, our own analysis found that three of the safety apps, “Life360”, “ADT Pulse” and “Noonlight”, were accessing the pasteboard on launch and at other times during app activities that were completely unrelated to text manipulation. This gives them potential access to whatever text, photos, passwords and other sensitive data the customer has most recently copied. With Universal Clipboard, this could include data copied on the customer’s Mac as well. Because some apps bring up web browsers to allow their customers to log in to various services or to display content, they are also able to set long-lasting tracking cookies that allow for cross-app persistent fingerprinting and long-term tracking that persists even after the safety app is deleted.
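For context, reading the shared pasteboard requires no entitlement and triggers no permission prompt, so any library compiled into an app can do it silently. A minimal sketch, with an illustrative function name:

```swift
import UIKit

// Illustrative: any code compiled into the app can read the shared pasteboard.
// No permission prompt is shown, and nothing appears in a network log because
// the read itself is local.
func harvestPasteboard() {
    if let text = UIPasteboard.general.string {
        // Whatever the user last copied -- a password, an address, a one-time
        // code -- is now available to the embedded library.
        print("pasteboard contents: \(text)")
    }
}
```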

Increasing media scrutiny of third-party trackers has led data harvesting companies to find alternative, out-of-band ways to track people. For example, Facebook now offers offline tracking that lets businesses send customer information directly to Facebook, without it being detectable by the customer, as the data never passes through the app itself. Customers will be completely oblivious to the fact that they are being tracked unless they dig deep into Facebook’s privacy settings. Customer lists uploaded to Google or Facebook achieve a similar effect.

What Can Customers Do About This?

While customers of safety apps can presently use apps like “Charles” to monitor the data going in and out of an app, evasive techniques such as CNAME cloaking and out-of-band tracking will soon make third-party data collection undetectable. This makes it imperative to support investigative journalism and watchdog organizations such as the Electronic Frontier Foundation, who are on the frontlines of this issue. By reporting on privacy, these journalists risk getting themselves and their publications blacklisted from access to these companies, putting their livelihood on the line. Journalists are able to communicate with whistleblowers from inside these companies, so they can expose things that cannot be seen from the outside.

Customers can also look for red flags in the track record and history of each app. In a world where there is no way of telling whether apps are sending customer information to data collection companies, it’s important to evaluate each app on its general ethos, as an ethically questionable past could be emblematic of a cavalier attitude towards customer privacy. As we are writing this, the conferencing app “Zoom”, which has risen in popularity due to the Coronavirus pandemic, is facing huge backlash over several privacy violations it has engaged in over the years, as well as a lawsuit over sending its customers’ information to Facebook.

Customers can also take a look at the privacy policy of each company. An app that does not have a clear and unambiguous plain-language privacy policy (here's ours) may be using obscure lawyerly language as a cover for questionable practices. This also extends to how an app’s customer support helpdesk treats privacy-related inquiries.

What Can Developers Do About This?

Developers should exercise caution before adding yet another third-party tool to their code. This is especially true for safety apps, which have the privilege of expanded permissions to their customers’ data. Every additional tool broadens the attack surface and increases the risk of data exposure. It takes only one misbehaving third-party tool to expose customer data, in ways that can be undetectable to the developer.

Developers should be especially cautious with free third-party code put out by for-profit companies. Because it is so tempting to offload hard, commonplace, or tedious tasks to a free third-party tool, data harvesters will often use such a tool as a Trojan horse to get into a developer’s codebase and then deploy additional tracking functionality. For example, “Crashlytics” started as a simple crash reporting library, was bought by Twitter, became part of the behavioral analysis toolkit “Fabric”, and was then bought again by Google, which has now integrated it into a broader set of tracking tools as part of “Firebase”. Similarly, Google Analytics solves the tedious problem of keeping track of visitor counts, but it also allows Google to collect browsing activity and serves as a low-friction way to enable additional customer tracking.

Developers should clearly communicate which data collection companies their app sends data to, give their customers the option to purchase a private version that does not send their information to data collection companies and not keep tracking them after they have paid for premium services.

A Final Appeal To Developers

Speak up. Life is too short to be working against your customers instead of for them. Make work that you are proud of, not ashamed of. If you’re sending information to data collection companies, this is the time to change things. Chances are, you don’t even need any of this data in the first place.

Many of the people who helped perpetrate this culture of exploitation of user privacy enforce some of the most restrictive technology rules within their own homes, including restricting their own kids from using technology at home. Don’t do anything to your customers that you would not want done to yourself or your own family. Treat your customers as you would treat your family. To exploit people’s privacy is to chip away at their humanity. If you’re developing a safety app, this is even more critical. Treat this data like someone’s life depended on it. Because it very well could.


Appendix

Setting The Highest Standard

This analysis only looked at information sent by the safety apps themselves, and not their informational websites, unless there was crossover during use of the app. We believe that even a visit to a safety app’s informational website to learn more about an app could be compromising. Setting the highest standard for ourselves, we combed through the contents of our informational public website and made the following changes to eliminate any chance of informational leakage when browsing our website: 1) Moved our blog away from Medium and hosted it in-house at parachute.live/blog, where it is served without any cookies; 2) Removed links to informational videos about Parachute that were shared on Vimeo, as they were accessing Google Analytics; 3) Removed all traces of Google Analytics; 4) Removed Google-hosted fonts which could leak information to Google when the font was first downloaded. Keep in mind that when visiting parachute.live by searching for it on Google instead of typing parachute.live directly into the browser, Google can see the searched keyword and the search result that was clicked on, which could be compromising.

Legal & Ethical Statement

Data for this analysis was obtained solely through passive techniques and in a manner that is fully compliant with legal guidelines and established legal precedent. At no time did Parachute’s research team employ any active analysis of apps or any “hacking” techniques. At no point during this analysis was the integrity, safety, or performance of these apps impacted in any way. Our analytical process was recorded, documented, and evidenced from start to finish. Global app ratings were provided by AppFollow on May 4, 2020.

Updates

May 24, 2020: As requested by a reader, “LiveSafe” has been added to the list, bringing the total to 21 apps.


Parachute texts, calls and emails your friends and loved ones and sends them your live video, audio and location in the event of an emergency. Parachute never sends any of your information to data collection companies or other shadowy data dealers. Get the full details at parachute.live/privacy
