
Mobile Health Apps Are Exposing PII and PHI via API Vulnerabilities; 23 Million May Be Affected

A shocking new report from mobile security firm Approov has found major vulnerabilities in the application programming interfaces (APIs) that underpin dozens of the mobile health apps used by patient care organizations for remote account management and telemedicine appointments. A sampling of 30 of these popular mobile health apps, used by a variety of different health care organizations, found that 100% had at least one API vulnerability, such as hardcoded keys or tokens, that could be exploited by attackers to gain access to protected health information (PHI), personally identifiable information (PII) and billing information.

Based on how widely these APIs are used for apps that handle remote medicine and patient account management, Approov estimates that as many as 23 million people may be impacted.

Serious systemic security problems with mobile health apps

Lead researcher Alissa Knight analyzed 30 popular mobile health apps currently in use, focusing on the APIs they rely on to function, many of which are shared across apps. In them she found vulnerabilities that are worryingly (and embarrassingly) simple. 77% of the apps tested contained hardcoded API keys used to authenticate the app to other services (such as payment processors), a major breach of security best practices. 7% of the apps had hardcoded usernames and passwords in plain text.
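To illustrate the pattern (this is a hypothetical sketch, not code from any of the apps in the report), a hardcoded key in an Android client might look like the following; because the string is compiled into the app package, anyone who downloads the app can recover it with standard decompilation tools:

```kotlin
// Hypothetical illustration only -- not code from any app in the report.
// The secret is compiled into the APK, so anyone who downloads the app can
// extract it with an off-the-shelf decompiler and call the service directly.
object PaymentClient {
    // BAD: a live service credential shipped inside the app binary
    private const val API_KEY = "sk_live_PLACEHOLDER_PAYMENT_KEY"

    fun buildAuthHeader(): Pair<String, String> {
        // Every install of the app carries the same key, and an attacker who
        // recovers it inherits whatever permissions the key grants.
        return "Authorization" to "Bearer $API_KEY"
    }
}

// Safer pattern: keep long-lived secrets on your own backend and have the app
// exchange the user's login for a short-lived, per-user token instead.
```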

Knight found that roughly half of the records exposed by these API vulnerabilities contained private and sensitive contact, health and billing information. In addition to the hardcoded keys that could be retrieved simply by scrutinizing the app code, 100% of the API endpoints tested were found to be vulnerable to Broken Object Level Authorization (BOLA) attacks, which involve a relatively simple process of falsifying user IDs. 100% of the apps were also vulnerable to man-in-the-middle attacks due to failure to implement certificate pinning, which forces the app to validate the server’s certificate against a known good copy.

There are about 318,000 mobile health apps available in total from about 84,000 different publishers. A recent Pew Research study estimates that at least 60% of all mobile internet users have downloaded at least one, making it the world’s most common smartphone activity (more so than even online banking or virtual education). And that was prior to the COVID-19 pandemic, which has only accelerated the demand for telemedicine. In addition to online appointments, consumers use these apps to pay medical bills and check records, among other tasks.

The demand for medical records has similarly increased among hackers and online criminals. The study points out that while a Social Security number sells for only about $1 and a valid credit card number might fetch around $100 at the upper end, a full set of medical records can sell for as much as $1,000. These records generally contain everything an identity thief needs to commit many different types of fraud, and they are also invaluable for crafting convincing confidence schemes or even blackmail attempts.

The simple fact that 100% of a random sampling of 30 mobile health apps turned up serious vulnerabilities is bad enough, but these apps are among the more popular in the major app stores and have a substantial user base; the average download count was 772,619 per app. If all of these apps are compromised in this way, it is reasonable and prudent to assume that the broader mHealth industry is riddled with similar app security holes.

While the fact that authentication tokens and even plaintext credentials are embedded in these apps is certainly worrying, the BOLA findings may be even more troubling given that they affect every one of the mobile health apps tested. In its simplest form, a BOLA attack is similar to switching out sequential numbers in a web URL to access unauthorized pages. It is a security oversight that absolutely should not still be happening in 2021, yet 100% of these apps allowed some form of swapping out IDs to browse through other patients’ confidential health records. Some apps went further, granting a level of access generally reserved for clinicians: the ability to alter medical histories and records, including issuing prescriptions for medication.
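As a rough sketch of how such an attack works (the endpoint, IDs and token below are hypothetical, not drawn from the report), an attacker with an ordinary account simply iterates over object IDs in API requests; a BOLA-vulnerable server verifies that the caller is authenticated but never checks that the requested record actually belongs to them:

```kotlin
import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse

// Hypothetical endpoint and token -- for illustration only.
fun main() {
    val client = HttpClient.newHttpClient()
    val attackerToken = "ATTACKER_OWN_VALID_SESSION_TOKEN"

    // The attacker walks through sequential record IDs instead of using only
    // the ID tied to their own account.
    for (recordId in 1000..1005) {
        val request = HttpRequest.newBuilder()
            .uri(URI.create("https://api.example-health.com/v1/patients/$recordId/records"))
            .header("Authorization", "Bearer $attackerToken")
            .GET()
            .build()

        val response = client.send(request, HttpResponse.BodyHandlers.ofString())
        // A vulnerable server returns 200 and another patient's PHI here;
        // a correct one returns 403/404 for records the caller does not own.
        println("Record $recordId -> HTTP ${response.statusCode()}")
    }
}
```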

More needs to be done to ensure the security of mobile health apps

So what should mobile health apps be doing that they are not at present? Knight advocates a “shift left” approach, which moves the security focus to the coding stage rather than waiting until after the app is published to app stores. Tom Garrubba, CISO for Shared Assessments, expanded on this point: “While it is a best practice for a mainstream application’s code to move through a thorough secure code review during development, organizations are often haphazard on following the same secure systems development lifecycle (SSDLC) process while developing mobile applications. By not applying the same rigorous process, any defective code will lead to vulnerabilities that can be exploited by even the most novice of hackers.”

Certificate pinning was also mentioned earlier; it ensures that the app will only communicate with a server presenting a known good certificate, so that API traffic cannot be intercepted via a man-in-the-middle even if embedded keys and tokens are recovered by decompiling and deobfuscating the app.
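On Android, for example, pinning can be implemented with OkHttp’s CertificatePinner; the hostname and pin below are placeholders, not values from any of the tested apps:

```kotlin
import okhttp3.CertificatePinner
import okhttp3.OkHttpClient
import okhttp3.Request

// Placeholder hostname and pin -- replace with your API host and the
// base64-encoded SHA-256 hash of its certificate's public key.
val pinner = CertificatePinner.Builder()
    .add("api.example-health.com", "sha256/AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=")
    .build()

// Any TLS handshake with a certificate that does not match the pin fails,
// so a man-in-the-middle proxy presenting its own certificate cannot
// intercept or tamper with the app's API traffic.
val client = OkHttpClient.Builder()
    .certificatePinner(pinner)
    .build()

fun fetchMyRecords(token: String): String? {
    val request = Request.Builder()
        .url("https://api.example-health.com/v1/patients/me/records")
        .header("Authorization", "Bearer $token")
        .build()
    client.newCall(request).execute().use { response ->
        return response.body?.string()
    }
}
```

Pins need to be rotated alongside server certificates, so teams typically pin a backup key as well; the point is that a stolen credential alone is no longer enough to sit between the app and its API.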

The industry also clearly needs more frequent penetration testing and dynamic code analysis. As Saryu Nayyar, CEO of Gurucul, observes: “Code review and remediation for all of the applications and APIs in question is a monumental, but necessary, task to start. As is a review of the coding practices that led to such weak security in the first place.”