DevOps failures cast cloudy shadows over countless apps

Richi Jennings Your humble blogwatcher, dba RJA

Mobile apps are still awful: That’s the scary conclusion from researchers. They sampled a range of Android apps and easily found 23 that leaked the personal data of 100 million users—and worse.

Aside from the obvious lessons for developers, there are also things for IT people to think about. It might seem that mobile device management (MDM) has gone out of fashion, but it’s just as necessary today—perhaps more so with employees working remotely.

And it’s not just an Android problem. In this week’s Security Blogwatch, we go back to school.

Your humble blogwatcher curated these bloggy bits for your entertainment. Not to mention: integer overflow vulns.

10,000-ft. view: Depressing

What’s the craic? Catalin Cimpanu sounds deeply frustrated—Handful of Android apps exposed the data of more than 100 million users:

Mobile app developers are still exposing their users’ personal information through abhorrently simple misconfigurations … of third-party cloud services. [Researchers] found 23 Android applications that exposed the personal data of more than 100 million users.

[For example] developers who forgot to password-protect their backend databases [or] who left access tokens/keys inside their mobile application’s source code for services such as cloud storage or push notifications. … Reports about mobile apps which expose user data by leaving backend infrastructure exposed online have been published [for] half a decade. [But] the issue has continued to linger, primarily due to bad coding practices.

Literally nothing [changes] despite repeated warnings.
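
The "keys left inside the source code" failure Cimpanu describes is worth spelling out, because the fix is architectural rather than clever. Here's a minimal Kotlin sketch of the anti-pattern and one common alternative; the endpoint and helper names are hypothetical, not taken from any of the reports:

```kotlin
import okhttp3.OkHttpClient
import okhttp3.Request

// Anti-pattern: anything compiled into the APK can be pulled out with standard
// decompilation tools, so this "secret" is effectively public the moment the app ships.
const val CLOUD_STORAGE_KEY = "AKIA...REDACTED"   // don't do this

// Safer pattern (sketch): the privileged key stays on your own backend; the app asks
// that backend, over TLS and after the user signs in, for a short-lived, scoped token.
private val http = OkHttpClient()
private const val TOKEN_ENDPOINT = "https://api.example.com/v1/upload-token" // hypothetical

fun fetchScopedUploadToken(sessionToken: String): String? {
    // Call this off the main thread (Android forbids network I/O on it).
    val request = Request.Builder()
        .url(TOKEN_ENDPOINT)
        .header("Authorization", "Bearer $sessionToken")
        .build()
    http.newCall(request).execute().use { response ->
        return if (response.isSuccessful) response.body?.string() else null
    }
}
```

The only secrets an app should carry are ones you'd be comfortable publishing.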

And here’s Tara Seals, with a kiss—Rampant Cloud Leaks:

The depth of the data at risk across the apps is such that a range of follow-on attacks could be possible, from using credentials against other accounts to social engineering and fraud/identity theft. … Cloud misconfigurations that leave data publicly exposed happen all the time … and unfortunately, there’s very little that end users can do to protect themselves.

The data was accessible from real-time databases in 13 of the Android apps. … However, for the examined apps, there were no authentication checks to access them. … In the case of at least two of the apps, cloud keys were exposed with no safeguards. … Push notification managers in many of the apps weren’t password-protected either. … This could be weaponized in ingenious ways.

If you opt to use cloud storage as a developer, you need to ensure any key material necessary to connect to such storage is kept secure, and you must also leverage the cloud provider’s access control and encryption mechanisms to keep the data protected. Mobile app developers should make use of the Android Keystore and Keychain mechanisms that are backed by the hardware security module of the mobile device [and] make use of the Android encryption mechanisms when storing other sensitive data client-side.
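
The advice quoted above names the Android Keystore specifically, so here's a minimal Kotlin sketch of that mechanism, assuming nothing beyond the platform APIs: a Keystore-resident AES key whose raw material never enters the app process, used to encrypt sensitive data before it's persisted client-side.

```kotlin
import android.security.keystore.KeyGenParameterSpec
import android.security.keystore.KeyProperties
import java.security.KeyStore
import javax.crypto.Cipher
import javax.crypto.KeyGenerator
import javax.crypto.SecretKey
import javax.crypto.spec.GCMParameterSpec

private const val KEY_ALIAS = "app_data_key"   // alias is arbitrary

// Create (or fetch) an AES key that lives inside the Android Keystore.
fun getOrCreateSecretKey(): SecretKey {
    val keyStore = KeyStore.getInstance("AndroidKeyStore").apply { load(null) }
    (keyStore.getKey(KEY_ALIAS, null) as? SecretKey)?.let { return it }

    val generator = KeyGenerator.getInstance(KeyProperties.KEY_ALGORITHM_AES, "AndroidKeyStore")
    generator.init(
        KeyGenParameterSpec.Builder(KEY_ALIAS, KeyProperties.PURPOSE_ENCRYPT or KeyProperties.PURPOSE_DECRYPT)
            .setBlockModes(KeyProperties.BLOCK_MODE_GCM)
            .setEncryptionPaddings(KeyProperties.ENCRYPTION_PADDING_NONE)
            .build()
    )
    return generator.generateKey()
}

// Encrypt sensitive bytes before persisting them; store the IV alongside the ciphertext.
fun encrypt(plaintext: ByteArray): Pair<ByteArray, ByteArray> {
    val cipher = Cipher.getInstance("AES/GCM/NoPadding")
    cipher.init(Cipher.ENCRYPT_MODE, getOrCreateSecretKey())
    return cipher.iv to cipher.doFinal(plaintext)
}

fun decrypt(iv: ByteArray, ciphertext: ByteArray): ByteArray {
    val cipher = Cipher.getInstance("AES/GCM/NoPadding")
    cipher.init(Cipher.DECRYPT_MODE, getOrCreateSecretKey(), GCMParameterSpec(128, iv))
    return cipher.doFinal(ciphertext)
}
```

Jetpack's EncryptedSharedPreferences and EncryptedFile wrap the same idea, if you'd rather not juggle IVs yourself.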

Who discovered this latest wall of shame? Check Point’s Aviran Hazum, Aviad Danin, Bogdan Melnykov, Dana Tsymberg, and Israel Wernik tag team—Mobile app developers’ misconfiguration of third party services:

Services such as cloud-based storage, real-time databases, notification management, analytics, and more are simply a click away from being integrated into applications. However, developers often overlook the security aspect of these services, their configuration, and of course, their content. … Misconfiguration [puts] users’ personal data and developers’ internal resources, such as access to update mechanisms and storage, at risk.

This misconfiguration of real-time databases is not new, but to our surprise, the scope of the issue is still far too broad and affects millions of users. … There was nothing in place to stop the unauthorized access. … We were able to recover a lot of sensitive information including email addresses, passwords, private chats, device location, user identifiers, and more. If a malicious actor gains access [to] these data, it could potentially result in service-swipes (i.e., trying to use the same username-password combination on other services), fraud, and identity theft.

[For example] through “T’Leva,” a taxi app with over 50,000 installs, we were able to access chat messages between drivers and passengers and retrieve users’ full names, phone numbers, and locations (destination and pick-up). [Another] app, “iFax,” not only had the cloud storage keys embedded into the app, but also stored all fax transmissions [so] a malicious actor could gain access to all documents sent by more than 500k users. … Moreover, most of the apps [also had database] ‘write’ permissions.

Most push notification services require a key … to identify [the] submitter. What happens when those keys are just embedded into the application file itself? … Imagine if a news-outlet application pushed a fake news entry notification to its users that directed them to a phishing page.
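
Check Point's write-up doesn't spell out the remediation in the passage above, but for the open real-time databases the usual fix has two halves: server-side rules that refuse unauthenticated reads and writes, and a client that actually signs in before touching the data. A hedged Kotlin sketch, assuming a Firebase Realtime Database backend (one common real-time database on Android); the "chats" path and sign-in flow are illustrative, not taken from any of the 23 apps:

```kotlin
import com.google.firebase.auth.FirebaseAuth
import com.google.firebase.database.FirebaseDatabase

// Server-side, the database rules should reject anonymous access, e.g.:
//   { "rules": { ".read": "auth != null", ".write": "auth != null" } }
// Client-side, sign the user in before reading; the "chats" path is hypothetical.
fun loadChatsAfterSignIn(email: String, password: String) {
    val auth = FirebaseAuth.getInstance()
    auth.signInWithEmailAndPassword(email, password)
        .addOnSuccessListener {
            FirebaseDatabase.getInstance()
                .getReference("chats")
                .child(auth.currentUser!!.uid)   // scope reads to the signed-in user
                .get()
                .addOnSuccessListener { snapshot -> /* render snapshot.value */ }
                .addOnFailureListener { e -> /* rules denied access, or network error */ }
        }
        .addOnFailureListener { e -> /* authentication failed */ }
}
```

Without the rules half, the client-side sign-in is theater: anyone with the database URL can still read everything.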

Wow. Just wow. And theshowmecanuck “politely” agrees:

I worked at a company that made mobile apps. … Bad coding practices … are universal.

I've seen supposedly senior devs leave in default passwords on back end containers after [complaining] that they should be allowed to manage their own dev servers. And then they expose the servers to the world by ****ing up … the firewall rules and open up huge security holes.

These idiots work in every technology stack.

What other research can we find? David O’Brien adds his top three cloud FAILs—Most Common Cloud Misconfigurations:

First place: … Azure Storage Accounts / AWS S3 Buckets not enforcing HTTPS. Accessing / copying data over non-encrypted channels is definitely not recommended and a clear path to having data leaked into places it should not leak into. … Microsoft for example sets this property now by default. AWS does not enforce HTTPS by default.

Second place: … Azure App Services / AWS Lambda Functions should not be publicly accessible (many of which we have found also have clear text secrets … on them).

Third place: … Azure Network Security Groups (NSG) / AWS Security Groups (SG) / GCP Firewalls allowing management ports access from the internet. … This means that a cloud based "firewall" is configured to allow traffic like Remote Desktop Protocol (RDP) or SSH inbound from the internet. A very common path for attackers into cloud hosted Virtual Machines.
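
O'Brien's first-place item has a well-documented fix on the AWS side: a bucket policy that denies any request not arriving over TLS. A hedged Kotlin sketch using the AWS SDK for Java v2; the bucket name is hypothetical:

```kotlin
import software.amazon.awssdk.services.s3.S3Client
import software.amazon.awssdk.services.s3.model.PutBucketPolicyRequest

// Deny every request that does not arrive over TLS (aws:SecureTransport == false).
// "my-app-uploads" is a hypothetical bucket name.
fun enforceHttpsOnly(s3: S3Client, bucket: String = "my-app-uploads") {
    val policy = """
    {
      "Version": "2012-10-17",
      "Statement": [{
        "Sid": "DenyInsecureTransport",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": ["arn:aws:s3:::$bucket", "arn:aws:s3:::$bucket/*"],
        "Condition": { "Bool": { "aws:SecureTransport": "false" } }
      }]
    }
    """.trimIndent()

    s3.putBucketPolicy(
        PutBucketPolicyRequest.builder()
            .bucket(bucket)
            .policy(policy)
            .build()
    )
}
```

Azure's equivalent is the storage account's "secure transfer required" setting, which, as O'Brien notes, Microsoft now turns on by default.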

O RLY? I’m not sure whether QuinnyPig agrees or disagrees:

"The number one cloud misconfiguration is S3 buckets not enforcing HTTPS" is one hell of a take.

But surely this is a developer problem, not an Android problem? ArmoredDragon agrees:

Something tells me this is an iOS problem as well, though iOS being notoriously difficult to audit makes it less likely that a third party would be able to spot something like this. Apple is already well known to let outright scammy apps through their censors; something tells me that a misconfigured cloud storage, especially one Apple has zero control of, would fly right past their censors.

Maybe so, but Android users have other issues to contend with. Dan Goodin explains—4 vulnerabilities under attack give hackers full control of Android devices:

Unknown hackers have been exploiting four Android vulnerabilities that allow the execution of malicious code that can take complete control of devices, Google warned. … All four of the vulnerabilities were disclosed two weeks ago.

Two of the vulnerabilities are in Qualcomm’s Snapdragon [SoC], which powers … a massive number of handsets. … So far, there have been four Android zero-day vulnerabilities disclosed this year, compared with one for all of 2020.

Google has released security updates to device manufacturers, who are then responsible for distributing the patches. … Google representatives didn’t respond to emails asking how users can tell if they’ve been targeted. … Without more actionable information from Google, it’s impossible to provide helpful advice to Android users.

What’s the solution? Here’s an unpalatable suggestion, from couchslug:

Android is hopeless from a security perspective … and should be left to those who resent security. Understanding what one cannot have is important to making informed choices.

Clueful users would be best served by pure FOSS phones but there won't be many of those for at least a decade because that problem is extremely difficult to solve. … Of course the problem will never be seriously addressed … because revenue is why companies exist and security is an inconvenient cost center.

And RoninX is in full agreement:

Google should provide a way for Android users to just download and install security patches without waiting for their OEM. … I know Google has been moving in this direction architecturally, but they need to move farther and faster.

If a vulnerability is present in all instances of certain versions of Android, then it should be patchable in all of those instances. Likewise, if the flaw is related to a common hardware component, like a Qualcomm Snapdragon SoC, it should be possible to release a patch that works on all of the devices.

Meanwhile, a poetic gweihir simply rolls their eyes:

"App" rhymes with "crap." … The whole idea was from the start that semi-competent and incompetent people would write tons of apps. Some would appear to be well-written and hence be actually used. [This] is just a completely predictable side-effect.

The moral of the story?

IT: MDM of BYOD might be unfashionable, but it could CYA.
Dev: Don’t do that, obvs. kthxbai.

And finally

Too embarrassed to ask? Marcus “MalwareTech” Hutchins to the rescue:

You have been reading Security Blogwatch by Richi Jennings. Richi curates the best bloggy bits, finest forums, and weirdest websites … so you don’t have to. Hate mail may be directed to @RiCHi or sbw@richi.uk. Ask your doctor before reading. Your mileage may vary. E&OE. 30.

This week’s zomgsauce: Inbal Marilli (via Unsplash)
