Reverse engineering is a huge problem for mobile apps. Through a variety of techniques, unscrupulous developers (in other words, hackers) can pirate other developers’ creations, making minor changes that preserve the appearance of authenticity while masking a more nefarious purpose. These are not your everyday, run-of-the-mill ‘copycats’. They’re altered apps purporting to be the real thing, but often with malicious code hidden inside. At best, they provide an intentionally poor experience to damage the real app maker’s reputation. At worst, they act as trojans that steal data for use in downstream attacks, or cause other direct harm to the user. They can also harm other apps, as well as the networks to which the user is connected.
In March, for example, the mobile industry saw the emergence of the EventBot trojan, which has already morphed several times into other forms. One of the early variants is an Android-based trojan that looks and feels just like the Adobe Flash or Microsoft Word apps, but is actually a mobile banking trojan, whose true purpose is to find and steal unprotected data in banking, cryptocurrency and other financial apps on a mobile device. In fact, the trojan is sophisticated enough to intercept multifactor authentication (MFA) codes sent to a mobile device via SMS so it can use them in an account takeover attack by posing as the legitimate user.
No brand wants cybercriminals to use their apps as a vehicle for distributing malware or to flood the market with broken fakes that destroy consumer trust. But even if hackers don’t create a new, malicious app from a vendor’s original binary, the ability to reverse engineer an app to discover its inner workings provides cybercriminals with valuable information. By studying an app’s logic, they can modify that logic to bypass authentication controls (such as the MFA hack above) or identify bugs and vulnerabilities in third-party libraries (which are publicly available). They then use that information to enable all kinds of malicious acts, such as installing backdoors, depositing additional malware or deploying keyloggers. If an app can be reverse engineered, the entire organization becomes vulnerable to back-end server attacks and to automated purchasing tools such as sneaker bots.
Reverse engineering isn’t hard to do
To illustrate just how easy it is to reverse engineer an app, check out this YouTube video of a mobile gamer showing how to cheat in the Jurassic World mobile game on the Android platform. He uses an emulator, a common free tool used in building and testing apps, to create his own patch for the game, commonly known as a “mod”. In the mod, he changes the logic of the in-app purchase function to give himself enough “Jurassic credit” that in-app purchases within the game are free. It takes less than five minutes to pull off this extremely common attack technique in mobile gaming, a market worth more than $100 billion. According to a 2019 report from App Annie, mobile games derive 95% of their revenue from in-app purchases, so hacks like this are a big deal for mobile game makers.
Unfortunately, despite the risks, far too few developers take the measures necessary to prevent tampering and reverse engineering. The Verizon Mobile Security Index 2020 notes that 43% of organizations knowingly cut corners on mobile security to “get the job done.” Thankfully, there are effective measures developers can take to prevent hackers from reverse engineering their apps to create trojans, release broken look-alike apps and gain valuable insights they can use to launch devastating attacks.
The need for code obfuscation
One of the primary ways to prevent reverse engineering is code obfuscation, which thwarts techniques that rely on decompiling or disassembling the app’s code with static or dynamic analysis tools like IDA, Hopper and dozens of others.
Coding obfuscation into your apps is not an easy task and requires advanced security skills. For starters, obfuscation is a one-way operation, so if a developer obfuscates the wrong process or component, the app will break. Secondly, because obfuscation tools can’t infer a developer’s intent from source code alone, obfuscation typically requires the developer to insert symbols into the native source code that tell the tools where to begin and end obfuscation. This is not only extremely time consuming to do by hand, but must also be correctly updated with every new release of the app – line by line, release by release.
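To make the idea concrete, here is a toy sketch of one common obfuscation technique, string encryption, written in plain desktop Java rather than real Android code. The class name, the XOR key and the sample string are all invented for illustration; a real obfuscator applies this kind of transformation automatically at build time and ships only the encoded bytes, never the plaintext.

```java
// Toy illustration of string encryption, one common obfuscation technique.
// NOTE: in a real build pipeline only the ENCODED bytes would be embedded in
// the binary; here we compute them inline so the demo is self-contained.
public class StringObfuscationDemo {
    private static final byte KEY = 0x5A; // illustrative build-time XOR key

    // What an obfuscator would embed instead of the readable literal.
    static final byte[] ENCODED = encode("api.example.com");

    // Build-time step: XOR each byte of the plaintext with the key.
    static byte[] encode(String s) {
        byte[] out = s.getBytes(java.nio.charset.StandardCharsets.UTF_8);
        for (int i = 0; i < out.length; i++) out[i] ^= KEY;
        return out;
    }

    // Runtime step: reverse the XOR just before the string is needed.
    static String decode(byte[] in) {
        byte[] out = new byte[in.length];
        for (int i = 0; i < in.length; i++) out[i] = (byte) (in[i] ^ KEY);
        return new String(out, java.nio.charset.StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        // The readable string only materializes in memory at runtime.
        System.out.println(decode(ENCODED)); // prints api.example.com
    }
}
```

Someone running `strings` over a binary built this way would see XOR'd garbage instead of the server address, which is exactly the kind of speed bump obfuscation is meant to put in an attacker's way.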
Additionally, non-native code and third-party components, such as software development kits (SDKs), cannot be obfuscated by developers, because they don’t have the source code. Finally, obfuscation can interfere with crash reporting and adds file size to the bundle, which could cause the app to exceed app stores’ size limits or become too bulky for the user to download and run on their phone. And any code left unobfuscated can be easily accessed by hackers to piece together how the app works, just as our mobile gamer friend above did to Jurassic World in five minutes.
Even if a developer does obfuscate parts of their code, code obfuscation isn’t enough on its own. Developers also need to incorporate measures to prevent tampering, so that even if hackers are able to modify an app, it won’t function. One measure that can help is checksum verification, a capability that, frankly, should be standard in any mobile app. A checksum runs the app’s binary through a hash function to produce a unique value; if the binary is altered, the resulting value will differ from the one computed for the genuine app. It’s a standard means of ensuring the integrity of a file or executable: if the app has been modified, the checksum verification fails and the app closes.
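In principle, the check is simple. Here is a minimal sketch in plain Java, where an in-memory byte array stands in for the app binary and the class and sample values are invented for the demo; a real implementation would hash the installed package on disk and compare it against a value recorded at build time.

```java
import java.security.MessageDigest;

// Minimal sketch of checksum (integrity) verification.
public class IntegrityCheckDemo {
    // Hash the bytes with SHA-256, a standard cryptographic hash function.
    static byte[] sha256(byte[] data) throws Exception {
        return MessageDigest.getInstance("SHA-256").digest(data);
    }

    // True only if the current bytes still match the build-time hash.
    static boolean verify(byte[] current, byte[] expectedHash) throws Exception {
        // MessageDigest.isEqual does a time-constant comparison.
        return MessageDigest.isEqual(sha256(current), expectedHash);
    }

    public static void main(String[] args) throws Exception {
        byte[] genuine = "pretend this is the app binary".getBytes();
        byte[] expected = sha256(genuine);   // recorded at build time

        byte[] tampered = genuine.clone();
        tampered[0] ^= 1;                    // attacker flips a single bit

        System.out.println(verify(genuine, expected));   // true
        System.out.println(verify(tampered, expected));  // false: app refuses to run
    }
}
```

Even a one-bit modification changes the hash completely, which is why a failed comparison is a reliable signal that the binary is no longer the one the developer shipped.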
Another measure that helps prevent tampering is Runtime Application Self-Protection (RASP), a security technology that can detect and block attacks in a completely self-contained manner by using information from inside the app itself. RASP monitors inputs and blocks activity that could allow attacks to occur, while also protecting the runtime environment from unwanted changes like tampering. RASP-protected applications don’t rely on external devices or software like firewalls, MDM, or a separate app to provide runtime security protection, hence the name “self-protecting”.
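As a rough sketch of what “using information from inside the app itself” can mean in practice, here is one of the simplest RASP-style checks: detecting an attached debugger. On Android this would typically be `android.os.Debug.isDebuggerConnected()`; the stand-in below targets a desktop JVM, where a process can inspect its own launch arguments for a JDWP debug agent. The class name and shutdown behavior are illustrative.

```java
import java.lang.management.ManagementFactory;

// Sketch of a RASP-style runtime self-check: the app itself decides,
// with no external firewall or agent, whether its environment is hostile.
public class RaspDemo {
    // A JVM started for debugging carries a "-agentlib:jdwp=..." argument.
    static boolean debuggerAttached() {
        for (String arg : ManagementFactory.getRuntimeMXBean().getInputArguments()) {
            if (arg.contains("-agentlib:jdwp")) return true;
        }
        return false;
    }

    public static void main(String[] args) {
        if (debuggerAttached()) {
            // A real RASP layer might wipe sensitive state before exiting.
            System.err.println("Debugger detected; shutting down.");
            System.exit(1);
        }
        System.out.println("Environment looks clean; continuing.");
    }
}
```

Commercial RASP products layer many such signals together (root/jailbreak detection, emulator detection, hook-framework detection), but the self-contained shape is the same: observe the runtime from inside, then block or exit.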
The challenge of implementation
Of course, it’s easy to say that developers need to include these measures to protect their apps from reverse engineering and tampering. Actually implementing them is another story altogether. As mentioned above, while obfuscation is a critical protection against reverse engineering, it’s tricky to implement without inadvertently crippling the app. Mobile app security requires a highly specialized skill set that is specific to each operating system, platform and framework. An Android security expert’s knowledge won’t necessarily translate to an iOS app. What’s more, mobile developers who are also mobile security experts are in extremely short supply.
Manually implementing security also requires a lot of time and money. Doing it right will balloon both budgets and schedules, neither of which is desirable in our highly competitive app economy.
Thankfully, there are ways to implement these features without having to do so manually. Security SDKs can be incorporated into apps, though these implementations still require manual coding and present some limitations when it comes to obfuscation. Another option is a no-code platform that can embed obfuscation and anti-tampering capabilities into an app binary in minutes, without requiring changes to source code. Because it works at the binary level, this approach can obfuscate even SDKs and third-party libraries.
The risk of hackers reverse engineering, debugging and tampering with apps is far too great to leave them unprotected. By implementing the appropriate measures in mobile apps, developers can protect not only their customers, but also their own organizations and themselves.