At Wandera, we receive requests every day from our enterprise customers asking whether a particular app is safe to allow on employee devices.

Wandera’s app risk assessments consist of two main types of app analysis:

  1. Static analysis reverse-engineers the app code by disassembling and decompiling the application package. During this stage, we extract metadata such as the version, permissions, URLs, and bundled libraries. We also check the app’s record in the corresponding application store, if one is available (see the sketch after this list).
  2. Dynamic analysis observes and analyses app behavior and network traffic in real time. During this stage, we can identify whether the application uses safe methods to transfer sensitive data, whether it contacts only the remote servers it claims to, and so on.
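
As a rough illustration of the static stage, the sketch below uses the open-source androguard library to pull basic metadata out of an APK. This is an assumption for the example rather than a description of our internal tooling, and the file path is a placeholder.

```python
# Minimal static-analysis sketch: extract basic metadata from an APK.
# Assumes the open-source androguard package is installed (pip install androguard);
# "sample.apk" is a placeholder path, not a real sample.
from androguard.misc import AnalyzeAPK

apk, dex_files, analysis = AnalyzeAPK("sample.apk")

print("Package:     ", apk.get_package())
print("Version name:", apk.get_androidversion_name())
print("Permissions: ", sorted(apk.get_permissions()))

# Rough pass over the decompiled code for embedded URLs worth reviewing.
urls = set()
for dex in dex_files:
    for s in dex.get_strings():
        s = str(s)
        if s.startswith(("http://", "https://")):
            urls.add(s)
print("Embedded URLs (first 20):", sorted(urls)[:20])
```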

There are a large number of risk indicators we consider when assessing an app. Our routine risk assessments cover the following:

App permissions – App permissions govern what an app is allowed to do and access. This ranges from access to data stored on your phone, like contacts and media files, through to pieces of hardware like your device’s camera or microphone. The available app permissions on iOS and Android are quite different. We always recommend auditing each app’s permissions to make sure it isn’t requesting access to resources it doesn’t need; this minimizes the risk of your sensitive information being exposed to unwanted parties. Do the permissions serve the functionality of the application? Are there any potential risks related to sensitive permissions (e.g., access to SMS)?
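
A simple way to think about that audit is to compare the permissions an app actually requests against the set its stated functionality justifies. The sketch below is a hypothetical helper: the Android permission names are real, but the allowlist and example app are invented for illustration.

```python
# Hypothetical permission audit: flag requested permissions that are not
# justified by the app's declared functionality, and call out especially
# sensitive ones. The lists below are purely illustrative.
SENSITIVE = {
    "android.permission.READ_SMS",
    "android.permission.RECEIVE_SMS",
    "android.permission.READ_CONTACTS",
    "android.permission.RECORD_AUDIO",
    "android.permission.ACCESS_FINE_LOCATION",
}

def audit_permissions(requested, expected):
    unexpected = requested - expected
    return {
        "unexpected": sorted(unexpected),
        "sensitive_unexpected": sorted(unexpected & SENSITIVE),
    }

# Example: a flashlight-style app should not need SMS or contacts access.
requested = {
    "android.permission.CAMERA",         # plausibly needed for the flash LED
    "android.permission.READ_SMS",       # not justified by the app's purpose
    "android.permission.READ_CONTACTS",  # not justified by the app's purpose
}
expected = {"android.permission.CAMERA"}

print(audit_permissions(requested, expected))
```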

Network traffic – In static analysis, we look at the specific destinations (or servers) an app sends data to and the sources it pulls data from. We care about the “reputation” of each server, the quality of the connections (are they all encrypted, or only a subset of them?), and the type of information that comes back over those links (e.g., is malware or phishing coming from a URL we used to think was safe?). In dynamic analysis, we look at the data in transit between the app and the internet. This is where we can see how the app sends and receives data as it carries out various tasks. It enables us to detect things like data exfiltration to third-party servers (as with a malicious game app found on Google Play), silent installation of additional APKs (as with Dropper malware found on Google Play), and background commands that carry out various activities, even ones that mimic real user behavior (as with Clicker Trojan malware found on the App Store). In all cases we monitor the type of communication, how often the application communicates, and what data it sends. We also evaluate contacted URLs against the list of declared remote services and application components to verify that the app does not try to send anything sensitive without the user’s consent.
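
To make the dynamic side concrete, here is a hypothetical sketch of the kind of check that can be run over observed traffic: each captured connection is compared against the domains the app declares, and cleartext or undeclared destinations are flagged. The flow records and declared-domain list are invented for illustration.

```python
# Hypothetical check over captured network flows: flag cleartext connections
# and destinations the app never declared. The flow data here is invented.
from urllib.parse import urlparse

DECLARED_DOMAINS = {"api.example-app.com", "cdn.example-app.com"}

observed_flows = [
    "https://api.example-app.com/v1/login",
    "http://tracker.adnetwork-example.net/collect",  # cleartext + undeclared
    "https://exfil.example-bad.org/upload",          # undeclared destination
]

for url in observed_flows:
    parsed = urlparse(url)
    issues = []
    if parsed.scheme != "https":
        issues.append("cleartext connection")
    if parsed.hostname not in DECLARED_DOMAINS:
        issues.append("undeclared destination")
    if issues:
        print(f"{url} -> {', '.join(issues)}")
```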

Data encryption – On Apple platforms, a networking security feature called App Transport Security (ATS) is available and enabled by default. ATS is essentially a set of rules that ensures iOS apps and app extensions connect to web services over secure protocols such as HTTPS. iOS apps with ATS enabled encrypt their connections; apps that disable ATS, either entirely or for specific domains, may still use encryption, but only for selected network connections. Android offers a similar protection: apps are expected to secure data in transit with Transport Layer Security (TLS), and recent Android versions block cleartext traffic by default; however, developers can still change their app’s network security configuration to allow cleartext connections.
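
A quick way to spot-check these settings during static analysis is to read the relevant configuration files directly. The sketch below, using only the Python standard library, looks for an NSAllowsArbitraryLoads override in an iOS Info.plist and for cleartextTrafficPermitted="true" in an Android network security config; the file paths are placeholders.

```python
# Spot-check transport security settings extracted from an app package.
# File paths are placeholders; only the Python standard library is used.
import plistlib
import xml.etree.ElementTree as ET

# iOS: ATS is weakened if NSAllowsArbitraryLoads is set to true.
with open("Payload/Example.app/Info.plist", "rb") as f:
    info = plistlib.load(f)
ats = info.get("NSAppTransportSecurity", {})
if ats.get("NSAllowsArbitraryLoads"):
    print("iOS: ATS allows arbitrary (non-HTTPS) loads")

# Android: look for explicit cleartext opt-ins in the network security config.
tree = ET.parse("res/xml/network_security_config.xml")
for elem in tree.iter():
    if elem.get("cleartextTrafficPermitted") == "true":
        print(f"Android: cleartext traffic permitted in <{elem.tag}>")
```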

Application components – What are the building blocks of the application? We look at the language used, the programming techniques, the libraries and frameworks, and so on. The goal is to check that the application uses safe components with no known vulnerabilities or potential problems. In one case, we discovered an advertising framework that was pulling inappropriate and unfiltered ad content into apps via an ad network; upon investigating the impact, we found this particular ad framework was a very popular component used in a large number of Android apps (699 within our network at the time of research). In another case, we discovered that a number of major airlines had included a component in their apps that exposed passengers’ PII during the check-in process.
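
One simple way to act on this is to compare the libraries extracted during static analysis against a list of components with known issues. The sketch below uses an invented advisory list; in practice this would be backed by vulnerability feeds and ongoing research.

```python
# Hypothetical component check: match extracted libraries against a small,
# invented advisory list. Real assessments would use vulnerability feeds.
KNOWN_ISSUES = {
    "com.example.adlib": "serves unfiltered ad content (illustrative entry)",
    "com.example.oldhttp": "outdated networking stack with known flaws (illustrative)",
}

extracted_libraries = [
    "com.example.adlib",
    "com.squareup.okhttp3",
    "com.example.analytics",
]

for lib in extracted_libraries:
    issue = KNOWN_ISSUES.get(lib)
    if issue:
        print(f"Flagged component {lib}: {issue}")
```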

App store records – We investigate the app store data, which tells us things like who the developer is, how many installations the app has, how often it is updated, whether there is a bug-report process, and whether there are negative or fraudulent reviews of the app. We also check the developer’s reputation for signs that they are operating illegitimately – clues include broken ‘contact us’ links, irrelevant content on FAQ pages, or a portfolio of apps that are unrelated or suspicious in nature.
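
For iOS apps, some of this store data can be pulled programmatically from Apple’s public iTunes Search API; the sketch below is a minimal example assuming the requests package, with a placeholder bundle ID. (Google Play has no equivalent official API, so Android store data typically comes from scraping or third-party sources.)

```python
# Minimal app store metadata lookup via Apple's public iTunes Search API.
# Assumes the requests package; the bundle ID is a placeholder.
import requests

resp = requests.get(
    "https://itunes.apple.com/lookup",
    params={"bundleId": "com.example.someapp"},
    timeout=10,
)
results = resp.json().get("results", [])
if results:
    app = results[0]
    print("Developer:     ", app.get("sellerName"))
    print("Version:       ", app.get("version"))
    print("Last updated:  ", app.get("currentVersionReleaseDate"))
    print("Rating (count):", app.get("averageUserRating"), app.get("userRatingCount"))
else:
    print("No store record found for that bundle ID")
```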

Source code availability – When the source code is publicly available, any researcher can look line by line at what the developer has programmed the app to do, which can help identify flaws and malicious code. As proprietary intellectual property, however, source code is often not accessible for testing. When it is available, we check key components of the application: how it communicates with remote servers, whether it uses certificates properly, how permissions are used, how it communicates with wireless peripherals (Bluetooth, Wi-Fi, NFC), whether the code style follows best practices, and so on.
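
When source is available, even a quick automated pass can surface things worth a manual review. The sketch below walks a source tree for a few illustrative red flags (hardcoded cleartext URLs, hints of disabled certificate validation); the patterns and the path are placeholders, not a full rule set.

```python
# Quick source scan for a few illustrative red flags; the patterns and the
# source path are placeholders, not a complete rule set.
import pathlib
import re

RED_FLAGS = {
    "cleartext URL": re.compile(r"http://"),
    "possible disabled TLS validation": re.compile(
        r"ALLOW_ALL_HOSTNAME_VERIFIER|setHostnameVerifier|TrustAllCerts",
        re.IGNORECASE,
    ),
}

for path in pathlib.Path("app_source").rglob("*"):
    if path.suffix not in {".java", ".kt", ".swift", ".m"}:
        continue
    text = path.read_text(errors="ignore")
    for label, pattern in RED_FLAGS.items():
        for match in pattern.finditer(text):
            line_no = text.count("\n", 0, match.start()) + 1
            print(f"{path}:{line_no}: {label}")
```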

Third-party intelligence – We check what other reports or assessments have been made available by other researchers.
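
One common way to gather this kind of intelligence is a hash-reputation lookup against a service such as VirusTotal. The sketch below shows the general shape of such a query against its v3 file endpoint, with a placeholder hash and API key.

```python
# Hash-reputation lookup against VirusTotal's v3 file endpoint.
# The SHA-256 and API key are placeholders.
import requests

SHA256 = "0" * 64          # placeholder app package hash
API_KEY = "YOUR_API_KEY"   # placeholder

resp = requests.get(
    f"https://www.virustotal.com/api/v3/files/{SHA256}",
    headers={"x-apikey": API_KEY},
    timeout=10,
)
if resp.status_code == 200:
    stats = resp.json()["data"]["attributes"]["last_analysis_stats"]
    print("Engines flagging as malicious:", stats.get("malicious", 0))
else:
    print("No report available:", resp.status_code)
```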

If you have any questions about app risk assessments, contact one of our experts – we’re here to help.