With a hundred million end users, the notion of a widespread attack on Apple iOS devices is tempting to any criminal. The dream (or nightmare) of an attacker somehow targeting potentially millions of always-on, always-connected iOS devices using a large-scale automated attack is quite disconcerting.

You might be surprised to know that not only is this possible, but the threat is much more serious than that: a skilled virus writer could harvest sensitive financial information, steal account credentials, and lift other sensitive data from nearly any application running on the device, regardless of what bank, credit card manager, or photo vault you use, and regardless of what storage encryption or passcode the end user may have set on the device. Surprisingly, the basic design of many runtime environments, including iOS's, allows for such an effective generalized attack, and this article will demonstrate just how an attacker might go after such a tempting target.

In my latest book, Hacking and Securing iOS Applications, I teach my readers how to first think like the criminals who are going to attack their applications, so that they can use the knowledge provided in later chapters to design more secure code. There are countless techniques that can be used to steal data from iOS applications, both at rest and in transit, and unfortunately many developers aren’t aware of just how vulnerable they really are.

While just about any application can eventually be breached given an allotment of hands-on time and resources, the bigger threat (and what makes a much more lucrative target for criminals) is the capability to launch a widespread infection that targets a large number of applications and devices at once. Modern-day botnets infect PCs using such a technique, resulting in the large-scale theft of private information from countless desktops, right under the users’ noses. Black market “search engines” exist, allowing high-dollar clients to quickly locate and purchase private data across a network of infected machines using P2P protocols. Automated attacks are much more powerful than those that simply attack one vulnerability inside a single program, and this article will demonstrate how, by infecting the foundation that iOS applications are written on, an attacker can successfully launch such an automated attack.

You’ve read about it a few times this year, and in prior years as well: the next zero-day exploit that can be used to infect millions of iOS devices. Charlie Miller demonstrated how a trojan can be embedded in an otherwise innocuous stock market application in the App Store. A number of hacks “in the wild” have gotten press for attacking the estimated 10-20 million jailbroken devices on the market (a conservative figure). It seems there is always a zero-day exploit in the wild that can gain code execution privileges on non-jailbroken iOS devices. Apple does eventually address these, but the turnaround can sometimes be weeks or months. In addition, a number of analytics sources have confirmed that a large number of iOS devices today are still running older versions of firmware that remain susceptible to well-known attacks. A device need only be attacked once in order to infect it, and with the right code, an attacker can make such an infection undetectable, even to the forensic imaging tools used by law enforcement. Beyond zero-day exploits, a number of other techniques using boot loader exploits and custom code can be used to invisibly infect a device in less than two minutes. There are countless approaches to payload delivery, depending on whether you’re targeting one individual, a company, a government agency, or the entire world.

So foreign code can be introduced onto a device, and devices can be compromised. This happens all the time: why is this such an issue? Don’t we have encryption? SSL? Apple’s Keychain? What about Apple’s data protection encryption? ASLR? These are all great approaches to improving security, but they do not protect an application from automated infection. In fact, a majority of them deal only with data at rest, and none of them protect data while it’s loaded into an application’s runtime.

While I’ve discussed a number of ways to circumvent these technologies in my book, this article is going to dig a bit deeper and address automated techniques to steal data from a common place in iOS: memory. What if I told you that I could steal personal information that you don’t even store on your phone, from your phone, while you were using your phone, and be a thousand miles away? The reality is much worse than this, in fact. Should an attacker craft such an automated attack, they could quite possibly modify data as it’s sent TO your financial institution, or other online account, to redirect payments to their own account, or to wreak other forms of havoc, using your own application to do it.

Let’s use the most common threat, data theft, to illustrate such an automated attack. Allocated memory is where all of your data must reside in clear text, so that the application can work with it. No matter how many layers of encryption you bolt on after the fact, the application itself must have the ability to work with its own data, and so if you can get to the application’s memory, you can steal data before any encryption is performed or any other security is added on. To help prevent memory theft by hackers, Apple recently incorporated address space layout randomization (ASLR) into iOS, in an attempt to secure the memory of an application. ASLR randomizes an application’s address space, so that a low-level exploit is more likely to crash the application before it can successfully steal data. So what’s the problem? Apple’s Objective-C runtime environment makes it so much easier to hijack an application that nobody needs to write low-level exploits anymore.

The Objective-C runtime sits on top of what C programmers would call the “real world” where application code is actually executed. Objective-C, like many modern languages, is a reflective language; it can observe and modify its own behavior at runtime: it can see what objects and classes exist, what instance variables are stored, and so on. Reflection allows program instructions to be treated like data, allowing a program to make modifications to itself. The Objective-C runtime allows a program not only to create and call ad-hoc methods, but to create ad-hoc classes and methods on the fly, query existing classes and instance variables, and see the entire landscape of an application. Objective-C is based upon a simple Smalltalk-esque messaging framework; methods aren’t “called” in the sense of traditional subroutines, but rather are sent messages. If you know the right station to tune into, you can intercept these messages and see what’s going on in a program. And if you know the right way to send messages—then you can really start to manipulate what happens inside an Objective-C application. An attacker can manipulate and abuse the runtime of your Objective-C application to cause your application to malfunction on his behalf. Bypassing security locks, breaking logic checks, accessing privileged parts of your application, or stealing memory—all of these, and more, can be performed by an attacker running code on a compromised device.

The “runtime world” of your application is like a small model train village that someone can look down into from the “outside world”, perceive everything happening, and even make changes to. Say you have an Objective-C networking class in your application to send data back and forth. Imagine taking that networking class, shrinking it down, and placing it in that digital model train village: an attacker can easily look inside the networking class and see the data as it flows back and forth, and it’s really easy to do if you coded your class in Objective-C. In a nutshell, it works like this.

Here’s a sample snippet from one of the HelloWorld programs I use in my book. It uses Objective-C syntax to send four messages: alloc, init, say:, and release.

SaySomething *saySomething = [ [ SaySomething alloc ] init ];
[ saySomething say: @"Hello, world!" ];
[ saySomething release ];

When this Objective-C code is compiled, it is translated into C code that operates inside the Objective-C runtime. The four messages become C function calls:

objc_msgSend(
    objc_msgSend(
        objc_msgSend(
            objc_msgSend(
                objc_getClass("SaySomething"),
                NSSelectorFromString(@"alloc")),
            NSSelectorFromString(@"init")),
        NSSelectorFromString(@"say:"), @"Hello, world!"),
    NSSelectorFromString(@"release"));

The objc_msgSend function is probably the most significant component of the Objective-C runtime, as it is responsible for dispatching every message. This function is used to send messages to objects in memory; it’s the equivalent of calling functions in C. Any time a method or property is accessed, the objc_msgSend function is invoked under the hood. Since the Objective-C runtime library is open source, we can take a look into this function and see how it’s constructed. The C prototype for the objc_msgSend function follows.

id objc_msgSend(id self, SEL op, ...)

There is a C framework sitting underneath the Objective-C runtime. The function accepts two parameters: a receiver (id self) and a selector (SEL op). The receiver is a pointer to the instance of a class that the message is intended for, and the selector identifies the method designated to handle the message. Methods are not copied for every instance of a class; only one copy exists, and it is invoked with a pointer to the instance being operated on. So any method that’s coded in Objective-C really is a simple C function under the hood, and a pointer to that function gets stored inside its class (which is really just a structure).

This C framework controls everything that goes on in the Objective-C runtime; what look like classes and methods to an Objective-C programmer really boil down to simple structures and function pointers. An attacker knows this, and they also know to what extent the Objective-C runtime can be abused.

A simple example of how the runtime is easily abused is the OneSafe application. I helped the author of OneSafe fix a severe, yet all too common, vulnerability in his application. To steal all of the passwords and credit card numbers out of OneSafe, you only needed to follow these steps:

  1. Steal someone’s iPhone
  2. Run really fast
  3. In a debugger, attach to the process and invoke: [ [ [ UIApplication sharedApplication ] delegate ] userIsLogged: 1 ]

For example:

# gdb -q -p 2028
(gdb) call (void *) [ [ [ UIApplication sharedApplication ] \
    delegate ] userIsLogged: 1 ]
$1 = (void *) 0x2b16e0
(gdb) c
Continuing.

The application would then swing open, revealing all of the user’s data, without a password of any kind.

While objc_msgSend might look as if it can be heavily abused, the runtime contains other functions that are even more appealing. As you’ve learned, methods are simply C functions whose pointers are stored in the class. To change the behavior of any method in a class, you only need to change a single pointer in a structure to point to your own code. And it’s even easier than that: the Objective-C runtime provides a function to do this for you, without any real hacking!

The class_replaceMethod function can replace the implementation of any method in a given class with a different one. An attacker could easily write their own code and use it to replace a method in a target application. In my book, I give one useful example of how to use this to get free skips from Pandora (which they still haven’t fixed, by the way – have fun!): simply replace their method that checks the number of skips remaining with one of your own that always returns a full count. This can easily be done using a free utility named Cycript, which exposes method replacement through a simple, JavaScript-esque syntax:

cy# var skipLimitState = [ SkipLimitState sharedSkipLimits ]
cy# skipLimitState->isa.messages['skipsForStation:'] = function() \
    { return 6; }

Every time the skipsForStation: method is called, Pandora’s version of the method decrements the number of skips available by one. The simple Cycript recipe above overrides their function with one that always returns a full skip count. It’s such a simple hack that anyone can do it, even someone who has never coded a line of Objective-C in their life!

So let’s say I’m an attacker, and I don’t want to attack Pandora, or any one specific application; let’s say I want to attack as many financial applications as I can AT ONCE. Every application is written differently, and any of them could easily push out an update at any time that changes how the application works. I don’t want to have to reverse engineer every app, or write a dozen different infections. In fact, an attacker’s goals are likely very simple:

  1. Infect every application on the whole phone
  2. Steal as much important data as possible from any apps the user runs
  3. Wash, rinse, repeat.

What do the vast majority of applications have in common that would allow such an attack to work? They all use the same foundation to handle sensitive customer data: Apple’s Foundation classes. Not all, but most applications under the sun use Apple’s Foundation classes for everything from creating simple string (NSString) objects to sending, receiving, and storing data with the networking and archiver classes. If you attack the foundation, you attack every application using it. Apple’s Foundation classes are at the very core of nearly every Objective-C application, and so if you can infect them, you can also infect “nearly every Objective-C application”.

Part 2 of this article will address attacks on the Apple foundation classes in more detail.

This article is a 3-part post.


For more information, consider picking up a copy of Hacking and Securing iOS Applications.


About Jonathan Zdziarski
Respected in his community as an iOS forensics expert, Jonathan is a noted security researcher and author of many books ranging from machine learning to iPhone hacking and software development. Jonathan frequently trains many federal and state law enforcement agencies in digital forensic techniques and assists in high profile cases. Jonathan is also an inventor on several US patent applications, the father of DSPAM and other language classification technology, an App Store developer, and is currently employed as Sr. Forensic Scientist at viaForensics. All opinions expressed are the author’s own. Follow Jonathan on Twitter: @JZdziarski