Bad guys ruin everything

Researchers use Intel SGX to put malware beyond the reach of antivirus software

Processor protects malware from attempts to inspect and analyze it.

Intel Skylake die shot, built using the 14nm process.

Researchers have found a way to run malicious code on systems with Intel processors in such a way that the malware can't be analyzed or identified by antivirus software, using the processor's own features to protect the bad code. As well as making malware in general harder to examine, bad actors could use this protection to, for example, write ransomware applications that never disclose their encryption keys in readable memory, making it substantially harder to recover from attacks.

The research, performed at Graz University of Technology by Michael Schwarz, Samuel Weiser, and Daniel Gruss (one of the researchers behind last year's Spectre attack), uses a feature that Intel introduced with its Skylake processors called SGX ("Software Guard eXtensions"). SGX enables programs to carve out enclaves where both the code and the data the code works with are protected to ensure their confidentiality (nothing else on the system can spy on them) and integrity (any tampering with the code or data can be detected). The contents of an enclave are transparently encrypted every time they're written to RAM and decrypted upon being read. The processor governs access to the enclave memory: any attempt to access the enclave's memory from code outside the enclave is blocked; the decryption and encryption occur only for code running within the enclave.

SGX has been promoted as a solution to a range of security concerns when a developer wants to protect code, data, or both, from prying eyes. For example, an SGX enclave running on a cloud platform could be used to run custom proprietary algorithms, such that even the cloud provider cannot determine what the algorithms are doing. On a client computer, the SGX enclave could be used in a similar way to enforce DRM (digital rights management) restrictions; the decryption process and decryption keys that the DRM used could be held within the enclave, making them unreadable to the rest of the system. There are biometric products on the market that use SGX enclaves for processing the biometric data and securely storing it such that it can't be tampered with.

SGX has been designed for this particular threat model: the enclave is trusted and contains something sensitive, but everything else (the application, the operating system, and even the hypervisor) is potentially hostile. While there have been attacks on this threat model (for example, improperly written SGX enclaves can be vulnerable to timing attacks or Meltdown-style attacks), it appears to be robust as long as certain best practices are followed.

Let’s ignore Intel’s threat model

The researchers are using that robustness for nefarious purposes and considering the question: what happens if it's the code in the enclave that's malicious? SGX by design will make it impossible for antimalware software to inspect or analyze the running malware. This would make it a promising place to put malicious code. However, code in an enclave is quite restricted. In particular, it has no provision to make operating system calls; it can't open files, read data from disk, or write to disk. All of those things have to be performed from outside the enclave. As such, naively it would appear that a hypothetical SGX-based ransomware application would need considerable code outside the SGX enclave: the pieces to enumerate all your documents, read them, and overwrite them with their encrypted versions would not be protected. Only the encryption operation itself would occur within the enclave.

The enclave code does, however, have the ability to read and write anywhere in the unencrypted process memory; while nothing from outside the enclave can look inside, anything inside the enclave is free to look outside. The researchers used this ability to scan through the process's memory and find the information needed to construct a return-oriented programming (ROP) payload to run code of their choosing. This chains together little fragments of executable code that are part of the host application to do things that the host application didn't intend.

Some trickery was needed to perform this reading and writing. If the enclave code tries to read unallocated memory or write to memory that's unallocated or read-only, the usual behavior is for an exception to be generated and for the processor to switch out of the enclave to handle the exception. This would make scanning the host's memory impossible, because once the exception happened, the malicious enclave would no longer be running, and in all likelihood the program would crash. To cope with this, the researchers revisited a technique that was also found to be useful in the Meltdown attack: they used another Intel processor feature, the Transactional Synchronization eXtensions (TSX).

TSX provides a constrained form of transactional memory. Transactional memory allows a thread to modify a bunch of different memory locations and then publish those modifications in one single atomic update, such that other threads see either none of the modifications or all of them, without ever observing an intermediate, partially written state. If a second thread tries to change the same memory while the first thread is making its modifications, the attempt to publish the modifications is aborted.

The intent of TSX is to make it easier to develop multithreaded data structures that don't use locks to protect their modifications; done correctly, these can be much faster than lock-based structures, especially under heavy load. But TSX has a side effect that's particularly convenient: attempts to read or write unallocated or unwriteable memory from within a transaction don't generate exceptions. Instead, they just abort the transaction. Critically, this transaction abort doesn't leave the enclave; instead, it's handled within the enclave.

This gives the malicious enclave all it needs to do its dirty work. It scans the memory of the host process to find the components for its ROP payload and somewhere to write that payload, then redirects the processor to run that payload. Typically the payload would do something such as mark a section of memory as being executable, so the malware can put its own set of supporting functions—for example, ransomware needs to list files, open them, read them, and then overwrite them—somewhere that it can access. The critical encryption happens within the enclave, making it impossible to extract the encryption key or even analyze the malware to find out what algorithm it's using to encrypt the data.

Signed, sealed, and delivered

The processor won't load any old code into an enclave. Enclave developers need a "commercial agreement" with Intel to develop enclaves. Under this agreement, Intel blesses a code-signing certificate belonging to the developer and adds this to a whitelist. A special Intel-developed enclave (which is implicitly trusted by the processor) then inspects each piece of code as it's loaded to ensure that it was signed by one of the whitelisted certificates. A malware developer might not want to enter into such an agreement with Intel, and the terms of the agreement expressly prohibit the development of SGX malware, though one might question the value of this restriction.

This could be subverted, however, by writing an enclave that loaded a payload from disk and then executed it; the loader would need a whitelisted signature, but the payload wouldn't. This approach is useful anyway, because while enclave code runs in encrypted memory, the enclave libraries stored on disk aren't themselves encrypted. With dynamic loading, the on-disk payload could be encrypted and only decrypted once loaded into the enclave. The loader itself wouldn't be malicious, giving some amount of plausible deniability that anything nefarious was intended. Indeed, an enclave could be entirely benign but contain exploitable flaws that allow attackers to inject their malicious code inside; SGX doesn't protect against plain-old coding errors.

This particular aspect of SGX has been widely criticized, as it makes Intel a gatekeeper of sorts for all SGX applications. Accordingly, second-generation SGX systems (which include certain processors branded eighth-generation or newer) relax this restriction, making it possible to start enclaves that aren't signed by Intel's whitelisted signers.

As such, the research shows that SGX can be used in a way that isn't really supposed to be possible: malware can reside within a protected enclave such that the unencrypted code of that malware is never exposed to the host operating system, including antivirus software. Further, the malware isn't constrained by the enclave: it can subvert the host application to access operating system APIs, opening the door to attacks such as ransomware-style encryption of a victim's files.

About that threat model...

The attack is esoteric, but as SGX becomes more commonplace, researchers are going to poke at it more and more and find ways of subverting and co-opting it. We saw similar things with the introduction of hardware virtualization support; that opened the door to a new breed of rootkit that could hide itself from the operating system, taking a valuable feature and using it for bad things.

Intel has been informed of the research, responding:

Intel is aware of this research which is based upon assumptions that are outside the threat model for Intel® SGX. The value of Intel SGX is to execute code in a protected enclave; however, Intel SGX does not guarantee that the code executed in the enclave is from a trusted source. In all cases, we recommend utilizing programs, files, apps, and plugins from trusted sources. Protecting customers continues to be a critical priority for us, and we would like to thank Michael Schwarz, Samuel Weiser, and Daniel Gruss for their ongoing research and for working with Intel on coordinated vulnerability disclosure.

In other words, as far as Intel is concerned, SGX is working as it should, protecting the enclave's contents from the rest of the system. If you run something nasty within the enclave, then the company makes no promises that bad things won't happen to your computer; SGX simply isn't designed to protect against that.

That may be so, but SGX gives developers some powerful capabilities they didn't have before. "How are bad guys going to mess with this?" is an obvious question to ask, because if it gives them some advantage, mess with it they will.
