These Researchers Just Hacked an Air-Gapped Computer Using a Simple Cellphone

Future Tense
The Citizen's Guide to the Future
July 27 2015 2:06 PM

Many sensitive work environments restrict smartphones due to their technological capabilities as listening devices. Turns out simple cellphones might not be safe either.

This post originally appeared in Wired.

The most sensitive work environments, like nuclear power plants, demand the strictest security. Usually this is achieved by air-gapping computers from the Internet and preventing workers from inserting USB sticks into computers. When the work is classified or involves sensitive trade secrets, companies often also institute strict rules against bringing smartphones into the workspace, as these could easily be turned into unwitting listening devices.


But researchers in Israel have devised a new method for stealing data that bypasses all of these protections—using the GSM network, electromagnetic waves and a basic low-end mobile phone. The researchers are calling the finding a “breakthrough” in extracting data from air-gapped systems and say it serves as a warning to defense companies and others that they need to immediately “change their security guidelines and prohibit employees and visitors from bringing devices capable of intercepting RF signals,” says Yuval Elovici, director of the Cyber Security Research Center at Ben-Gurion University of the Negev, where the research was done.

The attack requires both the targeted computer and the mobile phone to have malware installed on them, but once this is done the attack exploits the natural capabilities of each device to exfiltrate data. Computers, for example, naturally emit electromagnetic radiation during their normal operation, and cellphones by their nature are “agile receivers” of such signals. These two factors combined create an “invitation for attackers seeking to exfiltrate data over a covert channel,” the researchers write in a paper about their findings.

The research builds on a previous attack the academics devised last year using a smartphone to wirelessly extract data from air-gapped computers. But that attack relied on radio signals generated by a computer’s video card, which were picked up by the FM radio receiver in a smartphone.

The new attack uses a different method for transmitting the data and infiltrates environments where even smartphones are restricted. It works with simple feature phones that often are allowed into sensitive environments where smartphones are not, because they have only voice and text-messaging capabilities and presumably can’t be turned into listening devices by spies. Intel’s manufacturing employees, for example, can only use “basic corporate-owned cell phones with voice and text messaging features” that have no camera, video, or Wi-Fi capability, according to a company white paper citing best practices for its factories. But the new research shows that even these basic Intel phones could present a risk to the company.


“[U]nlike some other recent work in this field, [this attack] exploits components that are virtually guaranteed to be present on any desktop/server computer and cellular phone,” they note in their paper.

Though the attack permits only a small amount of data to be extracted to a nearby phone, it’s enough to allow exfiltration of passwords or even encryption keys in a minute or two, depending on the length of the password. But an attacker wouldn’t actually need proximity or a phone to siphon data. The researchers found they could also extract much more data from greater distances using a dedicated receiver positioned up to 30 meters away. This means someone with the right hardware could wirelessly exfiltrate data through walls from a parking lot or another building.

Although the phone-based attack could be mitigated simply by barring all mobile phones from a sensitive work environment, combating an attack that uses a dedicated receiver 30 meters away would require installing insulated walls or partitions.

The research was conducted by lead researcher Mordechai Guri, along with Assaf Kachlon, Ofer Hasson, Gabi Kedma, Yisroel Mirsky, and Elovici. Guri will present their findings next month at the Usenix Security Symposium in Washington, DC. A paper describing their work has been published on the Usenix site, though it’s currently only available to subscribers. A video demonstrating the attack has also been published online.


Data leaks via electromagnetic emissions are not a new phenomenon. So-called TEMPEST attacks were discussed in an NSA article in 1972. And about 15 years ago, two researchers published papers demonstrating how electromagnetic emissions from a desktop computer could be manipulated through specific commands and software installed on the machine.

The Israeli researchers built on this previous knowledge to develop malware they call GSMem, which exploits this condition by forcing the computer’s memory bus to act as an antenna and transmit data wirelessly to a phone over cellular frequencies. The malware has a tiny footprint and consumes just 4 kilobytes of memory when operating, making it difficult to detect. It also consists of just a series of simple CPU instructions that don’t need to interact with the operating system’s API, which helps it hide from security scanners designed to monitor for malicious API activity.

The attack works in combination with a rootkit they devised, called ReceiverHandler, that gets embedded in the baseband firmware of the mobile phone. The GSMem malware could be installed on the computer through physical access or through interdiction methods—that is, in the supply chain while it is en route from the vendor to the buyer. The rootkit could be installed through social engineering, a malicious app, or physical access to the targeted phone.

The Nitty Gritty


When data moves between the CPU and RAM of a computer, radio waves get emitted as a matter of course. Normally the amplitude of these waves wouldn’t be sufficient to transmit messages to a phone, but the researchers found that by generating a continuous stream of data over the multi-channel memory buses on a computer, they could increase the amplitude and use the generated waves to carry binary messages to a receiver.

Multi-channel memory configurations allow data to be simultaneously transferred via two, three, or four data buses. When all these channels are used, the radio emissions from that data exchange can increase by 0.1 to 0.15 dB.

The GSMem malware exploits this process by causing data to be exchanged across all channels to generate sufficient amplitude. But it does so only when it wants to transmit a binary 1. For a binary 0, it allows the computer to emit at its regular strength. The fluctuations in the transmission allow the receiver in the phone to distinguish when a 0 or a 1 is being transmitted.

“A ‘0’ is determined when the amplitude of the signal is that of the bus’s average casual emission,” the researchers write in their paper. “Anything significantly higher than this is interpreted as a binary ‘1’.”
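In essence, this is a crude form of on-off keying. The idea can be sketched as a small simulation (the amplitude numbers, noise level, and function names below are illustrative assumptions, not values from the paper; the real GSMem modulates actual memory-bus traffic rather than numbers in software):

```python
import random

BASELINE = 1.0   # average "casual" bus emission amplitude (arbitrary units)
BOOST = 1.15     # amplitude when all memory channels are driven (a binary 1)
NOISE = 0.02     # a little ambient electromagnetic noise

def transmit(bits):
    """Map each bit to an emission amplitude: 1 -> boosted, 0 -> baseline."""
    return [(BOOST if b else BASELINE) + random.uniform(-NOISE, NOISE)
            for b in bits]

def receive(samples, threshold=(BASELINE + BOOST) / 2):
    """Anything significantly above the casual level is read as a '1'."""
    return [1 if s > threshold else 0 for s in samples]

message = [1, 0, 1, 0, 0, 1, 1, 0]
assert receive(transmit(message)) == message  # round-trips despite the noise
```

The design choice mirrors what the researchers describe: the receiver needs no shared clock or negotiated signal level, only the ability to tell "noticeably above average" from "average."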


The receiver recognizes the transmission and converts the signals into binary 1s and 0s and ultimately into human-readable data, such as a password or encryption key. It stores the information so that it can later be transmitted via mobile data, SMS, or Wi-Fi if the attack involves a smartphone.

The receiver knows when a message is being sent because the transmissions are broken down into frames of sequential data, each composed of 12 bits, that include a header containing the sequence “1010.” As soon as the receiver sees the header, it takes note of the amplitude at which the message is being sent, makes some adjustments to sync with that amplitude, then proceeds to translate the emitted data into binary. They say the most difficult part of the research was designing the receiver malware to decode the cellular signals.
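The receiver-side framing logic can be sketched as follows. The article says each 12-bit frame includes a "1010" header; the split into a 4-bit header plus 8 payload bits is an assumption for illustration, as the exact layout isn't spelled out here:

```python
HEADER = [1, 0, 1, 0]
FRAME_BITS = 12  # assumed: 4-bit header + 8 payload bits per frame

def decode_frames(bitstream):
    """Scan for the '1010' header, then take the rest of the frame as payload."""
    payload = []
    i = 0
    while i + FRAME_BITS <= len(bitstream):
        if bitstream[i:i + 4] == HEADER:
            payload.extend(bitstream[i + 4:i + FRAME_BITS])
            i += FRAME_BITS  # skip past the whole frame
        else:
            i += 1  # no header here; keep scanning
    return payload

stream = [0, 1, 1] + HEADER + [1] * 8 + HEADER + [0] * 8
assert decode_frames(stream) == [1] * 8 + [0] * 8
```

The amplitude-calibration step the researchers describe (syncing to the sender's signal level after spotting a header) is omitted here, since this sketch works on already-thresholded bits.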

For their test, the researchers used a nine-year-old Motorola C123 phone with a Calypso baseband chip made by Texas Instruments, which supports 2G network communication but has no GPRS, Wi-Fi, or mobile-data capabilities. They were able to transmit data to the phone at a rate of 1 to 2 bits per second, which was sufficient to transmit 256-bit encryption keys from a workstation.
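Taken at face value, those numbers imply a key takes a few minutes to leak. A back-of-the-envelope estimate, assuming the 12-bit frames carry 8 payload bits each (an assumed split, not confirmed in the article):

```python
KEY_BITS = 256
HEADER_BITS, PAYLOAD_BITS = 4, 8  # assumed 12-bit frame layout

frames = -(-KEY_BITS // PAYLOAD_BITS)               # ceiling division: 32 frames
total_bits = frames * (HEADER_BITS + PAYLOAD_BITS)  # 384 bits on the air

for rate_bps in (1, 2):
    minutes = total_bits / rate_bps / 60
    print(f"{rate_bps} bit/s: {minutes:.1f} minutes")  # 6.4 and 3.2 minutes
```

That puts a full 256-bit key at roughly three to six minutes, consistent with the article's claim that shorter secrets such as passwords leak in a minute or two.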

They tested the attack on three workstations with different Microsoft Windows and Linux (Ubuntu) configurations. The experiments all took place in a space with other active desktop computers running nearby, to simulate a realistic work environment with plenty of electromagnetic noise that the receiver has to contend with to find the signals it needs to decode.

Although the aim of their test was to see if a basic phone could be used to siphon data, a smartphone would presumably produce better results, since such phones have better radio frequency reception. They plan to test smartphones in future research.

But even better than a smartphone would be a dedicated receiver, which the researchers did test. Using dedicated hardware positioned up to 30 meters away instead of a nearby phone, they were able to achieve a transmission rate of 100 to 1,000 bits per second. Their setup used GNU Radio, a software-defined radio toolkit, and an Ettus Research Universal Software Radio Peripheral B210.

Although there are limits to the amount of data any of these attacks can siphon, even small bits of data can be useful. In addition to passwords, an attacker could use the technique to siphon the GPS coordinates of sensitive equipment to determine its location—for example, a computer being used to operate a covert nuclear program in a hidden facility. Or it could be used to siphon the RSA private key that the owner of the computer uses to encrypt communications.

“This is not a scenario where you can leak out megabytes of documents, but today sensitive data is usually locked down by smaller amounts of data,” says Dudu Mimran, CTO of the Cyber Security Research Center. “So if you can get the RSA private key, you’re breaking a lot of things.”

Future Tense is a partnership of Slate, New America, and Arizona State University.

Kim Zetter is a senior staff reporter at Wired covering cybercrime, privacy, and security.