One of the more plausible conspiracy theories holds that a cell phone can run malicious firmware on its baseband processor, listening and transmitting data even when it's apparently powered off. Nowadays this kind of behavior is called a feature, at least if your phone is made by Apple, thanks to the Find My functionality. Even when the phone is off, the Bluetooth chip hums along in a low-power state, allowing these features to work. The problem is that this chip does not run signed firmware. All that's required is root access to the phone's primary operating system to load a potentially malicious firmware image onto the Bluetooth chip.
Researchers from TU Darmstadt demonstrated the approach and wrote a great paper about their work (PDF). The research suggests a couple of really interesting possibilities. The simplest is hijacking Apple's Find My system to track someone whose phone is turned off. The greater danger is that this could be used to keep surveillance malware on a device even across power cycles. Devices are usually quite well protected against attacks from the external network, but hardly at all against attacks originating from the chips themselves. Unfortunately, since the unsigned firmware is a hardware limitation, there's not much a security update can do to mitigate this beyond the normal efforts to prevent attackers from compromising the operating system.
Bluetooth Low Energy
Our next issue is also Bluetooth-related, this time concerning Bluetooth Low Energy (BLE) used as an authentication token. You've probably seen this idea in one form or another, like the Android option to stay unlocked while connected to your BLE earbuds. Various vehicles use it as well, unlocking once the paired phone comes within BLE range.
Using BLE for this type of authentication has always been a bad idea, because BLE is vulnerable to relay attacks. One half of the attack sits next to your phone and impersonates the car's BLE chip, while the other sits next to the car and spoofs your phone. Link the two spoofing devices together, and the car thinks the authorized phone is right there. To make this "safe", vendors have added encryption features as well as signal timing analysis to try to detect relaying.
The real innovation of this hack is the use of dedicated hardware that sniffs and replays at the link layer. That sidesteps the encryption problem, since the signal is simply passed along undisturbed. It also speeds up the process enough that latencies stay low even over an internet link hundreds of miles long. The next iteration of this technique will likely use software-defined radios to reproduce the signals at an even lower level. The fix is either to prompt the user for authorization before unlocking the vehicle, or to embed location information in the encrypted payload.
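To make the relay idea concrete, here's a minimal sketch of the core forwarding loop: two pumps that shuttle raw frames between the phone side and the car side without parsing or decrypting anything. This is a plain socket pump for illustration, not actual BLE link-layer code, and the `pump`/`relay` names are our own invention.

```python
import socket
import threading

def pump(src: socket.socket, dst: socket.socket) -> None:
    """Forward raw frames from src to dst, byte for byte.

    Because the ciphertext is relayed untouched, any encryption on the
    link survives the relay; the attacker never needs to decrypt.
    """
    while True:
        frame = src.recv(4096)
        if not frame:
            break
        dst.sendall(frame)

def relay(phone_side: socket.socket, car_side: socket.socket) -> None:
    # One thread per direction: phone -> car and car -> phone.
    for a, b in ((phone_side, car_side), (car_side, phone_side)):
        threading.Thread(target=pump, args=(a, b), daemon=True).start()
```

Since the frames pass through unmodified, the only thing the relay adds is latency, which is exactly why timing analysis was the defenders' countermeasure and why cutting that latency was the attackers' breakthrough.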
Python buffer burned out
This is one of those issues that isn't a big deal, yet can still be a problem in certain situations. It all started in 2012, when it was observed that a Python memoryview object could crash the program if it points to a location that is no longer valid. A memoryview is essentially a pointer to an underlying C buffer, and it doesn't get quite the same automatic reference counting as a regular Python object. Deallocate the object the memoryview points to, then dereference that "pointer", and you get C-style undefined behavior. (Here we don't mean cursed code, just garden-variety UB: dereferencing a pointer that is no longer valid.) A bit of memory manipulation gives pretty good control over what the raw pointer value will be, and setting it to NULL predictably crashes the interpreter.
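To see why a memoryview is dangerous, here's a safe sketch using ctypes showing that the view is just a thin wrapper over a raw C buffer: writes through it land directly in that memory. We deliberately stop short of freeing the buffer and reproducing the crash, and the variable names are ours.

```python
import ctypes

buf = ctypes.create_string_buffer(b"hello world", 16)  # a real C buffer
mv = memoryview(buf).cast("B")   # byte view over the raw C storage
mv[0:5] = b"HELLO"               # writes go straight into the C buffer
addr = ctypes.addressof(buf)     # the raw pointer the view is built on

# The 2012 bug: if the buffer is deallocated while mv still exists,
# reading or writing through mv dereferences a stale pointer. That is
# a classic use-after-free, and pointing it at NULL crashes Python.
```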
This is actually a read and write primitive. Poke around in Python's memory, find the ELF headers, and then locate glibc's system function via the Procedure Linkage Table. Find it, use the memory corruption bug to jump to the right location in memory, and boom, you've popped a shell from Python!
The wiser among you are probably thinking: that's a convoluted way to call os.system(). And yes, as an exploit, it's fairly unremarkable. [kn32], our tour guide for this Python quirk, points out that it could be used to escape a Python sandbox, but that's a very niche use case. So while this isn't really an exploit, it's a great learning tool and a fun hack.
What happens when a group of smart and highly motivated researchers, like the folks at PRODAFT, set their sights on a large ransomware gang? Well, first they need to come up with a memorable name. They call the gang slinging Conti "Wizard Spider", getting a strong D&D vibe from them.
The PDF report describes the results, and they are impressive. The investigation mapped WS's preferred tools, as well as some of their infrastructure, such as the network of WireGuard tunnels they use to obscure their actions. Most interesting was the discovery of a backup server, believed to be located in Russia, which also contained backups corresponding to REvil attacks. There are plenty of theories as to what exactly that finding indicates. Another version of the report, turned over to law enforcement, likely contains more identifying information.
A few notable techniques are discussed, including a machine learning engine that looks at writing samples and attempts to determine the author's native language. There are tells for this, like omitting articles such as "the" or using the wrong verb tense. Some strange-looking English phrases turn out to be literal, word-for-word translations of common expressions in the author's native language. In a conclusion that surprised no one, PRODAFT found that WS's official spokesperson is a native Russian speaker. Hopefully the rest of the story behind the extraction of this treasure trove of information can eventually be shared. It promises to be quite the tale of hacking the hackers, and maybe some old-fashioned tradecraft as well.
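Stylometry like this can be surprisingly simple in principle. As a toy illustration (our own, not PRODAFT's engine): Russian has no articles, so one crude tell is simply how often articles appear in a writing sample.

```python
def article_rate(text: str) -> float:
    """Fraction of words that are English articles.

    Native Russian speakers writing English often drop articles,
    since Russian has none: one of the 'tells' mentioned above.
    """
    words = text.lower().split()
    if not words:
        return 0.0
    articles = sum(w in ("a", "an", "the") for w in words)
    return articles / len(words)

# Native-sounding English vs. a typical article-dropping construction:
native = article_rate("send the payment to the account by the deadline")
dropped = article_rate("send payment to account by deadline")
```

A real engine would combine many such features (tense errors, calqued idioms, word order) in a trained classifier, but the signal each feature captures is this simple.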
Discovery of the Parallels hack
During Pwn2Own 2021, [Jack Dates] from RET2 Systems managed to escape the Parallels VM. To our delight, he wrote up the exploit process for our edification. A series of bugs in the guest tools code allows a chain of steps to escape the guest. The first bug used is an information leak: 0x20 bytes are written into a 0x90-byte buffer, and then the entire buffer is sent back to the guest. That exposes 0x70 bytes of host heap memory per request, just enough to recover some base addresses.
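The leak pattern is worth spelling out. Here's a hypothetical model of it (our own names and data, not the Parallels code): the handler means to return 0x20 bytes but copies the whole 0x90-byte buffer, so 0x90 - 0x20 = 0x70 stale bytes ride along.

```python
# Stand-in for whatever leftover host data occupies the "uninitialized" buffer.
STALE_HOST_MEMORY = bytes(i & 0xFF for i in range(0x90))

def handle_request() -> bytes:
    buf = bytearray(STALE_HOST_MEMORY)   # uninitialized buffer reuses stale memory
    buf[0:0x20] = b"\x41" * 0x20         # host initializes only the first 0x20 bytes
    return bytes(buf)                    # ...but replies with all 0x90 of them

reply = handle_request()
leaked = reply[0x20:]                    # 0x70 bytes of host memory per request
```

Repeat the request enough times and pointers in the leaked bytes give away the base addresses needed to defeat ASLR for the rest of the chain.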
The next bug is a buffer overflow in the drag-and-drop handling code. The structure passed to the host contains a string that should be null-terminated, and omitting the null allows a buffer overflow onto the stack. This overflow can be used to corrupt the exception handling of the guest tools code running on the host. A third bug, a so-called "megasmash", doesn't seem very useful at first glance, as it overflows an integer to trigger a massive buffer overflow. The problem with using it is that it attempts to write 0xffffffff bytes of program memory. The chain uses this to change a callback pointer to point at malicious code, but some of that memory is guaranteed to be read-only, and the write throws an exception.
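The "megasmash" arithmetic is plain integer wrap-around. A hypothetical model of the length calculation (our own naming, modeling C uint32_t math in Python):

```python
def body_length(claimed_len: int) -> int:
    """Model of a buggy C length computation on a uint32_t field.

    Subtracting past zero wraps, so a claimed length of 0 becomes an
    attempted 0xffffffff-byte write: the 'megasmash'.
    """
    return (claimed_len - 1) & 0xFFFFFFFF
```

A 4 GiB write can never complete, which is why the primitive looks useless on its own; the cleverness of the chain is making the inevitable fault land in sabotaged exception-handling code.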
The key is that the exception handling has already been tampered with. When the exception is thrown, the handling code immediately faults and hangs, preventing normal program cleanup. Other threads can then hit the corrupted function pointer, leading to code execution. The bugs were all fixed late last year, and [Jack] made a nice $40,000 for the exploit chain. Enjoy!