The custom firmware that the FBI would like Apple to produce in order to unlock the San Bernardino iPhone would be the most straightforward way of accessing the device, allowing the federal agency to rapidly attempt PIN codes until it found the one that unlocked the phone. But it's probably not the only way to achieve what the FBI wants.

There may well be approaches that don't require Apple to build a custom firmware to defeat some of the iPhone's security measures. The iPhone 5c used by the San Bernardino killers encrypts its data using a key derived from a combination of an ID embedded in the iPhone's processor and the user's PIN.

Assuming that a 4-digit PIN is being used, that's a mere 10,000 different combinations to try out. However, the iPhone has two protections against attempts to try every PIN in turn.

First, it inserts delays to force you to wait ever longer between PIN attempts (up to one hour at its longest).
Second, it has an optional capability to delete its encryption keys after 10 bad PINs, permanently cutting off access to any encrypted data. The FBI would like to use a custom firmware that allows attempting multiple PINs without either of these features.
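To put numbers on that, here is a back-of-the-envelope sketch. The delay schedule used below is the commonly reported iOS ladder (no delay for the first four attempts, then 1, 5, 15, and 15 minutes, then an hour for every attempt after that); exact values vary by iOS version, so treat it as an assumption:

```python
# Estimate the worst case for brute-forcing a 4-digit PIN by hand,
# assuming the commonly reported iOS delay ladder (an assumption; the
# exact schedule varies by iOS version).

def delay_minutes(attempt_number):
    """Delay imposed after the given failed attempt (assumed schedule)."""
    if attempt_number <= 4:
        return 0
    schedule = {5: 1, 6: 5, 7: 15, 8: 15}
    return schedule.get(attempt_number, 60)  # one hour from attempt 9 on

total_pins = 10 ** 4  # every 4-digit PIN
worst_case_minutes = sum(delay_minutes(n) for n in range(1, total_pins + 1))

print(total_pins)                    # 10000
print(worst_case_minutes / 60 / 24)  # about 416 days at one attempt per hour
```

With the delays stripped out by a custom firmware, the same 10,000 attempts become a matter of minutes.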

This custom firmware would most likely be run using the iPhone's DFU mode.

Device Firmware Update (DFU) mode is a low-level last resort mode that can be used to recover iPhones that are unable to boot.

To use DFU mode, an iPhone must be connected via USB to a computer running iTunes. iTunes will send a firmware image to the iPhone, and the iPhone will run that image from a RAM disk.

For the FBI's purposes, this image would include the PIN-attack routines to brute-force the lock on the device. Developing this firmware should not be particularly difficult—jailbreakers have developed all manner of utilities to build custom RAM disks to run from DFU mode, so running custom code from this environment is already somewhat understood—but there is a problem.

The iPhone will not run any old RAM disk that you copy to it.
It first verifies the digital signature of the system image that is transferred. Only if the image has been properly signed by Apple will the phone run it. The FBI cannot create that signature itself. Only Apple can do so.
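That refuse-unless-signed gate can be sketched as follows. Apple's real check uses RSA signatures rooted in a private key only Apple holds; the HMAC below is a standard-library stand-in so the sketch stays runnable, and the key and image names are purely illustrative. The control flow, not the cipher, is the point:

```python
# Sketch of the boot-time check: compute the expected signature over the
# firmware image and refuse to run it unless the supplied signature
# matches. HMAC stands in for Apple's RSA scheme; names are made up.
import hashlib
import hmac

APPLE_SIGNING_KEY = b"only-apple-holds-this"   # hypothetical stand-in

def sign_image(image: bytes, key: bytes) -> bytes:
    return hmac.new(key, image, hashlib.sha256).digest()

def boot_ram_disk(image: bytes, signature: bytes) -> str:
    expected = sign_image(image, APPLE_SIGNING_KEY)
    if not hmac.compare_digest(signature, expected):
        return "refused: bad signature"
    return "booted"

firmware = b"pin-brute-force-ramdisk"

# Without Apple's key, no forged signature will pass:
print(boot_ram_disk(firmware, b"\x00" * 32))  # refused: bad signature

# With a genuine signature, the image runs:
print(boot_ram_disk(firmware, sign_image(firmware, APPLE_SIGNING_KEY)))  # booted
```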

This also means that the FBI cannot even develop the code itself.

To test and debug the code, it must be possible to run the code, and that requires a signature.

This is why it is asking for Apple's involvement: only Apple is in a position to do this development.

Do nothing at all

The first possibility is that there's simply nothing to do.

Erasing after 10 bad PINs is optional, and it's off by default.
If the erase option isn't enabled, the FBI can simply brute force the PIN the old-fashioned way: by typing in new PINs one at a time.
It would want to reboot the phone from time to time to reset the 1 hour delay, but as tedious as the job would be, it's certainly not impossible. It would be a great deal slower on an iPhone 6 or 6s.
In those models, the running count of failed PIN attempts is preserved across reboots, so resetting the phone doesn't reset the delay period.

But on the 5c, there's no persistent record of bad PIN trials, so restarting the phone allows an attacker to short-circuit the delay.

Why it might not work

Obviously, if the phone is set to wipe itself, this technique wouldn't work, and the FBI would want to know one way or the other before starting. It ought to be a relatively straightforward matter for Apple to tell, as the phone must store that setting in some accessible form in order to know what to do when a bad PIN is entered. But given the company's reluctance to assist so far, getting it to help here may be impossible.

Update: It turns out that this bug was fixed in iOS 8.1, so it probably wouldn't work after all.

Acid and laserbeams

One risky solution that has been discussed extensively already is to use lasers and acid to remove the outer layers of the iPhone's processor and read the embedded ID. Once this embedded ID is known, it's no longer necessary to try to enter the PIN directly on the phone itself.
Instead, it would be possible to simply copy the encrypted storage onto another computer and attempt all the PINs on that other computer.

The iPhone's lock-outs and wiping would be irrelevant in this scenario.

Why it might not work

The risk of this approach is not so much that it won't work, but that if even a tiny mistake is made, the hardware ID could be irreparably damaged, rendering the stored data permanently inaccessible.

Jailbreak the thing

The iPhone's built-in lockouts and wipes are unavoidable if running the iPhone's operating system... assuming that the iPhone works as it is supposed to.
It might not.

The code that the iPhone runs to enter DFU mode, load a RAM image, verify its signature, and then boot the image is small, and it should be simple and quite bullet-proof. However, it's not impossible that this code, which Apple calls SecureROM, contains bugs.
Sometimes these bugs can enable DFU mode (or the closely related recovery mode) to run an image without verifying its signature first. There are perhaps six known historic flaws in SecureROM that have enabled jailbreakers to bypass the signature check in one way or another.

These bugs are particularly appealing to jailbreakers, because SecureROM is baked into hardware, and so the bugs cannot be fixed once they are in the wild: Apple has to update the hardware to address them.

Exploitable bugs have been found in the way SecureROM loads the image, verifies the signature, and communicates over USB, and in all cases they have enabled devices to boot unsigned firmware. If a seventh exploitable SecureROM flaw could be found, this would enable jailbreakers to run their own custom firmwares on iPhones.

That would give the FBI the power to do what it needs to do: it could build the custom firmware it needs and use it to brute force attack the PIN.
Some critics of the government's demand have suggested that a government agency—probably the NSA—might already know of such a flaw, arguing that the case against Apple is driven not by a genuine need to have Apple sign a custom firmware but merely by a desire for cover for the agency's own jailbreak.

Why it might not work

Of course, the difficulty with this approach is that it's also possible that no such flaw exists, or that even if it does exist, nobody knows what it is.

Given the desirability of this kind of flaw—it can't be fixed through any operating system update—jailbreakers have certainly looked, but thus far they've turned up empty-handed.

As such, this may all be hypothetical.

Ask Apple to sign an FBI-developed firmware

Apple doesn't want to develop a firmware to circumvent its own security measures, saying that this level of assistance goes far beyond what is required by law.

The FBI, however, can't develop its own firmware because of the digital signature requirements. But perhaps there is a middle ground.

Apple, when developing its own firmwares, does not require each test firmware to be signed.
Instead, the company has development handsets that have the signature restriction removed from SecureROM and hence can run any firmware.

These are in many ways equivalent to the development units that game console manufacturers sell to game developers; they allow the developers to load their games to test and debug them without requiring those games to be signed and validated by the console manufacturer each time. Unlike the consoles, Apple doesn't distribute these development phones.
It might not even be able to, as it may not have the necessary FCC certification.

But they nonetheless exist.
In principle, Apple could lend one of these devices to the FBI so that the FBI would then be responsible for developing the firmware.

This might require the FBI to do the work on-site at Cupertino or within a Faraday cage to avoid FCC compliance concerns, but one way or another this should be possible. Once it had a finished product, Apple could sign it.
If the company was truly concerned with how the signed firmware might be used, it might even run the firmware itself and discard it after use. This would relieve Apple of the burden of creating the firmware, though it could be argued that it would also weaken Apple's first amendment argument against being compelled to produce one. While source code is undoubtedly expressive and protected by the first amendment, it seems harder to argue that a purely mechanical transformation such as stamping a file with a digital signature should be covered by the same protection.

Why it might not work

Apple may very well persist in saying no, and the courts may agree.

Stop the phone from wiping its encryption keys

The way the iPhone handles encryption keys is a little more complex than outlined above.

The encryption key derived from the PIN combined with the hardware ID isn't used to encrypt the entire disk directly.
If it were, changing the PIN would force the entire disk to be re-encrypted, which would be tiresome to say the least.
Instead, this derived key is used to encrypt a second key, and that key is used to encrypt the disk.

That way, changing the PIN only requires re-encryption of the second key.
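The two-level scheme can be sketched in a few lines. PBKDF2 and the XOR "wrap" below are illustrative stand-ins (Apple's actual derivation is entangled with the hardware AES engine and differs in detail), but they show why a PIN change only touches the wrapped key, never the disk:

```python
# Two-level key scheme sketch: the PIN plus the hardware ID derive a
# key-encrypting key, which wraps the real disk key. PBKDF2 and the
# XOR wrap are illustrative stand-ins for Apple's hardware-entangled KDF.
import hashlib
import os

HARDWARE_UID = os.urandom(32)  # stand-in for the CPU-embedded ID
disk_key = os.urandom(32)      # the key that actually encrypts the disk

def wrap(key: bytes, pin: str) -> bytes:
    kek = hashlib.pbkdf2_hmac("sha256", pin.encode(), HARDWARE_UID, 100_000)
    return bytes(a ^ b for a, b in zip(key, kek))  # toy XOR "wrap"

def unwrap(wrapped: bytes, pin: str) -> bytes:
    return wrap(wrapped, pin)  # XOR is its own inverse

wrapped = wrap(disk_key, "1234")
assert unwrap(wrapped, "1234") == disk_key

# Changing the PIN re-wraps 32 bytes; the disk key (and hence the disk
# contents) never change:
rewrapped = wrap(disk_key, "9999")
assert unwrap(rewrapped, "9999") == disk_key
```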

The second key is itself stored on the iPhone's flash storage. Normal flash storage is awkward to securely erase, due to wear leveling.

Flash supports only a limited number of write cycles, so to preserve its life, flash controllers spread the writes across all the chips. Overwriting a file on a flash drive may not actually overwrite the file but instead write the new file contents to a different location on the flash drive, potentially leaving the old file's contents unaltered. This makes it a bad place to store encryption keys that you want to be able to delete.

Apple's solution to this problem is to set aside a special area of flash that is handled specially.

This area isn't part of the normal filesystem and doesn't undergo wear leveling at all.
If it's erased, it really is erased, with no possibility of recovery.

This special section is called effaceable storage. When the iPhone wipes itself, whether due to bad PIN entry, a remote wipe request for a managed phone, or the built-in reset feature, this effaceable storage area is the one that gets obliterated.

Apart from that special handling, however, the effaceable area should be readable and writeable just like regular flash memory, which means that in principle a backup can be made and safely squirreled away.
If the iPhone then overwrites it after 10 bad PIN attempts, it can be restored from this backup, and that should enable a further 10 attempts.
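A simulation of that backup-and-restore loop, with the wipe counter assumed to reset along with the restored flash image (a simplification of the real hardware, and all names here are illustrative):

```python
# Simulate the desolder-backup-restore attack: image the effaceable
# storage once, then restore it whenever the phone wipes it, resetting
# the 10-attempt budget. Pure simulation; no real hardware behavior.
class SimulatedPhone:
    WIPE_AFTER = 10

    def __init__(self, pin):
        self._pin = pin
        self.effaceable = b"wrapped-key-material"  # stand-in flash region
        self.bad_attempts = 0

    def try_pin(self, guess):
        if self.effaceable is None:
            raise RuntimeError("keys wiped; nothing to unlock")
        if guess == self._pin:
            return True
        self.bad_attempts += 1
        if self.bad_attempts >= self.WIPE_AFTER:
            self.effaceable = None  # keys destroyed
        return False

phone = SimulatedPhone(pin="7391")
backup = phone.effaceable  # the one-time flash image

for guess in (f"{n:04d}" for n in range(10000)):
    if phone.effaceable is None:     # phone wiped itself, so...
        phone.effaceable = backup    # ...restore the flash image
        phone.bad_attempts = 0       # assumed reset along with the restore
    if phone.try_pin(guess):
        print("PIN found:", guess)   # PIN found: 7391
        break
```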

This process could be repeated indefinitely. This video from a Shenzhen market shows a similar process in action (we came at it via 9to5Mac after seeing a tweet in February and further discussion in March). Here, a 16GB iPhone has its flash chip desoldered and put into a flash reader.

A full image of that flash is made, including the all-important effaceable area.
In this case, the chip is then replaced with a 128GB chip, and the image restored, with all its encryption and data intact.

The process for the FBI's purposes would simply use the same chip every time. By restoring every time the encryption keys get destroyed, the FBI could—slowly—perform its brute force attack.
It would probably want to install a socket of some kind rather than continuously soldering and desoldering the chip, but the process should be mechanical and straightforward, albeit desperately boring.

A more exotic possibility would be to put some kind of intermediate controller between the iPhone and its flash chip that permitted read instructions but blocked all attempts to write or erase data. Hardware write blockers are already routinely used in other forensic applications to prevent modifications to SATA, SCSI, and USB disks that are being used as evidence, and there's no reason why such a thing could not be developed for the flash chips themselves.

This would allow the erase/restore process to be skipped, requiring the phone to be simply rebooted every few attempts.

Why it might not work

The working assumption is that the iPhone's processor has no non-volatile storage of its own, so it simply doesn't remember that it is supposed to have wiped its encryption keys and will offer another ten attempts if the effaceable storage area is restored (or that, even if it does remember, it doesn't care).

This is probably a reasonable assumption; the A6 processor used in the iPhone 5c doesn't appear to have any non-volatile storage of its own, and allowing restoration means that even a securely wiped phone can be straightforwardly restored from backup by connecting it to iTunes. For newer iPhones, that's less clear.

Apple implies that the A7 processor—the first to include the "Secure Enclave" function—does have some form of non-volatile storage of its own. On the A6 processor and below, the time delay between PIN attempts resets every time the phone is rebooted. On the A7 and above, it does not; the Secure Enclave somehow remembers that there has been some number of bad PIN attempts earlier on.

Apple also vaguely describes the Secure Enclave as having an "anti-replay counter" for data that is "saved to the file system." It's not impossible that this is also used to protect the effaceable storage in some way, allowing the phone to detect that it has been tampered with.

Full restoration is similarly still likely to be possible. There is also some risk in disassembling the phone, but if the process is reliable enough for Shenzhen markets, the FBI ought to be able to manage it. This last technique in particular should be quite robust.

There's no doubt that Apple's assistance would help a great deal; creating a firmware to allow brute-forcing the PIN would be faster and lower risk than any method that requires disassembly.

But if the FBI is truly desperate to bypass the PIN lockout and potential disk wipe, there do appear to be options available to it that don't require Apple to develop the firmware.
Response to the critical web-crypto-blasting DROWN vulnerability in SSL/TLS by cloud services has been much slower than the frantic patching witnessed when the Heartbleed vulnerability surfaced two years ago. DROWN (which stands for Decrypting RSA with Obsolete and Weakened eNcryption) is a serious design flaw that affects network services that rely on SSL and TLS.

An attacker can exploit support for the obsolete SSLv2 protocol – which modern clients have phased out but is still supported by many servers – to decrypt TLS connections. Successful attacks would give hackers the ability to intercept encrypted traffic (eg, passwords, credit card numbers, sensitive corporate data, etc) as well as impersonate a trusted cloud provider and modify traffic to and from the service using a man-in-the-middle attack. The Heartbleed bug meant attackers could read the memory of the systems protected by the vulnerable versions of OpenSSL. Pretty much anything in memory – SSL private keys, user passwords, and more – was open to thieves preying on unpatched systems as a result of the flaw, which emerged in April 2014. After one week, the number of cloud services vulnerable to Heartbleed fell from 1,173 to 86 (or a 92.7 per cent reduction).

By comparison, susceptibility to DROWN has only fallen from 653 to 620 (5.1 per cent) in the week since it burst onto the scene on Tuesday 1 March, according to figures from Skyhigh Networks' Cloud Security Labs. Skyhigh reckons 98.9 per cent of enterprises use at least one vulnerable service.
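The patch-rate gap, as arithmetic on the Skyhigh figures quoted above:

```python
# Percentage reduction in vulnerable cloud services one week after each
# bug's disclosure, using the Skyhigh Networks figures from the text.
def reduction(before, after):
    return round(100 * (before - after) / before, 1)

print(reduction(1173, 86))  # Heartbleed: 92.7 (per cent)
print(reduction(653, 620))  # DROWN: 5.1 (per cent)
```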

The average organisation uses 56 vulnerable cloud services, it reports. One-third of all HTTPS websites were potentially vulnerable to the DROWN attack at the time it was disclosed last week. Other experts, such as iSight Partners, reckon that DROWN is nowhere near as easy to exploit as Heartbleed because, in the case of DROWN, an attacker already needs to be perched on a target network before feeding vulnerable systems attack traffic, among other factors. Heartbleed, by contrast, was much easier to exploit.

Even so, the DROWN vulnerability is a good candidate for prompt triage, particularly by the likes of cloud services, which market themselves as an agile and flexible enterprise computing resource. “Companies are adopting cloud services in record numbers, most of which have gone a long way to prove their worth and security to even the most cloud-sceptic industries such as financial services,” said Nigel Hawthorn, EMEA Marketing Director at Skyhigh Networks. “The cloud service industry acted fantastically in response to Heartbleed, and we need to see the same kind of response to DROWN today, which we haven’t to date.” Skyhigh Networks' technology allows organisations to monitor employee cloud use and lock down banned apps. ®
Encryption, bug bounties and threat intel dominated the mindshare of the cybersecurity hive mind at RSAC last week. SAN FRANCISCO, CALIF. – RSA Conference 2016 -- With one of the biggest crowds ever to hit Moscone for RSA Conference USA, the gathering last week of 40,000 security professionals and vendors was like a convergence of water cooler chatterboxes from across the entire infosec world. Whether at scheduled talks, in bustling hallways or cocktail hours at the bars nearby, a number of definite themes wound their way through discussions all week. Here's what kept the conversations flowing.

Encryption Backdoors

The topic of government-urged encryption backdoors was already promising to be a big topic at the show, but the FBI-Apple bombshell ensured that this was THE topic of RSAC 2016.

According to Bromium, a survey taken of attendees showed that 86% of respondents sided with Apple in this debate, so much of the chatter was 100 different ways of explaining the inadvisability of the FBI's mandate. One of the most colorful quotes came from Michael Chertoff, former head of U.S.

Department of Homeland Security: "Once you’ve created code that’s potentially compromising, it’s like a bacteriological weapon. You’re always afraid of it getting out of the lab.”

Bug Bounties

In spite of the dark cast the backdoor issue set over the Federal government's relations with the cybersecurity industry, there was plenty of evidence of positive public-private cooperation.

Exhibit A: the "Hack the Pentagon" bug bounty program announced by the DoD in conjunction with Defense Secretary Ash Carter's appearance at the show. While bug bounty programs are hardly a new thing, the announcement of the program shows how completely these programs have become mainstream best practices. "There are lots of companies who do this,” Carter said in a town hall session with Ted Schlein, general partner at Kleiner Perkins Caufield & Byers. “It’s a way of kind of crowdsourcing the expertise and having access to good people and not bad people. You’d much rather find vulnerabilities in your networks that way than in the other way, with a compromise or shutdown.”

Threat Intel

There was no lack of vendors hyping new threat intelligence capabilities at this show, but as with many hot security product categories threat intel is suffering a bit as the victim of its own success.

The marketing machine is in full gear now pimping out threat intel capabilities for any feature even remotely looking like it; one vendor lamented to me off the record, "most threat intel these days is not even close to being real intelligence." In short, threat intel demonstrated at the show that it was reaching the peak of the classic hype cycle pattern. RSAC attendees had some great evidence of that hanging around their necks. Just a month after the very public dismantling of Norse Corp., the show's badge holder necklaces still bore the self-proclaimed threat intelligence vendor's logos.

But as Robert Lee, CEO of Dragos Security, capably explained over a month ago in the Norse fallout, this kind of failure (and additional disillusionment from customers led astray by the marketing hype) is not necessarily a knock on the credibility of threat intel as a whole.
It is just a matter of people playing fast and loose with the product category itself. "Simply put, they were interpreting data as intelligence," Lee said. "There is a huge difference between data, information, and intelligence.
So while they may have billed themselves as significant players in the threat intelligence community they were never really accepted by the community, or participating in it, by most leading analysts and companies.

Therefore, they aren’t a bellwether of the threat intelligence industry."

Ericka Chickowski specializes in coverage of information technology and business innovation. She has focused on information security for the better part of a decade and regularly writes about the security industry as a contributor to Dark Reading.
Anti-virus engine easily disabled. Intel Security has fixed a flaw that made it possible to shut down its McAfee Enterprise virus engine, thereby allowing the installation of malware and pirated software. The hotfix addresses an issue that Agazzini Maurizio, senior security advisor at Rome-based consultancy Mediaservice, first warned about 15 months ago. McAfee acknowledged the bypass in December 2014 and released the patch on 25 February 2016.

The flaw requires that users or attackers first gain local administrator privileges, a level of access that many organisations lazily afford staff. "McAfee VirusScan Enterprise has a feature to protect the scan engine from local Windows administrators [and] a management password is needed to disable it," Maurizio says. "From our understanding this feature is implemented insecurely: the McAfee VirusScan Console checks the password and requests the engine to unlock the safe registry keys. "No checks are done by the engine itself, so anyone can directly request the engine to stop without knowing the correct management password." All versions are affected. Attackers can either use Maurizio's tool or alter registry keys before opening the McAfee console and choosing 'no password'.

Removing administrator rights from user accounts goes a long way to helping organisational security postures. In May, Manchester-based security firm Avecto reckoned 97 percent of critical Microsoft vulnerabilities released in 2014 would be mitigated by removing admin rights. ®
Patch for Piledriver chips emitted this week to kill off potentially exploitable glitches Analysis AMD will tomorrow release new processor microcode to crush an esoteric bug that can be potentially exploited by virtual machine guests to hijack host servers. Machines using AMD Piledriver CPUs, such as the Opteron 6300 family of server chips, and specifically CPU microcode versions 0x6000832 and 0x6000836 – the latest available – are vulnerable to the flaw. When triggered, the bug can glitch a processor core to execute data as software, which crashes the currently running process.
It is possible for a non-root user in a virtual machine to exploit this defect to upset the host system, or trick the host kernel into executing malicious code controlled by the user. In other words, it is possible on some AMD-powered servers for a normal user in a guest virtual machine to escape to the underlying host and take over the whole shared server.

Although it is rather tricky to exploit – for one thing, it requires precise timing – AMD has a fix ready for operating system makers to distribute to affected users from this Monday. "AMD is aware of the potential issue of unprivileged code running in virtual machine guests on systems that make use of AMD Opteron 6200/6300," a spokesman told The Register. "Following a thorough investigation we have determined that only AMD driver patch 6000832 and patch 6000836 is affected by this issue.

AMD has developed a patch to fully resolve the issue and will be made available to our partners on Monday, 7 March, 2016." The bug is related to the delivery of non-maskable interrupts (NMI), and is specific to the aforementioned microcode versions. On Linux, /proc/cpuinfo will list the ID number of the microcode running on your processor cores if you want to check if your machine is vulnerable. Microcode – basically, your processor's firmware – can be installed by your motherboard's BIOS or your kernel during boot-up: for example, Debian GNU/Linux distributes the latest patches in the amd64-microcode and intel-microcode packages. For most affected people, a package update and reboot will ensure the fixed microcode is in place.
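A quick way to check for the affected revisions; the snippet parses a captured /proc/cpuinfo sample so it is self-contained, but on a real host you would read /proc/cpuinfo directly (where the field is typically tab-separated):

```shell
# Count cores reporting one of the two affected microcode revisions.
# A captured sample stands in for /proc/cpuinfo to keep this runnable.
cpuinfo_sample='model name : AMD Opteron(tm) Processor 6348
microcode : 0x6000836'

affected=$(printf '%s\n' "$cpuinfo_sample" \
  | awk '/^microcode/ { print $NF }' \
  | grep -cE '^0x(6000832|6000836)$')

echo "$affected"   # 1 -> a vulnerable revision is present
```

On a live system, replace the sample with `awk '/^microcode/ { print $NF }' /proc/cpuinfo`; anything other than zero matches means the machine needs the updated microcode package.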

The new microcode is also expected to appear on the AMD operating system team's website if you want to install it by hand. The microcode flaw has so far reared its head on systems using QEMU-KVM for virtualization, but it may affect other hypervisors. Bug hunt Due to Intel's dominance in the data center and virtualization world, this AMD-specific bug is not going to cause widespread chaos. However, it may give some people grief.

For one thing, the code gremlin managed to nip the OpenSUSE Linux project, which is cosponsored by AMD. An OpenSUSE build server that sports an Opteron 6348 processor with microcode version 0x06000836 hit a Linux kernel "oops" while running post-compilation tests on a fresh copy of GDB.

The debugger's bytes barely had time to settle on the hard drive before the tests were killed by the underlying kernel. Jiri Slaby, a SUSE Linux programmer, reported the weird crash to the Linux kernel mailing list at the end of February, and uploaded a bunch of diagnostic information for fellow developers to pore over. The crash was bizarre and, we're told, couldn't be repeated: while running tests on the newly built GDB debugger, the processor entered kernel mode and suddenly careered off course. Like a car hitting some black ice, it slid off the road and smashed into a tree.
It stopped executing the code it was supposed to be running, and instead slammed into a page of memory that had been wisely marked non-executable because it contained a critical kernel data structure rather than actual code.

That collision triggered a fault, which was flagged up as a potential kernel bug, and the running process was killed. At the time of the crash, the kernel was leaving an internal function called ttwu_stat(), which updates some of the scheduler's accounting statistics.
It is harmless.
Its instructions aren't that complicated: just some compares, additions, and stack popping and pushing.
It's called from the scheduler function try_to_wake_up(). Then a clue was spotted.

A scrap of torn red silk left at the GDB process's murder scene.

Before ttwu_stat() is called, the kernel function try_to_wake_up() does a bunch of stuff that includes this instruction:

mov $0x16e80,%r15

This moves the hexadecimal value 0x16e80 into the CPU core's R15 register.

What's a stack?

Think of a stack as a pile of cafeteria trays: you push a tray, or a value, onto the stack, and you pop a tray, or value, off the stack. If you push 1, then 4, then 5, and finally 2 onto the stack in that order, you'll pop them off in the order of 2, 5, 4, and 1. If you push the contents of R15 and then, say, R14 onto the stack, when you next pop a value off, you'll get back R14's.
Soon after, ttwu_stat() is called, which pushes R15 and other registers onto the stack. At the end of ttwu_stat(), the registers, including R15, are pulled off the stack.

This means R15 should have the same value on leaving ttwu_stat() as it did entering the function – specifically, 0x16e80. Whatever the function did to R15, the register's original value should be restored on leaving ttwu_stat(). Let's look at the "oops" report generated by the kernel, which reveals the contents of all the registers at the time of the exception:

RIP: 0010:[<ffff88023fd40000>] [<ffff88023fd40000>] 0xffff88023fd40000
RSP: 0018:ffff8800bb2a7c50 EFLAGS: 00056686
RAX: 00000000bb37e180 RBX: 0000000000000001 RCX: 00000000ffffffff
RDX: 0000000000000000 RSI: ffff88023fdd6e80 RDI: ffff88023fdd6e80
RBP: ffffffff810a535a R08: 0000000000000000 R09: 0000000000000020
R10: 0000000001b52cb0 R11: 0000000000000293 R12: 0000000000000046
R13: ffff8800bb37e180 R14: 0000000000016e80 R15: ffff8800bb2a7c80

R15 should be 0x16e80 but it's actually 0xffff8800bb2a7c80 – and R14 is 0x16e80.

That's not right at all.
In ttwu_stat(), R15 is pushed onto the stack, then R14.

At the end of the function, R14 pulls its contents off the stack, and then R15 does the same.

But in this case, R14 has popped R15's value instead of its own.
Something's not right: the stack is in an unexpected state.

ttwu_stat()'s final instructions are:

pop %r14
pop %r15
pop %rbp
retq

That's supposed to restore the contents of the R14, R15, and RBP registers from the stack in that order, and then pull another value off the stack: the location in try_to_wake_up() that ttwu_stat() is supposed to return to.

The final retq instruction pops this return address and jumps to it. But, whoops, RBP contains 0xffffffff810a535a, which is the return address we want.

The retq instruction was expecting that value, but instead it'll get whatever's next on the stack. This confirms the stack is off by one 64-bit register, or eight bytes: the value for R15 was popped into R14, the real return address was popped into RBP, and a previously stacked value was popped by retq as a return address and jumped to.
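The mis-popped epilogue can be modelled with a list standing in for the stack (values listed top-first; the names are illustrative):

```python
# Toy model of the corrupted function epilogue: four pops in the order
# pop %r14; pop %r15; pop %rbp; retq. A skew of one slot (eight bytes
# on x86-64) shifts every pop, reproducing the oops report's mix-up.
def epilogue(stack, skew=0):
    s = stack[skew:]                   # skew models the glitched stack pointer
    r14, r15, rbp, ret_target = s[:4]
    return r14, r15, rbp, ret_target

# Stack as ttwu_stat() should see it, top-first:
stack = ["r14-value", 0x16e80, "saved-rbp", "return-address", "kernel-data-ptr"]

print(epilogue(stack))          # registers restored; retq jumps to return-address
print(epilogue(stack, skew=1))  # R14 gets 0x16e80, RBP gets the return address,
                                # and retq jumps at a kernel data pointer
```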

That explains why the kernel took off in a seemingly random direction – it tried using a pointer to data from the stack as a legit address to execute code. While ttwu_stat() was running, something else tampered with the stack pointer – the special register that keeps track of where in memory values are pushed to and popped from the stack.
Something invisible dropped the stack pointer an extra eight bytes.

A poltergeist spilling cafeteria trays of register values all over the floor in the middle of the night. You get the idea.

Hate to interrupt you, what's this got to do with QEMU-KVM?

The one thing you just should never do is blame your compiler or kernel or microprocessor when your code bombs. Your carefully crafted source, like a teenager's first poem to their first crush, is an extension of your essence, your passion to do things right. When it goes wrong, though, 99.999 per cent of the time it's because you suck, and any time spent blaming the toolchain or CPU is time not spent fixing your own work.

Well, here's one of those rare moments where you can blame someone else. In the background to all of this, Google security engineer Robert Święcki had privately disclosed to AMD engineers and the Linux kernel security team a strange kernel "oops" that, in the words of Linux kernel chief Linus Torvalds, "turned out to be a AMD microcode problem with NMI delivery." Święcki had reported a similar exception to Slaby's GDB crash: the kernel had tried to execute code in memory that was off limits.

That sort of fault will make the hairs on the back of a security engineer's neck stand up: if a hacker can control or even simply influence where the CPU ricochets off to in kernel mode, she can potentially hijack the whole computer.
It's the sort of bug that you have to get to the bottom of.

"I'm actually starting to suspect that it's an AMD microcode bug that we know very little about," Torvalds said, referring to Slaby's GDB prang. "There's apparently register corruption (the guess being from NMI handling, but virtualization was also involved) under some circumstances.

"We do have a reported 'oops' on the security list that looks totally different in the big picture, but shares the exact same 'corrupted stack pointer register state resulting in crazy instruction pointer, resulting in NX fault' behavior in the end."

In other words, Slaby had stumbled across an AMD microcode issue on production hardware, an issue that Święcki and the Linux kernel security team were already investigating.

NMIs are interrupts that absolutely must be handled by the kernel and cannot be ignored: you can't tell the chipset to postpone them, because they are typically generated by a hardware failure or a watchdog timer raising an alarm. Like almost all interrupts, they can potentially fire at any time. Perhaps an NMI delivery problem occurred during the doomed GDB test: a microcode bug meddling with the stack pointer in an innocuous kernel function during a process scheduling operation, which spiraled into a serious exception in the host kernel.

The other ingredient in this saga is virtualization: the OpenSUSE build server was compiling GDB and testing it in a QEMU-KVM virtual machine.

That means an unprivileged user in a guest virtual machine merely building software was able to trigger an "oops" in the host server's kernel.

That's not good. According to Święcki, the microcode glitch mostly interferes with the host kernel's stack pointer RSP, but it can also corrupt the contents of other registers – all of which can cause crashes, unpredictable behavior, or potentially be exploited to gain control of the system.

The Googler said he can, in "rare" conditions, commandeer the host machine's kernel from a virtual machine guest.

"The visible effects are, in about 80 per cent of cases, incorrect RSP [values] leading to bad returns into kernel data or [triggering] stack-protector faults," Święcki told the Linux kernel mailing list. "But there are also more elusive effects, like registers being cleared before use in indirect memory fetches.

"I can trigger it from within QEMU guests, as non-root, causing bad RIP [instruction pointer register] values in the host kernel. When testing, a couple of times out of maybe 30 'oopses', I was able to set it to user-space addresses mapped in the guest. It greatly depends on timing, but I think with some more effort and populating the kernel stack with guest addresses it'd be possible to create a more reliable QEMU guest-to-host ring-0 escape."

He added: "My proof-of-concept code [to trigger the bug] works only under QEMU-KVM. Xen and KVMtools don't appear to be affected by it because there's some missing functionality in them that my PoC makes use of. But another thread started on [the Linux kernel mailing list] made me think those hypervisors can also be affected, although that's just speculation."

AMD told The Register the bad microcode – 0x6000832 and 0x6000836 – affects the Opteron 6200 and 6300 series, although Święcki believes the problem extends to newer AMD FX and Opteron 3300 and 4300 chips using the Piledriver architecture and the buggy microcode. Specific details on how to trigger the bug have not been disclosed ahead of the updated microcode's release. Not that you need to know exactly how to exploit the vulnerability: you could be unlucky like the SUSE team and encounter it randomly on a live system.

Finally, Święcki pointed to a similar bug VMware has worked around in its ESXi hypervisor software for AMD Opteron 6300 CPUs. "Under a highly specific and detailed set of internal timing conditions, the AMD Opteron Series 63xx processor may read an internal branch status register while the register is being updated, resulting in an incorrect RIP.

The incorrect RIP causes unpredictable program or system behavior, usually observed as a page fault," reads the VMware note, issued last year.

It's no secret that microprocessors – especially today's complex CPUs with billions of transistors – have bugs.
Intel and AMD both publish hundreds of pages of notes warning of subtle flaws in their designs. Most of the cockups are harmless to normal users, some are not; operating systems can work around the engineering blunders, or not bother at all for bugs that are benign.
Sometimes, though, new microcode is needed.

AMD last issued new microcode for its x86 processors in December 2014. ®

PS: Here's a video of David Kaplan, a hardware security architect at AMD, explaining how you'd typically go about testing and debugging a modern x86 CPU.
Researchers have devised an attack on Android and iOS devices that successfully steals cryptographic keys used to protect Bitcoin wallets, Apple Pay accounts, and other high-value assets. The exploit is what cryptographers call a non-invasive side-channel attack.
It works against the Elliptic Curve Digital Signature Algorithm, a crypto system that's widely used because it's faster than many other crypto systems.
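The physical leak that makes such key extraction possible comes from the fact that a naive scalar-multiplication loop does different work for the 1-bits and 0-bits of the secret key. The following toy Python sketch (plain integers stand in for elliptic-curve points, and the function name is mine) shows how an operation trace that distinguishes "double" from "double-and-add" hands the key straight to an observer:

```python
# Toy sketch of why naive double-and-add leaks: the per-bit operation
# sequence ("D" for double, "DA" for double-and-add) is exactly the key's
# binary expansion, so anything that distinguishes the two cases (time,
# power draw, EM emanations) reveals the key. Plain integer arithmetic
# stands in for elliptic-curve point operations here.

def scalar_mult_leaky(k, p):
    """Compute k*p by double-and-add, recording a per-bit operation trace."""
    acc = 0
    trace = []
    for bit in bin(k)[2:]:       # most-significant bit first
        acc = acc + acc          # double (happens for every bit)
        if bit == "1":
            acc = acc + p        # add (only for 1-bits: the leak)
            trace.append("DA")
        else:
            trace.append("D")
        # on real hardware, the "DA" iterations emit measurably different
        # signals than the plain "D" iterations
    return acc, trace

secret_key = 0b101101
result, trace = scalar_mult_leaky(secret_key, 7)
assert result == secret_key * 7

# An observer who can tell D from DA reads the key off the trace.
recovered = int("".join("1" if op == "DA" else "0" for op in trace), 2)
assert recovered == secret_key
print("trace:", trace, "-> recovered key:", bin(recovered))
```

Hardened implementations defend against this with constant-time, branch-free ladders (such as the Montgomery ladder) so that every key bit costs the same work regardless of its value.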

By placing a probe near a mobile device while it performs cryptographic operations, an attacker can measure enough electromagnetic emanations to fully extract the secret key that authenticates the end user's data or financial transactions.

The same can be done using an adapter connected to the USB charging cable.

"An attacker can non-invasively measure these physical effects using a $2 magnetic probe held in proximity to the device, or an improvised USB adapter connected to the phone's USB cable, and a USB sound card," the researchers wrote in a blog post published Wednesday. "Using such measurements, we were able to fully extract secret signing keys from OpenSSL and CoreBitcoin running on iOS devices. We also showed partial key leakage from OpenSSL running on Android and from iOS's CommonCrypto."

[Image caption: This version of the attack exploits an iPhone 4 through its charging port. Genkin et al.]

While the researchers stopped short of fully extracting the key on a Sony-Ericsson Xperia x10 phone running Android, they said they believe such an attack is feasible.

They also cited recently published research by a separate team that found a similar side-channel vulnerability in Android's version of the BouncyCastle crypto library.

Older versions of iOS—specifically, 7.1.2 through 8.3—appear to be vulnerable.

The current 9.x version does not appear to be vulnerable because it added defenses against side-channel attacks. However, users of even current versions of iOS are still at risk when using vulnerable apps. One such vulnerable iOS app is CoreBitcoin, which is used to protect Bitcoin wallets on iPhones and iPads.

Because it uses its own cryptographic implementation rather than the iOS CommonCrypto library, it is vulnerable to the key-extraction attack.

CoreBitcoin developers told the researchers they plan to replace their current crypto library with one that's not susceptible to the attack.

The latest version of Bitcoin Core, meanwhile, is not vulnerable. Both the 1.0.x and 1.1.x versions of the OpenSSL code library are also susceptible, except when compiled for x86-64 processors with a non-default option selected, or when running a special option available for ARM CPUs.

The researchers said they reported the vulnerability to OpenSSL maintainers, and the maintainers said that hardware side-channel attacks aren't a part of their threat model.

The full research paper is here. The researchers—from Tel Aviv University, Technion and The University of Adelaide—recently published a separate paper that showed how to extract secret ECDH keys from a standard laptop even when it was locked in an adjacent room.

The attack is able to obtain the key in seconds.

A separate side-channel attack against RSA secret keys was devised in 2013. Unlike the one against mobile phones, it uses sound emitted by the electronics rather than electromagnetic emanation or power consumption.

At the moment, the attack would require a hacker to have physical possession of—or at least have a cable or probe in close physical proximity to—a vulnerable mobile device while it performed enough operations to measure "a few thousand ECDSA signatures." The length of time required would depend on the specific application being targeted.

The requirements might make the hack impractical in some settings, as long as device owners take care to closely inspect USB cables before plugging them in and look for probes near their phones. Still, averting attacks may sometimes prove difficult, since cables or probes could be disguised to conceal what they're doing.

And as the images in this post demonstrate, probes could be hidden on the underside of a table.
It's also possible that over time, researchers could devise ways to measure the leaks from further distances.

For that reason, while the vulnerabilities probably don't pose an immediate threat to end users, they should nonetheless be a top concern for developers.

The researchers have been working with the vendors of the specific software they analyzed to help them evaluate and mitigate the risk to their users.
Even a $35,000 government-ready flying machine can't escape hackers. Pricier means more secure, right? Not exactly.

A security researcher has found that many expensive police drones are vulnerable to hacks. At San Francisco's RSA conference this week, Nils Rodday showed off flaws in a $35,000 drone's radio connection, opening the device to hackers more than a mile away. According to Wired, Rodday was able to take full control of a government-ready quadcopter using only a laptop and a cheap radio chip.

But any hacker who can reverse-engineer the drone's flight software can take control of the device, sending new navigation commands and blocking those from the actual operator. Rodday, an IT security consultant with IBM Germany, conducted his drone research as a graduate student at the University of Twente in the Netherlands and University of Trento in Italy.

The results were published in a final project called "Exploring Security Vulnerabilities of Unmanned Aerial Vehicles." Sworn to secrecy by the drone manufacturer, Rodday did not disclose the specific machine he tested, or who sells it.

But he did reveal two serious security oversights: poorly encrypted Wi-Fi connecting the drone to its user, and an even less-secure radio protocol. The unprotected drone is an easy target for a man-in-the-middle attack conducted by someone who could be more than a mile away, sending commands to reroute or reprogram the flying machine.

"If you think as an attacker, someone could do this only for fun, or also to cause harm or to make a mess out of a daily surveillance procedure," Rodday told Wired. "You can send a command to the camera, to turn it to the wrong side so they don't receive the desired information…or you can steal the drone, all the equipment attached to it, and its information."

The unidentified manufacturer has been alerted to the security flaws, and intends to fix the problem in its next model, the magazine said. Unfortunately, the same patch cannot be applied to those drones already flying around.

What's worse, Rodday's discovery is likely not confined to just one unmanned aerial vehicle; it could extend to commercial quadcopters, as well. In December 2013, hacker and security analyst Samy Kamkar built SkyJack—a Parrot AR UAV equipped with a Raspberry Pi, engineered to autonomously seek out, hack, and wirelessly take over other drones within Wi-Fi distance.
Just set SSLv2 on fire

Security experts are split on how easy it is for hackers to exploit the high-profile DROWN vulnerability on insecure systems. One-third of all HTTPS websites are potentially vulnerable to the DROWN attack, which was disclosed on Tuesday.

DROWN (which stands for Decrypting RSA with Obsolete and Weakened eNcryption) is a serious design flaw that affects network services that rely on SSL and TLS.

An attacker can exploit support for the obsolete SSLv2 protocol – which modern clients have phased out but is still supported by many servers – to decrypt TLS connections. As previously reported, the code breaking involves sending lots and lots of probes to a server that supports SSLv2 and reuses the same private key across multiple protocols.

Threat intel consultancy iSIGHT Partners has concluded, following an initial analysis of the problem, that the vulnerability poses only a moderate threat to users. Steve Ward, senior director of marketing at iSIGHT Partners, commented: "iSIGHT Partners considers the DROWN Attack vulnerability (CVE-2016-0800) to be medium-risk and believe its exploitation poses only a moderate threat to users.
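The first step in any such probing is confirming that a server still speaks SSLv2 at all. Here's a minimal Python sketch that hand-builds a legacy SSLv2 ClientHello (modern TLS libraries refuse to emit one, so a raw socket is required) and checks for an SSLv2 ServerHello in reply. The helper names are mine, and this is only a diagnostic connectivity check, not the DROWN attack itself.

```python
# Build a legacy SSLv2 ClientHello by hand and probe a server with it.
# A server that answers with an SSLv2 SERVER-HELLO (message type 0x04)
# still speaks the obsolete protocol and is a candidate for DROWN.
import os
import socket

# SSL_CK_RC4_128_WITH_MD5, a classic SSLv2 cipher kind (3 bytes each)
SSLV2_CIPHER = b"\x01\x00\x80"

def build_sslv2_client_hello():
    """Return the raw bytes of a minimal SSLv2 CLIENT-HELLO record."""
    challenge = os.urandom(16)
    body = (
        b"\x01"                                   # message type: CLIENT-HELLO
        + b"\x00\x02"                             # client version: SSLv2
        + len(SSLV2_CIPHER).to_bytes(2, "big")    # cipher-specs length
        + b"\x00\x00"                             # session-id length: 0
        + len(challenge).to_bytes(2, "big")       # challenge length
        + SSLV2_CIPHER
        + challenge
    )
    # 2-byte SSLv2 record header: high bit set, 15-bit record length
    header = (0x8000 | len(body)).to_bytes(2, "big")
    return header + body

def server_speaks_sslv2(host, port=443, timeout=5.0):
    """True if the server replies with an SSLv2 SERVER-HELLO."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.sendall(build_sslv2_client_hello())
        reply = s.recv(3)   # record header (2 bytes) + message type
    return len(reply) == 3 and bool(reply[0] & 0x80) and reply[2] == 0x04

hello = build_sslv2_client_hello()
assert hello[0] & 0x80    # SSLv2 record header, high bit set
assert hello[2] == 0x01   # CLIENT-HELLO message type
```

Real DROWN exploitation goes much further, replaying thousands of captured TLS records through such connections as a decryption oracle; this sketch only answers the first question a sysadmin (or attacker) would ask.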

Although a large number of systems are reportedly vulnerable, exploitation requires notable manual effort and can only be used to obtain the private key for individual users." Widespread exploitation of the flaws by hackers is unlikely, according to iSIGHT Partners. "Since the attacker needs to be in a position to intercept traffic, we believe most victims will be targets of opportunity, not targeted.

Therefore, we anticipate limited actor interest and do not expect widespread exploitation."

Tod Beardsley, security research manager at Rapid7, the firm behind Metasploit, conceded that a potential hacker would already need to be on a targeted network. He nonetheless suggested it's too early to downplay the significance of the flaw.

"In the case of DROWN, the attacker does have to be in a privileged position on the network in order to eavesdrop on a TLS session, and also needs to have already conducted some reconnaissance on the server-side infrastructure, but this is the nature of padding oracle attacks. While it's not Heartbleed, DROWN techniques do demonstrate the weaknesses inherent in legacy cryptography standards."

Beardsley is holding fire on a definitive assessment, at least pending the availability of exploit code. "I'm looking forward to the release of exploit code so that system administrators can demonstrate for themselves the practical effects of DROWN.
In the meantime, sysadmins should ensure that all their cryptographic services have truly disabled the old and deeply flawed SSLv2 protocol, and consider the cost and effort associated with providing unique private keys for their individual servers," he advised. The DROWN project's website, put together by the academic researchers who discovered the flaw, is here.

The logo of the site is a cracked padlock that's about to be swamped by a wave, neatly encapsulating the lamentable situation in graphic form. A Naked Security blog post by Paul Ducklin, senior technologist at Sophos, on the newly discovered DROWN vulnerability provides an assessment of the flaw as well as remediation tips. ®
Flannel rag again shown to be essential kit for freeloaders

RSA 2016 Security analyst Jerry Gamblin has turned a hotel towel into a pass for RSA's San Francisco conference.

Gamblin says hotel towels often include RFID chips for inventory control, and that hitchhackers can use a Proxmark to easily copy the unique identification number stored in their RSA entry pass's NFC chip and embed it in another device. It means anyone can clone a US$2,000 pass to the sold-out conference to enter sessions and the exhibition floor.

"Near field communication wasn't written in general to be used in this manner – it was meant to be used in scanners in supermarkets or whatever," Gamblin (@jgamblin) told Vulture South. "I could put my RSA tag onto a blank MIFARE card that I have here with me and could scan it such that I can access everywhere."

[Photo caption: Jerry Gamblin. "Never leave home without your towel."]

Gamblin says he is not attempting to 'show up' RSA and won't be scanning in with his towel, even though it is possible. But other conference hitchhackers could easily do so.

There is no vulnerability within the MIFARE Ultralight C itself; Gamblin says it is the choice of technology that leaves it open to abuse. He will update his guide to cloning cards at the conclusion of the conference.

Image: Jerry Gamblin. ®