
Tag: Diffie-Hellman

A vulnerability in a JSON-based web encryption protocol could allow attackers to retrieve private keys.

Cryptography experts have previously advised developers against using JSON Web Encryption (JWE) in their applications, and this vulnerability il...
This year we found a new family of ransomware used in targeted attacks against organizations.

After penetrating an organization's network, the threat actors used the PsExec tool to install ransomware on all endpoints and servers in the organization.

The next interesting fact about this ransomware is that the threat actors decided to use the well-known Petya ransomware to encrypt user data.
Starting in April, Oracle will treat JAR files signed with the MD5 hashing algorithm as if they were unsigned, which means modern releases of the Java Runtime Environment (JRE) will block those JAR files from running.

The shift is long overdue, as MD5’s security weaknesses are well-known, and more secure algorithms should be used for code signing instead. “Starting with the April Critical Patch Update releases, planned for April 18, 2017, all JRE versions will treat JARs signed with MD5 as unsigned,” Oracle wrote on its Java download page. Code-signing JAR files bundled with Java libraries and applets is a basic security practice, as it lets users know who actually wrote the code and that it has not been altered or corrupted since it was written.
In recent years, Oracle has been beefing up Java’s security model to better protect systems from external exploits and to allow only signed code to execute certain types of operations.

An application without a valid certificate is potentially unsafe. Newer versions of Java now require all JAR files to be signed with a valid code-signing key, and starting with Java 7 Update 51, unsigned or self-signed applications are blocked from running. Code signing is an important part of Java’s security architecture, but the MD5 hash weakens the very protections code signing is supposed to provide.

Dating back to 1992, MD5 is used for one-way hashing: taking an input and generating a unique cryptographic representation that can be treated as an identifying signature. No two inputs should result in the same hash, but since 2005, security researchers have repeatedly demonstrated in collision attacks that a file can be modified and still produce the same hash. While MD5 is no longer used for TLS/SSL—Microsoft deprecated MD5 for TLS in 2014—it remains prevalent in other security areas despite its weaknesses. With Oracle’s change, “affected MD-5 signed JAR files will no longer be considered trusted [by the Oracle JRE] and will not be able to run by default, such as in the case of Java applets, or Java Web Start applications,” Erik Costlow, an Oracle product manager with the Java Platform Group, wrote back in October. Developers need to verify that their JAR files have not been signed using MD5, and if they have, re-sign the affected files with a more modern algorithm.
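The one-way-hash idea, and why collisions matter for code signing, can be illustrated in a few lines (a sketch with made-up inputs, not the actual JAR-signing pipeline):

```python
import hashlib

# One-way hashing: any input maps to a fixed-size digest.
jar_bytes = b"example JAR contents"
digest = hashlib.md5(jar_bytes).hexdigest()   # 32 hex chars = 128 bits

# Code signing trusts that no two different files share a digest. MD5
# collisions break exactly that: an attacker can craft a second,
# malicious input with the same MD5 digest, so a signature over the
# digest no longer identifies a unique file.
tampered = b"example JAR contents (tampered)"
assert hashlib.md5(tampered).hexdigest() != digest  # holds here, but MD5 cannot guarantee it

# SHA-256 is the kind of modern digest signing should use instead.
sha = hashlib.sha256(jar_bytes).hexdigest()   # 64 hex chars = 256 bits
```

The point of the change is that the `assert` above is only an accident of these particular inputs; for MD5, collisions can be manufactured deliberately.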

Administrators need to check with vendors to ensure the files are not MD5-signed.
If the files are still signed with MD5 at the time of the switchover, users will see an error message saying the application could not run. Oracle has already informed vendors and source licensees of the change, Costlow said. In cases where the vendor is defunct or unwilling to re-sign the application, administrators can disable the process that checks for signed applications (which has serious security implications), set up custom Deployment Rule Sets for the application’s location, or maintain an Exception Site List, Costlow wrote. There was plenty of warning. Oracle stopped using MD5 with RSA as the default JAR signing option with Java SE 6, which was released in 2006.
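One way to spot-check whether a JAR was signed with MD5 is to inspect its META-INF signature files, whose digest headers name the algorithm. This is a rough sketch, not a substitute for `jarsigner -verify`; the function name and the synthetic JAR built below are illustrative:

```python
import io
import zipfile

def uses_md5_signature(jar_file) -> bool:
    """Heuristic check: scan a JAR's META-INF/*.SF signature files for
    MD5 digest headers (e.g. 'MD5-Digest' vs 'SHA-256-Digest')."""
    with zipfile.ZipFile(jar_file) as jar:
        for name in jar.namelist():
            if name.startswith("META-INF/") and name.endswith(".SF"):
                text = jar.read(name).decode("utf-8", errors="replace")
                if "MD5-Digest" in text:
                    return True
    return False

# Demo on a synthetic, MD5-signed JAR (signature block omitted for brevity).
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as jar:
    jar.writestr("META-INF/SIGNER.SF",
                 "Signature-Version: 1.0\nMD5-Digest-Manifest: 3bQexample\n")
    jar.writestr("com/example/Main.class", b"\xca\xfe\xba\xbe")
print(uses_md5_signature(buf))  # True
```

In practice the same scan would be pointed at the `.jar` files a vendor ships, flagging any that need re-signing.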

The MD5 deprecation was originally announced as part of the October 2016 Critical Patch Update and was scheduled to take effect this month as part of the January CPU.

To ensure developers and administrators were ready for the shift, the company has decided to delay the switch to the April Critical Patch Update, with Oracle Java SE 8u131 and corresponding releases of Oracle Java SE 7, Oracle Java SE 6, and Oracle JRockit R28. “The CA Security Council applauds Oracle for its decision to treat MD5 as unsigned. MD5 has been deprecated for years, making the move away from MD5 a critical upgrade for Java users,” said Jeremy Rowley, executive vice president of emerging markets at Digicert and a member of the CA Security Council. Deprecating MD5 has been a long time coming, but it isn’t enough. Oracle should also look at deprecating SHA-1, which has its own set of issues, and adopt SHA-2 for code signing.

That course of action would be in line with the current migration, as major browsers have pledged to stop supporting websites using SHA-1 certificates. With most organizations already involved with the SHA-1 migration for TLS/SSL, it makes sense for them to also shift the rest of their certificate and key signing infrastructure to SHA-2. The good news is that Oracle plans to disable SHA-1 in certificate chains anchored by roots included by default in Oracle’s JDK at the same time MD5 gets deprecated, according to the JRE and JDK Crypto Roadmap, which outlines technical instructions and information about ongoing cryptographic work for Oracle JRE and Oracle JDK.

The minimum key length for Diffie-Hellman will also be increased to 1,024 bits later in 2017. The road map also notes that Oracle recently added support for the SHA224withDSA and SHA256withDSA signature algorithms to Java 7, and disabled Elliptic Curve (EC) keys of less than 256 bits for SSL/TLS for Java 6, 7, and 8.
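Why the modulus size matters can be seen in a textbook Diffie-Hellman exchange over a deliberately tiny prime (toy parameters, not a real configuration):

```python
import random

# Textbook Diffie-Hellman over a tiny prime, to show why a 1,024-bit
# minimum matters: with a small modulus the shared secret falls to a
# brute-force discrete-log search.
p, g = 2039, 7  # toy values; real deployments use >= 1024-bit primes

a = random.randrange(2, p - 1)          # Alice's secret
b = random.randrange(2, p - 1)          # Bob's secret
A, B = pow(g, a, p), pow(g, b, p)       # public values exchanged in the clear

assert pow(B, a, p) == pow(A, b, p)     # both sides derive the same secret

# An eavesdropper who sees only (p, g, A) can recover an equivalent
# secret by exhaustive search -- instant for an 11-bit p, infeasible
# for a 1024-bit one.
recovered = next(x for x in range(p) if pow(g, x, p) == A)
assert pow(B, recovered, p) == pow(B, a, p)
```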
SGX needs I/O protection, Austrian boffins reckon

Intel's Software Guard Extensions started rolling out in Skylake processors in October 2015, but it's got an Achilles heel: insecure I/O like keyboards or USB provides a vector by which sensitive user data could be compromised. A couple of boffins from Austria's Graz University of Technology reckon they've cracked that problem, with an add-on that creates protected I/O paths on top of SGX.

Instead of the handful of I/O technologies directly protected by SGX – most of which have to do with DRM rather than user security – the technology proposed in Samuel Weiser and Mario Werner's Arxiv paper, SGXIO, is a “generic” trusted I/O that can be applied to things like keyboards, USB devices, screens and so on. And we're not talking about a merely esoteric technology that might soothe the fears of people running cloud apps on multi-tenant infrastructure. The Weiser/Werner proposal would create an SGX-supported trusted path all the way to a remote user's browser to protect (for example) an online banking session – and provide “attestation mechanisms to enable the bank as well as the user to verify that trusted paths are established and functional.”

The shortcoming SGXIO is trying to fix is that SGX's threat model considers everything outside itself a threat (which isn't a bad thing, in context). The usual approach for trusted paths is to use encrypted interfaces. The paper mentions the Protected Audio Video Path (PAVP) – but that's a DRM-specific example, and most I/O devices don't encrypt anything. Hence SGXIO, an attempt to add a generic trusted path to the SGX environment – and with that trusted path reaching to the end user environment, it's an attempt to protect an application from nasties like keyloggers that a miscreant might have installed on a victim's box.
The key architectural concepts in SGXIO are:

A trusted stack, which contains a security hypervisor, secure I/O drivers, and the trusted boot (TB) enclave; and
The virtual machine, hosting an untrusted operating system that runs secure user applications.

A user application communicating with the end user opens an encrypted channel to the secure I/O driver; this tunnels through the untrusted operating system and establishes secure communication with the “generic” user I/O device. The hypervisor binds user devices exclusively to secure I/O drivers, while I/O on unprotected devices passes directly through the hypervisor. The trusted path comprises both the encrypted user-app-to-driver communication and the exclusive driver-to-device binding. The TB enclave provides assurance of the trusted path setup by attesting the hypervisor. The paper illustrates this process with a diagram of SGXIO's trusted stack components.

An implementation wouldn't be seamless: the SGXIO paper devotes a fair chunk of copy to application design, enclave programming (fortunately something Intel provides resources for), driver design, and hypervisor choice. Application developers, for example, have to work out a key exchange mechanism (Diffie-Hellman is supported, and SGXIO offers its own lightweight key protocol). For hypervisors, the paper suggests the seL4 microkernel. Originally developed by Australia's NICTA and now handled by the CSIRO Data61 project, seL4 is a mathematically verified software kernel that was published as open source software in 2014.

SGXIO will get its first public airing at the CODASPY'17 conference in March, being held in Scottsdale, Arizona. ®
Whether quantum computing is 10 years away or is already here, it promises to make current encryption methods obsolete, so enterprises need to start laying the groundwork for new encryption methods. A quantum computer uses qubits instead of bits.

A bit can be a zero or a one, but a qubit can be both simultaneously, which is weird and hard to program, but once folks get it working, it has the potential to be significantly more powerful than any of today's computers. And it will make many of today's public key algorithms obsolete, said Kevin Curran, IEEE senior member and a professor at the University of Ulster, where he heads up the Ambient Intelligence Research Group. That includes today's most popular algorithms, he said.
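The “both simultaneously” intuition can be sketched with a toy state-vector model of a single qubit (real amplitudes only, for simplicity; this is an illustration, not a quantum simulator):

```python
import math

# Toy state-vector picture of one qubit: a pair of amplitudes over the
# basis states |0> and |1>. A classical bit is exactly (1, 0) or (0, 1);
# a qubit can be any unit vector in between, e.g. an equal superposition.
zero = (1.0, 0.0)
plus = (1 / math.sqrt(2), 1 / math.sqrt(2))   # "both at once"

def measure_probabilities(state):
    """Born rule with real amplitudes: probability = amplitude squared."""
    a0, a1 = state
    return a0 ** 2, a1 ** 2

p0, p1 = measure_probabilities(plus)
assert abs(p0 - 0.5) < 1e-9 and abs(p1 - 0.5) < 1e-9

# n qubits require 2**n amplitudes to describe -- the state space doubles
# with each added qubit, which is where the "significantly more powerful"
# intuition comes from.
amplitudes_needed = 2 ** 50
```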

For example, one common encryption method is based on the fact that it is extremely difficult to find the factors of very large numbers. "All of these problems can be solved on a powerful quantum computer," he said. He added that the problems mostly lie with public key systems, where the information is encoded and decoded by different people.
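The factoring asymmetry is easy to see at toy scale (illustrative numbers; real RSA moduli are 2048-bit):

```python
def trial_factor(n: int) -> int:
    """Return the smallest prime factor of n by trial division -- the
    naive classical attack, whose cost explodes with key size."""
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f
        f += 1
    return n  # n itself is prime

# A toy "RSA modulus": the product of two small primes. Multiplying them
# is trivial; recovering them from n is the hard direction -- classically.
# Shor's algorithm on a large enough quantum computer makes the hard
# direction easy too, which is what breaks these public key systems.
p, q = 1009, 2003
n = p * q
assert trial_factor(n) == 1009
```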
Symmetric algorithms, commonly used to encrypt local files and databases, don't have the same weaknesses and will survive a bit longer.

And increasing the length of the encryption keys will make those algorithms more secure. For public key encryption, such as that used for online communications and financial transactions, possible post-quantum alternatives include lattice-based, hash-based, and multivariate cryptographic algorithms as well as those that update today's Diffie-Hellman algorithm with supersingular elliptic curves. Google is already experimenting with some of these, Curran said. "Google is working with the Lattice-based public-key New Hope algorithm," he said. "They are deploying it in Chrome where a small fraction of connections between desktop Chrome and Google's servers will use a post-quantum key-exchange algorithm.

By adding a post-quantum algorithm on top of the existing one, they are able to experiment without affecting user security."

Flexibility is key

Some future-proof encryption algorithms have already been developed and are now being tested, but enterprises need to start checking now whether their systems, both those they have developed themselves and those provided by vendors, are flexible enough to allow old, obsolete algorithms to be easily replaced by new ones. Fortunately, according to Curran, there are already algorithms out there that seem to be workable replacements and that can run on existing computers. One company that is paying very close attention to this is Echoworx, which provides on-premises and cloud-based enterprise encryption software. Quantum computing will break all of today's commonly used encryption algorithms, said Sam Elsharif, vice president of software development at Echoworx.

Encryption that today's most sophisticated computer can break only after thousands of years of work will be beaten by a quantum computer in minutes. "This is obviously very troubling, since it's the core of our business," he said. "Echoworx will be in trouble -- but so will all of today's infrastructure." Since longer keys won't work for public key encryption and companies will need to replace their algorithms, the encryption technology needs to be modular. "It's called cryptographic agility," he said. "It means that you don't hard-wire encryption algorithms into your software, but make them more like pluggable modules.

This is how software should be designed, and this is what we do at Echoworx." Once post-quantum algorithms have been tested and become standards, Echoworx will be able to swap out the old ones for the new ones, he said. "You will still have a problem with old data," he said. "That data will either have to be destroyed or re-encrypted." Hardware-based encryption appliances will also need to be replaced if they can't be upgraded, he said.

Don't worry, it's still a long way off

How soon is this going to be needed? Not right away, some experts say. "The threat is real," said Elsharif. "The theory is proven, it's just a matter of engineering." But that engineering could take 10, 15 or 20 years, he said. Ulster University's Curran says that quantum computers need to have at least 500 qubits before they can start breaking current encryption, and the biggest current quantum computer has fewer than 15 qubits. "So there is no immediate worry," said Curran. However, research organizations should be working on the problem now, he said. "We may very well find that we do not actually need post-quantum cryptography but that risk is perhaps too large to take and if we do not conduct the research now, then we may lose years of critical research in this area." Meanwhile, there's no reason for an attacker to try to break encryption by brute force if they can simply hack into users' email accounts or use stolen credentials to access databases and key files. Companies still have lots of work to do on improving authentication, fixing bugs, and patching outdated, vulnerable software. "Many steps need to be taken to tighten up a company’s vulnerability footprint before even discussing encryption," said Justin Fier, director of cyber intelligence and analysis at Darktrace. In addition, when attackers are able to bypass encryption, they usually do it because the technology is not implemented correctly, or uses weak algorithms.
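Elsharif's "cryptographic agility" – algorithms as pluggable modules rather than hard-wired calls – can be sketched roughly as an algorithm registry. The registry and the toy XOR "cipher" below are purely illustrative, not Echoworx's design:

```python
import hashlib

# Minimal sketch of cryptographic agility: callers name an algorithm,
# and the registry maps that name to an implementation. Swapping in a
# post-quantum module later means registering it under a new name and
# changing configuration -- not rewriting every call site.
REGISTRY = {}

def register(name):
    def wrap(fn):
        REGISTRY[name] = fn
        return fn
    return wrap

@register("xor-demo")  # stand-in for one of today's algorithms
def xor_demo(key: bytes, data: bytes) -> bytes:
    """Toy keystream cipher for illustration only -- not secure."""
    pad = hashlib.sha256(key).digest()
    return bytes(b ^ pad[i % len(pad)] for i, b in enumerate(data))

def encrypt(algorithm: str, key: bytes, data: bytes) -> bytes:
    return REGISTRY[algorithm](key, data)

ct = encrypt("xor-demo", b"k", b"hello")
```

Because the toy cipher is a plain XOR, running `encrypt` again with the same key recovers the plaintext; the design point is that nothing outside the registry knows which algorithm is in use.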
"We still have not employed proper protection of our data using current cryptography, let alone a future form," he said. "Quantum computing is still very much theoretical," he added. "Additionally, even if a prototype had been designed, the sheer cost required to build and operate the device within the extreme temperature constraints would make it difficult to immediately enter the mainstream marketplace."

No, go right ahead and panic

Sure, the typical criminal gang might not have a quantum computer right now with which to break encryption. But that's not necessarily true for all attackers, said Mike Stute, chief scientist at security firm Masergy Communications. There have already been public announcements from China about breakthroughs in both quantum computing and in unbreakable quantum communications. "It's probably safe to say that nation states are not on the first generation of the technology but are probably on the second," he said. There are even some signs that nation states are able to break encryption, Stute added.
It might not be a fast process, but it's usable. "They have to focus on what they really want," he said. "And a bigger quantum computer will do more." That means that companies with particularly sensitive data might want to start looking at upgrading their encryption algorithms sooner rather than later. Plus, there are already some quantum computers on the market, he added. The first commercial quantum computer was released by D-Wave Systems more than a year ago, and Google was one of its first customers. "Most everyone was skeptical, but they seem to have passed the test," said Stute. The D-Wave computer claims to have 1,000 qubits -- and the company has announced a 2,000-qubit computer that will be coming out in 2017. But they're talking about a different kind of qubit, Stute said.
It has a very limited set of uses, he said, unlike a general-purpose quantum computer like IBM's, which would be well suited for cracking encryption. IBM's quantum computer has five qubits, and is commercially available. "You can pay them to do your calculations," he said. "I was able to do some testing, and it all seems on the up and up.
It's coming faster than we think." This story, "Prepare now for the quantum computing revolution in encryption," was originally published by CSO.
The Nmap Project just released the Holiday Edition of its open source cross-platform security scanner and network mapper, with several important improvements and bug fixes. New features in Nmap 7.40 include Npcap 0.78r5, for adding driver signing updates to work with Windows 10 Anniversary Update; faster brute-force authentication cracking; and new scripts for the Nmap Scripting Engine, the project’s maintainer Fyodor wrote on the Nmap mailing list. The de facto standard network mapping and port scanning tool, Nmap (Network Mapper) Security Scanner is widely used by IT and security administrators for network mapping, port-scanning, and network vulnerability testing. Administrators can run Nmap against the network to find open ports, determine what hosts are available on the network, identify what services those hosts are offering, and detect any network information leaked, such as the type of packet filters and firewalls in use. With a network map, administrators can spot unauthorized devices, ports that shouldn’t be open, or users running unauthorized services. The Nmap Scripting Engine (NSE) built into Nmap runs scripts to scan for well-known vulnerabilities in the network infrastructure. Nmap 7.40 includes 12 new NSE scripts, bringing the total to 552 scripts, and makes several changes to existing scripts and libraries. The ssl-google-cert-catalog script has been removed from NSE, since Google is no longer supporting the service. Known Diffie-Hellman parameters for haproxy, postfix, and IronPort have been added to the ssl-dh-params script in NSE. A bug in mysql.lua that caused authentication failures in mysql-brute and other scripts (affecting Nmap 7.52Beta2 and later) has been fixed, along with a crash issue in smb.lua when using smb-ls. The http.lua library now allows processing of HTTP responses with malformed header names.
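As a sketch of how an administrator might drive the ssl-dh-params check from a script (target host, port, and wrapper names are illustrative; scanme.nmap.org is Nmap's public test host):

```python
import shutil
import subprocess

def dh_scan_command(host: str, port: int = 443) -> list:
    """Build the nmap command line that runs the ssl-dh-params NSE script,
    which flags known-weak Diffie-Hellman parameters on a TLS service."""
    return ["nmap", "-p", str(port), "--script", "ssl-dh-params", host]

def scan_dh_params(host: str) -> str:
    """Run the scan and return nmap's text report (requires nmap on PATH)."""
    if shutil.which("nmap") is None:
        raise RuntimeError("nmap binary not found on PATH")
    result = subprocess.run(dh_scan_command(host),
                            capture_output=True, text=True, check=True)
    return result.stdout

cmd = dh_scan_command("scanme.nmap.org")
```

Only scan hosts you are authorized to test; scanme.nmap.org exists precisely for harmless experimentation.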
The script http-default-accounts, which tests default credentials used by a variety of web applications and devices against a target, adds 21 new fingerprints and changes the way output is displayed. The script http-form-brute adds content management system Drupal to the set of web applications it can brute force. The brute.lua script has been improved to use resources more efficiently. New scripts added to NSE include fingerprint-strings, to print the ASCII strings found in service fingerprints for unidentified services; ssl-cert-intaddr, to search for private addresses in TLS certificate fields and extensions; tso-enum, to enumerate usernames for TN3270 Telnet emulators; and tso-brute, which brute-forces passwords for TN3270 Telnet services. Nmap 7.40 adds 149 IPv4 operating system fingerprints, bringing the current total to 5,336 OS fingerprints. These fingerprints let Nmap identify the operating system installed on the machine being scanned, and the list includes a wide range of hardware from various vendors. The latest additions are Linux 4.6, macOS 10.12 Sierra, and NetBSD 7.0. The Amazon Fire OS was removed from the list of OS fingerprints because “it was basically indistinguishable from Android.” Nmap also maintains a list of service fingerprints so that it can easily detect different types of services running on the machine. Nmap now detects 1,161 protocols, including airserv-ng, domaintime, rhpp, and usher. The fingerprints help speed up overall scan times. Nmap 7.40 also adds service probe and UDP payload for Quick UDP Internet Connection, a secure transport developed by Google that is used with HTTP/2. A common issue when running a network scan is the time it takes to complete when some of the ports are unresponsive. A new option—defeat-icmp-ratelimit—will label unresponsive ports as “closed|filtered” in order to reduce overall UDP scan times. 
Those unresponsive ports may be open, but by marking the port this way, administrators know those ports require additional investigation. Source code and binary packages for Linux, Windows, and MacOS are available from the Nmap Project page.
Google's new Project Wycheproof will let software engineers look for previously known flaws in their open source cryptographic libraries. Google has released a set of tests that developers can use to check some open source cryptographic libraries for k...
Secure messaging app invites you to dive in and figure out if it's done anything wrong

Signal developer Open Whisper Systems has quietly posted some important documents for developer consumption: the specifications of its signature verification, key agreement, and secret key protocols. The posts are dated 20 November, although a Tweet from 4 November suggests the documentation was stealth-published earlier. The three specs cover the XEdDSA and VXEdDSA signature schemes; the Extended Triple Diffie-Hellman (X3DH) key agreement protocol; and what the outfit calls the “Double Ratchet” protocol, which Signal uses for message encryption. With the Signal Service API and Signal Protocol API already public, Whisper Systems is giving outsiders a deep view of the operations of the popular privacy-messaging system.

So what's in the box?

X3DH kicks things off, providing key agreement between Bob and Alice, even if only one is online at the time.
It uses the familiar public key infrastructure approach – Alice retrieves Bob's key from the server he published it to – and they use that information to establish communication and choose their shared private key. The document's in-short version of what happens is a three-phase process: Bob publishes his identity key and prekeys to a server; Alice fetches a "prekey bundle" from the server, and uses it to send an initial message to Bob; Bob receives and processes Alice's initial message. X3DH can use X25519 or X448 elliptic curves; for hashing it requires SHA-256 or SHA-512; and the document notes that the protocol “provides forward secrecy and cryptographic deniability”. The signatures X3DH uses are described in XEdDSA and VXEdDSA Signature Schemes.
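The three-phase flow can be sketched with small-integer Diffie-Hellman standing in for X25519 (toy parameters throughout; the real protocol also involves signed prekeys, one-time prekeys, and authentication of the keys themselves):

```python
import hashlib

# Toy sketch of X3DH's key mixing: three DH results are fed into a KDF,
# binding the session key to both parties' identity keys while an
# ephemeral key keeps it fresh. SHA-256 stands in for the real KDF.
p, g = 2039, 7  # toy group; Signal uses X25519/X448

def dh(priv: int, pub: int) -> int:
    return pow(pub, priv, p)

# Phase 1: Bob publishes his identity key IK_B and signed prekey SPK_B.
ik_b, spk_b = 11, 13
IK_B, SPK_B = pow(g, ik_b, p), pow(g, spk_b, p)

# Phase 2: Alice fetches the bundle and generates an ephemeral key EK_A.
ik_a, ek_a = 17, 19
IK_A, EK_A = pow(g, ik_a, p), pow(g, ek_a, p)

def x3dh_key(dh1: int, dh2: int, dh3: int) -> bytes:
    material = b"".join(x.to_bytes(2, "big") for x in (dh1, dh2, dh3))
    return hashlib.sha256(material).digest()

# Phase 3: both sides combine the same three DH computations.
alice_key = x3dh_key(dh(ik_a, SPK_B), dh(ek_a, IK_B), dh(ek_a, SPK_B))
bob_key   = x3dh_key(dh(spk_b, IK_A), dh(ik_b, EK_A), dh(spk_b, EK_A))
assert alice_key == bob_key
```

Note how the exchange works even though Bob only had to be online for phase 1, which is the asynchronous-messaging property the spec emphasises.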

The focus of the schemes is twofold: to ensure that the encrypted signatures look random to anybody sniffing them (a “verifiable random function”); and to make the schemes resistant to timing side-channel attacks. And we still haven't gotten to the users exchanging messages, because this only gets us as far as Bob and Alice setting up their message-passing.

The last part, protecting the messages, is the job of the Double Ratchet Algorithm.

Defeating the snoops

To make Signal resistant to decryption using a bunch of sniffed messages, the algorithm creates new keys for each message, and here Bob and Alice's public Diffie-Hellman values come back into play: “The parties also send Diffie-Hellman public values attached to their messages.

The results of Diffie-Hellman calculations are mixed into the derived keys so that later keys cannot be calculated from earlier ones.

These properties give some protection to earlier or later encrypted messages in case of a compromise of a party's keys.” A “KDF chain” (key derivation function) in Double Ratchet protects Bob and Alice's message keys “even if the adversary can control the KDF inputs”, the document says; because a key is never used twice, the messages get forward security; and as long as the system is generating enough entropy, Double Ratchet is also designed to be resistant to a snoop breaking into a server and recovering user messages. For non-cryptographers, the term “Double Ratchet” comes from how the protocol makes sure each message gets a new key: their Diffie-Hellman keys are “ratcheted” by each new message exchange; and so are the send/receive chains (the “symmetric-key ratchet”). The Register will watch with interest to see if any cryptanalysts can spot any gaps in the specs. ®
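The symmetric-key half of that ratchet can be sketched as an HMAC-based KDF chain (the constants and key sizes below are illustrative, not Signal's exact construction):

```python
import hmac
import hashlib

# Sketch of a symmetric-key ratchet: each step derives a one-time message
# key and advances the chain key through a one-way KDF, so a compromised
# chain key reveals nothing about earlier message keys.
def kdf_step(chain_key: bytes):
    message_key = hmac.new(chain_key, b"\x01", hashlib.sha256).digest()
    next_chain  = hmac.new(chain_key, b"\x02", hashlib.sha256).digest()
    return message_key, next_chain

chain = hashlib.sha256(b"shared root, e.g. from X3DH").digest()
keys = []
for _ in range(3):
    mk, chain = kdf_step(chain)   # ratchet forward; old chain key is discarded
    keys.append(mk)

assert len(set(keys)) == 3   # a fresh key per message, never reused
# In the full protocol, fresh Diffie-Hellman outputs are also mixed into
# the chain on each round trip -- the second "ratchet" of the name.
```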
Crypto connoisseurs find favourite chat app protocol up to scratch

Encrypted SMS and voice app Signal has passed a security audit with flying colours. As explained in a paper titled A Formal Security Analysis of the Signal Messaging Protocol (PDF) from the International Association for Cryptologic Research, Signal has no discernible flaws and offers a well-designed and compromise-resistant architecture. Signal uses a double ratchet algorithm that employs ephemeral key exchanges continually during each session, minimising the amount of text that can be decrypted at any point should a key be compromised. Signal was examined by a team of five researchers from the UK, Australia, and Canada, namely Oxford University information security Professor Cas Cremers and his PhDs Katriel Cohn-Gordon and Luke Garratt, Queensland University of Technology PhD Benjamin Dowling, and McMaster University Assistant Professor Douglas Stebila. The team examined Signal's threat models in the context of a fully adversarially-controlled network to see how it stands up, proving that the cryptographic core of Signal is secure.

As the authors write, Signal's security is such that even testing it is hard: Providing a security analysis for the Signal protocol is challenging for several reasons.

First, Signal employs a novel and unstudied design, involving over ten different types of keys and a complex update process which leads to various chains of related keys.
It therefore does not directly fit into existing analysis models.
Second, some of its claimed properties have only recently been formalised.

Finally, as a more mundane obstacle, the protocol is not substantially documented beyond its source code. They conclude that it is impossible to say if Signal meets its goals, as there are none stated, but say their analysis proves it satisfies security standards, adding "we have found no major flaws in its design, which is very encouraging". The team found some room for improvement, which they passed on to the app's developers: the protocol can be further strengthened at negligible cost by using "constructions in the spirit of the NAXOS (authenticated key exchange) protocol" [PDF] or by including a static-static Diffie-Hellman shared secret in the key derivation.

This would solve the risk of attackers compromising communications should the random number generator become fully predictable. The paper does, however, cover only a subsection of Signal's efforts, as it ignores non-Signal library components, plus application and implementation variations.
It should therefore be considered a substantial starting point for future analysis, the authors say, rather than the final word on Signal. ®
Writing secure code can be challenging, and implementing cryptography correctly in software is just plain hard.

Even experienced developers can get tripped up.

And if your goal is to swindle people quickly, not to wow them with the quality of your software, there are sure to be serious crypto mistakes in your code. Malware authors may provide significant lessons in how not to implement cryptography.
Such was the upshot of research by Check Point’s Yaniv Balmas and Ben Herzog at the recent Virus Bulletin conference in Denver. Malware authors may be more likely to insert crypto doozies in their code than developers working on legitimate software because they may not care as much about code quality or design, said Balmas and Herzog.

These criminals are focused on getting a product that does enough to satisfy their immediate requirements -- and no more. Here’s a look at the crypto mistakes of recent malware headliners -- and how to identify similar missteps in future malware scripts in hopes of cracking their malicious code.

Fuzzy-headed thinking on crypto

Mistakes are inevitable when you have only a “fuzzy understanding of the details” and a very tight time frame.

Analyzing the work of malware authors, Balmas and Herzog identified four “anti-patterns” in implementing encryption: voodoo programming, cargo cult technique, reinventing the square wheel, and bluffing.

Defenders who uncover hints of these categories of mistakes can break the encryption and hinder malware execution, or they can uncover its secrets via reverse-engineering. “These are basic misunderstandings of how to use cryptographic tools properly, which at best broadcast, ‘I have no idea what I am doing,’ and at worst, catastrophically cripple the malware such that it does not actually do what it set out to do,” Balmas and Herzog wrote. Professional or amateurish, these malware authors recognize that cryptography is increasingly essential to malware development -- in ransomware, to extort money from victims; in hiding communications from the infected device to the command-and-control server; in stealthily evading detection by security tools; and in signing malware as a trusted application.

But analysis shows that many appear to have trouble using encryption effectively, to their detriment.
Security analysts and network administrators who recognize the main classes of cryptographic errors have a big advantage in foiling ransom demands and thwarting infections. “Malware authors compose primitives based on gut feeling and superstition; jump with eagerness at opportunities to poorly reinvent the wheel; with equal eagerness, at opportunities to use ready-made code that perfectly solves the wrong problem,” Balmas and Herzog wrote in the conference whitepaper.

No idea what this does, but it looks cool

The authors behind the Zeus banking Trojan and Linux-based ransomware Linux.Encoder fell into the “voodoo programming” trap.

Their cryptographic implementation “betrays a deep confusion about the functionality being invoked -- what it is, what it does, and why it might fail,” the researchers said. In the case of Zeus, the authors chose popular stream cipher RC4 to encrypt all Zeus C&C traffic, but added another tweak.

They took the already encrypted stream and modified every byte by XORing it with the next byte. While RC4 has its own security weaknesses, the cipher is secure enough to do what Zeus needed, and the tweaked variant was “exactly as secure as plain, vanilla RC4,” the researchers noted. With Linux.Encoder, the authors seeded the rand() function with the current time stamp to generate its encryption key. When security researchers pointed out that it was really easy to break the ransomware keys, the authors tried generating an AES key by hashing the time stamp eight times. “Using a hash function eight consecutive times on an input shows a deep misunderstanding of what hash functions are,” the researchers wrote, noting that repeating the function does not yield a better hash.
In fact, it could result in “an odd creation that has weaker security properties.”

Copy and paste this code I found

The second class, “cargo cult programming,” refers to copying what looks like a working solution to the problem and pasting that block of code into the program without understanding why or how the code works.

Copying code isn’t a big deal -- if it was, Stack Overflow wouldn’t exist -- but if the programmer doesn’t know what is actually happening in that block, then the programmer doesn’t know whether the code is actually the correct solution. “[They] might end up using code that performs almost what they had in mind, but not quite,” the researchers wrote, noting that the authors behind the CryptoDefense ransomware fell in this trap. While most of CryptoDefense’s features -- RSA-2048 encryption, payment via bitcoin, communication with C&C servers via Tor -- were copied from the original CryptoLocker ransomware, the actual RSA implementation relied on a low-level cryptographic Windows API.

The actual code can be found in Microsoft Developer Network documentation, along with the explanation that when a flag is not set correctly, the application saves the private key in local key storage.

The CryptoDefense authors didn’t set that flag, so security researchers worked with victims to find the private key on their computers and decrypt the files. Because the malware authors didn’t thoroughly read the documentation, the defenders were able to save the day.

Cobble together the code

The typical software developer would gladly link to an open source project that handles a necessary task and save the time and effort of writing it from scratch. Unfortunately for malware authors, statically linking third-party code is not always an option, as the extra code can enlarge the resulting executable or make it easier for security tools to detect the malware.
Instead of linking, authors tend to improvise and cobble together something that works.

The groups behind the Nuclear exploit kit and the ransomware families Petya and DirCrypt attempted to “reinvent the square wheel,” and to everyone else’s benefit, they did so poorly. “If you believe anything in cryptography is completely straightforward to implement, either you don’t understand cryptography, or it doesn’t understand you,” the researchers wrote. The widely distributed Nuclear exploit kit obfuscates exploit delivery by using the Diffie-Hellman Key Exchange to encrypt the information passed to the payloads.
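The exchange Nuclear relies on can be sketched in miniature. The toy Python below uses tiny illustrative parameters (the prime 2**64 - 59 and generator 5, not Nuclear's actual values) to show both a normal exchange and why a degenerate secret exponent destroys it: anything raised to the power 0 is 1, so the "shared secret" becomes a public constant.

```python
# Toy finite-field Diffie-Hellman -- parameters far too small for real use.
p = 2**64 - 59  # a known 64-bit prime
g = 5

def dh_public(secret: int) -> int:
    return pow(g, secret, p)

def dh_shared(their_public: int, secret: int) -> int:
    return pow(their_public, secret, p)

# A normal exchange: both sides derive the same shared secret.
a, b = 123456789, 987654321
assert dh_shared(dh_public(b), a) == dh_shared(dh_public(a), b)

# With a secret exponent of 0, the public value and the shared secret
# both collapse to the constant 1 -- known to every eavesdropper.
assert dh_public(0) == 1
assert dh_shared(dh_public(b), 0) == 1
```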

The variables needed for the key exchange are sent to the server as a JSON file containing strings of hex digits, and the values are parsed and decoded using Base64. However, the researchers noted the implementation was “absurd,” as it set the secret key to 0, rendering the whole process useless. Petya’s authors implemented Salsa20, a lesser-known stream cipher that is considered to be more resistant to attacks than RC4, from scratch. However, three major flaws in Petya’s implementation mean the ransomware generates a 512-bit key containing 256 bits of constant and predictable values. “When your implementation of a cipher cuts its effective key size by half, and the required time for a break by 25 orders of magnitude, it’s time to go sit in the corner and think about what you’ve done,” the researchers said. DirCrypt didn’t fare much better, as its authors made the common mistake of reusing the same key when encrypting each file with RC4. Key reuse is an understandable mistake, especially for someone without elementary knowledge of how stream ciphers work and how they can fail. However, the group made a bigger mistake by appending the key to the encrypted file.
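The key-reuse pitfall is worth spelling out. In the sketch below (a textbook RC4 implementation, written here for illustration), encrypting two files under the same key gives both the same keystream; XORing the two ciphertexts cancels the keystream entirely and leaks the XOR of the plaintexts, no key required.

```python
def rc4(key: bytes, data: bytes) -> bytes:
    # Textbook RC4 (KSA + PRGA); encryption and decryption are the same op.
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = bytearray(), 0, 0
    for b in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(b ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

# Reusing one key across files means the keystream repeats: XORing two
# ciphertexts cancels it, leaking the XOR of the two plaintexts.
c1 = rc4(b"samekey", b"attack at dawn")
c2 = rc4(b"samekey", b"attack at dusk")
diff = bytes(x ^ y for x, y in zip(c1, c2))
assert diff == bytes(x ^ y for x, y in zip(b"attack at dawn", b"attack at dusk"))
```

Any analyst with one known plaintext (or, as with DirCrypt, the key itself appended to the file) can unravel every file encrypted under that key.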
Victims could directly access the key and use it to recover portions of locked files and, in some cases, recover entire files.

Fake it

The last category isn’t actually a coding mistake, but rather the malware author’s intentional social engineering shortcut. Ransomware authors, for example, don’t need to create the “impeccable cryptographic design and implementation” when it’s far easier to lie, Check Point’s Balmas and Herzog said.

Few victims are going to question the malware’s encryption claims when it comes to retrieving their data. This was the case with Nemucod, a JavaScript Trojan that recently transformed into ransomware, which claimed to encrypt files with RSA-1024 encryption when it was actually using a simple rotating XOR cipher. Nemucod also displays the ransom note before the files are actually encrypted, so if the victim’s antivirus or security tools were vigilant enough to prevent the malware from downloading the encryption components, the files remain intact. Similarly, the ransomware Poshcoder used AES instead of the RSA-2048 or RSA-4096 encryption listed on its ransom note. Poshcoder thus claims to use strong asymmetric encryption when AES is in fact a symmetric cipher, and its use of AES was vulnerable to a number of attacks. The group behind Nemucod believes “would-be adversaries will become light-headed and weak in the knees the moment they hear the phrase ‘RSA-1024,’” the researchers wrote, noting that Nemucod “sets the gold standard for minimal effort.” If the victims are scared enough, they may be less likely to take a closer look at the malware’s capabilities.

Take advantage of the mistakes

Cryptography is hard, and many software developers screw up when trying to implement encryption.
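A rotating XOR cipher of the kind Nemucod actually used is symmetric and falls immediately to known plaintext. A sketch (the key and file contents below are hypothetical):

```python
def rotating_xor(data: bytes, key: bytes) -> bytes:
    # XOR each byte with a repeating key; the same call also decrypts.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def recover_key(ciphertext: bytes, known_prefix: bytes, key_len: int) -> bytes:
    # If the first key_len plaintext bytes are known (e.g. a standard
    # file header), XORing them against the ciphertext yields the key.
    return bytes(c ^ p for c, p in zip(ciphertext[:key_len], known_prefix))

key = b"\x13\x37\x42"                     # hypothetical malware key
plain = b"%PDF-1.5 rest of the document"  # file headers are predictable
cipher = rotating_xor(plain, key)
assert recover_key(cipher, b"%PDF-1.5", len(key)) == key
assert rotating_xor(cipher, key) == plain
```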

Consider that the Open Web Application Security Project’s Top 10 list of web application vulnerabilities identifies two common cryptographic mistakes that developers make.
It’s no surprise the bad guys are struggling, too. “Evidence heavily suggests that most malware authors view those tools as wondrous black boxes of black magic, and figure they should be content if they can get the encryption code to run at all,” the researchers wrote. It’s tempting to pay the ransom or begin restoring from backup right away when files have been locked by ransomware or to assume that there is no way to break open the communications between an infected endpoint and the malware’s C&C servers.
Security analysts and IT administrators willing to take the time to look for these common mistakes in the offending malware may be able to change the outcome.
Someday, the bad guys will learn how to use encryption properly; until then, the defenders have the edge as they can get around broken implementations and coding errors.
The OpenSSL project has published a set of security advisories for vulnerabilities resolved in the OpenSSL library in December 2015, March, May, June, August and September 2016.

The following is a summary of these vulnerabilities (CVE, OpenSSL severity rating, and summary) and their status with respect to Juniper products:

CVE-2016-6309 (Critical): statem/statem.c in OpenSSL 1.1.0a does not consider memory-block movement after a realloc call, which allows remote attackers to cause a denial of service (use-after-free) or possibly execute arbitrary code via a crafted TLS session.

CVE-2016-0701 (High): The DH_check_pub_key function in crypto/dh/dh_check.c in OpenSSL 1.0.2 before 1.0.2f does not ensure that prime numbers are appropriate for Diffie-Hellman (DH) key exchange, which makes it easier for remote attackers to discover a private DH exponent by making multiple handshakes with a peer that chose an inappropriate number, as demonstrated by a number in an X9.42 file.

CVE-2016-0703 (High): The get_client_master_key function in s2_srvr.c in the SSLv2 implementation in OpenSSL before 0.9.8zf, 1.0.0 before 1.0.0r, 1.0.1 before 1.0.1m, and 1.0.2 before 1.0.2a accepts a nonzero CLIENT-MASTER-KEY CLEAR-KEY-LENGTH value for an arbitrary cipher, which allows man-in-the-middle attackers to determine the MASTER-KEY value and decrypt TLS ciphertext data by leveraging a Bleichenbacher RSA padding oracle, a related issue to CVE-2016-0800.

CVE-2016-0800 (High): The SSLv2 protocol, as used in OpenSSL before 1.0.1s and 1.0.2 before 1.0.2g and other products, requires a server to send a ServerVerify message before establishing that a client possesses certain plaintext RSA data, which makes it easier for remote attackers to decrypt TLS ciphertext data by leveraging a Bleichenbacher RSA padding oracle, aka a "DROWN" attack.
CVE-2016-2107 (High): The AES-NI implementation in OpenSSL before 1.0.1t and 1.0.2 before 1.0.2h does not consider memory allocation during a certain padding check, which allows remote attackers to obtain sensitive cleartext information via a padding-oracle attack against an AES CBC session. NOTE: this vulnerability exists because of an incorrect fix for CVE-2013-0169.

CVE-2016-2108 (High): The ASN.1 implementation in OpenSSL before 1.0.1o and 1.0.2 before 1.0.2c allows remote attackers to execute arbitrary code or cause a denial of service (buffer underflow and memory corruption) via an ANY field in crafted serialized data, aka the "negative zero" issue.

CVE-2016-6304 (High): Multiple memory leaks in t1_lib.c in OpenSSL before 1.0.1u, 1.0.2 before 1.0.2i, and 1.1.0 before 1.1.0a allow remote attackers to cause a denial of service (memory consumption) via large OCSP Status Request extensions.

CVE-2015-3193 (Moderate): The Montgomery squaring implementation in crypto/bn/asm/x86_64-mont5.pl in OpenSSL 1.0.2 before 1.0.2e on the x86_64 platform, as used by the BN_mod_exp function, mishandles carry propagation and produces incorrect output, which makes it easier for remote attackers to obtain sensitive private-key information via an attack against use of a (1) Diffie-Hellman (DH) or (2) Diffie-Hellman Ephemeral (DHE) ciphersuite.

CVE-2015-3194 (Moderate): crypto/rsa/rsa_ameth.c in OpenSSL 1.0.1 before 1.0.1q and 1.0.2 before 1.0.2e allows remote attackers to cause a denial of service (NULL pointer dereference and application crash) via an RSA PSS ASN.1 signature that lacks a mask generation function parameter.
CVE-2015-3195 (Moderate): The ASN1_TFLG_COMBINE implementation in crypto/asn1/tasn_dec.c in OpenSSL before 0.9.8zh, 1.0.0 before 1.0.0t, 1.0.1 before 1.0.1q, and 1.0.2 before 1.0.2e mishandles errors caused by malformed X509_ATTRIBUTE data, which allows remote attackers to obtain sensitive information from process memory by triggering a decoding failure in a PKCS#7 or CMS application.

CVE-2016-0704 (Moderate): An oracle protection mechanism in the get_client_master_key function in s2_srvr.c in the SSLv2 implementation in OpenSSL before 0.9.8zf, 1.0.0 before 1.0.0r, 1.0.1 before 1.0.1m, and 1.0.2 before 1.0.2a overwrites incorrect MASTER-KEY bytes during use of export cipher suites, which makes it easier for remote attackers to decrypt TLS ciphertext data by leveraging a Bleichenbacher RSA padding oracle, a related issue to CVE-2016-0800.

CVE-2016-6305 (Moderate): The ssl3_read_bytes function in record/rec_layer_s3.c in OpenSSL 1.1.0 before 1.1.0a allows remote attackers to cause a denial of service (infinite loop) by triggering a zero-length record in an SSL_peek call.

CVE-2016-7052 (Moderate): crypto/x509/x509_vfy.c in OpenSSL 1.0.2i allows remote attackers to cause a denial of service (NULL pointer dereference and application crash) by triggering a CRL operation.

CVE-2015-1794 (Low): The ssl3_get_key_exchange function in ssl/s3_clnt.c in OpenSSL 1.0.2 before 1.0.2e allows remote servers to cause a denial of service (segmentation fault) via a zero p value in an anonymous Diffie-Hellman (DH) ServerKeyExchange message.

CVE-2015-3196 (Low): ssl/s3_clnt.c in OpenSSL 1.0.0 before 1.0.0t, 1.0.1 before 1.0.1p, and 1.0.2 before 1.0.2d, when used for a multi-threaded client, writes the PSK identity hint to an incorrect data structure, which allows remote servers to cause a denial of service (race condition and double free) via a crafted ServerKeyExchange message.
CVE-2015-3197 (Low): ssl/s2_srvr.c in OpenSSL 1.0.1 before 1.0.1r and 1.0.2 before 1.0.2f does not prevent use of disabled ciphers, which makes it easier for man-in-the-middle attackers to defeat cryptographic protection mechanisms by performing computations on SSLv2 traffic, related to the get_client_master_key and get_client_hello functions.

CVE-2016-0702 (Low): The MOD_EXP_CTIME_COPY_FROM_PREBUF function in crypto/bn/bn_exp.c in OpenSSL 1.0.1 before 1.0.1s and 1.0.2 before 1.0.2g does not properly consider cache-bank access times during modular exponentiation, which makes it easier for local users to discover RSA keys by running a crafted application on the same Intel Sandy Bridge CPU core as a victim and leveraging cache-bank conflicts, aka a "CacheBleed" attack.

CVE-2016-0705 (Low): Double free vulnerability in the dsa_priv_decode function in crypto/dsa/dsa_ameth.c in OpenSSL 1.0.1 before 1.0.1s and 1.0.2 before 1.0.2g allows remote attackers to cause a denial of service (memory corruption) or possibly have unspecified other impact via a malformed DSA private key.

CVE-2016-0797 (Low): Multiple integer overflows in OpenSSL 1.0.1 before 1.0.1s and 1.0.2 before 1.0.2g allow remote attackers to cause a denial of service (heap memory corruption or NULL pointer dereference) or possibly have unspecified other impact via a long digit string that is mishandled by the (1) BN_dec2bn or (2) BN_hex2bn function, related to crypto/bn/bn.h and crypto/bn/bn_print.c.

CVE-2016-0798 (Low): Memory leak in the SRP_VBASE_get_by_user implementation in OpenSSL 1.0.1 before 1.0.1s and 1.0.2 before 1.0.2g allows remote attackers to cause a denial of service (memory consumption) by providing an invalid username in a connection attempt, related to apps/s_server.c and crypto/srp/srp_vfy.c.
CVE-2016-0799 (Low): The fmtstr function in crypto/bio/b_print.c in OpenSSL 1.0.1 before 1.0.1s and 1.0.2 before 1.0.2g improperly calculates string lengths, which allows remote attackers to cause a denial of service (overflow and out-of-bounds read) or possibly have unspecified other impact via a long string, as demonstrated by a large amount of ASN.1 data, a different vulnerability than CVE-2016-2842.

CVE-2016-2105 (Low): Integer overflow in the EVP_EncodeUpdate function in crypto/evp/encode.c in OpenSSL before 1.0.1t and 1.0.2 before 1.0.2h allows remote attackers to cause a denial of service (heap memory corruption) via a large amount of binary data.

CVE-2016-2106 (Low): Integer overflow in the EVP_EncryptUpdate function in crypto/evp/evp_enc.c in OpenSSL before 1.0.1t and 1.0.2 before 1.0.2h allows remote attackers to cause a denial of service (heap memory corruption) via a large amount of data.

CVE-2016-2109 (Low): The asn1_d2i_read_bio function in crypto/asn1/a_d2i_fp.c in the ASN.1 BIO implementation in OpenSSL before 1.0.1t and 1.0.2 before 1.0.2h allows remote attackers to cause a denial of service (memory consumption) via a short invalid encoding.

CVE-2016-2176 (Low): The X509_NAME_oneline function in crypto/x509/x509_obj.c in OpenSSL before 1.0.1t and 1.0.2 before 1.0.2h allows remote attackers to obtain sensitive information from process stack memory or cause a denial of service (buffer over-read) via crafted EBCDIC ASN.1 data.

CVE-2016-2182 (Low): The BN_bn2dec function in crypto/bn/bn_print.c in OpenSSL before 1.1.0 does not properly validate division results, which allows remote attackers to cause a denial of service (out-of-bounds write and application crash) or possibly have unspecified other impact via unknown vectors.
CVE-2016-6303 (Low): Integer overflow in the MDC2_Update function in crypto/mdc2/mdc2dgst.c in OpenSSL before 1.1.0 allows remote attackers to cause a denial of service (out-of-bounds write and application crash) or possibly have unspecified other impact via unknown vectors.

CVE-2016-2179 (Low): The DTLS implementation in OpenSSL before 1.1.0 does not properly restrict the lifetime of queue entries associated with unused out-of-order messages, which allows remote attackers to cause a denial of service (memory consumption) by maintaining many crafted DTLS sessions simultaneously, related to d1_lib.c, statem_dtls.c, statem_lib.c, and statem_srvr.c.

CVE-2016-2180 (Low): The TS_OBJ_print_bio function in crypto/ts/ts_lib.c in the X.509 Public Key Infrastructure Time-Stamp Protocol (TSP) implementation in OpenSSL through 1.0.2h allows remote attackers to cause a denial of service (out-of-bounds read and application crash) via a crafted time-stamp file that is mishandled by the "openssl ts" command.

CVE-2016-2181 (Low): The Anti-Replay feature in the DTLS implementation in OpenSSL before 1.1.0 mishandles early use of a new epoch number in conjunction with a large sequence number, which allows remote attackers to cause a denial of service (false-positive packet drops) via spoofed DTLS records, related to rec_layer_d1.c and ssl3_record.c.

CVE-2016-6302 (Low): The tls_decrypt_ticket function in ssl/t1_lib.c in OpenSSL before 1.1.0 does not consider the HMAC size during validation of the ticket length, which allows remote attackers to cause a denial of service via a ticket that is too short.

CVE-2016-2177 (Low): OpenSSL through 1.0.2h incorrectly uses pointer arithmetic for heap-buffer boundary checks, which might allow remote attackers to cause a denial of service (integer overflow and application crash) or possibly have unspecified other impact by leveraging unexpected malloc behavior, related to s3_srvr.c, ssl_sess.c, and t1_lib.c.
CVE-2016-2178 (Low): The dsa_sign_setup function in crypto/dsa/dsa_ossl.c in OpenSSL through 1.0.2h does not properly ensure the use of constant-time operations, which makes it easier for local users to discover a DSA private key via a timing side-channel attack.

CVE-2016-6306 (Low): The certificate parser in OpenSSL before 1.0.1u and 1.0.2 before 1.0.2i might allow remote attackers to cause a denial of service (out-of-bounds read) via crafted certificate operations, related to s3_clnt.c and s3_srvr.c.

CVE-2016-6307 (Low): The state-machine implementation in OpenSSL 1.1.0 before 1.1.0a allocates memory before checking for an excessive length, which might allow remote attackers to cause a denial of service (memory consumption) via crafted TLS messages, related to statem/statem.c and statem/statem_lib.c.

CVE-2016-6308 (Low): statem/statem_dtls.c in the DTLS implementation in OpenSSL 1.1.0 before 1.1.0a allocates memory before checking for an excessive length, which might allow remote attackers to cause a denial of service (memory consumption) via crafted DTLS messages.

CVE-2016-2176 affects only EBCDIC systems; no Juniper products are affected by that vulnerability.

Affected Products:

Junos OS: Junos OS is potentially affected by many of these issues. Junos OS is not affected by CVE-2016-0701, CVE-2016-0800, CVE-2016-2107, CVE-2016-2176, CVE-2016-2179, CVE-2016-2181, CVE-2016-6308, CVE-2016-6309 and CVE-2016-7052.

ScreenOS: ScreenOS is potentially affected by many of these issues.
ScreenOS is not affected by CVE-2015-1794, CVE-2015-3193, CVE-2015-3194, CVE-2015-3196, CVE-2015-3197, CVE-2016-0701, CVE-2016-2107, CVE-2016-2109, CVE-2016-2179, CVE-2016-2181, CVE-2016-6308, CVE-2016-6309 and CVE-2016-7052.

Junos Space: Junos Space is potentially affected by many of these issues. Junos Space is not affected by CVE-2015-1794, CVE-2016-0705, CVE-2016-0798, CVE-2016-2176, CVE-2015-3193, CVE-2015-3196, CVE-2016-0701, CVE-2016-2107, CVE-2016-6305, CVE-2016-6307, CVE-2016-6308, CVE-2016-6309 and CVE-2016-7052.

NSM: NSM is potentially affected by many of these issues. NSM is not affected by CVE-2015-1794, CVE-2016-0705, CVE-2016-0798, CVE-2016-2176, CVE-2015-3193, CVE-2015-3196, CVE-2016-0701, CVE-2016-2107, CVE-2016-6305, CVE-2016-6307, CVE-2016-6308, CVE-2016-6309 and CVE-2016-7052.

Juniper Secure Analytics (JSA, STRM): The STRM and JSA series are potentially affected by these issues.

CTPView/CTPOS: CTPView and CTPOS are potentially affected by many of these issues.

CTPView and CTPOS are not affected by CVE-2015-1794, CVE-2016-0705, CVE-2016-0798, CVE-2016-2176, CVE-2015-3193, CVE-2015-3196, CVE-2016-0701, CVE-2016-2107, CVE-2016-6305, CVE-2016-6307, CVE-2016-6308, CVE-2016-6309 and CVE-2016-7052.

Junos OS:

OpenSSL December 2015 advisory: CVE-2015-3193, CVE-2015-3194, CVE-2015-3195, CVE-2015-3196 and CVE-2015-1794 are resolved in 12.1X44-D60, 12.1X46-D45, 12.1X46-D51, 12.1X47-D35, 12.3R12, 12.3R13, 12.3X48-D25, 13.2X51-D40, 13.3R9, 14.1R7, 14.1X53-D35, 14.2R6, 15.1F5, 15.1R3, 15.1X49-D40, 15.1X53-D35, 16.1R1 and all subsequent releases (PR 1144520).

OpenSSL March 2016 advisory: CVE-2016-0705, CVE-2016-0798, CVE-2016-0797, CVE-2016-0799, CVE-2016-0702, CVE-2016-0703 and CVE-2016-0704 are resolved in 13.3R10*, 14.1R8, 14.1X53-D40*, 14.2R7, 15.1F5-S4, 15.1F6, 15.1R4, 15.1X49-D60, 15.1X53-D50, 16.1R1 and all subsequent releases (PR 1165523, 1165570).

OpenSSL May 2016 advisory: CVE-2016-2105, CVE-2016-2106, CVE-2016-2108, CVE-2016-2109, CVE-2016-2176, CVE-2016-2180 are resolved in 13.3R10*, 14.1R9*, 14.1X53-D40*, 14.2R8*, 15.1F5-S4, 15.1F6-S2, 15.1R4, 15.1X53-D50, 15.1X53-D60, 16.1R1 and all subsequent releases.

Fixes are in progress for other supported Junos releases (PR 1180391).

OpenSSL June to September 2016 advisories: CVE-2016-2177, CVE-2016-2178, CVE-2016-2179, CVE-2016-2180, CVE-2016-2181, CVE-2016-2182, CVE-2016-6302, CVE-2016-6303, CVE-2016-6304, CVE-2016-6305, CVE-2016-6306, CVE-2016-6307, CVE-2016-6308, CVE-2016-6309, CVE-2016-7052 are resolved in 13.3R10*, 14.1R9*, 14.2R8*, 15.1R5*, 16.1R4* and all subsequent releases.

Fixes are in progress for other supported Junos releases (PR 1216923).

CVE-2016-2108 was resolved when fixes for the OpenSSL advisories of June and July 2015 were implemented in Junos.

At that time, the OpenSSL version in Junos 13.3 and later releases was upgraded to 1.0.1p, which included a fix for this issue. Please see JSA10694 for solution releases.

Note: * - These Junos releases are pending release at the time of publication.

Note: While Junos is not affected or impacted by certain CVEs, fixes for those are included with the relevant OpenSSL version upgrade; hence these are stated as resolved.

ScreenOS:

CVE-2015-3195 is resolved in 6.3.0r22.

This issue is being tracked as PR 1144749. Please see JSA10733 for further details. The rest of the applicable issues from the OpenSSL advisories through May 2016 have been resolved in ScreenOS 6.3.0r23.

These issues are being tracked as PRs 1180504 and 1165796. Fixes for issues in the OpenSSL advisories from June to September are being tracked as PR 1217005.

Junos Space:

OpenSSL software has been upgraded to 1.0.1t in Junos Space 16.1R1 (pending release) to resolve all the issues included in the OpenSSL advisories through May 2016.

These issues are being tracked as PRs 1144741, 1158268, 1165853, 1180505, 1212590. Fixes for issues in the OpenSSL advisories from June to September are being tracked as PR 1216998.

NSM:

OpenSSL software has been upgraded to 1.0.2h in NSM 2012.2R13 to resolve all the issues included in the OpenSSL advisories through May 2016.

This upgrade is being tracked as PR 1198397. Fixes for issues in the OpenSSL advisories from June to September are being tracked as PR 1217003.

Juniper Secure Analytics (JSA, STRM):

OpenSSL December 2015 and March 2016 advisories: CVE-2015-3194, CVE-2015-3195, CVE-2015-3196, CVE-2015-1794, CVE-2015-3193, CVE-2016-0702, CVE-2016-0703, CVE-2016-0704, CVE-2016-0705, CVE-2016-0797, CVE-2016-0798, CVE-2016-0799 and CVE-2016-0800 have been resolved in 2014.6.R4. A resolution for the other issues is pending release. These issues are being tracked as PRs 1151137 and 1165861.

CTPView:

CVE-2015-3194 and CVE-2015-3195 have been resolved in 7.1R3, 7.2R1 and all subsequent releases (PR 1144746). CVE-2016-0702, CVE-2016-0703, CVE-2016-0704, CVE-2016-0797, CVE-2016-0799 and CVE-2016-0800 have been resolved in 7.1R3, 7.2R2, 7.3R1 and all subsequent releases (PR 1165849).

CTPOS:

CVE-2015-3194 and CVE-2015-3195 have been resolved in 7.2R1 and all subsequent releases (PR 1144964). CVE-2016-0702, CVE-2016-0703, CVE-2016-0704, CVE-2016-0797, CVE-2016-0799 and CVE-2016-0800 have been resolved in 7.0R7, 7.1R3, 7.2R2, 7.3R1 and all subsequent releases (PR 1165847).

Standard security best current practices (control plane firewall filters, edge filtering, access lists, etc.) may protect against any remote malicious attacks.

Junos OS: Since SSL is used for remote network configuration and management applications such as J-Web and the SSL Service for JUNOScript (XNM-SSL), viable workarounds for this issue in Junos may include:

Disabling J-Web.
Disabling the SSL service for JUNOScript and using only Netconf, which makes use of SSH, to make configuration changes.
Limiting access to J-Web and XNM-SSL to only trusted networks.

ScreenOS: Methods to reduce the risk associated with this issue include:

Limiting access to SSL ports to only trusted hosts.
Disabling web administrative services, which will mitigate the risk of this issue:

unset int eth0/0 manage web

Refer to KB6713 for enabling SSH on the firewall.
General Mitigation:

It is good security practice to limit the exploitable attack surface of critical infrastructure networking equipment. Use access lists or firewall filters to limit access to the HTTPS or SSL/TLS services only from trusted, administrative networks or hosts.
Researchers warn that many 1,024-bit keys used to secure communications on the internet today might be based on prime numbers that have been intentionally backdoored in an undetectable way. Many public-key cryptography algorithms that are used to secure web, email, VPN, SSH and other types of connections on the internet derive their strength from the mathematical difficulty of the discrete logarithm problem: discrete logarithms modulo large primes cannot be computed efficiently by any known classical method.

This is what makes cracking strong encryption computationally impractical. Most key-generation algorithms rely on prime parameters whose generation is supposed to be verifiably random. However, many parameters have been standardized and are being used in popular crypto algorithms like Diffie-Hellman and DSA without the seeds that were used to generate them ever being published.

That makes it impossible to tell whether, for example, the primes were intentionally "backdoored" -- selected to simplify the computation that would normally be required to crack the encryption.

Researchers from the University of Pennsylvania, INRIA, CNRS and Université de Lorraine recently published a paper in which they show why this lack of cryptographic transparency is problematic and could mean that many encryption keys used today are based on backdoored primes without anyone -- aside from those who created them -- knowing. To demonstrate this, the researchers created a backdoored 1,024-bit Diffie-Hellman prime and showed that solving the discrete log problem for it is several orders of magnitude easier than for a truly random one.

"Current estimates for 1,024-bit discrete log in general suggest that such computations are likely within range for an adversary who can afford hundreds of millions of dollars of special-purpose hardware," the researchers said in their paper. "In contrast, we were able to perform a discrete log computation on a specially trapdoored prime in two months on an academic cluster."

The problem is that for someone who doesn't know about the backdoor, demonstrating that a prime has been trapdoored in the first place would be nearly impossible: "The near universal failure of implementers to use verifiable prime generation practices means that use of weak primes would be undetectable in practice and unlikely to raise eyebrows."

This is conceptually similar to the backdoor found in the Dual_EC random number generator, which is believed to have been introduced by the U.S. National Security Agency. However, that backdoor was much easier to find and, unlike Diffie-Hellman or DSA, Dual_EC never received widespread adoption.
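One defense the researchers point to is verifiable ("nothing up my sleeve") parameter generation. The toy sketch below derives a prime deterministically from a published seed, so anyone can re-run the search and confirm the prime was not cherry-picked. Real standards such as FIPS 186 specify a more elaborate procedure, and the 256-bit size here is purely for demonstration.

```python
import hashlib

def is_probable_prime(n: int) -> bool:
    # Miller-Rabin with the first twelve primes as bases (demo strength).
    small = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)
    if n < 2:
        return False
    for p in small:
        if n % p == 0:
            return n == p
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for a in small:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def prime_from_seed(seed: bytes, bits: int = 256) -> int:
    # Expand the published seed into candidate bytes, then search upward
    # for the next prime: the whole process is reproducible by anyone.
    digest = b""
    while len(digest) * 8 < bits:
        digest += hashlib.sha256(seed + len(digest).to_bytes(4, "big")).digest()
    n = int.from_bytes(digest[: bits // 8], "big") | (1 << (bits - 1)) | 1
    while not is_probable_prime(n):
        n += 2
    return n
```

Because the search is deterministic, publishing the seed alongside the prime lets independent parties verify that no secret structure was smuggled in.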
Diffie-Hellman ephemeral (DHE) is slowly replacing RSA as the preferred key exchange algorithm in TLS due to its perfect forward secrecy property that's supposed to keep past communications secure even if the key is compromised in the future. However, the use of backdoored primes would defeat that security benefit. Furthermore, 1,024-bit keys are still widely used online, despite the U.S. National Institute of Standards and Technology recommending a transition to larger key sizes since 2010.

According to the SSL Pulse project, 22 percent of the internet's top 140,000 HTTPS-enabled websites use 1,024-bit keys. "Our results are yet another reminder that 1,024-bit primes should be considered insecure for the security of cryptosystems based on the hardness of discrete logarithms," the researchers said. "The discrete logarithm computation for our backdoored prime was only feasible because of the 1,024-bit size, and the most effective protection against any backdoor of this type has always been to use key sizes for which any computation is infeasible." The researchers estimate that performing similar computations for 2,048-bit keys, even with backdoored primes, would be 16 million times harder than for 1,024-bit keys and will remain infeasible for many years to come.
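The "16 million times harder" figure can be sanity-checked against the heuristic running time of the special number field sieve, which governs attacks on trapdoored primes. A rough back-of-envelope (ignoring lower-order factors, so an order-of-magnitude estimate only):

```python
import math

def snfs_cost(bits: int) -> float:
    # Heuristic SNFS cost exp(c * (ln N)**(1/3) * (ln ln N)**(2/3))
    # with c = (32/9)**(1/3); only ln N = bits * ln 2 is needed.
    c = (32 / 9) ** (1 / 3)
    ln_n = bits * math.log(2)
    return math.exp(c * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3))

ratio = snfs_cost(2048) / snfs_cost(1024)
# The ratio lands on the order of 10**7, in line with the quoted
# "16 million times harder" estimate for 2,048-bit keys.
```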

The immediate solution is to switch to 2,048-bit keys, but in the future all standardized primes should be published together with their seeds, the researchers said. Documents leaked in 2013 by former NSA contractor Edward Snowden suggested that the agency has the ability to decrypt a lot of VPN traffic. Last year, a group of researchers speculated that the reason for this was the widespread use in practice of a small number of fixed or standardized prime groups. "Performing precomputation for a single 1,024-bit group would allow passive eavesdropping on 18 percent of popular HTTPS sites, and a second group would allow decryption of traffic to 66 percent of IPsec VPNs and 26 percent of SSH servers," the researchers said in their paper at that time. "A close reading of published NSA leaks shows that the agency’s attacks on VPNs are consistent with having achieved such a break."