
Tag: Code Signing

APT Trends report Q3 2017

Beginning in the second quarter of 2017, Kaspersky's Global Research and Analysis Team (GReAT) began publishing summaries of the quarter's private threat intelligence reports to make the public aware of the research we have been conducting. This report serves as the next installment, focusing on important reports produced during Q3 of 2017.
Forgeries undermine the trust millions of people place in digital certificates.
Digital code signing certificates are more expensive than credit cards or weapons.

Introducing WhiteBear

As a part of our Kaspersky APT Intelligence Reporting subscription, customers received an update in mid-February 2017 on some interesting APT activity that we called WhiteBear.
It is a parallel project or second stage of the Skipper Turla cluster of activity documented in another private report. Like previous Turla activity, WhiteBear leverages compromised websites and hijacked satellite connections for command and control (C2) infrastructure.
A cross-platform, win32-based Mirai spreader and botnet is in the wild and has previously been discussed publicly. However, much of the reporting conflates the details, as if an entirely new IoT bot were spreading to and from Windows devices.

This is not the case.
Instead, an accurate assessment is that a previously active Windows botnet is spreading a Mirai bot variant.
Starting in April, Oracle will treat JAR files signed with the MD5 hashing algorithm as if they were unsigned, which means modern releases of the Java Runtime Environment (JRE) will block those JAR files from running.

The shift is long overdue, as MD5’s security weaknesses are well-known, and more secure algorithms should be used for code signing instead. “Starting with the April Critical Patch Update releases, planned for April 18, 2017, all JRE versions will treat JARs signed with MD5 as unsigned,” Oracle wrote on its Java download page. Code-signing JAR files bundled with Java libraries and applets is a basic security practice, as it lets users know who actually wrote the code and that it has not been altered or corrupted since it was written.
In recent years, Oracle has been beefing up Java’s security model to better protect systems from external exploits and to allow only signed code to execute certain types of operations.

An application without a valid certificate is potentially unsafe. Newer versions of Java now require all JAR files to be signed with a valid code-signing key, and starting with Java 7 Update 51, unsigned or self-signed applications are blocked from running. Code signing is an important part of Java’s security architecture, but the MD5 hash weakens the very protections code signing is supposed to provide.

Dating back to 1992, MD5 is used for one-way hashing: taking an input and generating a unique cryptographic representation that can be treated as an identifying signature. No two inputs should result in the same hash, but since 2005, security researchers have repeatedly demonstrated collision attacks, in which a file can be modified while still producing the same hash. While MD5 is no longer used for TLS/SSL—Microsoft deprecated MD5 for TLS in 2014—it remains prevalent in other security areas despite its weaknesses. With Oracle’s change, “affected MD-5 signed JAR files will no longer be considered trusted [by the Oracle JRE] and will not be able to run by default, such as in the case of Java applets, or Java Web Start applications,” Erik Costlow, an Oracle product manager with the Java Platform Group, wrote back in October. Developers need to verify that their JAR files have not been signed using MD5, and if they have, re-sign affected files with a more modern algorithm.
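The hashing behavior described above is easy to demonstrate. The sketch below (Python, standard library only) shows how a digest acts as an identifying signature and why a collision-resistant algorithm matters:

```python
import hashlib

def digest_hex(data: bytes, algorithm: str) -> str:
    """Return the hex digest of `data` under the named hash algorithm."""
    h = hashlib.new(algorithm)
    h.update(data)
    return h.hexdigest()

jar_bytes = b"example JAR contents"

md5_sig = digest_hex(jar_bytes, "md5")        # 128-bit digest: 32 hex chars
sha256_sig = digest_hex(jar_bytes, "sha256")  # 256-bit digest: 64 hex chars

# A casually modified file produces a completely different digest...
tampered = b"example JAR Contents"
assert digest_hex(tampered, "md5") != md5_sig
# ...but since 2005 attackers have been able to *construct* two different
# inputs that share one MD5 digest, which is why an MD5 signature can no
# longer prove a file is unaltered.
```

The point is not that MD5 fails on random edits — it doesn't — but that a deliberate attacker can craft a malicious file with the same MD5 digest as a benign, signed one.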

Administrators need to check with vendors to ensure the files are not MD5-signed.
If the files are still signed with MD5 at the time of the switchover, users will see an error message saying the application could not run. Oracle has already informed vendors and source licensees of the change, Costlow said. In cases where the vendor is defunct or unwilling to re-sign the application, administrators can disable the process that checks for signed applications (which has serious security implications), set up custom Deployment Rule Sets for the application’s location, or maintain an Exception Site List, Costlow wrote. There was plenty of warning. Oracle stopped using the MD5 with RSA algorithm as the default JAR signing option with Java SE 6, which was released in 2006.
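One way to perform that check: a signed JAR records its digest algorithm in attributes such as `MD5-Digest:` inside its `META-INF/*.SF` files, so scanning those entries reveals MD5-signed JARs. A sketch follows; the demo JAR builder fabricates a minimal, non-functional signature file purely for illustration:

```python
import io
import zipfile

def uses_md5_signature(jar_bytes: bytes) -> bool:
    """Scan a JAR's META-INF signature files for MD5 digest attributes.

    Signed JARs record the digest algorithm as attributes like
    'MD5-Digest:' or 'SHA-256-Digest:' in META-INF/*.SF files.
    """
    with zipfile.ZipFile(io.BytesIO(jar_bytes)) as jar:
        for name in jar.namelist():
            if name.startswith("META-INF/") and name.endswith(".SF"):
                if b"MD5-Digest:" in jar.read(name):
                    return True
    return False

def make_demo_jar(digest_attr: str) -> bytes:
    """Build a minimal fake 'signed' JAR for illustration only
    (the digest value is a placeholder, not a real signature)."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as jar:
        jar.writestr(
            "META-INF/SIGNER.SF",
            f"Signature-Version: 1.0\nName: App.class\n{digest_attr}: aGFzaA==\n",
        )
        jar.writestr("App.class", b"\xca\xfe\xba\xbe")
    return buf.getvalue()
```

In practice `jarsigner -verify -verbose` reports the same information; the scan above is useful for auditing a large directory of JARs in bulk.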

The MD5 deprecation was originally announced as part of the October 2016 Critical Patch Update and was scheduled to take effect this month as part of the January CPU.

To ensure developers and administrators were ready for the shift, the company has decided to delay the switch to the April Critical Patch Update, with Oracle Java SE 8u131 and corresponding releases of Oracle Java SE 7, Oracle Java SE 6, and Oracle JRockit R28. “The CA Security Council applauds Oracle for its decision to treat MD5 as unsigned. MD5 has been deprecated for years, making the move away from MD5 a critical upgrade for Java users,” said Jeremy Rowley, executive vice president of emerging markets at Digicert and a member of the CA Security Council. Deprecating MD5 has been a long time coming, but it isn’t enough. Oracle should also look at deprecating SHA-1, which has its own set of issues, and adopt SHA-2 for code signing.

That course of action would be in line with the current migration, as major browsers have pledged to stop supporting websites using SHA-1 certificates. With most organizations already involved with the SHA-1 migration for TLS/SSL, it makes sense for them to also shift the rest of their certificate and key signing infrastructure to SHA-2. The good news is that Oracle plans to disable SHA-1 in certificate chains anchored by roots included by default in Oracle’s JDK at the same time MD5 gets deprecated, according to the JRE and JDK Crypto Roadmap, which outlines technical instructions and information about ongoing cryptographic work for Oracle JRE and Oracle JDK.

The minimum key length for Diffie-Hellman will also be increased to 1,024 bits later in 2017. The road map also claims Oracle recently added support for the SHA224withDSA and SHA256withDSA signature algorithms to Java 7, and disabled Elliptic Curve (EC) for keys of less than 256 bits for SSL/TLS for Java 6, 7, and 8.
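The roadmap items above amount to per-algorithm minimum-strength floors. A sketch of such a policy check (the table and thresholds here are illustrative, not Oracle's actual configuration format):

```python
# Minimum key sizes modeled on the JRE/JDK crypto roadmap items above
# (illustrative table; real JRE policy lives in java.security properties).
MINIMUM_KEY_BITS = {
    "DH": 1024,  # Diffie-Hellman floor being raised in 2017
    "EC": 256,   # EC keys under 256 bits disabled for SSL/TLS
}

def key_allowed(algorithm: str, bits: int) -> bool:
    """Return True if a key meets the minimum size for its algorithm.
    Algorithms not in this sketch's table are rejected conservatively."""
    floor = MINIMUM_KEY_BITS.get(algorithm)
    return floor is not None and bits >= floor
```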

SHA-1 End Times Have Arrived

For the past couple of years, browser makers have raced to migrate from SHA-1 to SHA-2 as researchers have intensified warnings about collision attacks moving from theoretical to practical. In just weeks, a transition deadline set by Google, Mozilla and Microsoft for the deprecation of SHA-1 is up. Starting on Jan. 24, Mozilla’s Firefox browser will be the first major browser to display a warning to users who run into a site that doesn’t support TLS certificates signed with the SHA-2 hashing algorithm.

The move protects users from collision attacks, where two or more inputs generate the same hash value. In 2012, Bruce Schneier projected that a SHA-1 collision attack would cost $700,000 to perform by 2015 and $143,000 by 2018. In 2015, researchers said tweaks to existing attacks and new understanding of the algorithm could accelerate attacks and make a full-on collision attack feasible for somewhere between $75,000 and $125,000.

Bocek and other experts warn the move to SHA-2 comes with a wide range of side effects: unsupported applications, new hardware headaches tied to misconfigured equipment, and cases of crippled credit card processing gear unable to communicate with backend servers. They say the entire process has been confusing and unwieldy for businesses dependent on a growing number of digital certificates used not only for their websites, but for data centers, cloud services, and mobile apps. “SHA-1 deprecation in the context of the browser has been an unmitigated success. But it’s just the tip of the SHA-2 migration iceberg. Most people are not seeing the whole problem,” said Kevin Bocek, VP of security strategy and threat intelligence for Venafi. “SHA-1 isn’t just a problem to solve by February, there are thousands more private certificates that will also need migrating.” Nevertheless, it’s browsers that have been at the front lines of the SHA-1 to SHA-2 migration.
And starting next month, public websites not supporting SHA-2 will generate various versions of ominous warnings cautioning users that the site they are visiting is insecure. According to the Venafi Labs research team, 35 percent of the IPv4 websites it analyzed in November were still using insecure SHA-1 certificates. However, when researchers scanned Alexa’s top 1 million most popular websites for SHA-2 compliance, they found only 536 sites that were not compliant.

Exceptions to Every Rule

But still there are companies concerned about disruption to their business after the deadline, asking for exceptions and exploring alternatives to full SHA-2 support. “What you are seeing is various companies, for one reason or another, unable to complete the migration,” said Patrick Donahue, security engineering product lead at Cloudflare. “The browser makers have created an exception process that allows companies to make appeals for exceptions that allow CAs (certificate authorities) to issue them.” For example, last year Mozilla allowed a security firm to issue nine new SHA-1 certificates to payment processor Worldpay to use in 10,000 of its payment terminals worldwide. Worldpay argued that because it missed the Dec. 31, 2015 cutoff for obtaining SHA-1 certificates, it needed new SHA-1 certificates to buy more time to make the transition to SHA-2 or risk having thousands of its terminals go dark. After considerable debate, Mozilla granted the exception and issued SHA-1 certificates after the cutoff date. According to Cloudflare, as many as 10 percent of credit card payment systems may also face problems as browsers reject SHA-1 certificates used in terminals similar to Worldpay’s. “For credit card processing, it’s not as simple as a software update. It will require sending out new credit card processing machines that support SHA-2,” Donahue said.
For social networking behemoth Facebook, it wasn’t so much about the company looking for an exception as about a solution that could allow it to keep users stuck on older computers and aging handhelds connected to its service. In late 2015, Facebook estimated up to 7 percent of browsers used by its customers, particularly in developing countries, would not be able to use the newer SHA-2 standard. At the time, Facebook chief security officer Alex Stamos said, “We don’t think it’s right to cut tens of millions of people off from the benefits of the encrypted Internet, particularly because of the continued usage of devices that are known to be incompatible with SHA-256.” The solution for Facebook is similar to what a number of companies have sought: a stopgap fix until SHA-2 adoption is ubiquitous. Facebook said it has found success running a large TLS termination edge with certificate switching, where it intelligently chooses which certificate a person sees based on Facebook’s guess as to the capabilities of the user’s browser. “This allows us to provide HTTPS to older browsers using SHA-1 while giving newer browsers the security benefits of SHA-256,” Stamos explained. Cloudflare and Mozilla have both developed similar techniques for customers concerned that line-of-business websites will stop working after the deprecation deadline. “The biggest excuse among web server operators was the need to support Internet Explorer on Windows XP (pre-SP3), which does not support SHA-2. However, websites with this requirement (including www.mozilla.org) have developed techniques that allow them to serve SHA-2 certificate to modern browsers while still providing a SHA-1 certificate to IE/XP clients,” said J.C. Jones, cryptographic engineering manager at Mozilla. Workarounds work for browsers, but different SHA-2 transition challenges persist within the mobile app space.
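Certificate switching of the kind Facebook and Mozilla describe can be sketched as a selection function over the signature algorithms a client advertises in its TLS ClientHello (the certificate file names and algorithm labels here are hypothetical placeholders):

```python
# Hypothetical certificate labels; a real deployment would hold parsed
# certificate/key pairs at its TLS termination edge.
SHA2_CERT = "example.com-sha256.pem"
SHA1_CERT = "example.com-sha1.pem"

def select_certificate(client_sig_algs: set[str]) -> str:
    """Pick which certificate chain to present, based on the signature
    algorithms a client advertises in its TLS ClientHello.

    Modern clients get the SHA-256 chain; legacy clients (e.g. IE on
    pre-SP3 Windows XP) that never advertise SHA-256 fall back to SHA-1.
    """
    if any(alg.startswith("sha256") for alg in client_sig_algs):
        return SHA2_CERT
    return SHA1_CERT
```

The design choice is deliberate: the fallback is triggered by what the client *fails* to advertise, so no modern browser is ever downgraded to the SHA-1 chain.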
Apps Must Catch Up on SHA-2 Support

When a browser rejects a SHA-1 certificate, the warning message is easy to spot. That’s not the case with apps. While Google’s Android and Apple’s iOS operating systems have supported SHA-2 for more than a year, most apps still do not. “With apps, you don’t get to see what the actual trusted connections are. It requires blind faith,” Venafi’s Bocek said. Instead, he said, it should be a matter of trust and verify. He recalled how bad things got in 2014, when Fandango and Credit Karma assured users that their data was secure and being sent over encrypted SSL connections when in fact they had disabled their apps’ SSL certificate validation. SHA-1 used by apps is a far cry from no protection. But still, the absence of SHA-2 introduces the risk that someone could mint a forged SHA-1 certificate to connect with an app using a SHA-1 certificate. An attacker spoofing the DNS of a public Wi-Fi connection could launch a man-in-the-middle attack, and unlike with a browser, the use of untrusted TLS certificates would go unnoticed, Bocek said. App developers don’t have to worry about pressure from browser makers; more often, it’s about what individual platforms and app makers decide to enforce. In December, Apple backtracked on its plan to enforce a year-end deadline that would have required developers to adopt App Transport Security, which included SHA-2 support. It did not set a new deadline. As with Apple, it’s unclear when Google might force app developers to use SHA-2 or if it will reject SHA-1 certificates used to sign apps. Salesforce warned its developer community and users that if they wanted to continue to have error-free access to Salesforce, they needed to ensure their operating systems, browsers, and middleware were capable of accepting HTTPS certificates with the SHA-2 hashing algorithm or risk being blocked. Facebook, unlike its stance on SHA-1 deprecation for its website, set a firm sunset date of Jan.
1, 2016 for SHA-1 for developers using its SDKs. After that date, Facebook required apps that connect to it to support SHA-2 connections. “If your app relies on SHA-1 based certificate verification, then people may encounter broken experiences in your app if you fail to update it,” said Adam Gross, a production engineer at Facebook.

Internal PKIs Aren’t Immune

Enterprises are not under the same immediate pressure to update the internal PKI used for their internal hardware, software and cloud applications. But security experts warn that doesn’t make them immune to major certificate headaches. One of those hassles is the fact that the number of certificates has ballooned to an average of more than 10,000 per company, which makes the switch from SHA-1 to SHA-2 a logistical nightmare, according to Venafi. The growth of SSL and TLS traffic has been a mixed blessing, Bocek said. “More security is always better, but one blind spot in a sea of thousands of certificates used within the enterprise could be disastrous for the security of a business.” That’s prompted Microsoft to take a hardline approach to SHA-1 deprecation. It announced an ultimate kill date for SHA-1 for its operating system in 2014: “CAs must stop issuing new SHA-1 SSL and Code Signing end-entity certificates by 1 January 2016 … For SSL certificates, Windows will stop accepting SHA-1 end-entity certificates by 1 January 2017. This means any time valid SHA-1 SSL certificates must be replaced with a SHA-2 equivalent by 1 January 2017 … For code signing certificates, Windows will stop accepting SHA-1 code signing certificates without time stamps after 1 January 2016. SHA-1 code signing certificates that are time stamped before 1 January 2016 will be accepted until such time when Microsoft decides SHA1 is vulnerable to pre-image attack.” “This is an operational issue. It isn’t as simple as patching or upgrading systems,” Bocek said.
The migration requires a certificate inventory assessment, a review of policies, application and system testing, and automation, he said. But he warns of the perils of procrastination. “This is a non-trivial migration,” Bocek said. Think of 2017 not as the end of the race, but rather as the point when SHA-2 migrations begin to hit their stride, he said.
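Microsoft's timestamp rule quoted above reduces to a simple date check; a sketch, with the cutoff date taken from the quoted policy:

```python
from datetime import date
from typing import Optional

SHA1_TIMESTAMP_CUTOFF = date(2016, 1, 1)

def accepts_sha1_code_signature(timestamp: Optional[date]) -> bool:
    """Model Windows' stated policy for SHA-1 code-signing signatures:
    accepted only if countersigned by a timestamp authority before
    1 January 2016; unstamped or later-stamped signatures are rejected."""
    return timestamp is not None and timestamp < SHA1_TIMESTAMP_CUTOFF
```

This is why timestamping matters for code signing: the countersignature proves the signature existed while the algorithm was still trusted, so old releases keep working after the algorithm itself is retired.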
The Certificate Authority Security Council has released new Minimum Requirements for Code Signing for use by all CAs (Certificate Authorities).

This represents the first-ever standard for code-signing, and the advocacy group hopes the guidelines will improve web security by making it easier to verify software authenticity. The new Minimum Requirements for the Issuance and Management of Publicly-Trusted Code Signing Certificates outlines specific steps CAs and individual software companies must perform to ensure code-signing certificates are not abused.
It addresses "user concerns about the trustworthiness of signed objects and accurately identifying the software publisher," the working group wrote in the requirements document. While the requirements are intended primarily for CAs that can issue code-signing certificates (including root CAs publicly trusted for code signing and all other CAs part of the root CA's validation path), software companies and developers have to comply with some of the requirements if they are going to work with a standards-compliant CA. Not meeting those requirements can mean a code-signing certificate will not be issued, or an existing one will be revoked. Code signing refers to using certificates to digitally sign executables and scripts in order to verify the author's identity and, more importantly, that the code has not been altered or corrupted since it was signed.
Several attack campaigns have stolen legitimate code-signing certificates to sign malware, making it possible for the malicious code to bypass security defenses.

There are 25 million pieces of malware enabled by code-signing certificates, and stolen code-signing digital certificates are sold every day on underground markets for more than $1,000 each, said Kevin Bocek, vice president of security strategy and threat intelligence at Venafi. "Code signing is critical to every mobile device and computer we touch," Bocek said. Microsoft has already adopted the minimum requirements and will require all CAs issuing code-signing certificates for the Windows platform to adopt them starting Feb. 1, 2017. Because CAs have different rules for how they issue and revoke code-signing certificates, both developers and cybercriminals could game the system, Bocek said. Without any standards in place, it was possible to get accepted by one CA even after already being rejected by a different one.

The variance made it difficult to know which code-signing certificate could be trusted. With the guidance, each CA has some leeway in developing its own process for how to issue and revoke certificates, but the underlying requirements are the same from CA to CA. Along with providing all the information necessary for the CA to verify the identity of the software company (or developer) in order to issue the certificate or sign the code object, organizations are responsible for making sure the private key is generated, stored, and used in a secure environment with controls to prevent the keys from being stolen or misused.

The CA has to provide guidance on how to protect the keys, but it's up to the organization to do it in a way that matches the guidelines:

Protecting the private keys: Organizations have to use either a trusted platform module to generate and secure key pairs, a FIPS-140-Level-2 Hardware Security Module or equivalent (such as Common Criteria EAL 4+), or another type of hardware storage token, such as a USB key or an SD card. The tokens have to be kept physically separate from the device hosting the code-signing function until the moment they are actually needed for a signing session.

Securing the code signing computer: The computer used for signing cannot be used for web browsing, and it must be periodically scanned by regularly updated security software for possible infections.

Picking a trusted third party: Organizations that use a third-party signing service to sign objects with their private keys should make sure the signing service has enabled multi-factor authentication to access and authorize code signing. If the service doesn't, it's not compliant with the new requirements, and that should be a serious warning flag.

Transporting the key securely: If the CA or the signing service generates the private key on behalf of the organization, the private key may be transported outside of the secure infrastructure. In those cases, the key must either be transported "in hardware with an activation that is equivalent to 128 bits of encryption, or encrypt the Private Key with at least 128 bits of encryption strength," according to the standard. That could mean using a 128-bit AES key to wrap the private key, or storing the key in a PKCS 12 file encrypted with a randomly generated password "of more than 16 characters containing uppercase letters, lowercase letters, numbers, and symbols."

Using strong keys: The CA will not issue the code-signing certificate if the requested public key does not meet modern security requirements or if it has a known weak private key (such as a Debian weak key).

The CA will have to spell out all of the new requirements in the subscriber agreement, and it has to keep complete records to show both the organization and the CA are following the rules. Under the agreement, the organization cannot request a code-signing certificate if the public key in the certificate is -- or will be -- used with a non-code-signing certificate.
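The PKCS 12 password rule above is mechanical enough to encode. A sketch using Python's `secrets` module (the particular symbol set is an assumption; the standard only requires that symbols be present):

```python
import secrets
import string

# Assumed symbol set; the standard only says "symbols" must be present.
SYMBOLS = "!@#$%^&*()-_=+"

def pkcs12_password(length: int = 24) -> str:
    """Generate a random password satisfying the code-signing standard's
    PKCS 12 rule: more than 16 characters, with uppercase, lowercase,
    digits, and symbols all present."""
    if length <= 16:
        raise ValueError("standard requires more than 16 characters")
    alphabet = string.ascii_letters + string.digits + SYMBOLS
    while True:
        pw = "".join(secrets.choice(alphabet) for _ in range(length))
        if (any(c.isupper() for c in pw) and any(c.islower() for c in pw)
                and any(c.isdigit() for c in pw) and any(c in SYMBOLS for c in pw)):
            return pw
```

Note the use of `secrets` rather than `random`: the password protects a signing key, so it must come from a cryptographically secure source.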

The organization also has to commit to protecting against the theft or misuse of the private key, and to immediately request the CA to revoke the certificate if the private key is compromised or used to sign malicious code. If the private key is compromised due to an attack, the CA doesn't have to issue a new or replacement certificate until it is satisfied the organization has improved its security protections. "Documentation of a Takeover Attack may include a police report (validated by the CA) or public news report that admits that the attack took place.

The Subscriber must provide a report from an auditor with IT and security training or a CISA that provides information on how the Subscriber was storing and using Private keys and how the intended solution for better security meets the guidelines for improved security," the standard says. Currently, if the CA rejects the request for a new or replacement certificate, the organization can apply with another CA. However, if the second CA is following the new requirements, then it will be checking "at least one database containing information about known or suspected producers, publishers, or distributors of Suspect Code, as identified or indicated by an Anti-Malware Organization and any database of deceptive names" before issuing a certificate.
If the second CA sees that the organization has been implicated in signing bad code, then the idea is that it will also push back and reject the application, just like the first CA. "The CA must not issue new certificates to organizations that have been the victim of two Takeover Attacks or where the CA is aware the organization is not storing the private keys correctly," the standard says. The standard also has other requirements about the CA setting up a Timestamp Authority and how timestamp certificates should be used, such as letting code signatures stay valid for the validity period of the timestamp certificate. The standard was released by the Code Signing Working Group, part of the CA/Browser Forum, which is a voluntary group of CAs, browser makers, and software vendors that use X.509 v.3 digital certificates in their applications.

The Code Signing Working Group consists of Comodo, DigiCert, Entrust, GlobalSign, Izenpe, Microsoft, Symantec, SSC, and WoSign.

The China-based WoSign is the same CA that was recently marked as untrusted by Mozilla, Apple, and Google for multiple problems in how SSL certificates were issued. "The CA Security Council guidance on code signing is long overdue," Bocek said. "New methods of certificates to detect fraud and misuse such as Certificate Reputation will also see increased adoption as misuse of code signing certificates gets more and more attention." The requirements have not been adopted by the CA/Browser Forum, but will instead be improved and maintained by the CA Security Council.
Solve the DDoS problem? No problem. We’ll just get ISPs to rewrite the internet.

Apply best routing practices liberally. Repeat each morning.
In this interview Ian Levy, technical director of GCHQ’s National Cyber Security Centre, says it’s up to ISPs to rewrite internet standards and stamp out DDoS attacks coming from the UK.
In particular, they should change the Border Gateway Protocol, which lies at the heart of the routing system, he suggests. He’s right about BGP.
It sucks.

ENISA calls it the “Achilles’ heel of the Internet”.
In an ideal world, it should be rewritten.
In the real one, it’s a bit more difficult. Apart from the ghastly idea of having the government’s surveillance agency helping to rewrite the Internet’s routing layer, it’s also like trying to rebuild a cruise ship from the inside out. Just because the ship was built a while ago and none of the cabin doors shut properly doesn’t mean that you can just dismantle the thing and start again.
It’s a massive ship and it’s at sea and there are people living in it. In any case, ISPs already have standards to help stop at least one category of DDoS, and it’s been around for the last 16 years.

All they have to do is implement it.

Reflecting on the problem

Although there are many subcategories, we can break down DDoS attacks into two broad types.

The first is a direct attack, where devices flood a target with traffic directly. The second is a reflected attack. Here, the attacker impersonates a target by sending packets to another device that look like they’re coming from the target’s address.

The device then replies to the target, unwittingly participating in a DDoS attack that knocks it out. The attacker fools the device by spoofing the source of the IP packet, replacing its own IP address in the packet header’s source IP entry with the target’s address.
It’s like sending a letter in someone else’s name.

The key here is amplification: depending on the type of traffic sent, the response sent to the target can be an order of magnitude greater. ISPs can prevent this by validating source addresses and using anti-spoofing filters that stop packets with incorrect source IP addresses from entering or leaving the network, explains the Mutually Agreed Norms for Routing Security (MANRS).

This is a manifesto produced by a collection of network operators who want to make the routing layer more secure by promoting best practices for service providers.

Return to sender

One way to do this is with an existing standard from 2000 called BCP 38. When implemented in network edge equipment, it checks whether incoming packets contain a source IP address that’s approved and linked to a customer (e.g., within the appropriate block of IPs).
If it isn’t, it drops the packet.

Corero COO & CTO Dave Larson adds, “If you are not following BCP 38 in your environment, you should be.
If all operators implemented this simple best practice, reflection and amplification DDoS attacks would be drastically reduced.” There are other things that ISPs can do to choke off these attacks, such as response rate limiting.
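Response rate limiting can be modeled as a per-client sliding window; a simplified sketch (BIND's real RRL is more nuanced, grouping identical responses and optionally sending truncated replies that push legitimate clients to retry over TCP):

```python
from collections import defaultdict, deque

class ResponseRateLimiter:
    """Simplified DNS response rate limiter: allow at most `limit`
    responses per client address per `window` seconds, dropping the rest.
    A reflection victim's spoofed address quickly exhausts its budget,
    choking off the amplified flood."""

    def __init__(self, limit: int = 5, window: float = 1.0):
        self.limit = limit
        self.window = window
        self.history = defaultdict(deque)  # client -> recent response times

    def allow(self, client: str, now: float) -> bool:
        sent = self.history[client]
        # Expire timestamps that have slid out of the window.
        while sent and now - sent[0] >= self.window:
            sent.popleft()
        if len(sent) < self.limit:
            sent.append(now)
            return True
        return False
```

Because each client address is budgeted separately, normal resolvers are unaffected while the single spoofed victim address hits the cap almost immediately.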

Authoritative DNS servers are often used as the unwitting dupe in reflection attacks because they send more traffic to the target than the attacker sends to them.

Their operators can limit the number of responses using a mechanism included by default in the BIND DNS server software, for example, which can detect patterns in incoming traffic and limit the responses to avoid flooding a target.

The Internet of Pings

We’d better sort this out, because the stakes are rising.

Thanks to the Internet of Things, we’re seeing attackers forklift large numbers of dumb devices such as IP cameras and DVRs, pointing them at whatever targets they want. Welcome to the Internet of Pings. We’re at the point where some jerk can bring down the Internet using an army of angry toasters.

The vast range of IP addresses involved also makes it more difficult for ISPs to detect and stop the problem. We saw this with the attack on Dyn in late October, which could well be the largest attack ever at this point, hitting the DNS provider with pings from tens of millions of IP addresses.

Those claiming responsibility said that it was a dry run. Bruce Schneier had already reported someone rattling the Internet’s biggest doors. “What can we do about this?” he asked. “Nothing, really.” Well, we can do something. We can implore our ISPs to pull their collective fingers out and start implementing some preventative technology. We can also encourage IoT manufacturers to impose better security in IoT equipment. Let’s get to proper code signing later, and start with just avoiding the use of default login credentials first. When a crummy malware strain like Mirai takes down half the web using nothing but a pre-baked list of usernames and passwords, you know something’s wrong. How do we persuade IoT vendors to do better? Perhaps some government regulation is appropriate.
Indeed, organizations are already exploring this on both sides of the pond. Unfortunately, politicians move like molasses, while DDoS packets move at the speed of light.
In the meantime, it’s going to be up to the gatekeepers to solve the problem voluntarily.
Google announced Monday that when it ships Chrome 56 in January 2017, the browser will distrust certificates issued by Chinese certificate authorities WoSign and StartCom, which have made headlines over the past month. The move was somewhat expected after Mozilla announced last week that it would begin distrusting certificates from the same CAs in Firefox 51, also slated to launch in January. Both companies have publicly blamed WoSign for failing to adhere to the standards expected of certificate authorities. Google blamed WoSign’s acquisition of StartCom, a move it tried to sweep under the rug in September. Mozilla in particular released a five-page report in late September explaining missteps made by WoSign and StartCom, the most glaring perhaps being that the CA was found backdating SSL certificates to circumvent a deadline requiring CAs to stop issuing SHA-1 SSL certs by Jan. 1, 2016. Microsoft’s Edge and Internet Explorer browsers are scheduled to block SHA-1 certs, widely viewed as unstable, while Firefox and Chrome deprecated the algorithm at the beginning of this year. WoSign backdated certificates to December 2015 on 62 occasions for certs it issued in 2016 to get around that restriction, according to Mozilla’s report. Andrew Whalley, a member of Google's Chrome Security team, said that like Mozilla, Google was made aware of WoSign’s malfeasance in mid-August, when the CA issued a cert for GitHub’s domains without GitHub’s authorization. WoSign’s acquisition of StartCom led to a shakeup in staff, policies, and issuance systems, which directly misled the browser community, in the eyes of Google. Whalley claims that the way the company went about its acquisition of StartCom and the mis-issued certificates were tipping points.
“For both CAs, we have concluded there is a pattern of issues and incidents that indicate an approach to security that is not in concordance with the responsibilities of a publicly trusted CA,” Whalley said. Much like Mozilla did last week, Google said Monday it will distrust WoSign and StartCom certs issued after Oct. 21 in Chrome 56.

Certs issued before Oct. 21 will be trusted, assuming they comply with Chrome’s policies, but Google says it reserves the right to fully distrust all of WoSign’s certs in future releases.

Adding a sense of urgency to the situation, Whalley notes that in some instances WoSign and StartCom customers may find their certificates don’t work at all in Chrome 56. Users are being encouraged to switch to another CA that is trusted by Chrome; any sites still using the old certs will be put on a whitelist and can request to be removed once they’ve transitioned. “Any attempt by WoSign or StartCom to circumvent these controls will result in immediate and complete removal of trust,” Whalley warned.

Kathleen Wilson, the owner of Mozilla’s CA Certificates Module and Policy, said last week that Mozilla will remove the affected root certs from its root store at some point – likely after March 2017 – but if WoSign’s new root certs are accepted for inclusion, it could change the removal date to coincide with WoSign’s plans to move customers to the new certs.

Apple took a similar stance last month when it announced it would no longer trust certificates issued by WoSign’s Free SSL Certificate G2 intermediate CA on macOS and iOS. It’s still unclear when, or if, Microsoft, one of the last remaining major root certificate stores, will revoke trust for WoSign and StartCom.

The company did not immediately return a request for comment on Tuesday. Both WoSign code signing certificates and WoSign EV code signing certificates are still trusted by Windows, and four of WoSign’s root certificates are still listed on Microsoft’s Trusted Root Certificate Program list. Microsoft’s Azure Key Vault, which allows users to store keys and other cloud app data, also supports WoSign for SSL certs.
MacOS, iOS task threading was open to hijack

When Apple shipped its security bug fixes earlier this week, one patch mostly passed under the radar. Ian Beer of Google Project Zero found a deep-down vulnerability in the XNU kernel and first reported it to Apple in February this year; it took until now to clean it up properly. It took eight months, apparently, because of a basic architectural feature of the kernel: calling target functions directly instead of via the MIG (Mach Interface Generator) inter-process communication layer is fast, but “there’s no central point where access to a resource can be cut off”. In his post, “task_t considered harmful”, Beer describes (in gloriously geeky detail) a discovery that needed “a large refactor in MacOS 10.12.1 / iOS 10.1” to fix. The TL;DR version is that the bugs offered at least two exploit types: privilege escalation and sandbox escape. How come? When a new SUID binary launches, old task and thread ports are invalidated, but the essence of the bug is in this Project Zero bug report: “When a suid binary is executed it's true that the task's old task and thread ports get invalidated, however, the task struct itself stays the same.

There's no fork and no creation of a new task.

This means that any pointers to that task struct now point to the task struct of an euid 0 process.” Back to Beer's more readable post: by creating a dangling task_t, an exploit can do things like writing to one IOKit process's buffer from another process – and, as he demonstrates, that means one process can write to a more privileged process. “Since this bug also allows us to gain any entitlements we want as well as root it’s easy to use it to defeat kernel code signing on OS X and load an unsigned kernel extension.
See the exploit for CVE-2016-1757 for one way to do this.” “Every task_t pointer is a potential security bug”, Beer writes, because “there’s no locking mechanism to let you assert that the privileges of a task struct haven’t changed since you got access to it and just because kernel code got access to a task struct at one time doesn’t mean it should have access later.” Apple went through two rounds of mitigations before this week's fix because, Beer notes, “This isn't an easy bug class to fix … there are task_t pointers everywhere”. ®
The StrongPity APT is a technically capable group operating under the radar for several years.

The group has quietly deployed zero-days in the past, effectively spearphished targets, and maintains a modular toolset. What is most interesting about this group’s more recent activity, however, is its focus on users of encryption tools, peaking this summer.
In particular, the focus was on Italian and Belgian users, but the StrongPity watering holes affected systems in far more locations than just those two.

Adding in their creative waterholing and poisoned installer tactics, we describe the StrongPity APT as not only determined and well-resourced, but fairly reckless and innovative as well. Clearly this APT is interested in encrypted data and communications.

The tools targeted by this group help secure the secrecy and integrity of data.

For example, WinRAR packs and encrypts files with strong suites like AES-256, and TrueCrypt encrypts full hard drives in one fell swoop.

Both WinRAR and TrueCrypt help provide strong and reliable encryption. WinRAR enables a person to encrypt a file with AES-256 in CBC mode using a strong PBKDF2-HMAC-SHA256-based key.
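The key-derivation step described above can be sketched with Python's standard library. This is illustrative only: the password, salt, and iteration count below are arbitrary example values, not WinRAR's actual parameters.

```python
import hashlib

# Derive a 256-bit key from a password with PBKDF2-HMAC-SHA256,
# the same construction described above. Salt and iteration count
# here are stand-in values; a real implementation uses a random
# salt and a vendor-chosen iteration count.
password = b"correct horse battery staple"
salt = b"\x00" * 16
iterations = 100_000

key = hashlib.pbkdf2_hmac("sha256", password, salt, iterations)
print(len(key))  # 32 bytes = 256 bits, suitable as an AES-256 key
```

The work factor comes from the iteration count: each guess an attacker makes against the password costs that many HMAC invocations.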

And TrueCrypt provides an effective open-source full-disk encryption solution for Windows, Apple, Linux, and Android systems. Using both of these tools together, a sort of one-off, poor man’s end-to-end encryption can be maintained for free by combining the two with free file-sharing services. Other software applications help support encrypted sessions and communications. Well-known applications supporting end-to-end encryption are used by hundreds of millions of people, sometimes unknowingly, every day.
IM clients like Microsoft’s Skype implement 256-bit AES encrypted communications, while PuTTY, WinSCP, and Windows Remote Desktop help provide private, fully encrypted communications and sessions as well. Most of these communications across the wire are currently unbreakable when intercepted, at least when the applications are configured properly. This actor set up a particularly clever site to deliver trojanized WinRAR installers in the summer of 2016, appears to have compromised another, and this activity reminds us somewhat of the early 2014 Crouching Yeti activity. Many of the Crouching Yeti intrusions were enabled by trojanizing legitimate ICS-related IT software installers, like SCADA-environment VPN client installers and industrial camera software driver installers.

Then, they would compromise the legitimate company software distribution sites and replace the legitimate installers with the Crouching Yeti trojanized versions.

The tactics effectively compromised ICS and SCADA related facilities and networks around the world.
Simply put, even when visiting a legitimate company distribution site, IT staff was downloading and installing ICS-focused malware.
StrongPity’s efforts did much the same. In the case of StrongPity, the attackers were not focused on ICS or SCADA.

They set up a domain name (ralrab[.]com) mimicking the legitimate WinRAR distribution site (rarlab[.]com), and then placed links on a legitimate “certified distributor” site in Europe to redirect to their poisoned installers hosted on ralrab[.]com.
In Belgium, the attackers placed a “recommended” link to their ralrab[.]com site in the middle of the localized WinRAR distribution page on winrar[.]be.

The big blue recommended button (here in French) linked to the malicious installer, while all the other links on the page directed to legitimate software.

Winrar[.]be site with “recommended link” leading to malicious ralrab[.]com

The winrar[.]be site evaluated what “recommended” package a visitor might need based on browser localization and processor capability, and accordingly offered up the appropriate trojanized version.
Installer resources named for French and Dutch versions, along with 32-bit versus 64-bit compiled executables, were provided over the summer:

hxxp://www.ralrab[.]com/rar/winrar-x64-531.exe
hxxp://www.ralrab[.]com/rar/winrar-x64-531fr.exe
hxxp://www.ralrab[.]com/rar/winrar-x64-531nl.exe
hxxp://www.ralrab[.]com/rar/wrar531.exe
hxxp://www.ralrab[.]com/rar/wrar531fr.exe
hxxp://www.ralrab[.]com/rar/wrar531nl.exe
hxxp://ralrab[.]com/rar/winrar-x64-531.exe
hxxp://ralrab[.]com/rar/winrar-x64-531nl.exe
hxxp://ralrab[.]com/rar/wrar531fr.exe
hxxp://ralrab[.]com/rar/wrar531nl.exe
hxxp://ralrab[.]com/rar/wrar53b5.exe

Directory listing, poisoned StrongPity installers, at ralrab[.]com

The first visitor redirects from winrar[.]be to ralrab[.]com appeared on May 28th, 2016, from the Dutch-speaking version of the winrar[.]be site.

And around the same time, another “certified distributor” winrar[.]it served trojanized installers as well.

The major difference here is that we didn’t record redirections to ralrab[.]com; instead, it appears the site directly served StrongPity trojanized installers:

hxxps://www.winrar[.]it/prelievo/WinRAR-x64-531it.exe
hxxps://www.winrar[.]it/prelievo/WRar531it.exe

The site started serving these executables a couple of days earlier, on May 24th, and the large majority of affected visitors were Italian.

Download page, winrar[.]it

Quite simply, the download links on this site directed visitors to trojanized WinRAR installers hosted on the winrar[.]it site itself.
It’s interesting to note that both of these sites are “distributors”: they are owned and managed not by RARLAB, but by local owners in individual countries. StrongPity also directed specific visitors from popular, localized software-sharing sites directly to its trojanized installers.

This activity continued into late September 2016.
In particular, the group redirected visitors from software aggregation and sharing site tamindir[.]com to their attacker-controlled site at true-crypt[.]com.

The StrongPity-controlled TrueCrypt site is a complete rip of the legitimate site, which is now hosted on SourceForge. Here is the Tamindir TrueCrypt page, which looks harmless enough.

TrueCrypt page, tamindir software sharing site

Much like the poisoned WinRAR installers, multiple filenames have been used to keep up with visitor interests.
Visitors may also have been directed to the site by other means and downloaded installers directly from the ripped and persuasive site.

true-crypt[.]com malicious StrongPity distribution site

At the very bottom of the page, there are a couple of links to the poisoned installers:

hxxp://www.true-crypt[.]com/download/TrueCrypt-Setup-7.1a.exe
hxxp://true-crypt[.]com/files/TrueCrypt-7.2.exe

Referrers include these localized software aggregation and sharing sites:

gezginler[.]net/indir/truecrypt.html
tamindir[.]com/truecrypt/indir

It’s interesting that KSN recorded the appearance of the file on two unique systems in December 2015 and a third in January 2016, all in Turkey, and then nothing until May 2016.

Then deployment of the installers continued, mostly within Turkey, in July and September 2016. Over the course of a little over a week, malware delivered from winrar[.]it appeared on over 600 systems throughout Europe and Northern Africa/the Middle East. Likely, many more infections actually occurred.

Accordingly, the country with the overwhelming number of detections was Italy, followed by Belgium and Algeria.

The top countries with StrongPity malware from the winrar[.]it site from May 25th through the first few days of June are Italy, Belgium, Algeria, Cote D’Ivoire, Morocco, France, and Tunisia.

winrar[.]it StrongPity component geolocation distribution

In a similar time span, the over sixty visitors redirected from winrar[.]be to ralrab[.]com for malicious file download were overwhelmingly located in one country. The top countries directed to StrongPity malware from the winrar[.]be site from May 25th through the first few days of June are Belgium, Algeria, Morocco, the Netherlands, Canada, Cote D’Ivoire, and Tunisia.

winrar[.]be StrongPity component geolocation distribution

StrongPity previously set up TrueCrypt-themed watering holes in late 2015.

But their offensive activity surged in late summer 2016.

The group set up a site directly pulled from the contents of the legitimate TrueCrypt website.

From mid-July to early September, dozens of visitors were redirected from tamindir[.]com to true-crypt[.]com, with, unsurprisingly, almost all of the focus on systems in Turkey, along with victims in the Netherlands.

tamindir[.]com to true-crypt[.]com poisoned TrueCrypt installer redirects

The StrongPity droppers were often signed with unusual digital certificates, dropping multiple components that not only provide complete control of the victim system, but effectively steal disk contents and can download components for further collection of various communications and contacts.

Because we are talking about StrongPity watering holes, let’s take a quick look at what is being delivered by the group from these sites. When we count all systems from 2016 infected with any one of the StrongPity components or a dropper, we see a more expansive picture.

This data includes over 1,000 systems infected with a StrongPity component.

The top five countries include Italy, Turkey, Belgium, Algeria, and France. In the case of the winrar[.]be/ralrab[.]com watering hole malware, each one of the six droppers that we observed created a similar set of dropped components on disk.

And, in these cases, the attackers did not re-use their fake digital certificates.
In addition to installing the legitimate version of WinRAR, the dropper installed the following StrongPity components:

%temp%\procexp.exe
%temp%\sega\
  nvvscv.exe
  prst.cab
  prst.dll
  wndplyr.exe
  wrlck.cab
  wrlck.dll

Of these files, two, “wrlck.cab” and “prst.cab”, are configurable and encrypted with the same keyless cipher. While one maintains several callback C2s for the backdoor to fetch more instructions and upload installed software and file paths, the other maintains something a bit more unusual. “prst.cab” maintains an encrypted list of programs that maintain encrypted connections.

This simple encoding takes the most significant nibble for each character, swaps the nibbles of that byte, and xors the result against the original value.
Its code looks something like this:

x = s[i];
j = ((x & 0xF0) >> 4);
y = x ^ j;

Using that cipher, the ralrab[.]com malware package is configured to seek out several crypto-enabled software applications, highlighting the group’s interest in users of encryption-supported software suites:

putty.exe (a Windows SSH client)
filezilla.exe (supports FTPS uploads)
winscp.exe (a Windows secure copy application, providing encrypted and secure file transfer)
mstsc.exe (the Windows Remote Desktop client, providing an encrypted connection to remote systems)
mRemoteNG.exe (a remote connections manager supporting SSH, RDP, and other encrypted protocols)

Also included in the StrongPity components are keyloggers and additional data stealers. Widely available, strong cryptography software tools help provide secure and private communications that are now easily obtained and usable.
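The nibble cipher above can be sketched in Python (an illustrative translation of the C-style snippet, not StrongPity's actual code). Because the XOR only touches the low nibble, the high nibble of each byte survives intact, so applying the transform a second time recovers the original data:

```python
def nibble_xor(data: bytes) -> bytes:
    """XOR each byte with its own high nibble shifted into the low nibble.

    Illustrative translation of the C-style snippet above. The high
    nibble is unchanged by the XOR, so the transform is its own inverse:
    the same routine both encodes and decodes.
    """
    out = bytearray()
    for x in data:
        j = (x & 0xF0) >> 4   # most significant nibble, shifted down
        out.append(x ^ j)     # xor against the original value
    return bytes(out)

sample = b"putty.exe"
encoded = nibble_xor(sample)
assert encoded != sample                  # the bytes do change on disk
assert nibble_xor(encoded) == sample      # ...but decoding is the same op
```

This is obfuscation rather than encryption: there is no key, so anyone with the algorithm can recover the configuration, which is presumably why the authors describe it as a keyless cipher.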
In the summer of 2016, users of multiple encryption-enabled software applications were targeted by the StrongPity APT with watering holes, social engineering tactics, and spyware. While watering holes and poisoned installers are tactics that have been effectively used by other APTs, we have never seen the same focus on cryptography-enabled software. When visiting sites and downloading encryption-enabled software, it has become necessary to verify the validity of the distribution site and the integrity of the downloaded file itself.

Download sites not offering PGP signatures or strong digital code-signing certificates need to re-examine the necessity of providing them for their own customers. We have seen other APTs, such as Crouching Yeti and Darkhotel, distribute poisoned installers and poisoned executable code, then redistribute them through similar tactics and over P2P networks. Hopefully, simpler verification systems than the current batch of PGP and SSL applications will arise and be adopted in larger numbers. Until then, strong anti-malware and dynamic whitelisting solutions will be more necessary than ever. For more details on APT tactics like StrongPity watering holes, contact intelreports@kaspersky.com.
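A minimal form of the integrity check recommended above is comparing a downloaded installer's cryptographic hash against the value the vendor publishes over a separate, trusted channel (not the download page itself, which a watering-hole attacker also controls). A sketch, demonstrated on a stand-in empty file rather than a real installer:

```python
import hashlib
import os
import tempfile

def sha256_of(path: str) -> str:
    """Return the hex SHA-256 digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# Stand-in for a downloaded installer: an empty temp file, whose
# SHA-256 is the well-known empty-input digest. In practice `path`
# is the installer you downloaded and `expected` comes from the vendor.
with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name

expected = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
if sha256_of(path) != expected:
    raise SystemExit("hash mismatch: do not run this installer")
os.unlink(path)
```

A matching hash only proves the file is the one the vendor hashed; it does not help if the published hash itself came from the compromised page, which is why out-of-band verification (or a PGP signature from a known key) matters.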