
Tag: Database

An unwelcome PITSTOP

Glitches at distributed denial-of-service mitigation biz Incapsula left the websites it defends offline twice on Thursday. Incapsula blamed "connectivity issues" for the global PITSTOP, aka the worldwide degradation of its services. "A rare case triggered an issue on the Incapsula service and caused two system-wide errors at 9:44 UTC and 14:50 UTC, making sites inaccessible," a spokeswoman told us. "The issue was identified immediately and actions were taken to contain it and restore service. The root cause has been identified and the Incapsula development and ops teams have corrected the issue. We apologize for the inconvenience to our customers." The data center security firm elaborated on the situation on its system status page and in a string of tweets. Affected sites included the blog of IT security industry veteran Graham Cluley, who tweeted: "Apologies to those trying to get to my site. @Incapsula_com is down for the second time today, bringing my site with it."

Bootnote PITSTOP – Partial Inability To Support Totally Optimal Performance: not quite a full TITSUP, which is a Total Inability To Support Usual Packets.
Tech has plenty of holy wars -- Windows vs Linux, emacs vs vi, and Perl vs Python, to name a few -- and security has its own: vulnerability disclosure.

At times it makes sense to publicly disclose a security vulnerability, but the recently revealed out-of-bounds read flaw in OpenSSL isn't one of them. Attackers can trigger the out-of-bounds read flaw in OpenSSL's b2i_PVK_bio() function with a specially crafted private key, according to a post by Guido Vranken, a software engineer at Intelworks.

That could lead to heap corruption and potentially leak memory contents. The vulnerability was reported to OpenSSL on Feb. 24, but Vranken said the project team informed him on Feb. 26 that the report, along with other reports submitted around that time, would have to wait until the next release.
Vranken publicized the bug on his blog on Mar. 1, the same day OpenSSL released versions 1.0.2g and 1.0.1s. "It's not necessarily more secure to have vulnerable code running on servers for a month or more while attackers, if any (for this vulnerability), are not bound to release cycles and have the advantage of time," he wrote. The argument that administrators and users have to know about security vulnerabilities right away and can't wait for updates is frequently used to justify public disclosures.
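The flaw Vranken describes boils down to a parser trusting an attacker-controlled length field. A minimal Python sketch of that bug class (hypothetical, not the actual OpenSSL C code) shows the bounds check that prevents reading past the end of a buffer:

```python
import struct

def parse_record(blob: bytes) -> bytes:
    """Parse a toy length-prefixed record: 4-byte big-endian length + payload.

    Illustrates the out-of-bounds-read bug class: a parser that trusts a
    declared length from a crafted input can read beyond its buffer.
    Validating the field against the actual data prevents that.
    """
    if len(blob) < 4:
        raise ValueError("truncated header")
    (claimed_len,) = struct.unpack(">I", blob[:4])
    payload = blob[4:]
    # The crucial check: never trust the declared length over the actual data.
    if claimed_len > len(payload):
        raise ValueError("declared length exceeds available data")
    return payload[:claimed_len]
```

A well-formed record parses normally; a crafted one declaring a huge length is rejected instead of triggering an over-read.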

Certainly, there are times when openly revealing a bug can spur a lagging company to prioritize the issue and get it fixed. That was the case with last year's automobile hack, as researchers Charlie Miller and Chris Valasek worked with Chrysler for nine months to fix the security flaw that could let attackers wirelessly break into some vehicles and remotely control a 2015 Jeep Cherokee.

Chrysler issued a recall notice within days after the duo's "stunt hack" with Wired's Andy Greenberg at the wheel. That isn't the case with the OpenSSL flaw since the project team acknowledged the report and indicated it was working on a fix.

Even Vranken acknowledged the team has to "conform to deadlines and schedules."

No better, no worse

While it should be fixed at some point, the bug doesn't seem critical enough to warrant pre-emptively disclosing it before a patch. While Vranken didn't provide information regarding severity or exploitability in his post, an entry on VulnDB, a comprehensive vulnerability database from Risk Based Security, suggests this is not a show-stopping, drop-everything-and-get-on-it flaw. VulnDB rated the flaw as "high," but assigned a base score of 7.8, an exploitability score of 8.6, and an impact score of 7.8.
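For context on those numbers: under the commonly used CVSS v2 severity bands, a base score of 7.8 lands in the "High" range, which is why VulnDB's rating sounds alarming even for an unremarkable flaw. A small Python helper makes the standard banding explicit (VulnDB's internal classifications are not reproduced here, only the conventional thresholds):

```python
def cvss_v2_severity(base_score: float) -> str:
    """Map a CVSS v2 base score to the commonly used severity bands:
    Low 0.0-3.9, Medium 4.0-6.9, High 7.0-10.0."""
    if not 0.0 <= base_score <= 10.0:
        raise ValueError("CVSS base scores range from 0.0 to 10.0")
    if base_score < 4.0:
        return "Low"
    if base_score < 7.0:
        return "Medium"
    return "High"
```

So a 7.8 is "High" by definition, but, as the article notes, that band has held roughly 60 OpenSSL flaws in two years.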

The scores, based on the Common Vulnerability Scoring System as well as other internal classifications and metrics, are used to determine whether a vulnerability can be easily exploited and whether a public exploit is available. This makes the vulnerability "no better or no worse" than the 60-or-so OpenSSL flaws found over the past two years, said Bill Ledingham, CTO of Black Duck Software. "This is another in a long line of vulnerabilities reported against OpenSSL as researchers pore over the code." It would be a good idea for OpenSSL to make sure similar out-of-bounds read vulnerabilities aren't in other sections of the code, which may wind up being more critical than this particular one.

Not under attack

Another good reason for a public disclosure would be if the flaw were actively under attack and being aware could help administrators beef up their defenses.

That isn't the case in this situation, as Vranken wasn't aware of any incidents, and the VulnDB entry doesn't list any, either.

Thanks to the disclosure, attackers who didn't know about the issue now have the details and can experiment to craft an exploit, and the defenders don't have an easy way to defend their systems. IT has to wait for a new OpenSSL release, which they had to do before the disclosure -- so nothing has been gained by jumping the gun.

For the moment, administrators with OpenSSL in their environments can rest assured they don't need to do anything about this specific bug. Responsible disclosure may take longer and may not be as exciting, but it helps improve overall security because by the time the details are public, the fix is available.

There's some comfort in being able to say, yes, this is a serious issue, but look, here's what can be done to address it to protect the systems/network.

The endless drumbeat of software vulnerabilities can wear down even the most security-conscious IT administrator, especially when it's not clear how it can be exploited, whether there's an active threat, or even what to do as a result of the bug report. Researchers need to think through the vulnerability's actual impact. Just because it's potentially serious doesn't automatically make it critical.

There are only so many times people can be told there is nothing they can do about a serious flaw before they start ignoring vulnerability reports altogether.

That's not what anyone wants to see happen in IT security.
It was just days ago when the federal judge presiding over the upcoming Oracle v. Google API copyright trial said he was concerned that the tech giants were already preparing for a mistrial—despite the fact that the San Francisco jury hasn't even been picked yet. US District Judge William Alsup said he was suspicious that, during the trial, the two might perform intensive Internet searches on the chosen jurors in hopes of finding some "lie" or "omission" that could be used in a mistrial bid. To placate the judge's fears, Google said (PDF) it won't do Internet research on jurors after a panel is picked for the closely watched trial, set to begin on May 9. "The Court stated that it is considering imposing on both sides a ban on any and all Internet research on the jury members prior to verdict. Provided the ban applies equally to both parties, Google has no objection to imposition of such a ban in this case," Google attorney Robert Van Nest wrote to the judge in a Tuesday filing. Google was referring solely to Internet searches of the jury once jurors were picked. Oracle didn't go so far in its response Tuesday and said the dueling companies should be able to investigate jurors both before and after they are chosen. "...the parties should be permitted to conduct passive Internet searches for public information, including searches for publicly available demographic information, blogs, biographies, articles, announcements, public Twitter and other social media posts, and other such public information," Oracle attorney Peter Bicks wrote (PDF) to Alsup on Tuesday. However, Oracle was concerned that Google might tap its vast database of "proprietary" information connected to jurors' Google accounts and said such research should be off-limits.
"Neither party should access any proprietary databases, services, or other such sources of information, including by way of example information related to jurors', prospective jurors', or their acquaintances' use of Google accounts, Google search history information, or any information regarding jurors' or prospective jurors' Gmail accounts, browsing history, or viewing of Google served ads..." Oracle wrote. Google has never suggested it would violate its customers' privacy in such a way. Oracle is seeking $1 billion in damages after successfully suing the search giant for infringing Oracle's Java APIs that were once used in the Android operating system.

A federal appeals court has ruled that the "declaring code and the structure, sequence, and organization of the API packages are entitled to copyright protection." The decision reversed the outcome of the first Oracle-Google federal trial before Alsup in 2012.

APIs are essential and allow different programs to work with one another. The new jury will be tasked with deciding solely whether Google has a rightful fair-use defense to that infringement.
Allege critical software vulns ignored in huge backlog Frustrated security professionals acting on behalf of equally irritated researchers unable to gain Common Vulnerabilities and Exposures (CVE) numbers for their bugs have started an alternative numbering system to help triage what they describe as a huge backlog of ignored software flaws. Several prominent researchers are now backing the Distributed Weakness Filing (DWF) System, badged as an alternative for the herds of researchers unable to obtain CVEs for their legitimate vulnerabilities. The researchers say the movement is a response to inaction from US government-funded CVE-handler MITRE Corporation over the last six months.

They claim it has allocated far fewer CVE numbers to vulnerabilities and has been much less responsive to requests from researchers. MITRE has been contacted for comment. Common Vulnerabilities and Exposures numbers are the numerical tags assigned to legitimate verified bugs that act as a single source of truth for security companies and engineers in corporate offices when assigning and applying patches. The numbering is crucial for the security of software and has for years been assigned – largely for US technology – by the MITRE Corporation. Dozens of researchers from multiple countries – from upstart hackers to competent experts with track records – have told this reporter they have been unable to gain a CVE number from MITRE. The effects of the alleged radio silence are tangible; the Reg understands that many US government agencies do not react to disclosed vulnerabilities that are not catalogued by the National Vulnerability Database – which in turn ignores bugs that lack assigned CVEs. Some large private sector corporations also respond only to CVE-numbered bugs – leading to the possibility that legitimate and critical vulnerabilities may remain unpatched due to MITRE's alleged unwillingness to allocate a CVE number to them. However, while the number of bugs is outpacing the speed at which vulnerability numbers are allocated, researchers say not enough is being done to cover important and forgotten critical bugs in popular software.
Some researchers say they have held off disclosure as a result, while many are published without CVE tracking. Kurt Seifried, who established the alternative system, is a Red Hat security staffer and MITRE board member but speaks to The Register in his personal capacity. He says the system could remain a bridge for those cut out of CVE allocation or, in the worst case scenario, become a full-blown replacement for it with eventual co-opting of the CVE title. "We are really seeking a response from MITRE," Seifried says, adding he would be glad to retire the effort should MITRE fill the gap. "Your first job is to get CVEs out the door, and the second is to engage with industry and neither of those is happening. I planned to maybe launch this (DWF) in the summer, but I saw that it was getting worse and we as an industry just can't do another four months of no one getting CVEs." Seifried and other researchers contacted by this reporter say they have tried hard to inquire and lobby MITRE for CVE allocation – to no avail. It has sparked a series of complaints sent to this reporter and posted in public online mailing lists. A researcher known as Radek said he'd failed to elicit a response when disclosing his OS X vulnerabilities. "I have not heard back from MITRE," he says. "I am a little bit confused why vulnerability like this one which affects few hundreds or even more applications do not have a CVE assigned.
It is ridiculous in my opinion." Security researcher David Jorm says some prominent researchers able to gain immediate CVEs harbour such disdain for the alleged allocation failings they have submitted entirely fake and mocking bugs and still received CVE numbers. "There are a lot of legitimate researchers who can't get CVEs," Jorm says. "It seems that you need to be a rock star to get a number." Jorm, a respected security researcher in Australia, says the rules and procedures for allocation need to be clearly defined for the stability of the technology industry. "A lot of feeds aggregate CVEs for vulnerability and threat intelligence platforms, as do a lot of vulnerability scanners; the downstream impact is enormous," he says. The DWF system will largely map and complement CVE such that CVE-2016-0101 will become DWF-2016-0101.
It has, like the CVE system, corporations serving as numbering authorities.
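Since DWF identifiers mirror CVE identifiers, the mapping described above is purely mechanical. A hypothetical Python sketch (illustrative only, not official DWF tooling):

```python
import re

# CVE IDs are "CVE-YYYY-NNNN" with four or more digits in the sequence number.
CVE_RE = re.compile(r"^CVE-(\d{4})-(\d{4,})$")

def cve_to_dwf(cve_id: str) -> str:
    """Translate a CVE identifier into its DWF counterpart,
    e.g. CVE-2016-0101 -> DWF-2016-0101."""
    m = CVE_RE.match(cve_id)
    if not m:
        raise ValueError(f"not a CVE identifier: {cve_id!r}")
    year, number = m.groups()
    return f"DWF-{year}-{number}"
```

The point of the shared numbering is that downstream feeds and scanners could, in principle, treat the two namespaces interchangeably.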
Interested researchers can look over the DWF system at GitHub.
The vendor security evaluation framework provides questions that organizations need to ask to accurately assess a third party's security and privacy readiness, Google said. Google has released to open source a framework that it uses internally to...
This will be a particularly timely eWEEKchat conversation on how security is moving ahead in the nascent IoT age. On Wednesday, March 9, at 11 a.m. PST/2 p.m. EST/7 p.m. GMT, @eWEEKNews will host its 41st monthly #eWEEKChat. The topic will be "Is Data-Centric Security the Future?" It will be moderated by Chris Preimesberger, who serves as eWEEK's editor of features and analysis.

Some quick facts:

Topic: "Is Data-Centric Security the Future?"
Date/time: March 9, 2016 @ 11 a.m. PST/2 p.m. EST/7 p.m. GMT
Moderator: Chris Preimesberger: @editingwhiz
Tweetchat handle: Use #eWEEKChat to follow/participate, but it's easier and more efficient to use real-time chatroom links.
Chatroom real-time links: We have two: http://tweetchat.com/room/eweekchat or http://www.tchat.io/rooms/eweekchat. Both work well.
Sign in via Twitter and use #eweekchat for the identifier.

"Is Data-Centric Security the Future?"

Data-centric security is designed to protect data at all times while allowing it to flow freely and securely anywhere, without the need for plug-ins, proxies, gateways or changes in user behavior. This defines a large trend in IT in which the primary function is the management and manipulation of data itself, rather than security focused primarily on the application, networking or storage. This type of security follows the data item or store around wherever it travels—on-premises or off. This is as close to airtight a concept as there can be when it comes to securing the Internet of things, many industry observers say.

With the advent of virtualized IT systems, the worldwide explosion in the use of cloud and managed services, and the increasing use of data storage and big data analytics inside clouds, data is often separated into so-called "chunks" for security purposes and spread across various locations. Later, when the entire file is needed, systems reassemble these chunks—usually with a just-in-time methodology. All this movement has made conventional security a central problem, and data-centric security—centered around government-level encryption—may have come to the rescue as the only way to handle all this travel in a reliable fashion.

Some of the leading innovators in this space include Thales Security, which recently bought Vormetric for this purpose; IONU, whose data isolation platform creates a separate and secure zone where data is insulated from the outside world; Dataguise, which specializes in data-centric security for NoSQL server shops; and Vera, which does both file-centric and data-centric security.

These are just a few of the data points we'll talk about on March 9. We also will pose questions such as:

--What do you personally see as the No. 1 advantage of using data-centric security?
--What other companies do you know will become data-centric security players in 2016?
--Do you see, or do you not see, data-centric security becoming mainstream in 2016?

Join us March 9 at 11 a.m. Pacific/2 p.m. Eastern/7 p.m. GMT for an hour. Chances are good that you'll learn something valuable.
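The chunk-and-scatter pattern described above, where data is split across locations and reassembled just in time, can be sketched in a few lines of Python. This is a toy model with invented store objects; real systems add encryption, redundancy, and access control:

```python
import hashlib

def scatter(data: bytes, stores: list, chunk_size: int = 4) -> list:
    """Split data into chunks, spread them round-robin across separate
    stores, and return the placement map needed to reassemble later."""
    placements = []
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        store = stores[(i // chunk_size) % len(stores)]
        # Key includes the offset so identical chunks get distinct keys.
        key = hashlib.sha256(chunk + i.to_bytes(4, "big")).hexdigest()
        store[key] = chunk
        placements.append((store, key))
    return placements

def gather(placements: list) -> bytes:
    """Reassemble the original data just in time from the placement map."""
    return b"".join(store[key] for store, key in placements)
```

No single store holds the whole file, which is the security property the approach is after.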
VIDEO: Yinglian Xie, CEO and co-founder of DataVisor, discusses her firm's new technology that makes use of unsupervised analytics to combat online fraud. There is an increasing consensus among security vendors and technology users that organizations c...
On Tuesday, The Guardian reported that the Federal Bureau of Investigation (FBI) has changed its rules regarding how it redacts Americans’ information when it takes international communications from the National Security Agency’s (NSA) database.

The paper confirmed the classified rule change with unnamed US officials, but details on the new rules remain murky. The new rules, which were approved by the secret US Foreign Intelligence Surveillance Court (FISC), deal with how the FBI handles information it gleans from the National Security Agency (NSA).

Although the NSA is technically tasked with surveillance of communications involving foreigners, information on US citizens is inevitably sucked up, too.

The FBI is then allowed to search through that data without any “minimization” from the NSA—a term that refers to redacting Americans’ identifiable information unless there is a warrant to justify surveillance on that person. The FBI enjoys privileged access to this information trove that includes e-mails, texts, and phone call metadata that are sent or received internationally. Recently, the Obama administration said it was working on new rules to allow other US government agencies similar access to the NSA’s database. But The Guardian notes that the Privacy and Civil Liberties Oversight Board (PCLOB), which was organized by the Obama administration in the wake of the Edward Snowden leaks, took issue with how the FBI accessed and stored NSA data in 2014. "As of 2014, the FBI was not even required to make note of when it searched the metadata, which includes the ‘to' or ‘from' lines of an e-mail,” The Guardian wrote. "Nor does it record how many of its data searches involve Americans’ identifying details." However, a recent report from PCLOB suggested that the new rules approved by FISC for the FBI involve a revision of the FBI's minimization procedures.
Spokespeople from both the FBI and PCLOB declined to comment on that apparent procedure change, saying it was classified, but PCLOB’s spokesperson, Sharon Bradford Franklin, told The Guardian that the new rules "do apply additional limits.” A spokesperson for the Office of the Director of National Intelligence said that the new procedures may be publicly released at some point.
In data transmission, bandwidths in the Gigabit range call for new IT security solutions.

This applies in particular to traditional unified threat management (UTM) firewalls, which have limited performance.

At this year's CeBIT, the IT security company Rohde & Schwarz Cybersecurity will present an innovative solution that for the first time meets the challenges posed by higher bandwidths: the UTM+ firewall series with an integrated next-generation engine.

The integrated software also comes with high-end features.

Munich, March 8, 2016 — The UTM+ firewall series was designed especially for the needs of medium-sized businesses.
It is just as powerful as a next-generation firewall (NGFW) due to the integrated single-pass technology. While the efficiency of traditional UTM appliances ends in the megabit range, UTM+ appliances provide performance in the Gigabit range.

And they offer even more: the UTM+ models are easy-to-use, all-in-one solutions and are significantly less expensive than next-generation firewalls. In addition to single-pass technology, further high-performance next-generation firewall features were integrated into the new UTM+ solution.

These include, for example, security mechanisms such as port-independent SSL decryption for automatic analysis of encrypted data traffic.

The permanent layer 7 scanner ensures complete and continuous analysis of data packets – even after successful validation.

The application control feature allows a fine-grained analysis of network traffic.

The firewall operating system is additionally protected with a highly secure firewall container system. Like all new Rohde & Schwarz Cybersecurity products to be showcased at CeBIT, the UTM+ firewalls follow the innovative approach "security by design", which prevents attacks proactively rather than reactively.

Security certificate: made in Germany

At CeBIT 2016, the Rohde & Schwarz security companies gateprotect, Sirrix, Rohde & Schwarz SIT and ipoque will, for the first time, bundle their broad ranges of technologically leading IT and network security solutions under the umbrella of the new Rohde & Schwarz Cybersecurity GmbH.

The first product of this new big player is the UTM+ V16. The UTM+ V16 is the improved successor model to the successful GP series with V15 software from gateprotect.

The V16 software is not only more powerful, but is also visually recognizable as a Rohde & Schwarz product.
Instead of the familiar red, it now comes in the blue and gray Rohde & Schwarz corporate colors. Rohde & Schwarz Cybersecurity, a wholly owned subsidiary of the Rohde & Schwarz electronics group, develops and manufactures its products exclusively in Germany.

Customers can therefore rely on the stringent German quality and data protection standards as well as maximum performance for all Rohde & Schwarz Cybersecurity products.

Contact: Svenja Borgschulte, Tel.: +49 (0)221 801087 85, Fax: +49 (0)221 801087 77, E-Mail: sb@moeller-pr.de

Contact for readers: Christian Reschke, Tel.: +49 (0)30 65884 232, Fax: +49 (0)30 65884 184, E-Mail: christian.reschke@rohde-schwarz.com https://cybersecurity.rohde-schwarz.com/de

CeBIT 2016 in Hanover, March 14 to 18, hall 6/booth G16

Rohde & Schwarz Cybersecurity

The IT security company Rohde & Schwarz Cybersecurity protects companies and public institutions around the world against espionage and cyberattacks.

The company offers high-end encryption solutions, next-generation firewalls, network traffic analytics and endpoint security software in addition to producing cutting-edge technical solutions for IT and network security.

These “Made in Germany” IT security solutions range from compact all-in-one products to custom solutions for critical infrastructures.

The “security by design” approach, which employs a proactive rather than reactive approach to dealing with cyberattacks, is central to the development of trusted IT solutions.

Around 400 employees work at the current sites in Berlin, Bochum, Darmstadt, Hamburg, Leipzig, Munich and Saarbrücken. R&S® is a registered trademark of Rohde & Schwarz GmbH & Co. KG. All press releases are available online at https://cybersecurity.rohde-schwarz.com/de. Image material can also be downloaded there.
NEWS ANALYSIS: Multiple speakers at the RSA conference said developers alone are not to blame for the current state of cyber-security in which threats evolve faster than the defenses. SAN FRANCISCO—It's the best of times and the worst of times to be a software developer.

There are lots of jobs and business opportunities for developers, but thousands of new applications reach the market each day with inadequate attention to built-in security flaws. Cloud computing, containers, new programming languages and continuous integration and delivery tools are changing the game and enabling developers to create new types of applications and reach new levels of agility.

Despite all the opportunity, there's one area in which developers can't catch a break—security. Here at the RSA Conference this week there was a lot of talk about Apple vs. the FBI and the coming security market consolidation.

Dig a little deeper and the real issues confronting enterprise CIOs and security managers include the never-ending stream of insecure applications being put into production by vendors as well as corporate developers. This is not necessarily enterprise developers' fault.

They are facing, in geek speak, the Kobayashi Maru, Star Trek's no-win command test scenario: they can't win.

Either they push out apps quickly and insecurely, or slowly but more securely.
Security processes and agile development methodologies require their own schedules and resources. To that point, a new survey from CloudPassage found that 50 percent of security professionals don't believe security is capable of moving as fast as app release cycles; 65 percent said a lack of resources and organizational silos are the main barriers to getting security into release cycles earlier. Businesses, seeing great opportunities in increasing developer productivity, are pushing developers to get apps out as fast as possible.
Sometimes, security best practices are being ignored. More often, they are merely being put off until later.
Software producers will wait to work on security until hackers find the product's weak spots.

This symptom is already pervasive in the Internet of things.

Experts who monitor and test application security call this "security debt."

Which kinds of applications are the ones causing the most problems? "New ones. That's the reality," said Amichai Shulman, CTO of Web application firewall vendor Imperva. "There are not bad programmers or bad languages.
It's mostly those apps that have very tight schedules—a very fast time to market—that are the most vulnerable. No one has enough time to weed out vulnerabilities and write secure code." The biggest code culprits for security these days are APIs for mobile apps and server-side controls.

Companies are creating mobile versions of their legacy applications and in the process generating security bugs. "Companies say let's go mobile, they mobilize the apps and they end up with APIs that are vulnerable," he said. Again, business imperatives are not necessarily the developer's fault. Nor do security flaws occur because student developers are not getting enough training on writing secure code and preventing exploits like SQL injection and cross-site scripting. It's also a simple numbers problem.
IT industry research shows that over the next few years millions of cyber-security jobs will go unfilled.
Chocolate Factory rolls out geolocation filter on search results

If you use Google in Europe, your search results will be censored under the Continent's right-to-be-forgotten policy – even if you try to use one of the ad giant's non-European sites. Until now, if you used Google.com rather than, say, Google.de, you could still find results that have been removed at someone's request: the links would be censored on google.de but available on google.com. From next week, though, if you connect to Google.com from an IP address with a European geolocation, you'll get the censored result. Under the right-to-be-forgotten policy, people can ask for results to be pulled from the search engine on all queries made in the EU. Previously, the filters had only been applied to the local Google domains for each EU country. Users would now need to find other means, such as an overseas VPN, to get around the filtered search results. Since the European Court of Justice ruled in 2014 that citizens had the right to order their names be expunged from embarrassing Google search results, the Chocolate Factory has been working with the EU courts to honor the requests. "We're changing our approach as a result of specific discussions that we've had with EU data protection regulators in recent months," wrote Google global privacy counsel Peter Fleischer. "We believe that this additional layer of delisting enables us to provide the enhanced protections that European regulators ask us for, while also upholding the rights of people in other countries to access lawfully published information." Fleischer acknowledged that Google has had "occasional disagreements" with the EU in how to enforce the directive, but said the Chocolate Factory will continue to comply with requests to pull information in Europe, even if many of those requests would still leave the results readily available for viewing on other search engines and to anyone who would run a search query outside of the EU.
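The policy change can be illustrated with a toy filtering model. Everything here, the domain set, the geolocation lookup, and the delisted URL, is invented for illustration; it only shows how adding a client-geolocation check extends the old domain-only filter:

```python
# Hypothetical sketch: old policy filtered only EU country domains; the new
# policy also filters any query whose client IP geolocates to the EU.
EU_DOMAINS = {"google.de", "google.fr"}
EU_GEO = {"DE", "FR", "NL"}          # toy IP-geolocation results
DELISTED = {"http://example.com/embarrassing"}

def filter_results(results, domain, client_country, geo_policy=True):
    """Return results, removing delisted URLs when the query is EU-bound.

    geo_policy=False models the old behavior (domain-based filtering only);
    geo_policy=True adds the new geolocation-based filtering.
    """
    eu_bound = domain in EU_DOMAINS or (geo_policy and client_country in EU_GEO)
    if not eu_bound:
        return list(results)
    return [url for url in results if url not in DELISTED]
```

Under the old model, a German user querying google.com saw everything; under the new one, the same query is filtered, which is why only an overseas VPN now routes around it.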
Uncle Sam can't argue against science

Analysis Apple versus the FBI has generated much discussion and conjecture lately. The vast majority of it has centered on the rights and the wrongs, about the loss of privacy, and of the precedent that breaking one iPhone would create. Many are hanging on the blow-by-blow developments for an outcome, to see which side trumps: Apple – and by implication, increasingly, the tech industry – or law enforcement and the government.

But this misses the point and the ultimate outcome: victory for Apple. That's because there is a higher law beyond what FBI director James Comey sought to enforce on Apple last month. It was described by Harvard professor Larry Lessig almost 20 years ago, back when he was unknown, in a book called Code and Other Laws of Cyberspace, since updated as Code v2. Lessig called law as defined in computer code "West Coast Law." This is as opposed to "East Coast Law," which is defined by statute. Encryption is one such West Coast Law.
It was defined by Whitfield Diffie and Martin Hellman 40 years ago in a paper called "New Directions in Cryptography." Their Diffie-Hellman protocol brought us the concept of public key cryptography, messages encrypted first with a key everyone knows, then decrypted with a private key controlled by the recipient. Or vice versa. East Coast Law is analog.
It changes and it has exceptions.

Arguments can be made – on either side of a question – that define or change East Coast Law or that shift its interpretation, as happens in courts. West Coast Law, like encryption, is binary.
It's science.
It uses facts that can't be denied or altered through the relative strength or weakness of an argument.
So we have learned from that day to this. Soon after the Diffie-Hellman paper was published, Ron Rivest, Adi Shamir, and Len Adleman created an implementation known by their initials: RSA.

They defied the wishes of the US National Security Agency and published an article on it in Scientific American in 1977. In 1991, programmer Phil Zimmermann wrote a program called Pretty Good Privacy, implementing RSA. Zimmermann launched PGP Inc in 1996, defying attempts by RSA Security (now part of EMC) to claim patent rights over the two-key method, then fighting the US government over rights to export it. The first version of the encrypted Web standard, https, also using Diffie-Hellman keys, was written into Netscape Navigator in 1994.
It evolved into a full Internet specification in 2000.
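The Diffie-Hellman key agreement underlying those https keys can be sketched with toy numbers. This is purely illustrative; real deployments use primes hundreds of digits long or elliptic curves, and these parameters are trivially breakable:

```python
# Toy Diffie-Hellman exchange illustrating the 1976 idea: two parties agree
# on a shared secret over a public channel without ever transmitting it.
p, g = 23, 5                  # public parameters: a small prime and a generator

a = 6                         # Alice's private key (kept secret)
b = 15                        # Bob's private key (kept secret)

A = pow(g, a, p)              # Alice sends g^a mod p over the open channel
B = pow(g, b, p)              # Bob sends g^b mod p over the open channel

shared_alice = pow(B, a, p)   # Alice computes (g^b)^a mod p
shared_bob = pow(A, b, p)     # Bob computes (g^a)^b mod p

assert shared_alice == shared_bob  # both arrive at g^(ab) mod p
```

An eavesdropper sees p, g, A, and B, but recovering the shared secret requires solving the discrete logarithm problem, which is what makes this "West Coast Law" so hard to repeal.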

After encrypting its own traffic, Google began preferring the encrypted pages of web sites it indexed late last year. Why did Google do this? Partly in response to the revelations of Edward Snowden, whose document dump in 2013 showed that the NSA has been ignoring privacy routinely ever since 9/11.
Snowden's point was that the government's promises on this issue can't be trusted. Snowden says we can't trust government with our secrets, and we don't have to. You might as well pass a law telling glaciers not to melt. We all want our privacy and security. West Coast Law says the only way you get it is if everyone does. But, Comey says, he just wants Apple to disable PIN protection on one iPhone.

But this, too, is an encryption case.

The PIN serves as a shorter key.

This phone will self-destruct after 10 failures, just like the messages in Mission Impossible. If Apple unlocks the phone because of terrorism, the district attorney for New York County (Manhattan) alone has 175 Apple devices in his lab he wants to open, in hopes of solving crimes. And it's not just America.
If Apple broke its own phone's security because of US legal demands, China would demand that right.
So would Russia.
So would every other dictatorship. Many "crimes" being investigated in these countries are political.
If Comey gets his way, then so does Vladimir Putin. This is why Bruce Schneier, a security expert who became an IBM employee last week when his employer was bought by Big Blue, writes that "Our national security needs strong encryption." He adds: I wish I could give the good guys the access they want without also giving the bad guys access, but I can't.
If the FBI gets its way and forces companies to weaken encryption, all of us – our data, our networks, our infrastructure, our society – will be at risk. That's West Coast Law in a nutshell.
It's science.
It's binary. Resistance to it is futile. The decision by Judge James Orenstein to deny a government demand against Apple, based on the arguments used in San Bernardino, is thus theater.
So, too, with the House hearing.

Congress could pass a law, and the President could sign a law, mandating that all security have a back door, just as was sought in 1991. But even if Tim Cook was not allowed to defy such a demand, as he says he will in the case of the PIN, replacing it with something "even Apple" can't crack, unbreakable security is possible. Which means unbreakable security will exist. Will only criminals and governments have it? Or will you? Will everyone? It's all or nothing.

That's the ruling of West Coast Law. And what of Diffie and Hellman, who launched this ship 40 years ago? They were just awarded the Turing Award, computing's equivalent of the Nobel. Law can't defy science.