
Tag: Computer Security

Free webinar series from the folks at Sophos

Registration is open for Security SOS Week, a short series of live webinars, each featuring Sophos expert IT security practitioners.

The events range from protecting your business against social engineering to embracing the Internet of Things without letting crooks into your network. You can find out more and sign up at Security SOS Week, but in the meantime here is a handy synopsis. The 30-minute webinars run daily from 14 March 2016 to 18 March 2016, 2pm to 2.30pm UK time (14:00-14:30 UTC). Naked Security writer Paul Ducklin hosts each event, and his brief is to interview Sophos experts to help you cut through the jargon and understand the big issues in computer security today. Each webinar consists of 20 minutes of live interview, followed by 10 minutes of Q&A. Paul promises: "No sales pitches, no product demos, no PowerPoint slide decks - just informed answers to tricky problems."

Check out the running order below:

Social Engineering - when charming crooks talk to helpful users (Monday 14 March, 14:00 GMT). Sophos Global Security IT Manager Ross McKerchar takes you into the murky world of targeted attacks and shows how to build defences that will prevent one well-meaning employee from giving away the keys to the castle.

Can you strengthen security by weakening it? (Tuesday 15 March, 14:00 GMT). Some regulators want stronger security for the data you hold, while others want to deliberately exploit "backdoors" in case they need to access your data in an investigation. What to do? John Shaw, Sophos Vice President, Product Management, discusses.

Malvertising: when trusted websites go rogue (Wednesday 16 March, 14:00 GMT). Crooks don't need to hack into a mainstream website to infect it with malware.

They can get away with hacking just one ad served up by one ad network.

This is "malvertising", and John Shier, Sophos IT Security Specialist, explains how it works, why crooks love it, and what we can do to stamp it out.

Inside a hacker's toolkit (Thursday 17 March, 14:00 GMT). Join SophosLabs Principal Researcher Fraser Howard for an insight into what cybercrime tools the hackers have up their sleeves, how they work together, and what we can do to get the better of them.

What's next for the Internet of Things? (Friday 18 March, 14:00 GMT). Chet Wisniewski, Sophos Senior Security Advisor, tells you how you can dip your toes in the IoT water without plunging straight into trouble - as well as explaining how you can help us make the next generation of "things" secure by design.
Security isn’t black and white.
It isn’t a choice between full security and no security -- it’s a continuum with a lot of gray in between. Full security, even if achievable, would “secure” things beyond the realm of reasonable usability.

But even then hackers would find a way in. Usable security comes down to a single feeling: trust. Trust makes our world mostly normal and livable.
In one of Bruce Schneier’s books (I forget which) he wrote about the societal trust in everyday acts like ordering pizza.

The pizza company trusts you’re going to pay when the pizza is delivered.

The driver trusts that you’re going to pay and tip, and you won’t harm him or her.

The customer trusts that the pizza will match the order -- and trusts the delivery driver, a stranger, enough to open their door. Without such pervasive trust, everyday life would be impossible. The issue is dogging Uber and other tech companies right now: Uber wants its customers to feel safe enough to hop into a stranger's car, despite horror stories stemming from a few bad apples.

Apple, and nearly every other big name in the IT industry, is fighting the feds so that customers feel they can safely store private information.

Every software vendor works hard against bugs and hackers to keep the trust of their customers. Once trust is harmed, it can be impossible to regain.

Ask anyone who's ever been cheated on. To earn trust, companies have to address several components, including security, compliance, privacy, and transparency.

Trust factor No. 1: Security

The base component of trust in the security world is, of course, good security.

Customers want to be assured that a product won’t open the door to random hacking, harassment, and unauthorized activity. When a piece of software or hardware gets hacked too many times, customers look elsewhere. Security doesn’t have to be perfect.
In fact, the product itself can survive with hundreds to thousands of bugs, year after year.
It all depends on whether those defects result in harm to the customer.

As long as relatively few people get hacked or bothered, most people will keep on using it. On the same note, you can have a secure product with only a few bugs -- but if one of them gets badly abused, it could be game over. Security is rarely a selling point. Most people choose cool features over security.

But a lot of exploits over time, or one bad exploit that impacts a lot of people, can destroy a great deal of trust. Without security as the foundation, trust is impossible.

Trust factor No. 2: Compliance

Computer products need to comply with basic societal norms, human rights, national and local laws -- and government regulations if applicable.
Interestingly, different cultures have different expectations.
In China, people accept that it is legal for their government to monitor every digital transaction they make (although some use proxies to get around the country's censoring firewalls). In the United States, people accept far more business ownership of their personal data, with few meaningful restrictions, than their European counterparts. Other countries, such as India, accept that bribes are a normal way of doing business for everything from paying your taxes to operating a business.

Every country has its own idea of what is just and fair, but people expect every vendor doing business in their country to comply with national and local laws.

Trust factor No. 3: Privacy

Customers expect that their private information will not be shared without consent.

This is true even of countries where the government and businesses know almost everything about each individual. People may accept sharing their information with business and government, but they don’t want their friends and neighbors to have the same access. This expectation of privacy is one of the newest components of trust, one that many companies are only now coming to grips with.

But it's huge. Users want to be able to control how much of their data is accessed and where it goes. Many companies not directly in the data collection business are realizing that the smartest privacy strategy is to collect the least amount of personal data possible.

The less personal information they have, the less they have to protect, and the less that can be stolen.

Trust factor No. 4: Transparency

More and more, people expect governments and companies to be transparent about what they collect and when.

There's a growing expectation that governments and companies must post their information collection policies in an easily accessible place, though this applies more to companies than to governments.

Other trust components

Security, compliance, privacy, and transparency are the foundations of trust in computer security, but there are two more: expectations and perception. Overall, trust is a matter of expectations. Yes, different countries have different expectations.

But it’s the communication, transparency, and acceptance of those guidelines that creates expectations, and it ultimately determines whether trust succeeds or fails. Perception is reality. Many businesses die failing to recognize this.
It doesn’t matter how trustworthy a product is if consumers view it as untrustworthy. Our world is replete with examples of a tiny fraction of vocal observations turning into a global meme.
It happens in politics all the time.

A politician or candidate does one little thing (spell "potato" wrong, yell during a big win, speak Mandarin to Chinese people), and suddenly many people see the politician through the lens of the one incident. No wonder politicians give us canned, measured speech. Perceptions can harm better security.
I work at a software company where occasionally an update patch will cause operational issues on a small number of computers -- often for reasons unrelated to the patch itself.

But a few dozen complaints get amplified in the media, including this publication, and the next thing you know tens of millions of people stop applying the patch.

Gaining and keeping trust

A big part of gaining and keeping trust is to continuously foster an environment where trust is valued and communicated to everyone participating.

Consumers will forgive occasional or even ongoing issues if enough goodwill has been earned to show the company cares about the customer. The more I analyze computer security, the more I realize it’s not about numeric bug counts ... or security at all.
It’s more about intent and trustworthiness, and every component that makes up that trustworthiness, largely led by perceptions. Long-term, established trust sells, regardless of the underlying security posture.

Everything else is background noise.
The famous munition T-shirt -- the way security data might have had to be shared if proposed trade restrictions under the Wassenaar Arrangement were approved.

After nearly a year of protests from the information security industry, security researchers, and others, US officials have announced that they plan to re-negotiate regulations on the trade of tools related to "intrusion software." While it's potentially good news for information security, just how good the news is will depend largely on how much the Obama administration is willing to push back on the other 41 countries that are part of the agreement—especially after the US was key in getting regulations on intrusion software onto the table in the first place. The rules were negotiated through the Wassenaar Arrangement on Export Controls for Conventional Arms and Dual-Use Goods and Technologies, an agreement governing the trade of weapons and technology that could be used for military purposes. The arrangement was originally intended to prevent the proliferation and build-up of weapons, but the US and other western nations pushed for operating system, software, and network exploits to be included in the Wassenaar protocol to prevent repressive regimes from using commercial malware and hacking tools against their own people for surveillance. These concerns appear to have been borne out by documents revealed last year in the breach of Italy-based Hacking Team, which showed the company was selling exploits to Sudan and other regimes with a record of human rights abuses.

And security systems from Blue Coat were resold to a number of repressive states, including Syria's Assad regime—which may have used the software to identify and target opposition activists. But the framework the State Department brought back from Wassenaar contained language that "was too broad and would harm cybersecurity," Harley Geiger, director of public policy at the security and penetration testing tools vendor Rapid7, told Ars. The initial rules proposed under new provisions negotiated by the State Department in 2013—which arose from trade restrictions introduced initially by France and the United Kingdom—were intended to prevent "bad" countries from obtaining technology like "IMSI catchers" for intercepting cell phone calls, network surveillance tools, and spyware.

But the language would have placed export licensing controls on a broad range of technology, software, and services related to legitimate computer security, including systems specifically designed to block malware, penetration testing tools, and possibly even security training. The same sort of rules once restricted the export of commercial-grade encryption, placing it under International Traffic in Arms Regulations (ITAR).

The Perl code for RSA encryption was famously printed on a T-shirt in protest of its classification as an "ITAR controlled munition." The implementation that Commerce proposed, Geiger explained, may have prevented companies from sharing information about potential exploits with overseas subsidiaries.

Companies that provide penetration testing services, such as Rapid7, would run into difficulty providing those services overseas. "The number of licenses you would have to apply for normal cybersecurity operations would multiply greatly," Geiger said. Normally, US regulations implementing Wassenaar protocols are simply issued.

But the Commerce Department's Bureau of Industry and Security (BIS) took the unusual step of opening proposed exploit rules up for public comment.

The immediate feedback to the first set of rules proposed was almost universally negative.

The rules' language was flawed partly because of its broad interpretation of what "intrusion" technology is.

The proposed regulations swept up defensive software systems as well, because such systems include information about exploits. The Electronic Frontier Foundation, the Center for Democracy and Technology, and Human Rights Watch joined in submitting comments about the proposed rules.

The groups warned that the rules were overly broad—they placed restrictions on cybersecurity software, for example, because the software "may incorporate encryption functionality." Rapid7's team commented that the proposed rules would "establish controls on 'technology required for the development of 'intrusion software,' which would regulate exports, re-exports and transfers of technical information required for developing, testing, refining, and evaluating exploits and other forms of software meeting the proposed definition of 'intrusion software.' This is the type of information and technology that would be exchanged by security researchers, or conveyed to a software developer or public reporting organization when reporting an exploit." The rule, they argued, would have a chilling effect on security research. The outcry led to congressional hearings on the proposed rules' impact, which led an inter-agency panel to reconsider the rules. "Today's announcement represents a major victory for cybersecurity here and around the world," said Rep. Jim Langevin (D-R.I.), who led the Congressional effort to stop the proposed rules, in a statement issued on Tuesday after that panel concluded the rules should be re-negotiated. "While well-intentioned, the Wassenaar Arrangement's 'intrusion software' control was imprecisely drafted, and it has become evident that there is simply no way to interpret the plain language of the text in a way that does not sweep up a multitude of important security products." The EFF was similarly enthusiastic about the decision, posting news of the shift under the headline, "Victory!" But while optimistic, Geiger—who joined Rapid7 from the Center for Democracy and Technology in January—remains cautious about how much will be renegotiated.

The agreements cover intrusion "technology, software and systems" as separate categories, and the wording of the decision he had seen didn't indicate whether all three would be addressed—or only "technology" (hardware). "These controls should be removed completely to enable legit cybersecurity activity," Geiger said. "But if it's not possible, we think the reforms should be comprehensive, and not just include technology but also software and systems and change the definition of intrusion software."
From the president of RSA to the director of the NSA, the RSA Conference keynotes all stressed the need to protect civil liberties and grow the infosec workforce.
With the average cost of a data breach running at $3.8 million, IBM plans to buy Resilient Systems to beef up its security incident-response capabilities. With security as one of its strategic imperatives, IBM made a series of moves to bolster its security incident-response capabilities, including announcing its intent to acquire Resilient Systems, a provider of a popular incident-response platform. Resilient Systems, whose CTO Bruce Schneier is a well-known cryptographer and computer security and privacy specialist, develops and markets a security incident-response platform that automates the process of responding to cyber-security breaches.

The addition of Resilient expands IBM's capabilities into the incident-response space; to date, Big Blue has been more active in security threat detection and prevention. Financial terms of the deal, which IBM announced at the RSA Conference 2016 in San Francisco, were not disclosed. "We have a nice portfolio; it's like an immune system we've been putting together for the prevention and detection of security threats, primarily in software and services," Marc van Zadelhoff, general manager of IBM Security, told eWEEK. "What this acquisition does is it really helps us double down in the area of response.

Detect and prevent is one area that we spend a lot of time in." However, IBM has had an existing services team you could call in when there was an attack—sort of like a "Ghost Busters" incident-response team, van Zadelhoff said. "But we're announcing Resilient Systems will be joining us," he said. "And they are the leading incident-response platform.
It's a real nice fit on top of our portfolio. Our 6,000 QRadar customers have been asking us to get more into this area.

There's also our BigFix, Guardium and our managed services team that will all be leveraging this capability as Resilient comes on board." IBM is in the midst of a transformation to focus on a core set of growth imperatives: cloud, analytics, mobile, social and security (CAMSS). "We've seen those imperatives grow quickly over the last couple of years," van Zadelhoff said. "We launched the security business unit about four and a half years ago.
It crosses software and services and is focused toward the CISO [chief information security officer]. We've become one of the biggest enterprise players, and I see no lack of appetite by IBM to continue to invest and help us grow this business." IBM Security has been building up its business over the last couple of years. In 2015, the unit became a $2 billion business for IBM, grew 12 percent, and hired 1,000 people over the past year to amass well over 6,000 people in the unit. IBM Security is growing at about two times the market average, and in prevention and detection, Big Blue already is the market leader, van Zadelhoff said. Meanwhile, the security market is consolidating, with many of the pure-play providers beginning to struggle. "So we're already the leader in the one big pillar in the market," he said. "Incident response is the other, and we're investing in the leader with this. We're going after the next segment of the market very aggressively." The Resilient Systems team consists of about 100 people situated just across town from IBM Security's headquarters in the Kendall Square area of Cambridge, Mass., van Zadelhoff said. "Our headquarters is in Cambridge; they're based in Cambridge and their management team is outstanding," he noted. "They have some really brilliant players like Ted Julian, Bruce Schneier and John Bruce.

These guys are known players in the space.

There are other players in the incident response space, but these guys are leading the pack." 
A key justification for last week's court order compelling Apple to provide software the FBI can use to crack an iPhone belonging to one of the San Bernardino shooters is that there's no other way for government investigators to extract potentially crucial evidence from the device. Technically speaking, there are ways for people to physically pry the data out of the seized iPhone, but the cost and expertise required and the failure rate are so great that the techniques aren't practical. In an article published Sunday, ABC News lays out two of the best-known techniques.

The first one is known as decapping.
It involves removing the phone’s memory chip and dissecting some of its innards so investigators can read data stored in its circuitry. With the help of Andrew Zonenberg, a researcher with security firm IOActive, here's how ABC News described the process: In the simplest terms, Zonenberg said the idea is to take the chip from the iPhone, use a strong acid to remove the chip’s encapsulation, and then physically, very carefully drill down into the chip itself using a focused ion beam.

Assuming that the hacker has already poured months and tens of thousands of dollars into research and development to know ahead of time exactly where to look on the chip for the target data -- in this case the iPhone's unique ID (UID) -- the hacker would, micron by micron, attempt to expose the portion of the chip containing exactly that data. The hacker would then place infinitesimally small "probes" at the target spot on the chip and read out, literally bit by bit, the UID data.

The same process would then be used to extract data for the algorithm that the phone normally uses to "tangle" the UID and the user's passkey to create the key that actually unlocks the phone. From there the hacker would load the UID, the algorithm and some of the iPhone's encrypted data onto a supercomputer and let it "brute force" attack the missing user passkey by simply trying all possible combinations until one decrypts the iPhone data. Since the guessing is being done outside the iPhone's operating system, there's no 10-try limit or self-destruct mechanism that would otherwise wipe the phone. But that’s if everything goes exactly right.
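The offline brute-force step described above can be sketched in a few lines. This is purely illustrative: `derive_unlock_key` and `try_decrypt` are hypothetical stand-ins for Apple's actual UID-tangling and decryption routines, which are not public in this form; the point is only that, outside the phone, nothing rate-limits the guessing.

```python
import hashlib
import itertools

def derive_unlock_key(uid: bytes, passcode: str, rounds: int = 10_000) -> bytes:
    # Illustrative stand-in for "tangling" the hardware UID with the user
    # passcode; Apple's real key-derivation algorithm is not modeled here.
    return hashlib.pbkdf2_hmac("sha256", passcode.encode(), uid, rounds)

def brute_force(uid: bytes, try_decrypt, rounds: int = 10_000):
    # Outside the phone's OS there is no 10-try limit or self-destruct,
    # so simply enumerate every possible 4-digit passcode.
    for digits in itertools.product("0123456789", repeat=4):
        passcode = "".join(digits)
        if try_decrypt(derive_unlock_key(uid, passcode, rounds)):
            return passcode
    return None  # extraction failed or passcode is longer than 4 digits
```

With only 10,000 candidate passcodes, even an expensive key derivation falls quickly once the work is moved off the device.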
If at any point there's even a slight accident in the de-capping or attack process, the chip could be destroyed and all access to the phone's memory lost forever. A separate researcher told ABC News it was unlikely the decapping technique would succeed against an iPhone.
Instead, it would likely cause the data to be lost forever.

A slightly less risky alternative is to use infrared laser glitching.

That technique involves using a microscopic drill bit to pierce the chip and then using an infrared laser to access UID-related data stored on it. While the process may sound like it was borrowed from a science fiction thriller, variations of it have been used in the real world.
In 2010, for instance, hardware hacker Chris Tarnovsky developed an attack that completely cracked the microcontroller used to lock down the Xbox 360 game console. His technique used an electron microscope called a focused ion beam workstation (then priced at $250,000 for a used model) that allowed him to view the chip in the nanometer scale. He could then manipulate its individual wires using microscopic needles. While such techniques are technically possible against the iPhone, in this case, their practicality is severely lacking.

For one thing, the chances of permanently destroying the hardware are unacceptably high.

And for another, the long and extremely costly hacks would have to be carried out from scratch on any additional iPhones government investigators wanted to probe. By contrast, the software a federal magistrate judge has ordered Apple to produce would work against virtually all older iPhones with almost no modifications. Yes, Apple would have to alter the digital signature to make the software run on different devices, but that would require very little investment. More importantly, the software Apple provided in the current case would all but guarantee that Apple would be expected to provide similar assistance in future cases.

And even when a suspect's iPhone used "secure enclave" protections not available on the 5C model in this case, the firmware running on the underlying chips can be updated.

Given the precedent that would be set in the current case, it wouldn't be much of a stretch for a court to require Apple to augment the software with functions for bypassing Secure Enclave protections. The process laid out in Sunday's article is interesting, and technically it shows that it may be possible for the FBI to extract the data stored on the seized iPhone without Apple's assistance.

But for the reasons laid out, it will never be seriously considered, let alone used.
In an election year, particularly one in which we’re all bracing for a downturn, the 1992 Clinton campaign’s famous catchphrase “It's the economy, stupid!” can’t help but come to mind.

Apply that same commonsense thinking to computer security and you get: “It's the data, stupid!” We suffer from a dearth of data and quality analytics on how we’re exploited and compromised. We know most of the likely root causes: unpatched software, social engineering, eavesdropping, password cracking/guessing, data leaks, misconfiguration issues, denial of service, insider threats, zero days, and so on.

But we lack good metrics on how often they occur inside our environment. We understand that we're getting exploited by malware -- we may even have the number of detected and removed malware programs in a given period -- but we probably have little data on how many times social engineering let a bad guy in. We may know every unpatched program in our environment, but probably not which one is letting in the most damage. We simply don't know how the threats rank against one another. The upshot is that we respond to crisis events and gut feelings.
It’s about time we started to mature our defenses by asking for data, good metrics, better reports, and ultimately accountability.
If you really think about it, our lack of data should be embarrassing to us. How can any organization perform risk assessment when the threats and risks haven't been quantified?

Start collecting data now

I spent the first three decades of my career wondering why all the wonderful computer security defense tactics, strategies, and tools didn't work to make our computers safer for work and play. I've decided to dedicate the last two decades of my career to forcing IT security environments to think about and collect more data. Every other part of the organization runs on data, from HR to finance to building maintenance.
I can probably ask any janitor in any building how many rolls of toilet paper are used in their building each week and receive an accurate answer.

But ask any IT security person what their company’s biggest security threat is, backed by data, and you’ll usually get a puzzled stare. The Holy Grail of IT security defense data is the number of times a particular root cause exploit was used to successfully compromise your enterprise.
If you got a report ranking those root causes, you might be able to start to focus everyone on the risks that matter most. Of course you'd need to take root cause exploit occurrences and multiply them by the damage they caused to get a better list, but even with this list alone, you'd have actual data from which to work smarter. The idea of ranked data needs to become pervasive throughout IT security in every organization.
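A ranked report of that kind can be assembled from whatever incident records already exist. A minimal sketch, where the record fields and damage figures are illustrative assumptions rather than any particular ticketing system's schema:

```python
from collections import Counter

# Hypothetical incident records; in practice these would come from your
# ticketing system, SIEM, or forensics reports.
incidents = [
    {"root_cause": "unpatched software", "damage": 50_000},
    {"root_cause": "social engineering", "damage": 120_000},
    {"root_cause": "unpatched software", "damage": 30_000},
    {"root_cause": "password guessing",  "damage": 5_000},
    {"root_cause": "social engineering", "damage": 80_000},
]

def ranked_root_causes(incidents):
    counts = Counter(i["root_cause"] for i in incidents)
    damage = Counter()
    for i in incidents:
        damage[i["root_cause"]] += i["damage"]
    # Rank by total damage: occurrences alone are not enough,
    # so weight each root cause by the harm it caused.
    return sorted(
        ((cause, counts[cause], damage[cause]) for cause in counts),
        key=lambda row: row[2],
        reverse=True,
    )

for cause, n, dmg in ranked_root_causes(incidents):
    print(f"{cause}: {n} incidents, ${dmg:,} damage")
```

Even a rough tally like this turns "everything is critical" into an ordered list you can act on.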

Don’t bring me an unranked list of anything.
I want ranked relevancy. Want me to start fixing vulnerabilities? Give me the data.

And I don’t mean the number of vulnerabilities.

That number means little.

Also, don’t tell me it’s critical. Nearly everything in our world is critical.
I think three-fourths of the vulnerabilities on CVE lists are rated critical. No, what I want to hear is how often vulnerability X is successfully exploited in the environment, especially compared with other vulnerabilities. I may have 1,000 unpatched Windows servers, but if the environment is being exploited more often through unpatched Compaq Insight Manager, then I need to focus on the latter before the former. Some readers will tell me it's impossible to get this sort of data.
In some cases it may be difficult, but seldom impossible.
I know we can collect far more data than we are gathering today.
In most cases we aren’t even trying. Sometimes a “best effort” gives us enough to get started. Even more important is to establish a culture where data is king.

Gut feeling is fine.

But back it up with data before you act on it.

Pitching to management

Data is the language of CIOs and CISOs. How can you run to a CIO or CISO asking for money to fund security technology or best practices without risk-relevancy data? By the time you step into that office you should have hard data to support your bullet points. Imagine walking up to your CISO and saying, "We identified X root cause as behind 49 percent of our successful exploits.
It’s our No. 1 problem.

By reducing this single cause we can get rid of nearly 50 percent of our current computer security risk.
I’d like to put together a project team to explore how we can best mitigate this issue. Here’s the data and here’s how we will measure future success.” I can’t imagine a CISO not being knocked out by such an approach incorporating real data, focus, and accountability. It’s a myth that management isn’t giving us the resources we need to do our jobs better.

The reality is that we haven't been providing the background data to make the kind of well-supported arguments CIOs and CISOs are accustomed to hearing.

How to get started

What data you start to collect depends upon many factors, beginning with what data you already collect and where the gaps are.
In general, a good data event that ends up creating security alerts should have the following attributes:

- High likelihood that an occurrence indicates unauthorized activity
- Either a single occurrence or an unexpectedly large number of events in a given time period indicates a high chance of maliciousness
- Low number of false positives
- An alert occurrence always results in an investigative/forensics response

If you haven't guessed by now, I've become a data warrior.
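The "unexpectedly large number of events in a given time period" attribute is simple to prototype. A minimal sketch, assuming events arrive as timestamps; real deployments would use a SIEM's correlation rules rather than hand-rolled code like this:

```python
from collections import deque

class ThresholdAlert:
    """Fire an alert when more than `limit` matching events
    occur within a sliding window of `window_seconds`."""

    def __init__(self, limit: int, window_seconds: float):
        self.limit = limit
        self.window = window_seconds
        self.times = deque()

    def record(self, timestamp: float) -> bool:
        self.times.append(timestamp)
        # Drop events that have aged out of the window.
        while self.times and timestamp - self.times[0] > self.window:
            self.times.popleft()
        # A burst above the limit is the kind of occurrence that
        # should always trigger an investigative response.
        return len(self.times) > self.limit

# Example: alert on more than 3 failed logins within 60 seconds.
failed_logins = ThresholdAlert(limit=3, window_seconds=60)
```

The same pattern works for any event source -- failed logins, outbound connections, privilege escalations -- once you have decided which occurrences are worth alerting on.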
I’m already meeting with CIOs, CISOs, and the rest of my team members to ask them what data they want to see that they don’t see today.
I’m meeting with my data people to find out what they have and what they think we might need.

A data-driven computer security defense is a new paradigm. We’re going to need all the help we can get. Next time someone brings you an unranked list of things to do or fix, ask about the relevancy and data. Make it a habit.
Lincolnshire County Council hit by £1m malware demand. 30 January 2016, last updated at 22:42 GMT. Lincolnshire County Council's computer systems have been closed for four days after being hit by computer malware demanding a £1m ransom. Ransomware encrypts data on infected machines and unscrambles it only if victims pay a fee. The council said it was working with its computer security provider to apply a fix to its systems. Phillip Norton reports.
Australia is blaming China for a cyber attack on a supercomputer at the Bureau of Meteorology (BoM) that has links to multiple government agencies, including the Department of Defence. The "massive" attack has raised fears that potentially sensitive national security information may have been compromised, according to the Australian Broadcasting Corporation (ABC). The report said "multiple official sources" have confirmed the attack, that it is expected to cost millions of dollars and possibly take years to plug the security breach, and that government officials are "confident" the attack came from China. The BoM supercomputer contains a lot of research, but could be viewed as a potential gateway to a host of government agencies that hold even more sensitive information. In a statement, the BoM said it does not comment on security matters. "Like all government agencies, we work closely with the Australian Government security agencies. The bureau's systems are fully operational and the bureau continues to provide reliable, ongoing access to high quality weather, climate, water and oceans information to its stakeholders," the statement said. There are no details on which systems were affected, whether any information was taken, or why China was seen as the likely culprit. There was no immediate comment from the Chinese embassy in Canberra. Among other services, the BoM provides climate information for commercial airlines and shipping, analyses national water supplies, gathers climate information and works closely with the defence department. China has repeatedly been accused of using cyber attacks to spy on foreign states and companies. The US has said the issue has put an "enormous strain" on the two countries' relationship. Chinese officials routinely deny supporting cyber espionage and say China is itself a victim of hacking.
The Australian Federal Police declined to comment, while the Department of Defence said in a statement that it was barred by policy from commenting on specific cyber security incidents.
What happens when you are forbidden from disclosing that backdoor you found?
LinkedIn's head security honcho shares his proactive security strategy, which begins with everyone buying in. PALO ALTO, Calif.—How safe is your company from malware attacks and security breaches? As the technology and methods behind cyber-attacks are constantly evolving, it's virtually impossible for any company to accurately say it's completely safe, but there are steps you can take to minimize threats. Ganesh Krishnan, who runs security at the popular job site and social network LinkedIn, shared some of the lessons he's learned over a 20-year career in security, including stints at Intel and Yahoo. His "tech talk" was part of a meet-up here this week at online payments firm WePay. The first point he emphasized is that security teams are by definition outnumbered.  "There are a lot more hackers than security people. Security has to be everyone's responsibility," he said. This maxim extends to both technical and non-technical employees, as both are needed to help defend against a growing range of threats including so-called phishing attacks. Phishers use social engineering, email and social media to gain access to corporate networks. For example, a phisher might contact a relatively low-level employee under false pretense (e.g., pretending to be an authorized outside contractor), guess the employee's password and get into the network. In another example, the massive security breach at Target in 2013 was traced back to an HVAC service provider whose credentials were stolen, allowing the attackers to compromise the retail giant's network. "Even [the accounts of] salespeople and non-engineers can be compromised. It's not hypothetical, it's happened," said Krishnan. Although there have been many high-profile breaches, Krishnan says a lot has changed for the better. For example, it used to be considered a no-no to let someone outside your company test your software for security weaknesses. "Not anymore. 
Companies are rewarding the people who find vulnerabilities if it's done ethically and with responsible disclosure," he said. But there still needs to be a shift in the mindset of many developers when it comes to software. "Most engineers want to get the software finished and get the features done," Krishnan said, "but when you write code, think about how someone could abuse it." Because that mindset hasn't been in place, Krishnan says security issues are so pervasive that "if you think you haven't been hacked yet, you just don't know it." If that sounds dire, Krishnan doesn't apologize for raising alarm bells, but says what's needed to minimize threats is a comprehensive strategy that covers not only prevention but also the systems and processes to detect and respond to breaches. His key tips include logging everything and keeping that data for at least a year. This includes firewall, virtual private network (VPN), access and antivirus logs. "When there's an issue, having logs will prove to be extremely useful because you can see why you were hacked and take administrative action if necessary," he said.

How to Prevent Phishing

On the issue of phishing, Krishnan notes it can happen to anyone, but employee training can head off the threat. Rather than simply explain the danger of phishing to employees, he recommends "live training" to sensitize them. "You'll be surprised how many employees will give up their credentials" in a pseudo-phishing attack used for training. "And it's not just a password that's compromised," he said. "It's an attack on the network to plant malware once they can get someone to a bad site or install something." Krishnan says the training has proved very useful. "Someone hears person X fell for it, and they don't want to get caught themselves," he said.
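Krishnan's advice to log everything and retain it for at least a year can be put into practice with standard tooling. As a minimal sketch only, here is what a logrotate policy along those lines might look like on a Linux host; the file paths and the choice of weekly rotation are illustrative assumptions, not details from the talk:

```conf
# /etc/logrotate.d/security-logs -- illustrative year-long retention policy
# Rotate firewall, VPN and auth logs weekly; keep 52 rotations (~1 year).
/var/log/firewall.log /var/log/openvpn.log /var/log/auth.log {
    weekly
    rotate 52
    compress
    delaycompress
    missingok
    notifempty
}
```

In practice many organizations ship these logs to a central SIEM or log store rather than (or in addition to) keeping them on each host, so that they survive a compromise of the machine that generated them.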