
Tag: Do-It-Yourself

From the start, Docker has been built by synthesizing elements of Linux and repackaging them in useful ways.

Docker’s next trick is the reverse: using container elements to synthesize distributions of Linux. Today Docker unveiled LinuxKit and the Moby Project, a pair of projects intended to let operating system vendors, do-it-yourselfers, and cutting-edge software creators build container-native OSes and container-based systems.

The do-it-yourself kit LinuxKit, which Docker has been using internally to build Docker Desktop and Docker Cloud, uses containers as building blocks for assembling custom Linux distributions.
It provides a minimal system image—35MB at its absolute smallest—with all its system daemons containerized.

Both the system components and the software the image ships with are delivered in containers.
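LinuxKit builds an image from a YAML specification that lists the kernel, init binaries, and the containerized services to include. The sketch below follows the general shape of the project's published examples; the image tags are placeholders (real files pin content digests), and the exact schema can differ between LinuxKit versions:

```yaml
# Hypothetical LinuxKit build file: every system component below is a container image.
kernel:
  image: linuxkit/kernel:4.9.x     # placeholder tag
  cmdline: "console=ttyS0"
init:
  - linuxkit/init:latest           # placeholder tags; real files pin digests
  - linuxkit/runc:latest
onboot:
  - name: dhcpcd                   # one-shot container run at boot
    image: linuxkit/dhcpcd:latest
services:
  - name: sshd                     # long-running containerized daemon
    image: linuxkit/sshd:latest
```

The toolchain assembles these images into a bootable artifact. Because every daemon is itself a container, swapping or removing a system service is an edit to this file rather than a change to a base distribution.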
Communications failure leads to zero day, late patch, natch.

Millions of do-it-yourself websites built with the Wix web maker were at risk of hijack thanks to a brief zero-day DOM-based cross-site scripting vulnerability. Wix boasts some 87 million users, among them two million paying subscribers.

Contrast Security researcher Matt Austin (@mattaustin) dug up the flaw, which he rates as severe, and attempted to get Wix to patch it under quiet private disclosure starting in October. He says that, despite three follow-up requests, he heard nothing back from the web firm beyond an initial acknowledgement of the disclosure on 14 October. Checks appear to confirm the holes were quietly shuttered after Austin's public disclosure. Wix has been contacted for comment.

"Wix.com has a severe DOM cross-site scripting vulnerability that allows an attacker complete control over any website hosted at Wix," Austin says in his disclosure. "Simply by adding a single parameter to any site created on Wix, the attacker can cause their JavaScript to be loaded and run as part of the target website.

"Administrator control of a wix.com site could be used to widely distribute malware, create a dynamic, distributed, browser-based botnet, mine cryptocurrency, and otherwise generally control the content of the site as well as the users who use it."

More attack scenarios awaited attackers who either found the flaw before Austin or spotted his disclosure before Wix could patch it. Austin says attackers could have:

- Changed the content of a hosted website for targeted users;
- Challenged the user for their Wix username and password;
- Challenged the user for their Facebook or Twitter username and password;
- Attempted to trick users of the website into downloading and executing malware;
- Generated ad revenue by inserting ads into website pages;
- Spoofed bank web pages and attempted to have users log in;
- Made it difficult or impossible to find and delete the infection; and
- Created new website administrator accounts.
Austin supplied then-working proof-of-concept links showing the DOM cross-site scripting in action against Wix template sites. He also provided the five steps attackers would need to spin the vulnerability into a worm hitting scores of sites. The public disclosure, made sooner than even the fastest industry-standard 30-day bug-fix window, should serve as a reminder to all businesses with an online presence to have a process in place for handling vulnerability disclosures.

This should preferably include a nominated staffer to handle disclosures, along with a security@ email address visible on the business website. Researchers should be offered regular patch updates, and lawyers kept firmly at bay. ®
The tools used in detecting intrusions can generate an overwhelming number of alerts, but they're a vital part of security. Traditional security focused exclusively on prevention and policy enforcement. While these mechanisms are still important, increasing attention is being paid to detection, in order to more rapidly spot potentially malicious activity that has circumvented preventative security. The leading security teams are shifting their mission to detecting a potential attack and responding appropriately and quickly.

To achieve such a mission, organizations typically implement the following detection (or incident-response) process:

1. Start with an alert.
2. Investigate the alert and determine whether it is benign or unwanted.
3. Make a proper decision and remediate or respond.

This detection process can be implemented at different levels of detail in various security organizations.
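The three-step process above can be sketched as a minimal triage loop. All names, thresholds, and rules here are illustrative placeholders, not from any specific product:

```python
# Minimal sketch of the alert -> investigate -> respond loop.
# Severity scale and heuristics are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Alert:
    source: str          # e.g. "IDS", "SIEM"
    severity: int        # 1 (low) .. 10 (high)
    corroborated: bool   # seen by more than one independent tool?

def investigate(alert: Alert) -> str:
    """Classify an alert as 'benign' or 'unwanted' (toy heuristic)."""
    if alert.severity >= 7 or alert.corroborated:
        return "unwanted"
    return "benign"

def respond(alert: Alert) -> str:
    """Decide on a remediation action for an unwanted alert."""
    return "isolate-host" if alert.severity >= 9 else "open-ticket"

def triage(alerts):
    actions = []
    for alert in alerts:                    # 1. start with an alert
        if investigate(alert) == "unwanted":  # 2. investigate
            actions.append(respond(alert))  # 3. decide and respond
    return actions

print(triage([Alert("IDS", 9, False), Alert("SIEM", 3, False)]))
# prints ['isolate-host']
```

In practice each step hides most of the difficulty: the investigation step in real teams is manual work against low-fidelity signals, which is exactly the inefficiency the rest of this article measures.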
I have seen implementations ranging from a single-person security team all the way to large security operations centers. Of course, an effective detection strategy isn’t easy to achieve with a limited security budget and constrained staff.

A major factor limiting the effectiveness of the detection process is that security teams have to deal with the massive inefficiency and imprecision of security tools.

For most organizations, the total alert volume is far beyond what any group can handle. Most alerts aren’t useful, because they either don’t highlight anything of importance or are nonspecific or unsubstantiated, making an already impossible situation even worse.

This situation is created by three main factors. First, most tools are still primarily malware-oriented and based on technical artifacts, such as a signature, hash, domain, or a predefined list of software behaviors.

This means that alerts will be firing all the time. Second, many tools are designed to detect isolated “trees” and don’t understand that they are part of an overall “forest” of an orchestrated campaign being run by an attacker.
It’s common to see a singular event — let’s say a scan inside the network — set off hundreds or even thousands of individual alerts. Third, many traditional tools are based on a do-it-yourself mentality.

They’re created assuming a completely manual style of operational security. This partly reflects the fact that most tools are rather low in fidelity and not intended to offload the work of a security professional by conducting some level of investigation or making a judgment call. If organizations are going to be successful with detection, they need to measure and improve their efficiency.

The following questions can help develop an effective detection strategy, or evaluate a specific new or existing detection tool:

- Detection coverage: If an active attacker were operating inside my network, would my systems see an operational activity and set off an alert?
- Detection quantity: Can my team investigate all the relevant alerts? Is it clear which are relevant?
- Detection quality: When an alert is investigated, can I reach a conclusion?

Detection coverage is determined by what is being sought — is a tool looking for malware? Reconnaissance? Lateral movement? The best approach here is to define a few use cases that you’d like to detect and make sure that they’re covered. Most data breaches are ultimately the result of an attacker operating inside the internal network.

The use cases can either be defined theoretically or tested live as an attack simulation or penetration test. Detection quality and quantity can be measured in a very concrete way using standardized metrics:

- Detection quantity (efficiency): Count the total number of alerts per day and calculate how many alerts you have per 1,000 hosts.
- Detection quality (accuracy): The percentage of alerts handled by the team that are useful (not whitelisted or ignored), over the total number of alerts.

A Ponemon Institute study of 630 enterprises showed that the average number of alerts a security team receives from its security tools — firewalls, intrusion-detection systems, intrusion-prevention systems, sandboxing systems, security information and event management (SIEM) systems, etc. — was a shocking 172 per 1,000 hosts per day (2,420 alerts for an average 14,000 endpoints in a network). Where does one start with that volume of alerts, particularly given that the large majority are false positives? The same study shows that, on average, two-thirds of the security staff’s time is wasted because their tools are grossly inefficient, so only 4% of alerts are typically investigated, leaving the possibility that something important sits among the roughly 96% left untouched. This is clearly an inefficient, inaccurate process, because there is no good way to guarantee a focus on the right alerts.
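The two metrics, applied to the Ponemon figures quoted above, can be checked with a few lines of arithmetic (this is a worked check of the numbers in the text, not additional data from the study):

```python
# Efficiency and accuracy metrics from the article, applied to the
# quoted Ponemon figures: 2,420 alerts/day across 14,000 endpoints.

def alerts_per_1000_hosts(total_alerts_per_day: float, hosts: int) -> float:
    """Detection quantity (efficiency): alerts per day per 1,000 hosts."""
    return total_alerts_per_day / hosts * 1000

def accuracy(useful_alerts: int, total_alerts: int) -> float:
    """Detection quality (accuracy): share of handled alerts that are useful."""
    return useful_alerts / total_alerts

efficiency = alerts_per_1000_hosts(2420, 14_000)
print(round(efficiency, 1))   # prints 172.9; the article quotes 172

# Only 4% of alerts are typically investigated:
investigated = efficiency * 0.04
print(round(investigated, 1))  # prints 6.9; the article quotes roughly 6.8
```

The same two functions make tool comparisons concrete: run each candidate tool for a week, feed its counts through these formulas, and compare like with like.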

There are examples where signs of an attacker were hidden in a flood of alerts.
In the Target breach, according to reported details, signals were detected by existing systems but buried under thousands of similar alerts without any way to know that this particular alert was important. It’s not enough to have a haystack with some needles that can only be discovered with hindsight after it’s too late; we need a process that can get to the important needles to proactively detect attacks, before damage is done. Realistically, security teams can actively handle a handful of alerts per 1,000 hosts per day.

This is based on the Ponemon data (4% of handled events equals 6.8 alerts/1,000 hosts/day) and also on my experience with different chief information security officers.
If a process generates many more, they will be ignored. Using metrics enables evaluation and comparison of different tools and methods, including overlapping products.

For example, you can compare SIEM-based event correlation to network traffic analytics, sandbox-based detection, user and entity behavior analytics, endpoint detection, malware analysis, and so on, with these questions:

- Do the new alert types add significant coverage or attack-detection capability?
- How many alerts per day does the system normally generate? Can my team investigate that amount?
- What percentage of the alerts can be handled, and are relevant to handle?

With this perspective, we can ask questions such as, “Will this increase detection accuracy?” “How is this security tool making the team more efficient?” and “Can this security solution reduce staffing requirements?”

Giora Engel, vice president, product & strategy at LightCyber, is a serial entrepreneur with many years of technological and managerial experience.

For nearly a decade, he served as an officer in an elite technological unit in the Israel Defense Forces, where he initiated and ...