
Tag: BGP

Google, Facebook, Apple, and Microsoft all affected by “intentional” BGP mishap.
Cisco has updated its IOS XE software to address a denial of service vulnerability in its implementation of BGP over an Ethernet VPN.
A vulnerability in the Border Gateway Protocol (BGP) over an Ethernet Virtual Private Network (EVPN) for Cisco IOS XE Software could allow an unauthenticated, remote attacker to cause the device to reload, resulting in a denial of service (DoS...
The BGP Path Validation draft standards were designed to ensure that Internet traffic flows only along digitally signed, authorized paths.
Routers should know their place One of the most persistent bugs in Internet infrastructure, route leaks in the border gateway protocol (BGP), is in the sights of a group of 'net boffins and their new Internet-Draft.…
Visa, MasterCard, and Symantec among dozens affected by "suspicious" BGP mishap.
Boffins say BGP is a threat to the crypto-currency Attacks on Bitcoin just keep coming: ETH Zurich boffins have worked with Aviv Zohar of The Hebrew University in Israel to show off how to attack the crypto-currency via the Internet's routing infrastructure.…
New service puts logic closer to users, aims to be "global load balancer" for apps.
2017-01 Security Bulletin: Junos: Denial of Service vulnerability in RPD (CVE-2017-2302). Product Affected: This issue can affect any product or platform running Junos OS. Problem: On Junos OS devices where the BGP add-path feature is enabled with 'send' ...
One billion-plus accounts stolen in one online heist.

The U.S. presidential election messed with by another country.

Corporate secrets stolen and released on the internet on a regular basis. More and more data held hostage by ransomware.
Stock markets routinely manipulated by hackers.

Denial-of-service attacks whacking websites all over the place. Will computer security ever get better? Or is this the way things are and we simply have to live with it? For a long time I’ve speculated that it would take a tipping-point event for the world to stop treating the horrible current state of security as business as usual.
It would take a major shutdown of most of the internet or the major stock exchanges for a day or longer. Nothing else would be shocking enough.

Everything else is business as usual. But maybe a global catastrophic event would not be enough. Maybe what we have now is what we have for the foreseeable future.
I’ve long worried that this might be the case, but I haven’t wanted to admit it as a realistic possibility.

The past is prologue

People and things change, but not so much.

The best indicator of future behavior is past behavior. Most real change is slow and nonlinear, and it happens unexpectedly.
I’ve been expecting computer security to get significantly better for three decades now.
It’s only gotten worse.
Sure, we’ve made progress in a few places, and we’re even arresting more big hackers.

But for the most part, the overall risk of something malicious happening is the same as or higher than before.

Nobody has a plan

The biggest evidence that we aren’t going to have a significantly more secure internet soon is that exactly zero big initiatives that could help are moving forward.
It seems the era of doing big things to the internet’s underlying infrastructure is dead. We are still relying on insecure protocols (Border Gateway Protocol, DNS, UDP) for most of the behind-the-scenes plumbing. More secure versions have existed for decades, yet the internet resists adopting them.

Things that could make the internet significantly safer aren’t going to be a reality anytime soon.

Acceptable risk

As bad as the risk is—essentially, kids and professional hackers can shut down big parts of the internet or steal anything they want at will—the world has responded with inaction.

This risk is acceptable compared to the cost of better securing the internet. This reminds me of a story Bruce Schneier wrote a while back. He said computer security professionals are mistaken if they think users don’t understand the risk of poor passwords. We professionals confuse the risk incurred by poor passwords (such as exposing a company’s most cherished intellectual property) with the risk to the user who chooses poor passwords (basically, none).

Whose fault is it anyway?

Do any of us know of a single person who was punished, much less fired, for using poor passwords? I don’t.
I’m sure it happens.
I’m sure someone used a “123456” password that led to malicious hacking and was held accountable for that stupidity.
I mean, companies lose hundreds of millions of dollars due to internet theft every year. Occasionally, someone must get punished for it besides the odd CIO. On the other hand, maybe it’s like the U.S. financial system, where blatant fraud and untenable risk decisions led to more than $1 trillion in capital going up in smoke, without a single person being successfully prosecuted (except for this guy). Even after the huge financial crisis, from which the world is still recovering, relatively weak regulations were put in place to stop it from happening again.
In the United States, those regulations (Dodd-Frank) are likely to be undone by the next Congress.

This shouldn’t surprise anyone: No one in government was fired for undermining regulations prior to the meltdown, which made the whole mess almost inevitable. The point is that the huge theoretical risk of bad internet security is acceptable to almost everyone … until it’s not.

Even if the worst happens, it’s unlikely anyone will actually get in trouble, much less fired.
If you think of risk management that way—the real way every human being measures it—then what we have is good enough. I don’t like this idea at all.

But I need to stop living in a dream world where everyone suddenly realizes how bad internet security is and actually demands something better.

The fact is, we could make the internet significantly more secure today for relatively low cost and most users would support it.

But lack of accountability means it’s not going to happen.
Apply best routing practices liberally. Repeat each morning

Solve the DDoS problem? No problem. We’ll just get ISPs to rewrite the internet.
In this interview Ian Levy, technical director of GCHQ’s National Cyber Security Centre, says it’s up to ISPs to rewrite internet standards and stamp out DDoS attacks coming from the UK.
In particular, they should change the Border Gateway Protocol, which lies at the heart of the routing system, he suggests. He’s right about BGP.
It sucks.

ENISA calls it the “Achilles’ heel of the Internet”.
In an ideal world, it should be rewritten.
In the real one, it’s a bit more difficult. Apart from the ghastly idea of having the government’s surveillance agency helping to rewrite the Internet’s routing layer, it’s also like trying to rebuild a cruise ship from the inside out. Just because the ship was built a while ago and none of the cabin doors shut properly doesn’t mean that you can just dismantle the thing and start again.
It’s a massive ship, it’s at sea, and there are people living in it. In any case, ISPs already have a standard to help stop at least one category of DDoS, and it’s been around for the last 16 years.

All they have to do is implement it.

Reflecting on the problem

Although there are many subcategories, we can break DDoS attacks down into two broad types.

The first is a direct attack, where devices flood a target with traffic directly. The second is a reflected attack. Here, the attacker impersonates a target by sending packets to another device that look like they’re coming from the target’s address.

The device then replies to the target, unwittingly participating in a DDoS attack that knocks it out. The attacker fools the device by spoofing the source of the IP packet, replacing its own address in the packet header’s source IP field with the target’s address.
It’s like sending a letter in someone else’s name.

The key here is amplification: depending on the type of traffic sent, the response sent to the target can be an order of magnitude greater. ISPs can prevent this by validating source addresses and using anti-spoofing filters that stop packets with incorrect source IP addresses from entering or leaving the network, explains the Mutually Agreed Norms for Routing Security (MANRS).
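The amplification arithmetic is simple. As a rough illustration (the packet sizes below are made-up figures in a plausible range, not measurements), a small spoofed query can turn into a far larger response aimed at the victim:

```python
# Back-of-the-envelope reflection/amplification arithmetic. The sizes are
# illustrative: the attacker pays only for the small spoofed query, while
# the reflector sends the large response to the victim's address.

def amplification_factor(request_bytes: int, response_bytes: int) -> float:
    """Victim-bound bytes generated per byte of attacker traffic."""
    return response_bytes / request_bytes

query_size = 64       # bytes the attacker sends, source address spoofed
response_size = 3200  # bytes the reflector then sends to the victim

print(f"amplification: {amplification_factor(query_size, response_size):.0f}x")
# prints: amplification: 50x
```

With many reflectors answering in parallel, that per-packet multiplier is what turns a modest botnet into a link-saturating flood.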

This is a manifesto produced by a collection of network operators who want to make the routing layer more secure by promoting best practices for service providers.

Return to sender

One way to do this is with an existing standard from 2000 called BCP 38. When implemented in network edge equipment, it checks whether an incoming packet’s source IP address is approved and linked to a customer (eg, within the appropriate block of IPs).
If it isn’t, it drops the packet.
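A minimal sketch of that ingress check, assuming hypothetical customer prefixes (real deployments push this logic into router ACLs or uRPF rather than software, but the test is the same):

```python
# Sketch of a BCP 38-style ingress check: a packet arriving on a customer
# port is permitted only if its source address falls inside one of the
# prefixes delegated to that customer. Prefixes here are hypothetical.
import ipaddress

# Prefixes the ISP has delegated to the customer behind this edge port.
CUSTOMER_PREFIXES = [
    ipaddress.ip_network("203.0.113.0/24"),
    ipaddress.ip_network("198.51.100.0/25"),
]

def permit_source(src_ip: str) -> bool:
    """True if the packet's source address belongs to this customer."""
    addr = ipaddress.ip_address(src_ip)
    return any(addr in prefix for prefix in CUSTOMER_PREFIXES)

print(permit_source("203.0.113.7"))  # True  - legitimate customer address
print(permit_source("192.0.2.99"))   # False - spoofed source, drop the packet
```

A spoofed packet claiming the victim's address fails the membership test at the first hop, so it never reaches a reflector at all.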

Corero COO & CTO Dave Larson adds, “If you are not following BCP 38 in your environment, you should be.
If all operators implemented this simple best practice, reflection and amplification DDoS attacks would be drastically reduced.” There are other things that ISPs can do to choke off these attacks, such as response rate limiting.
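In BIND, for example, response rate limiting is enabled with the `rate-limit` clause in `named.conf`; the numbers below are illustrative, not tuning advice:

```conf
options {
    rate-limit {
        responses-per-second 5;  // cap identical responses per client per second
        window 5;                // seconds over which rates are averaged
        slip 2;                  // turn every 2nd dropped answer into a truncated
                                 // reply, so legitimate clients can retry over TCP
    };
};
```

The `slip` setting matters: because spoofed clients never retry over TCP, truncated replies let real resolvers through while starving the reflection.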

Authoritative DNS servers are often used as the unwitting dupe in reflection attacks because they send more traffic to the target than the attacker sends to them.

Their operators can limit the number of responses using a mechanism included by default in the BIND DNS server software, for example, which can detect patterns in incoming traffic and limit the responses to avoid flooding a target.

The Internet of Pings

We’d better sort this out, because the stakes are rising.

Thanks to the Internet of Things, we’re seeing attackers forklift large numbers of dumb devices such as IP cameras and DVRs, pointing them at whatever targets they want. Welcome to the Internet of Pings. We’re at the point where some jerk can bring down the Internet using an army of angry toasters.

The vast range of source IP addresses also makes it more difficult for ISPs to detect and stop these attacks. We saw this with the attack on Dyn in late October, which could well be the largest attack yet, hitting the DNS provider with traffic from tens of millions of IP addresses.

Those claiming responsibility said that it was a dry run. Bruce Schneier had already reported someone rattling the Internet’s biggest doors. “What can we do about this?” he asked. “Nothing, really.”

Well, we can do something. We can implore our ISPs to pull their collective fingers out and start implementing some preventative technology. We can also encourage IoT manufacturers to build better security into their equipment. Let’s get to proper code signing later, and start by simply avoiding default login credentials. When a crummy malware strain like Mirai takes down half the web using nothing but a pre-baked list of usernames and passwords, you know something’s wrong.

How do we persuade IoT vendors to do better? Perhaps some government regulation is appropriate.
Indeed, organizations are already exploring this on both sides of the pond. Unfortunately, politicians move like molasses, while DDoS packets move at the speed of light.
In the meantime, it’s going to be up to the gatekeepers to solve the problem voluntarily.
Updated OpenStack Networking packages that resolve various issues are now available for Red Hat OpenStack Platform 9.0 (Mitaka) for RHEL 7. Red Hat OpenStack Platform provides the facilities for building a private or public infrastructure-as-a-service (IaaS) cloud running on commonly available physical hardware.

This advisory includes packages for:

* OpenStack Networking service

OpenStack Networking (neutron) is a virtual network service for OpenStack. Just as OpenStack Compute (nova) provides an API to dynamically request and configure virtual servers, OpenStack Networking provides an API to dynamically request and configure virtual networks.

These networks connect 'interfaces' from other OpenStack services (e.g. virtual NICs from Compute VMs).

The OpenStack Networking API supports extensions to provide advanced network capabilities (e.g. QoS, ACLs, network monitoring, etc.).

This update addresses the following issue:

* Prior to this update, the `Fail` mode on OVS physical bridges was not set, defaulting to `standalone`. This meant that when `ofctl_interface` was set to `native` and the interface became unavailable (due to heavy load, OVS agent shutdown, or network disruption), the flows on physical bridges could be cleared. Consequently, physical bridge traffic was disrupted. With this update, the OVS physical bridge fail mode has been set to `secure`.
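On a deployed node, the fail mode this update configures can be inspected, and set manually if needed, with `ovs-vsctl` (the bridge name `br-ex` below is illustrative; use your deployment's physical bridge):

```shell
# Inspect the current fail mode of a physical bridge (expect "secure"
# after this update is applied).
ovs-vsctl get-fail-mode br-ex

# Manual equivalent of the fix: in secure mode, OVS will not fall back
# to standalone (normal-switch) flows if the controller is unreachable.
ovs-vsctl set-fail-mode br-ex secure
```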

As a result, flows are retained on physical bridges. (BZ#1372369)

Red Hat OpenStack 9.0 for RHEL 7

SRPMS:
openstack-neutron-8.1.2-5.el7ost.src.rpm
    MD5: 7a03c92f63add23df2c93848bd867bf6
    SHA-256: 347f26f914b4cd65acf26cbf098fb396ecca112d560e4ee90eaaab30f2fe75e2

x86_64:
openstack-neutron-8.1.2-5.el7ost.noarch.rpm
    MD5: 95e7f37a1a06262772a06fe268b26159
    SHA-256: 2eff67fd7218933cf832f835520b3555560222524d8c004c8ebacfc286d00c19
openstack-neutron-bgp-dragent-8.1.2-5.el7ost.noarch.rpm
    MD5: f6067ed0ccf4fb6bdd05ab454649ccdb
    SHA-256: c481d4143b709d92341ad9b15542216552194ce947cc0e0388cb848744722c1b
openstack-neutron-common-8.1.2-5.el7ost.noarch.rpm
    MD5: 58ae59f4c3c526aa81c73da5f5a052aa
    SHA-256: 840169960f1b4522808281310591e4c6fe9db0464e08d1a9182b8fceb0ef0449
openstack-neutron-linuxbridge-8.1.2-5.el7ost.noarch.rpm
    MD5: efd957cc875e087be43787ede19d17a2
    SHA-256: f107a94c4fc427add3a74ce3201e41c8b9f059b648ff99519578405d158afab8
openstack-neutron-macvtap-agent-8.1.2-5.el7ost.noarch.rpm
    MD5: 8a486748edf094b20fb42e531dd8fdbb
    SHA-256: b4b1987a4524831daec072012e5aa07216b974e90bcbcb40173cabbabbbcf31e
openstack-neutron-metering-agent-8.1.2-5.el7ost.noarch.rpm
    MD5: 1a6e0366baede5d288c5c643317b2d58
    SHA-256: d47afe271058d0a26cf855f3c42ebf07655229ccf65474ec0ec8eff165308164
openstack-neutron-ml2-8.1.2-5.el7ost.noarch.rpm
    MD5: fecbc48e87e1bc17e35a7b90cd5e1883
    SHA-256: afc5bb9e9873ade54a8d64570274b555a7aef82e33685c3ace58c542a69e34c1
openstack-neutron-openvswitch-8.1.2-5.el7ost.noarch.rpm
    MD5: abd888cb525f7897e066bfb28c5894c1
    SHA-256: e00c76b3529af3c10d88837a4eabecee12200c964f8feae0ffe9fc8c3793aed2
openstack-neutron-rpc-server-8.1.2-5.el7ost.noarch.rpm
    MD5: 92b91e2db156e0175faec5ab6c9e0e8e
    SHA-256: 2f8aee9eb22508c7094ba458b2626977d9db2834109c0ee08a735b8540afc81d
openstack-neutron-sriov-nic-agent-8.1.2-5.el7ost.noarch.rpm
    MD5: dad81ca432db814ff64177c3205bdd8b
    SHA-256: 975e0aaf9facad18d6c0bd37fb90ae0c5dbdbeef48354fc3934fec554a9a18c8
python-neutron-8.1.2-5.el7ost.noarch.rpm
    MD5: d2662f8cf02662aeac46f9b9b121ed21
    SHA-256: c007949167a77df8d4e2f65fb65fbc33e318bd342cd219dc661f063c2fac6c2d
python-neutron-tests-8.1.2-5.el7ost.noarch.rpm
    MD5: 932a0579bae6f1c0e4865b981bb55e58
    SHA-256: a010c716e8a70e1b843748a6c4c41339f42581633312b6df8a3de0e828338b97

(The unlinked packages above are only available from the Red Hat Network)

1372369 - Backport to mitaka: Set secure fail mode for physical bridges

These packages are GPG signed by Red Hat for security. Our key and details on how to verify the signature are available from: