By Henry Birge-Lee, Liang Wang, Grace Cimaszewski, Jennifer Rexford and Prateek Mittal
Security, BGP, KLAYswap, PKI, Public Key Infrastructure, CA, Certificate Authorities, Cryptocurrency

On February 3, 2022, attackers launched a highly effective attack against the Korean cryptocurrency exchange KLAYswap. We discussed the details of this attack in our previous blog post, "Attackers exploit a fundamental flaw in web security to steal $2 million in cryptocurrency." That post, however, only scratched the surface of potential countermeasures. In this new post, we discuss how we can defend the web ecosystem against such attacks. The attack chained together exploits at different layers of the network stack; we call such attacks "cross-layer attacks," offer our view on why they are so effective, and propose a practical defense strategy against them that we call "cross-layer security."
As we discuss below, cross-layer security involves security technologies at different layers of the network stack working in harmony to defend against vulnerabilities that are hard to detect at any single layer.
At a high level, the adversary's attack involved multiple layers of the network stack:
- The network layer, responsible for delivering traffic between hosts on the Internet. The first part of the adversary's attack targeted the network layer with a Border Gateway Protocol (BGP) attack that manipulated routes to hijack traffic intended for the victim.
- The session layer, responsible for secure end-to-end communication over the network. To attack the session layer, the adversary leveraged its network-layer attack to obtain a digital certificate for the victim's domain from a trusted Certificate Authority (CA). With this digital certificate, the adversary established encrypted, seemingly secure TLS sessions with KLAYswap users.
The difficulty of fully protecting against cross-layer vulnerabilities like this is that they exploit interactions between the different layers involved: a vulnerability in the routing system can be used to exploit a weak link in the public key infrastructure, and even the web-development ecosystem is implicated in this attack because of the way JavaScript is loaded. The multi-layer nature of these vulnerabilities often leads developers working at each layer to dismiss the vulnerability as an issue with the other layers.
There have been several attempts to secure the web against these types of attacks at the HTTPS layer. Interestingly, these techniques often ended up dead in the water (as was the case with HTTP Public Key Pinning and Extended Validation certificates). This is because the HTTPS layer alone does not see the routing information needed to properly detect these attacks and can rely only on information available to end-user applications. As a result, HTTPS-layer defenses risk blocking connections when benign events occur, such as a domain moving to a new hosting provider or changing its certificate configuration, because these events look very similar to routing attacks from the HTTPS layer's vantage point.
Due to the multi-layer nature of these vulnerabilities, we need a different mindset to fix the problem: the community at each layer needs to fully deploy whatever security solutions are realistic at that layer. As we explain below, there is no silver bullet that can be deployed quickly at any single layer; instead, our best hope is more modest (but easier to deploy) security improvements at all of the layers involved. Operating under an "the other layer will fix the problem" attitude simply perpetuates these vulnerabilities.
Here are some ideal short-term and long-term improvements for each layer of the stack implicated in these attacks. While, in theory, any single layer fully deploying one of the "long-term" security improvements could drastically reduce the attack surface, these technologies have not yet seen the kind of deployment we would need to rely on them in the short term. On the other hand, all of the technologies in the short-term list have seen some degree of production/real-world deployment, and members of these communities can start using them today without much difficulty.
| Layer | Short-term changes | Long-term goals |
| --- | --- | --- |
| Web applications (application layer) | Reduce the use of code loaded from external domains | Sign and verify all code being executed |
| PKI/TLS (session layer) | Deploy multiple-vantage-point validation worldwide | Adopt identity verification based on cryptographically protected DNSSEC that remains secure in the presence of powerful network attacks |
| Routing (network layer) | Sign and verify routes with RPKI and follow the security practices described by MANRS | Deploy BGPsec to almost completely eliminate routing attacks |
At the application layer: Web applications are loaded over the Internet and are completely decentralized. At the moment, there is no mechanism to universally verify the correctness of the code or content of a web application. If an adversary manages to obtain a TLS certificate for google.com and intercepts your connection to Google, your browser currently has no way of knowing it is being served content that did not actually come from Google's servers. However, developers can remember that any third-party dependency (especially one loaded from a different domain) is a potential vulnerability, and can limit the use of third-party code on their websites (or host third-party code locally to reduce the attack surface). Furthermore, both locally hosted and third-party content can be secured using Subresource Integrity, where a cryptographic hash included in the web page ensures the integrity of each dependency. This lets developers pin the exact contents of the dependencies on their web pages. Doing so greatly reduces the attack surface, forcing attacks to target the single connection to the victim's web server rather than the many different connections involved in retrieving dependencies.
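To make this concrete, the `integrity` value that Subresource Integrity expects is just a base64-encoded cryptographic digest of the exact file contents. A minimal Python sketch of generating one (the script contents and CDN URL below are hypothetical, for illustration only):

```python
import base64
import hashlib

def sri_hash(content: bytes, algorithm: str = "sha384") -> str:
    """Compute a Subresource Integrity value for a script or stylesheet.

    The browser recomputes this digest over the bytes it actually
    received and refuses to execute the resource on a mismatch.
    """
    digest = hashlib.new(algorithm, content).digest()
    return f"{algorithm}-{base64.b64encode(digest).decode()}"

# Hypothetical dependency contents pinned at publish time:
script = b'console.log("hello");'
integrity = sri_hash(script)
print(f'<script src="https://cdn.example.com/lib.js" '
      f'integrity="{integrity}" crossorigin="anonymous"></script>')
```

If an attacker who hijacked the CDN connection serves even a one-byte-different file, the digest no longer matches and the browser blocks it.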
At the session layer: CAs need to verify the identity of clients requesting certificates, and while there are proposals to use cryptographically protected DNSSEC for identity verification (like DANE), the status quo is to verify identity via network connections to the domains listed in certificate requests. Thus, global routing attacks are likely to remain very effective against CAs unless we make fundamental changes to the way certificates are issued. But this does not mean all hope is lost. Many network attacks are not global but are actually localized to a specific part of the Internet. CAs can mitigate these attacks by validating domains from multiple vantage points spread across the Internet, so that some vantage points remain unaffected by the attack and can still communicate with the legitimate domain owner. Our group at Princeton designed multiple-vantage-point validation and worked with Let's Encrypt, the largest CA in the web PKI, to develop the first-ever production deployment of it. CAs can and should validate domains from multiple vantage points, making them resilient to localized attacks and ensuring they see a global perspective on routing.
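The core idea can be sketched as a quorum rule: issue the certificate only if enough vantage points observed the expected challenge response. The vantage-point names, token values, and quorum threshold below are illustrative assumptions, not Let's Encrypt's actual implementation:

```python
# Quorum-based multiple-vantage-point domain validation (simplified sketch).
EXPECTED_TOKEN = "challenge-token-123"  # e.g., an HTTP-01 challenge response

def validate_from_vantage_points(observations: dict[str, str],
                                 quorum: float = 0.75) -> bool:
    """Approve issuance only if a quorum of vantage points saw the
    expected challenge token when contacting the domain."""
    agreeing = sum(1 for seen in observations.values()
                   if seen == EXPECTED_TOKEN)
    return agreeing / len(observations) >= quorum

# A localized hijack fools only the vantage points whose routes it captures.
# Here the attacker requested the certificate but hijacked routes near just
# one vantage point, so only that one saw the attacker's token:
observations = {
    "us-east": "victim-content",   # reached the real domain owner
    "eu-west": "victim-content",
    "ap-south": "victim-content",
    "us-west": EXPECTED_TOKEN,     # hijacked path reached the attacker
}
print(validate_from_vantage_points(observations))  # False: issuance denied
```

A global hijack would still fool all vantage points, which is why this is a short-term mitigation rather than a complete fix.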
At the network layer: In routing, it is difficult to protect against all BGP attacks: doing so requires expensive public-key operations on every BGP update, using a protocol called BGPsec that current routers do not support. However, there has recently been rapidly increasing adoption of a technology called Resource Public Key Infrastructure (RPKI), which prevents global attacks by creating a cryptographically signed database of which networks control which blocks of IP addresses. Importantly, when properly configured, RPKI also limits the maximum length of the IP prefixes a network may announce, preventing global and highly effective sub-prefix attacks. In a sub-prefix attack, the adversary announces a longer, more specific IP prefix than the victim's and exploits longest-prefix-match routing to cause the vast majority of the Internet to prefer its announcement. RPKI is fully compatible with existing routers. The one weakness is that RPKI can still be evaded by some localized BGP attacks: instead of claiming to own the victim's IP address (a claim that would be checked against the database), the adversary simply claims to be one of the victim's service providers. The full map of which networks connect to which others is not currently secured by RPKI, and this leaves a window for some of the types of BGP attacks we have seen in the wild. However, the impact of these attacks is greatly reduced, and they often affect only a part of the Internet. In addition, the MANRS project provides recommendations for operational best practices, including RPKI, that help prevent and mitigate BGP hijacks.
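The origin check and the max-length rule can be illustrated with a toy route-origin validation routine. This is only a sketch: real validators consume signed ROA objects from RPKI repositories, and the prefixes and AS numbers below are made-up documentation values:

```python
# Simplified RPKI route-origin validation against a single ROA.
from ipaddress import ip_network

# Hypothetical ROA: AS 64500 may originate 203.0.113.0/24, and no
# announcement under it may be more specific than /24 (max_length).
ROA = {"prefix": ip_network("203.0.113.0/24"),
       "max_length": 24,
       "origin_as": 64500}

def roa_validate(announced_prefix: str, origin_as: int) -> str:
    prefix = ip_network(announced_prefix)
    if not prefix.subnet_of(ROA["prefix"]):
        return "not-found"   # ROA does not cover this prefix
    if origin_as != ROA["origin_as"]:
        return "invalid"     # wrong origin AS: ordinary prefix hijack
    if prefix.prefixlen > ROA["max_length"]:
        return "invalid"     # more-specific announcement: sub-prefix attack
    return "valid"

print(roa_validate("203.0.113.0/24", 64500))  # valid: legitimate origin
print(roa_validate("203.0.113.0/25", 64666))  # invalid: more-specific hijack
```

Note what this check cannot see: an attacker who announces the exact covered prefix while falsely claiming a path through the legitimate origin AS still passes, which is the localized evasion described above.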
Using cross-layer security to defend against cross-layer attacks
Looking across these layers, we see a common trend: each layer has proposed security technologies that could stop attacks like the KLAYswap attack, but all of them face deployment challenges. There are also more modest technologies that are already seeing widespread real-world use, but each of these, used alone, can be evaded by an adaptive adversary. For example, RPKI can be evaded by localized attacks, multiple-vantage-point validation can be evaded by global attacks, and so on. However, if we instead look at the benefit all of these technologies provide when deployed together across different layers, things look much more promising. The table below summarizes this:
| Technology / layer | Good at detecting routing attacks affecting the entire Internet | Good at detecting routing attacks affecting part of the Internet | Limits the number of potential targets for routing attacks |
| --- | --- | --- | --- |
| RPKI at the network layer | Yes | No | No |
| Multiple-vantage-point validation at the session layer | No | Yes | No |
| Subresource Integrity and locally hosted content at the application layer | No | No | Yes |
This synergy between security technologies deployed across different layers is what we call cross-layer security. RPKI alone can be evaded by clever adversaries (using attack techniques we see more and more in the wild). However, the attacks that evade RPKI tend to be localized (i.e., they do not affect the entire Internet). This synergizes with multiple-vantage-point validation, which is best at catching localized attacks. Furthermore, since these two technologies working together still do not completely eliminate the attack surface, improvements at the web layer that reduce reliance on code loaded from external domains shrink the attack surface even further. At the end of the day, the entire web ecosystem benefits greatly when each layer deploys security technologies that take advantage of information and tools available exclusively at that layer. Moreover, working in unison, these technologies can together do something none of them can do alone: stop cross-layer attacks.
Cross-layer attacks are so effective because no single layer has enough information about the attack to prevent it completely. Fortunately, each layer has the ability to protect against a different part of the attack surface. If developers across these different communities know what kind of security is realistic and expected at their layer of the stack, we will see some significant improvements.
Although the ideal endgame is to deploy a single security technology capable of fully defending against cross-layer attacks, we have yet to see widespread adoption of any such technology. In the meantime, if we continue to aim security efforts against cross-layer attacks at only a single layer, these attacks will take much longer to protect against. Changing the way we think, and recognizing the strengths and weaknesses of each layer, lets us defend against these attacks much sooner by increasing the use of synergistic technologies, at different layers, that have already seen real-world deployment.