DNSSEC Denial-of-Service Attacks Show Technology’s Fragility


A pair of attacks revealed by researchers this year underscored the fragility of the Domain Name System (DNS) and the security extensions (DNSSEC) that were adopted to help secure the world's Internet infrastructure.

For the past year, Internet infrastructure firms and software makers have worked to patch DNS servers against a critical set of flaws in DNSSEC. Originally discovered more than a year ago by four researchers at Goethe-Universität Frankfurt and Technische Universität Darmstadt, the so-called KeyTrap denial-of-service (DoS) attack could trick DNS servers into spending hours attempting to validate signatures on specially crafted DNSSEC packets, according to their presentation at the Black Hat Europe 2024 conference earlier this month.

The researchers notified major Internet providers of the issues late last year and worked with them to produce patches earlier this year, but the flaws in DNSSEC are systemic, says Haya Schulmann, a professor of computer science at Goethe-Universität Frankfurt and one of the researchers involved in the work.

“I would not say that the core of the problem has been resolved,” she says. “There are patches which mitigate the most severe problems, but the core issue is yet to be addressed.”

The KeyTrap security weaknesses were not the only DNS attacks to surface in 2024. In May, a team of Chinese researchers revealed that they had discovered three logic vulnerabilities in DNS that allowed three types of attacks: DNS cache poisoning, DoS, and resource consumption. Dubbed TuDoor, the attack affected some 24 different DNS software codebases, the researchers stated in a summary of their work.


The discovery of these two classes of DNS and DNSSEC flaws highlights that security and availability are often at odds with each other, and that the Internet as a whole still has areas of fragility.

“The Internet was an experimental research project which gradually evolved, and it started with very few networks and gradually evolved to support this huge commercial platform — of course, it’s fragile,” Schulmann says. “It’s a wonder that it works.”

‘Accept Liberally, Send Conservatively’ Falls Down

The design philosophy of much of the Internet boils down to a principle espoused by computer scientist Jonathan Postel, which the German researchers paraphrased as: “Be liberal in what you accept and conservative in what you send.” The principle aims to improve robustness by calling for software to be “written to deal with every conceivable error, no matter how unlikely; sooner or later a packet will come in with that particular combination of errors and attributes, and unless the software is prepared, chaos can ensue,” according to RFC 1122, “Requirements for Internet Hosts — Communications Layers.”


However, other critiques have found that tolerating the unexpected often leads to harmful consequences. Rigorous standards can slowly decay and suffer feature creep when software is too liberal in what it accepts, especially when the protocols are not adequately maintained, software engineers Martin Thomson and David Schinazi argue in RFC 9413, "Maintaining Robust Protocols."

“Careless implementations, lax interpretations of specifications, and uncoordinated extrapolation of requirements to cover gaps in specification can result in security problems,” they wrote. “Hiding the consequences of protocol variations encourages the hiding of issues, which can conceal bugs and make them difficult to discover.”

The German university researchers exploited DNSSEC's expanding acceptance of various cryptographic algorithms to develop an off-path attack — in other words, they did not need to control a router or DNS server that processed a DNSSEC transaction. By sending DNSSEC packets containing hundreds of cryptographic signatures and hundreds of keys, they forced DNS servers to try to validate all the combinations — all because the servers supported a wide variety of cryptographic methods.
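The cost explosion is easiest to see in code. Below is a minimal Python sketch of the kind of validation loop KeyTrap abuses — not the researchers' proof of concept, and the `Key` and `Signature` classes and `crypto_verify` callback are illustrative stand-ins rather than real DNS library APIs. The loop mirrors the DNSSEC requirement (RFC 4035) that a resolver try every key whose tag matches a signature until one verifies.

```python
from dataclasses import dataclass

@dataclass
class Key:
    tag: int         # 16-bit DNSSEC key tag; collisions are easy to craft
    material: bytes  # public key material (unused in this sketch)

@dataclass
class Signature:
    key_tag: int     # tag of the key that allegedly produced this signature
    blob: bytes      # signature bytes (unused in this sketch)

def validate_rrset(keys, signatures, crypto_verify):
    """Naive DNSSEC-style validation: for each signature, try every key
    whose tag matches until one verifies. With n colliding keys and m
    signatures in one response, the worst case is n * m verifications."""
    attempts = 0
    for sig in signatures:
        for key in (k for k in keys if k.tag == sig.key_tag):
            attempts += 1
            if crypto_verify(key, sig):  # expensive public-key operation
                break                    # validated; move to next signature
    return attempts

# One malicious response: 200 keys and 200 signatures, all sharing tag 42
# and none of them valid, so every combination must be tried.
keys = [Key(tag=42, material=b"") for _ in range(200)]
sigs = [Signature(key_tag=42, blob=b"") for _ in range(200)]
print(validate_rrset(keys, sigs, lambda k, s: False))  # -> 40000
```

Each of those 40,000 attempts is a full public-key signature check, which is why a single crafted response can stall a resolver for hours.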

“When you have cryptography, there are challenges and complexity that start when you need to deploy multiple algorithms,” Schulmann says. “You have to sign using all these algorithms, and every resolver has to validate the algorithms and identify which ones were sent … and validate the signature, and that is the problem.”

DNSSEC Pushes Its Limits

Fixing the DNSSEC weakness required the digital equivalent of chewing gum and baling wire. Cloudflare, for example, placed limits on the maximum number of keys its servers will accept when requests cross zones, such as .com delegating a response to cloudflare.com, the firm stated.

Yet there is no simple fix, so Internet infrastructure companies have had to stay agile as well.

“Even with this limit already in place and various other protections built for our platform, we realized that it would still be computationally costly to process a malicious DNS answer from an authoritative DNS server,” Cloudflare stated in its analysis and response memo on the issue. “We added metrics which will allow us to detect attacks attempting to exploit this vulnerability.” The company also placed additional limits on requests.
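A plausible shape for such a mitigation, sketched below using the same simplified model as the earlier example: cap the cryptographic work a single response may consume, and count responses that hit the cap so operators can spot exploitation attempts. The budget value and counter name are hypothetical, not Cloudflare's actual limits or metrics.

```python
MAX_ATTEMPTS_PER_RESPONSE = 16    # hypothetical work budget, not a real limit
suspected_keytrap_responses = 0   # stand-in for a real metrics counter

def validate_with_budget(keys, signatures, crypto_verify):
    """Budgeted variant of validate_rrset() above: stop as soon as a
    response demands more verifications than the budget allows, and
    record the event so unusual traffic shows up in monitoring."""
    global suspected_keytrap_responses
    attempts = 0
    for sig in signatures:
        for key in (k for k in keys if k.tag == sig.key_tag):
            if attempts >= MAX_ATTEMPTS_PER_RESPONSE:
                suspected_keytrap_responses += 1  # feed detection metrics
                return False  # treat over-budget responses as bogus
            attempts += 1
            if crypto_verify(key, sig):
                break  # signature validated; move to the next one
    return True
```

Run against the 200-key, 200-signature response from the earlier sketch, this returns after 16 attempts instead of 40,000 — trading a small risk of rejecting pathological-but-legitimate responses for protection against the worst case.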

There are currently more than 30 RFCs related to DNSSEC, underscoring how repeatedly defenders have had to patch the standard to keep pace with attackers' tactics. Developers need to work closely with infrastructure operators and researchers in the community to make sure they are building their software to the highest standard.

“In our research, we see that the more functionality you have, the more features you add, then the more bugs and the more problems you have — and all of those can be exploited to launch attacks,” Schulmann says. “Routing networks, DNS, and other systems — they are no different.”


