
8,888 Reasons Not to Use External Recursive DNS [GTER 51]
This article delves into the critical reasons why autonomous systems and internet service providers (ISPs) should avoid using external recursive DNS services, such as Google DNS (8.8.8.8) or Cloudflare DNS (1.1.1.1). The speaker, Iuby, Technology Director, argues that while these services are often promoted for improving browsing speed, they introduce significant vulnerabilities and performance degradation. The presentation highlights that relying on external DNS can lead to massive failures during Distributed Denial of Service (DDoS) attacks, as the entire network that depends on these services will cease to function. Furthermore, the article explains how DNS, much like Border Gateway Protocol (BGP), is a fundamental pillar of the internet's functionality, converting human-readable domain names into IP addresses. Delegating this crucial service to a third party without a contract or Service Level Agreement (SLA) is deemed irresponsible. The impact on latency, especially in remote regions, and the significant number of DNS queries required to load modern websites, are presented as compelling arguments against external DNS usage. The speaker emphasizes that implementing a local recursive DNS is neither expensive nor difficult, urging providers to take ownership of this critical infrastructure component to ensure network stability and performance.
The Problem with External Recursive DNS
Iuby, Technology Director, highlighted the significant risks of using external recursive DNS services. One of the primary concerns is vulnerability during DDoS attacks: an ISP or data center that relies on an external DNS for name resolution is exposed to a massive failure. During a DDoS attack, if the external DNS service is overwhelmed or targeted, the entire network that depends on it stops functioning, regardless of other investments made in DDoS mitigation.
Damito already gave a spoiler yesterday, right? An ISP or data center that receives DDoS attacks has massive DNS failures. So there's no point in making investments in DDoS mitigation if, during a DDoS attack, your entire network that depends on this external service will stop working at the same time.
He further elaborated on what external recursive DNS services are: mostly free services (with some exceptions) that convert domain names like example.com.br into IP addresses (IPv4 or IPv6). Common examples include Google DNS (8.8.8.8), Cloudflare DNS (1.1.1.1), Quad9, and Giga DNS (a Brazilian alternative). Despite the widespread recommendation of these services by major tech websites like Tecnoblog, TechMundo, and Canaltech, Iuby argues that this advice is fundamentally flawed for ISPs and data centers.
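As a concrete illustration (added in this summary, not shown in the talk), the short Python sketch below uses the dnspython library to ask Google's public resolver for the addresses behind a domain; the TTL in the answer tells the client how long it may cache that response.

```python
# Minimal sketch: what a query to an external recursive resolver looks like,
# using the dnspython library (an assumption of this example, not of the talk).
import dns.resolver

resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = ["8.8.8.8"]        # Google Public DNS

answer = resolver.resolve("example.com.br", "A")
for record in answer:
    print(record.address)                 # the IPv4 address(es) returned
print("TTL:", answer.rrset.ttl)           # how long the answer may be cached
```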
How the Internet Works and the Role of DNS
Iuby broke down the fundamental components of Internet operation, emphasizing the critical, often misunderstood, role of DNS. He analogized the internet's structure to two machines communicating, mediated by a router, an ISP, and a mysterious cloud. The server's ability to return content to the requester relies on two key protocols: Border Gateway Protocol (BGP) and DNS. While BGP handles path selection and routing of IP prefixes, DNS is responsible for converting human-readable domain names into numerical IP addresses, which is essential for any internet communication.
All the networks you build only exchange IPv4 and IPv6 numbers. Names are not exchanged. Inter-AS or the Internet is not by name; it's by number. This conversion must be done somewhere, and DNS does this conversion.
He stressed that both BGP and DNS are equally important, serving as the "needle and thread" that sew the internet together. Without both functioning correctly, the internet simply does not work. DNS, like BGP (which has its DFZ, the default-free zone that forms the global routing table), also maintains a critical table of all internet destinations, allowing applications and humans to reach content via domain names rather than obscure IP addresses.
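A small standard-library sketch (added here for illustration, not taken from the talk) makes the same point in practice: an application first converts the name into numbers, and only then can it open a connection to one of those numbers.

```python
# Minimal sketch: the network itself only moves packets between IP addresses.
# Before any connection, the name must be converted to a number by DNS.
import socket

infos = socket.getaddrinfo("example.com.br", 443, proto=socket.IPPROTO_TCP)
for family, _, _, _, sockaddr in infos:
    print(family.name, sockaddr[0])   # AF_INET -> IPv4, AF_INET6 -> IPv6

# Only after this resolution can a socket be opened to the numeric address:
# socket.create_connection((sockaddr[0], 443))
```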
Pillars of Autonomous Systems and the Matryoshka Effect
Iuby identified four pillars crucial for a robust autonomous system (AS) or ISP: Layer 1 (physical infrastructure), BGP, DNS, and content quality. He highlighted that the internet's design, rooted in ARPANET's origins with a focus on redundancy and survivability, implicitly underscores the importance of DNS. DNS operates in layers, similar to a Matryoshka doll, with one layer inside another. An authoritative DNS server holds the definitive IP address for a domain. A recursive DNS server queries this authoritative server, caches the response, and then provides it to the client. The client's operating system and even applications further cache these responses, creating multiple layers of local copies.
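These caching layers can be observed directly. The hedged sketch below, assuming a local recursive resolver at a placeholder address, issues the same query twice: the first answer is resolved recursively from the authoritative servers, while the repeat is typically served from the resolver's cache, visible as lower latency and a TTL that counts down.

```python
# Minimal sketch (assumed setup: a recursive resolver reachable at RESOLVER).
# The first query is resolved recursively; the repeat is usually answered from
# the resolver's cache, seen as lower latency and a TTL that has ticked down.
import time
import dns.resolver

RESOLVER = "192.0.2.53"      # placeholder address of a local recursive resolver

r = dns.resolver.Resolver(configure=False)
r.nameservers = [RESOLVER]

for attempt in ("cold", "warm"):
    start = time.perf_counter()
    answer = r.resolve("example.com.br", "A")
    elapsed_ms = (time.perf_counter() - start) * 1000
    print(f"{attempt}: {elapsed_ms:.1f} ms, TTL {answer.rrset.ttl}")
    time.sleep(2)            # pause so the cached TTL has visibly decreased
```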
The speaker used a striking analogy of a poorly restored painting to illustrate the impact of using external DNS. He argued that it is equivalent to "ruining" the internet, humanity's greatest creation, by delegating its crucial function to a third party with no contractual obligations, SLAs, or support channels. This practice, he asserted, is unintelligent, as it bypasses the opportunity for content caching within the ISP's network, which is a highly sought-after benefit for providers.
Performance Impact and Latency
Iuby presented empirical data to demonstrate the severe performance degradation caused by relying on external DNS. He used data from the most accessed websites in Brazil, focusing on TTL (Time to Live) values and the number of DNS queries per page. For example, loading the initial page of TikTok.com requires 159 DNS queries, and many DNS responses carry a very short TTL, meaning they expire from caches quickly and force repeated lookups. This high volume of queries, combined with the distance to external DNS servers, drastically impacts page load times.
He shared latency measurements from across Brazil, excluding Roraima and Amapá, taken in October from approximately 250 autonomous systems, totaling over 3,450 latency measurements. The average latency for Google DNS (8.8.8.8) in Brazil was 27.6 milliseconds. However, in regions like North and Northeast Brazil, this average jumped to 45 and 48 milliseconds respectively. Specific examples include Acre and Amazonas, where latency reached 70 milliseconds. A peculiar incident in Minas Gerais saw latency for Google DNS spike to 1,380 milliseconds on a Sunday, severely impacting users. These numbers highlight how crucial local DNS resolution is for a responsive user experience.
| Region | Google DNS Latency (ms) |
|---|---|
| Brazilian average | 27.6 |
| North Brazil | 45 |
| Northeast Brazil | 48 |
| Acre | 70 |
| Amazonas | 70 |
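In the spirit of these measurements (a simplified sketch, not the speaker's methodology), the snippet below times a handful of lookups against an external public resolver and a local one; the local resolver address and the test names are placeholders assumed for illustration.

```python
# Minimal sketch: compare the round-trip cost a client pays on every lookup
# against an external public resolver versus a local recursive resolver.
import statistics
import time
import dns.resolver

def probe(nameserver, names, repeats=5):
    r = dns.resolver.Resolver(configure=False)
    r.nameservers = [nameserver]
    samples = []
    for _ in range(repeats):
        for name in names:
            start = time.perf_counter()
            r.resolve(name, "A")
            samples.append((time.perf_counter() - start) * 1000)
    return statistics.median(samples)

NAMES = ["example.com.br", "example.org"]                # placeholder test names
print("8.8.8.8   :", round(probe("8.8.8.8", NAMES), 1), "ms (median)")
print("local DNS :", round(probe("192.0.2.53", NAMES), 1), "ms (median)")  # placeholder
```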
Iuby demonstrated that a single user, with one notebook, performed 938 DNS queries in just 5 minutes of normal browsing, an average of roughly 188 queries per minute. Scaled to about 48 clients, that is on the order of 9,000 queries per minute leaving the network: the 8,888 reasons per minute of the talk's title not to use external recursive DNS. Using external DNS makes the network fragile, susceptible to massive failures, and drastically reduces service performance, ultimately degrading the internet as a whole.
Addressing Counterarguments and Solutions
During the Q&A, Iuby addressed several common counterarguments. When asked about a desirable latency for large operators that might not have DNS close to every client, he suggested 5 milliseconds as an acceptable criterion. Beyond that, the impact on perceived browsing performance becomes noticeable. He recommended combining direct queries to root servers with local caching, as envisioned by the internet's pioneers.
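For illustration only, the sketch below approximates what a local recursive resolver such as Unbound or BIND does on a cache miss: start at a root server and follow the delegation chain down to the authoritative answer, instead of forwarding every query to an external service. The root-server address is a.root-servers.net; the rest is a didactic simplification.

```python
# Minimal sketch of iterative resolution from the root (didactic, not
# production code): follow referrals until an authoritative answer appears.
import dns.message
import dns.query
import dns.rdatatype
import dns.resolver

def iterate(qname, server="198.41.0.4", max_hops=10):    # a.root-servers.net
    for _ in range(max_hops):
        query = dns.message.make_query(qname, dns.rdatatype.A)
        response = dns.query.udp(query, server, timeout=3)
        if response.answer:                               # authoritative answer
            return [rr.address for rrset in response.answer
                    for rr in rrset if rr.rdtype == dns.rdatatype.A]
        glue = [rr.address for rrset in response.additional
                for rr in rrset if rr.rdtype == dns.rdatatype.A]
        if glue:                                          # follow the referral glue
            server = glue[0]
        else:                                             # no glue: look up the NS name
            ns_name = response.authority[0][0].target
            server = dns.resolver.resolve(ns_name, "A")[0].address
    raise RuntimeError("too many referrals")

print(iterate("example.com.br"))
```

A real resolver adds caching of every answer and referral it learns along the way, which is exactly the local cache the speaker recommends keeping inside the ISP's own network.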
Regarding ISPs that redirect all DNS traffic to internal servers (DNS NAT), he acknowledged that while it addresses the issue of clients manually configuring external DNS, it's essential to educate clients and staff through clear communication and marketing materials. He suggested creating informative videos to explain why using local DNS is beneficial.
For reported cases where IPTV boxes or "sky gato" (illegal streaming) boxes appear to work better with public DNS, he suspected the cause was poorly maintained local DNS servers with very low TTLs, rather than any inherent superiority of external DNS. He reiterated that a well-maintained local recursive DNS will always outperform an external one.
A crucial point was raised about IPv6 transition techniques, specifically 464XLAT, which critically depends on an internal DNS for translation. Using external DNS would impede such transitions, creating technical debt. He urged ISPs to start their IPv6 adoption by deploying IPv6-enabled local recursive DNS, even if clients are still primarily on IPv4.
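To make that dependency concrete (an illustrative sketch, not from the talk): in a 464XLAT deployment the DNS64 function synthesizes AAAA answers by embedding the IPv4 address into the NAT64 prefix, and that prefix must match the one configured on the operator's translator, which is why the resolver doing the synthesis has to be one the operator controls.

```python
# Minimal sketch of DNS64 address synthesis: embed an IPv4 address into a
# NAT64 prefix. 64:ff9b::/96 is the well-known prefix; operators often use
# their own prefix, which an external resolver would not know about.
import ipaddress

def synthesize_aaaa(ipv4, nat64_prefix="64:ff9b::/96"):
    prefix = ipaddress.IPv6Network(nat64_prefix)
    v4 = ipaddress.IPv4Address(ipv4)
    return ipaddress.IPv6Address(int(prefix.network_address) | int(v4))

print(synthesize_aaaa("203.0.113.10"))   # -> 64:ff9b::cb00:710a
```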
For smaller providers in remote areas concerned about server energy consumption, he suggested Raspberry Pi devices as an economical and energy-efficient solution for local DNS, potentially even as part of an Anycast network with proper scripting.
Takeaways
- DDoS Vulnerability: Relying on external recursive DNS makes your network highly vulnerable to DDoS attacks, as the entire service chain can fail if the external DNS is targeted or overwhelmed.
- DNS as a Critical Pillar: DNS is as fundamental to internet function as BGP, translating human-readable names to IP addresses, and its importance should not be underestimated or delegated without strict contracts.
- Performance and Latency: External DNS significantly increases latency, especially in remote regions, and heavily impacts the loading performance of modern websites that require hundreds of DNS queries per page.
- Local DNS is Essential: Implementing and maintaining a local recursive DNS within your autonomous system is crucial for stability, performance, and to avoid issues like content being cached far from the user.
- Technical Debt and Future Readiness: Not using local DNS creates technical debt, particularly hindering the adoption of modern IPv6 transition techniques that require internal DNS resolution.
- Affordable Solutions Exist: Implementing a local recursive DNS is neither expensive nor overly complex; even low-power devices like Raspberry Pis can serve this purpose for smaller providers.