Domains and DNS

Domains and DNS are, surprisingly, a big problem for self-hosting:

  • A domain is not cheap. For people from rich countries this is not a problem, but it is a big one for others (like me when I was a student; even now I decided my domain wasn't worth the money and canceled it).
  • And don't forget to renew it. Many homepages have let their domain expire, after which it can be hijacked by squatters. This happened to me too. And I can't understand where squatters get the money to pay for my domain, unless they don't really pay anything.
  • Good luck finding a good name that isn't already taken.
    • You may also violate a trademark.
    • Fraudsters may register a phishing "your-bank" domain.
  • It is complicated to configure:
    • Buying a domain is a manual step.
    • There is no single DNS configuration API; instead there are over a hundred APIs, one per registrar.
    • You need to understand what the A, AAAA, MX, NS, and other records mean.
  • Zones have their own rules about the content of your site.
    • You may easily lose your domain, or be blocked, censored, or eliminated by a competitor.
    • A flood of fake abuse reports may harm your business (not really a problem for self-hosting).
  • The money goes to the ICANN monopoly basically for nothing. Some registrars are controlled by suspicious organizations or even mafias.
  • There is no good geographic distribution of registrars:
    • Most are in the US, some in Germany; other big countries may have only three or four registrars, and entire continents are underrepresented.
    • This is an inequality, but it also brings risks and problems, such as the inability to pay with local currencies and payment systems.
    • Some TLDs belong to big corporations (e.g. .dev to Google), which erodes DNS neutrality.
    • A few big registrars take most of the market and form a cartel.

For now the main idea is to make jkl.mn offer a Dynamic DNS service. Each subdomain will be a random onion domain, so nothing needs to be registered. https://github.com/yurt-page/dyndns-onion But this creates a centralized point that I don't have the resources to maintain. So maybe some decentralized blockchain- or DHT-based approach would be better.

nsupdate

If you have your own DNS zone, e.g. example.com, then the DNS server supports dynamic updates, an extension of the DNS protocol, via the special nsupdate command. So you can use the nsupdate utility to change some subdomain records. This is used by administrators but not by routers: you need a pre-shared TSIG key, and the nsupdate utility is too heavy for routers. Still, OpenWrt has support for it https://openwrt.org/docs/guide-user/services/ddns/client#bindnsupdate
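For example, an administrator holding a TSIG key could update a home subdomain's A record like this (the key file path, server name, and address below are placeholders):

```shell
# Send a signed dynamic DNS update with the nsupdate utility
nsupdate -k /etc/tsig.key <<'EOF'
server ns1.example.com
zone example.com
update delete home.example.com A
update add home.example.com 300 A 203.0.113.5
send
EOF
```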

DynDNS

All routers already support no-ip.com or dyn.com (previously called DynDNS.com). A router just makes a GET request nic/update?hostname={yoursubdomain}&password={pass} to the server. The server detects the router's IP and updates the DNS record.

DynDNS2 API

Since all DynDNS providers support the same URL scheme as DynDNS.com, the API is unofficially called DynDNS2, i.e. DynDNS.com v2.
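A minimal client sketch of this API in Python (the server name, subdomain, and password are placeholders; many providers also accept an explicit myip parameter, and HTTP Basic auth instead of a password in the query):

```python
from urllib.parse import urlencode

def build_update_url(server, hostname, password, myip=None):
    """Build a DynDNS2-style update URL; if myip is omitted
    the server detects the client's address itself."""
    params = {"hostname": hostname, "password": password}
    if myip:
        params["myip"] = myip
    return f"https://{server}/nic/update?{urlencode(params)}"

def parse_reply(body):
    """Interpret a DynDNS2 reply line.

    'good <ip>'  - the record was updated
    'nochg <ip>' - the IP didn't change (don't retry aggressively)
    Anything else (badauth, nohost, abuse, 911, ...) is an error.
    """
    code = body.strip().split()[0]
    if code in ("good", "nochg"):
        return code
    raise RuntimeError("update failed: " + body)

url = build_update_url("ddns.example.com", "home.example.com", "s3cret")
print(url)  # https://ddns.example.com/nic/update?hostname=home.example.com&password=s3cret
print(parse_reply("good 203.0.113.5"))  # good
```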

Some protocol descriptions:

DDNS providers

DDNS Clients

DNS API Libraries

Software for DDNS providers

I added links to a Wikipedia page to make them easier for new users to find https://ru.wikipedia.org/wiki/%D0%94%D0%B8%D0%BD%D0%B0%D0%BC%D0%B8%D1%87%D0%B5%D1%81%D0%BA%D0%B8%D0%B9_DNS#%D0%9F%D0%BE_%D0%B4%D0%BB%D1%8F_DDNS_%D0%BF%D1%80%D0%BE%D0%B2%D0%B0%D0%B9%D0%B4%D0%B5%D1%80%D0%BE%D0%B2= But these links may be removed at any time because Wikipedians don't like them.

Here is a prototype that I made: https://github.com/yurt-page/go-ddnsd

We need to fight abuse: nsupdate-info/nsupdate.info#496 (comment)

Keenetic routers have their own DDNS service https://help.keenetic.com/hc/en-us/articles/360000400919

But they also support other providers https://help.keenetic.com/hc/en-us/articles/360000934780-Dynamic-DNS-client

Subproject: DNS API standard or specification

Tor

All these problems are solved in the Tor network with *.onion websites. A Single Onion Service is a good option: only four hops instead of the six of a regular hidden service. But it's still slow and can be accessed only from Tor Browser. Example of configuring a Single Onion Service: https://gist.github.com/stokito/2a7ab43cb409afa9eef8061dd12ed82f We can add the Single Onion Service to a Yurt by default, but it will be accessible only via Tor Browser. Also, Tor is too heavy for regular routers: it depends on OpenSSL (>1 MB) and creates additional files https://lists.torproject.org/pipermail/tor-dev/2022-July/014751.html

It would be great to have mnemonic names:

P2P DNS based on DHT

DNS must be authoritative, but a DHT doesn't guarantee that a record will be found or that the returned record is not fake.

KadNode - DynDNS based on the BitTorrent Mainline Kademlia DHT

KadNode is a P2P DNS with content-key, crypto-key, and PKI support. See the KadNode talk; it's in German, use auto-translate.

This is basically DNS over the BitTorrent DHT (Kademlia). The DHT is slow, so a domain may not resolve before the timeout. The DHT is great because there is no database to run, and it adds some resilience against outages and attacks. The Tor network also uses its own DHT, but it's more advanced: not only faster, it also re-hashes daily to prevent anyone from generating hashes similar to a specific domain's, catching its lookup requests, and thereby estimating its number of visits. As the randomness source for the re-hashing, Tor takes the latest block hash from the Bitcoin blockchain, which is a cool idea by itself and may be useful elsewhere.

The domains themselves can be either regular, i.e. really registered, or just an ECC public key, i.e. free but ugly. Once KadNode resolves an IP, it connects to the host itself and checks that the certificate corresponds to the domain. This prevents connecting to another server that now holds the IP of an outdated record. I.e. KadNode makes the same check a browser makes when verifying that a cert's domain matches the requested one. This adds an additional delay.

KadNode is based on mbedTLS, which doesn't support ed25519, so it also can't generate onion-like domains. This is not a big deal, but interchangeable domains would be a nice thing to have.

Related Projects

Other attempts and research

Some papers about p2p systems and DHTs for newcomers

DNSSEC

DANE

The DANE protocol is used to publish a TLS cert in a DNS record. This makes it possible to have HTTPS for .bit and .onion domains, i.e. without a CA.
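For example, an HTTPS server on port 443 can be pinned with a TLSA record like the following zone-file fragment (the hash is a placeholder; the fields 3 1 1 mean DANE-EE certificate usage, SubjectPublicKeyInfo selector, SHA-256 matching type):

```
; TLSA record pinning the server's public key for HTTPS
_443._tcp.www.example.com. IN TLSA 3 1 1 (
    0123456789abcdef... )  ; SHA-256 of the public key (placeholder)
```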

Local IP as domain + TLS

https://github.com/cunnie/sslip.io - a DNS service that resolves hostnames with an embedded IP, e.g. 192-168-1-1.sslip.io, to that IP (192.168.1.1).

DNS pull

This is not related, but it is a good idea for improving the privacy of DNS queries.

It looks like the DNS leak is easily solved by just caching ALL domains locally. One record is about 100 bytes, and an IPv4 address has 32 bits, so 2^32 * 100 bytes ≈ 430 GB, which is fine for modern disks. Then fetch a few GB of deltas daily. For most DNS records with stable TTLs this should work. More privacy than DNS over TLS: https://twitter.com/stokito/status/1540107921324900356
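A quick back-of-envelope check of that estimate (the 100 bytes per record is the assumption from the tweet):

```python
records = 2 ** 32             # one record per possible IPv4 address
record_size = 100             # assumed average bytes per DNS record
total_bytes = records * record_size
print(total_bytes / 10 ** 9)  # 429.4967296 -> roughly 430 GB
```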

This idea has appeared before https://www.kiv.zcu.cz/~ledvina/DHT/lecture13.pdf:

There are 76.9 million domains registered

  • Including generic TLDs and country-code TLDs
  • Compressed file with all info: 7.5GB
  • About 20,000 AS’s in the world
  • Suppose each NS serves other 3 NS’s (23 GB pushed)
  • Build delivery tree of depth 10 roughly
  • Push updates daily
  • About 760 KBytes / hour
  • About 850 Kbps upload to three peers
  • A lot of changes are for the same bindings
  • 87% of domains do not change at all

Great latency performance!

  • Akamai still works
  • Backward-compatible with old DNS
  • We are only adding prefetching to DNS – improve performance without affecting the system's architecture
  • Idea for M.Sc. project: build push-based DNS!

I've sent a letter to the author asking about any further research but haven't received a reply yet. Anyway, something similar can be implemented internally for yurt domains: since we have the single jkl.mn DNS, we can easily track IP changes, and the yurts can download the DNS updates.
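A sketch of what those updates could look like: the server diffs two snapshots of its zone and publishes only the changes, and each yurt applies them to a local cache (the zone format and delta shape here are purely hypothetical):

```python
def zone_delta(old, new):
    """Compute the changes between two snapshots of a zone.

    old/new map hostname -> IP; the delta lists added or changed
    records and removed hostnames, so peers only download what
    actually changed instead of the whole zone.
    """
    changed = {host: ip for host, ip in new.items() if old.get(host) != ip}
    removed = [host for host in old if host not in new]
    return {"set": changed, "del": removed}

def apply_delta(zone, delta):
    """Apply a published delta to a local zone cache, in place."""
    zone.update(delta["set"])
    for host in delta["del"]:
        zone.pop(host, None)
    return zone

old = {"a.jkl.mn": "198.51.100.1", "b.jkl.mn": "198.51.100.2"}
new = {"a.jkl.mn": "198.51.100.9", "c.jkl.mn": "198.51.100.3"}
delta = zone_delta(old, new)
assert apply_delta(dict(old), delta) == new
```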

Making a site offline

Some DDNS servers allow turning a website off. Given that we should support websites served from a laptop that may be on or off, this is a good thing to implement. See also https://tomhummel.com/posts/website-business-hours/