Modern Software - Layers of Shit

fschmidt
Browsers cache DNS results.  Why?  There is absolutely no reason.  As Knuth said, "premature optimization is the root of all evil (or at least most of it) in programming".  In this case, there was never a reason to cache DNS results because there is no significant overhead in querying the OS, which is the layer where DNS caching belongs.  Presumably some moronic programmer working on one of the browsers decided to cache, and the other browsers followed like mindless sheep.
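
To show how little there is to it, here is a minimal sketch (Python; example.com is just a placeholder) of an application simply asking the OS resolver every time instead of keeping its own cache:

```python
import socket

# Ask the OS resolver every time; caching (if any) is the OS's business.
def resolve(host, port=80):
    # getaddrinfo returns (family, type, proto, canonname, sockaddr) tuples.
    results = socket.getaddrinfo(host, port, proto=socket.IPPROTO_TCP)
    return sorted({sockaddr[0] for *_, sockaddr in results})

print(resolve("example.com"))  # placeholder host name
```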

But it gets worse.  Browsers cache without respecting TTL (Time To Live) which, according to the DNS protocol, specifies the maximum time to cache.  Again why?  Because modern programmers are members of modern culture, and therefore are vile human scum who don't respect anything that makes sense.  The DNS protocol was developed before modern culture went completely to shit, and therefore is fairly well designed.  But today's browsers reflect the depraved degeneracy of modern culture.
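
For anyone who wants to see the TTL with their own eyes, here is a minimal sketch (Python, assuming the third-party dnspython package is installed; the name is a placeholder) that prints how long the answer may be cached:

```python
import dns.resolver  # third-party "dnspython" package -- an assumption, not stdlib

# Ask for the A record of a name and print the TTL the zone put on it,
# i.e. how long the DNS protocol allows the answer to be cached.
answer = dns.resolver.resolve("example.com", "A")  # placeholder name
print("addresses:", [r.address for r in answer])
print("cache for at most", answer.rrset.ttl, "seconds")
```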

This is my latest encounter with the layers of shit of modern software.  Here the layer is the DNS caching layer in browsers, which has absolutely no reason to exist.  I could write a long book about all the layers of shit in modern software.

Why do I care about DNS caching?  Because I want to implement DNS failover.  And in fact this is another story about a modern layer of software shit.  I might as well tell this story.

There are two approaches to website failover.  One is to have some box in front of the website which can redirect traffic to a backup if the primary fails.  But then this box becomes a single point of failure.  And it is one more thing to maintain.  A better approach (and therefore hated and ridiculed by modern scum) is to use DNS to redirect traffic when the primary fails.  Just change the DNS from mapping to the primary machine to mapping to the backup machine.  This is clean and simple, and therefore hated by modern scum.  When the Web started, this actually wouldn't work because DNS changes propagated slowly, but this has been fixed with Dynamic DNS (and of course this was done before modern culture went to shit).
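
Here is a rough sketch of the idea (Python; the host names are hypothetical placeholders, and the actual DNS change is left as a stub): a health check against the primary, and when it fails, DNS is pointed at the backup.

```python
import time
import urllib.request

# Hypothetical names for illustration only.
PRIMARY = "primary.example.com"   # machine currently serving the site
BACKUP  = "backup.example.com"    # machine to fail over to
SERVICE = "www.example.com"       # name the public actually uses

def primary_is_up():
    # Trivial health check: can we still fetch a page from the primary?
    try:
        urllib.request.urlopen("http://" + PRIMARY + "/", timeout=5)
        return True
    except OSError:
        return False

def point_dns_at(target):
    # Placeholder for the actual DNS change (nsupdate or a provider's
    # Dynamic DNS interface -- see the nsupdate sketch further down).
    print("would repoint", SERVICE, "at", target)

def monitor(interval=30):
    current = PRIMARY
    while True:
        want = PRIMARY if primary_is_up() else BACKUP
        if want != current:
            point_dns_at(want)
            current = want
        time.sleep(interval)
```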

Dynamic DNS is widely used today, but not as originally intended.  It is used for servers behind DHCP.  DHCP is an idiotic system of dynamically allocating IPs to machines on a local network.  As idiotic as this is, it is orders of magnitude more idiotic to put servers behind DHCP since these servers don't have a stable IP.  But since this is completely idiotic, naturally modern scum want to do this.  To make this work, Dynamic DNS is used to change the IP in the A record as the IP of the server changes.  This is done by special software.  This is effectively one layer of shit (the special software) being used to compensate for another layer of shit (DHCP).
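
For illustration, here is roughly what that special software amounts to (a Python sketch; the addresses are placeholders):

```python
import socket
import time

# Sketch of the "special software" layer: notice when DHCP has handed the
# machine a new address and push it into DNS.  The push itself can go through
# nsupdate exactly as in the sketch after the next paragraph.

def current_ip():
    # Common trick: "connect" a UDP socket toward an outside address and ask
    # the OS which local address it picked.  No packet is actually sent.
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.connect(("192.0.2.1", 53))  # TEST-NET documentation address
        return s.getsockname()[0]
    finally:
        s.close()

def watch(interval=60):
    last = None
    while True:
        ip = current_ip()
        if ip != last:
            print("IP changed to", ip, "- would rewrite the A record here")
            last = ip
        time.sleep(interval)
```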

Originally, Dynamic DNS was meant to be a simple, flexible way to quickly change any DNS configuration.  A simple client tool called "nsupdate" was developed for this purpose.  Today there are many companies offering Dynamic DNS, mostly for the purpose I mentioned above.  But virtually none offer nsupdate access (only Dyn does, and even that is buried in inaccessible documentation).  Instead, those that offer access do it through a modern pathetic REST (web-based) interface.  And in this interface, they only allow changes to A records (which map a name to an IP).  This is completely inflexible.  In my case, I want to change the CNAME record (which maps a name to another name).  I found only one REST interface that supports this, from an obscure company called Zonomi.  I will probably use Zonomi because Dyn is a mess aimed at the enterprise market.
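
Since nsupdate is the tool that should have been enough, here is roughly what the CNAME swap looks like through it (a Python sketch; the name server, zone, and names are placeholders, and a real setup would also authenticate the update):

```python
import subprocess

def swap_cname(name, target, server="ns1.example.com", zone="example.com", ttl=60):
    # Repoint `name` at `target` with a CNAME through BIND's nsupdate client.
    # Server, zone, and TTL here are placeholders; a real zone would also need
    # a key (nsupdate -k) so the name server accepts the update.
    script = (
        "server {server}\n"
        "zone {zone}\n"
        "update delete {name} CNAME\n"
        "update add {name} {ttl} CNAME {target}\n"
        "send\n"
    ).format(server=server, zone=zone, name=name, ttl=ttl, target=target)
    subprocess.run(["nsupdate"], input=script, text=True, check=True)

# Failover: point the public name at the backup machine.
# swap_cname("www.example.com.", "backup.example.com.")
```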

So here we have another layer of shit (the REST interface) obscuring the flexible nature of Dynamic DNS.  And this is completely pointless because nsupdate had already been developed and was a fine solution.  Why was this done?  Because modern programmers are modern human scum who hate anything that is simple and makes sense, and they much prefer inflexible layers of programmatic shit.

Returning now to browser caching in the context of failover, the result will be that failover will not work for however long the moronic browser caches DNS (in violation of TTL).  In my tests on a Mac, this varies from about 2 minutes on Safari to 10 minutes on Chrome.  Naturally Chrome is the worst, being owned by the absolute worst tech company in existence, Google.  To modern scum, 10 minutes of failure may not seem like a long time.  But to me, 10 minutes of failure is inexcusable.  There is just no reason for it.  I wish the hearts of modern scum would fail for 10 minutes so that they could get a taste of their own medicine.  But the human body was designed by God/nature, not by modern scum, so it works.

Anyway, I will use DNS failover.  If one builds a house out of shit, it will smell when it rains.  And all modern software is shit, so there is nothing I can do about it.
Woe to those who call bad good and good bad -- Isaiah 5:20
Following the Old Testament, not evil modern culture