
Nullish Coalescing and Optional Chaining


Unless you're a front-end web developer, "nullish coalescing" and "optional chaining" probably don't mean much to you. If you're like me though, you cringe inwardly every time you read them. So, what's the big deal with them, and why are they bad?

Actually, bad is an understatement: these are simply downright evil. Nullish coalescing and optional chaining (hereafter N.C. and O.C. for short) are special operators in JavaScript that attempt to simplify traditional JavaScript syntax. To understand why they were introduced, let's consider some simple scenarios that make their usage clear.

Variables in programming languages can contain values (but you probably knew that already). Sometimes, NULL is used to denote the absence of any real value, and NULL is a construct supported by most languages, including C, PHP, JavaScript, etc. Because NULL isn't generally a usable value in most code, it needs to be explicitly handled, or bad things can happen. Dereferencing NULL in C can make your program crash. For example:

struct foo *foobar = NULL;
printf("%s\n", foobar->str);

The above code will result in a segfault if you run it, so you might do this instead:

struct foo *foobar = NULL;
if (foobar) {
   printf("%s\n", foobar->str);
} else {
   printf("foobar is NULL\n");
}

This might seem like a lot of work just to handle a NULL, and you would be right. Thankfully, we can condense this a little bit using the ternary operator:

struct foo *foobar = NULL;
printf("%s\n", foobar ? foobar->str : "foobar is NULL");

If you've never seen the ternary operator used before, this might look a little strange at first, but it's actually quite straightforward, and the ternary operator gets used a lot in languages like C. Simply put, you have condition ? iftrue : iffalse. If the condition is true, the left side gets used; otherwise, the right side gets used. This is what allows you to handle NULL above without doing much extra work.

In dynamic languages like JavaScript, you don't have to worry about segfaults, but your program can still crash or do nasty things if you don't properly handle NULL (or, in JavaScript's case, null and undefined). Thus, using the ternary operator to properly handle null/undefined is quite common:

var x = y ? y : 'nothing';

In this case, if y is defined and truthy, then we'll assign y to x. If not, we'll assign it the string 'nothing' instead. Simple enough.

Well, apparently the JavaScript gods didn't think this was simple enough, because they got restless and decided this was too verbose. So they came up with the N.C. operator instead. The idea is not to repeat yourself, so instead you can do:

var x = y ?? 'nothing';

It's a very subtle simplification, but we only have to specify y once, instead of explicitly saying that we should use it if it exists. Note that this is not necessarily the same thing as the ternary operator. The ternary operator is much broader, since you can do something like var x = y ? a : b. Fundamentally, the true and false sides can be literally anything, and don't need to have anything to do with the condition. There is also a semantic difference: ?? falls back to the right side only when the left side is null or undefined, whereas the ternary falls back for any falsy value, such as 0 or the empty string. In most languages, simply specifying a variable that is NULL will evaluate to false, however, so it just so happens that a ? a : b is somewhat of a common construct.
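To see where the two constructs actually disagree, here's a quick sketch using zero, which is falsy but not nullish:

```javascript
// ?? replaces only null/undefined; the y ? y : ... pattern replaces
// any falsy value. With y = 0, the two give different answers:
var y = 0;
var withNullish = y ?? 'nothing';    // ?? keeps 0, since 0 is not nullish
var withTernary = y ? y : 'nothing'; // ternary discards 0, since 0 is falsy

console.log(withNullish); // 0
console.log(withTernary); // nothing
```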

In languages like C, macros can be used to simplify this scenario as well. For example, I often use the macro S_OR(x, "empty") when dealing with string pointers that could be NULL. So, to be fair, this type of thing has been done before, in other ways. However, the N.C. operator in JavaScript is different, in a big way. It's not backwards-compatible.

Let me repeat that. The nullish coalescing operator syntax is brand new, and any user agent written before it existed does not understand it.

Big deal, you might say — just polyfill it. Aye, but there's the rub. The trouble is, N.C. can't be polyfilled.

For those who aren't familiar, polyfilling is a common practice in JavaScript for supporting newer JavaScript features in older browsers that don't natively support them. Polyfills are essentially scripts, written in traditional JavaScript, that emulate a newer feature implemented natively in newer browsers. The idea is that polyfills can allow older browsers to support newer functionality in sort of a kludged way, so developers can use newer capabilities supported by newer browsers. Generally, this is fine, and users don't really notice one way or another, as polyfills are meant to be transparent.
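To make this concrete, here's a minimal sketch of what a polyfill looks like (String.prototype.padStart is just an illustrative choice): an ordinary function, written in old-style syntax, installed only when the native feature is missing.

```javascript
// Minimal polyfill sketch: define padStart in plain ES5 that any old
// engine can parse, and install it only if the engine lacks it natively.
if (!String.prototype.padStart) {
  String.prototype.padStart = function (targetLength, padString) {
    var str = String(this);
    var pad = padString === undefined ? ' ' : String(padString);
    while (str.length < targetLength) {
      // Prepend as much of the pad as still fits.
      str = pad.slice(0, targetLength - str.length) + str;
    }
    return str;
  };
}

console.log('5'.padStart(3, '0')); // "005"
```

The crucial property: a polyfill is just code, so even an ancient browser can download it, parse it, and run it.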

Polyfills aren't perfect, but they can allow for gaps to be bridged. Initially, as the "great World Wide Web breakage of 2021" got underway, we discovered that much of the breakage was due to just a few spanking new JavaScript features that could actually be polyfilled: the result was the Chromefill extension, which many people (including yours truly) now use to automatically polyfill these features that are not widely supported amongst all browsers. Ideally, this wouldn't be necessary, and responsible web developers would avoid the use of such features altogether. However, workarounds are possible. For many incompatibilities, Chromefill silently fills the gaps so users can have a normal browsing experience. On the development side, it's been somewhat of a cat and mouse game, as websites continue to adopt new bleeding-edge features that result in further breakage, but for the most part, these have resulted in more polyfills being added to patch the holes again.

So, why does it matter that N.C. can't be polyfilled? Because there's no way to make older browsers support it. Quite literally, if your codebase uses N.C. at all, anywhere, your entire webpage is suddenly partially or completely incompatible with any browser - any user agent at all - that does not natively support the N.C. operator.
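The reason is that JavaScript engines parse an entire script before executing any of it, and ?? is new syntax, not a new function. We can simulate what a pre-N.C. engine sees by handing a parser a token it doesn't recognize (the made-up #? token below is a hypothetical stand-in for ?? on an old engine):

```javascript
// One unknown token poisons the whole file. Note the script also defines
// a perfectly ordinary function that never touches the new syntax.
var source =
  "function unrelated() { return 42; }\n" +
  "var x = y #? 'nothing';"; // '#?' plays the role of '??' on an old parser

var parsed;
try {
  new Function(source); // parse the whole script; execute nothing
  parsed = true;
} catch (e) {
  parsed = false;
  console.log(e instanceof SyntaxError); // true
}
console.log(parsed); // false: even unrelated() never came into existence
```

The parse fails before a single line runs, so no polyfill loaded by that script (or needed by it) ever gets a chance to execute.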

Same goes for optional chaining (O.C.). Say you wanted to do something like this:

struct foo *foobar = NULL;
char *c = foobar ? foobar->str : NULL;

In some ways, the concept is very similar to what we've already seen, but it's a slightly different use case. In C, a macro for this is again readily doable:

#define X_IF(f,g) ((f) ? (f)->g : NULL)

struct foo *foobar = NULL;
char *c = X_IF(foobar,str);

This works if, say, you want the title of an object: if there's no object, then there's no title either. But you can't simply dereference the object to find out, since it may be NULL.
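For reference, the JavaScript syntax that this C macro mimics looks like the following (using a plain object variable in place of the struct pointer):

```javascript
var foobar = null;

// The traditional, backwards-compatible way:
var a = foobar ? foobar.str : undefined;

// Optional chaining: same result, but new syntax that old parsers
// reject outright, taking the whole script down with it.
var b = foobar?.str;

console.log(a); // undefined
console.log(b); // undefined
```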

So we know what N.C. and O.C. are all about now. Useful? Yes. Necessary? Not even remotely.

The development effort saved by these operators is practically nothing. On the other hand, the breakage they have caused across the web has been simply stupendous. (If you've visited the MSFN Forums at any time in the past year or so, you know just what I'm talking about.) It's incredible how harmful a few simple characters have become, but they have singlehandedly (or doublehandedly?) broken much of the entire World Wide Web on a regular basis for a large number of users. In fact, these two operators are probably responsible for the single largest JavaScript-related breakage of the World Wide Web to date. This is not simply Internet Explorer thinking different — this is a truly unprecedented mass breakage, the likes of which have not been seen in recent history.

This is a classic example of "high risk, low reward", or more appropriately, "high breakage, quite literally no time saved, considering the time that goes into thinking about the new operator in the first place". It's an alarming example of how the disconnect of "modern" web developers poses a grave threat to the standards-based, open web that we've enjoyed for the past three decades.

The cat and mouse game with breakage has thus taken on a new dimension with such nasty operators as these. GitHub, for example, adopted these features at one point several months ago. Naturally, I complained to them, and GitHub actually reversed the change in fairly short order, acknowledging that it had been a mistake on their end. Unfortunately, just a couple of weeks ago, the changes made their way to the website again, once again breaking functionality on much of the GitHub website for me and hundreds of other users on the MSFN Forums. Support tickets are in again, but the issue hasn't been resolved this time around. Indeed, we are currently at the mercy of web developers who decide whether or not to be responsible, and, when they break things, whether to do the responsible thing and undo their changes.

Over time, the World Wide Web has changed. For one, it's gotten a lot bigger. The way that people use and consume the web, and the types of content available on it, have also changed. So has the way the web is created. As websites have become more complex — and more JavaScript-heavy — the accessibility of websites has declined. Most websites nowadays are a pain to load on slow DSL, let alone a dial-up connection of any speed. As websites grow more bloated and resource-intensive, more things slip out of reach for those with slower or older systems. Then we have technical malpractice: for instance, medium.com's CDN idiotically blocks any user agents it doesn't like — including older browsers or atypical user agents. That's not security — it's stupidity.

Most people take the Internet and the World Wide Web for granted, but they only remain open and accessible if we make a deliberate effort to keep them that way. For years and years, bloat creep has been eroding the accessibility of the web to many. In the past year, this has been taken to a new extreme with a mass explosion of incompatible JavaScript all over the web. Not only has the use of JavaScript gone from enhancing to detracting, but so too has the stewarding of it gone from intimate to disconnected. Libraries like React make it easy to put together bloated websites out of dozens of packages full of code that other people wrote — the complete opposite of progressive enhancement and accessibility. This means that a website, through no direct action of its own, could suddenly break for thousands of users, simply because the developer of one of its packages decided he couldn't give a hoot about accessibility or compatibility. If that's not a scary proposition for the web, I don't know what is.

We have come to the point where, increasingly, the web has diverged into a tale of two webs. One web — the old web, you might say — stays true to the founding principles of the web: an open web, accessible to all. Content is generated server-side, and JavaScript is used sparingly and only when necessary. At the other extreme, we have websites that need to load a megabyte's worth of JavaScript libraries just to output "Hello world". It's sites like these that bring to mind the old adage that "just because you can, doesn't mean you should".

Little browsing is done in text-based (line mode) browsers anymore, but it's worth remembering that when the web began, there were no graphical browsers — all web browsers dealt only with text. I'm not saying we should confine ourselves to what was possible or practical in the early days of the web. However, something about the spirit of those early days remains important — perhaps more important now than ever — as our technological capabilities continue to grow and eclipse what was possible then by leaps and bounds. Because, if those capabilities are wielded unwisely and unjustly, we create not a more powerful and capable web but a less accessible and more discriminatory one.

Web developers have a responsibility to their users. Above all, it is the users that justify the existence of a website, and web development is a delicate dance that developers must carefully navigate as they make changes, add features, and continue to iterate. New features, where they add value and are not overly intrusive or obstructive, are generally justified. Progress for the sake of progress — or rather, JavaScript for the sake of JavaScript — is not, though this is, increasingly, what the web reeks of today. Ever the double-edged sword, JavaScript must be wielded wisely: unlike HTML and CSS, if misused, it can - quite literally - break your site. Maybe not for you, maybe not for most people, but for many people. If you can't deploy your JavaScript in a sensible and compatible way, then it's tactful not to deploy it at all. More generally, if you can't polyfill some bleeding-edge feature, then you probably don't need it, and you probably shouldn't be using it.

N.C. and O.C. are perfect examples of the kind of insanity that results when developers become so disconnected from reality — and from their users — that they let the sexy features of a language go straight to their heads. If there were a report card for website accessibility, the use of either (or both) of these operators alone would be enough to warrant an automatic F. It's bad enough that many web developers simply can't be bothered to care about this kind of issue, which, arguably, disqualifies them from being holistic web developers at all in my book. Even worse, however, are web developers who might not even know that their sites are broken in these ways until irate users from all over begin reporting them. That's a good indication that your website is demanding more of people's browsers than it should, and some serious fat cutting needs to get underway.

The reality is that N.C. and O.C. don't serve any purpose other than as busywork for developers with nothing better to do to pocket a paycheck. Meanwhile, websites break, nobody cares, and people like me end up filing support tickets everywhere because their web teams can't get their act together. N.C. and O.C. don't add any new functionality to the web. They're just a dumb way to save a few bytes on your website by replacing a bunch of compatible code with a bunch of incompatible code that does the exact same thing. No sane programmer would ever condone this kind of behavior.

Web developers: it's time to take inventory of your web applications, take stock of your JavaScript, and seriously reevaluate where you're going. The web that we're increasingly creating, as developers, is a slower, more bloated, and less accessible one. It's one where web developers actively widen the digital divide, rather than bridge it. It's no wonder that so many people block JavaScript outright — really, who can blame them? (Pro tip: if you disable JS, you can read the New York Times online for free. You didn't hear that from me.) As a language, JavaScript has become so misused and overused that if you don't hate it (at least partially) by now, you probably haven't surfed the net in the past ten years.

As for me, I have one thing to say to the nullish coalescing and optional chaining operators: get off my lawn, get out of my browser, and stay away from the web!
