Last week, two events reminded us, yet again, of how right Douglas Crockford was when he declared the web “the most hostile software engineering environment imaginable.” Both were serious enough to take down an entire site—actually hundreds of entire sites, as it turned out. And both were avoidable.
By understanding what we control (and what we don't), we can build resilient, engaging products for our users.
The first of these incidents involved the launch of Chrome 66. With that release, Google implemented a security patch with serious implications for folks who weren’t paying attention. You might recall that quite a few questionable SSL certificates issued by Symantec Corporation’s PKI began to surface early last year. Apparently, Symantec had subcontracted the creation of certificates without providing a whole lot of oversight. Long story short, the Chrome team decided the best course of action with respect to these potentially bogus (and security-threatening) SSL certificates was to set an “end of life” for accepting them as secure. They set Chrome 66 as the cutoff.
The second incident was actually quite similar in that it also involved SSL, and specifically the expiration of an SSL certificate being used by jQuery’s CDN. If a site relied on that CDN to serve an HTTPS-hosted version of jQuery, their users wouldn’t have received it. And if that site was dependent on jQuery to be usable … well, ouch!
For what it’s worth, this isn’t the first time incidents like these have occurred. Only a few short years ago, Sky Broadband’s parental filter dramatically miscategorized the jQuery CDN as a source of malware. With that designation in place, they spent the better part of a day blocking all requests for resources on that domain, affecting nearly all of their customers.
It can be easy to shrug off news like this. Surely we’d make smarter implementation decisions if we were in charge. We’d certainly have included a local copy of jQuery like the good HTML5 Boilerplate tells us to. The thing is, even with that extra bit of protection in place, we’re falling for one of the most attractive fallacies when it comes to building for the web: that we have control.
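As an aside, that local-copy safeguard is still worth having, even though it isn’t sufficient on its own. Here’s a minimal sketch of the idea (the function name and local path are hypothetical, not any particular library’s API): check whether the CDN copy of jQuery actually arrived, and inject a locally hosted copy if it didn’t.

```javascript
// A minimal sketch of the Boilerplate-style local fallback.
// The function name and local path are hypothetical; adapt them to your setup.
function ensureJQuery(global, doc, localSrc) {
  if (global.jQuery) {
    return false; // the CDN copy loaded fine; nothing to do
  }
  // The CDN copy never arrived, so fall back to a copy we host ourselves.
  var script = doc.createElement('script');
  script.src = localSrc; // e.g. '/js/vendor/jquery.min.js'
  doc.head.appendChild(script);
  return true;
}
```

In a page, you’d call something like `ensureJQuery(window, document, '/js/vendor/jquery.min.js')` in a script immediately after the CDN `<script>` tag, so the fallback kicks in before any code that depends on jQuery runs.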
Lost in transit?
First off, we don’t—at least in the vast majority of cases—control the network our code traverses to reach our users. Ideally our code takes an optimized path so that it reaches its destination quickly, yet any one of the servers along that path can read and manipulate the code. If you’ve heard of “man-in-the-middle” attacks, this is how they happen.
For example, certain providers have no qualms about injecting their own advertising into your pages. Gross, right? HTTPS is one way to stop this from happening (and to prevent servers from being able to snoop on our traffic), but some providers have even found a way around that. Sigh.
Lost in translation?
Assuming no one touches our code in transit, the next thing standing between our users and our code is the browser. These applications are the gateways to (and gatekeepers of) the experiences we build on the web. And, even though the last decade has seen browser vendors coalesce around web standards, there are still differences to consider. Those differences are yet another factor that will make or break the experience our users have.
While every browser vendor supports the idea and ongoing development of standards, they do so at their own pace and very much in relation to their business interests. They prioritize features that help them meet their own goals and can sometimes be reluctant or slow to implement new features. Occasionally, as happened with CSS Grid, everyone gets on board rather quickly, and we can see a new spec go from draft to implementation within a single calendar year. Others, like Service Worker, can take hold quickly in a handful of browsers but take longer to roll out in others. Still others, like Pointer Events, might get implemented widely, only to be undermined by one browser’s indifference.
All of this is to say that the browser landscape is much like the Great Plains of the American Midwest: from afar it looks very even, but walking through it we’re bound to stumble into a prairie dog burrow or two. And to successfully navigate the challenges posed by the browser environment, it pays to get familiar with where those burrows lie so we don’t lose our footing. Object detection … font stacks … media queries … feature detection … these tools (and more) help us ensure our work doesn’t fall over in less-than-ideal situations.
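Feature detection, the last of those tools, is simple enough to sketch. The helper below is a hypothetical illustration, not any particular library’s API: confirm that a host object exists and exposes a capability before leaning on it.

```javascript
// A minimal sketch of feature detection (hypothetical helper, not a real API).
// Returns true only if the host object exists and exposes the named feature.
function hasFeature(host, feature) {
  return typeof host !== 'undefined' && host !== null && feature in host;
}

// In a browser you might then write, for example:
// if (hasFeature(window, 'IntersectionObserver')) { /* use the enhancement */ }
// else { /* fall back to the baseline experience */ }
```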
Consider this bit of JavaScript:

```javascript
document.body.innerHTML += '<p>Can I count to four?</p>';
for (let i = 1; i <= 4; i++) {
  document.body.innerHTML += '<p>' + i + '</p>';
}
document.body.innerHTML += '<p>Success!</p>';
```
This code is designed to insert several paragraphs into the current document and, when executed, produces this:
Can I count to four?
1
2
3
4
Success!
Simple enough, right? Well, yes and no. You see, this code makes use of the let keyword, which was introduced in ECMAScript 2015 (a.k.a. ES6) to enable block-level variable scoping. It will work a treat in browsers that understand let. However, any browsers that don’t understand let will throw a syntax error when they try to parse the script and, as a result, won’t run a single line of it. Users of those browsers get nothing at all.
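One way to keep the example resilient is simply to avoid syntax that older browsers can’t parse. Here’s the same counting logic rewritten with var, which every JavaScript engine understands. (Assembling the markup in a string first is my own tweak; it also avoids touching the DOM on every pass through the loop.)

```javascript
// The same counting logic using var instead of let, so even pre-ES6
// browsers can parse and run it. The markup is assembled in a string
// and would be written to the document once, at the end.
var html = '<p>Can I count to four?</p>';
for (var i = 1; i <= 4; i++) {
  html += '<p>' + i + '</p>';
}
html += '<p>Success!</p>';
// In a browser: document.body.innerHTML += html;
```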
Browser plugins are another form of third-party code that can negatively affect our sites. And they’re ones we don’t often consider. Back in the early ’00s, I remember spending hours trying to diagnose a site issue reported by one of my clients, only to discover that it occurred only when a particular plugin was in use. Anger and self-doubt were wreaking havoc on me as I failed time and time again to reproduce the error my client was experiencing. It took me traveling the two hours to her office and sitting down at her desk to discover the difference between her setup and mine: a third-party browser toolbar.
We don’t have the luxury of traveling to our users’ homes and offices to determine if and when a browser plugin is hobbling our creations. Instead, the best defense against the unknowns of the browsing environment is to always design our sites with a universally usable baseline.
Lost in interpretation?
Regardless of everything discussed so far, when our carefully crafted website finally reaches its destination, it has one more potential barrier to success: us. Specifically, our users. More broadly, people. Unless our product is created solely for the consumption of some other life form or machine, we’ve got to consider the ultimate loss of control when we cede it to someone else.
Over the course of my twenty years of building websites for customers, I’ve always had the plaintive voice of Clerks’ Randal Graves in the back of my head: “This job would be great if it wasn’t for the f—ing customers.” I’m not happy about that. It’s an arrogant position (surely), yet an easy one to lapse into.
People are so needy. Wouldn’t it be great if we could just focus on ourselves?
No, that wouldn’t be good at all.
When we design and build for people like us, we exclude everyone who isn’t like us. And that’s most people. I’m going to put on my business hat here—Fedora? Bowler? Top hat?—and say that artificially limiting our customer base is probably not in our company’s best interest. Not only will it limit our potential revenue growth, it could actually reduce our income if we become the target of a legal complaint by an excluded party.
Our efforts to build robust experiences on the web must account for the actual people that use them (or may want to use them). That means ensuring our sites work for people who experience motor impairments, vision impairments, hearing impairments, vestibular disorders, and other things we aggregate under the heading of “accessibility.” It also means ensuring our sites work well for users in a variety of contexts: on large screens, small screens, even in-between screens. Via mouse, keyboard, stylus, finger, and even voice. In dark, windowless offices, glass-walled conference rooms, and out in the midday sun. Over blazingly fast fiber and painfully slow cellular networks. Wherever people are, however they access the web, whatever special considerations need to be made to accommodate them … we should build our products to support them.
That may seem like a tall order, but consider this: removing access barriers for one group has a far-reaching ripple effect that benefits others. The roadside curb cut is an example we often cite. It was originally designed for wheelchair access, but stroller-pushing parents, children on bicycles, and even that UPS delivery person hauling a tower of Amazon boxes down Seventh Avenue all benefit from that rather simple consideration.
Maybe you’re more of a numbers person. If so, consider designing your interface such that it’s easier to use by someone who only has use of one arm. Every year, about 26,000 people in the U.S. permanently lose the use of an upper extremity. That’s a drop in the bucket compared to an overall population of nearly 326 million people. But that’s a permanent impairment. There are two other forms of impairment to consider: temporary and situational. Breaking your arm can mean you lose use of that hand—maybe your dominant one—for a few weeks. About 13 million Americans suffer an arm injury like this every year. Holding a baby is a situational impairment in that you can put it down and regain use of your arm, but the feasibility of that may depend greatly on the baby’s temperament and sleep schedule. About 8 million Americans welcome this kind of impairment—sweet and cute as it may be—into their home each year, and this particular impairment can last for over a year. All of this is to say that designing an interface that’s usable with one hand (or via voice) can help over 21 million more Americans (about 6% of the population) effectively use your service.
Finally, and in many ways coming full circle, there’s the copy we employ. Clear, well-written, and appropriate copy is the bedrock of great experiences on the web. When we draft copy, we should do so with a good sense of how our users talk to one another. That doesn’t mean we should pepper our legalese with slang, but it does mean we should author copy that is easily understood. It should be written at an appropriate reading level, devoid of unnecessary jargon and idioms, and approachable to both native and non-native speakers alike. Nestled in the gentle embrace of our (hopefully) semantic, server-rendered HTML, the copy we write is one of the only experiences of our sites we can pretty much guarantee our users will have.
Old advice, still relevant
Recognizing all of the ways our carefully crafted experiences can be rendered unusable can be more than a little disheartening. No one likes to spend their time thinking about failure. So don’t. Don’t focus on all of the bad things you can’t control. Focus on what you can control.
Start simply. Code defensively. User-test the heck out of it. Recognize the chaos. Embrace it. And build resilient web experiences that will work no matter what the internet throws at them.