Infrequently Noted

Alex Russell on browsers, standards, and the process of progress.

Election Season

So election season is upon us -- no, not that one -- elections for the W3C's Technical Architecture Group (TAG). You might be thinking to yourself "self, I didn't know the W3C had such a thing...how very strange...I wonder what it does." Also, you might be wondering why I'd bother to write about a perfunctory process related to a group whose deliverables you have probably never encountered in the wild and are fantastically unlikely to.

The answer is that I'm now in the running for one of the TAG's 4 open seats. At last week's TPAC I was talked into running. Google has nominated me, so we're off to the proverbial races. Given the number of condolences I've received since deciding to run, you might wonder why I'd do such a thing at all; particularly since the general consensus is that the TAG's impact is somewhere between "mostly harmless" and "harmless". I've never heard it described as "valuable" or "productive" in its modern incarnation, so why run? Could it be for the warmth of human contact that a weekly conference call could provide? A deep-seated but as yet unfulfilled desire to weigh in on what really irks me about how people use HTTP? Indeed not.

I'm running to try to turn the TAG into an organization that has something to say about the important problems facing devs building apps today; particularly how new specs either address or exacerbate those challenges.

Notionally, that's what the TAG does now: review specs in development and provide feedback. If you look at the current work items, however, there's nothing about JavaScript. There's similarly little on DOM or the sorry state of API design at the W3C and how it is enabled by WebIDL and a broad cultural ignorance of JS. The TAG could provide WGs with advice on API design, on how to think about creating a well-layered platform, and on how to keep webdevs in mind at all times.

State management gets a lengthy exposition that doesn't do much to uncover the challenges in designing apps that need to keep and manage state, synchronize that state in local data models, and map state onto URLs. It's a hard problem that most JS-driven apps struggle with, and the TAG has no view beyond motherhood, apple pie, and pushState(). It can do better.

For webdevs, the TAG is a natural ally -- it carries the weight of the W3C's reputation in ways that vendor-motivated WGs don't. It can be an advocate for building things well to reluctant managers and organizations, helping you make the "environmentalist" argument: "the web is for everyone, why are you making it bad?" It can also be a go-between for agitated webdevs and the WGs, a voice of reason and patient explanation.

As a talking shop, the TAG can never be expected to have power over implementations or webdevs, but it can be opinionated. It can take a stand on behalf of webdevs and call out huge problems in specs, missing work items for WGs, and build bridges to the framework authors that are doing the heavy lifting when the browsers don't. It can also build bridges to other standards bodies that specify hugely important parts of the web platform, bringing a webdev perspective to bear.

I have worked inside a browser, represented Google at TC39, and have built large apps and frameworks on the web stack. I have seen firsthand how the TAG isn't doing this important work of building bridges and jawboning on behalf of web developers. And I plan to fix it, if elected.

If you work at a company that is a member of the W3C, I hope to get your AC rep's vote. If not, I would still love your support in letting AC reps from the Member organizations know that you support the goal of turning the TAG into an advocacy organization for the interests of webdevs.

If you're particularly interested in this problem, I'd love your help -- I'm looking for reformist-minded candidates to join me in running for the TAG. If you don't have a sponsoring or nominating organization, send me mail or a tweet.

Voting nitty-gritty, largely cribbed from Ian Jacobs' mail to a Members-only W3C mailing list:

Inadmissible Arguments

I spend a lot of time working in, on, and around web standards. As a part of this work, several bogus perspectives are continuously deployed to defend preferred solutions. I hereby call bullshit on the following memetic constructs:

"That's just a browser caching problem."

Look for this to be deployed along standards-wonk classics such as "our job should just be to provide low-level primitives", "we don't know enough to solve this", and "people will just write tiny libraries to provide that".

"It's a caching problem" is a common refrain of those who don't build apps and aren't judged on their latency. And it's transparently bullshit in the web context. It relies on you forgetting -- for just that split second while it sounds possible to duck the work at hand -- that we're talking about a platform whose primary use-case is sending content across a narrow, high-latency pipe. If you work in web standards and don't acknowledge this constraint, you're a menace to others.

Recent history alone should be enough to invalidate the caching argument; remember Applets? Yeah, Java had lots of problems on the client side, but what you saw with client-side Java were the assumptions of server-side engineers (who get to control things like their deployment VM versions, their hardware and spindle speeds, etc.) imported to distributed code environments where you weren't pulling from a fast local disk in those critical moments when you first introduced your site/app/whatever to the users. Instead, you saw the piles upon piles of JARs that get created when you assume that that next byte is nearly free, that disk is only 10ms away, and that cold startups are the rare case. It worked out predictably and Java-like systems succeed on the client when their cultural assumptions do align with deployment constraints -- Android and iOS are great examples. Their mediated install processes see to it.

Back out here on the woolly web, caching is something that's under user control. It must be, for privacy reasons, and we know from long experience that users clear their caches. The "cache" they can't clear is the baseline set of functionality the platform provides -- i.e., the built-in stuff on all the runtimes a developer cares about...which is what specs can affect (obliquely and with some delay).

By the time you're having a serious discussion about adding a thing to a spec among participants who aren't obvious bozos, you can bet your sweet ass that the reason it was brought up in the first place is a clear and obvious replication of effort among existing users bordering on the ubiquitous -- often in libraries they use. Saying that someone can write a library rises no higher than mere truism, and saying that a standards body shouldn't provide some part of the common features found among those existing libraries because caching will make the cost of those libraries moot is ignorance or standards-jujitsu in an attempt to get a particular proposal shelved for some other reason.

As bad as the above is, consider the (historically prevalent) use of this argument regarding "why we don't need no stinking" markup features -- just "fix" browser caching and "give me a low-level thing and I'll build [rounded-corners, gradients, new layout mechanisms, CSS3d xforms, etc.] myself". The subtle bias towards JavaScript and away from a declarative web is one of the worst, most insidious biases of the already enfranchised upper-class of JavaScript-savvy web developers, perpetuating a two-tier web in which "real engineers" can do better every year but wherein those same expressiveness and performance gains aren't transmitted to folks without CS degrees (yes, I'm looking straight at you, WebGL). You almost want to give it to them, though, so they'll finally come to terms with how wrong they really are about the ability for caching to be "fixed".

We've spent a decade on this -- remember that we're all using CDNs, pulling our JS libraries from edge-cached servers with stable URLs for optimal LRU behavior, running minifiers, setting far-forward expires, etc. And that's just what webdevs are doing: meanwhile browser vendors have been working day and night to increase the sizes of caches, ensure full caches where possible, and implement sometimes crazy specs that promise to help. It's not for lack of trying, but we still don't collectively know how to "fix" caching.
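
Just to be concrete about what all that "trying" looks like, here's a rough sketch of the far-future-expiry end of the effort -- a toy Node-style static handler; the library filename, version, and port are made up for illustration, and even doing all of this right still leaves the first, cold-cache visit paying full price for every byte:

// Serve a versioned, stable library URL with far-future caching
// headers so repeat visitors never re-fetch it. None of this helps
// the cold-cache first visit -- the visit that matters most.
var http = require('http');
var fs = require('fs');

http.createServer(function (req, res) {
  if (req.url === '/js/framework-1.8.2.min.js') {
    res.writeHead(200, {
      'Content-Type': 'application/javascript',
      'Cache-Control': 'public, max-age=31536000',           // ~1 year
      'Expires': new Date(Date.now() + 31536e6).toUTCString()
    });
    fs.createReadStream(__dirname + '/framework-1.8.2.min.js').pipe(res);
  } else {
    res.writeHead(404);
    res.end();
  }
}).listen(8080);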

Think libraries are free? Show me how you'll make 'em free and I'll start taking you seriously. Until then, bozo bit flipped.

"It should have exactly the same API as library X"

Similar arguments include "it must be polyfillable", etc.

Why is this bullshit? Not because it represents an aversion to being caught in an implementation/deployment dead zone -- that's a serious concern. Nobody wants a better world dangling out there just beyond reach, waiting for Vendor X to pick it up and implement or for Version Y of Old Browser to die so that 90% of your clients can access the feature. That's where libraries provide great value and will continue to, no matter what the eventual feature looks like. Remember, the dominant libraries in use by developers don't have Firefox, Chrome, IE, and Opera-specific versions that you serve up based on client. The platonic ideal is 180-degrees in the other direction: one code base, feature detection, polyfills, progressive enhancement -- basically anything within (and often well outside of) reason to keep from serving differential content. So your library is going to have all those versions anyway until all of your clients have the new-spec version. Optimizing based on existing API design because "now we can polyfill it without extra effort!" misunderstands both the role of spec designers and of libraries and serves users very poorly indeed.
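
For clarity, here's the shape of that ideal in code -- a hedged sketch of feature detection plus a conditional polyfill for a single code base; the loadScript helper and the polyfill URL are invented for illustration:

// One code base: detect the feature, polyfill only where it's missing.
// No UA sniffing, no browser-specific builds.
function loadScript(src, done) {
  var s = document.createElement('script');
  s.src = src;
  s.onload = done;
  document.head.appendChild(s);
}

function startApp() {
  // ...boot the application, using history.pushState freely...
}

if (window.history && 'pushState' in window.history) {
  startApp();
} else {
  // Hypothetical polyfill URL; in practice, whatever library fills the gap.
  loadScript('/js/history-polyfill.js', startApp);
}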

Where this argument becomes truly inadmissible, though, is when it seeks to define the problem to be "what our API already does". Turns out that the language you can design features with when all you have is JavaScript and HTML/CSS patterns is...well...the JS, HTML, and CSS you have today. Powerful yes. Expressive? Hrm. If your job, however, is to evolve an integrated platform, taking on a constraint predicated on the current form of your systems is nuts. There are some hard constraints like that (backwards compatibility), but adopting some form of hack as the "blessed" way of doing something without looking around and going "how can we do this better given that we have more powerful and expressive language to solve the problem with?" is nothing but lost opportunity.

Yes, a standards group might look around and go "nope, we can't do better than that polyfill/library" and adopt the solution wholesale. That's not a bad thing. What is bad, though, is advocacy about far-future solutions to current problems based solely on the idea that some library totally nailed it and we shouldn't be asking for anything more -- e.g.: Microdata cribbed from RDFa and Microformats when what you really wanted was Web Components. Rote recital of existing practice is predictably weak sauce that robs the platform of its generative spark. We should all be hoping for enhancements that make the future web stronger than today's, and you only find those opportunities by taking off the blinders and giving yourself free rein to do things that libraries, polyfills, and hacks just can't.

What to do about the developers and users caught in the crossfire? Well, we can advocate for degradable, polyfill-friendly designs, but there are limits to that. The most hopeful, powerful way to make the pain disappear is to ensure that more of our users year-over-year are on browsers/platforms that auto-update and won't ever be structurally left-behind again. And yes, that's something that every web developer should be working towards; prompting users to update to current-version auto-upgrading -- "evergreen", if you will -- browsers. Remember: you get the web you ask your users for.

"Just tell us the use cases"

This is the one I'm most sick of hearing from the mouths of standards wonks and authors. What they're trying to say is some well-meaning combination of:

What it often winds up doing, however, is serving as a way for a standards body or author to shut up unproductive folks; folks who aren't willing to do the work of helping them come to enlightenment about the architecture of the current solution, the constraints it was built under, and the deficiencies it leaves you with. Or it can be used to avoid that process entirely -- which is where we're getting towards bullshit territory. Taken to the extreme (and it too often is) the "use-cases, not details" approach infantilizes standards body participants, setting a bar too high for the folks who are crying out for help and setting the bar too low for the standards body because, inevitably, the tortured text of the "use cases" is an over-terse substitute for a process, a way of building, and an architectural approach. Use-cases (as you see them for HTML and CSS in particular) become architecture-free statements, no more informative than a line plucked at random from Henry V. Suggestive, yes. Useful? Nope. The very idea of building standards without a shared concept of architecture is bogus, and standards participants need to stop hiding behind this language. If you don't understand something, say so. If many people work hard to explain it and can't, ask for a sample application or snippet to hack on so you can do a mind-meld with their code and can ask questions about it in context.

Yes, having the use-cases matters, but nobody should be allowed to pretend that they're a substitute for understanding the challenges of an architecture.


While I've only covered a few of the very worst spec-building tropes, it's by no means the end of the rhetorical shit list. Perhaps this will be a continuing series -- although I'm sure this post will offend enough folks to make me think twice about it. If we put an end to just these, though, a lot of good could be accomplished. Here's to hoping.

Please Vote

If you're a web developer, please vote in Paul Irish's poll on browser support. The larger the population that votes, the more we can trust the answers, and the data is critical to making sense of how we collectively think about browser support. Go now. It won't take long, I promise.

Class Warfare

And so we're at an impasse.

At the last TC39 meeting, after spending what felt like an eternity on the important topic of UTF-16 encoding, the last day hastily ended with two topics that simply could not be any more urgent: can we get something done about data mutation observation -- the underpinnings for every data binding system worth a damn -- and classes.
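
For those who haven't built one: "mutation observation" is about giving data-binding systems a native way to hear about changes to plain objects. Absent platform support, every framework re-invents something like the following ES5 sketch (a toy for illustration, not the proposal that was on the table):

// Wrap each property in accessors and notify a listener on change --
// the hand-rolled core of most data-binding libraries.
function observe(obj, onChange) {
  Object.keys(obj).forEach(function (key) {
    var value = obj[key];
    Object.defineProperty(obj, key, {
      get: function () { return value; },
      set: function (newValue) {
        var oldValue = value;
        value = newValue;
        onChange(key, oldValue, newValue);
      }
    });
  });
  return obj;
}

var model = observe({ title: 'Untitled' }, function (key, oldV, newV) {
  console.log(key + ': ' + oldV + ' -> ' + newV);
});
model.title = 'Hello'; // logs: title: Untitled -> Hello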

As you might imagine, classes got at least one cold shoulder, but this time we were able to suss out the roots of the objection: the latest majority-agreed proposal ("Maximally Minimal Classes") isn't "high water mark" classes and, since so much has been given up to get something moved down the field, classes are no longer worth having. In this post I'll outline a bit of the debate so far and its current state, and hopefully convince you that the anti-classicists' arguments are weak.

But first: why classes at all?

There's an ambient argument in the JS world that "we don't need no stinking classes". That all we must do to save the language is teach people the zen of function objects and closures and all will be well with the world...and if we lose some people, so what? I don't want to tar everyone who sympathizes with the beginning of that sentiment with the obvious negatives of the last bit, but that's the overall gist I get. In this view, classes are unnecessary and lead people to think about JS code the "wrong" way. I've written before about how in ES6 class means function, but that doesn't mollify the discontent.
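
(For the unconvinced, a quick sketch of what "class means function" cashes out to, using the syntax as it eventually shipped in ES6; illustrative, not spec text:)

// A class declaration produces an ordinary constructor function.
class Dog {
  constructor(name) { this.name = name; }
}

console.log(typeof Dog);                        // "function"
console.log(Dog.prototype.constructor === Dog); // true
console.log(new Dog('Rex') instanceof Dog);     // true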

But then why any language feature at all? Why isn't assembler good enough for everyone? Or C? Or C++?

Turns out the answer is "human frailty". Or, put a different way, the process of cognition depends on very limited amounts of short-term stack space in our wetware, and computing languages are about making the computer hospitable for the human, not about telling the system what to do. Our tradeoffs in languages would look much different if we could all easily recall 20 or 30 things at a time instead of 5-10. Languages are tools for freeing creative people from registers and stacks and heaps and memory management and all the rest, all while keeping the creative process grounded in the reality that it all bottoms out in memory words on a von Neumann architecture; without that grounding we'd end up too disconnected from the system to deliver anything practical.

Abstractions, high-level semantics, types...they're all pivots, ways of considering sets of things while working with an individual example of that thing mentally. You don't consider every case all at once and don't remember everything about the system at one time; instead you focus on the arguments to the current function, which creates its own scope. All of these mechanisms create ways of limiting what you need to think about. They ease the burden of associating some things with other things by allowing you to consider smaller cases and allowing you to create or break limitations as they become variously helpful and harmful to getting something done. And we have lots of words to describe the things we find painful about dealing with the overhead of remembering; of a lack of directness and locality: "spaghetti code", "unmaintainable", "badly factored", "tightly coupled", etc. Where there's a new fad in programming, odds are high that it is being touted as a solution to some aggregate behavior which causes a reasonable local decision to become globally sub-optimal. This is what language features are for and why, when you see yourself using the same patterns over and over again, something is deeply wrong.

I'll say this again (and imagine me saying it sloooooowly): languages evolve in response to the pain caused by complexity. The corollary is that languages which don't evolve to remove complexity, or which add it, cause pain in day-to-day usage. Pain is almost always bad.

Complexity takes many forms, but the process of disambiguation when looking at a syntactic form that relies on a pattern is one of the easiest to take off the table. You'll see this in my work in lots of areas: Web Components make the relationship between markup and eventual behavior less ambiguous when you read the HTML, Shadow DOM creates an isolation boundary for CSS and HTML that make it easier to consider a "thing" in isolation when building and using them, classes in JS make it easier to read some bit of code and not be tripped up by the many meanings of the word function, and many of the features I've advocated for in CSS (mixins, hierarchy, variables) are factoring mechanisms that make it simpler to pivot between repeated stuff and the ad-hoc visual semantics of the document that mean so much to the design language.
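
To make the Web Components half of that less abstract, here's a rough sketch of the kind of isolation I mean, written against the custom element and Shadow DOM APIs as they eventually shipped rather than any particular proposal of the moment; the element name is invented:

// <user-card name="Alex"></user-card> tells you what it is when you
// read the markup, and its internal structure and styles live behind
// a shadow root where the page's CSS can't accidentally reach them.
customElements.define('user-card', class extends HTMLElement {
  constructor() {
    super();
    this.attachShadow({ mode: 'open' }).innerHTML =
      '<style>.name { font-weight: bold; }</style>' +
      '<span class="name"></span>';
  }
  connectedCallback() {
    this.shadowRoot.querySelector('.name').textContent =
      this.getAttribute('name') || '';
  }
});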

Not only should we write off the luddites, we should consider every step towards lower complexity good, and every step towards more complexity and slower understanding bad and actively harmful. Tortured claims about how "you aren't going to need those things" need to be met with evidence; in this case, what do the pervasive libraries in the ecosystem do? Yeah, those are complexity to be taken off the table. The agenda is clear.

But back to classes and JS.

We've been debating various forms of classes in JS since the late '90s. That's right; we've been circling this drain for more than a decade. In that time, nearly every permutation of class system that I'm familiar with has been debated as the rightful heir to the reserved class keyword. From a parallel inheritance structure (non-prototypal) to "just prototypes like we have them today", from "classes as constructor body" to "classes as object literals with constructors", from frozen to gooey and unbaked, from classes as nominal types to totally untyped, we've seen it all. In all of this, the only constant has been failure. TC39 has continued, for more than a decade, to fail to field any class syntax (yes, I have my slice of blame-cake right here). In that time, JS has gone from being important to being critical to the construction of large systems, both on the client and the server. As this has happened, we've seen function pressed into service not only as lambda and method, but as module, class, and pretty much every other form of ad-hoc composition system that's needed at any point. We're overdue for adding language-level support for all of these things. The committee has shown great ability to extend and expand upon idioms already in the language, taking syntax as a starting point and using it to create new room for reducing pain (the spread operator and destructuring are great examples). Yet still many on the committee, notably Luke Hoban of Microsoft and Waldemar Horwat of Google, worry that a form of class in ES6 that doesn't make coffee for you while also shooting rainbows out of its behind won't be worth the syntactic space it takes up. I would have worried about this too back in '08 when I first attended a TC39 meeting, but no longer. This is a group that is good at incremental change. Rainbow-shooting technology takes a while to build; meanwhile, wouldn't it be great if we could get a cup of joe around here? We've waited long enough to get something meaningful to build on and with. I want a hell of a lot more than what the Maximally Minimal Classes proposal adds, but they're a starting point, and there will be more ES versions.

So that's Argument #1: people will hate us if class doesn't do enough.

True, perhaps, but they will hate us anyway. Add classes and Mikeal Rogers personally organizes the pitchfork and torch distribution points and leads the march. Don't add classes and the world at large thinks of JS as a joke and a toy -- where'd that giant library/transpiler get to, anyway? You need something after all. Add them and Mikeal can keep not using classes to his heart's content, and if we add classes in the style I prefer (and which Max/Min Classes embody), he can use them as old-skool functions and never care how they were declared; all the while making JS more productive for folks who don't want to buy into a whole lifestyle to get a job done.

What's Argument #2? That instances created from classes aren't frozen. That is to say, it's still possible to add new properties to them later. You know, like every other JS object.

If you're thinking "yes, but isn't there Object.freeze() already?", you're not alone. What you're witnessing here is a debate about what's considered good, not what's possible. You can manually freeze/seal objects to your heart's content, but what the (tiny) minority strenuously making argument #2 is demanding is that nobody be allowed to use the word class to create a constructor function body unless the resulting object is automatically frozen. They are willing to hold up the entire train until this preference is satisfied, and are unhappy with any short syntax which does not bless their preferred style.
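
For reference, here's what "freeze it yourself" looks like with nothing but today's JS -- no new syntax required, which is rather the point:

// A constructor is already free to freeze its own instances;
// the argument is over whether "class" should force this on everyone.
function Point(x, y) {
  this.x = x;
  this.y = y;
  Object.freeze(this); // no new properties, no reassignment
}

var p = new Point(1, 2);
p.z = 3;               // silently ignored (a TypeError in strict mode)
console.log('z' in p); // false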

In fairness, I also would be grumpy to see things go a direction I don't prefer, but consider the following syntaxes that could be deployed in a future version to create frozen classes, none of which are precluded by the Max/Min proposal:

// "const" causes instances to have their properties
// frozen on exiting the constructor
const class Point {
  constructor() { ... }
}

// Declaring a "const properties" member causes the
// constructor to check for those only, freezing on exiting
// the constructor
class Point {
  constructor() { ... }
  const properties {
    // Object literal
    ...
  }
}

And I'm sure you can think of a couple of others. The essential distinguishing feature, and the thing that is holding the entire train up, is that the word class without any embellishment whatsoever doesn't work this way. Yes, I know it sounds crazy, but that's where we're at. You can help, though. Erik Arvidsson has implemented Max/Min classes in Traceur and you can try it out in the live repl. If you like/hate it, won't you please let the es-discuss mailing list know? Or just post in the comments here for posterity. We know that these classes need many more features, but your thoughts about this syntax as a starting point couldn't be more timely.
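
If you'd rather see it than run it, here's roughly the ground Max/Min classes cover -- declarative constructors, methods, extends, and super -- as I read the proposal; a sketch, not text quoted from it:

// Sugar over the constructor-function and prototype wiring
// we already write by hand today.
class Point {
  constructor(x, y) {
    this.x = x;
    this.y = y;
  }
  toString() {
    return '(' + this.x + ', ' + this.y + ')';
  }
}

class Point3D extends Point {
  constructor(x, y, z) {
    super(x, y);
    this.z = z;
  }
  toString() {
    return super.toString() + ' @ ' + this.z;
  }
}

console.log(new Point3D(1, 2, 3).toString()); // "(1, 2) @ 3"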

Hoisted From The Comments

Some stuff is too good to leave in the shadows. On my Bedrock post, James Hatfield writes in with a chilling point, but one which I've been making for a long while:

”every year we’re throwing more and more JS on top of the web”

The way things are going in my world, we are looking at replacing the web with JS, not simply layering. At a certain point you look at it all and say “why bother”. Browsers render the DOM not markup. They parse markup. Just cut out the middle man and send out DOM – in the form of JS constructs.

The second part is to stop generating this markup which then must be parsed on a server at the other end of a slow and high latency prone communication channel. Instead send small compact instructions to the browser/client that tells it how to render and when to render. Later you send over the data, when it’s needed...

This is a clear distillation of what scares me about the road we're headed down because for each layer you throw out and decide to re-build in JS, you end up only doing what you must, and that's often a deadline-driven must. Accessibility went to hell? Latency isn't great? Doesn't work at all without script? Can't be searched? When you use built-ins, those things are more-or-less taken care of. When we make them optional by seizing the reins with script, not only do we wind up playing them off against each other (which matters more, a11y or latency?), we often find that developers ignore the bits that aren't flashy. Think a11y on the web isn't great now? Just wait till it's all JS driven.

It doesn't have to be this way. When we get Model Driven Views into the browser we'll have the powerful "just send data and template it on the client side" system everyone's looking for but without threatening the searchability, a11y, and fallback behaviors that make the web so great. And this is indicative of a particularly strong property of markup: it's about relationships. "This thing references that thing over there and does something with it" is hard for a search engine to tease out if it's hidden in code, but if you put it in markup, well, you've got a future web that continues to be great for users, the current crop of developers, and whoever builds and uses systems constructed on top of it all later. That last group, BTW, is you if you use a search engine.
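
For the curious, this is the shape of the "send data, template it on the client" world MDV points at; a sketch using the <template> element as it eventually shipped, with MDV's declarative binding sugar left out and all names invented:

// Clone inert markup from a <template>, fill it with data, append it.
// The structure stays visible in markup instead of being buried in
// string-building code. Assumes the page contains:
//   <template id="row"><li><a class="user"></a></li></template>
//   <ul id="users"></ul>
var tpl = document.getElementById('row');
var list = document.getElementById('users');

[{ name: 'Alex', url: '/alex' }, { name: 'Dimitri', url: '/dimitri' }]
  .forEach(function (user) {
    var row = document.importNode(tpl.content, true);
    var link = row.querySelector('.user');
    link.textContent = user.name;
    link.href = user.url;
    list.appendChild(row);
  });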

But it wasn't all clarity and light in the comments. Austin Cheney commented on the last post to say:

This article seems to misunderstand the intention of these technologies. HTML is a data structure and nothing more. JavaScript is an interpreted language whose interpreter is supplied with many of the most common HTML parsers. That is as deep as that relationship goes and has little or nothing to do with DOM.

...It would be safe to say that DOM was created because of JavaScript, but standard DOM has little or nothing to do with JavaScript explicitly. Since the release of standard DOM it can be said that DOM is the primary means by which XML/HTML is parsed suggesting an intention to serve as a parse model more than a JavaScript helper.

Types in DOM have little or nothing to do with types in JavaScript. There is absolutely no relationship here and there shouldn’t be...You cannot claim to understand the design intentions around DOM without experience working on either parsers or schema language design, but its operation and design have little or nothing to do with JavaScript. JavaScript is just an interconnecting technology like Java and this is specifically addressed in the specification in Appendix G and H respectively.

And, after I tried to make the case that noting how it is today is no replacement for a vision for how it should be, Austin responds:

The problem with today’s web is that it is so focused on empowering the people that it is forgetting the technology along the way. One school of thought suggests the people would be better empowered if their world were less abstract, cost radically less to build and maintain, and is generally more expressive. One way to achieve such objectives is alter where costs exist in the current software life cycle of the web. If, for instance, the majority of costs were moved from maintenance to initial build then it could be argued that more time is spent being creative instead of maintaining.

I have found that when working in HTML alone that I save incredible amounts of time when I develop only in XHTML 1.1, because the browser tells you where your errors are. ... Challenges are removed and costs are reduced by pushing the largest cost challenges to the front of development.

... The typical wisdom is that people need to be empowered. If you would not think this way in your strategic software planning then why would it make sense to think this way about strategic application of web technologies? ...

This might all sound very rational on one level, but a few points need to be made:

I think Austin's point about moving costs from maintenance to build is supposed to suggest that if we were only more strict about things, we'd have less expensive maintenance of systems, but it's not clear to me that this has anything to do with strictness. My observation from building systems is that this has a lot more to do with being able to build modular, isolated systems that compose well. Combine that with systems that let you iterate fast, and you can grow very large things that can evolve in response to user needs without turning into spaghetti quite so quickly. Yes, the web isn't great for that today, but strictness is orthogonal. Nothing about Web Components demands strictness to make maintainability infinitely better.

And the last point isn't news. Postel's Law isn't a plea about what you, dear software designer, should be doing, it's an insightful clue into the economics of systems at scale. XML tried being strict and it didn't work. Not even for RSS. Mark Pilgrim's famously heroic attempts at building a reliable feed parser match the war stories I've heard out of the builders of every large RSS system I've ever talked to. It's not that it's a nice idea to be forgiving about what you accept, it's that there's no way around it if you want scale. What Austin has done is the classic bait-and-switch: he has rhetorically substituted what works in his organization (and works well!) for what's good for the whole world, or even what's plausible. I see this common logical error in many a standards adherent/advocate. They imagine some world in which it's possible to be strict about what you accept. I think that world might be possible, but the population would need to be less than the size of a small city. Such a population would never have created any of the technology we have, and real-world laws would be how we'd adjudicate disputes. As soon as your systems and contracts deal with orders of magnitude more people, it pays to be reliable. You'll win if you do and lose if you don't. It's inescapable. So let's banish this sort of small-town thinking to the mental backwaters where it belongs and get on with building things for everyone. After all, this is about people. Helping sentient beings achieve their goals in ways that are both plausible and effective.

If helping people is not what this is about, I want out.
