Sorry for the late notice, but the inimitable Matthew Russell has organized another dojo.beer() event for 7pm tonight (Wed, July 22nd) at O'Flaherty's pub in San Jose, near the convention center. Should be a great time, so if you're in the area, hopefully we'll see you there!
In which I partially defend Microsoft and further lament the state of tech "journalism".
A very short open letter:
Dear interwebs:
Please stop misrepresenting the results of benchmarks. Or, at a minimum, please stop blogging the results in snide language that shows your biases. It makes the scientific method sad.
Thank you.
Alex Russell
Today's example of failure made manifest comes via Reddit's programming section (easy target, I know), but deserves some special attention thanks to such witty repartee as:
Using slow-motion video? What a great idea. Maybe we can benchmark operating systems like that.
Maybe we can... and maybe we should. It might yield improvements in areas of OS performance that impact user experience. With a methodology that represents end-user perception, you should be able to calculate the impact of different scheduling algorithms on UI responsiveness, something that desktop Linux has struggled with.
The test under mockery may have problems, but they're not the ones the author assumes. It turns out that watching for visual indications of "doneness" is a better-than-average way to judge overall browser performance (assuming fixed hardware, testing from multiple network topologies, etc.). After all, perceived performance is all that matters in browsing. No one discounts a website's performance because it lets browsers cache resources that are reused across pages, or because it uses a CDN to improve resource-loading parallelism. In the real world, anything you can do to improve the perceived latency of a web site or application is a win.
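For what it's worth, the "watch the pixels" approach isn't hard to sketch. Given a screen recording of a page load (ideally high-speed, as in the slow-motion test), you can step through the frames and call the page visually "done" once successive frames stop changing. The sketch below is purely illustrative: the file name, the 50ms sampling step, and the 0.5% "settled" threshold are my own guesses, not anything from the tests discussed here.

```javascript
// Rough sketch: estimate "time to visual doneness" from a screen recording
// of a page load. Assumes the capture starts when navigation begins and is
// served same-origin (so getImageData on the canvas isn't blocked).
function estimateVisualDoneness(videoUrl, frameStep, settleThreshold) {
  var video = document.createElement("video");
  var canvas = document.createElement("canvas");
  var ctx = canvas.getContext("2d");
  var prev = null;
  var lastChange = 0;

  function framePixels() {
    ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
    return ctx.getImageData(0, 0, canvas.width, canvas.height).data;
  }

  function changedFraction(a, b) {
    var changed = 0;
    for (var i = 0; i < a.length; i += 4) {
      // compare RGB channels; ignore alpha
      var delta = Math.abs(a[i] - b[i]) +
                  Math.abs(a[i + 1] - b[i + 1]) +
                  Math.abs(a[i + 2] - b[i + 2]);
      if (delta > 30) { changed++; }
    }
    return changed / (a.length / 4);
  }

  video.addEventListener("loadedmetadata", function () {
    canvas.width = video.videoWidth;
    canvas.height = video.videoHeight;
    video.currentTime = frameStep; // kick off the sampling loop
  }, false);

  video.addEventListener("seeked", function () {
    var pixels = framePixels();
    if (prev && changedFraction(prev, pixels) > settleThreshold) {
      lastChange = video.currentTime; // the page was still painting here
    }
    prev = pixels;
    if (video.currentTime + frameStep <= video.duration) {
      video.currentTime += frameStep; // step to the next sampled frame
    } else {
      console.log("last visible change at ~" + lastChange.toFixed(2) + "s");
    }
  }, false);

  video.muted = true;
  video.src = videoUrl;
}

// Sample every 50ms of capture time; call it "settled" when fewer than 0.5%
// of pixels change between samples. Both numbers are arbitrary.
estimateVisualDoneness("pageload-capture.webm", 0.05, 0.005);
```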
MSFT's test methodology (pdf) does a good job of balancing several factors that affect latency for end-users, including resources that are loaded after onload or in sub-documents, potential DNS lookup timing issues, and the effects of network-level parallelism on page loading. Or at least it would in theory. The IE team's published methodology is silent on points such as how and where DNS caches may be in play and what was done to mitigate them, but the level of overall rigor is quite good.
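To make the "after onload or in sub-documents" point concrete, here's a rough sketch of the kind of in-page instrumentation such a harness implies. This is not the IE team's rig: the mark names, the lazily added image and iframe, and the logging are all stand-ins of mine, and a script-based start mark necessarily misses the DNS lookup and connection setup that happen before the first byte of HTML arrives.

```javascript
// Put this inline at the very top of <head> so the start mark is as early as
// script can run; it still misses DNS lookups and connection setup.
var startMark = new Date().getTime();
var marks = {};

function mark(name) {
  marks[name] = new Date().getTime() - startMark;
}

window.addEventListener("load", function () {
  mark("onload");

  // Plenty of real pages keep working after onload fires: lazy images,
  // ad/analytics scripts, dynamically injected iframes. A stopwatch that
  // stops at onload never sees any of this, but users do.
  var lateImg = new Image();
  lateImg.onload = function () { mark("late-image"); };
  lateImg.src = "late-loaded-asset.png"; // hypothetical post-onload resource

  var frame = document.createElement("iframe");
  frame.onload = function () { mark("late-iframe"); };
  frame.src = "widget.html"; // hypothetical sub-document
  document.body.appendChild(frame);

  // Give things a chance to settle, then hand the numbers to whatever is
  // driving the test run.
  window.setTimeout(function () {
    console.log(JSON.stringify(marks));
  }, 5000);
}, false);
```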
So what's wrong with the MSFT test? Not much, except that they didn't publish their code or make the test rig available for new releases of browsers to be run against. As a result, the data is more likely to be incorrect because it's stale than because of methodology problems. New browser versions are being released all the time, rendering the conclusions of the Microsoft study already obsolete. Making the tests repeatable by opening up the test rig, or by filling in the gaps in the methodology, would fix that issue while lending the tests the kind of credibility that the SunSpider and V8 benchmarks now enjoy.
This stands in stark opposition to this latest "benchmark". Indeed, while the source code was posted, it only deepens my despair. By loading the "real world sites" from a local copy, much of the excellent work being done to improve browser performance at the network level is eliminated entirely. Given the complexity of real-world sites and the number of resources loaded by, say, Facebook.com, changes that eliminate the effects of the network make the tests highly suspect. While excoriating JavaScript benchmarks for not representing the real world accurately, the test author eliminated perhaps the largest contributor to page-loading latency and perceived performance. Ugh.
Instead of testing real-world websites (where network topology and browser networking make a difference), the author tested local, "dehydrated" versions of websites. The result is that "loading times" weren't tested; what was run was a test of "local resource serving times and site-specific optimizations around the onload event". Testing load times would have accounted for resources loaded after the onload event fired, too. There's reason to think that neither time to load from local disk nor time for a page to fire the onload handler dominates (or even indicates) real-world performance.
I'm grateful that this test showed that Chrome loads and renders things quickly from local disk. I also have no doubt that Chrome loads real websites very quickly, but this test doesn't speak to that.
It's frustrating that the Reddits and Slashdots of the world have such poor collective memory and such faulty filtering that they can't seem to keep themselves from promoting these kinds of bias-reinforcing stories on a regular basis. Why, oh why, can't we have better tech journalism?
I'm not sure how long it was b0rked, but the online ShrinkSafe app is back up and working.
Dojo has as long a history as any chunk of JavaScript in wide use, and it's easy to forget how long the road has been and how far the project has come. Will of the Lucid Desktop project has put together a code_swarm visualization of the project's history to date. Lots of fun to see old friends appear and to think back on what was happening when:
Thanks, Will!
A relatively light-on-data article is up at Slashdot right now, and it casts aspersions both on the IBMers who contribute to Dojo and on the Foundation itself based on the Free Software party line that all software patents are inherently evil.
I won't address the background point regarding software patents here, except to say that reasonable people can disagree on this, particularly when it comes to proposed solutions. What I would like to focus some attention on is the background against which this patent filing was made.
IBM has executed a CCLA with the Dojo Foundation. This agreement gives Dojo (and the rest of the OSS community) a license to whatever patent rights may be embodied in contributions of code. While IBM may file patents on things they build and contribute to Dojo, there's no risk to any users regarding use of that code or "submarine" issues of patent infringement. As a result of the Dojo Foundation's insistence that ALL code come with CLAs, Dojo is more trustworthy in IP terms than most of the JavaScript you can choose to use. A similar patent claim in a less rigorously developed toolkit would indeed be apocalyptic, but the Dojo community has adopted a mature process for dealing with IP that both makes the concerns plain and then works to eliminate them, step by step. That's what licensing agreements are, after all: links in the chain that together help you trust that your anchor is indeed set.
It's clear to me that IBM filed this patent fully aware that they were giving away all follow-on rights to enforce it in anything but a defensive way for the benefit of the Foundation and users of Dojo. After watching IBM counsel decimate SCO in court, does anyone in the OSS world really think that IBM's lawyers are fools? And if so, to what end?
It's sad that Slashdot, after a decade of covering IP issues, still hasn't learned that licensing is harder than the zealots would have you believe and that malice isn't always the intent of those who participate in communities with a commercial interest.
The good news here, of course, is that IBM is just as generous today toward the OSS and Dojo communities as they were yesterday. We have the legal documents to prove it.