The Pain of HTML5

One of the essential problems with any new technology is whether to adopt it, abandoning users who are unable to upgrade, or to hold back, missing out on new possibilities and eventually becoming irrelevant.

The middle way, of course, is to move to the latest technologies as soon as they are stable enough, but also to provide fallbacks and, where necessary, reduced functionality to users of older systems. This has been the lot of every web developer for years now, where the round blue albatross around everyone’s neck has been IE6.

Every code path has a cost, and supporting multiple generations of browser has a high cost (development, testing, support, maintenance and, most importantly, coder misery). In recognition of this, one company has even begun passing that cost on to customers, in the shape of a price markup for those who use its site with old versions of IE.

We’ve noticed that our customers are increasingly prepared to upgrade their browsers, so we are preparing newer versions of some of our core libraries that jettison the code that existed only to bring older browsers as close as possible to HTML5-level capabilities. We’re also using the opportunity to let HTML5 technologies that were necessarily bit players in the past take center stage.

However, not all of the things we would like to do are possible yet. That’s what this post is about: the places where HTML5 currently falls short that have hurt us in the last month.

Pain 1: Internet Explorer

The party over the passing of IE6 (and in our case IE7 too) is barely over before you start realising that, better though it is, IE8 is still not in any sense a ‘modern browser’ (that wonderful euphemism for ‘browser not built by Microsoft’). In fact, we are going to make extensive use of WebSockets and WebWorkers, and they aren’t even in IE9.

There goes another code path and we still haven’t moved beyond the old rule: one way for IE and one way for everything else.

IE8 doesn’t even have ECMAScript 5, so you’ll have to decide what to do about Array.prototype.indexOf, Array.prototype.forEach, Date.now() (which creates less garbage than +new Date()), getter and setter properties, and a host of other useful JavaScript upgrades. Of course you’ll be working without strict mode too.
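
For illustration, here is the sort of shim IE8 forces on you. This is a minimal sketch of Array.prototype.indexOf only; a production shim library also handles the spec’s edge cases (negative fromIndex, sparse arrays) properly.

    // Minimal sketch of an ES5 shim for IE8. A production shim should also
    // handle negative fromIndex values and other spec edge cases.
    if (!Array.prototype.indexOf) {
        Array.prototype.indexOf = function (searchElement, fromIndex) {
            var length = this.length >>> 0;        // coerce to uint32, as the spec does
            var i = Math.max(fromIndex | 0, 0);
            for (; i < length; i++) {
                if (i in this && this[i] === searchElement) {
                    return i;
                }
            }
            return -1;
        };
    }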

Oh yes: in IE8 and IE9 you can make cross-origin requests, but not with an XMLHttpRequest; you have to use the separate XDomainRequest object, while in IE10 XMLHttpRequest itself is updated to support the new functionality. This means you can’t treat the presence of XDomainRequest as the signal to use it, because it still exists in IE10, where you’ll want to use the standard API.
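
The workable feature test, as far as we can tell, is to check for CORS support on XMLHttpRequest itself (the withCredentials property) and fall back to XDomainRequest only when that is missing. A sketch (the function name is ours, for illustration):

    // Sketch: pick a cross-origin transport without assuming that the mere
    // presence of XDomainRequest means it should be used (it still exists in IE10).
    function createCrossOriginRequest(method, url) {
        var xhr = new XMLHttpRequest();
        if ('withCredentials' in xhr) {
            // CORS-capable XMLHttpRequest: IE10, Firefox, Chrome, Safari, Opera.
            xhr.open(method, url, true);
            return xhr;
        }
        if (typeof XDomainRequest !== 'undefined') {
            // IE8/9: cross-origin only via XDomainRequest, which is limited to
            // GET and POST and cannot send custom headers or cookies.
            var xdr = new XDomainRequest();
            xdr.open(method, url);
            return xdr;
        }
        return null; // no cross-origin support at all
    }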

Only IE10 is really comparable with ‘modern browsers’ and that won’t even run on anything older than Windows 8 [EDIT: a commenter points out that while IE10 doesn’t run on Windows 7 at the moment, it will eventually]. What proportion of your customers are running that?

Pain 2: Variety of offline storage options, all insecure

When there’s a problem, we need to be able to investigate it by getting logs. For many web pages nothing much interesting happens on the client, so logs from the server are fine; but for our application, which might run for days without a page reload, we need more insight into what is happening on the client.

The problem is that logging slows the client down and causes memory growth, so we can only enable it once we know there is a problem, which is often too late. Using pre-HTML5 technologies, the best we could do was store a predefined number of the most recent log messages in a lazy format, to keep the performance hit low, and ask the user to retrieve them when there was a problem.
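
To make “a predefined number of the last log messages in a lazy format” concrete, here is a rough sketch (the names are illustrative, not our actual library): store the raw arguments in a fixed-size ring buffer, and only pay for string formatting when the logs are actually retrieved.

    // Hypothetical ring-buffer logger: stores raw arguments and defers
    // string formatting until the logs are retrieved.
    function RingLog(capacity) {
        this.capacity = capacity;
        this.entries = [];
        this.next = 0;
    }

    RingLog.prototype.log = function () {
        // Store the arguments untouched; formatting them now would cost CPU
        // and create garbage on every call. +new Date() works even in IE8.
        this.entries[this.next] = { time: +new Date(), args: arguments };
        this.next = (this.next + 1) % this.capacity;
    };

    RingLog.prototype.dump = function () {
        // Only when the user reports a problem do we pay to format.
        var ordered = this.entries.slice(this.next).concat(this.entries.slice(0, this.next));
        var lines = [];
        for (var i = 0; i < ordered.length; i++) {
            lines.push(ordered[i].time + ' ' + Array.prototype.join.call(ordered[i].args, ' '));
        }
        return lines.join('\n');
    };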

The dream, of course, is that logs are written in the background (so as not to reduce the responsiveness of the client) to some form of offline storage, and can be retrieved after something has gone wrong, even if the machine has been rebooted in between.

This could be achieved by writing to the FileSystem API in a WebWorker, except that the FileSystem API is not currently supported by anything except Chrome.
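
For the record, the Chrome-only version looks roughly like this, using Chrome’s prefixed synchronous FileSystem API, which is available inside workers. This is a sketch only; the quota and filename are illustrative.

    // logger-worker.js -- sketch only. webkitRequestFileSystemSync is
    // Chrome's prefixed API; no other browser implements it at the moment.
    var fs = self.webkitRequestFileSystemSync(self.TEMPORARY, 5 * 1024 * 1024);

    self.onmessage = function (event) {
        var entry = fs.root.getFile('client.log', { create: true });
        var writer = entry.createWriter();
        writer.seek(writer.length);                  // append to the end
        writer.write(new Blob([event.data + '\n'])); // synchronous inside a worker
    };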

We could use localStorage instead of the FileSystem API, except that localStorage is a synchronous API, so keeping a lot of data there would increase load times.

Even worse, we can’t actually store logs unencrypted on the local machine, because they might contain sensitive information. Suddenly our simple logging solution is looking messy and slow, and it requires us to implement our own encryption, since there is no encryption API for storage (or for anything else) in HTML5. I think one would make a good addition to the standard library.

Pain 3: No good support for sharing between windows

Sometimes you’d like to be able to share connections to the same server across many different windows. This is particularly useful if you’ve got elements that the user can pop out, but if a user opens two different tabs to the same application it would be nice for them to share the connection too.

We’ve got access to postMessage now, which greatly improves the code, but only if you have a handle to the window you want to send messages to. How can you discover other tabs that the user may have opened? In Chrome, they are not even running in the same process. This kind of functionality will be necessary to create long-running applications that behave like services.
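
When you do have a handle, for example because your code opened the pop-out itself, postMessage works well. A sketch (the page name is illustrative):

    // Parent page: we only have a handle because we opened the window ourselves.
    var origin = location.protocol + '//' + location.host;
    var popout = window.open('popout.html', 'popout'); // illustrative URL

    function sendToPopout(message) {
        // Always pass an explicit target origin; '*' would hand the data
        // to whatever page happens to occupy that window.
        popout.postMessage(JSON.stringify(message), origin);
    }

    // Pop-out page: verify the sender before trusting the message.
    window.addEventListener('message', function (event) {
        if (event.origin !== location.protocol + '//' + location.host) { return; }
        var message = JSON.parse(event.data);
        // ...hand off to the shared-connection layer...
    }, false);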

You could use a SharedWorker for this instead, except that it isn’t supported by Firefox.
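
The shape of the SharedWorker solution, for the browsers that do support it (file and message names illustrative):

    // Each tab or window connects to the same SharedWorker instance.
    var shared = new SharedWorker('connection-worker.js');    // illustrative file
    shared.port.onmessage = function (event) {
        // data pushed from the single shared server connection
    };
    shared.port.postMessage({ subscribe: '/prices/GBPUSD' }); // illustrative

    // connection-worker.js: one worker, many ports, one server connection.
    var ports = [];
    onconnect = function (event) {
        var port = event.ports[0];
        ports.push(port);
        port.onmessage = function (msg) { /* forward to the server */ };
    };
    function broadcast(data) {
        for (var i = 0; i < ports.length; i++) { ports[i].postMessage(data); }
    }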

You could use an onstorage event to broadcast to all the other windows, except that not all browsers give you a handle to the window that did the storing on that event (it’s not part of the spec).
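
The storage-event trick, for reference: a write to localStorage fires a storage event in every other window on the same origin, though, as noted, nothing in the event identifies the sending window (the key name is illustrative):

    // Sending window: make each payload distinct so that repeated identical
    // messages still trigger the event (it only fires when the value changes).
    function broadcast(message) {
        localStorage.setItem('broadcast', JSON.stringify({
            sent: +new Date(),
            message: message
        }));
    }

    // Every *other* window on the same origin receives the event.
    window.addEventListener('storage', function (event) {
        if (event.key !== 'broadcast' || !event.newValue) { return; }
        var envelope = JSON.parse(event.newValue);
        // envelope.message is the payload; the sending window is unknown.
    }, false);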

Pain 4: Details of WebWorkers patchily implemented

We’re very keen to use WebWorkers where possible to receive and parse our data, since receiving many thousands of messages a second can start to impact responsiveness if you do it on the main JavaScript thread.
Unfortunately, Firefox doesn’t support WebSockets inside Workers (there’s an under-appreciated bug report).
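
What we would like to write is roughly the following, which works in Chrome but not in Firefox, since WebSocket is not exposed inside workers there (the URL and message format are illustrative):

    // feed-worker.js -- receive and parse off the main thread, so the page's
    // JavaScript thread stays free for rendering and input handling.
    var socket = new WebSocket('wss://example.com/feed'); // illustrative URL

    socket.onmessage = function (event) {
        var update = JSON.parse(event.data); // the expensive part, off-thread
        self.postMessage(update);            // hand the page a parsed update
    };

    // Main page:
    //     var worker = new Worker('feed-worker.js');
    //     worker.onmessage = function (e) { render(e.data); };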

Neither Chrome nor Firefox resolves Worker scripts relative to the location of the code that instantiates them; both resolve them relative to the page itself. So our library must be deployed on the same web server the page is served from (no CDN for you!), it can’t be included by pages at different levels of the hierarchy, and the script file can’t be renamed without a code change. This is despite the fact that the spec is clear that worker scripts should be resolved relative to the instantiating script.
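
The least bad workaround we know of is for the script to discover its own URL as it loads and build the worker URL from that. A sketch (the filename is still hard-coded, and the same-origin restriction still applies):

    // Runs at script load time: while the document is still parsing, the
    // currently executing external script is the last one in the collection.
    var scripts = document.getElementsByTagName('script');
    var ownSrc = scripts[scripts.length - 1].src;
    var scriptBase = ownSrc.substring(0, ownSrc.lastIndexOf('/') + 1);

    // Later: resolve the worker next to this script rather than next to the page.
    var worker = new Worker(scriptBase + 'parser-worker.js'); // illustrative name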

Ecstasy

HTML5 is a wonderful thing, filled with possibilities for letting developers soar and enabling new and interesting capabilities. But that makes it all the more frustrating when you come across inconsistently implemented features, or discover that you still have to carry Internet Explorer (in its eighth incarnation) around your neck.

It seems appropriate to end with a misquote from Marlowe’s The Tragical History of Doctor Faustus:

Thinkst thou that I who saw the face of the Web Hypertext Application Technology Working Group,

And tasted the eternal joys of the HTML5 Spec,

Am not tormented with ten thousand hells,

In being deprived of everlasting bliss?

21 thoughts on “The Pain of HTML5”

  1. “Only IE10 is really comparable with ‘modern browsers’…”
    You mean “last year’s” modern browsers. IE10 isn’t out yet and, by the time it’s finally released, it will be at least two years behind all the others…again.
    I read a post on Stack Overflow yesterday that told someone to go ahead and use a feature that will appear in IE10 because, by the time his app was released, IE10 would be out. That post was a year old. I hope the questioner didn’t follow that advice.

  2. “Neither Chrome nor Firefox correctly resolve Worker scripts based on the location of the code that instantiates them rather than the page itself, […]. This despite the fact that the spec is clear that worker scripts should be resolved relative to the instantiating script.”
    I believe you are misreading the spec. It’s supposed to be relative to the page itself, which is consistent with XHRs, setting window.location, and really every other script-triggered resource load. A script’s base URL, when created from a document node, is the document’s base URL.
    http://www.whatwg.org/specs/web-apps/current-work/multipage/webappapis.html#script's-base-url
    http://www.whatwg.org/specs/web-apps/current-work/multipage/webappapis.html#create-a-script-from-a-node

    1. Interesting. You may be correct, although the links you provide don’t make it explicit.
      > If the base URL is set from another source, e.g. a document base URL, then the script’s base URL must follow the source, so that if the source’s changes, so does the script’s.
      Notice the ‘if’, which implies first of all that it’s possible for a script’s base URL not to be set from a document base URL. I would assume that this is true only if the script is embedded in the page. That’s also what I believe the second link is talking about.
      Regardless of what is meant, however, the correct solution is that scripts load other scripts relative to themselves. It might be reasonable for scripts to load resources relative to the page, but scripts that load other scripts *need* to load them relative to the first script, for the reasons I give above (CDNs, references to libraries from pages in multiple locations, etc.).

      1. The second link is quite explicit. A script created from a document node sets its base URL to the owning document’s base URL. It is that script that is the entry script and that base URL which is used to resolve the worker’s URL. If you want to trace back a step, here is the algorithm for executing a script. Note step 2.4, which calls the algorithm in the second link. And note it applies to both inline scripts in the document and ones referenced externally; there’s never been much difference between the two in terms of how they execute.
        http://www.whatwg.org/specs/web-apps/current-work/multipage/scripting-1.html#execute-the-script-block
        The spec says “if” because there are other ways to launch script—a web worker for instance. In fact the script for a web worker gets the worker’s URL as its base URL, so any nested workers are resolved relative to their parent’s base URL. But, again, this is only for nested workers.
        http://www.whatwg.org/specs/web-apps/current-work/multipage/workers.html#run-a-worker
        You will have other issues with CDNs though. If you note step 3 here, the worker script must be same-origin as the page, otherwise you get a SecurityError. This is distinct from the base URL. The base URL is just a way to resolve URLs. Origins are the security principals on the web and changing those has far more serious consequences. Though I don’t see off-hand the reason for the restriction here. At the very least, you may be able to convince WHATWG to allow CORS on that request.
        http://www.whatwg.org/specs/web-apps/current-work/multipage/workers.html#dom-worker
        http://dvcs.w3.org/hg/cors/raw-file/tip/Overview.html

        1. > If you note step 3 here, the worker script must be same-origin as the page, otherwise you get a SecurityError. This is distinct from the base URL. The base URL is just a way to resolve URLs.
          The correct behaviour (and what I thought the spec said, although I may well have been wrong about that) is for the same origin check in this case to be between the origin the script file was loaded from and the worker script.
          Whether or not a particular piece of javascript uses a Worker is an implementation detail that shouldn’t necessitate modifying the set up of the pages or server that uses the script.

          1. (The origin check is distinct from the base URL for URL resolution you were talking about. Though, yes, both come from the parent page. Per spec and per implementation.)
            That actually would cause no end of security problems. HTML pages (iframes and opened links) run with the ambient authority of the origin. JavaScript historically never has, and so cannot be made to now.
            Imagine a site which hosts user-supplied text files which are then served back under the original origin. There is a lot of content sniffing to get right, but you can make it work. (Ever noticed how Gmail’s “View original” link inserts 1024 blank spaces at the beginning? That’s to prevent IE from interpreting the page as HTML, thus avoiding an XSS.) Nowhere[1] in the web platform does a page interpreted as script inherit the authority of its hosting origin. So our hypothetical site has no reason to prevent users from uploading a file containing “self.postMessage([something only our site can do])”. Now suppose we modify workers to run as their hosting origin: the site has an XSS vulnerability.
            You also can’t have any same-origin checks against the origin hosting the JavaScript. There is basically no difference between inline script and externally hosted script once they run[2]. Even if https://a.example.com hosts the script, if it’s embedded in https://b.example.com, it runs with https://b.example.com's privileges and manipulates https://b.example.com's data. For that script to then gain privileges as https://a.example.com would again create vulnerabilities. Think of it as a #include from C: everything you run is within the context of the including page.
            Am I correct in understanding your goal here that you have some library that you host which is then included into other people’s sites? And that library internally wants to use a worker? In that case, what about embedding an invisible iframe that you host? Then you have code that runs under your origin and you can do what you like with it, including launching a worker. That iframe can’t access the parent page’s data, but you can postMessage between them and communicate what you need.
            [1] Well, CSP sort of does. But not in a way that’s relevant here.
            [2] Again, CSP aside; CSP affects whether they run at all based on hosting origin.

          2. I’m not suggesting that the worker runs with the authority of the host it’s loaded from. I’m suggesting that the worker is resolved relative to the script that loads it, and can only be loaded if it has the same origin as the script that loads it. It would of course run with the authority of the page.
            Surely we can agree that if v1.2 of a library wants to take advantage of webworkers, that should be an implementation detail and should not require changes to the scripts, pages or server of the code that uses the library.
            > Am I correct in understanding your goal here that you have some library that you host which is then included into other people’s sites? And that library internally wants to use a worker? In that case, what about embedding an invisible iframe that you host? Then you have code that runs under your origin and you can do what you like with it, including launching a worker. That iframe can’t access the parent page’s data, but you can postMessage between them and communicate what you need.
            This is exactly what we’ve been doing for the last 10 years (except using document.domain instead of postMessage, obviously), and it’s a ridiculous hack that this spec should have addressed. Libraries should not be making changes to other people’s pages, particularly not changes that involve adding elements that the owner of the page is not allowed to remove. Also, in the absence of MessageChannel (which very few browsers support), it means that we can’t communicate with workers running on the main page without tying up the main thread.
            In terms of our set-up, we have a library and we want it to be able, based on a configuration option, to load itself in a WebWorker regardless of which server it’s loaded from and what its filename is. Because workers are loaded relative to the page rather than the script, we can’t get a script to load itself or another script right next to it without knowing in advance where that script will be deployed (probably somewhere different for each of our clients). We have various ways around this, but they’re all annoying.
            The other difficulty with your suggestion is that our library makes connections to servers, but can failover to other servers if those are not available. If those servers are not available, then we can’t load an iframe from them, and when we want to failover, we’d have to replace the whole iframe page (and library). This means that the majority of the library *must* be in the main page, and only the code necessary to deal with the connection is in the iframe, which then means that the main thread is being tied up again.
            I’m confused about why this is controversial. Scripts can already load scripts from anywhere (XHR with CORS and then eval, or adding a script tag to the page), so why should they not do so relative to their own location?
            There has never been a standard way for javascript to load javascript before, and it’s looking like this new standard could have done with a bit more thought.

  3. IE8 was RTM’d (released to manufacturing) on March 19, 2009. ECMAScript 5 was published in December 2009. See a problem here? Of course it doesn’t have ES5 support.

    1. Indeed. I’m not saying it should. What I’m saying is that when you make the decision to support IE8, you need to be aware of the huge raft of useful JavaScript features you’re leaving behind.
      You’ll notice that I’m not complaining that Firefox 3 doesn’t support ECMAScript 5. That’s because we don’t have to support FF3. The pain here is not that IE8 doesn’t support modern web features; it’s that modern web developers have to support IE8.
      IE8 is the new IE6.

    2. The real problem is not that IE8 doesn’t have ECMAScript 5 support; it’s that Microsoft markets its web browser in an idiotic way that exists only to make more money. No one should be using IE8 any more than they should be using Firefox 3.

  4. Slightly untechy, but what about the horrible CSS rules required to support nested section and h1, h2, etc. tags? Are we all forced to use a CSS compiler to make writing and maintaining styles reasonable? I’m not even sure the compilers are much help or good solutions anyway.

    1. I think the problem of CSS organisation and compatibility is difficult and interesting, but there are others here at Caplin with perhaps more to say on this topic than me. I’ll see if I can get one of them to write a blog…

  5. Speaking of WebSockets, IE6+ and pretty much all of the desktop browsers work well with them. Why? Linux/OS X are usually up to date enough to have WebSockets, while Windows (IE mostly) can easily fall back to Flash. No problem whatsoever there.
    iPhone? No such luck: only the very bleeding edge works well. But that’s OK; the Apple market tends to throw away devices older than six months. BlackBerry? RIM seems to want to provide people with a smooth web experience; the BlackBerry browser has supported WebSockets for quite some time.
    That leaves us with Android. In three days of research, I’ve found there is ABSOLUTELY NO WAY to get WebSockets working in the stock Android browser.
    The only option is to ask Android users to switch to another browser (Firefox Mobile, Opera Mobile or Chrome for Android).
    The situation feels like déjà vu… “best viewed with xyz”… Oh, and funnily enough, it’s not Microsoft’s fault this time round! Go figure.

  6. I made my own document scripting solution for apps.
    It’s written in a 4GL that supports Win32 and databases, and has an IP database driver from a US company, so I can access my own business solutions over the internet without a browser. Browsers suck; HTML sucks.

  7. Of course not all users will have the latest software. This isn’t anything to do with HTML5; it has always been the case that developers have had to support systems a few years old. I also think this is absolutely nothing like the IE6 situation: Microsoft had effectively halted all browser development, leaving us with an extremely buggy browser that was years behind standardisation. IE8+ is pretty stable for what it claims to do, and nowhere near as far behind the curve as IE6 got.
    Is it really such a terrible thing to be patient while new technology matures and is adopted, before jumping into it? I know it’s nice to stay on the edge, but only a small percentage of sites need all this stuff, and those sites should have the budgets to pay for what it takes.
    I realise HTML5 is marketing-friendly and thus getting pushed heavily. But its definition has been so distorted away from the actual HTML5 spec that developers have to be sensible enough to look at the individual specs and decide what can be used and when. Don’t be misled by the marketing into thinking that it is both possible and important to use everything that marketers have falsely placed under the HTML5 umbrella. Go HTML5 in terms of marketing, but in terms of technology do that by using the new HTML5 elements and being more careful about the other stuff.
    So I don’t like the feeling of whining I picked up here: if you need to work on the technology edge, you’re getting paid for it ;-). And if you’re not getting paid for it, then you’re crazy to work on under-budgeted projects. I feel very lucky now to be a web developer working in browsers that aren’t full of bugs and that have great built-in developer tools. IE6 was hell; now we are in a situation where we can avoid problems with apt planning while continuing to make high-quality sites, regardless of whether the whole future wave of specs is immediately usable to us.

    1. Sounds like a case of Stockholm syndrome: just happy not to be developing for IE6 anymore, regardless of what you have to put up with otherwise. The fact of the matter is that IE8 is a marginal improvement on IE6; IE9 is a significant improvement, and it looks like IE10 will be a ‘modern browser’. We had to support IE6 for nearly 11 years (so yes, I know very well that users are not always going to have the latest browsers). It’s going to be miserable if we have to support IE8 for anywhere near that long.
      It continues to be a major problem for web developers that MS do not upgrade the capabilities of their browsers and like to distinguish their platforms by restricting which browsers will run on different versions of Windows. That’s why IE8 is so far behind yet we still have to support it, and it seems perfectly reasonable to me to complain about such practices.
      The complaint about non-secure storage is a complaint that the spec hasn’t addressed this requirement. One of the reasons I wrote this blog was the hope of raising awareness of the need for it. It’s true that this item is a bit of a wish list, but given that we now have specs for four different storage mechanisms, plus one that was implemented and dropped, it’s a shame that nobody seems to have done any work on this, even though Nicholas Zakas raised it more than two years ago.
      The bugs with WebWorkers, e.g. Firefox not letting them access WebSockets despite the spec, have been identified for more than three years, yet no work has been done. I think it’s pretty justified to complain about that kind of thing.

      1. Freedom from being kept as a slave is a human right; having all the latest technology at your disposal is not ;-). So I think your Stockholm comparison is rather silly: no one is keeping you hostage, you just exist in a changing world.
        I’d like Microsoft to make IE for more platforms, but I have no entitlement to it, so I’m not going to get angry that they don’t. But even if they did, we’d have a situation where older hardware wasn’t fast enough to deal with the latest standards, so it wouldn’t really solve the problem completely. But yes, it would be very nice.
        I would definitely disagree with the idea that IE8 is not much of an improvement over IE6. IE6 was fundamentally broken: basic CSS just went wrong, and things would even behave randomly (sometimes margins would add up, then you’d refresh and they wouldn’t). Your problem with IE8 is more an issue of functionality that was not implemented at that point, rather than broken. There’s a big difference here. Clients always thought design was easy, as they’d used DTP programs like Publisher and thought it was all dragging edges etc. Yet the tools we were supposed to have, limited as they were compared to DTP programs, were also fundamentally buggy. I see clients assume functionality is easy far less often than I see them assume design is easy, as they don’t have this erroneous insight into it. Also, from a planning point of view, it is a lot easier to plan around things you know are missing than to guess how IE6 would randomly misrender pages.
        Also, IE was pretty much abandoned at IE6 (the team was dissolved), but since IE7, IE has been in constant development, so the cycle is much faster now. I think we had a right to be angry with the IE6 situation: Microsoft had used its monopoly to destroy competitors, then abandoned its product, kicking a whole industry down and creating God knows how many millions or billions in bug-workaround costs.
