It’s just data

Breaking the Web with hash-bangs

Mike Davies: So the #! URL syntax was especially geared for sites that got the fundamental web development best practices horribly wrong, and gave them a lifeline to getting their content seen by Googlebot.

The correct way of doing things would of course be to use window.history.pushState (as GitHub is now doing to speed up navigation in repo trees), but not all browsers implement it (let alone deployed versions!), so the hash-bang can be seen as a transitional solution that works everywhere.
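A minimal sketch of that transitional strategy, with all names illustrative rather than taken from GitHub's code: feature-detect history.pushState and fall back to a hash-bang URL when it is missing.

```javascript
// Sketch only: feature-detect pushState, fall back to "#!" URLs.
// navigateTo(), render() and the URL shapes are illustrative assumptions.
function hasPushState() {
  return typeof window !== 'undefined' &&
         !!window.history &&
         typeof window.history.pushState === 'function';
}

// The URL that ends up in the address bar for a given in-app path.
function displayUrl(path, pushStateSupported) {
  return pushStateSupported ? path : '/#!' + path;
}

function navigateTo(path, render) {
  if (hasPushState()) {
    // Real URL: works for bookmarks, crawlers and non-JS fallback alike.
    window.history.pushState({ path: path }, '', path);
  } else {
    // Transitional fallback: the fragment never reaches the server.
    window.location.hash = '#!' + path;
  }
  render(path); // fetch and draw the new content
}
```

The point of the sketch is that only the address-bar handling differs; the data loading is the same in both branches.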

That being said, it doesn’t make sense to use history.pushState if the page requires JavaScript anyway (“empty window” interfaces), and in many cases it doesn’t make sense to build a no-JavaScript equivalent that nobody will ever use.

Finally, I don’t think this is a sign of “the web done wrong”; it’s just done “differently”: the JS app is “code on demand”. The hash-bang and <meta name="fragment" content="!"> are for crawlers that won’t run that “code on demand” but should still get a chance to make the same “AJAX calls” your JS app makes to get its data. Different capabilities, different paths.
The difference with building an alternative, no-JS app? The crawler doesn’t care about usability, only data.
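For context, the crawler-side half of that scheme can be sketched as a URL rewrite: under Google's AJAX crawling convention, a hash-bang URL is requested from the server as `?_escaped_fragment_=...`, so the server can answer with the data directly. This is a simplified sketch (the real scheme percent-encodes only a few special characters; `encodeURIComponent` is used here as an approximation):

```javascript
// Sketch of the crawler's URL rewrite: "#!fragment" becomes a
// "_escaped_fragment_" query parameter the server can actually see.
function escapedFragmentUrl(url) {
  var i = url.indexOf('#!');
  if (i === -1) return url; // no hash-bang: nothing to rewrite
  var base = url.slice(0, i);
  var fragment = url.slice(i + 2);
  var sep = base.indexOf('?') === -1 ? '?' : '&';
  // Approximation: full percent-encoding instead of the scheme's
  // minimal escaping of %, #, & and +.
  return base + sep + '_escaped_fragment_=' + encodeURIComponent(fragment);
}
```

The server then maps that query parameter back to the same data the JS app would have fetched, which is exactly the “different paths” idea above.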

Posted by Thomas Broyer at

@Thomas You make the assumption that this is an either/or situation. Either build a JS-only app, or build two apps (JS and non-JS). However, at this point in the evolution of the web, any web developer worth their salt is capable of building sites using progressive enhancement. This eliminates the false dichotomy you are presenting. Gawker’s site is nothing more than a simple blog; it is not an app. Building the ‘content on demand’ via JS on top of an existing static HTML blog is trivial. It is so trivial, in fact, that I refuse to call any developer of this alternative technique a ‘professional’.

Now, I don’t want to dismiss the possibility that a site may be built to require JS. However, such a requirement can only be made in cases where the user’s platform and environment can be controlled. This situation does not exist on the public web; it only exists on corporate intranets. Therefore, if a public site requires JS simply to access content, then it is wrong.

Posted by Jason Karns at

@Jason I don’t make that assumption: GitHub does use progressive enhancement for the tree browsing (hence the use of pushState, if supported).

But Google apps for instance all require JS (except GMail, but it’s not progressive enhancement either), and they’re not limited to corporate intranets (quite the contrary actually).

It’s a tradeoff: given how clients (even mobile ones) are more and more powerful, with faster JS engines every couple of months, “full JS”, “single page” apps/sites allow you to cut down the size of exchanged data by an order of magnitude (sure the initial download, containing the “code on demand” is heavier, but it can be cached; and then only raw data is exchanged over the wire), allowing faster responses and better UX (test GitHub tree browsing with and without JS, or with an unsupported browser).
With progressive enhancement, if you then load data using AJAX, what about your URLs? (Imagine GitHub doing the very same thing but without using pushState to update the URL.) If you then fall back to using “hashes”, you’re missing the point: you’d rather start with an empty page than with one whose content will be dynamically replaced as soon as it loads (if someone bookmarks the “#’d URL” to come back to it later).
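The bookmarked-hash case above amounts to a small boot-time router; a sketch, where `loadContent` is an assumed application function, not anything from GitHub:

```javascript
// Sketch: honour a bookmarked "#!" URL on page load by loading the
// fragment's content instead of leaving the default page up.
function routeFromHash(hash) {
  // "#!/repo/tree" -> "/repo/tree"; anything else -> null
  return hash && hash.indexOf('#!') === 0 ? hash.slice(2) : null;
}

if (typeof window !== 'undefined') {
  window.addEventListener('load', function () {
    var path = routeFromHash(window.location.hash);
    if (path !== null) {
      loadContent(path); // assumed app function: replaces the page content
    }
  });
}
```

Which is precisely the objection: the server cannot see the fragment, so the user first gets a page whose content is thrown away and re-fetched client-side.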
If you don’t load data using “AJAX”, then you’re loading “too much data” on each page load (see the banner and left menu in Sam’s blog, for instance; very lightweight in this case, but not on sites like Lifehacker, Facebook, or GitHub). That data not only has to be transferred over the wire even if it hasn’t changed, it has to be rendered again on the client side (even on mobile devices, where rendering is costly), and it first has to be computed and assembled on the server side (which is probably what costs the most in GitHub’s case). The discussion on progressive enhancement is irrelevant here: if you’re not “enhancing” data loading and navigation, then why are we talking about URLs?

Why would it be “bad” to require JS if everyone has a JS-capable browser? The only exceptions I could cite are crappy mobile browsers (NetFront, built into my wife’s Samsung Player One), mobile browsers attempting to bring you “the web” on feature phones or otherwise crappy devices (Opera Mini, which I use every morning to check the news over coffee before going to work), and text-based browsers (Lynx et al.) that you use either because you have no choice (you’re in front of a server with only a console) or because you’re reactionary.

Posted by Thomas Broyer at

I have a number of friends, squarely in Lifehacker’s target group, who always surf using the NoScript extension for Firefox. They don’t do so for fun; they do it because they’d rather surf a little less comfortably in exchange for more privacy and security. While I’m personally not that strict, I can understand the sentiment. Why would you purposefully exclude users making this choice when suitable alternatives exist? Namely the HTML5 History API: it solves all these problems for users on recent WebKit browsers or Firefox 4, and the share of users on a browser that supports it will be quite big in the near future.

Posted by Rouven Weßling at

It’s About The Hashbangs

Before I get started here’s the disclaimer: The opinions expressed in this rant are my own personal opinions on web development and do not represent the views of my employer, its engineering organisation or any other employees. A few months back...

Excerpt from - Home at

It's About The Hashbangs

Before I get started here’s the disclaimer: The opinions expressed in this rant are my own personal opinions on web development and do not represent the views of my employer, its engineering organisation or any other employees. A few months back...

Excerpt from - Home at

Add your comment