This covers many of the reasons I prefer web browsing without JS enabled: speed, lower memory usage, better security. Before I started doing so, I thought that disabling JS would make the internet unusable, but the vast majority of pages display their content just fine without it. In addition, it is really easy to add domains to the JS whitelist in Chrome without needing to go into the options menu every time (less so in FF). AND, Chrome uses separate whitelists for normal and private browsing, and clears the private browsing whitelist when you close the window! There are some periodic annoyances, but I wouldn't think of going back to default-enabled.
Browsing the web in Emacs is a bit too hardcore for me though :P
"I thought that disabling JS would make the internet unusable, but a vast majority of pages display their content just fine without it"
I also browse the web with JavaScript disabled (using NoScript), but have formed the opposite impression: more and more sites won't display their content (or display it incorrectly) if JavaScript is disabled. Most of the sites I'm looking at are not "web apps" or SPAs (single page apps); they are sites with text articles or links, so they have no real reason to break without JavaScript. (It's always annoying when you attempt to click a link only to find it won't work unless JavaScript is enabled.)
The rise of JavaScript frameworks is only fuelling this trend of JavaScript-dependent web sites. I've said this before, but web developers pick the tools that make their lives easier (as you'd expect), and that doesn't always mean that users get the best experience.
> It's always annoying when you attempt to click a link only to find it won't work unless Javascript is enabled.
Ah, the number of times I've asked colleagues to stop doing this! If you want a JS-powered link/button/whatever on a page, insert it using JS; then you're guaranteed that it will only show up for those who can use it.
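A minimal sketch of that rule, assuming a hypothetical "Share" button and a container with id "toolbar" (both invented for illustration): the control is created by JS, so it only ever exists for users whose JS actually ran.

```javascript
// Progressive enhancement sketch: this button only appears when JS runs,
// so no-JS users never see a control that can't work for them.
// The "toolbar" id and the click behaviour are made up for illustration.
function addShareButton(doc) {
  const btn = doc.createElement('button');
  btn.textContent = 'Share';
  btn.addEventListener('click', () => {
    // JS-only behaviour goes here (e.g. open a share dialog)
  });
  doc.getElementById('toolbar').appendChild(btn);
  return btn;
}

// In the page, after load: addShareButton(document);
```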
Likewise, all togglable content should begin visible, and selectively-hidden by JS during page load; that way, it only gets hidden for those users capable of showing it again.
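The same idea for toggles, sketched with a hypothetical helper (any element-like object with a `hidden` flag works): the content is visible in the raw HTML, and only the code that can reveal it again is allowed to hide it.

```javascript
// Collapsible-content sketch: the element starts visible in the HTML;
// we hide it only here, at JS run time, so no-JS users can still read it.
function makeCollapsible(el) {
  el.hidden = true; // safe to hide: this same code can un-hide it
  return {
    toggle() { el.hidden = !el.hidden; },
  };
}

// In the page, during load:
//   document.querySelectorAll('.collapsible').forEach(makeCollapsible);
```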
Also, although this is rarer, all work should be done in small, isolated event handlers. That way, when some unexpected situation arises (eg. the user is blocking your chosen spyware platform), that particular handler dies, but all of the rest keep working (eg. the button handlers, the slideshows, etc.).
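One way to sketch that isolation (the feature names and the blocked-tracker failure are invented for the example): initialise each feature separately, so an exception in one can't prevent the others from running.

```javascript
// Initialise each feature in its own try/catch so one failure (e.g. a
// blocked analytics script) can't take down the slideshow or the buttons.
function initFeatures(features) {
  const status = {};
  for (const [name, init] of Object.entries(features)) {
    try {
      init();
      status[name] = 'ok';
    } catch (err) {
      status[name] = 'failed: ' + err.message; // record it and move on
    }
  }
  return status;
}

const status = initFeatures({
  analytics: () => { throw new Error('tracker blocked'); },
  slideshow: () => { /* wire up slideshow controls */ },
  buttons:   () => { /* attach click handlers */ },
});
// status.analytics records the failure; slideshow and buttons still ran.
```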
The truth is almost nobody cares about JS being disabled anymore. Even if today you think that's an overstatement (I don't think so anymore), in a few more years it won't be, even for you. Just 2 years ago it was like "What if a customer doesn't have JS enabled? We don't want to lose him just over that! Let's support both versions." Today it is already more like "No JS? Weird, lol. Well, it's his own problem anyway, no reason to spend time & money on that in 2014."
When, for some reason, I was forced to use lynx, which I don't normally do, I was simply amazed by how smooth browsing was. It doesn't feel like the usual "internet" anymore, more like navigating your project with a text editor on the local filesystem. But unfortunately, once you go further than browsing the ArchWiki, it becomes almost unusable, because so few sites are static (in the sense of "no JS") anymore.
W3m-js is/was a thing. Also, w3m can apparently draw images inline via xterm, though I wasn't able to get this working in the ~10 minutes I tried on OS X Yosemite with X11 and xterm, using the w3m from Homebrew. uzbl is OK too, if you just want a minimalistic WebKit browser with vim keybindings.
If you want vim keybindings in your browser you can just use Vimperator or Pentadactyl; that's not the point here. The web without pictures/JS on today's connections is almost as smooth as navigating your local file system, while "normal" browsing isn't, and you (well, I) don't even notice that until you eventually try JS-free browsing from lynx or something like it. That is, it would be that smooth, if not for the fact that the JS-free web is already an endangered species (a "Red Book animal").
It probably depends on the exact type of sites you normally visit; I default to JS off but the majority of sites I visit are "pre-Web 2.0" informational types which are perfectly readable without JS. As you notice, it's mostly the newer sites which are problematic.
Web developers pick the tools that make their lives easier (as you'd expect), but that doesn't always mean that users get the best experience.
...which I think is both selfish and somewhat ironic since web developers are almost certainly users too.
I really dislike how a lot of multi-page navigation (e.g. getting results 51-100) is now commonly done with AJAX. It's not really faster than loading a new page, but it breaks the back button. "Oh, you clicked on the 287th link and hit back? Here are results 1-50." So annoying.
The pushState API has been available for years. This isn't an inherent limitation of single page apps. Even where the pushState API isn't available (older versions of IE), the anchor tag can be used to maintain browser history so the back button doesn't break. https://developer.mozilla.org/en-US/docs/Web/Guide/API/DOM/M...
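A rough sketch of AJAX pagination done that way; `loadResults`, the URL scheme, and the explicit window parameter are stand-ins for illustration, not any particular site's code.

```javascript
// Keep the back button working for AJAX pagination: record each page in
// the session history via pushState, and re-render on popstate. Takes the
// window object and a render callback so the wiring is explicit.
function wirePagination(win, loadResults) {
  win.addEventListener('popstate', (event) => {
    // Back/forward pressed: re-render whichever page this entry recorded.
    loadResults(event.state ? event.state.page : 1);
  });
  return function goToPage(n) {
    loadResults(n); // fetch + render results for page n
    win.history.pushState({ page: n }, '', '?page=' + n);
  };
}

// In the page: const goToPage = wirePagination(window, renderResultsPage);
```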
But here's the deal: now the web developer has to manually do something that used to be intrinsic to the way the web worked. Really feels like a step backwards to me.
What's interesting is that in the past, all the Real web devs knew that frames suck because they break navigation. Anyone who showed their framed site in a community of cool web devs would be laughed at.
Now it's cool to break the back button (and links and everything) and make me watch spinners. It's as if the text and thumbnails they're serving today somehow take two orders of magnitude more bandwidth to load...
Devs and designers are now (more so than ever) different sets of people, and designers are now calling the shots in most places.
I'm a web dev, I know that ajax loading sucks, that without enough budget to do it right (and we never have enough budget) UX ends up broken, messing with scrolling sucks, page-based is best, don't make an app if it should actually be a site, etc.
Do I get any say? Not really. What do I end up building? Magic-scrolling ajax-loading apps, and if I have any budget left at the end of the job it goes on tweaking typography (i.e. designer-visible stuff) rather than fixing back buttons.
Which is to say, devs still know that this stuff sucks, but we're no longer in charge of the relevant decisions.
DISCOURSE I AM LOOKING AT YOU. In a real forum I'd open pages 1-n in tabs and read them on PT. On discourse I have to open the same page N times and scroll to a spread of points down the page.
Why does that make you angry? Sometimes, reimplementing how loading works can make your website more performant, for example. If JavaScript gives developers the power to make their websites better, why shouldn't they take advantage of that power?
It makes me angry because it's unnecessary, adds complexity, and requires me to allow whatever website to execute code on my computer. I don't want that code, I just want to read the page.
It makes me angry because it is a gross abuse of the web platform. The web is not for applications, it's for documents. I would be just as angry if I received a Word document that contained scripts that it needed to run to display the document (actually I would be angry if I received any Word document).
I don't want 7 tracking scripts, 4 advertisement scripts, jQuery, Angular.js, and whatever other junk modern web developers include on their "websites". I just want the content, and that's what HTML is for. You don't need to include any JavaScript programs to show me formatted text, which is what I visited the page for. I didn't visit to see fancy scrolling effects, to have my every interaction with the document tracked, or to see nothing at all if I don't run foreign programs. I just want the content. Just give me a [motherfucking website](http://motherfuckingwebsite.com/).
> It makes me angry because it is a gross abuse of the web platform. The web is not for applications, it's for documents. I would be just as angry if I received a Word document that contained scripts that it needed to run to display the document (actually I would be angry if I received any Word document).
That ship has already sailed, Pam. There are two webs now: the document-centric, platform-agnostic, user-controlled presentation of content as envisioned by Tim Berners-Lee at CERN, and the over-specified development platform that is HTML5, that ultimately originated in Win98's Active Desktop "push technology".
The latter has gained worldwide adoption because developers and industries couldn't agree on building a standard software repository that was distributed and allowed independent re-implementations of the running engine. Java tried to be that standard, but didn't have a convenient way to deliver software to end users. App stores are lately becoming a close second, but, being walled gardens, they'll never fill that role in full.
It's still useful to think of "the two webs" as different purposes for the HTML5 technology; in particular, it's a good question to ask yourself before you start a new website, to decide which of the two models you want to support. Asking that all websites be coded assuming the "web of documents" view is not realistic anymore, though.
I might agree with you for the minority of things that are literally documents. But that's not what most of the pages I visit are. Twitter is a stream of little posts and real-time notifications. My webmail client isn't a document, it's a browser for documents. Lots of sites let me add my own content, and it's a painful kludge to re-render the static HTML for every page every time anyone makes a change. Let that data get pulled out of a database and sent to the application running in my browser. That's a much better fit for what's actually happening with the content.
I have nothing against applications. I just don't want applications inside my document browsing application.
Some uses of client-side scripting on the web are useful and necessary. When that is the case, make a real application instead of abusing a platform for publishing documents. Web app developers are just making things worse for everyone.
When client-side scripting is not necessary, web developers use it anyway for some reason. Blogger is a great example of this: a blog post is definitely a document, it doesn't need any client-side scripting, yet Blogger blogs just show an empty page if you visit them with JS disabled. Why would they do that?! It's almost as if they want Blogger to be as inaccessible as possible.
>My webmail client isn't a document, it's a browser for documents.
That's exactly why it shouldn't be on the web! Abusing the web to make it into an application platform is like forcing a square into a circle-shaped hole. The web doesn't have to be everything to everyone, just let it be a platform for documents.
Not every page has to be a web app - the Blogger thing annoys me too. But if it's an app you're making, the web browser is the biggest deployed platform and therefore an appealing target. http://xkcd.com/1367/
> Twitter is a stream of little posts and real-time notifications.
The stream part is easy enough, with pagination. I have no objection to hiding the pages with JavaScript, making any page 'bottomless.' Real-time notification, obviously, can't work without some sort of execution, but one could simply show notifications on the next page viewed (or calculate them when sending that page, or whatever).
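That "bottomless but still paginated" approach can be sketched as progressive enhancement over a real next-page link; the fetch/append callbacks here are hypothetical stand-ins.

```javascript
// The HTML contains a real <a href="?page=2">next</a> link, so pagination
// works without JS. With JS, we intercept the click and append the next
// page's results inline instead of navigating away.
function enhanceNextLink(link, fetchPage, appendResults) {
  link.addEventListener('click', (ev) => {
    ev.preventDefault();                      // stay on this page...
    fetchPage(link.href).then(appendResults); // ...and append the results
  });
}
```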
> My webmail client isn't a document, it's a browser for documents.
Good thing that you're using a browser…
I'm not opposed to using JavaScript to speed things up or enhance them, but using it to deliberately break the Web is just wrong.
If they haven't benchmarked their site on my computer, they don't know whether it's making their websites more performant. But years of experience with JS disabled by default (and only enabled when necessary) make it very clear to me that using JS, at least on my system, is almost never a way to make any site more performant. On the contrary, it increases load times, CPU usage, and memory usage. It might also force me to use bloated, crappy browsers like Firefox or Chrome where I could otherwise use something like w3m or links, which will display the page in a fraction of the RAM, much faster.
I wouldn't be disabling JS if it weren't such a pain in the ass. Or maybe I still would, just for security.
Plenty of sites do use client-side instrumentation, and so they do indeed benchmark their site on your machine, and then make decisions based on the results of those benchmarks (likely aggregated with many other benchmarks from many other users). Yes, sometimes JavaScript is used poorly in ways that make sites bloated and slow, but plenty of sites use it properly in ways that only make their sites better (collecting the appropriate data to confirm that what they are doing is making their site better).
People talk about these sites every once in a while. But my experience is that enabling JS makes things worse across the board. If there are sites that would perform better with it, they are very much in the minority. And in my experience, "old web" sites without JS loaders and such are almost always much faster and easier on the resources.
Even if some sites deploy instrumentation, it doesn't mean they're necessarily lifting a finger to make it faster for me.
I do not find adding another vector for problems, including malicious ones in the face of persistently broken sandboxes, to be "better".
If it's a "page" I am going to "read", I don't feel the need to enable arbitrary execution of whatever gets stuffed down the pipe, beyond rendering the more restrictively defined HTML and some hopefully well-debugged -- although even with them problems continue to crop up -- image formats.
It's like a biological viral infection. Hygiene can keep it out, but once it's inside, you may have a very persistent and possibly quite debilitating problem.
I don't want to be disconnected from the world. Neither, however, do I want to engage in, um... "unprotected casual browsing".
P.S. The next version might be a local proxy (well, usually local), which would improve browser performance, cross-compatibility with other browsers and across OSes, and all around fit better, imho (my view being that a browser's only job is to render web content).