WWW--An Idea Whose Time Has Come (and gone)

When you hear most people speak of “the internet,” they mean the World Wide Web: the online world as accessed through their browsers.

I thought the web was clever when it first made its appearance, and I still think it does a good job at doing what it was designed to do: display lightly formatted static text pages that contain links to any of an unlimited number of other possible pages. Going beyond that, I’ll grant that it also functions well when used to display pages with images and more complicated formatting (tables, frames, columns, image placement, etc).

It is, however, klunky, cumbersome, crash-prone, awkward, and annoying as hell in its increasingly pervasive manifestation as Web-On-Steroids, attempting to be the be-all and end-all of network communication. And the only reason that Flash, QuickTime, Java, cascading style sheets, shopping-cart database+secure-sockets systems, and everything else digital and networkable has ended up precariously perched on top of the hypertext transfer protocol is that that’s what caught on first, so everyone and their poodle has a browser, i.e., least common denominator.

And the reason the web and its browsers ended up being the least common denominator was bandwidth: in the days when the 14.4K modem ruled supreme, you could do a lot with a system designed to download plain-text 7-bit ASCII characters and interpret a subset of them as formatting codes.

But the web in its contemporary complicated incarnation is a high-bandwidth Frankenstein monster of cobbled-together hacks and plug-ins hamstrung by its inherent limitations, top-heavy from the random add-ons, and crash-prone and inelegant from trying to reproduce the entire range of possible computing experience within the confines of an environment built for static text browsing.

I think this is really the wrong way to go. Instead of either prompting the download of a document followed by the launch of a “helper” application on the client computer to open it or using a plug-in architecture to open and display the document in the browser window along with a string-and-duct-tape rendition of some other program’s controls and choices redone in Java and variants, a web browser should, upon receiving an URL to, let’s say –

MSWD://boards.straightdope.com/sdmb/faq.doc

– pass the URL back to the operating system, which would be configured so as to have a designated program for the handling of that type of document as URL, in this case the default being Microsoft Word, which would then launch, open the document over the internet in a Word window, and you’d be in Word, not a browser-substitute for its word-processor functionality.
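What that OS-level handoff might look like can be sketched in a few lines of Python. The scheme table and application paths below are invented for illustration; no real OS registry or control panel works exactly this way:

```python
from urllib.parse import urlparse

# Hypothetical OS-level table mapping URL schemes to local applications.
# The "mswd" scheme and all the application paths are assumptions made up
# for this sketch, not real registrations.
SCHEME_HANDLERS = {
    "mswd": "/usr/bin/msword",   # Microsoft Word (path is an assumption)
    "xls":  "/usr/bin/excel",
    "http": "/usr/bin/browser",  # fall back to the browser for plain pages
}

def dispatch(url: str) -> list[str]:
    """Return the launch command the OS would build for this URL.

    The browser hands the URL back to the OS; the OS looks up the
    scheme in its control-panel table and picks the designated program.
    """
    scheme = urlparse(url).scheme.lower()
    handler = SCHEME_HANDLERS.get(scheme)
    if handler is None:
        raise ValueError(f"no handler registered for scheme {scheme!r}")
    return [handler, url]

# The OS would then run the command, e.g. via subprocess.Popen(...),
# and you'd be in Word itself, not a browser substitute for it.
```

The point of keeping the table in one OS-level place, rather than per-browser, is exactly the control-panel configuration the post describes.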

Web programmers would not have to reinvent every non-web wheel, nor would they, if programming for the web in the first place, be constantly running into the wall of limitations and kludges.

The ease of programming a database in FileMaker, a presentation in PowerPoint, a spreadsheet in Excel; or drawing pictures in Photoshop, writing letters (or posts to the Straight Dope Message Board, thank you very much) in Word…the security of accessing your bank account through a program that is not ‘stateless’ like the web and therefore is not balancing the complexity of database and encryption and mathematical functions on top of a sequence of raw no-context ‘get’ and ‘put’ interactions between client and server…no contest as far as I’m concerned, even though it does mean that, yes, you have to acquire software other than Internet Explorer or Netscape to do more than browse static pages over the internet.

Who agrees, and who wishes to argue instead for the supremacy of the ‘everthing-in-a-browser-window’ paradigm of networked computing?

                                        Bill Gates
…;)…

The first time I wrote a java applet, a simple text editor that stored files on my website, I had an epiphany: I suddenly understood what scared Bill Gates so much about Netscape, Java, and the Internet. Those three things had the potential to make OSes irrelevant. If that combo caught on, the OS would be nothing more than a collection of device drivers; all the important work would be done in a browser window, over the network. It threatened to turn PCs into dumb terminals again.

I think that your rage against the web is misdirected. The kludginess, the Frankenstein plug-ins, the attempts to shoehorn the whole world, icing and all, into a browser: these are the fault of companies born of the dot-com bubble. First, the browsers available have never been particularly good, stable, or mature pieces of software. Netscape and IE are only now becoming reliable; same with Java. Flash and Shockwave have always been about the attempt by companies to turn the web into the interactive TV that media execs have been dreaming and talking about for the last twenty years.

As for Word and Excel docs, these do open their related applications, though that app may be embedded in the browser.

Fundamentally, I think the premise of “everything in a browser” is sound; I think it’s just been miserably executed all along.

I don’t really object to everything appearing TO THE USER to be executing within a browser window, if it can be true that the regular full-fledged version of the program (not some chopped-down plug-in version) is doing the real work, and if the programmer doesn’t have to either avoid a substantial subset of what could be utilized in a local non-browser version or else rewrite the entire functionality a second time for the web after programming the local-desktop version.

As you say, it’s the quality of the existing mish-mosh that annoys me.

First I went to Google to find a test file. Then I went to WinWord and selected “open file”. When it asked me what folder I wanted to open from, I selected the “entire network”. I then pasted in the URL. I ended up with the word file opened from the web into the Word program. There’s probably an even simpler way of doing this. Here’s the test file I used:
http://www.stc.org/Word_Files/RegCh2001.doc

Is there something wrong with this? Do you want the web browser to do this automatically, instead of having to do it yourself?
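The manual steps described above (fetch the file, then hand it to the word processor) could be automated along these lines; this is a sketch, where the opener commands (`xdg-open` on Linux, `start` on Windows) and the temp-directory choice are my assumptions, and `fetch=False` lets you dry-run it without touching the network:

```python
import os
import tempfile
import urllib.request
from urllib.parse import urlparse

def open_remote_doc(url: str, fetch: bool = True) -> tuple[str, list[str]]:
    """Download a document and build the command that hands it to the
    platform's default handler for that file type.

    With fetch=False, skip the network and just report where the file
    would be saved and how it would be opened.
    """
    filename = os.path.basename(urlparse(url).path) or "download"
    local_path = os.path.join(tempfile.gettempdir(), filename)
    if fetch:
        urllib.request.urlretrieve(url, local_path)
    if os.name == "nt":
        # Windows: "start" defers to the registered application for .doc
        command = ["cmd", "/c", "start", "", local_path]
    else:
        # Linux desktops; macOS would use "open" instead
        command = ["xdg-open", local_path]
    return local_path, command
```

A browser doing this automatically on click is essentially the “helper application” model the OP describes, minus the manual copy-and-paste.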

Standards.

There is something to be said for a universal standard, even a less-than-ideal one. Without the browser we would be stuck in a world of incompatibility, redundancy (and not in a good way), and frustration. Look at how much grief the relatively minor differences between browsers have caused people, and multiply that infinitely. Things would ultimately be more complicated and more expensive.

AHunter3:

Good idea. Oh don’t mind me. I’m just putting a little something in the macro of these attractive documents. Hmm, what should it be? Melissa? Code Red? Kakworm? Oh, here we go: I Love You. Here you go AHunter3. A download ready for you to autoload.

AHunter3 wrote:

Pretty much the same litany of objections was levelled at TCP/IP over 10 years ago. And, like the OP, many predicted that TCP/IP’s time in the sun would soon be over with. Yet TCP/IP is still with us.

If there was one that I would REALLY like to see changed about http and browsers, it would be this:

There should be an extension added to HTTP that allows for a persistent, connect-and-hold connection.

This would be of enormous value when writing sites that require logins and/or transactions. In particular, it would make e-commerce/e-business sites much less clunky and more reliable. It would also make developing them much less expensive. Developers are constantly using workarounds to maintain state between page hits.
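The workaround being described usually amounts to a session token, carried in a cookie or a hidden form field, keying into server-side state. A minimal sketch, where the function name and the in-memory session table are invented for illustration:

```python
import secrets

# Server-side state, keyed by a token the client echoes back on every hit.
# To HTTP itself, each request is an isolated, context-free 'get'/'put';
# this table is what stitches them into one continuous session.
SESSIONS: dict[str, dict] = {}

def handle_request(cookie_token, form: dict) -> str:
    """Process one page hit, rebuilding the user's state each time."""
    if cookie_token is None or cookie_token not in SESSIONS:
        cookie_token = secrets.token_hex(16)
        SESSIONS[cookie_token] = {"cart": []}
    state = SESSIONS[cookie_token]
    if "add_item" in form:
        state["cart"].append(form["add_item"])
    return cookie_token  # client must send this back on the next hit

# Two hits that look independent to HTTP, stitched together by the token:
token = handle_request(None, {"add_item": "widget"})
handle_request(token, {"add_item": "gadget"})
```

Every framework of the era reimplemented some version of this table, which is exactly the kind of repeated workaround the post is complaining about.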

Yeah, scotth, that’s one of the ones that drives me up the wall. In my databases, I can set one field to be formatted as a context-dependent drop-down value list, where the values shown depend on values entered into other fields.

As it stands now, the only way I can obtain the second value list is to force a submit and a fresh page return after the user enters values into the first field.

Not to mention related value portals.
To answer some folks’ earlier questions, yes, I’d like it if you could enter a much wider range of URL handlers into an OS-level Control Panel and could code a much wider range of URL protocols. I guess “file” could work if coded as a hybrid mix of IP address and filename-plus-extension, rather than a separate string xxxx:// for each file type, but the browser should hand off the net path and the file type to the OS, which would consult the Control Panel settings to determine which program on your computer should be used to open the file over the internet when you click on the link.

Viruses? Well, yeah, I suppose, although good antivirus software ought to check the incoming bytes and nab them on the fly, aside from which viral payloads can be executed to infect your computer from within the current web paradigm.

I thought HTTP/1.1 supported holding the connection open.
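It does: persistent connections are the default in HTTP/1.1. A self-contained demonstration using only the Python standard library and a throwaway local server (the handler and variable names here are mine, not from any post); two requests travel over the same connection object:

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class EchoHandler(BaseHTTPRequestHandler):
    # HTTP/1.1 enables keep-alive in the stdlib server, provided we
    # send an accurate Content-Length so the client knows where the
    # body ends without the server closing the socket.
    protocol_version = "HTTP/1.1"

    def do_GET(self):
        body = self.path.encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

server = ThreadingHTTPServer(("127.0.0.1", 0), EchoHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# One HTTPConnection, two sequential requests over the held-open socket.
conn = http.client.HTTPConnection("127.0.0.1", server.server_address[1])
replies = []
for path in ("/one", "/two"):
    conn.request("GET", path)
    replies.append(conn.getresponse().read().decode())
conn.close()
server.shutdown()
```

What HTTP/1.1 keep-alive does not give you, though, is the session *state* the earlier posts want: the connection persists, but each request is still interpreted without memory of the last one.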