As I understand it, the world wide web has standards definitions that are written by organisations such as the W3C. Why don’t all browsers comply with these standards? Pages frequently render differently in Chrome than in Firefox, and both are again different from IE; in some cases pages refuse to load at all in one browser.
Presumably the standards are well defined so compliance with rendering rules should be black and white. Are browser developers choosing not to comply or are there difficulties in developing the rendering engines that I don’t see?
The history of the Acid tests would seem to indicate that browsers become more compliant over time. Is this because it takes so long to alter the engines, or because they are giving web designers time to update their sites?
It seems to me that the web would be a simpler place if all browsers rendered in the same way, and consequently web developers didn’t have to worry about how their sites display in 10 different browsers. One set of rules, followed by all: is that hard?
I think it’s both the browser developers and the web designers. A browser developer does NOT want to be generic, he wants his product to stand out. So he gives his browser some very cool abilities. Then the web designer uses those features so that he can impress the corporate suits who hired him.
If the web designers really cared about making it a nice experience for you and me, they wouldn’t blast sound at us as soon as we enter the web site. And they’d bring us straight to the useful content, without making us watch a stupid video whose “close” button is impossible to find.
No, they are not that detailed & explicit.
So when they got to implementing the specific details, browser programmers picked one way. Or, more likely, they didn’t even think that there might be other ways to do this, and just left it the way that seemed ‘obvious’ to them.
These are usually very minor details, but they do result in differences. As an example, for the border around a table, the standards defined how to specify if it should appear at all, and how to specify the width of it. But nothing was mentioned about the color of the border. So most browsers (like Netscape & Firefox) just displayed it in the background color for the page. They could have chosen to display it in the foreground color, or the text color, or whatever, since it wasn’t specified in the standards. But IE programmers decided to add a bordercolor attribute to allow page authors to specify the color. That worked only in IE; other browsers ignored it. So that causes a (minor) difference in the appearance of the page on different browsers.
(Note that much of this is going away with CSS styles. They allow many more details to be specified in the stylesheet, and all browsers should display things as specified. There are far fewer unspecified items where the browser programmers get to pick a way to display something.)
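Roughly what that looks like in markup; the first table uses the IE-only attribute described above, the second uses the standard CSS that every browser is supposed to honor:

[code]
<!-- IE-only extension: other browsers of the day just ignored bordercolor -->
<table border="1" bordercolor="red">
  <tr><td>cell</td></tr>
</table>

<!-- The CSS way: border-color (or the border shorthand) is in the standard,
     so any browser with CSS support should render the same red border -->
<table style="border: 1px solid red;">
  <tr><td>cell</td></tr>
</table>
[/code]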
One of the huge problems is that the W3C specs define how properly coded HTML should be displayed. There are a LOT of miscoded pages out there. Downright broken pages. Browser software generally tries to make sense of these bad pages, and all of the browsers do it differently. Watch what happens, for example, when you leave out or misplace <table>, <tr> and <td> tags and their corresponding closing tags. I can almost guarantee different results in IE, Firefox, Chrome, and Opera.
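A deliberately broken snippet, just to illustrate the kind of thing error recovery has to guess at:

[code]
<!-- A <td> outside any row, plus missing </td> and </tr> closing tags:
     each browser's error recovery has to guess what the author meant -->
<table>
  <td>orphan cell, no row around it
  <tr><td>first cell
  <td>second cell
  <tr><td>second row</td>
</table>
[/code]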
Another issue is that old versions of browsers tend to stay around for a long time. When, for example, Microsoft misinterpreted the box model (how margin, border, and padding work), it meant years of having to kludge together CSS to try and make it work on IE and everything else.
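One common kludge of that era (not the only one) was serving IE its own stylesheet through conditional comments, an IE-only feature that every other browser treats as an ordinary comment; the filenames here are made up:

[code]
<!-- Hypothetical filenames. Only IE versions before 8 read the second
     stylesheet; every other browser treats the [if]...[endif] block
     as a plain HTML comment and skips it. -->
<link rel="stylesheet" href="standard.css">
<!--[if lt IE 8]>
  <link rel="stylesheet" href="ie-box-model-fixes.css">
<![endif]-->
[/code]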
Another reason is that even though many of these things have been specified pretty well these days (although not 100%, as t-bonham points out), back when the first browsers were created, things were much less clearly specified, so there were even more instances where browsers could choose to render things one way or another. And HTML was much less full-featured, so browsers just threw in nonstandard features so they’d be able to do cooler things on web pages. Some of these features have become standards, but as they were completely unspecified before, everyone implemented them differently.
Furthermore, early browser implementors decided to be helpful by trying to “do the right thing” with malformed HTML rather than just displaying an error. The ways in which they “do the right thing” aren’t standardized, since if they were to adhere strictly to the standards, they would just show an error.
So those are two reasons early browsers differed. And because new versions of browsers never want to be the ones who broke some popular site, the easiest thing for them to do was to carry all these ad-hoc behaviors forward for backward-compatibility reasons.
So, despite all these great standards, the world of web development still involves dealing with a lot of browser compatibility issues.
The problem gets smaller and smaller all the time, though. For one thing, the newer standards have provided a way for HTML pages to specify stuff like “I’m a fully conformant HTML 4.01 document, so please disable all that broken IE5 rendering compatibility, thank you very much.” And every new browser version tends to get even better about adhering to the standards. Acidtests.org provides some tests that show how well your browser adheres to certain standards, and how well a prerelease version of a browser scores on these tests is always news on sites like slashdot, so there’s some pressure to improve. And finally, we drop support for old browsers eventually. Google just made big news by announcing that they were dropping support for IE6 on many of their sites (e.g. GMail). As old browser versions fall off, new versions can stop trying to faithfully reproduce their old broken behaviors out of a desire for compatibility.
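That “please use standards mode” switch is just the doctype at the top of the page; leave it off (or use an ancient one) and browsers deliberately fall back to their old quirks-mode rendering. For example:

[code]
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN"
    "http://www.w3.org/TR/html4/strict.dtd">
<html>
  <head><title>Standards-mode page</title></head>
  <!-- With that doctype first in the file, browsers use standards-mode
       rendering; without it they fall back to quirks mode, which keeps
       old behaviors like IE5's box model alive for the sake of old pages -->
  <body><p>Rendered by the modern rules, not the legacy quirks.</p></body>
</html>
[/code]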
Remember that “making the page appear the same” on all browsers, or for all users was specifically NOT intended when the web was created.
The intent was to communicate information, not appearance. There was no intent to make things look the same – that would have been extremely difficult, given the variety of hardware being used at the time.
And it’s a bad idea because such an attempt to assert control over the user’s screen is not workable. People have their own computers set the way they want, for their own reasons. Attempts by web ‘designers’ to overrule them are wrong.
For example, a friend of mine has very limited eyesight, and has set her computer to work for her. She is constantly running into webpages where some designer tries to forcibly make her screen show it the way he wants, usually rendering it unreadable for her. Smart designers do not do this – they worry more about communicating their info in all browsers, rather than pretty pictures.
What do you see? Errors. Most sites are not even close to being compliant. If browsers didn’t try to correct what they think are errors, you’d have a mess. Or a lot of job opportunities for HTML experts.
There’s also the way standards are defined. For instance, let’s say you have a box with a border.
And you say the text must begin an inch after the border ends.
But let’s say you have a thick border. Where does that inch get measured from: the inside of the border or the outside?
At one time IE and Firefox measured it differently: one went from the inside of the border, the other from the outside. This usually wasn’t a problem unless it was a tightly laid-out site with a lot of data.
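Modern CSS actually lets you pick either interpretation, which is a good illustration of how real that ambiguity was. A small sketch (class names made up):

[code]
<style>
  /* Standard (W3C) box model: the 200px is just the content,
     so the box takes up 200 + 2*20 + 2*5 = 250px on screen */
  .spec-box { box-sizing: content-box; width: 200px; padding: 20px; border: 5px solid black; }

  /* The old IE interpretation: the 200px includes padding and
     border, so the content area shrinks to 150px */
  .ie-box   { box-sizing: border-box;  width: 200px; padding: 20px; border: 5px solid black; }
</style>
<div class="spec-box">200px of content, plus padding and border</div>
<div class="ie-box">200px total, content squeezed to 150px</div>
[/code]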
Also, programs like Dreamweaver don’t always use the best or most efficient HTML, and some HTML editors used proprietary tags. Though this is becoming less of an issue as time goes by.
The standard document no matter how verbose or carefully written can’t specify all possible behaviors. There is always ambiguity.
For example:
[ul]
[li]broken or malformed HTML[/li]
[li]JavaScript dynamically modifying the HTML markup[/li]
[li]plugins (which are neither HTML nor CSS) interacting with the page[/li]
[/ul]
It is unrealistic to expect a document to predict, itemize and therefore specify all possible interactions and side effects.
The closest thing in real life you have to a “standard” is to reverse-engineer the behavior of the market leading browser (happens to be IE at the moment but that can change.)
Actually, IE is the one web browser that is not looked to for standards. The reason why IE8 is much more nearly compatible with Firefox is not because Firefox changed. In fact, Mozilla is really, really dedicated to following the standards, so much that non-standard requests are regularly ignored on bug pages.
Ultimately, I really don’t see what’s in it for them to be different. It’s not like they’re actually trying to sell anything. Every popular web browser is free. If everybody in the world used IE, how would that help Microsoft?
Isn’t this what reference implementations are for? A browser implementor doesn’t know how to implement a certain feature correctly? Run it through the reference implementation and see what comes out. Does the W3C even provide a reference implementation?
I agree that MS can’t really be the source of standards like this - I think they would like to have taken ownership (my memory wants me to think that they did in fact try to muscle into a position of control at some point, but I’m not sure whether to trust it), but I don’t think it would be a very good thing at all. Having an independent body define standards is good.
Mozilla may not be as bad an offender as MS for straying from standards, but it still happens - one example that springs to mind is the CSS property -moz-border-radius - which pre-empted the anticipated border-radius property in CSS3 (and by jumping the gun, ended up implementing it differently from CSS3).
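The practical upshot was years of writing the same thing two or three times, once per vendor prefix and once for the eventual standard property:

[code]
<style>
  .rounded {
    -moz-border-radius: 10px;    /* the old Gecko-only property */
    -webkit-border-radius: 10px; /* WebKit's prefixed equivalent */
    border-radius: 10px;         /* the CSS3 standard property */
  }
</style>
<div class="rounded">One rounded-corner effect, three declarations</div>
[/code]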
I’m not saying Microsoft should be looked to for standards. I’m saying that because standards are ambiguous, it’s what often happens as a side effect.
In theory, but who’s going to build the reference web browser? The people on the standards body aren’t going to build a working browser from scratch. It takes Microsoft, Mozilla, Apple (Safari), and Opera to build browsers with that rich functionality. Here’s the key: it’s the act of building a real working browser that clarifies and completes the standard. Typing out words on a standards document is not enough – it’s always an incomplete standard. There are several examples where the specifications in the standard were not discovered to be incomplete and ambiguous UNTIL the 4 different browsers were built that interpreted the standards differently. All 4 different interpretations could be seen as “valid” because the specification was unclear on a subtle unforeseen behavior.
Now you have 4 browsers “in the wild” used by millions rendering pages slightly differently. The crazy part is that they are all “right.”
Now you have to go back and update the standard based on the chaos that was observed. Understandably, you often go with a “de facto” behavior from one of the 4 browsers. Perhaps for a particular rendering rule, Microsoft’s interpretation makes the most sense. But for another rendering rule, the W3C would choose Mozilla’s handling of it. In the meantime, people think the big 4 browsers were not following standards when in reality, they were (or were trying to).
Defining standards does not create a working browser with real behavior. Many “standards” work backwards from real world browsers. For example, Microsoft implemented a function called XMLHttpRequest. Mozilla later followed this convention. It is now in the W3C standard. There are also examples of Netscape/Firefox enhancements that were later adopted by Microsoft and eventually adopted by W3C.
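For what it’s worth, the API that started life as a Microsoft extension and a Mozilla copy now looks the same in every browser; a minimal sketch (the URL is made up):

[code]
<script>
  // XMLHttpRequest started as an IE extension (originally via ActiveX),
  // was copied by Mozilla as a native object, and only later was
  // written down in a W3C specification
  var xhr = new XMLHttpRequest();
  xhr.open("GET", "/some/made-up/resource.json", true); // true = asynchronous
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
      alert(xhr.responseText); // do something with the response text
    }
  };
  xhr.send();
</script>
[/code]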
Browsers are complicated and evolving. HTML5, 3D, etc. It is unrealistic to expect that the existence of a standards body will enable all browsers to render all current and future web pages exactly the same. A standards organization cannot anticipate all future innovations in web browser technologies.
Because Microsoft didn’t care about what an independent board decided were the standards and did their own thing. Just as they’ve done with damn near everything (oh, docx, why?).
I can write compliant code and it renders the same in Gecko (rendering engine for Firefox, Camino), Webkit (Safari, Chrome) and IE8’s Trident. IE6 and IE7 are the source of the issues.
Basically, it’s like the biggest automaker in the world decided to use gasoline for all its vehicles after an independent organization decided diesel was more efficient so all the other automakers decided to use diesel. Now the automaker has decided to move to diesel like everyone else but since so many of their vehicles are still on the road gas stations can’t get rid of their gasoline pumps yet.
This bears repeating, several times. The whole ethos of the WWW is (or was, and still ought to be) that the person viewing the content can define how it appears to them. So someone with a crappy slow connection and old hardware can view text only, someone partially sighted can view it in a large font or via a text-to-speech reader, and so on.
Making a page appear the same on every browser defeats the object.
Sadly, a lot of web designers seem to think they are print-media designers, and try to force people to use a specific font in a specific size, in a window that has fixed proportions. :rolleyes:
These designers need to pay attention to text-only browsing – there’s this company that reads everything on the net (text only), and indexes it – called Google, I believe. If your page isn’t readable in text-only mode, Google (and Yahoo, and the others) can’t index it. That might hurt your traffic a bit.
Many of them are, or they were previously, until the print media they were working for went bankrupt (which they blamed on competition from the online media).