But loading images rarely changes the order of things on a page. It’s not like text, which wraps to the next line; most of the time, images only make pages wider or longer, and their neighbors don’t change.
I do this for a living and assure you that images can completely change the shape of a page, causing everything to reflow.
Imagine a news site. Consider a box with rounded corners that makes up a call-out box embedded in an article - a very common feature of many sites. Those rounded corners are created with GIFs that are probably something like 5x5. There are two at the top and two at the bottom of a table or DIV. If the doofus coding the HTML doesn’t specify the dimensions in the IMG tags for the corners, then the browser will assign a default image size - probably something like 50x50.
Now imagine that all the other artefacts all over the page that make it look nice - transparent spacer GIFs, more rounded corners, pictures relevant to the article - haven’t got their dimensions specified either. Imagine a relevant picture is 300x300 with no specified dimensions. The browser again will put in the default placeholder at 50x50. So (particularly on a slow connection) when the HTML loads in, every image will be set at 50x50 and the text will flow round it per these dimensions. If your callout box is a DIV or table that the page content flows round, then everything will reflow - inline images and text.
Now the browser actually reads the image files and has to reassign the correct dimensions as each image loads in. Multiple boxes jump from 50x50 to 5x5, or from 50x50 to 300x300. Everything has to be re-flowed and re-rendered - it’s an absolute mess.
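The fix is to declare the dimensions up front in the markup, so the browser can reserve the right amount of space before a single image file arrives. A minimal sketch (the filenames and sizes here are invented for illustration):

```html
<!-- Corner GIFs with dimensions declared: the browser reserves
     exactly 5x5 pixels for each one before the files download. -->
<img src="corner-tl.gif" width="5" height="5" alt="">
<img src="corner-tr.gif" width="5" height="5" alt="">

<!-- Article photo: 300x300 is reserved immediately, so the text
     flows round its final footprint from the very first render. -->
<img src="photo.jpg" width="300" height="300" alt="Article photo">
```

With WIDTH and HEIGHT on every IMG tag, nothing jumps when the files finish loading - the layout the browser draws first is the layout you end up with.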
Of course this is an extreme example, but unspecified image dimensions do indeed make a lot of stuff reflow and force the page to be re-rendered. It’s not as simple as you’re making out.
Perhaps. But I think a slight modification similar to the one you propose might work. Go ahead and let the link activate, but, since it always takes a little while for the page to download, delay changing the page for a brief time. Indicate that the link has been clicked, so you know to immediately scoot the mouse over and click on what you actually wanted. Load the new page in the background in the meantime.
Then again, I think most browsers already do that. I know I’ve successfully cancelled a link by clicking on another one before it has time to load. So maybe just my suggestion for preventing scrolling would be sufficient.
(IMO, very few web pages should horizontally scroll. But, if they must, then the same idea could work.)
Thanks for your detailed explanation. Maybe I should have a look at those script blockers then.
A well-made website will give the browser an accurate structure up front, and a well-made browser will then be able to accurately display the page throughout the loading process. From a technical perspective, this problem was solved over 15 years ago.
The problem is that not every browser is equally well-made and as for the web content itself? Not even close. Even these days, big-name professional sites still have atrocious HTML more often than not.
I open virtually all links in a new tab and use the tab bar as a to-read queue. This means I’m virtually never looking at a still-loading page. If I’m reading a news article split into multiple pages, I’ll try to find the “single page” or “print preview” option.
I used to love MSNBC’s website, but have given up precisely because of this. That site is like controlling an explosion…things just keep loading and bouncing the page down and up and down and up, and whenever you try to scroll down, you wind up hitting some link you didn’t want to hit. Then you get that stupid Pulse360 ad and, well - it’s just not worth the trouble to open that site anymore.