Stop browsers jumping when loading!

How many times have you clicked on the wrong link because, just before you click, the browser loads an image and jumps the page down? Wouldn’t it make much more sense to have the page stay fixed relative to your current view as it loads?

Does anyone else know what I’m talking about/agree with me? Anyone know how I can suggest this to Google and Mozilla?

I know what you’re talking about - and it’s really annoying - sometimes it makes you click on an ad by mistake, for example.

Not sure there’s any easy way around it though - unless the image size is explicitly stated in the page markup (and width/height attributes are optional in HTML/XHTML), the browser won’t know how big it is until it’s retrieved the image.
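
For instance (illustrative markup only - photo.jpg is made up):

    <!-- Dimensions declared: the browser reserves a 400x300 box
         immediately, and nothing below it jumps when the file arrives. -->
    <img src="photo.jpg" width="400" height="300" alt="a photo">

    <!-- No dimensions: the browser can't size the placeholder until
         the image itself has been downloaded, so the page re-flows. -->
    <img src="photo.jpg" alt="a photo">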

The browser could be made to wait until all the content is retrieved before rendering the page, but that would make the whole browsing experience seem really slow, and for pages where one little bit of content is on a server that takes forever to respond, the whole page would have to wait for it to time out.

I don’t know what the answer is, but I’m almost positive there’s a stupid name for it. But at 3:30 in the morning I can’t find it.

It has ruined many of my porn quests.

The only way I could see to do it would be to keep track of where the mouse is, and automatically scroll the page to keep it in line. But that only covers vertical jumps.

The way I think you could deal with horizontal jumps is to allow for input delay. It takes approximately 200 ms for you to even notice the jump, so delay making links active for that long after any jump. When you notice your click didn’t work, all you’d have to do is look to the left or right to see where it moved.
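
That could even be hacked together in page script nowadays - Chromium-based browsers expose layout jumps through the Layout Instability API. A rough sketch (the 200 ms figure is taken from the paragraph above, not a tuned value, and none of this existed when this thread started):

    // Sketch: swallow clicks for 200 ms after any layout jump.
    // "layout-shift" entries are Chromium-only (Layout Instability API).
    const QUIET_MS = 200; // reaction-time figure from the post above
    let jumpedAt = 0;

    new PerformanceObserver((list) => {
      for (const entry of list.getEntries()) {
        // Ignore shifts the user caused themselves (e.g. typing).
        if (!entry.hadRecentInput) jumpedAt = performance.now();
      }
    }).observe({ type: 'layout-shift', buffered: true });

    // Capture phase, so this runs before any link handler does.
    document.addEventListener('click', (e) => {
      if (performance.now() - jumpedAt < QUIET_MS) {
        e.preventDefault();
        e.stopPropagation();
      }
    }, true);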

Right now, the only real solution for the end user is to only use sites that declare their image sizes, and make sure you have image placeholders turned on in options. And even that won’t help with iframe and JavaScript jumps. This means it won’t help with the moronically stupid way vBulletin handles it: jumping you on first load, and then, even if you scroll, jumping you back. There is no reason to both jump you manually and also tell the browser to do it automatically.

I have a partial solution for a variety of on-line annoyances, but I think many users will not like it.

I always run my browser with JavaScript disabled, and enable it only on occasion when I know I will need it. (Like for paying bills on-line.)

This vastly reduces (but does not completely eliminate) a whole lot of excess activity in many web pages I view, yet only occasionally reduces any functionality that I actually care about.

Sometimes, when I have JS enabled and many pages open in many tabs, there will be so much JS running that my whole machine bogs down. Offing JS puts a stop to that too.

This doesn’t directly address the OP (which I sometimes have trouble with too), but it certainly does reduce a lot of shitty annoyances, and I think that may sometimes entail fewer re-flows as well.

Note also: if you right-click on most images, you get an option to block all future images from that same web site. (At least, in Firefox you do.) I have blocked a whole lot of ads that way, at least when they come from third-party places, and this has made a great improvement in my browsing experience too.

That sounds like something that would work in a technical sense, but would change the ‘feel’ of the browser in a way that made it appear unreliable.

It would be good, I suppose, if it was possible to perform file metadata queries via HTTP - so the browser could do that before retrieving the whole file.
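
HTTP actually has something close to this: a HEAD request returns just the headers - size in bytes, MIME type - without the body. What it can’t tell you is the pixel dimensions, which is what the placeholder really needs. A sketch (the URL is a stand-in):

    // Sketch: ask for a file's metadata without downloading the body.
    async function headInfo(url) {
      const res = await fetch(url, { method: 'HEAD' });
      return {
        bytes: res.headers.get('Content-Length'), // e.g. "48213"
        type: res.headers.get('Content-Type'),    // e.g. "image/jpeg"
        // Note: no pixel dimensions anywhere in the headers.
      };
    }

    // headInfo('https://example.com/photo.jpg').then(console.log);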

Is this a feature of FF or is it an add-on? I think it’s AdBlock, isn’t it?

Also check out NoScript, which is a Firefox add-on.

It blocks JavaScript from… probably loading, not just firing. I never looked into the details. You may choose to grant temporary or permanent privileges to a site as you wish. It’s creepy how some sites want to load JavaScript from 20 other entirely non-related sites.

AdBlock is good too. I run them both.

I have done this too.

I have another question, semi-related, that doesn’t merit its own thread.

Ever click on a link and get directed elsewhere? I mean, hovering over the link (or, ahem, picture) you can see where you’re supposed to be going, but you end up elsewhere. Sometimes I need to click two, three, eight, nine times to get where I’m supposed to be going.
Is there a name for that intentional misdirection?

Yes, I run them both and always have. That’s why I wasn’t sure which one was providing that particular option.

I love NoScript. It’s amazing to see how many things want to be loaded on the simplest site. I have gotten very stingy about granting the “allow all for this site” rights.

Glad to know other people experience this too. The thing is, the solution doesn’t seem that hard to program (rough sketch at the end of this post). What I’d do:
If user hasn’t scrolled, page can do whatever it wants.

If user has scrolled, browser is active window and cursor is on the page:
keep the cursor’s position relative to the page as it loads, moving the page as necessary. If the cursor was on “next page”, keep it there. If the cursor was on an image (even an icon or placeholder), keep it there. Even better if you maintain the position of the cursor relative to the icon.

If user has scrolled, but browser isn’t active, keep a certain portion of the window fixed relative to the page. Maybe the top left, probably the center.
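
Funnily enough, something close to this can be faked in page script today (browsers eventually shipped a native version of the idea, called scroll anchoring). A rough sketch - every choice here is an assumption about what “good enough” looks like:

    // Sketch of the "pin whatever's under the cursor" idea.
    let anchorEl = null;  // element the cursor was last over
    let anchorTop = 0;    // its distance from the top of the viewport

    document.addEventListener('mousemove', (e) => {
      anchorEl = document.elementFromPoint(e.clientX, e.clientY);
      if (anchorEl) anchorTop = anchorEl.getBoundingClientRect().top;
    });

    // If late-arriving content pushed the anchored element, scroll it back.
    function reanchor() {
      if (!anchorEl || !anchorEl.isConnected) return;
      const drift = anchorEl.getBoundingClientRect().top - anchorTop;
      if (drift !== 0) window.scrollBy(0, drift); // vertical jumps only
    }

    // DOM changes and images finishing loading are the usual culprits.
    // (An image finishing loading doesn't mutate the DOM, hence the
    // extra 'load' listener; capture phase because 'load' doesn't bubble.)
    new MutationObserver(reanchor)
      .observe(document.documentElement, { childList: true, subtree: true });
    document.addEventListener('load', reanchor, true);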

I think your solution is actually quite difficult to program and would lead to an awful user experience. The cure is worse than the disease.

I have done lots of browser-based web scraping, and in the general case the specific solution you describe wouldn’t work. The way websites are set up nowadays, there is no single clear “page fully loaded” event observed by either IE or Mozilla. Instead they fire multiple events indicating completion of loading of various parts of the page, and of course the browser (or any custom addition to it) cannot know which of those events is the final one.

Conceivably websites could be structured to contain an easily identifiable element that loads last, but then people who make websites don’t want to make scrapers’ lives easier. And that’s a good thing, because scrapers want to make a living too :). Sort of like web designers who make a living from the inefficiency of HTML as the basis of interfaces on the web.

Anyway, more clever solutions on the same principle are workable. For instance, you could make a rule that “the browser may not update an X*Y rectangle around the current location of the mouse, regardless of whether the page is loaded or not”. In practice, though, that would require too deep a hack into the browser internals, so it wouldn’t happen.

So you get even more clever and say: “if I cannot make the browser stop changing this screen rectangle, I will use another program to present a similar interface that doesn’t change out from under me”. That is basically the answer - automatically take a screenshot of the screen area you care about, capture all the hyperlink elements it contains from the DOM, incorporate them into the screenshot (sort of like a Flash applet with hyperlink buttons), and have the user use that, regardless of what may be happening with the underlying browser document.
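
The DOM half of that is easy enough to sketch; the screenshot-and-overlay half is where the real hackery lives. A hypothetical helper, plain browser JavaScript:

    // Sketch of the "capture the links, freeze the picture" idea:
    // grab every hyperlink whose box intersects a screen rectangle,
    // so a separate overlay could reproduce them over a screenshot.
    function linksInRect(left, top, width, height) {
      const hits = [];
      for (const a of document.links) {  // all <a href> elements
        const r = a.getBoundingClientRect();
        const overlaps = r.left < left + width && r.right > left &&
                         r.top < top + height && r.bottom > top;
        if (overlaps) hits.push({ href: a.href, rect: r });
      }
      return hits;
    }

    // e.g. a 300x200 box around the mouse position (mx, my):
    // const frozen = linksInRect(mx - 150, my - 100, 300, 200);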

I’m not familiar with how browsers and web sites are coded, but isn’t there a point early on in loading where the browser has an idea of what elements are on a page e.g. banner at the top, image halfway down, flash to the right? From this point, isn’t it possible to record the cursor position relative to the page?

Even if not, surely it knows what’s under the cursor at all times - it can change the cursor for links. I’d do this (sketch below):
on cursor stop moving, save its position relative to the window (e.g. 400 pixels, 556 pixels)
identify what’s under the cursor, in this order: links, text, images (even if the link is in a frame/script etc., there should be no difficulty locating it as the page loads)
keep the saved cursor position constant relative to that element as the page loads
Which of these steps do you think would be difficult to code?
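
For what it’s worth, the “what’s under the cursor” lookup is nearly a one-liner in today’s DOM (document.elementFromPoint is a real call; the priority order is just my reading of the list above):

    // Sketch of step 2: find the thing under the cursor,
    // preferring links, then text, then images.
    function anchorUnderCursor(x, y) {
      const el = document.elementFromPoint(x, y);
      for (let n = el; n; n = n.parentElement) {
        if (n.tagName === 'A') return n;           // 1. links win
      }
      if (el && el.textContent.trim()) return el;  // 2. then text
      if (el && el.tagName === 'IMG') return el;   // 3. then images
      return el;  // fall back to whatever is there
    }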

It’s not that different from pinch to zoom, is it? Parts of the page under your finger stay constant.

It’s a native feature of Firefox as far as I know. Just right-click on any image and see if you get a menu with an option to block all future images from the same site.

I like the sound of this! I’ve been wishing for a way I could selectively disable JavaScript on a per-site basis! Can I selectively block a page from loading ANYTHING from particular 3rd party sites?

Why? Would it be disorienting? Hard to use? Slow? I think you don’t find the problem as irritating as I do.

HTML allows images to be placed without being sized. So the image lower down the page may be 1x1 pixels or 1024x640. Well-coded sites will always tell the browser the image size in the tag, and the page ‘shape’ will be rendered instantly; but with lazy coding, or with pages that rely on serving images at varying dimensions (e.g. retrieving from a photo gallery or posting random images), the browser has no idea how big or small to make the IMG placeholder until it has actually retrieved the image. That is, of course, only one of the causes of jumping, but probably the most common.

That said, I’m guessing a browser could be programmed to retrieve the IMG metadata at page load and fix the placeholder size at that point, but that wouldn’t get round instances where the placeholder size is determined by JavaScript or deeply embedded CSS.
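
That early metadata grab is doable in principle: request only the first few bytes of the file with an HTTP Range header and read the size out of the image’s own header. A sketch for PNG, where width and height sit at fixed byte offsets in the first chunk (JPEG is far messier, and the server has to honor Range requests at all; the URL is a stand-in):

    // Sketch: get a PNG's pixel size by fetching only its first 24 bytes.
    // PNG stores width/height big-endian at byte offsets 16 and 20.
    async function pngSize(url) {
      const res = await fetch(url, { headers: { Range: 'bytes=0-23' } });
      const view = new DataView(await res.arrayBuffer());
      return {
        width: view.getUint32(16),   // DataView reads big-endian by default
        height: view.getUint32(20),
      };
    }

    // pngSize('https://example.com/photo.png').then(console.log);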