Program for extracting images from web pages

I need to extract a series of images from a web page in order of their appearance on the page and rename them so that I can make a CBR (comic) from them. The image files have names that are random strings of numbers and letters, and my current method is to tediously drag the images from the web page to a folder one at a time, renaming each sequentially as I place it in the folder. What I would like to find is a (free, offline, Windows) program that will extract and rename these files automatically, ideally letting me set a template for the filename, such as title_chapter_###.

The Web Developer toolbar for Firefox lets you see the individual images, but you have to use it manually; there is no automation feature. The same goes for FlashGot. That’s just two for starters. As for automatically naming the images for you, good luck with that. A file-rename tool works just fine, but you have to do some work.

If you have any skill at JavaScript, you could write a “scriptlet” or bookmarklet (see the Wikipedia article on bookmarklets) to do just what you want.

Then you’d drive your browser to the relevant page and click the bookmarklet you’d already added to your favorites. So the process is manual, but it’s just one-click-per-page manual.
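
Something along these lines is what I have in mind. It’s only a sketch, and it assumes the comic pages are ordinary img elements on the page; the title_chapter prefix is just a placeholder for whatever template you want. It collects every image URL in the order it appears in the document and opens a numbered list in a new tab, which you can then hand to a download manager. To actually use it, strip the comments and collapse it onto one line before saving it as a bookmarklet.

javascript:(function () {
  var prefix = 'title_chapter';                  // placeholder filename template
  var imgs = document.querySelectorAll('img');   // DOM order = order on the page
  var lines = [];
  imgs.forEach(function (img, i) {
    var num = String(i + 1).padStart(3, '0');    // 001, 002, 003, ...
    lines.push(prefix + '_' + num + '\t' + img.src);
  });
  var w = window.open('', '_blank');             // show the numbered list in a new tab
  w.document.write('<pre>' + lines.join('\n') + '</pre>');
  w.document.close();
})();

It only lists the URLs rather than saving the files, since a bookmarklet can’t write to disk on its own, but the numbered list gives you both the order and the names (and your popup blocker may need to allow the new tab).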

I’ve used Firefox add-ins like “DownThemAll!” for this. From the webpage for it: “It lets you download all the links or images contained in a webpage and much more”.

Getting them in order is the issue, assuming there’s no order inherent in the file names. You’ll need a downloader that allows you to set a name, will download in order, and will automatically append a number.

Though, if you’re talking about an imgur album, there’s an option near the bottom to download the whole thing as a ZIP file. It’s hidden behind a … or More button or something like that. Rename to .cbz, and you have a valid comic book file.
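
If it’s not an imgur album, a small script can handle the download-in-order-and-number step once you have the URL list (the bookmarklet sketch above would give you one). This is only a rough sketch: it assumes Node 18 or later for the built-in fetch, plus a hypothetical urls.txt holding one image URL per line, already in page order; title_chapter is again just a placeholder.

// Rough sketch: assumes Node 18+ (built-in fetch) and a urls.txt file
// with one image URL per line, already in page order.
const fs = require('fs');

const prefix = 'title_chapter';                    // placeholder filename template
const urls = fs.readFileSync('urls.txt', 'utf8')
  .split('\n').map(function (s) { return s.trim(); }).filter(Boolean);

(async function () {
  for (let i = 0; i < urls.length; i++) {
    const num = String(i + 1).padStart(3, '0');    // 001, 002, 003, ...
    const ext = (urls[i].match(/\.(jpe?g|png|gif|webp)/i) || [0, 'jpg'])[1];  // guess extension, default jpg
    const res = await fetch(urls[i]);              // download in page order
    const buf = Buffer.from(await res.arrayBuffer());
    fs.writeFileSync(prefix + '_' + num + '.' + ext, buf);  // sequential name
    console.log('saved ' + prefix + '_' + num + '.' + ext);
  }
})();

Zip the resulting folder, rename the archive to .cbz, and you’re back to a valid comic book file.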

Are we helping you facilitate copyright violation with this? Website contents are under copyright protection, and no one can assume you have the right to rip and reformat J. Random Website’s content.

I’m not a legal eagle, and I find the way copyright is enforced generally distasteful, but the SDMB has a long-standing practice of giving copyright-related issues a wide berth.

I have used DownThemAll! for many years, amongst other ways of downloading. (One of my favourites for difficult cases, when disabling JavaScript or adding an extension to defeat right-click hindrances is a pain at that moment, e.g. if in a hurry, is good old Page Info in Firefox: all the images on a page are shown full-size under Media, with a convenient Save button. But that’s not automated.)

DownThemAll! is also the most convenient way to capture those Twitter banners on people’s pages, which are otherwise not easy to get. Just fire it up and include profilebanners, which is not selected by default even if you’ve chosen all images.

I’ve generally had difficulty with things like ImageGrabber, cos it’s difficult to install under Linux (like most things), and after installing it in a Windows VirtualBox I couldn’t work it out! Anyway, that’s specifically for 'boorus; however, there may be other generalized image-takers on GitHub.

That’s adorable.

For a text-based HTTP file grabber, there’s xget; e.g., you can get it for Cygwin, which makes it xget.exe.

Then you can script it with bash, xargs, sed, cut, etc.

In IE, save the page as “Web page, complete” (.htm). This puts the HTML in the main file and everything else in subfolders. You can use “find” in File Explorer to display them all, then move them en masse to wherever you want.

Bump.
I never did find a satisfactory program for this last time, but the need came up again, and this time I did some deeper digging and found a little-known (only 271 users) Firefox add-on that does exactly what I need it to do. If you need to do something similar, you should take a look at Simage. (During the search, I also found this useful little add-on that displays only the images from a web page.)