How do you see the index of a site?

I want to see a list of every page on this website: http://www.citizensinformation.ie/en/

I won’t get into the rather complicated reasons.

I remember that it used to be possible on old websites - you’d click a button and a list of all pages would appear.
Is that still possible?

You mean a sitemap? You can only see it if they have created one and published it.

You’re thinking of a site map. Not everyone does those anymore.

You can easily download the whole site and make your own index.

Assuming you use Windows, you can do this using HTTrack (or WebHTTrack for Linux)
edit… or just go to the site’s own sitemap: http://www.citizensinformation.ie/sitemap.xml
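If the site does publish a sitemap, pulling the list of pages out of it is easy, since sitemaps follow a standard XML format (sitemaps.org). A minimal sketch in Python - the inline XML here is a made-up sample; in practice you’d fetch the real file (e.g. http://www.citizensinformation.ie/sitemap.xml) with urllib and parse the response the same way:

```python
import xml.etree.ElementTree as ET

# Made-up sample sitemap in the standard sitemaps.org format.
# A real one would be downloaded first, e.g. with
# urllib.request.urlopen("http://www.citizensinformation.ie/sitemap.xml").
sample = """<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>http://example.com/en/</loc></url>
  <url><loc>http://example.com/en/about/</loc></url>
</urlset>"""

# Sitemaps live in this XML namespace, so findall needs a prefix map.
NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

def sitemap_urls(xml_text):
    """Return every <loc> URL listed in a sitemap document."""
    root = ET.fromstring(xml_text)
    return [loc.text for loc in root.findall(".//sm:loc", NS)]

print(sitemap_urls(sample))
```

Large sites often publish a sitemap *index* file pointing at several child sitemaps; you’d run the same parse on each child to get the full list.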

I know of no browser, past or present, with a built-in method to display a list of all the pages on a site.

It used to be that you could often see all the file names in a directory just by visiting the directory’s URL (ending in a “/”), but 1) that only lists the files in that one directory, which is a subset of the whole site, and 2) most sites disable directory listing these days since it can be a huge security problem.

Failing that, the only real option is to use a spider program. I use wget to spider a web site. It has various options to copy all the pages or merely list them.

There could also be a browser add-on to do that, check with add-on scripts site for your browser.

Note that spidering can be a very intensive hit on a web site and can do bad things like go on forever spidering dynamically created pages if you don’t limit the number of levels. (I had this happen to my server once back in the early days by the idiots at rhymes with “Sucktomi”. Their spider hit an exponentially increasing number of dynamically generated (useless after a bit) web pages. Drove our whole network to a crawl.)
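A depth limit is what stops a spider from chasing dynamically generated pages forever (wget exposes this as its `--level` option). A rough sketch of the idea in Python - the crawl runs over an in-memory dict standing in for HTTP fetches, and the page names and links are invented for illustration:

```python
import re
from collections import deque

# Stand-in for a web site: page -> HTML body. In a real spider each
# lookup would be an HTTP request instead of a dict access.
FAKE_SITE = {
    "/en/": '<a href="/en/health/">Health</a> <a href="/en/housing/">Housing</a>',
    "/en/health/": '<a href="/en/health/gp/">GP services</a>',
    "/en/housing/": '<a href="/en/">Home</a>',
    "/en/health/gp/": '<a href="/en/health/gp/?page=2">Next</a>',  # endless pagination
}

def crawl(start, max_depth):
    """Breadth-first crawl that never follows links more than max_depth deep."""
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        page, depth = queue.popleft()
        if depth >= max_depth:
            continue  # the depth limit: stop following links from here
        for link in re.findall(r'href="([^"]+)"', FAKE_SITE.get(page, "")):
            if link not in seen:
                seen.add(link)
                queue.append((link, depth + 1))
    return sorted(seen)

print(crawl("/en/", max_depth=2))
```

With `max_depth=2` the crawl never reaches the self-generating “page=2” link, which is exactly the runaway-pagination problem described above.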

Right, I was thinking of a site map - I didn’t realise that not all sites had those :o

(In fairness I’ve never needed one before)