What I do, when I want to download hundreds of files without clicking each one individually, is use a url2file utility, preparing the list of URLs to fetch by hand (using an editor with advanced features). Googling ‘url2file’, this YouTube video shows up. The way the presenter does it is much more complicated than the way I do it, but he walks through an approach in detail and mentions some other alternatives.
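To give an idea of what the utility is doing, here is a minimal sketch of the same thing in Python. It assumes a plain-text file called urls.txt (a hypothetical name, one URL per line) and names each saved file after the last segment of its URL:

    import urllib.request

    # Read the manually prepared URL list, skipping blank lines.
    with open("urls.txt") as f:
        urls = [line.strip() for line in f if line.strip()]

    for url in urls:
        # Derive a local filename from the last path segment of the URL.
        name = url.rstrip("/").rsplit("/", 1)[-1] or "index.html"
        try:
            urllib.request.urlretrieve(url, name)
            print("saved", name)
        except OSError as e:
            print("failed", url, e)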
When I do it this way, I do NOT get the browser’s cookies, which can cause trouble if, for example, you need to be logged in to access those pages. I think there’s a way to point url2file at your browser’s cookie directory, but I’ve never tried that.
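One workaround I know of, sketched below in Python rather than url2file: if you export your browser’s cookies to a Netscape-format file (cookies.txt here is a hypothetical name; browser extensions exist that do this export), the standard library can load it and send those cookies with each request, so login-protected pages come back as they would in the browser:

    import http.cookiejar
    import urllib.request

    # Load cookies exported from the browser in Netscape format.
    jar = http.cookiejar.MozillaCookieJar("cookies.txt")
    jar.load(ignore_discard=True, ignore_expires=True)

    # Requests made through this opener carry the saved login session.
    opener = urllib.request.build_opener(
        urllib.request.HTTPCookieProcessor(jar))

    # Placeholder URL; substitute a page that needs the login.
    with opener.open("https://example.com/protected/file.pdf") as resp:
        with open("file.pdf", "wb") as out:
            out.write(resp.read())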
There’s also the wget utility, which can do this same job from the command line, but I’ve never used it myself.
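For what it’s worth (untested by me), I believe wget can read a URL list directly with its -i option, and --load-cookies can pull in an exported cookies file, which would cover both points above:

    wget -i urls.txt
    wget --load-cookies cookies.txt -i urls.txt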