Is it possible to automatically extract specific values from a website?

It is a website that posts wholesale fuel prices. It would be very helpful if I could somehow extract those values automatically.

The website is this: http://www.gge.gr/36/index.asp
The first link has the prices for today: http://www.gge.gr/36/sub.asp?3023. All other pages follow the same format.

Is there any way to extract those values, preferably to an Excel file?

I’ve never used it, but it seems the iMacros add-on for Firefox can extract values, and maybe fill in forms as well. One of the demo macros is even titled ‘extract and fill’.

The process you need is called HTML screen scraping. Google that term and you will find all sorts of tools and how-to documents. You will probably need to write some code to get exactly what you want, and you will want to understand regular expressions. Perl will probably be your friend.
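Just to show how little code it can take, here’s a minimal Perl sketch using LWP::Simple. The URL is the one from the OP; the regex is only a placeholder that grabs anything shaped like a decimal price, so you’d tune it once you’ve looked at the page’s actual markup:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use LWP::Simple;

# Fetch the raw HTML of the price page.
my $url  = 'http://www.gge.gr/36/sub.asp?3023';
my $html = get($url) or die "Couldn't fetch $url\n";

# Placeholder pattern: match anything that looks like a
# decimal number (e.g. 1.234 or 1,23). Adjust it after
# inspecting the page source.
my @prices = $html =~ /(\d+[.,]\d{2,3})/g;

print "$_\n" for @prices;
```

Redirect the output into a .csv file and Excel will open it directly.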

Good luck.

Si

Perl is everybody’s friend. If you want to do some multi-page screen-scraping, then the WWW::Mechanize suite can do it very easily. I’d write some sample code for you, but I don’t really know what you want to extract from that site. (It’s all Greek to me.)
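For what it’s worth, here’s a skeleton anyway (just a sketch, since I’m guessing the prices sit in an ordinary HTML table). It fetches the page with WWW::Mechanize, pulls out every table with HTML::TableExtract, and dumps the rows as tab-separated text you can paste straight into Excel:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use WWW::Mechanize;
use HTML::TableExtract;

# Fetch the page.
my $mech = WWW::Mechanize->new();
$mech->get('http://www.gge.gr/36/sub.asp?3023');

# Parse out every table on the page; which one actually
# holds the prices you'd work out by inspecting the HTML.
my $te = HTML::TableExtract->new();
$te->parse($mech->content);

for my $table ($te->tables) {
    for my $row ($table->rows) {
        # Tab-separated output; empty cells become blanks.
        print join("\t", map { defined $_ ? $_ : '' } @$row), "\n";
    }
}
```

Once you know which table and columns you want, you can pass depth/count or header constraints to HTML::TableExtract->new() to select just that table, and write a proper CSV instead of dumping everything.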

For certain values of friend :wink:

Si

FileMaker.

Make a web viewer object and aim it at your website, give the object a name in Object Info, then use Middle(), Position(), and PatternCount() to parse the raw HTML returned when you query GetLayoutObjectAttribute(), passing the object’s name and the ‘content’ attribute.

If you want it recorded, have your script go on to create a new record, timestamp it, and set a field to the extracted price values.

If you want it spat out as Excel, that’s easy enough too.

Quicken automatically extracts personal account info from bank websites. I think.

It doesn’t do it through HTML scraping, though. The banks provide either an API or a statement file in a standard format (such as OFX or QIF) that Quicken can import.