
I apologise now if I put off some of the more general readers with this post, but I’ve struck upon a bit of a problem!

I have some automated code, written in PHP with cURL, that retrieves a mountain of information from a website, inserts the data into a MySQL database, runs some statistical analysis on it and then presents me with a nice little report. It’s wonderful: doing the process manually would take maybe 2-3 hours every day, whereas as it is I wake up to a nice report sitting in my inbox every morning with all the information in it.
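The guts of it are nothing fancy; roughly this shape, although the URL, the regex and the table names below are invented for illustration rather than lifted from my real script:

<?php
// Fetch a page with cURL (placeholder URL).
$ch = curl_init('http://www.example.com/stats-page');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_USERAGENT, 'Mozilla/5.0 (compatible; my-report-bot)');
$html = curl_exec($ch);
curl_close($ch);

// Pull the figures out of the markup (pattern is illustrative only).
preg_match_all('/<td class="figure">([\d.]+)<\/td>/', $html, $matches);

// Stash them in MySQL for the nightly report (table and column invented).
$db = new mysqli('localhost', 'user', 'pass', 'reports');
$stmt = $db->prepare('INSERT INTO daily_figures (value) VALUES (?)');
foreach ($matches[1] as $value) {
    $stmt->bind_param('d', $value);
    $stmt->execute();
}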

Now I have a problem: the website I crawl to get this information is converting to Ajax, and that presents me with a huge headache…

Web Spiders

Web spiders, for the most part, grab a page from a server, make a list of the links in that page, and then go off and repeat the process on every link they’ve found (each one triggering a different database or variable call on the website).
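In PHP terms the whole trick boils down to something like this bare-bones sketch (not my actual code; a real spider would also resolve relative URLs and stay on-site):

<?php
// A bare-bones spider: fetch a page, list its links, then repeat.
$queue   = array('http://www.example.com/'); // starting point (placeholder URL)
$visited = array();

while ($url = array_shift($queue)) {
    if (isset($visited[$url])) {
        continue;
    }
    $visited[$url] = true;

    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    $html = curl_exec($ch);
    curl_close($ch);

    // Make a list of links in the page...
    $dom = new DOMDocument();
    @$dom->loadHTML($html); // suppress warnings from sloppy markup
    foreach ($dom->getElementsByTagName('a') as $a) {
        $queue[] = $a->getAttribute('href'); // ...and go off and do it all again
    }
}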

Ajax isn’t quite so easy: most of the page isn’t even in the HTML! It’s inserted afterwards by JavaScript, mostly when the user clicks something. The user doesn’t navigate to a different page at all; they stay on the same page and let the JavaScript refresh the content, so our crawler can’t make its list of links! Some websites allow for this by offering a “lite” (non-JavaScript) version; the one I’m using doesn’t 🙁
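To show what the crawler is now up against, here’s the sort of empty shell a page like this sends back (a made-up example, not the real site):

<?php
// What the server now sends is essentially an empty shell - the real content
// (and the "links") only appear once the JavaScript has run in a browser.
$html = <<<HTML
<html>
  <body>
    <div id="results"><!-- filled in by JavaScript after a click --></div>
    <script src="loader.js"></script>
  </body>
</html>
HTML;

$dom = new DOMDocument();
@$dom->loadHTML($html);
// cURL never runs loader.js, so there is nothing here to follow:
echo $dom->getElementsByTagName('a')->length; // prints 0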

Thinking like a Human

We need to make our crawler think and act like a human. Sounds easy enough, right? You’ve written a crawler before; surely you can do that?

Wrong! I can’t think of any logical way to get PHP to do this for me!

Any crawler process would need to be able to see events and states in the document that a real user might click on.
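The nearest I can get in plain PHP is to scan the fetched markup for things that look clickable, along these lines (a rough heuristic rather than a solution; the attributes it hunts for are just the obvious suspects):

<?php
// Scan a fetched page for things that *look* clickable.
$html = file_get_contents('http://www.example.com/ajax-page'); // placeholder URL

$dom = new DOMDocument();
@$dom->loadHTML($html);

$xpath = new DOMXPath($dom);
// Inline click handlers, plus javascript: pseudo-links.
$clickable = $xpath->query('//*[@onclick] | //a[starts-with(@href, "javascript:")]');

foreach ($clickable as $node) {
    // I can see that these exist, but not what clicking them would actually load.
    echo $node->nodeName . ': ' . $node->getAttribute('onclick') . "\n";
}

That tells me what a user could click, but still not what would happen if they did, which is the bit that actually matters.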

Some reading around this problem (and believe me, I’ve done plenty) suggests that the easiest possible way of doing this is to create an Ajax-enabled, event-driven reader. Heck, we all use one of these every day of the week: it’s your web browser, folks, whether that’s IE, Firefox, Opera, Safari, etc.

Using the Browser

There are a couple of tools around that seem to use the browser in this way: Watir (driven from Ruby) and Crowbar (which uses a headless Mozilla-based browser).
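Crowbar looks the most promising fit for my existing PHP, since (if I’ve understood its docs correctly) it sits as a local web service that returns the DOM after the JavaScript has run. Something like the following ought to slot into my current script, though treat the port, the url parameter and the delay parameter as assumptions on my part rather than gospel:

<?php
// Ask a locally running Crowbar instance for the *rendered* page.
// Assumed defaults: port 10000, "url" and "delay" query parameters.
$target = urlencode('http://www.example.com/ajax-page'); // placeholder URL
$ch = curl_init('http://127.0.0.1:10000/?url=' . $target . '&delay=3000');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$renderedHtml = curl_exec($ch);   // this time the Ajax-injected content is in there
curl_close($ch);

// ...and the existing link-listing code can carry on as before.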

Does anyone have any other bright ideas before I spend hours fighting with yet another new technology?