I apologise now if I put off some of the more general readers with this post, but I’ve struck upon a bit of a problem!
I have some automated code, written in PHP with cURL, that retrieves a mountain of information from a website, inserts it into a MySQL database, runs some statistical analysis on it, and then presents me with a nice little report. It’s wonderful – doing the process manually would take maybe 2-3 hours every day; as it is, I wake up to a nice report sitting in my inbox every morning with all the information in it.
Now I have a problem: the website that I crawl to get this information is converting to Ajax – and that presents me with a huge headache…
Web spiders, for the most part, grab a page from a server, make a list of the links on that page, and then repeat the process on each link they’ve found (each one triggering a different database query or variable call on the website).
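To make that concrete, here is a minimal sketch of the "make a list of links" step of that loop – not my actual script, just an illustration using PHP’s built-in DOM extension. In the real thing the HTML would come back from a cURL request, and each extracted link would be fed back into the same process:

```php
<?php
// Sketch of the classic crawl step: parse a fetched page and list the
// links in it. A real spider would fetch $html with cURL first, then
// queue each returned link for the same treatment.
function extract_links(string $html): array
{
    $doc = new DOMDocument();
    // Suppress warnings from imperfect real-world markup.
    @$doc->loadHTML($html);
    $links = [];
    foreach ($doc->getElementsByTagName('a') as $anchor) {
        $href = $anchor->getAttribute('href');
        if ($href !== '') {
            $links[] = $href;
        }
    }
    return $links;
}

$html = '<html><body><a href="/page1">One</a> <a href="/page2">Two</a></body></html>';
print_r(extract_links($html));
```

On an old-fashioned, server-rendered site this simple loop is all you need – every piece of content is reachable through an `href` somewhere.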
Thinking like a Human
We need to make our crawler think and act like a human – sounds easy enough, right? You’ve written a crawler before; surely you can do that?
Wrong! I can’t think of any logical way to get PHP to do this for me!
Any crawler process would need to be able to see the events and states in the document that a real user might click on.
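This is where the approach above breaks down. As a rough illustration (assuming the page uses inline `onclick` attributes – many Ajax sites attach handlers from external JavaScript instead, which is even worse), a crawler can at best enumerate the clickable elements it can see in the markup, but it has no way to actually fire those events and observe the resulting state:

```php
<?php
// Rough illustration only: list elements carrying inline onclick
// handlers, i.e. things a human might click that a link-following
// spider would never follow. Handlers attached from external
// JavaScript files are completely invisible to this approach, which
// is exactly why a plain cURL-based crawler falls short on Ajax sites.
function find_clickables(string $html): array
{
    $doc = new DOMDocument();
    @$doc->loadHTML($html);
    $xpath = new DOMXPath($doc);
    $found = [];
    foreach ($xpath->query('//*[@onclick]') as $node) {
        $found[] = [$node->nodeName, $node->getAttribute('onclick')];
    }
    return $found;
}

$html = '<div onclick="loadReport(7)">Report</div><a href="/static">Static</a>';
print_r(find_clickables($html));
```

Even in the best case, all this tells you is that *something* happens when the element is clicked – without a JavaScript engine to execute `loadReport(7)`, the data behind it stays out of reach.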
Some reading around this problem (and believe me, I have done plenty) suggests that the easiest possible way of doing this is to create an Ajax-enabled, event-driven reader. Heck, we all use one of these every day of the week – it’s your web browser, folks, whether that’s IE, Firefox, Opera, Safari, etc.
Using the Browser
Does anyone have any other bright ideas before I spend hours fighting with yet another new technology?