Hi there,
I am working on a spin-off from automating a web page: an application that can be handed any URL and from that "click" the links on that page. I'm coding in C# in VS Express 2008.
EG:
1) Google a topic.
2) Grab the list of URLs from the first page of results.
3) Open each page one at a time and automate clicking on every link.
This would all happen in the background with no visible web browser window/frame.
The application must cater for all types of page design. EG: each page could offer a different way of downloading a file from it...JavaScript, a redirect, a direct link....
I have been able to get as far as point 2: I can scrape the HTML and get a list of URLs, and I have been able to use a WebClient to download files from pages with direct links to them. I'm still a bit shaky on determining what kind of web page I'm dealing with in order to pick a specific "download" process.
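Here's roughly what I have working so far (simplified; the class and method names are my own, and the regex only catches plain href="..." links, not ones built up by script):

```csharp
using System;
using System.Collections.Generic;
using System.Net;
using System.Text.RegularExpressions;

class Scraper
{
    // Download the raw HTML for a page.
    public static string GetHtml(string url)
    {
        using (var client = new WebClient())
        {
            return client.DownloadString(url);
        }
    }

    // Pull href values out of the HTML with a simple regex.
    public static List<string> ExtractUrls(string html)
    {
        var urls = new List<string>();
        foreach (Match m in Regex.Matches(html,
            @"href\s*=\s*[""']([^""']+)[""']", RegexOptions.IgnoreCase))
        {
            urls.Add(m.Groups[1].Value);
        }
        return urls;
    }

    // The direct-link case: just save the file to disk.
    public static void DownloadFile(string url, string path)
    {
        using (var client = new WebClient())
        {
            client.DownloadFile(url, path);
        }
    }
}
```

This covers the direct-link case fine; it's everything else that's the problem.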
I am having major difficulty with scripted pages, where downloading the file happens via JavaScript or a redirect.
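For the redirect case, I've been experimenting with something like the sketch below. HttpWebRequest follows redirects on its own when AllowAutoRedirect is set; checking the final Content-Type header is just my own heuristic for telling an actual file apart from another HTML page, and may not be reliable:

```csharp
using System;
using System.Net;

class ResponseInspector
{
    // My heuristic: Content-Types starting with text/html mean we
    // landed on another page rather than a downloadable file.
    public static bool IsHtmlContentType(string contentType)
    {
        return contentType != null &&
               contentType.StartsWith("text/html", StringComparison.OrdinalIgnoreCase);
    }

    // Issue a HEAD request, let HttpWebRequest follow any 301/302
    // chain, and report whether the final target looks like HTML.
    public static bool LooksLikeHtml(string url)
    {
        var request = (HttpWebRequest)WebRequest.Create(url);
        request.AllowAutoRedirect = true;  // follow redirect chains
        request.Method = "HEAD";           // headers only, no body
        using (var response = (HttpWebResponse)request.GetResponse())
        {
            return IsHtmlContentType(response.ContentType);
        }
    }
}
```

This doesn't touch the JavaScript case at all, which is where I'm really stuck.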
Can anyone help please?
Thanks in advance!!!