
C#/VB - Automated WebSpider / WebRobot

15 Mar 2004
Build a flexible WebRobot and process an entire site using a WebSpider


Introduction

What is a WebSpider?

A WebSpider or crawler is an automated program that follows links on websites and calls a WebRobot to handle the contents of each link.

What is a WebRobot?

A WebRobot is a program that processes the content found through a link. A WebRobot can be used for indexing a page or extracting useful information based on a predefined query; common examples are link checkers, e-mail address extractors, multimedia extractors and update watchers.

Background

I had a recent contract to build a web page link checker. This component had to be able to check links that were stored in a database as well as links on a website, both through the local file system and over the Internet.

This article explains the WebRobot, the WebSpider and how to enhance the WebRobot through specialized content handlers. The code shown has some superfluous code removed, such as try blocks, variable initialization and minor methods.

Class overview

The classes that make up the WebRobot are: WebPageState, which represents a URI and its current state in the process chain, and an implementation of IWebPageProcessor, which performs the actual reading of the URI, calling content handlers and dealing with page errors.

The WebSpider has only one class, WebSpider. It maintains a list of pending/processed URIs in a collection of WebPageState objects and runs the WebPageProcessor against each WebPageState to extract links to other pages and to test whether the URIs are valid.

Using the code - WebRobot

Web page processing is handled by an object that implements IWebPageProcessor. The Process method expects to receive a WebPageState, which will be updated during page processing; if all is successful the method returns true. Any number of content handlers can also be called after the page has been read, by assigning WebPageContentDelegate delegates to the processor.

C#
public delegate void WebPageContentDelegate( WebPageState state );

public interface IWebPageProcessor
{
   bool Process( WebPageState state );

   WebPageContentDelegate ContentHandler { get; set; }
}
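
As an illustration, the robot can also be driven on its own, without the spider, by attaching one or more handlers to the processor. The sketch below assumes the WebPageProcessor and WebPageState classes shown in this article; the DumpTitle handler and the example URL are hypothetical and used purely for illustration.

C#
// A minimal sketch of driving the WebRobot directly (no spider).
// DumpTitle and the example URL are placeholders, not part of the article's code.
using System;
using System.Text.RegularExpressions;

public class RobotExample
{
   public static void Main( )
   {
      IWebPageProcessor processor = new WebPageProcessor( );

      // Any number of handlers can be combined on the multicast delegate.
      processor.ContentHandler += new WebPageContentDelegate( DumpTitle );

      WebPageState state = new WebPageState( "http://www.example.com/index.html" );

      if ( processor.Process( state ) )
      {
         Console.WriteLine( "{0} -> {1}", state.Uri, state.StatusCode );
      }
   }

   private static void DumpTitle( WebPageState state )
   {
      // Crude <title> extraction, purely for illustration.
      Match m = Regex.Match( state.Content,
         "<title>(.*?)</title>", RegexOptions.IgnoreCase );

      if ( m.Success )
      {
         Console.WriteLine( m.Groups[1].Value );
      }
   }
}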

The WebPageState object holds state and content information for the URI being processed. All properties of this object are read/write except for the URI, which must be passed in through the constructor.

C#
public class WebPageState
{
   private WebPageState( ) {}

   public WebPageState( Uri uri )
   {
      m_uri             = uri;
   }

   public WebPageState( string uri )
      : this( new Uri( uri ) ) { }

   Uri      m_uri;                              // URI to be processed
   string   m_content;                          // Content of web page
   string   m_processInstructions   = "";       // User defined instructions for content handlers
   bool     m_processStarted        = false;    // Becomes true when processing starts
   bool     m_processSuccessfull    = false;    // Becomes true if process was successful
   string   m_statusCode;                       // HTTP status code
   string   m_statusDescription;                // HTTP status description, or exception message

   // Standard Getters/Setters....
}

The WebPageProcessor is an implementation of the IWebPageProcessor that does the actual work of reading in the content, handling error codes/exceptions and calling the content handlers. WebPageProcessor may be replaced or extended to provide additional functionality, though adding a content handler is generally a better option.

C#
public class WebPageProcessor : IWebPageProcessor
{
   public bool Process( WebPageState state )
   {
      state.ProcessStarted       = true;
      state.ProcessSuccessfull   = false;

      // Use WebRequest.Create to handle URI's for
      // the following schemes: file, http & https
      WebRequest  req = WebRequest.Create( state.Uri );
      WebResponse res = null;

      try
      {
         // Issue the request and get a response.
         // If any problems are going to happen, they are
         // likely to happen here in the form of an exception.
         res = req.GetResponse( );

         // If we reach here then everything is likely to be OK.
         if ( res is HttpWebResponse )
         {
            state.StatusCode        =
             ((HttpWebResponse)res).StatusCode.ToString( );
            state.StatusDescription =
             ((HttpWebResponse)res).StatusDescription;
         }
         if ( res is FileWebResponse )
         {
            state.StatusCode        = "OK";
            state.StatusDescription = "OK";
         }

         if ( state.StatusCode.Equals( "OK" ) )
         {
            // Read the contents into our state
            // object and fire the content handlers
            StreamReader   sr    = new StreamReader(
              res.GetResponseStream( ) );

            state.Content        = sr.ReadToEnd( );

            if ( ContentHandler != null )
            {
               ContentHandler( state );
            }
         }

         state.ProcessSuccessfull = true;
      }
      catch( Exception ex )
      {
         HandleException( ex, state );
      }
      finally
      {
         if ( res != null )
         {
            res.Close( );
         }
      }

      return state.ProcessSuccessfull;
   }

   // Store any content handlers
   private WebPageContentDelegate m_contentHandler = null;

   public WebPageContentDelegate ContentHandler
   {
      get { return m_contentHandler; }
      set { m_contentHandler = value; }
   }
}

There are additional private methods in the WebPageProcessor to handle HTTP error codes, file not found errors when dealing with the "file://" scheme, and more severe exceptions.
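
The download contains the real implementation; as a rough guide, the HandleException method called from Process above might look something like the following sketch. The status values and exact branching are assumptions, not the author's code.

C#
// A sketch only - the real HandleException in the download may differ.
// Requires System.IO and System.Net.
private void HandleException( Exception ex, WebPageState state )
{
   WebException webEx = ex as WebException;

   if ( webEx != null && webEx.Response is HttpWebResponse )
   {
      // HTTP errors (404, 500, ...) still carry a response we can report.
      HttpWebResponse res = (HttpWebResponse)webEx.Response;

      state.StatusCode        = res.StatusCode.ToString( );
      state.StatusDescription = res.StatusDescription;
   }
   else if ( ex is FileNotFoundException ||
             ex.InnerException is FileNotFoundException )
   {
      // A "file://" URI that points to a missing file.
      state.StatusCode        = "NotFound";
      state.StatusDescription = ex.Message;
   }
   else
   {
      // Anything more severe: record the exception message.
      state.StatusCode        = "Exception";
      state.StatusDescription = ex.Message;
   }
}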

Using the code - WebSpider

The WebSpider class is really just a harness for calling the WebRobot in a particular way. It provides the robot with a specialized content handler for crawling through web links and maintains a list of both pending pages and already visited pages. The current WebSpider is designed to start from a given URI and to limit full page processing to a base path.

C#
// CONSTRUCTORS
//
// Process a URI, until all links are checked, 
// only add new links for processing if they
// point to the same host as specified in the startUri.
public WebSpider(
   string            startUri
   ) : this ( startUri, -1 ) { }

// As above only limit the links to uriProcessedCountMax.
public WebSpider(
   string            startUri,
   int               uriProcessedCountMax
   ) : this ( startUri, "", uriProcessedCountMax, 
     false, new WebPageProcessor( ) ) { }

// As above, except new links are only added if
// they are on the path specified by baseUri.
public WebSpider(
   string            startUri,
   string            baseUri,
   int               uriProcessedCountMax
   ) : this ( startUri, baseUri, uriProcessedCountMax, 
     false, new WebPageProcessor( ) ) { }

// As above, you can specify whether the web page
// content is kept after it is processed, by
// default this would be false to conserve memory
// when used on large sites.
public WebSpider(
   string            startUri,
   string            baseUri,
   int               uriProcessedCountMax,
   bool              keepWebContent,
   IWebPageProcessor webPageProcessor )
{
   // Initialize web spider ...
}

Why is there a base path limit?

Since there are trillions of pages on the Internet, the spider will check every link that it finds to see if it is valid, but it will only add new links to the pending queue if those links belong to the initial website, or to a sub-path of that website.

So, if we start from www.myhost.com/index.html and this page has links to www.myhost.com/pageWithSomeLinks.html and www.google.com/pageWithManyLinks.html, then the WebRobot will be called against both links to check whether they are valid, but new links will only be added from within www.myhost.com/pageWithSomeLinks.html.
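
In code terms, the decision comes down to a simple prefix test on the absolute URI (the StartsWith check in AddWebPage, shown further down); a small sketch using the hypothetical URIs above:

C#
// Both links are checked for validity, but only the one under the
// base URI has its own links queued. The URIs here are hypothetical.
Uri baseUri      = new Uri( "http://www.myhost.com/" );
Uri internalLink = new Uri( "http://www.myhost.com/pageWithSomeLinks.html" );
Uri externalLink = new Uri( "http://www.google.com/pageWithManyLinks.html" );

bool followInternal = internalLink.AbsoluteUri.StartsWith( baseUri.AbsoluteUri ); // true
bool followExternal = externalLink.AbsoluteUri.StartsWith( baseUri.AbsoluteUri ); // false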

Call the Execute method to start the spider. This method will add the startUri to a Queue of pending pages and then call the IWebPageProcessor until there are no pages left to process.

C#
public void Execute( )
{
   AddWebPage( StartUri, StartUri.AbsoluteUri );

   while ( WebPagesPending.Count > 0 &&
      ( UriProcessedCountMax == -1 || UriProcessedCount 
        < UriProcessedCountMax ) )
   {
      WebPageState state = (WebPageState)m_webPagesPending.Dequeue( );

      m_webPageProcessor.Process( state );

      if ( ! KeepWebContent )
      {
         state.Content = null;
      }

      UriProcessedCount++;
   }
}

A web page is only added to the queue if the URI (excluding any anchor) points to a path or a valid page (e.g. .html, .aspx, .jsp, etc.) and has not already been seen before.

C#
private bool AddWebPage( Uri baseUri, string newUri )
{
   Uri      uri      = new Uri( baseUri, 
     StrUtil.LeftIndexOf( newUri, "#" ) );

   if ( ! ValidPage( uri.LocalPath ) || m_webPages.Contains( uri ) )
   {
      return false;
   }
   WebPageState state = new WebPageState( uri );

   if ( uri.AbsoluteUri.StartsWith( BaseUri.AbsoluteUri ) )
   {
      state.ProcessInstructions += "Handle Links";
   }

   m_webPagesPending.Enqueue  ( state );
   m_webPages.Add             ( uri, state );

   return true;
}
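
AddWebPage relies on two small helpers that are not shown in the article: StrUtil.LeftIndexOf and ValidPage. The real implementations are in the download; the following is only a sketch of what they might look like.

C#
// Sketches only - the actual StrUtil.LeftIndexOf and ValidPage in the
// download may be implemented differently.

// Return the part of the string to the left of the first occurrence
// of 'find', or the whole string if 'find' is not present.
public static string LeftIndexOf( string str, string find )
{
   int pos = str.IndexOf( find );

   return ( pos == -1 ) ? str : str.Substring( 0, pos );
}

// Treat a URI as worth queuing if it looks like a path (no extension)
// or has one of a known set of page extensions.
private bool ValidPage( string localPath )
{
   string ext = Path.GetExtension( localPath ).ToLower( );   // System.IO

   return ext == "" || ext == ".html" || ext == ".htm" ||
          ext == ".asp" || ext == ".aspx" || ext == ".jsp";
}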

Examples of running the spider

The following code shows three examples of calling the WebSpider. The paths shown are examples only; they don't represent the true structure of the website. Note: the Bondi Beer website used in the examples is a site that I built using my own SiteGenerator, an easy to use program that produces static websites from dynamic content such as proprietary data files, XML / XSLT files, databases, RSS feeds and more.

C#
/*
* Check for broken links found on this website, limit the spider to 100 pages.
*/
WebSpider spider = new WebSpider( "http://www.bondibeer.com.au/", 100 );
spider.Execute( );

/*
* Check for broken links found on this website.
* There is no limit on the number of pages, but the spider will not
* look for new links on pages that are outside the path
* http://www.bondibeer.com.au/products/.  This means that the home
* page found at http://www.bondibeer.com.au/home.html may be checked
* for existence if it was called from somepub/index.html, but any
* links within that page will not be added to the pending list,
* because it is outside the base path.
*/
spider = new WebSpider(
      "http://www.bondibeer.com.au/products/somepub/index.html",
      "http://www.bondibeer.com.au/products/", -1 );
spider.Execute( );

/*
* Check for pages on the website that have funny 
* jokes or pictures of sexy women.
*/
spider = new WebSpider( "http://www.bondibeer.com.au/" );
spider.WebPageProcessor.ContentHandler += 
  new WebPageContentDelegate( FunnyJokes );
spider.WebPageProcessor.ContentHandler += 
  new WebPageContentDelegate( SexyWomen );
spider.Execute( );

private void FunnyJokes( WebPageState state )
{
   if( state.Content.IndexOf( "Funny Joke" ) > -1 )
   {
      // Do something
   }
}
private void SexyWomen( WebPageState state )
{
   Match       m     = RegExUtil.GetMatchRegEx( 
     RegularExpression.SrcExtractor, state.Content );
   string      image;

   while( m.Success )
   {
      image = m.Groups[1].ToString( ).ToLower( );

      if ( image.IndexOf( "sexy" ) > -1 || 
        image.IndexOf( "women" ) > -1 )
      {
         DownloadImage( image );
      }

      m = m.NextMatch( );
   }
}

Conclusion

The WebSpider is flexible enough to be used in a variety of useful scenarios, and could be a powerful tool for data mining websites on the Internet and on intranets. I would like to hear how people have used this code.

Outstanding Issues

These issues are minor but if anyone has any ideas then please share them.

  • state.ProcessInstructions - This is really just a quick hack to provide instructions that the content handlers can use as they see fit. I am looking for a more elegant solution to this problem.
  • MultiThreaded Spider - This project first started off as a multi-threaded spider, but that soon fell by the wayside when I found that performance was much slower when using a thread to process each URI. The bottleneck seems to be in GetResponse, which does not appear to run well across multiple threads (see the sketch after this list).
  • Valid URI, but the query data returns a bad page - The current processor does not handle the scenario where the URI points to a valid page, but the page returned by the web server is considered to be bad, e.g. http://www.validhost.com/validpage.html?opensubpage=invalidid. One idea to resolve this problem is to read the contents of the returned page and look for key pieces of information, but that technique is a little flaky.
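
On the multi-threading point, the .NET Framework limits simultaneous HTTP connections per host (two for HTTP 1.1 by default), which may explain the GetResponse bottleneck. The limit can be raised programmatically; a sketch, with the value 10 chosen arbitrarily:

C#
// Raise the default two-connections-per-host limit before starting
// any multi-threaded crawling. The value 10 is an arbitrary example.
System.Net.ServicePointManager.DefaultConnectionLimit = 10;

// Or per request, via the request's ServicePoint:
HttpWebRequest req =
   (HttpWebRequest)WebRequest.Create( "http://www.bondibeer.com.au/" );
req.ServicePoint.ConnectionLimit = 10;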

License

This article has no explicit license attached to it but may contain usage terms in the article text or the download files themselves. If in doubt please contact the author via the discussion board below.



Written By
Web Developer
Australia
I have been programming commercially since 1990, my last two major roles have been Architect/Lead Developer for an online bank and Australia's largest consumer finance comparison portal.

On the side I am a Forex Currency Trader and actively develop tools and applications for Currency Traders.

I have just launched a personal blog at www.davidcruwys.com and a website targeting Foreign Exchange traders at www.my-trading-journal.com

Comments and Discussions

 
MultiThreaded Spider (Andrew Chabokha, 25-Mar-04)
Hi David,

Nice job!

Now, about the multithreaded spider: I think it is not a problem with multithreading itself; this is an HTTP restriction. I assume GetResponse ultimately uses the WinInet library, and there are some limitations:

WinInet limits connections to a single HTTP 1.0 server to four simultaneous connections. Connections to a single HTTP 1.1 server are limited to two simultaneous connections. The HTTP 1.1 specification (RFC2068) mandates the two-connection limit. The four-connection limit for HTTP 1.0 is a self-imposed restriction that coincides with the standard that is used by a number of popular Web browsers.

Read more here:
http://support.microsoft.com/default.aspx?scid=http://support.microsoft.com:80/support/kb/articles/Q183/1/10.ASP&NoWebContent=1

You can programmatically change the number of simultaneous connections.

Cheers,
Andrew

