Posted 31 Jan 2009

Lucene Website Crawler and Indexer

Java Lucene website crawler and indexer


This project uses the Java Lucene indexing library to build a compact yet powerful web crawling and indexing solution. There are many powerful open source internet and enterprise search solutions built on Lucene, such as Solr and Nutch. These projects, although excellent, may be overkill for simpler requirements.


A CodeProject article that inspired me to create this demo was the .NET Searcharoo search engine created by craigd. He created a web search engine designed to search entire websites by recursively crawling the links from the home page of the target site. This JSearchEngine Lucene project differs from Searcharoo in that it uses the Lucene indexer rather than Searcharoo's custom indexer. Another difference is that Searcharoo has a function that uses Windows document iFilters to parse non-HTML pages. If there is enough interest, I may extend the project to use the document filters from the Nutch web crawler to index PDF and Microsoft Office type files.

Using the Code

The solution is made up of two projects, one called JSearchEngine and one called JSP; both were created with the NetBeans IDE version 6.5.


The JSearchEngine project is the nuts and bolts of the operation. In the main method, the home page of the site to be crawled and indexed is hard-coded. Since it is a command-line app, the code can easily be modified to take the home page as a command-line parameter instead. The main control function for the crawler is shown below, and it works as follows:

  1. The indexDocs function is called with the first page as a parameter.
  2. The URL of the first page is used to build a Lucene Document object. The Document is made up of field and value pairs, such as the <title> tag as the field and its text as the value. This is all taken care of by the document object's constructor.
  3. Once the Document has been built, Lucene adds it to its index. The inner workings of Lucene are outside the scope of this article.
  4. After the document has been indexed, the links from the document are parsed into an array, and each of them is recursively indexed by the indexDocs function. An HTML parser class is used to extract the links.
  5. Only URLs containing the original domain will be followed; this prevents the crawler from following external links and attempting to crawl the entire internet!
  6. The indexer excludes zip files, as it cannot index them.
private static void indexDocs(String url) throws Exception {

      //build a Lucene Document from the page at this URL
      Document doc = HTMLDocument.Document(url);
      System.out.println("adding " + doc.get("path"));
      try {
          writer.addDocument(doc);          // add docs unconditionally
          //TODO: only add HTML docs
          //and create other doc types

          //get all links on the page, then index them
          LinkParser lp = new LinkParser(url);
          URL[] links = lp.ExtractLinks();

          for (URL l : links) {
              //make sure the URL hasn't already been indexed,
              //make sure the URL contains the home domain,
              //and ignore URLs with a querystring by excluding "?"
              if ((!indexed.contains(l.toURI().toString())) &&
                  (l.toURI().toString().contains(beginDomain)) &&
                  (!l.toURI().toString().contains("?"))) {
                  //don't index zip files
                  if (!l.toURI().toString().endsWith(".zip")) {
                      //remember this URL, then recurse into the linked page
                      indexed.add(l.toURI().toString());
                      indexDocs(l.toURI().toString());
                  }
              }
          }
      } catch (Exception e) {
          System.out.println(e.toString());
      }
}
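As noted above, the hard-coded start page can easily be replaced with a command-line parameter. The sketch below shows one way to do it; the CrawlerMain class name and the beginDomain helper are illustrative only and are not part of the original source:

```java
import java.net.URL;

public class CrawlerMain {

    // Derive the domain that crawled links must contain (see step 5 above).
    // Hypothetical helper: the original project hard-codes the start page.
    static String beginDomain(String startUrl) throws Exception {
        return new URL(startUrl).getHost();
    }

    public static void main(String[] args) throws Exception {
        // Take the home page as a command-line parameter instead of hard-coding it
        String startPage = args.length > 0
                ? args[0]
                : "http://www.example.com/index.html";
        System.out.println("crawling domain: " + beginDomain(startPage));
        // indexDocs(startPage); // kick off the recursive crawl shown above
    }
}
```

Restricting followed links to this host string is what keeps the crawler on the target site.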

JSP Search Client

Once the target site has been completely indexed, the index can be queried; further sites can also be added to the index before querying. Since the index is Lucene based, it can be queried with any compatible Lucene library, such as the Java or .NET implementation. In this demo, the Java implementation has been used. The JSP project is a set of JavaServer Pages that are used to search and display search results. In order to run this web app, it is necessary to deploy the compiled .war file on a J2EE compatible server such as GlassFish or Tomcat. The following mark-up is the entry point for the web app; it takes a search term and passes it to the results.jsp page, which queries the index and displays the results:

       <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
       <title>JSP Search Page</title>
       <form name="search" action="results.jsp" method="get">
           <input name="query" size="44"/> Search Criteria
           <input name="maxresults" size="4" value="100"/> Results Per Page
           <input type="submit" value="Search"/>
       </form>

The following is the main Java code from the results page. The variables are initialized with parameters passed from the search page in order to construct a Lucene index searcher: 

String indexName = "/opt/lucene/index";
IndexSearcher searcher = null;
Query query = null;
Hits hits = null;
int startindex = 0;
int maxpage = 50;
String queryString = null;
String startVal = null;
String maxresults = null;
int thispage = 0;

searcher = new IndexSearcher(indexName);
queryString = request.getParameter("query");
Analyzer analyzer = new StandardAnalyzer();
QueryParser qp = new QueryParser("contents", analyzer);
query = qp.parse(queryString);

hits = searcher.search(query);

Once the hits object has been instantiated with the search results, it is possible to loop through the hits and display them with HTML on the page:

    for (int i = startindex; i < (thispage + startindex); i++) {  // for each hit on this page

        Document doc = hits.doc(i);          //get the next document
        String doctitle = doc.get("title");  //get its title
        String url = doc.get("path");        //get its path field
    }
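The loop bound thispage must not run past the end of the hit list when fewer hits remain than the requested page size. The clamping can be sketched with a small helper; the ResultsPager class and hitsOnPage method are hypothetical and not taken from the original JSP:

```java
public class ResultsPager {

    // Hypothetical helper: how many hits to show on the current page,
    // clamped to the requested page size and to what actually remains.
    static int hitsOnPage(int startindex, int maxresults, int totalHits) {
        int remaining = totalHits - startindex;           // hits left from this offset
        return Math.max(0, Math.min(maxresults, remaining));
    }
}
```

For example, with 37 total hits, a start index of 30, and 10 results per page, only 7 hits should be rendered.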

Points of Interest  

Since there are two separate projects, they can be mixed and matched with other Lucene-compatible programming environments; for example, the JSP project could easily be modified to query an index created by Lucene.Net.

Further Information

Both of these projects are described in more detail in a four-part series on my blog:


History

  • 31st January, 2009: Initial post


This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)


About the Author

Engineer, tek-dev
Ireland
The author of this article is a web designer and software developer; he is also currently completing a PhD in software engineering in the area of web services development.

Article Copyright 2009 by stlane