Lucene.Net ultra fast search for MVC or WebForms site => made easy!

Posted 22 Aug 2013 (CPOL)

Step-by-step tutorial for any developer who wishes to get Lucene.Net search working with their web site or app really quickly!
Lucene.Net-search-MVC-sample-site-master
.gitignore
.nuget
NuGet.targets
LuceneSearch.Data
Model
Properties
Repository
LuceneSearch.Library
Properties
LuceneSearch.Mvc
Archives
ClearLuceneIndex.old
Content
kickstart
fonts
base
icomoon-webfont.eot
icomoon-webfont.svg
icomoon-webfont.ttf
icomoon-webfont.woff
social
icomoonsocial-webfont.eot
icomoonsocial-webfont.svg
icomoonsocial-webfont.ttf
icomoonsocial-webfont.woff
img
breadcrumbs-bg.gif
chosen-sprite.png
fancybox
blank.gif
fancy_close.png
fancy_loading.png
fancy_nav_left.png
fancy_nav_right.png
fancy_shadow_e.png
fancy_shadow_n.png
fancy_shadow_ne.png
fancy_shadow_nw.png
fancy_shadow_s.png
fancy_shadow_se.png
fancy_shadow_sw.png
fancy_shadow_w.png
fancy_title_left.png
fancy_title_main.png
fancy_title_over.png
fancy_title_right.png
fancybox-x.png
fancybox-y.png
fancybox.png
grid.png
icon-arrow-right.png
icon-check.png
rte
link.png
link_break.png
picture_empty.png
text_align_center.png
text_align_left.png
text_align_right.png
text_bold.png
text_italic.png
text_list_bullets.png
text_list_numbers.png
text_strikethrough.png
text_subscript.png
text_superscript.png
Controllers
Global.asax
Properties
Scripts
ViewModels
Views
Home
Shared
LuceneSearch.WebForms
Content
kickstart
fonts
base
icomoon-webfont.eot
icomoon-webfont.svg
icomoon-webfont.ttf
icomoon-webfont.woff
social
icomoonsocial-webfont.eot
icomoonsocial-webfont.svg
icomoonsocial-webfont.ttf
icomoonsocial-webfont.woff
img
breadcrumbs-bg.gif
chosen-sprite.png
fancybox
blank.gif
fancy_close.png
fancy_loading.png
fancy_nav_left.png
fancy_nav_right.png
fancy_shadow_e.png
fancy_shadow_n.png
fancy_shadow_ne.png
fancy_shadow_nw.png
fancy_shadow_s.png
fancy_shadow_se.png
fancy_shadow_sw.png
fancy_shadow_w.png
fancy_title_left.png
fancy_title_main.png
fancy_title_over.png
fancy_title_right.png
fancybox-x.png
fancybox-y.png
fancybox.png
grid.png
icon-arrow-right.png
icon-check.png
rte
link.png
link_break.png
picture_empty.png
text_align_center.png
text_align_left.png
text_align_right.png
text_bold.png
text_italic.png
text_list_bullets.png
text_list_numbers.png
text_strikethrough.png
text_subscript.png
text_superscript.png
Global.asax
Properties
Scripts
ViewModels
README.md
MvcLuceneSampleApp
MvcLuceneSampleApp
Archives
ClearLuceneIndex.old
bin
ICSharpCode.SharpZipLib.dll
Lucene.Net.dll
MvcLuceneSampleApp.dll
MvcLuceneSampleApp.pdb
Content
Controllers
Global.asax
Lucene
Model
MvcLuceneSampleApp.csproj.user
Properties
Scripts
ZeroClipboard.swf
ViewModels
Views
Home
Shared
packages
Lucene.Net.2.9.4.1
lib
net40
Lucene.Net.dll
Lucene.Net.2.9.4.1.nupkg
SharpZipLib.0.86.0
lib
11
ICSharpCode.SharpZipLib.dll
20
ICSharpCode.SharpZipLib.dll
SL3
SharpZipLib.Silverlight3.dll
SL4
SharpZipLib.Silverlight4.dll
SharpZipLib.0.86.0.nupkg
README
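The sample projects above wire Lucene.Net into an MVC and a WebForms site. As a rough sketch of the core idea — using the Lucene.Net 2.9.4 package shipped in the packages folder above, with hypothetical field names, and noting that member casing can differ slightly between Lucene.Net releases — indexing and searching boil down to:

```csharp
using System;
using Lucene.Net.Analysis.Standard;
using Lucene.Net.Documents;
using Lucene.Net.Index;
using Lucene.Net.QueryParsers;
using Lucene.Net.Search;
using Lucene.Net.Store;
using Version = Lucene.Net.Util.Version;

class SearchSketch
{
    static void Main()
    {
        // Build an in-memory index (a real site would use FSDirectory.Open(path)).
        var dir = new RAMDirectory();
        var analyzer = new StandardAnalyzer(Version.LUCENE_29);
        var writer = new IndexWriter(dir, analyzer, true, IndexWriter.MaxFieldLength.UNLIMITED);

        // "Id" and "Name" are hypothetical fields for this sketch.
        var doc = new Document();
        doc.Add(new Field("Id", "1", Field.Store.YES, Field.Index.NOT_ANALYZED));
        doc.Add(new Field("Name", "ultra fast search", Field.Store.YES, Field.Index.ANALYZED));
        writer.AddDocument(doc);
        writer.Optimize();
        writer.Close();

        // Query the index and print the stored "Name" field of each hit.
        var searcher = new IndexSearcher(dir, true);
        var parser = new QueryParser(Version.LUCENE_29, "Name", analyzer);
        var hits = searcher.Search(parser.Parse("fast"), 10);
        foreach (var scoreDoc in hits.ScoreDocs)
            Console.WriteLine(searcher.Doc(scoreDoc.Doc).Get("Name"));
        searcher.Close();
    }
}
```

The sample solution wraps the same pattern in a repository class; the Lucene.Net.xml documentation that follows describes the analysis pipeline these calls run on.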
<?xml version="1.0"?>
<doc>
    <assembly>
        <name>Lucene.Net</name>
    </assembly>
    <members>
        <member name="T:Lucene.Net.Analysis.Analyzer">
            <summary>An Analyzer builds TokenStreams, which analyze text.  It thus represents a
            policy for extracting index terms from text.
            <p/>
            Typical implementations first build a Tokenizer, which breaks the stream of
            characters from the Reader into raw Tokens.  One or more TokenFilters may
            then be applied to the output of the Tokenizer.
            </summary>
        </member>
        <member name="M:Lucene.Net.Analysis.Analyzer.TokenStream(System.String,System.IO.TextReader)">
            <summary>Creates a TokenStream which tokenizes all the text in the provided
            Reader.  Must be able to handle null field name for
            backward compatibility.
            </summary>
        </member>
        <member name="M:Lucene.Net.Analysis.Analyzer.ReusableTokenStream(System.String,System.IO.TextReader)">
            <summary>Creates a TokenStream that is allowed to be re-used
            from the previous time that the same thread called
            this method.  Callers that do not need to use more
            than one TokenStream at the same time from this
            analyzer should use this method for better
            performance.
            </summary>
        </member>
        <member name="M:Lucene.Net.Analysis.Analyzer.GetPreviousTokenStream">
            <summary>Used by Analyzers that implement reusableTokenStream
            to retrieve previously saved TokenStreams for re-use
            by the same thread. 
            </summary>
        </member>
        <member name="M:Lucene.Net.Analysis.Analyzer.SetPreviousTokenStream(System.Object)">
            <summary>Used by Analyzers that implement reusableTokenStream
            to save a TokenStream for later re-use by the same
            thread. 
            </summary>
        </member>
        <member name="M:Lucene.Net.Analysis.Analyzer.SetOverridesTokenStreamMethod(System.Type)">
            <deprecated> This is only present to preserve
            back-compat of classes that subclass a core analyzer
            and override tokenStream but not reusableTokenStream 
            </deprecated>
        </member>
        <member name="M:Lucene.Net.Analysis.Analyzer.GetPositionIncrementGap(System.String)">
            <summary> Invoked before indexing a Fieldable instance if
            terms have already been added to that field.  This allows custom
            analyzers to place an automatic position increment gap between
            Fieldable instances using the same field name.  The default
            position increment gap is 0.  With a 0 position increment gap and
            the typical default token position increment of 1, all terms in a field,
            including across Fieldable instances, are in successive positions, allowing
            exact PhraseQuery matches, for instance, across Fieldable instance boundaries.
            
            </summary>
            <param name="fieldName">Fieldable name being indexed.
            </param>
            <returns> position increment gap, added to the next token emitted from <see cref="M:Lucene.Net.Analysis.Analyzer.TokenStream(System.String,System.IO.TextReader)"/>
            </returns>
        </member>
        <member name="M:Lucene.Net.Analysis.Analyzer.GetOffsetGap(Lucene.Net.Documents.Fieldable)">
            <summary> Just like <see cref="M:Lucene.Net.Analysis.Analyzer.GetPositionIncrementGap(System.String)"/>, except for
            Token offsets instead.  By default this returns 1 for
            tokenized fields, as if the fields were joined
            with an extra space character, and 0 for un-tokenized
            fields.  This method is only called if the field
            produced at least one token for indexing.
            
            </summary>
            <param name="field">the field just indexed
            </param>
            <returns> offset gap, added to the next token emitted from <see cref="M:Lucene.Net.Analysis.Analyzer.TokenStream(System.String,System.IO.TextReader)"/>
            </returns>
        </member>
        <member name="M:Lucene.Net.Analysis.Analyzer.Close">
            <summary>Frees persistent resources used by this Analyzer </summary>
        </member>
        <member name="T:Lucene.Net.Analysis.ASCIIFoldingFilter">
            <summary> This class converts alphabetic, numeric, and symbolic Unicode characters
            which are not in the first 127 ASCII characters (the "Basic Latin" Unicode
            block) into their ASCII equivalents, if one exists.
            
            Characters from the following Unicode blocks are converted; however, only
            those characters with reasonable ASCII alternatives are converted:
            
            <list type="bullet">
            <item>C1 Controls and Latin-1 Supplement: <a href="http://www.unicode.org/charts/PDF/U0080.pdf">http://www.unicode.org/charts/PDF/U0080.pdf</a></item>
            <item>Latin Extended-A: <a href="http://www.unicode.org/charts/PDF/U0100.pdf">http://www.unicode.org/charts/PDF/U0100.pdf</a></item>
            <item>Latin Extended-B: <a href="http://www.unicode.org/charts/PDF/U0180.pdf">http://www.unicode.org/charts/PDF/U0180.pdf</a></item>
            <item>Latin Extended Additional: <a href="http://www.unicode.org/charts/PDF/U1E00.pdf">http://www.unicode.org/charts/PDF/U1E00.pdf</a></item>
            <item>Latin Extended-C: <a href="http://www.unicode.org/charts/PDF/U2C60.pdf">http://www.unicode.org/charts/PDF/U2C60.pdf</a></item>
            <item>Latin Extended-D: <a href="http://www.unicode.org/charts/PDF/UA720.pdf">http://www.unicode.org/charts/PDF/UA720.pdf</a></item>
            <item>IPA Extensions: <a href="http://www.unicode.org/charts/PDF/U0250.pdf">http://www.unicode.org/charts/PDF/U0250.pdf</a></item>
            <item>Phonetic Extensions: <a href="http://www.unicode.org/charts/PDF/U1D00.pdf">http://www.unicode.org/charts/PDF/U1D00.pdf</a></item>
            <item>Phonetic Extensions Supplement: <a href="http://www.unicode.org/charts/PDF/U1D80.pdf">http://www.unicode.org/charts/PDF/U1D80.pdf</a></item>
            <item>General Punctuation: <a href="http://www.unicode.org/charts/PDF/U2000.pdf">http://www.unicode.org/charts/PDF/U2000.pdf</a></item>
            <item>Superscripts and Subscripts: <a href="http://www.unicode.org/charts/PDF/U2070.pdf">http://www.unicode.org/charts/PDF/U2070.pdf</a></item>
            <item>Enclosed Alphanumerics: <a href="http://www.unicode.org/charts/PDF/U2460.pdf">http://www.unicode.org/charts/PDF/U2460.pdf</a></item>
            <item>Dingbats: <a href="http://www.unicode.org/charts/PDF/U2700.pdf">http://www.unicode.org/charts/PDF/U2700.pdf</a></item>
            <item>Supplemental Punctuation: <a href="http://www.unicode.org/charts/PDF/U2E00.pdf">http://www.unicode.org/charts/PDF/U2E00.pdf</a></item>
            <item>Alphabetic Presentation Forms: <a href="http://www.unicode.org/charts/PDF/UFB00.pdf">http://www.unicode.org/charts/PDF/UFB00.pdf</a></item>
            <item>Halfwidth and Fullwidth Forms: <a href="http://www.unicode.org/charts/PDF/UFF00.pdf">http://www.unicode.org/charts/PDF/UFF00.pdf</a></item>
            </list>
            
            See: <a href="http://en.wikipedia.org/wiki/Latin_characters_in_Unicode">http://en.wikipedia.org/wiki/Latin_characters_in_Unicode</a>
            
            The set of character conversions supported by this class is a superset of
            those supported by Lucene's <see cref="T:Lucene.Net.Analysis.ISOLatin1AccentFilter"/> which strips
            accents from Latin1 characters.  For example, 'à' will be replaced by
            'a'.
            </summary>
        </member>
        <member name="T:Lucene.Net.Analysis.TokenFilter">
            <summary> A TokenFilter is a TokenStream whose input is another TokenStream.
            <p/>
            This is an abstract class; subclasses must override <see cref="M:Lucene.Net.Analysis.TokenStream.IncrementToken"/>.
            
            </summary>
            <seealso cref="T:Lucene.Net.Analysis.TokenStream">
            </seealso>
        </member>
        <member name="T:Lucene.Net.Analysis.TokenStream">
            <summary> A <c>TokenStream</c> enumerates the sequence of tokens, either from
            <see cref="T:Lucene.Net.Documents.Field"/>s of a <see cref="T:Lucene.Net.Documents.Document"/> or from query text.
            <p/>
            This is an abstract class. Concrete subclasses are:
            <list type="bullet">
            <item><see cref="T:Lucene.Net.Analysis.Tokenizer"/>, a <c>TokenStream</c> whose input is a Reader; and</item>
            <item><see cref="T:Lucene.Net.Analysis.TokenFilter"/>, a <c>TokenStream</c> whose input is another
            <c>TokenStream</c>.</item>
            </list>
            A new <c>TokenStream</c> API has been introduced with Lucene 2.9. This API
            has moved from being <see cref="T:Lucene.Net.Analysis.Token"/> based to <see cref="T:Lucene.Net.Util.Attribute"/> based. While
            <see cref="T:Lucene.Net.Analysis.Token"/> still exists in 2.9 as a convenience class, the preferred way
            to store the information of a <see cref="T:Lucene.Net.Analysis.Token"/> is to use <see cref="T:Lucene.Net.Util.AttributeImpl"/>s.
            <p/>
            <c>TokenStream</c> now extends <see cref="T:Lucene.Net.Util.AttributeSource"/>, which provides
            access to all of the token <see cref="T:Lucene.Net.Util.Attribute"/>s for the <c>TokenStream</c>.
            Note that only one instance per <see cref="T:Lucene.Net.Util.AttributeImpl"/> is created and reused
            for every token. This approach reduces object creation and allows local
            caching of references to the <see cref="T:Lucene.Net.Util.AttributeImpl"/>s. See
            <see cref="M:Lucene.Net.Analysis.TokenStream.IncrementToken"/> for further details.
            <p/>
            <b>The workflow of the new <c>TokenStream</c> API is as follows:</b>
            <list type="bullet">
            <item>Instantiation of <c>TokenStream</c>/<see cref="T:Lucene.Net.Analysis.TokenFilter"/>s which add/get
            attributes to/from the <see cref="T:Lucene.Net.Util.AttributeSource"/>.</item>
            <item>The consumer calls <see cref="M:Lucene.Net.Analysis.TokenStream.Reset"/>.</item>
            <item>The consumer retrieves attributes from the stream and stores local
            references to all attributes it wants to access</item>
            <item>The consumer calls <see cref="M:Lucene.Net.Analysis.TokenStream.IncrementToken"/> until it returns false and
            consumes the attributes after each call.</item>
            <item>The consumer calls <see cref="M:Lucene.Net.Analysis.TokenStream.End"/> so that any end-of-stream operations
            can be performed.</item>
            <item>The consumer calls <see cref="M:Lucene.Net.Analysis.TokenStream.Close"/> to release any resource when finished
            using the <c>TokenStream</c></item>
            </list>
            To make sure that filters and consumers know which attributes are available,
            the attributes must be added during instantiation. Filters and consumers are
            not required to check for availability of attributes in
            <see cref="M:Lucene.Net.Analysis.TokenStream.IncrementToken"/>.
            <p/>
            You can find some example code for the new API in the analysis package level
            Javadoc.
            <p/>
            Sometimes it is desirable to capture the current state of a <c>TokenStream</c>,
            e.g. for buffering purposes (see <see cref="T:Lucene.Net.Analysis.CachingTokenFilter"/>,
            <see cref="T:Lucene.Net.Analysis.TeeSinkTokenFilter"/>). For this usecase
            <see cref="M:Lucene.Net.Util.AttributeSource.CaptureState"/> and <see cref="M:Lucene.Net.Util.AttributeSource.RestoreState(Lucene.Net.Util.AttributeSource.State)"/>
            can be used.
            </summary>
        </member>
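The consumer workflow enumerated above can be sketched as a short C# fragment. This is a hypothetical example, not code from the sample project; member casing and the `TermAttribute` type follow the Lucene.Net 2.9.x port of the Java API and may differ in other releases:

```csharp
using System;
using System.IO;
using Lucene.Net.Analysis;
using Lucene.Net.Analysis.Standard;
using Lucene.Net.Analysis.Tokenattributes;
using Version = Lucene.Net.Util.Version;

class TokenStreamWorkflow
{
    static void Main()
    {
        var analyzer = new StandardAnalyzer(Version.LUCENE_29);
        TokenStream stream = analyzer.TokenStream("body", new StringReader("ultra fast search"));

        // Retrieve attribute references once, before iterating (per the workflow above).
        var term = (TermAttribute)stream.AddAttribute(typeof(TermAttribute));

        stream.Reset();                        // the consumer calls Reset()
        while (stream.IncrementToken())        // advance until IncrementToken() returns false
            Console.WriteLine(term.Term());    // consume the attributes for this token
        stream.End();                          // end-of-stream operations (final offset etc.)
        stream.Close();                        // release resources
    }
}
```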
        <member name="T:Lucene.Net.Util.AttributeSource">
            <summary> An AttributeSource contains a list of different <see cref="T:Lucene.Net.Util.AttributeImpl"/>s,
            and methods to add and get them. There can only be a single instance
            of an attribute in the same AttributeSource instance. This is ensured
            by passing in the actual type of the Attribute (Class&lt;Attribute&gt;) to 
            the <see cref="M:Lucene.Net.Util.AttributeSource.AddAttribute(System.Type)"/>, which then checks if an instance of
            that type is already present. If yes, it returns the instance, otherwise
            it creates a new instance and returns it.
            </summary>
        </member>
        <member name="M:Lucene.Net.Util.AttributeSource.#ctor">
            <summary> An AttributeSource using the default attribute factory <see cref="F:Lucene.Net.Util.AttributeSource.AttributeFactory.DEFAULT_ATTRIBUTE_FACTORY"/>.</summary>
        </member>
        <member name="M:Lucene.Net.Util.AttributeSource.#ctor(Lucene.Net.Util.AttributeSource)">
            <summary> An AttributeSource that uses the same attributes as the supplied one.</summary>
        </member>
        <member name="M:Lucene.Net.Util.AttributeSource.#ctor(Lucene.Net.Util.AttributeSource.AttributeFactory)">
            <summary> An AttributeSource using the supplied <see cref="T:Lucene.Net.Util.AttributeSource.AttributeFactory"/> for creating new <see cref="T:Lucene.Net.Util.Attribute"/> instances.</summary>
        </member>
        <member name="M:Lucene.Net.Util.AttributeSource.GetAttributeFactory">
            <summary> Returns the AttributeFactory in use.</summary>
        </member>
        <member name="M:Lucene.Net.Util.AttributeSource.GetAttributeClassesIterator">
             <summary>Returns a new iterator that iterates the attribute classes
             in the same order they were added in.
             Signature for Java 1.5: <c>public Iterator&lt;Class&lt;? extends Attribute&gt;&gt; getAttributeClassesIterator()</c>
            
             Note that this return value is different from Java in that it enumerates over the values
             and not the keys
             </summary>
        </member>
        <member name="M:Lucene.Net.Util.AttributeSource.GetAttributeImplsIterator">
            <summary>Returns a new iterator that iterates all unique Attribute implementations.
            This iterator may contain fewer entries than <see cref="M:Lucene.Net.Util.AttributeSource.GetAttributeClassesIterator"/>,
            if one instance implements more than one Attribute interface.
            Signature for Java 1.5: <c>public Iterator&lt;AttributeImpl&gt; getAttributeImplsIterator()</c>
            </summary>
        </member>
        <member name="F:Lucene.Net.Util.AttributeSource.knownImplClasses">
            <summary>A cache that stores all interfaces for known implementation classes, for performance (reflection is slow). </summary>
        </member>
        <member name="M:Lucene.Net.Util.AttributeSource.AddAttributeImpl(Lucene.Net.Util.AttributeImpl)">
            <summary>Adds a custom AttributeImpl instance with one or more Attribute interfaces. </summary>
        </member>
        <member name="M:Lucene.Net.Util.AttributeSource.AddAttribute(System.Type)">
            <summary> The caller must pass in a Class&lt;? extends Attribute&gt; value.
            This method first checks if an instance of that class is 
            already in this AttributeSource and returns it. Otherwise a
            new instance is created, added to this AttributeSource and returned. 
            Signature for Java 1.5: <c>public &lt;T extends Attribute&gt; T addAttribute(Class&lt;T&gt;)</c>
            </summary>
        </member>
        <member name="M:Lucene.Net.Util.AttributeSource.HasAttributes">
            <summary>Returns true, iff this AttributeSource has any attributes </summary>
        </member>
        <member name="M:Lucene.Net.Util.AttributeSource.HasAttribute(System.Type)">
            <summary> The caller must pass in a Class&lt;? extends Attribute&gt; value. 
            Returns true, iff this AttributeSource contains the passed-in Attribute.
            Signature for Java 1.5: <c>public boolean hasAttribute(Class&lt;? extends Attribute&gt;)</c>
            </summary>
        </member>
        <member name="M:Lucene.Net.Util.AttributeSource.GetAttribute(System.Type)">
            <summary> The caller must pass in a Class&lt;? extends Attribute&gt; value. 
            Returns the instance of the passed-in Attribute contained in this AttributeSource.
            Signature for Java 1.5: <c>public &lt;T extends Attribute&gt; T getAttribute(Class&lt;T&gt;)</c>
            <p/>
            It is recommended to always use <see cref="M:Lucene.Net.Util.AttributeSource.AddAttribute(System.Type)"/> even in consumers
            of TokenStreams, because you cannot know whether a specific TokenStream really uses
            a specific Attribute. <see cref="M:Lucene.Net.Util.AttributeSource.AddAttribute(System.Type)"/> will automatically make the attribute
            available. If you want to use the attribute only when it is available (to optimize
            consuming), use <see cref="M:Lucene.Net.Util.AttributeSource.HasAttribute(System.Type)"/>.
            </summary>
            <throws> IllegalArgumentException if this AttributeSource does not contain the Attribute. </throws>
        </member>
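As the notes above suggest, a consumer can rely on AddAttribute being idempotent: asking for the same attribute type twice yields the same instance. A hypothetical fragment (assuming `stream` is an existing `TokenStream`; member casing per the 2.9.x port, may vary):

```csharp
// AddAttribute returns the existing instance when an attribute of that
// type is already present in the AttributeSource; otherwise it creates one.
var a = (TermAttribute)stream.AddAttribute(typeof(TermAttribute));
var b = (TermAttribute)stream.AddAttribute(typeof(TermAttribute));

// a and b reference the same instance, and HasAttribute now reports true.
bool same = ReferenceEquals(a, b);
bool present = stream.HasAttribute(typeof(TermAttribute));
```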
        <member name="M:Lucene.Net.Util.AttributeSource.ClearAttributes">
            <summary> Resets all Attributes in this AttributeSource by calling
            <see cref="M:Lucene.Net.Util.AttributeImpl.Clear"/> on each Attribute implementation.
            </summary>
        </member>
        <member name="M:Lucene.Net.Util.AttributeSource.CaptureState">
            <summary> Captures the state of all Attributes. The return value can be passed to
            <see cref="M:Lucene.Net.Util.AttributeSource.RestoreState(Lucene.Net.Util.AttributeSource.State)"/> to restore the state of this or another AttributeSource.
            </summary>
        </member>
        <member name="M:Lucene.Net.Util.AttributeSource.RestoreState(Lucene.Net.Util.AttributeSource.State)">
            <summary> Restores this state by copying the values of all attribute implementations
            that this state contains into the attribute implementations of the targetStream.
            The targetStream must contain a corresponding instance for each argument
            contained in this state (e.g. it is not possible to restore the state of
            an AttributeSource containing a TermAttribute into an AttributeSource using
            a Token instance as implementation).
            
            Note that this method does not affect attributes of the targetStream
            that are not contained in this state. In other words, if for example
            the targetStream contains an OffsetAttribute, but this state doesn't, then
            the value of the OffsetAttribute remains unchanged. It might be desirable to
            reset its value to the default, in which case the caller should first
            call <see cref="M:Lucene.Net.Util.AttributeSource.ClearAttributes"/> on the targetStream.   
            </summary>
        </member>
        <member name="M:Lucene.Net.Util.AttributeSource.CloneAttributes">
            <summary> Performs a clone of all <see cref="T:Lucene.Net.Util.AttributeImpl"/> instances and returns them in a new
            AttributeSource instance. This method can be used, e.g., to create another TokenStream
            with exactly the same attributes (using <see cref="M:Lucene.Net.Util.AttributeSource.#ctor(Lucene.Net.Util.AttributeSource)"/>).
            </summary>
        </member>
        <member name="T:Lucene.Net.Util.AttributeSource.AttributeFactory">
            <summary> An AttributeFactory creates instances of <see cref="T:Lucene.Net.Util.AttributeImpl"/>s.</summary>
        </member>
        <member name="M:Lucene.Net.Util.AttributeSource.AttributeFactory.CreateAttributeInstance(System.Type)">
            <summary> returns an <see cref="T:Lucene.Net.Util.AttributeImpl"/> for the supplied <see cref="T:Lucene.Net.Util.Attribute"/> interface class.
            <p/>Signature for Java 1.5: <c>public AttributeImpl createAttributeInstance(Class&lt;? extends Attribute&gt; attClass)</c>
            </summary>
        </member>
        <member name="F:Lucene.Net.Util.AttributeSource.AttributeFactory.DEFAULT_ATTRIBUTE_FACTORY">
            <summary> This is the default factory that creates <see cref="T:Lucene.Net.Util.AttributeImpl"/>s using the
            class name of the supplied <see cref="T:Lucene.Net.Util.Attribute"/> interface class by appending <c>Impl</c> to it.
            </summary>
        </member>
        <member name="T:Lucene.Net.Util.AttributeSource.State">
            <summary> This class holds the state of an AttributeSource.</summary>
            <seealso cref="M:Lucene.Net.Util.AttributeSource.CaptureState">
            </seealso>
            <seealso cref="M:Lucene.Net.Util.AttributeSource.RestoreState(Lucene.Net.Util.AttributeSource.State)">
            </seealso>
        </member>
        <member name="F:Lucene.Net.Analysis.TokenStream.DEFAULT_TOKEN_WRAPPER_ATTRIBUTE_FACTORY">
            <deprecated> Remove this when old API is removed! 
            </deprecated>
        </member>
        <member name="F:Lucene.Net.Analysis.TokenStream.tokenWrapper">
            <deprecated> Remove this when old API is removed! 
            </deprecated>
        </member>
        <member name="F:Lucene.Net.Analysis.TokenStream.onlyUseNewAPI">
            <deprecated> Remove this when old API is removed! 
            </deprecated>
        </member>
        <member name="F:Lucene.Net.Analysis.TokenStream.supportedMethods">
            <deprecated> Remove this when old API is removed! 
            </deprecated>
        </member>
        <member name="F:Lucene.Net.Analysis.TokenStream.knownMethodSupport">
            <deprecated> Remove this when old API is removed! 
            </deprecated>
        </member>
        <member name="M:Lucene.Net.Analysis.TokenStream.GetSupportedMethods(System.Type)">
            <deprecated> Remove this when old API is removed! 
            </deprecated>
        </member>
        <member name="M:Lucene.Net.Analysis.TokenStream.#ctor">
            <summary> A TokenStream using the default attribute factory.</summary>
        </member>
        <member name="M:Lucene.Net.Analysis.TokenStream.#ctor(Lucene.Net.Util.AttributeSource)">
            <summary> A TokenStream that uses the same attributes as the supplied one.</summary>
        </member>
        <member name="M:Lucene.Net.Analysis.TokenStream.#ctor(Lucene.Net.Util.AttributeSource.AttributeFactory)">
            <summary> A TokenStream using the supplied AttributeFactory for creating new <see cref="T:Lucene.Net.Util.Attribute"/> instances.</summary>
        </member>
        <member name="M:Lucene.Net.Analysis.TokenStream.InitTokenWrapper(Lucene.Net.Util.AttributeSource)">
            <deprecated> Remove this when old API is removed! 
            </deprecated>
        </member>
        <member name="M:Lucene.Net.Analysis.TokenStream.Check">
            <deprecated> Remove this when old API is removed! 
            </deprecated>
        </member>
        <member name="M:Lucene.Net.Analysis.TokenStream.SetOnlyUseNewAPI(System.Boolean)">
            <summary> For extra performance you can globally enable the new
            <see cref="M:Lucene.Net.Analysis.TokenStream.IncrementToken"/> API using <see cref="T:Lucene.Net.Util.Attribute"/>s. There will be a
            small, but in most cases negligible performance increase by enabling this,
            but it only works if <b>all</b> <c>TokenStream</c>s use the new API and
            implement <see cref="M:Lucene.Net.Analysis.TokenStream.IncrementToken"/>. This setting can only be enabled
            globally.
            <p/>
            This setting only affects <c>TokenStream</c>s instantiated after this
            call. All <c>TokenStream</c>s already created use the other setting.
            <p/>
            All core <see cref="T:Lucene.Net.Analysis.Analyzer"/>s are compatible with this setting; if you have
            your own <c>TokenStream</c>s that are also compatible, you should enable
            this.
            <p/>
            When enabled, tokenization may throw an <see cref="T:System.InvalidOperationException"/>
            if the whole tokenizer chain is not compatible, e.g. if one of the
            <c>TokenStream</c>s does not implement the new <c>TokenStream</c> API.
            <p/>
            The default is <c>false</c>, so there is the fallback to the old API
            available.
            
            </summary>
            <deprecated> This setting will no longer be needed in Lucene 3.0 as the old
            API will be removed.
            </deprecated>
        </member>
        <member name="M:Lucene.Net.Analysis.TokenStream.GetOnlyUseNewAPI">
            <summary> Returns whether only the new API is used.
            
            </summary>
            <seealso cref="M:Lucene.Net.Analysis.TokenStream.SetOnlyUseNewAPI(System.Boolean)">
            </seealso>
            <deprecated> This setting will no longer be needed in Lucene 3.0 as
            the old API will be removed.
            </deprecated>
        </member>
        <member name="M:Lucene.Net.Analysis.TokenStream.IncrementToken">
            <summary> Consumers (i.e., <see cref="T:Lucene.Net.Index.IndexWriter"/>) use this method to advance the stream to
            the next token. Implementing classes must implement this method and update
            the appropriate <see cref="T:Lucene.Net.Util.AttributeImpl"/>s with the attributes of the next
            token.
            
            The producer must make no assumptions about the attributes after the
            method has returned: the caller may arbitrarily change them. If the
            producer needs to preserve the state for subsequent calls, it can use
            <see cref="M:Lucene.Net.Util.AttributeSource.CaptureState"/> to create a copy of the current attribute state.
            
            This method is called for every token of a document, so an efficient
            implementation is crucial for good performance. To avoid calls to
            <see cref="M:Lucene.Net.Util.AttributeSource.AddAttribute(System.Type)"/> and <see cref="M:Lucene.Net.Util.AttributeSource.GetAttribute(System.Type)"/> or downcasts,
            references to all <see cref="T:Lucene.Net.Util.AttributeImpl"/>s that this stream uses should be
            retrieved during instantiation.
            
            To ensure that filters and consumers know which attributes are available,
            the attributes must be added during instantiation. Filters and consumers
            are not required to check for availability of attributes in
            <see cref="M:Lucene.Net.Analysis.TokenStream.IncrementToken"/>.
            
            </summary>
            <returns> false for end of stream; true otherwise
            
            Note that this method will be defined abstract in Lucene
            3.0.
            </returns>
        </member>
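To make the contract above concrete, here is a minimal, self-contained sketch (hypothetical names, written in the Java style the snippets in these docs already use — not the actual Lucene.Net types): the producer exposes a single reusable term attribute that each `incrementToken()` call overwrites, so a consumer must copy any value it wants to keep before advancing.

```java
// Illustrative sketch of the IncrementToken contract: the producer reuses one
// mutable attribute object, so consumers must copy values they want to keep.
import java.util.ArrayList;
import java.util.List;

class WhitespaceStream {
    private final String[] words;
    private int pos = -1;
    // the single, reused "term attribute"
    final StringBuilder termAtt = new StringBuilder();

    WhitespaceStream(String text) { this.words = text.split("\\s+"); }

    // advances to the next token; returns false at end of stream
    boolean incrementToken() {
        if (++pos >= words.length) return false;
        termAtt.setLength(0);          // clear the reused attribute
        termAtt.append(words[pos]);    // fill it with the next token
        return true;
    }

    static List<String> consume(WhitespaceStream ts) {
        List<String> out = new ArrayList<>();
        while (ts.incrementToken()) {
            out.add(ts.termAtt.toString()); // copy, since termAtt is reused
        }
        return out;
    }
}
```

This mirrors why references to attributes should be retrieved once at instantiation: the same object is reused for every token.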
        <member name="M:Lucene.Net.Analysis.TokenStream.End">
            <summary> This method is called by the consumer after the last token has been
            consumed, after <see cref="M:Lucene.Net.Analysis.TokenStream.IncrementToken"/> returned <c>false</c>
            (using the new <c>TokenStream</c> API). Streams implementing the old API
            should upgrade to use this feature.
            <p/>
            This method can be used to perform any end-of-stream operations, such as
            setting the final offset of a stream. The final offset of a stream might
            differ from the offset of the last token, e.g. if one or more trailing
            whitespace characters followed the last token and a <see cref="T:Lucene.Net.Analysis.WhitespaceTokenizer"/> was used.
            
            </summary>
            <throws>  IOException </throws>
        </member>
        <member name="M:Lucene.Net.Analysis.TokenStream.Next(Lucene.Net.Analysis.Token)">
            <summary> Returns the next token in the stream, or null at EOS. When possible, the
            input Token should be used as the returned Token (this gives fastest
            tokenization performance), but this is not required and a new Token may be
            returned. Callers may re-use a single Token instance for successive calls
            to this method.
            
            This implicitly defines a "contract" between consumers (callers of this
            method) and producers (implementations of this method that are the source
            for tokens):
            <list type="bullet">
            <item>A consumer must fully consume the previously returned <see cref="T:Lucene.Net.Analysis.Token"/>
            before calling this method again.</item>
            <item>A producer must call <see cref="M:Lucene.Net.Analysis.Token.Clear"/> before setting the fields in
            it and returning it</item>
            </list>
            Also, the producer must make no assumptions about a <see cref="T:Lucene.Net.Analysis.Token"/> after it
            has been returned: the caller may arbitrarily change it. If the producer
            needs to hold onto the <see cref="T:Lucene.Net.Analysis.Token"/> for subsequent calls, it must clone()
            it before storing it. Note that a <see cref="T:Lucene.Net.Analysis.TokenFilter"/> is considered a
            consumer.
            
            </summary>
            <param name="reusableToken">a <see cref="T:Lucene.Net.Analysis.Token"/> that may or may not be used to return;
            this parameter should never be null (the callee is not required to
            check for null before using it, but it is a good idea to assert that
            it is not null.)
            </param>
            <returns> next <see cref="T:Lucene.Net.Analysis.Token"/> in the stream or null if end-of-stream was hit
            </returns>
            <deprecated> The new <see cref="M:Lucene.Net.Analysis.TokenStream.IncrementToken"/> and <see cref="T:Lucene.Net.Util.AttributeSource"/>
            APIs should be used instead.
            </deprecated>
        </member>
        <member name="M:Lucene.Net.Analysis.TokenStream.Next">
            <summary> Returns the next <see cref="T:Lucene.Net.Analysis.Token"/> in the stream, or null at EOS.
            
            </summary>
            <deprecated> The returned Token is a "full private copy" (not re-used across
            calls to <see cref="M:Lucene.Net.Analysis.TokenStream.Next"/>) but will be slower than calling
            <see cref="M:Lucene.Net.Analysis.TokenStream.Next(Lucene.Net.Analysis.Token)"/> or using the new <see cref="M:Lucene.Net.Analysis.TokenStream.IncrementToken"/>
            method with the new <see cref="T:Lucene.Net.Util.AttributeSource"/> API.
            </deprecated>
        </member>
        <member name="M:Lucene.Net.Analysis.TokenStream.Reset">
            <summary> Resets this stream to the beginning. This is an optional operation, so
            subclasses may or may not implement this method. <see cref="M:Lucene.Net.Analysis.TokenStream.Reset"/> is not needed for
            the standard indexing process. However, if the tokens of a
            <c>TokenStream</c> are intended to be consumed more than once, it is
            necessary to implement <see cref="M:Lucene.Net.Analysis.TokenStream.Reset"/>. Note that if your TokenStream
            caches tokens and feeds them back again after a reset, it is imperative
            that you clone the tokens when you store them away (on the first pass) as
            well as when you return them (on future passes after <see cref="M:Lucene.Net.Analysis.TokenStream.Reset"/>).
            </summary>
        </member>
        <member name="M:Lucene.Net.Analysis.TokenStream.Close">
            <summary>Releases resources associated with this stream. </summary>
        </member>
        <member name="T:Lucene.Net.Analysis.TokenStream.MethodSupport">
            <deprecated> Remove this when old API is removed! 
            </deprecated>
        </member>
        <member name="T:Lucene.Net.Analysis.TokenStream.TokenWrapperAttributeFactory">
            <deprecated> Remove this when old API is removed! 
            </deprecated>
        </member>
        <member name="F:Lucene.Net.Analysis.TokenFilter.input">
            <summary>The source of tokens for this filter. </summary>
        </member>
        <member name="M:Lucene.Net.Analysis.TokenFilter.#ctor(Lucene.Net.Analysis.TokenStream)">
            <summary>Construct a token stream filtering the given input. </summary>
        </member>
        <member name="M:Lucene.Net.Analysis.TokenFilter.End">
            <summary>Performs end-of-stream operations, if any, and then calls <c>end()</c> on the
            input TokenStream.<p/> 
            <b>NOTE:</b> Be sure to call <c>super.end()</c> first when overriding this method.
            </summary>
        </member>
        <member name="M:Lucene.Net.Analysis.TokenFilter.Close">
            <summary>Close the input TokenStream. </summary>
        </member>
        <member name="M:Lucene.Net.Analysis.TokenFilter.Reset">
            <summary>Reset the filter as well as the input TokenStream. </summary>
        </member>
        <member name="M:Lucene.Net.Analysis.ASCIIFoldingFilter.FoldToASCII(System.Char[],System.Int32)">
            <summary> Converts characters above ASCII to their ASCII equivalents.  For example,
            accents are removed from accented characters.
            </summary>
            <param name="input">The string to fold
            </param>
            <param name="length">The number of characters in the input string
            </param>
        </member>
        <member name="T:Lucene.Net.Analysis.BaseCharFilter">
            <summary>
            Base utility class for implementing a <see cref="T:Lucene.Net.Analysis.CharFilter"/>.
            Subclass this, record mappings by calling
            <see cref="M:Lucene.Net.Analysis.BaseCharFilter.AddOffCorrectMap(System.Int32,System.Int32)"/>, and then invoke the
            <c>Correct</c> method to correct an offset.
            </summary>
        </member>
        <member name="T:Lucene.Net.Analysis.CharFilter">
            <summary> Subclasses of CharFilter can be chained to filter a CharStream.
            They can be used as <see cref="T:System.IO.TextReader"/> with additional offset
            correction. <see cref="T:Lucene.Net.Analysis.Tokenizer"/>s will automatically use <see cref="M:Lucene.Net.Analysis.CharFilter.CorrectOffset(System.Int32)"/>
            if a CharFilter/CharStream subclass is used.
            
            </summary>
            <version>  $Id$
            
            </version>
        </member>
        <member name="T:Lucene.Net.Analysis.CharStream">
            <summary> CharStream adds <see cref="M:Lucene.Net.Analysis.CharStream.CorrectOffset(System.Int32)"/>
            functionality over <see cref="T:System.IO.TextReader"/>.  All Tokenizers accept a
            CharStream instead of <see cref="T:System.IO.TextReader"/> as input, which enables
            arbitrary character based filtering before tokenization. 
            The <see cref="M:Lucene.Net.Analysis.CharStream.CorrectOffset(System.Int32)"/> method fixes offsets to account for
            removal or insertion of characters, so that the offsets
            reported in the tokens match the character offsets of the
            original Reader.
            </summary>
        </member>
        <member name="M:Lucene.Net.Analysis.CharStream.CorrectOffset(System.Int32)">
            <summary> Called by CharFilter(s) and Tokenizer to correct token offset.
            
            </summary>
            <param name="currentOff">offset as seen in the output
            </param>
            <returns> corrected offset based on the input
            </returns>
        </member>
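The offset bookkeeping described above can be sketched with a simplified, hypothetical stand-in (illustrative only, not the real BaseCharFilter API): each time the filter removes a character, it records the cumulative difference at that output position, and the correction method adds back the difference recorded at or before the queried offset.

```java
// Simplified sketch of char-filter offset correction: when characters are
// removed before tokenization, token offsets must be mapped back to the
// positions they had in the original input.
import java.util.ArrayList;
import java.util.List;

class StripFilter {
    private final String output;
    private final List<int[]> corrections = new ArrayList<>(); // {outputOffset, cumulativeDiff}

    // removes every occurrence of 'drop' and records offset corrections
    StripFilter(String input, char drop) {
        StringBuilder sb = new StringBuilder();
        int diff = 0;
        for (int i = 0; i < input.length(); i++) {
            char c = input.charAt(i);
            if (c == drop) {
                diff++;                                  // one more char removed
                corrections.add(new int[]{sb.length(), diff});
            } else {
                sb.append(c);
            }
        }
        output = sb.toString();
    }

    String output() { return output; }

    // maps an offset in the filtered output back to the original input
    int correctOffset(int currentOff) {
        int diff = 0;
        for (int[] c : corrections) {
            if (c[0] <= currentOff) diff = c[1]; else break;
        }
        return currentOff + diff;
    }
}
```

With input `"foo-bar"` and the hyphen stripped, a token starting at output offset 3 (`bar`) is corrected back to original offset 4, so highlighting against the unfiltered text stays accurate.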
        <member name="M:Lucene.Net.Analysis.CharFilter.Correct(System.Int32)">
            <summary> Subclasses may want to override this to correct the current offset.
            
            </summary>
            <param name="currentOff">current offset
            </param>
            <returns> corrected offset
            </returns>
        </member>
        <member name="M:Lucene.Net.Analysis.CharFilter.CorrectOffset(System.Int32)">
            <summary> Chains the corrected offset through the input
            CharFilter.
            </summary>
        </member>
        <member name="M:Lucene.Net.Analysis.BaseCharFilter.Correct(System.Int32)">
            <summary>Retrieves the corrected offset. </summary>
        </member>
        <member name="T:Lucene.Net.Analysis.CachingTokenFilter">
            <summary> This class can be used if the token attributes of a TokenStream
            are intended to be consumed more than once. It caches
            all token attribute states locally in a List.
            
            <p/>CachingTokenFilter implements the optional method
            <see cref="M:Lucene.Net.Analysis.TokenStream.Reset"/>, which repositions the
            stream to the first Token. 
            </summary>
        </member>
        <member name="M:Lucene.Net.Analysis.CachingTokenFilter.Next(Lucene.Net.Analysis.Token)">
            <deprecated> Will be removed in Lucene 3.0. This method is final, as it should
            not be overridden. Delegates to the backwards compatibility layer. 
            </deprecated>
        </member>
        <member name="M:Lucene.Net.Analysis.CachingTokenFilter.Next">
            <deprecated> Will be removed in Lucene 3.0. This method is final, as it should
            not be overridden. Delegates to the backwards compatibility layer. 
            </deprecated>
        </member>
        <member name="T:Lucene.Net.Analysis.CharacterCache">
            <summary> Replacement for Java 1.5 Character.valueOf()</summary>
            <deprecated> Move to Character.valueOf() in 3.0
            </deprecated>
        </member>
        <member name="M:Lucene.Net.Analysis.CharacterCache.ValueOf(System.Char)">
            <summary> Returns a Character instance representing the given char value
            
            </summary>
            <param name="c">a char value
            </param>
            <returns> a Character representation of the given char value.
            </returns>
        </member>
        <member name="T:Lucene.Net.Analysis.CharArraySet">
            <summary> A simple class that stores Strings as char[]'s in a
            hash table.  Note that this is not a general purpose
            class.  For example, it cannot remove items from the
            set, nor does it resize its hash table to be smaller,
            etc.  It is designed to be quick to test if a char[]
            is in the set without the necessity of converting it
            to a String first.
            </summary>
        </member>
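The trade-off described above (membership tests without allocating a String) can be illustrated with a hypothetical miniature version — not the real implementation — that hashes a char range directly and probes a fixed-size table:

```java
// Miniature sketch of the CharArraySet idea: membership tests on a char[]
// range hash the characters directly, with no String ever created.
class MiniCharArraySet {
    private final char[][] slots;

    MiniCharArraySet(int capacity, String... entries) {
        slots = new char[capacity][];
        for (String e : entries) {
            char[] chars = e.toCharArray();
            int slot = slot(chars, 0, chars.length);
            while (slots[slot] != null) slot = (slot + 1) % slots.length; // linear probe
            slots[slot] = chars;
        }
    }

    private int slot(char[] text, int off, int len) {
        int h = 0;
        for (int i = off; i < off + len; i++) h = 31 * h + text[i];
        return (h & 0x7fffffff) % slots.length;
    }

    // true if the len chars of text starting at off are in the set;
    // no allocation happens on this path
    boolean contains(char[] text, int off, int len) {
        int slot = slot(text, off, len);
        while (slots[slot] != null) {
            char[] e = slots[slot];
            if (e.length == len) {
                boolean eq = true;
                for (int i = 0; i < len && eq; i++) eq = e[i] == text[off + i];
                if (eq) return true;
            }
            slot = (slot + 1) % slots.length;
        }
        return false;
    }
}
```

This is why the class suits stop-word checks in a hot tokenization loop: the candidate term already sits in a char[] buffer, and no conversion is needed to test membership.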
        <member name="M:Lucene.Net.Analysis.CharArraySet.#ctor(System.Int32,System.Boolean)">
            <summary>Create set with enough capacity to hold startSize
            terms 
            </summary>
        </member>
        <member name="M:Lucene.Net.Analysis.CharArraySet.#ctor(System.Collections.ICollection,System.Boolean)">
            <summary>Create set from a Collection of char[] or String </summary>
        </member>
        <member name="M:Lucene.Net.Analysis.CharArraySet.#ctor(System.Char[][],System.Boolean,System.Int32)">
            <summary>Create set from entries </summary>
        </member>
        <member name="M:Lucene.Net.Analysis.CharArraySet.Contains(System.Char[],System.Int32,System.Int32)">
            <summary>true if the <c>len</c> chars of <c>text</c> starting at <c>off</c>
            are in the set 
            </summary>
        </member>
        <member name="M:Lucene.Net.Analysis.CharArraySet.Contains(System.String)">
            <summary>true if the <c>System.String</c> is in the set </summary>
        </member>
        <member name="M:Lucene.Net.Analysis.CharArraySet.GetSlot(System.String)">
            <summary>Returns true if the String is in the set </summary>
        </member>
        <member name="M:Lucene.Net.Analysis.CharArraySet.Add(System.String)">
            <summary>Add this String into the set </summary>
        </member>
        <member name="M:Lucene.Net.Analysis.CharArraySet.Add(System.Char[])">
            <summary>Add this char[] directly to the set.
            If ignoreCase is true for this Set, the text array will be directly modified.
            The user should never modify this text array after calling this method.
            </summary>
        </member>
        <member name="M:Lucene.Net.Analysis.CharArraySet.UnmodifiableSet(Lucene.Net.Analysis.CharArraySet)">
            <summary> Returns an unmodifiable <see cref="T:Lucene.Net.Analysis.CharArraySet"/>. This allows providing
            unmodifiable views of internal sets for "read-only" use.
            </summary>
            <param name="set_Renamed">a set for which the unmodifiable set is returned.
            </param>
            <returns> a new unmodifiable <see cref="T:Lucene.Net.Analysis.CharArraySet"/>.
            </returns>
            <exception cref="T:System.NullReferenceException">NullReferenceException thrown 
            if the given set is <c>null</c>.</exception>
        </member>
        <member name="M:Lucene.Net.Analysis.CharArraySet.AddAll(System.Collections.ICollection)">
            <summary>Adds all of the elements in the specified collection to this collection </summary>
        </member>
        <member name="M:Lucene.Net.Analysis.CharArraySet.Clear">
            <summary>Removes all elements from the set </summary>
        </member>
        <member name="M:Lucene.Net.Analysis.CharArraySet.RemoveAll(System.Collections.ICollection)">
            <summary>Removes from this set all of its elements that are contained in the specified collection </summary>
        </member>
        <member name="M:Lucene.Net.Analysis.CharArraySet.RetainAll(System.Collections.ICollection)">
            <summary>Retains only the elements in this set that are contained in the specified collection </summary>
        </member>
        <member name="T:Lucene.Net.Analysis.CharArraySet.CharArraySetIterator">
            <summary>The Iterator&lt;String&gt; for this set.  Strings are constructed on the fly, so
            use <c>nextCharArray</c> for more efficient access. 
            </summary>
        </member>
        <member name="M:Lucene.Net.Analysis.CharArraySet.CharArraySetIterator.NextCharArray">
            <summary>do not modify the returned char[] </summary>
        </member>
        <member name="P:Lucene.Net.Analysis.CharArraySet.CharArraySetIterator.Current">
            <summary>Returns the next String, as a Set&lt;String&gt; would...
            use nextCharArray() for better efficiency. 
            </summary>
        </member>
        <member name="T:Lucene.Net.Analysis.CharArraySet.UnmodifiableCharArraySet">
            <summary> Efficient unmodifiable <see cref="T:Lucene.Net.Analysis.CharArraySet"/>. This implementation does not
            delegate calls to a given <see cref="T:Lucene.Net.Analysis.CharArraySet"/> like
            Collections.UnmodifiableSet(java.util.Set) does. Instead it passes
            the internal representation of a <see cref="T:Lucene.Net.Analysis.CharArraySet"/> to a super
            constructor and overrides all mutators. 
            </summary>
        </member>
        <member name="T:Lucene.Net.Analysis.CharReader">
            <summary> CharReader is a Reader wrapper. It reads chars from
            Reader and outputs <see cref="T:Lucene.Net.Analysis.CharStream"/>, defining an
            identity <see cref="M:Lucene.Net.Analysis.CharReader.CorrectOffset(System.Int32)"/> method that
            simply returns the provided offset.
            </summary>
        </member>
        <member name="T:Lucene.Net.Analysis.CharTokenizer">
            <summary>An abstract base class for simple, character-oriented tokenizers.</summary>
        </member>
        <member name="T:Lucene.Net.Analysis.Tokenizer">
            <summary> A Tokenizer is a TokenStream whose input is a Reader.
            <p/>
            This is an abstract class; subclasses must override <see cref="M:Lucene.Net.Analysis.TokenStream.IncrementToken"/>
            <p/>
            NOTE: Subclasses overriding <see cref="M:Lucene.Net.Analysis.TokenStream.Next(Lucene.Net.Analysis.Token)"/> must call
            <see cref="M:Lucene.Net.Util.AttributeSource.ClearAttributes"/> before setting attributes.
            Subclasses overriding <see cref="M:Lucene.Net.Analysis.TokenStream.IncrementToken"/> must call
            <see cref="M:Lucene.Net.Analysis.Token.Clear"/> before setting Token attributes.
            </summary>
        </member>
        <member name="F:Lucene.Net.Analysis.Tokenizer.input">
            <summary>The text source for this Tokenizer. </summary>
        </member>
        <member name="M:Lucene.Net.Analysis.Tokenizer.#ctor">
            <summary>Construct a tokenizer with null input. </summary>
        </member>
        <member name="M:Lucene.Net.Analysis.Tokenizer.#ctor(System.IO.TextReader)">
            <summary>Construct a token stream processing the given input. </summary>
        </member>
        <member name="M:Lucene.Net.Analysis.Tokenizer.#ctor(Lucene.Net.Util.AttributeSource.AttributeFactory)">
            <summary>Construct a tokenizer with null input using the given AttributeFactory. </summary>
        </member>
        <member name="M:Lucene.Net.Analysis.Tokenizer.#ctor(Lucene.Net.Util.AttributeSource.AttributeFactory,System.IO.TextReader)">
            <summary>Construct a token stream processing the given input using the given AttributeFactory. </summary>
        </member>
        <member name="M:Lucene.Net.Analysis.Tokenizer.#ctor(Lucene.Net.Util.AttributeSource)">
            <summary>Construct a token stream processing the given input using the given AttributeSource. </summary>
        </member>
        <member name="M:Lucene.Net.Analysis.Tokenizer.#ctor(Lucene.Net.Util.AttributeSource,System.IO.TextReader)">
            <summary>Construct a token stream processing the given input using the given AttributeSource. </summary>
        </member>
        <member name="M:Lucene.Net.Analysis.Tokenizer.Close">
            <summary>By default, closes the input Reader. </summary>
        </member>
        <member name="M:Lucene.Net.Analysis.Tokenizer.CorrectOffset(System.Int32)">
            <summary>Return the corrected offset. If <see cref="F:Lucene.Net.Analysis.Tokenizer.input"/> is a <see cref="T:Lucene.Net.Analysis.CharStream"/> subclass
            this method calls <see cref="M:Lucene.Net.Analysis.CharStream.CorrectOffset(System.Int32)"/>, else returns <c>currentOff</c>.
            </summary>
            <param name="currentOff">offset as seen in the output
            </param>
            <returns> corrected offset based on the input
            </returns>
            <seealso cref="M:Lucene.Net.Analysis.CharStream.CorrectOffset(System.Int32)">
            </seealso>
        </member>
        <member name="M:Lucene.Net.Analysis.Tokenizer.Reset(System.IO.TextReader)">
            <summary>Expert: Reset the tokenizer to a new reader.  Typically, an
            analyzer (in its reusableTokenStream method) will use
            this to re-use a previously created tokenizer. 
            </summary>
        </member>
        <member name="M:Lucene.Net.Analysis.CharTokenizer.IsTokenChar(System.Char)">
            <summary>Returns true iff a character should be included in a token. This
            tokenizer generates tokens from maximal runs of adjacent characters that
            satisfy this predicate. Characters for which this returns false are used to
            define token boundaries and are not included in tokens. 
            </summary>
        </member>
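A stripped-down, hypothetical version of that loop (illustrative only, not the Lucene.Net source) shows how a single predicate drives tokenization — adjacent characters satisfying it form one token, and everything else is a boundary:

```java
// Sketch of the CharTokenizer loop: maximal runs of characters satisfying
// the predicate become tokens; boundary characters are dropped.
import java.util.ArrayList;
import java.util.List;
import java.util.function.IntPredicate;

class PredicateTokenizer {
    static List<String> tokenize(String text, IntPredicate isTokenChar) {
        List<String> tokens = new ArrayList<>();
        StringBuilder current = new StringBuilder();
        for (int i = 0; i < text.length(); i++) {
            char c = text.charAt(i);
            if (isTokenChar.test(c)) {
                current.append(c);               // extend the current token
            } else if (current.length() > 0) {
                tokens.add(current.toString());  // boundary char ends the token
                current.setLength(0);
            }
        }
        if (current.length() > 0) tokens.add(current.toString());
        return tokens;
    }
}
```

Passing `Character::isLetter` as the predicate reproduces LetterTokenizer's behavior; overriding the predicate is all a CharTokenizer subclass needs to do.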
        <member name="M:Lucene.Net.Analysis.CharTokenizer.Normalize(System.Char)">
            <summary>Called on each token character to normalize it before it is added to the
            token.  The default implementation does nothing.  Subclasses may use this
            to, e.g., lowercase tokens. 
            </summary>
        </member>
        <member name="M:Lucene.Net.Analysis.CharTokenizer.Next(Lucene.Net.Analysis.Token)">
            <deprecated> Will be removed in Lucene 3.0. This method is final, as it should
            not be overridden. Delegates to the backwards compatibility layer. 
            </deprecated>
        </member>
        <member name="M:Lucene.Net.Analysis.CharTokenizer.Next">
            <deprecated> Will be removed in Lucene 3.0. This method is final, as it should
            not be overridden. Delegates to the backwards compatibility layer. 
            </deprecated>
        </member>
        <member name="T:Lucene.Net.Analysis.ISOLatin1AccentFilter">
            <summary> A filter that replaces accented characters in the ISO Latin 1 character set 
            (ISO-8859-1) by their unaccented equivalent. The case will not be altered.
            <p/>
            For instance, 'à' will be replaced by 'a'.
            <p/>
            
            </summary>
            <deprecated> in favor of <see cref="T:Lucene.Net.Analysis.ASCIIFoldingFilter"/> which covers a superset 
            of Latin 1. This class will be removed in Lucene 3.0.
            </deprecated>
        </member>
        <member name="M:Lucene.Net.Analysis.ISOLatin1AccentFilter.Next(Lucene.Net.Analysis.Token)">
            <deprecated> Will be removed in Lucene 3.0. This method is final, as it should
            not be overridden. Delegates to the backwards compatibility layer. 
            </deprecated>
        </member>
        <member name="M:Lucene.Net.Analysis.ISOLatin1AccentFilter.Next">
            <deprecated> Will be removed in Lucene 3.0. This method is final, as it should
            not be overridden. Delegates to the backwards compatibility layer. 
            </deprecated>
        </member>
        <member name="M:Lucene.Net.Analysis.ISOLatin1AccentFilter.RemoveAccents(System.Char[],System.Int32)">
            <summary> Replaces accented characters in a String with their unaccented equivalents.</summary>
        </member>
        <member name="T:Lucene.Net.Analysis.KeywordAnalyzer">
            <summary> "Tokenizes" the entire stream as a single token. This is useful
            for data like zip codes, ids, and some product names.
            </summary>
        </member>
        <member name="T:Lucene.Net.Analysis.KeywordTokenizer">
            <summary> Emits the entire input as a single token.</summary>
        </member>
        <member name="M:Lucene.Net.Analysis.KeywordTokenizer.Next(Lucene.Net.Analysis.Token)">
            <deprecated> Will be removed in Lucene 3.0. This method is final, as it should
            not be overridden. Delegates to the backwards compatibility layer. 
            </deprecated>
        </member>
        <member name="M:Lucene.Net.Analysis.KeywordTokenizer.Next">
            <deprecated> Will be removed in Lucene 3.0. This method is final, as it should
            not be overridden. Delegates to the backwards compatibility layer. 
            </deprecated>
        </member>
        <member name="T:Lucene.Net.Analysis.LengthFilter">
            <summary> Removes words that are too long or too short from the stream.
            
            
            </summary>
            <version>  $Id: LengthFilter.java 807201 2009-08-24 13:22:34Z markrmiller $
            </version>
        </member>
        <member name="M:Lucene.Net.Analysis.LengthFilter.#ctor(Lucene.Net.Analysis.TokenStream,System.Int32,System.Int32)">
            <summary> Build a filter that removes words that are too long or too
            short from the text.
            </summary>
        </member>
        <member name="M:Lucene.Net.Analysis.LengthFilter.IncrementToken">
            <summary> Returns the next input Token whose term length is within the configured minimum and maximum.</summary>
        </member>
        <member name="T:Lucene.Net.Analysis.LetterTokenizer">
            <summary>A LetterTokenizer is a tokenizer that divides text at non-letters.  That's
            to say, it defines tokens as maximal strings of adjacent letters, as defined
            by the java.lang.Character.isLetter() predicate.
            Note: this does a decent job for most European languages, but does a terrible
            job for some Asian languages, where words are not separated by spaces. 
            </summary>
        </member>
        <member name="M:Lucene.Net.Analysis.LetterTokenizer.#ctor(System.IO.TextReader)">
            <summary>Construct a new LetterTokenizer. </summary>
        </member>
        <member name="M:Lucene.Net.Analysis.LetterTokenizer.#ctor(Lucene.Net.Util.AttributeSource,System.IO.TextReader)">
            <summary>Construct a new LetterTokenizer using a given <see cref="T:Lucene.Net.Util.AttributeSource"/>. </summary>
        </member>
        <member name="M:Lucene.Net.Analysis.LetterTokenizer.#ctor(Lucene.Net.Util.AttributeSource.AttributeFactory,System.IO.TextReader)">
            <summary>Construct a new LetterTokenizer using a given <see cref="T:Lucene.Net.Util.AttributeSource.AttributeFactory"/>. </summary>
        </member>
        <member name="M:Lucene.Net.Analysis.LetterTokenizer.IsTokenChar(System.Char)">
            <summary>Collects only characters which satisfy
            <see cref="M:System.Char.IsLetter(System.Char)"/>.
            </summary>
        </member>
        <member name="T:Lucene.Net.Analysis.LowerCaseFilter">
            <summary> Normalizes token text to lower case.
            
            </summary>
            <version>  $Id: LowerCaseFilter.java 797665 2009-07-24 21:45:48Z buschmi $
            </version>
        </member>
        <member name="T:Lucene.Net.Analysis.LowerCaseTokenizer">
            <summary> LowerCaseTokenizer performs the function of LetterTokenizer
            and LowerCaseFilter together.  It divides text at non-letters and converts
            them to lower case.  While it is functionally equivalent to the combination
            of LetterTokenizer and LowerCaseFilter, there is a performance advantage
            to doing the two tasks at once, hence this (redundant) implementation.
            <p/>
            Note: this does a decent job for most European languages, but does a terrible
            job for some Asian languages, where words are not separated by spaces.
            </summary>
        </member>
        <member name="M:Lucene.Net.Analysis.LowerCaseTokenizer.#ctor(System.IO.TextReader)">
            <summary>Construct a new LowerCaseTokenizer. </summary>
        </member>
        <member name="M:Lucene.Net.Analysis.LowerCaseTokenizer.#ctor(Lucene.Net.Util.AttributeSource,System.IO.TextReader)">
            <summary>Construct a new LowerCaseTokenizer using a given <see cref="T:Lucene.Net.Util.AttributeSource"/>. </summary>
        </member>
        <member name="M:Lucene.Net.Analysis.LowerCaseTokenizer.#ctor(Lucene.Net.Util.AttributeSource.AttributeFactory,System.IO.TextReader)">
            <summary>Construct a new LowerCaseTokenizer using a given <see cref="T:Lucene.Net.Util.AttributeSource.AttributeFactory"/>. </summary>
        </member>
        <member name="M:Lucene.Net.Analysis.LowerCaseTokenizer.Normalize(System.Char)">
            <summary>Converts a char to lower case via
            <see cref="M:System.Char.ToLower(System.Char)"/>.
            </summary>
        </member>
        <member name="T:Lucene.Net.Analysis.MappingCharFilter">
            <summary> Simplistic <see cref="T:Lucene.Net.Analysis.CharFilter"/> that applies the mappings
            contained in a <see cref="T:Lucene.Net.Analysis.NormalizeCharMap"/> to the character
            stream, correcting the resulting changes to the
            offsets.
            </summary>
        </member>
        <member name="M:Lucene.Net.Analysis.MappingCharFilter.#ctor(Lucene.Net.Analysis.NormalizeCharMap,Lucene.Net.Analysis.CharStream)">
            Default constructor that takes a <see cref="T:Lucene.Net.Analysis.CharStream"/>.
        </member>
        <member name="M:Lucene.Net.Analysis.MappingCharFilter.#ctor(Lucene.Net.Analysis.NormalizeCharMap,System.IO.TextReader)">
            Easy-use constructor that takes a <see cref="T:System.IO.TextReader"/>.
        </member>
        <member name="T:Lucene.Net.Analysis.NormalizeCharMap">
            <summary> Holds a map of String input to String output, to be used
            with <see cref="T:Lucene.Net.Analysis.MappingCharFilter"/>.
            </summary>
        </member>
        <member name="M:Lucene.Net.Analysis.NormalizeCharMap.Add(System.String,System.String)">
            <summary>Records a replacement to be applied to the input
            stream.  Whenever <c>singleMatch</c> occurs in
            the input, it will be replaced with
            <c>replacement</c>.
            
            </summary>
            <param name="singleMatch">input String to be replaced
            </param>
            <param name="replacement">output String
            </param>
        </member>
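The recorded mappings behave like a longest-match string rewrite. A hypothetical miniature version (illustrative only, not the real NormalizeCharMap, which uses a trie) makes the semantics concrete:

```java
// Sketch of the NormalizeCharMap idea: whenever a recorded input string
// occurs in the stream, it is replaced with the recorded output string,
// preferring the longest match at each position.
import java.util.LinkedHashMap;
import java.util.Map;

class MiniCharMap {
    private final Map<String, String> mappings = new LinkedHashMap<>();

    // records a replacement: every occurrence of singleMatch becomes replacement
    void add(String singleMatch, String replacement) {
        mappings.put(singleMatch, replacement);
    }

    // applies the mappings left to right across the input
    String apply(String input) {
        StringBuilder out = new StringBuilder();
        int i = 0;
        while (i < input.length()) {
            String best = null;
            for (String key : mappings.keySet()) {
                if (input.startsWith(key, i)
                        && (best == null || key.length() > best.length())) {
                    best = key;
                }
            }
            if (best != null) {
                out.append(mappings.get(best));
                i += best.length();
            } else {
                out.append(input.charAt(i++));
            }
        }
        return out.toString();
    }
}
```

Note that because replacements can change string length (e.g. 'ß' to "ss"), the real MappingCharFilter pairs this rewriting with the offset correction machinery described earlier.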
        <member name="T:Lucene.Net.Analysis.NumericTokenStream">
            <summary> <b>Expert:</b> This class provides a <see cref="T:Lucene.Net.Analysis.TokenStream"/>
            for indexing numeric values that can be used by <see cref="T:Lucene.Net.Search.NumericRangeQuery"/>
            or <see cref="T:Lucene.Net.Search.NumericRangeFilter"/>.
            
            <p/>Note that for simple usage, <see cref="T:Lucene.Net.Documents.NumericField"/> is
            recommended.  <see cref="T:Lucene.Net.Documents.NumericField"/> disables norms and
            term freqs, as they are not usually needed during
            searching.  If you need to change these settings, you
            should use this class.
            
            <p/>See <see cref="T:Lucene.Net.Documents.NumericField"/> for capabilities of fields
            indexed numerically.<p/>
            
            <p/>Here's an example usage, for an <c>int</c> field:
            
            <code>
             Field field = new Field(name, new NumericTokenStream(precisionStep).SetIntValue(value));
             field.SetOmitNorms(true);
             field.SetOmitTermFreqAndPositions(true);
             document.Add(field);
            </code>
            
            <p/>For optimal performance, re-use the TokenStream and Field instance
            for more than one document:
            
            <code>
             NumericTokenStream stream = new NumericTokenStream(precisionStep);
             Field field = new Field(name, stream);
             field.SetOmitNorms(true);
             field.SetOmitTermFreqAndPositions(true);
             Document document = new Document();
             document.Add(field);
            
             for (/* all documents */) {
               stream.SetIntValue(value);
               writer.AddDocument(document);
             }
            </code>
            
            <p/>This stream is not intended to be used in analyzers;
            it's more for iterating the different precisions during
            indexing a specific numeric value.<p/>
            
            <p/><b>NOTE</b>: as token streams are only consumed once
            the document is added to the index, if you index more
            than one numeric field, use a separate <c>NumericTokenStream</c>
            instance for each.<p/>
            
            <p/>See <see cref="T:Lucene.Net.Search.NumericRangeQuery"/> for more details on the
            <a href="../search/NumericRangeQuery.html#precisionStepDesc"><c>precisionStep</c></a>
            parameter as well as how numeric fields work under the hood.<p/>
            
            <p/><font color="red"><b>NOTE:</b> This API is experimental and
            might change in incompatible ways in the next release.</font>
              Since 2.9
            </summary>
        </member>
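            The precision-step trie terms mentioned above can be illustrated with a small conceptual sketch (this models the idea, not the actual term encoding): for a 32-bit value and a given precisionStep, one full-precision term is produced at shift 0, plus lower-precision terms with the low "shift" bits dropped.

            ```python
            # Conceptual sketch of the trie terms a NumericTokenStream emits for a
            # 32-bit int: one full-precision term at shift 0, plus lower-precision
            # terms where the low "shift" bits are dropped. NumericRangeQuery can
            # then cover a range with only a few of these coarse terms.
            def precision_terms(value, precision_step, bits=32):
                terms = []
                for shift in range(0, bits, precision_step):
                    # Drop the low bits; pair with the shift so terms produced at
                    # different precisions never collide with each other.
                    terms.append((shift, value >> shift))
                return terms
            ```

            A smaller precisionStep yields more terms per value (larger index) but lets range queries match with fewer terms.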
        <member name="F:Lucene.Net.Analysis.NumericTokenStream.TOKEN_TYPE_FULL_PREC">
            <summary>The full precision token gets this token type assigned. </summary>
        </member>
        <member name="F:Lucene.Net.Analysis.NumericTokenStream.TOKEN_TYPE_LOWER_PREC">
            <summary>The lower precision tokens get this token type assigned. </summary>
        </member>
        <member name="M:Lucene.Net.Analysis.NumericTokenStream.#ctor">
            <summary> Creates a token stream for numeric values using the default <c>precisionStep</c>
            <see cref="F:Lucene.Net.Util.NumericUtils.PRECISION_STEP_DEFAULT"/> (4). The stream is not yet initialized;
            before use, set a value with one of the Set<em>???</em>Value() methods.
            </summary>
        </member>
        <member name="M:Lucene.Net.Analysis.NumericTokenStream.#ctor(System.Int32)">
            <summary> Creates a token stream for numeric values with the specified
            <c>precisionStep</c>. The stream is not yet initialized;
            before use, set a value with one of the Set<em>???</em>Value() methods.
            </summary>
        </member>
        <member name="M:Lucene.Net.Analysis.NumericTokenStream.#ctor(Lucene.Net.Util.AttributeSource,System.Int32)">
            <summary> Expert: Creates a token stream for numeric values with the specified
            <c>precisionStep</c> using the given <see cref="T:Lucene.Net.Util.AttributeSource"/>.
            The stream is not yet initialized; before use, set a value
            with one of the Set<em>???</em>Value() methods.
            </summary>
        </member>
        <member name="M:Lucene.Net.Analysis.NumericTokenStream.#ctor(Lucene.Net.Util.AttributeSource.AttributeFactory,System.Int32)">
            <summary> Expert: Creates a token stream for numeric values with the specified
            <c>precisionStep</c> using the given
            <see cref="T:Lucene.Net.Util.AttributeSource.AttributeFactory"/>.
            The stream is not yet initialized; before use, set a value
            with one of the Set<em>???</em>Value() methods.
            </summary>
        </member>
        <member name="M:Lucene.Net.Analysis.NumericTokenStream.SetLongValue(System.Int64)">
            <summary> Initializes the token stream with the supplied <c>long</c> value.</summary>
            <param name="value_Renamed">the value for which this TokenStream should enumerate tokens
            </param>
            <returns> this instance, so you can use it as follows:
            <c>new Field(name, new NumericTokenStream(precisionStep).SetLongValue(value))</c>
            </returns>
        </member>
        <member name="M:Lucene.Net.Analysis.NumericTokenStream.SetIntValue(System.Int32)">
            <summary> Initializes the token stream with the supplied <c>int</c> value.</summary>
            <param name="value_Renamed">the value for which this TokenStream should enumerate tokens
            </param>
            <returns> this instance, so you can use it as follows:
            <c>new Field(name, new NumericTokenStream(precisionStep).SetIntValue(value))</c>
            </returns>
        </member>
        <member name="M:Lucene.Net.Analysis.NumericTokenStream.SetDoubleValue(System.Double)">
            <summary> Initializes the token stream with the supplied <c>double</c> value.</summary>
            <param name="value_Renamed">the value for which this TokenStream should enumerate tokens
            </param>
            <returns> this instance, so you can use it as follows:
            <c>new Field(name, new NumericTokenStream(precisionStep).SetDoubleValue(value))</c>
            </returns>
        </member>
        <member name="M:Lucene.Net.Analysis.NumericTokenStream.SetFloatValue(System.Single)">
            <summary> Initializes the token stream with the supplied <c>float</c> value.</summary>
            <param name="value_Renamed">the value for which this TokenStream should enumerate tokens
            </param>
            <returns> this instance, so you can use it as follows:
            <c>new Field(name, new NumericTokenStream(precisionStep).SetFloatValue(value))</c>
            </returns>
        </member>
        <member name="T:Lucene.Net.Analysis.PerFieldAnalyzerWrapper">
            <summary> This analyzer is used to facilitate scenarios where different
            fields require different analysis techniques.  Use <see cref="M:Lucene.Net.Analysis.PerFieldAnalyzerWrapper.AddAnalyzer(System.String,Lucene.Net.Analysis.Analyzer)"/>
            to add a non-default analyzer on a field name basis.
            
            <p/>Example usage:
            
            <code>
            PerFieldAnalyzerWrapper aWrapper =
                new PerFieldAnalyzerWrapper(new StandardAnalyzer());
            aWrapper.AddAnalyzer("firstname", new KeywordAnalyzer());
            aWrapper.AddAnalyzer("lastname", new KeywordAnalyzer());
            </code>
            
            <p/>In this example, StandardAnalyzer will be used for all fields except "firstname"
            and "lastname", for which KeywordAnalyzer will be used.
            
            <p/>A PerFieldAnalyzerWrapper can be used like any other analyzer, for both indexing
            and query parsing.
            </summary>
        </member>
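            The per-field dispatch described above is simple to model. Here is a minimal Python sketch (stand-in callables, not real Lucene.Net types): field names are looked up in a map of overrides, falling back to the default analyzer.

            ```python
            # Minimal model of PerFieldAnalyzerWrapper's dispatch: look up the
            # field name in a map of overrides, fall back to the default analyzer.
            class PerFieldWrapper:
                def __init__(self, default_analyzer, field_analyzers=None):
                    self.default_analyzer = default_analyzer
                    self.field_analyzers = dict(field_analyzers or {})

                def add_analyzer(self, field_name, analyzer):
                    # Register a non-default analyzer for one field.
                    self.field_analyzers[field_name] = analyzer

                def token_stream(self, field_name, text):
                    analyzer = self.field_analyzers.get(field_name, self.default_analyzer)
                    return analyzer(text)
            ```

            Because the same dispatch runs at both index and query time, indexing and query parsing stay consistent per field.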
        <member name="M:Lucene.Net.Analysis.PerFieldAnalyzerWrapper.#ctor(Lucene.Net.Analysis.Analyzer)">
            <summary> Constructs with default analyzer.
            
            </summary>
            <param name="defaultAnalyzer">Any fields not specifically
            defined to use a different analyzer will use the one provided here.
            </param>
        </member>
        <member name="M:Lucene.Net.Analysis.PerFieldAnalyzerWrapper.#ctor(Lucene.Net.Analysis.Analyzer,System.Collections.IDictionary)">
            <summary> Constructs with default analyzer and a map of analyzers to use for 
            specific fields.
            
            </summary>
            <param name="defaultAnalyzer">Any fields not specifically
            defined to use a different analyzer will use the one provided here.
            </param>
            <param name="fieldAnalyzers">a Map (String field name to the Analyzer) to be 
            used for those fields 
            </param>
        </member>
        <member name="M:Lucene.Net.Analysis.PerFieldAnalyzerWrapper.AddAnalyzer(System.String,Lucene.Net.Analysis.Analyzer)">
            <summary> Defines an analyzer to use for the specified field.
            
            </summary>
            <param name="fieldName">field name requiring a non-default analyzer
            </param>
            <param name="analyzer">non-default analyzer to use for field
            </param>
        </member>
        <member name="M:Lucene.Net.Analysis.PerFieldAnalyzerWrapper.GetPositionIncrementGap(System.String)">
            <summary>Return the positionIncrementGap from the analyzer assigned to fieldName </summary>
        </member>
        <member name="M:Lucene.Net.Analysis.PerFieldAnalyzerWrapper.GetOffsetGap(Lucene.Net.Documents.Fieldable)">
            <summary> Return the offsetGap from the analyzer assigned to field </summary>
        </member>
        <member name="T:Lucene.Net.Analysis.PorterStemFilter">
            <summary>Transforms the token stream as per the Porter stemming algorithm.
            Note: the input to the stemming filter must already be in lower case,
            so you will need to use LowerCaseFilter or LowerCaseTokenizer farther
            down the Tokenizer chain in order for this to work properly!
            <p/>
            To use this filter with other analyzers, you'll want to write an
            Analyzer class that sets up the TokenStream chain as you want it.
            To use this with LowerCaseTokenizer, for example, you'd write an
            analyzer like this:
            <p/>
            <code>
            class MyAnalyzer : Analyzer {
                public override TokenStream TokenStream(string fieldName, TextReader reader) {
                    return new PorterStemFilter(new LowerCaseTokenizer(reader));
                }
            }
            </code>
            </summary>
        </member>
        <member name="T:Lucene.Net.Analysis.PorterStemmer">
            <summary> 
            Stemmer, implementing the Porter Stemming Algorithm
            
            The Stemmer class transforms a word into its root form.  The input
            word can be provided a character at a time (by calling add()), or all at
            once by calling one of the various stem(something) methods.
            </summary>
        </member>
        <member name="M:Lucene.Net.Analysis.PorterStemmer.Reset">
            <summary> reset() resets the stemmer so it can stem another word.  If you invoke
            the stemmer by calling add(char) and then stem(), you must call reset()
            before starting another word.
            </summary>
        </member>
        <member name="M:Lucene.Net.Analysis.PorterStemmer.Add(System.Char)">
            <summary> Add a character to the word being stemmed.  When you are finished
            adding characters, you can call stem(void) to process the word.
            </summary>
        </member>
        <member name="M:Lucene.Net.Analysis.PorterStemmer.ToString">
            <summary> After a word has been stemmed, it can be retrieved by toString(),
            or a reference to the internal buffer can be retrieved by getResultBuffer
            and getResultLength (which is generally more efficient).
            </summary>
        </member>
        <member name="M:Lucene.Net.Analysis.PorterStemmer.GetResultLength">
            <summary> Returns the length of the word resulting from the stemming process.</summary>
        </member>
        <member name="M:Lucene.Net.Analysis.PorterStemmer.GetResultBuffer">
            <summary> Returns a reference to a character buffer containing the results of
            the stemming process.  You also need to consult getResultLength()
            to determine the length of the result.
            </summary>
        </member>
        <member name="M:Lucene.Net.Analysis.PorterStemmer.Stem(System.String)">
            <summary> Stem a word provided as a String.  Returns the result as a String.</summary>
        </member>
        <member name="M:Lucene.Net.Analysis.PorterStemmer.Stem(System.Char[])">
            <summary>Stem a word contained in a char[].  Returns true if the stemming process
            resulted in a word different from the input.  You can retrieve the
            result with getResultLength()/getResultBuffer() or toString().
            </summary>
        </member>
        <member name="M:Lucene.Net.Analysis.PorterStemmer.Stem(System.Char[],System.Int32,System.Int32)">
            <summary>Stem a word contained in a portion of a char[] array.  Returns
            true if the stemming process resulted in a word different from
            the input.  You can retrieve the result with
            getResultLength()/getResultBuffer() or toString().
            </summary>
        </member>
        <member name="M:Lucene.Net.Analysis.PorterStemmer.Stem(System.Char[],System.Int32)">
            <summary>Stem a word contained in a leading portion of a char[] array.
            Returns true if the stemming process resulted in a word different
            from the input.  You can retrieve the result with
            getResultLength()/getResultBuffer() or toString().
            </summary>
        </member>
        <member name="M:Lucene.Net.Analysis.PorterStemmer.Stem">
            <summary>Stem the word placed into the Stemmer buffer through calls to add().
            Returns true if the stemming process resulted in a word different
            from the input.  You can retrieve the result with
            getResultLength()/getResultBuffer() or toString().
            </summary>
        </member>
        <member name="M:Lucene.Net.Analysis.PorterStemmer.Main(System.String[])">
            <summary>Test program for demonstrating the Stemmer.  It reads a file and
            stems each word, writing the result to standard out.
            Usage: Stemmer file-name
            </summary>
        </member>
        <member name="T:Lucene.Net.Analysis.SimpleAnalyzer">
            <summary>An <see cref="T:Lucene.Net.Analysis.Analyzer"/> that filters <see cref="T:Lucene.Net.Analysis.LetterTokenizer"/> 
            with <see cref="T:Lucene.Net.Analysis.LowerCaseFilter"/> 
            </summary>
        </member>
        <member name="T:Lucene.Net.Analysis.SinkTokenizer">
            <summary> A SinkTokenizer can be used to cache Tokens for use in an Analyzer.
            <p/>
            WARNING: <see cref="T:Lucene.Net.Analysis.TeeTokenFilter"/> and <see cref="T:Lucene.Net.Analysis.SinkTokenizer"/> only work with the old TokenStream API.
            If you switch to the new API, you need to use <see cref="T:Lucene.Net.Analysis.TeeSinkTokenFilter"/> instead, which offers 
            the same functionality.
            </summary>
            <seealso cref="T:Lucene.Net.Analysis.TeeTokenFilter">
            </seealso>
            <deprecated> Use <see cref="T:Lucene.Net.Analysis.TeeSinkTokenFilter"/> instead
            
            
            </deprecated>
        </member>
        <member name="M:Lucene.Net.Analysis.SinkTokenizer.GetTokens">
            <summary> Get the tokens in the internal List.
            <p/>
            WARNING: Adding tokens to this list requires the <see cref="M:Lucene.Net.Analysis.SinkTokenizer.Reset"/> method to be called in order for them
            to be made available.  Also, this Tokenizer does nothing to protect against <see cref="T:System.InvalidOperationException"/>s
            in the case of adds happening while <see cref="M:Lucene.Net.Analysis.SinkTokenizer.Next(Lucene.Net.Analysis.Token)"/> is being called.
            <p/>
            WARNING: Since this SinkTokenizer can be reset and the cached tokens made available again, do not modify them. Modify clones instead.
            
            </summary>
            <returns> A List of <see cref="T:Lucene.Net.Analysis.Token"/>s
            </returns>
        </member>
        <member name="M:Lucene.Net.Analysis.SinkTokenizer.Next(Lucene.Net.Analysis.Token)">
            <summary> Returns the next token out of the list of cached tokens</summary>
            <returns> The next <see cref="T:Lucene.Net.Analysis.Token"/> in the Sink.
            </returns>
            <throws>  IOException </throws>
        </member>
        <member name="M:Lucene.Net.Analysis.SinkTokenizer.Add(Lucene.Net.Analysis.Token)">
            <summary> Override this method to cache only certain tokens, or new tokens based
            on the old tokens.
            
            </summary>
            <param name="t">The <see cref="T:Lucene.Net.Analysis.Token"/> to add to the sink
            </param>
        </member>
        <member name="M:Lucene.Net.Analysis.SinkTokenizer.Reset">
            <summary> Resets the internal data structures so that iteration starts at the front of the list of tokens.  Should be called
            if tokens were added to the list after an invocation of <see cref="M:Lucene.Net.Analysis.SinkTokenizer.Next(Lucene.Net.Analysis.Token)"/>
            </summary>
            <throws>  IOException </throws>
        </member>
        <member name="T:Lucene.Net.Analysis.Standard.StandardAnalyzer">
            <summary> Filters <see cref="T:Lucene.Net.Analysis.Standard.StandardTokenizer"/> with <see cref="T:Lucene.Net.Analysis.Standard.StandardFilter"/>,
            <see cref="T:Lucene.Net.Analysis.LowerCaseFilter"/> and <see cref="T:Lucene.Net.Analysis.StopFilter"/>, using a list of English stop
            words.
            
            <a name="version"/>
            <p/>
            You must specify the required <see cref="T:Lucene.Net.Util.Version"/> compatibility when creating
            StandardAnalyzer:
            <list type="bullet">
            <item>As of 2.9, StopFilter preserves position increments</item>
            <item>As of 2.4, Tokens incorrectly identified as acronyms are corrected (see
            <a href="https://issues.apache.org/jira/browse/LUCENE-1068">LUCENE-1068</a>)</item>
            </list>
            
            </summary>
            <version>  $Id: StandardAnalyzer.java 829134 2009-10-23 17:18:53Z mikemccand $
            </version>
        </member>
        <member name="F:Lucene.Net.Analysis.Standard.StandardAnalyzer.DEFAULT_MAX_TOKEN_LENGTH">
            <summary>Default maximum allowed token length </summary>
        </member>
        <member name="F:Lucene.Net.Analysis.Standard.StandardAnalyzer.replaceInvalidAcronym">
            <summary> Specifies whether deprecated acronyms should be replaced with HOST type.
            This is false by default to support backward compatibility.
            
            </summary>
            <deprecated> this should be removed in the next release (3.0).
            
            See https://issues.apache.org/jira/browse/LUCENE-1068
            </deprecated>
        </member>
        <member name="M:Lucene.Net.Analysis.Standard.StandardAnalyzer.GetDefaultReplaceInvalidAcronym">
            <summary> </summary>
            <returns> true if new instances of StandardTokenizer will
            replace mischaracterized acronyms
            
            See https://issues.apache.org/jira/browse/LUCENE-1068
            </returns>
            <deprecated> This will be removed (hardwired to true) in 3.0
            </deprecated>
        </member>
        <member name="M:Lucene.Net.Analysis.Standard.StandardAnalyzer.SetDefaultReplaceInvalidAcronym(System.Boolean)">
            <summary> </summary>
            <param name="replaceInvalidAcronym">Set to true to have new
            instances of StandardTokenizer replace mischaracterized
            acronyms by default.  Set to false to preserve the
            previous (before 2.4) buggy behavior.  Alternatively,
            set the system property
            Lucene.Net.Analysis.Standard.StandardAnalyzer.replaceInvalidAcronym
            to false.
            
            See https://issues.apache.org/jira/browse/LUCENE-1068
            </param>
            <deprecated> This will be removed (hardwired to true) in 3.0
            </deprecated>
        </member>
        <member name="F:Lucene.Net.Analysis.Standard.StandardAnalyzer.STOP_WORDS">
            <summary>An array containing some common English words that are usually not
            useful for searching. 
            </summary>
            <deprecated> Use <see cref="F:Lucene.Net.Analysis.Standard.StandardAnalyzer.STOP_WORDS_SET"/> instead 
            </deprecated>
        </member>
        <member name="F:Lucene.Net.Analysis.Standard.StandardAnalyzer.STOP_WORDS_SET">
            <summary>An unmodifiable set containing some common English words that are usually not
            useful for searching. 
            </summary>
        </member>
        <member name="M:Lucene.Net.Analysis.Standard.StandardAnalyzer.#ctor">
            <summary>Builds an analyzer with the default stop words 
            (<see cref="F:Lucene.Net.Analysis.Standard.StandardAnalyzer.STOP_WORDS_SET"/>).
            </summary>
            <deprecated> Use <see cref="M:Lucene.Net.Analysis.Standard.StandardAnalyzer.#ctor(Lucene.Net.Util.Version)"/> instead. 
            </deprecated>
        </member>
        <member name="M:Lucene.Net.Analysis.Standard.StandardAnalyzer.#ctor(Lucene.Net.Util.Version)">
             <summary>Builds an analyzer with the default stop words (<see cref="F:Lucene.Net.Analysis.Standard.StandardAnalyzer.STOP_WORDS_SET"/>).
             </summary>
             <param name="matchVersion">Lucene version to match; see <see cref="T:Lucene.Net.Util.Version">above</see>
             </param>
        </member>
        <member name="M:Lucene.Net.Analysis.Standard.StandardAnalyzer.#ctor(System.Collections.Hashtable)">
            <summary>Builds an analyzer with the given stop words.</summary>
            <deprecated> Use <see cref="M:Lucene.Net.Analysis.Standard.StandardAnalyzer.#ctor(Lucene.Net.Util.Version,System.Collections.Hashtable)"/>
            instead 
            </deprecated>
        </member>
        <member name="M:Lucene.Net.Analysis.Standard.StandardAnalyzer.#ctor(Lucene.Net.Util.Version,System.Collections.Hashtable)">
             <summary>Builds an analyzer with the given stop words.</summary>
             <param name="matchVersion">Lucene version to match; see <see cref="T:Lucene.Net.Util.Version">above</see>
             </param>
             <param name="stopWords">stop words 
             </param>
        </member>
        <member name="M:Lucene.Net.Analysis.Standard.StandardAnalyzer.#ctor(System.String[])">
            <summary>Builds an analyzer with the given stop words.</summary>
            <deprecated> Use <see cref="M:Lucene.Net.Analysis.Standard.StandardAnalyzer.#ctor(Lucene.Net.Util.Version,System.Collections.Hashtable)"/> instead 
            </deprecated>
        </member>
        <member name="M:Lucene.Net.Analysis.Standard.StandardAnalyzer.#ctor(System.IO.FileInfo)">
            <summary>Builds an analyzer with the stop words from the given file.</summary>
            <seealso cref="M:Lucene.Net.Analysis.WordlistLoader.GetWordSet(System.IO.FileInfo)">
            </seealso>
            <deprecated> Use <see cref="M:Lucene.Net.Analysis.Standard.StandardAnalyzer.#ctor(Lucene.Net.Util.Version,System.IO.FileInfo)"/>
            instead
            </deprecated>
        </member>
        <member name="M:Lucene.Net.Analysis.Standard.StandardAnalyzer.#ctor(Lucene.Net.Util.Version,System.IO.FileInfo)">
             <summary>Builds an analyzer with the stop words from the given file.</summary>
             <seealso cref="M:Lucene.Net.Analysis.WordlistLoader.GetWordSet(System.IO.FileInfo)">
             </seealso>
             <param name="matchVersion">Lucene version to match; see <see cref="T:Lucene.Net.Util.Version">above</see>
             </param>
             <param name="stopwords">File to read stop words from 
             </param>
        </member>
        <member name="M:Lucene.Net.Analysis.Standard.StandardAnalyzer.#ctor(System.IO.TextReader)">
            <summary>Builds an analyzer with the stop words from the given reader.</summary>
            <seealso cref="M:Lucene.Net.Analysis.WordlistLoader.GetWordSet(System.IO.TextReader)">
            </seealso>
            <deprecated> Use <see cref="M:Lucene.Net.Analysis.Standard.StandardAnalyzer.#ctor(Lucene.Net.Util.Version,System.IO.TextReader)"/>
            instead
            </deprecated>
        </member>
        <member name="M:Lucene.Net.Analysis.Standard.StandardAnalyzer.#ctor(Lucene.Net.Util.Version,System.IO.TextReader)">
             <summary>Builds an analyzer with the stop words from the given reader.</summary>
             <seealso cref="M:Lucene.Net.Analysis.WordlistLoader.GetWordSet(System.IO.TextReader)">
             </seealso>
             <param name="matchVersion">Lucene version to match; see <see cref="T:Lucene.Net.Util.Version">above</see>
             </param>
             <param name="stopwords">Reader to read stop words from 
             </param>
        </member>
        <member name="M:Lucene.Net.Analysis.Standard.StandardAnalyzer.#ctor(System.Boolean)">
            <summary> </summary>
            <param name="replaceInvalidAcronym">Set to true if this analyzer should replace mischaracterized acronyms in the StandardTokenizer
            
            See https://issues.apache.org/jira/browse/LUCENE-1068
            
            </param>
            <deprecated> Remove in 3.X and make true the only valid value
            </deprecated>
        </member>
        <member name="M:Lucene.Net.Analysis.Standard.StandardAnalyzer.#ctor(System.IO.TextReader,System.Boolean)">
            <param name="stopwords">The stopwords to use
            </param>
            <param name="replaceInvalidAcronym">Set to true if this analyzer should replace mischaracterized acronyms in the StandardTokenizer
            
            See https://issues.apache.org/jira/browse/LUCENE-1068
            
            </param>
            <deprecated> Remove in 3.X and make true the only valid value
            </deprecated>
        </member>
        <member name="M:Lucene.Net.Analysis.Standard.StandardAnalyzer.#ctor(System.IO.FileInfo,System.Boolean)">
            <param name="stopwords">The stopwords to use
            </param>
            <param name="replaceInvalidAcronym">Set to true if this analyzer should replace mischaracterized acronyms in the StandardTokenizer
            
            See https://issues.apache.org/jira/browse/LUCENE-1068
            
            </param>
            <deprecated> Remove in 3.X and make true the only valid value
            </deprecated>
        </member>
        <member name="M:Lucene.Net.Analysis.Standard.StandardAnalyzer.#ctor(System.String[],System.Boolean)">
            <summary> </summary>
            <param name="stopwords">The stopwords to use
            </param>
            <param name="replaceInvalidAcronym">Set to true if this analyzer should replace mischaracterized acronyms in the StandardTokenizer
            
            See https://issues.apache.org/jira/browse/LUCENE-1068
            
            </param>
            <deprecated> Remove in 3.X and make true the only valid value
            </deprecated>
        </member>
        <member name="M:Lucene.Net.Analysis.Standard.StandardAnalyzer.#ctor(System.Collections.Hashtable,System.Boolean)">
            <param name="stopwords">The stopwords to use
            </param>
            <param name="replaceInvalidAcronym">Set to true if this analyzer should replace mischaracterized acronyms in the StandardTokenizer
            
            See https://issues.apache.org/jira/browse/LUCENE-1068
            
            </param>
            <deprecated> Remove in 3.X and make true the only valid value
            </deprecated>
        </member>
        <member name="M:Lucene.Net.Analysis.Standard.StandardAnalyzer.TokenStream(System.String,System.IO.TextReader)">
             <summary>Constructs a <see cref="T:Lucene.Net.Analysis.Standard.StandardTokenizer"/> filtered by a <see cref="T:Lucene.Net.Analysis.Standard.StandardFilter"/>,
             a <see cref="T:Lucene.Net.Analysis.LowerCaseFilter"/> and a <see cref="T:Lucene.Net.Analysis.StopFilter"/>.
             </summary>
        </member>
        <member name="M:Lucene.Net.Analysis.Standard.StandardAnalyzer.SetMaxTokenLength(System.Int32)">
            <summary> Set maximum allowed token length.  If a token is seen
            that exceeds this length then it is discarded.  This
            setting only takes effect the next time tokenStream or
            reusableTokenStream is called.
            </summary>
        </member>
        <member name="M:Lucene.Net.Analysis.Standard.StandardAnalyzer.GetMaxTokenLength">
            <seealso cref="M:Lucene.Net.Analysis.Standard.StandardAnalyzer.SetMaxTokenLength(System.Int32)">
            </seealso>
        </member>
        <member name="M:Lucene.Net.Analysis.Standard.StandardAnalyzer.ReusableTokenStream(System.String,System.IO.TextReader)">
            <deprecated> Use <see cref="M:Lucene.Net.Analysis.Standard.StandardAnalyzer.TokenStream(System.String,System.IO.TextReader)"/> instead 
            </deprecated>
        </member>
        <member name="M:Lucene.Net.Analysis.Standard.StandardAnalyzer.IsReplaceInvalidAcronym">
            <summary> </summary>
            <returns> true if this Analyzer is replacing mischaracterized acronyms in the StandardTokenizer
            
            See https://issues.apache.org/jira/browse/LUCENE-1068
            </returns>
            <deprecated> This will be removed (hardwired to true) in 3.0
            </deprecated>
        </member>
        <member name="M:Lucene.Net.Analysis.Standard.StandardAnalyzer.SetReplaceInvalidAcronym(System.Boolean)">
            <summary> </summary>
            <param name="replaceInvalidAcronym">Set to true if this Analyzer is replacing mischaracterized acronyms in the StandardTokenizer
            
            See https://issues.apache.org/jira/browse/LUCENE-1068
            </param>
            <deprecated> This will be removed (hardwired to true) in 3.0
            </deprecated>
        </member>
        <member name="T:Lucene.Net.Analysis.Standard.StandardFilter">
            <summary>Normalizes tokens extracted with <see cref="T:Lucene.Net.Analysis.Standard.StandardTokenizer"/>. </summary>
        </member>
        <member name="M:Lucene.Net.Analysis.Standard.StandardFilter.#ctor(Lucene.Net.Analysis.TokenStream)">
            <summary>Construct filtering <i>in</i>. </summary>
        </member>
        <member name="M:Lucene.Net.Analysis.Standard.StandardFilter.IncrementToken">
            <summary>Advances the stream to the next token, returning false at EOS.
            <p/>Removes <tt>'s</tt> from the end of words.
            <p/>Removes dots from acronyms.
            </summary>
        </member>
        <member name="T:Lucene.Net.Analysis.Standard.StandardTokenizer">
            <summary>A grammar-based tokenizer constructed with JFlex
            
            <p/> This should be a good tokenizer for most European-language documents:
            
            <list type="bullet">
            <item>Splits words at punctuation characters, removing punctuation. However, a 
            dot that's not followed by whitespace is considered part of a token.</item>
            <item>Splits words at hyphens, unless there's a number in the token, in which case
            the whole token is interpreted as a product number and is not split.</item>
            <item>Recognizes email addresses and internet hostnames as one token.</item>
            </list>
            
            <p/>Many applications have specific tokenizer needs.  If this tokenizer does
            not suit your application, please consider copying this source code
            directory to your project and maintaining your own grammar-based tokenizer.
            
            <a name="version"/>
            <p/>
            You must specify the required <see cref="T:Lucene.Net.Util.Version"/> compatibility when creating
            StandardAnalyzer:
            <list type="bullet">
            <item>As of 2.4, Tokens incorrectly identified as acronyms are corrected (see
            <a href="https://issues.apache.org/jira/browse/LUCENE-1068">LUCENE-1068</a>)</item>
            </list>
            </summary>
        </member>
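The tokenization rules listed above can be observed with a short driver. The following is a minimal sketch, assuming a Lucene.Net 2.9.x-era API (the `Version`-based constructor and `TermAttribute` type shown here belong to that release line; names differ in later versions):

```csharp
// Hypothetical driver (assumes Lucene.Net 2.9.x): tokenize a string with
// StandardTokenizer and print each term on its own line.
using System;
using System.IO;
using Lucene.Net.Analysis.Standard;
using Lucene.Net.Analysis.Tokenattributes;
using Version = Lucene.Net.Util.Version;

class StandardTokenizerDemo
{
    static void Main()
    {
        var reader = new StringReader("Visit www.abc.com or mail info@abc.com re: WD-40");
        var tokenizer = new StandardTokenizer(Version.LUCENE_29, reader);
        // TermAttribute exposes the text of the current token.
        var term = (TermAttribute)tokenizer.AddAttribute(typeof(TermAttribute));
        while (tokenizer.IncrementToken())
        {
            // Hostnames, e-mail addresses, and hyphenated product numbers
            // like WD-40 each arrive as a single token.
            Console.WriteLine(term.Term());
        }
        tokenizer.Close();
    }
}
```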
        <member name="F:Lucene.Net.Analysis.Standard.StandardTokenizer.ACRONYM_DEP">
            <deprecated> this solves a bug where HOSTs that end with '.' are identified
            as ACRONYMs. It is deprecated and will be removed in the next
            release.
            </deprecated>
        </member>
        <member name="F:Lucene.Net.Analysis.Standard.StandardTokenizer.scanner">
            <summary>A private instance of the JFlex-constructed scanner </summary>
        </member>
        <member name="F:Lucene.Net.Analysis.Standard.StandardTokenizer.TOKEN_TYPES">
            <summary>String token types that correspond to token type int constants </summary>
        </member>
        <member name="F:Lucene.Net.Analysis.Standard.StandardTokenizer.tokenImage">
            <deprecated> Please use <see cref="F:Lucene.Net.Analysis.Standard.StandardTokenizer.TOKEN_TYPES"/> instead 
            </deprecated>
        </member>
        <member name="F:Lucene.Net.Analysis.Standard.StandardTokenizer.replaceInvalidAcronym">
            <summary> Specifies whether deprecated acronyms should be replaced with HOST type.
            This is false by default to support backward compatibility.
            <p/>
            See http://issues.apache.org/jira/browse/LUCENE-1068
            
            </summary>
            <deprecated> this should be removed in the next release (3.0).
            </deprecated>
        </member>
        <member name="M:Lucene.Net.Analysis.Standard.StandardTokenizer.SetMaxTokenLength(System.Int32)">
            <summary>Set the max allowed token length.  Any token longer
            than this is skipped. 
            </summary>
        </member>
        <member name="M:Lucene.Net.Analysis.Standard.StandardTokenizer.GetMaxTokenLength">
            <seealso cref="M:Lucene.Net.Analysis.Standard.StandardTokenizer.SetMaxTokenLength(System.Int32)">
            </seealso>
        </member>
        <member name="M:Lucene.Net.Analysis.Standard.StandardTokenizer.#ctor(System.IO.TextReader)">
            <summary> Creates a new instance of the <see cref="T:Lucene.Net.Analysis.Standard.StandardTokenizer"/>. Attaches the
            <c>input</c> to a newly created JFlex scanner.
            </summary>
            <deprecated> Use <see cref="M:Lucene.Net.Analysis.Standard.StandardTokenizer.#ctor(Lucene.Net.Util.Version,System.IO.TextReader)"/> instead
            </deprecated>
        </member>
        <member name="M:Lucene.Net.Analysis.Standard.StandardTokenizer.#ctor(System.IO.TextReader,System.Boolean)">
            <summary> Creates a new instance of the <see cref="T:Lucene.Net.Analysis.Standard.StandardTokenizer"/>.  Attaches
            the <c>input</c> to the newly created JFlex scanner.
            
            </summary>
            <param name="input">The input reader
            </param>
            <param name="replaceInvalidAcronym">Set to true to replace mischaracterized acronyms with HOST.
            
            See http://issues.apache.org/jira/browse/LUCENE-1068
            </param>
            <deprecated> Use <see cref="M:Lucene.Net.Analysis.Standard.StandardTokenizer.#ctor(Lucene.Net.Util.Version,System.IO.TextReader)"/> instead
            </deprecated>
        </member>
        <member name="M:Lucene.Net.Analysis.Standard.StandardTokenizer.#ctor(Lucene.Net.Util.Version,System.IO.TextReader)">
            <summary> Creates a new instance of the
            <see cref="T:Lucene.Net.Analysis.Standard.StandardTokenizer"/>. Attaches
            the <c>input</c> to the newly created JFlex scanner.
            
            </summary>
            <param name="matchVersion"></param>
            <param name="input">The input reader
            
            See http://issues.apache.org/jira/browse/LUCENE-1068
            </param>
        </member>
        <member name="M:Lucene.Net.Analysis.Standard.StandardTokenizer.#ctor(Lucene.Net.Util.AttributeSource,System.IO.TextReader,System.Boolean)">
            <summary> Creates a new StandardTokenizer with a given <see cref="T:Lucene.Net.Util.AttributeSource"/>. </summary>
            <deprecated> Use
            <see cref="M:Lucene.Net.Analysis.Standard.StandardTokenizer.#ctor(Lucene.Net.Util.Version,Lucene.Net.Util.AttributeSource,System.IO.TextReader)"/>
            instead
            </deprecated>
        </member>
        <member name="M:Lucene.Net.Analysis.Standard.StandardTokenizer.#ctor(Lucene.Net.Util.Version,Lucene.Net.Util.AttributeSource,System.IO.TextReader)">
            <summary> Creates a new StandardTokenizer with a given <see cref="T:Lucene.Net.Util.AttributeSource"/>.</summary>
        </member>
        <member name="M:Lucene.Net.Analysis.Standard.StandardTokenizer.#ctor(Lucene.Net.Util.AttributeSource.AttributeFactory,System.IO.TextReader,System.Boolean)">
            <summary> Creates a new StandardTokenizer with a given <see cref="T:Lucene.Net.Util.AttributeSource.AttributeFactory"/> </summary>
            <deprecated> Use
            <see cref="M:Lucene.Net.Analysis.Standard.StandardTokenizer.#ctor(Lucene.Net.Util.Version,Lucene.Net.Util.AttributeSource.AttributeFactory,System.IO.TextReader)"/>
            instead
            </deprecated>
        </member>
        <member name="M:Lucene.Net.Analysis.Standard.StandardTokenizer.#ctor(Lucene.Net.Util.Version,Lucene.Net.Util.AttributeSource.AttributeFactory,System.IO.TextReader)">
            <summary> Creates a new StandardTokenizer with a given
            <see cref="T:Lucene.Net.Util.AttributeSource.AttributeFactory"/>
            </summary>
        </member>
        <member name="M:Lucene.Net.Analysis.Standard.StandardTokenizer.IncrementToken">
            <summary> Advances to the next token in the stream.
            
             <see cref="M:Lucene.Net.Analysis.TokenStream.Next"/>
            </summary>
        </member>
        <member name="M:Lucene.Net.Analysis.Standard.StandardTokenizer.Next(Lucene.Net.Analysis.Token)">
            <deprecated> Will be removed in Lucene 3.0. This method is final, as it should
            not be overridden. Delegates to the backwards compatibility layer. 
            </deprecated>
        </member>
        <member name="M:Lucene.Net.Analysis.Standard.StandardTokenizer.Next">
            <deprecated> Will be removed in Lucene 3.0. This method is final, as it should
            not be overridden. Delegates to the backwards compatibility layer. 
            </deprecated>
        </member>
        <member name="M:Lucene.Net.Analysis.Standard.StandardTokenizer.IsReplaceInvalidAcronym">
            <summary> Prior to https://issues.apache.org/jira/browse/LUCENE-1068, StandardTokenizer mischaracterized tokens such as www.abc.com
            as acronyms when they should have been labeled as hosts instead.
            </summary>
            <returns> true if StandardTokenizer now returns these tokens as Hosts, otherwise false
            
            </returns>
            <deprecated> Remove in 3.X and make true the only valid value
            </deprecated>
        </member>
        <member name="M:Lucene.Net.Analysis.Standard.StandardTokenizer.SetReplaceInvalidAcronym(System.Boolean)">
            <summary> Sets whether mischaracterized acronyms should be replaced with HOST tokens.</summary>
            <param name="replaceInvalidAcronym">Set to true to replace mischaracterized acronyms as HOST.
            </param>
            <deprecated> Remove in 3.X and make true the only valid value
            
            See https://issues.apache.org/jira/browse/LUCENE-1068
            </deprecated>
        </member>
        <member name="T:Lucene.Net.Analysis.Standard.StandardTokenizerImpl">
            <summary> This class is a scanner generated by 
            <a href="http://www.jflex.de/">JFlex</a> 1.4.1
            on 9/4/08 6:49 PM from the specification file
            <tt>/tango/mike/src/lucene.standarddigit/src/java/org/apache/lucene/analysis/standard/StandardTokenizerImpl.jflex</tt>
            </summary>
        </member>
        <member name="F:Lucene.Net.Analysis.Standard.StandardTokenizerImpl.YYEOF">
            <summary>This character denotes the end of file </summary>
        </member>
        <member name="F:Lucene.Net.Analysis.Standard.StandardTokenizerImpl.ZZ_BUFFERSIZE">
            <summary>initial size of the lookahead buffer </summary>
        </member>
        <member name="F:Lucene.Net.Analysis.Standard.StandardTokenizerImpl.YYINITIAL">
            <summary>lexical states </summary>
        </member>
        <member name="F:Lucene.Net.Analysis.Standard.StandardTokenizerImpl.ZZ_CMAP_PACKED">
            <summary> Translates characters to character classes</summary>
        </member>
        <member name="F:Lucene.Net.Analysis.Standard.StandardTokenizerImpl.ZZ_CMAP">
            <summary> Translates characters to character classes</summary>
        </member>
        <member name="F:Lucene.Net.Analysis.Standard.StandardTokenizerImpl.ZZ_ACTION">
            <summary> Translates DFA states to action switch labels.</summary>
        </member>
        <member name="F:Lucene.Net.Analysis.Standard.StandardTokenizerImpl.ZZ_ROWMAP">
            <summary> Translates a state to a row index in the transition table</summary>
        </member>
        <member name="F:Lucene.Net.Analysis.Standard.StandardTokenizerImpl.ZZ_TRANS">
            <summary> The transition table of the DFA</summary>
        </member>
        <member name="F:Lucene.Net.Analysis.Standard.StandardTokenizerImpl.ZZ_ATTRIBUTE">
            <summary> ZZ_ATTRIBUTE[aState] contains the attributes of state <c>aState</c></summary>
        </member>
        <member name="F:Lucene.Net.Analysis.Standard.StandardTokenizerImpl.zzReader">
            <summary>the input device </summary>
        </member>
        <member name="F:Lucene.Net.Analysis.Standard.StandardTokenizerImpl.zzState">
            <summary>the current state of the DFA </summary>
        </member>
        <member name="F:Lucene.Net.Analysis.Standard.StandardTokenizerImpl.zzLexicalState">
            <summary>the current lexical state </summary>
        </member>
        <member name="F:Lucene.Net.Analysis.Standard.StandardTokenizerImpl.zzBuffer">
            <summary>this buffer contains the current text to be matched and is
            the source of the yytext() string 
            </summary>
        </member>
        <member name="F:Lucene.Net.Analysis.Standard.StandardTokenizerImpl.zzMarkedPos">
            <summary>the text position at the last accepting state </summary>
        </member>
        <member name="F:Lucene.Net.Analysis.Standard.StandardTokenizerImpl.zzPushbackPos">
            <summary>the text position at the last state to be included in yytext </summary>
        </member>
        <member name="F:Lucene.Net.Analysis.Standard.StandardTokenizerImpl.zzCurrentPos">
            <summary>the current text position in the buffer </summary>
        </member>
        <member name="F:Lucene.Net.Analysis.Standard.StandardTokenizerImpl.zzStartRead">
            <summary>startRead marks the beginning of the yytext() string in the buffer </summary>
        </member>
        <member name="F:Lucene.Net.Analysis.Standard.StandardTokenizerImpl.zzEndRead">
            <summary>endRead marks the last character in the buffer, that has been read
            from input 
            </summary>
        </member>
        <member name="F:Lucene.Net.Analysis.Standard.StandardTokenizerImpl.yyline">
            <summary>number of newlines encountered up to the start of the matched text </summary>
        </member>
        <member name="F:Lucene.Net.Analysis.Standard.StandardTokenizerImpl.yychar">
            <summary>the number of characters up to the start of the matched text </summary>
        </member>
        <member name="F:Lucene.Net.Analysis.Standard.StandardTokenizerImpl.yycolumn">
            <summary> the number of characters from the last newline up to the start of the 
            matched text
            </summary>
        </member>
        <member name="F:Lucene.Net.Analysis.Standard.StandardTokenizerImpl.zzAtBOL">
            <summary> zzAtBOL == true &lt;=&gt; the scanner is currently at the beginning of a line</summary>
        </member>
        <member name="F:Lucene.Net.Analysis.Standard.StandardTokenizerImpl.zzAtEOF">
            <summary>zzAtEOF == true &lt;=&gt; the scanner is at the EOF </summary>
        </member>
        <member name="F:Lucene.Net.Analysis.Standard.StandardTokenizerImpl.ACRONYM_DEP">
            <deprecated> this solves a bug where HOSTs that end with '.' are identified
            as ACRONYMs. It is deprecated and will be removed in the next
            release.
            </deprecated>
        </member>
        <member name="M:Lucene.Net.Analysis.Standard.StandardTokenizerImpl.Reset(System.IO.TextReader)">
            <summary>Resets the Tokenizer to a new Reader.</summary>
        </member>
        <member name="M:Lucene.Net.Analysis.Standard.StandardTokenizerImpl.GetText(Lucene.Net.Analysis.Token)">
            <summary> Fills Lucene token with the current token text.</summary>
        </member>
        <member name="M:Lucene.Net.Analysis.Standard.StandardTokenizerImpl.GetText(Lucene.Net.Analysis.Tokenattributes.TermAttribute)">
            <summary> Fills TermAttribute with the current token text.</summary>
        </member>
        <member name="M:Lucene.Net.Analysis.Standard.StandardTokenizerImpl.#ctor(System.IO.TextReader)">
            <summary> Creates a new scanner.
            There is also a System.IO.Stream version of this constructor.
            
            </summary>
            <param name="in_Renamed"> the System.IO.TextReader to read input from.
            </param>
        </member>
        <member name="M:Lucene.Net.Analysis.Standard.StandardTokenizerImpl.#ctor(System.IO.Stream)">
            <summary> Creates a new scanner.
            There is also a System.IO.TextReader version of this constructor.
            
            </summary>
            <param name="in_Renamed"> the System.IO.Stream to read input from.
            </param>
        </member>
        <member name="M:Lucene.Net.Analysis.Standard.StandardTokenizerImpl.ZzUnpackCMap(System.String)">
            <summary> Unpacks the compressed character translation table.
            
            </summary>
            <param name="packed">  the packed character translation table
            </param>
            <returns>         the unpacked character translation table
            </returns>
        </member>
        <member name="M:Lucene.Net.Analysis.Standard.StandardTokenizerImpl.ZzRefill">
            <summary> Refills the input buffer.
            </summary>
            <returns><c>false</c>, iff there was new input.
            
            </returns>
            <exception cref="T:System.IO.IOException"> if any I/O-Error occurs
            </exception>
        </member>
        <member name="M:Lucene.Net.Analysis.Standard.StandardTokenizerImpl.Yyclose">
            <summary> Closes the input stream.</summary>
        </member>
        <member name="M:Lucene.Net.Analysis.Standard.StandardTokenizerImpl.Yyreset(System.IO.TextReader)">
            <summary> Resets the scanner to read from a new input stream.
            Does not close the old reader.
            
            All internal variables are reset, the old input stream 
            <b>cannot</b> be reused (internal buffer is discarded and lost).
            Lexical state is set to <tt>ZZ_INITIAL</tt>.
            
            </summary>
            <param name="reader">  the new input stream 
            </param>
        </member>
        <member name="M:Lucene.Net.Analysis.Standard.StandardTokenizerImpl.Yystate">
            <summary> Returns the current lexical state.</summary>
        </member>
        <member name="M:Lucene.Net.Analysis.Standard.StandardTokenizerImpl.Yybegin(System.Int32)">
            <summary> Enters a new lexical state
            
            </summary>
            <param name="newState">the new lexical state
            </param>
        </member>
        <member name="M:Lucene.Net.Analysis.Standard.StandardTokenizerImpl.Yytext">
            <summary> Returns the text matched by the current regular expression.</summary>
        </member>
        <member name="M:Lucene.Net.Analysis.Standard.StandardTokenizerImpl.Yycharat(System.Int32)">
            <summary> Returns the character at position <tt>pos</tt> from the 
            matched text. 
            
            It is equivalent to yytext().charAt(pos), but faster
            
            </summary>
            <param name="pos">the position of the character to fetch. 
            A value from 0 to yylength()-1.
            
            </param>
            <returns> the character at position pos
            </returns>
        </member>
        <member name="M:Lucene.Net.Analysis.Standard.StandardTokenizerImpl.Yylength">
            <summary> Returns the length of the matched text region.</summary>
        </member>
        <member name="M:Lucene.Net.Analysis.Standard.StandardTokenizerImpl.ZzScanError(System.Int32)">
            <summary> Reports an error that occurred while scanning.
            
            In a well-formed scanner (no or only correct usage of 
            yypushback(int) and a match-all fallback rule) this method 
            will only be called with things that "Can't Possibly Happen".
            If this method is called, something is seriously wrong
            (e.g. a JFlex bug producing a faulty scanner etc.).
            
            Usual syntax/scanner level error handling should be done
            in error fallback rules.
            
            </summary>
            <param name="errorCode"> the code of the error message to display
            </param>
        </member>
        <member name="M:Lucene.Net.Analysis.Standard.StandardTokenizerImpl.Yypushback(System.Int32)">
            <summary> Pushes the specified amount of characters back into the input stream.
            
            They will be read again by the next call of the scanning method.
            
            </summary>
            <param name="number"> the number of characters to be read again.
            This number must not be greater than yylength()!
            </param>
        </member>
        <member name="M:Lucene.Net.Analysis.Standard.StandardTokenizerImpl.GetNextToken">
            <summary> Resumes scanning until the next regular expression is matched,
            the end of input is encountered or an I/O-Error occurs.
            
            </summary>
            <returns>      the next token
            </returns>
            <exception cref="T:System.IO.IOException"> if any I/O-Error occurs
            </exception>
        </member>
        <member name="T:Lucene.Net.Analysis.StopAnalyzer">
            <summary> Filters <see cref="T:Lucene.Net.Analysis.LetterTokenizer"/> with <see cref="T:Lucene.Net.Analysis.LowerCaseFilter"/> and
            <see cref="T:Lucene.Net.Analysis.StopFilter"/>.
            
            <a name="version"/>
            <p/>
            You must specify the required <see cref="T:Lucene.Net.Util.Version"/> compatibility when creating
            StopAnalyzer:
            <list type="bullet">
            <item>As of 2.9, position increments are preserved</item>
            </list>
            </summary>
        </member>
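As a sketch of the chain described above (letter tokenization, lower-casing, then stop-word removal), again assuming a Lucene.Net 2.9.x-era API:

```csharp
// Hypothetical usage (assumes Lucene.Net 2.9.x): StopAnalyzer lower-cases,
// splits on non-letters, and removes common English stop words.
using System;
using System.IO;
using Lucene.Net.Analysis;
using Lucene.Net.Analysis.Tokenattributes;
using Version = Lucene.Net.Util.Version;

class StopAnalyzerDemo
{
    static void Main()
    {
        var analyzer = new StopAnalyzer(Version.LUCENE_29);
        TokenStream stream = analyzer.TokenStream("body", new StringReader("The Quick Brown Fox"));
        var term = (TermAttribute)stream.AddAttribute(typeof(TermAttribute));
        while (stream.IncrementToken())
        {
            // "The" is dropped as a stop word; surviving tokens are lower-cased.
            Console.WriteLine(term.Term());
        }
    }
}
```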
        <member name="F:Lucene.Net.Analysis.StopAnalyzer.ENGLISH_STOP_WORDS">
            <summary>An array containing some common English words that are not usually useful
            for searching. 
            </summary>
            <deprecated> Use <see cref="F:Lucene.Net.Analysis.StopAnalyzer.ENGLISH_STOP_WORDS_SET"/> instead 
            </deprecated>
        </member>
        <member name="F:Lucene.Net.Analysis.StopAnalyzer.ENGLISH_STOP_WORDS_SET">
            <summary>An unmodifiable set containing some common English words that are not usually useful
            for searching.
            </summary>
        </member>
        <member name="M:Lucene.Net.Analysis.StopAnalyzer.#ctor">
            <summary>Builds an analyzer which removes words in
            ENGLISH_STOP_WORDS.
            </summary>
            <deprecated> Use <see cref="M:Lucene.Net.Analysis.StopAnalyzer.#ctor(Lucene.Net.Util.Version)"/> instead
            </deprecated>
        </member>
        <member name="M:Lucene.Net.Analysis.StopAnalyzer.#ctor(Lucene.Net.Util.Version)">
            <summary> Builds an analyzer which removes words in ENGLISH_STOP_WORDS.</summary>
        </member>
        <member name="M:Lucene.Net.Analysis.StopAnalyzer.#ctor(System.Boolean)">
            <summary>Builds an analyzer which removes words in
            ENGLISH_STOP_WORDS.
            </summary>
            <param name="enablePositionIncrements">
            See <see cref="M:Lucene.Net.Analysis.StopFilter.SetEnablePositionIncrements(System.Boolean)"/>
            </param>
            <deprecated> Use <see cref="M:Lucene.Net.Analysis.StopAnalyzer.#ctor(Lucene.Net.Util.Version)"/> instead
            </deprecated>
        </member>
        <member name="M:Lucene.Net.Analysis.StopAnalyzer.#ctor(System.Collections.Hashtable)">
            <summary>Builds an analyzer with the stop words from the given set.</summary>
            <deprecated> Use <see cref="M:Lucene.Net.Analysis.StopAnalyzer.#ctor(Lucene.Net.Util.Version,System.Collections.Hashtable)"/> instead
            </deprecated>
        </member>
        <member name="M:Lucene.Net.Analysis.StopAnalyzer.#ctor(Lucene.Net.Util.Version,System.Collections.Hashtable)">
            <summary>Builds an analyzer with the stop words from the given set.</summary>
        </member>
        <member name="M:Lucene.Net.Analysis.StopAnalyzer.#ctor(System.Collections.Hashtable,System.Boolean)">
            <summary>Builds an analyzer with the stop words from the given set.</summary>
            <param name="stopWords">Set of stop words
            </param>
            <param name="enablePositionIncrements">
            See <see cref="M:Lucene.Net.Analysis.StopFilter.SetEnablePositionIncrements(System.Boolean)"/>
            </param>
            <deprecated> Use <see cref="M:Lucene.Net.Analysis.StopAnalyzer.#ctor(Lucene.Net.Util.Version,System.Collections.Hashtable)"/> instead
            </deprecated>
        </member>
        <member name="M:Lucene.Net.Analysis.StopAnalyzer.#ctor(System.String[])">
            <summary>Builds an analyzer which removes words in the provided array.</summary>
            <deprecated> Use <see cref="M:Lucene.Net.Analysis.StopAnalyzer.#ctor(System.Collections.Hashtable,System.Boolean)"/> instead 
            </deprecated>
            <deprecated> Use <see cref="M:Lucene.Net.Analysis.StopAnalyzer.#ctor(Lucene.Net.Util.Version,System.Collections.Hashtable)"/> instead
            </deprecated>
        </member>
        <member name="M:Lucene.Net.Analysis.StopAnalyzer.#ctor(System.String[],System.Boolean)">
            <summary>Builds an analyzer which removes words in the provided array.</summary>
            <param name="stopWords">Array of stop words
            </param>
            <param name="enablePositionIncrements">
            See <see cref="M:Lucene.Net.Analysis.StopFilter.SetEnablePositionIncrements(System.Boolean)"/>
            </param>
            <deprecated> Use <see cref="M:Lucene.Net.Analysis.StopAnalyzer.#ctor(Lucene.Net.Util.Version,System.Collections.Hashtable)"/> instead
            </deprecated>
        </member>
        <member name="M:Lucene.Net.Analysis.StopAnalyzer.#ctor(System.IO.FileInfo)">
            <summary>Builds an analyzer with the stop words from the given file.</summary>
            <seealso cref="M:Lucene.Net.Analysis.WordlistLoader.GetWordSet(System.IO.FileInfo)">
            </seealso>
            <deprecated> Use <see cref="M:Lucene.Net.Analysis.StopAnalyzer.#ctor(Lucene.Net.Util.Version,System.IO.FileInfo)"/> instead
            </deprecated>
        </member>
        <member name="M:Lucene.Net.Analysis.StopAnalyzer.#ctor(System.IO.FileInfo,System.Boolean)">
            <summary>Builds an analyzer with the stop words from the given file.</summary>
            <seealso cref="M:Lucene.Net.Analysis.WordlistLoader.GetWordSet(System.IO.FileInfo)">
            </seealso>
            <param name="stopwordsFile">File to load stop words from
            </param>
            <param name="enablePositionIncrements">
            See <see cref="M:Lucene.Net.Analysis.StopFilter.SetEnablePositionIncrements(System.Boolean)"/>
            </param>
            <deprecated> Use <see cref="M:Lucene.Net.Analysis.StopAnalyzer.#ctor(Lucene.Net.Util.Version,System.IO.FileInfo)"/> instead
            </deprecated>
        </member>
        <member name="M:Lucene.Net.Analysis.StopAnalyzer.#ctor(Lucene.Net.Util.Version,System.IO.FileInfo)">
            <summary> Builds an analyzer with the stop words from the given file.
            
            </summary>
            <seealso cref="M:Lucene.Net.Analysis.WordlistLoader.GetWordSet(System.IO.FileInfo)">
            </seealso>
            <param name="matchVersion">See <a href="#version">above</a>
            </param>
            <param name="stopwordsFile">File to load stop words from
            </param>
        </member>
        <member name="M:Lucene.Net.Analysis.StopAnalyzer.#ctor(System.IO.TextReader)">
            <summary>Builds an analyzer with the stop words from the given reader.</summary>
            <seealso cref="M:Lucene.Net.Analysis.WordlistLoader.GetWordSet(System.IO.TextReader)">
            </seealso>
            <deprecated> Use <see cref="M:Lucene.Net.Analysis.StopAnalyzer.#ctor(Lucene.Net.Util.Version,System.IO.TextReader)"/> instead
            </deprecated>
        </member>
        <member name="M:Lucene.Net.Analysis.StopAnalyzer.#ctor(System.IO.TextReader,System.Boolean)">
            <summary>Builds an analyzer with the stop words from the given reader.</summary>
            <seealso cref="M:Lucene.Net.Analysis.WordlistLoader.GetWordSet(System.IO.TextReader)">
            </seealso>
            <param name="stopwords">Reader to load stop words from
            </param>
            <param name="enablePositionIncrements">
            See <see cref="M:Lucene.Net.Analysis.StopFilter.SetEnablePositionIncrements(System.Boolean)"/>
            </param>
            <deprecated> Use <see cref="M:Lucene.Net.Analysis.StopAnalyzer.#ctor(Lucene.Net.Util.Version,System.IO.TextReader)"/> instead
            </deprecated>
        </member>
        <member name="M:Lucene.Net.Analysis.StopAnalyzer.#ctor(Lucene.Net.Util.Version,System.IO.TextReader)">
            <summary>Builds an analyzer with the stop words from the given reader. </summary>
            <seealso cref="M:Lucene.Net.Analysis.WordlistLoader.GetWordSet(System.IO.TextReader)">
            </seealso>
            <param name="matchVersion">See <a href="#version">above</a>
            </param>
            <param name="stopwords">Reader to load stop words from
            </param>
        </member>
        <member name="M:Lucene.Net.Analysis.StopAnalyzer.TokenStream(System.String,System.IO.TextReader)">
            <summary>Filters LowerCaseTokenizer with StopFilter. </summary>
        </member>
        <member name="T:Lucene.Net.Analysis.StopAnalyzer.SavedStreams">
            <summary>Filters LowerCaseTokenizer with StopFilter. </summary>
        </member>
        <member name="T:Lucene.Net.Analysis.StopFilter">
            <summary> Removes stop words from a token stream.</summary>
        </member>
        <member name="M:Lucene.Net.Analysis.StopFilter.#ctor(Lucene.Net.Analysis.TokenStream,System.String[])">
            <summary> Construct a token stream filtering the given input.</summary>
            <deprecated> Use <see cref="M:Lucene.Net.Analysis.StopFilter.#ctor(System.Boolean,Lucene.Net.Analysis.TokenStream,System.String[])"/> instead
            </deprecated>
        </member>
        <member name="M:Lucene.Net.Analysis.StopFilter.#ctor(System.Boolean,Lucene.Net.Analysis.TokenStream,System.String[])">
            <summary> Construct a token stream filtering the given input.</summary>
            <param name="enablePositionIncrements">true if token positions should record the removed stop words
            </param>
            <param name="input">input TokenStream
            </param>
            <param name="stopWords">array of stop words
            </param>
            <deprecated> Use <see cref="M:Lucene.Net.Analysis.StopFilter.#ctor(System.Boolean,Lucene.Net.Analysis.TokenStream,System.Collections.Hashtable)"/> instead.
            </deprecated>
        </member>
        <member name="M:Lucene.Net.Analysis.StopFilter.#ctor(Lucene.Net.Analysis.TokenStream,System.String[],System.Boolean)">
            <summary> Constructs a filter which removes words from the input
            TokenStream that are named in the array of words.
            </summary>
            <deprecated> Use <see cref="M:Lucene.Net.Analysis.StopFilter.#ctor(System.Boolean,Lucene.Net.Analysis.TokenStream,System.String[],System.Boolean)"/> instead
            </deprecated>
        </member>
        <member name="M:Lucene.Net.Analysis.StopFilter.#ctor(System.Boolean,Lucene.Net.Analysis.TokenStream,System.String[],System.Boolean)">
            <summary> Constructs a filter which removes words from the input
            TokenStream that are named in the array of words.
            </summary>
            <param name="enablePositionIncrements">true if token positions should record the removed stop words
            </param>
             <param name="in_Renamed">input TokenStream
            </param>
            <param name="stopWords">array of stop words
            </param>
            <param name="ignoreCase">true if case is ignored
            </param>
            <deprecated> Use <see cref="M:Lucene.Net.Analysis.StopFilter.#ctor(System.Boolean,Lucene.Net.Analysis.TokenStream,System.Collections.Hashtable,System.Boolean)"/> instead.
            </deprecated>
        </member>
        <member name="M:Lucene.Net.Analysis.StopFilter.#ctor(Lucene.Net.Analysis.TokenStream,System.Collections.Hashtable,System.Boolean)">
            <summary> Construct a token stream filtering the given input.
            If <c>stopWords</c> is an instance of <see cref="T:Lucene.Net.Analysis.CharArraySet"/> (true if
            <c>makeStopSet()</c> was used to construct the set) it will be directly used
            and <c>ignoreCase</c> will be ignored since <c>CharArraySet</c>
            directly controls case sensitivity.
            <p/>
            If <c>stopWords</c> is not an instance of <see cref="T:Lucene.Net.Analysis.CharArraySet"/>,
            a new CharArraySet will be constructed and <c>ignoreCase</c> will be
            used to specify the case sensitivity of that set.
            
            </summary>
            <param name="input">
            </param>
            <param name="stopWords">The set of Stop Words.
            </param>
            <param name="ignoreCase">If true, ignore case when comparing stop words.
            </param>
            <deprecated> Use <see cref="M:Lucene.Net.Analysis.StopFilter.#ctor(System.Boolean,Lucene.Net.Analysis.TokenStream,System.Collections.Hashtable,System.Boolean)"/> instead
            </deprecated>
        </member>
        <member name="M:Lucene.Net.Analysis.StopFilter.#ctor(System.Boolean,Lucene.Net.Analysis.TokenStream,System.Collections.Hashtable,System.Boolean)">
            <summary> Construct a token stream filtering the given input.
            If <c>stopWords</c> is an instance of <see cref="T:Lucene.Net.Analysis.CharArraySet"/> (true if
            <c>makeStopSet()</c> was used to construct the set) it will be directly used
            and <c>ignoreCase</c> will be ignored since <c>CharArraySet</c>
            directly controls case sensitivity.
            <p/>
            If <c>stopWords</c> is not an instance of <see cref="T:Lucene.Net.Analysis.CharArraySet"/>,
            a new CharArraySet will be constructed and <c>ignoreCase</c> will be
            used to specify the case sensitivity of that set.
            
            </summary>
            <param name="enablePositionIncrements">true if token positions should record the removed stop words
            </param>
            <param name="input">Input TokenStream
            </param>
            <param name="stopWords">The set of Stop Words.
            </param>
            <param name="ignoreCase">Ignore case when stopping.
            </param>
        </member>
        <member name="M:Lucene.Net.Analysis.StopFilter.#ctor(Lucene.Net.Analysis.TokenStream,System.Collections.Hashtable)">
            <summary> Constructs a filter which removes words from the input
            TokenStream that are named in the Set.
            
            </summary>
            <seealso cref="M:Lucene.Net.Analysis.StopFilter.MakeStopSet(System.String[])">
            </seealso>
            <deprecated> Use <see cref="M:Lucene.Net.Analysis.StopFilter.#ctor(System.Boolean,Lucene.Net.Analysis.TokenStream,System.Collections.Hashtable)"/> instead
            </deprecated>
        </member>
        <member name="M:Lucene.Net.Analysis.StopFilter.#ctor(System.Boolean,Lucene.Net.Analysis.TokenStream,System.Collections.Hashtable)">
            <summary> Constructs a filter which removes words from the input
            TokenStream that are named in the Set.
            
            </summary>
            <param name="enablePositionIncrements">true if token positions should record the removed stop words
            </param>
             <param name="in_Renamed">Input stream
            </param>
            <param name="stopWords">The set of Stop Words.
            </param>
            <seealso cref="M:Lucene.Net.Analysis.StopFilter.MakeStopSet(System.String[])">
            </seealso>
        </member>
        <member name="M:Lucene.Net.Analysis.StopFilter.MakeStopSet(System.String[])">
            <summary> Builds a Set from an array of stop words,
            appropriate for passing into the StopFilter constructor.
            This allows the stop-word set to be built once and cached when
            an Analyzer is constructed.
            
            </summary>
            <seealso cref="M:Lucene.Net.Analysis.StopFilter.MakeStopSet(System.String[],System.Boolean)"> passing false to ignoreCase
            </seealso>
        </member>
        <member name="M:Lucene.Net.Analysis.StopFilter.MakeStopSet(System.Collections.IList)">
            <summary> Builds a Set from a List of stop words,
            appropriate for passing into the StopFilter constructor.
            This allows the stop-word set to be built once and cached when
            an Analyzer is constructed.
            
            </summary>
            <seealso cref="M:Lucene.Net.Analysis.StopFilter.MakeStopSet(System.String[],System.Boolean)"> passing false to ignoreCase
            </seealso>
        </member>
        <member name="M:Lucene.Net.Analysis.StopFilter.MakeStopSet(System.String[],System.Boolean)">
            <summary> Builds a stop-word Set from the given array.</summary>
            <param name="stopWords">An array of stopwords
            </param>
            <param name="ignoreCase">If true, all words are lower cased first.  
            </param>
            <returns> a Set containing the words
            </returns>
        </member>
        <member name="M:Lucene.Net.Analysis.StopFilter.MakeStopSet(System.Collections.IList,System.Boolean)">
            <summary> Builds a stop-word Set from the given List of Strings.</summary>
            <param name="stopWords">A List of Strings representing the stopwords
            </param>
            <param name="ignoreCase">if true, all words are lower cased first
            </param>
            <returns> A Set containing the words
            </returns>
        </member>
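The behavior of <c>MakeStopSet</c> with <c>ignoreCase</c> can be sketched outside of Lucene.Net. The following is an illustrative Python sketch, not the actual API: when case is ignored, entries are normalized once at construction (the role a <c>CharArraySet</c> plays), so per-token lookups stay cheap. The names `make_stop_set` and `is_stop` are hypothetical.

```python
def make_stop_set(stop_words, ignore_case=False):
    """Build a stop-word set once, suitable for reuse across many token streams."""
    if ignore_case:
        # Normalize at construction time so lookups need only one lower() call.
        return {w.lower() for w in stop_words}
    return set(stop_words)

def is_stop(token, stop_set, ignore_case=False):
    """Membership test matching the case sensitivity the set was built with."""
    return (token.lower() if ignore_case else token) in stop_set

stops = make_stop_set(["The", "AND", "of"], ignore_case=True)
```

Because the set is built once per Analyzer rather than per token stream, the normalization cost is paid a single time.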
        <member name="M:Lucene.Net.Analysis.StopFilter.IncrementToken">
            <summary> Returns the next input Token whose term() is not a stop word.</summary>
        </member>
        <member name="M:Lucene.Net.Analysis.StopFilter.GetEnablePositionIncrementsDefault">
            <seealso cref="M:Lucene.Net.Analysis.StopFilter.SetEnablePositionIncrementsDefault(System.Boolean)">
            </seealso>
            <deprecated> Please specify this when you create the StopFilter
            </deprecated>
        </member>
        <member name="M:Lucene.Net.Analysis.StopFilter.GetEnablePositionIncrementsVersionDefault(Lucene.Net.Util.Version)">
            <summary> Returns version-dependent default for enablePositionIncrements. Analyzers
            that embed StopFilter use this method when creating the StopFilter. Prior
            to 2.9, this returns <see cref="M:Lucene.Net.Analysis.StopFilter.GetEnablePositionIncrementsDefault"/>. On 2.9
            or later, it returns true.
            </summary>
        </member>
        <member name="M:Lucene.Net.Analysis.StopFilter.SetEnablePositionIncrementsDefault(System.Boolean)">
            <summary> Set the default position increments behavior of every StopFilter created
            from now on.
            <p/>
            Note: behavior of a single StopFilter instance can be modified with
            <see cref="M:Lucene.Net.Analysis.StopFilter.SetEnablePositionIncrements(System.Boolean)"/>. This static method allows
            control over behavior of classes using StopFilters internally, for
            example <see cref="T:Lucene.Net.Analysis.Standard.StandardAnalyzer"/>
            if used with the no-arg ctor.
            <p/>
            Default: false.
            
            </summary>
            <seealso cref="M:Lucene.Net.Analysis.StopFilter.SetEnablePositionIncrements(System.Boolean)">
            </seealso>
            <deprecated> Please specify this when you create the StopFilter
            </deprecated>
        </member>
        <member name="M:Lucene.Net.Analysis.StopFilter.GetEnablePositionIncrements">
            <seealso cref="M:Lucene.Net.Analysis.StopFilter.SetEnablePositionIncrements(System.Boolean)">
            </seealso>
        </member>
        <member name="M:Lucene.Net.Analysis.StopFilter.SetEnablePositionIncrements(System.Boolean)">
            <summary> If <c>true</c>, this StopFilter will preserve
            positions of the incoming tokens (i.e., accumulate and
            set position increments of the removed stop tokens).
            Generally, <c>true</c> is best as it does not
            lose information (positions of the original tokens)
            during indexing.
            
            <p/> When set, when a token is stopped
            (omitted), the position increment of the following
            token is incremented.
            
            <p/> <b>NOTE</b>: be sure to also
            set <see cref="M:Lucene.Net.QueryParsers.QueryParser.SetEnablePositionIncrements(System.Boolean)"/> if
            you use QueryParser to create queries.
            </summary>
        </member>
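The increment bookkeeping described above can be made concrete with a small sketch. This is plain Python mimicking the logic, not Lucene.Net code: each dropped stop token adds its increment to the next surviving token, so phrase positions are preserved; with the flag off, the gaps silently collapse.

```python
def stop_filter(tokens, stop_set, enable_position_increments=True):
    """tokens: list of (term, position_increment) pairs; returns survivors."""
    out, skipped = [], 0
    for term, inc in tokens:
        if term in stop_set:
            skipped += inc          # remember the gap left by the stop word
            continue
        if enable_position_increments:
            out.append((term, inc + skipped))   # carry the gap forward
        else:
            out.append((term, inc))             # gap is lost
        skipped = 0
    return out

tokens = [("quick", 1), ("and", 1), ("the", 1), ("lazy", 1)]
```

With increments preserved, "lazy" arrives with an increment of 3, so an exact phrase query for "quick lazy" will not match across the removed stop words; with the flag off, both survivors report an increment of 1 and the phrase would match.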
        <member name="T:Lucene.Net.Analysis.TeeSinkTokenFilter">
            <summary> This TokenFilter provides the ability to set aside attribute states
            that have already been analyzed.  This is useful in situations where multiple fields share
            many common analysis steps and then go their separate ways.
            <p/>
            It is also useful for doing things like entity extraction or proper noun analysis as
            part of the analysis workflow and saving off those tokens for use in another field.
            
            <code>
            TeeSinkTokenFilter source1 = new TeeSinkTokenFilter(new WhitespaceTokenizer(reader1));
            TeeSinkTokenFilter.SinkTokenStream sink1 = source1.newSinkTokenStream();
            TeeSinkTokenFilter.SinkTokenStream sink2 = source1.newSinkTokenStream();
            TeeSinkTokenFilter source2 = new TeeSinkTokenFilter(new WhitespaceTokenizer(reader2));
            source2.addSinkTokenStream(sink1);
            source2.addSinkTokenStream(sink2);
            TokenStream final1 = new LowerCaseFilter(source1);
            TokenStream final2 = source2;
            TokenStream final3 = new EntityDetect(sink1);
            TokenStream final4 = new URLDetect(sink2);
            d.add(new Field("f1", final1));
            d.add(new Field("f2", final2));
            d.add(new Field("f3", final3));
            d.add(new Field("f4", final4));
            </code>
            In this example, <c>sink1</c> and <c>sink2</c> will both get tokens from both
            <c>reader1</c> and <c>reader2</c> after the whitespace tokenizer;
            any of these can then be wrapped in further analysis, and more "sources" can be inserted if desired.
            It is important that the tees are consumed before the sinks (in the above example, the tee field names must
            sort before the sink field names). If you are not sure which stream is consumed first, you can simply
            add another sink and then pass all tokens to the sinks at once using <see cref="M:Lucene.Net.Analysis.TeeSinkTokenFilter.ConsumeAllTokens"/>.
            This TokenFilter is exhausted after that. For instance, the example above becomes:
            <code>
            ...
            TokenStream final1 = new LowerCaseFilter(source1.newSinkTokenStream());
            TokenStream final2 = source2.newSinkTokenStream();
            sink1.consumeAllTokens();
            sink2.consumeAllTokens();
            ...
            </code>
            In this case, the fields can be added in any order, because the sources are not used anymore and all sinks are ready.
            <p/>Note, the EntityDetect and URLDetect TokenStreams are for the example and do not currently exist in Lucene.
            </summary>
        </member>
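The tee/sink pattern above can be sketched in a few lines of plain Python (hypothetical names, not the Lucene.Net classes): the tee records each token as it consumes its source, and every attached sink is fed the cached tokens, optionally through a per-sink filter.

```python
class TeeSink:
    """Minimal sketch of the tee/sink idea: one pass over the source feeds many sinks."""

    def __init__(self, source):
        self.source = source   # any iterable of tokens
        self.sinks = []

    def new_sink(self, accept=lambda tok: True):
        # Each sink is a list that will be filled when the tee is consumed.
        cache = []
        self.sinks.append((accept, cache))
        return cache

    def consume_all_tokens(self):
        # Exhausts the source, feeding every attached sink at once.
        for tok in self.source:
            for accept, cache in self.sinks:
                if accept(tok):
                    cache.append(tok)

tee = TeeSink(iter(["new", "york", "http://a"]))
words = tee.new_sink()
urls = tee.new_sink(lambda t: t.startswith("http"))
tee.consume_all_tokens()
```

As in the Lucene example, the source is exhausted after `consume_all_tokens`, but every sink now holds its share of the tokens and can be consumed in any order.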
        <member name="M:Lucene.Net.Analysis.TeeSinkTokenFilter.#ctor(Lucene.Net.Analysis.TokenStream)">
            <summary> Instantiates a new TeeSinkTokenFilter.</summary>
        </member>
        <member name="M:Lucene.Net.Analysis.TeeSinkTokenFilter.NewSinkTokenStream">
            <summary> Returns a new <see cref="T:Lucene.Net.Analysis.TeeSinkTokenFilter.SinkTokenStream"/> that receives all tokens consumed by this stream.</summary>
        </member>
        <member name="M:Lucene.Net.Analysis.TeeSinkTokenFilter.NewSinkTokenStream(Lucene.Net.Analysis.TeeSinkTokenFilter.SinkFilter)">
            <summary> Returns a new <see cref="T:Lucene.Net.Analysis.TeeSinkTokenFilter.SinkTokenStream"/> that receives all tokens consumed by this stream
            that pass the supplied filter.
            </summary>
            <seealso cref="T:Lucene.Net.Analysis.TeeSinkTokenFilter.SinkFilter">
            </seealso>
        </member>
        <member name="M:Lucene.Net.Analysis.TeeSinkTokenFilter.AddSinkTokenStream(Lucene.Net.Analysis.TeeSinkTokenFilter.SinkTokenStream)">
            <summary> Adds a <see cref="T:Lucene.Net.Analysis.TeeSinkTokenFilter.SinkTokenStream"/> created by another <c>TeeSinkTokenFilter</c>
            to this one. The supplied stream will also receive all consumed tokens.
            This method can be used to pass tokens from two different tees to one sink.
            </summary>
        </member>
        <member name="M:Lucene.Net.Analysis.TeeSinkTokenFilter.ConsumeAllTokens">
            <summary> <c>TeeSinkTokenFilter</c> passes all tokens to the added sinks
            when it is itself consumed. To be sure that all tokens from the input
            stream are passed to the sinks, you can call this method.
            This instance is exhausted after that, but all sinks are then immediately available.
            </summary>
        </member>
        <member name="T:Lucene.Net.Analysis.TeeSinkTokenFilter.SinkFilter">
            <summary> A filter that decides which <see cref="T:Lucene.Net.Util.AttributeSource"/> states to store in the sink.</summary>
        </member>
        <member name="M:Lucene.Net.Analysis.TeeSinkTokenFilter.SinkFilter.Accept(Lucene.Net.Util.AttributeSource)">
            <summary> Returns true iff the current state of the passed-in <see cref="T:Lucene.Net.Util.AttributeSource"/> shall be stored
            in the sink. 
            </summary>
        </member>
        <member name="M:Lucene.Net.Analysis.TeeSinkTokenFilter.SinkFilter.Reset">
            <summary> Called by <see cref="M:Lucene.Net.Analysis.TeeSinkTokenFilter.SinkTokenStream.Reset"/>. This method does nothing by default
            and can optionally be overridden.
            </summary>
        </member>
        <member name="T:Lucene.Net.Analysis.TeeTokenFilter">
            <summary> Works in conjunction with the SinkTokenizer to provide the ability to set aside tokens
            that have already been analyzed.  This is useful in situations where multiple fields share
            many common analysis steps and then go their separate ways.
            <p/>
            It is also useful for doing things like entity extraction or proper noun analysis as
            part of the analysis workflow and saving off those tokens for use in another field.
            
            <code>
            SinkTokenizer sink1 = new SinkTokenizer();
            SinkTokenizer sink2 = new SinkTokenizer();
            TokenStream source1 = new TeeTokenFilter(new TeeTokenFilter(new WhitespaceTokenizer(reader1), sink1), sink2);
            TokenStream source2 = new TeeTokenFilter(new TeeTokenFilter(new WhitespaceTokenizer(reader2), sink1), sink2);
            TokenStream final1 = new LowerCaseFilter(source1);
            TokenStream final2 = source2;
            TokenStream final3 = new EntityDetect(sink1);
            TokenStream final4 = new URLDetect(sink2);
            d.add(new Field("f1", final1));
            d.add(new Field("f2", final2));
            d.add(new Field("f3", final3));
            d.add(new Field("f4", final4));
            </code>
            In this example, <c>sink1</c> and <c>sink2</c> will both get tokens from both
            <c>reader1</c> and <c>reader2</c> after the whitespace tokenizer;
            any of these can then be wrapped in further analysis, and more "sources" can be inserted if desired.
            It is important that the tees are consumed before the sinks (in the above example, the tee field names must
            sort before the sink field names).
            Note: the EntityDetect and URLDetect TokenStreams are for the example only and do not currently exist in Lucene.
            <p/>
            
            See <a href="http://issues.apache.org/jira/browse/LUCENE-1058">LUCENE-1058</a>.
            <p/>
            WARNING: <see cref="T:Lucene.Net.Analysis.TeeTokenFilter"/> and <see cref="T:Lucene.Net.Analysis.SinkTokenizer"/> only work with the old TokenStream API.
            If you switch to the new API, you need to use <see cref="T:Lucene.Net.Analysis.TeeSinkTokenFilter"/> instead, which offers 
            the same functionality.
            </summary>
            <seealso cref="T:Lucene.Net.Analysis.SinkTokenizer">
            </seealso>
            <deprecated> Use <see cref="T:Lucene.Net.Analysis.TeeSinkTokenFilter"/> instead
            
            </deprecated>
        </member>
        <member name="T:Lucene.Net.Analysis.Token">
            <summary>A Token is an occurrence of a term from the text of a field.  It consists of
            a term's text, the start and end offset of the term in the text of the field,
            and a type string.
            <p/>
            The start and end offsets permit applications to re-associate a token with
            its source text, e.g., to display highlighted query terms in a document
            browser, or to show matching text fragments in a <abbr title="KeyWord In Context">KWIC</abbr> display, etc.
            <p/>
            The type is a string, assigned by a lexical analyzer
            (a.k.a. tokenizer), naming the lexical or syntactic class that the token
            belongs to.  For example an end of sentence marker token might be implemented
            with type "eos".  The default token type is "word".  
            <p/>
            A Token can optionally have metadata (a.k.a. Payload) in the form of a variable
            length byte array. Use <see cref="M:Lucene.Net.Index.TermPositions.GetPayloadLength"/> and 
            <see cref="M:Lucene.Net.Index.TermPositions.GetPayload(System.Byte[],System.Int32)"/> to retrieve the payloads from the index.
            </summary>
            <summary><br/><br/>
            </summary>
            <summary><p/><b>NOTE:</b> As of 2.9, Token implements all <see cref="T:Lucene.Net.Util.Attribute"/> interfaces
            that are part of core Lucene and can be found in the <see cref="N:Lucene.Net.Analysis.Tokenattributes"/> namespace.
            Even though it is not necessary to use Token anymore, with the new TokenStream API it can
            be used as a convenience class that implements all <see cref="T:Lucene.Net.Util.Attribute"/>s, which is especially useful
            to easily switch from the old to the new TokenStream API.
            </summary>
            <summary><br/><br/>
            <p/><b>NOTE:</b> As of 2.3, Token stores the term text
            internally as a malleable char[] termBuffer instead of
            String termText.  The indexing code and core tokenizers
            have been changed to re-use a single Token instance, changing
            its buffer and other fields in-place as the Token is
            processed.  This provides substantially better indexing
            performance as it saves the GC cost of new'ing a Token and
            String for every term.  The APIs that accept String
            termText are still available but a warning about the
            associated performance cost has been added (below).  The
            <see cref="M:Lucene.Net.Analysis.Token.TermText"/> method has been deprecated.<p/>
            </summary>
            <summary><p/>Tokenizers and TokenFilters should try to re-use a Token instance when
            possible for best performance, by implementing the
            <see cref="M:Lucene.Net.Analysis.TokenStream.IncrementToken"/> API.
            Failing that, to create a new Token you should first use
            one of the constructors that starts with null text.  To load
            the token from a char[] use <see cref="M:Lucene.Net.Analysis.Token.SetTermBuffer(System.Char[],System.Int32,System.Int32)"/>.
            To load from a String use <see cref="M:Lucene.Net.Analysis.Token.SetTermBuffer(System.String)"/> or <see cref="M:Lucene.Net.Analysis.Token.SetTermBuffer(System.String,System.Int32,System.Int32)"/>.
            Alternatively you can get the Token's termBuffer by calling either <see cref="M:Lucene.Net.Analysis.Token.TermBuffer"/>,
            if you know that your text is shorter than the capacity of the termBuffer
            or <see cref="M:Lucene.Net.Analysis.Token.ResizeTermBuffer(System.Int32)"/>, if there is any possibility
            that you may need to grow the buffer. Fill in the characters of your term into this
            buffer, with <see cref="M:System.String.ToCharArray(System.Int32,System.Int32)"/> if loading from a string,
            or with <see cref="M:System.Array.Copy(System.Array,System.Int64,System.Array,System.Int64,System.Int64)"/>, and finally call <see cref="M:Lucene.Net.Analysis.Token.SetTermLength(System.Int32)"/> to
            set the length of the term text.  See <a target="_top" href="https://issues.apache.org/jira/browse/LUCENE-969">LUCENE-969</a>
            for details.<p/>
            <p/>Typical Token reuse patterns:
            <list type="bullet">
            <item> Copying text from a string (type is reset to <see cref="F:Lucene.Net.Analysis.Token.DEFAULT_TYPE"/> if not
            specified):<br/>
            <code>
            return reusableToken.reinit(string, startOffset, endOffset[, type]);
            </code>
            </item>
            <item> Copying some text from a string (type is reset to <see cref="F:Lucene.Net.Analysis.Token.DEFAULT_TYPE"/>
            if not specified):<br/>
            <code>
            return reusableToken.reinit(string, 0, string.length(), startOffset, endOffset[, type]);
            </code>
            </item>
            <item> Copying text from char[] buffer (type is reset to <see cref="F:Lucene.Net.Analysis.Token.DEFAULT_TYPE"/>
            if not specified):<br/>
            <code>
            return reusableToken.reinit(buffer, 0, buffer.length, startOffset, endOffset[, type]);
            </code>
            </item>
            <item> Copying some text from a char[] buffer (type is reset to
            <see cref="F:Lucene.Net.Analysis.Token.DEFAULT_TYPE"/> if not specified):<br/>
            <code>
            return reusableToken.reinit(buffer, start, end - start, startOffset, endOffset[, type]);
            </code>
            </item>
            <item> Copying from one Token to another (type is reset to
            <see cref="F:Lucene.Net.Analysis.Token.DEFAULT_TYPE"/> if not specified):<br/>
            <code>
            return reusableToken.reinit(source.termBuffer(), 0, source.termLength(), source.startOffset(), source.endOffset()[, source.type()]);
            </code>
            </item>
            </list>
            A few things to note:
            <list type="bullet">
            <item>clear() initializes all of the fields to default values. This was changed in contrast to Lucene 2.4, but should affect no one.</item>
            <item>Because <c>TokenStreams</c> can be chained, one cannot assume that the <c>Token's</c> current type is correct.</item>
            <item>The startOffset and endOffset represent the start and end offset in the
            source text, so be careful when adjusting them.</item>
            <item>When caching a reusable token, clone it. When injecting a cached token into a stream that can be reset, clone it again.</item>
            </list>
            <p/>
            </summary>
            <seealso cref="T:Lucene.Net.Index.Payload">
            </seealso>
        </member>
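The reuse pattern the Token documentation describes (one mutable buffer, overwritten in place via `reinit`, grown only when needed) can be sketched in plain Python. This is a hypothetical `ReusableToken` class illustrating the mechanics, not the Lucene.Net implementation.

```python
class ReusableToken:
    """Sketch of Lucene's reusable-Token pattern: one object, buffer mutated in place."""

    def __init__(self, capacity=16):
        self.buffer = [""] * capacity   # stands in for the char[] termBuffer
        self.length = 0                 # number of valid characters in buffer
        self.start = self.end = 0
        self.type = "word"

    def resize_term_buffer(self, new_size):
        # Grow to at least new_size, preserving existing content (never shrink).
        if new_size > len(self.buffer):
            self.buffer.extend([""] * (new_size - len(self.buffer)))
        return self.buffer

    def reinit(self, text, start_offset, end_offset, type_="word"):
        # Overwrite the buffer in place instead of allocating a new token.
        self.resize_term_buffer(len(text))
        for i, ch in enumerate(text):
            self.buffer[i] = ch
        self.length = len(text)
        self.start, self.end, self.type = start_offset, end_offset, type_
        return self

    def term(self):
        # Only the first `length` slots are valid; the rest is stale capacity.
        return "".join(self.buffer[:self.length])

tok = ReusableToken(4)
tok.reinit("lucene", 0, 6)
```

The point of the pattern is visible in `reinit`: the second call below reuses the already-grown buffer, which is what saves Lucene the per-term allocation and GC cost mentioned above.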
        <member name="T:Lucene.Net.Util.AttributeImpl">
            <summary> Base class for Attributes that can be added to a 
            <see cref="T:Lucene.Net.Util.AttributeSource"/>.
            <p/>
            Attributes are used to add data in a dynamic, yet type-safe way to a source
            of usually streamed objects, e.g. a <see cref="T:Lucene.Net.Analysis.TokenStream"/>.
            </summary>
        </member>
        <member name="T:Lucene.Net.Util.Attribute">
            <summary> Base interface for attributes.</summary>
        </member>
        <member name="M:Lucene.Net.Util.AttributeImpl.Clear">
            <summary> Clears the values in this AttributeImpl and resets it to its 
            default value. If this implementation implements more than one Attribute interface
            it clears all of them.
            </summary>
        </member>
        <member name="M:Lucene.Net.Util.AttributeImpl.ToString">
            <summary> The default implementation of this method accesses all declared
            fields of this object and prints the values in the following syntax:
            
            <code>
            public String toString() {
            return "start=" + startOffset + ",end=" + endOffset;
            }
            </code>
            
            This method may be overridden by subclasses.
            </summary>
        </member>
        <member name="M:Lucene.Net.Util.AttributeImpl.GetHashCode">
            <summary> Subclasses must implement this method and should compute
            a hashCode similar to this:
            <code>
            public int hashCode() {
            int code = startOffset;
            code = code * 31 + endOffset;
            return code;
            }
            </code> 
            
            see also <see cref="M:Lucene.Net.Util.AttributeImpl.Equals(System.Object)"/>
            </summary>
        </member>
        <member name="M:Lucene.Net.Util.AttributeImpl.Equals(System.Object)">
            <summary> All values used for computation of <see cref="M:Lucene.Net.Util.AttributeImpl.GetHashCode"/> 
            should be checked here for equality.
            
            see also <see cref="M:System.Object.Equals(System.Object)"/>
            </summary>
        </member>
        <member name="M:Lucene.Net.Util.AttributeImpl.CopyTo(Lucene.Net.Util.AttributeImpl)">
            <summary> Copies the values from this Attribute into the passed-in
            target attribute. The target implementation must support all the
            Attributes this implementation supports.
            </summary>
        </member>
        <member name="M:Lucene.Net.Util.AttributeImpl.Clone">
            <summary> Shallow clone. Subclasses must override this if they
            need to clone any members deeply.
            </summary>
        </member>
        <member name="T:Lucene.Net.Analysis.Tokenattributes.TermAttribute">
            <summary> The term text of a Token.</summary>
        </member>
        <member name="M:Lucene.Net.Analysis.Tokenattributes.TermAttribute.Term">
            <summary>Returns the Token's term text.
            
            This method has a performance penalty
            because the text is stored internally in a char[].  If
            possible, use <see cref="M:Lucene.Net.Analysis.Tokenattributes.TermAttribute.TermBuffer"/> and <see cref="M:Lucene.Net.Analysis.Tokenattributes.TermAttribute.TermLength"/>
            directly instead.  If you really need a
            String, use this method, which is nothing more than
            a convenience call to <b>new String(token.termBuffer(), 0, token.termLength())</b>
            </summary>
        </member>
        <member name="M:Lucene.Net.Analysis.Tokenattributes.TermAttribute.SetTermBuffer(System.Char[],System.Int32,System.Int32)">
            <summary>Copies the contents of buffer, starting at offset for
            length characters, into the termBuffer array.
            </summary>
            <param name="buffer">the buffer to copy
            </param>
            <param name="offset">the index in the buffer of the first character to copy
            </param>
            <param name="length">the number of characters to copy
            </param>
        </member>
        <member name="M:Lucene.Net.Analysis.Tokenattributes.TermAttribute.SetTermBuffer(System.String)">
            <summary>Copies the contents of buffer into the termBuffer array.</summary>
            <param name="buffer">the buffer to copy
            </param>
        </member>
        <member name="M:Lucene.Net.Analysis.Tokenattributes.TermAttribute.SetTermBuffer(System.String,System.Int32,System.Int32)">
            <summary>Copies the contents of buffer, starting at offset and continuing
            for length characters, into the termBuffer array.
            </summary>
            <param name="buffer">the buffer to copy
            </param>
            <param name="offset">the index in the buffer of the first character to copy
            </param>
            <param name="length">the number of characters to copy
            </param>
        </member>
        <member name="M:Lucene.Net.Analysis.Tokenattributes.TermAttribute.TermBuffer">
            <summary>Returns the internal termBuffer character array which
            you can then directly alter.  If the array is too
            small for your token, use <see cref="M:Lucene.Net.Analysis.Tokenattributes.TermAttribute.ResizeTermBuffer(System.Int32)"/>
            to increase it.  After
            altering the buffer be sure to call <see cref="M:Lucene.Net.Analysis.Tokenattributes.TermAttribute.SetTermLength(System.Int32)"/>
            to record the number of valid
            characters that were placed into the termBuffer. 
            </summary>
        </member>
        <member name="M:Lucene.Net.Analysis.Tokenattributes.TermAttribute.ResizeTermBuffer(System.Int32)">
            <summary>Grows the termBuffer to at least size newSize, preserving the
            existing content. Note: If the next operation is to change
            the contents of the term buffer use
            <see cref="M:Lucene.Net.Analysis.Tokenattributes.TermAttribute.SetTermBuffer(System.Char[],System.Int32,System.Int32)"/>,
            <see cref="M:Lucene.Net.Analysis.Tokenattributes.TermAttribute.SetTermBuffer(System.String)"/>, or
            <see cref="M:Lucene.Net.Analysis.Tokenattributes.TermAttribute.SetTermBuffer(System.String,System.Int32,System.Int32)"/>
            to optimally combine the resize with the setting of the termBuffer.
            </summary>
            <param name="newSize">minimum size of the new termBuffer
            </param>
            <returns> newly created termBuffer with length &gt;= newSize
            </returns>
        </member>
        <member name="M:Lucene.Net.Analysis.Tokenattributes.TermAttribute.TermLength">
            <summary>Return number of valid characters (length of the term)
            in the termBuffer array. 
            </summary>
        </member>
        <member name="M:Lucene.Net.Analysis.Tokenattributes.TermAttribute.SetTermLength(System.Int32)">
            <summary>Set number of valid characters (length of the term) in
            the termBuffer array. Use this to truncate the termBuffer
            or to synchronize with external manipulation of the termBuffer.
            Note: to grow the size of the array,
            use <see cref="M:Lucene.Net.Analysis.Tokenattributes.TermAttribute.ResizeTermBuffer(System.Int32)"/> first.
            </summary>
            <param name="length">the truncated length
            </param>
        </member>
        <member name="T:Lucene.Net.Analysis.Tokenattributes.TypeAttribute">
            <summary> A Token's lexical type. The default value is "word". </summary>
        </member>
        <member name="M:Lucene.Net.Analysis.Tokenattributes.TypeAttribute.Type">
            <summary>Returns this Token's lexical type.  Defaults to "word". </summary>
        </member>
        <member name="M:Lucene.Net.Analysis.Tokenattributes.TypeAttribute.SetType(System.String)">
            <summary>Set the lexical type.</summary>
            <seealso cref="M:Lucene.Net.Analysis.Tokenattributes.TypeAttribute.Type">
            </seealso>
        </member>
        <member name="T:Lucene.Net.Analysis.Tokenattributes.PositionIncrementAttribute">
            <summary>The positionIncrement determines the position of this token
            relative to the previous Token in a TokenStream, used in phrase
            searching.
            
            <p/>The default value is one.
            
            <p/>Some common uses for this are:<list>
            
            <item>Set it to zero to put multiple terms in the same position.  This is
            useful if, e.g., a word has multiple stems.  Searches for phrases
            including either stem will match.  In this case, all but the first stem's
            increment should be set to zero: the increment of the first instance
            should be one.  Repeating a token with an increment of zero can also be
            used to boost the scores of matches on that token.</item>
            
            <item>Set it to values greater than one to inhibit exact phrase matches.
            If, for example, one does not want phrases to match across removed stop
            words, then one could build a stop word filter that removes stop words and
            also sets the increment to the number of stop words removed before each
            non-stop word.  Then exact phrase queries will only match when the terms
            occur with no intervening stop words.</item>
            
            </list>
            
            </summary>
            <seealso cref="T:Lucene.Net.Index.TermPositions">
            </seealso>
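            <example>
            A minimal sketch (hypothetical custom-filter code, not part of this library) of stacking a synonym on the same position by setting the increment of the injected token to zero:
            <code>
            // Inside a custom TokenFilter: the original token ("quick") keeps
            // the default increment of 1; the injected synonym ("fast") is
            // emitted next with an increment of 0, so both occupy one position.
            PositionIncrementAttribute posIncr =
                (PositionIncrementAttribute) AddAttribute(typeof(PositionIncrementAttribute));
            TermAttribute term = (TermAttribute) AddAttribute(typeof(TermAttribute));
            posIncr.SetPositionIncrement(0);
            term.SetTermBuffer("fast");
            </code>
            </example>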
        </member>
        <member name="M:Lucene.Net.Analysis.Tokenattributes.PositionIncrementAttribute.SetPositionIncrement(System.Int32)">
            <summary>Set the position increment. The default value is one.
            
            </summary>
            <param name="positionIncrement">the distance from the prior term
            </param>
        </member>
        <member name="M:Lucene.Net.Analysis.Tokenattributes.PositionIncrementAttribute.GetPositionIncrement">
            <summary>Returns the position increment of this Token.</summary>
            <seealso cref="M:Lucene.Net.Analysis.Tokenattributes.PositionIncrementAttribute.SetPositionIncrement(System.Int32)">
            </seealso>
        </member>
        <member name="T:Lucene.Net.Analysis.Tokenattributes.FlagsAttribute">
            <summary> This attribute can be used to pass different flags down the <see cref="T:Lucene.Net.Analysis.Tokenizer"/> chain,
            e.g. from one TokenFilter to another.
            </summary>
        </member>
        <member name="M:Lucene.Net.Analysis.Tokenattributes.FlagsAttribute.GetFlags">
            <summary> EXPERIMENTAL:  While we think this is here to stay, we may want to change it to be a long.
            <p/>
            
            Get the bitset for any bits that have been set.  This is completely distinct from <see cref="M:Lucene.Net.Analysis.Tokenattributes.TypeAttribute.Type"/>, although they do share similar purposes.
            The flags can be used to encode information about the token for use by other <see cref="T:Lucene.Net.Analysis.TokenFilter"/>s.
            
            
            </summary>
            <returns> The bits
            </returns>
        </member>
        <member name="M:Lucene.Net.Analysis.Tokenattributes.FlagsAttribute.SetFlags(System.Int32)">
            <seealso cref="M:Lucene.Net.Analysis.Tokenattributes.FlagsAttribute.GetFlags">
            </seealso>
        </member>
        <member name="T:Lucene.Net.Analysis.Tokenattributes.OffsetAttribute">
            <summary> The start and end character offset of a Token. </summary>
        </member>
        <member name="M:Lucene.Net.Analysis.Tokenattributes.OffsetAttribute.StartOffset">
            <summary>Returns this Token's starting offset, the position of the first character
            corresponding to this token in the source text.
            Note that the difference between endOffset() and startOffset() may not be
            equal to termText.length(), as the term text may have been altered by a
            stemmer or some other filter. 
            </summary>
        </member>
        <member name="M:Lucene.Net.Analysis.Tokenattributes.OffsetAttribute.SetOffset(System.Int32,System.Int32)">
            <summary>Set the starting and ending offset.
            See StartOffset() and EndOffset()
            </summary>
        </member>
        <member name="M:Lucene.Net.Analysis.Tokenattributes.OffsetAttribute.EndOffset">
            <summary>Returns this Token's ending offset, one greater than the position of the
            last character corresponding to this token in the source text. The length
            of the token in the source text is (endOffset - startOffset). 
            </summary>
        </member>
        <member name="T:Lucene.Net.Analysis.Tokenattributes.PayloadAttribute">
            <summary> The payload of a Token. See also <see cref="T:Lucene.Net.Index.Payload"/>.</summary>
        </member>
        <member name="M:Lucene.Net.Analysis.Tokenattributes.PayloadAttribute.GetPayload">
            <summary> Returns this Token's payload.</summary>
        </member>
        <member name="M:Lucene.Net.Analysis.Tokenattributes.PayloadAttribute.SetPayload(Lucene.Net.Index.Payload)">
            <summary> Sets this Token's payload.</summary>
        </member>
        <member name="F:Lucene.Net.Analysis.Token.termText">
            <deprecated> We will remove this when we remove the
            deprecated APIs 
            </deprecated>
        </member>
        <member name="F:Lucene.Net.Analysis.Token.termBuffer">
            <summary> Characters for the term text.</summary>
            <deprecated> This will be made private. Instead, use:
            <see cref="M:Lucene.Net.Analysis.Token.TermBuffer"/>, 
            <see cref="M:Lucene.Net.Analysis.Token.SetTermBuffer(System.Char[],System.Int32,System.Int32)"/>,
            <see cref="M:Lucene.Net.Analysis.Token.SetTermBuffer(System.String)"/>, or
            <see cref="M:Lucene.Net.Analysis.Token.SetTermBuffer(System.String,System.Int32,System.Int32)"/>
            </deprecated>
        </member>
        <member name="F:Lucene.Net.Analysis.Token.termLength">
            <summary> Length of term text in the buffer.</summary>
            <deprecated> This will be made private. Instead, use:
            <see cref="M:Lucene.Net.Analysis.Token.TermLength"/>, or <see cref="M:Lucene.Net.Analysis.Token.SetTermLength(System.Int32)"/>.
            </deprecated>
        </member>
        <member name="F:Lucene.Net.Analysis.Token.startOffset">
            <summary> Start in source text.</summary>
            <deprecated> This will be made private. Instead, use:
            <see cref="M:Lucene.Net.Analysis.Token.StartOffset"/>, or <see cref="M:Lucene.Net.Analysis.Token.SetStartOffset(System.Int32)"/>.
            </deprecated>
        </member>
        <member name="F:Lucene.Net.Analysis.Token.endOffset">
            <summary> End in source text.</summary>
            <deprecated> This will be made private. Instead, use:
            <see cref="M:Lucene.Net.Analysis.Token.EndOffset"/>, or <see cref="M:Lucene.Net.Analysis.Token.SetEndOffset(System.Int32)"/>.
            </deprecated>
        </member>
        <member name="F:Lucene.Net.Analysis.Token.type">
            <summary> The lexical type of the token.</summary>
            <deprecated> This will be made private. Instead, use:
            <see cref="M:Lucene.Net.Analysis.Token.Type"/>, or <see cref="M:Lucene.Net.Analysis.Token.SetType(System.String)"/>.
            </deprecated>
        </member>
        <member name="F:Lucene.Net.Analysis.Token.payload">
            <deprecated> This will be made private. Instead, use:
            <see cref="M:Lucene.Net.Analysis.Token.GetPayload"/>, or <see cref="M:Lucene.Net.Analysis.Token.SetPayload(Lucene.Net.Index.Payload)"/>.
            </deprecated>
        </member>
        <member name="F:Lucene.Net.Analysis.Token.positionIncrement">
            <deprecated> This will be made private. Instead, use:
            <see cref="M:Lucene.Net.Analysis.Token.GetPositionIncrement"/>, or <see cref="M:Lucene.Net.Analysis.Token.SetPositionIncrement(System.Int32)"/>.
            </deprecated>
        </member>
        <member name="M:Lucene.Net.Analysis.Token.#ctor">
            <summary>Constructs a Token with null text. </summary>
        </member>
        <member name="M:Lucene.Net.Analysis.Token.#ctor(System.Int32,System.Int32)">
            <summary>Constructs a Token with null text and start &amp; end
            offsets.
            </summary>
            <param name="start">start offset in the source text
            </param>
            <param name="end">end offset in the source text 
            </param>
        </member>
        <member name="M:Lucene.Net.Analysis.Token.#ctor(System.Int32,System.Int32,System.String)">
            <summary>Constructs a Token with null text and start &amp; end
            offsets plus the Token type.
            </summary>
            <param name="start">start offset in the source text
            </param>
            <param name="end">end offset in the source text
            </param>
            <param name="typ">the lexical type of this Token 
            </param>
        </member>
        <member name="M:Lucene.Net.Analysis.Token.#ctor(System.Int32,System.Int32,System.Int32)">
            <summary> Constructs a Token with null text and start &amp; end
            offsets plus flags. NOTE: flags is EXPERIMENTAL.
            </summary>
            <param name="start">start offset in the source text
            </param>
            <param name="end">end offset in the source text
            </param>
            <param name="flags">The bits to set for this token
            </param>
        </member>
        <member name="M:Lucene.Net.Analysis.Token.#ctor(System.String,System.Int32,System.Int32)">
            <summary>Constructs a Token with the given term text, and start
            &amp; end offsets.  The type defaults to "word."
            <b>NOTE:</b> for better indexing speed you should
            instead use the char[] termBuffer methods to set the
            term text.
            </summary>
            <param name="text">term text
            </param>
            <param name="start">start offset
            </param>
            <param name="end">end offset
            </param>
        </member>
        <member name="M:Lucene.Net.Analysis.Token.#ctor(System.String,System.Int32,System.Int32,System.String)">
            <summary>Constructs a Token with the given text, start and end
            offsets, &amp; type.  <b>NOTE:</b> for better indexing
            speed you should instead use the char[] termBuffer
            methods to set the term text.
            </summary>
            <param name="text">term text
            </param>
            <param name="start">start offset
            </param>
            <param name="end">end offset
            </param>
            <param name="typ">token type
            </param>
        </member>
        <member name="M:Lucene.Net.Analysis.Token.#ctor(System.String,System.Int32,System.Int32,System.Int32)">
            <summary>  Constructs a Token with the given text, start and end
            offsets, &amp; flags.  <b>NOTE:</b> for better indexing
            speed you should instead use the char[] termBuffer
            methods to set the term text.
            </summary>
            <param name="text">
            </param>
            <param name="start">
            </param>
            <param name="end">
            </param>
            <param name="flags">token type bits
            </param>
        </member>
        <member name="M:Lucene.Net.Analysis.Token.#ctor(System.Char[],System.Int32,System.Int32,System.Int32,System.Int32)">
            <summary>  Constructs a Token with the given term buffer (offset
            &amp; length), start and end
            offsets.
            </summary>
            <param name="startTermBuffer">
            </param>
            <param name="termBufferOffset">
            </param>
            <param name="termBufferLength">
            </param>
            <param name="start">
            </param>
            <param name="end">
            </param>
        </member>
        <member name="M:Lucene.Net.Analysis.Token.SetPositionIncrement(System.Int32)">
            <summary>Set the position increment.  This determines the position of this token
            relative to the previous Token in a <see cref="T:Lucene.Net.Analysis.TokenStream"/>, used in phrase
            searching.
            
            <p/>The default value is one.
            
            <p/>Some common uses for this are:<list>
            
            <item>Set it to zero to put multiple terms in the same position.  This is
            useful if, e.g., a word has multiple stems.  Searches for phrases
            including either stem will match.  In this case, all but the first stem's
            increment should be set to zero: the increment of the first instance
            should be one.  Repeating a token with an increment of zero can also be
            used to boost the scores of matches on that token.</item>
            
            <item>Set it to values greater than one to inhibit exact phrase matches.
            If, for example, one does not want phrases to match across removed stop
            words, then one could build a stop word filter that removes stop words and
            also sets the increment to the number of stop words removed before each
            non-stop word.  Then exact phrase queries will only match when the terms
            occur with no intervening stop words.</item>
            
            </list>
            </summary>
            <param name="positionIncrement">the distance from the prior term
            </param>
            <seealso cref="T:Lucene.Net.Index.TermPositions">
            </seealso>
        </member>
        <member name="M:Lucene.Net.Analysis.Token.GetPositionIncrement">
            <summary>Returns the position increment of this Token.</summary>
            <seealso cref="M:Lucene.Net.Analysis.Token.SetPositionIncrement(System.Int32)">
            </seealso>
        </member>
        <member name="M:Lucene.Net.Analysis.Token.SetTermText(System.String)">
            <summary>Sets the Token's term text.  <b>NOTE:</b> for better
            indexing speed you should instead use the char[]
            termBuffer methods to set the term text.
            </summary>
            <deprecated> use <see cref="M:Lucene.Net.Analysis.Token.SetTermBuffer(System.Char[],System.Int32,System.Int32)"/> or
            <see cref="M:Lucene.Net.Analysis.Token.SetTermBuffer(System.String)"/> or
            <see cref="M:Lucene.Net.Analysis.Token.SetTermBuffer(System.String,System.Int32,System.Int32)"/>.
            </deprecated>
        </member>
        <member name="M:Lucene.Net.Analysis.Token.TermText">
            <summary>Returns the Token's term text.
            
            </summary>
            <deprecated> This method now has a performance penalty
            because the text is stored internally in a char[].  If
            possible, use <see cref="M:Lucene.Net.Analysis.Token.TermBuffer"/> and <see cref="M:Lucene.Net.Analysis.Token.TermLength"/>
            directly instead.  If you really need a
            String, use <see cref="M:Lucene.Net.Analysis.Token.Term"/>
            </deprecated>
        </member>
        <member name="M:Lucene.Net.Analysis.Token.Term">
            <summary>Returns the Token's term text.
            
            This method has a performance penalty
            because the text is stored internally in a char[].  If
            possible, use <see cref="M:Lucene.Net.Analysis.Token.TermBuffer"/> and <see cref="M:Lucene.Net.Analysis.Token.TermLength"/>
            directly instead.  If you really need a
            String, use this method, which is nothing more than
            a convenience call to <b>new String(token.TermBuffer(), 0, token.TermLength())</b>
            </summary>
        </member>
        <member name="M:Lucene.Net.Analysis.Token.SetTermBuffer(System.Char[],System.Int32,System.Int32)">
            <summary>Copies the contents of buffer, starting at offset for
            length characters, into the termBuffer array.
            </summary>
            <param name="buffer">the buffer to copy
            </param>
            <param name="offset">the index in the buffer of the first character to copy
            </param>
            <param name="length">the number of characters to copy
            </param>
        </member>
        <member name="M:Lucene.Net.Analysis.Token.SetTermBuffer(System.String)">
            <summary>Copies the contents of buffer into the termBuffer array.</summary>
            <param name="buffer">the buffer to copy
            </param>
        </member>
        <member name="M:Lucene.Net.Analysis.Token.SetTermBuffer(System.String,System.Int32,System.Int32)">
            <summary>Copies the contents of buffer, starting at offset and continuing
            for length characters, into the termBuffer array.
            </summary>
            <param name="buffer">the buffer to copy
            </param>
            <param name="offset">the index in the buffer of the first character to copy
            </param>
            <param name="length">the number of characters to copy
            </param>
        </member>
        <member name="M:Lucene.Net.Analysis.Token.TermBuffer">
            <summary>Returns the internal termBuffer character array which
            you can then directly alter.  If the array is too
            small for your token, use <see cref="M:Lucene.Net.Analysis.Token.ResizeTermBuffer(System.Int32)"/>
            to increase it.  After
            altering the buffer be sure to call <see cref="M:Lucene.Net.Analysis.Token.SetTermLength(System.Int32)"/>
            to record the number of valid
            characters that were placed into the termBuffer. 
            </summary>
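            <example>
            A minimal sketch (assumed usage based on the members documented here) of editing a token in place through its term buffer, e.g. uppercasing it:
            <code>
            char[] buffer = token.TermBuffer();   // direct access to the internal array
            int length = token.TermLength();      // number of valid characters
            for (int i = 0; i &lt; length; i++)
                buffer[i] = System.Char.ToUpper(buffer[i]);
            token.SetTermLength(length);          // record the valid length after editing
            </code>
            </example>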
        </member>
        <member name="M:Lucene.Net.Analysis.Token.ResizeTermBuffer(System.Int32)">
            <summary>Grows the termBuffer to at least size newSize, preserving the
            existing content. Note: If the next operation is to change
            the contents of the term buffer use
            <see cref="M:Lucene.Net.Analysis.Token.SetTermBuffer(System.Char[],System.Int32,System.Int32)"/>,
            <see cref="M:Lucene.Net.Analysis.Token.SetTermBuffer(System.String)"/>, or
            <see cref="M:Lucene.Net.Analysis.Token.SetTermBuffer(System.String,System.Int32,System.Int32)"/>
            to optimally combine the resize with the setting of the termBuffer.
            </summary>
            <param name="newSize">minimum size of the new termBuffer
            </param>
            <returns> newly created termBuffer with length &gt;= newSize
            </returns>
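            <example>
            A minimal sketch (hypothetical variable names) of growing the buffer before writing characters into it directly:
            <code>
            // Ensure capacity first; ResizeTermBuffer preserves existing content
            // and returns a buffer with length &gt;= newLength.
            char[] buffer = token.ResizeTermBuffer(newLength);
            // ... copy up to newLength characters into buffer ...
            token.SetTermLength(newLength);       // record how many characters are valid
            </code>
            </example>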
        </member>
        <member name="M:Lucene.Net.Analysis.Token.GrowTermBuffer(System.Int32)">
            <summary>Allocates a char[] buffer of at least newSize, without preserving the existing content.
            It is always used in places that set the content.
            </summary>
            <param name="newSize">minimum size of the buffer
            </param>
        </member>
        <member name="M:Lucene.Net.Analysis.Token.TermLength">
            <summary>Return number of valid characters (length of the term)
            in the termBuffer array. 
            </summary>
        </member>
        <member name="M:Lucene.Net.Analysis.Token.SetTermLength(System.Int32)">
            <summary>Set number of valid characters (length of the term) in
            the termBuffer array. Use this to truncate the termBuffer
            or to synchronize with external manipulation of the termBuffer.
            Note: to grow the size of the array,
            use <see cref="M:Lucene.Net.Analysis.Token.ResizeTermBuffer(System.Int32)"/> first.
            </summary>
            <param name="length">the truncated length
            </param>
        </member>
        <member name="M:Lucene.Net.Analysis.Token.StartOffset">
            <summary>Returns this Token's starting offset, the position of the first character
            corresponding to this token in the source text.
            Note that the difference between endOffset() and startOffset() may not be
            equal to termText.length(), as the term text may have been altered by a
            stemmer or some other filter. 
            </summary>
        </member>
        <member name="M:Lucene.Net.Analysis.Token.SetStartOffset(System.Int32)">
            <summary>Set the starting offset.</summary>
            <seealso cref="M:Lucene.Net.Analysis.Token.StartOffset">
            </seealso>
        </member>
        <member name="M:Lucene.Net.Analysis.Token.EndOffset">
            <summary>Returns this Token's ending offset, one greater than the position of the
            last character corresponding to this token in the source text. The length
            of the token in the source text is (endOffset - startOffset). 
            </summary>
        </member>
        <member name="M:Lucene.Net.Analysis.Token.SetEndOffset(System.Int32)">
            <summary>Set the ending offset.</summary>
            <seealso cref="M:Lucene.Net.Analysis.Token.EndOffset">
            </seealso>
        </member>
        <member name="M:Lucene.Net.Analysis.Token.SetOffset(System.Int32,System.Int32)">
            <summary>Set the starting and ending offset.
            See StartOffset() and EndOffset()
            </summary>
        </member>
        <member name="M:Lucene.Net.Analysis.Token.Type">
            <summary>Returns this Token's lexical type.  Defaults to "word". </summary>
        </member>
        <member name="M:Lucene.Net.Analysis.Token.SetType(System.String)">
            <summary>Set the lexical type.</summary>
            <seealso cref="M:Lucene.Net.Analysis.Token.Type">
            </seealso>
        </member>
        <member name="M:Lucene.Net.Analysis.Token.GetFlags">
            <summary> EXPERIMENTAL:  While we think this is here to stay, we may want to change it to be a long.
            <p/>
            
            Get the bitset for any bits that have been set.  This is completely distinct from <see cref="M:Lucene.Net.Analysis.Token.Type"/>, although they do share similar purposes.
            The flags can be used to encode information about the token for use by other <see cref="T:Lucene.Net.Analysis.TokenFilter"/>s.
            
            
            </summary>
            <returns> The bits
            </returns>
        </member>
        <member name="M:Lucene.Net.Analysis.Token.SetFlags(System.Int32)">
            <seealso cref="M:Lucene.Net.Analysis.Token.GetFlags">
            </seealso>
        </member>
        <member name="M:Lucene.Net.Analysis.Token.GetPayload">
            <summary> Returns this Token's payload.</summary>
        </member>
        <member name="M:Lucene.Net.Analysis.Token.SetPayload(Lucene.Net.Index.Payload)">
            <summary> Sets this Token's payload.</summary>
        </member>
        <member name="M:Lucene.Net.Analysis.Token.Clear">
            <summary>Resets the term text, payload, flags, and positionIncrement,
            startOffset, endOffset and token type to default.
            </summary>
        </member>
        <member name="M:Lucene.Net.Analysis.Token.Clone(System.Char[],System.Int32,System.Int32,System.Int32,System.Int32)">
            <summary>Makes a clone, but replaces the term buffer &amp;
            start/end offset in the process.  This is more
            efficient than doing a full clone (and then calling
            setTermBuffer) because it saves a wasted copy of the old
            termBuffer. 
            </summary>
        </member>
        <member name="M:Lucene.Net.Analysis.Token.Reinit(System.Char[],System.Int32,System.Int32,System.Int32,System.Int32,System.String)">
            <summary>Shorthand for calling <see cref="M:Lucene.Net.Analysis.Token.Clear"/>,
            <see cref="M:Lucene.Net.Analysis.Token.SetTermBuffer(System.Char[],System.Int32,System.Int32)"/>,
            <see cref="M:Lucene.Net.Analysis.Token.SetStartOffset(System.Int32)"/>,
            <see cref="M:Lucene.Net.Analysis.Token.SetEndOffset(System.Int32)"/>,
            <see cref="M:Lucene.Net.Analysis.Token.SetType(System.String)"/>
            </summary>
            <returns> this Token instance 
            </returns>
        </member>
        <member name="M:Lucene.Net.Analysis.Token.Reinit(System.Char[],System.Int32,System.Int32,System.Int32,System.Int32)">
            <summary>Shorthand for calling <see cref="M:Lucene.Net.Analysis.Token.Clear"/>,
            <see cref="M:Lucene.Net.Analysis.Token.SetTermBuffer(System.Char[],System.Int32,System.Int32)"/>,
            <see cref="M:Lucene.Net.Analysis.Token.SetStartOffset(System.Int32)"/>,
            <see cref="M:Lucene.Net.Analysis.Token.SetEndOffset(System.Int32)"/>, and
            <see cref="M:Lucene.Net.Analysis.Token.SetType(System.String)"/> with Token.DEFAULT_TYPE
            </summary>
            <returns> this Token instance 
            </returns>
        </member>
        <member name="M:Lucene.Net.Analysis.Token.Reinit(System.String,System.Int32,System.Int32,System.String)">
            <summary>Shorthand for calling <see cref="M:Lucene.Net.Analysis.Token.Clear"/>,
            <see cref="M:Lucene.Net.Analysis.Token.SetTermBuffer(System.String)"/>,
            <see cref="M:Lucene.Net.Analysis.Token.SetStartOffset(System.Int32)"/>,
            <see cref="M:Lucene.Net.Analysis.Token.SetEndOffset(System.Int32)"/>, and
            <see cref="M:Lucene.Net.Analysis.Token.SetType(System.String)"/>
            </summary>
            <returns> this Token instance 
            </returns>
        </member>
        <member name="M:Lucene.Net.Analysis.Token.Reinit(System.String,System.Int32,System.Int32,System.Int32,System.Int32,System.String)">
            <summary>Shorthand for calling <see cref="M:Lucene.Net.Analysis.Token.Clear"/>,
            <see cref="M:Lucene.Net.Analysis.Token.SetTermBuffer(System.String,System.Int32,System.Int32)"/>,
            <see cref="M:Lucene.Net.Analysis.Token.SetStartOffset(System.Int32)"/>,
            <see cref="M:Lucene.Net.Analysis.Token.SetEndOffset(System.Int32)"/>, and
            <see cref="M:Lucene.Net.Analysis.Token.SetType(System.String)"/>
            </summary>
            <returns> this Token instance 
            </returns>
        </member>
        <member name="M:Lucene.Net.Analysis.Token.Reinit(System.String,System.Int32,System.Int32)">
            <summary>Shorthand for calling <see cref="M:Lucene.Net.Analysis.Token.Clear"/>,
            <see cref="M:Lucene.Net.Analysis.Token.SetTermBuffer(System.String)"/>,
            <see cref="M:Lucene.Net.Analysis.Token.SetStartOffset(System.Int32)"/>,
            <see cref="M:Lucene.Net.Analysis.Token.SetEndOffset(System.Int32)"/>, and
            <see cref="M:Lucene.Net.Analysis.Token.SetType(System.String)"/> with Token.DEFAULT_TYPE
            </summary>
            <returns> this Token instance 
            </returns>
        </member>
        <member name="M:Lucene.Net.Analysis.Token.Reinit(System.String,System.Int32,System.Int32,System.Int32,System.Int32)">
            <summary>Shorthand for calling <see cref="M:Lucene.Net.Analysis.Token.Clear"/>,
            <see cref="M:Lucene.Net.Analysis.Token.SetTermBuffer(System.String,System.Int32,System.Int32)"/>,
            <see cref="M:Lucene.Net.Analysis.Token.SetStartOffset(System.Int32)"/>,
            <see cref="M:Lucene.Net.Analysis.Token.SetEndOffset(System.Int32)"/>, and
            <see cref="M:Lucene.Net.Analysis.Token.SetType(System.String)"/> with Token.DEFAULT_TYPE
            </summary>
            <returns> this Token instance 
            </returns>
        </member>
        <member name="M:Lucene.Net.Analysis.Token.Reinit(Lucene.Net.Analysis.Token)">
            <summary> Copy the prototype token's fields into this one. Note: Payloads are shared.</summary>
            <param name="prototype">
            </param>
        </member>
        <member name="M:Lucene.Net.Analysis.Token.Reinit(Lucene.Net.Analysis.Token,System.String)">
            <summary> Copy the prototype token's fields into this one, with a different term. Note: Payloads are shared.</summary>
            <param name="prototype">
            </param>
            <param name="newTerm">
            </param>
        </member>
        <member name="M:Lucene.Net.Analysis.Token.Reinit(Lucene.Net.Analysis.Token,System.Char[],System.Int32,System.Int32)">
            <summary> Copy the prototype token's fields into this one, with a different term. Note: Payloads are shared.</summary>
            <param name="prototype">
            </param>
            <param name="newTermBuffer">
            </param>
            <param name="offset">
            </param>
            <param name="length">
            </param>
        </member>
        <member name="T:Lucene.Net.Analysis.Tokenattributes.FlagsAttributeImpl">
            <summary> This attribute can be used to pass different flags down the tokenizer chain,
            e.g. from one TokenFilter to another.
            </summary>
        </member>
        <member name="M:Lucene.Net.Analysis.Tokenattributes.FlagsAttributeImpl.GetFlags">
            <summary> EXPERIMENTAL:  While we think this is here to stay, we may want to change it to be a long.
            <p/>
            
            Get the bitset for any bits that have been set.  This is completely distinct from <see cref="M:Lucene.Net.Analysis.Tokenattributes.TypeAttribute.Type"/>, although they do share similar purposes.
            The flags can be used to encode information about the token for use by other <see cref="T:Lucene.Net.Analysis.TokenFilter"/>s.
            
            
            </summary>
            <returns> The bits
            </returns>
        </member>
        <member name="M:Lucene.Net.Analysis.Tokenattributes.FlagsAttributeImpl.SetFlags(System.Int32)">
            <seealso cref="M:Lucene.Net.Analysis.Tokenattributes.FlagsAttributeImpl.GetFlags">
            </seealso>
        </member>
        <member name="T:Lucene.Net.Analysis.Tokenattributes.OffsetAttributeImpl">
            <summary> The start and end character offset of a Token. </summary>
        </member>
        <member name="M:Lucene.Net.Analysis.Tokenattributes.OffsetAttributeImpl.StartOffset">
            <summary>Returns this Token's starting offset, the position of the first character
            corresponding to this token in the source text.
            Note that the difference between endOffset() and startOffset() may not be
            equal to termText.length(), as the term text may have been altered by a
            stemmer or some other filter. 
            </summary>
        </member>
        <member name="M:Lucene.Net.Analysis.Tokenattributes.OffsetAttributeImpl.SetOffset(System.Int32,System.Int32)">
            <summary>Set the starting and ending offset.
            See StartOffset() and EndOffset()
            </summary>
        </member>
        <member name="M:Lucene.Net.Analysis.Tokenattributes.OffsetAttributeImpl.EndOffset">
            <summary>Returns this Token's ending offset, one greater than the position of the
            last character corresponding to this token in the source text. The length
            of the token in the source text is (endOffset - startOffset). 
            </summary>
        </member>
        <member name="T:Lucene.Net.Analysis.Tokenattributes.PayloadAttributeImpl">
            <summary> The payload of a Token. See also <see cref="T:Lucene.Net.Index.Payload"/>.</summary>
        </member>
        <member name="M:Lucene.Net.Analysis.Tokenattributes.PayloadAttributeImpl.#ctor">
            <summary> Initialize this attribute with no payload.</summary>
        </member>
        <member name="M:Lucene.Net.Analysis.Tokenattributes.PayloadAttributeImpl.#ctor(Lucene.Net.Index.Payload)">
            <summary> Initialize this attribute with the given payload. </summary>
        </member>
        <member name="M:Lucene.Net.Analysis.Tokenattributes.PayloadAttributeImpl.GetPayload">
            <summary> Returns this Token's payload.</summary>
        </member>
        <member name="M:Lucene.Net.Analysis.Tokenattributes.PayloadAttributeImpl.SetPayload(Lucene.Net.Index.Payload)">
            <summary> Sets this Token's payload.</summary>
        </member>
        <member name="T:Lucene.Net.Analysis.Tokenattributes.PositionIncrementAttributeImpl">
            <summary>The positionIncrement determines the position of this token
            relative to the previous Token in a <see cref="T:Lucene.Net.Analysis.TokenStream"/>, used in phrase
            searching.
            
            <p/>The default value is one.
            
            <p/>Some common uses for this are:<list>
            
            <item>Set it to zero to put multiple terms in the same position.  This is
            useful if, e.g., a word has multiple stems.  Searches for phrases
            including either stem will match.  In this case, all but the first stem's
            increment should be set to zero: the increment of the first instance
            should be one.  Repeating a token with an increment of zero can also be
            used to boost the scores of matches on that token.</item>
            
            <item>Set it to values greater than one to inhibit exact phrase matches.
            If, for example, one does not want phrases to match across removed stop
            words, then one could build a stop word filter that removes stop words and
            also sets the increment to the number of stop words removed before each
            non-stop word.  Then exact phrase queries will only match when the terms
            occur with no intervening stop words.</item>
            
            </list>
            </summary>
        </member>
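The two uses listed above can be sketched numerically. A minimal Python illustration (not the Lucene.Net API; the token stream and terms are invented) of how increments map tokens to absolute positions: an increment of 0 stacks a stem on the previous position, while an increment above 1 leaves a gap where stop words were removed.

```python
# Each (term, increment) pair comes from a hypothetical token stream.
stream = [("quick", 1), ("fast", 0), ("fox", 2)]  # a stop word was removed before "fox"

positions = {}
pos = -1  # so the first increment of 1 yields position 0
for term, inc in stream:
    pos += inc
    positions.setdefault(pos, []).append(term)

print(positions)  # {0: ['quick', 'fast'], 2: ['fox']}
```

Because "quick" and "fast" share position 0, a phrase query containing either stem would match, and the gap at position 1 prevents an exact phrase from matching across the removed stop word.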
        <member name="M:Lucene.Net.Analysis.Tokenattributes.PositionIncrementAttributeImpl.SetPositionIncrement(System.Int32)">
            <summary>Set the position increment. The default value is one.
            
            </summary>
            <param name="positionIncrement">the distance from the prior term
            </param>
        </member>
        <member name="M:Lucene.Net.Analysis.Tokenattributes.PositionIncrementAttributeImpl.GetPositionIncrement">
            <summary>Returns the position increment of this Token.</summary>
            <seealso cref="M:Lucene.Net.Analysis.Tokenattributes.PositionIncrementAttributeImpl.SetPositionIncrement(System.Int32)">
            </seealso>
        </member>
        <member name="T:Lucene.Net.Analysis.Tokenattributes.TermAttributeImpl">
            <summary> The term text of a Token.</summary>
        </member>
        <member name="M:Lucene.Net.Analysis.Tokenattributes.TermAttributeImpl.Term">
            <summary>Returns the Token's term text.
            
            This method has a performance penalty
            because the text is stored internally in a char[].  If
            possible, use <see cref="M:Lucene.Net.Analysis.Tokenattributes.TermAttributeImpl.TermBuffer"/> and 
            <see cref="M:Lucene.Net.Analysis.Tokenattributes.TermAttributeImpl.TermLength"/> directly instead.  If you 
            really need a String, use this method, which is nothing more than
            a convenience call to <b>new String(token.termBuffer(), 0, token.termLength())</b>
            </summary>
        </member>
        <member name="M:Lucene.Net.Analysis.Tokenattributes.TermAttributeImpl.SetTermBuffer(System.Char[],System.Int32,System.Int32)">
            <summary>Copies the contents of buffer, starting at offset for
            length characters, into the termBuffer array.
            </summary>
            <param name="buffer">the buffer to copy
            </param>
            <param name="offset">the index in the buffer of the first character to copy
            </param>
            <param name="length">the number of characters to copy
            </param>
        </member>
        <member name="M:Lucene.Net.Analysis.Tokenattributes.TermAttributeImpl.SetTermBuffer(System.String)">
            <summary>Copies the contents of buffer into the termBuffer array.</summary>
            <param name="buffer">the buffer to copy
            </param>
        </member>
        <member name="M:Lucene.Net.Analysis.Tokenattributes.TermAttributeImpl.SetTermBuffer(System.String,System.Int32,System.Int32)">
            <summary>Copies the contents of buffer, starting at offset and continuing
            for length characters, into the termBuffer array.
            </summary>
            <param name="buffer">the buffer to copy
            </param>
            <param name="offset">the index in the buffer of the first character to copy
            </param>
            <param name="length">the number of characters to copy
            </param>
        </member>
        <member name="M:Lucene.Net.Analysis.Tokenattributes.TermAttributeImpl.TermBuffer">
            <summary>Returns the internal termBuffer character array which
            you can then directly alter.  If the array is too
            small for your token, use <see cref="M:Lucene.Net.Analysis.Tokenattributes.TermAttributeImpl.ResizeTermBuffer(System.Int32)"/>
            to increase it.  After
            altering the buffer be sure to call <see cref="M:Lucene.Net.Analysis.Tokenattributes.TermAttributeImpl.SetTermLength(System.Int32)"/>
            to record the number of valid
            characters that were placed into the termBuffer. 
            </summary>
        </member>
        <member name="M:Lucene.Net.Analysis.Tokenattributes.TermAttributeImpl.ResizeTermBuffer(System.Int32)">
            <summary>Grows the termBuffer to at least size newSize, preserving the
            existing content. Note: If the next operation is to change
            the contents of the term buffer use
            <see cref="M:Lucene.Net.Analysis.Tokenattributes.TermAttributeImpl.SetTermBuffer(System.Char[],System.Int32,System.Int32)"/>,
            <see cref="M:Lucene.Net.Analysis.Tokenattributes.TermAttributeImpl.SetTermBuffer(System.String)"/>, or
            <see cref="M:Lucene.Net.Analysis.Tokenattributes.TermAttributeImpl.SetTermBuffer(System.String,System.Int32,System.Int32)"/>
            to optimally combine the resize with the setting of the termBuffer.
            </summary>
            <param name="newSize">minimum size of the new termBuffer
            </param>
            <returns> newly created termBuffer with length &gt;= newSize
            </returns>
        </member>
        <member name="M:Lucene.Net.Analysis.Tokenattributes.TermAttributeImpl.GrowTermBuffer(System.Int32)">
            <summary>Allocates a buffer char[] of at least newSize, without preserving the existing content.
            It is always used in places that set the content.
            </summary>
            <param name="newSize">minimum size of the buffer
            </param>
        </member>
        <member name="M:Lucene.Net.Analysis.Tokenattributes.TermAttributeImpl.TermLength">
            <summary>Return number of valid characters (length of the term)
            in the termBuffer array. 
            </summary>
        </member>
        <member name="M:Lucene.Net.Analysis.Tokenattributes.TermAttributeImpl.SetTermLength(System.Int32)">
            <summary>Set number of valid characters (length of the term) in
            the termBuffer array. Use this to truncate the termBuffer
            or to synchronize with external manipulation of the termBuffer.
            Note: to grow the size of the array,
            use <see cref="M:Lucene.Net.Analysis.Tokenattributes.TermAttributeImpl.ResizeTermBuffer(System.Int32)"/> first.
            </summary>
            <param name="length">the truncated length
            </param>
        </member>
        <member name="T:Lucene.Net.Analysis.Tokenattributes.TypeAttributeImpl">
            <summary> A Token's lexical type. The Default value is "word". </summary>
        </member>
        <member name="M:Lucene.Net.Analysis.Tokenattributes.TypeAttributeImpl.Type">
            <summary>Returns this Token's lexical type.  Defaults to "word". </summary>
        </member>
        <member name="M:Lucene.Net.Analysis.Tokenattributes.TypeAttributeImpl.SetType(System.String)">
            <summary>Set the lexical type.</summary>
            <seealso cref="M:Lucene.Net.Analysis.Tokenattributes.TypeAttributeImpl.Type">
            </seealso>
        </member>
        <member name="T:Lucene.Net.Analysis.TokenWrapper">
            <summary> This class wraps a Token and supplies a single attribute instance
            where the delegate token can be replaced.
            </summary>
            <deprecated> Will be removed, when old TokenStream API is removed.
            </deprecated>
        </member>
        <member name="T:Lucene.Net.Analysis.WhitespaceAnalyzer">
            <summary>An Analyzer that uses <see cref="T:Lucene.Net.Analysis.WhitespaceTokenizer"/>. </summary>
        </member>
        <member name="T:Lucene.Net.Analysis.WhitespaceTokenizer">
            <summary>A WhitespaceTokenizer is a tokenizer that divides text at whitespace.
            Adjacent sequences of non-Whitespace characters form tokens. 
            </summary>
        </member>
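The contract above, maximal runs of non-whitespace characters with their offsets, can be sketched in a few lines of Python (a conceptual stand-in, not the Lucene.Net tokenizer itself):

```python
import re

def whitespace_tokenize(text):
    """Yield (term, start, end) for each maximal run of non-whitespace."""
    for m in re.finditer(r"\S+", text):
        yield m.group(), m.start(), m.end()

print(list(whitespace_tokenize("quick  brown fox")))
# [('quick', 0, 5), ('brown', 7, 12), ('fox', 13, 16)]
```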
        <member name="M:Lucene.Net.Analysis.WhitespaceTokenizer.#ctor(System.IO.TextReader)">
            <summary>Construct a new WhitespaceTokenizer. </summary>
        </member>
        <member name="M:Lucene.Net.Analysis.WhitespaceTokenizer.#ctor(Lucene.Net.Util.AttributeSource,System.IO.TextReader)">
            <summary>Construct a new WhitespaceTokenizer using a given <see cref="T:Lucene.Net.Util.AttributeSource"/>. </summary>
        </member>
        <member name="M:Lucene.Net.Analysis.WhitespaceTokenizer.#ctor(Lucene.Net.Util.AttributeSource.AttributeFactory,System.IO.TextReader)">
            <summary>Construct a new WhitespaceTokenizer using a given <see cref="T:Lucene.Net.Util.AttributeSource.AttributeFactory"/>. </summary>
        </member>
        <member name="M:Lucene.Net.Analysis.WhitespaceTokenizer.IsTokenChar(System.Char)">
            <summary>Collects only characters which do not satisfy
            <see cref="M:System.Char.IsWhiteSpace(System.Char)"/>.
            </summary>
        </member>
        <member name="T:Lucene.Net.Analysis.WordlistLoader">
            <summary> Loader for text files that represent a list of stopwords.
            
            
            </summary>
            <version>  $Id: WordlistLoader.java 706342 2008-10-20 17:19:29Z gsingers $
            </version>
        </member>
        <member name="M:Lucene.Net.Analysis.WordlistLoader.GetWordSet(System.IO.FileInfo)">
            <summary> Loads a text file and adds every line as an entry to a HashSet (omitting
            leading and trailing whitespace). Every line of the file should contain only
            one word. The words need to be in lowercase if you make use of an
            Analyzer which uses LowerCaseFilter (like StandardAnalyzer).
            
            </summary>
            <param name="wordfile">File containing the wordlist
            </param>
            <returns> A HashSet with the file's words
            </returns>
        </member>
        <member name="M:Lucene.Net.Analysis.WordlistLoader.GetWordSet(System.IO.FileInfo,System.String)">
            <summary> Loads a text file and adds every non-comment line as an entry to a HashSet (omitting
            leading and trailing whitespace). Every line of the file should contain only
            one word. The words need to be in lowercase if you make use of an
            Analyzer which uses LowerCaseFilter (like StandardAnalyzer).
            
            </summary>
            <param name="wordfile">File containing the wordlist
            </param>
            <param name="comment">The comment string to ignore
            </param>
            <returns> A HashSet with the file's words
            </returns>
        </member>
        <member name="M:Lucene.Net.Analysis.WordlistLoader.GetWordSet(System.IO.TextReader)">
            <summary> Reads lines from a Reader and adds every line as an entry to a HashSet (omitting
            leading and trailing whitespace). Every line of the Reader should contain only
            one word. The words need to be in lowercase if you make use of an
            Analyzer which uses LowerCaseFilter (like StandardAnalyzer).
            
            </summary>
            <param name="reader">Reader containing the wordlist
            </param>
            <returns> A HashSet with the reader's words
            </returns>
        </member>
        <member name="M:Lucene.Net.Analysis.WordlistLoader.GetWordSet(System.IO.TextReader,System.String)">
            <summary> Reads lines from a Reader and adds every non-comment line as an entry to a HashSet (omitting
            leading and trailing whitespace). Every line of the Reader should contain only
            one word. The words need to be in lowercase if you make use of an
            Analyzer which uses LowerCaseFilter (like StandardAnalyzer).
            
            </summary>
            <param name="reader">Reader containing the wordlist
            </param>
            <param name="comment">The string representing a comment.
            </param>
            <returns> A HashSet with the reader's words
            </returns>
        </member>
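The loading contract described by the GetWordSet overloads, one trimmed word per line, optional comment lines skipped, result collected into a set, can be mimicked in Python (a sketch of the documented behavior, not the Lucene.Net implementation; the function name is invented):

```python
def load_word_set(lines, comment="#"):
    """One word per line, trimmed; lines starting with the comment
    string (and blank lines) are skipped; result is a set."""
    words = set()
    for line in lines:
        line = line.strip()
        if line and not line.startswith(comment):
            words.add(line)
    return words

stopwords = load_word_set(["  the ", "# english stopwords", "a", "", "an"])
print(sorted(stopwords))  # ['a', 'an', 'the']
```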
        <member name="M:Lucene.Net.Analysis.WordlistLoader.GetStemDict(System.IO.FileInfo)">
            <summary> Reads a stem dictionary. Each line contains:
            <c>word<b>\t</b>stem</c>
            (i.e. two tab-separated words)
            
            </summary>
            <returns> stem dictionary that overrules the stemming algorithm
            </returns>
            <throws>  IOException  </throws>
        </member>
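The stem-dictionary file format described above, one word, a tab, then its stem per line, parses naturally into a mapping. A hedged Python sketch (the function name is invented; this mirrors the documented format, not the Lucene.Net code):

```python
def load_stem_dict(lines):
    """Parse word TAB stem lines into a dict that overrules the stemmer."""
    stems = {}
    for line in lines:
        word, stem = line.rstrip("\n").split("\t")
        stems[word] = stem
    return stems

print(load_stem_dict(["running\trun", "mice\tmouse"]))
# {'running': 'run', 'mice': 'mouse'}
```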
        <member name="T:Lucene.Net.Documents.AbstractField">
            <summary> 
            
            
            </summary>
        </member>
        <member name="T:Lucene.Net.Documents.Fieldable">
            <summary> Synonymous with <see cref="T:Lucene.Net.Documents.Field"/>.
            
            <p/><bold>WARNING</bold>: This interface may change within minor versions, despite Lucene's backward compatibility requirements.
            This means new methods may be added from version to version.  This change only affects the Fieldable API; other backwards
            compatibility promises remain intact. For example, Lucene can still
            read and write indices created within the same major version.
            <p/>
            
            
            </summary>
        </member>
        <member name="M:Lucene.Net.Documents.Fieldable.SetBoost(System.Single)">
            <summary>Sets the boost factor for hits on this field.  This value will be
            multiplied into the score of all hits on this field of this
            document.
            
            <p/>The boost is multiplied by <see cref="M:Lucene.Net.Documents.Document.GetBoost"/> of the document
            containing this field.  If a document has multiple fields with the same
            name, all such values are multiplied together.  This product is then
            used to compute the norm factor for the field.  By
            default, in the <see cref="M:Lucene.Net.Search.Similarity.ComputeNorm(System.String,Lucene.Net.Index.FieldInvertState)"/>
            method, the boost value is multiplied
            by the <see cref="M:Lucene.Net.Search.Similarity.LengthNorm(System.String,System.Int32)"/>
            and then rounded by <see cref="M:Lucene.Net.Search.Similarity.EncodeNorm(System.Single)"/> before it is stored in the
            index.  One should attempt to ensure that this product does not overflow
            the range of that encoding.
            
            </summary>
            <seealso cref="M:Lucene.Net.Documents.Document.SetBoost(System.Single)">
            </seealso>
            <seealso cref="M:Lucene.Net.Search.Similarity.ComputeNorm(System.String,Lucene.Net.Index.FieldInvertState)">
            </seealso>
            <seealso cref="M:Lucene.Net.Search.Similarity.EncodeNorm(System.Single)">
            </seealso>
        </member>
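The norm product described above can be sketched numerically. This Python fragment assumes the 1/sqrt(numTerms) length norm of Lucene's DefaultSimilarity and omits the lossy single-byte encoding performed by EncodeNorm, so treat it as an illustration of the multiplication order only:

```python
import math

def field_norm(field_boost, doc_boost, num_terms):
    """Field boost times document boost times length norm, as the
    docs describe; EncodeNorm's 8-bit rounding is not modeled here."""
    length_norm = 1.0 / math.sqrt(num_terms)
    return field_boost * doc_boost * length_norm

norm = field_norm(field_boost=2.0, doc_boost=1.5, num_terms=4)
print(norm)  # 1.5
```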
        <member name="M:Lucene.Net.Documents.Fieldable.GetBoost">
            <summary>Returns the boost factor for hits for this field.
            
            <p/>The default value is 1.0.
            
            <p/>Note: this value is not stored directly with the document in the index.
            Documents returned from <see cref="M:Lucene.Net.Index.IndexReader.Document(System.Int32)"/> and
            <see cref="M:Lucene.Net.Search.Hits.Doc(System.Int32)"/> may thus not have the same value present as when
            this field was indexed.
            
            </summary>
            <seealso cref="M:Lucene.Net.Documents.Fieldable.SetBoost(System.Single)">
            </seealso>
        </member>
        <member name="M:Lucene.Net.Documents.Fieldable.Name">
            <summary>Returns the name of the field as an interned string.
            For example "date", "title", "body", ...
            </summary>
        </member>
        <member name="M:Lucene.Net.Documents.Fieldable.StringValue">
            <summary>The value of the field as a String, or null.
            <p/>
            For indexing, if isStored()==true, the stringValue() will be used as the stored field value
            unless isBinary()==true, in which case binaryValue() will be used.
            
            If isIndexed()==true and isTokenized()==false, this String value will be indexed as a single token.
            If isIndexed()==true and isTokenized()==true, then tokenStreamValue() will be used to generate indexed tokens if not null,
            else readerValue() will be used to generate indexed tokens if not null, else stringValue() will be used to generate tokens.
            </summary>
        </member>
        <member name="M:Lucene.Net.Documents.Fieldable.ReaderValue">
            <summary>The value of the field as a Reader, which can be used at index time to generate indexed tokens.</summary>
            <seealso cref="M:Lucene.Net.Documents.Fieldable.StringValue">
            </seealso>
        </member>
        <member name="M:Lucene.Net.Documents.Fieldable.BinaryValue">
            <summary>The value of the field in Binary, or null.</summary>
            <seealso cref="M:Lucene.Net.Documents.Fieldable.StringValue">
            </seealso>
        </member>
        <member name="M:Lucene.Net.Documents.Fieldable.TokenStreamValue">
            <summary>The TokenStream for this field to be used when indexing, or null.</summary>
            <seealso cref="M:Lucene.Net.Documents.Fieldable.StringValue">
            </seealso>
        </member>
        <member name="M:Lucene.Net.Documents.Fieldable.IsStored">
            <summary>True if the value of the field is to be stored in the index for return
            with search hits. 
            </summary>
        </member>
        <member name="M:Lucene.Net.Documents.Fieldable.IsIndexed">
            <summary>True if the value of the field is to be indexed, so that it may be
            searched on. 
            </summary>
        </member>
        <member name="M:Lucene.Net.Documents.Fieldable.IsTokenized">
            <summary>True if the value of the field should be tokenized as text prior to
            indexing.  Un-tokenized fields are indexed as a single word and may not be
            Reader-valued. 
            </summary>
        </member>
        <member name="M:Lucene.Net.Documents.Fieldable.IsCompressed">
            <summary>True if the value of the field is stored and compressed within the index </summary>
        </member>
        <member name="M:Lucene.Net.Documents.Fieldable.IsTermVectorStored">
            <summary>True if the term or terms used to index this field are stored as a term
            vector, available from <see cref="M:Lucene.Net.Index.IndexReader.GetTermFreqVector(System.Int32,System.String)"/>.
            These methods do not provide access to the original content of the field,
            only to terms used to index it. If the original content must be
            preserved, use the <c>stored</c> attribute instead.
            
            </summary>
            <seealso cref="M:Lucene.Net.Index.IndexReader.GetTermFreqVector(System.Int32,System.String)">
            </seealso>
        </member>
        <member name="M:Lucene.Net.Documents.Fieldable.IsStoreOffsetWithTermVector">
            <summary> True if terms are stored as term vector together with their offsets 
            (start and end position in source text).
            </summary>
        </member>
        <member name="M:Lucene.Net.Documents.Fieldable.IsStorePositionWithTermVector">
            <summary> True if terms are stored as term vector together with their token positions.</summary>
        </member>
        <member name="M:Lucene.Net.Documents.Fieldable.IsBinary">
            <summary>True if the value of the field is stored as binary </summary>
        </member>
        <member name="M:Lucene.Net.Documents.Fieldable.GetOmitNorms">
            <summary>True if norms are omitted for this indexed field </summary>
        </member>
        <member name="M:Lucene.Net.Documents.Fieldable.SetOmitNorms(System.Boolean)">
            <summary>Expert:
            
            If set, omit normalization factors associated with this indexed field.
            This effectively disables indexing boosts and length normalization for this field.
            </summary>
        </member>
        <member name="M:Lucene.Net.Documents.Fieldable.SetOmitTf(System.Boolean)">
            <deprecated> Renamed to <see cref="M:Lucene.Net.Documents.AbstractField.SetOmitTermFreqAndPositions(System.Boolean)"/> 
            </deprecated>
        </member>
        <member name="M:Lucene.Net.Documents.Fieldable.GetOmitTf">
            <deprecated> Renamed to <see cref="M:Lucene.Net.Documents.AbstractField.GetOmitTermFreqAndPositions"/> 
            </deprecated>
        </member>
        <member name="M:Lucene.Net.Documents.Fieldable.IsLazy">
            <summary> Indicates whether a Field is Lazy or not.  The semantics of Lazy loading are such that if a Field is lazily loaded, retrieving
            its values via <see cref="M:Lucene.Net.Documents.Fieldable.StringValue"/> or <see cref="M:Lucene.Net.Documents.Fieldable.BinaryValue"/> is only valid as long as the <see cref="T:Lucene.Net.Index.IndexReader"/> that
            retrieved the <see cref="T:Lucene.Net.Documents.Document"/> is still open.
            
            </summary>
            <returns> true if this field can be loaded lazily
            </returns>
        </member>
        <member name="M:Lucene.Net.Documents.Fieldable.GetBinaryOffset">
            <summary> Returns the offset into the byte[] segment that is used as the value; if the Field is not binary,
            the returned value is undefined.
            </summary>
            <returns> index of the first character in byte[] segment that represents this Field value
            </returns>
        </member>
        <member name="M:Lucene.Net.Documents.Fieldable.GetBinaryLength">
            <summary> Returns the length of the byte[] segment that is used as the value; if the Field is not binary,
            the returned value is undefined.
            </summary>
            <returns> length of byte[] segment that represents this Field value
            </returns>
        </member>
        <member name="M:Lucene.Net.Documents.Fieldable.GetBinaryValue">
            <summary> Return the raw byte[] for the binary field.  Note that
            you must also call <see cref="M:Lucene.Net.Documents.Fieldable.GetBinaryLength"/> and <see cref="M:Lucene.Net.Documents.Fieldable.GetBinaryOffset"/>
            to know which range of bytes in this
            returned array belong to the field.
            </summary>
            <returns> reference to the Field value as byte[].
            </returns>
        </member>
        <member name="M:Lucene.Net.Documents.Fieldable.GetBinaryValue(System.Byte[])">
            <summary> Return the raw byte[] for the binary field.  Note that
            you must also call <see cref="M:Lucene.Net.Documents.Fieldable.GetBinaryLength"/> and <see cref="M:Lucene.Net.Documents.Fieldable.GetBinaryOffset"/>
            to know which range of bytes in this
            returned array belong to the field.<p/>
            About reuse: if you pass in the result byte[] and it is
            used, the underlying implementation will likely hold
            onto this byte[] and return it in future calls to
            <see cref="M:Lucene.Net.Documents.Fieldable.BinaryValue"/> or <see cref="M:Lucene.Net.Documents.Fieldable.GetBinaryValue"/>.
            So if you subsequently re-use the same byte[] elsewhere
            it will alter this Fieldable's value.
            </summary>
            <param name="result"> User defined buffer that will be used if
            possible.  If this is null or not large enough, a new
            buffer is allocated
            </param>
            <returns> reference to the Field value as byte[].
            </returns>
        </member>
        <member name="M:Lucene.Net.Documents.AbstractField.SetBoost(System.Single)">
            <summary>Sets the boost factor for hits on this field.  This value will be
            multiplied into the score of all hits on this field of this
            document.
            
            <p/>The boost is multiplied by <see cref="M:Lucene.Net.Documents.Document.GetBoost"/> of the document
            containing this field.  If a document has multiple fields with the same
            name, all such values are multiplied together.  This product is then
            used to compute the norm factor for the field.  By
            default, in the <see cref="M:Lucene.Net.Search.Similarity.ComputeNorm(System.String,Lucene.Net.Index.FieldInvertState)"/>
            method, the boost value is multiplied
            by the <see cref="M:Lucene.Net.Search.Similarity.LengthNorm(System.String,System.Int32)"/> and then
            rounded by <see cref="M:Lucene.Net.Search.Similarity.EncodeNorm(System.Single)"/> before it is stored in the
            index.  One should attempt to ensure that this product does not overflow
            the range of that encoding.
            
            </summary>
            <seealso cref="M:Lucene.Net.Documents.Document.SetBoost(System.Single)">
            </seealso>
            <seealso cref="M:Lucene.Net.Search.Similarity.ComputeNorm(System.String,Lucene.Net.Index.FieldInvertState)">
            </seealso>
            <seealso cref="M:Lucene.Net.Search.Similarity.EncodeNorm(System.Single)">
            </seealso>
        </member>
        <member name="M:Lucene.Net.Documents.AbstractField.GetBoost">
            <summary>Returns the boost factor for hits for this field.
            
            <p/>The default value is 1.0.
            
            <p/>Note: this value is not stored directly with the document in the index.
            Documents returned from <see cref="M:Lucene.Net.Index.IndexReader.Document(System.Int32)"/> and
            <see cref="M:Lucene.Net.Search.Hits.Doc(System.Int32)"/> may thus not have the same value present as when
            this field was indexed.
            
            </summary>
            <seealso cref="M:Lucene.Net.Documents.AbstractField.SetBoost(System.Single)">
            </seealso>
        </member>
        <member name="M:Lucene.Net.Documents.AbstractField.Name">
            <summary>Returns the name of the field as an interned string.
            For example "date", "title", "body", ...
            </summary>
        </member>
        <member name="M:Lucene.Net.Documents.AbstractField.IsStored">
            <summary>True iff the value of the field is to be stored in the index for return
            with search hits.  It is an error for this to be true if a field is
            Reader-valued. 
            </summary>
        </member>
        <member name="M:Lucene.Net.Documents.AbstractField.IsIndexed">
            <summary>True iff the value of the field is to be indexed, so that it may be
            searched on. 
            </summary>
        </member>
        <member name="M:Lucene.Net.Documents.AbstractField.IsTokenized">
            <summary>True iff the value of the field should be tokenized as text prior to
            indexing.  Un-tokenized fields are indexed as a single word and may not be
            Reader-valued. 
            </summary>
        </member>
        <member name="M:Lucene.Net.Documents.AbstractField.IsCompressed">
            <summary>True if the value of the field is stored and compressed within the index </summary>
        </member>
        <member name="M:Lucene.Net.Documents.AbstractField.IsTermVectorStored">
            <summary>True iff the term or terms used to index this field are stored as a term
            vector, available from <see cref="M:Lucene.Net.Index.IndexReader.GetTermFreqVector(System.Int32,System.String)"/>.
            These methods do not provide access to the original content of the field,
            only to terms used to index it. If the original content must be
            preserved, use the <c>stored</c> attribute instead.
            
            </summary>
            <seealso cref="M:Lucene.Net.Index.IndexReader.GetTermFreqVector(System.Int32,System.String)">
            </seealso>
        </member>
        <member name="M:Lucene.Net.Documents.AbstractField.IsStoreOffsetWithTermVector">
            <summary> True iff terms are stored as term vector together with their offsets 
            (start and end position in source text).
            </summary>
        </member>
        <member name="M:Lucene.Net.Documents.AbstractField.IsStorePositionWithTermVector">
            <summary> True iff terms are stored as term vector together with their token positions.</summary>
        </member>
        <member name="M:Lucene.Net.Documents.AbstractField.IsBinary">
            <summary>True iff the value of the field is stored as binary </summary>
        </member>
        <member name="M:Lucene.Net.Documents.AbstractField.GetBinaryValue">
            <summary> Return the raw byte[] for the binary field.  Note that
            you must also call <see cref="M:Lucene.Net.Documents.AbstractField.GetBinaryLength"/> and <see cref="M:Lucene.Net.Documents.AbstractField.GetBinaryOffset"/>
            to know which range of bytes in this
            returned array belong to the field.
            </summary>
            <returns> reference to the Field value as byte[].
            </returns>
        </member>
        <member name="M:Lucene.Net.Documents.AbstractField.GetBinaryLength">
            <summary> Returns the length of the byte[] segment that is used as the value; if the Field is not binary,
            the returned value is undefined.
            </summary>
            <returns> length of byte[] segment that represents this Field value
            </returns>
        </member>
        <member name="M:Lucene.Net.Documents.AbstractField.GetBinaryOffset">
            <summary> Returns the offset into the byte[] segment that is used as the value; if the Field is not binary,
            the returned value is undefined.
            </summary>
            <returns> index of the first character in byte[] segment that represents this Field value
            </returns>
        </member>
        <member name="M:Lucene.Net.Documents.AbstractField.GetOmitNorms">
            <summary>True if norms are omitted for this indexed field </summary>
        </member>
        <member name="M:Lucene.Net.Documents.AbstractField.GetOmitTf">
            <deprecated> Renamed to <see cref="M:Lucene.Net.Documents.AbstractField.GetOmitTermFreqAndPositions"/> 
            </deprecated>
        </member>
        <member name="M:Lucene.Net.Documents.AbstractField.GetOmitTermFreqAndPositions">
            <seealso cref="M:Lucene.Net.Documents.AbstractField.SetOmitTermFreqAndPositions(System.Boolean)">
            </seealso>
        </member>
        <member name="M:Lucene.Net.Documents.AbstractField.SetOmitNorms(System.Boolean)">
            <summary>Expert:
            
            If set, omit normalization factors associated with this indexed field.
            This effectively disables indexing boosts and length normalization for this field.
            </summary>
        </member>
        <member name="M:Lucene.Net.Documents.AbstractField.SetOmitTf(System.Boolean)">
            <deprecated> Renamed to <see cref="M:Lucene.Net.Documents.AbstractField.SetOmitTermFreqAndPositions(System.Boolean)"/> 
            </deprecated>
        </member>
        <member name="M:Lucene.Net.Documents.AbstractField.SetOmitTermFreqAndPositions(System.Boolean)">
            <summary>Expert:
            
            If set, omit term freq, positions and payloads from
            postings for this field.
            
            <p/><b>NOTE</b>: While this option reduces storage space
            required in the index, it also means any query
            requiring positional information, such as <see cref="T:Lucene.Net.Search.PhraseQuery"/>
            or <see cref="T:Lucene.Net.Search.Spans.SpanQuery"/> subclasses will
            silently fail to find results.
            </summary>
        </member>
        <member name="M:Lucene.Net.Documents.AbstractField.ToString">
            <summary>Prints a Field for human consumption. </summary>
        </member>
        <member name="T:Lucene.Net.Documents.CompressionTools">
            <summary>Simple utility class providing static methods to
            compress and decompress binary data for stored fields.
            This class uses java.util.zip.Deflater and Inflater
            classes to compress and decompress, which is the same
            format previously used by the now deprecated
            Field.Store.COMPRESS.
            </summary>
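            <example>
            A minimal round-trip sketch: compress a string for storage and decompress it
            again. The field name <c>body</c> and the surrounding <c>doc</c> instance are
            hypothetical:
            <code>
            // Compress a large text value before storing it as a binary field.
            byte[] compressed = CompressionTools.CompressString("some large text value");
            doc.Add(new Field("body", compressed, Field.Store.YES));
            
            // When the stored bytes are read back, recover the original string.
            string original = CompressionTools.DecompressString(compressed);
            </code>
            </example>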
        </member>
        <member name="M:Lucene.Net.Documents.CompressionTools.Compress(System.Byte[],System.Int32,System.Int32,System.Int32)">
            <summary>Compresses the specified byte range using the
            specified compressionLevel (constants are defined in
            java.util.zip.Deflater). 
            </summary>
        </member>
        <member name="M:Lucene.Net.Documents.CompressionTools.Compress(System.Byte[],System.Int32,System.Int32)">
            <summary>Compresses the specified byte range, with default BEST_COMPRESSION level </summary>
        </member>
        <member name="M:Lucene.Net.Documents.CompressionTools.Compress(System.Byte[])">
            <summary>Compresses all bytes in the array, with default BEST_COMPRESSION level </summary>
        </member>
        <member name="M:Lucene.Net.Documents.CompressionTools.CompressString(System.String)">
            <summary>Compresses the String value, with default BEST_COMPRESSION level </summary>
        </member>
        <member name="M:Lucene.Net.Documents.CompressionTools.CompressString(System.String,System.Int32)">
            <summary>Compresses the String value using the specified
            compressionLevel (constants are defined in
            java.util.zip.Deflater). 
            </summary>
        </member>
        <member name="M:Lucene.Net.Documents.CompressionTools.Decompress(System.Byte[])">
            <summary>Decompress the byte array previously returned by
            compress 
            </summary>
        </member>
        <member name="M:Lucene.Net.Documents.CompressionTools.DecompressString(System.Byte[])">
            <summary>Decompress the byte array previously returned by
            compressString back into a String 
            </summary>
        </member>
        <member name="T:Lucene.Net.Documents.DateField">
            <summary> Provides support for converting dates to strings and vice-versa.
            The strings are structured so that lexicographic sorting orders by date,
            which makes them suitable for use as field values and search terms.
            
            <p/>Note that this class saves dates with millisecond granularity,
            which is bad for <see cref="T:Lucene.Net.Search.TermRangeQuery"/> and <see cref="T:Lucene.Net.Search.PrefixQuery"/>, as those
            queries are expanded to a BooleanQuery with a potentially large number
            of terms when searching. Thus you might want to use
            <see cref="T:Lucene.Net.Documents.DateTools"/> instead.
            
            <p/>
            Note: dates before 1970 cannot be used, and therefore cannot be
            indexed when using this class. See <see cref="T:Lucene.Net.Documents.DateTools"/> for an
            alternative without such a limitation.
            
            <p/>
            Another approach is <see cref="T:Lucene.Net.Util.NumericUtils"/>, which provides
            a sortable binary representation (prefix encoded) of numeric values, which
            dates and times are.
            To index a <see cref="T:System.DateTime"/>, convert it to a Unix timestamp as a
            <c>long</c>, index it as a numeric value with <see cref="T:Lucene.Net.Documents.NumericField"/>,
            and use <see cref="T:Lucene.Net.Search.NumericRangeQuery"/> to query it.
            
            </summary>
            <deprecated> If you build a new index, use <see cref="T:Lucene.Net.Documents.DateTools"/> or 
            <see cref="T:Lucene.Net.Documents.NumericField"/> instead.
            This class is included for use with existing
            indices and will be removed in a future release.
            </deprecated>
        </member>
        <member name="M:Lucene.Net.Documents.DateField.DateToString(System.DateTime)">
            <summary> Converts a Date to a string suitable for indexing.</summary>
            <throws>  RuntimeException if the date specified in the method argument is before 1970 </throws>
        </member>
        <member name="M:Lucene.Net.Documents.DateField.TimeToString(System.Int64)">
            <summary> Converts a millisecond time to a string suitable for indexing.</summary>
            <throws>  RuntimeException if the time specified in the method argument is negative, that is, before 1970 </throws>
        </member>
        <member name="M:Lucene.Net.Documents.DateField.StringToTime(System.String)">
            <summary>Converts a string-encoded date into a millisecond time. </summary>
        </member>
        <member name="M:Lucene.Net.Documents.DateField.StringToDate(System.String)">
            <summary>Converts a string-encoded date into a Date object. </summary>
        </member>
        <member name="T:Lucene.Net.Documents.DateTools">
            <summary> Provides support for converting dates to strings and vice-versa.
            The strings are structured so that lexicographic sorting orders 
            them by date, which makes them suitable for use as field values 
            and search terms.
            
            <p/>This class also helps you to limit the resolution of your dates. Do not
            save dates with a finer resolution than you really need, as then
            RangeQuery and PrefixQuery will require more memory and become slower.
            
            <p/>Compared to <see cref="T:Lucene.Net.Documents.DateField"/> the strings generated by the methods
            in this class take slightly more space, unless your selected resolution
            is set to <c>Resolution.DAY</c> or lower.
            
            <p/>
            Another approach is <see cref="T:Lucene.Net.Util.NumericUtils"/>, which provides
            a sortable binary representation (prefix encoded) of numeric values, which
            dates and times are.
            To index a <see cref="T:System.DateTime"/>, convert it to a Unix timestamp as a
            <c>long</c>, index it as a numeric value with <see cref="T:Lucene.Net.Documents.NumericField"/>,
            and use <see cref="T:Lucene.Net.Search.NumericRangeQuery"/> to query it.
            </summary>
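            <example>
            A minimal sketch of indexing a date with day resolution and converting it back.
            The field name <c>created</c> and the surrounding <c>doc</c> instance are hypothetical;
            verify the <c>Field.Index</c> option names against your Lucene.Net version:
            <code>
            // Encode the date so that lexicographic order equals chronological order.
            string dateValue = DateTools.DateToString(DateTime.UtcNow, DateTools.Resolution.DAY);
            doc.Add(new Field("created", dateValue, Field.Store.YES, Field.Index.NOT_ANALYZED));
            
            // Later, convert the stored string back to a DateTime.
            DateTime parsed = DateTools.StringToDate(dateValue);
            </code>
            </example>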
        </member>
        <member name="M:Lucene.Net.Documents.DateTools.DateToString(System.DateTime,Lucene.Net.Documents.DateTools.Resolution)">
            <summary> Converts a Date to a string suitable for indexing.
            
            </summary>
            <param name="date">the date to be converted
            </param>
            <param name="resolution">the desired resolution, see
            <see cref="M:Lucene.Net.Documents.DateTools.Round(System.DateTime,Lucene.Net.Documents.DateTools.Resolution)"/>
            </param>
            <returns> a string in format <c>yyyyMMddHHmmssSSS</c> or shorter,
            depending on <c>resolution</c>; using GMT as timezone 
            </returns>
        </member>
        <member name="M:Lucene.Net.Documents.DateTools.TimeToString(System.Int64,Lucene.Net.Documents.DateTools.Resolution)">
            <summary> Converts a millisecond time to a string suitable for indexing.
            
            </summary>
            <param name="time">the date expressed as milliseconds since January 1, 1970, 00:00:00 GMT
            </param>
            <param name="resolution">the desired resolution, see
            <see cref="M:Lucene.Net.Documents.DateTools.Round(System.Int64,Lucene.Net.Documents.DateTools.Resolution)"/>
            </param>
            <returns> a string in format <c>yyyyMMddHHmmssSSS</c> or shorter,
            depending on <c>resolution</c>; using GMT as timezone
            </returns>
        </member>
        <member name="M:Lucene.Net.Documents.DateTools.StringToTime(System.String)">
            <summary> Converts a string produced by <c>timeToString</c> or
            <c>DateToString</c> back to a time, represented as the
            number of milliseconds since January 1, 1970, 00:00:00 GMT.
            
            </summary>
            <param name="dateString">the date string to be converted
            </param>
            <returns> the number of milliseconds since January 1, 1970, 00:00:00 GMT
            </returns>
            <throws>  ParseException if <c>dateString</c> is not in the expected format </throws>
        </member>
        <member name="M:Lucene.Net.Documents.DateTools.StringToDate(System.String)">
            <summary> Converts a string produced by <c>timeToString</c> or
            <c>DateToString</c> back to a time, represented as a
            Date object.
            
            </summary>
            <param name="dateString">the date string to be converted
            </param>
            <returns> the parsed time as a Date object 
            </returns>
            <throws>  ParseException if <c>dateString</c> is not in the expected format </throws>
        </member>
        <member name="M:Lucene.Net.Documents.DateTools.Round(System.DateTime,Lucene.Net.Documents.DateTools.Resolution)">
            <summary> Limit a date's resolution. For example, the date <c>2004-09-21 13:50:11</c>
            will be changed to <c>2004-09-01 00:00:00</c> when using
            <c>Resolution.MONTH</c>. 
            
            </summary>
            <param name="date"></param>
            <param name="resolution">The desired resolution of the date to be returned
            </param>
            <returns> the date with all values more precise than <c>resolution</c>
            set to 0 or 1
            </returns>
        </member>
        <member name="M:Lucene.Net.Documents.DateTools.Round(System.Int64,Lucene.Net.Documents.DateTools.Resolution)">
            <summary> Limit a date's resolution. For example, the date <c>1095767411000</c>
            (which represents 2004-09-21 13:50:11) will be changed to 
            <c>1093989600000</c> (2004-09-01 00:00:00) when using
            <c>Resolution.MONTH</c>.
            
            </summary>
            <param name="time">The time in milliseconds (not ticks).</param>
            <param name="resolution">The desired resolution of the date to be returned
            </param>
            <returns> the date with all values more precise than <c>resolution</c>
            set to 0 or 1, expressed as milliseconds since January 1, 1970, 00:00:00 GMT
            </returns>
        </member>
        <member name="T:Lucene.Net.Documents.DateTools.Resolution">
            <summary>Specifies the time granularity. </summary>
        </member>
        <member name="T:Lucene.Net.Documents.Document">
            <summary>Documents are the unit of indexing and search.
            
            A Document is a set of fields.  Each field has a name and a textual value.
            A field may be <see cref="M:Lucene.Net.Documents.Fieldable.IsStored">stored</see> with the document, in which
            case it is returned with search hits on the document.  Thus each document
            should typically contain one or more stored fields which uniquely identify
            it.
            
            <p/>Note that fields which are <i>not</i> <see cref="M:Lucene.Net.Documents.Fieldable.IsStored">stored</see> are
            <i>not</i> available in documents retrieved from the index, e.g. with <see cref="F:Lucene.Net.Search.ScoreDoc.doc"/>,
            <see cref="M:Lucene.Net.Search.Searcher.Doc(System.Int32)"/> or <see cref="M:Lucene.Net.Index.IndexReader.Document(System.Int32)"/>.
            </summary>
        </member>
        <member name="M:Lucene.Net.Documents.Document.#ctor">
            <summary>Constructs a new document with no fields. </summary>
        </member>
        <member name="M:Lucene.Net.Documents.Document.SetBoost(System.Single)">
            <summary>Sets a boost factor for hits on any field of this document.  This value
            will be multiplied into the score of all hits on this document.
            
            <p/>The default value is 1.0.
            
            <p/>Values are multiplied into the value of <see cref="M:Lucene.Net.Documents.Fieldable.GetBoost"/> of
            each field in this document.  Thus, this method in effect sets a default
            boost for the fields of this document.
            
            </summary>
            <seealso cref="M:Lucene.Net.Documents.Fieldable.SetBoost(System.Single)">
            </seealso>
        </member>
        <member name="M:Lucene.Net.Documents.Document.GetBoost">
            <summary>Returns, at indexing time, the boost factor as set by <see cref="M:Lucene.Net.Documents.Document.SetBoost(System.Single)"/>. 
            
            <p/>Note that once a document is indexed this value is no longer available
            from the index.  At search time, for retrieved documents, this method always 
            returns 1. This, however, does not mean that the boost value set at indexing 
            time was ignored - it was just combined with other indexing time factors and 
            stored elsewhere, for better indexing and search performance. (For more 
            information see the "norm(t,d)" part of the scoring formula in 
            <see cref="T:Lucene.Net.Search.Similarity">Similarity</see>.)
            
            </summary>
            <seealso cref="M:Lucene.Net.Documents.Document.SetBoost(System.Single)">
            </seealso>
        </member>
        <member name="M:Lucene.Net.Documents.Document.Add(Lucene.Net.Documents.Fieldable)">
            <summary> <p/>Adds a field to a document.  Several fields may be added with
            the same name.  In this case, if the fields are indexed, their text is
            treated as though appended for the purposes of search.<p/>
            <p/> Note that, like the removeField(s) methods, this method only makes sense
            prior to adding a document to an index. These methods cannot
            be used to change the content of an existing index. To achieve that,
            a document has to be deleted from the index and a new, changed version of that
            document has to be added.<p/>
            </summary>
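            <example>
            A minimal sketch of building and indexing a document. The field names and the
            <c>writer</c> instance (an existing IndexWriter) are hypothetical; verify the
            <c>Field.Index</c> option names against your Lucene.Net version:
            <code>
            var doc = new Document();
            // A stored, untokenized field that uniquely identifies the document.
            doc.Add(new Field("id", "42", Field.Store.YES, Field.Index.NOT_ANALYZED));
            // A tokenized, searchable field whose text is not stored.
            doc.Add(new Field("body", "full text to search", Field.Store.NO, Field.Index.ANALYZED));
            writer.AddDocument(doc);
            </code>
            </example>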
        </member>
        <member name="M:Lucene.Net.Documents.Document.RemoveField(System.String)">
            <summary> <p/>Removes field with the specified name from the document.
            If multiple fields exist with this name, this method removes the first field that has been added.
            If there is no field with the specified name, the document remains unchanged.<p/>
            <p/> Note that, like the Add method, the removeField(s) methods only make sense
            prior to adding a document to an index. These methods cannot
            be used to change the content of an existing index. To achieve that,
            a document has to be deleted from the index and a new, changed version of that
            document has to be added.<p/>
            </summary>
        </member>
        <member name="M:Lucene.Net.Documents.Document.RemoveFields(System.String)">
            <summary> <p/>Removes all fields with the given name from the document.
            If there is no field with the specified name, the document remains unchanged.<p/>
            <p/> Note that, like the Add method, the removeField(s) methods only make sense
            prior to adding a document to an index. These methods cannot
            be used to change the content of an existing index. To achieve that,
            a document has to be deleted from the index and a new, changed version of that
            document has to be added.<p/>
            </summary>
        </member>
        <member name="M:Lucene.Net.Documents.Document.GetField(System.String)">
            <summary>Returns a field with the given name if any exists in this document, or
            null.  If multiple fields exist with this name, this method returns the
            first field added.
            Do not use this method with lazy-loaded fields.
            </summary>
        </member>
        <member name="M:Lucene.Net.Documents.Document.GetFieldable(System.String)">
            <summary>Returns a field with the given name if any exists in this document, or
            null.  If multiple fields exist with this name, this method returns the
            first field added.
            </summary>
        </member>
        <member name="M:Lucene.Net.Documents.Document.Get(System.String)">
            <summary>Returns the string value of the field with the given name if any exists in
            this document, or null.  If multiple fields exist with this name, this
            method returns the first value added. If only binary fields with this name
            exist, returns null.
            </summary>
        </member>
        <member name="M:Lucene.Net.Documents.Document.Fields">
            <summary>Returns an Enumeration of all the fields in a document.</summary>
            <deprecated> use <see cref="M:Lucene.Net.Documents.Document.GetFields"/> instead
            </deprecated>
        </member>
        <member name="M:Lucene.Net.Documents.Document.GetFields">
            <summary>Returns a List of all the fields in a document.
            <p/>Note that fields which are <i>not</i> <see cref="M:Lucene.Net.Documents.Fieldable.IsStored">stored</see> are
            <i>not</i> available in documents retrieved from the
            index, e.g. <see cref="M:Lucene.Net.Search.Searcher.Doc(System.Int32)"/> or <see cref="M:Lucene.Net.Index.IndexReader.Document(System.Int32)"/>.
            </summary>
        </member>
        <member name="M:Lucene.Net.Documents.Document.GetFields(System.String)">
            <summary> Returns an array of <see cref="T:Lucene.Net.Documents.Field"/>s with the given name.
            Do not use this method with lazy-loaded fields.
            This method returns an empty array when there are no
            matching fields.  It never returns null.
            
            </summary>
            <param name="name">the name of the field
            </param>
            <returns> a <c>Field[]</c> array
            </returns>
        </member>
        <member name="M:Lucene.Net.Documents.Document.GetFieldables(System.String)">
            <summary> Returns an array of <see cref="T:Lucene.Net.Documents.Fieldable"/>s with the given name.
            This method returns an empty array when there are no
            matching fields.  It never returns null.
            
            </summary>
            <param name="name">the name of the field
            </param>
            <returns> a <c>Fieldable[]</c> array
            </returns>
        </member>
        <member name="M:Lucene.Net.Documents.Document.GetValues(System.String)">
            <summary> Returns an array of values of the field specified as the method parameter.
            This method returns an empty array when there are no
            matching fields.  It never returns null.
            </summary>
            <param name="name">the name of the field
            </param>
            <returns> a <c>String[]</c> of field values
            </returns>
        </member>
        <member name="M:Lucene.Net.Documents.Document.GetBinaryValues(System.String)">
            <summary> Returns an array of byte arrays for all of the fields that have the name specified
            as the method parameter.  This method returns an empty
            array when there are no matching fields.  It never
            returns null.
            
            </summary>
            <param name="name">the name of the field
            </param>
            <returns> a <c>byte[][]</c> of binary field values
            </returns>
        </member>
        <member name="M:Lucene.Net.Documents.Document.GetBinaryValue(System.String)">
            <summary> Returns an array of bytes for the first (or only) field that has the name
            specified as the method parameter. This method will return <c>null</c>
            if no binary fields with the specified name are available.
            There may be non-binary fields with the same name.
            
            </summary>
            <param name="name">the name of the field.
            </param>
            <returns> a <c>byte[]</c> containing the binary field value or <c>null</c>
            </returns>
        </member>
        <member name="M:Lucene.Net.Documents.Document.ToString">
            <summary>Prints the fields of a document for human consumption. </summary>
        </member>
        <member name="T:Lucene.Net.Documents.Field">
            <summary>A field is a section of a Document.  Each field has two parts, a name and a
            value.  Values may be free text, provided as a String or as a Reader, or they
            may be atomic keywords, which are not further processed.  Such keywords may
            be used to represent dates, urls, etc.  Fields are optionally stored in the
            index, so that they may be returned with hits on the document.
            </summary>
        </member>
        <member name="M:Lucene.Net.Documents.Field.StringValue">
            <summary>The value of the field as a String, or null.  If null, the Reader value or
            binary value is used.  Exactly one of stringValue(),
            readerValue(), and getBinaryValue() must be set. 
            </summary>
        </member>
        <member name="M:Lucene.Net.Documents.Field.ReaderValue">
            <summary>The value of the field as a Reader, or null.  If null, the String value or
            binary value is used.  Exactly one of stringValue(),
            readerValue(), and getBinaryValue() must be set. 
            </summary>
        </member>
        <member name="M:Lucene.Net.Documents.Field.BinaryValue">
            <summary>The value of the field in Binary, or null.  If null, the Reader value,
            or String value is used. Exactly one of stringValue(),
            readerValue(), and getBinaryValue() must be set.
            </summary>
            <deprecated> This method must allocate a new byte[] if
            the <see cref="M:Lucene.Net.Documents.AbstractField.GetBinaryOffset"/> is non-zero
            or <see cref="M:Lucene.Net.Documents.AbstractField.GetBinaryLength"/> is not the
            full length of the byte[]. Please use <see cref="M:Lucene.Net.Documents.AbstractField.GetBinaryValue"/>
            instead, which simply
            returns the byte[].
            </deprecated>
        </member>
        <member name="M:Lucene.Net.Documents.Field.TokenStreamValue">
            <summary>The TokenStream for this field to be used when indexing, or null.  If null, the Reader value
            or String value is analyzed to produce the indexed tokens. 
            </summary>
        </member>
        <member name="M:Lucene.Net.Documents.Field.SetValue(System.String)">
            <summary><p/>Expert: change the value of this field.  This can
            be used during indexing to re-use a single Field
            instance to improve indexing speed by avoiding GC cost
            of new'ing and reclaiming Field instances.  Typically
            a single <see cref="T:Lucene.Net.Documents.Document"/> instance is re-used as
            well.  This helps most on small documents.<p/>
            
            <p/>Each Field instance should only be used once
            within a single <see cref="T:Lucene.Net.Documents.Document"/> instance.  See <a href="http://wiki.apache.org/lucene-java/ImproveIndexingSpeed">ImproveIndexingSpeed</a>
            for details.<p/> 
            </summary>
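            <example>
            A sketch of re-using one Field and Document instance across many additions.
            The field name, the <c>texts</c> collection, and the <c>writer</c> instance
            (an existing IndexWriter) are hypothetical:
            <code>
            var field = new Field("body", "", Field.Store.NO, Field.Index.ANALYZED);
            var doc = new Document();
            doc.Add(field);
            foreach (string text in texts)
            {
                // Swap in the next value instead of allocating a new Field.
                field.SetValue(text);
                writer.AddDocument(doc);
            }
            </code>
            </example>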
        </member>
        <member name="M:Lucene.Net.Documents.Field.SetValue(System.IO.TextReader)">
            <summary>Expert: change the value of this field.  See <see cref="M:Lucene.Net.Documents.Field.SetValue(System.String)"/>. </summary>
        </member>
        <member name="M:Lucene.Net.Documents.Field.SetValue(System.Byte[])">
            <summary>Expert: change the value of this field.  See <see cref="M:Lucene.Net.Documents.Field.SetValue(System.String)"/>. </summary>
        </member>
        <member name="M:Lucene.Net.Documents.Field.SetValue(System.Byte[],System.Int32,System.Int32)">
            <summary>Expert: change the value of this field.  See <see cref="M:Lucene.Net.Documents.Field.SetValue(System.String)"/>. </summary>
        </member>
        <member name="M:Lucene.Net.Documents.Field.SetValue(Lucene.Net.Analysis.TokenStream)">
            <summary>Expert: change the value of this field.  See <see cref="M:Lucene.Net.Documents.Field.SetValue(System.String)"/>.</summary>
            <deprecated> use <see cref="M:Lucene.Net.Documents.Field.SetTokenStream(Lucene.Net.Analysis.TokenStream)"/> 
            </deprecated>
        </member>
        <member name="M:Lucene.Net.Documents.Field.SetTokenStream(Lucene.Net.Analysis.TokenStream)">
            <summary>Expert: sets the token stream to be used for indexing and causes isIndexed() and isTokenized() to return true.
            May be combined with stored values from stringValue() or binaryValue() 
            </summary>
        </member>
        <member name="M:Lucene.Net.Documents.Field.#ctor(System.String,System.String,Lucene.Net.Documents.Field.Store,Lucene.Net.Documents.Field.Index)">
            <summary> Create a field by specifying its name, value and how it will
            be saved in the index. Term vectors will not be stored in the index.
            
            </summary>
            <param name="name">The name of the field
            </param>
            <param name="value_Renamed">The string to process
            </param>
            <param name="store">Whether <c>value</c> should be stored in the index
            </param>
            <param name="index">Whether the field should be indexed, and if so, if it should
            be tokenized before indexing 
            </param>
            <throws>  NullPointerException if name or value is <c>null</c> </throws>
            <throws>  IllegalArgumentException if the field is neither stored nor indexed  </throws>
        </member>
        <member name="M:Lucene.Net.Documents.Field.#ctor(System.String,System.String,Lucene.Net.Documents.Field.Store,Lucene.Net.Documents.Field.Index,Lucene.Net.Documents.Field.TermVector)">
            <summary> Create a field by specifying its name, value and how it will
            be saved in the index.
            
            </summary>
            <param name="name">The name of the field
            </param>
            <param name="value_Renamed">The string to process
            </param>
            <param name="store">Whether <c>value</c> should be stored in the index
            </param>
            <param name="index">Whether the field should be indexed, and if so, if it should
            be tokenized before indexing 
            </param>
            <param name="termVector">Whether term vector should be stored
            </param>
            <throws>  NullPointerException if name or value is <c>null</c> </throws>
            <throws>  IllegalArgumentException in any of the following situations: </throws>
            <summary> <list> 
            <item>the field is neither stored nor indexed</item> 
            <item>the field is not indexed but termVector is <c>TermVector.YES</c></item>
            </list> 
            </summary>
        </member>
        <member name="M:Lucene.Net.Documents.Field.#ctor(System.String,System.Boolean,System.String,Lucene.Net.Documents.Field.Store,Lucene.Net.Documents.Field.Index,Lucene.Net.Documents.Field.TermVector)">
            <summary> Create a field by specifying its name, value and how it will
            be saved in the index.
            
            </summary>
            <param name="name">The name of the field
            </param>
            <param name="internName">Whether to .intern() name or not
            </param>
            <param name="value_Renamed">The string to process
            </param>
            <param name="store">Whether <c>value</c> should be stored in the index
            </param>
            <param name="index">Whether the field should be indexed, and if so, if it should
            be tokenized before indexing 
            </param>
            <param name="termVector">Whether term vector should be stored
            </param>
            <throws>  NullPointerException if name or value is <c>null</c> </throws>
            <throws>  IllegalArgumentException in any of the following situations: </throws>
            <summary> <list> 
            <item>the field is neither stored nor indexed</item> 
            <item>the field is not indexed but termVector is <c>TermVector.YES</c></item>
            </list> 
            </summary>
        </member>
        <member name="M:Lucene.Net.Documents.Field.#ctor(System.String,System.IO.TextReader)">
            <summary> Create a tokenized and indexed field that is not stored. Term vectors will
            not be stored.  The Reader is read only when the Document is added to the index,
            i.e. you may not close the Reader until <see cref="M:Lucene.Net.Index.IndexWriter.AddDocument(Lucene.Net.Documents.Document)"/>
            has been called.
            
            </summary>
            <param name="name">The name of the field
            </param>
            <param name="reader">The reader with the content
            </param>
            <throws>  NullPointerException if name or reader is <c>null</c> </throws>
        </member>
        <member name="M:Lucene.Net.Documents.Field.#ctor(System.String,System.IO.TextReader,Lucene.Net.Documents.Field.TermVector)">
            <summary> Create a tokenized and indexed field that is not stored, optionally with 
            storing term vectors.  The Reader is read only when the Document is added to the index,
            i.e. you may not close the Reader until <see cref="M:Lucene.Net.Index.IndexWriter.AddDocument(Lucene.Net.Documents.Document)"/>
            has been called.
            
            </summary>
            <param name="name">The name of the field
            </param>
            <param name="reader">The reader with the content
            </param>
            <param name="termVector">Whether term vector should be stored
            </param>
            <throws>  NullPointerException if name or reader is <c>null</c> </throws>
        </member>
        <member name="M:Lucene.Net.Documents.Field.#ctor(System.String,Lucene.Net.Analysis.TokenStream)">
            <summary> Create a tokenized and indexed field that is not stored. Term vectors will
            not be stored. This is useful for pre-analyzed fields.
            The TokenStream is read only when the Document is added to the index,
            i.e. you may not close the TokenStream until <see cref="M:Lucene.Net.Index.IndexWriter.AddDocument(Lucene.Net.Documents.Document)"/>
            has been called.
            
            </summary>
            <param name="name">The name of the field
            </param>
            <param name="tokenStream">The TokenStream with the content
            </param>
            <throws>  NullPointerException if name or tokenStream is <c>null</c> </throws>
        </member>
        <member name="M:Lucene.Net.Documents.Field.#ctor(System.String,Lucene.Net.Analysis.TokenStream,Lucene.Net.Documents.Field.TermVector)">
            <summary> Create a tokenized and indexed field that is not stored, optionally with 
            storing term vectors.  This is useful for pre-analyzed fields.
            The TokenStream is read only when the Document is added to the index,
            i.e. you may not close the TokenStream until <see cref="M:Lucene.Net.Index.IndexWriter.AddDocument(Lucene.Net.Documents.Document)"/>
            has been called.
            
            </summary>
            <param name="name">The name of the field
            </param>
            <param name="tokenStream">The TokenStream with the content
            </param>
            <param name="termVector">Whether term vector should be stored
            </param>
            <throws>  NullPointerException if name or tokenStream is <c>null</c> </throws>
        </member>
        <member name="M:Lucene.Net.Documents.Field.#ctor(System.String,System.Byte[],Lucene.Net.Documents.Field.Store)">
            <summary> Create a stored field with binary value. Optionally the value may be compressed.
            
            </summary>
            <param name="name">The name of the field
            </param>
            <param name="value_Renamed">The binary value
            </param>
            <param name="store">How <c>value</c> should be stored (compressed or not)
            </param>
            <throws>  IllegalArgumentException if store is <c>Store.NO</c>  </throws>
        </member>
        <member name="M:Lucene.Net.Documents.Field.#ctor(System.String,System.Byte[],System.Int32,System.Int32,Lucene.Net.Documents.Field.Store)">
            <summary> Create a stored field with binary value. Optionally the value may be compressed.
            
            </summary>
            <param name="name">The name of the field
            </param>
            <param name="value_Renamed">The binary value
            </param>
            <param name="offset">Starting offset in value where this Field's bytes are
            </param>
            <param name="length">Number of bytes to use for this Field, starting at offset
            </param>
            <param name="store">How <c>value</c> should be stored (compressed or not)
            </param>
            <throws>  IllegalArgumentException if store is <c>Store.NO</c>  </throws>
        </member>
        <member name="T:Lucene.Net.Documents.Field.Store">
            <summary>Specifies whether and how a field should be stored. </summary>
        </member>
        <member name="T:Lucene.Net.Util.Parameter">
            <summary> A serializable Enum class.</summary>
        </member>
        <member name="M:Lucene.Net.Util.Parameter.Equals(System.Object)">
            <summary> Resolves the deserialized instance to the local reference for accurate
            equals() and == comparisons.
            
            </summary>
            <returns> a reference to Parameter as resolved in the local VM
            </returns>
            <throws>  ObjectStreamException </throws>
        </member>
        <member name="F:Lucene.Net.Documents.Field.Store.COMPRESS">
            <summary>Store the original field value in the index in a compressed form. This is
            useful for long documents and for binary valued fields.
            </summary>
            <deprecated> Please use <see cref="T:Lucene.Net.Documents.CompressionTools"/> instead.
            For string fields that were previously indexed and stored using compression,
            the new way to achieve this is: first add the field indexed-only (not stored),
            and additionally add the same field name as a binary, stored field whose
            value is produced with <see cref="M:Lucene.Net.Documents.CompressionTools.CompressString(System.String)"/>.
            </deprecated>
        </member>
        <member name="F:Lucene.Net.Documents.Field.Store.YES">
            <summary>Store the original field value in the index. This is useful for short texts
            like a document's title which should be displayed with the results. The
            value is stored in its original form, i.e. no analyzer is used before it is
            stored.
            </summary>
        </member>
        <member name="F:Lucene.Net.Documents.Field.Store.NO">
            <summary>Do not store the field value in the index. </summary>
        </member>
        <member name="T:Lucene.Net.Documents.Field.Index">
            <summary>Specifies whether and how a field should be indexed. </summary>
        </member>
        <member name="F:Lucene.Net.Documents.Field.Index.NO">
            <summary>Do not index the field value. This field can thus not be searched,
            but one can still access its contents provided it is
            <see cref="T:Lucene.Net.Documents.Field.Store">stored</see>. 
            </summary>
        </member>
        <member name="F:Lucene.Net.Documents.Field.Index.ANALYZED">
            <summary>Index the tokens produced by running the field's
            value through an Analyzer.  This is useful for
            common text. 
            </summary>
        </member>
        <member name="F:Lucene.Net.Documents.Field.Index.TOKENIZED">
            <deprecated> this has been renamed to <see cref="F:Lucene.Net.Documents.Field.Index.ANALYZED"/> 
            </deprecated>
        </member>
        <member name="F:Lucene.Net.Documents.Field.Index.NOT_ANALYZED">
            <summary>Index the field's value without using an Analyzer, so it can be searched.
            As no analyzer is used the value will be stored as a single term. This is
            useful for unique Ids like product numbers.
            </summary>
        </member>
        <member name="F:Lucene.Net.Documents.Field.Index.UN_TOKENIZED">
            <deprecated> This has been renamed to <see cref="F:Lucene.Net.Documents.Field.Index.NOT_ANALYZED"/> 
            </deprecated>
        </member>
        <member name="F:Lucene.Net.Documents.Field.Index.NOT_ANALYZED_NO_NORMS">
            <summary>Expert: Index the field's value without an Analyzer,
            and also disable the storing of norms.  Note that you
            can also separately enable/disable norms by calling
            <see cref="M:Lucene.Net.Documents.AbstractField.SetOmitNorms(System.Boolean)"/>.  No norms means that
            index-time field and document boosting and field
            length normalization are disabled.  The benefit is
            less memory usage as norms take up one byte of RAM
            per indexed field for every document in the index,
            during searching.  Note that once you index a given
            field <i>with</i> norms enabled, disabling norms will
            have no effect.  In other words, for this to have the
            above described effect on a field, all instances of
            that field must be indexed with NOT_ANALYZED_NO_NORMS
            from the beginning. 
            </summary>
        </member>
        <member name="F:Lucene.Net.Documents.Field.Index.NO_NORMS">
            <deprecated> This has been renamed to
            <see cref="F:Lucene.Net.Documents.Field.Index.NOT_ANALYZED_NO_NORMS"/> 
            </deprecated>
        </member>
        <member name="F:Lucene.Net.Documents.Field.Index.ANALYZED_NO_NORMS">
            <summary>Expert: Index the tokens produced by running the
            field's value through an Analyzer, and also
            separately disable the storing of norms.  See
            <see cref="F:Lucene.Net.Documents.Field.Index.NOT_ANALYZED_NO_NORMS"/> for what norms are
            and why you may want to disable them. 
            </summary>
        </member>
        <member name="T:Lucene.Net.Documents.Field.TermVector">
            <summary>Specifies whether and how a field should have term vectors. </summary>
        </member>
        <member name="F:Lucene.Net.Documents.Field.TermVector.NO">
            <summary>Do not store term vectors. </summary>
        </member>
        <member name="F:Lucene.Net.Documents.Field.TermVector.YES">
            <summary>Store the term vectors of each document. A term vector is a list
            of the document's terms and their number of occurrences in that document. 
            </summary>
        </member>
        <member name="F:Lucene.Net.Documents.Field.TermVector.WITH_POSITIONS">
            <summary> Store the term vector + token position information
            
            </summary>
            <seealso cref="F:Lucene.Net.Documents.Field.TermVector.YES">
            </seealso>
        </member>
        <member name="F:Lucene.Net.Documents.Field.TermVector.WITH_OFFSETS">
            <summary> Store the term vector + Token offset information
            
            </summary>
            <seealso cref="F:Lucene.Net.Documents.Field.TermVector.YES">
            </seealso>
        </member>
        <member name="F:Lucene.Net.Documents.Field.TermVector.WITH_POSITIONS_OFFSETS">
            <summary> Store the term vector + Token position and offset information
            
            </summary>
            <seealso cref="F:Lucene.Net.Documents.Field.TermVector.YES">
            </seealso>
            <seealso cref="F:Lucene.Net.Documents.Field.TermVector.WITH_POSITIONS">
            </seealso>
            <seealso cref="F:Lucene.Net.Documents.Field.TermVector.WITH_OFFSETS">
            </seealso>
        </member>
        <member name="T:Lucene.Net.Documents.FieldSelector">
            <summary> Similar to a <a href="http://download.oracle.com/javase/1.5.0/docs/api/java/io/FileFilter.html">
            java.io.FileFilter</a>, the FieldSelector allows one to make decisions about
            what Fields get loaded on a <see cref="T:Lucene.Net.Documents.Document"/> by <see cref="M:Lucene.Net.Index.IndexReader.Document(System.Int32,Lucene.Net.Documents.FieldSelector)"/>
            </summary>
        </member>
        <member name="M:Lucene.Net.Documents.FieldSelector.Accept(System.String)">
            <summary> </summary>
            <param name="fieldName">the field to accept or reject
            </param>
            <returns> an instance of <see cref="T:Lucene.Net.Documents.FieldSelectorResult"/>
            if the <see cref="T:Lucene.Net.Documents.Field"/> named <c>fieldName</c> should be loaded.
            </returns>
        </member>
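The FieldSelector contract above is just a per-field-name callback consulted while a document is being read. As a self-contained sketch (hypothetical names in the Java style of the other examples here, not the actual Lucene.Net types), a selector that eagerly loads a whitelist of display fields and skips everything else might look like:

```java
import java.util.Set;

// Minimal sketch of the FieldSelector idea: a callback invoked once per
// field name during document loading, returning what to do with the field.
class SelectorSketch {
    enum FieldSelectorResult { LOAD, LAZY_LOAD, NO_LOAD }

    interface FieldSelector {
        FieldSelectorResult accept(String fieldName);
    }

    // Eagerly load only the named display fields; skip everything else.
    static FieldSelector displayOnly(Set<String> wanted) {
        return name -> wanted.contains(name)
                ? FieldSelectorResult.LOAD
                : FieldSelectorResult.NO_LOAD;
    }
}
```

Because the reader asks the selector about each stored field before deserializing it, fields answered with NO_LOAD are never read from disk at all.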
        <member name="T:Lucene.Net.Documents.FieldSelectorResult">
            <summary>  Provides information about what should be done with this Field 
            
            
            </summary>
        </member>
        <member name="F:Lucene.Net.Documents.FieldSelectorResult.LOAD">
            <summary> Load this <see cref="T:Lucene.Net.Documents.Field"/> every time the <see cref="T:Lucene.Net.Documents.Document"/> is loaded, reading in the data as it is encountered.
            <see cref="M:Lucene.Net.Documents.Document.GetField(System.String)"/> and <see cref="M:Lucene.Net.Documents.Document.GetFieldable(System.String)"/> should not return null.
            <p/>
            <see cref="M:Lucene.Net.Documents.Document.Add(Lucene.Net.Documents.Fieldable)"/> should be called by the Reader.
            </summary>
        </member>
        <member name="F:Lucene.Net.Documents.FieldSelectorResult.LAZY_LOAD">
            <summary> Lazily load this <see cref="T:Lucene.Net.Documents.Field"/>.  This means the <see cref="T:Lucene.Net.Documents.Field"/> is valid, but it may not actually contain its data until
            invoked.  <see cref="M:Lucene.Net.Documents.Document.GetField(System.String)"/> SHOULD NOT BE USED.  <see cref="M:Lucene.Net.Documents.Document.GetFieldable(System.String)"/> is safe to use and should
            return a valid instance of a <see cref="T:Lucene.Net.Documents.Fieldable"/>.
            <p/>
            <see cref="M:Lucene.Net.Documents.Document.Add(Lucene.Net.Documents.Fieldable)"/> should be called by the Reader.
            </summary>
        </member>
        <member name="F:Lucene.Net.Documents.FieldSelectorResult.NO_LOAD">
            <summary> Do not load the <see cref="T:Lucene.Net.Documents.Field"/>.  <see cref="M:Lucene.Net.Documents.Document.GetField(System.String)"/> and <see cref="M:Lucene.Net.Documents.Document.GetFieldable(System.String)"/> should return null.
            <see cref="M:Lucene.Net.Documents.Document.Add(Lucene.Net.Documents.Fieldable)"/> is not called.
            <p/>
            <see cref="M:Lucene.Net.Documents.Document.Add(Lucene.Net.Documents.Fieldable)"/> should not be called by the Reader.
            </summary>
        </member>
        <member name="F:Lucene.Net.Documents.FieldSelectorResult.LOAD_AND_BREAK">
            <summary> Load this field as in the <see cref="F:Lucene.Net.Documents.FieldSelectorResult.LOAD"/> case, but immediately return from <see cref="T:Lucene.Net.Documents.Field"/> loading for the <see cref="T:Lucene.Net.Documents.Document"/>.  Thus, the
            Document may not have its complete set of Fields.  <see cref="M:Lucene.Net.Documents.Document.GetField(System.String)"/> and <see cref="M:Lucene.Net.Documents.Document.GetFieldable(System.String)"/> should
            both be valid for this <see cref="T:Lucene.Net.Documents.Field"/>
            <p/>
            <see cref="M:Lucene.Net.Documents.Document.Add(Lucene.Net.Documents.Fieldable)"/> should be called by the Reader.
            </summary>
        </member>
        <member name="F:Lucene.Net.Documents.FieldSelectorResult.LOAD_FOR_MERGE">
            <summary> Behaves much like <see cref="F:Lucene.Net.Documents.FieldSelectorResult.LOAD"/> but does not uncompress any compressed data.  This is used for internal purposes.
            <see cref="M:Lucene.Net.Documents.Document.GetField(System.String)"/> and <see cref="M:Lucene.Net.Documents.Document.GetFieldable(System.String)"/> should not return null.
            <p/>
            <see cref="M:Lucene.Net.Documents.Document.Add(Lucene.Net.Documents.Fieldable)"/> should be called by
            the Reader.
            </summary>
            <deprecated> This is an internal option only, and is
            no longer needed now that <see cref="T:Lucene.Net.Documents.CompressionTools"/>
            is used for field compression.
            </deprecated>
        </member>
        <member name="F:Lucene.Net.Documents.FieldSelectorResult.SIZE">
            <summary>Expert:  Load the size of this <see cref="T:Lucene.Net.Documents.Field"/> rather than its value.
            Size is measured as the number of bytes required to store the field: the raw byte count for a binary or compressed value, and 2 * (character count) for a String value.
            The size is stored as a binary value, represented as an int in a byte[], with the higher order byte first in [0]
            </summary>
        </member>
        <member name="F:Lucene.Net.Documents.FieldSelectorResult.SIZE_AND_BREAK">
            <summary>Expert: Like <see cref="F:Lucene.Net.Documents.FieldSelectorResult.SIZE"/> but immediately break from the field loading loop, i.e., stop loading further fields, after the size is loaded </summary>
        </member>
        <member name="T:Lucene.Net.Documents.LoadFirstFieldSelector">
            <summary> Load the first field and break.
            <p/>
            See <see cref="F:Lucene.Net.Documents.FieldSelectorResult.LOAD_AND_BREAK"/>
            </summary>
        </member>
        <member name="T:Lucene.Net.Documents.MapFieldSelector">
            <summary> A <see cref="T:Lucene.Net.Documents.FieldSelector"/> based on a Map of field names to <see cref="T:Lucene.Net.Documents.FieldSelectorResult"/>s
            
            </summary>
        </member>
        <member name="M:Lucene.Net.Documents.MapFieldSelector.#ctor(System.Collections.IDictionary)">
            <summary>Create a MapFieldSelector</summary>
            <param name="fieldSelections">maps from field names (String) to <see cref="T:Lucene.Net.Documents.FieldSelectorResult"/>s
            </param>
        </member>
        <member name="M:Lucene.Net.Documents.MapFieldSelector.#ctor(System.Collections.IList)">
            <summary>Create a MapFieldSelector</summary>
            <param name="fields">fields to LOAD.  List of Strings.  All other fields are NO_LOAD.
            </param>
        </member>
        <member name="M:Lucene.Net.Documents.MapFieldSelector.#ctor(System.String[])">
            <summary>Create a MapFieldSelector</summary>
            <param name="fields">fields to LOAD.  All other fields are NO_LOAD.
            </param>
        </member>
        <member name="M:Lucene.Net.Documents.MapFieldSelector.Accept(System.String)">
            <summary>Load field according to its associated value in fieldSelections</summary>
            <param name="field">a field name
            </param>
            <returns> the fieldSelections value that field maps to or NO_LOAD if none.
            </returns>
        </member>
        <member name="T:Lucene.Net.Documents.NumberTools">
            <summary> Provides support for converting longs to Strings, and back again. The strings
            are structured so that lexicographic sorting order is preserved.
            
            <p/>
            That is, if l1 is less than l2 for any two longs l1 and l2, then
            NumberTools.longToString(l1) is lexicographically less than
            NumberTools.longToString(l2). (Similarly for "greater than" and "equals".)
            
            <p/>
            This class handles <b>all</b> long values (unlike
            <see cref="T:Lucene.Net.Documents.DateField"/>).
            
            </summary>
            <deprecated> For new indexes use <see cref="T:Lucene.Net.Util.NumericUtils"/> instead, which
            provides a sortable binary representation (prefix encoded) of numeric
            values.
            To index and efficiently query numeric values use <see cref="T:Lucene.Net.Documents.NumericField"/>
            and <see cref="T:Lucene.Net.Search.NumericRangeQuery"/>.
            This class is included for use with existing
            indices and will be removed in a future release.
            </deprecated>
        </member>
        <member name="F:Lucene.Net.Documents.NumberTools.MIN_STRING_VALUE">
            <summary> Equivalent to longToString(Long.MIN_VALUE)</summary>
        </member>
        <member name="F:Lucene.Net.Documents.NumberTools.MAX_STRING_VALUE">
            <summary> Equivalent to longToString(Long.MAX_VALUE)</summary>
        </member>
        <member name="F:Lucene.Net.Documents.NumberTools.STR_SIZE">
            <summary> The length of (all) strings returned by <see cref="M:Lucene.Net.Documents.NumberTools.LongToString(System.Int64)"/></summary>
        </member>
        <member name="M:Lucene.Net.Documents.NumberTools.LongToString(System.Int64)">
            <summary> Converts a long to a String suitable for indexing.</summary>
        </member>
        <member name="M:Lucene.Net.Documents.NumberTools.StringToLong(System.String)">
            <summary> Converts a String that was returned by <see cref="M:Lucene.Net.Documents.NumberTools.LongToString(System.Int64)"/> back to a
            long.
            
            </summary>
            <throws>  IllegalArgumentException </throws>
            <summary>             if the input is null
            </summary>
            <throws>  NumberFormatException </throws>
            <summary>             if the input does not parse (it was not a String returned by
            longToString()).
            </summary>
        </member>
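The order-preserving encoding NumberTools provides can be sketched in a few lines. This is a simplified illustration (fixed-width hex rather than the padded radix-36 format the real class emits): flipping the sign bit maps every long into the unsigned range, so fixed-width string comparison agrees with numeric comparison.

```java
// Order-preserving long <-> string encoding, in the spirit of NumberTools
// (illustrative only; the real NumberTools format differs).
class SortableLongs {
    static final int WIDTH = 16; // 64 bits = 16 hex digits

    static String longToString(long value) {
        // XOR with the sign bit maps Long.MIN_VALUE..Long.MAX_VALUE
        // onto 0..2^64-1, so unsigned order equals numeric order.
        long shifted = value ^ Long.MIN_VALUE;
        String hex = Long.toUnsignedString(shifted, 16);
        StringBuilder padded = new StringBuilder();
        for (int i = hex.length(); i < WIDTH; i++) padded.append('0');
        return padded.append(hex).toString();
    }

    static long stringToLong(String s) {
        return Long.parseUnsignedLong(s, 16) ^ Long.MIN_VALUE;
    }
}
```

Since every encoded string has the same length and its characters sort in digit order, `l1 < l2` implies `longToString(l1)` sorts before `longToString(l2)`, which is exactly the property term-based range queries rely on.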
        <member name="T:Lucene.Net.Documents.NumericField">
             <summary> <p/>This class provides a <see cref="T:Lucene.Net.Documents.Field"/> that enables indexing
             of numeric values for efficient range filtering and
             sorting.  Here's an example usage, adding an int value:
             <code>
             document.add(new NumericField(name).setIntValue(value));
             </code>
             
             For optimal performance, re-use the
             <c>NumericField</c> and <see cref="T:Lucene.Net.Documents.Document"/> instance for more than
             one document:
             
             <code>
             NumericField field = new NumericField(name);
             Document document = new Document();
             document.add(field);
             
             for(all documents) {
             ...
             field.setIntValue(value);
             writer.addDocument(document);
             ...
             }
             </code>
             
             <p/>The .Net native types <c>int</c>, <c>long</c>,
             <c>float</c> and <c>double</c> are
             directly supported.  However, any value that can be
             converted into these native types can also be indexed.
             For example, date/time values represented by a
             <see cref="T:System.DateTime"/> can be translated into a long
             value using the <c>DateTime.Ticks</c> property.  If you
             don't need full tick precision, you can quantize the
             value, either by dividing <c>DateTime.Ticks</c> or by using
             the separate properties (<c>Year</c>, <c>Month</c>, etc.) to
             construct an <c>int</c> or
             <c>long</c> value.<p/>
             
             <p/>To perform range querying or filtering against a
             <c>NumericField</c>, use <see cref="T:Lucene.Net.Search.NumericRangeQuery"/> or
             <see cref="T:Lucene.Net.Search.NumericRangeFilter"/>.  To sort according to a
             <c>NumericField</c>, use the normal numeric sort types, eg
             <see cref="F:Lucene.Net.Search.SortField.INT"/> (note that <see cref="F:Lucene.Net.Search.SortField.AUTO"/>
             will not work with these fields).  <c>NumericField</c> values
             can also be loaded directly from <see cref="T:Lucene.Net.Search.FieldCache"/>.<p/>
             
             <p/>By default, a <c>NumericField</c>'s value is not stored but
             is indexed for range filtering and sorting.  You can use
             the <see cref="M:Lucene.Net.Documents.NumericField.#ctor(System.String,Lucene.Net.Documents.Field.Store,System.Boolean)"/>
             constructor if you need to change these defaults.<p/>
             
             <p/>You may add the same field name as a <c>NumericField</c> to
             the same document more than once.  Range querying and
             filtering will be the logical OR of all values; so a range query
             will hit all documents that have at least one value in
             the range. However sort behavior is not defined.  If you need to sort,
             you should separately index a single-valued <c>NumericField</c>.<p/>
             
             <p/>A <c>NumericField</c> will consume somewhat more disk space
             in the index than an ordinary single-valued field.
             However, for a typical index that includes substantial
             textual content per document, this increase will likely
             be in the noise. <p/>
             
             <p/>Within Lucene, each numeric value is indexed as a
             <em>trie</em> structure, where each term is logically
             assigned to larger and larger pre-defined brackets (which
             are simply lower-precision representations of the value).
             The step size between each successive bracket is called the
             <c>precisionStep</c>, measured in bits.  Smaller
             <c>precisionStep</c> values result in larger number
             of brackets, which consumes more disk space in the index
             but may result in faster range search performance.  The
             default value, 4, was selected for a reasonable tradeoff
             of disk space consumption versus performance.  You can
             use the expert constructor <see cref="M:Lucene.Net.Documents.NumericField.#ctor(System.String,System.Int32,Lucene.Net.Documents.Field.Store,System.Boolean)"/>
             if you'd
             like to change the value.  Note that you must also
             specify a congruent value when creating <see cref="T:Lucene.Net.Search.NumericRangeQuery"/>
             or <see cref="T:Lucene.Net.Search.NumericRangeFilter"/>.
             For low cardinality fields larger precision steps are good.
             If the cardinality is &lt; 100, it is fair
             to use <see cref="F:System.Int32.MaxValue"/>, which produces one
             term per value.
             
             <p/>For more information on the internals of numeric trie
             indexing, including the <a href="../search/NumericRangeQuery.html#precisionStepDesc"><c>precisionStep</c></a>
             configuration, see <see cref="T:Lucene.Net.Search.NumericRangeQuery"/>. The format of
             indexed values is described in <see cref="T:Lucene.Net.Util.NumericUtils"/>.
             
             <p/>If you only need to sort by numeric value, and never
             run range querying/filtering, you can index using a
             <c>precisionStep</c> of <see cref="F:System.Int32.MaxValue"/>.
             This will minimize disk space consumed. <p/>
             
             <p/>More advanced users can instead use <see cref="T:Lucene.Net.Analysis.NumericTokenStream"/>
             directly, when indexing numbers. This
             class is a wrapper around this token stream type for
             easier, more intuitive usage.<p/>
             
             <p/><b>NOTE:</b> This class is only used during
             indexing. When retrieving the stored field value from a
             <see cref="T:Lucene.Net.Documents.Document"/> instance after search, you will get a
             conventional <see cref="T:Lucene.Net.Documents.Fieldable"/> instance where the numeric
             values are returned as <see cref="T:System.String"/>s (according to
             <c>toString(value)</c> of the used data type).
             
             <p/><font color="red"><b>NOTE:</b> This API is
             experimental and might change in incompatible ways in the
             next release.</font>
             
             </summary>
             <since> 2.9
             </since>
        </member>
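The trie bracketing described above is easy to visualize with a small sketch (a hypothetical helper; the real encoding lives in NumericTokenStream and packs the shift into the term itself): each precisionStep-sized shift zeroes more low-order bits, yielding the successively coarser bracket terms that a range query can match instead of enumerating every individual value.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrates trie indexing of an int: one term per precision level,
// each term a lower-precision bracket of the original value.
class TrieTerms {
    static List<String> intToTrieTerms(int value, int precisionStep) {
        List<String> terms = new ArrayList<>();
        for (int shift = 0; shift < 32; shift += precisionStep) {
            int bracket = (value >>> shift) << shift; // zero the low bits
            terms.add("shift=" + shift + ":0x" + Integer.toHexString(bracket));
        }
        return terms;
    }
}
```

With precisionStep = 4 an int produces 32/4 = 8 terms, while a precisionStep of Int32.MaxValue degenerates to a single exact term per value, which matches the sort-only advice given above.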
        <member name="M:Lucene.Net.Documents.NumericField.#ctor(System.String)">
            <summary> Creates a field for numeric values using the default <c>precisionStep</c>
            <see cref="F:Lucene.Net.Util.NumericUtils.PRECISION_STEP_DEFAULT"/> (4). The instance is not yet initialized with
            a numeric value; before indexing a document containing this field,
            set a value using the various set<em>???</em>Value() methods.
            This constructor creates an indexed, but not stored field.
            </summary>
            <param name="name">the field name
            </param>
        </member>
        <member name="M:Lucene.Net.Documents.NumericField.#ctor(System.String,Lucene.Net.Documents.Field.Store,System.Boolean)">
            <summary> Creates a field for numeric values using the default <c>precisionStep</c>
            <see cref="F:Lucene.Net.Util.NumericUtils.PRECISION_STEP_DEFAULT"/> (4). The instance is not yet initialized with
            a numeric value; before indexing a document containing this field,
            set a value using the various set<em>???</em>Value() methods.
            </summary>
            <param name="name">the field name
            </param>
            <param name="store">if the field should be stored in plain text form
            (according to <c>toString(value)</c> of the used data type)
            </param>
            <param name="index">if the field should be indexed using <see cref="T:Lucene.Net.Analysis.NumericTokenStream"/>
            </param>
        </member>
        <member name="M:Lucene.Net.Documents.NumericField.#ctor(System.String,System.Int32)">
            <summary> Creates a field for numeric values with the specified
            <c>precisionStep</c>. The instance is not yet initialized with
            a numeric value; before indexing a document containing this field,
            set a value using the various set<em>???</em>Value() methods.
            This constructor creates an indexed, but not stored field.
            </summary>
            <param name="name">the field name
            </param>
            <param name="precisionStep">the used <a href="../search/NumericRangeQuery.html#precisionStepDesc">precision step</a>
            </param>
        </member>
        <member name="M:Lucene.Net.Documents.NumericField.#ctor(System.String,System.Int32,Lucene.Net.Documents.Field.Store,System.Boolean)">
            <summary> Creates a field for numeric values with the specified
            <c>precisionStep</c>. The instance is not yet initialized with
            a numeric value; before indexing a document containing this field,
            set a value using the various set<em>???</em>Value() methods.
            </summary>
            <param name="name">the field name
            </param>
            <param name="precisionStep">the used <a href="../search/NumericRangeQuery.html#precisionStepDesc">precision step</a>
            </param>
            <param name="store">if the field should be stored in plain text form
            (according to <c>toString(value)</c> of the used data type)
            </param>
            <param name="index">if the field should be indexed using <see cref="T:Lucene.Net.Analysis.NumericTokenStream"/>
            </param>
        </member>
        <member name="M:Lucene.Net.Documents.NumericField.TokenStreamValue">
            <summary>Returns a <see cref="T:Lucene.Net.Analysis.NumericTokenStream"/> for indexing the numeric value. </summary>
        </member>
        <member name="M:Lucene.Net.Documents.NumericField.BinaryValue">
            <summary>Always returns <c>null</c> for numeric fields </summary>
        </member>
        <member name="M:Lucene.Net.Documents.NumericField.GetBinaryValue(System.Byte[])">
            <summary>Always returns <c>null</c> for numeric fields </summary>
        </member>
        <member name="M:Lucene.Net.Documents.NumericField.ReaderValue">
            <summary>Always returns <c>null</c> for numeric fields </summary>
        </member>
        <member name="M:Lucene.Net.Documents.NumericField.StringValue">
            <summary>Returns the numeric value as a string (how it is stored, when <see cref="F:Lucene.Net.Documents.Field.Store.YES"/> is chosen). </summary>
        </member>
        <member name="M:Lucene.Net.Documents.NumericField.GetNumericValue">
            <summary>Returns the current numeric value as a boxed numeric type, or <c>null</c> if not yet initialized. </summary>
        </member>
        <member name="M:Lucene.Net.Documents.NumericField.SetLongValue(System.Int64)">
            <summary> Initializes the field with the supplied <c>long</c> value.</summary>
            <param name="value_Renamed">the numeric value
            </param>
            <returns> this instance, so calls can be chained:
            <c>document.Add(new NumericField(name, precisionStep).SetLongValue(value))</c>
            </returns>
        </member>
        <member name="M:Lucene.Net.Documents.NumericField.SetIntValue(System.Int32)">
            <summary> Initializes the field with the supplied <c>int</c> value.</summary>
            <param name="value_Renamed">the numeric value
            </param>
            <returns> this instance, so calls can be chained:
            <c>document.Add(new NumericField(name, precisionStep).SetIntValue(value))</c>
            </returns>
        </member>
        <member name="M:Lucene.Net.Documents.NumericField.SetDoubleValue(System.Double)">
            <summary> Initializes the field with the supplied <c>double</c> value.</summary>
            <param name="value_Renamed">the numeric value
            </param>
            <returns> this instance, so calls can be chained:
            <c>document.Add(new NumericField(name, precisionStep).SetDoubleValue(value))</c>
            </returns>
        </member>
        <member name="M:Lucene.Net.Documents.NumericField.SetFloatValue(System.Single)">
            <summary> Initializes the field with the supplied <c>float</c> value.</summary>
            <param name="value_Renamed">the numeric value
            </param>
            <returns> this instance, so calls can be chained:
            <c>document.Add(new NumericField(name, precisionStep).SetFloatValue(value))</c>
            </returns>
        </member>
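            Taken together, the setters above support a fluent pattern when adding numeric fields to a document. A minimal C# sketch (field names, values, and the precision step are illustrative, assuming the NumericField API documented here):

            ```csharp
            using Lucene.Net.Documents;

            // Build a document with numeric fields added via the fluent setters.
            // A precision step of 4 is just the example value here; smaller steps
            // speed up range queries at the cost of a larger index.
            var doc = new Document();

            // Stored and indexed long value (e.g. a price in cents).  Because the
            // field is stored, StringValue() returns the toString() form "1299".
            doc.Add(new NumericField("price", 4, Field.Store.YES, true).SetLongValue(1299L));

            // Indexed-only int value; BinaryValue() and ReaderValue() always
            // return null for numeric fields, as documented above.
            doc.Add(new NumericField("year", 4, Field.Store.NO, true).SetIntValue(2013));
            ```

            Each setter returns the field itself, which is what makes the single-expression `document.Add(...)` form shown in the returns notes possible.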
        <member name="T:Lucene.Net.Documents.SetBasedFieldSelector">
            <summary> Declares which fields to load normally and which fields to load lazily.
            </summary>
        </member>
        <member name="M:Lucene.Net.Documents.SetBasedFieldSelector.#ctor(System.Collections.Hashtable,System.Collections.Hashtable)">
            <summary> Pass in the Set of <see cref="T:Lucene.Net.Documents.Field"/> names to load and the Set of <see cref="T:Lucene.Net.Documents.Field"/> names to load lazily.  If both are null, the
            Document will not have any <see cref="T:Lucene.Net.Documents.Field"/> on it.  
            </summary>
            <param name="fieldsToLoad">A Set of <see cref="T:System.String"/> field names to load.  May be empty, but not null
            </param>
            <param name="lazyFieldsToLoad">A Set of <see cref="T:System.String"/> field names to load lazily.  May be empty, but not null  
            </param>
        </member>
        <member name="M:Lucene.Net.Documents.SetBasedFieldSelector.Accept(System.String)">
            <summary> Indicate whether to load the field with the given name or not. If the <see cref="M:Lucene.Net.Documents.AbstractField.Name"/> is not in either of the 
            initializing Sets, then <see cref="F:Lucene.Net.Documents.FieldSelectorResult.NO_LOAD"/> is returned.  If a Field name
            is in both <c>fieldsToLoad</c> and <c>lazyFieldsToLoad</c>, lazy has precedence.
            
            </summary>
            <param name="fieldName">The <see cref="T:Lucene.Net.Documents.Field"/> name to check
            </param>
            <returns> The <see cref="T:Lucene.Net.Documents.FieldSelectorResult"/>
            </returns>
        </member>
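            A hedged sketch of the selector in use, following the Hashtable-based constructor signature documented above (field names and the commented-out reader call are illustrative):

            ```csharp
            using System.Collections;
            using Lucene.Net.Documents;
            using Lucene.Net.Index;

            // Load "title" eagerly and "body" lazily; any other field name
            // returns NO_LOAD from Accept().  The constructor takes two
            // Hashtables used as sets, keyed by field name.
            var fieldsToLoad = new Hashtable { { "title", "title" } };
            var lazyFieldsToLoad = new Hashtable { { "body", "body" } };
            var selector = new SetBasedFieldSelector(fieldsToLoad, lazyFieldsToLoad);

            // With an already-open IndexReader, the selector is consulted per
            // stored field when materializing a document:
            // Document doc = reader.Document(0, selector);
            ```

            Note that, as documented, a field named in both sets is loaded lazily: lazy takes precedence.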
        <member name="T:Lucene.Net.Index.AbstractAllTermDocs">
            <summary>
            Base class for enumerating all but deleted docs.
            
            <p/>NOTE: this class is meant only to be used internally
            by Lucene; it's only public so it can be shared across
            packages.  This means the API is freely subject to
            change, and the class could be removed entirely, in any
            Lucene release.  Use directly at your own risk!
            </summary>
        </member>
        <member name="T:Lucene.Net.Index.TermDocs">
            <summary>TermDocs provides an interface for enumerating &lt;document, frequency&gt;
            pairs for a term.  <p/> The document portion names each document containing
            the term.  Documents are indicated by number.  The frequency portion gives
            the number of times the term occurred in each document.  <p/> The pairs are
            ordered by document number.
            </summary>
            <seealso cref="M:Lucene.Net.Index.IndexReader.TermDocs">
            </seealso>
        </member>
        <member name="M:Lucene.Net.Index.TermDocs.Seek(Lucene.Net.Index.Term)">
            <summary>Sets this to the data for a term.
            The enumeration is reset to the start of the data for this term.
            </summary>
        </member>
        <member name="M:Lucene.Net.Index.TermDocs.Seek(Lucene.Net.Index.TermEnum)">
            <summary>Sets this to the data for the current term in a <see cref="T:Lucene.Net.Index.TermEnum"/>.
            This may be optimized in some implementations.
            </summary>
        </member>
        <member name="M:Lucene.Net.Index.TermDocs.Doc">
            <summary>Returns the current document number.  <p/> This is invalid until <see cref="M:Lucene.Net.Index.TermDocs.Next"/>
            is called for the first time.
            </summary>
        </member>
        <member name="M:Lucene.Net.Index.TermDocs.Freq">
            <summary>Returns the frequency of the term within the current document.  <p/> This
            is invalid until <see cref="M:Lucene.Net.Index.TermDocs.Next"/> is called for the first time.
            </summary>
        </member>
        <member name="M:Lucene.Net.Index.TermDocs.Next">
            <summary>Moves to the next pair in the enumeration.  <p/> Returns true iff there is
            such a next pair in the enumeration. 
            </summary>
        </member>
        <member name="M:Lucene.Net.Index.TermDocs.Read(System.Int32[],System.Int32[])">
            <summary>Attempts to read multiple entries from the enumeration, up to length of
            <i>docs</i>.  Document numbers are stored in <i>docs</i>, and term
            frequencies are stored in <i>freqs</i>.  The <i>freqs</i> array must be as
            long as the <i>docs</i> array.
            
            <p/>Returns the number of entries read.  Zero is only returned when the
            stream has been exhausted.  
            </summary>
        </member>
        <member name="M:Lucene.Net.Index.TermDocs.SkipTo(System.Int32)">
            <summary>Skips entries to the first beyond the current whose document number is
            greater than or equal to <i>target</i>. <p/>Returns true iff there is such
            an entry.  <p/>Behaves as if written: <code>
            boolean skipTo(int target) {
                do {
                    if (!next())
                        return false;
                } while (target > doc());
                return true;
            }
            </code>
            Some implementations are considerably more efficient than that.
            </summary>
        </member>
        <member name="M:Lucene.Net.Index.TermDocs.Close">
            <summary>Frees associated resources. </summary>
        </member>
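            The TermDocs contract above can be exercised as follows; a minimal sketch assuming an already-open <c>IndexReader</c> named <c>reader</c> (the field and term values are illustrative):

            ```csharp
            using Lucene.Net.Index;

            // Enumerate all (document, frequency) pairs for the term
            // "lucene" in field "body".  Doc() and Freq() are invalid until
            // Next() has returned true at least once, per the contract above.
            TermDocs termDocs = reader.TermDocs(new Term("body", "lucene"));
            try
            {
                while (termDocs.Next())
                {
                    int docId = termDocs.Doc();   // document number
                    int freq = termDocs.Freq();   // occurrences in that document
                    System.Console.WriteLine("doc {0}: freq {1}", docId, freq);
                }
            }
            finally
            {
                termDocs.Close();  // frees associated resources
            }
            ```

            For jumping ahead rather than stepping, SkipTo(target) behaves like the loop shown in its summary but is typically far cheaper.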
        <member name="T:Lucene.Net.Index.BufferedDeletes">
            <summary>Holds buffered deletes, by docID, term or query.  We
            hold two instances of this class: one for the deletes
            prior to the last flush, the other for deletes after
            the last flush.  This is so if we need to abort
            (discard all buffered docs) we can also discard the
            buffered deletes yet keep the deletes done during
            previously flushed segments. 
            </summary>
        </member>
        <member name="T:Lucene.Net.Store.IndexInput">
            <summary>Abstract base class for input from a file in a <see cref="T:Lucene.Net.Store.Directory"/>.  A
            random-access input stream.  Used for all Lucene index input operations.
            </summary>
            <seealso cref="T:Lucene.Net.Store.Directory">
            </seealso>
        </member>
        <member name="M:Lucene.Net.Store.IndexInput.ReadByte">
            <summary>Reads and returns a single byte.</summary>
            <seealso cref="M:Lucene.Net.Store.IndexOutput.WriteByte(System.Byte)">
            </seealso>
        </member>
        <member name="M:Lucene.Net.Store.IndexInput.ReadBytes(System.Byte[],System.Int32,System.Int32)">
            <summary>Reads a specified number of bytes into an array at the specified offset.</summary>
            <param name="b">the array to read bytes into
            </param>
            <param name="offset">the offset in the array to start storing bytes
            </param>
            <param name="len">the number of bytes to read
            </param>
            <seealso cref="M:Lucene.Net.Store.IndexOutput.WriteBytes(System.Byte[],System.Int32)">
            </seealso>
        </member>
        <member name="M:Lucene.Net.Store.IndexInput.ReadBytes(System.Byte[],System.Int32,System.Int32,System.Boolean)">
            <summary>Reads a specified number of bytes into an array at the
            specified offset with control over whether the read
            should be buffered (callers who have their own buffer
            should pass in "false" for useBuffer).  Currently only
            <see cref="T:Lucene.Net.Store.BufferedIndexInput"/> respects this parameter.
            </summary>
            <param name="b">the array to read bytes into
            </param>
            <param name="offset">the offset in the array to start storing bytes
            </param>
            <param name="len">the number of bytes to read
            </param>
            <param name="useBuffer">set to false if the caller will handle
            buffering.
            </param>
            <seealso cref="M:Lucene.Net.Store.IndexOutput.WriteBytes(System.Byte[],System.Int32)">
            </seealso>
        </member>
        <member name="M:Lucene.Net.Store.IndexInput.ReadInt">
            <summary>Reads four bytes and returns an int.</summary>
            <seealso cref="M:Lucene.Net.Store.IndexOutput.WriteInt(System.Int32)">
            </seealso>
        </member>
        <member name="M:Lucene.Net.Store.IndexInput.ReadVInt">
            <summary>Reads an int stored in variable-length format.  Reads between one and
            five bytes.  Smaller values take fewer bytes.  Negative numbers are not
            supported.
            </summary>
            <seealso cref="M:Lucene.Net.Store.IndexOutput.WriteVInt(System.Int32)">
            </seealso>
        </member>
        <member name="M:Lucene.Net.Store.IndexInput.ReadLong">
            <summary>Reads eight bytes and returns a long.</summary>
            <seealso cref="M:Lucene.Net.Store.IndexOutput.WriteLong(System.Int64)">
            </seealso>
        </member>
        <member name="M:Lucene.Net.Store.IndexInput.ReadVLong">
            <summary>Reads a long stored in variable-length format.  Reads between one and
            nine bytes.  Smaller values take fewer bytes.  Negative numbers are not
            supported. 
            </summary>
        </member>
        <member name="M:Lucene.Net.Store.IndexInput.SetModifiedUTF8StringsMode">
            <summary>Call this if ReadString should read characters stored
            in the old modified UTF-8 format (length in Java chars
            and Java's modified UTF-8 encoding).  This is used for
            indices written pre-2.4.  See LUCENE-510 for details. 
            </summary>
        </member>
        <member name="M:Lucene.Net.Store.IndexInput.ReadString">
            <summary>Reads a string.</summary>
            <seealso cref="M:Lucene.Net.Store.IndexOutput.WriteString(System.String)">
            </seealso>
        </member>
        <member name="M:Lucene.Net.Store.IndexInput.ReadChars(System.Char[],System.Int32,System.Int32)">
            <summary>Reads Lucene's old "modified UTF-8" encoded
            characters into an array.
            </summary>
            <param name="buffer">the array to read characters into
            </param>
            <param name="start">the offset in the array to start storing characters
            </param>
            <param name="length">the number of characters to read
            </param>
            <seealso cref="M:Lucene.Net.Store.IndexOutput.WriteChars(System.String,System.Int32,System.Int32)">
            </seealso>
            <deprecated> Please use ReadString or ReadBytes
            instead, and construct the string
            from those UTF-8 bytes.
            </deprecated>
        </member>
        <member name="M:Lucene.Net.Store.IndexInput.SkipChars(System.Int32)">
            <summary> Expert.
            
            Similar to <see cref="M:Lucene.Net.Store.IndexInput.ReadChars(System.Char[],System.Int32,System.Int32)"/>, but does not perform any conversion on the bytes it reads.  It still
            has to invoke <see cref="M:Lucene.Net.Store.IndexInput.ReadByte"/>, just as <see cref="M:Lucene.Net.Store.IndexInput.ReadChars(System.Char[],System.Int32,System.Int32)"/> does, but it needs no buffer
            and performs no bitwise operations, since it only inspects each byte to determine
            how many more bytes to read.
            </summary>
            <param name="length">The number of chars to read
            </param>
            <deprecated> this method operates on old "modified utf8" encoded
            strings
            </deprecated>
        </member>
        <member name="M:Lucene.Net.Store.IndexInput.Close">
            <summary>Closes the stream to further operations. </summary>
        </member>
        <member name="M:Lucene.Net.Store.IndexInput.GetFilePointer">
            <summary>Returns the current position in this file, where the next read will
            occur.
            </summary>
            <seealso cref="M:Lucene.Net.Store.IndexInput.Seek(System.Int64)">
            </seealso>
        </member>
        <member name="M:Lucene.Net.Store.IndexInput.Seek(System.Int64)">
            <summary>Sets current position in this file, where the next read will occur.</summary>
            <seealso cref="M:Lucene.Net.Store.IndexInput.GetFilePointer">
            </seealso>
        </member>
        <member name="M:Lucene.Net.Store.IndexInput.Length">
            <summary>The number of bytes in the file. </summary>
        </member>
        <member name="M:Lucene.Net.Store.IndexInput.Clone">
            <summary>Returns a clone of this stream.
            
            <p/>Clones of a stream access the same data, and are positioned at the same
            point as the stream they were cloned from.
            
            <p/>Expert: Subclasses must ensure that clones may be positioned at
            different points in the input from each other and from the stream they
            were cloned from.
            </summary>
        </member>
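            The read primitives above combine into a simple pattern; a hedged sketch assuming an index directory on disk (the directory path and <c>example.bin</c> file name are hypothetical):

            ```csharp
            using Lucene.Net.Store;

            // Open a file from a Directory and read primitives with the
            // methods documented above.
            Directory dir = FSDirectory.Open(new System.IO.DirectoryInfo("index"));
            IndexInput input = dir.OpenInput("example.bin");
            try
            {
                int header = input.ReadInt();    // exactly four bytes
                long marker = input.ReadLong();  // exactly eight bytes
                int count = input.ReadVInt();    // one to five bytes; non-negative only

                input.Seek(0);                   // random access: rewind to the start
                long size = input.Length();      // total bytes in the file
            }
            finally
            {
                input.Close();
            }
            ```

            Because clones share data but keep independent positions, a cloned IndexInput can be handed to another consumer without disturbing the original's file pointer.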
        <member name="T:Lucene.Net.Index.ByteSliceWriter">
            <summary> Class to write byte streams into slices of shared
            byte[].  This is used by DocumentsWriter to hold the
            posting list for many terms in RAM.
            </summary>
        </member>
        <member name="M:Lucene.Net.Index.ByteSliceWriter.Init(System.Int32)">
            <summary> Set up the writer to write at address.</summary>
        </member>
        <member name="M:Lucene.Net.Index.ByteSliceWriter.WriteByte(System.Byte)">
            <summary>Write byte into byte slice stream </summary>
        </member>
        <member name="T:Lucene.Net.Index.CheckIndex">
            <summary> Basic tool and API to check the health of an index and
            write a new segments file that removes reference to
            problematic segments.
            
            <p/>As this tool checks every byte in the index, on a large
            index it can take quite a long time to run.
            
            <p/><b>WARNING</b>: this tool and API is new and
            experimental and is subject to sudden change in the
            next release.  Please make a complete backup of your
            index before using this to fix your index!
            </summary>
        </member>
        <member name="F:Lucene.Net.Index.CheckIndex.out_Renamed">
            <summary>Default PrintStream for all CheckIndex instances.</summary>
            <deprecated> Use <see cref="M:Lucene.Net.Index.CheckIndex.SetInfoStream(System.IO.StreamWriter)"/> per instance,
            instead. 
            </deprecated>
        </member>
        <member name="M:Lucene.Net.Index.CheckIndex.#ctor(Lucene.Net.Store.Directory)">
            <summary>Create a new CheckIndex on the directory. </summary>
        </member>
        <member name="M:Lucene.Net.Index.CheckIndex.SetInfoStream(System.IO.StreamWriter)">
            <summary>Set infoStream where messages should go.  If null, no
            messages are printed. 
            </summary>
        </member>
        <member name="M:Lucene.Net.Index.CheckIndex.Check(Lucene.Net.Store.Directory,System.Boolean)">
            <summary>Returns true if index is clean, else false. </summary>
            <deprecated> Please instantiate a CheckIndex and then use <see cref="M:Lucene.Net.Index.CheckIndex.CheckIndex_Renamed_Method"/> instead 
            </deprecated>
        </member>
        <member name="M:Lucene.Net.Index.CheckIndex.Check(Lucene.Net.Store.Directory,System.Boolean,System.Collections.IList)">
            <summary>Returns true if index is clean, else false.</summary>
            <deprecated> Please instantiate a CheckIndex and then use <see cref="M:Lucene.Net.Index.CheckIndex.CheckIndex_Renamed_Method(System.Collections.IList)"/> instead 
            </deprecated>
        </member>
        <member name="M:Lucene.Net.Index.CheckIndex.CheckIndex_Renamed_Method">
            <summary>Returns a <see cref="T:Lucene.Net.Index.CheckIndex.Status"/> instance detailing
            the state of the index.
            
            <p/>As this method checks every byte in the index, on a large
            index it can take quite a long time to run.
            
            <p/><b>WARNING</b>: make sure
            you only call this when the index is not opened by any
            writer. 
            </summary>
        </member>
        <member name="M:Lucene.Net.Index.CheckIndex.CheckIndex_Renamed_Method(System.Collections.IList)">
            <summary>Returns a <see cref="T:Lucene.Net.Index.CheckIndex.Status"/> instance detailing
            the state of the index.
            
            </summary>
            <param name="onlySegments">list of specific segment names to check
            
            <p/>As this method checks every byte in the specified
            segments, on a large index it can take quite a long
            time to run.
            
            <p/><b>WARNING</b>: make sure
            you only call this when the index is not opened by any
            writer. 
            </param>
        </member>
        <member name="M:Lucene.Net.Index.CheckIndex.TestFieldNorms(System.Collections.Generic.ICollection{System.String},Lucene.Net.Index.SegmentReader)">
            <summary> Test field norms.</summary>
        </member>
        <member name="M:Lucene.Net.Index.CheckIndex.TestTermIndex(Lucene.Net.Index.SegmentInfo,Lucene.Net.Index.SegmentReader)">
            <summary> Test the term index.</summary>
        </member>
        <member name="M:Lucene.Net.Index.CheckIndex.TestStoredFields(Lucene.Net.Index.SegmentInfo,Lucene.Net.Index.SegmentReader,System.Globalization.NumberFormatInfo)">
            <summary> Test stored fields for a segment.</summary>
        </member>
        <member name="M:Lucene.Net.Index.CheckIndex.TestTermVectors(Lucene.Net.Index.SegmentInfo,Lucene.Net.Index.SegmentReader,System.Globalization.NumberFormatInfo)">
            <summary> Test term vectors for a segment.</summary>
        </member>
        <member name="M:Lucene.Net.Index.CheckIndex.FixIndex(Lucene.Net.Index.CheckIndex.Status)">
            <summary>Repairs the index using previously returned result
            from <see cref="T:Lucene.Net.Index.CheckIndex"/>.  Note that this does not
            remove any of the unreferenced files after it's done;
            you must separately open an <see cref="T:Lucene.Net.Index.IndexWriter"/>, which
            deletes unreferenced files when it's created.
            
            <p/><b>WARNING</b>: this writes a
            new segments file into the index, effectively removing
            all documents in broken segments from the index.
            BE CAREFUL.
            
            <p/><b>WARNING</b>: Make sure you only call this when the
            index is not opened by any writer. 
            </summary>
        </member>
        <member name="M:Lucene.Net.Index.CheckIndex.Main(System.String[])">
            <summary>Command-line interface to check and fix an index.
            <p/>
            Run it like this:
            <code>
            java -ea:Lucene.Net... Lucene.Net.Index.CheckIndex pathToIndex [-fix] [-segment X] [-segment Y]
            </code>
            <list type="bullet">
            <item><c>-fix</c>: actually write a new segments_N file, removing any problematic segments</item>
            <item><c>-segment X</c>: only check the specified
            segment(s).  This can be specified multiple times,
            to check more than one segment, e.g. <c>-segment _2
            -segment _a</c>.  You can't use this with the -fix
            option.</item>
            </list>
            <p/><b>WARNING</b>: <c>-fix</c> should only be used on an emergency basis as it will cause
            documents (perhaps many) to be permanently removed from the index.  Always make
            a backup copy of your index before running this!  Do not run this tool on an index
            that is actively being written to.  You have been warned!
            <p/>Run without -fix, this tool will open the index, report version information
            and report any exceptions it hits and what action it would take if -fix were
            specified.  With -fix, this tool will remove any segments that have issues and
            write a new segments_N file.  This means all documents contained in the affected
            segments will be removed.
            <p/>
            This tool exits with exit code 1 if the index cannot be opened or has any
            corruption, else 0.
            </summary>
        </member>
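            The same checks can be run programmatically instead of from the command line; a hedged sketch (the <c>"index"</c> path is hypothetical, and as the warnings above state, back up the index and close all writers first):

            ```csharp
            using System.IO;
            using Lucene.Net.Index;
            using Lucene.Net.Store;

            // Programmatic equivalent of the command-line tool described above.
            Directory dir = FSDirectory.Open(new DirectoryInfo("index"));
            var checker = new CheckIndex(dir);

            // Route progress messages to the console instead of the deprecated
            // static out_Renamed stream.
            checker.SetInfoStream(new StreamWriter(System.Console.OpenStandardOutput()));

            // CheckIndex_Renamed_Method is this port's name for Java's checkIndex().
            CheckIndex.Status status = checker.CheckIndex_Renamed_Method();
            if (!status.clean)
            {
                // Destructive, like -fix: drops every document in a broken segment.
                // checker.FixIndex(status);
            }
            ```

            Inspecting <c>status.segmentInfos</c> before calling FixIndex shows exactly which segments (and how many documents) would be lost.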
        <member name="T:Lucene.Net.Index.CheckIndex.Status">
            <summary> Returned from <see cref="M:Lucene.Net.Index.CheckIndex.CheckIndex_Renamed_Method"/> detailing the health and status of the index.
            
            <p/><b>WARNING</b>: this API is new and experimental and is
            subject to sudden change in the next release.
            
            </summary>
        </member>
        <member name="F:Lucene.Net.Index.CheckIndex.Status.clean">
            <summary>True if no problems were found with the index. </summary>
        </member>
        <member name="F:Lucene.Net.Index.CheckIndex.Status.missingSegments">
            <summary>True if we were unable to locate and load the segments_N file. </summary>
        </member>
        <member name="F:Lucene.Net.Index.CheckIndex.Status.cantOpenSegments">
            <summary>True if we were unable to open the segments_N file. </summary>
        </member>
        <member name="F:Lucene.Net.Index.CheckIndex.Status.missingSegmentVersion">
            <summary>True if we were unable to read the version number from segments_N file. </summary>
        </member>
        <member name="F:Lucene.Net.Index.CheckIndex.Status.segmentsFileName">
            <summary>Name of latest segments_N file in the index. </summary>
        </member>
        <member name="F:Lucene.Net.Index.CheckIndex.Status.numSegments">
            <summary>Number of segments in the index. </summary>
        </member>
        <member name="F:Lucene.Net.Index.CheckIndex.Status.segmentFormat">
            <summary>String description of the version of the index. </summary>
        </member>
        <member name="F:Lucene.Net.Index.CheckIndex.Status.segmentsChecked">
            <summary>Empty unless you passed specific segments list to check as optional 3rd argument.</summary>
            <seealso cref="M:Lucene.Net.Index.CheckIndex.CheckIndex_Renamed_Method(System.Collections.IList)">
            </seealso>
        </member>
        <member name="F:Lucene.Net.Index.CheckIndex.Status.toolOutOfDate">
            <summary>True if the index was created with a newer version of Lucene than the CheckIndex tool. </summary>
        </member>
        <member name="F:Lucene.Net.Index.CheckIndex.Status.segmentInfos">
            <summary>List of <see cref="T:Lucene.Net.Index.CheckIndex.Status.SegmentInfoStatus"/> instances, detailing status of each segment. </summary>
        </member>
        <member name="F:Lucene.Net.Index.CheckIndex.Status.dir">
            <summary>Directory index is in. </summary>
        </member>
        <member name="F:Lucene.Net.Index.CheckIndex.Status.newSegments">
            <summary> SegmentInfos instance containing only segments that
            had no problems (this is used with the <see cref="M:Lucene.Net.Index.CheckIndex.FixIndex(Lucene.Net.Index.CheckIndex.Status)"/> 
            method to repair the index). 
            </summary>
        </member>
        <member name="F:Lucene.Net.Index.CheckIndex.Status.totLoseDocCount">
            <summary>How many documents will be lost to bad segments. </summary>
        </member>
        <member name="F:Lucene.Net.Index.CheckIndex.Status.numBadSegments">
            <summary>How many bad segments were found. </summary>
        </member>
        <member name="F:Lucene.Net.Index.CheckIndex.Status.partial">
            <summary>True if we checked only specific segments (<see cref="M:Lucene.Net.Index.CheckIndex.CheckIndex_Renamed_Method(System.Collections.IList)"/>
            was called with a non-null
            argument). 
            </summary>
        </member>
        <member name="F:Lucene.Net.Index.CheckIndex.Status.userData">
            <summary>Holds the userData of the last commit in the index. </summary>
        </member>
        <member name="T:Lucene.Net.Index.CheckIndex.Status.SegmentInfoStatus">
            <summary>Holds the status of each segment in the index.
            See <see cref="T:Lucene.Net.Index.SegmentInfos"/>.
            
            <p/><b>WARNING</b>: this API is new and experimental and is
            subject to sudden change in the next release.
            </summary>
        </member>
        <member name="F:Lucene.Net.Index.CheckIndex.Status.SegmentInfoStatus.name">
            <summary>Name of the segment. </summary>
        </member>
        <member name="F:Lucene.Net.Index.CheckIndex.Status.SegmentInfoStatus.docCount">
            <summary>Document count (does not take deletions into account). </summary>
        </member>
        <member name="F:Lucene.Net.Index.CheckIndex.Status.SegmentInfoStatus.compound">
            <summary>True if segment is compound file format. </summary>
        </member>
        <member name="F:Lucene.Net.Index.CheckIndex.Status.SegmentInfoStatus.numFiles">
            <summary>Number of files referenced by this segment. </summary>
        </member>
        <member name="F:Lucene.Net.Index.CheckIndex.Status.SegmentInfoStatus.sizeMB">
            <summary>Net size (MB) of the files referenced by this
            segment. 
            </summary>
        </member>
        <member name="F:Lucene.Net.Index.CheckIndex.Status.SegmentInfoStatus.docStoreOffset">
            <summary>Doc store offset, if this segment shares the doc
            store files (stored fields and term vectors) with
            other segments.  This is -1 if it does not share. 
            </summary>
        </member>
        <member name="F:Lucene.Net.Index.CheckIndex.Status.SegmentInfoStatus.docStoreSegment">
            <summary>String of the shared doc store segment, or null if
            this segment does not share the doc store files. 
            </summary>
        </member>
        <member name="F:Lucene.Net.Index.CheckIndex.Status.SegmentInfoStatus.docStoreCompoundFile">
            <summary>True if the shared doc store files are compound file
            format. 
            </summary>
        </member>
        <member name="F:Lucene.Net.Index.CheckIndex.Status.SegmentInfoStatus.hasDeletions">
            <summary>True if this segment has pending deletions. </summary>
        </member>
        <member name="F:Lucene.Net.Index.CheckIndex.Status.SegmentInfoStatus.deletionsFileName">
            <summary>Name of the current deletions file. </summary>
        </member>
        <member name="F:Lucene.Net.Index.CheckIndex.Status.SegmentInfoStatus.numDeleted">
            <summary>Number of deleted documents. </summary>
        </member>
        <member name="F:Lucene.Net.Index.CheckIndex.Status.SegmentInfoStatus.openReaderPassed">
            <summary>True if we were able to open a SegmentReader on this
            segment. 
            </summary>
        </member>
        <member name="F:Lucene.Net.Index.CheckIndex.Status.SegmentInfoStatus.numFields">
            <summary>Number of fields in this segment. </summary>
        </member>
        <member name="F:Lucene.Net.Index.CheckIndex.Status.SegmentInfoStatus.hasProx">
            <summary>True if at least one of the fields in this segment
            does not omitTermFreqAndPositions.
            </summary>
            <seealso cref="M:Lucene.Net.Documents.AbstractField.SetOmitTermFreqAndPositions(System.Boolean)">
            </seealso>
        </member>
        <member name="F:Lucene.Net.Index.CheckIndex.Status.SegmentInfoStatus.diagnostics">
            <summary>Map&lt;String, String&gt; that includes certain
            debugging details that IndexWriter records into
            each segment it creates 
            </summary>
        </member>
        <member name="F:Lucene.Net.Index.CheckIndex.Status.SegmentInfoStatus.fieldNormStatus">
            <summary>Status for testing of field norms (null if field norms could not be tested). </summary>
        </member>
        <member name="F:Lucene.Net.Index.CheckIndex.Status.SegmentInfoStatus.termIndexStatus">
            <summary>Status for testing of indexed terms (null if indexed terms could not be tested). </summary>
        </member>
        <member name="F:Lucene.Net.Index.CheckIndex.Status.SegmentInfoStatus.storedFieldStatus">
            <summary>Status for testing of stored fields (null if stored fields could not be tested). </summary>
        </member>
        <member name="F:Lucene.Net.Index.CheckIndex.Status.SegmentInfoStatus.termVectorStatus">
            <summary>Status for testing of term vectors (null if term vectors could not be tested). </summary>
        </member>
        <member name="T:Lucene.Net.Index.CheckIndex.Status.FieldNormStatus">
            <summary> Status from testing field norms.</summary>
        </member>
        <member name="F:Lucene.Net.Index.CheckIndex.Status.FieldNormStatus.totFields">
            <summary>Number of fields successfully tested </summary>
        </member>
        <member name="F:Lucene.Net.Index.CheckIndex.Status.FieldNormStatus.error">
            <summary>Exception thrown during term index test (null on success) </summary>
        </member>
        <member name="T:Lucene.Net.Index.CheckIndex.Status.TermIndexStatus">
            <summary> Status from testing term index.</summary>
        </member>
        <member name="F:Lucene.Net.Index.CheckIndex.Status.TermIndexStatus.termCount">
            <summary>Total term count </summary>
        </member>
        <member name="F:Lucene.Net.Index.CheckIndex.Status.TermIndexStatus.totFreq">
            <summary>Total frequency across all terms. </summary>
        </member>
        <member name="F:Lucene.Net.Index.CheckIndex.Status.TermIndexStatus.totPos">
            <summary>Total number of positions. </summary>
        </member>
        <member name="F:Lucene.Net.Index.CheckIndex.Status.TermIndexStatus.error">
            <summary>Exception thrown during term index test (null on success) </summary>
        </member>
        <member name="T:Lucene.Net.Index.CheckIndex.Status.StoredFieldStatus">
            <summary> Status from testing stored fields.</summary>
        </member>
        <member name="F:Lucene.Net.Index.CheckIndex.Status.StoredFieldStatus.docCount">
            <summary>Number of documents tested. </summary>
        </member>
        <member name="F:Lucene.Net.Index.CheckIndex.Status.StoredFieldStatus.totFields">
            <summary>Total number of stored fields tested. </summary>
        </member>
        <member name="F:Lucene.Net.Index.CheckIndex.Status.StoredFieldStatus.error">
            <summary>Exception thrown during stored fields test (null on success) </summary>
        </member>
        <member name="T:Lucene.Net.Index.CheckIndex.Status.TermVectorStatus">
            <summary> Status from testing term vectors.</summary>
        </member>
        <member name="F:Lucene.Net.Index.CheckIndex.Status.TermVectorStatus.docCount">
            <summary>Number of documents tested. </summary>
        </member>
        <member name="F:Lucene.Net.Index.CheckIndex.Status.TermVectorStatus.totVectors">
            <summary>Total number of term vectors tested. </summary>
        </member>
        <member name="F:Lucene.Net.Index.CheckIndex.Status.TermVectorStatus.error">
            <summary>Exception thrown during term vector test (null on success) </summary>
        </member>
        <member name="M:Lucene.Net.Index.SegmentTermDocs.Read(System.Int32[],System.Int32[])">
            <summary>Optimized implementation. </summary>
        </member>
        <member name="M:Lucene.Net.Index.SegmentTermDocs.SkipProx(System.Int64,System.Int32)">
            <summary>Overridden by SegmentTermPositions to skip in prox stream. </summary>
        </member>
        <member name="M:Lucene.Net.Index.SegmentTermDocs.SkipTo(System.Int32)">
            <summary>Optimized implementation. </summary>
        </member>
        <member name="T:Lucene.Net.Index.CompoundFileReader">
            <summary> Class for accessing a compound stream.
            This class implements a directory, but is limited to only read operations.
            Directory methods that would normally modify data throw an exception.
            
            
            </summary>
            <version>  $Id: CompoundFileReader.java 673371 2008-07-02 11:57:27Z mikemccand $
            </version>
        </member>
        <member name="T:Lucene.Net.Store.Directory">
             <summary>A Directory is a flat list of files.  Files may be written once, when they
             are created.  Once a file is created it may only be opened for read, or
             deleted.  Random access is permitted both when reading and writing.
             
             <p/> Java's i/o APIs are not used directly; rather, all i/o
             goes through this API.  This permits things such as: <list>
             <item> implementation of RAM-based indices;</item>
             <item> implementation of indices stored in a database, via JDBC;</item>
             <item> implementation of an index as a single file;</item>
             </list>
             
             Directory locking is implemented by an instance of <see cref="T:Lucene.Net.Store.LockFactory"/>
            , and can be changed for each Directory
             instance using <see cref="M:Lucene.Net.Store.Directory.SetLockFactory(Lucene.Net.Store.LockFactory)"/>.
             
             </summary>
        </member>
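        <!--
        The write-once contract described above (a flat list of files; once created, a file may only be read or deleted) can be illustrated with a minimal in-memory sketch. This is hypothetical Python for illustration, not the Lucene.Net API; the class and method names are invented.

```python
# Hypothetical sketch (not the Lucene.Net API): a Directory-like store
# where each file may be written once and is read-only afterwards.
class WriteOnceDirectory:
    def __init__(self):
        self._files = {}  # flat namespace: name -> bytes

    def create_output(self, name):
        # creating a file that already exists violates the write-once rule
        if name in self._files:
            raise IOError("file %r already exists (write-once)" % name)
        self._files[name] = b""
        def write(data):
            self._files[name] += data
        return write

    def open_input(self, name):
        if name not in self._files:
            raise FileNotFoundError(name)
        return self._files[name]

    def list_all(self):
        return sorted(self._files)

    def delete_file(self, name):
        del self._files[name]
```
        -->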
        <member name="F:Lucene.Net.Store.Directory.lockFactory">
            <summary>Holds the LockFactory instance (implements locking for
            this Directory instance). 
            </summary>
        </member>
        <member name="M:Lucene.Net.Store.Directory.List">
             <deprecated> For some Directory implementations (<see cref="T:Lucene.Net.Store.FSDirectory"/>
            , and its subclasses), this method
             silently filters its results to include only index
             files.  Please use <see cref="M:Lucene.Net.Store.Directory.ListAll"/> instead, which
             does no filtering. 
             </deprecated>
        </member>
        <member name="M:Lucene.Net.Store.Directory.ListAll">
            <summary>Returns an array of strings, one for each file in the
            directory.  Unlike <see cref="M:Lucene.Net.Store.Directory.List"/> this method does no
            filtering of the contents in a directory, and it will
            never return null (throws IOException instead).
            
            Currently this method simply falls back to <see cref="M:Lucene.Net.Store.Directory.List"/>
            for Directory impls outside of Lucene's core &amp;
            contrib, but in 3.0 that method will be removed and
            this method will become abstract. 
            </summary>
        </member>
        <member name="M:Lucene.Net.Store.Directory.FileExists(System.String)">
            <summary>Returns true iff a file with the given name exists. </summary>
        </member>
        <member name="M:Lucene.Net.Store.Directory.FileModified(System.String)">
            <summary>Returns the time the named file was last modified. </summary>
        </member>
        <member name="M:Lucene.Net.Store.Directory.TouchFile(System.String)">
            <summary>Set the modified time of an existing file to now. </summary>
        </member>
        <member name="M:Lucene.Net.Store.Directory.DeleteFile(System.String)">
            <summary>Removes an existing file in the directory. </summary>
        </member>
        <member name="M:Lucene.Net.Store.Directory.RenameFile(System.String,System.String)">
            <summary>Renames an existing file in the directory.
            If a file already exists with the new name, then it is replaced.
            This replacement is not guaranteed to be atomic.
            </summary>
            <deprecated> 
            </deprecated>
        </member>
        <member name="M:Lucene.Net.Store.Directory.FileLength(System.String)">
            <summary>Returns the length of a file in the directory. </summary>
        </member>
        <member name="M:Lucene.Net.Store.Directory.CreateOutput(System.String)">
            <summary>Creates a new, empty file in the directory with the given name.
            Returns a stream writing this file. 
            </summary>
        </member>
        <member name="M:Lucene.Net.Store.Directory.Sync(System.String)">
            <summary>Ensure that any writes to this file are moved to
            stable storage.  Lucene uses this to properly commit
            changes to the index, to prevent a machine/OS crash
            from corrupting the index. 
            </summary>
        </member>
        <member name="M:Lucene.Net.Store.Directory.OpenInput(System.String)">
            <summary>Returns a stream reading an existing file. </summary>
        </member>
        <member name="M:Lucene.Net.Store.Directory.OpenInput(System.String,System.Int32)">
             <summary>Returns a stream reading an existing file, with the
             specified read buffer size.  The particular Directory
             implementation may ignore the buffer size.  Currently
             the only Directory implementations that respect this
             parameter are <see cref="T:Lucene.Net.Store.FSDirectory"/> and <see cref="T:Lucene.Net.Index.CompoundFileReader"/>
            .
             </summary>
        </member>
        <member name="M:Lucene.Net.Store.Directory.MakeLock(System.String)">
            <summary>Construct a <see cref="T:Lucene.Net.Store.Lock"/>.</summary>
            <param name="name">the name of the lock file
            </param>
        </member>
        <member name="M:Lucene.Net.Store.Directory.ClearLock(System.String)">
            <summary> Attempt to clear (forcefully unlock and remove) the
            specified lock.  Only call this at a time when you are
            certain this lock is no longer in use.
            </summary>
            <param name="name">name of the lock to be cleared.
            </param>
        </member>
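        <!--
        The make-lock / clear-lock contract above can be sketched with an in-memory lock factory: obtain fails while the lock is held, and clear_lock forcefully releases it regardless of who holds it. The names here are hypothetical, not the Lucene.Net Store API.

```python
# Illustrative sketch of the locking contract, using a set of held
# lock names in place of real lock files.
class SimpleLockFactory:
    def __init__(self):
        self._held = set()

    def make_lock(self, name):
        return SimpleLock(self, name)

    def clear_lock(self, name):
        # forcefully release, whether or not a Lock object still exists
        self._held.discard(name)

class SimpleLock:
    def __init__(self, factory, name):
        self._factory = factory
        self._name = name

    def obtain(self):
        if self._name in self._factory._held:
            return False  # already locked by someone else
        self._factory._held.add(self._name)
        return True

    def release(self):
        self._factory._held.discard(self._name)
```
        -->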
        <member name="M:Lucene.Net.Store.Directory.Close">
            <summary>Closes the store. </summary>
        </member>
        <member name="M:Lucene.Net.Store.Directory.SetLockFactory(Lucene.Net.Store.LockFactory)">
            <summary> Set the LockFactory that this Directory instance should
            use for its locking implementation.  Each instance of
            LockFactory should only be used for one directory (ie,
            do not share a single instance across multiple
            Directories).
            
            </summary>
            <param name="lockFactory">instance of <see cref="T:Lucene.Net.Store.LockFactory"/>.
            </param>
        </member>
        <member name="M:Lucene.Net.Store.Directory.GetLockFactory">
            <summary> Get the LockFactory that this Directory instance is
            using for its locking implementation.  Note that this
            may be null for Directory implementations that provide
            their own locking implementation.
            </summary>
        </member>
        <member name="M:Lucene.Net.Store.Directory.GetLockID">
            <summary> Return a string identifier that uniquely differentiates
            this Directory instance from other Directory instances.
            This ID should be the same if two Directory instances
            (even in different JVMs and/or on different machines)
            are considered "the same index".  This is how locking
            "scopes" to the right index.
            </summary>
        </member>
        <member name="M:Lucene.Net.Store.Directory.Copy(Lucene.Net.Store.Directory,Lucene.Net.Store.Directory,System.Boolean)">
            <summary> Copy contents of a directory src to a directory dest.
            If a file in src already exists in dest then the
            one in dest will be blindly overwritten.
            
            <p/><b>NOTE:</b> the source directory cannot change
            while this method is running.  Otherwise the results
            are undefined and you could easily hit a
            FileNotFoundException.
            
            <p/><b>NOTE:</b> this method only copies files that look
            like index files (ie, have extensions matching the
            known extensions of index files).
            
            </summary>
            <param name="src">source directory
            </param>
            <param name="dest">destination directory
            </param>
            <param name="closeDirSrc">if <c>true</c>, call <see cref="M:Lucene.Net.Store.Directory.Close"/> method on source directory
            </param>
            <throws>  IOException </throws>
        </member>
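        <!--
        The Copy contract documented above (files in src blindly overwrite same-named files in dest, and only names with known index-file extensions are copied) can be sketched as follows. The directories are modelled as plain dicts, and the extension set shown is illustrative, not Lucene's actual list.

```python
# Hypothetical sketch of the Copy semantics; the extension set is
# an illustrative stand-in for Lucene's known index-file extensions.
INDEX_EXTENSIONS = {"cfs", "fnm", "fdt", "fdx", "frq", "tis", "tii"}

def copy_index_files(src, dest):
    for name, data in src.items():
        ext = name.rsplit(".", 1)[-1]
        if ext in INDEX_EXTENSIONS:
            dest[name] = data  # blind overwrite of any existing file
    return dest
```
        -->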
        <member name="M:Lucene.Net.Store.Directory.EnsureOpen">
            <throws>  AlreadyClosedException if this Directory is closed </throws>
        </member>
        <member name="M:Lucene.Net.Index.CompoundFileReader.Dispose">
            <summary>
            .NET-specific Dispose; closes the reader and releases its resources.
            </summary>
        </member>
        <member name="M:Lucene.Net.Index.CompoundFileReader.List">
            <summary>Returns an array of strings, one for each file in the directory. </summary>
        </member>
        <member name="M:Lucene.Net.Index.CompoundFileReader.FileExists(System.String)">
            <summary>Returns true iff a file with the given name exists. </summary>
        </member>
        <member name="M:Lucene.Net.Index.CompoundFileReader.FileModified(System.String)">
            <summary>Returns the time the compound file was last modified. </summary>
        </member>
        <member name="M:Lucene.Net.Index.CompoundFileReader.TouchFile(System.String)">
            <summary>Set the modified time of the compound file to now. </summary>
        </member>
        <member name="M:Lucene.Net.Index.CompoundFileReader.DeleteFile(System.String)">
            <summary>Not implemented</summary>
            <throws>  UnsupportedOperationException  </throws>
        </member>
        <member name="M:Lucene.Net.Index.CompoundFileReader.RenameFile(System.String,System.String)">
            <summary>Not implemented</summary>
            <throws>  UnsupportedOperationException  </throws>
        </member>
        <member name="M:Lucene.Net.Index.CompoundFileReader.FileLength(System.String)">
            <summary>Returns the length of a file in the directory.</summary>
            <throws>  IOException if the file does not exist  </throws>
        </member>
        <member name="M:Lucene.Net.Index.CompoundFileReader.CreateOutput(System.String)">
            <summary>Not implemented</summary>
            <throws>  UnsupportedOperationException  </throws>
        </member>
        <member name="M:Lucene.Net.Index.CompoundFileReader.MakeLock(System.String)">
            <summary>Not implemented</summary>
            <throws>  UnsupportedOperationException  </throws>
        </member>
        <member name="T:Lucene.Net.Index.CompoundFileReader.CSIndexInput">
            <summary>Implementation of an IndexInput that reads from a portion of the
            compound file. The visibility is left as "package" only because
            this helps with testing since JUnit test cases in a different class
            can then access package fields of this class.
            </summary>
        </member>
        <member name="T:Lucene.Net.Store.BufferedIndexInput">
            <summary>Base implementation class for buffered <see cref="T:Lucene.Net.Store.IndexInput"/>. </summary>
        </member>
        <member name="F:Lucene.Net.Store.BufferedIndexInput.BUFFER_SIZE">
            <summary>Default buffer size </summary>
        </member>
        <member name="M:Lucene.Net.Store.BufferedIndexInput.#ctor(System.Int32)">
            <summary>Inits BufferedIndexInput with a specific bufferSize </summary>
        </member>
        <member name="M:Lucene.Net.Store.BufferedIndexInput.SetBufferSize(System.Int32)">
            <summary>Change the buffer size used by this IndexInput </summary>
        </member>
        <member name="M:Lucene.Net.Store.BufferedIndexInput.GetBufferSize">
            <seealso cref="M:Lucene.Net.Store.BufferedIndexInput.SetBufferSize(System.Int32)">
            </seealso>
        </member>
        <member name="M:Lucene.Net.Store.BufferedIndexInput.ReadInternal(System.Byte[],System.Int32,System.Int32)">
            <summary>Expert: implements buffer refill.  Reads bytes from the current position
            in the input.
            </summary>
            <param name="b">the array to read bytes into
            </param>
            <param name="offset">the offset in the array to start storing bytes
            </param>
            <param name="length">the number of bytes to read
            </param>
        </member>
        <member name="M:Lucene.Net.Store.BufferedIndexInput.SeekInternal(System.Int64)">
            <summary>Expert: implements seek.  Sets current position in this file, where the
            next <see cref="M:Lucene.Net.Store.BufferedIndexInput.ReadInternal(System.Byte[],System.Int32,System.Int32)"/> will occur.
            </summary>
            <seealso cref="M:Lucene.Net.Store.BufferedIndexInput.ReadInternal(System.Byte[],System.Int32,System.Int32)">
            </seealso>
        </member>
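        <!--
        The refill-and-seek contract of the two members above can be sketched as follows. This is a hypothetical Python sketch, not the Lucene.Net class: read_internal is backed by an in-memory bytes object, read_byte serves from the buffer and refills it only when exhausted, and seek drops the buffer and moves the position.

```python
# Illustrative buffered-input sketch: read_byte refills the buffer from
# read_internal when it runs out; seek invalidates the buffer.
class BufferedInput:
    def __init__(self, data, buffer_size=4):
        self._data = data
        self._pos = 0          # absolute position of the buffer start
        self._buffer = b""
        self._offset = 0       # position within the buffer
        self._buffer_size = buffer_size

    def _read_internal(self, start, length):
        # stands in for the abstract ReadInternal: read from "the file"
        return self._data[start:start + length]

    def read_byte(self):
        if self._offset >= len(self._buffer):  # buffer exhausted: refill
            self._pos += len(self._buffer)
            self._buffer = self._read_internal(self._pos, self._buffer_size)
            self._offset = 0
            if not self._buffer:
                raise EOFError
        b = self._buffer[self._offset]
        self._offset += 1
        return b

    def seek(self, pos):
        # SeekInternal analogue: next ReadInternal happens at pos
        self._pos = pos
        self._buffer = b""
        self._offset = 0
```
        -->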
        <member name="M:Lucene.Net.Index.CompoundFileReader.CSIndexInput.ReadInternal(System.Byte[],System.Int32,System.Int32)">
            <summary>Expert: implements buffer refill.  Reads bytes from the current
            position in the input.
            </summary>
            <param name="b">the array to read bytes into
            </param>
            <param name="offset">the offset in the array to start storing bytes
            </param>
            <param name="len">the number of bytes to read
            </param>
        </member>
        <member name="M:Lucene.Net.Index.CompoundFileReader.CSIndexInput.SeekInternal(System.Int64)">
            <summary>Expert: implements seek.  Sets current position in this file, where
            the next <see cref="M:Lucene.Net.Index.CompoundFileReader.CSIndexInput.ReadInternal(System.Byte[],System.Int32,System.Int32)"/> will occur.
            </summary>
            <seealso cref="M:Lucene.Net.Index.CompoundFileReader.CSIndexInput.ReadInternal(System.Byte[],System.Int32,System.Int32)">
            </seealso>
        </member>
        <member name="M:Lucene.Net.Index.CompoundFileReader.CSIndexInput.Close">
            <summary>Closes the stream to further operations. </summary>
        </member>
        <member name="T:Lucene.Net.Index.CompoundFileWriter">
            <summary> Combines multiple files into a single compound file.
            The file format:<br/>
            <list type="bullet">
            <item>VInt fileCount</item>
            <item>{Directory}
            fileCount entries with the following structure:</item>
            <list type="bullet">
            <item>long dataOffset</item>
            <item>String fileName</item>
            </list>
            <item>{File Data}
            fileCount entries with the raw data of the corresponding file</item>
            </list>
            
            The fileCount integer indicates how many files are contained in this compound
            file. The {directory} that follows has that many entries. Each directory entry
            contains a long pointer to the start of this file's data section, and a String
            with that file's name.
            
            
            </summary>
            <version>  $Id: CompoundFileWriter.java 690539 2008-08-30 17:33:06Z mikemccand $
            </version>
        </member>
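        <!--
        The layout described above (a count, a directory of offset/name entries, then the raw file data) can be sketched as a tiny writer/reader pair. This is a simplified illustration, not Lucene's actual encoding: a fixed 4-byte count and 8-byte offsets stand in for the VInt and long encodings, and names use a 2-byte length prefix rather than Lucene's string format.

```python
import struct

# Simplified sketch of the compound-file layout:
#   [count][dir entries: (dataOffset, fileName)...][file data...]
def write_compound(entries):
    names = list(entries)
    encoded = [n.encode("utf-8") for n in names]
    # each directory entry: 8-byte offset + 2-byte name length + name bytes
    dir_size = sum(8 + 2 + len(e) for e in encoded)
    offset = 4 + dir_size  # data section starts after count + directory
    dir_parts, blobs = [], []
    for n, e in zip(names, encoded):
        dir_parts.append(struct.pack(">q", offset)
                         + struct.pack(">H", len(e)) + e)
        blobs.append(entries[n])
        offset += len(entries[n])
    return struct.pack(">i", len(names)) + b"".join(dir_parts) + b"".join(blobs)

def read_compound(buf):
    (count,) = struct.unpack_from(">i", buf, 0)
    pos, dir_entries = 4, []
    for _ in range(count):
        (off,) = struct.unpack_from(">q", buf, pos)
        (nlen,) = struct.unpack_from(">H", buf, pos + 8)
        name = buf[pos + 10:pos + 10 + nlen].decode("utf-8")
        dir_entries.append((name, off))
        pos += 10 + nlen
    out = {}
    for i, (name, off) in enumerate(dir_entries):
        # a file's data runs to the next entry's offset (or end of buffer)
        end = dir_entries[i + 1][1] if i + 1 < len(dir_entries) else len(buf)
        out[name] = buf[off:end]
    return out
```
        -->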
        <member name="M:Lucene.Net.Index.CompoundFileWriter.#ctor(Lucene.Net.Store.Directory,System.String)">
            <summary>Create the compound stream in the specified file. The file name is the
            entire name (no extensions are added).
            </summary>
            <throws>  NullPointerException if <c>dir</c> or <c>name</c> is null </throws>
        </member>
        <member name="M:Lucene.Net.Index.CompoundFileWriter.GetDirectory">
            <summary>Returns the directory of the compound file. </summary>
        </member>
        <member name="M:Lucene.Net.Index.CompoundFileWriter.GetName">
            <summary>Returns the name of the compound file. </summary>
        </member>
        <member name="M:Lucene.Net.Index.CompoundFileWriter.AddFile(System.String)">
            <summary>Add a source stream. <c>file</c> is the string by which the 
            sub-stream will be known in the compound stream.
            
            </summary>
            <throws>  IllegalStateException if this writer is closed </throws>
            <throws>  NullPointerException if <c>file</c> is null </throws>
            <throws>  IllegalArgumentException if a file with the same name
              has been added already </throws>
        </member>
        <member name="M:Lucene.Net.Index.CompoundFileWriter.Close">
            <summary>Merge files with the extensions added up to now.
            All files with these extensions are combined sequentially into the
            compound stream. After successful merge, the source files
            are deleted.
            </summary>
            <throws>  IllegalStateException if close() had been called before or
              if no file has been added to this object </throws>
        </member>
        <member name="M:Lucene.Net.Index.CompoundFileWriter.CopyFile(Lucene.Net.Index.CompoundFileWriter.FileEntry,Lucene.Net.Store.IndexOutput,System.Byte[])">
            <summary>Copy the contents of the file with specified extension into the
            provided output stream. Use the provided buffer for moving data
            to reduce memory allocation.
            </summary>
        </member>
        <member name="F:Lucene.Net.Index.CompoundFileWriter.FileEntry.file">
            <summary>source file </summary>
        </member>
        <member name="F:Lucene.Net.Index.CompoundFileWriter.FileEntry.directoryOffset">
            <summary>temporary holder for the start of directory entry for this file </summary>
        </member>
        <member name="F:Lucene.Net.Index.CompoundFileWriter.FileEntry.dataOffset">
            <summary>temporary holder for the start of this file's data section </summary>
        </member>
        <member name="T:Lucene.Net.Index.ConcurrentMergeScheduler">
            <summary>A <see cref="T:Lucene.Net.Index.MergeScheduler"/> that runs each merge using a
            separate thread, up to a maximum number of threads
            (<see cref="M:Lucene.Net.Index.ConcurrentMergeScheduler.SetMaxThreadCount(System.Int32)"/>).  When that many merges are already
            running and another merge is needed, the thread(s) that are
            updating the index will
            pause until one or more merges complete.  This is a
            simple way to use concurrency in the indexing process
            without having to create and manage application level
            threads. 
            </summary>
        </member>
        <member name="T:Lucene.Net.Index.MergeScheduler">
            <summary><p/>Expert: <see cref="T:Lucene.Net.Index.IndexWriter"/> uses an instance
            implementing this interface to execute the merges
            selected by a <see cref="T:Lucene.Net.Index.MergePolicy"/>.  The default
            MergeScheduler is <see cref="T:Lucene.Net.Index.ConcurrentMergeScheduler"/>.<p/>
            
            <p/><b>NOTE:</b> This API is new and still experimental
            (subject to change suddenly in the next release)<p/>
            
            <p/><b>NOTE</b>: This class typically requires access to
            package-private APIs (eg, SegmentInfos) to do its job;
            if you implement your own MergePolicy, you'll need to put
            it in package Lucene.Net.Index in order to use
            these APIs.
            </summary>
        </member>
        <member name="M:Lucene.Net.Index.MergeScheduler.Merge(Lucene.Net.Index.IndexWriter)">
            <summary>Run the merges provided by <see cref="M:Lucene.Net.Index.IndexWriter.GetNextMerge"/>. </summary>
        </member>
        <member name="M:Lucene.Net.Index.MergeScheduler.Close">
            <summary>Close this MergeScheduler. </summary>
        </member>
        <member name="M:Lucene.Net.Index.ConcurrentMergeScheduler.SetMaxThreadCount(System.Int32)">
            <summary>Sets the max # simultaneous threads that may be
            running.  If a merge is necessary yet we already have
            this many threads running, the incoming thread (that
            is calling add/updateDocument) will block until
            a merge thread has completed. 
            </summary>
        </member>
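        <!--
        The blocking behaviour described above (the incoming indexing thread waits when the maximum number of merge threads is already running) can be sketched with a counting semaphore. This is a hypothetical Python sketch, not the Lucene.Net scheduler; BoundedMergeRunner and its methods are invented names.

```python
import threading

# Illustrative sketch: a semaphore caps concurrent "merge" workers, and
# the caller blocks inside start_merge until a slot frees up.
class BoundedMergeRunner:
    def __init__(self, max_threads):
        self._slots = threading.Semaphore(max_threads)
        self._threads = []

    def start_merge(self, merge_fn):
        self._slots.acquire()  # blocks while max_threads merges are running
        def worker():
            try:
                merge_fn()
            finally:
                self._slots.release()  # free the slot when the merge ends
        t = threading.Thread(target=worker)
        t.start()
        self._threads.append(t)

    def join_all(self):
        for t in self._threads:
            t.join()
```
        -->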
        <member name="M:Lucene.Net.Index.ConcurrentMergeScheduler.GetMaxThreadCount">
            <summary>Get the max # simultaneous threads that may be running.</summary>
            <seealso cref="M:Lucene.Net.Index.ConcurrentMergeScheduler.SetMaxThreadCount(System.Int32)">
            </seealso>
        </member>
        <member name="M:Lucene.Net.Index.ConcurrentMergeScheduler.GetMergeThreadPriority">
            <summary>Return the priority that merge threads run at.  By
            default the priority is 1 plus the priority of (ie,
            slightly higher priority than) the first thread that
            calls merge. 
            </summary>
        </member>
        <member name="M:Lucene.Net.Index.ConcurrentMergeScheduler.SetMergeThreadPriority(System.Int32)">
            <summary>Set the priority that merge threads run at. </summary>
        </member>
        <member name="M:Lucene.Net.Index.ConcurrentMergeScheduler.DoMerge(Lucene.Net.Index.MergePolicy.OneMerge)">
            <summary>Does the actual merge, by calling <see cref="M:Lucene.Net.Index.IndexWriter.Merge(Lucene.Net.Index.MergePolicy.OneMerge)"/> </summary>
        </member>
        <member name="M:Lucene.Net.Index.ConcurrentMergeScheduler.GetMergeThread(Lucene.Net.Index.IndexWriter,Lucene.Net.Index.MergePolicy.OneMerge)">
            <summary>Create and return a new MergeThread </summary>
        </member>
        <member name="M:Lucene.Net.Index.ConcurrentMergeScheduler.HandleMergeException(System.Exception)">
            <summary>Called when an exception is hit in a background merge
            thread 
            </summary>
        </member>
        <member name="M:Lucene.Net.Index.ConcurrentMergeScheduler.AnyUnhandledExceptions">
            <summary>Used for testing </summary>
        </member>
        <member name="M:Lucene.Net.Index.ConcurrentMergeScheduler.AddMyself">
            <summary>Used for testing </summary>
        </member>
        <member name="M:Lucene.Net.Index.ConcurrentMergeScheduler.SetSuppressExceptions">
            <summary>Used for testing </summary>
        </member>
        <member name="M:Lucene.Net.Index.ConcurrentMergeScheduler.ClearSuppressExceptions">
            <summary>Used for testing </summary>
        </member>
        <member name="F:Lucene.Net.Index.ConcurrentMergeScheduler.allInstances">
            <summary>Used for testing </summary>
        </member>
        <member name="T:SupportClass.ThreadClass">
            <summary>
            Support class used to handle threads
            </summary>
        </member>
        <member name="T:IThreadRunnable">
            <summary>
            This interface should be implemented by any class whose instances are intended 
            to be executed by a thread.
            </summary>
        </member>
        <member name="M:IThreadRunnable.Run">
            <summary>
            This method has to be implemented so that starting the thread causes the object's 
            Run method to be called in that separately executing thread.
            </summary>
        </member>
        <member name="T:SupportClass">
            <summary>
            Contains conversion support elements such as classes, interfaces and static methods.
            </summary>
        </member>
        <member name="M:SupportClass.TextSupport.GetCharsFromString(System.String,System.Int32,System.Int32,System.Char[],System.Int32)">
            <summary>
            Copies an array of chars obtained from a String into a specified array of chars
            </summary>
            <param name="sourceString">The String to get the chars from</param>
            <param name="sourceStart">Position of the String to start getting the chars</param>
            <param name="sourceEnd">Position of the String to end getting the chars</param>
            <param name="destinationArray">Array to return the chars</param>
            <param name="destinationStart">Position of the destination array of chars to start storing the chars</param>
            <returns>An array of chars</returns>
        </member>
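        <!--
        The copy semantics documented above follow Java's String.getChars contract: characters in the half-open range [sourceStart, sourceEnd) are copied into the destination array starting at destinationStart, and the array is modified in place. A Python sketch, assuming that contract:

```python
# Sketch of the getChars-style copy: chars source[source_start:source_end]
# land in destination_array at destination_start; the array is mutated
# in place and also returned.
def get_chars_from_string(source, source_start, source_end,
                          destination_array, destination_start):
    for i, ch in enumerate(source[source_start:source_end]):
        destination_array[destination_start + i] = ch
    return destination_array
```
        -->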
        <member name="F:SupportClass.ThreadClass.threadField">
            <summary>
            The instance of System.Threading.Thread
            </summary>
        </member>
        <member name="M:SupportClass.ThreadClass.#ctor">
            <summary>
            Initializes a new instance of the ThreadClass class
            </summary>
        </member>
        <member name="M:SupportClass.ThreadClass.#ctor(System.String)">
            <summary>
            Initializes a new instance of the Thread class.
            </summary>
            <param name="Name">The name of the thread</param>
        </member>
        <member name="M:SupportClass.ThreadClass.#ctor(System.Threading.ThreadStart)">
            <summary>
            Initializes a new instance of the Thread class.
            </summary>
            <param name="Start">A ThreadStart delegate that references the methods to be invoked when this thread begins executing</param>
        </member>
        <member name="M:SupportClass.ThreadClass.#ctor(System.Threading.ThreadStart,System.String)">
            <summary>
            Initializes a new instance of the Thread class.
            </summary>
            <param name="Start">A ThreadStart delegate that references the methods to be invoked when this thread begins executing</param>
            <param name="Name">The name of the thread</param>
        </member>
        <member name="M:SupportClass.ThreadClass.Run">
            <summary>
            This method has no functionality unless it is overridden
            </summary>
        </member>
        <member name="M:SupportClass.ThreadClass.Start">
            <summary>
            Causes the operating system to change the state of the current thread instance to ThreadState.Running
            </summary>
        </member>
        <member name="M:SupportClass.ThreadClass.Interrupt">
            <summary>
            Interrupts a thread that is in the WaitSleepJoin thread state
            </summary>
        </member>
        <member name="M:SupportClass.ThreadClass.Join">
            <summary>
            Blocks the calling thread until a thread terminates
            </summary>
        </member>
        <member name="M:SupportClass.ThreadClass.Join(System.Int64)">
            <summary>
            Blocks the calling thread until a thread terminates or the specified time elapses
            </summary>
            <param name="MiliSeconds">Time of wait in milliseconds</param>
        </member>
        <member name="M:SupportClass.ThreadClass.Join(System.Int64,System.Int32)">
            <summary>
            Blocks the calling thread until a thread terminates or the specified time elapses
            </summary>
            <param name="MiliSeconds">Time of wait in milliseconds</param>
            <param name="NanoSeconds">Time of wait in nanoseconds</param>
        </member>
        <member name="M:SupportClass.ThreadClass.Resume">
            <summary>
            Resumes a thread that has been suspended
            </summary>
        </member>
        <member name="M:SupportClass.ThreadClass.Abort">
            <summary>
            Raises a ThreadAbortException in the thread on which it is invoked, 
            to begin the process of terminating the thread. Calling this method 
            usually terminates the thread
            </summary>
        </member>
        <member name="M:SupportClass.ThreadClass.Abort(System.Object)">
            <summary>
            Raises a ThreadAbortException in the thread on which it is invoked, 
            to begin the process of terminating the thread while also providing
            exception information about the thread termination. 
            Calling this method usually terminates the thread.
            </summary>
            <param name="stateInfo">An object that contains application-specific information, such as state, which can be used by the thread being aborted</param>
        </member>
        <member name="M:SupportClass.ThreadClass.Suspend">
            <summary>
            Suspends the thread; if the thread is already suspended, this has no effect
            </summary>
        </member>
        <member name="M:SupportClass.ThreadClass.ToString">
            <summary>
            Returns a String that represents the current object
            </summary>
            <returns>A String that represents the current object</returns>
        </member>
        <member name="M:SupportClass.ThreadClass.Current">
            <summary>
            Gets the currently running thread
            </summary>
            <returns>The currently running thread</returns>
        </member>
        <member name="P:SupportClass.ThreadClass.Instance">
            <summary>
            Gets the current thread instance
            </summary>
        </member>
        <member name="P:SupportClass.ThreadClass.Name">
            <summary>
            Gets or sets the name of the thread
            </summary>
        </member>
        <member name="P:SupportClass.ThreadClass.Priority">
            <summary>
            Gets or sets a value indicating the scheduling priority of a thread
            </summary>
        </member>
        <member name="P:SupportClass.ThreadClass.IsAlive">
            <summary>
            Gets a value indicating the execution status of the current thread
            </summary>
        </member>
        <member name="P:SupportClass.ThreadClass.IsBackground">
            <summary>
            Gets or sets a value indicating whether or not a thread is a background thread.
            </summary>
        </member>
        <member name="T:SupportClass.FileSupport">
            <summary>
            Represents the methods to support some operations over files.
            </summary>
        </member>
        <member name="M:SupportClass.FileSupport.GetFiles(System.IO.FileInfo)">
            <summary>
            Returns an array of abstract pathnames representing the files and directories under the specified path.
            </summary>
            <param name="path">The abstract pathname whose children are to be listed.</param>
            <returns>An array of abstract pathnames for the children of the specified path, or null if the path is not a directory</returns>
        </member>
        <member name="M:SupportClass.FileSupport.GetLuceneIndexFiles(System.String,Lucene.Net.Index.IndexFileNameFilter)">
            <summary>
            Returns a list of files in a given directory.
            </summary>
            <param name="fullName">The full path name to the directory.</param>
            <param name="indexFileNameFilter"></param>
            <returns>An array containing the files.</returns>
        </member>
        <member name="M:SupportClass.FileSupport.Sync(System.IO.FileStream)">
            <summary>
            Flushes the specified file stream. Ensures that all buffered
            data is actually written to the file system.
            </summary>
            <param name="fileStream">The file stream.</param>
        </member>
        <member name="T:SupportClass.Number">
            <summary>
            A simple class for number conversions.
            </summary>
        </member>
        <member name="F:SupportClass.Number.MIN_RADIX">
            <summary>
            Min radix value.
            </summary>
        </member>
        <member name="F:SupportClass.Number.MAX_RADIX">
            <summary>
            Max radix value.
            </summary>
        </member>
        <member name="M:SupportClass.Number.ToString(System.Int64)">
            <summary>
            Converts a number to a System.String.
            </summary>
            <param name="number">The number to convert.</param>
            <returns>A System.String representation of the number.</returns>
        </member>
        <member name="M:SupportClass.Number.ToString(System.Single)">
            <summary>
            Converts a number to a System.String.
            </summary>
            <param name="f">The number to convert.</param>
            <returns>A System.String representation of the number.</returns>
        </member>
        <member name="M:SupportClass.Number.ToString(System.Int64,System.Int32)">
            <summary>
            Converts a number to System.String in the specified radix.
            </summary>
            <param name="i">A number to be converted.</param>
            <param name="radix">A radix.</param>
            <returns>A System.String representation of the number in the specified radix.</returns>
        </member>
        <member name="M:SupportClass.Number.Parse(System.String,System.Int32)">
            <summary>
            Parses a number in the specified radix.
            </summary>
            <param name="s">An input System.String.</param>
            <param name="radix">A radix.</param>
            <returns>The parsed number in the specified radix.</returns>
        </member>
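Number.ToString(long, radix) and Number.Parse(string, radix) mirror Java's Long.toString and Long.parseLong with an explicit radix. A small Java sketch of the semantics being ported:

```java
public class RadixDemo {
    public static void main(String[] args) {
        // Convert a number to its string form in base 16, then parse it back.
        String hex = Long.toString(255, 16);
        long value = Long.parseLong(hex, 16);
        System.out.println(hex);   // ff
        System.out.println(value); // 255
    }
}
```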
        <member name="M:SupportClass.Number.URShift(System.Int32,System.Int32)">
            <summary>
            Performs an unsigned bitwise right shift on the specified number
            </summary>
            <param name="number">Number to operate on</param>
            <param name="bits">Amount of bits to shift</param>
            <returns>The resulting number from the shift operation</returns>
        </member>
        <member name="M:SupportClass.Number.URShift(System.Int64,System.Int32)">
            <summary>
            Performs an unsigned bitwise right shift on the specified number
            </summary>
            <param name="number">Number to operate on</param>
            <param name="bits">Amount of bits to shift</param>
            <returns>The resulting number from the shift operation</returns>
        </member>
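Java has a dedicated unsigned right shift operator, which C# lacked when this port was written; URShift reproduces it by hand. A Java sketch contrasting the two shifts:

```java
public class URShiftDemo {
    public static void main(String[] args) {
        // Signed shift: the sign bit is copied into the vacated positions.
        System.out.println(-8 >> 1);  // -4
        // Unsigned shift: the vacated positions are filled with zero,
        // which is what Number.URShift emulates for .NET.
        System.out.println(-8 >>> 1); // 2147483644
    }
}
```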
        <member name="M:SupportClass.Number.NextSetBit(System.Collections.BitArray,System.Int32)">
            <summary>
            Returns the index of the first bit that is set to true that occurs 
            on or after the specified starting index. If no such bit exists 
            then -1 is returned.
            </summary>
            <param name="bits">The BitArray object.</param>
            <param name="fromIndex">The index to start checking from (inclusive).</param>
            <returns>The index of the next set bit.</returns>
        </member>
        <member name="M:SupportClass.Number.ToInt64(System.String)">
            <summary>
            Converts a System.String representation of a number to a long.
            </summary>
            <param name="s">The System.String to convert.</param>
            <returns>The converted long value.</returns>
        </member>
        <member name="T:SupportClass.Character">
            <summary>
            Mimics Java's Character class.
            </summary>
        </member>
        <member name="M:SupportClass.Character.ForDigit(System.Int32,System.Int32)">
            <summary>
            Returns the character representation of the specified digit in the specified radix.
            </summary>
            <param name="digit">The digit to convert.</param>
            <param name="radix">The radix.</param>
            <returns>The character representing the digit in the given radix.</returns>
        </member>
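Character.ForDigit presumably mirrors Java's Character.forDigit, which maps a numeric digit value to its character in a given radix. A Java sketch of that behavior:

```java
public class ForDigitDemo {
    public static void main(String[] args) {
        // Digit value 11 in base 16 is the character 'b'.
        System.out.println(Character.forDigit(11, 16)); // b
        // Digit value 5 in base 10 is the character '5'.
        System.out.println(Character.forDigit(5, 10));  // 5
    }
}
```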
        <member name="P:SupportClass.Character.MAX_RADIX">
            <summary>
            The maximum radix available for conversion to and from System.String, mimicking Java's Character.MAX_RADIX.
            </summary>
        </member>
        <member name="P:SupportClass.Character.MIN_RADIX">
            <summary>
            The minimum radix available for conversion to and from System.String, mimicking Java's Character.MIN_RADIX.
            </summary>
        </member>
        <member name="T:SupportClass.Double">
            <summary>
            Mimics Java's Double class.
            </summary>
        </member>
        <member name="T:SupportClass.Single">
            <summary>
            Mimics Java's Float class for parsing and formatting System.Single values.
            </summary>
        </member>
        <member name="M:SupportClass.Single.Parse(System.String,System.Globalization.NumberStyles,System.IFormatProvider)">
            <summary>
            Parses a System.String to a float using the specified number style and format provider.
            </summary>
            <param name="s">The System.String to parse.</param>
            <param name="style">The permitted number style.</param>
            <param name="provider">The culture-specific format provider.</param>
            <returns>The parsed float value.</returns>
        </member>
        <member name="M:SupportClass.Single.Parse(System.String,System.IFormatProvider)">
            <summary>
            Parses a System.String to a float using the specified format provider.
            </summary>
            <param name="s">The System.String to parse.</param>
            <param name="provider">The culture-specific format provider.</param>
            <returns>The parsed float value.</returns>
        </member>
        <member name="M:SupportClass.Single.Parse(System.String,System.Globalization.NumberStyles)">
            <summary>
            Parses a System.String to a float using the specified number style.
            </summary>
            <param name="s">The System.String to parse.</param>
            <param name="style">The permitted number style.</param>
            <returns>The parsed float value.</returns>
        </member>
        <member name="M:SupportClass.Single.Parse(System.String)">
            <summary>
            Parses a System.String to a float.
            </summary>
            <param name="s">The System.String to parse.</param>
            <returns>The parsed float value.</returns>
        </member>
        <member name="M:SupportClass.Single.ToString(System.Single)">
            <summary>
            Converts a float to its System.String representation.
            </summary>
            <param name="f">The float to convert.</param>
            <returns>A System.String representation of the float.</returns>
        </member>
        <member name="M:SupportClass.Single.ToString(System.Single,System.String)">
            <summary>
            Converts a float to its System.String representation using the specified format.
            </summary>
            <param name="f">The float to convert.</param>
            <param name="format">The format System.String.</param>
            <returns>A formatted System.String representation of the float.</returns>
        </member>
        <member name="T:SupportClass.AppSettings">
            <summary>
            Provides typed get and set access to application settings, with default values.
            </summary>
        </member>
        <member name="M:SupportClass.AppSettings.Set(System.String,System.Int32)">
            <summary>
            Sets an application setting to the specified integer value.
            </summary>
            <param name="key">The setting key.</param>
            <param name="defValue">The value to store.</param>
        </member>
        <member name="M:SupportClass.AppSettings.Set(System.String,System.Int64)">
            <summary>
            Sets an application setting to the specified long value.
            </summary>
            <param name="key">The setting key.</param>
            <param name="defValue">The value to store.</param>
        </member>
        <member name="M:SupportClass.AppSettings.Set(System.String,System.String)">
            <summary>
            Sets an application setting to the specified string value.
            </summary>
            <param name="key">The setting key.</param>
            <param name="defValue">The value to store.</param>
        </member>
        <member name="M:SupportClass.AppSettings.Set(System.String,System.Boolean)">
            <summary>
            Sets an application setting to the specified boolean value.
            </summary>
            <param name="key">The setting key.</param>
            <param name="defValue">The value to store.</param>
        </member>
        <member name="M:SupportClass.AppSettings.Get(System.String,System.Int32)">
            <summary>
            Gets an application setting as an integer, returning a default if the key is not present.
            </summary>
            <param name="key">The setting key.</param>
            <param name="defValue">The value to return if the key is not found.</param>
            <returns>The setting value, or defValue if the key is not present.</returns>
        </member>
        <member name="M:SupportClass.AppSettings.Get(System.String,System.Int64)">
            <summary>
            Gets an application setting as a long, returning a default if the key is not present.
            </summary>
            <param name="key">The setting key.</param>
            <param name="defValue">The value to return if the key is not found.</param>
            <returns>The setting value, or defValue if the key is not present.</returns>
        </member>
        <member name="M:SupportClass.AppSettings.Get(System.String,System.String)">
            <summary>
            Gets an application setting as a string, returning a default if the key is not present.
            </summary>
            <param name="key">The setting key.</param>
            <param name="defValue">The value to return if the key is not found.</param>
            <returns>The setting value, or defValue if the key is not present.</returns>
        </member>
        <member name="T:SupportClass.BitSetSupport">
            <summary>
            This class provides methods from java.util.BitSet
            that are not present in System.Collections.BitArray.
            </summary>
        </member>
        <member name="M:SupportClass.BitSetSupport.NextSetBit(System.Collections.BitArray,System.Int32)">
            <summary>
            Returns the next set bit at or after index, or -1 if no such bit exists.
            </summary>
            <param name="bitArray">The BitArray to check.</param>
            <param name="index">the index of bit array at which to start checking</param>
            <returns>the next set bit or -1</returns>
        </member>
        <member name="M:SupportClass.BitSetSupport.NextClearBit(System.Collections.BitArray,System.Int32)">
            <summary>
            Returns the next clear (un-set) bit at or after index, or -1 if no such bit exists.
            </summary>
            <param name="bitArray">The BitArray to check.</param>
            <param name="index">the index of bit array at which to start checking</param>
            <returns>the next clear bit or -1</returns>
        </member>
        <member name="M:SupportClass.BitSetSupport.Cardinality(System.Collections.BitArray)">
            <summary>
            Returns the number of bits set to true in this BitSet.
            </summary>
            <param name="bits">The BitArray object.</param>
            <returns>The number of bits set to true in this BitSet.</returns>
        </member>
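The three helpers above replicate java.util.BitSet methods that System.Collections.BitArray lacks. A Java sketch of the original BitSet behavior:

```java
import java.util.BitSet;

public class BitSetDemo {
    public static void main(String[] args) {
        BitSet bits = new BitSet();
        bits.set(3);
        bits.set(7);
        System.out.println(bits.nextSetBit(0));   // 3: first set bit at or after index 0
        System.out.println(bits.nextSetBit(4));   // 7: first set bit at or after index 4
        System.out.println(bits.nextClearBit(3)); // 4: first clear bit at or after index 3
        System.out.println(bits.cardinality());   // 2: number of bits set to true
    }
}
```

Note that Java's nextClearBit never returns -1, since a BitSet conceptually extends indefinitely with clear bits; the port over a fixed-size BitArray returns -1 when no clear bit exists within bounds.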
        <member name="T:SupportClass.Compare">
            <summary>
            Provides methods for comparing Lucene data structures for equality.
            </summary>
        </member>
        <member name="M:SupportClass.Compare.CompareTermArrays(Lucene.Net.Index.Term[],Lucene.Net.Index.Term[])">
            <summary>
            Compares two Term arrays for equality.
            </summary>
            <param name="t1">First Term array to compare</param>
            <param name="t2">Second Term array to compare</param>
            <returns>true if the Terms are equal in both arrays, false otherwise</returns>
        </member>
        <member name="T:SupportClass.WeakHashTable">
            <summary>
            A Hashtable which holds weak references to its keys so they
            can be collected during GC. 
            </summary>
        </member>
        <member name="F:SupportClass.WeakHashTable.collectableObject">
            <summary>
            Serves as a simple "GC Monitor" that indicates whether cleanup is needed. 
            If collectableObject.IsAlive is false, GC has occurred and we should perform cleanup
            </summary>
        </member>
        <member name="M:SupportClass.WeakHashTable.KeyEquals(System.Object,System.Object)">
            <summary>
            Customize the hashtable lookup process by overriding KeyEquals. KeyEquals
            will compare both WeakKey to WeakKey and WeakKey to real keys
            </summary>
        </member>
        <member name="M:SupportClass.WeakHashTable.CleanIfNeeded">
            <summary>
            Perform cleanup if GC occurred
            </summary>
        </member>
        <member name="M:SupportClass.WeakHashTable.Clean">
            <summary>
            Iterate over all keys and remove keys that were collected
            </summary>
        </member>
        <member name="M:SupportClass.WeakHashTable.Add(System.Object,System.Object)">
            <summary>
            Wrap each key with a WeakKey and add it to the hashtable
            </summary>
        </member>
        <member name="P:SupportClass.WeakHashTable.Keys">
            <summary>
            Create a temporary copy of the real keys and return that
            </summary>
        </member>
        <member name="T:SupportClass.WeakHashTable.WeakKey">
            <summary>
            A weak reference wrapper for the hashtable keys. Whenever a key/value pair 
            is added to the hashtable, the key is wrapped using a WeakKey. WeakKey saves the
            value of the original object hashcode for fast comparison.
            </summary>
        </member>
        <member name="T:SupportClass.WeakHashTable.WeakDictionaryEnumerator">
            <summary>
            A dictionary enumerator that wraps the original hashtable enumerator 
            and performs two tasks: extracting the real key from a WeakKey and skipping keys
            that have already been collected.
            </summary>
        </member>
        <member name="T:SupportClass.CollectionsHelper">
            <summary>
            Support class used to handle Hashtable addition, which first checks
            that the added item is unique in the hashtable.
            </summary>
        </member>
        <member name="M:SupportClass.CollectionsHelper.CollectionToString(System.Collections.ICollection)">
            <summary>
            Converts the specified collection to its string representation.
            </summary>
            <param name="c">The collection to convert to string.</param>
            <returns>A string representation of the specified collection.</returns>
        </member>
        <member name="M:SupportClass.CollectionsHelper.CompareStringArrays(System.String[],System.String[])">
            <summary>
            Compares two string arrays for equality.
            </summary>
            <param name="l1">First string array list to compare</param>
            <param name="l2">Second string array list to compare</param>
            <returns>true if the strings are equal in both arrays, false otherwise</returns>
        </member>
        <member name="M:SupportClass.CollectionsHelper.Sort(System.Collections.IList,System.Collections.IComparer)">
            <summary>
            Sorts an IList collection
            </summary>
            <param name="list">The System.Collections.IList instance that will be sorted</param>
            <param name="Comparator">The comparison criteria; null to use the natural ordering.</param>
        </member>
        <member name="M:SupportClass.CollectionsHelper.Fill(System.Array,System.Int32,System.Int32,System.Object)">
            <summary>
            Fills the array with a specific value from a start index to an end index.
            </summary>
            <param name="array">The array to be filled.</param>
            <param name="fromindex">The first index to be filled.</param>
            <param name="toindex">The last index to be filled.</param>
            <param name="val">The value to fill the array with.</param>
        </member>
        <member name="M:SupportClass.CollectionsHelper.Fill(System.Array,System.Object)">
            <summary>
            Fills the array with a specific value.
            </summary>
            <param name="array">The array to be filled.</param>
            <param name="val">The value to fill the array with.</param>
        </member>
        <member name="M:SupportClass.CollectionsHelper.Equals(System.Array,System.Array)">
            <summary>
            Compares all members of one array with those of another.
            </summary>
            <param name="array1">The array to be compared.</param>
            <param name="array2">The array to be compared with.</param>
            <returns>Returns true if the two specified arrays of Objects are equal 
            to one another. The two arrays are considered equal if both arrays 
            contain the same number of elements, and all corresponding pairs of 
            elements in the two arrays are equal. Two objects e1 and e2 are 
            considered equal if (e1==null ? e2==null : e1.equals(e2)). In other 
            words, the two arrays are equal if they contain the same elements in 
            the same order. Also, two array references are considered equal if 
            both are null.</returns>
        </member>
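The contract described above matches java.util.Arrays.equals. A Java sketch of element-wise array equality versus reference equality:

```java
import java.util.Arrays;

public class ArrayEqualsDemo {
    public static void main(String[] args) {
        String[] a = { "lucene", "net" };
        String[] b = { "lucene", "net" };
        // Reference comparison: distinct array objects.
        System.out.println(a == b);              // false
        // Element-wise comparison: same elements in the same order.
        System.out.println(Arrays.equals(a, b)); // true
    }
}
```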
        <member name="T:SupportClass.GeneralKeyedCollection`2">
            <summary>A collection of <typeparamref name="TItem"/> which can be
            looked up by instances of <typeparamref name="TKey"/>.</summary>
            <typeparam name="TItem">The type of the items contained in this
            collection.</typeparam>
            <typeparam name="TKey">The type of the keys that can be used to look
            up the items.</typeparam>
        </member>
        <member name="M:SupportClass.GeneralKeyedCollection`2.#ctor(System.Converter{`1,`0})">
            <summary>Creates a new instance of the
            <see cref="T:SupportClass.GeneralKeyedCollection`2"/> class.</summary>
            <param name="converter">The <see cref="T:System.Converter`2"/> which will convert
            instances of <typeparamref name="TItem"/> to <typeparamref name="TKey"/>
            when the override of <see cref="M:SupportClass.GeneralKeyedCollection`2.GetKeyForItem(`1)"/> is called.</param>
        </member>
        <member name="F:SupportClass.GeneralKeyedCollection`2.converter">
            <summary>The <see cref="T:System.Converter`2"/> which will convert
            instances of <typeparamref name="TItem"/> to <typeparamref name="TKey"/>
            when the override of <see cref="M:SupportClass.GeneralKeyedCollection`2.GetKeyForItem(`1)"/> is called.</summary>
        </member>
        <member name="M:SupportClass.GeneralKeyedCollection`2.GetKeyForItem(`1)">
            <summary>Converts an item that is added to the collection to
            a key.</summary>
            <param name="item">The instance of <typeparamref name="TItem"/>
            to convert into an instance of <typeparamref name="TKey"/>.</param>
            <returns>The instance of <typeparamref name="TKey"/> which is the
            key for this item.</returns>
        </member>
        <member name="M:SupportClass.GeneralKeyedCollection`2.ContainsKey(`0)">
            <summary>Determines if a key for an item exists in this
            collection.</summary>
            <param name="key">The instance of <typeparamref name="TKey"/>
            to see if it exists in this collection.</param>
            <returns>True if the key exists in the collection, false otherwise.</returns>
        </member>
        <member name="T:SupportClass.EquatableList`1">
            <summary>Represents a strongly typed list of objects that can be accessed by index.
            Provides methods to search, sort, and manipulate lists. Also provides functionality
            to compare lists against each other through an implementations of
            <see cref="T:System.IEquatable`1"/>.</summary>
            <typeparam name="T">The type of elements in the list.</typeparam>
        </member>
        <member name="M:SupportClass.EquatableList`1.#ctor">
            <summary>Initializes a new instance of the 
            <see cref="T:SupportClass.EquatableList`1"/> class that is empty and has the 
            default initial capacity.</summary>
        </member>
        <member name="M:SupportClass.EquatableList`1.#ctor(System.Collections.Generic.IEnumerable{`0})">
            <summary>Initializes a new instance of the <see cref="T:SupportClass.EquatableList`1"/>
            class that contains elements copied from the specified collection and has
            sufficient capacity to accommodate the number of elements copied.</summary>
            <param name="collection">The collection whose elements are copied to the new list.</param>
        </member>
        <member name="M:SupportClass.EquatableList`1.#ctor(System.Int32)">
            <summary>Initializes a new instance of the <see cref="T:SupportClass.EquatableList`1"/> 
            class that is empty and has the specified initial capacity.</summary>
            <param name="capacity">The number of elements that the new list can initially store.</param>
        </member>
        <member name="M:SupportClass.EquatableList`1.AddRange(System.Collections.ICollection)">
            <summary>Adds a range of objects represented by the <see cref="T:System.Collections.ICollection"/>
            implementation.</summary>
            <param name="c">The <see cref="T:System.Collections.ICollection"/>
            implementation to add to this list.</param>
        </member>
        <member name="M:SupportClass.EquatableList`1.EnumerableCountsEqual(System.Collections.Generic.IEnumerable{`0},System.Collections.Generic.IEnumerable{`0})">
            <summary>Compares the counts of two <see cref="T:System.Collections.Generic.IEnumerable`1"/>
            implementations.</summary>
            <remarks>This uses a trick in LINQ, sniffing types for implementations
            of interfaces that might supply shortcuts when trying to make comparisons.
            In this case, that is the <see cref="T:System.Collections.Generic.ICollection`1"/> and
            <see cref="T:System.Collections.ICollection"/> interfaces, either of which can provide a count
            which can be used in determining the equality of sequences (if they don't have
            the same count, then they can't be equal).</remarks>
            <param name="x">The <see cref="T:System.Collections.Generic.IEnumerable`1"/> from the left hand side of the
            comparison to check the count of.</param>
            <param name="y">The <see cref="T:System.Collections.Generic.IEnumerable`1"/> from the right hand side of the
            comparison to check the count of.</param>
            <returns>Null if the result is indeterminate.  This occurs when either <paramref name="x"/>
            or <paramref name="y"/> doesn't implement <see cref="T:System.Collections.ICollection"/> or <see cref="T:System.Collections.Generic.ICollection`1"/>.
            Otherwise, it will get the count from each and return true if they are equal, false otherwise.</returns>
        </member>
        <member name="M:SupportClass.EquatableList`1.Equals(System.Collections.Generic.IEnumerable{`0},System.Collections.Generic.IEnumerable{`0})">
            <summary>Compares the contents of a <see cref="T:System.Collections.Generic.IEnumerable`1"/>
            implementation to another one to determine equality.</summary>
            <remarks>The algorithm walks both <see cref="T:System.Collections.Generic.IEnumerable`1"/>
            implementations in parallel and compares corresponding items. The
            sequences are equal only if every pair of corresponding items is
            equal and both sequences contain the same number of items.</remarks>
            <param name="x">The <see cref="T:System.Collections.Generic.IEnumerable`1"/> implementation
            that is considered the left hand side.</param>
            <param name="y">The <see cref="T:System.Collections.Generic.IEnumerable`1"/> implementation
            that is considered the right hand side.</param>
            <returns>True if the items are equal, false otherwise.</returns>
        </member>
        <member name="M:SupportClass.EquatableList`1.Equals(System.Collections.Generic.IEnumerable{`0})">
            <summary>Compares this sequence to another <see cref="T:System.Collections.Generic.IEnumerable`1"/>
            implementation, returning true if they are equal, false otherwise.</summary>
            <param name="other">The other <see cref="T:System.Collections.Generic.IEnumerable`1"/> implementation
            to compare against.</param>
            <returns>True if the sequence in <paramref name="other"/> 
            is the same as this one.</returns>
        </member>
        <member name="M:SupportClass.EquatableList`1.Equals(System.Object)">
            <summary>Compares this object for equality against other.</summary>
            <param name="obj">The other object to compare this object against.</param>
            <returns>True if this object and <paramref name="obj"/> are equal, false
            otherwise.</returns>
        </member>
        <member name="M:SupportClass.EquatableList`1.GetHashCode">
            <summary>Gets the hash code for the list.</summary>
            <returns>The hash code value.</returns>
        </member>
        <member name="M:SupportClass.EquatableList`1.GetHashCode(System.Collections.Generic.IEnumerable{`0})">
            <summary>Gets the hash code for the list.</summary>
            <param name="source">The <see cref="T:System.Collections.Generic.IEnumerable`1"/>
            implementation which will have all the contents hashed.</param>
            <returns>The hash code value.</returns>
        </member>
        <member name="M:SupportClass.EquatableList`1.Clone">
            <summary>Clones the <see cref="T:SupportClass.EquatableList`1"/>.</summary>
            <remarks>This is a shallow clone.</remarks>
            <returns>A new shallow clone of this
            <see cref="T:SupportClass.EquatableList`1"/>.</returns>
        </member>
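EquatableList brings content-based equality and hashing to .NET lists, which java.util.List provides out of the box. A Java sketch of the list equality contract being ported (raw list types are used for brevity):

```java
import java.util.Arrays;
import java.util.List;

public class ListEqualsDemo {
    public static void main(String[] args) {
        List a = Arrays.asList("lucene", "net");
        List b = Arrays.asList("lucene", "net");
        // Lists compare by content, not by reference.
        System.out.println(a.equals(b));                  // true
        // Equal lists are required to produce equal hash codes.
        System.out.println(a.hashCode() == b.hashCode()); // true
    }
}
```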
        <member name="T:SupportClass.AttributeImplItem">
            <summary>
            A simple wrapper to allow for the use of the GeneralKeyedCollection.  The
            wrapper is required as there can be several keys for an object depending
            on how many interfaces it implements.
            </summary>
        </member>
        <member name="T:SupportClass.OS">
            <summary>
            Provides platform information.
            </summary>
        </member>
        <member name="P:SupportClass.OS.IsUnix">
            <summary>
            Whether the code is running on a Unix platform.
            </summary>
        </member>
        <member name="P:SupportClass.OS.IsWindows">
            <summary>
            Whether the code is running on a supported Windows platform.
            </summary>
        </member>
        <member name="T:SupportClass.CloseableThreadLocalProfiler">
            <summary>
            For debugging purposes.
            </summary>
        </member>
        <member name="T:Lucene.Net.Index.CorruptIndexException">
            <summary> This exception is thrown when Lucene detects
            an inconsistency in the index.
            </summary>
        </member>
        <member name="T:Lucene.Net.Index.DefaultSkipListReader">
            <summary> Implements the skip list reader for the default posting list format
            that stores positions and payloads.
            
            </summary>
        </member>
        <member name="T:Lucene.Net.Index.MultiLevelSkipListReader">
            <summary> This abstract class reads skip lists with multiple levels.
            
            See <see cref="T:Lucene.Net.Index.MultiLevelSkipListWriter"/> for the information about the encoding 
            of the multi level skip lists. 
            
            Subclasses must implement the abstract method <see cref="M:Lucene.Net.Index.MultiLevelSkipListReader.ReadSkipData(System.Int32,Lucene.Net.Store.IndexInput)"/>
            which defines the actual format of the skip data.
            </summary>
        </member>
        <member name="M:Lucene.Net.Index.MultiLevelSkipListReader.GetDoc">
            <summary>Returns the id of the doc to which the last call of <see cref="M:Lucene.Net.Index.MultiLevelSkipListReader.SkipTo(System.Int32)"/>
            has skipped.  
            </summary>
        </member>
        <member name="M:Lucene.Net.Index.MultiLevelSkipListReader.SkipTo(System.Int32)">
            <summary>Skips entries to the first beyond the current whose document number is
            greater than or equal to <i>target</i>. Returns the current doc count. 
            </summary>
        </member>
        <member name="M:Lucene.Net.Index.MultiLevelSkipListReader.SeekChild(System.Int32)">
            <summary>Seeks the skip entry on the given level </summary>
        </member>
        <member name="M:Lucene.Net.Index.MultiLevelSkipListReader.Init(System.Int64,System.Int32)">
            <summary>Initializes the reader </summary>
        </member>
        <member name="M:Lucene.Net.Index.MultiLevelSkipListReader.LoadSkipLevels">
            <summary>Loads the skip levels  </summary>
        </member>
        <member name="M:Lucene.Net.Index.MultiLevelSkipListReader.ReadSkipData(System.Int32,Lucene.Net.Store.IndexInput)">
            <summary> Subclasses must implement the actual skip data encoding in this method.
            
            </summary>
            <param name="level">the level skip data shall be read from
            </param>
            <param name="skipStream">the skip stream to read from
            </param>
        </member>
        <member name="M:Lucene.Net.Index.MultiLevelSkipListReader.SetLastSkipData(System.Int32)">
            <summary>Copies the values of the last read skip entry on this level </summary>
        </member>
        <member name="T:Lucene.Net.Index.MultiLevelSkipListReader.SkipBuffer">
            <summary>used to buffer the top skip levels </summary>
        </member>
        <member name="M:Lucene.Net.Index.DefaultSkipListReader.GetFreqPointer">
            <summary>Returns the freq pointer of the doc to which the last call of 
            <see cref="M:Lucene.Net.Index.MultiLevelSkipListReader.SkipTo(System.Int32)"/> has skipped.  
            </summary>
        </member>
        <member name="M:Lucene.Net.Index.DefaultSkipListReader.GetProxPointer">
            <summary>Returns the prox pointer of the doc to which the last call of 
            <see cref="M:Lucene.Net.Index.MultiLevelSkipListReader.SkipTo(System.Int32)"/> has skipped.  
            </summary>
        </member>
        <member name="M:Lucene.Net.Index.DefaultSkipListReader.GetPayloadLength">
            <summary>Returns the payload length of the payload stored just before 
            the doc to which the last call of <see cref="M:Lucene.Net.Index.MultiLevelSkipListReader.SkipTo(System.Int32)"/> 
            has skipped.  
            </summary>
        </member>
        <member name="T:Lucene.Net.Index.DefaultSkipListWriter">
            <summary> Implements the skip list writer for the default posting list format
            that stores positions and payloads.
            
            </summary>
        </member>
        <member name="T:Lucene.Net.Index.MultiLevelSkipListWriter">
            <summary> This abstract class writes skip lists with multiple levels.
            
            Example for skipInterval = 3:
            c            (skip level 2)
            c                 c                 c            (skip level 1) 
            x     x     x     x     x     x     x     x     x     x      (skip level 0)
            d d d d d d d d d d d d d d d d d d d d d d d d d d d d d d d d  (posting list)
            3     6     9     12    15    18    21    24    27    30     (df)
            
            d - document
            x - skip data
            c - skip data with child pointer
            
            Skip level i contains every skipInterval-th entry from skip level i-1.
            Therefore the number of entries on level i is: floor(df / (skipInterval ^ (i + 1))).
            
            Each skip entry on a level i>0 contains a pointer to the corresponding skip entry in list i-1.
            This guarantees a logarithmic number of skips to find the target document.
            
            While this class takes care of writing the different skip levels,
            subclasses must define the actual format of the skip data.
            
            </summary>
        </member>
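The entry-count formula in the summary above can be checked with a small standalone sketch (plain Java, no Lucene dependency; the class and method names are illustrative only, and df = 32 mirrors the skipInterval = 3 diagram):

```java
public class SkipLevels {
    // Number of entries on skip level i for a posting list with df documents,
    // per the formula floor(df / (skipInterval ^ (i + 1))) in the docs above.
    static long entriesOnLevel(long df, long skipInterval, int level) {
        long divisor = 1;
        for (int k = 0; k <= level; k++) {
            divisor *= skipInterval;   // skipInterval ^ (level + 1)
        }
        return df / divisor;           // integer division == floor for non-negative values
    }

    public static void main(String[] args) {
        // skipInterval = 3 over a 32-document posting list, as in the diagram:
        System.out.println(entriesOnLevel(32, 3, 0)); // 10 entries (skip level 0)
        System.out.println(entriesOnLevel(32, 3, 1)); // 3 entries  (skip level 1)
        System.out.println(entriesOnLevel(32, 3, 2)); // 1 entry    (skip level 2)
    }
}
```

Each higher level shrinks by a factor of skipInterval, which is what bounds a SkipTo(target) call to a logarithmic number of skip reads.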
        <member name="M:Lucene.Net.Index.MultiLevelSkipListWriter.WriteSkipData(System.Int32,Lucene.Net.Store.IndexOutput)">
            <summary> Subclasses must implement the actual skip data encoding in this method.
            
            </summary>
            <param name="level">the level skip data shall be writting for
            </param>
            <param name="skipBuffer">the skip buffer to write to
            </param>
        </member>
        <member name="M:Lucene.Net.Index.MultiLevelSkipListWriter.BufferSkip(System.Int32)">
            <summary> Writes the current skip data to the buffers. The current document frequency determines
            the maximum level the skip data is written to. 
            
            </summary>
            <param name="df">the current document frequency 
            </param>
            <throws>  IOException </throws>
        </member>
        <member name="M:Lucene.Net.Index.MultiLevelSkipListWriter.WriteSkip(Lucene.Net.Store.IndexOutput)">
            <summary> Writes the buffered skip lists to the given output.
            
            </summary>
            <param name="output">the IndexOutput the skip lists shall be written to 
            </param>
            <returns> a pointer to the start of the skip list
            </returns>
        </member>
        <member name="M:Lucene.Net.Index.DefaultSkipListWriter.SetSkipData(System.Int32,System.Boolean,System.Int32)">
            <summary> Sets the values for the current skip data. </summary>
        </member>
        <member name="T:Lucene.Net.Index.DirectoryOwningReader">
            <summary> This class keeps track of closing the underlying directory. It is used to wrap
            DirectoryReaders, that are created using a String/File parameter
            in IndexReader.open() with FSDirectory.getDirectory().
            </summary>
            <deprecated> This helper class is removed with all String/File
            IndexReader.open() methods in Lucene 3.0
            </deprecated>
        </member>
        <member name="T:Lucene.Net.Index.FilterIndexReader">
            <summary>A <c>FilterIndexReader</c> contains another IndexReader, which it
            uses as its basic source of data, possibly transforming the data along the
            way or providing additional functionality. The class
            <c>FilterIndexReader</c> itself simply implements all abstract methods
            of <c>IndexReader</c> with versions that pass all requests to the
            contained index reader. Subclasses of <c>FilterIndexReader</c> may
            further override some of these methods and may also provide additional
            methods and fields.
            </summary>
        </member>
        <member name="T:Lucene.Net.Index.IndexReader">
             <summary>IndexReader is an abstract class, providing an interface for accessing an
             index.  Search of an index is done entirely through this abstract interface,
             so that any subclass which implements it is searchable.
             <p/> Concrete subclasses of IndexReader are usually constructed with a call to
             one of the static <c>open()</c> methods, e.g. <see cref="M:Lucene.Net.Index.IndexReader.Open(System.String,System.Boolean)"/>
            .
             <p/> For efficiency, in this API documents are often referred to via
             <i>document numbers</i>, non-negative integers which each name a unique
             document in the index.  These document numbers are ephemeral--they may change
             as documents are added to and deleted from an index.  Clients should thus not
             rely on a given document having the same number between sessions.
             <p/> An IndexReader can be opened on a directory for which an IndexWriter is
             already opened, but it cannot then be used to delete documents from the index.
             <p/>
             <b>NOTE</b>: for backwards API compatibility, several methods are not listed 
             as abstract, but have no useful implementations in this base class and 
             instead always throw UnsupportedOperationException.  Subclasses are 
             strongly encouraged to override these methods, but in many cases may not 
             need to.
             <p/>
             <p/>
             <b>NOTE</b>: as of 2.4, it's possible to open a read-only
             IndexReader using one of the static open methods that
             accepts the boolean readOnly parameter.  Such a reader has
             better concurrency as it's not necessary to synchronize on
             the isDeleted method.  Currently the default for readOnly
             is false, meaning if not specified you will get a
             read/write IndexReader.  But in 3.0 this default will
             change to true, meaning you must explicitly specify false
             if you want to make changes with the resulting IndexReader.
             <p/>
             <a name="thread-safety"></a><p/><b>NOTE</b>: <see cref="T:Lucene.Net.Index.IndexReader"/>
             instances are completely thread
             safe, meaning multiple threads can call any of its methods,
             concurrently.  If your application requires external
             synchronization, you should <b>not</b> synchronize on the
             <c>IndexReader</c> instance; use your own
             (non-Lucene) objects instead.
             </summary>
             <version>  $Id: IndexReader.java 826049 2009-10-16 19:28:55Z mikemccand $
             </version>
        </member>
        <member name="M:Lucene.Net.Index.IndexReader.GetRefCount">
            <summary>Expert: returns the current refCount for this reader </summary>
        </member>
        <member name="M:Lucene.Net.Index.IndexReader.IncRef">
            <summary> Expert: increments the refCount of this IndexReader
            instance.  RefCounts are used to determine when a
            reader can be closed safely, i.e. as soon as there are
            no more references.  Be sure to always call a
            corresponding <see cref="M:Lucene.Net.Index.IndexReader.DecRef"/>, in a finally clause;
            otherwise the reader may never be closed.  Note that
            <see cref="M:Lucene.Net.Index.IndexReader.Close"/> simply calls decRef(), which means that
            the IndexReader will not really be closed until <see cref="M:Lucene.Net.Index.IndexReader.DecRef"/>
            has been called for all outstanding
            references.
            
            </summary>
            <seealso cref="M:Lucene.Net.Index.IndexReader.DecRef">
            </seealso>
        </member>
        <member name="M:Lucene.Net.Index.IndexReader.DecRef">
            <summary> Expert: decreases the refCount of this IndexReader
            instance.  If the refCount drops to 0, then pending
            changes (if any) are committed to the index and this
            reader is closed.
            
            </summary>
            <throws>  IOException in case an IOException occurs in commit() or doClose() </throws>
            <seealso cref="M:Lucene.Net.Index.IndexReader.IncRef">
            </seealso>
        </member>
        <member name="F:Lucene.Net.Index.IndexReader.directory">
            <deprecated> will be deleted when IndexReader(Directory) is deleted
            </deprecated>
            <seealso cref="M:Lucene.Net.Index.IndexReader.Directory">
            </seealso>
        </member>
        <member name="M:Lucene.Net.Index.IndexReader.#ctor(Lucene.Net.Store.Directory)">
            <summary> Legacy Constructor for backwards compatibility.
            
            <p/>
            This Constructor should not be used, it exists for backwards 
            compatibility only to support legacy subclasses that did not "own" 
            a specific directory, but needed to specify something to be returned 
            by the directory() method.  Future subclasses should delegate to the 
            no arg constructor and implement the directory() method as appropriate.
            
            </summary>
            <param name="directory">Directory to be returned by the directory() method
            </param>
            <seealso cref="M:Lucene.Net.Index.IndexReader.Directory">
            </seealso>
            <deprecated> - use IndexReader()
            </deprecated>
        </member>
        <member name="M:Lucene.Net.Index.IndexReader.EnsureOpen">
            <throws>  AlreadyClosedException if this IndexReader is closed </throws>
        </member>
        <member name="M:Lucene.Net.Index.IndexReader.Open(System.String)">
            <summary>Returns a read/write IndexReader reading the index in an FSDirectory in the named
            path.
            </summary>
            <throws>  CorruptIndexException if the index is corrupt </throws>
            <throws>  IOException if there is a low-level IO error </throws>
            <deprecated> Use <see cref="M:Lucene.Net.Index.IndexReader.Open(Lucene.Net.Store.Directory,System.Boolean)"/> instead. 
            This method will be removed in the 3.0 release.
            
            </deprecated>
            <param name="path">the path to the index directory 
            </param>
        </member>
        <member name="M:Lucene.Net.Index.IndexReader.Open(System.String,System.Boolean)">
            <summary>Returns an IndexReader reading the index in an
            FSDirectory in the named path.  You should pass
            readOnly=true, since it gives much better concurrent
            performance, unless you intend to do write operations
            (delete documents or change norms) with the reader.
            </summary>
            <throws>  CorruptIndexException if the index is corrupt </throws>
            <throws>  IOException if there is a low-level IO error </throws>
            <param name="path">the path to the index directory
            </param>
            <param name="readOnly">true if this should be a readOnly
            reader
            </param>
            <deprecated> Use <see cref="M:Lucene.Net.Index.IndexReader.Open(Lucene.Net.Store.Directory,System.Boolean)"/> instead.
            This method will be removed in the 3.0 release.
            
            </deprecated>
        </member>
        <member name="M:Lucene.Net.Index.IndexReader.Open(System.IO.FileInfo)">
            <summary>Returns a read/write IndexReader reading the index in an FSDirectory in the named
            path.
            </summary>
            <param name="path">the path to the index directory
            </param>
            <throws>  CorruptIndexException if the index is corrupt </throws>
            <throws>  IOException if there is a low-level IO error </throws>
            <deprecated> Use <see cref="M:Lucene.Net.Index.IndexReader.Open(Lucene.Net.Store.Directory,System.Boolean)"/> instead.
            This method will be removed in the 3.0 release.
            
            </deprecated>
        </member>
        <member name="M:Lucene.Net.Index.IndexReader.Open(System.IO.FileInfo,System.Boolean)">
            <summary>Returns an IndexReader reading the index in an
            FSDirectory in the named path.  You should pass
            readOnly=true, since it gives much better concurrent
            performance, unless you intend to do write operations
            (delete documents or change norms) with the reader.
            </summary>
            <throws>  CorruptIndexException if the index is corrupt </throws>
            <throws>  IOException if there is a low-level IO error </throws>
            <param name="path">the path to the index directory
            </param>
            <param name="readOnly">true if this should be a readOnly
            reader
            </param>
            <deprecated> Use <see cref="M:Lucene.Net.Index.IndexReader.Open(Lucene.Net.Store.Directory,System.Boolean)"/> instead.
            This method will be removed in the 3.0 release.
            
            </deprecated>
        </member>
        <member name="M:Lucene.Net.Index.IndexReader.Open(Lucene.Net.Store.Directory)">
            <summary>Returns a read/write IndexReader reading the index in
            the given Directory.
            </summary>
            <param name="directory">the index directory
            </param>
            <throws>  CorruptIndexException if the index is corrupt </throws>
            <throws>  IOException if there is a low-level IO error </throws>
            <deprecated> Use <see cref="M:Lucene.Net.Index.IndexReader.Open(Lucene.Net.Store.Directory,System.Boolean)"/> instead
            This method will be removed in the 3.0 release.
            
            </deprecated>
        </member>
        <member name="M:Lucene.Net.Index.IndexReader.Open(Lucene.Net.Store.Directory,System.Boolean)">
            <summary>Returns an IndexReader reading the index in the given
            Directory.  You should pass readOnly=true, since it
            gives much better concurrent performance, unless you
            intend to do write operations (delete documents or
            change norms) with the reader.
            </summary>
            <param name="directory">the index directory
            </param>
            <param name="readOnly">true if no changes (deletions, norms) will be made with this IndexReader
            </param>
            <throws>  CorruptIndexException if the index is corrupt </throws>
            <throws>  IOException if there is a low-level IO error </throws>
        </member>
        <member name="M:Lucene.Net.Index.IndexReader.Open(Lucene.Net.Index.IndexCommit)">
            <summary>Expert: returns a read/write IndexReader reading the index in the given
            <see cref="T:Lucene.Net.Index.IndexCommit"/>.
            </summary>
            <param name="commit">the commit point to open
            </param>
            <throws>  CorruptIndexException if the index is corrupt </throws>
            <deprecated> Use <see cref="M:Lucene.Net.Index.IndexReader.Open(Lucene.Net.Index.IndexCommit,System.Boolean)"/> instead.
            This method will be removed in the 3.0 release.
            
            </deprecated>
            <throws>  IOException if there is a low-level IO error </throws>
        </member>
        <member name="M:Lucene.Net.Index.IndexReader.Open(Lucene.Net.Index.IndexCommit,System.Boolean)">
            <summary>Expert: returns an IndexReader reading the index in the given
            <see cref="T:Lucene.Net.Index.IndexCommit"/>.  You should pass readOnly=true, since it
            gives much better concurrent performance, unless you
            intend to do write operations (delete documents or
            change norms) with the reader.
            </summary>
            <param name="commit">the commit point to open
            </param>
            <param name="readOnly">true if no changes (deletions, norms) will be made with this IndexReader
            </param>
            <throws>  CorruptIndexException if the index is corrupt </throws>
            <throws>  IOException if there is a low-level IO error </throws>
        </member>
        <member name="M:Lucene.Net.Index.IndexReader.Open(Lucene.Net.Store.Directory,Lucene.Net.Index.IndexDeletionPolicy)">
            <summary>Expert: returns a read/write IndexReader reading the index in the given
            Directory, with a custom <see cref="T:Lucene.Net.Index.IndexDeletionPolicy"/>.
            </summary>
            <param name="directory">the index directory
            </param>
            <param name="deletionPolicy">a custom deletion policy (only used
            if you use this reader to perform deletes or to set
            norms); see <see cref="T:Lucene.Net.Index.IndexWriter"/> for details.
            </param>
            <deprecated> Use <see cref="M:Lucene.Net.Index.IndexReader.Open(Lucene.Net.Store.Directory,Lucene.Net.Index.IndexDeletionPolicy,System.Boolean)"/> instead.
            This method will be removed in the 3.0 release.
            
            </deprecated>
            <throws>  CorruptIndexException if the index is corrupt </throws>
            <throws>  IOException if there is a low-level IO error </throws>
        </member>
        <member name="M:Lucene.Net.Index.IndexReader.Open(Lucene.Net.Store.Directory,Lucene.Net.Index.IndexDeletionPolicy,System.Boolean)">
             <summary>Expert: returns an IndexReader reading the index in
             the given Directory, with a custom <see cref="T:Lucene.Net.Index.IndexDeletionPolicy"/>
            .  You should pass readOnly=true,
             since it gives much better concurrent performance,
             unless you intend to do write operations (delete
             documents or change norms) with the reader.
             </summary>
             <param name="directory">the index directory
             </param>
             <param name="deletionPolicy">a custom deletion policy (only used
             if you use this reader to perform deletes or to set
             norms); see <see cref="T:Lucene.Net.Index.IndexWriter"/> for details.
             </param>
             <param name="readOnly">true if no changes (deletions, norms) will be made with this IndexReader
             </param>
             <throws>  CorruptIndexException if the index is corrupt </throws>
             <throws>  IOException if there is a low-level IO error </throws>
        </member>
        <member name="M:Lucene.Net.Index.IndexReader.Open(Lucene.Net.Store.Directory,Lucene.Net.Index.IndexDeletionPolicy,System.Boolean,System.Int32)">
             <summary>Expert: returns an IndexReader reading the index in
             the given Directory, with a custom <see cref="T:Lucene.Net.Index.IndexDeletionPolicy"/>
            .  You should pass readOnly=true,
             since it gives much better concurrent performance,
             unless you intend to do write operations (delete
             documents or change norms) with the reader.
             </summary>
             <param name="directory">the index directory
             </param>
             <param name="deletionPolicy">a custom deletion policy (only used
             if you use this reader to perform deletes or to set
             norms); see <see cref="T:Lucene.Net.Index.IndexWriter"/> for details.
             </param>
             <param name="readOnly">true if no changes (deletions, norms) will be made with this IndexReader
             </param>
             <param name="termInfosIndexDivisor">Subsamples which indexed
             terms are loaded into RAM. This has the same effect as <see cref="M:Lucene.Net.Index.IndexWriter.SetTermIndexInterval(System.Int32)"/>
             except that setting
             must be done at indexing time while this setting can be
             set per reader.  When set to N, then one in every
             N*termIndexInterval terms in the index is loaded into
             memory.  By setting this to a value &gt; 1 you can reduce
             memory usage, at the expense of higher latency when
             loading a TermInfo.  The default value is 1.  Set this
             to -1 to skip loading the terms index entirely.
             </param>
             <throws>  CorruptIndexException if the index is corrupt </throws>
             <throws>  IOException if there is a low-level IO error </throws>
        </member>
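The memory trade-off described for termInfosIndexDivisor reduces to simple arithmetic: roughly one indexed term per (termIndexInterval * divisor) terms is held in RAM. A standalone sketch (plain Java; the term count is hypothetical, though 128 is the usual termIndexInterval default):

```java
public class TermIndexDivisor {
    // Approximate number of indexed terms held in RAM: one term in every
    // (termIndexInterval * divisor) terms is loaded, per the docs above.
    static long termsInRam(long totalTerms, int termIndexInterval, int divisor) {
        return totalTerms / ((long) termIndexInterval * divisor);
    }

    public static void main(String[] args) {
        long total = 10_000_000L; // hypothetical number of terms in the index
        System.out.println(termsInRam(total, 128, 1)); // divisor 1 (default): 78125
        System.out.println(termsInRam(total, 128, 4)); // divisor 4: 19531, ~4x less RAM
    }
}
```

Raising the divisor cuts RAM roughly proportionally, at the cost of higher latency when loading a TermInfo; a divisor of -1 skips loading the terms index entirely.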
        <member name="M:Lucene.Net.Index.IndexReader.Open(Lucene.Net.Index.IndexCommit,Lucene.Net.Index.IndexDeletionPolicy)">
            <summary>Expert: returns a read/write IndexReader reading the index in the given
            Directory, using a specific commit and with a custom
            <see cref="T:Lucene.Net.Index.IndexDeletionPolicy"/>.
            </summary>
            <param name="commit">the specific <see cref="T:Lucene.Net.Index.IndexCommit"/> to open;
            see <see cref="M:Lucene.Net.Index.IndexReader.ListCommits(Lucene.Net.Store.Directory)"/> to list all commits
            in a directory
            </param>
            <param name="deletionPolicy">a custom deletion policy (only used
            if you use this reader to perform deletes or to set
            norms); see <see cref="T:Lucene.Net.Index.IndexWriter"/> for details.
            </param>
            <deprecated> Use <see cref="M:Lucene.Net.Index.IndexReader.Open(Lucene.Net.Index.IndexCommit,Lucene.Net.Index.IndexDeletionPolicy,System.Boolean)"/> instead.
            This method will be removed in the 3.0 release.
            
            </deprecated>
            <throws>  CorruptIndexException if the index is corrupt </throws>
            <throws>  IOException if there is a low-level IO error </throws>
        </member>
        <member name="M:Lucene.Net.Index.IndexReader.Open(Lucene.Net.Index.IndexCommit,Lucene.Net.Index.IndexDeletionPolicy,System.Boolean)">
            <summary>Expert: returns an IndexReader reading the index in
            the given Directory, using a specific commit and with
            a custom <see cref="T:Lucene.Net.Index.IndexDeletionPolicy"/>.  You should pass
            readOnly=true, since it gives much better concurrent
            performance, unless you intend to do write operations
            (delete documents or change norms) with the reader.
            </summary>
            <param name="commit">the specific <see cref="T:Lucene.Net.Index.IndexCommit"/> to open;
            see <see cref="M:Lucene.Net.Index.IndexReader.ListCommits(Lucene.Net.Store.Directory)"/> to list all commits
            in a directory
            </param>
            <param name="deletionPolicy">a custom deletion policy (only used
            if you use this reader to perform deletes or to set
            norms); see <see cref="T:Lucene.Net.Index.IndexWriter"/> for details.
            </param>
            <param name="readOnly">true if no changes (deletions, norms) will be made with this IndexReader
            </param>
            <throws>  CorruptIndexException if the index is corrupt </throws>
            <throws>  IOException if there is a low-level IO error </throws>
        </member>
        <member name="M:Lucene.Net.Index.IndexReader.Open(Lucene.Net.Index.IndexCommit,Lucene.Net.Index.IndexDeletionPolicy,System.Boolean,System.Int32)">
            <summary>Expert: returns an IndexReader reading the index in
            the given Directory, using a specific commit and with
            a custom <see cref="T:Lucene.Net.Index.IndexDeletionPolicy"/>.  You should pass
            readOnly=true, since it gives much better concurrent
            performance, unless you intend to do write operations
            (delete documents or change norms) with the reader.
            </summary>
            <param name="commit">the specific <see cref="T:Lucene.Net.Index.IndexCommit"/> to open;
            see <see cref="M:Lucene.Net.Index.IndexReader.ListCommits(Lucene.Net.Store.Directory)"/> to list all commits
            in a directory
            </param>
            <param name="deletionPolicy">a custom deletion policy (only used
            if you use this reader to perform deletes or to set
            norms); see <see cref="T:Lucene.Net.Index.IndexWriter"/> for details.
            </param>
            <param name="readOnly">true if no changes (deletions, norms) will be made with this IndexReader
            </param>
            <param name="termInfosIndexDivisor">Subsambles which indexed
            terms are loaded into RAM. This has the same effect as <see cref="M:Lucene.Net.Index.IndexWriter.SetTermIndexInterval(System.Int32)"/>
            except that setting
            must be done at indexing time while this setting can be
            set per reader.  When set to N, then one in every
            N*termIndexInterval terms in the index is loaded into
            memory.  By setting this to a value &gt; 1 you can reduce
            memory usage, at the expense of higher latency when
            loading a TermInfo.  The default value is 1.  Set this
            to -1 to skip loading the terms index entirely.
            </param>
            <throws>  CorruptIndexException if the index is corrupt </throws>
            <throws>  IOException if there is a low-level IO error </throws>
        </member>
        <member name="M:Lucene.Net.Index.IndexReader.Reopen">
            <summary> Refreshes an IndexReader if the index has changed since this instance 
            was (re)opened. 
            <p/>
            Opening an IndexReader is an expensive operation. This method can be used
            to refresh an existing IndexReader to reduce these costs. This method 
            tries to only load segments that have changed or were created after the 
            IndexReader was (re)opened.
            <p/>
            If the index has not changed since this instance was (re)opened, then this
            call is a NOOP and returns this instance. Otherwise, a new instance is 
            returned. The old instance is <b>not</b> closed and remains usable.<br/>
            <p/>   
            If the reader is reopened, it's safe to make changes
            (deletions, norms) with the new reader even though the
            two instances share resources internally.  All shared
            mutable state obeys "copy on write" semantics to ensure
            the changes are not seen by other readers.
            <p/>
            You can determine whether a reader was actually reopened by comparing the
            old instance with the instance returned by this method: 
            <code>
            IndexReader reader = ... 
            ...
            IndexReader newReader = reader.reopen();
            if (newReader != reader) {
            ...     // reader was reopened
            reader.close(); 
            }
            reader = newReader;
            ...
            </code>
            
            Be sure to synchronize that code so that other threads,
            if present, can never use reader after it has been
            closed and before it's switched to newReader.
            
            <p/><b>NOTE</b>: If this reader is a near real-time
            reader (obtained from <see cref="M:Lucene.Net.Index.IndexWriter.GetReader"/>),
            reopen() will simply call writer.getReader() again for
            you, though this may change in the future.
            
            </summary>
            <throws>  CorruptIndexException if the index is corrupt </throws>
            <throws>  IOException if there is a low-level IO error </throws>
        </member>
        <member name="M:Lucene.Net.Index.IndexReader.Reopen(System.Boolean)">
            <summary>Just like <see cref="M:Lucene.Net.Index.IndexReader.Reopen"/>, except you can
            change the readOnly setting of the reader.  If the index is
            unchanged but readOnly is different then a new reader
            will be returned. 
            </summary>
        </member>
        <member name="M:Lucene.Net.Index.IndexReader.Reopen(Lucene.Net.Index.IndexCommit)">
            <summary>Expert: reopen this reader on a specific commit point.
            This always returns a readOnly reader.  If the
            specified commit point matches what this reader is
            already on, and this reader is already readOnly, then
            this same instance is returned; if it is not already
            readOnly, a readOnly clone is returned. 
            </summary>
        </member>
        <member name="M:Lucene.Net.Index.IndexReader.Clone">
            <summary> Efficiently clones the IndexReader (sharing most
            internal state).
            <p/>
            On cloning a reader with pending changes (deletions,
            norms), the original reader transfers its write lock to
            the cloned reader.  This means only the cloned reader
            may make further changes to the index, and commit the
            changes to the index on close, but the old reader still
            reflects all changes made up until it was cloned.
            <p/>
            Like <see cref="M:Lucene.Net.Index.IndexReader.Reopen"/>, it's safe to make changes to
            either the original or the cloned reader: all shared
            mutable state obeys "copy on write" semantics to ensure
            the changes are not seen by other readers.
            <p/>
            </summary>
            <throws>  CorruptIndexException if the index is corrupt </throws>
            <throws>  IOException if there is a low-level IO error </throws>
        </member>
        <member name="M:Lucene.Net.Index.IndexReader.Clone(System.Boolean)">
            <summary> Clones the IndexReader and optionally changes readOnly.  A readOnly 
            reader cannot be cloned into a writeable reader.  
            </summary>
            <throws>  CorruptIndexException if the index is corrupt </throws>
            <throws>  IOException if there is a low-level IO error </throws>
        </member>
        <member name="M:Lucene.Net.Index.IndexReader.Directory">
            <summary> Returns the directory associated with this index.  The default 
            implementation returns the directory specified by subclasses when 
            delegating to the IndexReader(Directory) constructor, or throws an 
            UnsupportedOperationException if one was not specified.
            </summary>
            <throws>  UnsupportedOperationException if no directory </throws>
        </member>
        <member name="M:Lucene.Net.Index.IndexReader.LastModified(System.String)">
            <summary> Returns the time the index in the named directory was last modified.
            Do not use this to check whether the reader is still up-to-date, use
            <see cref="M:Lucene.Net.Index.IndexReader.IsCurrent"/> instead. 
            </summary>
            <throws>  CorruptIndexException if the index is corrupt </throws>
            <throws>  IOException if there is a low-level IO error </throws>
            <deprecated> Use <see cref="M:Lucene.Net.Index.IndexReader.LastModified(Lucene.Net.Store.Directory)"/> instead.
            This method will be removed in the 3.0 release.
            </deprecated>
        </member>
        <member name="M:Lucene.Net.Index.IndexReader.LastModified(System.IO.FileInfo)">
            <summary> Returns the time the index in the named directory was last modified. 
            Do not use this to check whether the reader is still up-to-date, use
            <see cref="M:Lucene.Net.Index.IndexReader.IsCurrent"/> instead. 
            </summary>
            <throws>  CorruptIndexException if the index is corrupt </throws>
            <throws>  IOException if there is a low-level IO error </throws>
            <deprecated> Use <see cref="M:Lucene.Net.Index.IndexReader.LastModified(Lucene.Net.Store.Directory)"/> instead.
            This method will be removed in the 3.0 release.
            
            </deprecated>
        </member>
        <member name="M:Lucene.Net.Index.IndexReader.LastModified(Lucene.Net.Store.Directory)">
            <summary> Returns the time the index in the named directory was last modified. 
            Do not use this to check whether the reader is still up-to-date, use
            <see cref="M:Lucene.Net.Index.IndexReader.IsCurrent"/> instead. 
            </summary>
            <throws>  CorruptIndexException if the index is corrupt </throws>
            <throws>  IOException if there is a low-level IO error </throws>
        </member>
        <member name="M:Lucene.Net.Index.IndexReader.GetCurrentVersion(System.String)">
            <summary> Reads version number from segments files. The version number is
            initialized with a timestamp and then increased by one for each change of
            the index.
            
            </summary>
            <param name="directory">where the index resides.
            </param>
            <returns> version number.
            </returns>
            <throws>  CorruptIndexException if the index is corrupt </throws>
            <throws>  IOException if there is a low-level IO error </throws>
            <deprecated> Use <see cref="M:Lucene.Net.Index.IndexReader.GetCurrentVersion(Lucene.Net.Store.Directory)"/> instead.
            This method will be removed in the 3.0 release.
            </deprecated>
        </member>
        <member name="M:Lucene.Net.Index.IndexReader.GetCurrentVersion(System.IO.FileInfo)">
            <summary> Reads version number from segments files. The version number is
            initialized with a timestamp and then increased by one for each change of
            the index.
            
            </summary>
            <param name="directory">where the index resides.
            </param>
            <returns> version number.
            </returns>
            <throws>  CorruptIndexException if the index is corrupt </throws>
            <throws>  IOException if there is a low-level IO error </throws>
            <deprecated> Use <see cref="M:Lucene.Net.Index.IndexReader.GetCurrentVersion(Lucene.Net.Store.Directory)"/> instead.
            This method will be removed in the 3.0 release.
            </deprecated>
        </member>
        <member name="M:Lucene.Net.Index.IndexReader.GetCurrentVersion(Lucene.Net.Store.Directory)">
            <summary> Reads version number from segments files. The version number is
            initialized with a timestamp and then increased by one for each change of
            the index.
            
            </summary>
            <param name="directory">where the index resides.
            </param>
            <returns> version number.
            </returns>
            <throws>  CorruptIndexException if the index is corrupt </throws>
            <throws>  IOException if there is a low-level IO error </throws>
        </member>
        <member name="M:Lucene.Net.Index.IndexReader.GetCommitUserData(Lucene.Net.Store.Directory)">
            <summary> Reads commitUserData, previously passed to 
            <see cref="M:Lucene.Net.Index.IndexWriter.Commit(System.Collections.Generic.IDictionary{System.String,System.String})"/>,
            from current index segments file.  This will return null if 
            <see cref="M:Lucene.Net.Index.IndexWriter.Commit(System.Collections.Generic.IDictionary{System.String,System.String})"/>
            has never been called for this index.
            </summary>
            <param name="directory">where the index resides.
            </param>
            <returns> commit userData.
            </returns>
            <throws>  CorruptIndexException if the index is corrupt </throws>
            <throws>  IOException if there is a low-level IO error </throws>
            <seealso cref="M:Lucene.Net.Index.IndexReader.GetCommitUserData">
            </seealso>
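            <example>
            A minimal sketch of round-tripping commit user data (the names <c>writer</c> and <c>dir</c> are assumed to be an already-open <c>IndexWriter</c> and its <c>Directory</c>):
            <code>
            var commitData = new System.Collections.Generic.Dictionary&lt;string, string&gt;();
            commitData["indexedAt"] = System.DateTime.UtcNow.ToString("o");
            writer.Commit(commitData);
            
            // Later, without opening a full reader:
            var userData = IndexReader.GetCommitUserData(dir);
            </code>
            </example>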
        </member>
        <member name="M:Lucene.Net.Index.IndexReader.GetVersion">
            <summary> Version number when this IndexReader was opened. Not implemented in the
            IndexReader base class.
            
            <p/>
            If this reader is based on a Directory (ie, was created by calling
            <see cref="M:Lucene.Net.Index.IndexReader.Open(Lucene.Net.Store.Directory)"/>, or <see cref="M:Lucene.Net.Index.IndexReader.Reopen"/> 
            on a reader based on a Directory), then
            this method returns the version recorded in the commit that the reader
            opened. This version is advanced every time <see cref="M:Lucene.Net.Index.IndexWriter.Commit"/> is
            called.
            <p/>
            
            <p/>
            If instead this reader is a near real-time reader (ie, obtained by a call
            to <see cref="M:Lucene.Net.Index.IndexWriter.GetReader"/>, or by calling <see cref="M:Lucene.Net.Index.IndexReader.Reopen"/> on a near
            real-time reader), then this method returns the version of the last
            commit done by the writer. Note that even as further changes are made
            with the writer, the version will not change until a commit is
            completed. Thus, you should not rely on this method to determine when a
            near real-time reader should be opened. Use <see cref="M:Lucene.Net.Index.IndexReader.IsCurrent"/> instead.
            <p/>
            
            </summary>
            <throws>  UnsupportedOperationException unless overridden in subclass </throws>
        </member>
        <member name="M:Lucene.Net.Index.IndexReader.GetCommitUserData">
            <summary> Retrieve the String userData optionally passed to
            <see cref="M:Lucene.Net.Index.IndexWriter.Commit(System.Collections.Generic.IDictionary{System.String,System.String})"/>.  
            This will return null if 
            <see cref="M:Lucene.Net.Index.IndexWriter.Commit(System.Collections.Generic.IDictionary{System.String,System.String})"/>
            has never been called for this index.
            </summary>
            <seealso cref="M:Lucene.Net.Index.IndexReader.GetCommitUserData(Lucene.Net.Store.Directory)">
            </seealso>
        </member>
        <member name="M:Lucene.Net.Index.IndexReader.SetTermInfosIndexDivisor(System.Int32)">
            <summary><p/>For IndexReader implementations that use
            TermInfosReader to read terms, this sets the
            indexDivisor to subsample the number of indexed terms
            loaded into memory.  This has the same effect as <see cref="M:Lucene.Net.Index.IndexWriter.SetTermIndexInterval(System.Int32)"/>
            except that setting
            must be done at indexing time while this setting can be
            set per reader.  When set to N, then one in every
            N*termIndexInterval terms in the index is loaded into
            memory.  By setting this to a value &gt; 1 you can reduce
            memory usage, at the expense of higher latency when
            loading a TermInfo.  The default value is 1.<p/>
            
            <b>NOTE:</b> you must call this before the term
            index is loaded.  If the index is already loaded, 
            an IllegalStateException is thrown.
            </summary>
            <throws>  IllegalStateException if the term index has already been loaded into memory </throws>
            <deprecated> Please use <see cref="M:Lucene.Net.Index.IndexReader.Open(Lucene.Net.Store.Directory,Lucene.Net.Index.IndexDeletionPolicy,System.Boolean,System.Int32)"/> to specify the required TermInfos index divisor instead.
            </deprecated>
        </member>
        <member name="M:Lucene.Net.Index.IndexReader.GetTermInfosIndexDivisor">
            <summary><p/>For IndexReader implementations that use
            TermInfosReader to read terms, this returns the
            current indexDivisor as specified when the reader was
            opened.
            </summary>
        </member>
        <member name="M:Lucene.Net.Index.IndexReader.IsCurrent">
            <summary> Check whether any new changes have occurred to the index since this
            reader was opened.
            
            <p/>
            If this reader is based on a Directory (ie, was created by calling
            <see cref="M:Lucene.Net.Index.IndexReader.Open(Lucene.Net.Store.Directory)"/>, or <see cref="M:Lucene.Net.Index.IndexReader.Reopen"/> on a reader based on a Directory), then
            this method checks if any further commits (see <see cref="M:Lucene.Net.Index.IndexWriter.Commit"/>)
            have occurred in that directory.
            <p/>
            
            <p/>
            If instead this reader is a near real-time reader (ie, obtained by a call
            to <see cref="M:Lucene.Net.Index.IndexWriter.GetReader"/>, or by calling <see cref="M:Lucene.Net.Index.IndexReader.Reopen"/> on a near
            real-time reader), then this method checks if either a new commit has
            occurred, or any new uncommitted changes have taken place via the writer.
            Note that even if the writer has only performed merging, this method will
            still return false.
            <p/>
            
            <p/>
            In any event, if this returns false, you should call <see cref="M:Lucene.Net.Index.IndexReader.Reopen"/> to
            get a new reader that sees the changes.
            <p/>
            
            </summary>
            <throws>  CorruptIndexException if the index is corrupt </throws>
            <throws>  IOException if there is a low-level IO error </throws>
            <throws>  UnsupportedOperationException unless overridden in subclass </throws>
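            <example>
            A typical refresh pattern (a sketch; <c>reader</c> is an already-open reader):
            <code>
            if (!reader.IsCurrent())
            {
                IndexReader newReader = reader.Reopen();
                if (newReader != reader)
                {
                    reader.Close();
                    reader = newReader;
                }
            }
            </code>
            </example>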
        </member>
        <member name="M:Lucene.Net.Index.IndexReader.IsOptimized">
            <summary> Checks whether the index is optimized (i.e., whether it has a single segment and 
            no deletions).  Not implemented in the IndexReader base class.
            </summary>
            <returns> <c>true</c> if the index is optimized; <c>false</c> otherwise
            </returns>
            <throws>  UnsupportedOperationException unless overridden in subclass </throws>
        </member>
        <member name="M:Lucene.Net.Index.IndexReader.GetTermFreqVectors(System.Int32)">
            <summary> Return an array of term frequency vectors for the specified document.
            The array contains a vector for each vectorized field in the document.
            Each vector contains terms and frequencies for all terms in a given vectorized field.
            If no such fields existed, the method returns null. The term vectors that are
            returned may either be of type <see cref="T:Lucene.Net.Index.TermFreqVector"/>
            or of type <see cref="T:Lucene.Net.Index.TermPositionVector"/> if
            positions or offsets have been stored.
            
            </summary>
            <param name="docNumber">document for which term frequency vectors are returned
            </param>
            <returns> array of term frequency vectors. May be null if no term vectors have been
            stored for the specified document.
            </returns>
            <throws>  IOException if index cannot be accessed </throws>
            <seealso cref="T:Lucene.Net.Documents.Field.TermVector">
            </seealso>
        </member>
        <member name="M:Lucene.Net.Index.IndexReader.GetTermFreqVector(System.Int32,System.String)">
            <summary> Return a term frequency vector for the specified document and field. The
            returned vector contains terms and frequencies for the terms in
            the specified field of this document, if the field had the storeTermVector
            flag set. If term vectors had been stored with positions or offsets, a 
            <see cref="T:Lucene.Net.Index.TermPositionVector"/> is returned.
            
            </summary>
            <param name="docNumber">document for which the term frequency vector is returned
            </param>
            <param name="field">field for which the term frequency vector is returned.
            </param>
            <returns> term frequency vector. May be null if the field does not exist in the specified
            document or the term vector was not stored.
            </returns>
            <throws>  IOException if index cannot be accessed </throws>
            <seealso cref="T:Lucene.Net.Documents.Field.TermVector">
            </seealso>
        </member>
        <member name="M:Lucene.Net.Index.IndexReader.GetTermFreqVector(System.Int32,System.String,Lucene.Net.Index.TermVectorMapper)">
            <summary> Load the Term Vector into a user-defined data structure instead of relying on the parallel arrays of
            the <see cref="T:Lucene.Net.Index.TermFreqVector"/>.
            </summary>
            <param name="docNumber">The number of the document to load the vector for
            </param>
            <param name="field">The name of the field to load
            </param>
            <param name="mapper">The <see cref="T:Lucene.Net.Index.TermVectorMapper"/> to process the vector.  Must not be null
            </param>
            <throws>  IOException if term vectors cannot be accessed or if they do not exist on the field and doc. specified. </throws>
        </member>
        <member name="M:Lucene.Net.Index.IndexReader.GetTermFreqVector(System.Int32,Lucene.Net.Index.TermVectorMapper)">
            <summary> Map all the term vectors for all fields in a Document</summary>
            <param name="docNumber">The number of the document to load the vector for
            </param>
            <param name="mapper">The <see cref="T:Lucene.Net.Index.TermVectorMapper"/> to process the vector.  Must not be null
            </param>
            <throws>  IOException if term vectors cannot be accessed or if they do not exist on the field and doc. specified. </throws>
        </member>
        <member name="M:Lucene.Net.Index.IndexReader.IndexExists(System.String)">
            <summary> Returns <c>true</c> if an index exists at the specified directory.
            If the directory does not exist or if there is no index in it,
            <c>false</c> is returned.
            </summary>
            <param name="directory">the directory to check for an index
            </param>
            <returns> <c>true</c> if an index exists; <c>false</c> otherwise
            </returns>
            <deprecated> Use <see cref="M:Lucene.Net.Index.IndexReader.IndexExists(Lucene.Net.Store.Directory)"/> instead.
            This method will be removed in the 3.0 release.
            
            </deprecated>
        </member>
        <member name="M:Lucene.Net.Index.IndexReader.IndexExists(System.IO.FileInfo)">
            <summary> Returns <c>true</c> if an index exists at the specified directory.
            If the directory does not exist or if there is no index in it, <c>false</c> is returned.
            </summary>
            <param name="directory">the directory to check for an index
            </param>
            <returns> <c>true</c> if an index exists; <c>false</c> otherwise
            </returns>
            <deprecated> Use <see cref="M:Lucene.Net.Index.IndexReader.IndexExists(Lucene.Net.Store.Directory)"/> instead.
            This method will be removed in the 3.0 release.
            
            </deprecated>
        </member>
        <member name="M:Lucene.Net.Index.IndexReader.IndexExists(Lucene.Net.Store.Directory)">
            <summary> Returns <c>true</c> if an index exists at the specified directory.
            If the directory does not exist or if there is no index in it, <c>false</c> is returned.
            </summary>
            <param name="directory">the directory to check for an index
            </param>
            <returns> <c>true</c> if an index exists; <c>false</c> otherwise
            </returns>
            <throws>  IOException if there is a problem with accessing the index </throws>
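            <example>
            A sketch of guarding a read-only open (the index path is illustrative):
            <code>
            var dir = Lucene.Net.Store.FSDirectory.Open(new System.IO.DirectoryInfo("index_dir"));
            if (IndexReader.IndexExists(dir))
            {
                IndexReader reader = IndexReader.Open(dir, true); // read-only
                // ... search, then reader.Close();
            }
            </code>
            </example>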
        </member>
        <member name="M:Lucene.Net.Index.IndexReader.NumDocs">
            <summary>Returns the number of documents in this index. </summary>
        </member>
        <member name="M:Lucene.Net.Index.IndexReader.MaxDoc">
            <summary>Returns one greater than the largest possible document number.
            This may be used to, e.g., determine how big to allocate an array which
            will have an element for every document number in an index.
            </summary>
        </member>
        <member name="M:Lucene.Net.Index.IndexReader.NumDeletedDocs">
            <summary>Returns the number of deleted documents. </summary>
        </member>
        <member name="M:Lucene.Net.Index.IndexReader.Document(System.Int32)">
            <summary> Returns the stored fields of the <c>n</c><sup>th</sup>
            <c>Document</c> in this index.
            <p/>
            <b>NOTE:</b> for performance reasons, this method does not check if the
            requested document is deleted, and therefore asking for a deleted document
            may yield unspecified results. Usually this is not required; however, you
            can call <see cref="M:Lucene.Net.Index.IndexReader.IsDeleted(System.Int32)"/> with the requested document ID to verify
            the document is not deleted.
            
            </summary>
            <throws>  CorruptIndexException if the index is corrupt </throws>
            <throws>  IOException if there is a low-level IO error </throws>
        </member>
        <member name="M:Lucene.Net.Index.IndexReader.Document(System.Int32,Lucene.Net.Documents.FieldSelector)">
            <summary> Get the <see cref="T:Lucene.Net.Documents.Document"/> at the <c>n</c>
            <sup>th</sup> position. The <see cref="T:Lucene.Net.Documents.FieldSelector"/> may be used to determine
            what <see cref="T:Lucene.Net.Documents.Field"/>s to load and how they should
            be loaded. <b>NOTE:</b> If this Reader (more specifically, the underlying
            <c>FieldsReader</c>) is closed before the lazy
            <see cref="T:Lucene.Net.Documents.Field"/> is loaded an exception may be
            thrown. If you want the value of a lazy
            <see cref="T:Lucene.Net.Documents.Field"/> to be available after closing you
            must explicitly load it or fetch the Document again with a new loader.
            <p/>
            <b>NOTE:</b> for performance reasons, this method does not check if the
            requested document is deleted, and therefore asking for a deleted document
            may yield unspecified results. Usually this is not required; however, you
            can call <see cref="M:Lucene.Net.Index.IndexReader.IsDeleted(System.Int32)"/> with the requested document ID to verify
            the document is not deleted.
            
            </summary>
            <param name="n">Get the document at the <c>n</c><sup>th</sup> position
            </param>
            <param name="fieldSelector">The <see cref="T:Lucene.Net.Documents.FieldSelector"/> to use to determine what
            Fields should be loaded on the Document. May be null, in which case
            all Fields will be loaded.
            </param>
            <returns> The stored fields of the
            <see cref="T:Lucene.Net.Documents.Document"/> at the nth position
            </returns>
            <throws>  CorruptIndexException if the index is corrupt </throws>
            <throws>  IOException if there is a low-level IO error </throws>
            <seealso cref="T:Lucene.Net.Documents.Fieldable">
            </seealso>
            <seealso cref="T:Lucene.Net.Documents.FieldSelector">
            </seealso>
            <seealso cref="T:Lucene.Net.Documents.SetBasedFieldSelector">
            </seealso>
            <seealso cref="T:Lucene.Net.Documents.LoadFirstFieldSelector">
            </seealso>
        </member>
        <member name="M:Lucene.Net.Index.IndexReader.IsDeleted(System.Int32)">
            <summary>Returns true if document <i>n</i> has been deleted </summary>
        </member>
        <member name="M:Lucene.Net.Index.IndexReader.HasDeletions">
            <summary>Returns true if any documents have been deleted </summary>
        </member>
        <member name="M:Lucene.Net.Index.IndexReader.HasNorms(System.String)">
            <summary>Returns true if there are norms stored for this field. </summary>
        </member>
        <member name="M:Lucene.Net.Index.IndexReader.Norms(System.String)">
            <summary>Returns the byte-encoded normalization factor for the named field of
            every document.  This is used by the search code to score documents.
            
            </summary>
            <seealso cref="M:Lucene.Net.Documents.AbstractField.SetBoost(System.Single)">
            </seealso>
        </member>
        <member name="M:Lucene.Net.Index.IndexReader.Norms(System.String,System.Byte[],System.Int32)">
            <summary>Reads the byte-encoded normalization factor for the named field of every
            document.  This is used by the search code to score documents.
            
            </summary>
            <seealso cref="M:Lucene.Net.Documents.AbstractField.SetBoost(System.Single)">
            </seealso>
        </member>
        <member name="M:Lucene.Net.Index.IndexReader.SetNorm(System.Int32,System.String,System.Byte)">
            <summary>Expert: Resets the normalization factor for the named field of the named
            document.  The norm represents the product of the field's <see cref="M:Lucene.Net.Documents.Fieldable.SetBoost(System.Single)">boost</see>
            and its <see cref="M:Lucene.Net.Search.Similarity.LengthNorm(System.String,System.Int32)">length normalization</see>.  Thus, to preserve the length normalization
            values when resetting this, one should base the new value upon the old.
            
            <b>NOTE:</b> If this field does not store norms, then
            this method call will silently do nothing.
            
            </summary>
            <seealso cref="M:Lucene.Net.Index.IndexReader.Norms(System.String)">
            </seealso>
            <seealso cref="M:Lucene.Net.Search.Similarity.DecodeNorm(System.Byte)">
            </seealso>
            <throws>  StaleReaderException if the index has changed since this reader was opened </throws>
            <throws>  CorruptIndexException if the index is corrupt </throws>
            <throws>  LockObtainFailedException if another writer has this index open (<c>write.lock</c> could not be obtained) </throws>
            <throws>  IOException if there is a low-level IO error </throws>
        </member>
        <member name="M:Lucene.Net.Index.IndexReader.DoSetNorm(System.Int32,System.String,System.Byte)">
            <summary>Implements setNorm in subclass.</summary>
        </member>
        <member name="M:Lucene.Net.Index.IndexReader.SetNorm(System.Int32,System.String,System.Single)">
            <summary>Expert: Resets the normalization factor for the named field of the named
            document.
            
            </summary>
            <seealso cref="M:Lucene.Net.Index.IndexReader.Norms(System.String)">
            </seealso>
            <seealso cref="M:Lucene.Net.Search.Similarity.DecodeNorm(System.Byte)">
            
            </seealso>
            <throws>  StaleReaderException if the index has changed since this reader was opened </throws>
            <throws>  CorruptIndexException if the index is corrupt </throws>
            <throws>  LockObtainFailedException if another writer has this index open (<c>write.lock</c> could not be obtained) </throws>
            <throws>  IOException if there is a low-level IO error </throws>
        </member>
        <member name="M:Lucene.Net.Index.IndexReader.Terms">
            <summary>Returns an enumeration of all the terms in the index. The
            enumeration is ordered by Term.compareTo(). Each term is greater
            than all that precede it in the enumeration. Note that after
            calling terms(), <see cref="M:Lucene.Net.Index.TermEnum.Next"/> must be called
            on the resulting enumeration before calling other methods such as
            <see cref="M:Lucene.Net.Index.TermEnum.Term"/>.
            </summary>
            <throws>  IOException if there is a low-level IO error </throws>
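            <example>
            Iterating every term in the index (note the required call to <c>Next()</c> before <c>Term()</c>; <c>reader</c> is an already-open reader):
            <code>
            TermEnum te = reader.Terms();
            try
            {
                while (te.Next())
                {
                    Term t = te.Term();
                    // use t.Field(), t.Text(), te.DocFreq() ...
                }
            }
            finally
            {
                te.Close();
            }
            </code>
            </example>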
        </member>
        <member name="M:Lucene.Net.Index.IndexReader.Terms(Lucene.Net.Index.Term)">
            <summary>Returns an enumeration of all terms starting at a given term. If
            the given term does not exist, the enumeration is positioned at the
            first term greater than the supplied term. The enumeration is
            ordered by Term.compareTo(). Each term is greater than all that
            precede it in the enumeration.
            </summary>
            <throws>  IOException if there is a low-level IO error </throws>
        </member>
        <member name="M:Lucene.Net.Index.IndexReader.DocFreq(Lucene.Net.Index.Term)">
            <summary>Returns the number of documents containing the term <c>t</c>.</summary>
            <throws>  IOException if there is a low-level IO error </throws>
        </member>
        <member name="M:Lucene.Net.Index.IndexReader.TermDocs(Lucene.Net.Index.Term)">
            <summary>Returns an enumeration of all the documents which contain
            <c>term</c>. For each document, in addition to the document number,
            the frequency of the term in that document is also provided, for use in
            search scoring.  If term is null, then all non-deleted
            docs are returned with freq=1.
            Thus, this method implements the mapping:
            <p/><list>
            Term &#160;&#160; =&gt; &#160;&#160; &lt;docNum, freq&gt;<sup>*</sup>
            </list>
            <p/>The enumeration is ordered by document number.  Each document number
            is greater than all that precede it in the enumeration.
            </summary>
            <throws>  IOException if there is a low-level IO error </throws>
        </member>
        <member name="M:Lucene.Net.Index.IndexReader.TermDocs">
            <summary>Returns an unpositioned <see cref="T:Lucene.Net.Index.TermDocs"/> enumerator.</summary>
            <throws>  IOException if there is a low-level IO error </throws>
        </member>
        <member name="M:Lucene.Net.Index.IndexReader.TermPositions(Lucene.Net.Index.Term)">
            <summary>Returns an enumeration of all the documents which contain
            <c>term</c>.  For each document, in addition to the document number
            and frequency of the term in that document, a list of all of the ordinal
            positions of the term in the document is available.  Thus, this method
            implements the mapping:
            
            <p/><list>
            Term &#160;&#160; =&gt; &#160;&#160; &lt;docNum, freq,
            &lt;pos<sub>1</sub>, pos<sub>2</sub>, ...
            pos<sub>freq-1</sub>&gt;
            &gt;<sup>*</sup>
            </list>
            <p/> This positional information facilitates phrase and proximity searching.
            <p/>The enumeration is ordered by document number.  Each document number is
            greater than all that precede it in the enumeration.
            </summary>
            <throws>  IOException if there is a low-level IO error </throws>
        </member>
        <member name="M:Lucene.Net.Index.IndexReader.TermPositions">
            <summary>Returns an unpositioned <see cref="T:Lucene.Net.Index.TermPositions"/> enumerator.</summary>
            <throws>  IOException if there is a low-level IO error </throws>
        </member>
        <member name="M:Lucene.Net.Index.IndexReader.DeleteDocument(System.Int32)">
            <summary>Deletes the document numbered <c>docNum</c>.  Once a document is
            deleted it will not appear in TermDocs or TermPositions enumerations.
            Attempts to read its field with the <see cref="M:Lucene.Net.Index.IndexReader.Document(System.Int32)"/>
            method will result in an error.  The presence of this document may still be
            reflected in the <see cref="M:Lucene.Net.Index.IndexReader.DocFreq(Lucene.Net.Index.Term)"/> statistic, though
            this will be corrected eventually as the index is further modified.
            
            </summary>
            <throws>  StaleReaderException if the index has changed since this reader was opened </throws>
            <throws>  CorruptIndexException if the index is corrupt </throws>
            <throws>  LockObtainFailedException if another writer has this index open (<c>write.lock</c> could not be obtained) </throws>
            <throws>  IOException if there is a low-level IO error </throws>
        </member>
        <member name="M:Lucene.Net.Index.IndexReader.DoDelete(System.Int32)">
            <summary>Implements deletion of the document numbered <c>docNum</c>.
            Applications should call <see cref="M:Lucene.Net.Index.IndexReader.DeleteDocument(System.Int32)"/> or <see cref="M:Lucene.Net.Index.IndexReader.DeleteDocuments(Lucene.Net.Index.Term)"/>.
            </summary>
        </member>
        <member name="M:Lucene.Net.Index.IndexReader.DeleteDocuments(Lucene.Net.Index.Term)">
            <summary>Deletes all documents that have a given <c>term</c> indexed.
            This is useful if one uses a document field to hold a unique ID string for
            the document.  Then to delete such a document, one merely constructs a
            term with the appropriate field and the unique ID string as its text and
            passes it to this method.
            See <see cref="M:Lucene.Net.Index.IndexReader.DeleteDocument(System.Int32)"/> for information about when this deletion will 
            become effective.
            
            </summary>
            <returns> the number of documents deleted
            </returns>
            <throws>  StaleReaderException if the index has changed since this reader was opened </throws>
            <throws>  CorruptIndexException if the index is corrupt </throws>
            <throws>  LockObtainFailedException if another writer has this index open (<c>write.lock</c> could not be obtained) </throws>
            <throws>  IOException if there is a low-level IO error </throws>
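            <example>
            Deleting by a unique key field (a sketch; assumes documents were indexed with an <c>"id"</c> field and <c>reader</c> is writable):
            <code>
            int deleted = reader.DeleteDocuments(new Term("id", "42"));
            reader.Close(); // commits the deletions
            </code>
            </example>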
        </member>
        <member name="M:Lucene.Net.Index.IndexReader.UndeleteAll">
            <summary>Undeletes all documents currently marked as deleted in this index.
            
            </summary>
            <throws>  StaleReaderException if the index has changed
            since this reader was opened </throws>
            <throws>  LockObtainFailedException if another writer
            has this index open (<c>write.lock</c> could not
            be obtained) </throws>
            <throws>  CorruptIndexException if the index is corrupt </throws>
            <throws>  IOException if there is a low-level IO error </throws>
        </member>
        <member name="M:Lucene.Net.Index.IndexReader.DoUndeleteAll">
            <summary>Implements actual undeleteAll() in subclass. </summary>
        </member>
        <member name="M:Lucene.Net.Index.IndexReader.AcquireWriteLock">
            <summary>Does nothing by default. Subclasses that require a write lock for
            index modifications must implement this method. 
            </summary>
        </member>
        <member name="M:Lucene.Net.Index.IndexReader.Flush">
            <summary>Writes any pending changes to the index.</summary>
            <throws>  IOException </throws>
        </member>
        <member name="M:Lucene.Net.Index.IndexReader.Flush(System.Collections.Generic.IDictionary{System.String,System.String})">
             <param name="commitUserData">Opaque Map (String -&gt; String)
             that's recorded into the segments file in the index,
             and retrievable by <see cref="M:Lucene.Net.Index.IndexReader.GetCommitUserData(Lucene.Net.Store.Directory)"/>.
             </param>
             <throws>  IOException </throws>
        </member>
        <member name="M:Lucene.Net.Index.IndexReader.Commit">
            <summary> Commit changes resulting from delete, undeleteAll, or
            setNorm operations
            
            If an exception is hit, then either no changes or all
            changes will have been committed to the index
            (transactional semantics).
            </summary>
            <throws>  IOException if there is a low-level IO error </throws>
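            <example>A sketch of the transactional pattern described above (variable names and values are illustrative assumptions):
            <code>
            IndexReader reader = IndexReader.Open(dir, false);
            reader.DeleteDocuments(new Term("id", "12345"));
            // Either all of the changes above are committed, or none are.
            reader.Commit();
            reader.Close();
            </code>
            </example>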
        </member>
        <member name="M:Lucene.Net.Index.IndexReader.Commit(System.Collections.Generic.IDictionary{System.String,System.String})">
            <summary> Commit changes resulting from delete, undeleteAll, or
            setNorm operations
            
            If an exception is hit, then either no changes or all
            changes will have been committed to the index
            (transactional semantics).
            </summary>
            <throws>  IOException if there is a low-level IO error </throws>
        </member>
        <member name="M:Lucene.Net.Index.IndexReader.DoCommit">
            <summary>Implements commit.</summary>
            <deprecated> Please implement 
            <see cref="M:Lucene.Net.Index.IndexReader.DoCommit(System.Collections.Generic.IDictionary{System.String,System.String})"/>
            instead. 
            </deprecated>
        </member>
        <member name="M:Lucene.Net.Index.IndexReader.DoCommit(System.Collections.Generic.IDictionary{System.String,System.String})">
            <summary>Implements commit.  NOTE: subclasses should override
            this.  In 3.0 this will become an abstract method. 
            </summary>
        </member>
        <member name="M:Lucene.Net.Index.IndexReader.Close">
            <summary> Closes files associated with this index.
            Also saves any new deletions to disk.
            No other methods should be called after this has been called.
            </summary>
            <throws>  IOException if there is a low-level IO error </throws>
        </member>
        <member name="M:Lucene.Net.Index.IndexReader.Dispose">
            <summary>
            .NET-specific: disposes this reader, equivalent to calling <see cref="M:Lucene.Net.Index.IndexReader.Close"/>.
            </summary>
        </member>
        <member name="M:Lucene.Net.Index.IndexReader.DoClose">
            <summary>Implements close. </summary>
        </member>
        <member name="M:Lucene.Net.Index.IndexReader.GetFieldNames(Lucene.Net.Index.IndexReader.FieldOption)">
            <summary> Get a list of unique field names that exist in this index and have the specified
            field option information.
            </summary>
            <param name="fldOption">specifies which field option should be available for the returned fields
            </param>
            <returns> Collection of Strings indicating the names of the fields.
            </returns>
            <seealso cref="T:Lucene.Net.Index.IndexReader.FieldOption">
            </seealso>
        </member>
        <member name="M:Lucene.Net.Index.IndexReader.IsLocked(Lucene.Net.Store.Directory)">
            <summary> Returns <c>true</c> iff the index in the named directory is
            currently locked.
            </summary>
            <param name="directory">the directory to check for a lock
            </param>
            <throws>  IOException if there is a low-level IO error </throws>
            <deprecated> Please use <see cref="M:Lucene.Net.Index.IndexWriter.IsLocked(Lucene.Net.Store.Directory)"/> instead.
            This method will be removed in the 3.0 release.
            
            </deprecated>
        </member>
        <member name="M:Lucene.Net.Index.IndexReader.IsLocked(System.String)">
            <summary> Returns <c>true</c> iff the index in the named directory is
            currently locked.
            </summary>
            <param name="directory">the directory to check for a lock
            </param>
            <throws>  IOException if there is a low-level IO error </throws>
            <deprecated> Use <see cref="M:Lucene.Net.Index.IndexReader.IsLocked(Lucene.Net.Store.Directory)"/> instead.
            This method will be removed in the 3.0 release.
            
            </deprecated>
        </member>
        <member name="M:Lucene.Net.Index.IndexReader.Unlock(Lucene.Net.Store.Directory)">
            <summary> Forcibly unlocks the index in the named directory.
            <p/>
            Caution: this should only be used by failure recovery code,
            when it is known that no other process nor thread is in fact
            currently accessing this index.
            </summary>
            <deprecated> Please use <see cref="M:Lucene.Net.Index.IndexWriter.Unlock(Lucene.Net.Store.Directory)"/> instead.
            This method will be removed in the 3.0 release.
            
            </deprecated>
        </member>
        <member name="M:Lucene.Net.Index.IndexReader.GetIndexCommit">
            <summary> Expert: return the IndexCommit that this reader has
            opened.  This method is only implemented by those
            readers that correspond to a Directory with its own
            segments_N file.
            
            <p/><b>WARNING</b>: this API is new and experimental and
            may suddenly change.<p/>
            </summary>
        </member>
        <member name="M:Lucene.Net.Index.IndexReader.Main(System.String[])">
            <summary> Prints the filename and size of each file within a given compound file.
            Add the -extract flag to extract files to the current working directory.
            In order to make the extracted version of the index work, you have to copy
            the segments file from the compound index into the directory where the extracted files are stored.
            </summary>
            <param name="args">Usage: Lucene.Net.Index.IndexReader [-extract] &lt;cfsfile&gt;
            </param>
        </member>
        <member name="M:Lucene.Net.Index.IndexReader.ListCommits(Lucene.Net.Store.Directory)">
             <summary>Returns all commit points that exist in the Directory.
             Normally, because the default is <see cref="T:Lucene.Net.Index.KeepOnlyLastCommitDeletionPolicy"/>,
             there would be only one commit point.  But if you're using a
             custom <see cref="T:Lucene.Net.Index.IndexDeletionPolicy"/>
             then there could be many commits.
             Once you have a given commit, you can open a reader on
             it by calling <see cref="M:Lucene.Net.Index.IndexReader.Open(Lucene.Net.Index.IndexCommit)"/>.
             There must be at least one commit in
             the Directory, else this method throws <see cref="T:System.IO.IOException"/>.  
             Note that if a commit is in
             progress while this method is running, that commit
             may or may not appear in the returned array.  
             </summary>
        </member>
        <member name="M:Lucene.Net.Index.IndexReader.GetSequentialSubReaders">
            <summary>Expert: returns the sequential sub readers that this
            reader is logically composed of.  For example,
            IndexSearcher uses this API to drive searching by one
            sub reader at a time.  If this reader is not composed
            of sequential child readers, it should return null.
            If this method returns an empty array, that means this
            reader is a null reader (for example a MultiReader
            that has no sub readers).
            <p/>
            NOTE: You should not try using sub-readers returned by
            this method to make any changes (setNorm, deleteDocument,
            etc.). While this might succeed for one composite reader
            (like MultiReader), it will most likely lead to index
            corruption for other readers (like DirectoryReader obtained
            through <see cref="M:Lucene.Net.Index.IndexReader.Open(Lucene.Net.Store.Directory,System.Boolean)"/>). Use the parent reader directly. 
            </summary>
        </member>
        <member name="M:Lucene.Net.Index.IndexReader.GetFieldCacheKey">
            <summary>Expert.</summary>
            <deprecated> 
            </deprecated>
        </member>
        <member name="M:Lucene.Net.Index.IndexReader.GetDeletesCacheKey">
            <summary>Expert.  Warning: this returns null if the reader has
            no deletions.</summary>
        </member>
        <member name="M:Lucene.Net.Index.IndexReader.GetUniqueTermCount">
            <summary>Returns the number of unique terms (across all fields)
            in this reader.
            
            This method returns long, even though internally
            Lucene cannot handle more than 2^31 unique terms, for
            a possible future when this limitation is removed.
            
            </summary>
            <throws>  UnsupportedOperationException if this count
            cannot be easily determined (e.g. Multi*Readers).
            Instead, you should call <see cref="M:Lucene.Net.Index.IndexReader.GetSequentialSubReaders"/>
            and ask each sub reader for
            its unique term count. </throws>
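            <example>A sketch of the per-sub-reader fallback suggested above (assumes <c>reader</c> is an already-open IndexReader):
            <code>
            long total = 0;
            IndexReader[] subReaders = reader.GetSequentialSubReaders();
            if (subReaders == null)
            {
                // Not composed of sequential children: ask the reader directly.
                total = reader.GetUniqueTermCount();
            }
            else
            {
                foreach (IndexReader sub in subReaders)
                {
                    // Note: terms shared across segments are counted once per segment.
                    total += sub.GetUniqueTermCount();
                }
            }
            </code>
            </example>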
        </member>
        <member name="M:Lucene.Net.Index.IndexReader.GetDisableFakeNorms">
            <summary>Expert: Return the state of the flag that disables fakes norms in favor of representing the absence of field norms with null.</summary>
            <returns> true if fake norms are disabled
            </returns>
            <deprecated> This currently defaults to false (to remain
            back-compatible), but in 3.0 it will be hardwired to
            true, meaning the norms() methods will return null for
            fields that had disabled norms.
            </deprecated>
        </member>
        <member name="M:Lucene.Net.Index.IndexReader.SetDisableFakeNorms(System.Boolean)">
            <summary>Expert: Set the state of the flag that disables fakes norms in favor of representing the absence of field norms with null.</summary>
            <param name="disableFakeNorms">true to disable fake norms, false to preserve the legacy behavior
            </param>
            <deprecated> This currently defaults to false (to remain
            back-compatible), but in 3.0 it will be hardwired to
            true, meaning the norms() methods will return null for
            fields that had disabled norms.
            </deprecated>
        </member>
        <member name="T:Lucene.Net.Index.SegmentInfos.FindSegmentsFile">
            <summary> Utility class for executing code that needs to do
            something with the current segments file.  This is
            necessary with lock-less commits because from the time
            you locate the current segments file name, until you
            actually open it, read its contents, or check modified
            time, etc., it could have been deleted due to a writer
            commit finishing.
            </summary>
        </member>
        <member name="T:Lucene.Net.Index.SegmentInfos">
            <summary> A collection of segmentInfo objects with methods for operating on
            those segments in relation to the file system.
            
            <p/><b>NOTE:</b> This API is new and still experimental
            (subject to change suddenly in the next release)<p/>
            </summary>
        </member>
        <member name="F:Lucene.Net.Index.SegmentInfos.FORMAT">
            <summary>The file format version, a negative number. </summary>
        </member>
        <member name="F:Lucene.Net.Index.SegmentInfos.FORMAT_LOCKLESS">
            <summary>This format adds details used for lockless commits.  It differs
            slightly from the previous format in that file names
            are never re-used (write once).  Instead, each file is
            written to the next generation.  For example,
            segments_1, segments_2, etc.  This allows us to not use
            a commit lock.  See <a
            href="http://lucene.apache.org/java/docs/fileformats.html">file
            formats</a> for details.
            </summary>
        </member>
        <member name="F:Lucene.Net.Index.SegmentInfos.FORMAT_SINGLE_NORM_FILE">
            <summary>This format adds a "hasSingleNormFile" flag into each segment info.
            See <a href="http://issues.apache.org/jira/browse/LUCENE-756">LUCENE-756</a>
            for details.
            </summary>
        </member>
        <member name="F:Lucene.Net.Index.SegmentInfos.FORMAT_SHARED_DOC_STORE">
            <summary>This format allows multiple segments to share a single
            vectors and stored fields file. 
            </summary>
        </member>
        <member name="F:Lucene.Net.Index.SegmentInfos.FORMAT_CHECKSUM">
            <summary>This format adds a checksum at the end of the file to
            ensure all bytes were successfully written. 
            </summary>
        </member>
        <member name="F:Lucene.Net.Index.SegmentInfos.FORMAT_DEL_COUNT">
            <summary>This format adds the deletion count for each segment.
            This way IndexWriter can efficiently report numDocs(). 
            </summary>
        </member>
        <member name="F:Lucene.Net.Index.SegmentInfos.FORMAT_HAS_PROX">
            <summary>This format adds the boolean hasProx to record if any
            fields in the segment store prox information (ie, have
            omitTermFreqAndPositions==false) 
            </summary>
        </member>
        <member name="F:Lucene.Net.Index.SegmentInfos.FORMAT_USER_DATA">
            <summary>This format adds optional commit userData (String) storage. </summary>
        </member>
        <member name="F:Lucene.Net.Index.SegmentInfos.FORMAT_DIAGNOSTICS">
            <summary>This format adds optional per-segment String
            diagnostics storage, and switches userData to Map 
            </summary>
        </member>
        <member name="F:Lucene.Net.Index.SegmentInfos.version">
            <summary> Counts how often the index has been changed by adding or deleting docs.
            Starting from the current time in milliseconds forces version numbers to be unique.
            </summary>
        </member>
        <member name="F:Lucene.Net.Index.SegmentInfos.infoStream">
            <summary> If non-null, information about loading segments_N files</summary>
            <seealso cref="M:Lucene.Net.Index.SegmentInfos.SetInfoStream(System.IO.StreamWriter)">
            </seealso>
        </member>
        <member name="M:Lucene.Net.Index.SegmentInfos.GetCurrentSegmentGeneration(System.String[])">
            <summary> Get the generation (N) of the current segments_N file
            from a list of files.
            
            </summary>
            <param name="files">-- array of file names to check
            </param>
        </member>
        <member name="M:Lucene.Net.Index.SegmentInfos.GetCurrentSegmentGeneration(Lucene.Net.Store.Directory)">
            <summary> Get the generation (N) of the current segments_N file
            in the directory.
            
            </summary>
            <param name="directory">-- directory to search for the latest segments_N file
            </param>
        </member>
        <member name="M:Lucene.Net.Index.SegmentInfos.GetCurrentSegmentFileName(System.String[])">
            <summary> Get the filename of the current segments_N file
            from a list of files.
            
            </summary>
            <param name="files">-- array of file names to check
            </param>
        </member>
        <member name="M:Lucene.Net.Index.SegmentInfos.GetCurrentSegmentFileName(Lucene.Net.Store.Directory)">
            <summary> Get the filename of the current segments_N file
            in the directory.
            
            </summary>
            <param name="directory">-- directory to search for the latest segments_N file
            </param>
        </member>
        <member name="M:Lucene.Net.Index.SegmentInfos.GetCurrentSegmentFileName">
            <summary> Get the segments_N filename in use by this segment infos.</summary>
        </member>
        <member name="M:Lucene.Net.Index.SegmentInfos.GenerationFromSegmentsFileName(System.String)">
            <summary> Parse the generation off the segments file name and
            return it.
            </summary>
        </member>
        <member name="M:Lucene.Net.Index.SegmentInfos.GetNextSegmentFileName">
            <summary> Get the next segments_N filename that will be written.</summary>
        </member>
        <member name="M:Lucene.Net.Index.SegmentInfos.Read(Lucene.Net.Store.Directory,System.String)">
            <summary> Read a particular segmentFileName.  Note that this may
            throw an IOException if a commit is in process.
            
            </summary>
            <param name="directory">-- directory containing the segments file
            </param>
            <param name="segmentFileName">-- segment file to load
            </param>
            <throws>  CorruptIndexException if the index is corrupt </throws>
            <throws>  IOException if there is a low-level IO error </throws>
        </member>
        <member name="M:Lucene.Net.Index.SegmentInfos.Read(Lucene.Net.Store.Directory)">
            <summary> This version of read uses the retry logic (for lock-less
            commits) to find the right segments file to load.
            </summary>
            <throws>  CorruptIndexException if the index is corrupt </throws>
            <throws>  IOException if there is a low-level IO error </throws>
        </member>
        <member name="M:Lucene.Net.Index.SegmentInfos.Clone">
            <summary> Returns a copy of this instance, also copying each
            SegmentInfo.
            </summary>
        </member>
        <member name="M:Lucene.Net.Index.SegmentInfos.GetVersion">
            <summary> version number when this SegmentInfos was generated.</summary>
        </member>
        <member name="M:Lucene.Net.Index.SegmentInfos.ReadCurrentVersion(Lucene.Net.Store.Directory)">
            <summary> Current version number from segments file.</summary>
            <throws>  CorruptIndexException if the index is corrupt </throws>
            <throws>  IOException if there is a low-level IO error </throws>
        </member>
        <member name="M:Lucene.Net.Index.SegmentInfos.ReadCurrentUserData(Lucene.Net.Store.Directory)">
            <summary> Returns userData from latest segments file</summary>
            <throws>  CorruptIndexException if the index is corrupt </throws>
            <throws>  IOException if there is a low-level IO error </throws>
        </member>
        <member name="M:Lucene.Net.Index.SegmentInfos.SetInfoStream(System.IO.StreamWriter)">
            <summary>If non-null, information about retries when loading
            the segments file will be printed to this.
            </summary>
        </member>
        <member name="M:Lucene.Net.Index.SegmentInfos.SetDefaultGenFileRetryCount(System.Int32)">
            <summary> Advanced: set how many times to try loading the
            segments.gen file contents to determine current segment
            generation.  This file is only referenced when the
            primary method (listing the directory) fails.
            </summary>
        </member>
        <member name="M:Lucene.Net.Index.SegmentInfos.GetDefaultGenFileRetryCount">
            <seealso cref="M:Lucene.Net.Index.SegmentInfos.SetDefaultGenFileRetryCount(System.Int32)">
            </seealso>
        </member>
        <member name="M:Lucene.Net.Index.SegmentInfos.SetDefaultGenFileRetryPauseMsec(System.Int32)">
            <summary> Advanced: set how many milliseconds to pause in between
            attempts to load the segments.gen file.
            </summary>
        </member>
        <member name="M:Lucene.Net.Index.SegmentInfos.GetDefaultGenFileRetryPauseMsec">
            <seealso cref="M:Lucene.Net.Index.SegmentInfos.SetDefaultGenFileRetryPauseMsec(System.Int32)">
            </seealso>
        </member>
        <member name="M:Lucene.Net.Index.SegmentInfos.SetDefaultGenLookaheadCount(System.Int32)">
            <summary> Advanced: set how many times to try incrementing the
            gen when loading the segments file.  This only runs if
            the primary (listing directory) and secondary (opening
            segments.gen file) methods fail to find the segments
            file.
            </summary>
        </member>
        <member name="M:Lucene.Net.Index.SegmentInfos.GetDefaultGenLookahedCount">
            <seealso cref="M:Lucene.Net.Index.SegmentInfos.SetDefaultGenLookaheadCount(System.Int32)">
            </seealso>
        </member>
        <member name="M:Lucene.Net.Index.SegmentInfos.GetInfoStream">
            <seealso cref="M:Lucene.Net.Index.SegmentInfos.SetInfoStream(System.IO.StreamWriter)">
            </seealso>
        </member>
        <member name="M:Lucene.Net.Index.SegmentInfos.Range(System.Int32,System.Int32)">
            <summary> Returns a new SegmentInfos containing the SegmentInfo
            instances in the specified range first (inclusive) to
            last (exclusive), so total number of segments returned
            is last-first.
            </summary>
        </member>
        <member name="M:Lucene.Net.Index.SegmentInfos.PrepareCommit(Lucene.Net.Store.Directory)">
            <summary>Call this to start a commit.  This writes the new
            segments file, but writes an invalid checksum at the
            end, so that it is not visible to readers.  Once this
            is called you must call <see cref="M:Lucene.Net.Index.SegmentInfos.FinishCommit(Lucene.Net.Store.Directory)"/> to complete
            the commit or <see cref="M:Lucene.Net.Index.SegmentInfos.RollbackCommit(Lucene.Net.Store.Directory)"/> to abort it. 
            </summary>
        </member>
        <member name="M:Lucene.Net.Index.SegmentInfos.Files(Lucene.Net.Store.Directory,System.Boolean)">
            <summary>Returns all file names referenced by SegmentInfo
            instances matching the provided Directory (ie files
            associated with any "external" segments are skipped).
            The returned collection is recomputed on each
            invocation.  
            </summary>
        </member>
        <member name="M:Lucene.Net.Index.SegmentInfos.Commit(Lucene.Net.Store.Directory)">
            <summary>Writes &amp; syncs to the Directory dir, taking care to
            remove the segments file on exception 
            </summary>
        </member>
        <member name="M:Lucene.Net.Index.SegmentInfos.Replace(Lucene.Net.Index.SegmentInfos)">
            <summary>Replaces all segments in this instance, but keeps
            generation, version, counter so that future commits
            remain write once.
            </summary>
        </member>
        <member name="M:Lucene.Net.Index.SegmentInfos.Equals(System.Object)">
            <summary>
            Simple brute force implementation.
            If size is equal, compare items one by one.
            </summary>
            <param name="obj">SegmentInfos object to check equality for</param>
            <returns>true if lists are equal, false otherwise</returns>
        </member>
        <member name="M:Lucene.Net.Index.SegmentInfos.GetHashCode">
            <summary>
            Calculate hash code of SegmentInfos
            </summary>
            <returns>hash code as in java version of ArrayList</returns>
        </member>
        <member name="M:Lucene.Net.Index.SegmentInfos.FindSegmentsFile.DoBody(System.String)">
            <summary> Subclass must implement this.  The assumption is an
            IOException will be thrown if something goes wrong
            during the processing that could have been caused by
            a writer committing.
            </summary>
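            <example>A minimal sketch of such a subclass (the class name, field, and body are illustrative assumptions, not library code):
            <code>
            private class ReadVersion : SegmentInfos.FindSegmentsFile
            {
                private readonly Lucene.Net.Store.Directory dir;

                public ReadVersion(Lucene.Net.Store.Directory d) : base(d)
                {
                    dir = d;
                }

                public override System.Object DoBody(System.String segmentFileName)
                {
                    // Work with the current segments file.  An IOException here is
                    // assumed to mean a writer commit finished underneath us, and
                    // the retry logic in Run() will locate the new segments file.
                    SegmentInfos infos = new SegmentInfos();
                    infos.Read(dir, segmentFileName);
                    return infos.GetVersion();
                }
            }
            </code>
            </example>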
        </member>
        <member name="T:Lucene.Net.Index.IndexReader.FieldOption">
            <summary> Constants describing field properties, for example used for
            <see cref="M:Lucene.Net.Index.IndexReader.GetFieldNames(Lucene.Net.Index.IndexReader.FieldOption)"/>.
            </summary>
        </member>
        <member name="F:Lucene.Net.Index.IndexReader.FieldOption.ALL">
            <summary>All fields </summary>
        </member>
        <member name="F:Lucene.Net.Index.IndexReader.FieldOption.INDEXED">
            <summary>All indexed fields </summary>
        </member>
        <member name="F:Lucene.Net.Index.IndexReader.FieldOption.STORES_PAYLOADS">
            <summary>All fields that store payloads </summary>
        </member>
        <member name="F:Lucene.Net.Index.IndexReader.FieldOption.OMIT_TERM_FREQ_AND_POSITIONS">
            <summary>All fields that omit tf </summary>
        </member>
        <member name="F:Lucene.Net.Index.IndexReader.FieldOption.OMIT_TF">
            <deprecated> Renamed to <see cref="F:Lucene.Net.Index.IndexReader.FieldOption.OMIT_TERM_FREQ_AND_POSITIONS"/> 
            </deprecated>
        </member>
        <member name="F:Lucene.Net.Index.IndexReader.FieldOption.UNINDEXED">
            <summary>All fields which are not indexed </summary>
        </member>
        <member name="F:Lucene.Net.Index.IndexReader.FieldOption.INDEXED_WITH_TERMVECTOR">
            <summary>All fields which are indexed with termvectors enabled </summary>
        </member>
        <member name="F:Lucene.Net.Index.IndexReader.FieldOption.INDEXED_NO_TERMVECTOR">
            <summary>All fields which are indexed but don't have termvectors enabled </summary>
        </member>
        <member name="F:Lucene.Net.Index.IndexReader.FieldOption.TERMVECTOR">
            <summary>All fields with termvectors enabled. Please note that only standard termvector fields are returned </summary>
        </member>
        <member name="F:Lucene.Net.Index.IndexReader.FieldOption.TERMVECTOR_WITH_POSITION">
            <summary>All fields with termvectors with position values enabled </summary>
        </member>
        <member name="F:Lucene.Net.Index.IndexReader.FieldOption.TERMVECTOR_WITH_OFFSET">
            <summary>All fields with termvectors with offset values enabled </summary>
        </member>
        <member name="F:Lucene.Net.Index.IndexReader.FieldOption.TERMVECTOR_WITH_POSITION_OFFSET">
            <summary>All fields with termvectors with offset values and position values enabled </summary>
        </member>
        <member name="M:Lucene.Net.Index.FilterIndexReader.#ctor(Lucene.Net.Index.IndexReader)">
            <summary> <p/>Construct a FilterIndexReader based on the specified base reader.
            Directory locking for delete, undeleteAll, and setNorm operations is
            left to the base reader.<p/>
            <p/>Note that base reader is closed if this FilterIndexReader is closed.<p/>
            </summary>
             <param name="in_Renamed">specified base reader.
            </param>
        </member>
        <member name="M:Lucene.Net.Index.FilterIndexReader.DoCommit">
            <deprecated> 
            </deprecated>
        </member>
        <member name="M:Lucene.Net.Index.FilterIndexReader.GetFieldCacheKey">
             <summary>
             If the subclass of FilterIndexReader modifies the
             contents of the FieldCache, you must override this
             method to provide a different key.
            </summary>
        </member>
        <member name="M:Lucene.Net.Index.FilterIndexReader.GetDeletesCacheKey">
            <summary>
            If the subclass of FilterIndexReader modifies the
            deleted docs, you must override this method to provide
            a different key.
            </summary>
        </member>
        <member name="T:Lucene.Net.Index.FilterIndexReader.FilterTermDocs">
            <summary>Base class for filtering <see cref="T:Lucene.Net.Index.TermDocs"/> implementations. </summary>
        </member>
        <member name="T:Lucene.Net.Index.FilterIndexReader.FilterTermPositions">
            <summary>Base class for filtering <see cref="T:Lucene.Net.Index.TermPositions"/> implementations. </summary>
        </member>
        <member name="T:Lucene.Net.Index.TermPositions">
            <summary> TermPositions provides an interface for enumerating the &lt;document,
            frequency, &lt;position&gt;* &gt; tuples for a term.  <p/> The document and
            frequency are the same as for a TermDocs.  The positions portion lists the ordinal
            positions of each occurrence of a term in a document.
            
            </summary>
            <seealso cref="M:Lucene.Net.Index.IndexReader.TermPositions">
            </seealso>
        </member>
        <member name="M:Lucene.Net.Index.TermPositions.NextPosition">
            <summary>Returns the next position in the current document.  It is an error to call
            this more than <see cref="M:Lucene.Net.Index.TermDocs.Freq"/> times
            without calling <see cref="M:Lucene.Net.Index.TermDocs.Next"/>.<p/> This is
            invalid until <see cref="M:Lucene.Net.Index.TermDocs.Next"/> is called for
            the first time.
            </summary>
        </member>
        <member name="M:Lucene.Net.Index.TermPositions.GetPayloadLength">
            <summary> Returns the length of the payload at the current term position.
            This is invalid until <see cref="M:Lucene.Net.Index.TermPositions.NextPosition"/> is called for
            the first time.<br/>
            </summary>
            <returns> length of the current payload in number of bytes
            </returns>
        </member>
        <member name="M:Lucene.Net.Index.TermPositions.GetPayload(System.Byte[],System.Int32)">
            <summary> Returns the payload data at the current term position.
            This is invalid until <see cref="M:Lucene.Net.Index.TermPositions.NextPosition"/> is called for
            the first time.
            This method must not be called more than once after each call
            of <see cref="M:Lucene.Net.Index.TermPositions.NextPosition"/>. However, payloads are loaded lazily,
            so if the payload data for the current position is not needed,
            this method may not be called at all for performance reasons.<br/>
            
            </summary>
            <param name="data">the array into which the data of this payload is to be
            stored, if it is big enough; otherwise, a new byte[] array
            is allocated for this purpose. 
            </param>
            <param name="offset">the offset in the array into which the data of this payload
            is to be stored.
            </param>
            <returns> a byte[] array containing the data of this payload
            </returns>
            <throws>  IOException </throws>
        </member>
        <member name="M:Lucene.Net.Index.TermPositions.IsPayloadAvailable">
            <summary> Checks if a payload can be loaded at this position.
            <p/>
            Payloads can only be loaded once per call to 
            <see cref="M:Lucene.Net.Index.TermPositions.NextPosition"/>.
            
            </summary>
            <returns> true if there is a payload available at this position that can be loaded
            </returns>
        </member>
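            As a usage sketch (not part of the generated reference above), the position and payload
            accessors are typically driven in a nested loop. The index path, field name, and term below
            are hypothetical, and the method-style calls (Freq(), GetPayloadLength()) follow the
            Lucene.Net 2.9-era API documented here:

```csharp
// Hedged sketch: enumerate a term's positions and payloads.
// The "index" path, "contents" field, and "lucene" term are made-up examples.
using Lucene.Net.Index;
using Lucene.Net.Store;

Directory dir = FSDirectory.Open(new System.IO.DirectoryInfo("index"));
IndexReader reader = IndexReader.Open(dir, true);   // read-only reader
TermPositions tp = reader.TermPositions(new Term("contents", "lucene"));
while (tp.Next())                        // advance per matching document
{
    int freq = tp.Freq();                // NextPosition() may be called Freq() times
    for (int i = 0; i < freq; i++)
    {
        int position = tp.NextPosition();
        if (tp.IsPayloadAvailable())     // payloads load lazily; read at most once per position
        {
            byte[] payload = tp.GetPayload(new byte[tp.GetPayloadLength()], 0);
        }
    }
}
tp.Close();
reader.Close();
```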
        <member name="T:Lucene.Net.Index.FilterIndexReader.FilterTermEnum">
            <summary>Base class for filtering <see cref="T:Lucene.Net.Index.TermEnum"/> implementations. </summary>
        </member>
        <member name="T:Lucene.Net.Index.TermEnum">
            <summary>Abstract class for enumerating terms.
            <p/>Term enumerations are always ordered by Term.CompareTo().  Each term in
            the enumeration is greater than all that precede it.  
            </summary>
        </member>
        <member name="M:Lucene.Net.Index.TermEnum.Next">
            <summary>Advances the enumeration to the next element.  Returns true if one exists.</summary>
        </member>
        <member name="M:Lucene.Net.Index.TermEnum.Term">
            <summary>Returns the current Term in the enumeration.</summary>
        </member>
        <member name="M:Lucene.Net.Index.TermEnum.DocFreq">
            <summary>Returns the docFreq of the current Term in the enumeration.</summary>
        </member>
        <member name="M:Lucene.Net.Index.TermEnum.Close">
            <summary>Closes the enumeration to further activity, freeing resources. </summary>
        </member>
        <member name="M:Lucene.Net.Index.TermEnum.SkipTo(Lucene.Net.Index.Term)">
            <summary>Skips terms to the first beyond the current whose value is
            greater than or equal to <i>target</i>. <p/>Returns true iff there is such
            an entry.  <p/>Behaves as if written: <code>
            public bool SkipTo(Term target) {
                do {
                    if (!Next())
                        return false;
                } while (target &gt; Term());
                return true;
            }
            </code>
            Some implementations <i>could</i> be considerably more efficient than a linear scan.
            Check the implementation to be sure.
            </summary>
            <deprecated> This method is not performant and will be removed in Lucene 3.0.
            Use <see cref="M:Lucene.Net.Index.IndexReader.Terms(Lucene.Net.Index.Term)"/> to create a new TermEnum positioned at a
            given term.
            </deprecated>
        </member>
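            To illustrate the enumeration contract described above (terms in sorted order, driven by
            Next()/Term()/DocFreq()), a hedged sketch against the Lucene.Net 2.9-era API; the reader
            variable and the "contents" field are assumptions:

```csharp
// Hedged sketch: walk all terms of one field in sorted order.
// Assumes "reader" is an open IndexReader; "contents" is a made-up field name.
TermEnum termEnum = reader.Terms(new Term("contents", ""));
try
{
    do
    {
        Term t = termEnum.Term();            // null once the enumeration is exhausted
        if (t == null || t.Field() != "contents")
            break;                           // left this field's term range
        int docFreq = termEnum.DocFreq();    // number of documents containing the term
    }
    while (termEnum.Next());                 // advance; false at the end
}
finally
{
    termEnum.Close();                        // free resources
}
```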
        <member name="F:Lucene.Net.Index.DirectoryOwningReader.ref_Renamed">
            <summary> This member holds the ref counter that is passed to each instance after cloning/reopening;
            it is shared by all DirectoryOwningReader instances derived from the original one.
            This reuses the class <see cref="T:Lucene.Net.Index.SegmentReader.Ref"/>
            </summary>
        </member>
        <member name="T:Lucene.Net.Index.DirectoryReader">
            <summary> An IndexReader which reads indexes with multiple segments.</summary>
        </member>
        <member name="M:Lucene.Net.Index.DirectoryReader.#ctor(Lucene.Net.Store.Directory,Lucene.Net.Index.SegmentInfos,Lucene.Net.Index.IndexDeletionPolicy,System.Boolean,System.Int32)">
            <summary>Construct reading the named set of readers. </summary>
        </member>
        <member name="M:Lucene.Net.Index.DirectoryReader.#ctor(Lucene.Net.Store.Directory,Lucene.Net.Index.SegmentInfos,Lucene.Net.Index.SegmentReader[],System.Int32[],System.Collections.IDictionary,System.Boolean,System.Boolean,System.Int32)">
            <summary>This constructor is only used for <see cref="M:Lucene.Net.Index.DirectoryReader.Reopen"/> </summary>
        </member>
        <member name="M:Lucene.Net.Index.DirectoryReader.GetVersion">
            <summary>Version number when this IndexReader was opened. </summary>
        </member>
        <member name="M:Lucene.Net.Index.DirectoryReader.IsOptimized">
            <summary> Checks if the index is optimized (i.e. it has a single segment and no deletions)</summary>
            <returns> <c>true</c> if the index is optimized; <c>false</c> otherwise
            </returns>
        </member>
        <member name="M:Lucene.Net.Index.DirectoryReader.AcquireWriteLock">
            <summary> Tries to acquire the write lock on this directory. This method is only valid if this IndexReader
            is the directory owner.
            
            </summary>
            <throws>  StaleReaderException  if the index has changed since this reader was opened </throws>
            <throws>  CorruptIndexException if the index is corrupt </throws>
            <throws>  Lucene.Net.Store.LockObtainFailedException if another writer has this index open (<c>write.lock</c> could not be obtained) </throws>
            <throws>  IOException           if there is a low-level IO error </throws>
        </member>
        <member name="M:Lucene.Net.Index.DirectoryReader.DoCommit">
            <deprecated>  
            </deprecated>
        </member>
        <member name="M:Lucene.Net.Index.DirectoryReader.DoCommit(System.Collections.Generic.IDictionary{System.String,System.String})">
            <summary> Commit changes resulting from delete, undeleteAll, or setNorm operations
            <p/>
            If an exception is hit, then either no changes or all changes will have been committed to the index (transactional
            semantics).
            
            </summary>
            <throws>  IOException if there is a low-level IO error </throws>
        </member>
        <member name="M:Lucene.Net.Index.DirectoryReader.Directory">
            <summary>Returns the directory this index resides in. </summary>
        </member>
        <member name="M:Lucene.Net.Index.DirectoryReader.GetIndexCommit">
            <summary> Expert: return the IndexCommit that this reader has opened.
            <p/>
            <p/><b>WARNING</b>: this API is new and experimental and may suddenly change.<p/>
            </summary>
        </member>
        <member name="M:Lucene.Net.Index.DirectoryReader.ListCommits(Lucene.Net.Store.Directory)">
            <seealso cref="M:Lucene.Net.Index.IndexReader.ListCommits(Lucene.Net.Store.Directory)">
            </seealso>
        </member>
        <member name="T:Lucene.Net.Index.IndexCommit">
            <summary> <p/>Expert: represents a single commit into an index as seen by the
            <see cref="T:Lucene.Net.Index.IndexDeletionPolicy"/> or <see cref="T:Lucene.Net.Index.IndexReader"/>.<p/>
            
            <p/> Changes to the content of an index are made visible
            only after the writer who made that change commits by
            writing a new segments file
            (<c>segments_N</c>). This point in time, when the
            action of writing of a new segments file to the directory
            is completed, is an index commit.<p/>
            
            <p/>Each index commit point has a unique segments file
            associated with it. The segments file associated with a
            later index commit point would have a larger N.<p/>
            
            <p/><b>WARNING</b>: This API is new and experimental and
            may suddenly change. <p/>
            </summary>
        </member>
        <member name="T:Lucene.Net.Index.IndexCommitPoint">
            <deprecated> Please subclass IndexCommit class instead
            </deprecated>
        </member>
        <member name="M:Lucene.Net.Index.IndexCommitPoint.GetSegmentsFileName">
            <summary> Get the segments file (<c>segments_N</c>) associated 
            with this commit point.
            </summary>
        </member>
        <member name="M:Lucene.Net.Index.IndexCommitPoint.GetFileNames">
            <summary> Returns all index files referenced by this commit point.</summary>
        </member>
        <member name="M:Lucene.Net.Index.IndexCommitPoint.Delete">
            <summary> Delete this commit point.
            <p/>
            Upon calling this, the writer is notified that this commit 
            point should be deleted. 
            <p/>
            The decision that a commit point should be deleted is made by the <see cref="T:Lucene.Net.Index.IndexDeletionPolicy"/> in effect
            and therefore this should only be called by its <see cref="M:Lucene.Net.Index.IndexDeletionPolicy.OnInit(System.Collections.IList)"/> or 
            <see cref="M:Lucene.Net.Index.IndexDeletionPolicy.OnCommit(System.Collections.IList)"/> methods.
            </summary>
        </member>
        <member name="M:Lucene.Net.Index.IndexCommit.GetSegmentsFileName">
            <summary> Get the segments file (<c>segments_N</c>) associated 
            with this commit point.
            </summary>
        </member>
        <member name="M:Lucene.Net.Index.IndexCommit.GetFileNames">
            <summary> Returns all index files referenced by this commit point.</summary>
        </member>
        <member name="M:Lucene.Net.Index.IndexCommit.GetDirectory">
            <summary> Returns the <see cref="T:Lucene.Net.Store.Directory"/> for the index.</summary>
        </member>
        <member name="M:Lucene.Net.Index.IndexCommit.Delete">
            <summary> Delete this commit point.  This only applies when using
            the commit point in the context of IndexWriter's
            IndexDeletionPolicy.
            <p/>
            Upon calling this, the writer is notified that this commit 
            point should be deleted. 
            <p/>
            The decision that a commit point should be deleted is made by the <see cref="T:Lucene.Net.Index.IndexDeletionPolicy"/> in effect
            and therefore this should only be called by its <see cref="M:Lucene.Net.Index.IndexDeletionPolicy.OnInit(System.Collections.IList)"/> or 
            <see cref="M:Lucene.Net.Index.IndexDeletionPolicy.OnCommit(System.Collections.IList)"/> methods.
            </summary>
        </member>
        <member name="M:Lucene.Net.Index.IndexCommit.IsOptimized">
            <summary> Returns true if this commit is an optimized index.</summary>
        </member>
        <member name="M:Lucene.Net.Index.IndexCommit.Equals(System.Object)">
            <summary> Two IndexCommits are equal if both their Directories and versions are equal.</summary>
        </member>
        <member name="M:Lucene.Net.Index.IndexCommit.GetVersion">
            <summary>Returns the version for this IndexCommit.  This is the
            same value that <see cref="M:Lucene.Net.Index.IndexReader.GetVersion"/> would
            return if it were opened on this commit. 
            </summary>
        </member>
        <member name="M:Lucene.Net.Index.IndexCommit.GetGeneration">
            <summary>Returns the generation (the _N in segments_N) for this
            IndexCommit 
            </summary>
        </member>
        <member name="M:Lucene.Net.Index.IndexCommit.GetTimestamp">
            <summary>Convenience method that returns the last modified time
            of the segments_N file corresponding to this index
            commit, equivalent to
            GetDirectory().FileModified(GetSegmentsFileName()). 
            </summary>
        </member>
        <member name="M:Lucene.Net.Index.IndexCommit.GetUserData">
            <summary>Returns userData, previously passed to 
            <see cref="M:Lucene.Net.Index.IndexWriter.Commit(System.Collections.Generic.IDictionary{System.String,System.String})"/>
            for this commit.  IDictionary is String -&gt; String. 
            </summary>
        </member>
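            The commit-point accessors above can be combined to inspect an index's commit history; a
            hedged sketch, assuming "dir" is an open Lucene.Net.Store.Directory and the
            Lucene.Net 2.9-era ListCommits signature:

```csharp
// Hedged sketch: list the commit points of an index and read their metadata.
// Assumes "dir" is an open Lucene.Net.Store.Directory over an existing index.
foreach (IndexCommit commit in IndexReader.ListCommits(dir))
{
    string segmentsFile = commit.GetSegmentsFileName(); // the commit's segments_N file
    long version = commit.GetVersion();                 // matches IndexReader.GetVersion()
    long generation = commit.GetGeneration();           // the _N in segments_N
    bool optimized = commit.IsOptimized();              // single segment, no deletions
}
```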
        <member name="M:Lucene.Net.Index.DirectoryReader.MultiTermDocs.Read(System.Int32[],System.Int32[])">
            <summary>Optimized implementation. </summary>
        </member>
        <member name="M:Lucene.Net.Index.DocConsumerPerThread.ProcessDocument">
            <summary>Process the document. If there is
            something for this document to be done in docID order,
            you should encapsulate that as a
            DocumentsWriter.DocWriter and return it.
            DocumentsWriter then calls finish() on this object
            when it's its turn. 
            </summary>
        </member>
        <member name="M:Lucene.Net.Index.DocFieldConsumer.Flush(System.Collections.IDictionary,Lucene.Net.Index.SegmentWriteState)">
            <summary>Called when DocumentsWriter decides to create a new
            segment 
            </summary>
        </member>
        <member name="M:Lucene.Net.Index.DocFieldConsumer.CloseDocStore(Lucene.Net.Index.SegmentWriteState)">
            <summary>Called when DocumentsWriter decides to close the doc
            stores 
            </summary>
        </member>
        <member name="M:Lucene.Net.Index.DocFieldConsumer.Abort">
            <summary>Called when an aborting exception is hit </summary>
        </member>
        <member name="M:Lucene.Net.Index.DocFieldConsumer.AddThread(Lucene.Net.Index.DocFieldProcessorPerThread)">
            <summary>Add a new thread </summary>
        </member>
        <member name="M:Lucene.Net.Index.DocFieldConsumer.FreeRAM">
            <summary>Called when DocumentsWriter is using too much RAM.
            The consumer should free RAM, if possible, returning
            true if any RAM was in fact freed. 
            </summary>
        </member>
        <member name="M:Lucene.Net.Index.DocFieldConsumerPerField.ProcessFields(Lucene.Net.Documents.Fieldable[],System.Int32)">
            <summary>Processes all occurrences of a single field </summary>
        </member>
        <member name="T:Lucene.Net.Index.DocFieldConsumers">
            <summary>This is just a "splitter" class: it lets you wrap two
            DocFieldConsumer instances as a single consumer. 
            </summary>
        </member>
        <member name="T:Lucene.Net.Index.DocumentsWriter.DocWriter">
            <summary>Consumer returns this on each doc.  This holds any
            state that must be flushed synchronized "in docID
            order".  We gather these and flush them in order. 
            </summary>
        </member>
        <member name="T:Lucene.Net.Index.DocumentsWriter">
             <summary> This class accepts multiple added documents and directly
             writes a single segment file.  It does this more
             efficiently than creating a single segment per document
             (with DocumentWriter) and doing standard merges on those
             segments.
             
             Each added document is passed to the <see cref="T:Lucene.Net.Index.DocConsumer"/>,
             which in turn processes the document and interacts with
             other consumers in the indexing chain.  Certain
             consumers, like <see cref="T:Lucene.Net.Index.StoredFieldsWriter"/> and <see cref="T:Lucene.Net.Index.TermVectorsTermsWriter"/>
            , digest a document and
             immediately write bytes to the "doc store" files (ie,
             they do not consume RAM per document, except while they
             are processing the document).
             
             Other consumers, eg <see cref="T:Lucene.Net.Index.FreqProxTermsWriter"/> and
             <see cref="T:Lucene.Net.Index.NormsWriter"/>, buffer bytes in RAM and flush only
             when a new segment is produced.
             Once we have used our allowed RAM buffer, or the number
             of added docs is large enough (in the case we are
             flushing by doc count instead of RAM usage), we create a
             real segment and flush it to the Directory.
             
             Threads:
             
             Multiple threads are allowed into addDocument at once.
             There is an initial synchronized call to getThreadState
             which allocates a ThreadState for this thread.  The same
             thread will get the same ThreadState over time (thread
             affinity) so that if there are consistent patterns (for
             example each thread is indexing a different content
             source) then we make better use of RAM.  Then
             processDocument is called on that ThreadState without
             synchronization (most of the "heavy lifting" is in this
             call).  Finally the synchronized "finishDocument" is
             called to flush changes to the directory.
             
             When flush is called by IndexWriter, or, we flush
             internally when autoCommit=false, we forcefully idle all
             threads and flush only once they are all idle.  This
             means you can call flush with a given thread even while
             other threads are actively adding/deleting documents.
             
             
             Exceptions:
             
             Because this class directly updates in-memory posting
             lists, and flushes stored fields and term vectors
             directly to files in the directory, there are certain
             limited times when an exception can corrupt this state.
             For example, a disk full while flushing stored fields
             leaves this file in a corrupt state.  Or, an OOM
             exception while appending to the in-memory posting lists
             can corrupt that posting list.  We call such exceptions
             "aborting exceptions".  In these cases we must call
             abort() to discard all docs added since the last flush.
             
             All other exceptions ("non-aborting exceptions") can
             still partially update the index structures.  These
             updates are consistent, but, they represent only a part
             of the document seen up until the exception was hit.
             When this happens, we immediately mark the document as
             deleted so that the document is always atomically ("all
             or none") added to the index.
             </summary>
        </member>
        <member name="M:Lucene.Net.Index.DocumentsWriter.NewPerDocBuffer">
            <summary>Create and return a new DocWriterBuffer.</summary>
        </member>
        <member name="M:Lucene.Net.Index.DocumentsWriter.HasProx">
            <summary>Returns true if any of the fields in the current
            buffered docs have omitTermFreqAndPositions==false 
            </summary>
        </member>
        <member name="M:Lucene.Net.Index.DocumentsWriter.SetInfoStream(System.IO.StreamWriter)">
            <summary>If non-null, various details of indexing are printed
            here. 
            </summary>
        </member>
        <member name="M:Lucene.Net.Index.DocumentsWriter.SetRAMBufferSizeMB(System.Double)">
            <summary>Set how much RAM we can use before flushing. </summary>
        </member>
        <member name="M:Lucene.Net.Index.DocumentsWriter.SetMaxBufferedDocs(System.Int32)">
            <summary>Set max buffered docs, which means we will flush by
            doc count instead of by RAM usage. 
            </summary>
        </member>
        <member name="M:Lucene.Net.Index.DocumentsWriter.GetSegment">
            <summary>Get current segment name we are writing. </summary>
        </member>
        <member name="M:Lucene.Net.Index.DocumentsWriter.GetNumDocsInRAM">
            <summary>Returns how many docs are currently buffered in RAM. </summary>
        </member>
        <member name="M:Lucene.Net.Index.DocumentsWriter.GetDocStoreSegment">
            <summary>Returns the current doc store segment we are writing
            to.  This will be the same as segment when autoCommit
            is true. 
            </summary>
        </member>
        <member name="M:Lucene.Net.Index.DocumentsWriter.GetDocStoreOffset">
            <summary>Returns the doc offset into the shared doc store for
            the current buffered docs. 
            </summary>
        </member>
        <member name="M:Lucene.Net.Index.DocumentsWriter.CloseDocStore">
            <summary>Closes the currently open doc stores and returns the doc
            store segment name.  This returns null if there are
            no buffered documents. 
            </summary>
        </member>
        <member name="M:Lucene.Net.Index.DocumentsWriter.Abort">
            <summary>Called if we hit an exception at a bad time (when
            updating the index files) and must discard all
            currently buffered docs.  This resets our state,
            discarding any docs added since last flush. 
            </summary>
        </member>
        <member name="M:Lucene.Net.Index.DocumentsWriter.DoAfterFlush">
            <summary>Reset after a flush </summary>
        </member>
        <member name="M:Lucene.Net.Index.DocumentsWriter.Flush(System.Boolean)">
            <summary>Flush all pending docs to a new segment </summary>
        </member>
        <member name="M:Lucene.Net.Index.DocumentsWriter.CreateCompoundFile(System.String)">
            <summary>Build compound file for the segment we just flushed </summary>
        </member>
        <member name="M:Lucene.Net.Index.DocumentsWriter.SetFlushPending">
            <summary>Set flushPending if it is not already set and returns
            whether it was set. This is used by IndexWriter to
            trigger a single flush even when multiple threads are
            trying to do so. 
            </summary>
        </member>
        <member name="M:Lucene.Net.Index.DocumentsWriter.GetThreadState(Lucene.Net.Documents.Document,Lucene.Net.Index.Term)">
            <summary>Returns a free (idle) ThreadState that may be used for
            indexing this one document.  This call also pauses if a
            flush is pending.  If delTerm is non-null then we
            buffer this deleted term after the thread state has
            been acquired. 
            </summary>
        </member>
        <member name="M:Lucene.Net.Index.DocumentsWriter.AddDocument(Lucene.Net.Documents.Document,Lucene.Net.Analysis.Analyzer)">
            <summary>Returns true if the caller (IndexWriter) should now
            flush. 
            </summary>
        </member>
        <member name="M:Lucene.Net.Index.DocumentsWriter.RemapDeletes(Lucene.Net.Index.SegmentInfos,System.Int32[][],System.Int32[],Lucene.Net.Index.MergePolicy.OneMerge,System.Int32)">
            <summary>Called whenever a merge has completed and the merged segments had deletions </summary>
        </member>
        <member name="M:Lucene.Net.Index.DocumentsWriter.FinishDocument(Lucene.Net.Index.DocumentsWriterThreadState,Lucene.Net.Index.DocumentsWriter.DocWriter)">
            <summary>Does the synchronized work to finish/flush the
            inverted document. 
            </summary>
        </member>
        <member name="T:Lucene.Net.Index.DocumentsWriter.IndexingChain">
            <summary> The IndexingChain must define the <see cref="M:Lucene.Net.Index.DocumentsWriter.IndexingChain.GetChain(Lucene.Net.Index.DocumentsWriter)"/> method
            which returns the DocConsumer that the DocumentsWriter calls to process the
            documents. 
            </summary>
        </member>
        <member name="T:Lucene.Net.Index.DocumentsWriter.DocWriter">
            <summary>Consumer returns this on each doc.  This holds any
            state that must be flushed synchronized "in docID
            order".  We gather these and flush them in order. 
            </summary>
        </member>
        <member name="T:Lucene.Net.Index.DocumentsWriter.PerDocBuffer">
            <summary>RAMFile buffer for DocWriters.</summary>
        </member>
        <member name="M:Lucene.Net.Store.RAMFile.NewBuffer(System.Int32)">
            <summary> Expert: allocate a new buffer. 
            Subclasses can allocate differently. 
            </summary>
            <param name="size">size of allocated buffer.
            </param>
            <returns> allocated buffer.
            </returns>
        </member>
        <member name="M:Lucene.Net.Index.DocumentsWriter.PerDocBuffer.NewBuffer(System.Int32)">
            <summary>Allocate bytes used from shared pool.</summary>
        </member>
        <member name="M:Lucene.Net.Index.DocumentsWriter.PerDocBuffer.Recycle">
            <summary>Recycle the bytes used.</summary>
        </member>
        <member name="T:Lucene.Net.Index.DocFieldProcessor">
            <summary> This is a DocConsumer that gathers all fields under the
            same name, and calls per-field consumers to process field
            by field.  This class doesn't do any "real" work
            of its own: it just forwards the fields to a
            DocFieldConsumer.
            </summary>
        </member>
        <member name="T:Lucene.Net.Index.DocFieldProcessorPerField">
            <summary> Holds all per thread, per field state.</summary>
        </member>
        <member name="T:Lucene.Net.Index.DocFieldProcessorPerThread">
            <summary> Gathers all Fieldables for a document under the same
            name, updates FieldInfos, and calls per-field consumers
            to process field by field.
            
            Currently, only a single thread visits the fields,
            sequentially, for processing.
            </summary>
        </member>
        <member name="M:Lucene.Net.Index.DocFieldProcessorPerThread.TrimFields(Lucene.Net.Index.SegmentWriteState)">
            <summary>If there are fields we've seen but did not see again
            in the last run, then free them up. 
            </summary>
        </member>
        <member name="T:Lucene.Net.Index.DocInverter">
            <summary>This is a DocFieldConsumer that inverts each field,
            separately, from a Document, and accepts a
            InvertedTermsConsumer to process those terms. 
            </summary>
        </member>
        <member name="T:Lucene.Net.Index.DocInverterPerField">
            <summary> Holds state for inverting all occurrences of a single
            field in the document.  This class doesn't do anything
            itself; instead, it forwards the tokens produced by
            analysis to its own consumer
            (InvertedDocConsumerPerField).  It also interacts with an
            endConsumer (InvertedDocEndConsumerPerField).
            </summary>
        </member>
        <member name="T:Lucene.Net.Index.DocInverterPerThread">
            <summary>This is a DocFieldConsumer that inverts each field,
            separately, from a Document, and accepts a
            InvertedTermsConsumer to process those terms. 
            </summary>
        </member>
        <member name="T:Lucene.Net.Index.DocumentsWriterThreadState">
            <summary>Used by DocumentsWriter to maintain per-thread state.
            We keep a separate Posting hash and other state for each
            thread and then merge postings hashes from all threads
            when writing the segment. 
            </summary>
        </member>
        <member name="T:Lucene.Net.Index.FieldInfos">
            <summary>Access to the Fieldable Info file that describes document fields and whether or
            not they are indexed. Each segment has a separate Fieldable Info file. Objects
            of this class are thread-safe for multiple readers, but only one thread can
            be adding documents at a time, with no other reader or writer threads
            accessing this object.
            </summary>
        </member>
        <member name="M:Lucene.Net.Index.FieldInfos.#ctor(Lucene.Net.Store.Directory,System.String)">
            <summary> Construct a FieldInfos object using the directory and the name of the file
            to open the IndexInput from
            </summary>
            <param name="d">The directory to open the IndexInput from
            </param>
            <param name="name">The name of the file to open the IndexInput from in the Directory
            </param>
            <throws>  IOException </throws>
        </member>
        <member name="M:Lucene.Net.Index.FieldInfos.Clone">
            <summary> Returns a deep clone of this FieldInfos instance.</summary>
        </member>
        <member name="M:Lucene.Net.Index.FieldInfos.Add(Lucene.Net.Documents.Document)">
            <summary>Adds field info for a Document. </summary>
        </member>
        <member name="M:Lucene.Net.Index.FieldInfos.HasProx">
            <summary>Returns true if any fields do not omitTermFreqAndPositions </summary>
        </member>
        <member name="M:Lucene.Net.Index.FieldInfos.AddIndexed(System.Collections.ICollection,System.Boolean,System.Boolean,System.Boolean)">
            <summary> Add fields that are indexed. Whether they have termvectors has to be specified.
            
            </summary>
            <param name="names">The names of the fields
            </param>
            <param name="storeTermVectors">Whether the fields store term vectors or not
            </param>
            <param name="storePositionWithTermVector">true if positions should be stored.
            </param>
            <param name="storeOffsetWithTermVector">true if offsets should be stored
            </param>
        </member>
        <member name="M:Lucene.Net.Index.FieldInfos.Add(System.Collections.Generic.ICollection{System.String},System.Boolean)">
            <summary> Assumes the fields are not storing term vectors.
            
            </summary>
            <param name="names">The names of the fields
            </param>
            <param name="isIndexed">Whether the fields are indexed or not
            
            </param>
            <seealso cref="M:Lucene.Net.Index.FieldInfos.Add(System.String,System.Boolean)">
            </seealso>
        </member>
        <member name="M:Lucene.Net.Index.FieldInfos.Add(System.String,System.Boolean)">
            <summary> Calls the five-parameter Add with false for all TermVector parameters.
            
            </summary>
            <param name="name">The name of the Fieldable
            </param>
            <param name="isIndexed">true if the field is indexed
            </param>
            <seealso cref="M:Lucene.Net.Index.FieldInfos.Add(System.String,System.Boolean,System.Boolean,System.Boolean,System.Boolean)">
            </seealso>
        </member>
        <member name="M:Lucene.Net.Index.FieldInfos.Add(System.String,System.Boolean,System.Boolean)">
            <summary> Calls the five-parameter Add with false for term vector positions and offsets.
            
            </summary>
            <param name="name">The name of the field
            </param>
            <param name="isIndexed"> true if the field is indexed
            </param>
            <param name="storeTermVector">true if the term vector should be stored
            </param>
        </member>
        <member name="M:Lucene.Net.Index.FieldInfos.Add(System.String,System.Boolean,System.Boolean,System.Boolean,System.Boolean)">
            <summary>If the field is not yet known, adds it. If it is known, checks to make
            sure that the isIndexed flag matches what was previously given for this
            field. If not, marks the field as indexed.  The same applies to the TermVector
            parameters.
            
            </summary>
            <param name="name">The name of the field
            </param>
            <param name="isIndexed">true if the field is indexed
            </param>
            <param name="storeTermVector">true if the term vector should be stored
            </param>
            <param name="storePositionWithTermVector">true if the term vector with positions should be stored
            </param>
            <param name="storeOffsetWithTermVector">true if the term vector with offsets should be stored
            </param>
        </member>
        <member name="M:Lucene.Net.Index.FieldInfos.Add(System.String,System.Boolean,System.Boolean,System.Boolean,System.Boolean,System.Boolean)">
            <summary>If the field is not yet known, adds it. If it is known, checks to make
            sure that the isIndexed flag matches what was previously given for this
            field. If not, marks the field as indexed.  The same applies to the TermVector
            parameters.
            
            </summary>
            <param name="name">The name of the field
            </param>
            <param name="isIndexed">true if the field is indexed
            </param>
            <param name="storeTermVector">true if the term vector should be stored
            </param>
            <param name="storePositionWithTermVector">true if the term vector with positions should be stored
            </param>
            <param name="storeOffsetWithTermVector">true if the term vector with offsets should be stored
            </param>
            <param name="omitNorms">true if the norms for the indexed field should be omitted
            </param>
        </member>
        <member name="M:Lucene.Net.Index.FieldInfos.Add(System.String,System.Boolean,System.Boolean,System.Boolean,System.Boolean,System.Boolean,System.Boolean,System.Boolean)">
            <summary>If the field is not yet known, adds it. If it is known, checks to make
            sure that the isIndexed flag matches what was previously given for this
            field. If not, marks the field as indexed.  The same applies to the TermVector
            parameters.
            
            </summary>
            <param name="name">The name of the field
            </param>
            <param name="isIndexed">true if the field is indexed
            </param>
            <param name="storeTermVector">true if the term vector should be stored
            </param>
            <param name="storePositionWithTermVector">true if the term vector with positions should be stored
            </param>
            <param name="storeOffsetWithTermVector">true if the term vector with offsets should be stored
            </param>
            <param name="omitNorms">true if the norms for the indexed field should be omitted
            </param>
            <param name="storePayloads">true if payloads should be stored for this field
            </param>
            <param name="omitTermFreqAndPositions">true if term freqs should be omitted for this field
            </param>
        </member>
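The cascading Add overloads above can be illustrated with a short sketch. This is illustrative only: it assumes a FieldInfos instance is accessible to your code (in some Lucene.Net builds this type is used internally by indexing, so application code rarely calls it directly), and the field name "title" is invented for the example:

```csharp
// Illustrative only: how a shorthand overload roughly expands into the full Add call.
// Assumes `infos` is an accessible Lucene.Net.Index.FieldInfos instance.
FieldInfos infos = new FieldInfos();

// Shorthand: indexed field, no term vectors.
infos.Add("title", true);

// Roughly equivalent full form, with every flag spelled out:
infos.Add("title",
          true,    // isIndexed
          false,   // storeTermVector
          false,   // storePositionWithTermVector
          false,   // storeOffsetWithTermVector
          false,   // omitNorms
          false,   // storePayloads
          false);  // omitTermFreqAndPositions
```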
        <member name="M:Lucene.Net.Index.FieldInfos.FieldName(System.Int32)">
            <summary> Return the fieldName identified by its number.
            
            </summary>
            <param name="fieldNumber">The number identifying the field
            </param>
            <returns> the fieldName or an empty string when the field
            with the given number doesn't exist.
            </returns>
        </member>
        <member name="M:Lucene.Net.Index.FieldInfos.FieldInfo(System.Int32)">
            <summary> Return the fieldinfo object referenced by the fieldNumber.</summary>
            <param name="fieldNumber">The number identifying the field
            </param>
            <returns> the FieldInfo object or null when the given fieldNumber
            doesn't exist.
            </returns>
        </member>
        <member name="T:Lucene.Net.Index.FieldInvertState">
            <summary> This class tracks the number and position / offset parameters of terms
            being added to the index. The information collected in this class is
            also used to calculate the normalization factor for a field.
            
            <p/><b>WARNING</b>: This API is new and experimental, and may suddenly
            change.<p/>
            </summary>
        </member>
        <member name="M:Lucene.Net.Index.FieldInvertState.Reset(System.Single)">
            <summary> Re-initialize the state, using this boost value.</summary>
            <param name="docBoost">boost value to use.
            </param>
        </member>
        <member name="M:Lucene.Net.Index.FieldInvertState.GetPosition">
            <summary> Get the last processed term position.</summary>
            <returns> the position
            </returns>
        </member>
        <member name="M:Lucene.Net.Index.FieldInvertState.GetLength">
            <summary> Get total number of terms in this field.</summary>
            <returns> the length
            </returns>
        </member>
        <member name="M:Lucene.Net.Index.FieldInvertState.GetNumOverlap">
            <summary> Get the number of terms with <c>positionIncrement == 0</c>.</summary>
            <returns> the numOverlap
            </returns>
        </member>
        <member name="M:Lucene.Net.Index.FieldInvertState.GetOffset">
            <summary> Get end offset of the last processed term.</summary>
            <returns> the offset
            </returns>
        </member>
        <member name="M:Lucene.Net.Index.FieldInvertState.GetBoost">
            <summary> Get boost value. This is the cumulative product of
            document boost and field boost for all field instances
            sharing the same field name.
            </summary>
            <returns> the boost
            </returns>
        </member>
        <member name="T:Lucene.Net.Index.FieldReaderException">
            <summary> Exception thrown by <see cref="T:Lucene.Net.Index.FieldsReader"/> when a stored
            field cannot be read.
            </summary>
        </member>
        <member name="M:Lucene.Net.Index.FieldReaderException.#ctor">
            <summary> Constructs a new runtime exception with <c>null</c> as its
            detail message.  The cause is not initialized, and may subsequently be
            initialized via the <see cref="P:System.Exception.InnerException"/> property.
            </summary>
        </member>
        <member name="M:Lucene.Net.Index.FieldReaderException.#ctor(System.Exception)">
            <summary> Constructs a new runtime exception with the specified cause and a
            detail message of <c>(cause == null ? null : cause.ToString())</c>
            (which typically contains the class and detail message of
            <c>cause</c>).  
            <p/>
            This constructor is useful for runtime exceptions
            that are little more than wrappers for other throwables.
            
            </summary>
            <param name="cause">the cause (which is saved for later retrieval via the
            <see cref="P:System.Exception.InnerException"/> property).  (A <c>null</c> value is
            permitted, and indicates that the cause is nonexistent or
            unknown.)
            </param>
            <since> 1.4
            </since>
        </member>
        <member name="M:Lucene.Net.Index.FieldReaderException.#ctor(System.String)">
            <summary> Constructs a new runtime exception with the specified detail message.
            The cause is not initialized, and may subsequently be initialized via the
            <see cref="P:System.Exception.InnerException"/> property.
            
            </summary>
            <param name="message">the detail message. The detail message is saved for
            later retrieval via the <see cref="P:System.Exception.Message"/> property.
            </param>
        </member>
        <member name="M:Lucene.Net.Index.FieldReaderException.#ctor(System.String,System.Exception)">
            <summary> Constructs a new runtime exception with the specified detail message and
            cause.  <p/>Note that the detail message associated with
            <c>cause</c> is <i>not</i> automatically incorporated in
            this runtime exception's detail message.
            
            </summary>
            <param name="message">the detail message (which is saved for later retrieval
            via the <see cref="P:System.Exception.Message"/> property).
            </param>
            <param name="cause">  the cause (which is saved for later retrieval via the
            <see cref="P:System.Exception.InnerException"/> property).  (A <c>null</c> value is
            permitted, and indicates that the cause is nonexistent or
            unknown.)
            </param>
            <since> 1.4
            </since>
        </member>
        <member name="T:Lucene.Net.Index.FieldSortedTermVectorMapper">
            <summary> For each Field, store a sorted collection of <see cref="T:Lucene.Net.Index.TermVectorEntry"/>s
            <p/>
            This is not thread-safe.
            </summary>
        </member>
        <member name="T:Lucene.Net.Index.TermVectorMapper">
            <summary> The TermVectorMapper can be used to map Term Vectors into your own
            structure instead of the parallel array structure used by
            <see cref="M:Lucene.Net.Index.IndexReader.GetTermFreqVector(System.Int32,System.String)"/>.
            <p/>
            It is up to the implementation to make sure it is thread-safe.
            
            
            
            </summary>
        </member>
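To make the mapping contract above concrete, here is a minimal hedged sketch of a custom mapper that collects term frequencies into a dictionary. The class name FrequencyMapper and its collecting dictionary are invented for illustration; only the SetExpectations and Map overrides come from the documented API:

```csharp
// Illustrative subclass of TermVectorMapper; not part of Lucene.Net itself.
using System.Collections.Generic;
using Lucene.Net.Index;

public class FrequencyMapper : TermVectorMapper
{
    // Invented holder for the mapped data.
    public readonly Dictionary<string, int> Frequencies =
        new Dictionary<string, int>();

    public override void SetExpectations(string field, int numTerms,
        bool storeOffsets, bool storePositions)
    {
        // Called once per field before Map; nothing to prepare in this sketch.
    }

    public override void Map(string term, int frequency,
        TermVectorOffsetInfo[] offsets, int[] positions)
    {
        // Record only the frequency; offsets/positions are ignored here.
        Frequencies[term] = frequency;
    }
}
```

An instance could then be passed to the mapper-accepting overload of IndexReader.GetTermFreqVector in place of receiving the default parallel-array structure.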
        <member name="M:Lucene.Net.Index.TermVectorMapper.#ctor(System.Boolean,System.Boolean)">
            <summary> Creates a new mapper with the given position- and offset-handling flags.</summary>
            <param name="ignoringPositions">true if this mapper should tell Lucene to ignore positions even if they are stored
            </param>
            <param name="ignoringOffsets">true if this mapper should tell Lucene to ignore offsets even if they are stored
            </param>
        </member>
        <member name="M:Lucene.Net.Index.TermVectorMapper.SetExpectations(System.String,System.Int32,System.Boolean,System.Boolean)">
            <summary> Tells the mapper what to expect with regard to the field, the number of terms, and offset and position storage.
            This method will be called once before retrieving the vector for a field.
            
            This method will be called before <see cref="M:Lucene.Net.Index.TermVectorMapper.Map(System.String,System.Int32,Lucene.Net.Index.TermVectorOffsetInfo[],System.Int32[])"/>.
            </summary>
            <param name="field">The field the vector is for
            </param>
            <param name="numTerms">The number of terms that need to be mapped
            </param>
            <param name="storeOffsets">true if the mapper should expect offset information
            </param>
            <param name="storePositions">true if the mapper should expect positions info
            </param>
        </member>
        <member name="M:Lucene.Net.Index.TermVectorMapper.Map(System.String,System.Int32,Lucene.Net.Index.TermVectorOffsetInfo[],System.Int32[])">
            <summary> Map the Term Vector information into your own structure</summary>
            <param name="term">The term to add to the vector
            </param>
            <param name="frequency">The frequency of the term in the document
            </param>
            <param name="offsets">null if offsets are not stored; otherwise the offsets of the term within the field
            </param>
            <param name="positions">null if positions are not stored; otherwise the positions of the term within the field
            </param>
        </member>
        <member name="M:Lucene.Net.Index.TermVectorMapper.IsIgnoringPositions">
            <summary> Indicates to Lucene that even if positions are stored, this mapper is not interested in them and they
            can be skipped over.  Derived classes should override this to return true if they want to ignore positions.  The default
            is false, meaning positions will be loaded if they are stored.
            </summary>
            <returns> false
            </returns>
        </member>
        <member name="M:Lucene.Net.Index.TermVectorMapper.IsIgnoringOffsets">
            <summary> </summary>
            <seealso cref="M:Lucene.Net.Index.TermVectorMapper.IsIgnoringPositions"> Same principle as <see cref="M:Lucene.Net.Index.TermVectorMapper.IsIgnoringPositions"/>, but applied to offsets.  false by default.
            </seealso>
            <returns> false
            </returns>
        </member>
        <member name="M:Lucene.Net.Index.TermVectorMapper.SetDocumentNumber(System.Int32)">
            <summary> Passes down the index of the document whose term vector is currently being mapped,
            once for each top level call to a term vector reader.
            <p/>
            Default implementation IGNORES the document number.  Override if your implementation needs the document number.
            <p/> 
            NOTE: Document numbers are internal to Lucene and subject to change depending on indexing operations.
            
            </summary>
            <param name="documentNumber">index of document currently being mapped
            </param>
        </member>
        <member name="M:Lucene.Net.Index.FieldSortedTermVectorMapper.#ctor(System.Collections.Generic.IComparer{System.Object})">
            <summary> </summary>
            <param name="comparator">A comparer used to sort <see cref="T:Lucene.Net.Index.TermVectorEntry"/>s
            </param>
        </member>
        <member name="M:Lucene.Net.Index.FieldSortedTermVectorMapper.GetFieldToTerms">
            <summary> Get the mapping between fields and terms, sorted by the comparator
            
            </summary>
            <returns> A map from each field name to a sorted collection of <see cref="T:Lucene.Net.Index.TermVectorEntry"/>s for that field, ordered by the comparer
            </returns>
        </member>
        <member name="T:Lucene.Net.Index.FieldsReader">
            <summary> Class responsible for access to stored document fields.
            <p/>
            It uses &lt;segment&gt;.fdt and &lt;segment&gt;.fdx files.
            
            </summary>
            <version>  $Id: FieldsReader.java 801344 2009-08-05 18:05:06Z yonik $
            </version>
        </member>
        <member name="M:Lucene.Net.Index.FieldsReader.Clone">
            <summary>Returns a cloned FieldsReader that shares open
            IndexInputs with the original one.  It is the caller's
            job not to close the original FieldsReader until all
            clones are closed (e.g., currently SegmentReader manages
            this logic). 
            </summary>
        </member>
        <member name="M:Lucene.Net.Index.FieldsReader.EnsureOpen">
            <throws>  AlreadyClosedException if this FieldsReader is closed </throws>
        </member>
        <member name="M:Lucene.Net.Index.FieldsReader.Close">
            <summary> Closes the underlying <see cref="T:Lucene.Net.Store.IndexInput"/> streams, including any ones associated with a
            lazy implementation of a Field.  This means that the Fields values will not be accessible.
            
            </summary>
            <throws>  IOException </throws>
        </member>
        <member name="M:Lucene.Net.Index.FieldsReader.RawDocs(System.Int32[],System.Int32,System.Int32)">
            <summary>Returns the length in bytes of each raw document in a
            contiguous range of numDocs documents starting with
            startDocID.  Also returns the IndexInput (the fieldStream)
            already positioned at the starting point for startDocID.
            </summary>
        </member>
        <member name="M:Lucene.Net.Index.FieldsReader.SkipField(System.Boolean,System.Boolean)">
            <summary> Skip the field.  We still have to read some of the information about the field, but can skip past the actual content.
            This will have the most payoff on large fields.
            </summary>
        </member>
        <member name="T:Lucene.Net.Index.FieldsReader.LazyField">
            <summary> A lazy implementation of Fieldable that defers loading of a field's value until it is requested, instead of
            loading it when the Document is loaded.
            </summary>
        </member>
        <member name="M:Lucene.Net.Index.FieldsReader.LazyField.BinaryValue">
            <summary>The value of the field in Binary, or null.  If null, the Reader value,
            String value, or TokenStream value is used. Exactly one of stringValue(), 
            readerValue(), binaryValue(), and tokenStreamValue() must be set. 
            </summary>
        </member>
        <member name="M:Lucene.Net.Index.FieldsReader.LazyField.ReaderValue">
            <summary>The value of the field as a Reader, or null.  If null, the String value,
            binary value, or TokenStream value is used.  Exactly one of stringValue(), 
            readerValue(), binaryValue(), and tokenStreamValue() must be set. 
            </summary>
        </member>
        <member name="M:Lucene.Net.Index.FieldsReader.LazyField.TokenStreamValue">
            <summary>The value of the field as a TokenStream, or null.  If null, the Reader value,
            String value, or binary value is used. Exactly one of stringValue(), 
            readerValue(), binaryValue(), and tokenStreamValue() must be set. 
            </summary>
        </member>
        <member name="M:Lucene.Net.Index.FieldsReader.LazyField.StringValue">
            <summary>The value of the field as a String, or null.  If null, the Reader value,
            binary value, or TokenStream value is used.  Exactly one of stringValue(), 
            readerValue(), binaryValue(), and tokenStreamValue() must be set. 
            </summary>
        </member>
        <member name="M:Lucene.Net.Index.FieldsWriter.AddRawDocuments(Lucene.Net.Store.IndexInput,System.Int32[],System.Int32)">
            <summary>Bulk write a contiguous series of documents.  The
            lengths array is the length (in bytes) of each raw
            document.  The stream IndexInput is the
            fieldsStream from which we should bulk-copy all
            bytes. 
            </summary>
        </member>
        <member name="T:Lucene.Net.Index.FormatPostingsDocsConsumer">
            <summary> NOTE: this API is experimental and will likely change</summary>
        </member>
        <member name="M:Lucene.Net.Index.FormatPostingsDocsConsumer.AddDoc(System.Int32,System.Int32)">
            <summary>Adds a new doc for this term.  If this returns null
            then we just skip consuming positions/payloads. 
            </summary>
        </member>
        <member name="M:Lucene.Net.Index.FormatPostingsDocsConsumer.Finish">
            <summary>Called when we are done adding docs to this term </summary>
        </member>
        <member name="T:Lucene.Net.Index.FormatPostingsDocsWriter">
            <summary>Consumes doc and freq, writing them using the current
            index file format 
            </summary>
        </member>
        <member name="M:Lucene.Net.Index.FormatPostingsDocsWriter.AddDoc(System.Int32,System.Int32)">
            <summary>Adds a new doc for this term.  If this returns null
            then we just skip consuming positions/payloads. 
            </summary>
        </member>
        <member name="M:Lucene.Net.Index.FormatPostingsDocsWriter.Finish">
            <summary>Called when we are done adding docs to this term </summary>
        </member>
        <member name="T:Lucene.Net.Index.FormatPostingsFieldsConsumer">
            <summary>Abstract API that consumes terms, doc, freq, prox and
            payloads postings.  Concrete implementations of this
            actually do "something" with the postings (write it into
            the index in a specific format).
            
            NOTE: this API is experimental and will likely change
            </summary>
        </member>
        <member name="M:Lucene.Net.Index.FormatPostingsFieldsConsumer.AddField(Lucene.Net.Index.FieldInfo)">
            <summary>Add a new field </summary>
        </member>
        <member name="M:Lucene.Net.Index.FormatPostingsFieldsConsumer.Finish">
            <summary>Called when we are done adding everything. </summary>
        </member>
        <member name="M:Lucene.Net.Index.FormatPostingsFieldsWriter.AddField(Lucene.Net.Index.FieldInfo)">
            <summary>Add a new field </summary>
        </member>
        <member name="M:Lucene.Net.Index.FormatPostingsFieldsWriter.Finish">
            <summary>Called when we are done adding everything. </summary>
        </member>
        <member name="M:Lucene.Net.Index.FormatPostingsPositionsConsumer.AddPosition(System.Int32,System.Byte[],System.Int32,System.Int32)">
            <summary>Add a new position &amp; payload.  If payloadLength > 0
            you must read those bytes from the IndexInput. 
            </summary>
        </member>
        <member name="M:Lucene.Net.Index.FormatPostingsPositionsConsumer.Finish">
            <summary>Called when we are done adding positions &amp; payloads </summary>
        </member>
        <member name="M:Lucene.Net.Index.FormatPostingsPositionsWriter.AddPosition(System.Int32,System.Byte[],System.Int32,System.Int32)">
            <summary>Add a new position &amp; payload </summary>
        </member>
        <member name="M:Lucene.Net.Index.FormatPostingsPositionsWriter.Finish">
            <summary>Called when we are done adding positions &amp; payloads </summary>
        </member>
        <member name="T:Lucene.Net.Index.FormatPostingsTermsConsumer">
            <summary> NOTE: this API is experimental and will likely change</summary>
        </member>
        <member name="M:Lucene.Net.Index.FormatPostingsTermsConsumer.AddTerm(System.Char[],System.Int32)">
            <summary>Adds a new term in this field; term ends with U+FFFF
            char 
            </summary>
        </member>
        <member name="M:Lucene.Net.Index.FormatPostingsTermsConsumer.Finish">
            <summary>Called when we are done adding terms to this field </summary>
        </member>
        <member name="M:Lucene.Net.Index.FormatPostingsTermsWriter.AddTerm(System.Char[],System.Int32)">
            <summary>Adds a new term in this field </summary>
        </member>
        <member name="M:Lucene.Net.Index.FormatPostingsTermsWriter.Finish">
            <summary>Called when we are done adding terms to this field </summary>
        </member>
        <member name="T:Lucene.Net.Index.FreqProxFieldMergeState">
            <summary>Used by DocumentsWriter to merge the postings from
            multiple ThreadStates when creating a segment 
            </summary>
        </member>
        <member name="T:Lucene.Net.Index.RawPostingList">
            <summary>This is the base class for an in-memory posting list,
            keyed by a Token.  <see cref="T:Lucene.Net.Index.TermsHash"/> maintains a hash
            table holding one instance of this per unique Token.
            Consumers of TermsHash (<see cref="T:Lucene.Net.Index.TermsHashConsumer"/>) must
            subclass this class with its own concrete class.
            FreqProxTermsWriter.PostingList is a private inner class used 
            for the freq/prox postings, and 
            TermVectorsTermsWriter.PostingList is a private inner class
            used to hold TermVectors postings. 
            </summary>
        </member>
        <member name="T:Lucene.Net.Index.TermsHashConsumerPerField">
            <summary>Implement this class to plug into the TermsHash
            processor, which inverts and stores Tokens into a hash
            table and provides an API for writing bytes into
            multiple streams for each unique Token. 
            </summary>
        </member>
        <member name="T:Lucene.Net.Index.IndexDeletionPolicy">
            <summary> <p/>Expert: policy for deletion of stale <see cref="T:Lucene.Net.Index.IndexCommit">index commits</see>. 
            
            <p/>Implement this interface, and pass it to one
            of the <see cref="T:Lucene.Net.Index.IndexWriter"/> or <see cref="T:Lucene.Net.Index.IndexReader"/>
            constructors, to customize when older
            <see cref="T:Lucene.Net.Index.IndexCommit">point-in-time commits</see>
            are deleted from the index directory.  The default deletion policy
            is <see cref="T:Lucene.Net.Index.KeepOnlyLastCommitDeletionPolicy"/>, which always
            removes old commits as soon as a new commit is done (this
            matches the behavior before 2.2).<p/>
            
            <p/>One expected use case for this (and the reason why it
            was first created) is to work around problems with an
            index directory accessed via filesystems like NFS because
            NFS does not provide the "delete on last close" semantics
            that Lucene's "point in time" search normally relies on.
            By implementing a custom deletion policy, such as "a
            commit is only removed once it has been stale for more
            than X minutes", you can give your readers time to
            refresh to the new commit before <see cref="T:Lucene.Net.Index.IndexWriter"/>
            removes the old commits.  Note that doing so will
            increase the storage requirements of the index.  See <a target="top" href="http://issues.apache.org/jira/browse/LUCENE-710">LUCENE-710</a>
            for details.<p/>
            </summary>
        </member>
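The "stale for more than X minutes" policy described above can be sketched as follows. This is an assumption-laden illustration rather than a tested implementation: the name ExpirationTimeDeletionPolicy is borrowed from the equivalent Lucene contrib example, and the age calculation is left as a placeholder because the exact IndexCommit members available vary between Lucene.Net versions:

```csharp
// Hedged sketch of a deletion policy that keeps every commit until it has
// been stale for more than a configured number of minutes.
public class ExpirationTimeDeletionPolicy : Lucene.Net.Index.IndexDeletionPolicy
{
    private readonly double staleMinutes;

    public ExpirationTimeDeletionPolicy(double staleMinutes)
    {
        this.staleMinutes = staleMinutes;
    }

    public void OnInit(System.Collections.IList commits)
    {
        // Apply the same pruning logic at writer startup.
        OnCommit(commits);
    }

    public void OnCommit(System.Collections.IList commits)
    {
        // Never delete the most recent commit (the "front index state").
        for (int i = 0; i < commits.Count - 1; i++)
        {
            var commit = (Lucene.Net.Index.IndexCommit)commits[i];
            if (AgeInMinutes(commit) > staleMinutes)
                commit.Delete();   // marks this commit point for removal
        }
    }

    private double AgeInMinutes(Lucene.Net.Index.IndexCommit commit)
    {
        // Placeholder: derive the commit's age from, e.g., the modification
        // time of its segments_N file in the Directory.
        throw new System.NotImplementedException();
    }
}
```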
        <member name="M:Lucene.Net.Index.IndexDeletionPolicy.OnInit(System.Collections.IList)">
            <summary> <p/>This is called once when a writer is first
            instantiated to give the policy a chance to remove old
            commit points.<p/>
            
            <p/>The writer locates all index commits present in the 
            index directory and calls this method.  The policy may 
            choose to delete some of the commit points, doing so by
            calling method <see cref="M:Lucene.Net.Index.IndexCommit.Delete"/> 
            of <see cref="T:Lucene.Net.Index.IndexCommit"/>.<p/>
            
            <p/><u>Note:</u> the last CommitPoint is the most recent one,
            i.e. the "front index state". Be careful not to delete it
            unless you are sure of what you are doing and can afford
            to lose the index content while doing so. 
            
            </summary>
            <param name="commits">List of current 
            <see cref="T:Lucene.Net.Index.IndexCommit">point-in-time commits</see>,
            sorted by age (the 0th one is the oldest commit).
            </param>
        </member>
        <member name="M:Lucene.Net.Index.IndexDeletionPolicy.OnCommit(System.Collections.IList)">
            <summary> <p/>This is called each time the writer completes a commit.
            This gives the policy a chance to remove old commit points
            with each commit.<p/>
            
            <p/>The policy may now choose to delete old commit points 
            by calling method <see cref="M:Lucene.Net.Index.IndexCommit.Delete"/> 
            of <see cref="T:Lucene.Net.Index.IndexCommit"/>.<p/>
            
            <p/>If the writer has <c>autoCommit = true</c> then
            this method will in general be called many times during
            one instance of <see cref="T:Lucene.Net.Index.IndexWriter"/>.  If
            <c>autoCommit = false</c> then this method is
            only called once, when <see cref="M:Lucene.Net.Index.IndexWriter.Close"/> is
            called, or not at all if <see cref="M:Lucene.Net.Index.IndexWriter.Abort"/>
            is called. 
            
            <p/><u>Note:</u> the last CommitPoint is the most recent one,
            i.e. the "front index state". Be careful not to delete it
            unless you are sure of what you are doing and can afford
            to lose the index content while doing so.
            
            </summary>
            <param name="commits">List of <see cref="T:Lucene.Net.Index.IndexCommit"/>,
            sorted by age (the 0th one is the oldest commit).
            </param>
        </member>
        <member name="T:Lucene.Net.Index.IndexFileDeleter">
             <summary>
             <para>This class keeps track of each SegmentInfos instance that
             is still "live", either because it corresponds to a
             segments_N file in the Directory (a "commit", i.e. a
             committed SegmentInfos) or because it's an in-memory
             SegmentInfos that a writer is actively updating but has
             not yet committed.  This class uses simple reference
             counting to map the live SegmentInfos instances to
             individual files in the Directory.</para>
            
             <para>When autoCommit=true, IndexWriter currently commits only
             on completion of a merge (though this may change with
             time: it is not a guarantee).  When autoCommit=false,
             IndexWriter only commits when it is closed.  Regardless
             of autoCommit, the user may call IndexWriter.commit() to
             force a blocking commit.</para>
             
             <para>The same directory file may be referenced by more than
             one IndexCommit, i.e. more than one SegmentInfos.
             Therefore we count how many commits reference each file.
             When all the commits referencing a certain file have been
             deleted, the refcount for that file becomes zero, and the
             file is deleted.</para>
            
             <para>A separate deletion policy interface
             (IndexDeletionPolicy) is consulted on creation (onInit)
             and once per commit (onCommit), to decide when a commit
             should be removed.</para>
             
             <para>It is the business of the IndexDeletionPolicy to choose
             when to delete commit points.  The actual mechanics of
             file deletion, retrying, etc, derived from the deletion
             of commit points is the business of the IndexFileDeleter.</para>
             
             <para>The current default deletion policy is
             <see cref="T:Lucene.Net.Index.KeepOnlyLastCommitDeletionPolicy"/>, which removes all
             prior commits when a new commit has completed.  This
             matches the behavior before 2.2.</para>
            
             <para>Note that you must hold the write.lock before
             instantiating this class.  It opens segments_N file(s)
             directly with no retry logic.</para>
             </summary>
        </member>
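The reference-counting scheme described above is independent of any Lucene API, so it can be sketched in a few self-contained lines (the type and member names here are invented for illustration; the real IndexFileDeleter also handles retries and deletion policies):

```csharp
// Self-contained sketch of the idea: each commit increfs the files it uses;
// deleting a commit decrefs them, and files reaching zero may be removed.
using System.Collections.Generic;

class RefCountingDeleter
{
    private readonly Dictionary<string, int> refCounts =
        new Dictionary<string, int>();

    // Called when a commit (or in-memory SegmentInfos) starts referencing files.
    public void IncRef(IEnumerable<string> files)
    {
        foreach (var f in files)
            refCounts[f] = refCounts.TryGetValue(f, out var n) ? n + 1 : 1;
    }

    // Called when a commit is deleted; returns files now safe to delete.
    public List<string> DecRef(IEnumerable<string> files)
    {
        var toDelete = new List<string>();
        foreach (var f in files)
        {
            if (--refCounts[f] == 0)   // last commit referencing f is gone
            {
                refCounts.Remove(f);
                toDelete.Add(f);       // physical deletion happens elsewhere
            }
        }
        return toDelete;
    }
}
```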
        <member name="F:Lucene.Net.Index.IndexFileDeleter.deletable">
            <summary>Files that we tried to delete but failed to delete (likely
            because they are open and we are running on Windows),
            so we will retry deleting them later.</summary>
        </member>
        <member name="F:Lucene.Net.Index.IndexFileDeleter.refCounts">
            <summary>Counts how many existing commits reference a file.
            Maps String to RefCount (class below) instances.</summary>
        </member>
        <member name="F:Lucene.Net.Index.IndexFileDeleter.commits">
            This will have just 1 commit if you are using the
            default delete policy (KeepOnlyLastCommitDeletionPolicy).
            Other policies may leave commit points live for longer,
            in which case this list would be longer than 1.
        </member>
        <member name="F:Lucene.Net.Index.IndexFileDeleter.lastFiles">
            Holds files we had incref'd from the previous
            non-commit checkpoint.
        </member>
        <member name="F:Lucene.Net.Index.IndexFileDeleter.VERBOSE_REF_COUNTS">
            <summary>Change to true to see details of reference counts when
            infoStream != null 
            </summary>
        </member>
        <member name="M:Lucene.Net.Index.IndexFileDeleter.#ctor(Lucene.Net.Store.Directory,Lucene.Net.Index.IndexDeletionPolicy,Lucene.Net.Index.SegmentInfos,System.IO.StreamWriter,Lucene.Net.Index.DocumentsWriter,System.Collections.Generic.Dictionary{System.String,System.String})">
            <summary> Initialize the deleter: find all previous commits in
            the Directory, incref the files they reference, call
            the policy to let it delete commits.  This will remove
            any files not referenced by any of the commits.
            </summary>
            <throws>  CorruptIndexException if the index is corrupt </throws>
            <throws>  IOException if there is a low-level IO error </throws>
        </member>
        <member name="M:Lucene.Net.Index.IndexFileDeleter.DeleteCommits">
            <summary> Remove the CommitPoints in the commitsToDelete List by
            DecRef'ing all files from each SegmentInfos.
            </summary>
        </member>
        <member name="M:Lucene.Net.Index.IndexFileDeleter.Refresh(System.String)">
            <summary> Writer calls this when it has hit an error and had to
            roll back, to tell us that there may now be
            unreferenced files in the filesystem.  So we re-list
            the filesystem and delete such files.  If segmentName
            is non-null, we will only delete files corresponding to
            that segment.
            </summary>
        </member>
        <member name="M:Lucene.Net.Index.IndexFileDeleter.Checkpoint(Lucene.Net.Index.SegmentInfos,System.Boolean)">
            <summary> For definition of "check point" see IndexWriter comments:
            "Clarification: Check Points (and commits)".
            
            Writer calls this when it has made a "consistent
            change" to the index, meaning new files are written to
            the index and the in-memory SegmentInfos have been
            modified to point to those files.
            
            This may or may not be a commit (segments_N may or may
            not have been written).
            
            We simply incref the files referenced by the new
            SegmentInfos and decref the files we had previously
            seen (if any).
            
            If this is a commit, we also call the policy to give it
            a chance to remove other commits.  If any commits are
            removed, we decref their files as well.
            </summary>
        </member>
        <member name="M:Lucene.Net.Index.IndexFileDeleter.DeleteNewFiles(System.Collections.Generic.ICollection{System.String})">
            <summary>Deletes the specified files, but only if they are new
            (have not yet been incref'd). 
            </summary>
        </member>
        <member name="T:Lucene.Net.Index.IndexFileDeleter.RefCount">
            <summary> Tracks the reference count for a single index file:</summary>
        </member>
        <member name="T:Lucene.Net.Index.IndexFileDeleter.CommitPoint">
            <summary> Holds details for each commit point.  This class is
            also passed to the deletion policy.  Note: this class
            has a natural ordering that is inconsistent with
            equals.
            </summary>
        </member>
        <member name="M:Lucene.Net.Index.IndexFileDeleter.CommitPoint.Delete">
            <summary> Called only by the deletion policy, to remove this
            commit point from the index.
            </summary>
        </member>
        <member name="T:Lucene.Net.Index.IndexFileNameFilter">
            <summary> Filename filter that accepts only filenames and extensions created by Lucene.
            
            </summary>
        </member>
        <member name="M:Lucene.Net.Index.IndexFileNameFilter.IsCFSFile(System.String)">
            <summary> Returns true if this is a file that would be contained
            in a CFS file.  This function should only be called on
            files that pass the above "accept" (ie, are already
            known to be a Lucene index file).
            </summary>
        </member>
        <member name="T:Lucene.Net.Index.IndexFileNames">
            <summary> Useful constants representing filenames and extensions used by Lucene
            
            </summary>
        </member>
        <member name="F:Lucene.Net.Index.IndexFileNames.SEGMENTS">
            <summary>Name of the index segment file </summary>
        </member>
        <member name="F:Lucene.Net.Index.IndexFileNames.SEGMENTS_GEN">
            <summary>Name of the index generation reference file </summary>
        </member>
        <member name="F:Lucene.Net.Index.IndexFileNames.DELETABLE">
            <summary>Name of the index deletable file (only used in
            pre-lockless indices) 
            </summary>
        </member>
        <member name="F:Lucene.Net.Index.IndexFileNames.NORMS_EXTENSION">
            <summary>Extension of norms file </summary>
        </member>
        <member name="F:Lucene.Net.Index.IndexFileNames.FREQ_EXTENSION">
            <summary>Extension of freq postings file </summary>
        </member>
        <member name="F:Lucene.Net.Index.IndexFileNames.PROX_EXTENSION">
            <summary>Extension of prox postings file </summary>
        </member>
        <member name="F:Lucene.Net.Index.IndexFileNames.TERMS_EXTENSION">
            <summary>Extension of terms file </summary>
        </member>
        <member name="F:Lucene.Net.Index.IndexFileNames.TERMS_INDEX_EXTENSION">
            <summary>Extension of terms index file </summary>
        </member>
        <member name="F:Lucene.Net.Index.IndexFileNames.FIELDS_INDEX_EXTENSION">
            <summary>Extension of stored fields index file </summary>
        </member>
        <member name="F:Lucene.Net.Index.IndexFileNames.FIELDS_EXTENSION">
            <summary>Extension of stored fields file </summary>
        </member>
        <member name="F:Lucene.Net.Index.IndexFileNames.VECTORS_FIELDS_EXTENSION">
            <summary>Extension of vectors fields file </summary>
        </member>
        <member name="F:Lucene.Net.Index.IndexFileNames.VECTORS_DOCUMENTS_EXTENSION">
            <summary>Extension of vectors documents file </summary>
        </member>
        <member name="F:Lucene.Net.Index.IndexFileNames.VECTORS_INDEX_EXTENSION">
            <summary>Extension of vectors index file </summary>
        </member>
        <member name="F:Lucene.Net.Index.IndexFileNames.COMPOUND_FILE_EXTENSION">
            <summary>Extension of compound file </summary>
        </member>
        <member name="F:Lucene.Net.Index.IndexFileNames.COMPOUND_FILE_STORE_EXTENSION">
            <summary>Extension of compound file for doc store files</summary>
        </member>
        <member name="F:Lucene.Net.Index.IndexFileNames.DELETES_EXTENSION">
            <summary>Extension of deletes </summary>
        </member>
        <member name="F:Lucene.Net.Index.IndexFileNames.FIELD_INFOS_EXTENSION">
            <summary>Extension of field infos </summary>
        </member>
        <member name="F:Lucene.Net.Index.IndexFileNames.PLAIN_NORMS_EXTENSION">
            <summary>Extension of plain norms </summary>
        </member>
        <member name="F:Lucene.Net.Index.IndexFileNames.SEPARATE_NORMS_EXTENSION">
            <summary>Extension of separate norms </summary>
        </member>
        <member name="F:Lucene.Net.Index.IndexFileNames.GEN_EXTENSION">
            <summary>Extension of gen file </summary>
        </member>
        <member name="F:Lucene.Net.Index.IndexFileNames.INDEX_EXTENSIONS">
            <summary> This array contains all filename extensions used by
            Lucene's index files, with two exceptions, namely the
            extensions made up from <c>.f</c> + a number and
            from <c>.s</c> + a number.  Also note that
            Lucene's <c>segments_N</c> files do not have any
            filename extension.
            </summary>
        </member>
        <member name="F:Lucene.Net.Index.IndexFileNames.INDEX_EXTENSIONS_IN_COMPOUND_FILE">
            <summary>File extensions that are added to a compound file
            (same as above, minus "del", "gen", "cfs"). 
            </summary>
        </member>
        <member name="F:Lucene.Net.Index.IndexFileNames.COMPOUND_EXTENSIONS">
            <summary>File extensions of old-style index files </summary>
        </member>
        <member name="F:Lucene.Net.Index.IndexFileNames.VECTOR_EXTENSIONS">
            <summary>File extensions for term vector support </summary>
        </member>
        <member name="M:Lucene.Net.Index.IndexFileNames.FileNameFromGeneration(System.String,System.String,System.Int64)">
            <summary> Computes the full file name from base, extension and
            generation.  If the generation is -1, the file name is
            null.  If it's 0, the file name is <c>base + extension</c>.
            If it's &gt; 0, the file name is <c>base + "_" + gen + extension</c>.
            </summary>
            <param name="base_Renamed">-- main part of the file name
            </param>
            <param name="extension">-- extension of the filename (including .)
            </param>
            <param name="gen">-- generation
            </param>
        </member>
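The naming rule above can be sketched as a stand-alone Java method. This is a re-creation based on the description, not a copy of Lucene's code; the base-36 rendering of the generation matches the `segments_N` naming convention (e.g. generation 10 yields `segments_a`), but verify against your Lucene version:

```java
// Sketch of fileNameFromGeneration per the documented rule:
// gen == -1 -> null; gen == 0 -> base + extension;
// gen > 0  -> base + "_" + gen (base 36) + extension.
public class FileNameFromGenerationDemo {
    public static String fileNameFromGeneration(String base, String extension, long gen) {
        if (gen == -1) {
            return null;                 // -1 means "no such file"
        } else if (gen == 0) {
            return base + extension;     // generation 0 carries no suffix
        } else {
            // Character.MAX_RADIX is 36, so gen 10 renders as "a"
            return base + "_" + Long.toString(gen, Character.MAX_RADIX) + extension;
        }
    }

    public static void main(String[] args) {
        System.out.println(fileNameFromGeneration("segments", "", -1)); // null
        System.out.println(fileNameFromGeneration("segments", "", 0));  // segments
        System.out.println(fileNameFromGeneration("segments", "", 10)); // segments_a
    }
}
```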
        <member name="M:Lucene.Net.Index.IndexFileNames.IsDocStoreFile(System.String)">
            <summary> Returns true if the provided filename is one of the doc
            store files (ends with an extension in
            STORE_INDEX_EXTENSIONS).
            </summary>
        </member>
        <member name="T:Lucene.Net.Index.IndexModifier">
             <summary> <p/>[Note that as of <b>2.1</b>, all but one of the
             methods in this class are available via <see cref="T:Lucene.Net.Index.IndexWriter"/>
            .  The one method that is not available is
             <see cref="M:Lucene.Net.Index.IndexModifier.DeleteDocument(System.Int32)"/>.]<p/>
             
             A class to modify an index, i.e. to delete and add documents. This
             class hides <see cref="T:Lucene.Net.Index.IndexReader"/> and <see cref="T:Lucene.Net.Index.IndexWriter"/> so that you
             do not need to care about implementation details such as that adding
             documents is done via IndexWriter and deletion is done via IndexReader.
             
             <p/>Note that you cannot create more than one <c>IndexModifier</c> object
             on the same directory at the same time.
             
             <p/>Example usage:
             
             <code>
             Analyzer analyzer = new StandardAnalyzer();
             // create an index in /tmp/index, overwriting an existing one:
             IndexModifier indexModifier = new IndexModifier("/tmp/index", analyzer, true);
             Document doc = new Document();
             doc.Add(new Field("id", "1", Field.Store.YES, Field.Index.NOT_ANALYZED));
             doc.Add(new Field("body", "a simple test", Field.Store.YES, Field.Index.ANALYZED));
             indexModifier.AddDocument(doc);
             int deleted = indexModifier.DeleteDocuments(new Term("id", "1"));
             Console.WriteLine("Deleted " + deleted + " document");
             indexModifier.Flush();
             Console.WriteLine(indexModifier.DocCount() + " docs in index");
             indexModifier.Close();
             </code>
             
             <p/>Not all methods of IndexReader and IndexWriter are offered by this
             class. If you need access to additional methods, either use those classes
             directly or implement your own class that extends <c>IndexModifier</c>.
             
             <p/>Although an instance of this class can be used from more than one
             thread, you will not get the best performance. You might want to use
             IndexReader and IndexWriter directly for that (but you will need to
             care about synchronization yourself then).
             
             <p/>While you can freely mix calls to add() and delete() using this class,
             you should batch your calls for best performance. For example, if you
             want to update 20 documents, you should first delete all those documents,
             then add all the new documents.
             
             </summary>
             <deprecated> Please use <see cref="T:Lucene.Net.Index.IndexWriter"/> instead.
             </deprecated>
        </member>
        <member name="M:Lucene.Net.Index.IndexModifier.#ctor(Lucene.Net.Store.Directory,Lucene.Net.Analysis.Analyzer,System.Boolean)">
            <summary> Open an index with write access.
            
            </summary>
            <param name="directory">the index directory
            </param>
            <param name="analyzer">the analyzer to use for adding new documents
            </param>
            <param name="create"><c>true</c> to create the index or overwrite the existing one;
            <c>false</c> to append to the existing index
            </param>
            <throws>  CorruptIndexException if the index is corrupt </throws>
            <throws>  LockObtainFailedException if another writer </throws>
            <summary>  has this index open (<c>write.lock</c> could not
            be obtained)
            </summary>
            <throws>  IOException if there is a low-level IO error </throws>
        </member>
        <member name="M:Lucene.Net.Index.IndexModifier.#ctor(System.String,Lucene.Net.Analysis.Analyzer,System.Boolean)">
            <summary> Open an index with write access.
            
            </summary>
            <param name="dirName">the index directory
            </param>
            <param name="analyzer">the analyzer to use for adding new documents
            </param>
            <param name="create"><c>true</c> to create the index or overwrite the existing one;
            <c>false</c> to append to the existing index
            </param>
            <throws>  CorruptIndexException if the index is corrupt </throws>
            <throws>  LockObtainFailedException if another writer </throws>
            <summary>  has this index open (<c>write.lock</c> could not
            be obtained)
            </summary>
            <throws>  IOException if there is a low-level IO error </throws>
        </member>
        <member name="M:Lucene.Net.Index.IndexModifier.#ctor(System.IO.FileInfo,Lucene.Net.Analysis.Analyzer,System.Boolean)">
            <summary> Open an index with write access.
            
            </summary>
            <param name="file">the index directory
            </param>
            <param name="analyzer">the analyzer to use for adding new documents
            </param>
            <param name="create"><c>true</c> to create the index or overwrite the existing one;
            <c>false</c> to append to the existing index
            </param>
            <throws>  CorruptIndexException if the index is corrupt </throws>
            <throws>  LockObtainFailedException if another writer </throws>
            <summary>  has this index open (<c>write.lock</c> could not
            be obtained)
            </summary>
            <throws>  IOException if there is a low-level IO error </throws>
        </member>
        <member name="M:Lucene.Net.Index.IndexModifier.Init(Lucene.Net.Store.Directory,Lucene.Net.Analysis.Analyzer,System.Boolean)">
            <summary> Initialize an IndexWriter.</summary>
            <throws>  CorruptIndexException if the index is corrupt </throws>
            <throws>  LockObtainFailedException if another writer </throws>
            <summary>  has this index open (<c>write.lock</c> could not
            be obtained)
            </summary>
            <throws>  IOException if there is a low-level IO error </throws>
        </member>
        <member name="M:Lucene.Net.Index.IndexModifier.AssureOpen">
            <summary> Throw an IllegalStateException if the index is closed.</summary>
            <throws>  IllegalStateException </throws>
        </member>
        <member name="M:Lucene.Net.Index.IndexModifier.CreateIndexWriter">
            <summary> Close the IndexReader and open an IndexWriter.</summary>
            <throws>  CorruptIndexException if the index is corrupt </throws>
            <throws>  LockObtainFailedException if another writer </throws>
            <summary>  has this index open (<c>write.lock</c> could not
            be obtained)
            </summary>
            <throws>  IOException if there is a low-level IO error </throws>
        </member>
        <member name="M:Lucene.Net.Index.IndexModifier.CreateIndexReader">
            <summary> Close the IndexWriter and open an IndexReader.</summary>
            <throws>  CorruptIndexException if the index is corrupt </throws>
            <throws>  IOException if there is a low-level IO error </throws>
        </member>
        <member name="M:Lucene.Net.Index.IndexModifier.Flush">
            <summary> Make sure all changes are written to disk.</summary>
            <throws>  CorruptIndexException if the index is corrupt </throws>
            <throws>  LockObtainFailedException if another writer </throws>
            <summary>  has this index open (<c>write.lock</c> could not
            be obtained)
            </summary>
            <throws>  IOException if there is a low-level IO error </throws>
        </member>
        <member name="M:Lucene.Net.Index.IndexModifier.AddDocument(Lucene.Net.Documents.Document,Lucene.Net.Analysis.Analyzer)">
            <summary> Adds a document to this index, using the provided analyzer instead of the
            one specific in the constructor.  If the document contains more than
            <see cref="M:Lucene.Net.Index.IndexModifier.SetMaxFieldLength(System.Int32)"/> terms for a given field, the remainder are
            discarded.
            </summary>
            <seealso cref="M:Lucene.Net.Index.IndexWriter.AddDocument(Lucene.Net.Documents.Document,Lucene.Net.Analysis.Analyzer)">
            </seealso>
            <throws>  IllegalStateException if the index is closed </throws>
            <throws>  CorruptIndexException if the index is corrupt </throws>
            <throws>  LockObtainFailedException if another writer </throws>
            <summary>  has this index open (<c>write.lock</c> could not
            be obtained)
            </summary>
            <throws>  IOException if there is a low-level IO error </throws>
        </member>
        <member name="M:Lucene.Net.Index.IndexModifier.AddDocument(Lucene.Net.Documents.Document)">
            <summary> Adds a document to this index.  If the document contains more than
            <see cref="M:Lucene.Net.Index.IndexModifier.SetMaxFieldLength(System.Int32)"/> terms for a given field, the remainder are
            discarded.
            </summary>
            <seealso cref="M:Lucene.Net.Index.IndexWriter.AddDocument(Lucene.Net.Documents.Document)">
            </seealso>
            <throws>  IllegalStateException if the index is closed </throws>
            <throws>  CorruptIndexException if the index is corrupt </throws>
            <throws>  LockObtainFailedException if another writer </throws>
            <summary>  has this index open (<c>write.lock</c> could not
            be obtained)
            </summary>
            <throws>  IOException if there is a low-level IO error </throws>
        </member>
        <member name="M:Lucene.Net.Index.IndexModifier.DeleteDocuments(Lucene.Net.Index.Term)">
            <summary> Deletes all documents containing <c>term</c>.
            This is useful if one uses a document field to hold a unique ID string for
            the document.  Then to delete such a document, one merely constructs a
            term with the appropriate field and the unique ID string as its text and
            passes it to this method.  Returns the number of documents deleted.
            </summary>
            <returns> the number of documents deleted
            </returns>
            <seealso cref="M:Lucene.Net.Index.IndexReader.DeleteDocuments(Lucene.Net.Index.Term)">
            </seealso>
            <throws>  IllegalStateException if the index is closed </throws>
            <throws>  StaleReaderException if the index has changed </throws>
            <summary>  since this reader was opened
            </summary>
            <throws>  CorruptIndexException if the index is corrupt </throws>
            <throws>  LockObtainFailedException if another writer </throws>
            <summary>  has this index open (<c>write.lock</c> could not
            be obtained)
            </summary>
            <throws>  IOException if there is a low-level IO error </throws>
        </member>
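The delete-by-unique-ID pattern described above can be sketched against a tiny in-memory stand-in for an index. A real program would instead call `IndexModifier.DeleteDocuments(new Term("id", "1"))`; the class and method names below are hypothetical, invented only to illustrate the semantics:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Stand-in index: deleting by a unique "id" field removes every matching
// document and returns the number deleted, like deleteDocuments(Term).
public class DeleteByIdDemo {
    public static final class Doc {
        final String id, body;
        public Doc(String id, String body) { this.id = id; this.body = body; }
    }

    private final List<Doc> docs = new ArrayList<>();

    public void add(Doc d) { docs.add(d); }

    /** Deletes all documents whose "id" field matches; returns the count. */
    public int deleteById(String id) {
        int deleted = 0;
        for (Iterator<Doc> it = docs.iterator(); it.hasNext(); ) {
            if (it.next().id.equals(id)) {
                it.remove();
                deleted++;
            }
        }
        return deleted;
    }

    public int docCount() { return docs.size(); }
}
```

As in the class-level example, holding a unique ID in its own field is what makes a document individually replaceable: delete by the ID term, then add the new version.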
        <member name="M:Lucene.Net.Index.IndexModifier.DeleteDocument(System.Int32)">
            <summary> Deletes the document numbered <c>docNum</c>.</summary>
            <seealso cref="M:Lucene.Net.Index.IndexReader.DeleteDocument(System.Int32)">
            </seealso>
            <throws>  StaleReaderException if the index has changed </throws>
            <summary>  since this reader was opened
            </summary>
            <throws>  CorruptIndexException if the index is corrupt </throws>
            <throws>  LockObtainFailedException if another writer </throws>
            <summary>  has this index open (<c>write.lock</c> could not
            be obtained)
            </summary>
            <throws>  IllegalStateException if the index is closed </throws>
        </member>
        <member name="M:Lucene.Net.Index.IndexModifier.DocCount">
             <summary> Returns the number of documents currently in this
             index.  If the writer is currently open, this returns
             <see cref="M:Lucene.Net.Index.IndexWriter.DocCount"/>, else <see cref="M:Lucene.Net.Index.IndexReader.NumDocs"/>
            .  But, note that <see cref="M:Lucene.Net.Index.IndexWriter.DocCount"/>
             does not take deletions into
             account, unlike <see cref="M:Lucene.Net.Index.IndexReader.NumDocs"/>.
             </summary>
             <throws>  IllegalStateException if the index is closed </throws>
        </member>
        <member name="M:Lucene.Net.Index.IndexModifier.Optimize">
            <summary> Merges all segments together into a single segment, optimizing an index
            for search.
            </summary>
            <seealso cref="M:Lucene.Net.Index.IndexWriter.Optimize">
            </seealso>
            <throws>  IllegalStateException if the index is closed </throws>
            <throws>  CorruptIndexException if the index is corrupt </throws>
            <throws>  LockObtainFailedException if another writer </throws>
            <summary>  has this index open (<c>write.lock</c> could not
            be obtained)
            </summary>
            <throws>  IOException if there is a low-level IO error </throws>
        </member>
        <member name="M:Lucene.Net.Index.IndexModifier.SetInfoStream(System.IO.StreamWriter)">
            <summary> If non-null, information about merges and a message when
            <see cref="M:Lucene.Net.Index.IndexModifier.GetMaxFieldLength"/> is reached will be printed to this.
            <p/>Example: <tt>index.setInfoStream(System.err);</tt>
            </summary>
            <seealso cref="M:Lucene.Net.Index.IndexWriter.SetInfoStream(System.IO.StreamWriter)">
            </seealso>
            <throws>  IllegalStateException if the index is closed </throws>
        </member>
        <member name="M:Lucene.Net.Index.IndexModifier.GetInfoStream">
            <seealso cref="M:Lucene.Net.Index.IndexModifier.SetInfoStream(System.IO.StreamWriter)">
            </seealso>
            <throws>  CorruptIndexException if the index is corrupt </throws>
            <throws>  LockObtainFailedException if another writer </throws>
            <summary>  has this index open (<c>write.lock</c> could not
            be obtained)
            </summary>
            <throws>  IOException if there is a low-level IO error </throws>
        </member>
        <member name="M:Lucene.Net.Index.IndexModifier.SetUseCompoundFile(System.Boolean)">
            <summary> Setting to turn on usage of a compound file. When on, multiple files
            for each segment are merged into a single file once the segment creation
            is finished. This is done regardless of what directory is in use.
            </summary>
            <seealso cref="M:Lucene.Net.Index.IndexWriter.SetUseCompoundFile(System.Boolean)">
            </seealso>
            <throws>  IllegalStateException if the index is closed </throws>
        </member>
        <member name="M:Lucene.Net.Index.IndexModifier.GetUseCompoundFile">
            <seealso cref="M:Lucene.Net.Index.IndexModifier.SetUseCompoundFile(System.Boolean)">
            </seealso>
            <throws>  CorruptIndexException if the index is corrupt </throws>
            <throws>  LockObtainFailedException if another writer </throws>
            <summary>  has this index open (<c>write.lock</c> could not
            be obtained)
            </summary>
            <throws>  IOException if there is a low-level IO error </throws>
        </member>
        <member name="M:Lucene.Net.Index.IndexModifier.SetMaxFieldLength(System.Int32)">
            <summary> The maximum number of terms that will be indexed for a single field in a
            document.  This limits the amount of memory required for indexing, so that
            collections with very large files will not crash the indexing process by
            running out of memory.<p/>
            Note that this effectively truncates large documents, excluding from the
            index terms that occur further in the document.  If you know your source
            documents are large, be sure to set this value high enough to accommodate
            the expected size.  If you set it to Integer.MAX_VALUE, then the only limit
            is your memory, but you should anticipate an OutOfMemoryError.<p/>
            By default, no more than 10,000 terms will be indexed for a field.
            </summary>
            <seealso cref="M:Lucene.Net.Index.IndexWriter.SetMaxFieldLength(System.Int32)">
            </seealso>
            <throws>  IllegalStateException if the index is closed </throws>
        </member>
        <member name="M:Lucene.Net.Index.IndexModifier.GetMaxFieldLength">
            <seealso cref="M:Lucene.Net.Index.IndexModifier.SetMaxFieldLength(System.Int32)">
            </seealso>
            <throws>  CorruptIndexException if the index is corrupt </throws>
            <throws>  LockObtainFailedException if another writer </throws>
            <summary>  has this index open (<c>write.lock</c> could not
            be obtained)
            </summary>
            <throws>  IOException if there is a low-level IO error </throws>
        </member>
        <member name="M:Lucene.Net.Index.IndexModifier.SetMaxBufferedDocs(System.Int32)">
            <summary> Determines the minimal number of documents required before the buffered
            in-memory documents are merged and a new Segment is created.
            Since Documents are merged in a <see cref="T:Lucene.Net.Store.RAMDirectory"/>,
            a larger value gives faster indexing.  At the same time, mergeFactor limits
            the number of files open in a FSDirectory.
            
            <p/>The default value is 10.
            
            </summary>
            <seealso cref="M:Lucene.Net.Index.IndexWriter.SetMaxBufferedDocs(System.Int32)">
            </seealso>
            <throws>  IllegalStateException if the index is closed </throws>
            <throws>  IllegalArgumentException if maxBufferedDocs is smaller than 2 </throws>
        </member>
        <member name="M:Lucene.Net.Index.IndexModifier.GetMaxBufferedDocs">
            <seealso cref="M:Lucene.Net.Index.IndexModifier.SetMaxBufferedDocs(System.Int32)">
            </seealso>
            <throws>  CorruptIndexException if the index is corrupt </throws>
            <throws>  LockObtainFailedException if another writer has this index open (<c>write.lock</c> could not be obtained) </throws>
            <throws>  IOException if there is a low-level IO error </throws>
        </member>
        <member name="M:Lucene.Net.Index.IndexModifier.SetMergeFactor(System.Int32)">
            <summary> Determines how often segment indices are merged by addDocument().  With
            smaller values, less RAM is used while indexing, and searches on
            unoptimized indices are faster, but indexing speed is slower.  With larger
            values, more RAM is used during indexing, and while searches on unoptimized
            indices are slower, indexing is faster.  Thus larger values (&gt; 10) are best
            for batch index creation, and smaller values (&lt; 10) for indices that are
            interactively maintained.
            <p/>This must never be less than 2.  The default value is 10.
            
            </summary>
            <seealso cref="M:Lucene.Net.Index.IndexWriter.SetMergeFactor(System.Int32)">
            </seealso>
            <throws>  IllegalStateException if the index is closed </throws>
        </member>
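The mergeFactor trade-off above follows a logarithmic merge pattern. A toy C# model (not Lucene's actual merge policy; the name CountMerges is invustrated for illustration) counts how many merges a given number of flushed segments triggers:

```csharp
using System;

// Toy model: whenever mergeFactor segments accumulate at one level,
// they merge into a single segment one level up (cascading upward).
static int CountMerges(int numFlushes, int mergeFactor)
{
    var levels = new int[64];   // number of segments at each level
    int merges = 0;
    while (numFlushes-- > 0)
    {
        int level = 0;
        levels[level]++;
        // Cascade merges upward while any level is full.
        while (levels[level] >= mergeFactor)
        {
            levels[level] -= mergeFactor;
            merges++;
            level++;
            levels[level]++;
        }
    }
    return merges;
}

Console.WriteLine(CountMerges(100, 10));  // prints: 11
Console.WriteLine(CountMerges(100, 2));   // prints: 97
```

A smaller mergeFactor triggers far more merges (slower indexing) but leaves fewer live segments (faster searches on an unoptimized index), which is why values below 10 suit interactively maintained indices.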
        <member name="M:Lucene.Net.Index.IndexModifier.GetMergeFactor">
            <seealso cref="M:Lucene.Net.Index.IndexModifier.SetMergeFactor(System.Int32)">
            </seealso>
            <throws>  CorruptIndexException if the index is corrupt </throws>
            <throws>  LockObtainFailedException if another writer has this index open (<c>write.lock</c> could not be obtained) </throws>
            <throws>  IOException if there is a low-level IO error </throws>
        </member>
        <member name="M:Lucene.Net.Index.IndexModifier.Close">
            <summary> Close this index, writing all pending changes to disk.
            
            </summary>
            <throws>  IllegalStateException if the index has been closed before already </throws>
            <throws>  CorruptIndexException if the index is corrupt </throws>
            <throws>  IOException if there is a low-level IO error </throws>
        </member>
        <member name="T:Lucene.Net.Index.IndexWriter">
             <summary>An <c>IndexWriter</c> creates and maintains an index.
             <p/>The <c>create</c> argument to the 
             <see cref="M:Lucene.Net.Index.IndexWriter.#ctor(Lucene.Net.Store.Directory,Lucene.Net.Analysis.Analyzer,System.Boolean)">constructor</see> determines 
             whether a new index is created, or whether an existing index is
             opened.  Note that you can open an index with <c>create=true</c>
             even while readers are using the index.  The old readers will 
             continue to search the "point in time" snapshot they had opened, 
             and won't see the newly created index until they re-open.  There are
             also <see cref="M:Lucene.Net.Index.IndexWriter.#ctor(Lucene.Net.Store.Directory,Lucene.Net.Analysis.Analyzer)">constructors</see>
             with no <c>create</c> argument which will create a new index
             if there is not already an index at the provided path and otherwise 
             open the existing index.<p/>
             <p/>In either case, documents are added with <see cref="M:Lucene.Net.Index.IndexWriter.AddDocument(Lucene.Net.Documents.Document)"/>
             and removed with <see cref="M:Lucene.Net.Index.IndexWriter.DeleteDocuments(Lucene.Net.Index.Term)"/> or
             <see cref="M:Lucene.Net.Index.IndexWriter.DeleteDocuments(Lucene.Net.Search.Query)"/>. A document can be updated with
             <see cref="M:Lucene.Net.Index.IndexWriter.UpdateDocument(Lucene.Net.Index.Term,Lucene.Net.Documents.Document)"/> (which just deletes
             and then adds the entire document). When finished adding, deleting 
             and updating documents, <see cref="M:Lucene.Net.Index.IndexWriter.Close"/> should be called.<p/>
             <a name="flush"></a>
             <p/>These changes are buffered in memory and periodically
             flushed to the <see cref="T:Lucene.Net.Store.Directory"/> (during the above method
             calls).  A flush is triggered when there are enough
             buffered deletes (see <see cref="M:Lucene.Net.Index.IndexWriter.SetMaxBufferedDeleteTerms(System.Int32)"/>)
             or enough added documents since the last flush, whichever
             is sooner.  For the added documents, flushing is triggered
             either by RAM usage of the documents (see 
             <see cref="M:Lucene.Net.Index.IndexWriter.SetRAMBufferSizeMB(System.Double)"/>) or the number of added documents.
             The default is to flush when RAM usage hits 16 MB.  For
             best indexing speed you should flush by RAM usage with a
             large RAM buffer.  Note that flushing just moves the
             internal buffered state in IndexWriter into the index, but
             these changes are not visible to IndexReader until either
             <see cref="M:Lucene.Net.Index.IndexWriter.Commit"/> or <see cref="M:Lucene.Net.Index.IndexWriter.Close"/> is called.  A flush may
             also trigger one or more segment merges which by default
             run with a background thread so as not to block the
             addDocument calls (see <a href="#mergePolicy">below</a>
             for changing the <see cref="T:Lucene.Net.Index.MergeScheduler"/>).<p/>
             <a name="autoCommit"></a>
             <p/>The optional <c>autoCommit</c> argument to the 
             <see cref="M:Lucene.Net.Index.IndexWriter.#ctor(Lucene.Net.Store.Directory,System.Boolean,Lucene.Net.Analysis.Analyzer)">constructors</see>
             controls visibility of the changes to <see cref="T:Lucene.Net.Index.IndexReader"/>
             instances reading the same index.  When this is
             <c>false</c>, changes are not visible until 
             <see cref="M:Lucene.Net.Index.IndexWriter.Close"/> or <see cref="M:Lucene.Net.Index.IndexWriter.Commit"/> is called.  Note that changes will still be
             flushed to the <see cref="T:Lucene.Net.Store.Directory"/> as new files, but are 
             not committed (no new <c>segments_N</c> file is written 
             referencing the new files, nor are the files sync'd to stable storage)
             until <see cref="M:Lucene.Net.Index.IndexWriter.Close"/> or <see cref="M:Lucene.Net.Index.IndexWriter.Commit"/> is called.  If something
             goes terribly wrong (for example, the process crashes), then
             the index will reflect none of the changes made since the
             last commit, or the starting state if commit was not called.
             You can also call <see cref="M:Lucene.Net.Index.IndexWriter.Rollback"/>, which closes the writer
             without committing any changes, and removes any index
             files that had been flushed but are now unreferenced.
             This mode is useful for preventing readers from refreshing
             at a bad time (for example after you've done all your
             deletes but before you've done your adds).  It can also be
             used to implement simple single-writer transactional
             semantics ("all or none").  You can do a two-phase commit
             by calling <see cref="M:Lucene.Net.Index.IndexWriter.PrepareCommit"/>
             followed by <see cref="M:Lucene.Net.Index.IndexWriter.Commit"/>. This is necessary when
             Lucene is working with an external resource (for example,
             a database) and both must either commit or rollback the
             transaction.<p/>
             <p/>When <c>autoCommit</c> is <c>true</c> then
             the writer will periodically commit on its own.  [<b>Deprecated</b>: Note that in 3.0, IndexWriter will
             no longer accept autoCommit=true (it will be hardwired to
             false).  You can always call <see cref="M:Lucene.Net.Index.IndexWriter.Commit"/> yourself
             when needed]. There is
             no guarantee when exactly an auto commit will occur (it
             used to be after every flush, but it is now after every
             completed merge, as of 2.4).  If you want to force a
             commit, call <see cref="M:Lucene.Net.Index.IndexWriter.Commit"/>, or, close the writer.  Once
             a commit has finished, newly opened <see cref="T:Lucene.Net.Index.IndexReader"/> instances will
             see the changes to the index as of that commit.  When
             running in this mode, be careful not to refresh your
             readers while optimize or segment merges are taking place
             as this can tie up substantial disk space.<p/>
             <p/>Regardless of <c>autoCommit</c>, an
             <see cref="T:Lucene.Net.Index.IndexReader"/> or <see cref="T:Lucene.Net.Search.IndexSearcher"/> will only see the
             index as of the "point in time" that it was opened.  Any
             changes committed to the index after the reader was opened
             are not visible until the reader is re-opened.<p/>
             <p/>If an index will not have more documents added for a while and optimal search
             performance is desired, then either the full <see cref="M:Lucene.Net.Index.IndexWriter.Optimize"/>
             method or partial <see cref="M:Lucene.Net.Index.IndexWriter.Optimize(System.Int32)"/> method should be
             called before the index is closed.<p/>
             <p/>Opening an <c>IndexWriter</c> creates a lock file for the directory in use. Trying to open
             another <c>IndexWriter</c> on the same directory will lead to a
             <see cref="T:Lucene.Net.Store.LockObtainFailedException"/>. The <see cref="T:Lucene.Net.Store.LockObtainFailedException"/>
             is also thrown if an IndexReader on the same directory is used to delete documents
             from the index.<p/>
             <a name="deletionPolicy"></a>
             <p/>Expert: <c>IndexWriter</c> allows an optional
             <see cref="T:Lucene.Net.Index.IndexDeletionPolicy"/> implementation to be
             specified.  You can use this to control when prior commits
             are deleted from the index.  The default policy is <see cref="T:Lucene.Net.Index.KeepOnlyLastCommitDeletionPolicy"/>
             which removes all prior
             commits as soon as a new commit is done (this matches
             behavior before 2.2).  Creating your own policy can allow
             you to explicitly keep previous "point in time" commits
             alive in the index for some time, to allow readers to
             refresh to the new commit without having the old commit
             deleted out from under them.  This is necessary on
             filesystems like NFS that do not support "delete on last
             close" semantics, which Lucene's "point in time" search
             normally relies on. <p/>
             <a name="mergePolicy"></a> <p/>Expert:
             <c>IndexWriter</c> allows you to separately change
             the <see cref="T:Lucene.Net.Index.MergePolicy"/> and the <see cref="T:Lucene.Net.Index.MergeScheduler"/>.
             The <see cref="T:Lucene.Net.Index.MergePolicy"/> is invoked whenever there are
             changes to the segments in the index.  Its role is to
             select which merges to do, if any, and return a <see cref="T:Lucene.Net.Index.MergePolicy.MergeSpecification"/>
             describing the merges.  It
             also selects merges to do for optimize().  (The default is
             <see cref="T:Lucene.Net.Index.LogByteSizeMergePolicy"/>.)  Then, the <see cref="T:Lucene.Net.Index.MergeScheduler"/>
             is invoked with the requested merges and
             it decides when and how to run the merges.  The default is
             <see cref="T:Lucene.Net.Index.ConcurrentMergeScheduler"/>. <p/>
             <a name="OOME"></a><p/><b>NOTE</b>: if you hit an
             OutOfMemoryException then IndexWriter will quietly record this
             fact and block all future segment commits.  This is a
             defensive measure in case any internal state (buffered
             documents and deletions) were corrupted.  Any subsequent
             calls to <see cref="M:Lucene.Net.Index.IndexWriter.Commit"/> will throw an
             IllegalStateException.  The only course of action is to
             call <see cref="M:Lucene.Net.Index.IndexWriter.Close"/>, which internally will call <see cref="M:Lucene.Net.Index.IndexWriter.Rollback"/>
            , to undo any changes to the index since the
             last commit.  If you opened the writer with autoCommit
             false you can also just call <see cref="M:Lucene.Net.Index.IndexWriter.Rollback"/>
             directly.<p/>
             <a name="thread-safety"></a><p/><b>NOTE</b>: 
             <see cref="T:Lucene.Net.Index.IndexWriter"/> instances are completely thread
             safe, meaning multiple threads can call any of its
             methods, concurrently.  If your application requires
             external synchronization, you should <b>not</b>
             synchronize on the <c>IndexWriter</c> instance as
             this may cause deadlock; use your own (non-Lucene) objects
             instead. <p/>
             </summary>
        </member>
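The add/update/delete/commit lifecycle described above can be sketched as follows (a minimal sketch assuming a Lucene.Net 2.9-style API and the Lucene.Net package; the index path and field names are purely illustrative):

```csharp
using Lucene.Net.Analysis.Standard;
using Lucene.Net.Documents;
using Lucene.Net.Index;
using Lucene.Net.Store;

// Open (or create) an index, then add, update, commit and close.
var directory = FSDirectory.Open(new System.IO.DirectoryInfo("lucene_index"));
var analyzer = new StandardAnalyzer(Lucene.Net.Util.Version.LUCENE_29);
var writer = new IndexWriter(directory, analyzer, true, IndexWriter.MaxFieldLength.UNLIMITED);

var doc = new Document();
doc.Add(new Field("Id", "1", Field.Store.YES, Field.Index.NOT_ANALYZED));
doc.Add(new Field("Body", "hello lucene", Field.Store.YES, Field.Index.ANALYZED));
writer.AddDocument(doc);

// UpdateDocument deletes by term, then re-adds the entire document.
writer.UpdateDocument(new Term("Id", "1"), doc);

writer.Commit();   // make changes visible to newly opened readers
writer.Close();    // flush, commit and release the write lock
```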
        <member name="F:Lucene.Net.Index.IndexWriter.WRITE_LOCK_NAME">
            <summary> Name of the write lock in the index.</summary>
        </member>
        <member name="F:Lucene.Net.Index.IndexWriter.DISABLE_AUTO_FLUSH">
            <summary> Value to denote a flush trigger is disabled</summary>
        </member>
        <member name="F:Lucene.Net.Index.IndexWriter.DEFAULT_RAM_BUFFER_SIZE_MB">
            <summary> Default value is 16 MB (which means flush when buffered
            docs consume 16 MB RAM).  Change using <see cref="M:Lucene.Net.Index.IndexWriter.SetRAMBufferSizeMB(System.Double)"/>.
            </summary>
        </member>
        <member name="F:Lucene.Net.Index.IndexWriter.DEFAULT_MAX_FIELD_LENGTH">
            <summary> Default value is 10,000. Change using <see cref="M:Lucene.Net.Index.IndexWriter.SetMaxFieldLength(System.Int32)"/>.</summary>
        </member>
        <member name="F:Lucene.Net.Index.IndexWriter.DEFAULT_TERM_INDEX_INTERVAL">
            <summary> Default value is 128. Change using <see cref="M:Lucene.Net.Index.IndexWriter.SetTermIndexInterval(System.Int32)"/>.</summary>
        </member>
        <member name="F:Lucene.Net.Index.IndexWriter.WRITE_LOCK_TIMEOUT">
            <summary> Default value for the write lock timeout (1,000 milliseconds).</summary>
            <seealso cref="M:Lucene.Net.Index.IndexWriter.SetDefaultWriteLockTimeout(System.Int64)">
            </seealso>
        </member>
        <member name="F:Lucene.Net.Index.IndexWriter.DEFAULT_MERGE_FACTOR">
            <deprecated>
            </deprecated>
            <seealso cref="F:Lucene.Net.Index.LogMergePolicy.DEFAULT_MERGE_FACTOR">
            </seealso>
        </member>
        <member name="F:Lucene.Net.Index.IndexWriter.DEFAULT_MAX_BUFFERED_DOCS">
            <summary> Disabled by default (because IndexWriter flushes by RAM usage
            by default). Change using <see cref="M:Lucene.Net.Index.IndexWriter.SetMaxBufferedDocs(System.Int32)"/>.
            </summary>
        </member>
        <member name="F:Lucene.Net.Index.IndexWriter.DEFAULT_MAX_BUFFERED_DELETE_TERMS">
            <summary> Disabled by default (because IndexWriter flushes by RAM usage
            by default). Change using <see cref="M:Lucene.Net.Index.IndexWriter.SetMaxBufferedDeleteTerms(System.Int32)"/>.
            </summary>
        </member>
        <member name="F:Lucene.Net.Index.IndexWriter.DEFAULT_MAX_MERGE_DOCS">
            <deprecated>
            </deprecated>
            <seealso cref="F:Lucene.Net.Index.LogMergePolicy.DEFAULT_MAX_MERGE_DOCS">
            </seealso>
        </member>
        <member name="F:Lucene.Net.Index.IndexWriter.MAX_TERM_LENGTH">
            <summary> Absolute hard maximum length for a term.  If a term
            arrives from the analyzer longer than this length, it
            is skipped and a message is printed to infoStream, if
            set (see <see cref="M:Lucene.Net.Index.IndexWriter.SetInfoStream(System.IO.StreamWriter)"/>).
            </summary>
        </member>
        <member name="F:Lucene.Net.Index.IndexWriter.DEFAULT_MAX_SYNC_PAUSE_SECONDS">
            <summary> Default for <see cref="M:Lucene.Net.Index.IndexWriter.GetMaxSyncPauseSeconds"/>.  On
            Windows this defaults to 10.0 seconds; elsewhere it's
            0.
            </summary>
        </member>
        <member name="M:Lucene.Net.Index.IndexWriter.GetReader">
             <summary> Expert: returns a readonly reader, covering all committed as well as
             uncommitted changes to the index. This provides "near real-time"
             searching, in that changes made during an IndexWriter session can be
             quickly made available for searching without closing the writer or
             calling <see cref="M:Lucene.Net.Index.IndexWriter.Commit"/>.
             
             <p/>
             Note that this is functionally equivalent to calling <see cref="M:Lucene.Net.Index.IndexWriter.Commit"/> and then
             using <see cref="M:Lucene.Net.Index.IndexReader.Open(System.String)"/> to open a new reader. But the turnaround
             time of this method should be faster since it avoids the potentially
             costly <see cref="M:Lucene.Net.Index.IndexWriter.Commit"/>.
             <p/>
             
             You must close the <see cref="T:Lucene.Net.Index.IndexReader"/> returned by  this method once you are done using it.
             
             <p/>
             It's <i>near</i> real-time because there is no hard
             guarantee on how quickly you can get a new reader after
             making changes with IndexWriter.  You'll have to
             experiment in your situation to determine if it's
             fast enough.  As this is a new and experimental
             feature, please report back on your findings so we can
             learn, improve and iterate.<p/>
             
             <p/>The resulting reader supports <see cref="M:Lucene.Net.Index.IndexReader.Reopen"/>
            , but that call will simply forward
             back to this method (though this may change in the
             future).<p/>
             
             <p/>The very first time this method is called, this
             writer instance will make every effort to pool the
             readers that it opens for doing merges, applying
             deletes, etc.  This means additional resources (RAM,
             file descriptors, CPU time) will be consumed.<p/>
             
             <p/>For lower latency on reopening a reader, you should call <see cref="M:Lucene.Net.Index.IndexWriter.SetMergedSegmentWarmer(Lucene.Net.Index.IndexWriter.IndexReaderWarmer)"/> to
             pre-warm a newly merged segment before it's committed
             to the index. This is important for minimizing index-to-search 
             delay after a large merge.
             
             <p/>If an addIndexes* call is running in another thread,
             then this reader will only search those segments from
             the foreign index that have been successfully copied
             over so far.<p/>
             
             <p/><b>NOTE</b>: Once the writer is closed, any
             outstanding readers may continue to be used.  However,
             if you attempt to reopen any of those readers, you'll
             hit an <see cref="T:Lucene.Net.Store.AlreadyClosedException"/>.<p/>
             
             <p/><b>NOTE:</b> This API is experimental and might
             change in incompatible ways in the next release.<p/>
             
             </summary>
             <returns> IndexReader that covers entire index plus all
             changes made so far by this IndexWriter instance
             
             </returns>
             <throws>  IOException </throws>
        </member>
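A minimal near-real-time sketch, given an already-open IndexWriter named writer as in the lifecycle above (Lucene.Net 2.9-style API; illustrative only):

```csharp
using Lucene.Net.Index;
using Lucene.Net.Search;

// Near real-time search: obtain a reader straight from the writer,
// covering uncommitted changes, without calling Commit or closing it.
IndexReader reader = writer.GetReader();
var searcher = new IndexSearcher(reader);
// ... run queries against the uncommitted state here ...
searcher.Close();
reader.Close();   // callers must close the returned reader themselves
```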
        <member name="M:Lucene.Net.Index.IndexWriter.GetReader(System.Int32)">
            <summary>Expert: like <see cref="M:Lucene.Net.Index.IndexWriter.GetReader"/>, except you can
            specify which termInfosIndexDivisor should be used for
            any newly opened readers.
            </summary>
            <param name="termInfosIndexDivisor">Subsamples which indexed
            terms are loaded into RAM. This has the same effect as <see cref="M:Lucene.Net.Index.IndexWriter.SetTermIndexInterval(System.Int32)"/>
            except that setting
            must be done at indexing time while this setting can be
            set per reader.  When set to N, then one in every
            N*termIndexInterval terms in the index is loaded into
            memory.  By setting this to a value &gt; 1 you can reduce
            memory usage, at the expense of higher latency when
            loading a TermInfo.  The default value is 1.  Set this
            to -1 to skip loading the terms index entirely. 
            </param>
        </member>
        <member name="M:Lucene.Net.Index.IndexWriter.NumDeletedDocs(Lucene.Net.Index.SegmentInfo)">
            <summary> Obtain the number of deleted docs for a pooled reader.
            If the reader isn't being pooled, the segmentInfo's 
            delCount is returned.
            </summary>
        </member>
        <member name="M:Lucene.Net.Index.IndexWriter.EnsureOpen(System.Boolean)">
            <summary> Used internally to throw an <see cref="T:Lucene.Net.Store.AlreadyClosedException"/>
            if this IndexWriter has been
            closed.
            </summary>
            <throws>  AlreadyClosedException if this IndexWriter is closed </throws>
        </member>
        <member name="M:Lucene.Net.Index.IndexWriter.Message(System.String)">
            <summary> Prints a message to the infoStream (if non-null),
            prefixed with the identifying information for this
            writer and the thread that's calling it.
            </summary>
        </member>
        <member name="M:Lucene.Net.Index.IndexWriter.GetLogMergePolicy">
            <summary> Casts current mergePolicy to LogMergePolicy, and throws
            an exception if the mergePolicy is not a LogMergePolicy.
            </summary>
        </member>
        <member name="M:Lucene.Net.Index.IndexWriter.GetUseCompoundFile">
            <summary><p/>Get the current setting of whether newly flushed
            segments will use the compound file format.  Note that
            this just returns the value previously set with
            <see cref="M:Lucene.Net.Index.IndexWriter.SetUseCompoundFile(System.Boolean)"/>, or the default value
            (true).  You cannot use this to query the status of
            previously flushed segments.<p/>
            
            <p/>Note that this method is a convenience method: it
            just calls mergePolicy.getUseCompoundFile as long as
            mergePolicy is an instance of <see cref="T:Lucene.Net.Index.LogMergePolicy"/>.
            Otherwise an IllegalArgumentException is thrown.<p/>
            
            </summary>
            <seealso cref="M:Lucene.Net.Index.IndexWriter.SetUseCompoundFile(System.Boolean)">
            </seealso>
        </member>
        <member name="M:Lucene.Net.Index.IndexWriter.SetUseCompoundFile(System.Boolean)">
            <summary><p/>Setting to turn on usage of a compound file. When on,
            multiple files for each segment are merged into a
            single file when a new segment is flushed.<p/>
            
            <p/>Note that this method is a convenience method: it
            just calls mergePolicy.setUseCompoundFile as long as
            mergePolicy is an instance of <see cref="T:Lucene.Net.Index.LogMergePolicy"/>.
            Otherwise an IllegalArgumentException is thrown.<p/>
            </summary>
        </member>
        <member name="M:Lucene.Net.Index.IndexWriter.SetSimilarity(Lucene.Net.Search.Similarity)">
            <summary>Expert: Set the Similarity implementation used by this IndexWriter.
            
            </summary>
            <seealso cref="M:Lucene.Net.Search.Similarity.SetDefault(Lucene.Net.Search.Similarity)">
            </seealso>
        </member>
        <member name="M:Lucene.Net.Index.IndexWriter.GetSimilarity">
            <summary>Expert: Return the Similarity implementation used by this IndexWriter.
            
            <p/>This defaults to the current value of <see cref="M:Lucene.Net.Search.Similarity.GetDefault"/>.
            </summary>
        </member>
        <member name="M:Lucene.Net.Index.IndexWriter.SetTermIndexInterval(System.Int32)">
            <summary>Expert: Set the interval between indexed terms.  Large values cause less
            memory to be used by an IndexReader, but slow random access to terms.  Small
            values cause more memory to be used by an IndexReader, and speed up
            random access to terms.
            
            This parameter determines the amount of computation required per query
            term, regardless of the number of documents that contain that term.  In
            particular, it is the maximum number of other terms that must be
            scanned before a term is located and its frequency and position information
            may be processed.  In a large index with user-entered query terms, query
            processing time is likely to be dominated not by term lookup but rather
            by the processing of frequency and positional data.  In a small index
            or when many uncommon query terms are generated (e.g., by wildcard
            queries) term lookup may become a dominant cost.
            
            In particular, <c>numUniqueTerms/interval</c> terms are read into
            memory by an IndexReader, and, on average, <c>interval/2</c> terms
            must be scanned for each random term access.
            
            </summary>
            <seealso cref="F:Lucene.Net.Index.IndexWriter.DEFAULT_TERM_INDEX_INTERVAL">
            </seealso>
        </member>
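The cost model in the last paragraph, combined with the termInfosIndexDivisor of GetReader(int), reduces to simple arithmetic. A small C# check (TermIndexCost is an invented name for illustration):

```csharp
using System;

// Rough cost model from the docs: an IndexReader holds
// numUniqueTerms / (interval * divisor) terms in RAM, and a random
// term lookup scans (interval * divisor) / 2 terms on average.
// (divisor models the termInfosIndexDivisor of GetReader(int).)
static void TermIndexCost(int numUniqueTerms, int interval, int divisor)
{
    int effective = interval * divisor;
    Console.WriteLine("terms in RAM: {0}, average scan: {1}",
                      numUniqueTerms / effective, effective / 2.0);
}

// 1,000,000 unique terms at the default interval of 128:
TermIndexCost(1000000, 128, 1);   // terms in RAM: 7812, average scan: 64
// Doubling the interval halves RAM use but doubles the average scan:
TermIndexCost(1000000, 256, 1);   // terms in RAM: 3906, average scan: 128
```

Setting divisor to 2 at read time has the same effect on these figures as doubling the interval at index time.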
        <member name="M:Lucene.Net.Index.IndexWriter.GetTermIndexInterval">
            <summary>Expert: Return the interval between indexed terms.
            
            </summary>
            <seealso cref="M:Lucene.Net.Index.IndexWriter.SetTermIndexInterval(System.Int32)">
            </seealso>
        </member>
        <member name="M:Lucene.Net.Index.IndexWriter.#ctor(System.String,Lucene.Net.Analysis.Analyzer,System.Boolean,Lucene.Net.Index.IndexWriter.MaxFieldLength)">
            <summary> Constructs an IndexWriter for the index in <c>path</c>.
            Text will be analyzed with <c>a</c>.  If <c>create</c>
            is true, then a new, empty index will be created in
            <c>path</c>, replacing the index already there,
            if any.
            
            <p/><b>NOTE</b>: autoCommit (see <a href="#autoCommit">above</a>) is set to false with this
            constructor.
            
            </summary>
            <param name="path">the path to the index directory
            </param>
            <param name="a">the analyzer to use
            </param>
            <param name="create"><c>true</c> to create the index or overwrite
            the existing one; <c>false</c> to append to the existing
            index
            </param>
            <param name="mfl">Maximum field length in number of tokens/terms: LIMITED, UNLIMITED, or user-specified
            via the MaxFieldLength constructor.
            </param>
            <throws>  CorruptIndexException if the index is corrupt </throws>
            <throws>  LockObtainFailedException if another writer has this index open (<c>write.lock</c> could not be obtained) </throws>
            <throws>  IOException if the directory cannot be read/written to, or if it does not exist and <c>create</c> is <c>false</c>, or if there is any other low-level IO error </throws>
            <deprecated> Use <see cref="M:Lucene.Net.Index.IndexWriter.#ctor(Lucene.Net.Store.Directory,Lucene.Net.Analysis.Analyzer,System.Boolean,Lucene.Net.Index.IndexWriter.MaxFieldLength)"/>
            </deprecated>
        </member>
        <member name="M:Lucene.Net.Index.IndexWriter.#ctor(System.String,Lucene.Net.Analysis.Analyzer,System.Boolean)">
             <summary> Constructs an IndexWriter for the index in <c>path</c>.
             Text will be analyzed with <c>a</c>.  If <c>create</c>
             is true, then a new, empty index will be created in
             <c>path</c>, replacing the index already there, if any.
             
             </summary>
             <param name="path">the path to the index directory
             </param>
             <param name="a">the analyzer to use
             </param>
             <param name="create"><c>true</c> to create the index or overwrite
             the existing one; <c>false</c> to append to the existing
             index
             </param>
             <throws>  CorruptIndexException if the index is corrupt </throws>
             <throws>  LockObtainFailedException if another writer has this index open (<c>write.lock</c> could not be obtained) </throws>
             <throws>  IOException if the directory cannot be read/written to, or if it does not exist and <c>create</c> is <c>false</c>, or if there is any other low-level IO error </throws>
             <deprecated> This constructor will be removed in the 3.0 release.
             Use <see cref="M:Lucene.Net.Index.IndexWriter.#ctor(Lucene.Net.Store.Directory,Lucene.Net.Analysis.Analyzer,System.Boolean,Lucene.Net.Index.IndexWriter.MaxFieldLength)"/>
            
             instead, and call <see cref="M:Lucene.Net.Index.IndexWriter.Commit"/> when needed.
             </deprecated>
        </member>
        <member name="M:Lucene.Net.Index.IndexWriter.#ctor(System.IO.FileInfo,Lucene.Net.Analysis.Analyzer,System.Boolean,Lucene.Net.Index.IndexWriter.MaxFieldLength)">
            <summary> Constructs an IndexWriter for the index in <c>path</c>.
            Text will be analyzed with <c>a</c>.  If <c>create</c>
            is true, then a new, empty index will be created in
            <c>path</c>, replacing the index already there, if any.
            
            <p/><b>NOTE</b>: autoCommit (see <a href="#autoCommit">above</a>) is set to false with this
            constructor.
            
            </summary>
            <param name="path">the path to the index directory
            </param>
            <param name="a">the analyzer to use
            </param>
            <param name="create"><c>true</c> to create the index or overwrite
            the existing one; <c>false</c> to append to the existing
            index
            </param>
            <param name="mfl">Maximum field length in number of terms/tokens: LIMITED, UNLIMITED, or user-specified
            via the MaxFieldLength constructor.
            </param>
            <throws>  CorruptIndexException if the index is corrupt </throws>
            <throws>  LockObtainFailedException if another writer
            has this index open (<c>write.lock</c> could not
            be obtained)
            </throws>
            <throws>  IOException if the directory cannot be read/written to, or
            if it does not exist and <c>create</c> is
            <c>false</c> or if there is any other low-level
            IO error
            </throws>
            <deprecated> Use <see cref="M:Lucene.Net.Index.IndexWriter.#ctor(Lucene.Net.Store.Directory,Lucene.Net.Analysis.Analyzer,System.Boolean,Lucene.Net.Index.IndexWriter.MaxFieldLength)"/>
            </deprecated>
        </member>
        <member name="M:Lucene.Net.Index.IndexWriter.#ctor(System.IO.FileInfo,Lucene.Net.Analysis.Analyzer,System.Boolean)">
             <summary> Constructs an IndexWriter for the index in <c>path</c>.
             Text will be analyzed with <c>a</c>.  If <c>create</c>
             is true, then a new, empty index will be created in
             <c>path</c>, replacing the index already there, if any.
             
             </summary>
             <param name="path">the path to the index directory
             </param>
             <param name="a">the analyzer to use
             </param>
             <param name="create"><c>true</c> to create the index or overwrite
             the existing one; <c>false</c> to append to the existing
             index
             </param>
             <throws>  CorruptIndexException if the index is corrupt </throws>
             <throws>  LockObtainFailedException if another writer
             has this index open (<c>write.lock</c> could not
             be obtained)
             </throws>
             <throws>  IOException if the directory cannot be read/written to, or
             if it does not exist and <c>create</c> is
             <c>false</c> or if there is any other low-level
             IO error
             </throws>
             <deprecated> This constructor will be removed in the 3.0 release.
             Use <see cref="M:Lucene.Net.Index.IndexWriter.#ctor(Lucene.Net.Store.Directory,Lucene.Net.Analysis.Analyzer,System.Boolean,Lucene.Net.Index.IndexWriter.MaxFieldLength)"/>
             instead, and call <see cref="M:Lucene.Net.Index.IndexWriter.Commit"/> when needed.
             </deprecated>
        </member>
        <member name="M:Lucene.Net.Index.IndexWriter.#ctor(Lucene.Net.Store.Directory,Lucene.Net.Analysis.Analyzer,System.Boolean,Lucene.Net.Index.IndexWriter.MaxFieldLength)">
            <summary> Constructs an IndexWriter for the index in <c>d</c>.
            Text will be analyzed with <c>a</c>.  If <c>create</c>
            is true, then a new, empty index will be created in
            <c>d</c>, replacing the index already there, if any.
            
            <p/><b>NOTE</b>: autoCommit (see <a
            href="#autoCommit">above</a>) is set to false with this
            constructor.
            
            </summary>
            <param name="d">the index directory
            </param>
            <param name="a">the analyzer to use
            </param>
            <param name="create"><c>true</c> to create the index or overwrite
            the existing one; <c>false</c> to append to the existing
            index
            </param>
            <param name="mfl">Maximum field length in number of terms/tokens: LIMITED, UNLIMITED, or user-specified
            via the MaxFieldLength constructor.
            </param>
            <throws>  CorruptIndexException if the index is corrupt </throws>
            <throws>  LockObtainFailedException if another writer
            has this index open (<c>write.lock</c> could not
            be obtained)
            </throws>
            <throws>  IOException if the directory cannot be read/written to, or
            if it does not exist and <c>create</c> is
            <c>false</c> or if there is any other low-level
            IO error
            </throws>
        </member>
        <member name="M:Lucene.Net.Index.IndexWriter.#ctor(Lucene.Net.Store.Directory,Lucene.Net.Analysis.Analyzer,System.Boolean)">
            <summary> Constructs an IndexWriter for the index in <c>d</c>.
            Text will be analyzed with <c>a</c>.  If <c>create</c>
            is true, then a new, empty index will be created in
            <c>d</c>, replacing the index already there, if any.
            
            </summary>
            <param name="d">the index directory
            </param>
            <param name="a">the analyzer to use
            </param>
            <param name="create"><c>true</c> to create the index or overwrite
            the existing one; <c>false</c> to append to the existing
            index
            </param>
            <throws>  CorruptIndexException if the index is corrupt </throws>
            <throws>  LockObtainFailedException if another writer
            has this index open (<c>write.lock</c> could not
            be obtained)
            </throws>
            <throws>  IOException if the directory cannot be read/written to, or
            if it does not exist and <c>create</c> is
            <c>false</c> or if there is any other low-level
            IO error
            </throws>
            <deprecated> This constructor will be removed in the 3.0 release.
            Use <see cref="M:Lucene.Net.Index.IndexWriter.#ctor(Lucene.Net.Store.Directory,Lucene.Net.Analysis.Analyzer,System.Boolean,Lucene.Net.Index.IndexWriter.MaxFieldLength)"/>
            instead, and call <see cref="M:Lucene.Net.Index.IndexWriter.Commit"/> when needed.
            </deprecated>
        </member>
        <member name="M:Lucene.Net.Index.IndexWriter.#ctor(System.String,Lucene.Net.Analysis.Analyzer,Lucene.Net.Index.IndexWriter.MaxFieldLength)">
            <summary> Constructs an IndexWriter for the index in
            <c>path</c>, first creating it if it does not
            already exist.  Text will be analyzed with
            <c>a</c>.
            
            <p/><b>NOTE</b>: autoCommit (see <a href="#autoCommit">above</a>) is set to false with this
            constructor.
            
            </summary>
            <param name="path">the path to the index directory
            </param>
            <param name="a">the analyzer to use
            </param>
            <param name="mfl">Maximum field length in number of terms/tokens: LIMITED, UNLIMITED, or user-specified
            via the MaxFieldLength constructor.
            </param>
            <throws>  CorruptIndexException if the index is corrupt </throws>
            <throws>  LockObtainFailedException if another writer
            has this index open (<c>write.lock</c> could not
            be obtained)
            </throws>
            <throws>  IOException if the directory cannot be
            read/written to or if there is any other low-level
            IO error
            </throws>
            <deprecated> Use <see cref="M:Lucene.Net.Index.IndexWriter.#ctor(Lucene.Net.Store.Directory,Lucene.Net.Analysis.Analyzer,Lucene.Net.Index.IndexWriter.MaxFieldLength)"/>
            </deprecated>
        </member>
        <member name="M:Lucene.Net.Index.IndexWriter.#ctor(System.String,Lucene.Net.Analysis.Analyzer)">
            <summary> Constructs an IndexWriter for the index in
            <c>path</c>, first creating it if it does not
            already exist.  Text will be analyzed with
            <c>a</c>.
            
            </summary>
            <param name="path">the path to the index directory
            </param>
            <param name="a">the analyzer to use
            </param>
            <throws>  CorruptIndexException if the index is corrupt </throws>
            <throws>  LockObtainFailedException if another writer
            has this index open (<c>write.lock</c> could not
            be obtained)
            </throws>
            <throws>  IOException if the directory cannot be
            read/written to or if there is any other low-level
            IO error
            </throws>
            <deprecated> This constructor will be removed in the 3.0 release.
            Use <see cref="M:Lucene.Net.Index.IndexWriter.#ctor(Lucene.Net.Store.Directory,Lucene.Net.Analysis.Analyzer,Lucene.Net.Index.IndexWriter.MaxFieldLength)"/>
            instead, and call <see cref="M:Lucene.Net.Index.IndexWriter.Commit"/> when needed.
            </deprecated>
        </member>
        <member name="M:Lucene.Net.Index.IndexWriter.#ctor(System.IO.FileInfo,Lucene.Net.Analysis.Analyzer,Lucene.Net.Index.IndexWriter.MaxFieldLength)">
            <summary> Constructs an IndexWriter for the index in
            <c>path</c>, first creating it if it does not
            already exist.  Text will be analyzed with
            <c>a</c>.
            
            <p/><b>NOTE</b>: autoCommit (see <a href="#autoCommit">above</a>) is set to false with this
            constructor.
            
            </summary>
            <param name="path">the path to the index directory
            </param>
            <param name="a">the analyzer to use
            </param>
            <param name="mfl">Maximum field length in number of terms/tokens: LIMITED, UNLIMITED, or user-specified
            via the MaxFieldLength constructor.
            </param>
            <throws>  CorruptIndexException if the index is corrupt </throws>
            <throws>  LockObtainFailedException if another writer
            has this index open (<c>write.lock</c> could not
            be obtained)
            </throws>
            <throws>  IOException if the directory cannot be
            read/written to or if there is any other low-level
            IO error
            </throws>
            <deprecated> Use <see cref="M:Lucene.Net.Index.IndexWriter.#ctor(Lucene.Net.Store.Directory,Lucene.Net.Analysis.Analyzer,Lucene.Net.Index.IndexWriter.MaxFieldLength)"/>
            </deprecated>
        </member>
        <member name="M:Lucene.Net.Index.IndexWriter.#ctor(System.IO.FileInfo,Lucene.Net.Analysis.Analyzer)">
            <summary> Constructs an IndexWriter for the index in
            <c>path</c>, first creating it if it does not
            already exist.  Text will be analyzed with
            <c>a</c>.
            
            </summary>
            <param name="path">the path to the index directory
            </param>
            <param name="a">the analyzer to use
            </param>
            <throws>  CorruptIndexException if the index is corrupt </throws>
            <throws>  LockObtainFailedException if another writer
            has this index open (<c>write.lock</c> could not
            be obtained)
            </throws>
            <throws>  IOException if the directory cannot be
            read/written to or if there is any other low-level
            IO error
            </throws>
            <deprecated> This constructor will be removed in the 3.0 release.
            Use <see cref="M:Lucene.Net.Index.IndexWriter.#ctor(Lucene.Net.Store.Directory,Lucene.Net.Analysis.Analyzer,Lucene.Net.Index.IndexWriter.MaxFieldLength)"/>
            instead, and call <see cref="M:Lucene.Net.Index.IndexWriter.Commit"/> when needed.
            </deprecated>
        </member>
        <member name="M:Lucene.Net.Index.IndexWriter.#ctor(Lucene.Net.Store.Directory,Lucene.Net.Analysis.Analyzer,Lucene.Net.Index.IndexWriter.MaxFieldLength)">
            <summary> Constructs an IndexWriter for the index in
            <c>d</c>, first creating it if it does not
            already exist.  Text will be analyzed with
            <c>a</c>.
            
            <p/><b>NOTE</b>: autoCommit (see <a
            href="#autoCommit">above</a>) is set to false with this
            constructor.
            
            </summary>
            <param name="d">the index directory
            </param>
            <param name="a">the analyzer to use
            </param>
            <param name="mfl">Maximum field length in number of terms/tokens: LIMITED, UNLIMITED, or user-specified
            via the MaxFieldLength constructor.
            </param>
            <throws>  CorruptIndexException if the index is corrupt </throws>
            <throws>  LockObtainFailedException if another writer
            has this index open (<c>write.lock</c> could not
            be obtained)
            </throws>
            <throws>  IOException if the directory cannot be
            read/written to or if there is any other low-level
            IO error
            </throws>
        </member>
        <member name="M:Lucene.Net.Index.IndexWriter.#ctor(Lucene.Net.Store.Directory,Lucene.Net.Analysis.Analyzer)">
             <summary> Constructs an IndexWriter for the index in
             <c>d</c>, first creating it if it does not
             already exist.  Text will be analyzed with
             <c>a</c>.
             
             </summary>
             <param name="d">the index directory
             </param>
             <param name="a">the analyzer to use
             </param>
             <throws>  CorruptIndexException if the index is corrupt </throws>
             <throws>  LockObtainFailedException if another writer
             has this index open (<c>write.lock</c> could not
             be obtained)
             </throws>
             <throws>  IOException if the directory cannot be
             read/written to or if there is any other low-level
             IO error
             </throws>
             <deprecated> This constructor will be removed in the 3.0 release.
             Use <see cref="M:Lucene.Net.Index.IndexWriter.#ctor(Lucene.Net.Store.Directory,Lucene.Net.Analysis.Analyzer,Lucene.Net.Index.IndexWriter.MaxFieldLength)"/>
             instead, and call <see cref="M:Lucene.Net.Index.IndexWriter.Commit"/> when needed.
             </deprecated>
        </member>
        <member name="M:Lucene.Net.Index.IndexWriter.#ctor(Lucene.Net.Store.Directory,System.Boolean,Lucene.Net.Analysis.Analyzer)">
             <summary> Constructs an IndexWriter for the index in
             <c>d</c>, first creating it if it does not
             already exist.  Text will be analyzed with
             <c>a</c>.
             
             </summary>
             <param name="d">the index directory
             </param>
             <param name="autoCommit">see <a href="#autoCommit">above</a>
             </param>
             <param name="a">the analyzer to use
             </param>
             <throws>  CorruptIndexException if the index is corrupt </throws>
             <throws>  LockObtainFailedException if another writer
             has this index open (<c>write.lock</c> could not
             be obtained)
             </throws>
             <throws>  IOException if the directory cannot be
             read/written to or if there is any other low-level
             IO error
             </throws>
             <deprecated> This constructor will be removed in the 3.0 release.
             Use <see cref="M:Lucene.Net.Index.IndexWriter.#ctor(Lucene.Net.Store.Directory,Lucene.Net.Analysis.Analyzer,Lucene.Net.Index.IndexWriter.MaxFieldLength)"/>
             instead, and call <see cref="M:Lucene.Net.Index.IndexWriter.Commit"/> when needed.
             </deprecated>
        </member>
        <member name="M:Lucene.Net.Index.IndexWriter.#ctor(Lucene.Net.Store.Directory,System.Boolean,Lucene.Net.Analysis.Analyzer,System.Boolean)">
             <summary> Constructs an IndexWriter for the index in <c>d</c>.
             Text will be analyzed with <c>a</c>.  If <c>create</c>
             is true, then a new, empty index will be created in
             <c>d</c>, replacing the index already there, if any.
             
             </summary>
             <param name="d">the index directory
             </param>
             <param name="autoCommit">see <a href="#autoCommit">above</a>
             </param>
             <param name="a">the analyzer to use
             </param>
             <param name="create"><c>true</c> to create the index or overwrite
             the existing one; <c>false</c> to append to the existing
             index
             </param>
             <throws>  CorruptIndexException if the index is corrupt </throws>
             <throws>  LockObtainFailedException if another writer
             has this index open (<c>write.lock</c> could not
             be obtained)
             </throws>
             <throws>  IOException if the directory cannot be read/written to, or
             if it does not exist and <c>create</c> is
             <c>false</c> or if there is any other low-level
             IO error
             </throws>
             <deprecated> This constructor will be removed in the 3.0 release.
             Use <see cref="M:Lucene.Net.Index.IndexWriter.#ctor(Lucene.Net.Store.Directory,Lucene.Net.Analysis.Analyzer,System.Boolean,Lucene.Net.Index.IndexWriter.MaxFieldLength)"/>
             instead, and call <see cref="M:Lucene.Net.Index.IndexWriter.Commit"/> when needed.
             </deprecated>
        </member>
        <member name="M:Lucene.Net.Index.IndexWriter.#ctor(Lucene.Net.Store.Directory,Lucene.Net.Analysis.Analyzer,Lucene.Net.Index.IndexDeletionPolicy,Lucene.Net.Index.IndexWriter.MaxFieldLength)">
             <summary> Expert: constructs an IndexWriter with a custom <see cref="T:Lucene.Net.Index.IndexDeletionPolicy"/>
            , for the index in <c>d</c>,
             first creating it if it does not already exist.  Text
             will be analyzed with <c>a</c>.
             
             <p/><b>NOTE</b>: autoCommit (see <a href="#autoCommit">above</a>) is set to false with this
             constructor.
             
             </summary>
             <param name="d">the index directory
             </param>
             <param name="a">the analyzer to use
             </param>
             <param name="deletionPolicy">see <a href="#deletionPolicy">above</a>
             </param>
             <param name="mfl">whether or not to limit field lengths
             </param>
             <throws>  CorruptIndexException if the index is corrupt </throws>
             <throws>  LockObtainFailedException if another writer
             has this index open (<c>write.lock</c> could not
             be obtained)
             </throws>
             <throws>  IOException if the directory cannot be
             read/written to or if there is any other low-level
             IO error
             </throws>
        </member>
        <member name="M:Lucene.Net.Index.IndexWriter.#ctor(Lucene.Net.Store.Directory,System.Boolean,Lucene.Net.Analysis.Analyzer,Lucene.Net.Index.IndexDeletionPolicy)">
             <summary> Expert: constructs an IndexWriter with a custom <see cref="T:Lucene.Net.Index.IndexDeletionPolicy"/>
            , for the index in <c>d</c>,
             first creating it if it does not already exist.  Text
             will be analyzed with <c>a</c>.
             
             </summary>
             <param name="d">the index directory
             </param>
             <param name="autoCommit">see <a href="#autoCommit">above</a>
             </param>
             <param name="a">the analyzer to use
             </param>
             <param name="deletionPolicy">see <a href="#deletionPolicy">above</a>
             </param>
             <throws>  CorruptIndexException if the index is corrupt </throws>
             <throws>  LockObtainFailedException if another writer
             has this index open (<c>write.lock</c> could not
             be obtained)
             </throws>
             <throws>  IOException if the directory cannot be
             read/written to or if there is any other low-level
             IO error
             </throws>
             <deprecated> This constructor will be removed in the 3.0 release.
             Use <see cref="M:Lucene.Net.Index.IndexWriter.#ctor(Lucene.Net.Store.Directory,Lucene.Net.Analysis.Analyzer,Lucene.Net.Index.IndexDeletionPolicy,Lucene.Net.Index.IndexWriter.MaxFieldLength)"/>
             instead, and call <see cref="M:Lucene.Net.Index.IndexWriter.Commit"/> when needed.
             </deprecated>
        </member>
        <member name="M:Lucene.Net.Index.IndexWriter.#ctor(Lucene.Net.Store.Directory,Lucene.Net.Analysis.Analyzer,System.Boolean,Lucene.Net.Index.IndexDeletionPolicy,