package analysis

Version: v0.0.3 Published: Dec 24, 2020 License: Apache-2.0 Imports: 5 Imported by: 0

Documentation

Index

Constants

This section is empty.

Variables

var DEFAULT_TOKEN_ATTRIBUTE_FACTORY = assembleAttributeFactory(
	DEFAULT_ATTRIBUTE_FACTORY,
	map[string]bool{
		"CharTermAttribute":          true,
		"TermToBytesRefAttribute":    true,
		"TypeAttribute":              true,
		"PositionIncrementAttribute": true,
		"PositionLengthAttribute":    true,
		"OffsetAttribute":            true,
	},
	NewPackedTokenAttribute,
)

var GLOBAL_REUSE_STRATEGY = new(GlobalReuseStrategy)

A predefined ReuseStrategy that reuses the same components for every field

var ILLEGAL_STATE_READER = new(illegalStateReader)

var PER_FIELD_REUSE_STRATEGY = &PerFieldReuseStrategy{}

A predefined ReuseStrategy that reuses components per-field by maintaining a map of TokenStreamComponents per field name.
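
For illustration only, a minimal sketch of choosing one of these predefined strategies when constructing an analyzer (newPerFieldAnalyzer is a hypothetical helper, written as if inside this package; see NewAnalyzerWithStrategy below):

// newPerFieldAnalyzer builds an analyzer that caches TokenStreamComponents
// per field name rather than sharing one set of components globally.
func newPerFieldAnalyzer() *AnalyzerImpl {
	return NewAnalyzerWithStrategy(PER_FIELD_REUSE_STRATEGY)
}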

Functions

This section is empty.

Types

type Analyzer

type Analyzer interface {
	TokenStreamForReader(string, io.RuneReader) (TokenStream, error)
	// Returns a TokenStream suitable for fieldName, tokenizing the
	// contents of text.
	//
	// This method uses createComponents(string, Reader) to obtain an
	// instance of TokenStreamComponents. It returns the sink of the
	// components and stores the components internally. Subsequent
	// calls to this method will reuse the previously stored components
	// after resetting them through TokenStreamComponents.SetReader(Reader).
	//
	// NOTE: After calling this method, the consumer must follow the
	// workflow described in TokenStream to properly consume its
	// contents. See the Analysis package documentation for some
	// examples demonstrating this.
	TokenStreamForString(fieldName, text string) (TokenStream, error)
	PositionIncrementGap(string) int
	OffsetGap(string) int
}

An Analyzer builds TokenStreams, which analyze text. It thus represents a policy for extracting index terms from text.

In order to define what analysis is done, subclasses must define their TokenStreamComponents in CreateComponents(string, Reader). The components are then reused in each call to TokenStream(string, Reader).

Also note that one should Clone() the Analyzer for each goroutine if the default ReuseStrategy is used.
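
As a hedged sketch (not part of this package): a custom analyzer can presumably be assembled by supplying an AnalyzerSPI implementation through the exported Spi field of AnalyzerImpl. Here newMyTokenizer stands for a concrete Tokenizer subclass defined elsewhere, the io import is assumed, and it is assumed that such a tokenizer satisfies the source parameter of NewTokenStreamComponents:

type myAnalyzerSPI struct{}

// CreateComponents builds the analysis chain for a field. The tokenizer is
// also returned as the sink, i.e. no TokenFilters are chained after it.
func (s *myAnalyzerSPI) CreateComponents(fieldName string, reader io.RuneReader) *TokenStreamComponents {
	tok := newMyTokenizer(reader) // hypothetical Tokenizer subclass
	return NewTokenStreamComponents(tok, tok)
}

// InitReader adds no CharFilter chain and returns the reader unchanged.
func (s *myAnalyzerSPI) InitReader(fieldName string, reader io.RuneReader) io.RuneReader {
	return reader
}

// newMyAnalyzer wires the custom SPI into a default AnalyzerImpl.
func newMyAnalyzer() *AnalyzerImpl {
	a := NewAnalyzer()
	a.Spi = &myAnalyzerSPI{}
	return a
}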

type AnalyzerImpl

type AnalyzerImpl struct {
	Spi AnalyzerSPI
	// contains filtered or unexported fields
}

func NewAnalyzer

func NewAnalyzer() *AnalyzerImpl

Create a new Analyzer, reusing the same set of components per-thread across calls to TokenStream(string, Reader).

func NewAnalyzerWithStrategy

func NewAnalyzerWithStrategy(reuseStrategy ReuseStrategy) *AnalyzerImpl

func (*AnalyzerImpl) CreateComponents

func (a *AnalyzerImpl) CreateComponents(fieldName string, reader io.RuneReader) *TokenStreamComponents

func (*AnalyzerImpl) InitReader

func (a *AnalyzerImpl) InitReader(fieldName string, reader io.RuneReader) io.RuneReader

func (*AnalyzerImpl) OffsetGap

func (a *AnalyzerImpl) OffsetGap(fieldName string) int

func (*AnalyzerImpl) PositionIncrementGap

func (a *AnalyzerImpl) PositionIncrementGap(fieldName string) int

func (*AnalyzerImpl) SetVersion

func (a *AnalyzerImpl) SetVersion(v util.Version)

func (*AnalyzerImpl) TokenStreamForReader

func (a *AnalyzerImpl) TokenStreamForReader(fieldName string, reader io.RuneReader) (TokenStream, error)

func (*AnalyzerImpl) TokenStreamForString

func (a *AnalyzerImpl) TokenStreamForString(fieldName, text string) (TokenStream, error)

func (*AnalyzerImpl) Version

func (a *AnalyzerImpl) Version() util.Version

type AnalyzerSPI

type AnalyzerSPI interface {
	// Creates a new TokenStreamComponents instance for this analyzer.
	CreateComponents(fieldName string, reader io.RuneReader) *TokenStreamComponents
	// Override this if you want to add a CharFilter chain.
	//
	// The default implementation returns reader unchanged.
	InitReader(fieldName string, reader io.RuneReader) io.RuneReader
}

type CachingTokenFilter

type CachingTokenFilter struct {
	*TokenFilter
	// contains filtered or unexported fields
}

func NewCachingTokenFilter

func NewCachingTokenFilter(input TokenStream) *CachingTokenFilter

func (*CachingTokenFilter) IncrementToken

func (f *CachingTokenFilter) IncrementToken() (bool, error)

func (*CachingTokenFilter) Reset

func (f *CachingTokenFilter) Reset()

type CharFilterService

type CharFilterService interface {
	// Chains the corrected offset through the input CharFilter(s).
	CorrectOffset(int) int
}

type GlobalReuseStrategy

type GlobalReuseStrategy struct {
	*ReuseStrategyImpl
}

func (*GlobalReuseStrategy) ReusableComponents

func (rs *GlobalReuseStrategy) ReusableComponents(a *AnalyzerImpl, fieldName string) *TokenStreamComponents

func (*GlobalReuseStrategy) SetReusableComponents

func (rs *GlobalReuseStrategy) SetReusableComponents(a *AnalyzerImpl, fieldName string, components *TokenStreamComponents)

type PerFieldReuseStrategy

type PerFieldReuseStrategy struct {
}

Implementation of ReuseStrategy that reuses components per-field by maintaining a map of TokenStreamComponents per field name.

func (*PerFieldReuseStrategy) ReusableComponents

func (rs *PerFieldReuseStrategy) ReusableComponents(a *AnalyzerImpl, fieldName string) *TokenStreamComponents

func (*PerFieldReuseStrategy) SetReusableComponents

func (rs *PerFieldReuseStrategy) SetReusableComponents(a *AnalyzerImpl, fieldName string, components *TokenStreamComponents)

type ReusableStringReader

type ReusableStringReader struct {
	// contains filtered or unexported fields
}

Internal class to enable reuse of the string reader by Analyzer.TokenStreamForString().

func (*ReusableStringReader) Close

func (r *ReusableStringReader) Close() error

func (*ReusableStringReader) Read

func (r *ReusableStringReader) Read(p []byte) (int, error)

func (*ReusableStringReader) ReadRune

func (r *ReusableStringReader) ReadRune() (rune, int, error)

type ReuseStrategy

type ReuseStrategy interface {
	// Gets the reusable TokenStreamComponents for the field with the
	// given name.
	ReusableComponents(*AnalyzerImpl, string) *TokenStreamComponents
	// Stores the given TokenStreamComponents as the reusable
	// components for the field with the given name.
	SetReusableComponents(*AnalyzerImpl, string, *TokenStreamComponents)
}

Strategy defining how TokenStreamComponents are reused per call to TokenStream(string, io.Reader)
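
As a sketch only, a strategy that disables reuse entirely might look like the following, assuming the analyzer treats a nil result from ReusableComponents as a cache miss and then falls back to CreateComponents:

// noReuseStrategy never caches components, so fresh TokenStreamComponents
// are created for every call.
type noReuseStrategy struct{}

// ReusableComponents always reports a cache miss.
func (noReuseStrategy) ReusableComponents(a *AnalyzerImpl, fieldName string) *TokenStreamComponents {
	return nil
}

// SetReusableComponents deliberately stores nothing.
func (noReuseStrategy) SetReusableComponents(a *AnalyzerImpl, fieldName string, components *TokenStreamComponents) {
}

Such a strategy could then be passed to NewAnalyzerWithStrategy in place of GLOBAL_REUSE_STRATEGY or PER_FIELD_REUSE_STRATEGY.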

type ReuseStrategyImpl

type ReuseStrategyImpl struct {
}

type TokenFilter

type TokenFilter struct {
	*TokenStreamImpl
	// contains filtered or unexported fields
}

A TokenFilter is a TokenStream whose input is another TokenStream.

This is an abstract class; subclasses must override IncrementToken().

func NewTokenFilter

func NewTokenFilter(input TokenStream) *TokenFilter

Construct a token stream filtering the given input.
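
For illustration, a hypothetical filter that passes through at most a fixed number of tokens from its input (the input stream is kept in an extra field because the field embedded by NewTokenFilter is unexported):

// limitTokenFilter forwards at most max tokens from the wrapped stream.
type limitTokenFilter struct {
	*TokenFilter
	input TokenStream
	max   int
	count int
}

func newLimitTokenFilter(input TokenStream, max int) *limitTokenFilter {
	return &limitTokenFilter{TokenFilter: NewTokenFilter(input), input: input, max: max}
}

// IncrementToken delegates to the wrapped stream until the limit is reached.
func (f *limitTokenFilter) IncrementToken() (bool, error) {
	if f.count >= f.max {
		return false, nil
	}
	ok, err := f.input.IncrementToken()
	if ok {
		f.count++
	}
	return ok, err
}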

func (*TokenFilter) Close

func (f *TokenFilter) Close() error

func (*TokenFilter) End

func (f *TokenFilter) End() error

func (*TokenFilter) Reset

func (f *TokenFilter) Reset() error

type TokenStream

type TokenStream interface {
	// Releases resources associated with this stream.
	//
	// If you override this method, always call TokenStreamImpl.Close(),
	// otherwise some internal state will not be correctly reset (e.g.,
	// Tokenizer will panic on reuse).
	io.Closer
	Attributes() *util.AttributeSource
	// Consumers (i.e., IndexWriter) use this method to advance the
	// stream to the next token. Implementing classes must implement
	// this method and update the appropriate AttributeImpls with the
	// attributes of the next token.
	//
	// The producer must make no assumptions about the attributes after
	// the method has returned: the caller may arbitrarily change
	// them. If the producer needs to preserve the state for subsequent
	// calls, it can use captureState to create a copy of the current
	// attribute state.
	//
	// This method is called for every token of a document, so an
	// efficient implementation is crucial for good performance. To
	// avoid calls to AddAttribute(Class) and GetAttribute(Class),
	// references to all AttributeImpls that this stream uses should be
	// retrieved during instantiation.
	//
	// To ensure that filters and consumers know which attributes are
	// available, the attributes must be added during instantiation.
	// Filters and consumers are not required to check for availability
	// of attributes in IncrementToken().
	IncrementToken() (bool, error)
	// This method is called by the consumer after the last token has
	// been consumed, after IncrementToken() returned false (using the
	// new TokenStream API). Streams implementing the old API should
	// upgrade to use this feature.
	//
	// This method can be used to perform any end-of-stream operations,
	// such as setting the final offset of a stream. The final offset
	// of a stream might differ from the offset of the last token, e.g.,
	// when one or more whitespace characters followed the last token
	// but a WhitespaceTokenizer was used.
	//
	// Additionally any skipped positions (such as those removed by a
	// stopFilter) can be applied to the position increment, or any
	// adjustment of other attributes where the end-of-stream value may
	// be important.
	//
	// If you override this method, always call TokenStreamImpl.End().
	End() error
	// This method is called by a consumer before it begins consumption
	// using IncrementToken().
	//
	// Resets this stream to a clean state. Stateful implementations
	// must implement this method so that they can be reused, just as
	// if they had been created fresh.
	//
	// If you override this method, always call TokenStreamImpl.Reset(),
	// otherwise some internal state will not be correctly reset (e.g.,
	// Tokenizer will panic on further usage).
	Reset() error
}

A TokenStream enumerates the sequence of tokens, either from Fields of a Document or from query text.

This is an abstract class; concrete subclasses are:

  • Tokenizer, a TokenStream whose input is a Reader; and
  • TokenFilter, a TokenStream whose input is another TokenStream.

A new TokenStream API has been introduced with Lucene 2.9. This API has moved from being Token-based to Attribute-based. While Token still exists in 2.9 as a convenience class, the preferred way to store the information of a Token is to use AttributeImpls.

TokenStream now extends AttributeSource, which provides access to all of the token Attributes for the TokenStream. Note that only one instance per AttributeImpl is created and reused for every token. This approach reduces object creation and allows local caching of references to the AttributeImpls. See IncrementToken() for further details.

The workflow of the new TokenStream API is as follows:

  1. Instantiate the TokenStream/TokenFilters, which add/get attributes to/from the AttributeSource.
  2. The consumer calls Reset().
  3. The consumer retrieves attributes from the stream and stores local references to all attributes it wants to access.
  4. The consumer calls IncrementToken() until it returns false, consuming the attributes after each call.
  5. The consumer calls End() so that any end-of-stream operations can be performed.
  6. The consumer calls Close() to release any resources when finished using the TokenStream.

To make sure that filters and consumers know which attributes are available, the attributes must be added during instantiation. Filters and consumers are not required to check for availability of attributes in IncrementToken().

You can find some example code for the new API in the analysis package documentation.

Sometimes it is desirable to capture the current state of a TokenStream, e.g., for buffering purposes (see CachingTokenFilter, TeeSinkTokenFilter). For this use case, AttributeSource's captureState and restoreState can be used.

The TokenStream API is based on the decorator pattern. In the original Java implementation, all non-abstract subclasses must be final or at least have a final implementation of incrementToken; this is checked when Java assertions are enabled.
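
A minimal sketch of that consumer workflow, using only the methods declared on the TokenStream interface above (attribute inspection is left as a comment because attribute lookup belongs to the util.AttributeSource API):

// consumeAll drives a TokenStream through the Reset/IncrementToken/End/Close
// workflow described above.
func consumeAll(ts TokenStream) error {
	if err := ts.Reset(); err != nil {
		return err
	}
	for {
		ok, err := ts.IncrementToken()
		if err != nil {
			return err
		}
		if !ok {
			break // stream exhausted
		}
		// Inspect the current token's attributes here, e.g. via ts.Attributes().
	}
	if err := ts.End(); err != nil {
		return err
	}
	return ts.Close()
}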

type TokenStreamComponents

type TokenStreamComponents struct {

	// Resets the encapsulated components with the given reader. If the
	// components cannot be reset, an error should be returned.
	SetReader func(io.RuneReader) error
	// contains filtered or unexported fields
}

This class encapsulates the outer components of a token stream. It provides access to the source Tokenizer and the outer end (sink), an instance of TokenFilter which also serves as the TokenStream returned by Analyzer.tokenStream(string, Reader).

func NewTokenStreamComponents

func NewTokenStreamComponents(source myTokenizer, result TokenStream) *TokenStreamComponents

func (*TokenStreamComponents) TokenStream

func (cp *TokenStreamComponents) TokenStream() TokenStream

Returns the sink TokenStream

type TokenStreamImpl

type TokenStreamImpl struct {
	// contains filtered or unexported fields
}

func NewTokenStream

func NewTokenStream() *TokenStreamImpl

A TokenStream using the default attribute factory.

func NewTokenStreamWith

func NewTokenStreamWith(input *util.AttributeSource) *TokenStreamImpl

A TokenStream that uses the same attributes as the supplied one.

func (*TokenStreamImpl) Attributes

func (ts *TokenStreamImpl) Attributes() *util.AttributeSource

func (*TokenStreamImpl) Close

func (ts *TokenStreamImpl) Close() error

func (*TokenStreamImpl) End

func (ts *TokenStreamImpl) End() error

func (*TokenStreamImpl) Reset

func (ts *TokenStreamImpl) Reset() error

type Tokenizer

type Tokenizer struct {
	*TokenStreamImpl
	// The text source for this Tokenizer
	Input io.RuneReader
	// contains filtered or unexported fields
}

A Tokenizer is a TokenStream whose input is a Reader.

This is an abstract class; subclasses must override IncrementToken()

NOTE: Subclasses overriding IncrementToken() must call Attributes().ClearAttributes() before setting attributes.
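
A hedged skeleton of such a subclass, splitting on whitespace (assumes the io and unicode imports; populating the term and offset attributes is only indicated by a comment, since the util.AttributeSource API is documented elsewhere):

// whitespaceTokenizer is a hypothetical Tokenizer subclass that emits
// whitespace-separated tokens read from Input.
type whitespaceTokenizer struct {
	*Tokenizer
}

func newWhitespaceTokenizer(input io.RuneReader) *whitespaceTokenizer {
	return &whitespaceTokenizer{Tokenizer: NewTokenizer(input)}
}

func (t *whitespaceTokenizer) IncrementToken() (bool, error) {
	t.Attributes().ClearAttributes() // clear before setting, per the note above
	var term []rune
	for {
		r, _, err := t.Input.ReadRune()
		if err == io.EOF {
			break
		}
		if err != nil {
			return false, err
		}
		if unicode.IsSpace(r) {
			if len(term) > 0 {
				break // end of the current token
			}
			continue // skip leading whitespace
		}
		term = append(term, r)
	}
	if len(term) == 0 {
		return false, nil // no more tokens
	}
	// Copy term into the stream's CharTermAttribute and set offsets here.
	return true, nil
}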

func NewTokenizer

func NewTokenizer(input io.RuneReader) *Tokenizer

Constructs a token stream processing the given input.

func (*Tokenizer) Close

func (t *Tokenizer) Close() error

func (*Tokenizer) CorrectOffset

func (t *Tokenizer) CorrectOffset(currentOff int) int

Return the corrected offset. If input is a CharFilter subclass, this method calls CharFilter.correctOffset(), else returns currentOff.

func (*Tokenizer) Reset

func (t *Tokenizer) Reset() error

func (*Tokenizer) SetReader

func (t *Tokenizer) SetReader(input io.RuneReader) error

Expert: Set a new reader on the Tokenizer. Typically, an analyzer (in its tokenStream method) will use this to re-use a previously created tokenizer.
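
A short sketch of that reuse pattern (assumes the strings import; tok stands for a concrete Tokenizer subclass such as the one sketched above, and consumption of the stream is elided):

// retarget points an existing tokenizer at new input instead of allocating a
// new one, then resets it so it can be consumed again.
func retarget(tok *Tokenizer, text string) error {
	if err := tok.SetReader(strings.NewReader(text)); err != nil {
		return err
	}
	return tok.Reset()
}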
