public class NGramTokenizer extends Tokenizer
Unlike NGramTokenFilter, this class sets offsets so
that the characters between startOffset and endOffset in the original stream are
the same as the term chars.
For example, "abcde" would be tokenized as (minGram=2, maxGram=3):
| Term | ab | abc | bc | bcd | cd | cde | de |
|---|---|---|---|---|---|---|---|
| Position increment | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| Position length | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| Offsets | [0,2[ | [0,3[ | [1,3[ | [1,4[ | [2,4[ | [2,5[ | [3,5[ |
This tokenizer changed a lot in Lucene 4.4 in order to pre-tokenize the stream
before computing n-grams. Additionally, this class no longer trims trailing whitespace, and tokens are emitted in a different order: they are now emitted by increasing start offset, whereas they used to be emitted by increasing length (which prevented supporting large input streams).
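The emission order and offsets in the table above can be reproduced with a small standalone sketch (plain Java, not the Lucene implementation, which streams tokens one at a time through incrementToken() and works on code points rather than chars): for each start offset, every gram size from minGram to maxGram that still fits is emitted before moving to the next start offset.

```java
import java.util.ArrayList;
import java.util.List;

// Standalone sketch of the emission order described above: grams are
// emitted by increasing start offset, and the term always equals the
// chars at offsets [start, start + size) in the original input.
class NGramSketch {
    static List<String> ngrams(String input, int minGram, int maxGram) {
        List<String> terms = new ArrayList<>();
        for (int start = 0; start + minGram <= input.length(); start++) {
            for (int size = minGram; size <= maxGram && start + size <= input.length(); size++) {
                terms.add(input.substring(start, start + size));
            }
        }
        return terms;
    }

    public static void main(String[] args) {
        // matches the table: ab, abc, bc, bcd, cd, cde, de
        System.out.println(ngrams("abcde", 2, 3));
    }
}
```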
Nested classes inherited from class AttributeSource: AttributeSource.State

| Modifier and Type | Field and Description |
|---|---|
| private int[] | buffer |
| private int | bufferEnd |
| private int | bufferStart |
| private CharacterUtils.CharacterBuffer | charBuffer |
| static int | DEFAULT_MAX_NGRAM_SIZE |
| static int | DEFAULT_MIN_NGRAM_SIZE |
| private boolean | edgesOnly |
| private boolean | exhausted |
| private int | gramSize |
| private int | lastCheckedChar |
| private int | lastNonTokenChar |
| private int | maxGram |
| private int | minGram |
| private int | offset |
| private OffsetAttribute | offsetAtt |
| private PositionIncrementAttribute | posIncAtt |
| private PositionLengthAttribute | posLenAtt |
| private CharTermAttribute | termAtt |
Fields inherited from class Tokenizer: DEFAULT_TOKEN_ATTRIBUTE_FACTORY

| Constructor and Description |
|---|
| NGramTokenizer() - Creates NGramTokenizer with default min and max n-grams. |
| NGramTokenizer(AttributeFactory factory, int minGram, int maxGram) - Creates NGramTokenizer with given min and max n-grams. |
| NGramTokenizer(AttributeFactory factory, int minGram, int maxGram, boolean edgesOnly) |
| NGramTokenizer(int minGram, int maxGram) - Creates NGramTokenizer with given min and max n-grams. |
| NGramTokenizer(int minGram, int maxGram, boolean edgesOnly) |
| Modifier and Type | Method and Description |
|---|---|
| private void | consume() - Consume one code point. |
| void | end() - This method is called by the consumer after the last token has been consumed, after TokenStream.incrementToken() returned false (using the new TokenStream API). |
| boolean | incrementToken() - Consumers (i.e., IndexWriter) use this method to advance the stream to the next token. |
| private void | init(int minGram, int maxGram, boolean edgesOnly) |
| protected boolean | isTokenChar(int chr) - Only collect characters which satisfy this condition. |
| void | reset() - This method is called by a consumer before it begins consumption using TokenStream.incrementToken(). |
| private void | updateLastNonTokenChar() |
Methods inherited from class Tokenizer: close, correctOffset, setReader

Methods inherited from class AttributeSource: addAttribute, addAttributeImpl, captureState, clearAttributes, cloneAttributes, copyTo, endAttributes, equals, getAttribute, getAttributeClassesIterator, getAttributeFactory, getAttributeImplsIterator, hasAttribute, hasAttributes, hashCode, reflectAsString, reflectWith, removeAllAttributes, restoreState, toString

public static final int DEFAULT_MIN_NGRAM_SIZE
public static final int DEFAULT_MAX_NGRAM_SIZE
private CharacterUtils.CharacterBuffer charBuffer
private int[] buffer
private int bufferStart
private int bufferEnd
private int offset
private int gramSize
private int minGram
private int maxGram
private boolean exhausted
private int lastCheckedChar
private int lastNonTokenChar
private boolean edgesOnly
private final CharTermAttribute termAtt
private final PositionIncrementAttribute posIncAtt
private final PositionLengthAttribute posLenAtt
private final OffsetAttribute offsetAtt
NGramTokenizer(int minGram,
int maxGram,
boolean edgesOnly)
public NGramTokenizer(int minGram,
int maxGram)
Parameters:
minGram - the smallest n-gram to generate
maxGram - the largest n-gram to generate

NGramTokenizer(AttributeFactory factory, int minGram, int maxGram, boolean edgesOnly)
public NGramTokenizer(AttributeFactory factory, int minGram, int maxGram)
Parameters:
factory - the AttributeFactory to use
minGram - the smallest n-gram to generate
maxGram - the largest n-gram to generate

public NGramTokenizer()
Creates NGramTokenizer with default min and max n-grams.
private void init(int minGram,
int maxGram,
boolean edgesOnly)
public final boolean incrementToken()
throws java.io.IOException
Description copied from class: TokenStream
Consumers (i.e., IndexWriter) use this method to advance the stream to
the next token. Implementing classes must implement this method and update
the appropriate AttributeImpls with the attributes of the next
token.
The producer must make no assumptions about the attributes after the method
has returned: the caller may arbitrarily change them. If the producer
needs to preserve the state for subsequent calls, it can use
AttributeSource.captureState() to create a copy of the current attribute state.
This method is called for every token of a document, so an efficient
implementation is crucial for good performance. To avoid calls to
AttributeSource.addAttribute(Class) and AttributeSource.getAttribute(Class),
references to all AttributeImpls that this stream uses should be
retrieved during instantiation.
To ensure that filters and consumers know which attributes are available,
the attributes must be added during instantiation. Filters and consumers
are not required to check for availability of attributes in
TokenStream.incrementToken().
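The consumer-side contract described here can be sketched with a minimal stand-in stream (plain Java, no Lucene dependency; in the real API the consumer reads token data through attributes such as CharTermAttribute, fetched once up front, rather than from a returned value):

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Minimal stand-in illustrating the incrementToken() contract: the stream
// advances to the next token and returns false once it is exhausted.
class SketchStream {
    private final Iterator<String> tokens;
    private String term; // plays the role of CharTermAttribute

    SketchStream(List<String> tokens) {
        this.tokens = tokens.iterator();
    }

    // Advance to the next token; false signals end of stream.
    boolean incrementToken() {
        if (!tokens.hasNext()) return false;
        term = tokens.next();
        return true;
    }

    String term() {
        return term;
    }

    // The canonical consumption loop: pull tokens until false is returned.
    static List<String> consume(SketchStream stream) {
        List<String> out = new ArrayList<>();
        while (stream.incrementToken()) {
            out.add(stream.term());
        }
        return out;
    }
}
```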
Specified by:
incrementToken in class TokenStream
Throws:
java.io.IOException

private void updateLastNonTokenChar()
private void consume()
protected boolean isTokenChar(int chr)
public final void end()
throws java.io.IOException
Description copied from class: TokenStream
This method is called by the consumer after the last token has been consumed,
after TokenStream.incrementToken() returned false
(using the new TokenStream API). Streams implementing the old API
should upgrade to use this feature.
This method can be used to perform any end-of-stream operations, such as setting the final offset of a stream. The final offset of a stream might differ from the offset of the last token, e.g., when one or more whitespace characters followed the last token and a WhitespaceTokenizer was used.
Additionally, any skipped positions (such as those removed by a stop filter) can be applied to the position increment, as can any adjustment of other attributes where the end-of-stream value may be important.
If you override this method, always call super.end().
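The final-offset point above can be made concrete with a standalone computation (not Lucene code; the whitespace-splitting rule here stands in for a WhitespaceTokenizer): for an input with trailing whitespace, the last token's end offset is smaller than the final offset that end() would report.

```java
// Standalone illustration: the last token's end offset vs. the stream's
// final offset when trailing whitespace follows the last token.
class FinalOffsetSketch {
    // Returns {lastTokenEnd, finalOffset} under whitespace tokenization.
    static int[] offsets(String input) {
        int lastTokenEnd = 0;
        for (int i = 0; i < input.length(); i++) {
            if (!Character.isWhitespace(input.charAt(i))) {
                lastTokenEnd = i + 1; // end offsets are exclusive
            }
        }
        // end() would report the total number of chars consumed
        return new int[] { lastTokenEnd, input.length() };
    }
}
```

For "ab cd " the last token "cd" ends at offset 5, while the final offset is 6.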
Overrides:
end in class TokenStream
Throws:
java.io.IOException - If an I/O error occurs

public final void reset()
throws java.io.IOException
Description copied from class: TokenStream
This method is called by a consumer before it begins consumption using
TokenStream.incrementToken().
Resets this stream to a clean state. Stateful implementations must implement this method so that they can be reused, just as if they had been created fresh.
If you override this method, always call super.reset(), otherwise
some internal state will not be correctly reset (e.g., Tokenizer will
throw IllegalStateException on further usage).