Package opennlp.tools.tokenize
Class WordpieceTokenizer

java.lang.Object
    opennlp.tools.tokenize.WordpieceTokenizer

All Implemented Interfaces:
    Tokenizer
A Tokenizer implementation which performs tokenization using word pieces.

Adapted under the MIT license from https://github.com/robrua/easy-bert.
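The following is a minimal usage sketch, not taken from the OpenNLP documentation: the vocabulary entries, the input sentence, and the comments about possible output are illustrative assumptions (wordpiece vocabularies conventionally mark sub-word continuations with a "##" prefix, but the pieces actually produced depend entirely on the vocabulary supplied).

    import java.util.Set;

    import opennlp.tools.tokenize.WordpieceTokenizer;

    public class WordpieceOverview {

        public static void main(String[] args) {
            // Illustrative vocabulary; a real one would typically come from a
            // BERT-style vocabulary file with tens of thousands of entries.
            Set<String> vocabulary =
                    Set.of("[UNK]", "the", "dog", "is", "play", "##ing");

            WordpieceTokenizer tokenizer = new WordpieceTokenizer(vocabulary);

            // Words present in the vocabulary stay whole; other words may be
            // split into smaller pieces (or mapped to an unknown token),
            // depending on which pieces the vocabulary contains.
            String[] pieces = tokenizer.tokenize("the dog is playing");
            for (String piece : pieces) {
                System.out.println(piece);
            }
        }
    }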
Constructor Summary
ConstructorDescriptionWordpieceTokenizer
(Set<String> vocabulary) WordpieceTokenizer
(Set<String> vocabulary, int maxTokenLength) -
Method Summary

    int getMaxTokenLength()
        Returns the maximum token length.

    String[] tokenize(String text)
        Splits a string into its atomic parts.

    Span[] tokenizePos(String text)
        Finds the boundaries of atomic parts in a string.
Constructor Details

WordpieceTokenizer

    public WordpieceTokenizer(Set<String> vocabulary)

    Parameters:
        vocabulary - A set of tokens considered the vocabulary.

WordpieceTokenizer

    public WordpieceTokenizer(Set<String> vocabulary, int maxTokenLength)

    Parameters:
        vocabulary - A set of tokens considered the vocabulary.
        maxTokenLength - A non-negative number used as the maximum token length.
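A sketch of how the two constructors might be called: the vocab.txt file name and its one-token-per-line format are assumptions (that layout is common for BERT-style vocabularies but is not required by anything on this page), and 512 is just an example value for maxTokenLength.

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.HashSet;
    import java.util.Set;

    import opennlp.tools.tokenize.WordpieceTokenizer;

    public class WordpieceConstruction {

        public static void main(String[] args) throws IOException {
            // Assumed vocabulary format: one entry per line.
            Set<String> vocabulary =
                    new HashSet<>(Files.readAllLines(Paths.get("vocab.txt")));

            // Vocabulary only; the maximum token length is left at its default.
            WordpieceTokenizer tokenizer = new WordpieceTokenizer(vocabulary);

            // Vocabulary plus an explicit, non-negative maximum token length.
            WordpieceTokenizer boundedTokenizer =
                    new WordpieceTokenizer(vocabulary, 512);
        }
    }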
Method Details

tokenizePos

    public Span[] tokenizePos(String text)

    Description copied from interface: Tokenizer
    Finds the boundaries of atomic parts in a string.

    Specified by:
        tokenizePos in interface Tokenizer
    Parameters:
        text - The string to be tokenized.
    Returns:
        The spans (offsets into text) for each token as the individual array elements.
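A sketch of reading the returned offsets back against the input, assuming tokenizePos follows the documented Tokenizer contract of returning offsets into the original string; the tiny vocabulary and the sentence are made up, and the getStart()/getEnd() accessors come from opennlp.tools.util.Span.

    import java.util.Set;

    import opennlp.tools.tokenize.WordpieceTokenizer;
    import opennlp.tools.util.Span;

    public class WordpieceSpans {

        public static void main(String[] args) {
            // Illustrative vocabulary.
            Set<String> vocabulary = Set.of("[UNK]", "hello", "world");

            WordpieceTokenizer tokenizer = new WordpieceTokenizer(vocabulary);

            String text = "hello world";
            Span[] spans = tokenizer.tokenizePos(text);

            // Each Span holds a begin/end offset pair into the input string.
            for (Span span : spans) {
                System.out.println(span.getStart() + ".." + span.getEnd() + " -> "
                        + text.substring(span.getStart(), span.getEnd()));
            }
        }
    }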
tokenize

    public String[] tokenize(String text)

    Description copied from interface: Tokenizer
    Splits a string into its atomic parts.

    Specified by:
        tokenize in interface Tokenizer
getMaxTokenLength

    public int getMaxTokenLength()

    Returns:
        The maximum token length.