Package opennlp.tools.ml.perceptron
Class PerceptronTrainer
- java.lang.Object
-
- opennlp.tools.ml.AbstractTrainer
-
- opennlp.tools.ml.AbstractEventTrainer
-
- opennlp.tools.ml.perceptron.PerceptronTrainer
-
- All Implemented Interfaces:
EventTrainer
public class PerceptronTrainer extends AbstractEventTrainer
Trains models using the perceptron algorithm. Each outcome is represented as a binary perceptron classifier. This supports standard (integer) weighting as well as averaged weighting, as described in: Discriminative Training Methods for Hidden Markov Models: Theory and Experiments with the Perceptron Algorithm. Michael Collins, EMNLP 2002.
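The per-outcome binary classification and parameter averaging described above can be sketched in plain Java. This is a minimal, self-contained illustration of the mistake-driven perceptron update and the averaging idea from Collins (2002), not OpenNLP's actual implementation; the class name PerceptronSketch and the integer feature vectors are invented for the example.

```java
import java.util.Arrays;

// Minimal sketch of a binary perceptron with parameter averaging, in the
// spirit of Collins (2002). Illustration only; OpenNLP's PerceptronTrainer
// works on DataIndexer contexts and trains one such classifier per outcome.
public class PerceptronSketch {

    // Trains on feature vectors x with labels y in {-1, +1} and returns the
    // averaged weight vector (mean of the pass-end weights).
    public static double[] train(int[][] x, int[] y, int iterations) {
        int dim = x[0].length;
        double[] w = new double[dim];      // current weights
        double[] summed = new double[dim]; // running sum for averaging
        int passes = 0;
        for (int it = 0; it < iterations; it++) {
            for (int i = 0; i < x.length; i++) {
                double score = 0;
                for (int j = 0; j < dim; j++) score += w[j] * x[i][j];
                int predicted = score >= 0 ? 1 : -1;
                if (predicted != y[i]) { // mistake-driven update
                    for (int j = 0; j < dim; j++) w[j] += y[i] * x[i][j];
                }
            }
            // accumulate after each pass; averaged weights are more stable
            for (int j = 0; j < dim; j++) summed[j] += w[j];
            passes++;
        }
        double[] avg = new double[dim];
        for (int j = 0; j < dim; j++) avg[j] = summed[j] / passes;
        return avg;
    }

    public static void main(String[] args) {
        // Toy separable data: label is +1 iff the first feature fires;
        // the last feature position acts as a bias term.
        int[][] x = { {1, 1, 1}, {1, 0, 1}, {0, 1, 1}, {0, 0, 1} };
        int[] y  = {  1,         1,        -1,        -1 };
        System.out.println(Arrays.toString(train(x, y, 10)));
    }
}
```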
-
-
Field Summary
Fields
- static String PERCEPTRON_VALUE
- static double TOLERANCE_DEFAULT
Fields inherited from class opennlp.tools.ml.AbstractEventTrainer
DATA_INDEXER_ONE_PASS_REAL_VALUE, DATA_INDEXER_ONE_PASS_VALUE, DATA_INDEXER_PARAM, DATA_INDEXER_TWO_PASS_VALUE
-
Fields inherited from class opennlp.tools.ml.AbstractTrainer
ALGORITHM_PARAM, CUTOFF_DEFAULT, CUTOFF_PARAM, ITERATIONS_DEFAULT, ITERATIONS_PARAM, TRAINER_TYPE_PARAM, VERBOSE_DEFAULT, VERBOSE_PARAM
-
Fields inherited from interface opennlp.tools.ml.EventTrainer
EVENT_VALUE
-
-
Constructor Summary
Constructors
- PerceptronTrainer()
- PerceptronTrainer(TrainingParameters parameters)
-
Method Summary
Methods
- AbstractModel doTrain(DataIndexer indexer)
- boolean isSortAndMerge()
- boolean isValid() (Deprecated.)
- void setSkippedAveraging(boolean averaging): Enables skipped averaging; this flag changes the standard averaging to a special averaging scheme instead.
- void setStepSizeDecrease(double decrease): Enables and sets step size decrease.
- void setTolerance(double tolerance): Specifies the tolerance.
- AbstractModel trainModel(int iterations, DataIndexer di, int cutoff)
- AbstractModel trainModel(int iterations, DataIndexer di, int cutoff, boolean useAverage)
- void validate(): Checks parameters.
Methods inherited from class opennlp.tools.ml.AbstractEventTrainer
getDataIndexer, train, train
-
Methods inherited from class opennlp.tools.ml.AbstractTrainer
getAlgorithm, getCutoff, getIterations, init, init
-
Methods inherited from class java.lang.Object
equals, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
-
Methods inherited from interface opennlp.tools.ml.EventTrainer
init, init
-
-
-
-
Field Detail
-
PERCEPTRON_VALUE
public static final String PERCEPTRON_VALUE
- See Also:
- Constant Field Values
-
TOLERANCE_DEFAULT
public static final double TOLERANCE_DEFAULT
- See Also:
- Constant Field Values
-
-
Constructor Detail
-
PerceptronTrainer
public PerceptronTrainer()
-
PerceptronTrainer
public PerceptronTrainer(TrainingParameters parameters)
-
-
Method Detail
-
validate
public void validate()
Description copied from class: AbstractTrainer
Check parameters. If a subclass overrides this, it should call super.validate();
- Overrides:
validate in class AbstractEventTrainer
-
isValid
@Deprecated public boolean isValid()
Deprecated.
- Overrides:
isValid in class AbstractEventTrainer
- Returns:
-
isSortAndMerge
public boolean isSortAndMerge()
- Specified by:
isSortAndMerge in class AbstractEventTrainer
-
doTrain
public AbstractModel doTrain(DataIndexer indexer) throws IOException
- Specified by:
doTrain in class AbstractEventTrainer
- Throws:
IOException
-
setTolerance
public void setTolerance(double tolerance)
Specifies the tolerance. If the change in training set accuracy is less than this, stop iterating.
- Parameters:
tolerance -
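The stopping rule above can be sketched as a simple check between successive iterations. The accuracy sequence and the class name ToleranceStop are fabricated for illustration; this is only a reading of the criterion, not OpenNLP's code.

```java
// Sketch of the tolerance-based stopping rule: iteration stops once the
// change in training-set accuracy falls below the given tolerance.
public class ToleranceStop {

    // Returns the index of the first iteration whose accuracy change from
    // the previous iteration is below the tolerance, or the last index if
    // the loop never converges.
    public static int stoppingIteration(double[] accuracyPerIter, double tolerance) {
        for (int i = 1; i < accuracyPerIter.length; i++) {
            if (Math.abs(accuracyPerIter[i] - accuracyPerIter[i - 1]) < tolerance) {
                return i; // converged at this iteration
            }
        }
        return accuracyPerIter.length - 1; // ran all iterations
    }

    public static void main(String[] args) {
        // Fabricated accuracy curve that flattens out at the end
        double[] acc = {0.60, 0.75, 0.82, 0.84, 0.8401};
        System.out.println(stoppingIteration(acc, 0.001));
    }
}
```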
-
setStepSizeDecrease
public void setStepSizeDecrease(double decrease)
Enables and sets step size decrease. The step size is decreased every iteration by the specified value.
- Parameters:
decrease - step size decrease in percent
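One plausible reading of a per-iteration decrease "in percent" is multiplicative decay, sketched below. This interpretation is an assumption, not confirmed against the OpenNLP source, and the class name StepSizeDecay is invented for the example.

```java
// Sketch of a per-iteration step-size decrease expressed in percent.
// Assumption (not confirmed from the OpenNLP source): the decrease is
// applied multiplicatively, so each iteration keeps (100 - decrease)%
// of the current step size.
public class StepSizeDecay {

    public static double stepSizeAt(double initial, double decreasePercent, int iteration) {
        double stepSize = initial;
        for (int i = 0; i < iteration; i++) {
            stepSize -= stepSize * decreasePercent / 100.0;
        }
        return stepSize;
    }

    public static void main(String[] args) {
        // With a 10% decrease, the step size after 2 iterations is
        // initial * 0.9 * 0.9
        System.out.println(stepSizeAt(1.0, 10.0, 2));
    }
}
```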
-
setSkippedAveraging
public void setSkippedAveraging(boolean averaging)
Enables skipped averaging; this flag changes the standard averaging to a special averaging scheme instead. If averaging is enabled and the current iteration is one of the first 20 or is a perfect square, the summed parameters are updated.
The reason we don't take all of them is that the parameters change less toward the end of training, so the late iterations would drown out the contributions of the more volatile early iterations. The use of perfect squares allows us to sample from successively farther apart iterations.
- Parameters:
averaging - averaging flag
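The schedule described above (first 20 iterations, then perfect squares only) can be sketched as a predicate over the iteration number. The class name SkippedAveraging is invented for the example; this illustrates the sampling rule as documented, not OpenNLP's internal code.

```java
// Sketch of the "skipped averaging" schedule: the summed parameters are
// updated only on the first 20 iterations or on iterations that are
// perfect squares, sampling successively farther-apart iterations.
public class SkippedAveraging {

    public static boolean isAveragedIteration(int iteration) {
        if (iteration <= 20) return true;
        int root = (int) Math.sqrt(iteration);
        return root * root == iteration; // perfect square
    }

    public static void main(String[] args) {
        // Which of the first 110 iterations contribute to the average?
        StringBuilder sampled = new StringBuilder();
        for (int i = 1; i <= 110; i++) {
            if (isAveragedIteration(i)) sampled.append(i).append(' ');
        }
        System.out.println(sampled.toString().trim());
    }
}
```

Note how the gaps between sampled iterations grow (25, 36, 49, 64, 81, 100, ...), which is the point of using perfect squares.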
-
trainModel
public AbstractModel trainModel(int iterations, DataIndexer di, int cutoff)
-
trainModel
public AbstractModel trainModel(int iterations, DataIndexer di, int cutoff, boolean useAverage)
-
-