Evaluate language model perplexity on a given text
Use a pre-trained language model to calculate perplexity on a given text.
One must first run the script lmp.script.train_model before running this script.
See also
- lmp.model
All available language models.
- lmp.script.eval_dset_ppl
Use a pre-trained language model to calculate the average perplexity on a particular dataset.
- lmp.script.train_model
Train language model.
Examples
The following example uses the pre-trained language model under the experiment my_model_exp to calculate the perplexity of the given text "Hello world".
It uses checkpoint number 5000 to perform evaluation.
python -m lmp.script.eval_txt_ppl \
--ckpt 5000 \
--exp_name my_model_exp \
--txt "Hello world"
The following example calculates perplexity using the last checkpoint of the experiment my_model_exp.
python -m lmp.script.eval_txt_ppl \
--ckpt -1 \
--exp_name my_model_exp \
--txt "Hello world"
You can use the -h or --help options to get a list of supported CLI arguments.
python -m lmp.script.eval_txt_ppl -h
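The internals of the script are not shown here, but the quantity it reports is standard: perplexity is the exponential of the negative mean log-probability the model assigns to the tokens of the text. A minimal, library-independent sketch (the token log-probabilities here are made-up illustrative values, not real model output):

```python
import math


def perplexity(log_probs: list[float]) -> float:
  """Perplexity = exp(-mean per-token log-probability)."""
  return math.exp(-sum(log_probs) / len(log_probs))


# Hypothetical per-token log-probabilities for a 2-token text.
log_probs = [math.log(0.25), math.log(0.5)]
# Equals 1 / geometric_mean(0.25, 0.5) = 1 / sqrt(0.125) ~ 2.828.
print(perplexity(log_probs))
```

Lower perplexity means the model finds the text more predictable; a perplexity of k roughly means the model is as uncertain as if it were choosing uniformly among k tokens at each step.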