lmp.script.eval_txt_ppl#

Use a pre-trained language model to calculate perplexity on a given text.

One must first train a model with the script lmp.script.train_model before running this script.

See also

lmp.model

All available language models.

lmp.script.eval_dset_ppl

Use a pre-trained language model to calculate average perplexity on a particular dataset.

lmp.script.train_model

Train language model.

Examples

The following example uses the pre-trained language model under the experiment my_model_exp to calculate the perplexity of the given text "Hello world". It uses checkpoint number 5000 to perform the evaluation.

python -m lmp.script.eval_txt_ppl \
  --ckpt 5000 \
  --exp_name my_model_exp \
  --txt "Hello world"

The following example calculates perplexity using the last checkpoint of the experiment my_model_exp.

python -m lmp.script.eval_txt_ppl \
  --ckpt -1 \
  --exp_name my_model_exp \
  --txt "Hello world"

You can use the -h or --help option to get a list of supported CLI arguments.

python -m lmp.script.eval_txt_ppl -h
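Perplexity here is the exponential of the average negative log-likelihood that the model assigns to the tokens of the text. The following is a minimal sketch of that computation, independent of the lmp package; the function name `perplexity` and the per-token log-probability input are illustrative assumptions, not part of the lmp API.

```python
import math
from typing import List


def perplexity(token_log_probs: List[float]) -> float:
  # Perplexity is the exponential of the average negative
  # log-likelihood over all tokens in the text.
  n = len(token_log_probs)
  avg_nll = -sum(token_log_probs) / n
  return math.exp(avg_nll)


# A model assigning uniform probability 1/4 to each of 4 tokens
# yields a perplexity of approximately 4.
ppl = perplexity([math.log(0.25)] * 4)
print(ppl)
```

Lower perplexity means the model finds the text more predictable; a uniform model over a vocabulary of size V has perplexity V.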
lmp.script.eval_txt_ppl.main(argv: List[str]) → None

Script entry point.

Parameters

argv (list[str]) – List of CLI arguments.

Return type

None

lmp.script.eval_txt_ppl.parse_args(argv: List[str]) → argparse.Namespace

Parse CLI arguments.

Parameters

argv (list[str]) – List of CLI arguments.

See also

sys.argv

Python CLI arguments interface.

Returns

Parsed CLI arguments.

Return type

argparse.Namespace
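To illustrate the shape of parse_args, the following is a hypothetical reconstruction built only from the flags shown in the examples above (--ckpt, --exp_name, --txt); the real script may define additional arguments, defaults, or help texts.

```python
import argparse
from typing import List


def parse_args(argv: List[str]) -> argparse.Namespace:
  # Hypothetical sketch: argument names come from the CLI examples
  # above; types and help strings are assumptions.
  parser = argparse.ArgumentParser(prog='lmp.script.eval_txt_ppl')
  parser.add_argument('--ckpt', type=int, required=True,
                      help='Checkpoint number; -1 means the last checkpoint.')
  parser.add_argument('--exp_name', type=str, required=True,
                      help='Name of the pre-trained model experiment.')
  parser.add_argument('--txt', type=str, required=True,
                      help='Text on which to calculate perplexity.')
  return parser.parse_args(argv)


args = parse_args(['--ckpt', '5000', '--exp_name', 'my_model_exp',
                   '--txt', 'Hello world'])
print(args.ckpt, args.exp_name, args.txt)
```

Passing argv explicitly (rather than letting argparse read sys.argv) is what makes main(argv) and parse_args(argv) easy to drive from tests.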