lmp.script.gen_txt#

Use a pre-trained language model checkpoint to generate continual text of a given text segment.

One must first run the script lmp.script.train_model before running this script. This script uses a pre-trained language model checkpoint to generate continual text of a given text segment. Most inference (generation) methods are stochastic processes; only some are deterministic.
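To illustrate the stochastic/deterministic distinction, here is a minimal sketch (not the package's actual implementation) of two common decoding rules over a next-token probability distribution: top-1 always picks the most probable token id, while top-k samples among the k most probable ones. The function names and the toy distribution below are purely illustrative.

```python
import random

def top_1(probs):
    """Deterministic: always return the highest-probability token id."""
    return max(range(len(probs)), key=lambda i: probs[i])

def top_k(probs, k, rng=random):
    """Stochastic: sample one token id among the k highest-probability ones."""
    top_ids = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:k]
    weights = [probs[i] for i in top_ids]
    return rng.choices(top_ids, weights=weights, k=1)[0]

# Toy next-token distribution over a vocabulary of 4 token ids.
probs = [0.1, 0.6, 0.2, 0.1]
print(top_1(probs))        # always 1
print(top_k(probs, k=2))   # 1 or 2, chosen at random
```

Running top-1 twice on the same input always yields the same continuation; top-k generally does not.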

See also

lmp.infer

All available inference methods.

lmp.model

All available language models.

lmp.script.train_model

Train language model.

Examples

The following example uses "Hello world" as the conditioned text segment to generate continual text with the pre-trained language model experiment my_model_exp. It uses the top-1 inference method to generate continual text.

python -m lmp.script.gen_txt top-1 \
  --ckpt 5000 \
  --exp_name my_model_exp \
  --max_seq_len 128 \
  --txt "Hello world"

The following example uses the same conditioned text segment but infers with the top-k inference method.

python -m lmp.script.gen_txt top-k \
  --ckpt 5000 \
  --exp_name my_model_exp \
  --k 10 \
  --max_seq_len 128 \
  --txt "Hello world"

You can use the -h or --help options to get a list of available inference methods.

python -m lmp.script.gen_txt -h

You can use the -h or --help options on a specific inference method to get a list of its supported CLI arguments.

python -m lmp.script.gen_txt top-k -h
lmp.script.gen_txt.main(argv: List[str]) → None[source]

Script entry point.

Parameters

argv (list[str]) – List of CLI arguments.

Return type

None

lmp.script.gen_txt.parse_args(argv: List[str]) → Namespace[source]

Parse CLI arguments.

Parameters

argv (list[str]) – List of CLI arguments.

See also

sys.argv

Python CLI arguments interface.

Returns

Parsed CLI arguments.

Return type

argparse.Namespace
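A CLI of this shape is typically built with one argparse sub-parser per inference method, so that method-specific flags such as --k only exist on the sub-commands that need them. The sketch below is an assumption about the structure, not the package's actual parse_args; the flag names are taken from the CLI examples above, and making them all required is an illustrative choice.

```python
import argparse

def parse_args(argv):
    # Hypothetical sketch: one sub-parser per inference method,
    # each with the shared flags; top-k additionally takes --k.
    parser = argparse.ArgumentParser(prog="lmp.script.gen_txt")
    subparsers = parser.add_subparsers(dest="infer_name", required=True)

    for name in ("top-1", "top-k"):
        sub = subparsers.add_parser(name)
        sub.add_argument("--ckpt", type=int, required=True)
        sub.add_argument("--exp_name", type=str, required=True)
        sub.add_argument("--max_seq_len", type=int, required=True)
        sub.add_argument("--txt", type=str, required=True)
        if name == "top-k":
            sub.add_argument("--k", type=int, required=True)

    return parser.parse_args(argv)

args = parse_args(["top-1", "--ckpt", "5000", "--exp_name", "my_model_exp",
                   "--max_seq_len", "128", "--txt", "Hello world"])
print(args.infer_name, args.ckpt)  # top-1 5000
```

With this layout, `python -m lmp.script.gen_txt top-k -h` naturally lists only the flags registered on the top-k sub-parser.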