Yoyodyne Pretrained provides sequence-to-sequence transduction with pretrained transformer modules.
These models are implemented using PyTorch, Lightning, and Hugging Face transformers.
Yoyodyne Pretrained inherits many features from Yoyodyne itself, but supports just two types of pretrained transformers:
- a pretrained transformer encoder and a pretrained transformer decoder with a randomly-initialized cross-attention (à la Rothe et al. 2020)
- a T5 model
Because these modules are pretrained, there are few architectural hyperparameters to set once one has chosen the encoder and decoder to warm-start from. To keep Yoyodyne itself as simple as possible, Yoyodyne Pretrained is a separate library, though it shares many of the same features and interfaces.
To install Yoyodyne Pretrained and its dependencies, run the following command:
pip install .
Yoyodyne Pretrained is also compatible with Google Colab GPU runtimes.
- Click "Runtime" > "Change Runtime Type".
- In the dialogue box, under the "Hardware accelerator" dropdown box, select "GPU", then click "Save".
- You may be prompted to delete the old runtime. Do so if you wish.
- Then install and run as described above, using `!` as a prefix to shell commands; see the sketch below.
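For example, a hypothetical Colab cell (assuming the repository has already been cloned into the runtime and the working directory is its root; the config path is a placeholder) might look like:

!pip install .
!yoyodyne_pretrained fit --config path/to/config.yaml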
Yoyodyne Pretrained uses YAML configuration files; see the example configuration files, and see the Yoyodyne documentation for information on variable interpolation.
Yoyodyne Pretrained operates on basic tab-separated values (TSV) data files. The user can specify source, features, and target columns. If a features column is specified, it is concatenated (with a separating space) to the source.
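For instance, a hypothetical inflection data set might place the source in the first column, features in the second, and the target in the third (columns are tab-separated):

walk	V;PST	walked
sing	V;PST	sang

With the features column concatenated, the effective source for the first row is walk V;PST.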
The yoyodyne_pretrained command-line tool uses a subcommand interface with
four different modes. To see the full set of options available for each
subcommand, use the --print_config flag. For example:
yoyodyne_pretrained fit --print_config
will show all configuration options (and their default values) for the fit
subcommand.
In fit mode, one trains a Yoyodyne Pretrained model from scratch. Naturally,
most configuration options need to be set at training time. E.g., it is not
possible to switch between different pretrained encoders after training a model.
This mode is invoked using the fit subcommand, like so:
yoyodyne_pretrained fit --config path/to/config.yaml
Setting the seed_everything: argument to some fixed value ensures a
reproducible experiment (modulo hardware non-determinism).
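For example (the value itself is arbitrary):

...
seed_everything: 42
...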
In practice it is usually wise to tie the encoder and decoder parameters, as in the following YAML snippet:
...
model:
  class_path: yoyodyne_pretrained.models.EncoderDecoderModel
  init_args:
    model_name: google-bert/bert-base-multilingual-cased
    tie_encoder_decoder: true
...
The following snippet shows a simple T5 configuration using ByT5:
...
model:
  class_path: yoyodyne_pretrained.models.T5Model
  init_args:
    model_name: google/byt5-base
...
Yoyodyne Pretrained requires an optimizer and a learning rate scheduler. The system is borrowed from Yoyodyne; see here for more information.
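As a rough sketch only, assuming the LightningCLI-style top-level keys (the class paths and hyperparameter values below are illustrative assumptions, not defaults; consult the Yoyodyne documentation for the exact layout):

...
optimizer:
  class_path: torch.optim.AdamW
  init_args:
    lr: 1.0e-4
lr_scheduler:
  class_path: torch.optim.lr_scheduler.ExponentialLR
  init_args:
    gamma: 0.99
...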
A checkpoint config must be specified or no checkpoints will be generated; see here for more information.
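For instance, a minimal sketch using Lightning's ModelCheckpoint callback (the monitored metric name here is an assumption; substitute whatever your runs actually log):

...
trainer:
  callbacks:
    - class_path: lightning.pytorch.callbacks.ModelCheckpoint
      init_args:
        monitor: val_accuracy  # assumed metric name
        mode: max
        save_top_k: 1
...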
Dropout probability and/or label smoothing are specified as arguments to the
model, as shown in the following YAML snippet.
...
model:
  dropout: 0.5
  label_smoothing: 0.1
...
Decoding is performed with beam search if model: num_beams: ... is set to a
value greater than 1; the beam width ("number of beams") defaults to 5.
Batch size is specified using data: batch_size: ... and defaults to 32.
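For example, the following snippet enables a beam of width 5 and doubles the default batch size:

...
model:
  num_beams: 5
data:
  batch_size: 64
...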
By default, training uses 32-bit precision. However, the trainer: precision:
flag allows the user to perform training with half precision (16), or with
mixed-precision formats like bf16-mixed if supported by the accelerator. This
might reduce the size of the model and batches in memory, allowing one to use
larger batches, or it may simply provide small speed-ups.
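For example, to request mixed bf16 precision:

...
trainer:
  precision: bf16-mixed
...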
There are a number of ways to specify how long a model should train for. For example, the following YAML snippet specifies that training should run for 100 epochs or 6 wall-clock hours, whichever comes first.
...
trainer:
  max_epochs: 100
  max_time: 00:06:00:00
...
In validation mode, one runs the validation step over labeled validation data
(specified as data: val: path/to/validation.tsv) using a previously trained
checkpoint (--ckpt_path path/to/checkpoint.ckpt from the command line),
recording loss and other statistics for the validation set. In practice this is
mostly useful for debugging.
This mode is invoked using the validate subcommand, like so:
yoyodyne_pretrained validate --config path/to/config.yaml --ckpt_path path/to/checkpoint.ckpt
In test mode, one computes accuracy over held-out test data (specified as
data: test: path/to/test.tsv) using a previously trained checkpoint
(--ckpt_path path/to/checkpoint.ckpt from the command line); it differs from
validation mode in that it uses the test file rather than the val file.
This mode is invoked using the test subcommand, like so:
yoyodyne_pretrained test --config path/to/config.yaml --ckpt_path path/to/checkpoint.ckpt
In predict mode, a previously trained model checkpoint
(--ckpt_path path/to/checkpoint.ckpt from the command line) is used to label
an input file. One must also specify the path where the predictions will be
written.
...
predict:
  path: /Users/Shinji/predictions.conllu
...
This mode is invoked using the predict subcommand, like so:
yoyodyne_pretrained predict --config path/to/config.yaml --ckpt_path path/to/checkpoint.ckpt
Many tokenizers, including the BERT tokenizer, are lossy in the sense that they may introduce spaces not present in the input, particularly adjacent to word-internal punctuation like dashes (e.g., state-of-the-art). Unfortunately, there is little that can be done about this within this library, but it may be possible to fix this as a post-processing step.
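For example, a minimal Python post-processing sketch (the function name is hypothetical, and the rule is deliberately naïve: it rejoins any hyphen flanked by spaces, which may be too aggressive for text that uses spaced hyphens as dashes):

import re

def rejoin_hyphens(text: str) -> str:
    # Removes the spaces that lossy tokenizers may insert around
    # word-internal hyphens: "state - of - the - art" -> "state-of-the-art".
    return re.sub(r" *- *", "-", text)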
See examples for some worked examples including
hyperparameter sweeping with Weights & Biases.
Given the size of the models, even a basic integration test of Yoyodyne Pretrained exceeds what is feasible without access to a reasonably powerful GPU. Thus, tests have to be run locally rather than via cloud-based continuous-integration systems. The integration tests take roughly 30 minutes in total. To test the system, run the following:
pytest -vvv tests
Yoyodyne Pretrained is distributed under an Apache 2.0 license.
We welcome contributions using the fork-and-pull model.
- Create a new branch. E.g., if you want to call this branch "release": `git checkout -b release`
- Sync your fork's branch to the upstream master branch. E.g., if the upstream remote is called "upstream": `git pull upstream master`
- Increment the version field in `pyproject.toml`.
- Stage your changes: `git add pyproject.toml`
- Commit your changes: `git commit -m "your commit message here"`
- Push your changes. E.g., if your branch is called "release": `git push origin release`
- Submit a PR for your release and wait for it to be merged into `master`.
- Tag the `master` branch's last commit. The tag should begin with `v`; e.g., if the new version is 3.1.4, the tag should be `v3.1.4`. This can be done:
  - on GitHub itself: click the "Releases" or "Create a new release" link on the right-hand side of the Yoyodyne Pretrained GitHub page and follow the dialogues.
  - from the command line using `git tag`.
- Build the new release: `python -m build`
- Upload the result to PyPI: `twine upload dist/*`
Rothe, S., Narayan, S., and Severyn, A. 2020. Leveraging pre-trained checkpoints for sequence generation tasks. Transactions of the Association for Computational Linguistics 8: 264-280.
(See also yoyodyne-pretrained.bib for more work
used during the development of this library.)