diff --git a/notebooks/mt5-xnli.ipynb b/notebooks/mt5-xnli.ipynb new file mode 100644 index 0000000..4c305c3 --- /dev/null +++ b/notebooks/mt5-xnli.ipynb @@ -0,0 +1,493 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Open In Colab" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "##### Copyright 2020 The mT5 Authors and Stephen Mayhew\n", + "\n", + "Licensed under the Apache License, Version 2.0 (the \"License\");" + ] + },
+ { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Copyright 2020 The mT5 Authors, and Stephen Mayhew. All Rights Reserved.\n", + "#\n", + "# Licensed under the Apache License, Version 2.0 (the \"License\");\n", + "# you may not use this file except in compliance with the License.\n", + "# You may obtain a copy of the License at\n", + "#\n", + "# http://www.apache.org/licenses/LICENSE-2.0\n", + "#\n", + "# Unless required by applicable law or agreed to in writing, software\n", + "# distributed under the License is distributed on an \"AS IS\" BASIS,\n", + "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n", + "# See the License for the specific language governing permissions and\n", + "# limitations under the License.\n", + "# ==============================================================================" + ] + },
+ { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Fine-Tuning the Multilingual Text-To-Text Transfer Transformer (mT5) on XNLI\n", + "\n", + "*The following tutorial guides you through the process of fine-tuning a pre-trained mT5 model and evaluating its accuracy on XNLI,\n", + "all on a free Google Cloud TPU. This is largely based on the notebook for [fine-tuning T5 on Closed-Book Question Answering](https://github.com/google-research/text-to-text-transfer-transformer/blob/master/notebooks/t5-trivia.ipynb).*\n", + "\n", + "### Background\n", + "\n", + "Many multilingual language models report results on the [Cross-lingual Natural Language Inference (XNLI) dataset](https://arxiv.org/pdf/1809.05053) (see the [official site](https://cims.nyu.edu/~sbowman/xnli/) and an [unofficial blog post](http://mayhewsw.github.io/2020/11/26/xnli-details/)). The mT5 model, introduced in [mT5: A massively multilingual pre-trained text-to-text transformer](https://arxiv.org/abs/2010.11934), is a recent model based on T5, but trained on a massive multilingual corpus called [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual), consisting of about 26TB of text from Common Crawl. mT5 reports very strong results on XNLI, beating all prior baselines.\n", + "\n", + "In this notebook, we'll walk through how to fine-tune a pre-trained mT5 model on XNLI." + ] + },
+ { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Set Up\n", + "\n", + "**Train on TPU**
\n", + "\n", + " 1. Create a Cloud Storage bucket for your data and model checkpoints at http://console.cloud.google.com/storage, and fill in the `BASE_DIR` parameter in the following form. There is a [free tier](https://cloud.google.com/free/) if you do not yet have an account.\n", + " \n", + " 1. On the main menu, click Runtime and select **Change runtime type**. Set \"TPU\" as the hardware accelerator.\n", + " 1. Run the following cell and follow instructions to:\n", + " * Set up a Colab TPU running environment\n", + " * Verify that you are connected to a TPU device\n", + " * Upload your credentials to TPU to access your GCS bucket" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "print(\"Installing dependencies...\")\n", + "%tensorflow_version 2.x\n", + "!pip install -q t5\n", + "!git clone https://github.com/google-research/multilingual-t5.git\n", + "!mv multilingual-t5/multilingual_t5/ .\n", + "\n", + "import functools\n", + "import os\n", + "import time\n", + "import warnings\n", + "warnings.filterwarnings(\"ignore\", category=DeprecationWarning)\n", + "\n", + "import tensorflow.compat.v1 as tf\n", + "import tensorflow_datasets as tfds\n", + "\n", + "from multilingual_t5 import preprocessors\n", + "from multilingual_t5 import utils\n", + "\n", + "import t5.data\n", + "from t5.evaluation import metrics\n", + "\n", + "import t5\n", + "\n", + "BASE_DIR = \"gs://\" #@param { type: \"string\" }\n", + "if not BASE_DIR or BASE_DIR == \"gs://\":\n", + " raise ValueError(\"You must enter a BASE_DIR.\")\n", + "DATA_DIR = os.path.join(BASE_DIR, \"data\")\n", + "MODELS_DIR = os.path.join(BASE_DIR, \"models\")\n", + "ON_CLOUD = True\n", + "\n", + "\n", + "if ON_CLOUD:\n", + " print(\"Setting up GCS access...\")\n", + " import tensorflow_gcs_config\n", + " from google.colab import auth\n", + " # Set credentials for GCS reading/writing from Colab and TPU.\n", + " TPU_TOPOLOGY = \"v2-8\"\n", + " try:\n", + " tpu = tf.distribute.cluster_resolver.TPUClusterResolver() # TPU detection\n", + " TPU_ADDRESS = tpu.get_master()\n", + " print('Running on TPU:', TPU_ADDRESS)\n", + " except ValueError:\n", + " raise BaseException('ERROR: Not connected to a TPU runtime; please see the previous cell in this notebook for instructions!')\n", + " auth.authenticate_user()\n", + " tf.enable_eager_execution()\n", + " tf.config.experimental_connect_to_host(TPU_ADDRESS)\n", + " tensorflow_gcs_config.configure_gcs_from_colab_auth()\n", + "\n", + "tf.disable_v2_behavior()\n", + "\n", + "# Improve logging.\n", + "from contextlib import contextmanager\n", + "import logging as py_logging\n", + "\n", + "if ON_CLOUD:\n", + " tf.get_logger().propagate = False\n", + " py_logging.root.setLevel('INFO')\n", + "\n", + "@contextmanager\n", + "def tf_verbosity_level(level):\n", + " og_level = tf.logging.get_verbosity()\n", + " tf.logging.set_verbosity(level)\n", + " yield\n", + " tf.logging.set_verbosity(og_level)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Creating new Tasks and Mixtures\n", + "\n", + "Two core components of the T5 library are `Task` and `Mixture` objects.\n", + "\n", + "A `Task` is a dataset along with preprocessing functions and evaluation metrics. A `Mixture` is a collection of `Task` objects along with a mixing rate or a function defining how to compute a mixing rate based on the properties of the constituent `Tasks`.\n", + "\n", + "For this example, we will fine-tune the model to do XNLI. 
It seems that all of these tasks are already present in the registry, but they use a vocabulary with `extra_ids = None`, which causes problems later on. We remove all tasks from all registries, define a new vocabulary, and put the tasks back in. This code is based on [`tasks.py`](https://github.com/google-research/multilingual-t5/blob/master/multilingual_t5/tasks.py)." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Clear the registries\n", + "task_registry_names = list(t5.data.TaskRegistry.names())\n", + "for n in task_registry_names:\n", + " t5.data.TaskRegistry.remove(n)\n", + "\n", + "mixture_registry_names = list(t5.data.MixtureRegistry.names())\n", + "for n in mixture_registry_names:\n", + " t5.data.MixtureRegistry.remove(n)\n", + "\n", + "DEFAULT_SPM_PATH = \"gs://t5-data/vocabs/mc4.250000.100extra/sentencepiece.model\"\n", + "\n", + "DEFAULT_TEMPERATURE = 1.0 / 0.3\n", + "DEFAULT_MIX_RATE = functools.partial(\n", + " t5.data.utils.rate_num_examples, temperature=DEFAULT_TEMPERATURE)\n", + "\n", + "# TODO (mayhewsw): is this extra_ids value correct? It seems to work...\n", + "DEFAULT_VOCAB = t5.data.SentencePieceVocabulary(DEFAULT_SPM_PATH, extra_ids=10)\n", + "\n", + "DEFAULT_OUTPUT_FEATURES = {\n", + " \"inputs\": t5.data.Feature(\n", + " vocabulary=DEFAULT_VOCAB, add_eos=True, required=False),\n", + " \"targets\": t5.data.Feature(\n", + " vocabulary=DEFAULT_VOCAB, add_eos=True)\n", + "}\n", + "\n", + "MC4_LANGS = tfds.text.c4.MC4_LANGUAGES\n", + "\n", + "# Multilingual BERT was trained on 104 languages. We include 103 of these\n", + "# languages, as tfds.wikipedia doesn't distinguish between simplified and\n", + "# traditional Chinese, and only contains \"zh\" (which is a mix of simplified\n", + "# and traditional).\n", + "# https://github.com/google-research/bert/blob/master/multilingual.md\n", + "WIKI_LANGS = [\n", + " \"af\", \"an\", \"ar\", \"ast\", \"az\", \"azb\", \"ba\", \"bar\", \"be\", \"bg\", \"bn\", \"bpy\",\n", + " \"br\", \"bs\", \"ca\", \"ce\", \"ceb\", \"cs\", \"cv\", \"cy\", \"da\", \"de\", \"el\", \"en\",\n", + " \"es\", \"et\", \"eu\", \"fa\", \"fi\", \"fr\", \"fy\", \"ga\", \"gl\", \"gu\", \"he\", \"hi\",\n", + " \"hr\", \"ht\", \"hu\", \"hy\", \"id\", \"io\", \"is\", \"it\", \"ja\", \"jv\", \"ka\", \"kk\",\n", + " \"kn\", \"ko\", \"ky\", \"la\", \"lb\", \"lmo\", \"lt\", \"lv\", \"mg\", \"min\", \"mk\", \"ml\",\n", + " \"mn\", \"mr\", \"ms\", \"my\", \"nds-nl\", \"ne\", \"new\", \"nl\", \"nn\", \"no\", \"oc\",\n", + " \"pa\", \"pl\", \"pms\", \"pnb\", \"pt\", \"ro\", \"ru\", \"scn\", \"sco\", \"sh\", \"sk\", \"sl\",\n", + " \"sq\", \"sr\", \"su\", \"sv\", \"sw\", \"ta\", \"te\", \"tg\", \"th\", \"tl\", \"tr\", \"tt\",\n", + " \"uk\", \"ur\", \"uz\", \"vi\", \"vo\", \"war\", \"yo\", \"zh\"\n", + "]\n", + "\n", + "# =========================== Pretraining Tasks/Mixtures =======================\n", + "# mC4\n", + "for lang in MC4_LANGS:\n", + " t5.data.TaskRegistry.add(\n", + " \"mc4.{}\".format(lang.replace(\"-\", \"_\")),\n", + " t5.data.TfdsTask,\n", + " tfds_name=\"c4/multilingual:3.0.1\",\n", + " tfds_data_dir=DATA_DIR,\n", + " splits={\"train\": lang,\n", + " \"validation\": f\"{lang}-validation\"},\n", + " text_preprocessor=functools.partial(\n", + " t5.data.preprocessors.rekey,\n", + " key_map={\"inputs\": None, \"targets\": \"text\"}),\n", + " token_preprocessor=t5.data.preprocessors.span_corruption,\n", + " output_features=DEFAULT_OUTPUT_FEATURES,\n", + " metric_fns=[])\n", + "\n", + 
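"# Note: the mC4 and Wikipedia pretraining tasks/mixtures registered in this cell\n", + "# mirror multilingual_t5/tasks.py; the XNLI fine-tuning below does not train on\n", + "# them, but re-registering everything keeps the registry consistent with the original.\n", + 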
"mc4 = [\"mc4.{}\".format(lang.replace(\"-\", \"_\")) for lang in MC4_LANGS]\n", + "t5.data.MixtureRegistry.add(\"mc4\", mc4, default_rate=DEFAULT_MIX_RATE)\n", + "\n", + "# Wikipedia\n", + "for lang in WIKI_LANGS:\n", + " t5.data.TaskRegistry.add(\n", + " \"wiki.{}\".format(lang.replace(\"-\", \"_\")),\n", + " t5.data.TfdsTask,\n", + " tfds_name=\"wikipedia/20200301.{}:1.0.0\".format(lang),\n", + " tfds_data_dir=DATA_DIR,\n", + " text_preprocessor=[\n", + " functools.partial(\n", + " t5.data.preprocessors.rekey,\n", + " key_map={\n", + " \"inputs\": None,\n", + " \"targets\": \"text\"\n", + " }),\n", + " ],\n", + " token_preprocessor=t5.data.preprocessors.span_corruption,\n", + " output_features=DEFAULT_OUTPUT_FEATURES,\n", + " metric_fns=[])\n", + "\n", + "wiki = [\"wiki.{}\".format(lang.replace(\"-\", \"_\")) for lang in WIKI_LANGS]\n", + "t5.data.MixtureRegistry.add(\"wiki\", wiki, default_rate=DEFAULT_MIX_RATE)\n", + "\n", + "# Mixture of mC4 and WIKI\n", + "t5.data.MixtureRegistry.add(\n", + " \"mc4_wiki\", mc4 + wiki, default_rate=DEFAULT_MIX_RATE)\n", + "\n", + "# =========================== Fine-tuning Tasks/Mixtures =======================\n", + "# ----- XNLI -----\n", + "# XNLI zero-shot task. This fine-tunes on English MNLI training data and then\n", + "# evaluates on multilingual XNLI dev/test data.\n", + "\n", + "XNLI_LANGS = [\n", + " \"ar\", \"bg\", \"de\", \"el\", \"en\", \"es\", \"fr\", \"hi\", \"ru\", \"sw\", \"th\", \"tr\",\n", + " \"ur\", \"vi\", \"zh\"\n", + "]\n", + "\n", + "t5.data.TaskRegistry.add(\n", + " \"xnli_train\",\n", + " t5.data.TfdsTask,\n", + " tfds_name=\"multi_nli:1.1.0\",\n", + " tfds_data_dir=DATA_DIR,\n", + " splits=[\"train\"],\n", + " text_preprocessor=preprocessors.process_mnli,\n", + " output_features=DEFAULT_OUTPUT_FEATURES,\n", + " metric_fns=[metrics.accuracy])\n", + "for lang in XNLI_LANGS:\n", + " t5.data.TaskRegistry.add(\n", + " \"xnli_dev_test.{}\".format(lang),\n", + " t5.data.TfdsTask,\n", + " tfds_name=\"xnli:1.1.0\",\n", + " tfds_data_dir=DATA_DIR,\n", + " splits=[\"validation\", \"test\"],\n", + " text_preprocessor=[\n", + " functools.partial(\n", + " preprocessors.process_xnli, target_languages=[lang])\n", + " ],\n", + " output_features=DEFAULT_OUTPUT_FEATURES,\n", + " metric_fns=[metrics.accuracy])\n", + " if lang == \"en\":\n", + " continue\n", + " t5.data.TaskRegistry.add(\n", + " \"xnli_translate_train.{}\".format(lang),\n", + " t5.data.TfdsTask,\n", + " tfds_name=\"xtreme_xnli:1.1.0\",\n", + " tfds_data_dir=DATA_DIR,\n", + " splits=[\"train\"],\n", + " text_preprocessor=[\n", + " functools.partial(\n", + " preprocessors.process_xnli, target_languages=[lang])\n", + " ],\n", + " output_features=DEFAULT_OUTPUT_FEATURES,\n", + " metric_fns=[metrics.accuracy])\n", + "t5.data.TaskRegistry.add(\n", + " \"xnli_dev_test.all_langs\",\n", + " t5.data.TfdsTask,\n", + " tfds_name=\"xnli:1.1.0\",\n", + " tfds_data_dir=DATA_DIR,\n", + " splits=[\"validation\", \"test\"],\n", + " text_preprocessor=[\n", + " functools.partial(\n", + " preprocessors.process_xnli, target_languages=XNLI_LANGS)\n", + " ],\n", + " output_features=DEFAULT_OUTPUT_FEATURES,\n", + " metric_fns=[metrics.accuracy])\n", + "xnli_zeroshot = ([\"xnli_train\", \"xnli_dev_test.all_langs\"] + \\\n", + " [\"xnli_dev_test.{}\".format(lang) for lang in XNLI_LANGS])\n", + "t5.data.MixtureRegistry.add(\"xnli_zeroshot\", xnli_zeroshot, default_rate=1.0)\n", + "xnli_translate_train = xnli_zeroshot + [\n", + " \"xnli_translate_train.{}\".format(lang)\n", + " for lang in XNLI_LANGS\n", 
+ "    if lang != \"en\"\n", + "]\n", + "t5.data.MixtureRegistry.add(\n", + "    \"xnli_translate_train\", xnli_translate_train, default_rate=1.0)" + ] + },
+ { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Transferring to new Tasks\n", + "\n", + "We are now ready to fine-tune one of the pre-trained mT5 models on our new mixture of XNLI tasks.\n", + "\n", + "First, we'll instantiate a `Model` object using the model size of your choice. Note that larger models are slower to train and use but will likely achieve higher accuracy. You may also be able to increase accuracy by training longer with more `FINETUNE_STEPS` below.\n", + "\n", + "\n", + "## Caveats\n", + "\n", + "* Due to its memory requirements, you will not be able to train the `11B` parameter model on the TPU provided by Colab. Instead, you will need to fine-tune inside of a GCP instance (see the [README](https://github.com/google-research/text-to-text-transfer-transformer/)).\n", + "* Due to the checkpoint size, you will not be able to use the 5GB GCS free tier for the `3B` parameter model. You will need at least 25GB of space, which you can purchase with your $300 of initial credit on GCP.\n", + "* While `large` can achieve decent results, it is recommended that you fine-tune at least the `3B` parameter model." + ] + },
+ { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Define Model" + ] + },
+ { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "MODEL_SIZE = \"small\" #@param[\"small\", \"base\", \"large\", \"3B\", \"11B\"]\n", + "# Public GCS path for mT5 pre-trained model checkpoints\n", + "BASE_PRETRAINED_DIR = \"gs://t5-data/pretrained_models/mt5/\"\n", + "PRETRAINED_DIR = os.path.join(BASE_PRETRAINED_DIR, MODEL_SIZE)\n", + "MODEL_DIR = os.path.join(MODELS_DIR, MODEL_SIZE)\n", + "\n", + "if ON_CLOUD and MODEL_SIZE == \"3B\":\n", + "  tf.logging.warning(\n", + "      \"The `3B` model is too large to use with the 5GB GCS free tier. \"\n", + "      \"Make sure you have at least 25GB on GCS before continuing.\"\n", + "  )\n", + "elif ON_CLOUD and MODEL_SIZE == \"11B\":\n", + "  raise ValueError(\n", + "      \"The `11B` parameter model is too large to fine-tune on the `v2-8` TPU \"\n", + "      \"provided by Colab. Please comment out this error if you're running \"\n", + "      \"on a larger TPU.\"\n", + "  )\n", + "\n", + "# Set parallelism and batch size to fit on v2-8 TPU (if possible).\n", + "# Limit number of checkpoints to fit within 5GB (if possible).\n", + "model_parallelism, train_batch_size, keep_checkpoint_max = {\n", + "    \"small\": (1, 256, 16),\n", + "    \"base\": (2, 128, 8),\n", + "    \"large\": (8, 64, 4),\n", + "    \"3B\": (8, 16, 1),\n", + "    \"11B\": (8, 16, 1)}[MODEL_SIZE]\n", + "\n", + "tf.io.gfile.makedirs(MODEL_DIR)\n", + "# The models from our paper are based on the Mesh Tensorflow Transformer.\n", + "# Sequence-length values defined in: multilingual_t5/gin/sequence_lengths/xnli.gin\n", + "model = t5.models.MtfModel(\n", + "    model_dir=MODEL_DIR,\n", + "    tpu=TPU_ADDRESS,\n", + "    tpu_topology=TPU_TOPOLOGY,\n", + "    model_parallelism=model_parallelism,\n", + "    batch_size=train_batch_size,\n", + "    sequence_length={\"inputs\": 1028, \"targets\": 128},\n", + "    learning_rate_schedule=0.003,\n", + "    save_checkpoints_steps=5000,\n", + "    keep_checkpoint_max=keep_checkpoint_max if ON_CLOUD else None,\n", + "    iterations_per_loop=100,\n", + "    extra_gin_bindings=\"Bitransformer.decode.max_decode_length = 128\"\n", + ")" + ] + },
+ { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Before we continue, let's load a [TensorBoard](https://www.tensorflow.org/tensorboard) visualizer so that we can monitor our progress. The page should automatically update as fine-tuning and evaluation proceed." + ] + },
+ { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "if ON_CLOUD:\n", + "  %reload_ext tensorboard\n", + "  import tensorboard as tb\n", + "tb.notebook.start(\"--logdir \" + MODELS_DIR)" + ] + },
+ { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Fine-tune\n", + "\n", + "We are now ready to fine-tune our model. This will take a while (~10 hours with default settings), so please be patient! The larger the model and the more `FINETUNE_STEPS` you use, the longer it will take.\n", + "\n", + "Don't worry, you can always come back later and increase the number of steps, and it will automatically pick up where you left off." + ] + },
+ { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "FINETUNE_STEPS = 20000 #@param {type: \"integer\"}\n", + "\n", + "model.finetune(\n", + "    mixture_or_task_name=\"xnli_zeroshot\",\n", + "    pretrained_model_dir=PRETRAINED_DIR,\n", + "    finetune_steps=FINETUNE_STEPS\n", + ")" + ] + },
+ { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Evaluate\n", + "\n", + "We now evaluate on the validation sets of the tasks in our mixture. Accuracy results will be logged and added to the TensorBoard above.\n", + "\n", + "The `-1` value means that the last checkpoint will be chosen, although the authors chose the fine-tuning checkpoint with the highest validation performance. Use `checkpoint_steps=\"all\"` to run on every checkpoint in the model directory. See [t5/models/mtf_model.py](https://github.com/google-research/text-to-text-transfer-transformer/blob/master/t5/models/mtf_model.py#L303) for all arguments to this function.\n",
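+ "\n", + "If you would rather mimic the authors and pick the best checkpoint instead of the last one, a simple (if slower) variant is to score every saved checkpoint and read the best accuracy off the TensorBoard above. The next cell runs the default (`checkpoint_steps=-1`); here is a sketch of the \"all checkpoints\" variant, assuming the fine-tuned checkpoints are already in `MODEL_DIR`:\n", + "\n",
+ "```python\n", + "# Sketch: evaluate every checkpoint in MODEL_DIR rather than only the last one.\n", + "model.batch_size = train_batch_size * 4  # a larger batch size is fine for eval\n", + "model.eval(mixture_or_task_name=\"xnli_zeroshot\", checkpoint_steps=\"all\")\n", + "```"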
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Use a larger batch size for evaluation, which requires less memory.\n", + "model.batch_size = train_batch_size * 4\n", + "model.eval(\n", + " mixture_or_task_name=\"xnli_zeroshot\",\n", + " checkpoint_steps=-1\n", + ")" + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.7.3" + } + }, + "nbformat": 4, + "nbformat_minor": 4 +}