# T5 Comparisons

## Data

Using OpenWebText: https://huggingface.co/datasets/openwebtext

```python
from datasets import load_dataset
dataset = load_dataset("openwebtext", split='train')
dataset = load_dataset("stas/openwebtext-10k", split='train')
```

Megatron-LM's T5 uses the subword-tokenized (WordPiece) vocab from BERT.

Ready datasets:

1. HF datasets use:
   - `openwebtext` - 8M records: `--dataset_name "openwebtext"`
   - `stas/openwebtext-10k` - 10K records: `--dataset_name "stas/openwebtext-10k"`
2. Jsonlines (derived):
   - `$six_ALL_CCFRWORK/datasets-custom/openwebtext/openwebtext.jsonl`
   - `$six_ALL_CCFRWORK/datasets-custom/openwebtext-10k/openwebtext-10k.jsonl`
3. Megatron-preprocessed datasets (derived):
   - `$six_ALL_CCFRWORK/datasets-custom/openwebtext/meg-t5_text_document.*`
   - `$six_ALL_CCFRWORK/datasets-custom/openwebtext-10k/meg-t5_text_document.*`
4. Vocabs (from HF):
   - `$six_ALL_CCFRWORK/datasets-custom/vocabs/bert-large-uncased-vocab.txt`

## How the above was done

For the HF datasets and the Jsonlines creation details, see gpt2.md. Here we only need to create the differently pre-processed (Megatron) datasets.
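
For convenience, a minimal sketch of how such a jsonl export could be produced (the canonical recipe is in gpt2.md; the `text` key and one-JSON-object-per-line layout are what `tools/preprocess_data.py` expects by default):

```python
import json
from datasets import load_dataset

# example: export the 10k subset; swap in "openwebtext" for the full dataset
dataset = load_dataset("stas/openwebtext-10k", split="train")

with open("openwebtext-10k.jsonl", "w") as f:
    for rec in dataset:
        # one JSON object per line, document text under the "text" key
        f.write(json.dumps({"text": rec["text"]}) + "\n")
```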

T5 uses the same tokenizer/indexer as BERT, so the same vocab and pre-processed data can be used for either T5 or BERT Megatron-LM trainings.

Get the uncased BERT vocab:

```bash
cd $six_ALL_CCFRWORK/datasets-custom/vocabs
wget https://huggingface.co/bert-large-uncased/resolve/main/vocab.txt -O bert-large-uncased-vocab.txt
```
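
A quick sanity check of the downloaded vocab, as a sketch assuming the `transformers` library is available (the bert-large-uncased WordPiece vocab should have 30522 entries):

```python
from transformers import BertTokenizer

vocab_file = "bert-large-uncased-vocab.txt"  # file fetched by the wget step above

# one wordpiece per line - expect 30522 for bert-large-uncased
print(sum(1 for _ in open(vocab_file)))

# load it with the HF tokenizer to confirm it works as a lowercase WordPiece vocab
tok = BertTokenizer(vocab_file=vocab_file, do_lower_case=True)
print(tok.tokenize("Megatron-LM reuses this vocab for T5"))
```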

To pre-process the 10k-sample dataset for Megatron:

```bash
source $six_ALL_CCFRWORK/start-prod
cd $six_ALL_CCFRWORK/code/megatron-lm
python tools/preprocess_data.py \
       --input $six_ALL_CCFRWORK/datasets-custom/openwebtext-10k/openwebtext-10k.jsonl \
       --output-prefix $six_ALL_CCFRWORK/datasets-custom/openwebtext-10k/meg-t5 \
       --vocab $six_ALL_CCFRWORK/datasets-custom/vocabs/bert-large-uncased-vocab.txt \
       --dataset-impl mmap \
       --tokenizer-type BertWordPieceLowerCase \
       --split-sentences \
       --workers 8
```
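
With `--split-sentences` the indexed dataset is written as a `.bin`/`.idx` pair following Megatron's `{output-prefix}_{key}_{level}` naming, i.e. `meg-t5_text_sentence.*` here (this is the path the training script below points at). A quick check:

```bash
ls -l $six_ALL_CCFRWORK/datasets-custom/openwebtext-10k/meg-t5_text_sentence.*
# expect: meg-t5_text_sentence.bin  meg-t5_text_sentence.idx
```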

To pre-process the full dataset for Megatron:

```bash
source $six_ALL_CCFRWORK/start-prod
cd $six_ALL_CCFRWORK/code/megatron-lm
python tools/preprocess_data.py \
       --input $six_ALL_CCFRWORK/datasets-custom/openwebtext/openwebtext.jsonl \
       --output-prefix $six_ALL_CCFRWORK/datasets-custom/openwebtext/meg-t5 \
       --vocab $six_ALL_CCFRWORK/datasets-custom/vocabs/bert-large-uncased-vocab.txt \
       --dataset-impl mmap \
       --tokenizer-type BertWordPieceLowerCase \
       --split-sentences \
       --workers 8
```

Since the full dataset takes a few hours to convert, use the `slurm/jsonl-to-meg-t5.slurm` job to run it:

```bash
sbatch jsonl-to-meg-t5.slurm
```
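
For reference, a minimal sketch of what such a job could contain - the actual `slurm/jsonl-to-meg-t5.slurm` in the repo is authoritative, and the `#SBATCH` directives below are assumptions:

```bash
#!/bin/bash
#SBATCH --job-name=jsonl-to-meg-t5
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=40        # assumption: a full node's CPUs for the tokenizer workers
#SBATCH --hint=nomultithread
#SBATCH --time=20:00:00           # assumption: the conversion takes a few hours, leave headroom

source $six_ALL_CCFRWORK/start-prod
cd $six_ALL_CCFRWORK/code/megatron-lm

python tools/preprocess_data.py \
       --input $six_ALL_CCFRWORK/datasets-custom/openwebtext/openwebtext.jsonl \
       --output-prefix $six_ALL_CCFRWORK/datasets-custom/openwebtext/meg-t5 \
       --vocab $six_ALL_CCFRWORK/datasets-custom/vocabs/bert-large-uncased-vocab.txt \
       --dataset-impl mmap \
       --tokenizer-type BertWordPieceLowerCase \
       --split-sentences \
       --workers 8
```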

## Training

### Megatron-LM distributed with MP

Pipeline parallelism is not yet supported for T5 (it is in the works).

Setup: 1 node / 4 GPUs

```bash
srun --pty --nodes=1 --ntasks=1 --cpus-per-task=40 --gres=gpu:4 --hint=nomultithread --time=60 bash --rcfile $six_ALL_CCFRWORK/start-prod
cd $six_ALL_CCFRWORK/code/megatron-lm
```

```bash
GPUS_PER_NODE=4

# Change for multinode config
MASTER_ADDR=localhost
MASTER_PORT=6000
NNODES=1
NODE_RANK=0
WORLD_SIZE=$(($GPUS_PER_NODE*$NNODES))

VOCAB_FILE=$six_ALL_CCFRWORK/datasets-custom/vocabs/bert-large-uncased-vocab.txt
DATA_PATH=$six_ALL_CCFRWORK/datasets-custom/openwebtext-10k/meg-t5_text_sentence
SAVE_CHECKPOINT_PATH=$six_ALL_CCFRWORK/checkpoints/t5

DISTRIBUTED_ARGS=" \
    --nproc_per_node $GPUS_PER_NODE \
    --nnodes $NNODES \
    --node_rank $NODE_RANK \
    --master_addr $MASTER_ADDR \
    --master_port $MASTER_PORT \
    "

# from t5 training:
#   --global-batch-size 2048 \
GPT_ARGS=" \
    --num-layers 12 \
    --hidden-size 768 \
    --num-attention-heads 12 \
    --kv-channels 64 \
    --ffn-hidden-size 3072 \
    --encoder-seq-length 512 \
    --decoder-seq-length 128 \
    --micro-batch-size 16 \
    --max-position-embeddings 512 \
    --train-iters 1000000 \
    --lr-decay-iters 1000000 \
    --lr 0.0001 \
    --min-lr 0.00001 \
    --lr-decay-style linear \
    --lr-warmup-fraction .01 \
    --weight-decay 1e-2 \
    --clip-grad 1.0 \
    --fp16 \
    "

OUTPUT_ARGS=" \
    --log-interval 10 \
    --save-interval 500 \
    --eval-interval 100 \
    --eval-iters 10 \
    "

python -m torch.distributed.launch \
    $DISTRIBUTED_ARGS \
    pretrain_t5.py \
    --tensor-model-parallel-size 2 \
    $GPT_ARGS \
    $OUTPUT_ARGS \
    --save $SAVE_CHECKPOINT_PATH \
    --load $SAVE_CHECKPOINT_PATH \
    --data-path $DATA_PATH \
    --data-impl mmap \
    --vocab-file $VOCAB_FILE \
    --vocab-extra-ids 100 \
    --split 949,50,1 \
    --distributed-backend nccl
```
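
For orientation, a quick sanity check of the parallel layout these settings imply, as a sketch (Megatron derives the data-parallel size as the world size divided by the product of tensor- and pipeline-parallel sizes):

```python
world_size = 4    # GPUS_PER_NODE * NNODES
tp = 2            # --tensor-model-parallel-size
pp = 1            # no pipeline parallelism for T5 yet

dp = world_size // (tp * pp)   # number of data-parallel replicas
micro_batch = 16               # --micro-batch-size

print("data-parallel size:", dp)                                    # 2
print("samples per iteration (no grad accum):", micro_batch * dp)   # 32
```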