Want to get a better model on a limited budget? You are in the right place.
pip install text-denoising
-
R-Denoiser (μ=3, r=0.15, n) ∪ (μ=8, r=0.15, n)
The regular denoising is the standard span corruption introduced in Raffel et al. (2019) that uses a range of 2 to 5 tokens as the span length, which masks about 15% of input tokens.
-
S-Denoiser (μ=L/4, r=0.25, 1)
A specific case of denoising where we observe a strict sequential order when framing the inputs-to-targets task, i.e., prefix language modeling.
-
X-Denoiser (μ=3, r=0.5, n) ∪ (μ=8, r=0.5, n) ∪ (μ=64, r=0.15, n) ∪ (μ=64, r=0.5, n)
An extreme version of denoising where the model must recover a large part of the input, given a small to moderate part of it. This simulates a situation where a model needs to generate a long target from a memory with relatively limited information. To do so, we opt to include examples with aggressive denoising where approximately 50% of the input sequence is masked.
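For intuition, here is a minimal, self-contained sketch of how the three settings above differ (toy code with hypothetical names, not this package's actual API): R-denoising samples short spans at a ~15% corruption rate, X-denoising pushes the span length and/or the rate to extreme values, and S-denoising reduces to a single prefix/suffix split (PrefixLM).

```python
import random

# Toy settings mirroring the (mu, r) mixtures listed above
# (hypothetical names, not the package's actual API).
R_DENOISER = [dict(mu=3, r=0.15), dict(mu=8, r=0.15)]
X_DENOISER = [dict(mu=3, r=0.5), dict(mu=8, r=0.5), dict(mu=64, r=0.15), dict(mu=64, r=0.5)]


def span_corrupt(tokens, mu, r, sentinel="<extra_id_{}>"):
    """Mask roughly a fraction r of the tokens in spans of mean length mu; each
    masked span becomes one sentinel in the input and is spelled out after that
    sentinel in the target (T5-style inputs/targets)."""
    masked = [False] * len(tokens)
    budget = max(1, int(len(tokens) * r))
    n_masked = 0
    while n_masked < budget:
        length = max(1, round(random.expovariate(1.0 / mu)))
        start = random.randrange(len(tokens))
        for i in range(start, min(start + length, len(tokens))):
            if not masked[i]:
                masked[i] = True
                n_masked += 1
    inputs, targets, k, i = [], [], 0, 0
    while i < len(tokens):
        if masked[i]:
            inputs.append(sentinel.format(k))
            targets.append(sentinel.format(k))
            while i < len(tokens) and masked[i]:
                targets.append(tokens[i])
                i += 1
            k += 1
        else:
            inputs.append(tokens[i])
            i += 1
    return inputs, targets


def prefix_lm(tokens):
    """S-denoising / PrefixLM: pick a random split point, condition on the
    prefix, predict the suffix."""
    cut = random.randrange(1, max(2, len(tokens)))
    return tokens[:cut], tokens[cut:]
```

For example, `span_corrupt(tokens, mu=3, r=0.15)` approximates the R setting, while `span_corrupt(tokens, mu=64, r=0.5)` approximates the most aggressive X setting.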
2022 paper: Transcending Scaling Laws with 0.1% Extra Compute
we show an approximately 2x computational savings rate
-
Regular denoising whereby the noise is sampled as spans, replaced with sentinel tokens. This is also the standard span corruption task used in Raffel et al. (2019). Spans are typically uniformly sampled with a mean of 3 and a corruption rate of 15%.
-
Extreme denoising whereby the noise is increased to relatively ‘extreme’ amounts in either a huge percentage of the original text or being very long in nature. Spans are typically uniformly sampled with a mean length of 32 OR a corruption rate of up to 50%.
-
Sequential denoising whereby the noise is always sampled from the start of the text to a randomly sampled point in the text. This is also known as the PrefixLM objective (not to be confused with the architecture).
This repo will just aim to accomplish this task instead; UL2 is way too complicated for my liking.
The mixture used here is 50% PrefixLM, 25% long (extreme) span corruption, and 25% regular span corruption, which is quite simple and efficient.
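As a rough sketch of what that mixture could look like per training example (reusing the hypothetical span_corrupt / prefix_lm helpers from the sketch above, not the package's actual sampling code):

```python
import random

def make_example(tokens):
    """Pick one objective per example: 50% PrefixLM, 25% long/extreme span
    corruption, 25% regular span corruption (hypothetical helper names)."""
    u = random.random()
    if u < 0.50:
        return prefix_lm(tokens)                   # S-denoising / PrefixLM
    elif u < 0.75:
        return span_corrupt(tokens, mu=64, r=0.5)  # X-denoising (long / extreme)
    return span_corrupt(tokens, mu=3, r=0.15)      # R-denoising (regular)
```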
Run an mT5 encoder pretraining on a 3090 on Pythia json.zst files:
pip install text-denoising
python examples/pretrain_example.py
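If you want to peek at the data side, Pythia-style json.zst shards are just zstd-compressed JSON lines. A rough reader sketch using the zstandard package (the actual example script may do this differently, and the "text" field name is an assumption):

```python
import io
import json
import zstandard  # pip install zstandard

def iter_documents(path, text_key="text"):
    """Stream records from a .json.zst / .jsonl.zst shard and yield the text
    field of each JSON line (field name assumed; adjust to your data)."""
    with open(path, "rb") as fh:
        reader = zstandard.ZstdDecompressor().stream_reader(fh)
        for line in io.TextIOWrapper(reader, encoding="utf-8"):
            if line.strip():
                yield json.loads(line)[text_key]
```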
Training loss was stable, with no weird spikes.
Core Papers
Transcending Scaling Laws with 0.1% Extra Compute
Unifying Language Learning Paradigms
Implementations of T5 noise masking in Hugging Face Transformers or Python code
OSLO: very underrated; with some tidying and documentation, this will be a very useful tool
-
Heavily inspired by this section