Releases · LoicGrobol/zeldarose
v0.11.0
Changed
- Several environment dumps are now added to the output dir of transformer training, to help with reproducibility and bug reporting.
Full Changelog: v0.10.0...v0.11.0
v0.10.0
Changed
- Bumped minimal (Pytorch) Lightning version to `2.0.0`
- Pytorch compatibility changed to `>= 2.0, < 2.4`
- 🤗 datasets compatibility changed to `>= 2.18, < 2.20`
- Added support for the new Lightning precision plugins (see the sketch below).
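A hedged sketch of what the precision plugin support looks like in use: with Lightning >= 2.0, precision plugins are named with strings such as `16-mixed`, `bf16-mixed`, or `32-true`. Whether zeldarose forwards these names verbatim through its `--precision` option, as well as the `transformer` subcommand and `--config` flag shown here, are assumptions rather than anything these notes confirm.

```sh
# Hedged sketch: selecting a Lightning >= 2.0 precision plugin from the CLI.
# Assumed: the `transformer` subcommand name, the --config flag, and that
# --precision accepts Lightning 2.0 plugin names ("16-mixed", "bf16-mixed", ...).
zeldarose transformer \
  --precision bf16-mixed \
  --config my-task.toml \
  my-pretrained-model
```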
Full Changelog: v0.9.0...v0.10.0
v0.9.0
Fixed
- Training an m2m100 model on a language (code) not originally included in its tokenizer now works.
Changed
- Pytorch compatibility changed to `>= 2.0, < 2.3`
- 🤗 datasets compatibility changed to `>= 2.18, < 2.19`
Full Changelog: v0.8.0...v0.9.0
v0.8.0
Fixed
- Fixed multiple saves when using `step-save-period` in conjunction with batch accumulation (closes #30)
Changed
- Maximum Pytorch compatibility bumped to 2.1
- `max_steps` and `max_epochs` can now be set in the tuning config (see the sketch below). Setting them via command-line options is deprecated and will be removed in a future version.
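As a hedged illustration of the new tuning-config keys: `max_steps` and `max_epochs` come straight from the note above, but the `[tuning]` table name and the file layout are assumptions about how zeldarose configs are organised.

```sh
# Hypothetical sketch of a tuning config using the new keys.
# Only max_steps/max_epochs are confirmed by the notes above; the [tuning]
# table name is an assumption.
cat > my-tuning-config.toml <<'EOF'
[tuning]
max_steps = 10000   # stop after this many optimiser steps
max_epochs = 2      # or after this many epochs, whichever comes first
EOF
```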
v0.7.3 — Bug Fix
Fixed
- Behaviour when asking for denoising in mBART with a model that has no mask token.
v0.7.2 — Now with a doc??!?
Fixed
- In mBART training, loss scaling now works as it was supposed to.
- We have documentation now! Check it out at https://zeldarose.readthedocs.io; it will get better over time (hopefully!).
v0.7.1 — Bug fix
Fixed
- Translate loss logging is no longer always zero.
v0.7.0 — Now with mBART translations!
The main highlight of this release is the addition of mBART training as a task. It is so far slightly different from the original one, but similar enough to work in our tests.
Added
- The `--tf32-mode` option allows selecting the level of NVidia Ampère matmul optimisations.
- The `--seed` option allows fixing a random seed.
- The `mbart` task allows training general seq2seq and translation models.
- A `zeldarose` command that serves as an entry point for both tokenizer and transformer training (see the sketch after this list).
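A hedged sketch tying these additions together: `zeldarose`, `--seed`, `--tf32-mode`, and the `mbart` task are all named above, while the `transformer` subcommand, the `--config` flag, the `high` mode value, and selecting the task inside the TOML config are assumptions.

```sh
# Hedged sketch: one invocation of the new unified entry point.
# Confirmed by the notes: zeldarose, --seed, --tf32-mode, the mbart task.
# Assumed: the `transformer` subcommand, --config, the "high" mode value,
# and that the mbart task is selected inside the TOML config.
zeldarose transformer \
  --config mbart-task.toml \
  --seed 20220914 \
  --tf32-mode high \
  facebook/mbart-large-cc25
```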
Changed
- BREAKING: `--use-fp16` has been replaced by `--precision`, which also allows using fp64 and bfloat16. Previous behaviour can be emulated with `--precision 16` (see the migration sketch after this list).
- Removed the GPU stats logging from the profile mode since Lightning stopped supporting it.
- Switched TOML library from toml to tomli.
- BREAKING: Bumped the min version of several dependencies: `pytorch-lightning >= 1.8.0`, `torch >= 1.12`
- Bumped the max version of several dependencies: `datasets < 2.10`, `pytorch-lightning < 1.9`, `tokenizers < 0.14`
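A hedged migration sketch for the `--use-fp16` removal: both flag spellings come from the note above, while the `zeldarose-transformer` entry point name and the other arguments are assumptions about how the tool was invoked before the unified `zeldarose` command existed.

```sh
# Before this release (the zeldarose-transformer entry point name is an assumption):
zeldarose-transformer --use-fp16 --config my-task.toml my-model

# From this release on, the equivalent of the old fp16 behaviour:
zeldarose transformer --precision 16 --config my-task.toml my-model
```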
v0.6.0 — Dependencies compatibilities
This one fixes compatibility issues with our dependencies: it bumps minimal versions and adds upper version limits.
Changed
- Bumped `torchmetrics` minimal version to 0.9
- Bumped `datasets` minimal version to 2.4
- Bumped `torch` max version to 1.12
Fixed
- Dataset fingerprinting/caching issues (#31)
Full Changelog: v0.5.0...v0.6.0
v0.5.0 — Housekeeping
The minor bump is because we have several new minimal version requirements (and to fairly recent versions at that). Otherwise, this is mostly internal stuff.
Added
- `lint` extra that installs linting tools and plugins
- Config for flakeheaven
- Support for `pytorch-lightning 1.6`
Changed
- Moved packaging config to `pyproject.toml`, which requires `setuptools >= 61`.
- `click_pathlib` is no longer a dependency and `click` has a minimal version of `8.0.3`
Full Changelog: v0.4.0...v0.5.0