Releases: LoicGrobol/zeldarose

v0.11.0

12 Jun 12:23

Changed

  • Several environment dumps are now added to the output directory of transformer training to help with reproducibility and bug reporting.

Full Changelog: v0.10.0...v0.11.0

v0.10.0

06 Jun 15:53

Changed

  • Bumped the minimal (PyTorch) Lightning version to 2.0.0
  • PyTorch compatibility changed to >= 2.0, < 2.4
  • 🤗 datasets compatibility changed to >= 2.18, < 2.20
  • Added support for the new Lightning precision plugins (illustrated below).
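
For illustration, a hedged sketch of selecting one of the new Lightning 2.x precision plugins. Only --precision itself is named in these notes; the zeldarose transformer subcommand, the plugin string, and the other flags are assumptions based on the project README:

```sh
# Hypothetical invocation (subcommand, paths and plugin string are
# illustrative assumptions): train with Lightning's bf16 mixed precision.
zeldarose transformer \
  --config my-tuning-config.toml \
  --tokenizer my-tokenizer \
  --pretrained-model my-base-model \
  --out-dir local/out \
  --precision bf16-mixed \
  my-corpus.txt
```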

Full Changelog: v0.9.0...v0.10.0

v0.9.0

17 Apr 15:20

Fixed

  • Training an m2m100 model on a language (code) not originally included in its tokenizer now works.

Changed

  • PyTorch compatibility changed to >= 2.0, < 2.3
  • 🤗 datasets compatibility changed to >= 2.18, < 2.19

Full Changelog: v0.8.0...v0.9.0

v0.8.0

06 Oct 21:45
264d962

Fixed

  • Fixed multiple saves when using step-save-period in conjunction with batch accumulation (closes #30); see the sketch below.
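
A hedged sketch of the previously affected setup. The flag spelling --step-save-period follows the option name above; everything else is an illustrative assumption (batch accumulation itself is configured elsewhere, e.g. through the tuning config):

```sh
# Hypothetical: save a checkpoint every 1000 steps; with batch
# accumulation active, v0.8.0 no longer writes duplicate checkpoints.
zeldarose transformer \
  --config my-tuning-config.toml \
  --out-dir local/out \
  --step-save-period 1000 \
  my-corpus.txt
```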

Changed

  • Maximum PyTorch compatibility bumped to 2.1
  • max_steps and max_epochs can now be set in the tuning config (see the snippet after this list). Setting them via command-line options is deprecated and will be removed in a future version.
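
For example, a hedged tuning-config snippet: max_steps and max_epochs are the keys named above, while the surrounding layout (the type key and the [tuning] table) is an assumption about the config format:

```toml
# Only max_steps/max_epochs come from these release notes; the rest of
# the layout is an illustrative assumption.
type = "mlm"

[tuning]
max_steps = 10000
max_epochs = 2
```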

v0.7.3 — Bug Fix

27 Feb 08:40
b45c889

Fixed

  • Behaviour when asking for denoising in mBART with a model that has no mask token.

v0.7.2 — Now with a doc??!?

26 Feb 14:12
0f9ee82

Fixed

  • In mBART training, loss scaling now works as it was supposed to.
  • We now have documentation! Check it out at https://zeldarose.readthedocs.io; it will get
    better over time (hopefully!).

v0.7.1 — Bug fix

25 Feb 17:36
27cd364

Fixed

  • The translation loss logged during training is no longer always zero.

v0.7.0 — Now with mBART translations!

24 Feb 23:01
21746b0

The main highlight of this release is the addition of mBART training as a task. It is, so far, slightly different from the original mBART training scheme, but similar enough to work in our tests.

Added

  • The --tf32-mode option lets you select the level of NVIDIA Ampere matmul optimisations.
  • The --seed option lets you fix a random seed.
  • The mbart task allows training general seq2seq and translation models.
  • A zeldarose command that serves as an entry point for both tokenizer and transformer training (see the sketch after this list).
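
A hedged sketch combining the new entry point with the new options. Only the option names come from the list above; the transformer subcommand, the high value for --tf32-mode, and the remaining flags are illustrative assumptions:

```sh
# Hypothetical: unified entry point with a fixed seed and Ampere TF32
# matmul optimisations enabled (flag values are illustrative assumptions).
zeldarose transformer \
  --config mbart-config.toml \
  --seed 42 \
  --tf32-mode high \
  --out-dir local/out \
  my-corpus.txt
```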

Changed

  • BREAKING --use-fp16 has been replaced by --precision, which also allows fp64 and
    bfloat16. The previous behaviour can be emulated with --precision 16 (see the sketch after this list).
  • Removed the GPU stats logging from the profile mode, since Lightning stopped supporting it.
  • Switched TOML library from toml to tomli.
  • BREAKING Bumped the minimal version of several dependencies:
    • pytorch-lightning >= 1.8.0
    • torch >= 1.12
  • Bumped the maximum version of several dependencies:
    • datasets < 2.10
    • pytorch-lightning < 1.9
    • tokenizers < 0.14
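
A hedged before/after sketch of the --use-fp16 migration; everything except the two flags themselves is an illustrative assumption:

```sh
# Before (<= v0.6.x, hypothetical invocation with the removed flag):
zeldarose transformer --use-fp16 --out-dir local/out my-corpus.txt

# After: the same behaviour via the new option, as stated above.
zeldarose transformer --precision 16 --out-dir local/out my-corpus.txt
```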

v0.6.0 — Dependencies compatibilities

28 Jul 08:21
e5623c9

This one fixes compatibility issues with our dependencies: it bumps minimal versions and adds upper version limits.

Changed

  • Bumped torchmetrics minimal version to 0.9
  • Bumped datasets minimal version to 2.4
  • Bumped torch max version to 1.12

Fixed

  • Dataset fingerprinting/caching issues #31

Full Changelog: v0.5.0...v0.6.0

v0.5.0 — Housekeeping

31 Mar 09:58
36ad88a

The minor bump is because we have several new minimal version requirements (and to fairly recent versions at that). Otherwise, this is mostly internal stuff.

Changed

  • Moved the packaging config to pyproject.toml and now require setuptools >= 61 (see the snippet after this list).
  • click_pathlib is no longer a dependency, and click now has a minimal version of 8.0.3.
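
For reference, the standard build-system table this implies in pyproject.toml; a minimal sketch, as the project's actual file will declare more (metadata, dependencies):

```toml
# Minimal build-system declaration: setuptools >= 61 is the first version
# able to read project metadata directly from pyproject.toml (PEP 621).
[build-system]
requires = ["setuptools>=61"]
build-backend = "setuptools.build_meta"
```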

Full Changelog: v0.4.0...v0.5.0