Releases: zama-ai/concrete-ml
v1.7.0
Summary
Concrete ML 1.7 adds functionality to fine-tune LLMs and neural networks on encrypted data using low-rank approximation (LoRA) parameter-efficient fine-tuning. This allows users to securely outsource large weight matrix computations to remote servers while keeping a small set of private fine-tuned parameters locally. Additionally, this release includes GPU support, providing a 1-2x speed-up for large neural networks on server-grade GPUs such as the NVIDIA H100. Concrete ML now also supports Python 3.11 and PyTorch 2.
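The core idea is the standard LoRA decomposition: the large frozen base weights can be evaluated remotely on encrypted activations, while only the small low-rank adapters are trained and kept on the client. Below is a minimal plain-PyTorch sketch of that split; the class and parameter names are illustrative and do not show Concrete ML's actual hybrid-model API.

```python
# Minimal plain-PyTorch sketch of the LoRA split behind this feature.
# The large frozen base weight is what can be outsourced (on encrypted
# data), while the small low-rank adapters A and B are trained locally.
import torch
import torch.nn as nn

class LoraLinear(nn.Module):
    def __init__(self, in_features: int, out_features: int, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        # Large frozen weight: its matmul can run remotely under FHE
        self.base = nn.Linear(in_features, out_features, bias=False)
        self.base.weight.requires_grad = False
        # Small trainable low-rank factors: stay private on the client
        self.lora_a = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # y = x W^T (remote, encrypted) + scaling * x A^T B^T (local)
        return self.base(x) + (x @ self.lora_a.T @ self.lora_b.T) * self.scaling
```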
What's Changed
New features
- LoRA fine-tuning in FHE with MLP tutorial and GPT-2 use case example (#823) (4d2f2e6)
- Add GPU support (#849) (945aead); see the sketch after this list
- Add support for Python 3.11 (#701) (819dca7)
- Support for embedding layers (#778) (296bc8c)
- Support for encrypted multiplication and division (#690) (a1bd9b8)
- Upgrade PyTorch to 2.3.1 and Brevitas to 0.10 (#788) (c3d7c81)
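A rough usage sketch for the GPU entry above: compilation goes through Concrete ML's compile_torch_model entry point, and the device argument shown here is an assumption about how the GPU runtime is selected; verify it against the documentation.

```python
# Hypothetical usage sketch for the new GPU backend. The device argument
# is an assumption, not confirmed API.
import torch
from concrete.ml.torch.compile import compile_torch_model

model = torch.nn.Sequential(
    torch.nn.Linear(10, 32),
    torch.nn.ReLU(),
    torch.nn.Linear(32, 2),
)
inputset = torch.randn(100, 10)  # representative calibration data

quantized_module = compile_torch_model(
    model,
    inputset,
    n_bits=6,
    device="cuda",  # assumption: selects the GPU FHE runtime when available
)
```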
Improvements
- Relax Python version restrictions for deployment (#853) (040c308)
- Remove Protobuf 2GB limit when checking ONNX (#811) (c8908fa)
- Always use evaluation key compression (#726) (b4e1060)
Fixes
- Fix dtype check in quantizer dequant (77ced60)
- Dynamic import of transformers in hybrid model (829b68b)
- Use correct torch version for Intel Mac (#798) (a8eab89)
Resources
- Documentation: https://docs.zama.ai/concrete-ml
- Demo & Examples:
  - Encrypted DNA Ancestry: https://huggingface.co/spaces/zama-fhe/encrypted_dna
v1.6.1
Summary
Minor fixes.
Links
Docker Image: zamafhe/concrete-ml:v1.6.1
Docker Hub: https://hub.docker.com/r/zamafhe/concrete-ml/tags
pip: https://pypi.org/project/concrete-ml/1.6.1
Documentation: https://docs.zama.ai/concrete-ml
v1.6.1
Fix
- Dynamic import of transformers in hybrid model (270efbe)
v1.6.0
Summary
Concrete ML 1.6 includes the following enhancements:
- Latency improvements on large neural networks
- Support for pre-trained tree-based models such as those trained using Federated Learning
- Enhanced collaborative computation
- Introduction of DataFrame schemas
- Deployment of logistic regression training
What's Changed
New features
- Enable non-interactive encrypted training for logistic regression (#660) (ec58bca)
- Support pre-trained tree-based models using from_sklearn (5ca282b); see the sketch after this list
- Add FHE training deployment (#665) (b718629)
- Support approximate rounding to speed up neural networks (9ef890e)
- Allow users to define a schema for dataframe encryption (ccd6641)
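A sketch of the pre-trained tree-based model import mentioned above, such as a model obtained through Federated Learning. The classmethod name from_sklearn_model and its calibration/n_bits arguments are assumptions to verify against the documentation.

```python
# Sketch: importing a pre-trained scikit-learn tree model into Concrete ML.
# from_sklearn_model and its arguments are assumed names, not confirmed API.
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier as SkDecisionTreeClassifier
from concrete.ml.sklearn import DecisionTreeClassifier

X, y = make_classification(n_samples=200, n_features=10, random_state=0)
sk_model = SkDecisionTreeClassifier(max_depth=4).fit(X, y)

# Convert the already-trained model, quantizing it with calibration data
fhe_model = DecisionTreeClassifier.from_sklearn_model(sk_model, X, n_bits=8)
fhe_model.compile(X)
y_pred = fhe_model.predict(X[:1], fhe="execute")  # run inference in FHE
```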
Fixes
- Fix fhe-training classes behavior (a88d704)
- Update qgpt2_class.py to fix typo (d376d85)
- Fix post-processing shape mismatches for linear models (#585) (b097022)
- Disable overflow protection in rounding (4db0157)
- Make skorch import fail without error (81de55c)
Improvements
- Replace python release install with setup-python (899b9f1)
- Add support for AvgPool's missing parameters (15a8340)
Resources
- Documentation: https://docs.zama.ai/concrete-ml
- Demo & Examples:
  - Add NN-20 and NN-50 deep MLPs for MNIST classification (1b5ce84)
v1.6.0-rc3
Summary
1.6.0 - Release Candidate - 3
v1.6.0-rc0
Summary
1.6.0 - Release Candidate - 0
v1.5.0
Summary
Concrete ML v1.5 introduces several significant enhancements, including a DataFrame API that enables working with encrypted stored data, a new option that speeds up neural networks by 2-3x, and an improved FHE simulation mode to quickly evaluate the impact of this speed-up on neural network accuracy.
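A sketch of the encrypted DataFrame workflow follows. The ClientEngine name and the encrypt_from_pandas / decrypt_to_pandas / merge calls are assumptions about the concrete.ml.pandas API; check the documentation for the exact names.

```python
# Sketch of the encrypted DataFrame workflow; API names are assumptions.
import pandas as pd
from concrete.ml.pandas import ClientEngine

client = ClientEngine(keys_path="keys")  # generates or loads client keys

left = pd.DataFrame({"id": [1, 2, 3], "salary": [100, 200, 300]})
right = pd.DataFrame({"id": [2, 3, 4], "age": [25, 30, 35]})

left_enc = client.encrypt_from_pandas(left)
right_enc = client.encrypt_from_pandas(right)

# The join is computed on encrypted values and can be outsourced
joined_enc = left_enc.merge(right_enc, how="left", on="id")
joined = client.decrypt_to_pandas(joined_enc)
```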
What's Changed
Features:
- Add encrypted dataframe API (d2d6250)
- Add an option to allow approximate rounding to speed up NNs (9ef890e); see the sketch after this list
- Support ONNX conv1d operator (09ad7a6)
- Implement quantized unfold operation (fa3ef88)
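A sketch of enabling the new approximate rounding option referenced above when compiling a torch model. Passing a dict with method="approximate" to rounding_threshold_bits is an assumption based on this release's description; a plain integer keeps the previous exact behavior.

```python
# Sketch: enabling approximate rounding at compile time; the dict form of
# rounding_threshold_bits is an assumption to verify in the documentation.
import torch
from concrete.ml.torch.compile import compile_torch_model

model = torch.nn.Sequential(
    torch.nn.Linear(20, 64),
    torch.nn.ReLU(),
    torch.nn.Linear(64, 2),
)
inputset = torch.randn(100, 20)

quantized_module = compile_torch_model(
    model,
    inputset,
    n_bits=6,
    rounding_threshold_bits={"n_bits": 6, "method": "approximate"},
)

# FHE simulation quickly estimates the accuracy impact of the speed-up
out = quantized_module.forward(inputset[:8].numpy(), fhe="simulate")
```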
Improvements:
- More accurate FHE simulation
- Encrypted aggregation of the outputs of tree ensembles
- Allow different quantization bits for tree model leaves and inputs
Fix
- Import skorch without errors due to bad docstrings (81de55c)
- Add support for AvgPool's missing parameters (15a8340)
Resources
- Documentation:
  - New structure and landing page (85cb962)
  - Add links to credit card approval space in use case examples (df81aca)
  - Improve contributing section (1696799)
  - Document n_bits for compile torch functions (0306c65)
  - Add explanation of encrypted training and federated learning (57dbdff)
  - Add documentation about scaling (9252f57)
  - Add dataframe documentation (#576) (d3bf5ac)
Links
Docker Image: zamafhe/concrete-ml:v1.5.0
Docker Hub: https://hub.docker.com/r/zamafhe/concrete-ml/tags
pip: https://pypi.org/project/concrete-ml/1.5.0
Documentation: https://docs.zama.ai/concrete-ml
v1.5.0-rc1
Summary
Add support for encrypted DataFrames and approximate rounding. Fix AvgPool's missing count_include_pad parameter.
Links
Docker Image: zamafhe/concrete-ml:v1.5.0-rc1
Docker Hub: https://hub.docker.com/r/zamafhe/concrete-ml/tags
pip: https://pypi.org/project/concrete-ml/1.5.0-rc1
Documentation: https://docs.zama.ai/concrete-ml
v1.5.0-rc1
Feature
- Add new approx rounding (9ef890e)
- Encrypted data-frame (d2d6250)
- Support conv1d operator (09ad7a6)
- Implement quantized unfold (fa3ef88)
Fix
- Fix survey link (e5661c1)
- Reinstate apidoc generated tags (2878a07)
- Make skorch import fail without error (81de55c)
- Replace python release install with setup-python (899b9f1)
- Fix concurrency issue in release process (1295ea9)
- Add support for AvgPool's missing parameters (15a8340)
- Update README.md (ba1fdad)
Documentation
- Add dataframe documentation (#576) (d3bf5ac)
- Update main landing pages (cfb862e)
- New structure and landing page (85cb962)
- Update operator list in torch support's documentation section (b617740)
- Add links to credit card approval space in use case examples (df81aca)
- Improve contributing section (1696799)
- Document n_bits for compile torch functions (0306c65)
- Add explanation of encrypted training and federated learning (57dbdff)
- Add documentation about scaling (9252f57)
v1.5.0-rc0
Summary
Add support for torch's Conv1d and Unfold operators.
Links
Docker Image: zamafhe/concrete-ml:v1.5.0-rc0
Docker Hub: https://hub.docker.com/r/zamafhe/concrete-ml/tags
pip: https://pypi.org/project/concrete-ml/1.5.0-rc0
Documentation: https://docs.zama.ai/concrete-ml
v1.5.0-rc0
Fix
- Make skorch import fail without error (81de55c)
- Replace python release install with setup-python (899b9f1)
- Fix concurrency issue in release process (1295ea9)
- Add support for AvgPool's missing parameters (15a8340)
- Update README.md (ba1fdad)
Documentation
- Update operator list in torch support's documentation section (b617740)
- Add links to credit card approval space in use case examples (df81aca)
- Improve contributing section (1696799)
- Document n_bits for compile torch functions (0306c65)
- Add explanation of encrypted training and federated learning (57dbdff)
- Add documentation about scaling (9252f57)
v1.4.1
Summary
Updates Concrete-Python to 2.5.1 and fixes AvgPool's missing parameters.
Links
Docker Image: zamafhe/concrete-ml:v1.4.1
Docker Hub: https://hub.docker.com/r/zamafhe/concrete-ml/tags
pip: https://pypi.org/project/concrete-ml/1.4.1
Documentation: https://docs.zama.ai/concrete-ml
v1.4.1
Fix
- Make skorch import fail without error (5863a4b)
- Replace python release install with setup-python (f41c65c)
- Update README.md (8bef8e5)
- Add support for AvgPool's missing parameters (559d99c)
v1.4.0
Summary
This release adds support for training models on encrypted data and introduces a latency optimization for the inference of tree-based models such as XGBoost, random forest, and decision trees. The optimization offers 2-3x speed-ups in typical quantization settings and allows even more accurate, high bit-width tree-based models to run with good latency.
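For context, the tree-based flow this optimization targets looks like the following sketch; the rounding-based speed-up applies under the hood, and the hyper-parameters are illustrative.

```python
# Sketch of the tree-based flow: a Concrete ML XGBClassifier is trained,
# compiled, and run in FHE. Hyper-parameter values are illustrative only.
from sklearn.datasets import make_classification
from concrete.ml.sklearn import XGBClassifier

X, y = make_classification(n_samples=200, n_features=10, random_state=0)

model = XGBClassifier(n_bits=6, n_estimators=20, max_depth=4)
model.fit(X, y)          # training happens in the clear here
model.compile(X)         # compile the quantized model for FHE
y_pred = model.predict(X[:1], fhe="execute")  # encrypted inference
```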
Links
Docker Image: zamafhe/concrete-ml:v1.4.0
Docker Hub: https://hub.docker.com/r/zamafhe/concrete-ml/tags
pip: https://pypi.org/project/concrete-ml/1.4.0
Documentation: https://docs.zama.ai/concrete-ml
v1.4.0
Feature
- SGDClassifier training in FHE (0893718); see the sketch after this list
- Support Expand Equal ONNX op (cf3ce49)
- Add rounding feature on CML trees (064eb82)
- Add multi-output support (fef23a9)
- Allow QuantizedAdd produces_output_graph (0b57c71)
- Encrypted GEMM support, 3D inputs, better rounding control, SGD training test (111c7e3)
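A sketch of the new SGDClassifier FHE training flow referenced above. The fit_encrypted and parameters_range options and the fhe="execute" flag are assumptions to verify against the documentation.

```python
# Sketch: SGDClassifier training on encrypted data; the constructor
# options and fit flag below are assumptions, not confirmed API.
import numpy as np
from concrete.ml.sklearn import SGDClassifier

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (100, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(np.int64)

model = SGDClassifier(
    fit_encrypted=True,           # assumption: enables FHE training
    parameters_range=(-1.0, 1.0), # assumption: range for trained weights
)
model.fit(X, y, fhe="execute")    # gradient steps run on encrypted data
y_pred = model.predict(X[:5])
```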
Fix
- Add --no-warnings flag to linkchecker (1dc547e)
- Fix wrong assumption in ReduceSum operator's axis parameter (1a592d7)
- Mark flaky tests due to issue in simulation (4f67883)
- Update learning rate default value for XGB models (e4984d6)