chop is a library for continuous optimization built on PyTorch, with applications to adversarially attacking and training neural networks.
We define stochastic optimizers in the chop.stochastic
module. These follow PyTorch Optimizer conventions, similar to the torch.optim
module.
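Since these optimizers follow the torch.optim interface, they drop into a standard training loop. Below is a minimal sketch assuming a projected-gradient optimizer named chop.stochastic.PGD that takes a constraint's prox operator; the class name and constructor arguments are illustrative assumptions, so check the examples directory for the actual signatures.

import torch
import chop

# Toy regression problem (illustrative).
X = torch.randn(128, 20)
y = torch.randn(128, 1)
model = torch.nn.Linear(20, 1)
criterion = torch.nn.MSELoss()

# ASSUMPTION: a chop.stochastic optimizer constrained to an L1 ball;
# the exact constructor arguments may differ in the real API.
constraint = chop.constraints.L1Ball(alpha=1.)
optimizer = chop.stochastic.PGD(model.parameters(), prox=[constraint.prox], lr=.1)

# Standard torch.optim loop: zero_grad / backward / step.
for _ in range(100):
    optimizer.zero_grad()
    loss = criterion(model(X), y)
    loss.backward()
    optimizer.step()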
We also define full-gradient algorithms that operate on a batch of optimization problems in the chop.optim module. These are used for adversarial attacks through the chop.Adversary wrapper.
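As a sketch of that workflow: wrap a full-gradient solver from chop.optim in chop.Adversary, then call its perturb method on a batch. The solver name minimize_pgd, the perturb signature, and the per-example criterion below are our assumptions, not the confirmed API; the examples directory shows the real usage.

import torch
import chop

# Toy classifier and a random batch standing in for real images (illustrative).
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
criterion = torch.nn.CrossEntropyLoss(reduction='none')  # ASSUMPTION: per-example losses
data = torch.rand(16, 3, 32, 32)
target = torch.randint(10, (16,))

# ASSUMPTION: an L-infinity-bounded attack solved by a full-gradient
# method from chop.optim; argument names are illustrative.
constraint = chop.constraints.LinfBall(alpha=8. / 255)
adversary = chop.Adversary(chop.optim.minimize_pgd)
_, delta = adversary.perturb(data, target, model, criterion,
                             prox=constraint.prox, max_iter=20)

adv_data = data + delta  # perturbed batch, one attack per example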
To install the latest release, run:
pip install chop-pytorch
or, for the latest development version:
pip install git+https://github.com/openopt/chop.git
Welcome to chop! See the examples directory and our webpage.
Run the tests with pytest tests.
If this software is useful to your research, please consider citing it as:
@article{chop,
  author = {Negiar, Geoffrey and Pedregosa, Fabian},
  title = {{CHOP}: continuous optimization built on {PyTorch}},
  year = {2020},
  url = {https://github.com/openopt/chop}
}
Geoffrey Négiar is in the Mahoney lab and the El Ghaoui lab at UC Berkeley.
Fabian Pedregosa is at Google Research.