Machine Learning plugins for GIMP 3. These include Background Removal, Super Resolution, Monocular Depth Map (as opposed to stereo), Inpainting, Noise Removal (Denoise), and Deblur, among others. See below for the full list.
These plugins wrap functionality from other projects in order to make them easily accessible from within GIMP. See below for links to specific projects.
This repository was forked from UserUnknownFactor's GIMP3-ML, which was itself forked from the original GIMP-ML.
Both of those forks appear to be abandoned. The original GIMP-ML is being moved in a different direction, towards DALL-E, Stable Diffusion, and similar generative models, and no longer seems to include these task-specific machine learning tools, which I find very useful. In the last fork, the Remove Background plug-in is broken because it hasn't kept up with the latest version of rembg, among other issues.
I've made this fork to update the plug-ins to work with the latest GIMP version. When the previous fork was last worked on, it reportedly worked with GIMP 2.99.12. First I will make sure it works with GIMP 2.99.12, then move to the latest GIMP 3 RC (Release Candidate).
Forked from the original version to improve the user experience in several ways:
- Added more models.
- Models are run with Python 3.10+.
- Full error text is shown in the GIMP error dialog and in the debug console.
- Additional alpha channel handling in some plugins.
- Automatic installation for Windows systems.
- And other smaller improvements.
The plugins have been tested with GIMP 2.99.12 on the following systems:
- Windows 10
- Install GIMP 3.
- Download this repository.
- On Windows:
  - Install Python 3.10.
  - Run `install.cmd` from the unpacked folder.
- You should now find the GIMP-ML plugins under Layers → GIMP-ML.
- You can download the weights here, or from the weight links below.
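If the plug-ins do not show up, or fail when run, a quick way to narrow the problem down is to confirm that the Python interpreter the plug-ins use can import their ML dependencies. The snippet below is a minimal sketch and not part of this repository; the package names (torch, rembg, numpy) are assumptions based on the projects wrapped below and may not match the actual requirements.

```python
# sanity_check.py -- hypothetical helper, not shipped with the plug-ins.
# Run it with the same interpreter the plug-ins use (e.g. Python 3.10) to see
# whether the assumed ML dependencies import cleanly.
import importlib

for pkg in ("torch", "rembg", "numpy"):
    try:
        importlib.import_module(pkg)
        print(f"{pkg}: OK")
    except ImportError as err:
        print(f"{pkg}: missing ({err})")
```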
- Source: https://github.com/danielgatis/rembg
- Weights:
- u2net (download, source): A pre-trained model for general use cases.
- u2netp (download, source): A lightweight version of u2net model.
- u2net_human_seg (download, source): A pre-trained model for human segmentation.
- u2net_cloth_seg (download, source): A pre-trained model for cloth parsing from human portraits. Clothes are parsed into three categories: upper body, lower body, and full body.
- License: MIT License
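The Background Removal plug-in wraps rembg, so the models above are simply rembg session names. As an illustration only (not the plug-in's actual code path), a direct call to the library looks roughly like this; `new_session` and `remove` are rembg's public API in recent releases, and the file names are placeholders:

```python
# Minimal rembg sketch (assumes a recent rembg release and Pillow installed).
from PIL import Image
from rembg import new_session, remove

session = new_session("u2net")           # or "u2netp", "u2net_human_seg", "u2net_cloth_seg"
image = Image.open("input.png")
result = remove(image, session=session)  # RGBA output with the background made transparent
result.save("output.png")
```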
- Source: https://github.com/youyuge34/Anime-InPainting
- Weights: Google Drive | Baidu
- License: Creative Commons Attribution-NonCommercial 4.0 International
@inproceedings{nazeri2019edgeconnect,
title={EdgeConnect: Generative Image Inpainting with Adversarial Edge Learning},
author={Nazeri, Kamyar and Ng, Eric and Joseph, Tony and Qureshi, Faisal and Ebrahimi, Mehran},
journal={arXiv preprint},
year={2019}}
- Source:
- Weights: 4x_FatalPixels
- Licenses:
- Demosaics: GNU Affero General Public License v3.0
- ESRGAN: Apache-2.0 license
[Paper]
Xintao Wang, Liangbin Xie, Chao Dong, Ying Shan
Applied Research Center (ARC), Tencent PCG
Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences
- Source: https://github.com/a-mos/High_Resolution_Image_Inpainting
- License: Creative Commons Attribution-NonCommercial 4.0 International
@article{Moskalenko_2020,
doi = {10.51130/graphicon-2020-2-4-18},
url = {https://doi.org/10.51130%2Fgraphicon-2020-2-4-18},
year = 2020,
month = {dec},
pages = {short18--1--short18--9},
author = {Andrey Moskalenko and Mikhail Erofeev and Dmitriy Vatolin},
title = {Deep Two-Stage High-Resolution Image Inpainting},
journal = {Proceedings of the 30th International Conference on Computer Graphics and Machine Vision ({GraphiCon} 2020). Part 2}}
- Source: https://github.com/twtygqyy/pytorch-SRResNet
- Torch Hub fork: https://github.com/valgur/pytorch-SRResNet
- License: MIT
- C. Ledig et al., “Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network,” in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 105–114.
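Several entries in this list (SRResNet, neural-colorization, DeblurGANv2, monodepth2) point to Torch Hub forks, which exist so the models can be fetched with `torch.hub.load`. The sketch below shows the general pattern for this SRResNet fork; the entrypoint name `srresnet` is an assumption, so check the fork's hubconf.py for the actual name and arguments.

```python
import torch

# Hypothetical Torch Hub usage for the valgur/pytorch-SRResNet fork.
# "srresnet" is an assumed entrypoint name; the real one is defined in the
# fork's hubconf.py and may take extra arguments (e.g. for pretrained weights).
model = torch.hub.load("valgur/pytorch-SRResNet", "srresnet")
model.eval()

with torch.no_grad():
    low_res = torch.rand(1, 3, 64, 64)   # dummy low-resolution RGB batch
    high_res = model(low_res)            # SRResNet upscales by 4x per the paper
print(high_res.shape)                    # expected: torch.Size([1, 3, 256, 256])
```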
- Source: https://github.com/zeruniverse/neural-colorization
- Torch Hub fork: https://github.com/valgur/neural-colorization
- License:
- GNU GPL 3.0 for personal or research use
- Commercial use prohibited
- Model weights released under CC BY 4.0
- Based on fast-neural-style:
- https://github.com/jcjohnson/fast-neural-style
- License:
- Free for personal or research use
- For commercial use please contact the authors
- J. Johnson, A. Alahi, and L. Fei-Fei, “Perceptual Losses for Real-Time Style Transfer and Super-Resolution,” in Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 9906 LNCS, 2016, pp. 694–711.
- Source: https://github.com/xavysp/DexiNed
- Weights: BIPED
- License: MIT license
@misc{soria2021dexined_ext,
title={Dense Extreme Inception Network for Edge Detection},
author={Xavier Soria and Angel Sappa and Patricio Humanante and Arash Arbarinia},
year={2021},
eprint={arXiv:2112.02250},
archivePrefix={arXiv},
primaryClass={cs.CV}}
- Source: https://github.com/TAMU-VITA/DeblurGANv2
- Torch Hub fork: https://github.com/valgur/DeblurGANv2
- License: BSD 3-clause
- O. Kupyn, T. Martyniuk, J. Wu, and Z. Wang, “DeblurGAN-v2: Deblurring (Orders-of-Magnitude) Faster and Better,” in 2019 IEEE/CVF International Conference on Computer Vision (ICCV), 2019, pp. 8877–8886.
- Source: https://github.com/nianticlabs/monodepth2
- Torch Hub fork: https://github.com/valgur/monodepth2
- License:
- See the license file for terms
- Copyright © Niantic, Inc. 2019. Patent Pending. All rights reserved.
- Non-commercial use only
- C. Godard, O. Mac Aodha, M. Firman, and G. Brostow, “Digging Into Self-Supervised Monocular Depth Estimation,” in 2019 IEEE/CVF International Conference on Computer Vision (ICCV), 2019, pp. 3827–3837.
- UserUnknownFactor
- Kritik Soman (kritiksoman) – original GIMP-ML implementation
MIT
Please note that additional license terms apply for each individual model. See the references list for details. Many of the models restrict usage to non-commercial or research purposes only.