A Keras implementation for learning a mapping from input images to their corresponding ASCII art, for example:
Due to computational limitations, I used a small dataset of 400 images, all collected from https://picsum.photos/. The input/target pairs were generated using basic image processing; the generation process is documented in this notebook.
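A minimal sketch of how such a pair could be generated is shown below; the character ramp, cell size, and helper function are illustrative assumptions, not the exact processing used in the notebook:

```python
# Hypothetical sketch of the pair-generation step; the actual notebook may differ.
import numpy as np
from PIL import Image, ImageDraw, ImageFont

CHARS = " .:-=+*#%@"  # assumed dark-to-bright character ramp

def image_to_ascii_image(path, cols=64, cell=8):
    """Render a grayscale version of the image as ASCII characters on a blank canvas."""
    img = Image.open(path).convert("L")
    rows = int(cols * img.height / img.width)
    small = np.array(img.resize((cols, rows)))            # one pixel per character cell
    canvas = Image.new("L", (cols * cell, rows * cell), color=0)
    draw = ImageDraw.Draw(canvas)
    font = ImageFont.load_default()
    for y in range(rows):
        for x in range(cols):
            ch = CHARS[int(small[y, x]) * (len(CHARS) - 1) // 255]
            draw.text((x * cell, y * cell), ch, fill=255, font=font)
    return canvas

# Example: one (input, target) training pair.
# source = Image.open("img_001.jpg").convert("RGB")
# target = image_to_ascii_image("img_001.jpg")
```

Each source photo paired with its ASCII rendering gives one (input, target) example for training.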
The whole project has been carried out on Google Colab.
The dataset can be found in this folder and is also available on Kaggle.
A pretrained model can be downloaded from https://drive.google.com/file/d/1Z3xFnExTV-Jv3-KY1VU3wQCNm_0zLjq-/view?usp=sharing. If I am late in sharing the model, feel free to email me at [email protected].
The general training process has been described in this notebook.
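For orientation, here is a rough outline of a pix2pix-style training setup in Keras. The `build_gan` helper, the loss weights, the optimizer settings, and the names in the commented training step are assumptions and may differ from the notebook:

```python
# Illustrative pix2pix-style composite model; the notebook's actual code may differ.
from tensorflow import keras

def build_gan(generator, discriminator, image_shape):
    """Composite model: adversarial loss plus L1 loss on the generated ASCII image."""
    discriminator.trainable = False          # freeze D when training G through the composite
    src = keras.Input(shape=image_shape)
    fake = generator(src)
    validity = discriminator([src, fake])
    gan = keras.Model(src, [validity, fake])
    gan.compile(
        loss=["binary_crossentropy", "mae"],
        loss_weights=[1, 100],               # L1 term weighted as in the pix2pix paper
        optimizer=keras.optimizers.Adam(learning_rate=2e-4, beta_1=0.5),
    )
    return gan

# One training step over a batch of (photo, ascii) pairs, assuming the
# discriminator was compiled separately beforehand:
# d_real = discriminator.train_on_batch([photos, asciis], real_labels)
# d_fake = discriminator.train_on_batch([photos, generator.predict(photos)], fake_labels)
# g_loss = gan.train_on_batch(photos, [real_labels, asciis])
```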
A Streamlit app has been developed to showcase the results; it can be accessed here: ASCII app
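A hypothetical minimal version of such an app is sketched below; the model filename, input size, and [-1, 1] scaling are assumptions:

```python
# Minimal Streamlit demo sketch; paths and preprocessing are assumed.
import numpy as np
import streamlit as st
from PIL import Image
from tensorflow import keras

st.title("Image to ASCII Art")

# Assumed filename for the pretrained generator.
model = keras.models.load_model("ascii_generator.h5", compile=False)

uploaded = st.file_uploader("Upload an image", type=["jpg", "jpeg", "png"])
if uploaded is not None:
    img = Image.open(uploaded).convert("RGB").resize((256, 256))
    x = (np.asarray(img, dtype=np.float32) / 127.5) - 1.0   # scale to [-1, 1] as in pix2pix
    pred = model.predict(x[np.newaxis])[0]
    st.image(np.squeeze((pred + 1.0) / 2.0), caption="Generated ASCII art", clamp=True)
```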
Image-to-Image Translation with Conditional Adversarial Networks
Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, Alexei A. Efros
CVPR, 2017.
The code borrows heavily from How to Develop a Pix2Pix GAN for Image-to-Image Translation and Build a Pix2Pix GAN With Python. The data is derived from Lorem Picsum.