Kanji Streaming

Hugging Face Space

This project builds an unusual dialogue system on top of the real-time generation capabilities of StreamDiffusion.

Instead of chatting with an ordinary chatbot in English, users receive replies in a kanji-like fake language, with each response rendered by diffusion-based models.

The system is built on StreamDiffusionIO, a modified version of StreamDiffusion that renders text streams into image streams.
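For intuition, here is a minimal, self-contained sketch of that idea in Python; it is not StreamDiffusionIO's actual API. A stand-in text stream yields one character at a time, and each character is rendered into an image with diffusers, so the reply arrives as a stream of generated glyphs. The model ID, prompt template, and step count are assumptions for illustration only.

# Conceptual sketch only -- StreamDiffusionIO has its own streaming API.
# This just illustrates "text stream in, image stream out".
import torch
from diffusers import StableDiffusionPipeline

# Stand-in renderer (assumed model ID; the project uses a kanji-finetuned model).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

def text_stream(reply: str):
    # Stand-in for an LLM token stream: yield the reply one character at a time.
    yield from reply

def image_stream(reply: str):
    # Render each incoming character into one image frame.
    for i, ch in enumerate(text_stream(reply)):
        # One diffusion call per character; step count kept small only to suggest real-time use.
        frame = pipe(f"a kanji character meaning '{ch}'", num_inference_steps=8).images[0]
        yield i, frame

for i, frame in image_stream("hello"):
    frame.save(f"frame_{i:03d}.png")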

Demo video: seq_no_bgm.mp4

News

🔥 Mar 05, 2024 | Kanji Streaming was reposted by enpitsu (the original author of Fake Kanji Generation) on X (Twitter)!

⬆️ Mar 04, 2024 | The demo now supports chatting with mistralai/Mixtral-8x7B-Instruct-v0.1 through the HF InferenceClient, which also cuts GPU memory usage significantly (from ~18.5 GB ➡️ ~5 GB). Also check out the demo deployed on Hugging Face Space!

🔥 Mar 01, 2024 | Kanji Streaming was reposted by AK on X (Twitter)!

🚀 Feb 29, 2024 | Kanji Streaming is released!

Deploy

Step0: Clone this repo

git clone https://github.com/AgainstEntropy/kanji.git
cd kanji

Step1: Setup environment

conda create -n kanji python=3.10
conda activate kanji
pip install -r requirements.txt

Step2: Install StreamDiffusionIO

For Users

pip install StreamDiffusionIO

For Developers

To pull the source code of StreamDiffusionIO, either run

git submodule update --init --recursive

or

git clone https://github.com/AgainstEntropy/StreamDiffusionIO.git

Then install StreamDiffusionIO in editable mode

pip install --editable StreamDiffusionIO/

Tip

See the StreamDiffusionIO repository for more details.

Step3: Download model weights
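The exact download commands are not captured here. As a hedged sketch, the required weights can be fetched with huggingface_hub; the Stable Diffusion and Llama repository IDs below are standard Hub IDs, while the kanji-weights ID is a hypothetical placeholder; substitute whatever the launch scripts and the reproduction guide expect.

# Hedged sketch: fetch weights with huggingface_hub (repo IDs are assumptions, see note above).
from huggingface_hub import snapshot_download

# Base image model referenced later in the Tip.
snapshot_download("runwayml/stable-diffusion-v1-5", local_dir="weights/stable-diffusion-v1-5")

# Kanji generation weights: hypothetical repo ID, replace with the one the launch scripts expect.
snapshot_download("AgainstEntropy/kanji-weights", local_dir="weights/kanji")

# Only needed for the local-Llama variant; gated repo, requires accepting Meta's license and logging in.
snapshot_download("meta-llama/Llama-2-7b-chat-hf", local_dir="weights/Llama-2-7b-chat-hf")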

Step4: Serve with Gradio

Run git submodule update --init --recursive to pull the code in the demo folder.

Before launching, modify the arguments in the launch scripts (e.g., model paths and the conda installation path) to match your setup.

Serve with Mixtral-8x7B-Instruct-v0.1 (HF InferenceClient)

cd demo
sh run-app-mixtral.sh
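In this variant, text generation runs on the Hugging Face Inference API instead of on your GPU, which is where the memory saving mentioned in the News section comes from; only the diffusion renderer stays local. A minimal sketch of the kind of streaming call involved (prompt format and generation parameters are assumptions, not the demo's exact code):

# Sketch: stream tokens from Mixtral via the HF Inference API (no local LLM weights needed).
from huggingface_hub import InferenceClient

client = InferenceClient(model="mistralai/Mixtral-8x7B-Instruct-v0.1")
prompt = "[INST] Greet me in one short sentence. [/INST]"  # Mixtral-Instruct prompt format

for token in client.text_generation(prompt, max_new_tokens=64, stream=True):
    # Each chunk of text would be forwarded to the kanji renderer as it arrives.
    print(token, end="", flush=True)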

Serve with Llama (run the LLM locally)

cd demo
sh run-kanji-local_llama.sh

Tip

Expect roughly 18.5 GB of GPU memory when using Llama-2-7b-chat and Stable-Diffusion-v1-5.
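For reference, this variant loads the LLM in-process, which accounts for most of that memory. A minimal sketch of streaming tokens from a locally loaded Llama-2-7b-chat with transformers (prompt format and generation parameters are assumptions, not the demo's exact code):

# Sketch: stream tokens from a locally loaded Llama-2-7b-chat (this is what costs the GPU memory).
from threading import Thread
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TextIteratorStreamer

model_id = "meta-llama/Llama-2-7b-chat-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

inputs = tokenizer("[INST] Greet me in one short sentence. [/INST]", return_tensors="pt").to(model.device)
streamer = TextIteratorStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

# generate() blocks, so run it in a background thread and consume tokens as they are produced.
Thread(target=model.generate, kwargs=dict(**inputs, streamer=streamer, max_new_tokens=64)).start()
for token in streamer:
    print(token, end="", flush=True)  # each chunk would be handed to the kanji renderer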

Reproduce

Check out the guide on reproducing the kanji generation model used in this project.

Acknowledgements & References
