This tool quickly and accurately counts the tokens in prompts stored in CSV files. By analyzing each CSV file, it provides a precise token count, saving you time and improving accuracy.
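The core idea can be sketched in a few lines. This is a simplified illustration, not the tool's actual implementation: `count_tokens` and its naive fallback tokenizer are hypothetical, and a real count would plug in a proper encoder such as tiktoken's `encoding_for_model(...).encode`.

```python
import re

def count_tokens(text, encode=None):
    """Count tokens in `text`.

    `encode` should be a callable that returns a list of tokens, e.g.
    tiktoken.get_encoding("cl100k_base").encode. When none is given,
    fall back to a rough word/punctuation split -- an approximation
    only, not a real BPE tokenization.
    """
    if encode is None:
        encode = lambda t: re.findall(r"\w+|\S", t)
    return len(encode(text))
```

With the fallback splitter, `count_tokens("Hello, world!")` treats `Hello`, `,`, `world`, and `!` as four separate tokens.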
The tool supports the following models and encoding methods:
| Models | Encoding Methods |
|---|---|
| gpt-3.5-turbo-0301 | cl100k_base |
| gpt-3.5-turbo | p50k_base |
| text-davinci-003 | r50k_base |
| text-davinci-002 | gpt2 |
| text-davinci-001 | |
| davinci-instruct-beta | |
| davinci | |
| text-curie-001 | |
| curie-instruct-beta | |
| curie | |
| text-babbage-001 | |
| babbage | |
| text-embedding-ada-002 | |
| text-ada-001 | |
| ada | |
| code-davinci-002 | |
| code-cushman-001 | |
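Since the interactive flow lets you pick either a model or an encoding method, the two columns above can be mirrored in a small validation helper. This is a hypothetical sketch: `validate_choice` is not part of the tool's code, and the lists simply restate the table.

```python
# Supported choices, mirroring the table above.
SUPPORTED_MODELS = [
    "gpt-3.5-turbo-0301", "gpt-3.5-turbo", "text-davinci-003",
    "text-davinci-002", "text-davinci-001", "davinci-instruct-beta",
    "davinci", "text-curie-001", "curie-instruct-beta", "curie",
    "text-babbage-001", "babbage", "text-embedding-ada-002",
    "text-ada-001", "ada", "code-davinci-002", "code-cushman-001",
]
SUPPORTED_ENCODINGS = ["cl100k_base", "p50k_base", "r50k_base", "gpt2"]

def validate_choice(kind, name):
    """Check a user's selection, where `kind` is "model" or "encoding"."""
    options = SUPPORTED_MODELS if kind == "model" else SUPPORTED_ENCODINGS
    if name not in options:
        raise ValueError(f"unsupported {kind}: {name}")
    return name
```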
With Docker, it takes just one step.
- Run this repository's Docker image from Docker Hub. Don't forget to mount your CSV file folder into the container.
$ docker run --rm -itv ${PWD}:/app_temp dockliu/chatgpt-token-calculator:latest
Running locally takes five steps.
- Run in a virtual environment.
$ poetry install
$ poetry shell
- Execute the calculator.
$ python3 main.py
- Choose whether to select by model or by encoding method.
- Depending on the previous selection, choose which model or encoding method you want to use.
- Type the path of the CSV file you want to calculate, then press 'ENTER'. For the data format, refer to the following link.
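The CSV-reading step above can be sketched as follows. This is an illustration only: the `prompt` column name and the word-split counter are assumptions (the tool's actual data format is defined by the linked reference, and the real counter would be a tiktoken encoder).

```python
import csv
import io

def tokens_per_row(csv_text, count=lambda t: len(t.split())):
    """Count tokens for each row of a prompt CSV.

    `count` is a stand-in for a real tokenizer such as
    tiktoken.get_encoding("cl100k_base").encode, and the "prompt"
    header is an assumed column name, not the tool's documented format.
    """
    rows = csv.DictReader(io.StringIO(csv_text))
    return [(row["prompt"], count(row["prompt"])) for row in rows]
```

For example, a file containing a `prompt` header followed by the rows `hello world` and `foo bar baz` yields counts of 2 and 3 with the naive splitter.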
- Set up environment
- Demo
- Demo result CSV. For the data format, refer to the following link.
If you encounter any issues or have suggestions for how to improve this tool, please submit an issue or pull request. We welcome contributions from the community and appreciate your feedback.