- When you start GPT-Runner, it first reads the project-level configuration file.
- This isn't required, but it's useful when you want to override some global configurations.
- The following locations are checked in priority order; the topmost file that exists wins:

```txt
<rootPath>/.gpt-runner/gptr.config.ts
<rootPath>/.gpt-runner/gptr.config.js
<rootPath>/.gpt-runner/gptr.config.json
<rootPath>/gptr.config.ts
<rootPath>/gptr.config.js
<rootPath>/gptr.config.json
```
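The lookup order above can be sketched as a resolver that returns the first candidate that exists (a minimal sketch; `findConfig` is a hypothetical helper for illustration, not GPT-Runner's actual API):

```typescript
import * as fs from 'node:fs'
import * as path from 'node:path'

// Candidate config files, highest priority first.
const CONFIG_CANDIDATES = [
  '.gpt-runner/gptr.config.ts',
  '.gpt-runner/gptr.config.js',
  '.gpt-runner/gptr.config.json',
  'gptr.config.ts',
  'gptr.config.js',
  'gptr.config.json',
]

// Return the first candidate that exists under rootPath, or null if none does.
function findConfig(rootPath: string): string | null {
  for (const candidate of CONFIG_CANDIDATES) {
    const fullPath = path.join(rootPath, candidate)
    if (fs.existsSync(fullPath))
      return fullPath
  }
  return null
}
```

Because the loop returns on the first hit, a `.gpt-runner/gptr.config.ts` shadows a `gptr.config.json` in the project root.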
- Then GPT-Runner recursively scans for all `*.gpt.md` files under the current folder.
- By default, this scan skips the files listed in the project's `.gitignore`, which saves time.
- You can change the scan scope by configuring the `gptr.config.ts` file.
- Each `*.gpt.md` file is parsed into an AI preset.
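The scan described above can be pictured as a recursive walk that collects `*.gpt.md` files while skipping excluded directories (a simplified sketch; the real retrieval also honors `.gitignore` and the `includes`/`excludes` options described below):

```typescript
import * as fs from 'node:fs'
import * as path from 'node:path'

// A tiny subset of the default excludes, for illustration.
const SKIP_DIRS = new Set(['node_modules', '.git', 'dist'])

// Recursively collect every *.gpt.md file under dir, skipping excluded folders.
function collectPresets(dir: string, found: string[] = []): string[] {
  for (const entry of fs.readdirSync(dir, { withFileTypes: true })) {
    const fullPath = path.join(dir, entry.name)
    if (entry.isDirectory()) {
      if (!SKIP_DIRS.has(entry.name))
        collectPresets(fullPath, found)
    }
    else if (entry.name.endsWith('.gpt.md')) {
      found.push(fullPath)
    }
  }
  return found
}
```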
- The `<rootPath>/.gpt-runner/` directory is special: it is scanned even if it is listed in `.gitignore`. This is useful if you don't want GPT-Runner files to intrude into the project.
- You can put both `gptr.config.json` and `*.gpt.md` files in this directory.
- Then add `.gpt-runner` to `.gitignore`. This keeps the project clean while still letting GPT-Runner read the configuration files.
- If you want Git to ignore the `.gpt-runner` directory once and for all, you can set up a global Git ignore:

```shell
git config --global core.excludesfile '~/.gitignore_global'
echo '.gpt-runner' >> ~/.gitignore_global
```
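You can confirm the global ignore took effect with `git check-ignore`, which prints the path and exits 0 when it is ignored (run it from inside any Git repository; the throwaway repo here is just for illustration):

```shell
# Set up the global excludes file as above, then verify from any repository.
git config --global core.excludesfile ~/.gitignore_global
echo '.gpt-runner' >> ~/.gitignore_global

cd "$(mktemp -d)" && git init -q .   # throwaway repo for the check
mkdir .gpt-runner
git check-ignore .gpt-runner         # prints ".gpt-runner" when ignored
```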
- `gptr.config.ts/js/json` is the project-level configuration file; it can override the global default configuration.
- Its configuration type is as follows:
```ts
export interface UserConfig {
  /**
   * Model configuration
   */
  model?: ModelConfig

  /**
   * File paths to include in the deep retrieval; supports glob patterns
   * @default null
   */
  includes?: string | RegExp | string[] | RegExp[] | ((source: string) => boolean) | null

  /**
   * File paths to exclude from the deep retrieval; supports glob patterns
   * @default [
   *   "** /node_modules",
   *   "** /.git",
   *   "** /__pycache__",
   *   "** /.Python",
   *   "** /.DS_Store",
   *   "** /.cache",
   *   "** /.next",
   *   "** /.nuxt",
   *   "** /.out",
   *   "** /dist",
   *   "** /.serverless",
   *   "** /.parcel-cache"
   * ]
   */
  excludes?: string | RegExp | string[] | RegExp[] | ((source: string) => boolean) | null

  /**
   * Skip the files listed in .gitignore
   * Recommended to keep on; this saves retrieval time
   * @default true
   */
  respectGitIgnore?: boolean

  /**
   * API URL configuration
   * @default {}
   * @example
   * {
   *   "https://api.openai.com/*": {
   *     "modelNames": ["gpt-3.5-turbo-16k", "gpt-4"],
   *     "httpRequestHeader": {
   *       "User-Agent": "GPT-Runner"
   *     }
   *   }
   * }
   */
  urlConfig?: {
    [urlMatch: string]: {
      /**
       * The model names that will be shown in the model selector
       */
      modelNames?: string[]

      /**
       * Additional request headers to send
       */
      httpRequestHeader?: Record<string, string>
    }
  }
}

export interface ModelConfig {
  /**
   * Model type
   */
  type?: 'openai' | 'anthropic'

  /**
   * Model name
   */
  modelName?: string

  // ...for more options, refer to the specific model's configuration
}
```
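To illustrate how a `urlConfig` key such as `"https://api.openai.com/*"` can match request URLs, here is a sketch of glob-style prefix matching (an illustration only; `makeUrlMatcher` is hypothetical, and GPT-Runner's actual matching logic may differ):

```typescript
// Convert a simple trailing-* glob into a URL predicate.
function makeUrlMatcher(pattern: string): (url: string) => boolean {
  if (pattern.endsWith('*')) {
    const prefix = pattern.slice(0, -1)
    return url => url.startsWith(prefix)
  }
  // No wildcard: require an exact match.
  return url => url === pattern
}

const matches = makeUrlMatcher('https://api.openai.com/*')
```

A matching entry then contributes its `modelNames` to the model selector and its `httpRequestHeader` to outgoing requests.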
- You can use the `defineConfig` function in `gptr.config.ts` to write a `UserConfig`-typed configuration file. To get it, install the `@nicepkg/gpt-runner` package:

```shell
npm i @nicepkg/gpt-runner
```
- You can then create a new `gptr.config.ts` file and fill in the sample configuration:

```ts
import { defineConfig } from '@nicepkg/gpt-runner'

export default defineConfig({
  model: {
    type: 'openai',
    modelName: 'gpt-3.5-turbo-16k',
    temperature: 0.9,
  },
})
```
- Of course, you can also install our VSCode extension, which automatically provides completions for your configuration file based on our JSON Schema.
- This is a simple example of `gptr.config.json`:

```json
{
  "model": {
    "type": "openai",
    "modelName": "gpt-3.5-turbo-16k"
  }
}
```
- This is a complete example of `gptr.config.json`:

```json
{
  "model": {
    "type": "openai",
    "modelName": "gpt-3.5-turbo-16k",
    "temperature": 0.9,
    "maxTokens": 2000,
    "topP": 1,
    "frequencyPenalty": 0,
    "presencePenalty": 0
  },
  "includes": null,
  "excludes": [
    "**/node_modules",
    "**/.git",
    "**/__pycache__",
    "**/.Python",
    "**/.DS_Store",
    "**/.cache",
    "**/.next",
    "**/.nuxt",
    "**/.out",
    "**/dist",
    "**/.serverless",
    "**/.parcel-cache"
  ],
  "respectGitIgnore": true,
  "urlConfig": {
    "https://openrouter.ai/*": {
      "modelNames": [
        "openai/gpt-3.5-turbo-16k",
        "openai/gpt-4",
        "openai/gpt-4-32k"
      ],
      "httpRequestHeader": {
        "HTTP-Referer": "http://localhost:3003/",
        "X-Title": "localhost"
      }
    }
  }
}
```
- `xxx.gpt.md` files are AI preset files; each file represents an AI character.
- For example, a `uni-test.gpt.md` could be dedicated to writing unit tests for this project, and a `doc.gpt.md` to writing its documentation.
- Such presets are valuable and can be reused by team members.
- Why not `xxx.gpt.json`? Because the content of the `System Prompt` and `User Prompt` would often need escaped characters, which makes it very troublesome to write.
- `xxx.gpt.md` files are easy to write, read, and maintain.
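To see why JSON would be awkward here, compare how a multi-line prompt would have to be escaped in a hypothetical `xxx.gpt.json` (the `systemPrompt` key is invented for illustration; GPT-Runner does not use this format):

```json
{
  "systemPrompt": "You're a coding master specializing in refactoring code.\nPlease follow SOLID, KISS and DRY principles,\nand refactor this section of code to make it better."
}
```

In a `.gpt.md` file, the same prompt is just plain text under a `# System Prompt` heading, with no escaping at all.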
A minimalist AI preset file looks like this:
```json
{
"title": "Category/AI character name"
}
```
# System Prompt
You're a coding master specializing in refactoring code. Please follow SOLID, KISS and DRY principles, and refactor this section of code to make it better.
- A complete AI preset file looks like this:
```json
{
"title": "Category/AI Character Name",
"model": {
"type": "openai",
"modelName": "gpt-3.5-turbo-16k",
"temperature": 0.9,
"maxTokens": 2000,
"topP": 1,
"frequencyPenalty": 0,
"presencePenalty": 0
}
}
```
# System Prompt
You are a coding master, skilled at refactoring code. Please adhere to the SOLID, KISS and DRY principles, and refactor this code to make it better.
# User Prompt
When you use this preset to create a new chat, the User Prompt text will automatically fill in the chat input box. You can edit it before sending it to the AI robot.
# Remark
You can write your remarks here.
`model` / `modelName` / `temperature` / `System Prompt` / `User Prompt` are all **optional** parameters, and there are many more you can customize.
You can also override many default parameter values through the `gptr.config.json` at the root directory of the project.
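The override behavior can be pictured as a merge in which per-preset values take precedence over project-level defaults (a sketch of the precedence only, assuming preset-wins semantics; not GPT-Runner's exact merge code):

```typescript
interface ModelConfig {
  type?: string
  modelName?: string
  temperature?: number
  maxTokens?: number
}

// Later spreads win: preset values override project-level defaults.
function resolveModel(projectLevel: ModelConfig, preset: ModelConfig): ModelConfig {
  return { ...projectLevel, ...preset }
}
```

So a preset that sets only `temperature` keeps the project-level `modelName` while replacing the temperature.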
See OpenAI's official request parameters documentation for details. The corresponding configuration type is:
```ts
export interface OpenaiModelConfig {
  type: 'openai'

  /**
   * Model name
   */
  modelName: string

  /**
   * Temperature
   */
  temperature?: number

  /**
   * Max reply token number
   */
  maxTokens?: number

  /**
   * Total probability mass of tokens per step
   */
  topP?: number

  /**
   * Penalize repeated tokens according to frequency
   */
  frequencyPenalty?: number

  /**
   * Penalizes repeated tokens
   */
  presencePenalty?: number
}
```
See Anthropic's official request parameters documentation for details. The corresponding configuration type is:
```ts
export interface AnthropicModelConfig {
  type: 'anthropic'

  /**
   * Model name
   */
  modelName: string

  /**
   * Temperature
   */
  temperature?: number

  /**
   * Max reply token number
   */
  maxTokens?: number

  /**
   * Total probability mass of tokens per step
   */
  topP?: number

  /**
   * Only sample subsequent choices from the top K options
   */
  topK?: number
}
```
- If you have installed the GPT-Runner VSCode extension, you can set this in `.vscode/settings.json`:

```json
{
  "[markdown]": {
    "editor.quickSuggestions": {
      "other": true,
      "comments": false,
      "strings": true
    }
  }
}
```

This way, in a `xxx.gpt.md` file you get suggestions and quick code snippets. For instance, create a new `test.gpt.md` file, type `gptr`, then hit Enter, and you will quickly get a simple AI preset file.
- In the future, we will support more LLM models.