
metaprompt

Metaprompt is a domain-specific language for LLM prompt engineering. It is a template engine for textual prompts, where expression expansion can depend on LLM outputs.

The goal is to extend the usual techniques of parametrized prompts with programmability, reusability, and meta-prompting capabilities.

Quick example

The text you are reading right now is a valid metaprompt program.

[# this is a comment that is ignored by the interpreter and can be
used to add some info for the human developer]

[# This whole text is a parametrized prompt, one of the parameters
being [:subject]]

[# [:subject] here is a variable reference. Variables can be defined
in-place (see the sketch after this example), or passed from the
external environment]

Give me a detailed poetic description of [:subject], using one or more
of the following metaphoric expressions:

[# Now I want to specialize my prompt depending on the value of
[:subject]. The output of the prompt below will be included *instead*
of the [$ ...] block: ]

[$ Write me a bullet list of metaphors for [:subject]. Do not produce
any other output]

[# Conditionals allow for logic branching: ]

[:if [:subject] is a human
 :then
   Use jokingly exaggerated style
 :else
   Include some references to [$ List some people who have any
   relation to [:subject], comma-separated]
]
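
The program above never binds [:subject] in-place. A minimal sketch of
an in-place definition, using the [:variable=some value] assignment
syntax from the roadmap below (the concrete value is just an
illustration):

[# binding the parameter before its first use: ]
[:subject=the moon]

Give me a detailed poetic description of [:subject].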

See examples/ for more.

Project status

This is an early work in progress. Follow me on Twitter for updates.

  • Specify the initial version of the syntax
  • Implement a parser
    • implement parse tree -> AST conversion
    • return error throwing to the parser
    • implement escaping
    • [:variable] and [:variable=some value]
    • [:if ... :then ... :else ...]
      • short-circuit if the condition is literally true or false
    • [$ meta-prompt]
      • syntax for ignoring $ output - for now [:_=...] works (assignment to the _ variable)
    • [:use module :param1=value1]
    • [# comments]
    • [:STATUS=some-status] - to show a status message during prompt evaluation
    • [:call ffi-function :param1=foo :param2=bar]
  • Implement an evaluator
    • meta-prompting
    • conditionals
    • externally-defined variables
    • implement a 'manual' evaluator that asks the user to complete LLM inputs
    • API provider wrapper classes
      • OpenAI
      • Anthropic
      • llama
      • Mock for testing
  • Runtime system (see the sketch after this list)
    • Support variable definition at runtime
    • dynamic model switching (via MODEL variable - example)
    • Multiple chat instances and the ability to switch between them, to distribute data across chat contexts. E.g. [chat1$ the object is the moon][chat1$ what is the object?] (example)
    • message role system (system, user) via ROLE variable (example)
    • exceptions
      • throwing exceptions
      • recovering from exceptions
    • LLM output validation?
      • via regexps?
      • via parsing?
  • FFI
    • syntax - preferably via [:use @ffi-function :param1=foo :param2=bar]
    • how to throw exceptions from FFI
    • API
    • standard library
      • text processing
      • shell access
      • running executables
      • file system access
        • isolation?
      • HTTP stack
  • Utils
    • Unbound variable auto discovery
    • Machinery to turn metaprompts into interfaces (parameters become form fields)
      • static validation?
  • Add a module system
    • syntax
    • module loading at runtime
    • preload modules on startup - is it needed?
    • module caching
    • tests
  • Add a package system
    • specify package format
    • create a package registry
    • on-the-fly package installer
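
As a rough illustration of the runtime-system items above, here is the
sketch referenced in that list. It combines the MODEL and ROLE
variables with named chat instances; none of this is implemented yet,
the model name is only a placeholder, and the exact semantics are still
unsettled:

[# pick a model and a message role via the special variables: ]
[:MODEL=some-model-name]
[:ROLE=system]
You are a concise assistant.
[:ROLE=user]

[# route prompts to a named chat instance to keep its context separate: ]
[chat1$ the object is the moon]
[chat1$ what is the object?]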

Architecture decisions

  • functions, files, and modules are essentially the same - invoked with [:use ...]
  • metaprompt parameters are just variables that are not bound before first use - this and the above decision make it possible to get rid of function syntax entirely (see the sketch below)
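
A sketch of how those two decisions combine, based on the [:use module :param1=value1] syntax from the roadmap (the module name, file name, and parameter below are hypothetical): a file containing an unbound [:subject] variable behaves like a function, and [:use ...] binds the parameter at the call site.

[# contents of a hypothetical module file, e.g. describe.metaprompt: ]
Give me a detailed poetic description of [:subject].

[# invoking it from another metaprompt binds the unbound variable: ]
[:use describe :subject=the moon]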

To consider

  • dynamic vs. static module loading: dynamic loading is lazy, so it skips unneeded modules, but static loading guarantees the absence of runtime errors caused by module resolution failures (which saves costs)
  • exception system: how to pass payloads with exceptions
  • turning exceptions into continuations, in the spirit of hurl

Notable sources of inspiration
