0.9.0
What's Changed
New
- Add fetching autofeedback by completion id to the CLI by @kxtran in #175
  To get auto-generated feedback for a completion, use `log10 feedback autofeedback get`
- Use non-blocking async for AsyncOpenAI and AsyncAnthropic by @wenzhe-log10 in #179
Release
0.9.0 includes significant improvements in how we handle concurrency when using LLMs in asynchronous streaming mode. This update is designed to ensure that logging at steady state incurs no overhead (previously up to 1-2 seconds), providing a smoother and more efficient experience in latency-critical settings.

Important Considerations for Short-Lived Scripts
💡 For short-lived scripts using asynchronous streaming, note that you may need to wait until all logging requests have completed before terminating your script. We provide a convenient method called `finalize()` to handle this. Here's how you can implement this in your code:

```python
from log10._httpx_utils import finalize

...

await finalize()
```

Ensure `finalize()` is called once, at the very end of your event loop, to guarantee that all pending logging requests are processed before the script exits.
For more details, check the async logging examples.
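To see why that final wait matters, here is a self-contained asyncio sketch of the fire-and-forget pattern. It does not use log10 itself; `send_log`, `log_in_background`, and `drain_pending_logs` are hypothetical stand-ins illustrating the concept behind `finalize()`:

```python
import asyncio

# Conceptual sketch only -- this does NOT use the log10 library.
# It shows why a short-lived async script must wait before exiting:
# fire-and-forget logging tasks may still be in flight when main() returns.

pending_logs: set = set()

async def send_log(record: str, sink: list) -> None:
    # Simulate a non-blocking logging request that completes a bit later.
    await asyncio.sleep(0.01)
    sink.append(record)

def log_in_background(record: str, sink: list) -> None:
    # Fire-and-forget: the caller is not blocked by the logging call.
    task = asyncio.create_task(send_log(record, sink))
    pending_logs.add(task)
    task.add_done_callback(pending_logs.discard)

async def drain_pending_logs() -> None:
    # Analogous to `await finalize()`: wait for every pending request.
    if pending_logs:
        await asyncio.gather(*pending_logs)

async def main() -> list:
    delivered: list = []
    for i in range(3):
        log_in_background(f"completion-{i}", delivered)
    # Without this wait, the script could exit before any log is delivered.
    await drain_pending_logs()
    return delivered

results = asyncio.run(main())
print(len(results))  # all 3 log records delivered before exit
```

In a real script, the role played here by `drain_pending_logs()` is filled by a single `await finalize()` at the very end of your event loop.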
Chores
- Add dependabot workflow by @kxtran in #169
- Remove setup.py file by @kxtran in #174
- Verify generated completions submitted to the platform by @kxtran in #172
Full Changelog: 0.8.6...0.9.0