✨ feat: Add Fireworks AI Model Provider #3392
Conversation
Thank you for raising your pull request and contributing to our community.
Codecov Report

Attention: Patch coverage is

Additional details and impacted files

@@           Coverage Diff            @@
##             main    #3392    +/-   ##
========================================
+ Coverage   91.83%   91.85%   +0.02%
========================================
  Files         455      457       +2
  Lines       30452    30654     +202
  Branches     2116     2960     +844
========================================
+ Hits        27965    28158     +193
- Misses       2487     2496       +9

Flags with carried forward coverage won't be shown.
There is a segment in the Stream

@arvinxx
Compared with other providers, it looks like the first chunk of the Stream is missing `content`? Anyway, for now they only have one model that supports function calling, so streaming is temporarily disabled when tools are present.
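As a rough illustration of that workaround (not the actual lobe-chat runtime code), a payload handler could simply force a non-streaming response whenever tools are present. The payload shape below follows the OpenAI Chat Completions API, and the `handleFireworksPayload` name is hypothetical:

```ts
// Sketch of the workaround: fall back to non-streaming whenever tools are passed,
// since only firefunction-v2 supports function calling and its streaming output
// is currently unreliable. Types are simplified stand-ins for illustration.
interface ChatPayload {
  model: string;
  messages: { role: string; content: string }[];
  tools?: { type: 'function'; function: { name: string; parameters?: object } }[];
  stream?: boolean;
}

const handleFireworksPayload = (payload: ChatPayload): ChatPayload => {
  // Disable streaming when tools are present as a temporary workaround.
  const hasTools = !!payload.tools?.length;
  return { ...payload, stream: hasTools ? false : (payload.stream ?? true) };
};
```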
@hezhijie0327 The abort is most likely caused by an error thrown while parsing the stream. A better approach is probably to wait until I finish this: #3361, and then write a custom stream chunk parsing function for Fireworks to work around the case where `content` is empty.
The default parsing rules for OpenAI streams are here: https://github.com/lobehub/lobe-chat/blob/main/src/libs/agent-runtime/utils/streams/openai.ts#L15-L65 Of course, another strategy is to integrate the fix for Fireworks' parsing issue directly into the default OpenAI stream parsing. The upside is that it makes the OpenAI stream parsing function more robust; the downside is the added coupling.
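For reference, a Fireworks-specific chunk transformer along the lines discussed here might look roughly like the sketch below. The `StreamChunk` and `ProtocolChunk` types and the `transformFireworksChunk` name are simplified stand-ins for illustration, not the actual helpers in `src/libs/agent-runtime/utils/streams/openai.ts`:

```ts
// Rough sketch of a provider-specific chunk transformer, assuming OpenAI-style
// streaming chunks. Chunks with no choices or an empty delta are skipped instead
// of being treated as parse errors.
interface StreamChunk {
  id: string;
  choices: {
    delta?: { content?: string | null; tool_calls?: unknown[] };
    finish_reason?: string | null;
  }[];
}

type ProtocolChunk =
  | { id: string; type: 'text'; data: string }
  | { id: string; type: 'tool_calls'; data: unknown[] }
  | { id: string; type: 'stop'; data: string };

const transformFireworksChunk = (chunk: StreamChunk): ProtocolChunk | undefined => {
  const choice = chunk.choices?.[0];
  // Guard against the empty first chunk: no choices at all.
  if (!choice) return undefined;

  const delta = choice.delta;
  if (delta?.tool_calls?.length)
    return { id: chunk.id, type: 'tool_calls', data: delta.tool_calls };

  if (typeof delta?.content === 'string' && delta.content.length > 0)
    return { id: chunk.id, type: 'text', data: delta.content };

  if (choice.finish_reason)
    return { id: chunk.id, type: 'stop', data: choice.finish_reason };

  // Empty content: skip the chunk rather than throwing.
  return undefined;
};
```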
Then let's wait for the refactor first. I also tried Spark's function calling yesterday, and its Stream reports a
@arvinxx While working on the Bedrock Stream yesterday I came up with an idea for the refactor, optimizing the following two cases:
@hezhijie0327 Could you split the first case out into a separate PR? I just optimized this in another PR: #3872, which adjusts this part, but some people reported that it affects certain models; let's see whether your approach works better.
Also, I've now optimized an implementation: if an error is thrown while parsing the stream, an error chunk is passed through directly. So this PR can be merged first, and we can optimize further if anyone reports errors later.
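A minimal sketch of that idea, assuming simplified chunk shapes rather than the types actually used in lobe-chat; `safeTransform` and `ParsedChunk` are hypothetical names:

```ts
// Sketch: if transforming a chunk throws, pass an error chunk downstream instead
// of aborting the whole stream. Shapes are simplified for illustration.
type ParsedChunk =
  | { type: 'text'; data: string }
  | { type: 'error'; data: { message: string; raw: unknown } };

const safeTransform = (
  raw: unknown,
  transform: (chunk: unknown) => ParsedChunk | undefined,
): ParsedChunk | undefined => {
  try {
    return transform(raw);
  } catch (e) {
    // Surface the parse failure to the client as an error chunk.
    return { type: 'error', data: { message: (e as Error).message, raw } };
  }
};
```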
Okay, I'll split it out in a bit.
@arvinxx Just fixed the CI; everything looks fine now.
❤️ Great PR @hezhijie0327 ❤️ The growth of the project is inseparable from user feedback and contributions; thanks for your contribution! If you are interested in the LobeHub developer community, please join our Discord and then DM @arvinxx or @canisminor1990. They will invite you to our private developer channel, where we talk about lobe-chat development and share AI newsletters from around the world.
## [Version 1.16.0](v1.15.35...v1.16.0)

Released on **2024-09-10**

#### ✨ Features

- **misc**: Add Fireworks AI Model Provider, Add Spark model provider.

#### What's improved

* **misc**: Add Fireworks AI Model Provider, closes [#3392](#3392) [#48](#48) ([fa0d84d](fa0d84d))
* **misc**: Add Spark model provider, closes [#3098](#3098) [#25](#25) ([fc85c20](fc85c20))
🎉 This PR is included in version 1.16.0 🎉

The release is available on:

Your semantic-release bot 📦🚀
## [Version 1.59.0](v1.58.16...v1.59.0)

Released on **2024-09-11**

#### ✨ Features

- **misc**: Add Fireworks AI Model Provider, Add Spark model provider.

#### 🐛 Bug Fixes

- **misc**: Add `LLM_VISION_IMAGE_USE_BASE64` to support local s3 in vision model.

#### 💄 Styles

- **ui**: Improve UI layout and text.
- **misc**: Reorder the provider list, Update CustomLogo, update spark check model to spark-lite & default disable useless model, update Upstage model list.

#### What's improved

* **misc**: Add Fireworks AI Model Provider, closes [lobehub#3392](https://github.com/bentwnghk/lobe-chat/issues/3392) [lobehub#48](https://github.com/bentwnghk/lobe-chat/issues/48) ([fa0d84d](fa0d84d))
* **misc**: Add Spark model provider, closes [lobehub#3098](https://github.com/bentwnghk/lobe-chat/issues/3098) [lobehub#25](https://github.com/bentwnghk/lobe-chat/issues/25) ([fc85c20](fc85c20))

#### What's fixed

* **misc**: Add `LLM_VISION_IMAGE_USE_BASE64` to support local s3 in vision model, closes [lobehub#3887](https://github.com/bentwnghk/lobe-chat/issues/3887) ([16e57ed](16e57ed))

#### Styles

* **ui**: Improve UI layout and text, closes [lobehub#3762](https://github.com/bentwnghk/lobe-chat/issues/3762) ([7c08f29](7c08f29))
* **misc**: Reorder the provider list, closes [lobehub#3886](https://github.com/bentwnghk/lobe-chat/issues/3886) ([4d641f5](4d641f5))
* **misc**: Update CustomLogo, closes [lobehub#3874](https://github.com/bentwnghk/lobe-chat/issues/3874) ([dd7c8df](dd7c8df))
* **misc**: Update spark check model to spark-lite & default disable useless model, closes [lobehub#3885](https://github.com/bentwnghk/lobe-chat/issues/3885) ([9d7e47c](9d7e47c))
* **misc**: Update Upstage model list, closes [lobehub#3890](https://github.com/bentwnghk/lobe-chat/issues/3890) ([82e2570](82e2570))
💻 Change Type

🔀 Description of Change

- Add the Fireworks AI model provider, supporting the `firefunction-v2` and `firellava-13b` models (FireworksAI models)
- Function calling for the `firefunction-v2` model: ~~[TODO] disable Stream as a workaround~~ streaming calls are now supported

📝 Additional Information

- Settings page
- Chat page
- Tool calling
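For context, here is a minimal sketch of the kind of streaming function-call request the new provider handles, using the official `openai` npm client pointed at Fireworks' OpenAI-compatible endpoint. The endpoint URL and the `accounts/fireworks/models/firefunction-v2` model ID reflect common Fireworks usage and are assumptions here, not taken from this PR's code:

```ts
// Sketch of a streaming function-call request against Fireworks' OpenAI-compatible
// Chat Completions API, using the official `openai` npm client with a custom baseURL.
import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: process.env.FIREWORKS_API_KEY,
  baseURL: 'https://api.fireworks.ai/inference/v1',
});

const main = async () => {
  const stream = await client.chat.completions.create({
    model: 'accounts/fireworks/models/firefunction-v2',
    messages: [{ role: 'user', content: 'What is the weather in Berlin?' }],
    tools: [
      {
        type: 'function',
        function: {
          name: 'get_weather',
          description: 'Get the current weather for a city',
          parameters: {
            type: 'object',
            properties: { city: { type: 'string' } },
            required: ['city'],
          },
        },
      },
    ],
    stream: true,
  });

  // Print text deltas as they arrive and log any tool-call deltas.
  for await (const chunk of stream) {
    const delta = chunk.choices[0]?.delta;
    if (delta?.tool_calls) console.log(JSON.stringify(delta.tool_calls));
    else if (delta?.content) process.stdout.write(delta.content);
  }
};

main();
```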