Add support for streaming text IAsyncEnumerable<string> results #50501
Comments
How is the client consuming this response? Do you have an example?
@davidfowl here's a simple example built by me, based on the great tutorial from Streamlit: https://docs.streamlit.io/knowledge-base/tutorials/build-conversational-apps#write-the-app

Streamlit ChatBot app:

```python
#!/usr/bin/env python3
import os, logging
import streamlit as st
from azureapi import AzureAPI
from dotenv import load_dotenv, find_dotenv

_ = load_dotenv(find_dotenv())  # read local .env file
azure_api_endpoint = os.getenv('AZUREAPI_ENDPOINT')
azure_api = AzureAPI(endpoint=azure_api_endpoint)

# https://docs.streamlit.io/knowledge-base/tutorials/build-conversational-apps
st.set_page_config(
    page_title="ChatBot",
    page_icon=":robot:"
)
st.title("ChatBot")

# Initialize chat history
if "messages" not in st.session_state:
    st.session_state.messages = []
if "chat_history" not in st.session_state:
    st.session_state.chat_history = ""

# Display chat messages from history on app rerun
for message in st.session_state.messages:
    with st.chat_message(message["role"]):
        st.markdown(message["content"])

# React to user input
if prompt := st.chat_input("Enter message here"):
    # Add user message to chat history
    st.session_state.messages.append({"role": "user", "content": prompt})
    # Display user message in chat message container
    with st.chat_message("user"):
        st.markdown(prompt)
    with st.spinner(text="In progress..."):
        with st.chat_message("assistant"):
            message_placeholder = st.empty()
            response_text = ''
            response_stream = azure_api.ChatStream(question=prompt, chatHistory=st.session_state.chat_history)
            # Append each streamed chunk and re-render with a cursor while streaming
            for response_chunk in response_stream:
                if response_chunk:
                    response_text += response_chunk
                    message_placeholder.markdown(response_text + "▌")
            message_placeholder.markdown(response_text)
    # Display assistant response in chat message container
    if response_text:
        # Add assistant response to chat history
        st.session_state.messages.append({"role": "assistant", "content": response_text})
    else:
        st.error("ERROR")
```

API client:

```python
import requests

class AzureAPI:
    SESSION = requests.Session()
    DEFAULT_TIMEOUT = 180
    API_ENDPOINT = None

    def __init__(self, endpoint: str) -> None:
        self.API_ENDPOINT = endpoint

    def ChatStream(self, question: str, chatHistory: str = None):
        url = f'{self.API_ENDPOINT}/ChatAsyncStream'
        json = {
            "question": question,
            "chatHistory": chatHistory
        }
        # Stream the POST response and yield decoded chunks as they arrive
        with AzureAPI.SESSION.request(method='POST', url=url, json=json, stream=True, timeout=AzureAPI.DEFAULT_TIMEOUT) as response:
            yield from response.iter_content(chunk_size=None, decode_unicode=True)
```
A more generic solution might be to have a Server-Sent Events result object that accepts an IAsyncEnumerable; a client can then consume it via JavaScript's EventSource class.
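For illustration, a rough sketch of what such an SSE result could look like when hand-rolled today with `Results.Stream` (the endpoint route and the `GetChunksAsync` token source are hypothetical stand-ins, not an existing ASP.NET Core API):

```csharp
using System.Runtime.CompilerServices;
using System.Text;

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// Write each item of the IAsyncEnumerable as an SSE "data:" frame;
// a browser can consume this endpoint with `new EventSource("/chat-sse")`.
app.MapGet("/chat-sse", (CancellationToken ct) =>
    Results.Stream(async body =>
    {
        await foreach (var chunk in GetChunksAsync(ct))
        {
            await body.WriteAsync(Encoding.UTF8.GetBytes($"data: {chunk}\n\n"), ct);
            await body.FlushAsync(ct); // flush so the client sees chunks immediately
        }
    }, contentType: "text/event-stream"));

app.Run();

// Hypothetical stand-in for a real chat-completion stream.
static async IAsyncEnumerable<string> GetChunksAsync(
    [EnumeratorCancellation] CancellationToken ct = default)
{
    foreach (var token in new[] { "Hello", ", ", "world" })
    {
        await Task.Delay(100, ct);
        yield return token;
    }
}
```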
Looks like this PR hasn't been active for some time and the codebase could have been changed in the meantime.
Related to dotnet/runtime#98105
Is there an existing issue for this?
Is your feature request related to a problem? Please describe the problem.
I am trying to return a streaming `IAsyncEnumerable<string>` Semantic Kernel chat completion from the `GetStreamingChatCompletionsAsync` and `GetStreamingChatMessageAsync` methods. Currently, simply returning `IAsyncEnumerable<string>` produces a streaming JSON array of strings. The desired behavior is a plain streaming text result. This would effectively produce a streaming, ChatGPT-like completion response, generated by the method as results become available from the OpenAI endpoints.
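To make the contrast concrete, here is a hedged sketch of today's behavior versus the manual workaround; the routes and the `GetChunksAsync` source are illustrative assumptions:

```csharp
using System.Runtime.CompilerServices;

var app = WebApplication.Create(args);

// Today: an IAsyncEnumerable<string> return value is serialized as a
// streaming JSON array, e.g. ["Hello",", ","world"].
app.MapGet("/chat-json", (CancellationToken ct) => GetChunksAsync(ct));

// Workaround: write each chunk to the response body as plain text as it arrives.
app.MapGet("/chat-text", async (HttpResponse response, CancellationToken ct) =>
{
    response.ContentType = "text/plain; charset=utf-8";
    await foreach (var chunk in GetChunksAsync(ct))
    {
        await response.WriteAsync(chunk, ct);
        await response.Body.FlushAsync(ct); // push each chunk to the client
    }
});

app.Run();

// Hypothetical stand-in for a real chat-completion stream.
static async IAsyncEnumerable<string> GetChunksAsync(
    [EnumeratorCancellation] CancellationToken ct = default)
{
    foreach (var token in new[] { "Hello", ", ", "world" })
    {
        await Task.Delay(100, ct);
        yield return token;
    }
}
```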
Describe the solution you'd like
Usage example:
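A hypothetical sketch of the kind of usage being asked for; the route, the `GetCompletionChunksAsync` wrapper, and the text/plain behavior are assumptions about the request, not existing framework behavior:

```csharp
using System.Runtime.CompilerServices;

var app = WebApplication.Create(args);

// Desired behavior: each yielded string is streamed to the client as plain
// text as it becomes available, instead of as a JSON array of strings.
app.MapGet("/chat", (CancellationToken ct) => GetCompletionChunksAsync(ct));

app.Run();

// Hypothetical wrapper that would yield text chunks from a Semantic Kernel
// streaming chat completion (the real GetStreamingChatCompletionsAsync call
// is elided here).
static async IAsyncEnumerable<string> GetCompletionChunksAsync(
    [EnumeratorCancellation] CancellationToken ct = default)
{
    foreach (var token in new[] { "Hello", ", ", "world" })
    {
        await Task.Delay(50, ct);
        yield return token;
    }
}
```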
Additional context
No response