AI kit can only be used with the <AIConversation> component #8125

Open
rpostulart opened this issue Nov 22, 2024 · 3 comments

Comments

@rpostulart

rpostulart commented Nov 22, 2024

Describe the content issue:
I'm missing clear examples / pages on how to use the AI kit in my own UI, instead of only using the <AIConversation> component.

I have partially solved it, but I run into errors when attaching files such as images or PDFs.

URL page where content issue is:
https://docs.amplify.aws/nextjs/ai/concepts/streaming/

@atierian are you able to share the docs here already?

@atierian
Member

atierian commented Nov 22, 2024

You're absolutely right, thanks for the feedback.

This work-in-progress PR should cover what you're looking for. We expect to get it onto the docs site shortly.

Specifically, this section addresses how to include images.

Customizing the message content

sendMessage() accepts an object with a content property that provides a flexible way to send different types of content to the AI assistant.

Image Content

Use image to send an image to the AI assistant.
Supported image formats are png, gif, jpeg, and webp.

const { data: message, errors } = await chat.sendMessage({
  content: [
    {
      image: {
        format: 'png',
        source: {
          bytes: new Uint8Array([1, 2, 3]),
        },
      },
    },
  ],
});

Mixing text and image in a single message is supported.

const { data: message, errors } = await chat.sendMessage({
  content: [
    {
      text: 'describe the image in detail',
    },
    {
      image: {
        format: 'png',
        source: {
          bytes: new Uint8Array([1, 2, 3]),
        },
      },
    },
  ],
});
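
In practice, the placeholder bytes above would come from a real file. As a rough sketch (assuming a browser File picked via an <input type="file">, and reusing the sendMessage content shape from the examples above), it might look like:

// Hypothetical helper: read a user-selected image File into the bytes shape
// used by sendMessage above. File.arrayBuffer() is a standard browser API.
async function fileToImageContent(file: File) {
  const buffer = await file.arrayBuffer();
  return {
    image: {
      format: 'png' as const, // assumes a PNG upload; derive from file.type in real code
      source: {
        bytes: new Uint8Array(buffer),
      },
    },
  };
}

// e.g. combine with a text prompt, as in the mixed example above:
// const { data: message, errors } = await chat.sendMessage({
//   content: [{ text: 'describe the image in detail' }, await fileToImageContent(file)],
// });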

Conversation routes don't support sending documents. If that's something you'd like to see, please open a feature request in the https://github.com/aws-amplify/amplify-category-api repository. Thanks!

@dbanksdesign
Contributor

If you are using React, you can also use the hooks by themselves and build your own UI. We can probably make that clearer in the docs.

import { generateClient } from "aws-amplify/api";
import type { Schema } from "../amplify/data/resource";
import { createAIHooks } from "@aws-amplify/ui-react-ai";

const client = generateClient<Schema>({ authMode: "userPool" });
const { useAIConversation, useAIGeneration } = createAIHooks(client);

export default function App() {
  const [
    {
      data: { messages },
      isLoading,
    },
    handleSendMessage,
  ] = useAIConversation('chat');

  // render your own UI here with messages, isLoading, and handleSendMessage

}
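
For example, a bare-bones UI on top of that hook (a sketch only: it assumes a conversation route named 'chat', that handleSendMessage accepts the same { content: [...] } shape as sendMessage above, and it renders just the text parts of each message) could look something like:

// Sketch only: message and content shapes follow the conversation examples
// above; adjust to the actual types generated from your Schema.
import { useState } from "react";

function Chat() {
  const [
    {
      data: { messages },
      isLoading,
    },
    handleSendMessage,
  ] = useAIConversation("chat");
  const [input, setInput] = useState("");

  return (
    <div>
      {messages.map((message, i) => (
        <div key={i}>
          <strong>{message.role}</strong>
          {message.content.map((part, j) =>
            "text" in part ? <p key={j}>{part.text}</p> : null
          )}
        </div>
      ))}
      {isLoading && <p>…</p>}
      <form
        onSubmit={(e) => {
          e.preventDefault();
          handleSendMessage({ content: [{ text: input }] });
          setInput("");
        }}
      >
        <input value={input} onChange={(e) => setInput(e.target.value)} />
        <button type="submit">Send</button>
      </form>
    </div>
  );
}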

@rpostulart
Author

Thanks, this is helpful.

I see that PDF is indeed not supported yet with Bedrock but coming soon:

Supported platforms and models
PDF support is currently available on both Claude 3.5 Sonnet models (claude-3-5-sonnet-20241022, claude-3-5-sonnet-20240620) via direct API access. This functionality will be supported on Amazon Bedrock and Google Vertex AI soon.

I will go ahead and make a feature request.
