release: 4.79.0 #1262

Merged
2 changes: 1 addition & 1 deletion .release-please-manifest.json
Original file line number Diff line number Diff line change
@@ -1,3 +1,3 @@
{
".": "4.78.1"
".": "4.79.0"
}
21 changes: 21 additions & 0 deletions CHANGELOG.md
@@ -1,5 +1,26 @@
# Changelog

## 4.79.0 (2025-01-17)

Full Changelog: [v4.78.1...v4.79.0](https://github.com/openai/openai-node/compare/v4.78.1...v4.79.0)

### Features

* **client:** add Realtime API support ([#1266](https://github.com/openai/openai-node/issues/1266)) ([7160ebe](https://github.com/openai/openai-node/commit/7160ebe647769fbf48a600c9961d1a6f86dc9622))


### Bug Fixes

* **logs/azure:** redact sensitive header when DEBUG is set ([#1218](https://github.com/openai/openai-node/issues/1218)) ([6a72fd7](https://github.com/openai/openai-node/commit/6a72fd736733db19504a829bf203b39d5b9e3644))


### Chores

* fix streaming ([379c743](https://github.com/openai/openai-node/commit/379c7435ed5d508458e9cdc22386039b84fcec5e))
* **internal:** streaming refactors ([#1261](https://github.com/openai/openai-node/issues/1261)) ([dd4af93](https://github.com/openai/openai-node/commit/dd4af939792583854a313367c5fe2f98eea2f3c8))
* **types:** add `| undefined` to client options properties ([#1264](https://github.com/openai/openai-node/issues/1264)) ([5e56979](https://github.com/openai/openai-node/commit/5e569799b9ac8f915b16de90d91d38b568c1edce))
* **types:** rename vector store chunking strategy ([#1263](https://github.com/openai/openai-node/issues/1263)) ([d31acee](https://github.com/openai/openai-node/commit/d31acee860c80ba945d4e70b956c7ed75f5f849a))

## 4.78.1 (2025-01-10)

Full Changelog: [v4.78.0...v4.78.1](https://github.com/openai/openai-node/compare/v4.78.0...v4.78.1)
87 changes: 87 additions & 0 deletions README.md
@@ -83,6 +83,93 @@ main();
If you need to cancel a stream, you can `break` from the loop
or call `stream.controller.abort()`.

## Realtime API beta

The Realtime API enables you to build low-latency, multi-modal conversational experiences. It currently supports text and audio as both input and output, as well as [function calling](https://platform.openai.com/docs/guides/function-calling) through a `WebSocket` connection.

The Realtime API works through a combination of client-sent events and server-sent events. Clients can send events to do things like update session configuration or send text and audio inputs. Server events confirm when audio responses have completed, or when a text response from the model has been received. A full event reference can be found [here](https://platform.openai.com/docs/api-reference/realtime-client-events) and a guide can be found [here](https://platform.openai.com/docs/guides/realtime).

This SDK supports accessing the Realtime API through the [WebSocket API](https://developer.mozilla.org/en-US/docs/Web/API/WebSocket) or with [ws](https://github.com/websockets/ws).

A basic text-based example with `ws`:

```ts
// requires `yarn add ws @types/ws`
import { OpenAIRealtimeWS } from 'openai/beta/realtime/ws';

const rt = new OpenAIRealtimeWS({ model: 'gpt-4o-realtime-preview-2024-12-17' });

// access the underlying `ws.WebSocket` instance
rt.socket.on('open', () => {
  console.log('Connection opened!');
  rt.send({
    type: 'session.update',
    session: {
      modalities: ['text'],
      model: 'gpt-4o-realtime-preview',
    },
  });

  rt.send({
    type: 'conversation.item.create',
    item: {
      type: 'message',
      role: 'user',
      content: [{ type: 'input_text', text: 'Say a couple paragraphs!' }],
    },
  });

  rt.send({ type: 'response.create' });
});

rt.on('error', (err) => {
  // in a real-world scenario this should be logged somewhere as you
  // likely want to continue processing events regardless of any errors
  throw err;
});

rt.on('session.created', (event) => {
  console.log('session created!', event.session);
  console.log();
});

rt.on('response.text.delta', (event) => process.stdout.write(event.delta));
rt.on('response.text.done', () => console.log());

rt.on('response.done', () => rt.close());

rt.socket.on('close', () => console.log('\nConnection closed!'));
```

To use the web API `WebSocket` implementation, replace `OpenAIRealtimeWS` with `OpenAIRealtimeWebSocket` and adjust any `rt.socket` access:

```ts
import { OpenAIRealtimeWebSocket } from 'openai/beta/realtime/websocket';

const rt = new OpenAIRealtimeWebSocket({ model: 'gpt-4o-realtime-preview-2024-12-17' });
// ...
rt.socket.addEventListener('open', () => {
  // ...
});
```

A full example can be found [here](https://github.com/openai/openai-node/blob/master/examples/realtime/web.ts).

### Realtime error handling

When an error is encountered, either on the client side or returned from the server through the [`error` event](https://platform.openai.com/docs/guides/realtime/realtime-api-beta#handling-errors), the `error` event listener will be fired. If you haven't registered an `error` event listener, the error is raised as an unhandled Promise rejection instead.

It is **highly recommended** that you register an `error` event listener and handle errors appropriately, as the underlying connection is typically still usable.

```ts
const rt = new OpenAIRealtimeWS({ model: 'gpt-4o-realtime-preview-2024-12-17' });
rt.on('error', (err) => {
  // in a real-world scenario this should be logged somewhere as you
  // likely want to continue processing events regardless of any errors
  throw err;
});
```
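The pattern being recommended — log, don't throw — can be sketched without the SDK, since the client exposes a standard `.on('error', …)` emitter surface. This is an illustrative stand-in using Node's `EventEmitter`, not part of the library:

```typescript
import { EventEmitter } from 'node:events';

// stand-in for the realtime client's emitter surface, for illustration only
const rt = new EventEmitter();

const seen: string[] = [];
rt.on('error', (err: Error) => {
  // log and continue; the realtime connection is typically still usable,
  // so crashing the process here would drop an otherwise healthy session
  seen.push(err.message);
  console.error('realtime error:', err.message);
});

rt.emit('error', new Error('invalid_request_error'));
console.log('still running, handled:', seen.length);
```

Note that Node's `EventEmitter` throws if `'error'` is emitted with no listener registered, which parallels the unhandled-rejection behavior described above.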

### Request & Response types

This library includes TypeScript definitions for all request params and response fields. You may import and use them like so:
2 changes: 1 addition & 1 deletion api.md
@@ -283,7 +283,7 @@ Types:
- <code><a href="./src/resources/beta/vector-stores/vector-stores.ts">OtherFileChunkingStrategyObject</a></code>
- <code><a href="./src/resources/beta/vector-stores/vector-stores.ts">StaticFileChunkingStrategy</a></code>
- <code><a href="./src/resources/beta/vector-stores/vector-stores.ts">StaticFileChunkingStrategyObject</a></code>
- <code><a href="./src/resources/beta/vector-stores/vector-stores.ts">StaticFileChunkingStrategyParam</a></code>
- <code><a href="./src/resources/beta/vector-stores/vector-stores.ts">StaticFileChunkingStrategyObjectParam</a></code>
- <code><a href="./src/resources/beta/vector-stores/vector-stores.ts">VectorStore</a></code>
- <code><a href="./src/resources/beta/vector-stores/vector-stores.ts">VectorStoreDeleted</a></code>

7 changes: 4 additions & 3 deletions examples/package.json
@@ -6,14 +6,15 @@
"license": "MIT",
"private": true,
"dependencies": {
"@azure/identity": "^4.2.0",
"express": "^4.18.2",
"next": "^14.1.1",
"openai": "file:..",
"zod-to-json-schema": "^3.21.4",
"@azure/identity": "^4.2.0"
"zod-to-json-schema": "^3.21.4"
},
"devDependencies": {
"@types/body-parser": "^1.19.3",
"@types/express": "^4.17.19"
"@types/express": "^4.17.19",
"@types/web": "^0.0.194"
}
}
48 changes: 48 additions & 0 deletions examples/realtime/websocket.ts
@@ -0,0 +1,48 @@
import { OpenAIRealtimeWebSocket } from 'openai/beta/realtime/websocket';

async function main() {
  const rt = new OpenAIRealtimeWebSocket({ model: 'gpt-4o-realtime-preview-2024-12-17' });

  // access the underlying web `WebSocket` instance
  rt.socket.addEventListener('open', () => {
    console.log('Connection opened!');
    rt.send({
      type: 'session.update',
      session: {
        modalities: ['text'],
        model: 'gpt-4o-realtime-preview',
      },
    });

    rt.send({
      type: 'conversation.item.create',
      item: {
        type: 'message',
        role: 'user',
        content: [{ type: 'input_text', text: 'Say a couple paragraphs!' }],
      },
    });

    rt.send({ type: 'response.create' });
  });

  rt.on('error', (err) => {
    // in a real-world scenario this should be logged somewhere as you
    // likely want to continue processing events regardless of any errors
    throw err;
  });

  rt.on('session.created', (event) => {
    console.log('session created!', event.session);
    console.log();
  });

  rt.on('response.text.delta', (event) => process.stdout.write(event.delta));
  rt.on('response.text.done', () => console.log());

  rt.on('response.done', () => rt.close());

  rt.socket.addEventListener('close', () => console.log('\nConnection closed!'));
}

main();
55 changes: 55 additions & 0 deletions examples/realtime/ws.ts
@@ -0,0 +1,55 @@
import { OpenAIRealtimeWS } from 'openai/beta/realtime/ws';

async function main() {
  const rt = new OpenAIRealtimeWS({ model: 'gpt-4o-realtime-preview-2024-12-17' });

  // access the underlying `ws.WebSocket` instance
  rt.socket.on('open', () => {
    console.log('Connection opened!');
    rt.send({
      type: 'session.update',
      session: {
        // `'foo'` is not a valid modality; the `any` cast bypasses the type check,
        // so the server responds with an `error` event
        modalities: ['foo'] as any,
        model: 'gpt-4o-realtime-preview',
      },
    });
    rt.send({
      type: 'session.update',
      session: {
        modalities: ['text'],
        model: 'gpt-4o-realtime-preview',
      },
    });

    rt.send({
      type: 'conversation.item.create',
      item: {
        type: 'message',
        role: 'user',
        content: [{ type: 'input_text', text: 'Say a couple paragraphs!' }],
      },
    });

    rt.send({ type: 'response.create' });
  });

  rt.on('error', (err) => {
    // in a real-world scenario this should be logged somewhere as you
    // likely want to continue processing events regardless of any errors
    throw err;
  });

  rt.on('session.created', (event) => {
    console.log('session created!', event.session);
    console.log();
  });

  rt.on('response.text.delta', (event) => process.stdout.write(event.delta));
  rt.on('response.text.done', () => console.log());

  rt.on('response.done', () => rt.close());

  rt.socket.on('close', () => console.log('\nConnection closed!'));
}

main();
2 changes: 1 addition & 1 deletion jsr.json
@@ -1,6 +1,6 @@
{
"name": "@openai/openai",
"version": "4.78.1",
"version": "4.79.0",
"exports": "./index.ts",
"publish": {
"exclude": [
8 changes: 7 additions & 1 deletion package.json
@@ -1,6 +1,6 @@
{
"name": "openai",
"version": "4.78.1",
"version": "4.79.0",
"description": "The official TypeScript library for the OpenAI API",
"author": "OpenAI <[email protected]>",
"types": "dist/index.d.ts",
@@ -36,6 +36,7 @@
"@swc/core": "^1.3.102",
"@swc/jest": "^0.2.29",
"@types/jest": "^29.4.0",
"@types/ws": "^8.5.13",
"@typescript-eslint/eslint-plugin": "^6.7.0",
"@typescript-eslint/parser": "^6.7.0",
"eslint": "^8.49.0",
@@ -52,6 +53,7 @@
"tsc-multi": "^1.1.0",
"tsconfig-paths": "^4.0.0",
"typescript": "^4.8.2",
"ws": "^8.18.0",
"zod": "^3.23.8"
},
"sideEffects": [
@@ -126,9 +128,13 @@
},
"bin": "./bin/cli",
"peerDependencies": {
"ws": "^8.18.0",
"zod": "^3.23.8"
},
"peerDependenciesMeta": {
"ws": {
"optional": true
},
"zod": {
"optional": true
}
1 change: 1 addition & 0 deletions src/beta/realtime/index.ts
@@ -0,0 +1 @@
export { OpenAIRealtimeError } from './internal-base';
83 changes: 83 additions & 0 deletions src/beta/realtime/internal-base.ts
@@ -0,0 +1,83 @@
import { RealtimeClientEvent, RealtimeServerEvent, ErrorEvent } from '../../resources/beta/realtime/realtime';
import { EventEmitter } from '../../lib/EventEmitter';
import { OpenAIError } from '../../error';

export class OpenAIRealtimeError extends OpenAIError {
  /**
   * The error data that the API sent back in an `error` event.
   */
  error?: ErrorEvent.Error | undefined;

  /**
   * The unique ID of the server event.
   */
  event_id?: string | undefined;

  constructor(message: string, event: ErrorEvent | null) {
    super(message);

    this.error = event?.error;
    this.event_id = event?.event_id;
  }
}

type Simplify<T> = { [KeyType in keyof T]: T[KeyType] } & {};

type RealtimeEvents = Simplify<
  {
    event: (event: RealtimeServerEvent) => void;
    error: (error: OpenAIRealtimeError) => void;
  } & {
    [EventType in Exclude<RealtimeServerEvent['type'], 'error'>]: (
      event: Extract<RealtimeServerEvent, { type: EventType }>,
    ) => unknown;
  }
>;

export abstract class OpenAIRealtimeEmitter extends EventEmitter<RealtimeEvents> {
  /**
   * Send an event to the API.
   */
  abstract send(event: RealtimeClientEvent): void;

  /**
   * Close the websocket connection.
   */
  abstract close(props?: { code: number; reason: string }): void;

  protected _onError(event: null, message: string, cause: any): void;
  protected _onError(event: ErrorEvent, message?: string | undefined): void;
  protected _onError(event: ErrorEvent | null, message?: string | undefined, cause?: any): void {
    message =
      event?.error ?
        `${event.error.message} code=${event.error.code} param=${event.error.param} type=${event.error.type} event_id=${event.error.event_id}`
      : message ?? 'unknown error';

    if (!this._hasListener('error')) {
      const error = new OpenAIRealtimeError(
        message +
          `\n\nTo resolve these unhandled rejection errors you should bind an \`error\` callback, e.g. \`rt.on('error', (error) => ...)\` `,
        event,
      );
      // @ts-ignore
      error.cause = cause;
      Promise.reject(error);
      return;
    }

    const error = new OpenAIRealtimeError(message, event);
    // @ts-ignore
    error.cause = cause;

    this._emit('error', error);
  }
}

export function buildRealtimeURL(props: { baseURL: string; model: string }): URL {
  const path = '/realtime';

  const url = new URL(props.baseURL + (props.baseURL.endsWith('/') ? path.slice(1) : path));
  url.protocol = 'wss';
  url.searchParams.set('model', props.model);
  return url;
}
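As a quick sanity check of the URL construction in `buildRealtimeURL`, a standalone sketch (re-declaring the helper so it runs outside the SDK) shows the endpoint it produces:

```typescript
// standalone copy of buildRealtimeURL, for illustration
function buildRealtimeURL(props: { baseURL: string; model: string }): URL {
  const path = '/realtime';
  const url = new URL(props.baseURL + (props.baseURL.endsWith('/') ? path.slice(1) : path));
  // ws/wss are "special" schemes in the WHATWG URL spec, so https -> wss is permitted
  url.protocol = 'wss';
  url.searchParams.set('model', props.model);
  return url;
}

const url = buildRealtimeURL({
  baseURL: 'https://api.openai.com/v1',
  model: 'gpt-4o-realtime-preview-2024-12-17',
});
console.log(url.toString());
// → wss://api.openai.com/v1/realtime?model=gpt-4o-realtime-preview-2024-12-17
```

The trailing-slash check avoids producing a double slash when the configured base URL ends with `/`.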