# De-XHRify OfflineAudioContext examples #30418

Merged: 3 commits, Nov 21, 2023. Showing changes from all commits.
**files/en-us/web/api/offlineaudiocontext/index.md** (87 changes: 42 additions & 45 deletions)

````diff
@@ -48,67 +48,64 @@ Listen to these events using [`addEventListener()`](/en-US/docs/Web/API/EventTar
 
 ## Examples
 
-In this simple example, we declare both an {{domxref("AudioContext")}} and an `OfflineAudioContext` object. We use the `AudioContext` to load an audio track via XHR ({{domxref("BaseAudioContext.decodeAudioData")}}), then the `OfflineAudioContext` to render the audio into an {{domxref("AudioBufferSourceNode")}} and play the track through. After the offline audio graph is set up, you need to render it to an {{domxref("AudioBuffer")}} using {{domxref("OfflineAudioContext.startRendering")}}.
+### Playing audio with an offline audio context
+
+In this example, we declare both an {{domxref("AudioContext")}} and an `OfflineAudioContext` object. We use the `AudioContext` to load an audio track via {{domxref("fetch()")}}, then the `OfflineAudioContext` to render the audio into an {{domxref("AudioBufferSourceNode")}} and play the track through. After the offline audio graph is set up, we render it to an {{domxref("AudioBuffer")}} using `OfflineAudioContext.startRendering()`.
 
 When the `startRendering()` promise resolves, rendering has completed and the output `AudioBuffer` is returned out of the promise.
 
 At this point we create another audio context, create an {{domxref("AudioBufferSourceNode")}} inside it, and set its buffer to be equal to the promise `AudioBuffer`. This is then played as part of a simple standard audio graph.
 
-> **Note:** For a working example, see our [offline-audio-context-promise](https://mdn.github.io/webaudio-examples/offline-audio-context-promise/) GitHub repo (see the [source code](https://github.com/mdn/webaudio-examples/tree/master/offline-audio-context-promise) too.)
+> **Note:** You can [run the full example live](https://mdn.github.io/webaudio-examples/offline-audio-context-promise/), or [view the source](https://github.com/mdn/webaudio-examples/blob/main/offline-audio-context-promise/).
 
 ```js
-// define online and offline audio context
-
-const audioCtx = new AudioContext();
+// Define both online and offline audio contexts
+let audioCtx; // Must be initialized after a user interaction
 const offlineCtx = new OfflineAudioContext(2, 44100 * 40, 44100);
 
-source = offlineCtx.createBufferSource();
-
-// use XHR to load an audio track, and
-// decodeAudioData to decode it and OfflineAudioContext to render it
+// Define constants for DOM nodes
+const play = document.querySelector("#play");
 
 function getData() {
-  request = new XMLHttpRequest();
-
-  request.open("GET", "viper.ogg", true);
-
-  request.responseType = "arraybuffer";
-
-  request.onload = () => {
-    const audioData = request.response;
-
-    audioCtx.decodeAudioData(audioData, (buffer) => {
-      myBuffer = buffer;
-      source.buffer = myBuffer;
+  // Fetch an audio track, decode it and stick it in a buffer.
+  // Then we put the buffer into the source and can play it.
+  fetch("viper.ogg")
+    .then((response) => response.arrayBuffer())
+    .then((downloadedBuffer) => audioCtx.decodeAudioData(downloadedBuffer))
+    .then((decodedBuffer) => {
+      console.log("File downloaded successfully.");
+      const source = new AudioBufferSourceNode(offlineCtx, {
+        buffer: decodedBuffer,
+      });
       source.connect(offlineCtx.destination);
-      source.start();
-      //source.loop = true;
-      offlineCtx
-        .startRendering()
-        .then((renderedBuffer) => {
-          console.log("Rendering completed successfully");
-          const song = audioCtx.createBufferSource();
-          song.buffer = renderedBuffer;
-
-          song.connect(audioCtx.destination);
-
-          play.onclick = () => {
-            song.start();
-          };
-        })
-        .catch((err) => {
-          console.error(`Rendering failed: ${err}`);
-          // Note: The promise should reject when startRendering is called a second time on an OfflineAudioContext
-        });
-    });
-  };
-
-  request.send();
+      return source.start();
+    })
+    .then(() => offlineCtx.startRendering())
+    .then((renderedBuffer) => {
+      console.log("Rendering completed successfully.");
+      play.disabled = false;
+      const song = new AudioBufferSourceNode(audioCtx, {
+        buffer: renderedBuffer,
+      });
+      song.connect(audioCtx.destination);
+
+      // Start the song
+      song.start();
+    })
+    .catch((err) => {
+      console.error(`Error encountered: ${err}`);
+    });
 }
 
-// Run getData to start the process off
-getData();
+// Activate the play button
+play.onclick = () => {
+  play.disabled = true;
+  // We can initialize the context as the user clicked.
+  audioCtx = new AudioContext();
+
+  // Fetch the data and start the song
+  getData();
+};
 ```
 
 ## Specifications
````
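The hunk context above mentions listening for events with `addEventListener()`: besides the promise the new example chains on, `OfflineAudioContext` also fires a `complete` event when rendering finishes, and the `OfflineAudioCompletionEvent` carries the same rendered `AudioBuffer` the promise resolves with. As a minimal sketch, not part of this PR's diff (it reuses the same 2-channel, 40-second, 44.1 kHz context; no source is connected, so it renders silence):

```js
// Sketch: event-based completion, as an alternative to awaiting the
// startRendering() promise used in the merged example.
const offlineCtx = new OfflineAudioContext(2, 44100 * 40, 44100);

offlineCtx.addEventListener("complete", (event) => {
  // The OfflineAudioCompletionEvent exposes the rendered AudioBuffer
  console.log(`Rendered ${event.renderedBuffer.duration} seconds of audio.`);
});

// startRendering() still kicks off the render; its promise resolves with
// the same buffer the "complete" event delivers.
offlineCtx.startRendering();
```

The promise form reads better in a chain, which is presumably why the rewritten example uses it; the event form predates promises in the spec and remains useful when the listener lives far from the call site.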
**files/en-us/web/api/offlineaudiocontext/startrendering/index.md** (93 changes: 43 additions & 50 deletions)

````diff
@@ -31,71 +31,64 @@ A {{jsxref("Promise")}} that fulfills with an {{domxref("AudioBuffer")}}.
 
 ## Examples
 
-In this simple example, we declare both an {{domxref("AudioContext")}} and an `OfflineAudioContext` object.
-We use the `AudioContext` to load an audio track via XHR ({{domxref("BaseAudioContext.decodeAudioData")}}), then the `OfflineAudioContext` to render the audio into an {{domxref("AudioBufferSourceNode")}} and play the track through.
-After the offline audio graph is set up, you need to render it to an {{domxref("AudioBuffer")}} using {{domxref("OfflineAudioContext.startRendering")}}.
+### Playing audio with an offline audio context
+
+In this example, we declare both an {{domxref("AudioContext")}} and an `OfflineAudioContext` object. We use the `AudioContext` to load an audio track via {{domxref("fetch()")}}, then the `OfflineAudioContext` to render the audio into an {{domxref("AudioBufferSourceNode")}} and play the track through. After the offline audio graph is set up, we render it to an {{domxref("AudioBuffer")}} using `OfflineAudioContext.startRendering()`.
 
 When the `startRendering()` promise resolves, rendering has completed and the output `AudioBuffer` is returned out of the promise.
 
-At this point we create another audio context, create an {{domxref("AudioBufferSourceNode")}} inside it, and set its buffer to be equal to the promise `AudioBuffer`.
-This is then played as part of a simple standard audio graph.
+At this point we create another audio context, create an {{domxref("AudioBufferSourceNode")}} inside it, and set its buffer to be equal to the promise `AudioBuffer`. This is then played as part of a simple standard audio graph.
 
-> **Note:** For a working example, see our [offline-audio-context-promise](https://mdn.github.io/webaudio-examples/offline-audio-context-promise/) GitHub repo (see the [source code](https://github.com/mdn/webaudio-examples) too.)
+> **Note:** You can [run the full example live](https://mdn.github.io/webaudio-examples/offline-audio-context-promise/), or [view the source](https://github.com/mdn/webaudio-examples/blob/main/offline-audio-context-promise/).
 
 ```js
-// define online and offline audio context
-
-const audioCtx = new AudioContext();
+// Define both online and offline audio contexts
+let audioCtx; // Must be initialized after a user interaction
 const offlineCtx = new OfflineAudioContext(2, 44100 * 40, 44100);
 
-source = offlineCtx.createBufferSource();
-
-// use XHR to load an audio track, and
-// decodeAudioData to decode it and OfflineAudioContext to render it
+// Define constants for DOM nodes
+const play = document.querySelector("#play");
 
 function getData() {
-  request = new XMLHttpRequest();
-
-  request.open("GET", "viper.ogg", true);
-
-  request.responseType = "arraybuffer";
-
-  request.onload = () => {
-    const audioData = request.response;
-
-    audioCtx.decodeAudioData(audioData, (buffer) => {
-      myBuffer = buffer;
-      source.buffer = myBuffer;
+  // Fetch an audio track, decode it and stick it in a buffer.
+  // Then we put the buffer into the source and can play it.
+  fetch("viper.ogg")
+    .then((response) => response.arrayBuffer())
+    .then((downloadedBuffer) => audioCtx.decodeAudioData(downloadedBuffer))
+    .then((decodedBuffer) => {
+      console.log("File downloaded successfully.");
+      const source = new AudioBufferSourceNode(offlineCtx, {
+        buffer: decodedBuffer,
+      });
       source.connect(offlineCtx.destination);
-      source.start();
-      //source.loop = true;
-      offlineCtx
-        .startRendering()
-        .then((renderedBuffer) => {
-          console.log("Rendering completed successfully");
-          const offlineAudioCtx = new AudioContext();
-          const song = offlineAudioCtx.createBufferSource();
-          song.buffer = renderedBuffer;
-
-          song.connect(offlineAudioCtx.destination);
-
-          play.onclick = () => {
-            song.start();
-          };
-        })
-        .catch((err) => {
-          console.error(`Rendering failed: ${err}`);
-          // Note: The promise should reject when startRendering is called a second time on an OfflineAudioContext
-        });
-    });
-  };
-
-  request.send();
+      return source.start();
+    })
+    .then(() => offlineCtx.startRendering())
+    .then((renderedBuffer) => {
+      console.log("Rendering completed successfully.");
+      play.disabled = false;
+      const song = new AudioBufferSourceNode(audioCtx, {
+        buffer: renderedBuffer,
+      });
+      song.connect(audioCtx.destination);
+
+      // Start the song
+      song.start();
+    })
+    .catch((err) => {
+      console.error(`Error encountered: ${err}`);
+    });
 }
 
-// Run getData to start the process off
-getData();
+// Activate the play button
+play.onclick = () => {
+  play.disabled = true;
+  // We can initialize the context as the user clicked.
+  audioCtx = new AudioContext();
+
+  // Fetch the data and start the song
+  getData();
+};
 ```
 
 ## Specifications
````
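For readers who prefer it, here is a rough async/await rendering of the same flow as the merged examples; it is not part of this PR's diff, and it assumes the same `#play` button and `viper.ogg` asset that the examples reference:

```js
// Sketch: the merged promise chain rewritten with async/await.
let audioCtx;
const offlineCtx = new OfflineAudioContext(2, 44100 * 40, 44100);
const play = document.querySelector("#play");

async function renderAndPlay() {
  // Fetch and decode the track
  const response = await fetch("viper.ogg");
  const downloadedBuffer = await response.arrayBuffer();
  const decodedBuffer = await audioCtx.decodeAudioData(downloadedBuffer);

  // Build the offline graph and render it to an AudioBuffer
  const source = new AudioBufferSourceNode(offlineCtx, {
    buffer: decodedBuffer,
  });
  source.connect(offlineCtx.destination);
  source.start();
  const renderedBuffer = await offlineCtx.startRendering();

  // Play the rendered buffer through the online context
  const song = new AudioBufferSourceNode(audioCtx, { buffer: renderedBuffer });
  song.connect(audioCtx.destination);
  song.start();
}

play.onclick = () => {
  // Autoplay policies require a user gesture before creating the context
  audioCtx = new AudioContext();
  renderAndPlay().catch((err) => console.error(`Error encountered: ${err}`));
};
```

Functionally this matches the promise chain: create the online context inside the click handler, render the graph offline once, then play the rendered buffer through the online context.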