Update WebGL architecture doc with edits #6791

Merged: 4 commits, Mar 6, 2024

Changes from 3 commits
93 changes: 60 additions & 33 deletions contributor_docs/webgl_mode_architecture.md
@@ -14,13 +14,13 @@ We keep track of the progress of WebGL issues in [a GitHub Project.](https://git

When evaluating a new feature, we consider whether it aligns with the goals of p5.js and WebGL mode:

1. **Features should be beginner friendly:** It should provide a **beginner-friendly introduction to WebGL** and the features it offers. This means that we should offer simple APIs for 3D shapes, cameras, lighting, and shaders. We can still support advanced features, but only if they do not interfere with the simplicity of core features.
1. **Features should be beginner-friendly:** It should provide a **beginner-friendly introduction to WebGL** and the features it offers. This means that we should offer simple APIs for 3D shapes, cameras, lighting, and shaders. We can still support advanced features, but only if they do not interfere with the simplicity of core features.
2. **Improving feature parity with 2D mode:** It should be a **frictionless transition from 2D mode,** making 3D and WebGL "click" more easily for users. This means that we try to create features that work in 2D mode and also in WebGL mode. Since WebGL also has 3D and shader features, this means WebGL mode aims to have a superset of 2D mode's features.
3. **Simplicity and Extensibility are paramount:** It should **have a small core and be extensible for libraries.** Keeping WebGL mode small makes it easier to optimize core features and reduce bug surface area. Extension provides an avenue to include more advanced features via libraries.
3. **Simplicity and extensibility are paramount:** It should **have a small core and be extensible for libraries.** Keeping WebGL mode small makes it easier to optimize core features and reduce bug surface area. Extension provides an avenue to include more advanced features via libraries.
4. **Improve p5.js performance:** It should **run as fast as possible without interfering with the previous goals.** Good performance keeps sketches accessible to a wide variety of viewers and devices. When designing new APIs, we try to ensure the design has a performant implementation. However, we give preference to simplicity and parity with 2D mode.


## Design Differences with 2D Mode
## Design differences with 2D mode

The browser's 2D and WebGL canvas context APIs offer very different levels of abstraction, with WebGL being generally lower-level and 2D being higher-level. This motivates some fundamental design differences between p5.js's WebGL and 2D modes.

@@ -29,13 +29,13 @@ The browser's 2D and WebGL canvas context APIs offer very different levels of ab
- **WebGL mode must balance high- and low-level APIs.** Since finer-grained control is available with the browser WebGL API, p5.js's WebGL mode is able to offer users some of that control where 2D mode cannot. We then are faced with the task of picking the right level of abstraction for users. Too high, and they are unable to take advantage of some of what the browser has to offer; too low, and we pass too much of the work of managing complexity and performance onto the user.


## Drawing Shapes
## Drawing shapes

### Creating Shapes: Fills, Strokes, and 3D Geometry
### Creating shapes: fills, strokes, and 3D geometry

Everything drawn by p5.js, both in 2D and WebGL, consists of fills and strokes. Sometimes, we only draw one or the other, but every shape must be ready to draw either component.

All shapes in webGL are composed of triangles. When a user calls a function like `circle()`, `beginShape(),` or `vertex()`, the renderer must [break the shape down into a series of points](https://github.com/processing/p5.js/blob/main/src/webgl/3d_primitives.js). The points are connected into lines, and the lines into triangles. For example, `circle()` uses trigonometry to figure out where to place points along a circle. `curveVertex()` and `bezierVertex()` create look-up tables to turn any Bezier curve into points.
All shapes in WebGL are composed of triangles. When a user calls a function like `circle()`, `beginShape()`, or `vertex()`, the renderer must [break the shape down into a series of points](https://github.com/processing/p5.js/blob/main/src/webgl/3d_primitives.js). The points are connected into lines, and the lines into triangles. For example, `circle()` uses trigonometry to figure out where to place points along a circle. `curveVertex()` and `bezierVertex()` create look-up tables to turn any Bézier curve into points.

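For illustration, here is a minimal sketch (assuming only the standard public p5.js API) where the renderer has to break a user-defined outline down into points, lines, and triangles before anything reaches the GPU:

```js
// The renderer samples the Bézier segment into points via a look-up table,
// connects the points into lines, and fills the outline with triangles.
function setup() {
  createCanvas(200, 200, WEBGL);
}

function draw() {
  background(220);
  beginShape();
  vertex(-50, 50);
  vertex(0, -50);
  vertex(50, 50);
  bezierVertex(30, 90, -30, 90, -50, 50); // curved bottom edge
  endShape(CLOSE);
}
```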

#### Fills
@@ -45,7 +45,7 @@ To create fills, the outline of a shape needs to be filled in with triangles. So

#### Strokes

Despite their name, strokes also need to be filled in to support strokes of varying widths and styles. The lines along the outline of a shape need to expand out from their centers to form shapes with area. Expansion of strokes creates three types of shapes: joins, caps, and segments, illustrated below.
Despite their name, strokes also need to be filled in to support strokes of varying widths and styles. The lines along the outline of a shape need to expand out from their centers to form shapes with area. The expansion of strokes creates three types of shapes: joins, caps, and segments, illustrated below.

<img alt="Illustration of the segment, join, and cap shapes created by a stroke." src="images/line-diagram.svg" width="600" />
<!-- Generated via https://codepen.io/davepvm/pen/ZEVdppQ -->
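As a rough sketch (standard p5.js API assumed), the stroke settings below control exactly those three shapes: `strokeCap()` for the disconnected ends, `strokeJoin()` for the corners, and `strokeWeight()` for how far each segment expands from its center line:

```js
function setup() {
  createCanvas(200, 200, WEBGL);
}

function draw() {
  background(255);
  noFill();
  strokeWeight(20);   // how far each segment expands from its center line
  strokeCap(ROUND);   // cap shape at the open ends of the polyline
  strokeJoin(MITER);  // join shape where two segments meet
  beginShape();
  vertex(-60, 40);
  vertex(0, -40);
  vertex(60, 40);
  endShape();
}
```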
@@ -67,17 +67,17 @@ We use a similar strategy for stroke caps, present at the disconnected ends of l
3D shapes can also have strokes, but stroke shapes are calculated in 2D. This means they can change based on the camera's perspective. We want to avoid as much recalculation as possible, so we store all the information about the line that is not camera-dependent:

- We include the **center points of the line** in model space, shown below in red.
- We include the **direction of the line**, its tangent, at the start and end of each shape, shown in blue and pink, respectively. This helps us compute the shape of joins where two lines connect.
- We include the **direction of the line** using its tangent at the start and end of each shape, shown in blue and pink, respectively. This helps us compute the shape of joins where two lines connect.
- We include **a flag that uniquely identifies each corner of the shape.** Combined with the tangent and the normal (a 90-degree rotation of the tangent), it helps determine in what direction to expand the line to give it thickness.

To draw the line, we combine that information with camera intrinsics in a shader to produce the final line positions in screen space.

<img alt="The information stored about lines, and the final shapes that they turn into." src="images/flags.svg" width="600" />


### Rendering Shapes: Immediate and Retained Modes
### Rendering shapes: immediate and retained modes

There are two routes that p5.js uses to draw shapes onto the screen: **immediate mode** and **retained mode.**
There are two modes that p5.js uses to draw shapes onto the screen: **immediate mode** and **retained mode.**

**Immediate mode** is optimized for shapes that change every frame. If you were drawing a curve that changes each frame, its shape data would be different every time you drew it. Because of this, immediate mode fits it best. It indicates to p5.js that it does not need to spend time storing the shape for reuse, and it saves graphics memory from being filled up with all the shape variations over time. The following functions use this mode:

@@ -88,21 +88,21 @@ There are two routes that p5.js uses to draw shapes onto the screen: **immediate
- `line()`
- `image()`

Retained mode is optimized for shapes that you will need to keep redrawing and don’t change shape. Once a shape is made of triangles and has been sent to the GPU to draw, retained mode keeps it there. It can then be drawn again without having to spend time re-triangulating it or sending it to the GPU again. The saved shape data is kept in a p5.Geometry object. p5.Geometry stores triangle data and keeps track of its uploaded buffers on the GPU. Calling `freeGeometry()` clears the GPU data to make space. Drawing it again after that will re-upload the data. Many 3D shape drawing functions in p5.js, such as `sphere()` or `cone()`, use this internally.
Retained mode is optimized for shapes that you will need to keep redrawing and don’t change shape. Once a shape is made of triangles and has been sent to the GPU to draw, retained mode keeps it there. It can then be drawn again without having to spend time re-triangulating it or sending it to the GPU again. The saved shape data is kept in a `p5.Geometry` object. `p5.Geometry` stores triangle data and keeps track of its uploaded buffers on the GPU. Calling `freeGeometry()` clears the GPU data to make space. Drawing it again after that will re-upload the data. Many 3D shape drawing functions in p5.js, such as `sphere()` or `cone()`, use this internally.

You can use buildGeometry() to make a p5.Geometry out of immediate mode commands. You call it with a function that runs a series of any p5.js shape drawing functions. It runs the function, collects the shapes into a new p5.Geometry, and returns it. The p5.Geometry can then be drawn and redrawn efficiently in the future.
You can use `buildGeometry()` to make a `p5.Geometry` out of immediate mode commands. You call it with a function that runs a series of any p5.js shape drawing functions. It runs the function, collects the shapes into a new `p5.Geometry`, and returns it. The `p5.Geometry` can then be drawn and redrawn efficiently in the future.

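The sketch below (a sketch against the standard p5.js API, with the `snowflake` name purely illustrative) contrasts the two paths: a `p5.Geometry` built once and redrawn cheaply via retained mode, next to an immediate mode shape that is re-triangulated every frame:

```js
let snowflake; // reusable p5.Geometry drawn through retained mode

function setup() {
  createCanvas(200, 200, WEBGL);
  // buildGeometry() runs the callback once, collects the shapes it draws,
  // and returns them as a single p5.Geometry.
  snowflake = buildGeometry(() => {
    for (let i = 0; i < 6; i += 1) {
      push();
      rotate((i * TWO_PI) / 6);
      translate(0, -40);
      cone(10, 30);
      pop();
    }
  });
}

function draw() {
  background(0);
  lights();
  model(snowflake); // buffers are already on the GPU, so redrawing is cheap

  // Immediate mode: this triangle changes every frame, so it is rebuilt
  // and re-uploaded each time instead of being stored.
  beginShape();
  vertex(-50 + 10 * sin(frameCount * 0.05), 60);
  vertex(50, 60);
  vertex(0, 90);
  endShape(CLOSE);

  // snowflake.freeGeometry(); // would release the GPU buffers; the next
  //                           // model(snowflake) call re-uploads them
}
```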

## Materials, Lights, and Shaders
## Materials, lights, and shaders

Every shape we draw uses a single shader for its fills, and a single shader for its strokes. There are a few default shaders that one can pick from in p5.js. You can also write and use your own shader instead of the default ones.
Every shape we draw uses a single shader for its fills and a single shader for its strokes. There are a few default shaders that one can pick from in p5.js. You can also write and use your own shader instead of the default ones.

The default shaders work with p5.js's lighting and materials system. The user can specify what lights are in the scene with a shape and how each object reacts to light, including color and shininess. This information is given to the shader for each object being drawn. Custom shaders can also access the same lighting and material information, allowing users and library makers to extend the default rendering behavior.

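As a hedged sketch using only documented p5.js calls, the light and material functions below are what route this box through the lighting shader instead of the flat color shader:

```js
function setup() {
  createCanvas(200, 200, WEBGL);
}

function draw() {
  background(20);
  // Each call adds one light to the scene's light list for this frame.
  ambientLight(60);
  directionalLight(255, 255, 255, 0, 0.5, -1);
  // Material calls describe how the surface reacts to those lights.
  specularMaterial(250);
  shininess(40);
  rotateX(frameCount * 0.01);
  rotateY(frameCount * 0.013);
  box(80);
}
```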

### Shaders

P5 has a few shaders built in:
p5.js has a few shaders built in:

- **Color Shader:** for drawing flat colors, activated by using `fill()` or `stroke()`.
- **Lighting Shader:** for drawing 2D and 3D shapes with complex lighting and textures. Activated by calling `lights()`, `ambientLight()`, `directionalLight()`, `pointLight()`, and `spotLight()`. Each adds a light to the lighting list. All added lights contribute to the shading of the shape. If you do not use lights, the shape will be drawn using the color shader, which only uses the fill color.
@@ -129,11 +129,11 @@ Each 3D object has a few material properties that can be set by the user:
- **Emissive material**: Set with `emissiveMaterial()`, this adds a constant color to the lighting of the shape, as if it were producing its own light of that color (see the sketch after this list).

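A minimal sketch of the emissive case (standard p5.js API assumed): the emissive color is added on top of whatever the lights contribute, so the sphere reads as if it glows:

```js
function setup() {
  createCanvas(200, 200, WEBGL);
}

function draw() {
  background(0);
  lights();                     // ordinary scene lighting
  emissiveMaterial(255, 80, 0); // constant color added on top of the lighting
  sphere(50);
}
```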

### Shader Implementation
### Shader implementation

The lighting and material parameters get turned into shader attributes and uniforms. If you reference them in a custom shader, p5.js will supply them automatically.

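For example, a custom shader can lean entirely on attributes and uniforms that p5.js documents and fills in automatically (`aPosition`, `aNormal`, `uModelViewMatrix`, `uProjectionMatrix`); this is a sketch rather than a copy of the built-in shaders:

```js
let normalShader;

const vert = `
precision mediump float;
attribute vec3 aPosition;       // supplied per vertex by p5.js
attribute vec3 aNormal;
uniform mat4 uModelViewMatrix;  // supplied per object by p5.js
uniform mat4 uProjectionMatrix;
varying vec3 vNormal;
void main() {
  vNormal = aNormal;
  gl_Position = uProjectionMatrix * uModelViewMatrix * vec4(aPosition, 1.0);
}`;

const frag = `
precision mediump float;
varying vec3 vNormal;
void main() {
  // Visualize the model-space normal as a color.
  gl_FragColor = vec4(vNormal * 0.5 + 0.5, 1.0);
}`;

function setup() {
  createCanvas(200, 200, WEBGL);
  normalShader = createShader(vert, frag);
}

function draw() {
  background(0);
  shader(normalShader);
  rotateY(frameCount * 0.02);
  torus(50, 20);
}
```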
While advanced shader writers can take advantage of these properties, it can be unclear for new users. In the Future Goals section, we describe some plans for improving the API. We may want to improve it before publicly documenting and supporting it.
While advanced shader writers can take advantage of these properties, it can be unclear for new users. In the <a href="#future-goals">Future Goals section</a>, we describe some plans for improving the API. We may want to improve it before publicly documenting and supporting it.


#### Global
@@ -242,22 +242,49 @@ It also has the following per-vertex attributes:

## Classes

The entrypoint to most WebGL code is through **p5.RendererGL**. Top-level p5.js functions are passed to the current renderer. Both 2D and WebGL modes have renderer classes that conform to this common interface. Immediate mode and retained mode functions are split up into **p5.RendererGL.Immediate.js** and **p5.RendererGL.Retained.js**.

Within the renderer, references to models are stored in the **retainedMode.geometry** map. Each value is an object storing the buffers of a **p5.Geometry**. When calling model(yourGeometry) for the first time, the renderer adds an entry in the map. It then stores references to the geometry's GPU resources there. If you draw a p5.Geometry to the main canvas and also to a WebGL p5.Graphics, it will have entries in two renderers.

Each material is represented by a **p5.Shader.** You set the current shader in the renderer via the shader(yourShader) function. This class handles compiling shader source code and setting shader uniforms.

When setting a shader uniform, if the uniform type is an image, then the renderer creates a p5.Texture for it. Each p5.Image, p5.Graphics, p5.MediaElement, or p5.Framebuffer asset will get one. It is what keeps track of the image data's representation on the GPU. Before using the asset in a shader, p5.js will send new data to the GPU if necessary. For images, this happens when a user has manually updated the pixels of an image. This happens every frame for assets with data that may have changed each frame, such as a video or a p5.Graphics.

Textures corresponding to **p5.Framebuffer** objects are unique. Framebuffers are like graphics: they represent surfaces that can be drawn to. Unlike p5.Graphics, framebuffers live entirely on the GPU. If one uses a p5.Graphics as a texture in a shader, the data needs to be transferred to and from the CPU. This can often be a performance bottleneck. In contrast, when drawing to a framebuffer, you draw directly to its GPU texture. Because of this, no extra data transfer is necessary. WebGL mode tries to use p5.Framebuffers over p5.Graphics where possible for this reason.


## Future Goals

Currently, WebGL mode is functional for a variety of tasks, but many users and library makers want to extend it in new directions. We aim to create a set of building blocks for users and library makers from which they can craft extensions. A block can be considered "done" when it has an extensible API we can confidently commit to supporting. A major milestone for WebGL mode will be when we have a sufficient set of such blocks for an ecosystem of libraries. The main areas currently lacking in extension support are geometry and materials.

- **Extend p5.Geometry to support richer content.** One can create geometry, but many tasks a user might want to accomplish are not yet supported with a stable API. One might want to efficiently update geometry, which is necessary to support animated gltf models. One might want to group multiple materials in one object, if they are present in an imported model. One might want to add custom vertex attributes for a shader to work with. These tasks are currently unsupported.
```mermaid
---
title: p5.js WebGL Classes
---
classDiagram
class Base["p5.Renderer"] {
}
class P2D["p5.Renderer2D"] {
}
class WebGL["p5.RendererGL"] {
}
class Geometry["p5.Geometry"] {
}
class Shader["p5.Shader"] {
}
class Texture["p5.Texture"] {
}
class Framebuffer["p5.Framebuffer"] {
}
Base <|-- P2D
Base <|-- WebGL
WebGL "*" o-- "*" Geometry
WebGL "1" *-- "*" Shader
WebGL "1" *-- "*" Texture
WebGL "1" *-- "*" Framebuffer
```

The entry point to most WebGL code is through **p5.RendererGL**. Top-level p5.js functions are passed to the current renderer. Both 2D and WebGL modes have renderer classes that conform to a common `p5.Renderer` interface. Immediate mode and retained mode functions are split up into **p5.RendererGL.Immediate.js** and **p5.RendererGL.Retained.js**.
Review comment (Member): Syntax highlighting: **p5.RendererGL** should probably be `p5.RendererGL`.


Within the renderer, references to models are stored in the **retainedMode.geometry** map. Each value is an object storing the buffers of a **p5.Geometry**. When calling `model(yourGeometry)` for the first time, the renderer adds an entry in the map. It then stores references to the geometry's GPU resources there. If you draw a `p5.Geometry` to the main canvas and also to a WebGL `p5.Graphics`, it will have entries in two renderers.
Review comment (Member): Syntax highlighting: **retainedMode.geometry** should probably be `retainedMode.geometry`, and **p5.Geometry** should be `p5.Geometry`.

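As an illustrative sketch (the `bean` and `pg` names are arbitrary), drawing the same `p5.Geometry` through the main renderer and through a WebGL `p5.Graphics` gives it buffer entries in both renderers' maps:

```js
let bean;
let pg;

function setup() {
  createCanvas(200, 200, WEBGL);
  pg = createGraphics(100, 100, WEBGL); // a second, independent renderer
  bean = buildGeometry(() => torus(30, 10));
}

function draw() {
  background(200);
  lights();
  model(bean);    // entry in the main renderer's retainedMode.geometry map

  pg.background(50);
  pg.lights();
  pg.model(bean); // separate entry (and GPU buffers) in the p5.Graphics renderer
  image(pg, -50, -50);
}
```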

Each material is represented by a **p5.Shader.** You set the current shader in the renderer via the `shader(yourShader)` function. This class handles compiling shader source code and setting shader uniforms.
Review comment (Member): Syntax highlighting: **p5.Shader** should probably be `p5.Shader`.


When setting a shader uniform, if the uniform type is an image, then the renderer creates a `p5.Texture` for it. Each `p5.Image`, `p5.Graphics`, `p5.MediaElement`, or `p5.Framebuffer` asset will get one. It is what keeps track of the image data's representation on the GPU. Before using the asset in a shader, p5.js will send new data to the GPU if necessary. For images, this happens when a user has manually updated the pixels of an image. This happens every frame for assets with data that may have changed each frame, such as a video or a `p5.Graphics`.

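A hedged sketch of that texture bookkeeping (the asset paths are hypothetical): the image only needs to be uploaded once, while the video's texture is refreshed every frame because its pixels may have changed:

```js
let img;
let vid;

function preload() {
  img = loadImage('assets/photo.jpg'); // hypothetical path
}

function setup() {
  createCanvas(200, 200, WEBGL);
  vid = createVideo(['assets/clip.mp4']); // hypothetical path
  vid.loop();
  vid.hide();
}

function draw() {
  background(50);

  push();
  translate(-52, 0);
  texture(img);   // pixels upload once, then the cached p5.Texture is reused
  plane(100, 100);
  pop();

  push();
  translate(52, 0);
  texture(vid);   // re-uploaded every frame, since video frames keep changing
  plane(100, 100);
  pop();
}
```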
Textures corresponding to **p5.Framebuffer** objects are unique. Framebuffers are like graphics: they represent surfaces that can be drawn to. Unlike `p5.Graphics`, `p5.Framebuffer`s live entirely on the GPU. If one uses a `p5.Graphics` as a texture in a shader, the data needs to be transferred to and from the CPU. This can often be a performance bottleneck. In contrast, when drawing to a `p5.Framebuffer`, you draw directly to its GPU texture. Because of this, no extra data transfer is necessary. WebGL mode tries to use `p5.Framebuffer`s over `p5.Graphics` where possible for this reason.
Review comment (Member): Syntax highlighting: **p5.Framebuffer** should probably be `p5.Framebuffer`. Consistency: "p5.Framebuffers are like graphics: ..."

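A minimal framebuffer sketch (standard p5.js API assumed): everything drawn between `begin()` and `end()` lands directly in the framebuffer's GPU texture, which can then be sampled without any CPU round trip:

```js
let fbo;

function setup() {
  createCanvas(200, 200, WEBGL);
  fbo = createFramebuffer({ width: 100, height: 100 });
}

function draw() {
  // Render a spinning box straight into the framebuffer's texture.
  fbo.begin();
  background(32);
  lights();
  rotateY(frameCount * 0.02);
  box(40);
  fbo.end();

  // Use the framebuffer as a texture; no pixel data leaves the GPU.
  background(200);
  texture(fbo);
  plane(150, 150);
}
```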


## Future goals

Currently, WebGL mode is functional for a variety of tasks, but many users and library makers want to extend it in new directions. We aim to create a set of building blocks from which users and library makers can craft extensions. A block can be considered "done" when it has an extensible API we can confidently commit to supporting. A major milestone for WebGL mode will be when we have a sufficient set of such blocks for an ecosystem of libraries. The main areas currently lacking in extension support are geometry and materials.

- **Extend p5.Geometry to support richer content.** Creating geometry is possible, but many tasks a user might want to accomplish are not yet supported with a stable API. One might want to efficiently update geometry, which is necessary to support animated glTF models. One might want to group multiple materials in one object, if they are present in an imported model. One might want to add custom vertex attributes for a shader to work with. These tasks are currently unsupported.
- **Enable less brittle custom shaders.** To create a shader that integrates p5.js's lighting and materials system, a user is currently forced to create shaders from scratch. These shaders often copy and paste parts of default shaders. This may break between versions if internal naming or structure changes. To be less brittle, libraries should be able to import and reuse default pieces. This lets libraries reuse positioning logic or augment positioning logic but reuse shading logic. There is currently [an issue open for this task.](https://github.com/processing/p5.js/issues/6144)
- **Improve performance.** WebGL mode tries to strike a balance between features and performance. One method is to introduce APIs to tune output quality, like how `curveDetail()` allows faster but lower-quality curves. Line rendering is one of the common performance bottlenecks in its present state, and it could benefit from having lower-fidelity but higher-performance options (see the sketch after this list). Another method is to introduce new types of objects and rendering methods that are optimized for different usage patterns, like how `endShape(shouldClose, count)` now supports WebGL 2 instanced rendering for more efficient drawing of many shapes.

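As a small, hedged illustration of that quality/performance dial, lowering `curveDetail()` (the default is 20 segments per curve) makes curves coarser but cheaper to triangulate:

```js
function setup() {
  createCanvas(200, 200, WEBGL);
}

function draw() {
  background(250);
  noFill();
  stroke(0);
  strokeWeight(3);
  curveDetail(4); // fewer segments per curve: faster, but visibly coarser
  curve(-100, -100, -60, 60, 60, -60, 100, 100);
}
```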