
Image Light Feature - GSOC 2023 #6255

Merged: 56 commits into processing:main on Nov 3, 2023
Conversation

@AryanKoundal (Contributor) commented Jul 6, 2023

This PR is for the work under the GSOC 2023 project "Improving p5.js WebGL/3d functionality by adding support for image-based lighting in p5.js WebGL mode."

Project Description:

This project aims to add lighting to a 3D object using the surrounding environment as a single light source, a technique generally called Image-Based Lighting. In simple terms, one can quickly drop in an image from real life to use as surrounding light, rather than continuously tweaking the colors and positions of ambient, point, and other lights.

Changes:

This breaks the whole project down into steps. The boxes will be checked off as progress is made.

  • CubeMap Convolution
  • PBR and Irradiance Lighting
  • Pre-filtering environment map
  • Pre-computing the BRDF

Screenshots of the change:

PR Checklist

  • npm test passes
  • npm run build passes
  • npm run lint passes
  • Inline documentation is included/updated
  • Examples are included/updated
  • Function descriptions and documentation are included/updated

The project is completed and here are some screenshots and videos demonstrating the work.

Image of Example 1
example 1

Video of Example 1
https://github.com/processing/p5.js/assets/77334487/44b30c77-33c1-41d0-ada5-282424978832

Image of Example 2
example 2

Video of Example 2
https://github.com/processing/p5.js/assets/77334487/a0a6b3f9-b25b-451f-961e-b2970cb9e907

@davepagurek (Contributor) left a comment:
Looking good so far!


const float PI = 3.141;

vec2 normalToEquirectangular( vec3 v){
Contributor:

Do we use this function?

Contributor Author:

Nope, I actually rewrote the function on line 22. The function N2E is the one being used for now. I think I should remove the function on line 10 and rename the function on line 22.

// Turn it into -theta, but in the 0-2PI range
theta = 2.0 * PI - theta;
}
theta = theta / (2.0 * 3.14159);
Contributor:

Maybe where we use 3.14159 right now we can use the const float PI we defined earlier, and add more decimals to that?

Contributor Author:

Sure, I'll clean up this file. It has some extra code as well.

@@ -446,6 +448,17 @@ p5.RendererGL = class RendererGL extends p5.Renderer {
this.spotLightAngle = [];
this.spotLightConc = [];

// repetition
this.textures = new Map();
Contributor:

I think we don't need to declare this any more now that we have blurryTextures below

Contributor Author:

Sure, I will remove this. I was going to ask you about it; that's the reason I marked it as repetition.

@@ -494,6 +494,33 @@ p5.prototype.pointLight = function(v1, v2, v3, x, y, z) {
return this;
};

p5.prototype.imageLight = function(img){
// ! if the graphic is always true, do we really need this section?
Contributor:

If we're going to use img as a light source, it means we'll need the blurry version of its texture in order to pass into the shader. If this.blurryTextures(img) is defined, then we've already made the blurry texture in a previous frame and don't need to do anything more here. If not though, that's when we need to run the commented out code below to make it, and then after line 512 we'll need to store it with this.blurryTextures.set(img, newGraphic)

Contributor:

We could alternatively do this if statement right before we do shader.setUniform('equiRectangularTextures', this.blurryTextures.get(this.activeImageLight)); later on. The idea is that we need that texture to exist by that point
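The lazy caching pattern being discussed (generate the blurry texture once per image, store it in a Map, and reuse it on later frames) can be sketched in plain JavaScript. The names makeBlurryTextureCache and blur below are illustrative stand-ins, not the actual p5.js internals:

```javascript
// Sketch of the caching pattern discussed above. `blur` stands in for the
// real shader pass that renders the blurred version onto a graphic.
function makeBlurryTextureCache(blur) {
  const blurryTextures = new Map(); // key: source image, value: blurred result

  return function getBlurryTexture(img) {
    // Only run the (expensive) blur the first time we see this image.
    if (!blurryTextures.has(img)) {
      blurryTextures.set(img, blur(img));
    }
    return blurryTextures.get(img);
  };
}

// Usage: the blur runs once per unique input; later calls hit the cache.
let blurCalls = 0;
const getBlurry = makeBlurryTextureCache(img => {
  blurCalls++;
  return { source: img, blurred: true };
});

const imgA = { name: 'envmap' };
const first = getBlurry(imgA);
const second = getBlurry(imgA); // same cached object, blur ran once
```

Keying the Map on the image object itself means the expensive shader pass runs only the first time a given image is used as a light source, no matter which frame first needs it.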

// used in imageLight
// create a new texture from the img input
getBlurryTexture(input){
const blurrytex = new p5.Texture(this, input);
Contributor:

I think the main difference between the blurry textures we store here and the textures already used by RendererGL is that for normal textures, we just make a new p5.Texture directly from the input. For this though, the input is non-blurry, and we want to store the blurry version, so somewhere (maybe in here actually?) we should run the shader to generate the blurry image onto a graphic and store that instead.

if (this.activeImageLight) {
// this.activeImageLight has image as a key
// look up the texture from the blurryTexture map
shader.setUniform('equiRectangularTextures', this.blurryTextures.get(this.activeImageLight));
Contributor:

If we decide to make getBlurryTexture compute and store the blurry texture if it doesn't yet exist, instead of doing blurryTextures.get() here, we could do this.getBlurryTexture(this.activeImageLight)

// used in imageLight
// create a new texture from the img input
getBlurryTexture(input){
const blurrytex = new p5.Texture(this, input);
Member:

nit: blurryTexture or inputBlurryTexture

@AryanKoundal (Contributor Author) commented Aug 11, 2023:

We actually refactored this function completely in which this line was removed.

const float specularFactor = 2.0;
const float diffuseFactor = 0.73;
const float PI = 3.141;
Member:

Can this be used from a common constants file? It would also be better to have a value closer to the real PI, which rounds to 3.142 rather than 3.141.

Contributor Author:

I don't think we have a file for constants, because constants are mostly taken from the JavaScript Math object, which isn't available in shader languages as far as I know.

We can do #define PI 3.141592 at the top of this file.
Another possible option is to do #define PI 3.141592 in the \webgl\shaders\webgl2Compatibility.glsl file.

I am not sure which one is better.

Contributor:

For now since we only use this here and we don't use pi anywhere else in a shader yet (to my surprise, haha) I think we can keep it in this file for now, and if we find we need to use it in another file too (maybe we need it for the specular shader?) we can factor it out into a new shader constants file at that point

@@ -67,6 +75,44 @@ LightResult _light(vec3 viewDirection, vec3 normal, vec3 lightVector) {
return lr;
}

float map(float value, float min1, float max1, float min2, float max2) {
Member:

nit: A clearer name for this if possible

@AryanKoundal (Contributor Author) commented Aug 11, 2023:

The map(value, min1, max1, min2, max2) function converts value from the range [min1, max1] to the range [min2, max2].

So should we make it something like convertRange() or mapRange() maybe?

Contributor:

This function mirrors https://p5js.org/reference/#/p5/map so maybe we could link to that reference page + add a comment description of what it does?
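Since the shader function mirrors p5's map(), the same linear remap can be written in plain JavaScript (mapRange is a hypothetical name, matching the rename discussed above; p5's map() also offers an optional clamping flag, omitted here):

```javascript
// Linearly remaps `value` from the range [min1, max1] into [min2, max2],
// mirroring https://p5js.org/reference/#/p5/map (without its clamp option).
function mapRange(value, min1, max1, min2, max2) {
  return min2 + (value - min1) * (max2 - min2) / (max1 - min1);
}

console.log(mapRange(0.5, 0, 1, 0, 10));              // 5
console.log(mapRange(Math.PI, 0, 2 * Math.PI, 0, 1)); // 0.5
```

The second call is exactly the normalization the shader performs when it divides an angle in [0, 2π] down to a [0, 1] texture coordinate.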

theta = acos( normal.x / sin(phi) );
}

vec2 newTexCoor = vec2(
Member:

Indentation is off

Contributor Author:

Thanks, corrected it :)

#endif
// this is to make the darker sections more dark
// png and jpg usually flatten the brightness so it is to reverse that
return pow(outColor.xyz, 10.0);
Contributor:

Looks like tests are failing because pow needs both parameters to be the same type :') So maybe this needs to be:

Suggested change
return pow(outColor.xyz, 10.0);
return pow(outColor.xyz, vec3(10.0));

@@ -494,6 +494,12 @@ p5.prototype.pointLight = function(v1, v2, v3, x, y, z) {
return this;
};

p5.prototype.imageLight = function(img){
// setting activeImageLight so that the setUniform is executed
Contributor:

Maybe we can be a bit more specific here and say that this is what _setImageLightUniforms looks at to see if it needs to send uniforms to the fill shader

@@ -468,6 +473,17 @@ p5.RendererGL = class RendererGL extends p5.Renderer {
this.spotLightAngle = [];
this.spotLightConc = [];

// Null if no image light is set, otherwise, it is set to a p5.Image
Contributor:

Maybe we can rephrase this into sentences so it's a bit easier to read? Also, for each map, we should say what the keys and values are. E.g. for this one, it maps a p5.Image used by imageLight() to a p5.Graphics containing the combined light it sends to each direction.

// this is to lookup image from blurrytexture Map rather from a
// texture map
this.diffusedTextures = new Map();
// map to store the created textures to prevent mis calculation
Contributor:

This one maps the input p5.Image to a p5.MipmapTexture. Worth mentioning that we use a mipmap texture because it can contain light from multiple roughness levels.

@@ -0,0 +1,34 @@
// add precision
Contributor:

Did we want to copy the precision from the fragment shader to here?

@@ -0,0 +1,82 @@
precision mediump float;
Contributor:

oh actually we probably want this to be highp; this basically gets ignored by the webgl driver on desktop, but on mobile it's actually used, and in my experience any time we have normals, the precision makes a big difference on the visual fidelity

Suggested change
precision mediump float;
precision highp float;

const float PI = 3.14159265359;

// not needed
vec2 normalToEquirectangular( vec3 v){
Contributor:

We can remove this function since we aren't using it

return min2 + (value - min1) * (max2 - min2) / (max1 - min1);
}

vec2 mapTextureToNormal( vec3 normal ){
Contributor:

oh should we be using your implementation of nTOE from the other fragment shaders instead of this? If I recall correctly, this is the older implementation that has a bug where it only uses the 0-PI range, which you fixed in your updated nTOE.

I think using nTOE would also mean we can remove the map function above since this is all that uses it.

Contributor Author:

Done, I'm testing after the changes. If nToE works, which it very likely will, I'll remove the mapTextureToNormal one.

Contributor Author:

Done, renamed nToE to mapTextureToNormal so the name is easier to understand.

@AryanKoundal AryanKoundal marked this pull request as ready for review October 27, 2023 12:21
…Light examples to not run for the npm run test, those examples have been manually verified to be working
@davepagurek (Contributor) left a comment:

The documentation is looking great! Just some very minor punctuation things for that.

I did a bit of debugging of the nToE function. It looks like it has to be a bit different from the other files to work properly, but it seems like there's a value that makes it work.

I also left some suggestions about the power-of-10 thing for diffuse light, let me know what you think!

*
* The image light stimulates light from all the directions.
* This is done by using the image as a texture for a relatively
* very huge light in a form of a sphere. This sphere contains
Contributor:

nit: "relatively very huge light in a form of a sphere" could maybe be "an infinitely large sphere light"

Contributor Author:

done

* It will have different effect for varying shininess of the
* object in the drawing.
* Under the hood it is mainly doing 2 types of calculations,
* first one is creating an irradiance map from the
Contributor:

Suggested change
* first one is creating an irradiance map from the
* the first one is creating an irradiance map from the

Contributor Author:

done

* Under the hood it is mainly doing 2 types of calculations,
* first one is creating an irradiance map from the
* environment map of the input image.
* Second one is managing reflections based on the shininess
Contributor:

Suggested change
* Second one is managing reflections based on the shininess
* The second one is managing reflections based on the shininess

Contributor Author:

done

*
* Note: The image's diffuse light will be affected by fill()
* and the specular reflections will be affected by specularMaterial()
* and shininess()
Contributor:

Suggested change
* and shininess()
* and shininess().

Contributor Author:

done

* imageLight(img);
* // This will use specular once we add it
* specularMaterial(20);
* // shininess(slider.value());
Contributor:

We can take out the commented out lines in this sketch I think

Contributor Author:

done

theta = theta / (2.0 * 3.14159);
phi = phi / 3.14159 ;

vec2 angles = vec2( phi, theta );
Contributor:

Looks like something might be up still:
image

I noticed that we had a note for diffused light to use the word normal instead of R. When I use that, it looks like this:
image

I tried using a two-toned background to check against and it seems like it's kind of flipping it:
image

That looks a bit more correct, but it looks like the green is on the side? I think that might be because in other files, we used theta and phi differently, but here, we want to return vec2(theta, phi) instead of the other way around (I think that makes sense that the x axis is theta here since that's the one whose range is 2*PI. That's different than in the other shaders, which is a little odd, but at least we tested those independently and they seem fine.) That looks like this:

image

so that still seems 90 degrees off, but in a different axis this time. I tried cycling -90 degrees in the theta axis. This seems to look right, and that's using vec2 angles = vec2( fract(theta + 0.75), 1.0 - phi );
image

With just diffuse:
image

With specular:
image

Suggested change
vec2 angles = vec2( phi, theta );
vec2 angles = vec2( fract(theta + 0.75), 1.0 - phi );

@AryanKoundal (Contributor Author) commented Oct 28, 2023:

Changed the suggested line, but do I have to change the line

  // use worldNormal instead of R
  vec2 newTexCoor = mapTextureToNormal( R );

Did you use worldNormal or R in this function?

Contributor:

Did you use worldNormal or R in this function?

I think that one should be worldNormal.

I double checked the suggested code and I think maybe I was testing without that by accident though -- I think when using the world normal, to get the example with the stripes to work, it would be this:

vec2 angles = vec2( fract(theta + 0.25), 1.0 - phi );

(Basically, 0.25 instead of 0.75, which is like rotating by pi.)
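The coordinate convention being debugged here can be sanity-checked outside the shader. This JavaScript sketch is an illustration under assumed conventions (phi measured from +Y, theta around the Y axis via atan2), not the exact shader code, but it shows the effect of the fract(theta + 0.25) offset and the 1.0 - phi flip:

```javascript
// Maps a unit direction vector to equirectangular UV coordinates in [0, 1),
// including the fract(theta + 0.25) / (1.0 - phi) adjustment from the review.
function normalToEquirectUV(n) {
  const phi = Math.acos(n.y) / Math.PI;             // 0 at +Y, 1 at -Y
  let theta = Math.atan2(n.z, n.x) / (2 * Math.PI); // fraction of a full turn
  if (theta < 0) theta += 1;                        // wrap into [0, 1)
  const fract = x => x - Math.floor(x);
  return { u: fract(theta + 0.25), v: 1.0 - phi };
}

// +Y points at the top edge of the texture because of the 1.0 - phi flip:
console.log(normalToEquirectUV({ x: 0, y: 1, z: 0 }).v); // 1
// +X lands a quarter turn in because of the 0.25 offset:
console.log(normalToEquirectUV({ x: 1, y: 0, z: 0 }).u); // 0.25
```

Keeping the remap in one small pure function like this makes it easy to test the rotation/flip constants in isolation before wiring them back into the fragment shader.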

// return texture.xyz;
// this is to make the darker sections more dark
// png and jpg usually flatten the brightness so it is to reverse that
return pow(texture.xyz, vec3(10.0));
Contributor:

I tried changing imageLight() with lights() in a sketch to see if they could be roughly equivalent. I think 10 might be a bit too aggressive for the diffuse light unfortunately. I tried playing around with some other curves, comparing screenshots of imageLight(img) against lights() for each lighting call:

  • pow(texture.xyz, vec3(10.0));
  • texture.xyz;
  • smoothstep(vec3(0.), vec3(0.8), texture.xyz);

So although it's not physically accurate or anything, maybe doing this will help make it easier for people to switch from lights() to imageLight()? This maps texture values from 0-0.8 to 0-1 along a smooth curve:

Suggested change
return pow(texture.xyz, vec3(10.0));
return smoothstep(vec3(0.0), vec3(0.8), texture.xyz);

Contributor Author:

Done for calculateImageDiffuse; should I do this for calculateImageSpecular as well?

Contributor:

I think specular looks ok how it is, the inconsistency is fine as long as it looks ok.
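The difference between the two tone curves discussed above explains the screenshots: pow(x, 10.0) crushes everything below white toward black, while smoothstep(0.0, 0.8, x) lifts mid-tones. A plain JavaScript version of GLSL's smoothstep (the standard Hermite definition) makes the comparison concrete:

```javascript
// GLSL-style smoothstep: clamp to [0, 1], then apply 3t^2 - 2t^3.
function smoothstep(edge0, edge1, x) {
  const t = Math.min(Math.max((x - edge0) / (edge1 - edge0), 0), 1);
  return t * t * (3 - 2 * t);
}

// A mid-gray texture value of 0.4:
console.log(Math.pow(0.4, 10));        // ~0.0001, nearly black
console.log(smoothstep(0, 0.8, 0.4));  // 0.5, mid-tones preserved
// Anything at or above the 0.8 edge maps to full brightness:
console.log(smoothstep(0, 0.8, 0.9));  // 1
```

Mapping 0 to 0.8 onto the full 0 to 1 range along this smooth curve is what lets imageLight() roughly match the brightness of lights(), even though it isn't physically accurate.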

@davepagurek (Contributor) left a comment:

The examples look great!

/**
* Creates an image light with the given image.
*
* The image light stimulates light from all the directions.
Contributor:

Suggested change
* The image light stimulates light from all the directions.
* The image light simulates light from all the directions.

Contributor Author:

done


@davepagurek (Contributor) left a comment:

One last thing, do you think for the example image we use, we should use an actual sphere map image that wraps correctly so users know what kind of input this is intended for? e.g. something from https://polyhaven.com/hdris since the license for these is free and requires no attribution?

vec3 lightDirection = normalize( vViewPosition - worldCameraPosition );
vec3 R = reflect(lightDirection, worldNormal);
// use worldNormal instead of R
vec2 newTexCoor = mapTextureToNormal( R );
Contributor:

I think this still needs to be changed to worldNormal (and I believe that means we can remove lines 113 and 114 here too.)

Contributor Author:

Done, I have commented out the lines to test it. If the tests pass then I'll completely remove this section.

@AryanKoundal (Contributor Author), replying to the comment above:

> One last thing, do you think for the example image we use, we should use an actual sphere map image that wraps correctly so users know what kind of input this is intended for? e.g. something from https://polyhaven.com/hdris since the license for these is free and requires no attribution?

I agree with you. Those sphere map images would give a better understanding of the feature. But would that also create confusion among users? They might feel that they need a sphere map image for the background they want to use (in reality, even a simple image works fine). What do you think?

@davepagurek (Contributor):

We talked about this on our call, but just adding it again here for anyone who might be reading: we've decided to use a sphere map for one example and a regular image for another.

@davepagurek (Contributor):

Maybe we can scale down the sphere map a bit? Right now it's 8000px wide, we probably don't need quite so many pixels 😅

@davepagurek davepagurek merged commit cfe48f5 into processing:main Nov 3, 2023
2 checks passed
4 participants