Material Compiler V2 #443
Specular Anti-Aliasing and Filtering: The missing pieces in Nabla

The cause of Specular Aliasing

Specular aliasing arises from the shape and direction of the peak of the BxDF.
Ray Distribution Representations

We need a way to represent the distribution of rays we're tracing. So far I know of only two.

Ray-Differentials and how they suck

The basics of how they work

Ray Differentials start with the derivative of the ray directions between neighbouring samples (if using a sample sequence). The ray-origin derivative is obviously null when using a pin-hole camera, as there is a singular focal point for all rays. This is how you perform the following operations:

Approximate a pixel's footprint on an intersected planar Surface

You can simply construct a frustum based on the positional and directional derivatives and intersect this with the surface.

Turn an outgoing distribution into an incoming one

Simply negate the directional derivatives; rays will still travel through the same parametrized origins.

Correct for Curvature

We need a function to express a ray in the shading normal's tangent space; we then use the Chain Rule to derive first order derivatives in that tangent space.

Variations in BxDF Parameters

Find out the footprint in tangent and UV space, then filter textures appropriately. So far I don't know of any method to make the viewing direction play a role, but this would only be needed for Fresnel, which rarely ever varies spatially.

Convolution

The only hope of achieving anything here is to figure out transformations of parameters (usually just the roughness). Then when importance sampling, compute the derivative to use as the new directional derivative with an approximation, but in a way that does not cancel out opposing directional derivatives. This adds the spread thanks to the importance sampling's roughness and the original incoming distribution, it's nice because:
This method also sucks because it's completely dependent on your sample generator:
Why they suck: They cannot deal with fuzzy distributions

The issue with Ray Differentials is that they prescribe a discrete, solid (binary) frustum shape that the rays occupy. The problem with that is that if you take a bundle of rays within a frustum (which can be very thin) and reflect or refract it with a rough BxDF, they will spread over a large range of angles, away from a perfectly reflected frustum, and not every angle will have the same intensity! A Ray Differential assumes all rays within the frustum have the same importance/intensity.

In the end what really happens in renderers is that rather than actually working out the differentials analytically by differentiating the complex materials, the ray gets some metadata to keep track of finite differences in screenspace X and Y:
In Mitsuba and PBRT you essentially get 2 "ghost" rays which are not taken into account when traversing the BVH but are:
This is absolutely horrible because:
One way to combat both issues above is to also consider a ... If you take the alternative and account for all aliasing cases separately, we make our importance sampling functions compute their derivatives in ... The ideal would be to fit a frustum to the outgoing direction Projected BxDF value distribution, but the only way I can see how to do that is to use Linearly Transformed Cosines and fit the frustum to that.

Why Covariance Matrices rock

Covariance Tracing deals with 4D (if you ignore time) distributions (spreads) of ray origins and directions; they're modelled as a 4x4 covariance matrix, which means fuzzy Gaussian distributions. Every time you do something with the ray distribution, like:
there is a 4x4 matrix to multiply the original distribution with. It's actually less than 4x4, because covariance is symmetrical, so we only need to concern ourselves with the upper (UL) triangle of the matrix. Then you just project when you need to get the 2x2 covariance matrix of the ray distribution as it intersects a surface's tangent plane, which will be w.r.t. the surface's UV coordinates. This gives you an oriented Gaussian filter elliptical footprint, which is exactly the thing you need for EWA texture minification, which is considered superior to Anisotropic Filtering (which we'll still use). For Anisotropic Filtering you can just extract the ellipse's minor and major axes multiplied by some fixed constant (however many standard deviations you want to cover).

Why they are stupidly good

Recall our original convolution over a projected disk: we can express it as a convolution over a plane perpendicular to the importance sampled outgoing "hero" direction. Now the absolutely cool thing that Covariance Tracing does is that it expresses the above in terms of a Fourier Transform. At this point we should note that all the equations convolved using the incoming direction variable. Now the funny thing is that this formulation of $\tilde{f}_v$ actually tells us how the light reflected from a constant $\omega_{\text{o}}$ varies with $\omega_{\text{i}}$, which is not what we're looking for. But since BRDFs are supposed to be symmetric we can swap the two. This is where the strategic choice of using covariance as the approximation of ... This means that in order to find a ... but because the covariance matrix of the BRDF is not invertible in 4D a slightly different formulation is used.

Roughness Scaling for Geometrical Specular AA: Directions not clear

The spreading of our Covariance Matrix won't:
Therefore we'd need to deduce how to bump up the anisotropic roughness.

Sums of BxDFs: Covariance Tracing's Achilles Heel

Obviously YOU COMPUTE THIS SEPARATELY FOR THE REFLECTION AND REFRACTION cases; you cannot put two vastly separated unimodal distributions (which together form a bimodal one) into a single unimodal distribution (which is what a Gaussian given by a covariance matrix is) and hope to get anything remotely useful. This is obviously a problem for your standard dielectric, which has two peaks in the plot of outgoing directions, one for reflection and one for refraction. So since we cannot approximate multi-modal distributions with a Gaussian derived from a single Hessian, we have a problem defining a single Hessian to use for our ray distribution transformations. We could either:
Whatever the case, it seems like Durand 2005 is a much needed read first.

Unlikely Conclusion: Another way to refit BxDF parameters

Since both Covariance Tracing and an alternative version of Ray Differentials would both use the Jacobian to estimate ...
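Going back to the Anisotropic Filtering use of the projected 2x2 covariance matrix mentioned earlier, here is a minimal sketch (my own code, not Nabla's) of extracting the ellipse axes from the UV-space covariance; `kStdDevs` plays the role of the "fixed constant" and is an assumed tuning knob.

```cpp
#include <cmath>

// Turn the 2x2 UV-space covariance of the ray-distribution footprint into the
// major/minor axes you'd feed to EWA or hardware anisotropic filtering.
struct Ellipse { float majorX, majorY, majorLen, minorLen; };

Ellipse footprintAxes(float cxx, float cxy, float cyy, float kStdDevs = 2.f)
{
    // Eigen-decomposition of the symmetric matrix [[cxx, cxy], [cxy, cyy]].
    const float mean = 0.5f * (cxx + cyy);
    const float det  = cxx * cyy - cxy * cxy;
    const float disc = std::sqrt(std::fmax(mean * mean - det, 0.f));
    const float l0 = mean + disc; // larger eigenvalue  = variance along the major axis
    const float l1 = mean - disc; // smaller eigenvalue = variance along the minor axis
    // Eigenvector for l0; falls back to a coordinate axis when the matrix is (near) diagonal.
    float ex, ey;
    if (std::fabs(cxy) > 1e-12f) { ex = l0 - cyy; ey = cxy; }
    else if (cxx >= cyy)         { ex = 1.f;      ey = 0.f; }
    else                         { ex = 0.f;      ey = 1.f; }
    const float inv = 1.f / std::sqrt(ex * ex + ey * ey);
    // Axis lengths are standard deviations (sqrt of variance) scaled by the constant.
    return { ex * inv, ey * inv,
             kStdDevs * std::sqrt(std::fmax(l0, 0.f)),
             kStdDevs * std::sqrt(std::fmax(l1, 0.f)) };
}
```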
Anti-aliasing of Derivative Maps

As one knows, mip-maps of bump-maps, derivative-maps and normal-maps tend to average the values in the footprint to a single value. This produces a smooth surface under minification, which is not correct. One needs to filter the original NDFs. Because the input Derivative Map's X and Y channel histograms do not need to be equal (think metal grating), the filtered NDF can be anisotropic even if every perturbed normal had an isotropic NDF applied to it. This is yet another argument why all NDFs should be implemented as anisotropic and not optimized for the isotropic case. Therefore filtering techniques that spit out covariant roughnesses are best (so that the NDF is not only anisotropic, but the anisotropy can also be rotated). So there are ... Because of all of the above, the BxDF (or at least the NDF) intended for use with a Derivative Map needs to be known. It remains a challenge to formulate our "target" function to minimize, but one should hope it's some sort of error metric between the average BxDF value over the pixel footprint's texels and the BxDF of the filtered perturbed normal and roughness. There is a choice of target functions, but:
We could of course define our spatially varying distribution beautifully in terms of a convolution, but texels in an image are sampled, and therefore Dirac Impulse Trains, so by the sampling property the above turns into a sum. Optimizing most of the following target functions is quite easy: there will always be an integral of Square Error in Solid Angle measure where the parameters we're optimizing are arguments to some distribution. This can be optimized easily provided we can differentiate the Square Error Integral w.r.t. those parameters. The fact that it's an integral doesn't matter; we can randomly initialize the parameters and apply Stochastic Descent (basically do what Mitsuba 2 and 3 do).

NDF Matching

The idea behind this one is simple: our "Ground Truth" NDF is the weighted sum of all per-texel NDFs of varying roughness rotated by the per-texel normals. Then with the given constraints we try to make our model fit the ground truth to the best of our ability. NOTE: If we start to use Schussler et al. 2017, then ...

VNDF Matching

Another insight is that maybe we shouldn't care about the normals we're not going to see anyway. One would think to attempt to generalize by making ... So we are left with this:

Whole BxDF Matching

Finally one might approach the problem from the angle of "who cares about the microsurface profile, I care about appearance". Obviously trying to apply this to a mixture or blend of BxDFs would be insanity requiring proper differentiable rendering (due to the absolute explosion of parameters to optimize). But we could make the "Rough Plastic" a special case and allow treating that blend as a singular BxDF, or subexpressions in the BxDF tree which use the same roughness and derivative maps.

Stochastic Descent Optimization Implementation

We re-use our GPU accelerated polyphase filter, which preloads a neighbourhood of texels. There are some obvious constraints:
We then repeat the following process a few times:
Numerically Stable MIS

Motivation

When integrating the light transport equation at a path vertex, we can use multiple importance sampling techniques, each technique with its own PDF.
Note that we omit ...

Each Technique has different strengths and exhibits more or less variance in different places

As per the famous image in the Veach thesis:
Other techniques such as path guiding may do a great job of approximating the Incoming Radiance, but it's only an approximation and can therefore be inefficient at "hard to sample" light distributions (usually sparse things, or accounting for the windowing by the BxDF). Ideally we'd like to combine them.

However the codomains of the directions produced by the Techniques are not disjoint

This means that accounting for the overlap is necessary to be able to use both techniques. Simply averaging the results will not decrease Variance; you want to weight them so that a technique contributes more to the final result when it is "better" than the others. And ideally weigh them smoothly, to have transitions without visual artefacts.

The Idea: Blending contributions with outgoing direction dependent weights

Basically we weigh each contribution. There are various ways of obtaining and defining these weights.

MIS: Make the weights dependent on the PDF

The intuition is that the higher the PDF of generating an outgoing direction with a given Technique, the "better" it should be compared to the others.

The Power Heuristic

The definition is simple:

$$w_{i}(\omega) = \frac{p_{i}(\omega)^{\beta}}{\sum_{j} p_{j}(\omega)^{\beta}}$$

where usually $\beta = 2$. However this must be made numerically stable!

Where numerical instability comes from

For now we're ignoring whether we could compute ... Because we operate in the real world, and all our computations happen using IEEE754, we cannot:
When importance sampling tight distributions, the PDFs are very large (infinite for delta distributions), which makes the above definition of the weight overflow.

The Solution: Reformulation

The most important thing to note is that for every sample the weight only depends on ratios of the PDFs, never on their absolute magnitudes, so it can be computed from quantities that stay finite.
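A minimal sketch of one such reformulation (my code, not Nabla's): divide through by the PDF of the technique that generated the sample, so only a ratio ever gets raised to the power, which stays finite even when that PDF is huge or infinite.

```cpp
#include <cmath>

// Power heuristic (beta = 2) for two techniques, computed from the PDF ratio.
// `pdfGen` is the PDF of the technique that actually generated the sample,
// `pdfOther` is the competing technique evaluated at the same direction.
float powerHeuristicStable(float pdfGen, float pdfOther)
{
    // w = pdfGen^2 / (pdfGen^2 + pdfOther^2) = 1 / (1 + (pdfOther/pdfGen)^2)
    // For a delta/huge pdfGen the ratio tends to 0 and the weight to 1,
    // instead of inf/inf = NaN with the naive formula.
    const float ratio = pdfOther / pdfGen;
    const float r2 = ratio * ratio;
    return std::isfinite(r2) ? 1.f / (1.f + r2) : 0.f;
}
```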
Importance Sampling BxDF Mixtures

Suppose we are given the following "mixture" of BxDFs, where each BxDF comes with its own weight. Let's assume that each component can be importance sampled on its own. Now let us think about "how do we derive a sampling function for the whole mixture?"
In reality we only have two options for combined sampling

A-priori O(n): Stochastically pick between sampling strategies

The idea is simple: we stochastically choose which generator will produce the sample. We choose the generator BEFORE sampling. Then our combined sampler PDF is the choice-probability-weighted sum of the individual PDFs (a minimal sketch follows after this list). Our choice then becomes quite limited, as no formulation of ... But this becomes more and more acceptable as ... Either:

Split-sum approximation of contribution of each BxDF

We essentially attempt this ...

Or: Eliminate any ...
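To make the a-priori option above concrete, here is a hedged sketch (my own interface names, not Nabla's): pick a component proportionally to some precomputed weight, sample it, then evaluate the combined PDF as the weight-averaged sum of every component's PDF at the chosen direction.

```cpp
#include <cstddef>
#include <random>
#include <vector>

// Hypothetical per-component interface; Nabla's real BxDF interface differs.
struct BxdfComponent {
    float weight;                                                  // normalized choice probability
    float (*samplePdfAndDir)(float u1, float u2, float outDir[3]); // returns its own pdf
    float (*pdf)(const float dir[3]);                              // this component's pdf at any dir
};

// A-priori O(n) mixture sampling: choose the generator BEFORE sampling.
// Assumes `comps` is non-empty and the weights sum to 1.
float sampleMixture(const std::vector<BxdfComponent>& comps, std::mt19937& rng, float outDir[3])
{
    std::uniform_real_distribution<float> uni(0.f, 1.f);
    // 1) stochastically pick a component proportionally to its weight
    float xi = uni(rng), cdf = 0.f;
    std::size_t chosen = comps.size() - 1;
    for (std::size_t i = 0; i < comps.size(); ++i) {
        cdf += comps[i].weight;
        if (xi < cdf) { chosen = i; break; }
    }
    // 2) sample a direction with the chosen component's generator
    comps[chosen].samplePdfAndDir(uni(rng), uni(rng), outDir);
    // 3) the combined PDF is the weighted sum of ALL component PDFs at that direction
    float pdf = 0.f;
    for (const BxdfComponent& c : comps)
        pdf += c.weight * c.pdf(outDir);
    return pdf; // use f(outDir)/pdf as the throughput
}
```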
Arbitrary Output Value Extraction (Albedo, Shading Normals)

Generally speaking, for diffuse materials you simply output the shading normal and albedo. But for specular reflectors and transmitters you really want to be able to "see through" and see the albedo and shading normals of the reflected or refracted geometry.

AOV Transport Equation

We basically pretend that AOVs are "special" light, emitted at every path vertex.

Defining the BATF for BxDFs

Single BxDFs

Diffuse

Regardless of the roughness, ...

Cook Torrance

There is no difference between a transmitter or a reflector. We want to make ... And we want to compute ...
Either becomes a simple function of ...

Subsurface Scattering

Generally speaking, ...

BxDF Weighted Sums

We define our ... However, we ensure the weights sum up to 1.

Special Treatment for Absorption Weights?

There is an open question of whether we want to include things like the diffuse ... On the one hand there is the option of making the contribution of each BxDF proportional to its weight, on the other to make it proportional to the contribution measured in terms of luminosity.

Dealing with RGB

This depends on the AOV:
When you weight by a scalar, it should be done after all the transport is accounted for. You don't want a split sum approximation of R G B screening at every vertex, just the emissive one.

Integrating the AOV Transport

The AOV signal is really low frequency; it's similar to rendering scenes with a lot of very diffuse ambient light, notice there's no directional component in the "emitted" light. This means good BxDF importance sampling is the perfect approach. However we are not at liberty to run a separate path tracing simulation for this. Since the spp required to get a converged AOV image are much lower than for the real render, and even slightly noisy AOVs are usable, we get our integral from left-overs in the original Path Tracing computation. We simply reuse the same samples as for the Path Tracing! So we simply divide our overall accumulated AOV by the number of samples.

Special Handling of Cook-Torrance

Whenever the sample was generated by a particular Cook Torrance BxDF generator ... This is because ...

Bonus Round: The Velocity AOV

Velocity probably shouldn't follow the regular AOV transport, because a non-zero time derivative of any Light Transport Equation term at any path vertex induces screenspace motion. We should probably find a way to hack Covariance Tracing or Differentiable Rendering into providing us the real screenspace motion derivatives.
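Going back to integrating the AOV transport by reusing path-tracing samples: a heavily hedged sketch of what the accumulation could look like. The BATF (the per-BxDF attenuation of the "special" AOV light) is abstracted into a single `aovThroughput` value and every name here is illustrative, not Nabla API.

```cpp
#include <array>
#include <cstdint>

struct AovIntegrator {
    std::array<float, 3> albedoSum{0.f, 0.f, 0.f};
    std::array<float, 3> normalSum{0.f, 0.f, 0.f};
    std::uint32_t sampleCount = 0;

    // Called once per path vertex, reusing the direction the path tracer already sampled.
    // `aovThroughput` is the product of BATFs along the path so far (starts at 1).
    void addVertex(const std::array<float, 3>& albedo, const std::array<float, 3>& shadingNormal,
                   float aovThroughput)
    {
        for (int c = 0; c < 3; ++c) {
            albedoSum[c] += aovThroughput * albedo[c]; // AOV "emission" at the vertex
            normalSum[c] += aovThroughput * shadingNormal[c];
        }
    }
    void endSample() { ++sampleCount; }

    // Resolve by dividing the overall accumulator by the number of samples.
    std::array<float, 3> resolveAlbedo() const
    {
        const float inv = sampleCount ? 1.f / float(sampleCount) : 0.f;
        return {albedoSum[0] * inv, albedoSum[1] * inv, albedoSum[2] * inv};
    }
};
```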
How the old compiler worked and new compiler will work

Frontend

A frontend parses a Material Representation specific to an Interchange Format to produce a format-independent Intermediate Representation/Language of the Material. For now only the following frontends will be developed/maintained:
Other planned front-ends which have no ETA:
Abstract Syntax Forest

A material is represented by an expression tree; it can look like this:

graph TD
F[Front Face Root Node] --> |Root isn't really an extra node we point straight to Sum|A{Sum}
A --> |attenuate|w_0{Metallic Fresnel}
w_0 --> Conductor(Conductor)
A --> |attenuate|w_1{Albedo}
w_1 --> DiffTrans(Diffuse Transmitter)
A --> |attenuate|w_2{Non-PBR Transmittance/Reflectance}
w_2 --> GlassF(Glass Eta=k)
B[Back Face Root Node] --> |Root isn't really an extra node we point straight to Sum|C{Sum}
C --> w_1
C --> |no extra weights| GlassB(Glass Eta=1/k)
This in reality is an Abstract Syntax Forest, not an Abstract Syntax Tree (and in V2 it will be an Abstract Syntax Directed Acyclic Graph). The nodes in the diagram are as follows:
Logically (not codegen-wise, for reasons I'll explain farther down), you do the following to evaluate the BxDF:
V2 Nodes

Leaf BxDF (0 children)

This BxDF is always in the form that passes the White Furnace Test, with absorption not included as far as possible! So:
Every BxDF should be able to tell us how much energy it will lose.

Normal Modifier (exactly 1 child)

Makes the whole subexpression directly below it use a different shading normal, either:
If another Normal Modifier appears in the subexpression, it replaces this normal for its own subexpression. This means that for subexpression elimination, one needs to include the normal used for shading in the hash and comparison. Basically you cannot eliminate a subexpression if it will be using a different shading normal.

Attenuators

This node multiplies the value of its child subexpression by its own value. It is important to re-order attenuators such that:
This ordering favours the speed of evaluation as opposed to importance sampling. We divide attenuators into several classes (if I'm missing something, let me know):

Constant (exactly 1 child)

This will be used to implement the Diffuse BxDF's albedo. We'll make it have a 2-bit flag for whether it should be applied to reflective paths, transmissive paths, or both.

Absorption (exactly 1 child)

Only applied on Transmissive paths; basically it's similar to the Constant Attenuator, but we raise the parameter to the power of ... Also we store a 1-bit flag to tell us which side of the interface (using which ...).

Diffuse Substrate (exactly 2 children)

Left Child: Must be a Cook Torrance Reflector (so we can snoop ...). For now, compute the two transmittance factors of 1-Fresnel of the Shading Normal.

Future Improvements:
Current Old IR/IL Features

Forest not a Tree

We want to optimize the Material VM or Callable SBT by erasing cases for unused opcodes, and by eliminating duplicate materials. We also want to avoid instruction cache misses; this is an important situation where the divergence has already been optimized (BxDF types and parameters are identical) but the data fetches have not (there are multiple copies of the BxDF parameters in memory), so two objects/samples/pixels using the same material would be fetching from divergent addresses, causing cache misses.

Separate Back and Front face root nodes

This allows us to perform optimizations by removing entire branches which do not need to be evaluated knowing the sign of the view direction's dot product with the normal. Furthermore we can precompute a lot of parameters for BSDFs which change depending on that sign.

Special Bump-map and Geometrical Normal reset IR Nodes

They don't really do anything to the accumulated expression, they're more of a "marker" that all BxDF nodes in the branch below (until the next such opcode) will use a normal different to the vanilla shading normal. If we were to go back to Schussler at some point, this node could change from a 1:1 node to a 1:3 node in case certain subexpressions could be eliminated.

New IR/IL Must Have Features

Directed Acyclic Graph instead of a Forest

Our current de-duplication only allows for sharing an entire Material Expression; we want to be able to share subexpressions. We also haven't coded the thing with sharing subexpressions in mind.

Construction time Optimization and Canonicalization

Each subexpression needs to have a canonicalized (and therefore optimized, at least in terms of constant folding, etc.) form.

BxDF always passes WFT convention

Attenuation Nodes

Right now most of our BRDFs have the factors that forsake 100% Energy Conservation folded into their opcodes, parameters and execution. This is a bad idea, especially when importance sampling mixtures of BxDFs, as we'll cover later. Therefore all factors that make the BxDF lose energy, such as:
Backend

The backend traverses the IR/IL (possibly multiple times) to produce a means of performing the following on the target architecture and environment:
For all the above the functions take as parameters:
Whether this is achieved via the means of outputting a constant instruction stream (Assembly/IR or High Level) with a separate entry point per root node, or as a Virtual Instruction stream for a handwritten VM, or any hybrid of approaches, is the choice of the backend.

How the old compiler shits the bed

Bad Importance Sampling of mixes

Obscene Register usage

Improvements in new compiler

Compile Time

Merkle Trees and Hash Consing at runtime (MASDAG)

Instead of having a Material Abstract Syntax Forest, have a Material Abstract Syntax Directed Acyclic Graph. Right now we only de-duplicate entire expression trees, and it also seems to be happening in the Frontend. The Mitsuba Frontend actually constructs suboptimal IR/IL and then attempts to optimize it later (constant folding etc.). If we use Merkle Trees and Hash Consing in the IR/IL allocator/builder then we can make all Frontends benefit from subexpression de-duplication.

Canonical Subexpression Forms

If we can't compute a unique equivalent subgraph for a subexpression we cannot hope for a constant hash and subexpression graph topology comparison, both of which are needed for hash consing and deduplication of subexpressions. This means for example:
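One illustrative example (my own, not necessarily the canonicalization rules intended above): sort the children of commutative nodes into a canonical order before hashing, then hash-cons in the IR builder so identical subexpressions share one node. None of these types exist in Nabla; they only sketch the technique.

```cpp
#include <algorithm>
#include <cstdint>
#include <functional>
#include <memory>
#include <unordered_map>
#include <vector>

enum class Op : std::uint8_t { Sum, Mul, LeafBxDF, Attenuator };

struct Node {
    Op op;
    std::vector<const Node*> children; // already hash-consed, so pointer identity == value identity
    std::uint64_t hash;                // Merkle-style: derived from op + child hashes
};

class NodePool {
    std::unordered_map<std::uint64_t, std::unique_ptr<Node>> pool;
public:
    const Node* getOrCreate(Op op, std::vector<const Node*> children)
    {
        // Canonicalization: commutative ops get their children sorted,
        // so Sum(A,B) and Sum(B,A) hash (and compare) identically.
        if (op == Op::Sum || op == Op::Mul)
            std::sort(children.begin(), children.end(),
                      [](const Node* a, const Node* b) { return a->hash < b->hash; });
        std::uint64_t h = std::hash<int>{}(int(op));
        for (const Node* c : children)
            h = h * 1099511628211ull ^ c->hash; // FNV-style combine of child (Merkle) hashes
        // A real implementation would also verify structural equality on hash collisions.
        auto it = pool.find(h);
        if (it != pool.end())
            return it->second.get(); // subexpression already exists: share it (DAG, not forest)
        auto node = std::make_unique<Node>(Node{op, std::move(children), h});
        const Node* raw = node.get();
        pool.emplace(h, std::move(node));
        return raw;
    }
};
```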
"Linking" / Material DAG joiningA cool side-effect of being able to do subexpression elimination, is being able to link/join multiple ASDAGs together by treating them as plain old common subexpressions. Arbitrary AttenuatorsRuntimeTexture Prefetch SchedulerSeparate AoV instruction streamMiscellaneousFuture improvements which are not targetted nowImplement all BxDFs as ALTCsBringing_Linearly_Transformed_Cosines_to_Anisotrop.pdf Precise: Use ALTC only for importance samplingOptimized: Use ALTC for evaluating BxDF value as wellBonus: Compensate for Energy Loss in very rough BxDFsImplement #156 Scheduler for the Material Graph eval |
Glossary
BxDF = Either a BRDF or BSDF
LTC = Linearly Transformed Cosines
The implementation of BxDFs in Nabla
For the GLSL source you can browse here:
https://github.com/Devsh-Graphics-Programming/Nabla/tree/master/include/nbl/builtin/glsl/bxdf
and for the in-progress HLSL here:
https://github.com/Devsh-Graphics-Programming/Nabla/tree/master/include/nbl/builtin/hlsl/bxdf
Unfortunately the DXC integration (#433, #437, #442) is not operational yet.
General Case
Because the rendering equation can be written as
You have a recursive integral which features two distinct factors.
The Projected BxDF:
and the Incoming Radiance (not to be confused with irradiance):
Generally speaking it's impossible to find a constant-time closed-form solution to importance sampling the product of Reflectance and Incoming Radiance, except for the simplest of cases (point light, line light and Lambertian BRDF). This is because $\omega_{\text{o}}$ appears in both factors, hence they are not independent of each other.
Furthermore the entire incoming radiance is not known (NEE is a special case and does not take into account whether the emitter is actually visible, and Path Guiding samples a much smoother approximation of the incoming radiance).
The logical choice is to use MIS or RIS to sample this product of distributions efficiently and split the techniques into Projected BxDF and Incoming Radiance sample generators; this also helps us keep our code modular and free of a combinatorial explosion of specialized code for each light source type and BxDF combination.
This is why all BxDFs have the following functions:
You will often find that you can importance sample the BxDFs in a way such that the $|\omega_{\text{i}}\cdot\mathbf n|$ factor appears in the PDF of the sample generator and therefore disappears from the throughput/quotient, hence bringing it closer to a constant.
For a dumb reason the quotient computing functions contain `rem_` in the name

I cannot explain the brain aneurysm which caused me to call the value divided by the PDF a remainder, and not a quotient. Sorry for that.
Super Interesting Tangent: Relationship of the Jacobian of the sample generator to its PDF
While you might take it as the word of God that in Monte Carlo Integration of $f$ your sample contribution is $\frac{f(g(\xi))}{p_{g}(g(\xi))}$ and not investigate further, it is important to consider the relationship between a trivial change of variables and importance sampling (hint: they're the same thing).
Let us take the original thing we wanted to integrate:
let's now perform a change of variables $\omega = g(\xi)$:
since we expect that applying Monte Carlo importance sampling as defined above must yield the same answer as simply integrating after the change of variables (without importance sampling), we have:
or as given in the Practical Product Importance Sampling via Composing Warps paper:
There is an important caveat, for the above trick to work, the sample generation function must be a bijection. If more than one subdomain maps to the same subimage you start needing to add the probability densities (so adding the Jacobian determinants like you'd add the resistances of resistors connected in parallel) together to get the real one.
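Written out (my notation; for a generator $g$ mapping uniform $\xi \in [0,1]^2$ to directions), the relation described above, including the non-bijective case where the densities of all preimages add up, is:

$$p_{g}\big(g(\xi)\big) = \frac{1}{\left|\det J_{g}(\xi)\right|} \qquad\text{and in general}\qquad p(\omega) = \sum_{\xi\,:\,g(\xi)=\omega} \frac{1}{\left|\det J_{g}(\xi)\right|}$$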
Therefore it's easy to validate whether your importance sampling generator implementation/scheme matches the PDF you think it has; here's an observation and derivation with an intuitive density-based discussion I made, waaay before I read the Practical Product Importance Sampling paper.
"Smooth" Diffuse BxDF
For the standard Lambertian BRDF we have:
and for the BSDF we have:
To remain sane we pretend that $\mathbf x$ does not depend on $\omega_{\text{o}}$ and therefore $\rho$ is constant.
If we importance sample by generating uniformly distributed points on a disk perpendicular to $\mathbf n$ (parametrized by $(r,\theta) \in [0,1]\times[0,2\pi]$) and then unprojecting them onto the upper hemisphere, we get the following PDF
but if we then randomly make half of the points go onto the lower hemisphere for a BSDF, we get
This is really nice because the throughput/quotient works out to just $q_{f_{\text{r}}} = \rho$.
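A minimal sketch of that generator (my code, using the simple polar disk mapping rather than whatever mapping Nabla actually uses): uniform points on the unit disk, unprojected onto the hemisphere around $\mathbf n = (0,0,1)$, giving $p(\omega) = \frac{|\omega\cdot\mathbf n|}{\pi}$ so the quotient is just $\rho$.

```cpp
#include <cmath>

struct Dir { float x, y, z; }; // direction in tangent space, n = (0,0,1)

// Cosine-weighted hemisphere sample from two uniform numbers u1, u2 in [0,1).
// Uniform-by-area points on the unit disk (r = sqrt(u1)) unprojected onto the
// hemisphere give pdf(w) = cos(theta)/pi, so for Lambertian the quotient is rho.
Dir sampleCosineHemisphere(float u1, float u2)
{
    const float r   = std::sqrt(u1);
    const float phi = 2.f * 3.14159265358979f * u2;
    const float x = r * std::cos(phi);
    const float y = r * std::sin(phi);
    const float z = std::sqrt(std::fmax(0.f, 1.f - u1)); // unproject: z = sqrt(1 - r^2)
    return {x, y, z};
}

inline float cosineHemispherePdf(float cosTheta) { return cosTheta / 3.14159265358979f; }
```

For the BSDF variant described above, flipping the sign of z with probability 1/2 halves the PDF, and the quotient still works out to $\rho$.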
"Rough" Oren-Nayar BxDF
We use Oren-Nayar for rough-diffuse, as it's important that our rough BxDFs degenerate nicely to smooth versions as $\alpha \to 0$.
For importance sampling we figured it doesn't matter much and just used the same generator as for the Lambertian Diffuse; we could, of course, improve that.
Cook-Torrance BxDFs
Equations
Assuming a spatially varying roughness $\alpha(\mathbf x)$ and index of refraction $\eta(\mathbf x)$, the BRDF has the following form:
note the absolute value of the dot products to handle Internal Reflections for the dielectric case.
Also the $\eta$ needs to be properly oriented w.r.t. $\omega_{\text{i}}$: whenever the latter is in the lower hemisphere we need to use $\frac{1}{\eta}$.
While the BTDF has the form:
where $\omega_{\text{m}}$ is the microsurface normal, which is fixed by (and can be worked out from) $\omega_{\text{i}}$, $\omega_{\text{o}}$, and $\eta$.
At first glance you might think that this BTDF violates the law of reciprocity $f_{\text{t}}(\omega_{\text{i}}, \omega_{\text{o}}) = f_{\text{t}}(\omega_{\text{o}}, \omega_{\text{i}})$ needed for PBR. However, note that if you swap the directions in the transmissive case, the $\eta$ changes too, becoming its own reciprocal.
For certain transmissive configurations of $\omega_{\text{i}}$, $\omega_{\text{o}}$, and $\eta$ the refractive path will have a zero throughput (and therefore we must report a zero PDF) because either:
Conclusions
Most of these I've covered and derived in our recorded Discord call, which you have access to on Google Drive.
It should be immediately apparent that when $f$ is a Cook Torrance BRDF or BTDF, the following expression
contains $|\omega_{\text{o}}\cdot\mathbf n|$ in both the numerator and denominator, which can be cancelled out to avoid a lot of headaches.
Sampling the Distribution of Visible Normals (VNDF) will remove almost every factor from the quotient
There are two variants of the Masking and Shadowing function $G$:
As derived by Heitz in his 2014 and 2016 papers, either variant of the masking and shadowing function $G$ for a microfacet Cook Torrance model is entirely fixed by the distribution of normals $D(\alpha)$, as it defines $\Lambda_{D(\alpha)}(\omega,\alpha)$ which in turn defines:
This allows us to treat $D G$ as a single function. Nabla's $D G_{\text{GGX}}$ is extremely optimized; we pretty much avoid computing $\Lambda_{D_{\text{GGX}}}$ because it contains many of the same terms as $D_{\text{GGX}}$.
Also as shown by Heitz and others, $D$ by itself is not a probability distribution, but $D |\omega_{\text{m}}\cdot\mathbf n|$ is. This is because the projected area of the microfacets onto the macrosurface needs to add up to a constant (which is actually the same as the area of the macrosurface), not the area of the microfacets themselves (a more corrugated surface has a greater surface area).
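Spelled out (consistent with the statement above; the projected microfacet areas integrate to the macrosurface's area):

$$\int_{\Omega} D(\omega_{\text{m}})\,(\omega_{\text{m}}\cdot\mathbf n)\;\mathrm{d}\omega_{\text{m}} = 1 \qquad\Rightarrow\qquad p(\omega_{\text{m}}) = D(\omega_{\text{m}})\,|\omega_{\text{m}}\cdot\mathbf n| \text{ is a valid PDF.}$$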
Note that this formulation makes $\omega_{\text{m}}$ independent of $\omega_{\text{i}}$.
We can also define the Visible Normal Distribution Function for a given fixed $\omega_{\text{i}}$
which tells you what proportion of the macrosurface area projected onto the viewing direction (which is why there is a division by $|\omega_{\text{i}}\cdot\mathbf n|$) is made up of microfacets with normal equal to $\omega_{\text{m}}$ (they also need to be projected onto the viewing direction, hence the dot product in the numerator).
When importance sampling, the $\omega_{\text{i}}$ is a fixed constant, so the best sampling of $\omega_{\text{o}}$ you can hope to achieve is done according to $f |\omega_{\text{o}} \cdot \mathbf n|$, but realistically you can only sample $\omega_{\text{m}}$ according to $D_{\omega_{\text{i}}}$ and then reflect $\omega_{\text{i}}$ about it.
The rest of the terms in the quotient composed of dot products involving $\omega$ can be factored out by construction
When you reflect $\omega_{\text{i}}$ about $\omega_{\text{m}}$ a change of variable happens and your PDF is
note how when you sample according to VNDF, this becomes:
A similar thing happens with refraction as its PDF is:
which actually makes it so that $p_{\omega_{\text{o}}}$ is exactly the same for the reflective and refractive case.
Therefore when you sample an $\omega_{\text{m}}$ from a VNDF first, and then generate your $\omega_{\text{o}}$ via this reflection or refraction, the same factors arise in your generator's PDF as in the value, so that the throughput simply becomes:
for a BRDF, and
for a BTDF.
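Spelled out under the usual VNDF-sampling conventions (my notation, with $\omega_{\text{i}}$ the fixed direction as above; a sketch, not necessarily Nabla's exact factorization):

$$D_{\omega_{\text{i}}}(\omega_{\text{m}}) = \frac{G_1(\omega_{\text{i}})\,|\omega_{\text{i}}\cdot\omega_{\text{m}}|\,D(\omega_{\text{m}})}{|\omega_{\text{i}}\cdot\mathbf n|}, \qquad p(\omega_{\text{o}}) = \frac{D_{\omega_{\text{i}}}(\omega_{\text{m}})}{4\,|\omega_{\text{o}}\cdot\omega_{\text{m}}|}$$

$$q_{f_{\text{r}}} = \frac{f_{\text{r}}\,|\omega_{\text{o}}\cdot\mathbf n|}{p(\omega_{\text{o}})} = F\,\frac{G_2(\omega_{\text{i}},\omega_{\text{o}})}{G_1(\omega_{\text{i}})}, \qquad q_{f_{\text{t}}} = (1-F)\,\frac{G_2(\omega_{\text{i}},\omega_{\text{o}})}{G_1(\omega_{\text{i}})}$$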
This should all intuitively make sense when you consider that as $\alpha \to 0$ both $G_1 \to 1$ and $G_2 \to 1$, so you get the exact same thing as importance sampling an explicitly implemented smooth mirror or glass BSDF.
When you have a White Furnace Test passing BSDF in a Spectral Renderer even the Fresnel disappears
Assuming you're constructing your BSDF by applying a BRDF on reflective paths and BTDF on refractive
If, when importance sampling, you choose whether to reflect $\omega_{\text{i}}$ about $\omega_{\text{m}}$ or refract it, based on a probability proportional to $F$, you get a factor of $F$ in the PDF $p_{f}$ when reflecting and $1-F$ when refracting.
This makes it completely drop out of your throughput so that:
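(A sketch in my notation of what that works out to, assuming the VNDF-based generator from the previous section: the $F$ in the BxDF's Fresnel factor cancels against the $F$ or $1-F$ choice probability in the PDF.)

$$q_{f} = \frac{f\,|\omega_{\text{o}}\cdot\mathbf n|}{p_{f}(\omega_{\text{o}})} = \frac{G_2(\omega_{\text{i}},\omega_{\text{o}})}{G_1(\omega_{\text{i}})} \qquad\text{for both the reflected and the refracted branch.}$$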
There are however important caveats:
Bonus Round: Thindielectric BSDF
This is a great "specialization for known geometry", you basically assume that each surface with this material is in reality two surfaces which are:
as you'll see these are important assumptions later on.
Smooth Thindielectric
The BSDF for smooth glass is:
What should be immediately apparent is that any ray that enters the Thindielectric medium will hit its backside with the exact same angle as it left the front face, this means:
When you put these facts together you see that the equations for infinite scattering Thindielectric BSDF (infinite TIR) become
The $(1-F)^{2}$ factor is present on all but the simple single scattering reflection, because you need a transmission event to enter and exit the medium.
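Summing those chains as a geometric series (ignoring extinction for now; my write-up of the standard result, with $F$ the single-interface Fresnel reflectance):

$$R = F + \frac{F\,(1-F)^{2}}{1-F^{2}} = \frac{2F}{1+F}, \qquad T = \frac{(1-F)^{2}}{1-F^{2}} = \frac{1-F}{1+F}, \qquad R + T = 1.$$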
It is further possible to incorporate an extinction function (but not a phase function) e.g. simple Beer's Law.
Let us assume that a ray of light orthogonal to the surface will be attenuated by $\tau_{\perp}$ after passing through the medium, then for a ray penetrating the medium at a different angle we have:
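(my reconstruction, assuming a parallel-sided slab so the path length scales with the reciprocal of the cosine of the refracted angle $\theta_{\text{t}}$)

$$\tau(\theta_{\text{t}}) = \tau_{\perp}^{\frac{1}{|\cos\theta_{\text{t}}|}}$$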
We can compute the following factor to account for all chains of Total Internal Reflection which exit at the same side:
then our BSDF becomes:
[Perpetual TODO] Rough Thindielectric
This will need the energy conservation fix for regular Rough Dielectrics (Multiple Scattering Cook Torrance correction).
What's important is that our implementation of this will reduce to Smooth Thindielectric as $\alpha \to 0$.
All the BxDFs we didn't cover and which might be needed for Ditt (in order of importance)
Velvet
This is the only one I've had requests for so far.
We should do the same one Mitsuba had, or something more up to date if we can find it.
Hair/Fur
For carpets.
The BSDF is what gives you rim-lighting and anisotropic gloss.
https://andrew-pham.blog/2019/09/13/hair-rendering/
But it's the heightfield / volume tracing that gives the fur effect.
Human Skin
For RenderPeople (not requested yet).
True Diffuse
Lambertian (constant irrespective of the angle of observation) is completely made up; if you replace it with an actual volume which does uniform scattering and let the mean free path tend to 0, you'll get a different-looking material.
Sony Imageworks has a 2017-ish presentation on that.