CIS 565 Project 6 #20

Open · wants to merge 4 commits into base: master

297 changes: 105 additions & 192 deletions README.md
@@ -1,192 +1,105 @@
-------------------------------------------------------------------------------
CIS565: Project 6: Deferred Shader
-------------------------------------------------------------------------------
Fall 2013
-------------------------------------------------------------------------------
Due Friday 11/15/2013
-------------------------------------------------------------------------------

-------------------------------------------------------------------------------
NOTE:
-------------------------------------------------------------------------------
This project requires any graphics card with support for a modern OpenGL
pipeline. Any AMD, NVIDIA, or Intel card from the past few years should work
fine, and every machine in the SIG Lab and Moore 100 is capable of running
this project.

-------------------------------------------------------------------------------
INTRODUCTION:
-------------------------------------------------------------------------------
In this project, you will be introduced to the basics of deferred shading. You will write GLSL and OpenGL code to perform various tasks in a deferred lighting pipeline, such as creating and writing to a G-buffer.

-------------------------------------------------------------------------------
CONTENTS:
-------------------------------------------------------------------------------
The Project6 root directory contains the following subdirectories:

* base/
* PROJ_WIN/ contains the Visual Studio 2010 project files
* PROJ_NIX/ contains a makefile for building (tested on Ubuntu 12.04 LTS)
* res/ contains resources, including shader source and .obj files
* src/ contains the C++ code for the project along with SOIL and tiny_obj_loader
* shared32/ contains freeglut, glm, and glew.

---
BASE CODE TOUR
---

Most of your edits will be confined to the various fragment shader programs and main.cpp.

Some methods worth exploring are:

[initShader](https://github.com/CIS565-Fall-2013/Project6-DeferredShader/blob/master/base/src/main.cpp#L223):
This method initializes each shader program from the specified source files. Note that the source name is declared inside an `#ifdef WIN32` guard block; this reflects the different relative directory structures of the Linux and Windows versions of the code.

[initFBO](https://github.com/CIS565-Fall-2013/Project6-DeferredShader/blob/master/base/src/main.cpp#L360):
This method initializes the framebuffer objects used as render targets for the first and second stages of the pipeline. When you add another slot to the G-buffer, you will need to modify the first FBO accordingly. Try finding all the places where `colorTexture` is used (Ctrl+F will be helpful) and look at how textures are created, freed, added to the FBO, and assigned to the appropriate shader programs before adding your own. Also keep in mind that textures can be reused as inputs in other pipeline stages; for instance, you might want access to the normals both in the lighting stage and in the post-process stage.
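
For reference, here is a minimal sketch of what the shader side of an extra G-buffer slot could look like. The names (`out_Glow`, `u_EmissiveColor`, the `fs_*` varyings) are illustrative rather than the base code's actual identifiers, and the matching C++ changes (creating the texture, attaching it to the FBO, listing it in `glDrawBuffers`, and binding the fragment output) are not shown.

```glsl
// pass.frag (sketch): writing one additional G-buffer target.
#version 330

uniform vec3 u_EmissiveColor;          // hypothetical per-material emissive ("Ke") term

in vec3 fs_Normal;                     // eye-space normal from the vertex shader
in vec4 fs_Position;                   // eye-space position
in vec3 fs_Color;                      // diffuse color

layout(location = 0) out vec4 out_Normal;
layout(location = 1) out vec4 out_Position;
layout(location = 2) out vec4 out_Color;
layout(location = 3) out vec4 out_Glow;   // the new slot: a fourth color attachment

void main() {
    out_Normal   = vec4(normalize(fs_Normal), 0.0);
    out_Position = fs_Position;
    out_Color    = vec4(fs_Color, 1.0);
    out_Glow     = vec4(u_EmissiveColor, 1.0);   // written every frame like the other targets
}
```

The lighting and post-process stages can then bind the new texture as a sampler input, just as the existing G-buffer textures are bound today.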

[draw_mesh](https://github.com/CIS565-Fall-2013/Project6-DeferredShader/blob/master/base/src/main.cpp#L574),
[draw_quad](https://github.com/CIS565-Fall-2013/Project6-DeferredShader/blob/master/base/src/main.cpp#L647),
[draw_light](https://github.com/CIS565-Fall-2013/Project6-DeferredShader/blob/master/base/src/main.cpp#L657):
These methods render the scene geometry, viewing quad, and point-light quad to the screen. The draw_light method is particularly worth reading because it sets up the scissor window used for efficient rendering of point lights.

[display](https://github.com/CIS565-Fall-2013/Project6-DeferredShader/blob/master/base/src/main.cpp#L742):
This is where the graphical work of your program is done. The method is separated into three stages with the majority of work being done in stage 2.

Stage 1 renders the scene geometry to the G-Buffer
* pass.vert
* pass.frag

Stage 2 renders the lighting passes and accumulates to the P-Buffer
* shade.vert
* ambient.frag
* point.frag
* diagnostic.frag

Stage 3 renders the post processing
* post.vert
* post.frag

[keyboard](https://github.com/CIS565-Fall-2013/Project6-DeferredShader/blob/master/base/src/main.cpp#L870):
This is a good reference for the key mappings in the program.
WASDQZ - Movement
X - Toggle scissor test
R - Reload shaders
1 - View depth
2 - View eye space normals
3 - View Diffuse color
4 - View eye space positions
5 - View lighting debug mode
0 - Standard view

-------------------------------------------------------------------------------
REQUIREMENTS:
-------------------------------------------------------------------------------

In this project, you are given code for:
* Loading .obj files
* Rendering to a minimal G buffer:
* Depth
* Normal
* Color
* Eye space position
* Rendering simple ambient and directional lighting to texture
* Example post process shader to add a vignette

You are required to implement:
* Either of the following effects
* Bloom (feel free to use [GPU Gems](http://http.developer.nvidia.com/GPUGems/gpugems_ch21.html) as a rough guide)
* "Toon" Shading (with basic silhouetting)
* Point light sources
* An additional G buffer slot and some effect showing it off

**NOTE**: Implementing separable convolution will require another link in your pipeline and will count as an extra feature if you do performance analysis with a standard one-pass 2D convolution. The overhead of rendering and reading from a texture _may_ offset the extra computations for smaller 2D kernels.
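
For concreteness, the horizontal pass of a separable Gaussian might look like the sketch below; the uniform names and the 5-tap kernel are illustrative, not part of the base code. A second pass with the offset applied along y completes the blur.

```glsl
// Horizontal half of a separable Gaussian blur (sketch).
#version 330

uniform sampler2D u_Tex;       // texture to blur (assumed name)
uniform vec2 u_TexelSize;      // 1.0 / texture resolution (assumed name)

in vec2 fs_Texcoord;
out vec4 out_Color;

void main() {
    // 5-tap Gaussian weights (sigma ~ 1), normalized to sum to 1
    float w[5] = float[](0.0614, 0.2448, 0.3877, 0.2448, 0.0614);
    vec4 acc = vec4(0.0);
    for (int i = -2; i <= 2; ++i) {
        acc += w[i + 2] * texture(u_Tex, fs_Texcoord + vec2(float(i) * u_TexelSize.x, 0.0));
    }
    out_Color = acc;
}
```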

You must implement two of the following extras:
* The effect you did not choose above
* Screen space ambient occlusion
* Compare performance to a normal forward renderer with
* No optimizations
* Coarse sort geometry front-to-back for early-z
* Z-prepass for early-z
* Optimize g-buffer format, e.g., pack things together, quantize, reconstruct z from normal x and y (because it is normalized), etc.
* Must be accompanied with a performance analysis to count
* Additional lighting and pre/post processing effects! (Email first, please; if they are good, you may add multiple.)

-------------------------------------------------------------------------------
README
-------------------------------------------------------------------------------
All students must replace or augment the contents of this Readme.md in a clear
manner with the following:

* A brief description of the project and the specific features you implemented.
* At least one screenshot of your project running.
* A 30 second or longer video of your project running. To create the video you
can use http://www.microsoft.com/expression/products/Encoder4_Overview.aspx
* A performance evaluation (described in detail below).

-------------------------------------------------------------------------------
PERFORMANCE EVALUATION
-------------------------------------------------------------------------------
The performance evaluation is where you will investigate how to make your
program more efficient using the skills you've learned in class. You must have
performed at least one experiment on your code to investigate the positive or
negative effects on performance.

We encourage you to get creative with your tweaks. Consider places in your code
that could be considered bottlenecks and try to improve them.

Each student should provide no more than a one page summary of their
optimizations along with tables and or graphs to visually explain any
performance differences.

-------------------------------------------------------------------------------
THIRD PARTY CODE POLICY
-------------------------------------------------------------------------------
* Use of any third-party code must be approved by asking on the Google groups.
If it is approved, all students are welcome to use it. Generally, we approve
use of third-party code that is not a core part of the project. For example,
for the ray tracer, we would approve using a third-party library for loading
models, but would not approve copying and pasting a CUDA function for doing
refraction.
* Third-party code must be credited in README.md.
* Using third-party code without approval, including using another
  student's code, is an academic integrity violation, and will result in you
  receiving an F for the semester.

-------------------------------------------------------------------------------
SELF-GRADING
-------------------------------------------------------------------------------
* On the submission date, email your grade, on a scale of 0 to 100, to Liam,
[email protected], with a one paragraph explanation. Be concise and
realistic. Recall that we reserve 30 points as a sanity check to adjust your
grade. Your actual grade will be (0.7 * your grade) + (0.3 * our grade). We
hope to only use this in extreme cases when your grade does not realistically
reflect your work - it is either too high or too low. In most cases, we plan
to give you the exact grade you suggest.
* Projects are not weighted evenly, e.g., Project 0 doesn't count as much as
the path tracer. We will determine the weighting at the end of the semester
based on the size of each project.


---
SUBMISSION
---
As with the previous projects, you should fork this project and work inside of
your fork. Upon completion, commit your finished project back to your fork, and
make a pull request to the master repository. You should include a README.md
file in the root directory detailing the following

* A brief description of the project and specific features you implemented
* At least one screenshot of your project running.
* A link to a video of your project running.
* Instructions for building and running your project if they differ from the
base code.
* A performance writeup as detailed above.
* A list of all third-party code used.
* This Readme file edited as described above in the README section.

---
ACKNOWLEDGEMENTS
---
This project makes use of [tinyobjloader](http://syoyo.github.io/tinyobjloader/) and [SOIL](http://lonesock.net/soil.html).
# CIS 565: Project 6: Deferred Shader

----

## Overview

We have implemented a deferred shader that uses screen-space techniques to enable real-time rendering with visually appealing effects.

<div align="center">
<img src="https://raw.github.com/harmoli/Project6-DeferredShader/master/renders/depth_of_field_sponza.JPG" "DOF Sponza">
<img src="https://raw.github.com/harmoli/Project6-DeferredShader/master/renders/toon_shaded_suzanne.JPG" "Toon Shaded Suzanne">
</div>
----

## Features

We have implemented the following requirements:
* Multiple point lights
* Bloom (light emissive materials)
* Toon Shading

and the following extras:
* Depth of Field
* General Gaussian Blur
* Poisson disk screen space ambient occlusion

The following is planned / in progress:
* Optimizing the G-buffer by condensing color formats into 32-bit uints instead of vec3/vec4s (see the sketch below)
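
A rough sketch of that packing, assuming a `GL_R32UI` color attachment and placeholder names (this is not the current code):

```glsl
// Sketch: quantize an RGB color to 8 bits per channel and pack it into a
// single 32-bit uint, written to a GL_R32UI G-buffer attachment.
#version 330

in vec3 fs_Color;                                // illustrative input color

layout(location = 0) out uint out_PackedColor;   // integer color attachment

uint packColor(vec3 c) {
    // Round each channel to an 8-bit value, then pack as 0x00RRGGBB
    uvec3 q = uvec3(clamp(c, 0.0, 1.0) * 255.0 + 0.5);
    return (q.r << 16) | (q.g << 8) | q.b;
}

void main() {
    out_PackedColor = packColor(fs_Color);
}

// Read side (bound as a usampler2D):
//   vec3 unpackColor(uint p) {
//       return vec3((p >> 16) & 0xFFu, (p >> 8) & 0xFFu, p & 0xFFu) / 255.0;
//   }
```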

----

## Performance Analysis

Toon Shading

Optimization | FPS
---- | ----
No G-Buffer Optimizations | 16.60
Compressed Normals | 16.98
Compressed Normals & 8-bit precision RGBA | 17.11
Compressed Normals & 8-bit precision RGB | 17.27

----

Bloom

Optimization | FPS
----- | ----
Compressed Normals | 2.29
Compressed Normals & 8-bit precision RGB | 3.57

------

## Discussion

#### Bloom (Light Emissive Objects)
In implementing light-emissive objects, we use the "Ke" term in the .mtl file to hold the emissive color of the material, and we added a vec3 slot to the G-buffer to store it. In post-processing, we sample this buffer, blur it, and add the result to the original diffuse color. Using this technique, we can get images such as Suzanne with glowing red eyes even though the original diffuse material has blue eyes.
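
A simplified version of the composite step is sketched below; the uniform names are placeholders, and the blur of the emissive buffer is assumed to have already run in an earlier pass.

```glsl
// Bloom composite (sketch): add the blurred emissive buffer on top of the lit scene.
#version 330

uniform sampler2D u_Posttex;       // result of the lighting stage
uniform sampler2D u_Glowtex;       // emissive ("Ke") G-buffer slot, already blurred
uniform float u_BloomStrength;     // e.g. 1.0

in vec2 fs_Texcoord;
out vec4 out_Color;

void main() {
    vec3 scene = texture(u_Posttex, fs_Texcoord).rgb;
    vec3 glow  = texture(u_Glowtex, fs_Texcoord).rgb;
    out_Color = vec4(scene + u_BloomStrength * glow, 1.0);   // additive blend
}
```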

<div align="center">
<img src="https://raw.github.com/harmoli/Project6-DeferredShader/master/renders/glowing_suzanne.JPG" "Glowing Suzanne">
<img src="https://raw.github.com/harmoli/Project6-DeferredShader/master/renders/evil_suzanne.JPG" "Suzanne with Glowing Red eyes">
</div>

#### Toon Shading
We first apply a Sobel convolution for edge detection, then quantize the color to produce the banding characteristic of toon shading.
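
The core of the pass is sketched below, using a Sobel filter on the depth buffer for silhouettes and a simple `floor`-based quantization for the bands; the threshold and band count are illustrative rather than our exact values.

```glsl
// Toon pass (sketch): Sobel edge detection on depth + color quantization.
#version 330

uniform sampler2D u_Depthtex;
uniform sampler2D u_Colortex;
uniform int u_ScreenWidth;
uniform int u_ScreenHeight;

in vec2 fs_Texcoord;
out vec4 out_Color;

void main() {
    vec2 texel = vec2(1.0 / float(u_ScreenWidth), 1.0 / float(u_ScreenHeight));

    // Accumulate the 3x3 Sobel responses on the depth buffer
    float gx = 0.0;
    float gy = 0.0;
    for (int y = -1; y <= 1; ++y) {
        for (int x = -1; x <= 1; ++x) {
            float d = texture(u_Depthtex, fs_Texcoord + vec2(x, y) * texel).r;
            gx += float(x) * (2.0 - abs(float(y))) * d;   // [-1 0 1; -2 0 2; -1 0 1]
            gy += float(y) * (2.0 - abs(float(x))) * d;   // transpose of the above
        }
    }
    float edge = step(0.01, length(vec2(gx, gy)));        // 1.0 on silhouettes

    // Quantize the shaded color into a few discrete bands
    vec3 c = texture(u_Colortex, fs_Texcoord).rgb;
    vec3 banded = floor(c * 4.0) / 4.0;

    out_Color = vec4(mix(banded, vec3(0.0), edge), 1.0);  // draw edges in black
}
```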

#### Depth of Field
Here we have implemented a reverse Z-buffer depth of field: the blur strength (the standard deviation of the Gaussian distribution) scales linearly with the distance from the focal plane.
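
A simplified sketch of the idea: linearize the depth, derive a blur strength from the distance to the focal plane, and use it to blend between the sharp frame and a pre-blurred copy. The uniform names and the scale factor are placeholders; the actual implementation varies the Gaussian itself rather than blending a single pre-blurred texture.

```glsl
// Depth of field (sketch): blur strength grows linearly with distance from
// the focal plane; here it simply blends a sharp and a pre-blurred frame.
#version 330

uniform sampler2D u_Posttex;    // sharp, lit scene
uniform sampler2D u_Blurtex;    // the same scene after the Gaussian blur pass
uniform sampler2D u_Depthtex;
uniform float u_Near;           // e.g. 0.1
uniform float u_Far;            // e.g. 100.0
uniform float u_Focal;          // linearized depth of the focal plane, in [0, 1]

in vec2 fs_Texcoord;
out vec4 out_Color;

float linearizeDepth(float exp_depth, float near, float far) {
    return (2.0 * near) / (far + near - exp_depth * (far - near));
}

void main() {
    float depth = linearizeDepth(texture(u_Depthtex, fs_Texcoord).r, u_Near, u_Far);
    // Strength is 0 at the focal plane and ramps up linearly away from it;
    // the factor 4.0 controls how quickly things go out of focus.
    float strength = clamp(abs(depth - u_Focal) * 4.0, 0.0, 1.0);
    vec3 sharp   = texture(u_Posttex, fs_Texcoord).rgb;
    vec3 blurred = texture(u_Blurtex, fs_Texcoord).rgb;
    out_Color = vec4(mix(sharp, blurred, strength), 1.0);
}
```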

#### G-Buffer Optimizations
We have done a small amount of G-buffer optimization, namely compressing the normals and color channels. The numbers above show that this does help the frame rate, which makes sense because deferred shading is heavily bound by framebuffer reads and writes. By compressing the normals and dropping the color channels to 8-bit precision, we achieve a measurable speed-up, most pronounced in the bloom numbers above.
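
A sketch of what the compressed write side looks like; the attachment formats in the comments are typical choices for such a layout rather than necessarily our exact ones, and the z component of the normal is rebuilt at read time as in `retrieveNormal` further down this diff.

```glsl
// pass.frag write side (sketch): compressed normals + 8-bit color.
#version 330

in vec3 fs_Normal;      // eye-space normal (illustrative name)
in vec3 fs_Color;

layout(location = 0) out vec2 out_Normal;   // e.g. a GL_RG16F attachment: xy only
layout(location = 1) out vec4 out_Color;    // e.g. a GL_RGBA8 attachment

void main() {
    vec3 n = normalize(fs_Normal);
    out_Normal = n.xy;                  // z >= 0 for front-facing eye-space normals,
                                        // so it can be rebuilt as sqrt(1 - x*x - y*y)
    out_Color  = vec4(fs_Color, 1.0);   // quantized to 8 bits per channel on write
}
```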

#### Poisson-Disk Screen Space Ambient Occlusion
We have implemented screen-space ambient occlusion. It uses a generated Poisson-disk sample set, sampling along the disk and rotating the samples by a random per-pixel angle. We do not blur the AO result, so banding artifacts remain.
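
The sampling loop is sketched below; the disk values, radius, bias, and sample count are illustrative rather than our exact constants.

```glsl
// Poisson-disk SSAO (sketch).
#version 330

uniform sampler2D u_Positiontex;      // eye-space position G-buffer
uniform sampler2D u_RandomScalartex;  // per-pixel random values in [0, 1]

in vec2 fs_Texcoord;
out vec4 out_Color;

const int NUM_SAMPLES = 8;
const float RADIUS = 0.02;            // screen-space sampling radius
const float BIAS   = 0.05;            // depth bias against self-occlusion

// A small hand-picked Poisson disk inside the unit circle
const vec2 poisson[NUM_SAMPLES] = vec2[](
    vec2(-0.613392,  0.617481), vec2( 0.170019, -0.040254),
    vec2(-0.299417,  0.791925), vec2( 0.645680,  0.493210),
    vec2(-0.651784,  0.717887), vec2( 0.421003,  0.027070),
    vec2(-0.817194, -0.271096), vec2( 0.977050, -0.108615));

void main() {
    vec3 p = texture(u_Positiontex, fs_Texcoord).xyz;

    // Rotate the disk by a random per-pixel angle so banding becomes noise
    float angle = 6.2831853 * texture(u_RandomScalartex, fs_Texcoord).r;
    mat2 rot = mat2(cos(angle), sin(angle), -sin(angle), cos(angle));

    float occlusion = 0.0;
    for (int i = 0; i < NUM_SAMPLES; ++i) {
        vec2 offset = rot * poisson[i] * RADIUS;
        vec3 q = texture(u_Positiontex, fs_Texcoord + offset).xyz;
        // Count the sample as an occluder if it sits in front of p (in eye
        // space the camera looks down -z, so "in front" means a larger z)
        if (q.z > p.z + BIAS) occlusion += 1.0;
    }
    float ao = 1.0 - occlusion / float(NUM_SAMPLES);
    out_Color = vec4(vec3(ao), 1.0);
}
```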

<div align="center">
<img src="https://raw.github.com/harmoli/Project6-DeferredShader/master/renders/no_ssao_sponza.JPG" "No AO">
<img src="https://raw.github.com/harmoli/Project6-DeferredShader/master/renders/ssao_only_poisson_disk_sponza.JPG" "AO">
<img src="https://raw.github.com/harmoli/Project6-DeferredShader/master/renders/ssao_poisson_disk_sponza.JPG" "Composite">
</div>

-----

## Video

https://vimeo.com/79866921

-----

## References

[CIS 565 Fall 2012](https://github.com/CIS565-Fall-2012/Project5-AdvancedGLSL)

[GPU Gems Real-Time Glow](http://http.developer.nvidia.com/GPUGems/gpugems_ch21.html)

[GPU Gems Depth of Field](http://http.developer.nvidia.com/GPUGems/gpugems_ch23.html)


2 changes: 1 addition & 1 deletion base/PROJ_WIN/P6/P6/P6.vcxproj
@@ -67,7 +67,7 @@
<EnableCOMDATFolding>true</EnableCOMDATFolding>
<OptimizeReferences>true</OptimizeReferences>
<AdditionalLibraryDirectories>$(SolutionDir)$(Configuration);..\..\..\..\shared32\glew\lib;..\..\..\..\shared32\freeglut\lib;%(AdditionalLibraryDirectories)</AdditionalLibraryDirectories>
<AdditionalDependencies>freeglut.lib;glew32.lib;%(AdditionalDependencies)</AdditionalDependencies>
<AdditionalDependencies>freeglut.lib;glew32.lib;SOIL.lib;%(AdditionalDependencies)</AdditionalDependencies>
<SubSystem>Console</SubSystem>
</Link>
</ItemDefinitionGroup>
Binary file added base/PROJ_WIN/P6/P6/freeglut.dll
Binary file added base/PROJ_WIN/P6/P6/glew32.dll
3 changes: 3 additions & 0 deletions base/res/shaders/ambient.frag
@@ -10,6 +10,9 @@
#define DISPLAY_COLOR 3
#define DISPLAY_TOTAL 4
#define DISPLAY_LIGHTS 5
#define DISPLAY_TOON 6
#define DISPLAY_BLUR 7
#define DISPLAY_DOF 8


/////////////////////////////////////
25 changes: 18 additions & 7 deletions base/res/shaders/diagnostic.frag
@@ -10,6 +10,10 @@
#define DISPLAY_COLOR 3
#define DISPLAY_TOTAL 4
#define DISPLAY_LIGHTS 5
#define DISPLAY_TOON 6
#define DISPLAY_BLUR 7
#define DISPLAY_DOF 8
#define DISPLAY_GLOW 9


/////////////////////////////////////
@@ -40,13 +44,9 @@ in vec2 fs_Texcoord;
out vec4 out_Color;
///////////////////////////////////////




uniform float zerothresh = 1.0f;
uniform float falloff = 0.1f;


/////////////////////////////////////
// UTILITY FUNCTIONS
/////////////////////////////////////
@@ -57,19 +57,26 @@ float linearizeDepth(float exp_depth, float near, float far) {
return (2 * near) / (far + near - exp_depth * (far - near));
}

vec3 retrieveNormal(vec2 n){
vec3 normal = vec3(n.x, n.y, 0.0);
normal.z = sqrt(1.0 - dot(n.xy, n.xy));
return normal;
}

//Helper function to automatically sample and unpack normals
vec3 sampleNrm(vec2 texcoords) {
return texture(u_Normaltex,texcoords).xyz;
return retrieveNormal(texture(u_Normaltex,texcoords).xy);
}

//Helper function to automicatlly sample and unpack positions
vec3 samplePos(vec2 texcoords) {
return texture(u_Positiontex,texcoords).xyz;
}

//Helper function to automicatlly sample and unpack positions
//Helper function to automatically sample and unpack color
vec3 sampleCol(vec2 texcoords) {
return texture(u_Colortex,texcoords).xyz;
vec3 u = texture(u_Colortex,texcoords).xyz;
return vec3(float(u.x) / 255.0, float(u.y) / 255.0, float(u.z) / 255.0);
}

//Get a random normal vector given a screen-space texture coordinate
@@ -117,8 +124,12 @@ void main() {
case(DISPLAY_COLOR):
out_Color = vec4(color, 1.0);
break;
case(DISPLAY_GLOW):
case(DISPLAY_LIGHTS):
case(DISPLAY_TOTAL):
case(DISPLAY_TOON):
case(DISPLAY_BLUR):
case(DISPLAY_DOF):
break;
}

1 change: 1 addition & 0 deletions base/res/shaders/directional.frag
@@ -10,6 +10,7 @@
#define DISPLAY_COLOR 3
#define DISPLAY_TOTAL 4
#define DISPLAY_LIGHTS 5
#define DISPLAY_TOON 6


/////////////////////////////////////