Textures
In the last tutorial we looked at how to use shaders to render your scene and how to interpolate between updates. There is just one thing left: learning how to use textures.
Since you already know how to set up a simple shader program, we will skip the initialization.
The method for generating a texture handle and binding it should be no surprise by now; it is similar to creating buffers, vertex arrays, shaders or programs.
int texture = glGenTextures();
glBindTexture(GL_TEXTURE_2D, texture);
Now we have a texture handle; the next thing we should do is set the wrapping and filtering parameters.
But before that it should be noted that in OpenGL textures are mapped with texture coordinates that range from 0.0 to 1.0. This could be compared to an (x,y) coordinate system, but with textures it is an (s,t) coordinate system. Sometimes you also see (u,v) coordinates; strictly speaking there is a difference between them, but most of the time you can assume that they are just the same. For OpenGL we stick to the specification and use (s,t) coordinates. For three-dimensional texture coordinates it is an (s,t,r) coordinate system.
The texture coordinate (0,0) is the origin of the texture. In OpenGL this is the bottom left by convention, so we have the following coordinates:
- (0,0) at the bottom left
- (1,0) at the bottom right
- (1,1) at the top right
- (0,1) at the top left
Of course you could also use values outside of 0.0 and 1.0, and the chosen wrapping mode will handle them. There are four different modes:
- GL_REPEAT simply repeats the texture
- GL_MIRRORED_REPEAT also repeats the texture, but mirrors it on odd integer coordinates
- GL_CLAMP_TO_EDGE clamps the coordinates between 0.0 and 1.0
- GL_CLAMP_TO_BORDER gives coordinates outside of 0.0 and 1.0 a specified border color
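These wrap modes can be illustrated with plain arithmetic on the CPU. The following standalone sketch (my own helper names, not OpenGL calls) shows what three of the modes do to a coordinate outside of 0.0 and 1.0:

```java
/** Illustrates how three of the OpenGL wrap modes map an out-of-range coordinate. */
class WrapModes {

    /** GL_REPEAT: keep only the fractional part of the coordinate. */
    static float repeat(float s) {
        return s - (float) Math.floor(s);
    }

    /** GL_MIRRORED_REPEAT: like repeat, but mirrored on odd integer intervals. */
    static float mirroredRepeat(float s) {
        float fraction = s - (float) Math.floor(s);
        boolean oddInterval = (((long) Math.floor(s)) & 1) == 1;
        return oddInterval ? 1f - fraction : fraction;
    }

    /** GL_CLAMP_TO_EDGE: clamp the coordinate into [0, 1]. */
    static float clampToEdge(float s) {
        return Math.max(0f, Math.min(1f, s));
    }

    public static void main(String[] args) {
        System.out.println(repeat(1.25f));         // 0.25
        System.out.println(mirroredRepeat(1.25f)); // 0.75
        System.out.println(clampToEdge(1.25f));    // 1.0
        System.out.println(clampToEdge(-0.5f));    // 0.0
    }
}
```

GL_CLAMP_TO_BORDER is not a pure coordinate mapping, since outside the range it substitutes the configured border color instead.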
For setting the wrapping mode you use glTexParameteri(target, pname, param) like in the following example.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_BORDER);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_BORDER);
You could also set GL_TEXTURE_WRAP_R if you are using a GL_TEXTURE_3D.
The next parameter you should set is the texture filtering. It is used when your texture is scaled to a size different from the original image size. For simple textures there are two values:
- GL_NEAREST selects the texel nearest to the texture coordinate
- GL_LINEAR calculates a weighted average of the four surrounding texels
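The weighted average that GL_LINEAR computes is just bilinear interpolation. As a rough standalone sketch (illustrative names, not an OpenGL API):

```java
/** Demonstrates the weighted average GL_LINEAR computes from four neighboring texels. */
class BilinearFilter {

    /**
     * Interpolates between the four surrounding texel values.
     * bl, br, tl, tr are the texel values at the corners; fx and fy are the
     * fractional distances of the sample point inside the texel cell.
     */
    static float bilinear(float bl, float br, float tl, float tr, float fx, float fy) {
        float bottom = bl + (br - bl) * fx;  // interpolate along the bottom edge
        float top = tl + (tr - tl) * fx;     // interpolate along the top edge
        return bottom + (top - bottom) * fy; // then interpolate between the edges
    }

    public static void main(String[] args) {
        // Sampling exactly in the middle of texels 0, 1, 2 and 3 averages them.
        System.out.println(bilinear(0f, 1f, 2f, 3f, 0.5f, 0.5f)); // 1.5
    }
}
```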
Like the wrapping mode, you set it with the glTexParameteri(target, pname, param) method.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
You may also want to generate a mipmap, which contains copies of the image at multiple levels of detail. This is simply done with a call to glGenerateMipmap(target).
glGenerateMipmap(GL_TEXTURE_2D);
To use that mipmap there are four different filtering parameters:
- GL_NEAREST_MIPMAP_NEAREST takes the mipmap level that most closely matches the pixel size and samples it with nearest neighbor interpolation
- GL_LINEAR_MIPMAP_NEAREST takes the closest mipmap level and samples it with linear interpolation
- GL_NEAREST_MIPMAP_LINEAR takes the two closest mipmap levels, samples each with nearest neighbor interpolation and averages the two results
- GL_LINEAR_MIPMAP_LINEAR takes the two closest mipmap levels, samples each with linear interpolation and averages the two results (trilinear filtering)
You set them exactly like GL_NEAREST and GL_LINEAR; note that the mipmap modes are only valid for GL_TEXTURE_MIN_FILTER.
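To get a feeling for how many images such a mipmap chain contains: each level halves the larger dimension until it reaches 1. A standalone sketch (my own helper, not an OpenGL call):

```java
/** Computes how many images a complete mipmap chain contains. */
class MipmapLevels {

    static int levels(int width, int height) {
        int size = Math.max(width, height);
        int levels = 1; // the base image is level 0
        while (size > 1) {
            size /= 2; // each level halves the larger dimension
            levels++;
        }
        return levels;
    }

    public static void main(String[] args) {
        System.out.println(levels(256, 256)); // 9 levels: 256, 128, 64, ..., 1
        System.out.println(levels(800, 600)); // 10 levels
    }
}
```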
Now that we have set the texture parameters we can upload the image data!
OpenGL itself has no function to load image files, but luckily LWJGL ships with STBImage, which lets us load a texture in a few lines. The supported formats are JPEG, PNG, TGA, BMP, PSD, GIF, HDR, PIC and PNM.
Before loading the image you have to prepare some buffers for storing the width, height and number of components of the image. Like in the last tutorial we use the MemoryStack for that.
IntBuffer w = stack.mallocInt(1);
IntBuffer h = stack.mallocInt(1);
IntBuffer comp = stack.mallocInt(1);
If you want to have your origin at the bottom left instead of the top left you can call stbi_set_flip_vertically_on_load(true), so that the image will get loaded with the origin at the bottom left.
After that you just have to call stbi_load(path, w, h, comp, req_comp), where path is the file path to your image. The variables w, h and comp store the width, height and number of components as already mentioned. With req_comp you can force the number of components per pixel to 1, 2, 3 or 4; if you set req_comp to 0 it will use the image's default number of components. If loading fails, stbi_load will return null, and you can get the error message by calling stbi_failure_reason().
stbi_set_flip_vertically_on_load(true);
ByteBuffer image = stbi_load(path, w, h, comp, 4);
if (image == null) {
throw new RuntimeException("Failed to load a texture file!"
+ System.lineSeparator() + stbi_failure_reason());
}
int width = w.get();
int height = h.get();
Alternatively you can load your picture with AWT; you can read how to do that in the appendix.
Finally we can store the image data on our GPU by calling glTexImage2D(target, level, internalFormat, width, height, border, format, type, pixels). The target should be clear by now. The level specifies the level-of-detail number; it was used for manual mipmaps in legacy OpenGL, in modern OpenGL this value should be 0. The number of color components in the texture is specified by the internalFormat, and width and height are just the width and height of the image. The border value should also be 0. The format specifies the format of the pixel data and the type specifies its data type. Last but not least, pixels is a buffer that contains the raw pixel data.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, image);
After this has been done we just need to use that texture in our shader.
By now you know how to use shaders, so let's just take a look at our new ones.
#version 150 core
in vec2 position;
in vec3 color;
in vec2 texcoord;
out vec3 vertexColor;
out vec2 textureCoord;
uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;
void main() {
vertexColor = color;
textureCoord = texcoord;
mat4 mvp = projection * view * model;
gl_Position = mvp * vec4(position, 0.0, 1.0);
}
Our vertex shader hasn't changed that much; the biggest difference is the new in vec2 texcoord attribute, which we just pass on to the fragment shader.
The OpenGL 2.1 version is similar: we add attribute vec2 texcoord, which also gets passed to the fragment shader.
#version 120
attribute vec2 position;
attribute vec3 color;
attribute vec2 texcoord;
varying vec3 vertexColor;
varying vec2 textureCoord;
uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;
void main() {
vertexColor = color;
textureCoord = texcoord;
mat4 mvp = projection * view * model;
gl_Position = mvp * vec4(position, 0.0, 1.0);
}
That shouldn't be too complicated, so what about the fragment shader? Here we use a sampler to look up the texture color.
#version 150 core
in vec3 vertexColor;
in vec2 textureCoord;
out vec4 fragColor;
uniform sampler2D texImage;
void main() {
vec4 textureColor = texture(texImage, textureCoord);
fragColor = vec4(vertexColor, 1.0) * textureColor;
}
You can see the new uniform variable of type sampler2D; it specifies the texture unit this shader samples from. To get the texture color we use texture(sampler, texCoord) and then multiply it into the final output color.
With an OpenGL 2.1 context it is almost similar.
#version 120
varying vec3 vertexColor;
varying vec2 textureCoord;
uniform sampler2D texImage;
void main() {
vec4 textureColor = texture2D(texImage, textureCoord);
gl_FragColor = vec4(vertexColor, 1.0) * textureColor;
}
The biggest difference is that we use texture2D(sampler, texCoord), because in legacy GLSL the texture(sampler, texCoord) overload is not available.
Now that we have a new attribute in our vertex shader we have to slightly change the vertex attribute specification, like in the following code.
/* Specify Vertex Pointer */
int posAttrib = program.getAttributeLocation("position");
program.enableVertexAttribute(posAttrib);
program.pointVertexAttribute(posAttrib, 2, 7 * Float.BYTES, 0);
/* Specify Color Pointer */
int colAttrib = program.getAttributeLocation("color");
program.enableVertexAttribute(colAttrib);
program.pointVertexAttribute(colAttrib, 3, 7 * Float.BYTES, 2 * Float.BYTES);
/* Specify Texture Pointer */
int texAttrib = program.getAttributeLocation("texcoord");
program.enableVertexAttribute(texAttrib);
program.pointVertexAttribute(texAttrib, 2, 7 * Float.BYTES, 5 * Float.BYTES);
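The stride and offset values above follow directly from the interleaved layout of 7 floats per vertex (2 position, 3 color, 2 texture coordinate). A small standalone sketch that derives them (hypothetical helper, not part of the tutorial's classes):

```java
import java.util.Arrays;

/** Derives stride and byte offsets for an interleaved vertex layout. */
class VertexLayout {

    /** Returns the byte offset of each attribute; component counts are in floats. */
    static int[] offsets(int... componentCounts) {
        int[] result = new int[componentCounts.length];
        int offset = 0;
        for (int i = 0; i < componentCounts.length; i++) {
            result[i] = offset;
            offset += componentCounts[i] * Float.BYTES; // each float is 4 bytes
        }
        return result;
    }

    /** The stride is the total size of one vertex in bytes. */
    static int stride(int... componentCounts) {
        int floats = 0;
        for (int c : componentCounts) {
            floats += c;
        }
        return floats * Float.BYTES;
    }

    public static void main(String[] args) {
        // position (2 floats), color (3 floats), texcoord (2 floats)
        System.out.println(stride(2, 3, 2));                      // 28
        System.out.println(Arrays.toString(offsets(2, 3, 2)));    // [0, 8, 20]
    }
}
```

These are exactly the 7 * Float.BYTES, 2 * Float.BYTES and 5 * Float.BYTES values passed to pointVertexAttribute above.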
And of course we need to set the new uniform variable. With a single texture this is completely optional, because the texture gets bound to texture unit 0 by default; it becomes important when you want to use multiple textures in your shader.
int uniTex = program.getUniformLocation("texImage");
program.setUniform(uniTex, 0);
Another point is that for creating a textured quad we would need to define six vertices instead of four, because with modern OpenGL we can only render triangles. But we can make use of an Element Buffer Object (EBO), so that our VBO contains just four vertices and the EBO points to the correct ones by index. It is generated just like a VBO, but bound to a different target.
int ebo = glGenBuffers();
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ebo);
Now you can create an IntBuffer which contains the indices into our VBO.
IntBuffer elements = stack.mallocInt(2 * 3);
elements.put(0).put(1).put(2);
elements.put(2).put(3).put(0);
elements.flip();
glBufferData(GL_ELEMENT_ARRAY_BUFFER, elements, GL_STATIC_DRAW);
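To see why six indices suffice, you can expand them by hand: every index is a lookup into the four stored vertices, and each group of three indices forms one triangle. A standalone sketch of that lookup (plain Java, no OpenGL):

```java
/** Shows how an element buffer reuses four stored vertices to draw two triangles. */
class IndexExpansion {

    /** Looks up each index in the vertex array, like glDrawElements effectively does. */
    static float[][] expand(float[][] vertices, int[] indices) {
        float[][] drawn = new float[indices.length][];
        for (int i = 0; i < indices.length; i++) {
            drawn[i] = vertices[indices[i]];
        }
        return drawn;
    }

    public static void main(String[] args) {
        float[][] quad = {{0, 0}, {1, 0}, {1, 1}, {0, 1}}; // four corners in the VBO
        int[] indices = {0, 1, 2, 2, 3, 0};                // two triangles in the EBO
        float[][] drawn = expand(quad, indices);
        // Six vertices are drawn, but only four are stored; 2 and 0 are reused.
        System.out.println(drawn.length); // 6
    }
}
```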
And our VBO contains four vertices which will center the texture on screen.
long window = GLFW.glfwGetCurrentContext();
IntBuffer widthBuffer = stack.mallocInt(1);
IntBuffer heightBuffer = stack.mallocInt(1);
GLFW.glfwGetFramebufferSize(window, widthBuffer, heightBuffer);
int width = widthBuffer.get();
int height = heightBuffer.get();
float x1 = (width - texture.getWidth()) / 2f;
float y1 = (height - texture.getHeight()) / 2f;
float x2 = x1 + texture.getWidth();
float y2 = y1 + texture.getHeight();
FloatBuffer vertices = stack.mallocFloat(4 * 7);
vertices.put(x1).put(y1).put(1f).put(1f).put(1f).put(0f).put(0f);
vertices.put(x2).put(y1).put(1f).put(1f).put(1f).put(1f).put(0f);
vertices.put(x2).put(y2).put(1f).put(1f).put(1f).put(1f).put(1f);
vertices.put(x1).put(y2).put(1f).put(1f).put(1f).put(0f).put(1f);
vertices.flip();
int vbo = glGenBuffers();
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, vertices, GL_STATIC_DRAW);
For easier scaling, the orthographic projection was also changed to a range that equals our window width and height.
Matrix4f projection = Matrix4f.orthographic(0f, width, 0f, height, -1f, 1f);
int uniProjection = glGetUniformLocation(shaderProgram, "projection");
glUniformMatrix4fv(uniProjection, false, projection.getBuffer());
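What this projection does is rescale window coordinates into OpenGL's clip space range of -1 to 1 on each axis. A standalone sketch of that mapping, assuming the ortho(0, width, 0, height) parameters used above:

```java
/** Illustrates how an orthographic projection maps pixel coordinates to clip space. */
class OrthoMapping {

    /** Maps a coordinate from the range [min, max] into clip space [-1, 1]. */
    static float toClipSpace(float value, float min, float max) {
        return 2f * (value - min) / (max - min) - 1f;
    }

    public static void main(String[] args) {
        float width = 640f, height = 480f;
        System.out.println(toClipSpace(0f, 0f, width));           // -1.0 (left edge)
        System.out.println(toClipSpace(width, 0f, width));        //  1.0 (right edge)
        System.out.println(toClipSpace(height / 2f, 0f, height)); //  0.0 (vertical center)
    }
}
```

This is why the quad vertices can simply be specified in pixels once the projection is set.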
Now that we use an EBO we no longer use glDrawArrays(mode, first, count) to render the quad; we have to change it to glDrawElements(mode, count, type, offset). The mode will be GL_TRIANGLES, count specifies how many indices to draw, type indicates the data type of the index values in the EBO, and offset should be clear.
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, 0);
At this point you should be able to create some nice scenes with textures and colors, for example you could do text rendering with textures!
That's it for this part, next time we will see how to use batch rendering to get away from static scenes.
If you want to use AWT with Mac OS X you need to set java.awt.headless to true, either by calling System.setProperty("java.awt.headless", "true") or by using the -Djava.awt.headless=true JVM argument.
If you want to load your textures without using STB you can do it with AWT's BufferedImage and ImageIO. Loading the image is pretty straightforward; you can do it with an InputStream or with a File, for example.
You can get the supported image formats with ImageIO.getReaderFileSuffixes(), but ImageIO should be able to load *.jpg, *.bmp, *.gif, *.png, *.jpeg and *.wbmp files. For other file formats you could either write your own reader or search for an external API for loading images.
InputStream in = new FileInputStream(path);
BufferedImage image = ImageIO.read(in);
After loading the image into a BufferedImage its origin is at the top left; since we don't want that, we just flip the picture vertically.
AffineTransform transform = AffineTransform.getScaleInstance(1f, -1f);
transform.translate(0, -image.getHeight());
AffineTransformOp operation = new AffineTransformOp(transform, AffineTransformOp.TYPE_NEAREST_NEIGHBOR);
image = operation.filter(image, null);
Now we have the image with its origin at the bottom left, so we can start extracting the pixels with a simple call to getRGB(startX, startY, w, h, rgbArray, offset, scansize); the scansize should be the image width and the offset should be 0.
int width = image.getWidth();
int height = image.getHeight();
int[] pixels = new int[width * height];
image.getRGB(0, 0, width, height, pixels, 0, width);
This gets the pixels in the ARGB format. With LWJGL 3 you have to upload the data using a ByteBuffer, so the last thing we have to do is put the pixels into the buffer. Note that in OpenGL a color is represented in RGBA format with each component as a single byte, so our ByteBuffer needs a capacity of width * height * 4.
We also do some bit shifting, but let's take a look at the code.
ByteBuffer buffer = stack.malloc(width * height * 4);
for (int y = 0; y < height; y++) {
for (int x = 0; x < width; x++) {
/* Pixel in ARGB format: 0xAARRGGBB */
int pixel = pixels[y * width + x];
/* Red component 0xAARRGGBB >> (4 * 4) = 0x0000AARR */
buffer.put((byte) ((pixel >> 16) & 0xFF));
/* Green component 0xAARRGGBB >> (2 * 4) = 0x00AARRGG */
buffer.put((byte) ((pixel >> 8) & 0xFF));
/* Blue component 0xAARRGGBB >> 0 = 0xAARRGGBB */
buffer.put((byte) (pixel & 0xFF));
/* Alpha component 0xAARRGGBB >> (6 * 4) = 0x000000AA */
buffer.put((byte) ((pixel >> 24) & 0xFF));
}
}
/* Do not forget to flip the buffer! */
buffer.flip();
The bit shifting may look a bit complicated, so let's take the red component as an example.
We have the pixel in ARGB format, so in hexadecimal it is represented as 0xAARRGGBB, but we only want the 0xRR part. Since each hexadecimal digit represents 4 bits, we shift the 0xGGBB part out; that is 4 * 4 bits, which means we shift the pixel 16 bits to the right and are left with 0x0000AARR. To get the red component out of that value we do a bitwise AND with the mask 0xFF, giving us 0x000000RR, which is the same as 0xRR.
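You can verify the shifts with a concrete value. Assuming a half-transparent orange pixel 0x80FF6432 (a made-up example value, not from the tutorial), the components decompose like this:

```java
/** Verifies the ARGB-to-component bit shifting with a concrete example value. */
class ArgbDecode {

    static int red(int argb)   { return (argb >> 16) & 0xFF; }
    static int green(int argb) { return (argb >> 8) & 0xFF; }
    static int blue(int argb)  { return argb & 0xFF; }
    static int alpha(int argb) { return (argb >> 24) & 0xFF; }

    public static void main(String[] args) {
        int pixel = 0x80FF6432; // A = 0x80, R = 0xFF, G = 0x64, B = 0x32
        System.out.printf("r=%02X g=%02X b=%02X a=%02X%n",
                red(pixel), green(pixel), blue(pixel), alpha(pixel));
        // prints: r=FF g=64 b=32 a=80
    }
}
```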
But enough of bit operations. After going through both loops we have all the color data in our ByteBuffer; most importantly, do not forget to flip the buffer, or it can crash your JVM!
This tutorial and its source code are licensed under the MIT license.
Written by Heiko Brumme, Copyright © 2014-2018