OpenGL & GLSL

(1)

OpenGL & GLSL

Wanho Choi

(2)
(3)

GPU

Graphics Processing Unit: a specialized device to speed up the creation of both 2D and 3D images

• By having a separate processor, the GPU frees CPU resources for other important tasks.

GPU ⊂ graphic(s) card = graphic(s) device = video card

http://graphicscardhub.com/graphics-card-component-connectors/ http://www.nvidia.co.kr/graphics-cards/geforce/pascal/kr/gtx-1080

(4)

What is OpenGL?

OpenGL is a hardware/platform-independent Open Graphics Library.

OpenGL is an interface that allows a programmer to communicate with the GPU. OpenGL is one of the industry-standard APIs.

(5)

A Very Simple OpenGL Example

#include <GL/glut.h>

void Display()
{
    glClearColor( 0.f, 0.f, 0.f, 0.f );
    glClear( GL_COLOR_BUFFER_BIT ); // actually clear with the clear color

    glBegin( GL_TRIANGLES );
    {
        glColor3f( 1.f, 0.f, 0.f );  glVertex2f( -1.f, -1.f );
        glColor3f( 0.f, 1.f, 0.f );  glVertex2f(  0.f, +1.f );
        glColor3f( 0.f, 0.f, 1.f );  glVertex2f( +1.f, -1.f );
    }
    glEnd();

    glutSwapBuffers();
}

int main( int argc, char* argv[] )
{
    glutInit( &argc, argv );
    glutInitDisplayMode( GLUT_RGBA | GLUT_DOUBLE );
    glutCreateWindow( "OpenGL Test" );
    glutDisplayFunc( Display );
    glutMainLoop();
    return 0;
}

(6)

Rendering Context vs Device Context

(7)

Computer Graphics Hardware

(8)

Single Buffering

• When a computer needs to display something on a monitor, it draws a picture (which we will call a buffer) and sends it out to the monitor.

• In the old days there was only one buffer, and it was continually being both drawn to and sent to the monitor.

• This has very large drawbacks: the monitor may scan out a half-drawn image, which shows up as flicker and tearing.

(9)

Double Buffering

• In order to combat the issues with reading from while drawing to the same buffer, double buffering is employed.

• The idea behind double buffering is that the computer only draws to one buffer (called the "back" buffer) and sends the other buffer (called the "front" buffer) to the screen.

• After the computer finishes drawing the back buffer, the program doing the drawing performs a buffer "swap".

• This swap doesn't move anything: it only changes the names of the two buffers: the front buffer becomes the back buffer and the back buffer becomes the front buffer.
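The fact that a swap exchanges names rather than pixels can be sketched in plain C. This is a hypothetical model, not driver code; `FrameBuffers`, `fb_init`, and `fb_swap` are illustrative names.

```c
#include <assert.h>

// Two buffers and a page flip: the "swap" exchanges pointers, not pixels.
typedef struct {
    int a[4], b[4];       // two tiny "images"
    int *front, *back;    // front is scanned out; back is drawn to
} FrameBuffers;

void fb_init( FrameBuffers* fb ) { fb->front = fb->a; fb->back = fb->b; }

void fb_swap( FrameBuffers* fb ) {
    int* tmp = fb->front;
    fb->front = fb->back; // the finished back buffer becomes visible
    fb->back  = tmp;      // the old front buffer is reused for drawing
}
```

Note that no pixel data is copied: only the two pointers change roles, which is why a swap is cheap.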

(10)

Double Buffering by Page Flipping

(11)

3D Graphics Coordinate Systems

https://www.ntu.edu.sg/home/ehchua/programming/opengl/CG_BasicsTheory.html

• OpenGL adopts the Right-Hand coordinate System (RHS), in which positive rotation is counter-clockwise (CCW). The standard 3D Cartesian coordinate system is a RHS.

• Some graphics software (such as Microsoft DirectX) uses the Left-Hand System (LHS), where the positive z-axis points into the screen.

(12)

Front Face vs Back Face

http://tadis.tistory.com/entry/RenderingPipeLine

glEnable( GL_CULL_FACE );
glFrontFace( GL_CCW ); // counter-clockwise winding is front-facing (the default)
glFrontFace( GL_CW );  // or: clockwise winding is front-facing
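Front or back facing is decided by the winding of the triangle after projection. The underlying test can be sketched as a signed-area computation (illustrative C, not the actual GL implementation):

```c
#include <assert.h>

// Signed area of a 2D triangle: positive if the vertices appear
// counter-clockwise (CCW), negative if clockwise (CW).
float signedArea( float x0, float y0, float x1, float y1, float x2, float y2 ) {
    // half the z-component of the cross product of the two edge vectors
    return 0.5f * ( (x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0) );
}

// With glFrontFace( GL_CCW ), a CCW-wound (positive-area) triangle is front-facing.
int isFrontFacingCCW( float x0, float y0, float x1, float y1, float x2, float y2 ) {
    return signedArea( x0, y0, x1, y1, x2, y2 ) > 0.f;
}
```

With GL_CULL_FACE enabled, triangles that fail this test (back faces) are discarded before rasterization.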

(13)
(14)

The Origin of OpenGL

• Silicon Graphics (commonly referred to as SGI) was a company founded in 1981 that specialized in 3D computer graphics and developed software and hardware.

• One software library that SGI developed was the IRIS GL (Integrated Raster Imaging System Graphical Library) API, used for generating 2D and 3D graphics on SGI's high-performance workstations.

(15)

The Origin of OpenGL

• At that time competing vendors, including Sun Microsystems, IBM and Hewlett-Packard, were also bringing 3D hardware to the market. They used another API called PHIGS.

• Because other vendors also brought new 3D hardware to the market, SGI's market share became smaller.

• To turn the tide and influence the market, SGI decided to turn IRIS GL into an open standard.
(16)

The Birth of OpenGL

• However, because SGI could not make IRIS GL an open standard due to licensing and patent issues, they created a new API based on IRIS GL, called OpenGL.

• SGI started developing OpenGL in 1991 and released it in January 1992.

• Since 2006, OpenGL has been managed by the non-profit technology consortium Khronos Group.

(17)

OpenGL ARB

• In 1992, SGI led the creation of the OpenGL Architecture Review Board (ARB). The founding companies of the ARB were SGI, Microsoft, IBM, DEC and Intel.

• Today nine companies have voting seats on the ARB, and several more attend the quarterly meetings to provide input to the evolution of OpenGL.

• The role of the OpenGL ARB is to establish and maintain the OpenGL specifications.

(18)

OpenGL Extensions

• Not all advanced hardware-specific features can be accessed through a given OpenGL version, and a new version of OpenGL is not released very often.

• Fortunately, the video-card manufacturers can and do provide OpenGL extensions. With these extensions you are able to access advanced hardware-specific features.

• If these features are adopted by many vendors, the extensions can become an official part of a future OpenGL version.

(19)

The Brief History of OpenGL

In 1992, OpenGL 1.0 was released by SGI.

In 2004, OpenGL 2.0 was released with GLSL 1.10.

In 2009, OpenGL 3.2 was released with geometry shaders.

In 2010, OpenGL 3.3 was released, and the OpenGL and GLSL version numbers were matched.

In 2010, OpenGL 4.0 was released with tessellation shaders.

In 2012, OpenGL 4.3 was released.

In 2014, OpenGL 4.5 was released; it was the latest version of OpenGL at the time of writing.

(20)

The Evolution of OpenGL

Fixed Function → Vertex / Fragment Shader → Geometry Shader → Tessellation Shader

(21)

The Evolution of OpenGL

• OpenGL 2.0 brought one of the most drastic changes to OpenGL by introducing a programmable graphics pipeline using vertex and fragment shaders.

• In the OpenGL core profile (3.1 and later), immediate mode (viz. glBegin()/glEnd()) was eliminated. Only retained-mode drawing calls of the glDraw*() and glMultiDraw*() type, feeding shaders, are allowed.

• That is to say, VBOs and VAOs became compulsory: all data must be stored in buffer objects.

(22)

Modern OpenGL

We call OpenGL 3.3 and later "modern OpenGL".

The main differences between legacy OpenGL and modern OpenGL are:

1) Modern OpenGL stores vertex data in GPU memory using Vertex Buffer Objects (VBOs).

2) Modern OpenGL allows programmers to define their own graphics pipeline using shaders.

(23)
(24)

OpenGL is a state machine.

• Here, a state means a mode. A state remains unchanged until the next change, and all subsequent API calls are based on the current state.

• For example, you can set the current color to yellow, and thereafter every object is drawn with that color until you set another current color.

• Other states: projection and viewing matrices, colors and materials, lights, textures, shaders, line width, point size, etc.
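The current-color behavior described above can be mimicked with a toy state machine. This is an illustrative sketch only; `setColor` and `emitVertex` are hypothetical names, not GL API.

```c
#include <assert.h>

// A minimal state machine mimicking glColor3f/glVertex semantics:
// every emitted vertex captures whatever the current color state is.
typedef struct { float r, g, b; } Color;

static Color currentColor = { 1.f, 1.f, 1.f }; // default state: white

typedef struct { float x, y; Color color; } Vertex;

void setColor( float r, float g, float b ) {
    currentColor.r = r; currentColor.g = g; currentColor.b = b;
}

Vertex emitVertex( float x, float y ) {
    Vertex v = { x, y, currentColor }; // the state sticks until changed
    return v;
}
```

Every vertex emitted after a `setColor` call inherits that color: the state persists until it is explicitly changed, just like the current color in the GL.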

(25)

Client-Server Model

• OpenGL is based on a client-server model.

- Client: CPU with main memory on the main board
- Server: GPU with video memory on the graphics card
(In NVIDIA CUDA, host: CPU, device: GPU)

• The client runs the main application and transfers data to the server via the bus. This is the main bottleneck for overall graphics performance.

(26)

VA: Vertex Array

• A VA in client space allows improved performance by operating on larger chunks of data at once, because it reduces the number of function calls and the redundant usage of shared vertices.

• However, a VA still needs to transfer vertex data into server space, i.e. GPU memory, usually repeatedly, because the VA is stored in client space, i.e. in system memory.

(27)

DL: Display List

• A DL is a server-side facility, so it does not suffer from the overhead of data transfer.

• But once a DL is compiled, the data in the display list cannot be modified.

(28)

VBO: Vertex Buffer Object

• A buffer is a block of memory in the video card.

• It means that you can't simply grab a pointer to it, write into it, and expect the changes to automatically affect subsequent operations.

(29)

VBO: Vertex Buffer Object

• A VBO allows storing a vertex array in server space; but unlike display lists, the data in a VBO can be read and updated by mapping the buffer into the client's memory space.

• A VBO creates a buffer (memory) for vertex attributes in high-performance memory on the server (GPU) side, and provides the same access functions to reference the arrays that are used with vertex arrays, such as glVertexPointer(), glColorPointer(), glNormalPointer(), glTexCoordPointer(), etc.

(30)

VBO: Vertex Buffer Object

• VBOs = GPU memory pools

• Another important advantage of VBOs is sharing buffer objects with many clients, like display lists and textures: since a VBO is on the server's side, multiple clients are able to access the same buffer with the corresponding identifier.

• Once the data are copied from client memory to a VBO, the GPU can access the data directly, saving the cost of transferring from client memory to server memory every frame.

(31)

Three Types of Assignment

• The memory manager will put a buffer object into the best place in memory based on the user's usage hint: GL_STATIC_DRAW, GL_DYNAMIC_DRAW, GL_STREAM_DRAW, etc.

• Recommendation (it is not mandatory!):

1) GL_STATIC_DRAW is for vertex buffers that are rendered many times, and whose contents are specified once and never change.

2) GL_DYNAMIC_DRAW is for vertex buffers that are rendered many times, and whose contents change during the rendering loop.

3) GL_STREAM_DRAW is for vertex buffers that are rendered a small number of times and then discarded.

(32)

Legacy vs Modern OpenGL

• In legacy OpenGL, there were specific functions like glVertexPointer(), glColorPointer(), etc. to specify vertex attributes and their data layout.

• In modern OpenGL, these are replaced with generic vertex attributes identified by an index (the first parameter of glVertexAttribPointer()) that is associated with a shader variable by its attribute location in the program, whether it holds coordinates, colors, or any other per-vertex data.

(33)

VAO: Vertex Array Object

• A VAO is a special type of object that encapsulates all the state associated with the vertex processor. Instead of containing the actual data, it holds references to the vertex buffers, the index buffer and the layout specification of the vertex itself.

• A VAO encapsulates the buffer objects related to a given geometry object.

• Not using VAOs is deprecated and discouraged/prohibited in the modern OpenGL core profile.

(34)

VBO vs VAO

• A VBO is a buffer which is used to hold a vertex attribute array (aka vertex array) on the GPU.

• VAO = VBOs + state

• VAOs are used in addition to VBOs in order to improve client-side (CPU-side) performance, by reducing the number of calls needed to rebind individual vertex buffers and re-set vertex attributes every time you want to change how you render.

(35)

How do VAOs work?

(36)

Why do we need VAO?

• Instead of doing all that work every frame, you do it once (at initialization), and then simply rebind the appropriate VAO for each (set of) draw call(s) that uses the associated vertex attributes.

• For instance, you can bind one VAO, then bind a VBO and an IBO and configure the vertex layout.

• You can bind another VAO, then another VBO and IBO and configure a different vertex layout.

• Whenever you bind the first VAO, the VBO and IBO associated with it are bound and the associated vertex format is used.

• Binding the second VAO will then switch to the other pair of VBO/IBO and the respective vertex format.

(37)
(38)

OpenGL Camera

• By default, the camera is situated at the origin, points down the negative z-axis, and has an up-vector of (0, 1, 0).

(39)

Perspective vs Orthographic

(40)

OpenGL Transformation Pipeline

• Coordinate system = space

Model Space (Object Space, Local Space)
→ [Modeling Transform] → World Space
→ [View Transform] → View Space (Eye Space, Camera Space)
→ [Perspective Transform] → Clip Space
→ [Perspective Division by w] → Normalized Device Space (NDC)
→ [Viewport Transform] → Window Space (x, y)

(Modeling Transform followed by View Transform = ModelView Transform)

(41)

OpenGL Transformation Pipeline

(42)

Local Space → World Space → Eye Space

(43)

Model Space vs World Space

(44)

World Space vs View Space

(45)

NDC

Normalized Device Coordinates

• A 3D point inside the truncated-pyramid frustum (eye coordinates) is mapped to a cube whose side length is 2, i.e. each coordinate lies in [-1, +1].
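The mapping into the NDC cube and on to window coordinates is pure arithmetic; a sketch of what the fixed-function stages do (illustrative C, not GL code):

```c
#include <assert.h>

// Perspective division: clip coordinates (x, y, z, w) map to NDC in [-1, 1]^3.
typedef struct { float x, y, z, w; } Vec4;
typedef struct { float x, y, z; } Vec3;

Vec3 clipToNDC( Vec4 clip ) {
    Vec3 ndc = { clip.x / clip.w, clip.y / clip.w, clip.z / clip.w };
    return ndc;
}

// Viewport transform, mirroring glViewport( vx, vy, w, h ):
// NDC x,y in [-1, 1] map to window-space pixels.
void ndcToWindow( Vec3 ndc, float vx, float vy, float w, float h,
                  float* wx, float* wy ) {
    *wx = vx + ( ndc.x + 1.f ) * 0.5f * w;
    *wy = vy + ( ndc.y + 1.f ) * 0.5f * h;
}
```

Note that the division by w is what produces the perspective foreshortening; everything before it is purely linear (matrix) math.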

(46)

NDC

(47)

Viewport Transformation

(48)
(49)

FFP: Fixed Function Pipeline

(50)

The Limitations of FFP

• The fixed functionality can only get you:

- Linear transformations
- Gouraud shading
- Limited multi-texturing operators
- etc.

(51)

Programmable Pipeline

(52)

Programmable Pipeline

(53)

Programmable Pipeline

(54)

What is a vertex?

• A vertex is the data structure which stores several attributes:

1) Space coordinates (position)
2) Color
3) Normal
4) Texture coordinates (uv or st)
5) etc.

(55)

Vertex vs Fragment

http://csc.lsu.edu/~kooima/courses/csc4356/notes/06-pipeline/raster-operations.html

(56)

Rasterization

(57)

Fragment vs Pixel

• A fragment is a single pixel-sized piece of a single primitive. Every fragment has a position, depth, and color.

• A pixel in the frame buffer may be composed of one or a few different fragments.

• If a fragment of primitive A and a fragment of primitive B have the same pixel position in the frame buffer, they will be composed to create the pixel values (color and alpha) by the depth test, blending, etc.
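The composition by depth test can be sketched as a toy model, assuming GL_LESS semantics (smaller depth wins); this is illustrative, not driver code:

```c
#include <assert.h>

// A fragment competing for a pixel, and the pixel it may be written to.
typedef struct { float depth; float r, g, b, a; } Fragment;
typedef struct { float depth; float r, g, b, a; } Pixel;

// Mimics glDepthFunc( GL_LESS ): the incoming fragment is written only
// if its depth is smaller than the depth already stored in the pixel.
void depthTest( Pixel* px, Fragment f ) {
    if( f.depth < px->depth ) {
        px->depth = f.depth;
        px->r = f.r; px->g = f.g; px->b = f.b; px->a = f.a;
    }
}
```

Running two overlapping fragments through this test leaves the nearer one visible, which is exactly the hidden-surface removal the depth buffer provides.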

(58)

Fragment vs Pixel

(59)

Shader

• Shaders based on GLSL are ancillary programs, attached to an OpenGL program, that are executed on the GPU in the rendering pipeline.

• In modern OpenGL, there are four possible shader stages in the programmable pipeline:

1) Vertex shader
2) Tessellation shader
3) Geometry shader
4) Fragment shader

(60)
(61)

Uniform vs Varying Variables

(62)

How many cores in GPU?

http://courses.cms.caltech.edu/cs101gpu/2013/lec1_pt2_history.pdf

• Prior to the GeForce 8000 / Radeon 2000 series, vertex and fragment shaders were executed on separate hardware.

• Now, CUDA cores (programmable unified processors) have replaced the separate vertex and fragment units.

(63)

How many cores in GPU?

(64)

HLSL vs GLSL vs CG

https://www.slideshare.net/Nordeus/writing-shaders-you-can-do-it

HLSL: High Level Shading Language, by Microsoft, for DirectX; runs on Windows and Xbox.

GLSL: OpenGL Shading Language, by the OpenGL ARB, for OpenGL; runs on Windows, OS X, Linux, iOS and Android.

CG: C for Graphics, by NVIDIA; deprecated, but thanks to Unity it covers all platforms.

(65)
(66)

VS: Vertex Shader

• It is invoked for every single vertex sent by the main application program.

• Its main role in the fixed functionality is to transform every vertex according to the model-view matrix and the projection matrix.

• The minimal requirement of a vertex shader is to set a position for the vertex by writing to gl_Position.

(67)

VS: Vertex Shader

• It can also do:

- Custom manipulation of the position
- Transform and normalize the vertex normal
- Per-vertex lighting
- Per-vertex color computation
- Access textures
- Prepare variables for the fragment shader

(68)

Simple Vertex Shader

• Since gl_ProjectionMatrix, gl_ModelViewMatrix, and ftransform() were removed in GLSL 1.40, we have to provide the matrices as uniform variables from the application program.

#version 430 core

uniform mat4 projectionMatrix;
uniform mat4 modelViewMatrix;

layout(location=0) in vec4 inPosition;

void main()
{
    gl_Position = projectionMatrix * modelViewMatrix * inPosition;
}

// The legacy equivalent:
void main()
{
    gl_Position = ftransform();
    // = gl_ProjectionMatrix * gl_ModelViewMatrix * gl_Vertex;
    // = gl_ModelViewProjectionMatrix * gl_Vertex;
}

(69)
(70)

FS: Fragment Shader

• It is invoked for every single fragment of every primitive.

• The minimal requirement of a fragment shader is to assign a color to the fragment.

(71)

FS: Fragment Shader

• It can also do:

- Compute per-fragment color
- Per-fragment normal computation
- Per-fragment lighting computation
- Access textures
- Apply textures

(72)

Simple Fragment Shader

• In GLSL 1.10, fragment shaders produced a single color value by writing to a predefined variable named gl_FragColor, of type vec4.

• With the release of OpenGL 3.0, most of the API was deprecated. Likewise, much of what was defined in GLSL < 1.30 was also deprecated. Programmers were required to declare named fragment shader outputs.

• With the release of OpenGL 3.3, it became possible to use layout qualifiers to statically associate fragment shader outputs with draw buffers in the GLSL source code.

http://io7m.com/documents/fso-tta/

#version 110
void main() { gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0); }

#version 330
out vec4 outColor;
void main()
{
    outColor = vec4(1.0, 0.0, 0.0, 1.0);
}

#version 430 core
layout(location=0) out vec4 outColor;
void main()
{
    outColor = vec4(1.0, 0.0, 0.0, 1.0);
}

(73)
(74)

GS: Geometry Shader

• It is invoked for every single primitive (points, lines, or triangles).

• It is the last stage of geometry processing, right before the rasterizer.

• It can create new geometry on the fly, taking the output of the vertex shader as input. (It can also reduce the amount of data.)

• It can change the primitive mode midway in the pipeline (e.g. take triangles and produce points or lines as output).

• This works great if, for instance, you want to draw particles from the input triangles.

(75)

The Input of GS from VS

• Note that the input is declared as an array, because most render primitives consist of more than one vertex, and the geometry shader receives all vertices of a primitive as its input.

in gl_PerVertex
{
    vec4  gl_Position;
    float gl_PointSize;
    float gl_ClipDistance[];
} gl_in[];

(76)

The Input of GS

• At the start of every geometry shader we need to declare the type of primitive input we're receiving from the vertex shader. We do this by declaring a layout specifier in front of the in keyword.

• This input layout qualifier can take any of the following primitive values:

- points: when drawing GL_POINTS primitives
- lines: when drawing GL_LINES or GL_LINE_STRIP
- lines_adjacency: GL_LINES_ADJACENCY or GL_LINE_STRIP_ADJACENCY
- triangles: GL_TRIANGLES, GL_TRIANGLE_STRIP or GL_TRIANGLE_FAN
- triangles_adjacency: GL_TRIANGLES_ADJACENCY or GL_TRIANGLE_STRIP_ADJACENCY

(77)

The Output of GS

• Then we also need to specify a primitive type that the geometry shader will actually output, and we do this via a layout specifier in front of the out keyword.

• Like the input layout qualifier, the output layout qualifier can take several primitive values:

- points
- line_strip
- triangle_strip

(78)
(79)

TS: Tessellation Shader

It operates on patches (usually triangles or quads). It sits directly after VS stage in the OpenGL pipeline.

It tessellates the given primitive (which is known as a patch in OpenGL) into

(80)

TS: Tessellation Shader

• A TS consists of three separate stages:

1) Tessellation Control Shader (TCS) [programmable]
2) Tessellation Primitive Generator (TPG) [fixed-function, non-programmable]
3) Tessellation Evaluation Shader (TES) [programmable]

(81)

TS: Tessellation Shader

http://victorbush.com/2015/01/tessellated-terrain/

(diagram: per-vertex data flows from the VS into the TCS, which sets gl_TessLevelOuter[] / gl_TessLevelInner[]; the TES reads gl_TessCoord and per-vertex data)

(82)

TS: Tessellation Shader

• If you want to use a TS in your own pipeline, you have to use GL_PATCHES as the primitive type.

• How to set the number of control points per patch:
glPatchParameteri( GL_PATCH_VERTICES, N );

• Usually, the maximum number of control points that can be used to form a single patch is 32.

• How to set the number of output points: in the TCS, with the layout qualifier
layout(vertices = N) out;
(83)

TCS: Tessellation Control Shader

• It is executed once per vertex in the patch before everything is passed to the next stage.

• It takes its input from the VS and is primarily responsible for two things:

1) The determination of the tessellation levels that will be sent to the TPG
2) The generation of data that will be used in the TES

• The TCS has access to the whole patch data, so the input attributes are defined as arrays.

• The TCS has the predefined variable gl_InvocationID that gives the current execution vertex index in the current patch (gl_InvocationID = 0, 1, 2 for a triangle patch).

(84)

Tessellation Levels

(85)

Isoline Tessellation

(86)

Quad Tessellation

(87)

Triangle Tessellation

(88)

Triangle Tessellation

(89)

Examples

(90)

Examples

(91)

Examples

(92)

Examples

(93)

TPG: Tessellation Primitive Generator

• It is the fixed-function part of the OpenGL pipeline.

• The TPG consumes the output from the TCS and creates new tessellated primitives (it creates additional vertices).

• It outputs positions as barycentric coordinates (u, v, w):
- Line: u, Triangle: (u, v, w), Quad: (u, v)
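The (u, v, w) coordinates emitted by the TPG serve as interpolation weights when the TES evaluates the patch. A sketch of evaluating a flat triangle patch (illustrative C, not shader code):

```c
#include <assert.h>
#include <math.h>

// A generated vertex at barycentric coordinate (u, v, w) on a triangle
// patch is the weighted sum of the three patch corners, with u+v+w = 1.
typedef struct { float x, y, z; } Vec3;

Vec3 evalTrianglePatch( Vec3 p0, Vec3 p1, Vec3 p2,
                        float u, float v, float w ) {
    Vec3 p;
    p.x = u * p0.x + v * p1.x + w * p2.x;
    p.y = u * p0.y + v * p1.y + w * p2.y;
    p.z = u * p0.z + v * p1.z + w * p2.z;
    return p;
}
```

A real TES does the same weighting with gl_TessCoord, typically followed by a displacement and the projection to clip space.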

(94)

TES: Tessellation Evaluation Shader

• It is invoked for every single vertex generated by the TPG.

• The primary purpose of the TES is to determine the final attributes of each of the tessellated vertices before they are passed to the fragment stage.

• In the TES, we have to set gl_Position for each vertex using gl_TessCoord and the control points of the patch.

• Tessellation domain types: quads, triangles, isolines
• Face winding types: cw, ccw
• Tessellation spacing types: equal_spacing, fractional_odd_spacing, fractional_even_spacing

(95)

Tessellation Mode

• For example, if the tessellation level is 3.0, an edge will be divided into three segments.

• But how many segments will an edge be divided into if the tessellation level is 3.25 or 3.75? That depends on the tessellation spacing mode.

(96)

Tessellation Spacing

• The tessellation spacing mode is defined at the top of the TES with the following syntax, in the case of a triangle patch:

layout(triangles, equal_spacing) in;

• Why does OpenGL need floating-point tessellation levels?

• An answer is that floating-point tessellation levels, combined with fractional_even_spacing and fractional_odd_spacing, allow smooth transitions between tessellation levels without popping.

(97)

Three Spacing Modes

• equal_spacing: the tessellation level is clamped to the range [1, 64] and rounded up to the nearest integer n, and the corresponding edge is divided into n equal segments.
- Example: a level of 3.75 will be rounded up to 4 and the edge will be divided into 4 identical segments.

• fractional_odd_spacing: the tessellation level is clamped to the range [1, 63] and then rounded up to the nearest odd integer n.
- Example: a level of 3.75 will be rounded up to 5 and the edge will be divided into 5 segments.
- In this case, the segments may or may not be identical.

• fractional_even_spacing: the tessellation level is clamped to the range [2, 64] and then rounded up to the nearest even integer n.
- Example: a level of 3.75 will be rounded up to 4 and the edge will be divided into 4 segments.
- In this case, the segments may or may not be identical.

(98)
(99)

GLSL: OpenGL Shading Language

• A high-level, cross-platform shading language, based on the syntax of the C language.

• It was created by the OpenGL ARB.

• Before the standardization of GLSL, programmers had to write shader code in vendor-specific languages to access individual GPU features.

• It offers cross-platform compatibility on multiple operating systems, including GNU/Linux, macOS and Windows.

(100)

Scalar Data Types

bool: conditional type; values may be either true or false
int: a signed 32-bit integer
uint: an unsigned 32-bit integer
float: a single-precision floating-point number

(101)

Vector Data Types

bvecn: a vector of booleans

ivecn: a vector of signed integers

uvecn: a vector of unsigned integers

vecn: a vector of single-precision floating-point numbers

(102)

Swizzling

• More than one component can be accessed by appending their names, from the same name set:

vec4 v;
vec3 a = v.xyz;
vec3 a = vec3( v.x, v.y, v.z ); // equivalent to the line above
vec4 b = v.xxyy;
vec3 c = v.rgb;
vec3 d = v.gbr;

(103)

Matrix

• matnxm: a matrix with n columns and m rows (example: mat3x4). OpenGL uses column-major matrices, which is standard for mathematics users.

• matn: a matrix with n columns and n rows; shorthand for matnxn.

• Double-precision matrices (GL 4.0 and above) can be declared with dmat instead of mat.

• Swizzling does not work with matrices. You can instead access a matrix's fields with array syntax:

mat3 theMatrix;
theMatrix[1] = vec3(3.0, 3.0, 3.0); // sets the second column to all 3.0s
theMatrix[2][0] = 16.0;             // sets the first entry of the third column to 16.0
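The column-major layout can be made concrete in C (an illustrative sketch of the storage rule, not a GL API): a 3x3 matrix is 9 floats laid out column by column, so element (row r, column c) lives at index c*3 + r.

```c
#include <assert.h>

// Column-major storage as OpenGL expects it.
typedef struct { float m[9]; } Mat3;

// Mirrors theMatrix[c] = vec3(x, y, z): write one whole column.
void setColumn( Mat3* M, int c, float x, float y, float z ) {
    M->m[c*3 + 0] = x;
    M->m[c*3 + 1] = y;
    M->m[c*3 + 2] = z;
}

// Mirrors theMatrix[c][r]: element at row r, column c.
float getElement( const Mat3* M, int r, int c ) {
    return M->m[c*3 + r];
}
```

This is also why a C array passed to the GL as a matrix must list the first column's three values first, not the first row's.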

(104)

GLSL Built-in Variables

• GLSL defines a number of special variables for the various shader stages.

https://www.khronos.org/opengl/wiki/Built-in_Variable_(GLSL)

VS
- Input: in int gl_VertexID; in int gl_InstanceID;
- Output: out gl_PerVertex { vec4 gl_Position; float gl_PointSize; float gl_ClipDistance[]; };

TCS
- Input: in int gl_PatchVerticesIn; in int gl_PrimitiveID; in int gl_InvocationID; in gl_PerVertex { vec4 gl_Position; float gl_PointSize; float gl_ClipDistance[]; } gl_in[gl_MaxPatchVertices];
- Output: patch out float gl_TessLevelOuter[4]; patch out float gl_TessLevelInner[2]; out gl_PerVertex { vec4 gl_Position; float gl_PointSize; float gl_ClipDistance[]; } gl_out[];

TES
- Input: in vec3 gl_TessCoord; in int gl_PatchVerticesIn; in int gl_PrimitiveID; patch in float gl_TessLevelOuter[4]; patch in float gl_TessLevelInner[2]; in gl_PerVertex { vec4 gl_Position; float gl_PointSize; float gl_ClipDistance[]; } gl_in[gl_MaxPatchVertices];
- Output: out gl_PerVertex { vec4 gl_Position; float gl_PointSize; float gl_ClipDistance[]; };

GS
- Input: in int gl_PrimitiveIDIn; in int gl_InvocationID; in gl_PerVertex { vec4 gl_Position; float gl_PointSize; float gl_ClipDistance[]; } gl_in[];
- Output: out int gl_PrimitiveID; out gl_PerVertex { vec4 gl_Position; float gl_PointSize; float gl_ClipDistance[]; };

FS
- Input: in vec4 gl_FragCoord; in bool gl_FrontFacing; in vec2 gl_PointCoord;
- Output: out vec4 gl_FragColor; out float gl_FragDepth;

(105)
(106)

Simple Fragment Shader

myVS.glsl:
void main() { gl_Position = ftransform(); }

myFS.glsl:
void main() { gl_FragColor = vec4( 1.0, 1.0, 0.0, 1.0 ); }

(107)

Per Fragment Noise

myVS.glsl:
void main() { gl_Position = ftransform(); }

myFS.glsl:
float random( vec2 p )
{
    return fract( 6791.0 * sin( 47.0 * p.x + p.y * 9973.0 ) );
}

void main()
{
    gl_FragColor = vec4( gl_FragCoord.xy/300.0, 0.0, 1.0 )
                 * random( gl_FragCoord.xy );
}

(108)

Varying Variable

myVS.glsl:
varying vec3 vColor;

void main()
{
    vColor = gl_Color.xyz;
    gl_Position = ftransform();
}

myFS.glsl:
varying vec3 vColor;

void main()
{
    gl_FragColor = vec4( vColor, 1.0 );
}

(109)

Colors by Positions

myVS.glsl:
varying vec3 position;

void main()
{
    position = gl_Vertex.xyz;
    gl_Position = ftransform();
}

myFS.glsl:
varying vec3 position;

void main()
{
    gl_FragColor = vec4( 0.5*position.x+0.5, 0.0, 0.0, 1.0 );
}

(110)

Normalized Positioned Colors

myVS.glsl:
varying vec4 np; // normalized position

void main()
{
    np = gl_Vertex;
    gl_Position = gl_ModelViewProjectionMatrix * np;
}

myFS.glsl:
varying vec4 np; // normalized position

void main()
{
    gl_FragColor = np;
}

(111)

Perlin Noise

myVS.glsl:
uniform int frame; // current frame number
varying vec4 np;   // normalized position

void main()
{
    np = gl_Vertex;
    gl_Position = gl_ModelViewProjectionMatrix * np;
}

myFS.glsl:
// Classic Perlin noise
float cnoise( vec4 P );

uniform int frame; // current frame number
varying vec4 np;   // normalized position

void main()
{
    float n = cnoise( vec4( np.xyz*10.0, frame*0.05 ) );
    gl_FragColor = vec4( vec3(n), 1.0 );
}

(112)

Perlin Noise Displacement

myVS.glsl:
// Classic Perlin noise
float cnoise( vec4 P );

uniform int frame; // current frame number
varying vec4 np;   // normalized position

void main()
{
    np = gl_Vertex;
    vec3 d = gl_Normal * cnoise( vec4( np.xyz*10.0, frame*0.05 ) );
    gl_Position = gl_ModelViewProjectionMatrix * ( np + vec4(d, 0.0) );
}

myFS.glsl:
varying vec4 np; // normalized position

void main()
{
    gl_FragColor = np;
}

(113)

Texture Coordinated Colors

myVS.glsl:
void main()
{
    gl_Position = ftransform();
    gl_TexCoord[0] = gl_MultiTexCoord0;
}

myFS.glsl:
void main()
{
    gl_FragColor = vec4( gl_TexCoord[0].st, 0.0, 1.0 );
}

(114)

Texture Coordinated Colors

myVS.glsl:
void main()
{
    gl_Position = ftransform();
    gl_TexCoord[0] = gl_MultiTexCoord0;
}

myFS.glsl:
void main()
{
    vec2 st = gl_TexCoord[0].st;
    float s = st.s;
    float t = st.t;

    if( 0.3<s && s<0.7 && 0.3<t && t<0.7 ) {
        gl_FragColor = vec4( 1.0, 0.0, 0.0, 0.0 );
    } else {
        gl_FragColor = vec4( 1.0, 1.0, 0.0, 0.0 );
    }
}

(115)

Swirl

uniform sampler2D tex0;

uniform int imgWidth;

uniform int imgHeight;

uniform int frameCount;

vec4 swirl( sampler2D tex, vec2 uv ) {

vec2 texSize = vec2( imgWidth, imgHeight );

float radius = min( texSize.x, texSize.y ) * 0.4;

float angle = sin( frameCount * 0.1 );

vec2 center = texSize * 0.5;

vec2 tc = ( uv * texSize ) - center;

float dist = length( tc );

if( dist < radius ) {

float percent = ( radius - dist ) / radius;

float theta = percent * angle;

float s = sin( theta );

float c = cos( theta );

tc *= mat2( c, -s, s, c ); }

tc += center;

return texture2D( tex, tc/texSize );

}

void main() {

vec2 st = gl_TexCoord[0].st;

gl_FragColor = swirl( tex0, st );

}

(116)

Single Texture

myVS.glsl:
void main()
{
    gl_Position = ftransform();
    gl_TexCoord[0] = gl_MultiTexCoord0;
}

myFS.glsl:
uniform sampler2D tex0;

void main()
{
    float s = gl_TexCoord[0].s;
    float t = gl_TexCoord[0].t;

    // Flip st vertically.
    gl_FragColor = texture2D( tex0, vec2(s,-t) );
}

(117)

Edge Detection

uniform sampler2D tex0;

uniform int imgWidth;

uniform int imgHeight;

float step = 0.2;

float intensity( in vec4 c )

{

return sqrt( (c.r*c.r) + (c.g*c.g) + (c.b*c.b) );

}

vec3 sobel( float dx, float dy, vec2 center ) {

float TL = intensity( texture2D( tex0, center + vec2( -dx, -dy ) ) );

float L = intensity( texture2D( tex0, center + vec2( -dx, 0 ) ) );

float BL = intensity( texture2D( tex0, center + vec2( -dx, +dy ) ) );

float T = intensity( texture2D( tex0, center + vec2( 0, -dy ) ) );

float B = intensity( texture2D( tex0, center + vec2( 0, +dy ) ) );

float TR = intensity( texture2D( tex0, center + vec2( +dx, -dy ) ) );

float R = intensity( texture2D( tex0, center + vec2( +dx, 0 ) ) );

float BR = intensity( texture2D( tex0, center + vec2( +dx, +dy ) ) );

    float x =  TL + 2.0*L + BL - TR - 2.0*R - BR;
    float y = -TL - 2.0*T - TR + BL + 2.0*B + BR;

    float color = sqrt( (x*x) + (y*y) );

    return vec3( color, color, color );
}

void main()
{
    vec2 center = vec2( gl_TexCoord[0].st.s, -gl_TexCoord[0].st.t );
    gl_FragColor = vec4( sobel( step/float(imgWidth), step/float(imgHeight), center ), 1.0 );
}

(118)

Color Quantization

myVS.glsl:
void main()
{
    gl_Position = ftransform();
    gl_TexCoord[0] = gl_MultiTexCoord0;
}

myFS.glsl:
uniform sampler2D tex0;

void main()
{
    float s = gl_TexCoord[0].s;
    float t = gl_TexCoord[0].t;

    // Flip st vertically.
    gl_FragColor = texture2D( tex0, vec2(s,-t) );

    int numColors = 10;
    float delta = 1.0 / float(numColors);

    gl_FragColor.r = trunc( gl_FragColor.r / delta ) * delta;
    gl_FragColor.g = trunc( gl_FragColor.g / delta ) * delta;
    gl_FragColor.b = trunc( gl_FragColor.b / delta ) * delta;
}

(119)

Multiple Texture

myVS.glsl:
void main()
{
    gl_Position = ftransform();
    gl_TexCoord[0] = gl_MultiTexCoord0;
}

myFS.glsl:
uniform sampler2D tex0;
uniform sampler2D tex1;

void main()
{
    float s = gl_TexCoord[0].s;
    float t = gl_TexCoord[0].t;

    vec4 sample0 = texture2D( tex0, vec2(s,-t) );
    vec4 sample1 = texture2D( tex1, vec2(s,-t) );

    // Method #1)
    //gl_FragColor = 0.5 * ( sample0 + sample1 );

    // Method #2)
    gl_FragColor = mix( sample0, sample1, 0.5 );
}

(120)

Per Vertex Lighting

myVS.glsl:
varying vec4 vColor;

void main()
{
    gl_Position = ftransform();

    vec4 P = gl_ModelViewMatrix * gl_Vertex;
    vec3 normal = gl_NormalMatrix * gl_Normal;
    vec3 eyeVec = -P.xyz;
    vec3 lightDir = vec3( gl_LightSource[0].position.xyz - P.xyz );

    vec3 N = normalize( normal );
    vec3 E = normalize( eyeVec );
    vec3 L = normalize( lightDir );
    vec3 R = reflect( -L, N );

    float Kd = max( dot( N, L ), 0.0 );
    vec4 diffuse = Kd * ( gl_LightSource[0].diffuse * gl_FrontMaterial.diffuse );

    float Ks = pow( max( dot( R, E ), 0.0 ), gl_FrontMaterial.shininess );
    vec4 specular = Ks * ( gl_LightSource[0].specular * gl_FrontMaterial.specular );

    vColor = diffuse + specular;
}

myFS.glsl:
varying vec4 vColor;

void main()
{
    gl_FragColor = vColor;
}

(121)

Per Fragment Lighting

varying vec3 normal;

varying vec3 eyeVec;

varying vec3 lightDir;

void main() {

vec3 N = normalize( normal );

vec3 E = normalize( eyeVec );

vec3 L = normalize( lightDir );

vec3 R = reflect( -L, N );

float Ks = max( dot( N, L ), 0.0 );

vec4 diffuse = Ks * ( gl_LightSource[0].diffuse * gl_FrontMaterial.diffuse );

float Kd = pow( max( dot( R, E ), 0.0 ), gl_FrontMaterial.shininess );

vec4 specular = Kd * ( gl_LightSource[0].specular * gl_FrontMaterial.specular );

gl_FragColor = diffuse + specular;

}

varying vec3 normal;

varying vec3 eyeVec;

varying vec3 lightDir;

void main() {

gl_Position = ftransform();

vec4 P = gl_ModelViewMatrix * gl_Vertex;
normal = gl_NormalMatrix * gl_Normal;

eyeVec = -P.xyz;

lightDir = vec3( gl_LightSource[0].position.xyz - P.xyz );
}

myVS.glsl

(122)

Cartoon Shader

varying vec3 normal;

varying vec3 lightDir;

void main() {

vec3 N = normalize( normal );

vec3 L = normalize( lightDir );

float intensity = dot( N, L );

if( intensity > 0.95 )      { gl_FragColor = vec4( 1.0, 0.5, 0.5, 1.0 ); }
else if( intensity > 0.5 )  { gl_FragColor = vec4( 0.6, 0.3, 0.3, 1.0 ); }
else if( intensity > 0.25 ) { gl_FragColor = vec4( 0.4, 0.2, 0.2, 1.0 ); }
else                        { gl_FragColor = vec4( 0.2, 0.1, 0.1, 1.0 ); }
}

varying vec3 normal;

varying vec3 lightDir;

void main() {

gl_Position = ftransform();

vec4 P = gl_ModelViewMatrix * gl_Vertex;
normal = gl_NormalMatrix * gl_Normal;

lightDir = vec3( gl_LightSource[0].position.xyz - P.xyz );
}

myVS.glsl
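The if-chain quantizes the continuous N·L value into four flat shade bands, which is what gives the cartoon look. A CPU sketch of the thresholding in Python (the `toon_band` helper is hypothetical; it returns a band index instead of a color):

```python
def toon_band(intensity):
    """Map a continuous N.L value to one of four discrete shade levels,
    using the same thresholds as the fragment shader."""
    if intensity > 0.95:
        return 3
    elif intensity > 0.5:
        return 2
    elif intensity > 0.25:
        return 1
    return 0

# A smooth ramp collapses into a handful of flat bands:
bands = [toon_band(i / 10.0) for i in range(11)]
print(bands)  # -> [0, 0, 0, 1, 1, 1, 2, 2, 2, 2, 3]
```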

(123)

Outline Shader

varying vec3 normal;

varying vec3 lightDir;

void main() {

vec3 N = normalize( normal );

vec3 L = normalize( lightDir );

float intensity = dot( N, L );

if( intensity < 0.15 ) { gl_FragColor = vec4( 1.0, 0.5, 0.5, 1.0 ); }
else                   { gl_FragColor = vec4( 0.0, 0.0, 0.0, 0.0 ); }
}

varying vec3 normal;

varying vec3 lightDir;

void main() {

gl_Position = ftransform();

vec4 P = gl_ModelViewMatrix * gl_Vertex;
normal = gl_NormalMatrix * gl_Normal;

lightDir = vec3( gl_LightSource[0].position.xyz - P.xyz );
}

myVS.glsl

(124)

Stripes Shader

void main() {

vec4 color1 = vec4( 0.0, 0.0, 1.0, 1.0 );

vec4 color2 = vec4( 1.0, 1.0, 0.5, 1.0 );

vec2 st = gl_TexCoord[0].st;

float x = fract( st.s * 8.0 );

// bell shape function
// 0.0 ~ 0.4: 0.0
// 0.4 ~ 0.5: 0.0 ~ 1.0
// 0.5 ~ 0.9: 1.0
// 0.9 ~ 1.0: 1.0 ~ 0.0

float f = smoothstep( 0.4, 0.5, x ) - smoothstep( 0.9, 1.0, x );

gl_FragColor = mix( color2, color1, f );

} // end of myFS.glsl

// myVS.glsl
void main()
{
    gl_Position = ftransform();
    gl_TexCoord[0] = gl_MultiTexCoord0;
}
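`smoothstep(e0, e1, x)` is 0 below `e0`, 1 above `e1`, and a cubic Hermite ramp in between; subtracting two smoothsteps yields the bell-shaped pulse that drives the stripes. A CPU sketch in Python (both helpers are hypothetical re-implementations):

```python
def smoothstep(e0, e1, x):
    """GLSL smoothstep(): clamped cubic Hermite interpolation."""
    t = max(0.0, min(1.0, (x - e0) / (e1 - e0)))
    return t * t * (3.0 - 2.0 * t)

def stripe_factor(x):
    """Bell-shaped pulse: rises over 0.4..0.5, flat at 1.0, falls over 0.9..1.0."""
    return smoothstep(0.4, 0.5, x) - smoothstep(0.9, 1.0, x)
```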

(125)

Wireframe Shader

// myFS.glsl
void main()
{
    vec2 st = gl_TexCoord[0].st;

    float s = st.s * 8.0;
    float t = st.t * 8.0;

    if( fract(s) < 0.1 || fract(t) < 0.1 ) { gl_FragColor = vec4( 0.8, 0.8, 0.8, 1.0 ); }
    else                                   { discard; }
}

// myVS.glsl
void main()
{
    gl_Position = ftransform();
    gl_TexCoord[0] = gl_MultiTexCoord0;
}
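The shader keeps a fragment only when it lies within the first 10% of one of the 8×8 texture-space cells (in either direction) and discards everything else. A CPU sketch of that test in Python (the `on_grid_line` helper is hypothetical):

```python
import math

def on_grid_line(s, t, cells=8, thickness=0.1):
    """True where the wireframe shader would draw: near a cell
    boundary in either texture direction; False means 'discard'."""
    fs = math.modf(s * cells)[0]  # fractional part, like GLSL fract()
    ft = math.modf(t * cells)[0]
    return fs < thickness or ft < thickness

# Fragments near cell edges survive; interior fragments are discarded.
```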

(126)

Spherical Environment

uniform sampler2D tex0;

varying vec3 normal;

varying vec3 position;

vec2 sphericalCoord( vec3 normal, vec3 position ) {

vec3 u = normalize( position );

vec3 r = reflect( u, normal );

float m = 2.0 * sqrt( r.x*r.x + r.y*r.y + (r.z+1.0)*(r.z+1.0) );

return vec2( r.x/m+0.5, r.y/m+0.5 );

}

void main() {

vec2 sphCrd = sphericalCoord( normal, position );

float s = sphCrd.s;

float t = sphCrd.t;

// Flip st vertically.

gl_FragColor = texture2D( tex0, vec2(s,-t) );

}

varying vec3 normal;

varying vec3 position;

void main() {

gl_Position = ftransform();

vec4 P = gl_ModelViewMatrix * gl_Vertex;
normal = gl_NormalMatrix * gl_Normal;

position = P.xyz;
}

myVS.glsl
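`sphericalCoord()` is the classic sphere-mapping formula: reflect the view direction about the surface normal, then project the reflection vector onto the 2D environment map. A CPU sketch of the same formula in Python (helper names are hypothetical):

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def reflect(i, n):
    """GLSL reflect(): i - 2*dot(n, i)*n."""
    d = sum(x * y for x, y in zip(n, i))
    return tuple(x - 2.0 * d * y for x, y in zip(i, n))

def spherical_coord(normal, position):
    """Sphere-map lookup coords, as in sphericalCoord() above."""
    u = normalize(position)
    r = reflect(u, normalize(normal))
    m = 2.0 * math.sqrt(r[0] ** 2 + r[1] ** 2 + (r[2] + 1.0) ** 2)
    return (r[0] / m + 0.5, r[1] / m + 0.5)

# A surface facing the viewer head-on samples the center of the map:
print(spherical_coord((0, 0, 1), (0, 0, -1)))  # -> (0.5, 0.5)
```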

(127)

X-ray Shader

varying vec3 normal;

varying vec3 eyeVec;

void main() {

vec3 N = normalize( normal );

vec3 E = normalize( eyeVec );

float opacity = abs( dot( N, E ) );

opacity = 1.0 - pow( opacity, 2.0 );

gl_FragColor = opacity * vec4( 0.2, 0.2, 0.2, 1.0 );

}

varying vec3 normal;

varying vec3 eyeVec;

void main() {

gl_Position = ftransform();

vec4 P = gl_ModelViewMatrix * gl_Vertex;
normal = gl_NormalMatrix * gl_Normal;

eyeVec = P.xyz;
}

myVS.glsl
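The opacity is lowest where the surface faces the viewer (|N·E| near 1) and highest at silhouettes (|N·E| near 0), which makes the interior look transparent like an X-ray film. A CPU sketch of the falloff in Python (the `xray_opacity` helper is hypothetical):

```python
def xray_opacity(n_dot_e):
    """Fresnel-like falloff: transparent face-on, opaque at grazing angles,
    mirroring 1.0 - pow(abs(dot(N, E)), 2.0) in the fragment shader."""
    return 1.0 - abs(n_dot_e) ** 2.0
```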

(128)

Geometry Shader

#version 430 core

in vec4 fColor;

out vec4 outColor;

void main() {

outColor = fColor;
}

#version 430 core

uniform mat4 projectionMatrix;

uniform mat4 modelViewMatrix;

layout(location=0) in vec4 inPosition;

layout(location=1) in vec4 inColor;

out vec4 vColor;

void main() {

gl_Position = projectionMatrix * modelViewMatrix * inPosition;

vColor = inColor;
} // end of myVS.glsl

// myGS.glsl
#version 430 core

layout(points) in;

layout(triangle_strip, max_vertices=3) out;

in vec4 vColor[];

out vec4 fColor;

void main() {

fColor = vColor[0];

gl_Position = gl_in[0].gl_Position + vec4( -0.3, -0.3, 0.0, 0.0 );

EmitVertex();

gl_Position = gl_in[0].gl_Position + vec4( +0.3, -0.3, 0.0, 0.0 );

EmitVertex();

gl_Position = gl_in[0].gl_Position + vec4( 0.0, +0.3, 0.0, 0.0 );

EmitVertex();

EndPrimitive();
}
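The geometry shader expands each input point into a triangle by emitting three copies of its position, each offset in clip space. A CPU sketch of the expansion in Python (the `expand_point` helper is hypothetical and keeps only the xy components):

```python
def expand_point(p):
    """Emit the three offset vertices the geometry shader produces
    for one input point (xy components only)."""
    offsets = [(-0.3, -0.3), (+0.3, -0.3), (0.0, +0.3)]
    return [(p[0] + dx, p[1] + dy) for dx, dy in offsets]

tri = expand_point((0.0, 0.0))
print(tri)  # -> [(-0.3, -0.3), (0.3, -0.3), (0.0, 0.3)]
```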

(129)
(130)

Tessellation Shader

#version 430 core

layout(location=0) in vec4 inPosition;

layout(location=1) in vec4 inColor;

layout(location=0) out vec4 vsPosition;

layout(location=1) out vec4 vsColor;

void main()
{
    vsPosition = inPosition;
    vsColor = inColor;
} // end of myVS.glsl

// myTC.glsl
#version 430 core

layout(vertices=4) out;

layout(location=0) in vec4 vsPosition[];

layout(location=1) in vec4 vsColor[];

layout(location=0) out vec4 tcPosition[];

layout(location=1) out vec4 tcColor[];

uniform float TessLevelInner;

uniform float TessLevelOuter;

#define ID gl_InvocationID

void main() {

if( ID == 0 ) // only for the 1st vertex of the patch.

{
    gl_TessLevelOuter[0] = 2.0;
    gl_TessLevelOuter[1] = 3.0;
    gl_TessLevelOuter[2] = 4.0;
    gl_TessLevelOuter[3] = 5.0;

    gl_TessLevelInner[0] = 7.0;
    gl_TessLevelInner[1] = 3.0;
}

tcPosition[ID] = vsPosition[ID];
tcColor[ID] = vsColor[ID];
} // end of myTC.glsl

(131)

#version 430 core

in vec4 fsColor;

out vec4 outColor;

void main() { outColor = fsColor; } // myFS.glsl

// myTE.glsl
#version 430 core

layout(quads, equal_spacing, ccw) in;

uniform mat4 projectionMatrix;

uniform mat4 modelViewMatrix;

layout(location=0) in vec4 tcPosition[];

layout(location=1) in vec4 tcColor[];

out vec4 fsColor;

void main() {

float u = gl_TessCoord.x;

float v = gl_TessCoord.y;

{

vec4 a = mix( tcPosition[3], tcPosition[2], u );
vec4 b = mix( tcPosition[0], tcPosition[1], u );
vec4 p = mix( a, b, v );

gl_Position = projectionMatrix * modelViewMatrix * p;

}

{
    vec4 a = mix( tcColor[3], tcColor[2], u );
    vec4 b = mix( tcColor[0], tcColor[1], u );
    vec4 c = mix( a, b, v );

    fsColor = c;
}

}
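For quad patches, the evaluation shader bilinearly interpolates the four patch corners from `gl_TessCoord.xy` using nested `mix()` calls. A CPU sketch in Python (helper names are hypothetical; the corner ordering matches the shader, with corners 3-2 on one edge and 0-1 on the other):

```python
def mix(a, b, t):
    """GLSL mix() over tuples: a*(1-t) + b*t per component."""
    return tuple(x * (1.0 - t) + y * t for x, y in zip(a, b))

def bilerp(c0, c1, c2, c3, u, v):
    """Nested mix() as in the evaluation shader."""
    a = mix(c3, c2, u)
    b = mix(c0, c1, u)
    return mix(a, b, v)

# At u = v = 0.5 the result is the center of the quad:
c0, c1, c2, c3 = (0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)
print(bilerp(c0, c1, c2, c3, 0.5, 0.5))  # -> (0.5, 0.5)
```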

(132)
(133)

Tessellation Shader

#version 430 core

layout(location=0) in vec4 inPosition;

layout(location=1) in vec4 inColor;

layout(location=0) out vec4 vsPosition;

layout(location=1) out vec4 vsColor;

void main()
{
    vsPosition = inPosition;
    vsColor = inColor;
} // end of myVS.glsl

// myTC.glsl
#version 430 core

layout(vertices=3) out;

layout(location=0) in vec4 vsPosition[];

layout(location=1) in vec4 vsColor[];

layout(location=0) out vec4 tcPosition[];

layout(location=1) out vec4 tcColor[];

uniform float TessLevelInner;

uniform float TessLevelOuter;

#define ID gl_InvocationID

void main() {

if( ID == 0 ) // only for the 1st vertex of the patch.

{
    gl_TessLevelOuter[0] = 3.0;
    gl_TessLevelOuter[1] = 4.0;
    gl_TessLevelOuter[2] = 5.0;

    gl_TessLevelInner[0] = 4.0;
}

tcPosition[ID] = vsPosition[ID];
tcColor[ID] = vsColor[ID];
} // end of myTC.glsl

(134)

#version 430 core

in vec4 fsColor;

out vec4 outColor;

void main() { outColor = fsColor; } // myFS.glsl

// myTE.glsl
#version 430 core

layout(triangles, equal_spacing, ccw) in;

uniform mat4 projectionMatrix;

uniform mat4 modelViewMatrix;

layout(location=0) in vec4 tcPosition[];

layout(location=1) in vec4 tcColor[];

out vec4 fsColor;

void main() {

float u = gl_TessCoord.x;

float v = gl_TessCoord.y;

float w = gl_TessCoord.z;

vec4 p = tcPosition[0] * u + tcPosition[1] * v + tcPosition[2] * w;

// Here, you can apply a displacement vector.

gl_Position = projectionMatrix * modelViewMatrix * p;

fsColor = tcColor[0] * u + tcColor[1] * v + tcColor[2] * w;
}
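For triangle patches, `gl_TessCoord` is a barycentric triple (u + v + w = 1), and the evaluation shader blends the three corner attributes with those weights. A CPU sketch in Python (the `bary_blend` helper is hypothetical):

```python
def bary_blend(p0, p1, p2, u, v, w):
    """Barycentric combination p0*u + p1*v + p2*w, per component,
    as used for both position and color in the evaluation shader."""
    return tuple(a * u + b * v + c * w for a, b, c in zip(p0, p1, p2))

# Equal weights land on the triangle centroid:
center = bary_blend((0.0, 0.0), (3.0, 0.0), (0.0, 3.0), 1/3, 1/3, 1/3)
# -> (1.0, 1.0)
```

Weight (1, 0, 0) reproduces corner 0 exactly, which is why the tessellated surface interpolates the original patch vertices.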

(135)
(136)
(137)

GPGPU

General-Purpose computing on Graphics Processing Units

• A GPU has many computing cores and is easy to parallelize.

- Fast vector and matrix multiplication
- Fast access to onboard memory

(138)

GPGPU

General-Purpose computing on Graphics Processing Units

• It is now often called GPU-accelerated computing.

• GPU-accelerated computing is the use of a GPU together with a CPU to accelerate physically based simulation, physically based rendering, deep learning, and other engineering applications.

(139)

GPU vs CPU Performance

A CPU consists of a few cores optimized for sequential serial processing.

A GPU has a massively parallel architecture consisting of thousands of smaller,

more efficient cores designed for handling multiple tasks simultaneously.

(140)


(141)

Applications

https://www.youtube.com/watch?v=Sgk-8Mj8ylI http://madebyevan.com/webgl-path-tracing/

(142)

GPU Farm

(143)
