Shader Writing for Unity

Before we begin creating our own shaders we need to understand some basics.


 

What are Shaders?

Shaders in Unity are small scripts that contain the mathematical calculations and algorithms for computing the colour of each rendered pixel, based on the lighting input and the material configuration.
A shader is simply code: a set of instructions executed on the GPU. It is a program for one of the stages of the graphics rendering pipeline. Broadly, shaders fall into two groups: vertex shaders and fragment (pixel) shaders. In a nutshell, shaders are special programs that describe how different materials are rendered.

What is a Material?
Materials are wrappers which contain a shader and the values for its properties. Hence, different materials can share the same shader, feeding it with different data.
Another way of describing Materials is that they are definitions of how a surface should be rendered, including references to textures used, tiling information, colour tints and more. The available options for a material depend on which shader the material is using.
In general materials are not much more than containers for shaders and textures that can be applied to 3D models. Most of the customization of materials depends on which shader is chosen for it, although all shaders have some common functionality. Basically a material determines object appearance and includes a reference to a shader that is used to render geometry or particles.
In summary, a shader's job is to take in 3D geometry, convert it to pixels and render it on the 2D screen. A shader can define a number of properties that affect what is displayed when your model is rendered – the stored settings of those properties are a material.
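As a rough sketch, a shader might expose a colour and a texture as properties (the shader and property names here are purely illustrative); two different materials could then reference this same shader while storing different values for _TintColor and _MainTex:

Shader "Examples/TintedTexture" {
    Properties {
        // these are only defaults – each material stores its own values
        _TintColor ("Tint Colour", Color) = (1,1,1,1)
        _MainTex ("Base Texture", 2D) = "white" {}
    }
    SubShader {
        // rendering passes would go here
    }
}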

What is the Graphics Pipeline?
The Graphics Pipeline or Rendering Pipeline refers to the sequence of steps used to create a 2D raster representation of a 3D scene.

Input Data
Data is sent into the pipeline at the Input Assembler and processed through the stages until it is displayed as a pixel on your monitor. The data is typically a 3D model: vertex positions, normal directions, tangents, texture coordinates and colours.
Even sprites, particles, and textures in your game world are usually rendered using vertices, just like a 3D model.
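In Cg/HLSL this per-vertex input is usually described with a struct whose semantics tell the GPU what each field contains; a minimal sketch (the struct and field names are only illustrative) could look like this:

struct appdata
{
    float4 vertex  : POSITION;   // object-space vertex position
    float3 normal  : NORMAL;     // surface normal direction
    float4 tangent : TANGENT;    // tangent, used for e.g. normal mapping
    float2 uv      : TEXCOORD0;  // texture coordinates
    float4 color   : COLOR;      // per-vertex colour
};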

What came before?
“The fixed pipeline” – before DirectX 8 and the OpenGL ARB assembly language extensions there was only a fixed set of operations for transforming vertices and pixels. It was impossible for a developer to change how pixels and vertices were transformed and processed after passing them to the GPU.

Stages of the Graphics Pipeline
Vertex Shader Stage
This stage is executed once per vertex and is mostly used to transform the vertex, do per-vertex calculations or prepare values for use later down the pipeline.
Hull Shader Stage (Only used for tessellation)
Takes the vertices as input control points and converts them into the control points that make up a patch (a fraction of a surface).
Domain shader stage (Only used for tessellation)
This stage calculates the vertex position of a point in the patch created by the Hull Shader.
Geometry Shader Stage
A geometry shader is an optional program that takes a primitive (a point, line, triangle etc.) as input and can modify it, remove it, or add new geometry to it.
Pixel Shader Stage
The pixel shader (also known as a fragment shader in the OpenGL world) is executed once per pixel, giving colour to that pixel. It gets its input from the earlier stages in the pipeline and is mostly used for calculating surface properties, lighting, and post-process effects.
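To make the flow between the two stages you will write most often concrete, here is a minimal sketch of a vertex/fragment pair in Cg (the struct and function names are only illustrative): the vertex function transforms the position and passes the UVs along, the rasteriser interpolates them, and the fragment function then runs once per pixel:

struct v2f
{
    float4 pos : SV_POSITION;   // clip-space position, consumed by the rasteriser
    float2 uv  : TEXCOORD0;     // interpolated and handed to the fragment shader
};

v2f vert(float4 vertex : POSITION, float2 uv : TEXCOORD0)
{
    v2f o;
    o.pos = mul(UNITY_MATRIX_MVP, vertex);  // object space -> clip space
    o.uv  = uv;                             // pass the texture coordinates through
    return o;
}

fixed4 frag(v2f i) : SV_Target
{
    // colour the pixel using the interpolated UVs (just for illustration)
    return fixed4(i.uv, 0, 1);
}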

Optimize!
Each of the stages above is usually executed thousands of times per frame and can be a bottleneck in the graphics pipeline. A simple cube built from 12 triangles is submitted as roughly 36 vertices, so the vertex shader stage will run about 36 times every frame; if you aim for 60 fps, that is over 2,000 executions per second for a single cube. Optimize as much as you can.

Unity’s Rendering Pipeline
So with shaders we can define how our object will appear in the game world and how it will react to lighting. How lights affect the object depends on the passes of the shader and on which rendering path is used. The rendering path can be changed in Unity's Player Settings, or it can be overridden per camera with the 'Rendering Path' setting in the Inspector. Unity has three rendering paths: Vertex Lit, Forward Rendering and Deferred Rendering. If the graphics card can't handle the currently selected rendering path, Unity will fall back and use another one. For example, if deferred rendering isn't supported by the graphics card, Unity will automatically use forward rendering; if forward rendering is not supported, it will change to vertex lit. Since all shaders are influenced by the rendering path that is set, I will briefly describe what each rendering path does.

Vertex Lit
Vertex Lit is the simplest lighting mode available. It has no support for real-time shadows. It is commonly used on old computers with limited hardware. Internally it will calculate lighting from all lights at the object vertices in one pass. Since lighting is done on a per-vertex level, per-pixel effects are not supported.

Forward Rendering
Forward rendering renders each object in one or more passes, depending on the lights that affect it. Lights are treated differently depending on their settings and intensity. When forward rendering is used, a number of the brightest lights affecting the object (up to the Pixel Light Count set in the Quality settings) are rendered with full per-pixel lighting. In addition, up to four point lights are calculated per-vertex, and all other lights are approximated as Spherical Harmonics. Whether a light is rendered per-pixel depends on several things: lights with their Render Mode set to Not Important are always per-vertex or spherical harmonics, while the brightest lights, and lights with Render Mode set to Important, are always calculated per-pixel. Forward rendering is the default rendering path in Unity.
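In ShaderLab these forward-path passes are identified by their LightMode tag; a sketch of how a forward shader typically splits its work (pass bodies omitted) might look like this:

SubShader {
    // main pass: brightest directional light, ambient/spherical harmonics, lightmaps
    Pass {
        Tags { "LightMode" = "ForwardBase" }
        // ...
    }
    // additive pass: executed once per additional per-pixel light
    Pass {
        Tags { "LightMode" = "ForwardAdd" }
        Blend One One   // add this light's contribution on top of the base pass
        // ...
    }
}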

Deferred Rendering
In deferred rendering there is no limit on the number of lights that can affect an object, and all lights are calculated on a per-pixel basis. This means that all lights interact with normal maps and so on. Lights can also have cookies and shadows. Since all lights are calculated per-pixel, it works well on big polygons. Deferred rendering is only available in Unity Pro.

Creating a Shader in Unity
1.) First we need a 3D model with a material on it which will use our new shader. (Add a Sphere.)
2.) Create a shader – a surface shader.
3.) Add a material, set the shader this material uses to our new shader, then set the 3D model's Mesh Renderer material to this new material.
4.) This is what our ShaderLab shader is structured like at the start:
Shader "Category/ShaderName" {
    Properties { }
    SubShader {
        Pass {
            CGPROGRAM
            // your shaders here
            ENDCG
        }
    }
    SubShader {
    }
    SubShader {
    }
    SubShader {
    }
    FallBack "FallbackShaderName"
}
The category is used to place the shader in the shader dropdown, and the name is used to identify it. Next, each shader can have many properties. These can be plain numbers and floats, colour data or textures. ShaderLab has a way of defining these so they look clean and user friendly in the Unity Inspector.
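As a sketch, a Properties block declaring one of each common type might look like this (the property names are purely illustrative):

Properties {
    _MyFloat ("A float", Float) = 0.5
    _MyRange ("A range", Range(0.0, 1.0)) = 0.5
    _MyColor ("A colour", Color) = (1,1,1,1)
    _MyVector ("A vector", Vector) = (0,0,0,0)
    _MyTex ("A 2D texture", 2D) = "white" {}
}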
Now we need to define at least one SubShader so our object can be displayed. We can have more than one SubShader; Unity will pick the first one that runs on the graphics card. Each SubShader defines a list of rendering passes, and each pass causes the geometry to be rendered once. Generally speaking you want to use the minimum number of passes possible, since with every added pass performance goes down because the object is rendered again. A pass can be defined in three ways: a regular pass, a UsePass or a GrabPass.
The ‘UsePass’ command is used when we want to reuse a pass from another shader. This can help reduce code duplication.
The ‘GrabPass’ is a special pass: it grabs the contents of the screen where the object is about to be drawn into a texture, which can then be used for more advanced image-based effects. A regular pass sets various states for the graphics hardware; for example we could turn vertex lighting on or off, set the blending mode, or set fog parameters.
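A sketch of how these three kinds of pass can sit inside a SubShader (the shader path given to UsePass is only an example and must match a named pass in an existing shader):

SubShader {
    // reuse a named pass from another shader (the path/pass name here are illustrative)
    UsePass "Legacy Shaders/Diffuse/FORWARD"

    // capture what is already on screen behind the object into _GrabTexture
    GrabPass { }

    // a regular pass: set render state, then run our own programs
    Pass {
        Blend SrcAlpha OneMinusSrcAlpha   // standard alpha blending
        ZWrite Off                        // don't write to the depth buffer
        CGPROGRAM
        // vertex and fragment programs go here
        ENDCG
    }
}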
Inside each SubShader there needs to be at least one pass, as a shader can be executed in multiple passes. Try to keep the number of passes to a minimum for performance reasons; each pass renders the geometry once before moving on to the next pass. Most shaders will only need one pass.
Your shader implementation goes inside the pass, surrounded by CGPROGRAM and ENDCG (or GLSLPROGRAM and ENDGLSL if you want to write GLSL directly). Unity will cross-compile the Cg to optimized GLSL or HLSL depending on the platform.
Then we have the fallback. If none of the SubShaders will work on the hardware, we can fall back to another, simpler shader such as the Diffuse shader.
Here we have an example of a shader that takes in ambient light.
1.) Category and name, can be whatever you want.
Shader "UnityShaderExample/SimpleAmbientLight" {
2.) Properties: first the name of the property, then a display name that will show up in the Unity Editor, then the property type and a default value
  Properties {
        _AmbientLightColor ("Ambient Light Color", Color) = (1,1,1,1)
        _AmbientLighIntensity ("Ambient Light Intensity", Range(0.0, 1.0)) = 1.0
    }
3.) Sub Shaders
    SubShader 
    {
4.) Passes per Sub Shader
        Pass 
        {
            CGPROGRAM
5.) Define Shader Compilation Target
#pragma target 2.0
6.) Define the name of the function that will be used as the vertex shader
#pragma vertex vertexShader 
7.) Define the name of the function that will be used as the fragment shader
#pragma fragment fragmentShader
8.) Define the variables that the properties point at; these must have the same names as the property names above
            fixed4 _AmbientLightColor;
            float _AmbientLighIntensity;
9.) Vertex Shader
            float4 vertexShader(float4 v:POSITION) : SV_POSITION
            {
                return mul(UNITY_MATRIX_MVP, v);
            }
10.) Pixel Shader
            fixed4 fragmentShader() : SV_Target
            {
                return _AmbientLightColor * _AmbientLighIntensity;
            }
            ENDCG
        }
    }
}

What is this Shader doing?
The Vertex Shader
The Vertex Shader is doing one thing only, and that is a matrix calculation. The function takes one input, the vertex position, and it has one output: the transformed position of the vertex (SV_POSITION) in screen space – the position of the vertex on the screen, stored in the return value of the function. This value is obtained by multiplying the vertex position (currently in local space) with the combined Model, View and Projection matrices, easily obtained through Unity's built-in state variable UNITY_MATRIX_MVP.
This is done to position the vertices at the correct place on your monitor, based on where the camera is (view) and the projection.
SV_POSITION is a semantic, and semantics are used to pass data between the different shader stages in the programmable pipeline. SV_POSITION is interpreted by the rasterizer stage. Think of it as one of many registers on the GPU that you can store values in. This semantic holds a vector value (XYZW), and since the value is stored in SV_POSITION, the GPU knows that its intended use is positioning.
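To make the matrix step concrete, here is the same transform written out in two stages (a sketch using Unity's older built-in variables; the exact names can differ between Unity versions):

// object (local) space -> world space, using the model matrix
float4 worldPos = mul(_Object2World, v);
// world space -> view space -> projection (clip) space, using the combined view-projection matrix
float4 clipPos = mul(UNITY_MATRIX_VP, worldPos);
// ...which is what mul(UNITY_MATRIX_MVP, v) does in a single step.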

The Pixel Shader
This is where all the colouring happens and where our algorithm is implemented. The algorithm doesn't need any input yet, as we won't do any advanced lighting calculations (we will learn this in the next tutorial). The output is the RGBA value of our pixel colour, stored in SV_Target (the render target, our final output).

Unity Shaders Reference Material
This table of mathematical functions from the Nvidia Developer Zone is a great help.
