POM and RCSM
In my earlier article, Implementing Relaxed Cone Step Mapping in Unity (在Unity里实现松散圆锥步进Relaxed Cone Step Mapping), I already covered how parallax occlusion mapping and relaxed-cone-step relief mapping are computed. That article never looked into computing a proper depth value, though, and was already long enough that I did not want to expand it further, so this post is effectively its sequel. For brevity, I will refer to both techniques collectively as parallax mapping.
Computing a depth value for parallax mapping is a very natural idea: other objects may well be placed on, and intersect, a parallax-mapped surface. Without special handling, depth comparisons use the depth of the underlying mesh, so those objects are not occluded correctly and the sense of realism that parallax mapping provides is weakened. I searched around online and found no article on computing depth values for parallax mapping, hence this write-up.
The Lit Shader in Unity's High Definition Render Pipeline (HDRP) supports a pixel depth offset and exposes three parameters for it: Primitive Length, Primitive Width, and Amplitude. Amplitude controls the strength of the parallax effect; one unit of it does not correspond directly to one meter in world space, but larger values look deeper, so it can simply be tuned by eye in real time. The other two parameters are stranger: they depend on the size of the mesh. With the same material you have to enter 1 on a Quad but 10 on a Plane, and what kind of interface is that? Unreal Engine, by contrast, provides a POM node whose inputs and outputs are left entirely to the user, so a direct comparison is not really possible.
Recap of the POM computation
Parallax mapping usually does not march in world space directly. Instead, the world-space view direction viewWS is first transformed into tangent space as viewTS, and the marching happens there. Conventionally _ParallaxIntensity controls the depth of the parallax effect, so it is used to scale the marching distance along z; but to make comparisons against the heights stored in the height map convenient, viewTS is first normalized by its z component, and _ParallaxIntensity is instead multiplied into the xy components of viewTS during the march. After that it is the usual loop of comparing depths and stepping onward.
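The core of that conventional loop looks roughly like the sketch below, where uv is the starting texture coordinate and sampleHeight is the helper from the shaders later in this post (the complete version, including the interpolation between the final two steps, is in those shaders):
float3 v = viewTS / viewTS.z; // normalize so one depth layer is one unit of z
float2 deltaUV = v.xy * _ParallaxIntensity / _ParallaxIteration;
float d = 0.0f;
[unroll(30)]
for (; d < sampleHeight(uv); d += 1.0f / _ParallaxIteration)
{
uv -= deltaUV; // march until the ray dips below the height field
}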
But why tangent space? Because the tangent and bitangent point along the positive x and y directions of the texture UVs, so transforming the view direction into tangent space is really a way of transforming it into UV space, or texture space (given its similarity to tangent space, we will still abbreviate it TS). This is where the most important problem appears: the world-space tangent obtained from Unity's GetVertexNormalInputs is normalized and has lost the object's own scale. So we should really first transform the world-space view direction viewWS into object space as viewOS, and then use the object-space TBN matrix to transform viewOS into tangent space as viewTS. But, as noted above, our real target is texture space, and tangent space and texture space are not the same thing. This is exactly why Unity's HDRP needs the extra Primitive Length and Primitive Width parameters: they apply an additional scale that maps tangent space onto texture space.
The meaning of these two parameters should be the object-space length covered by one unit of the texture-space x and y axes, which we will denote uvScale. At this point we can also formally introduce _ParallaxIntensity: it should mean the object-space depth corresponding to a height-map value of 0. To go from texture space to object space, we simply multiply the x, y and z components by uvScale.x, uvScale.y and _ParallaxIntensity respectively. _ParallaxIntensity can be exposed as a material input, while uvScale is a per-mesh quantity that we can compute in a geometry shader.
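As a minimal sketch of that mapping (dirTS and dirOS are hypothetical names, and the object-space tangent frame is assumed orthonormal here; the general matrix form follows in the next section):
// Texture space -> object space: one UV unit spans uvScale units,
// one height-map unit spans _ParallaxIntensity units.
float3 dirOS = dirTS * float3(uvScale.x, uvScale.y, _ParallaxIntensity);
// Object space -> texture space: divide by the same factors.
float3 dirTS2 = dirOS / float3(uvScale.x, uvScale.y, _ParallaxIntensity);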
Computing uvScale
As stated above, uvScale is the object-space length covered by one unit of the texture-space x and y components. For two vertices v0 and v1, the texture-space x and y components are simply the difference of their UVs, and the object-space length is the distance between the two vertices; to relate it to texture space we project that edge onto the tangent and the bitangent, and dividing the projected length by the UV difference gives the uvScale we need. For Unity's built-in meshes this matches the HDRP values above: the 1-unit Quad with UVs spanning 0 to 1 gives uvScale = 1, while the 10-unit Plane gives uvScale = 10. Because two of a triangle's three vertices may have a zero rate of change in one of the UV components, which would make the uvScale computation divide by zero, we simply switch to the third vertex when we detect that case.
The texture-space transform
After obtaining the object-space tangent, bitangent and normal, we need to scale these vectors by uvScale and _ParallaxIntensity to form the three basis vectors of texture space. Because of that scaling, a matrix built the usual way as float3x3(tangentOS * uvScale.x, bitangentOS * uvScale.y, normalOS * _ParallaxIntensity) is no longer orthogonal; it is in fact the transpose of the texture-space-to-object-space transform. Therefore, to take the object-space view direction viewOS into texture space as viewTS, we left-multiply viewOS by the inverse of this matrix's transpose; to take the texture-space viewTS back to object space, we left-multiply viewTS by this matrix's transpose.
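A sketch of the two transforms described above (rowTBN, tex2objOS and backOS are hypothetical names; spvInverse is the inverse helper used in the shaders below, which build the transpose directly as t2wOS):
// Rows are the scaled basis vectors, so this is the transpose of the
// texture-space-to-object-space matrix.
float3x3 rowTBN = float3x3(tangentOS * uvScale.x,
bitangentOS * uvScale.y,
normalOS * _ParallaxIntensity);
float3x3 tex2objOS = transpose(rowTBN); // texture space -> object space
float3 viewTS = mul(spvInverse(tex2objOS), viewOS); // object -> texture
float3 backOS = mul(tex2objOS, viewTS); // texture -> object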
Obtaining the depth
This part is comparatively simple. While marching in texture space we know len, the depth marched along the z direction in texture space. Since viewTS is normalized by dividing by its z component, we only need to multiply the un-normalized -viewTS by len and divide by the z component to obtain the total marching vector in texture space. Transform it to object space and then to world space, add it to the world-space position of the current point, transform into clip space, and the z component divided by the w component is the depth value we need.
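Assuming positionWS is the interpolated world-space position of the fragment and tex2objOS is the texture-to-object matrix from the sketch above, the depth write is only a few lines (a sketch of what the fragment shaders below do):
float3 offsetTS = -viewTS * (len / viewTS.z); // total texture-space march
float3 offsetOS = mul(tex2objOS, offsetTS); // texture -> object
float3 posWS = positionWS + mul((float3x3)UNITY_MATRIX_M, offsetOS); // object -> world
float4 posCS = mul(UNITY_MATRIX_VP, float4(posWS, 1.0f));
depth = posCS.z / posCS.w; // written to SV_Depth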
The code
This is only a feasibility study; there ought to be a way to simplify the matrix-inverse step. If the world-space tangent, bitangent and normal were computed without normalization, we would not need to go through object space before transforming into texture space.
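One candidate simplification, which I have not applied in the shaders below: since the matrix is assembled from three known column vectors, its inverse can be built from cross products (the dual basis) instead of the general adjugate helper spvInverse. A sketch (inverseFromColumns is a hypothetical name):
// Inverse of a matrix whose columns are t, b and n, via the dual basis.
float3x3 inverseFromColumns(float3 t, float3 b, float3 n)
{
float3 r0 = cross(b, n);
float3 r1 = cross(n, t);
float3 r2 = cross(t, b);
float invDet = 1.0f / dot(t, r0); // determinant = dot(t, cross(b, n))
return float3x3(r0 * invDet, r1 * invDet, r2 * invDet); // rows of the inverse
}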
POMShader.shader
Shader "zznewclear13/POMShader"
{
Properties
{
[Toggle(OUTPUT_DEPTH)] _OutputDepth ("Output Depth", Float) = 1
_BaseColor("Base Color", Color) = (1, 1, 1, 1)
_MainTex ("Texture", 2D) = "white" {}
_HeightMap("Height Map", 2D) = "white" {}
_NormalMap("Normal Map", 2D) = "bump" {}
_NormalIntensity("Normal Intensity", Range(0, 2)) = 1
_ParallaxIntensity ("Parallax Intensity", Float) = 1
_ParallaxIteration ("Parallax Iteration", Float) = 15
}
HLSLINCLUDE
#include "Packages/com.unity.render-pipelines.core/ShaderLibrary/Common.hlsl"
#include "Packages/com.unity.render-pipelines.universal/ShaderLibrary/Core.hlsl"
#include "Packages/com.unity.render-pipelines.universal/ShaderLibrary/Lighting.hlsl"
#pragma shader_feature OUTPUT_DEPTH
sampler2D _MainTex;
sampler2D _HeightMap;
sampler2D _NormalMap;
CBUFFER_START(UnityPerMaterial)
float4 _BaseColor;
float4 _MainTex_ST;
float _NormalIntensity;
float _ParallaxIntensity;
float _ParallaxIteration;
CBUFFER_END
struct a2v
{
float4 positionOS : POSITION;
float3 normalOS : NORMAL;
float4 tangentOS : TANGENT;
float2 texcoord : TEXCOORD0;
};
struct v2g
{
float4 positionCS : SV_POSITION;
float3 positionOS : TEXCOORD0;
float3 positionWS : TEXCOORD1;
float4 tangentOS : TEXCOORD2;
float3 bitangentOS : TEXCOORD3;
float3 normalOS : TEXCOORD4;
float2 texcoord : TEXCOORD5;
};
struct g2f
{
float4 positionCS : SV_POSITION;
float2 uv : TEXCOORD1;
float4 tbnWSPos[3] : TEXCOORD2; // tbnWS, posWS
float4 tbnOSView[3] : TEXCOORD5; // tbnOS, viewWS
float2 uvScale : TEXCOORD8;
};
v2g vert(a2v input)
{
v2g output = (v2g)0;
VertexPositionInputs vpi = GetVertexPositionInputs(input.positionOS.xyz);
VertexNormalInputs vni = GetVertexNormalInputs(input.normalOS, input.tangentOS);
output.positionCS = vpi.positionCS;
output.positionOS = input.positionOS.xyz;
output.positionWS = vpi.positionWS;
output.normalOS = input.normalOS;
output.tangentOS = input.tangentOS;
output.bitangentOS = cross(input.normalOS, input.tangentOS.xyz) * input.tangentOS.w * GetOddNegativeScale();
output.texcoord = input.texcoord;
return output;
}
[maxvertexcount(3)]
void geom(triangle v2g IN[3], inout TriangleStream<g2f> tristream)
{
float3 camWS = GetCameraPositionWS();
g2f output = (g2f)0;
float3 posDiff01 = IN[1].positionOS - IN[0].positionOS;
float3 posDiff02 = IN[2].positionOS - IN[0].positionOS;
float3 tangentOS0 = IN[0].tangentOS.xyz;
float3 bitangentOS0 = IN[0].bitangentOS;
float2 uvDiff01 = IN[1].texcoord - IN[0].texcoord;
float2 uvDiff02 = IN[2].texcoord - IN[0].texcoord;
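// uvScale: object-space length covered by one UV unit, measured by projecting
// a triangle edge onto the tangent/bitangent. If the 0-1 edge has no UV change
// in a component, fall back to the 0-2 edge to avoid dividing by zero.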
float2 uvScale;
if (uvDiff01.x != 0.0f) uvScale.x = dot(posDiff01, tangentOS0) / uvDiff01.x;
else uvScale.x = dot(posDiff02, tangentOS0) / uvDiff02.x;
if (uvDiff01.y != 0.0f) uvScale.y = dot(posDiff01, bitangentOS0) / uvDiff01.y;
else uvScale.y = dot(posDiff02, bitangentOS0) / uvDiff02.y;
for (int i=0; i<3; ++i)
{
v2g input = IN[i];
VertexNormalInputs vni = GetVertexNormalInputs(input.normalOS, input.tangentOS);
float3 viewWS = camWS - input.positionWS;
output.positionCS = input.positionCS;
output.uv = input.texcoord;
output.tbnWSPos[0] = float4(vni.tangentWS, input.positionWS.x);
output.tbnWSPos[1] = float4(vni.bitangentWS, input.positionWS.y);
output.tbnWSPos[2] = float4(vni.normalWS, input.positionWS.z);
output.tbnOSView[0] = float4(input.tangentOS.xyz, viewWS.x);
output.tbnOSView[1] = float4(input.bitangentOS, viewWS.y);
output.tbnOSView[2] = float4(input.normalOS, viewWS.z);
output.uvScale = uvScale;
tristream.Append(output);
}
tristream.RestartStrip();
}
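// The height map stores height; invert it so 0 is the surface and 1 the deepest point.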
float sampleHeight(float2 uv)
{
return 1.0f - tex2D(_HeightMap, uv).r;
}
float2 parallax(float2 uv, float3 view, out float len)
{
float numLayers = _ParallaxIteration;
float layerDepth = 1.0f / numLayers;
float2 p = view.xy;
float2 deltaUVs = p / numLayers;
float texd = sampleHeight(uv);
float d = 0.0f;
[unroll(30)]
for (; d < texd; d += layerDepth)
{
uv -= deltaUVs;
texd = sampleHeight(uv);
}
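// Interpolate between the last step above and the first step below the
// height field to refine both the uv and the marched depth len.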
float2 lastUVs = uv + deltaUVs;
float lastD = d - layerDepth;
float after = texd - d;
float before = sampleHeight(lastUVs) - d + layerDepth;
float w = after / (after - before);
len = lerp(d, lastD, w);
return lerp(uv, lastUVs, w);
}
// Returns the determinant of a 2x2 matrix.
float spvDet2x2(float a1, float a2, float b1, float b2)
{
return a1 * b2 - b1 * a2;
}
// Returns the inverse of a matrix, computed as the classical adjoint
// divided by the determinant. The input matrix is left unchanged.
float3x3 spvInverse(float3x3 m)
{
float3x3 adj; // The adjoint matrix (inverse after dividing by determinant)
// Create the transpose of the cofactors, as the classical adjoint of the matrix.
adj[0][0] = spvDet2x2(m[1][1], m[1][2], m[2][1], m[2][2]);
adj[0][1] = -spvDet2x2(m[0][1], m[0][2], m[2][1], m[2][2]);
adj[0][2] = spvDet2x2(m[0][1], m[0][2], m[1][1], m[1][2]);
adj[1][0] = -spvDet2x2(m[1][0], m[1][2], m[2][0], m[2][2]);
adj[1][1] = spvDet2x2(m[0][0], m[0][2], m[2][0], m[2][2]);
adj[1][2] = -spvDet2x2(m[0][0], m[0][2], m[1][0], m[1][2]);
adj[2][0] = spvDet2x2(m[1][0], m[1][1], m[2][0], m[2][1]);
adj[2][1] = -spvDet2x2(m[0][0], m[0][1], m[2][0], m[2][1]);
adj[2][2] = spvDet2x2(m[0][0], m[0][1], m[1][0], m[1][1]);
// Calculate the determinant as a combination of the cofactors of the first row.
float det = (adj[0][0] * m[0][0]) + (adj[0][1] * m[1][0]) + (adj[0][2] * m[2][0]);
// Divide the classical adjoint matrix by the determinant.
// If determinant is zero, matrix is not invertible, so leave it unchanged.
return (det != 0.0f) ? (adj * (1.0f / det)) : m;
}
float4 frag(g2f input
#if defined(OUTPUT_DEPTH)
, out float depth : SV_DEPTH
#endif
) : SV_TARGET
{
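// Scale the object-space TBN by uvScale and _ParallaxIntensity to get the
// texture-space basis vectors, laid out as columns of the texture-to-object matrix.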
float3 tos = input.tbnOSView[0].xyz * input.uvScale.x;
float3 bos = input.tbnOSView[1].xyz * input.uvScale.y;
float3 nos = input.tbnOSView[2].xyz * _ParallaxIntensity;
float3x3 t2wOS = float3x3(tos.x, bos.x, nos.x,
tos.y, bos.y, nos.y,
tos.z, bos.z, nos.z);
float3 viewWS = float3(input.tbnOSView[0].w, input.tbnOSView[1].w, input.tbnOSView[2].w);
float3 viewOS = mul((float3x3)UNITY_MATRIX_I_M, viewWS);
float3 viewTS = mul(spvInverse(t2wOS), viewOS);
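// Guard against a near-zero z at grazing angles while preserving its sign.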
float z = max(abs(viewTS.z), 1e-5) * (viewTS.z >= 0.0f ? 1.0f : -1.0f);
float len;
float2 uv = parallax((input.uv * _MainTex_ST.xy + _MainTex_ST.zw), viewTS * float3(_MainTex_ST.xy, 1.0f) / z, len);
#if defined(OUTPUT_DEPTH)
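// Reconstruct the displaced position: the total texture-space march is -viewTS * len / z,
// taken back to object then world space, then projected to get clip-space depth.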
float3 offsetTS = -viewTS * (len / z);
float3 offsetOS = mul(t2wOS, offsetTS);
float3 positionWS = float3(input.tbnWSPos[0].w, input.tbnWSPos[1].w, input.tbnWSPos[2].w);
float3 posWS = positionWS + mul((float3x3)UNITY_MATRIX_M, offsetOS);
float4 posCS = mul(UNITY_MATRIX_VP, float4(posWS, 1.0f));
depth = posCS.z / posCS.w;
#endif
float4 mainTex = tex2D(_MainTex, uv) * _BaseColor;
float3 normalTS = normalize(UnpackNormalScale(tex2D(_NormalMap, uv), _NormalIntensity));
float3 tws = input.tbnWSPos[0].xyz;
float3 bws = input.tbnWSPos[1].xyz;
float3 nws = input.tbnWSPos[2].xyz;
float3 n = normalize(mul(normalTS, float3x3(tws, bws, nws)));
Light mainLight = GetMainLight();
float ndotl = max(0.0f, dot(n, mainLight.direction));
float3 color = mainTex.rgb * mainLight.color * ndotl;
float alpha = mainTex.a;
return float4(color, alpha);
}
ENDHLSL
SubShader
{
Tags{ "RenderType"="Opaque" "Queue"="Geometry"}
Cull Back
Pass
{
HLSLPROGRAM
#pragma vertex vert
#pragma geometry geom
#pragma fragment frag
ENDHLSL
}
}
}
RCSMShader.shader
Shader "zznewclear13/RCSMShader"
{
Properties
{
[Toggle(OUTPUT_DEPTH)] _OutputDepth ("Output Depth", Float) = 1
_BaseColor("Base Color", Color) = (1, 1, 1, 1)
_MainTex ("Texture", 2D) = "white" {}
_RCSMTex("RCSM Texture", 2D) = "white" {}
_NormalMap("Normal Map", 2D) = "bump" {}
_NormalIntensity("Normal Intensity", Range(0, 2)) = 1
_ParallaxIntensity("Parallax Intensity", Float) = 1
_ParallaxIteration("Parallax Iteration", Float) = 15
}
HLSLINCLUDE
#include "Packages/com.unity.render-pipelines.core/ShaderLibrary/Common.hlsl"
#include "Packages/com.unity.render-pipelines.universal/ShaderLibrary/Core.hlsl"
#include "Packages/com.unity.render-pipelines.universal/ShaderLibrary/Lighting.hlsl"
#pragma shader_feature OUTPUT_DEPTH
sampler2D _MainTex;
sampler2D _NormalMap;
sampler2D _RCSMTex;
CBUFFER_START(UnityPerMaterial)
float4 _BaseColor;
float4 _MainTex_ST;
float _NormalIntensity;
float _ParallaxIntensity;
float _ParallaxIteration;
CBUFFER_END
struct a2v
{
float4 positionOS : POSITION;
float3 normalOS : NORMAL;
float4 tangentOS : TANGENT;
float2 texcoord : TEXCOORD0;
};
struct v2g
{
float4 positionCS : SV_POSITION;
float3 positionOS : TEXCOORD0;
float3 positionWS : TEXCOORD1;
float4 tangentOS : TEXCOORD2;
float3 bitangentOS : TEXCOORD3;
float3 normalOS : TEXCOORD4;
float2 texcoord : TEXCOORD5;
};
struct g2f
{
float4 positionCS : SV_POSITION;
float2 uv : TEXCOORD1;
float4 tbnWSPos[3] : TEXCOORD2; // tbnWS, posWS
float4 tbnOSView[3] : TEXCOORD5; // tbnOS, viewWS
float2 uvScale : TEXCOORD8;
};
v2g vert(a2v input)
{
v2g output = (v2g)0;
VertexPositionInputs vpi = GetVertexPositionInputs(input.positionOS.xyz);
VertexNormalInputs vni = GetVertexNormalInputs(input.normalOS, input.tangentOS);
output.positionCS = vpi.positionCS;
output.positionOS = input.positionOS.xyz;
output.positionWS = vpi.positionWS;
output.normalOS = input.normalOS;
output.tangentOS = input.tangentOS;
output.bitangentOS = cross(input.normalOS, input.tangentOS.xyz) * input.tangentOS.w * GetOddNegativeScale();
output.texcoord = input.texcoord;
return output;
}
[maxvertexcount(3)]
void geom(triangle v2g IN[3], inout TriangleStream<g2f> tristream)
{
float3 camWS = GetCameraPositionWS();
g2f output = (g2f)0;
float3 posDiff01 = IN[1].positionOS - IN[0].positionOS;
float3 posDiff02 = IN[2].positionOS - IN[0].positionOS;
float3 tangentOS0 = IN[0].tangentOS.xyz;
float3 bitangentOS0 = IN[0].bitangentOS;
float2 uvDiff01 = IN[1].texcoord - IN[0].texcoord;
float2 uvDiff02 = IN[2].texcoord - IN[0].texcoord;
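// uvScale: object-space length covered by one UV unit, measured by projecting
// a triangle edge onto the tangent/bitangent. If the 0-1 edge has no UV change
// in a component, fall back to the 0-2 edge to avoid dividing by zero.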
float2 uvScale;
if (uvDiff01.x != 0.0f) uvScale.x = dot(posDiff01, tangentOS0) / uvDiff01.x;
else uvScale.x = dot(posDiff02, tangentOS0) / uvDiff02.x;
if (uvDiff01.y != 0.0f) uvScale.y = dot(posDiff01, bitangentOS0) / uvDiff01.y;
else uvScale.y = dot(posDiff02, bitangentOS0) / uvDiff02.y;
for (int i=0; i<3; ++i)
{
v2g input = IN[i];
VertexNormalInputs vni = GetVertexNormalInputs(input.normalOS, input.tangentOS);
float3 viewWS = camWS - input.positionWS;
output.positionCS = input.positionCS;
output.uv = input.texcoord;
output.tbnWSPos[0] = float4(vni.tangentWS, input.positionWS.x);
output.tbnWSPos[1] = float4(vni.bitangentWS, input.positionWS.y);
output.tbnWSPos[2] = float4(vni.normalWS, input.positionWS.z);
output.tbnOSView[0] = float4(input.tangentOS.xyz, viewWS.x);
output.tbnOSView[1] = float4(input.bitangentOS, viewWS.y);
output.tbnOSView[2] = float4(input.normalOS, viewWS.z);
output.uvScale = uvScale;
tristream.Append(output);
}
tristream.RestartStrip();
}
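// The RCSM texture stores (height, cone ratio); invert the height to a depth.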
float2 sampleRCSM(float2 uv)
{
float2 rcsm = tex2D(_RCSMTex, uv).xy;
return float2(1.0f - rcsm.x, rcsm.y);
}
float getStepLength(float rayRatio, float coneRatio, float rayHeight, float sampleHeight)
{
float totalRatio = rayRatio / coneRatio + 1.0f;
return (sampleHeight - rayHeight) / totalRatio;
}
float2 parallax(float2 uv, float3 view, out float len)
{
view.xy = -view.xy * _ParallaxIntensity;
float3 samplePos = float3(uv, 0.0f);
float2 rcsm = sampleRCSM(samplePos.xy);
float rayRatio = length(view.xy);
float coneRatio = rcsm.y;
float rayHeight = samplePos.z;
float sampleHeight = rcsm.x;
float stepLength = getStepLength(rayRatio, coneRatio, rayHeight, sampleHeight);
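// Cone stepping: each step advances as far as the relaxed cone guarantees
// the ray cannot pierce the surface.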
[unroll(30)]
for (int i = 0; i < _ParallaxIteration; ++i)
{
samplePos += stepLength * view;
rcsm = sampleRCSM(samplePos.xy);
coneRatio = rcsm.y;
rayHeight = samplePos.z;
sampleHeight = rcsm.x;
if (sampleHeight <= rayHeight) break;
stepLength = getStepLength(rayRatio, coneRatio, rayHeight, sampleHeight);
}
stepLength *= 0.5f;
samplePos -= stepLength * view;
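// Binary-search refinement: halve the step and bracket the intersection.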
[unroll]
for (int j = 0; j < 5; ++j)
{
rcsm = sampleRCSM(samplePos.xy);
stepLength *= 0.5f;
if (samplePos.z >= rcsm.x)
{
samplePos -= stepLength * view;
}
else if(samplePos.z < rcsm.x)
{
samplePos += stepLength * view;
}
}
len = samplePos.z;
return samplePos.xy;
}
// Returns the determinant of a 2x2 matrix.
float spvDet2x2(float a1, float a2, float b1, float b2)
{
return a1 * b2 - b1 * a2;
}
// Returns the inverse of a matrix, computed as the classical adjoint
// divided by the determinant. The input matrix is left unchanged.
float3x3 spvInverse(float3x3 m)
{
float3x3 adj; // The adjoint matrix (inverse after dividing by determinant)
// Create the transpose of the cofactors, as the classical adjoint of the matrix.
adj[0][0] = spvDet2x2(m[1][1], m[1][2], m[2][1], m[2][2]);
adj[0][1] = -spvDet2x2(m[0][1], m[0][2], m[2][1], m[2][2]);
adj[0][2] = spvDet2x2(m[0][1], m[0][2], m[1][1], m[1][2]);
adj[1][0] = -spvDet2x2(m[1][0], m[1][2], m[2][0], m[2][2]);
adj[1][1] = spvDet2x2(m[0][0], m[0][2], m[2][0], m[2][2]);
adj[1][2] = -spvDet2x2(m[0][0], m[0][2], m[1][0], m[1][2]);
adj[2][0] = spvDet2x2(m[1][0], m[1][1], m[2][0], m[2][1]);
adj[2][1] = -spvDet2x2(m[0][0], m[0][1], m[2][0], m[2][1]);
adj[2][2] = spvDet2x2(m[0][0], m[0][1], m[1][0], m[1][1]);
// Calculate the determinant as a combination of the cofactors of the first row.
float det = (adj[0][0] * m[0][0]) + (adj[0][1] * m[1][0]) + (adj[0][2] * m[2][0]);
// Divide the classical adjoint matrix by the determinant.
// If determinant is zero, matrix is not invertable, so leave it unchanged.
return (det != 0.0f) ? (adj * (1.0f / det)) : m;
}
float4 frag(g2f input
#if defined(OUTPUT_DEPTH)
, out float depth : SV_DEPTH
#endif
) : SV_TARGET
{
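// Scale the object-space TBN by uvScale and _ParallaxIntensity to get the
// texture-space basis vectors, laid out as columns of the texture-to-object matrix.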
float3 tos = input.tbnOSView[0].xyz * input.uvScale.x;
float3 bos = input.tbnOSView[1].xyz * input.uvScale.y;
float3 nos = input.tbnOSView[2].xyz * _ParallaxIntensity;
float3x3 t2wOS = float3x3(tos.x, bos.x, nos.x,
tos.y, bos.y, nos.y,
tos.z, bos.z, nos.z);
float3 viewWS = float3(input.tbnOSView[0].w, input.tbnOSView[1].w, input.tbnOSView[2].w);
float3 viewOS = mul((float3x3)UNITY_MATRIX_I_M, viewWS);
float3 viewTS = mul(spvInverse(t2wOS), viewOS);
float z = max(abs(viewTS.z), 1e-5) * (viewTS.z >= 0.0f ? 1.0f : -1.0f);
float len;
float2 uv = parallax((input.uv * _MainTex_ST.xy + _MainTex_ST.zw), viewTS * float3(_MainTex_ST.xy, 1.0f) / z, len);
#if defined(OUTPUT_DEPTH)
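// Reconstruct the displaced position: the total texture-space march is -viewTS * len / z,
// taken back to object then world space, then projected to get clip-space depth.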
float3 offsetTS = -viewTS * (len / z);
float3 offsetOS = mul(t2wOS, offsetTS);
float3 positionWS = float3(input.tbnWSPos[0].w, input.tbnWSPos[1].w, input.tbnWSPos[2].w);
float3 posWS = positionWS + mul((float3x3)UNITY_MATRIX_M, offsetOS);
float4 posCS = mul(UNITY_MATRIX_VP, float4(posWS, 1.0f));
depth = posCS.z / posCS.w;
#endif
float4 mainTex = tex2D(_MainTex, uv) * _BaseColor;
float3 normalTS = normalize(UnpackNormalScale(tex2D(_NormalMap, uv), _NormalIntensity));
float3 tws = input.tbnWSPos[0].xyz;
float3 bws = input.tbnWSPos[1].xyz;
float3 nws = input.tbnWSPos[2].xyz;
float3 n = normalize(mul(normalTS, float3x3(tws, bws, nws)));
Light mainLight = GetMainLight();
float ndotl = max(0.0f, dot(n, mainLight.direction));
float3 color = mainTex.rgb * mainLight.color * ndotl;
float alpha = mainTex.a;
return float4(color, alpha);
}
ENDHLSL
SubShader
{
Tags{ "RenderType"="Opaque" "Queue"="Geometry"}
Cull Back
Pass
{
HLSLPROGRAM
#pragma vertex vert
#pragma geometry geom
#pragma fragment frag
ENDHLSL
}
}
}
Final results
The final result is what the cover image shows: the mesh on the left uses RCSM, the rest use ordinary POM. I deliberately scaled the meshes and adjusted the texture tiling to show that the computation is correct: the same material produces correct depth values on different meshes. For meshes with irregular UVs such as a sphere, however, this method does not produce a perfect depth result. The textures on the top and bottom planes come from Quixel's Megascans.
Afterword
Another article written in a hurry. Now that parallax mapping produces proper depth values, all the screen-space algorithms work correctly on top of it, which is great. That said, I got tripped up by the LearnOpenGL textures: I failed to notice that their normal map differs from the normal maps we normally use, which explains why something always looked slightly off. In the end I exported the normal and depth maps for this Toy Box myself from Blender, and only then did everything look right.