Is there a way to set a List for a uniform? Something similar to `material.SetFloatArray`, but with a List? Or is the restriction only for arrays? Also, is it possible to change the array contents at runtime in shaders?
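For what it's worth, a minimal sketch of what I mean (assuming a shader that declares `float _Values[8];`; `Material.SetFloatArray` does have a `List<float>` overload):

    using System.Collections.Generic;
    using UnityEngine;

    public class SetListExample : MonoBehaviour
    {
        public Material material;
        private readonly List<float> values = new List<float> { 0.1f, 0.2f, 0.3f };

        void Update()
        {
            // SetFloatArray has a List<float> overload; the shader-side array size is
            // fixed at compile time, so the contents can change every frame but the
            // array cannot grow past its declared size.
            material.SetFloatArray("_Values", values);
        }
    }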
↧
Set list in shader (uniform)
↧
Where Is UNITY_POSITION(pos) Defined?
Hi All,
I was hoping to copy and modify the following fragment shader in UnityStandardCoreForwardSimple.cginc:
half4 fragForwardBaseSimpleInternal (VertexOutputBaseSimple i)
{
    UNITY_APPLY_DITHER_CROSSFADE(i.pos.xy);
    FragmentCommonData s = FragmentSetupSimple(i);
    UnityLight mainLight = MainLightSimple(i, s);
    #if !defined(LIGHTMAP_ON) && defined(_NORMALMAP)
        half ndotl = saturate(dot(s.tangentSpaceNormal, i.tangentSpaceLightDir));
    #else
        half ndotl = saturate(dot(s.normalWorld, mainLight.dir));
    #endif
    //we can't have worldpos here (not enough interpolator on SM 2.0) so no shadow fade in that case.
    half shadowMaskAttenuation = UnitySampleBakedOcclusion(i.ambientOrLightmapUV, 0);
    half realtimeShadowAttenuation = SHADOW_ATTENUATION(i);
    half atten = UnityMixRealtimeAndBakedShadows(realtimeShadowAttenuation, shadowMaskAttenuation, 0);
    half occlusion = Occlusion(i.tex.xy);
    half rl = dot(REFLECTVEC_FOR_SPECULAR(i, s), LightDirForSpecular(i, mainLight));
    UnityGI gi = FragmentGI (s, occlusion, i.ambientOrLightmapUV, atten, mainLight);
    half3 attenuatedLightColor = gi.light.color * ndotl;
    half3 c = BRDF3_Indirect(s.diffColor, s.specColor, gi.indirect, PerVertexGrazingTerm(i, s), PerVertexFresnelTerm(i));
    c += BRDF3DirectSimple(s.diffColor, s.specColor, s.smoothness, rl) * attenuatedLightColor;
    c += Emission(i.tex.xy);
    UNITY_APPLY_FOG(i.fogCoord, c);
    return OutputForward (half4(c, 1), s.alpha);
}
But I get an error stating it doesn't recognize `VertexOutputBaseSimple`. If I copy that struct in as well, the error becomes "unrecognized identifier `UNITY_POSITION(pos)`".
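For reference, a minimal sketch of what seems to be going on (the file name is an assumption based on the built-in shader source for recent Unity versions):

    // UNITY_POSITION is a macro from HLSLSupport.cginc (auto-included by Unity) that expands roughly to:
    // #define UNITY_POSITION(pos) float4 pos : SV_POSITION
    // If the copied .cginc comes from a newer built-in shader package than the editor itself,
    // the macro may simply not exist yet in the editor's own include files.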
If anyone can point me in the correct direction I would greatly appreciate it!
Thanks,
↧
↧
How to get a picked world position into clip space in a fragment shader? (Image effect shader)
So this is all happening in an image effect shader. I have converted each pixel from clip space to world space and applied an offset in world space, and now I need the new world-space coordinates back in clip-space coordinates.
I have run out of things to try; I always get a result close to what I'd expect, but not quite right.
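A minimal sketch of the round trip I'm attempting (assuming `worldPos` and `offset` are already computed; `UNITY_MATRIX_VP` is the built-in view-projection matrix):

    // world space -> clip space -> screen UV, with perspective divide and optional y-flip
    float3 offsetWorldPos = worldPos + offset;
    float4 clipPos = mul(UNITY_MATRIX_VP, float4(offsetWorldPos, 1.0));
    float2 screenUV = clipPos.xy / clipPos.w * 0.5 + 0.5;   // NDC [-1,1] -> [0,1]
    #if UNITY_UV_STARTS_AT_TOP
        screenUV.y = 1.0 - screenUV.y;                      // may be needed depending on platform / RT flip
    #endif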
↧
How do I blur a Cubemap and Texture in CG/HLSL in Unity 5
How can I blur (Gaussian) cubemaps and textures in CG/HLSL in Unity 5?
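For context, here is a minimal sketch of the kind of thing I mean for the 2D case: a horizontal Gaussian pass (assumes `_MainTex` and `_MainTex_TexelSize`; a second vertical pass would complete the blur, and for a cubemap the same idea would have to run per face, or sampling a higher mip via texCUBElod can fake it cheaply):

    fixed4 fragBlurH (v2f i) : SV_Target
    {
        // 5-tap Gaussian (sigma ~= 1), weights normalized to 1
        const float w0 = 0.4026, w1 = 0.2442, w2 = 0.0545;
        float2 texel = _MainTex_TexelSize.xy;
        fixed4 c = tex2D(_MainTex, i.uv) * w0;
        c += (tex2D(_MainTex, i.uv + float2(texel.x, 0)) + tex2D(_MainTex, i.uv - float2(texel.x, 0))) * w1;
        c += (tex2D(_MainTex, i.uv + float2(2 * texel.x, 0)) + tex2D(_MainTex, i.uv - float2(2 * texel.x, 0))) * w2;
        return c;
    }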
Thanks in advance.
↧
Vertex and fragment shader advice
Evening, I'm learning about fragment and vertex shaders (beginner) and I keep scratching my head trying to get this right.
My goal is to make a shader that is tileable (for indoor tiles) with customizable grout (colour and width).
With the help of the Internet, I managed to learn some basic stuff about shaders and make minor tweaks. I used a ready-made grid shader and tweaked the grid so it no longer tiles according to world space.
Here's the code:
Shader "Test/Unlit Grid"
{
Properties
{
//GRID PROPERTIES
_GridColour ("Grid Colour", color) = (1, 1, 1, 1)
_BaseColour ("Base Colour", color) = (1, 1, 1, 0)
_GridSpacing ("Grid Spacing", float) = 0.1
_LineThickness ("Line Thickness", float) = 1
//2D TEXTURES
_MainTex ("Base Texture", 2D) = "white"{}
}
SubShader
{
Tags { "RenderType"="Opaque" "Queue"="Transparent"}
LOD 100
Blend SrcAlpha OneMinusSrcAlpha
ZWrite Off
Pass
{
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
#pragma target 3.0
#include "UnityCG.cginc"
struct appdata
{
float4 vertex : POSITION;
float2 uv : TEXCOORD0;
};
struct v2f
{
float4 vertex : SV_POSITION;
float2 uv : TEXCOORD0;
};
fixed4 _GridColour;
fixed4 _BaseColour;
float _GridSpacing;
float _LineThickness;
sampler2D _MainTex;
float4 _MainTex_ST;
sampler2D _NormalTex;
//VERT FUNCTION
v2f vert (appdata v)
{
v2f o;
o.vertex = UnityObjectToClipPos(v.vertex);
//o.uv = mul(unity_ObjectToWorld, v.vertex).xz / _GridSpacing;
o.uv = TRANSFORM_TEX(v.uv, _MainTex)*_GridSpacing;
return o;
}
//FRAG FUNCTION
fixed4 frag (v2f i) : SV_Target
{
float2 wrapped = frac(i.uv);
float2 range = abs(wrapped);
float2 speeds;
// Euclidean norm gives slightly more even thickness on diagonals
float4 deltas = float4(ddx(i.uv), ddy(i.uv));
speeds = sqrt(float2(
dot(deltas.xz, deltas.xz),
dot(deltas.yw, deltas.yw)
));
// Cheaper Manhattan norm in fwidth slightly exaggerates thickness of diagonals
//speeds = fwidth(i.uv)/2;
fixed4 col = tex2D(_MainTex, i.uv);
float2 pixelRange = range/speeds;
float lineWeight = saturate(min(pixelRange.x, pixelRange.y) /_LineThickness);
//float lineWeight = min(pixelRange.x, pixelRange.y) - _LineThickness;
return lerp(_GridColour, col, lineWeight);
}
ENDCG
}
}
}
Sample image:
![alt text][1]
The question is, am I able to mix in a Surface Shader alongside this fragment shader and still retain the grout lines from the fragment shader itself?
[1]: /storage/temp/100930-capture.jpg
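In case it helps frame the question, here is a rough sketch of the direction I'm considering: moving the grid/grout math into a surface shader's surf() function so Unity generates the lighting passes (property names match the shader above; untested):

    // goes inside CGPROGRAM of a SubShader, replacing the vert/frag pair above
    #pragma surface surf Standard fullforwardshadows
    #pragma target 3.0

    sampler2D _MainTex;
    fixed4 _GridColour;
    float _GridSpacing;
    float _LineThickness;

    struct Input { float2 uv_MainTex; };

    void surf (Input IN, inout SurfaceOutputStandard o)
    {
        float2 gridUV = IN.uv_MainTex * _GridSpacing;
        float2 range = abs(frac(gridUV));
        float2 speeds = fwidth(gridUV);                         // cheaper Manhattan norm, as noted in the comments above
        float2 pixelRange = range / speeds;
        float lineWeight = saturate(min(pixelRange.x, pixelRange.y) / _LineThickness);
        fixed4 tex = tex2D(_MainTex, IN.uv_MainTex);
        o.Albedo = lerp(_GridColour.rgb, tex.rgb, lineWeight);  // grout colour where lineWeight is ~0
        o.Alpha = 1;
    }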
↧
↧
Triangulate image in realtime
Hey everybody. I'm trying to create a shader that triangulates a texture. As a first step, the realtime part is not that important.
This is my plan to achieve this:
All this code is in the fragment shader part:
1. Apply Harris Corner detection to find interesting points / corners in the image.
2. Use this list of points and run Delaunay triangulation to create the triangles, then colour them with different colours.
Can I achieve this with a shader, or is this the wrong approach?
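For step 1, here is a minimal per-pixel sketch of the Harris corner response in a fragment shader (assumes `_MainTex` and `_MainTex_TexelSize`; a real detector would also smooth the structure tensor over a window and do non-maximum suppression, and the Delaunay step is a global operation that is probably easier on the CPU or in a compute shader):

    float luma(float2 uv) { return dot(tex2D(_MainTex, uv).rgb, float3(0.299, 0.587, 0.114)); }

    float harrisResponse(float2 uv)
    {
        float2 t = _MainTex_TexelSize.xy;
        // central-difference gradients
        float Ix = luma(uv + float2(t.x, 0)) - luma(uv - float2(t.x, 0));
        float Iy = luma(uv + float2(0, t.y)) - luma(uv - float2(0, t.y));
        // structure tensor (unsmoothed) and Harris response, k typically 0.04-0.06
        float Ixx = Ix * Ix, Iyy = Iy * Iy, Ixy = Ix * Iy;
        return (Ixx * Iyy - Ixy * Ixy) - 0.05 * (Ixx + Iyy) * (Ixx + Iyy);
    }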
This is the effect I'm trying to get:
![alt text][1]
Thanks for your help.
[1]: /storage/temp/100949-delaunay-effect.jpg
↧
Add additional UV channel for Blit
Hello,
I am wondering how I can send TEXCOORD1 vertex information to an image effect shader when doing Graphics.Blit.
I've browsed the Internet, but I cannot find anything useful.
So if anyone can tell me how to add an additional UV channel, I'd greatly appreciate it.
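To make the question concrete, the only workaround I can think of is a rough sketch like the following: Graphics.Blit only provides one UV set, so this replaces it with a hand-made full-screen quad that carries a second UV channel. The names `CustomBlit`/`GetQuad` are my own (not a Unity API), and the shader's vertex stage would have to pass these clip-space positions through unchanged:

    using UnityEngine;

    public static class CustomBlit
    {
        static Mesh quad;

        static Mesh GetQuad()
        {
            if (quad != null) return quad;
            quad = new Mesh();
            quad.vertices = new[] { new Vector3(-1, -1, 0), new Vector3(1, -1, 0), new Vector3(1, 1, 0), new Vector3(-1, 1, 0) };
            quad.uv  = new[] { new Vector2(0, 0), new Vector2(1, 0), new Vector2(1, 1), new Vector2(0, 1) };
            quad.uv2 = new[] { new Vector2(0, 1), new Vector2(1, 1), new Vector2(1, 0), new Vector2(0, 0) }; // the extra TEXCOORD1 data
            quad.triangles = new[] { 0, 1, 2, 0, 2, 3 };
            return quad;
        }

        public static void Blit(Texture src, RenderTexture dst, Material mat, int pass = 0)
        {
            Graphics.SetRenderTarget(dst);
            mat.SetTexture("_MainTex", src);
            mat.SetPass(pass);
            // the shader's vertex stage must output these clip-space positions as-is
            Graphics.DrawMeshNow(GetQuad(), Matrix4x4.identity);
        }
    }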
↧
Simple Unlit HLSL Shader with Rim Lighting?
Hi!
I'm looking for a way to create a rim lighting effect on a simple unlit shader. Everywhere I look people are using surface shaders, and I'm not familiar with them yet.
Here is my current shader. It's a simple unlit shader that uses vertex colors.
struct appdata
{
    float4 vertex : POSITION;
    fixed4 color : COLOR;
};

struct v2f
{
    float4 pos : SV_POSITION;
    fixed4 color : TEXCOORD2;
};

v2f vert (appdata v)
{
    v2f o;
    o.pos = UnityObjectToClipPos(v.vertex);
    o.color = v.color;
    UNITY_TRANSFER_FOG(o, o.pos);
    return o;
}

half4 frag (v2f i) : COLOR
{
    return i.color;
}
I would like to have this effect:
![alt text][1]
Where could I find an example of this using plain vertex/fragment HLSL shaders instead of surface ones?
Thank you!
[1]: http://kylehalladay.com/images/post_images/2014-02-23/FresnelRim.png
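In case it clarifies what I'm after, here is my rough attempt at a sketch extending the shader above (assumes new `_RimColor` and `_RimPower` properties; untested):

    struct appdata
    {
        float4 vertex : POSITION;
        float3 normal : NORMAL;
        fixed4 color : COLOR;
    };

    struct v2f
    {
        float4 pos : SV_POSITION;
        fixed4 color : TEXCOORD2;
        float3 worldNormal : TEXCOORD0;
        float3 viewDir : TEXCOORD1;
    };

    fixed4 _RimColor;
    float _RimPower;

    v2f vert (appdata v)
    {
        v2f o;
        o.pos = UnityObjectToClipPos(v.vertex);
        o.color = v.color;
        o.worldNormal = UnityObjectToWorldNormal(v.normal);
        o.viewDir = WorldSpaceViewDir(v.vertex);   // world-space vector from the vertex towards the camera
        return o;
    }

    half4 frag (v2f i) : SV_Target
    {
        // fresnel-style rim: strongest where the surface faces away from the camera
        half rim = 1.0 - saturate(dot(normalize(i.worldNormal), normalize(i.viewDir)));
        return i.color + _RimColor * pow(rim, _RimPower);
    }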
↧
"Floor" function produces artifacts in shader
Hello everyone,
here is the output of the simple shader
**tex2D( myTex , floor(i.uv*3)/3);**
I need to do this to get a coherent noise from sampling an image.
How can I avoid the flickering artifacts that appear? They show up in the middle, at the points where i.uv.x*3 = 1, 2, 3 and i.uv.y*3 = 1, 2, 3.
![alt text][1]
[1]: /storage/temp/102601-floor.png
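One thing I want to try, in case the flicker comes from mip selection (the UV derivatives spike at the floor() discontinuities, so the sampler may briefly drop to a tiny mip right at the cell boundaries):

    float2 cellUV = floor(i.uv * 3) / 3;
    fixed4 col = tex2Dlod(_MainTex, float4(cellUV, 0, 0));       // force mip 0, no derivative-based selection
    // alternative: keep mip-mapping but supply the smooth uv's gradients
    // fixed4 col2 = tex2Dgrad(_MainTex, cellUV, ddx(i.uv), ddy(i.uv));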
↧
↧
Accessing and writing depth on a render texture in compute shader
Hey, so this might seem a stupid question, but after a few days of searching I have only found people who either don't know how to do this or assume you already know how; no actual explanation.
I am writing a compute shader to which I am passing a RenderTexture. So far, changing the colour of the RenderTexture works fine: I declare it as a RWTexture2D and set the colour of each pixel as a float4.
My problem is that I also need to write to the depth buffer of my render texture. How do I do this? Is there a way to read and write the depth part of the RenderTexture I am giving to my compute shader?
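For reference, the closest thing to a workaround I've pieced together so far (a sketch only, assuming the depth surface of a RenderTexture cannot be bound to a compute shader directly: write depth into a separate single-channel UAV texture and copy it into real depth afterwards with a fragment shader that outputs SV_Depth; `colorRT`, `kernel`, `target`, `copyDepthMaterial` and the property names are mine):

    var depthRT = new RenderTexture(width, height, 0, RenderTextureFormat.RFloat);
    depthRT.enableRandomWrite = true;
    depthRT.Create();

    computeShader.SetTexture(kernel, "_ColorOut", colorRT);   // RWTexture2D<float4> in the .compute file
    computeShader.SetTexture(kernel, "_DepthOut", depthRT);   // RWTexture2D<float> in the .compute file
    computeShader.Dispatch(kernel, width / 8, height / 8, 1);

    // the material's fragment shader samples depthRT and returns the value through "out float depth : SV_Depth"
    Graphics.Blit(depthRT, target, copyDepthMaterial);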
↧
Unexpected identifier cbuffer error
I have an .hlsli file that I would like to port into Unity.
However, something seems wrong: Unity does not recognize keywords such as `cbuffer`, `TextureCube`, and `Texture2D`.
The following snippets work in the original file but not in Unity. Are there any includes or settings I am missing?
cbuffer cbDebug : CB_DEBUG
{
    float g_debug;
    float g_debugSlider0;
}
TextureCube g_texCubeDiffuse : TEX_CUBE_DIFFUSE;
Texture2D g_texVSM : TEX_VSM;
The error looks like this (`cbuffer`, `TextureCube`, and `Texture2D` are all unrecognized):
Shader error in 'Custom/skin': Unexpected identifier "cbuffer". Expected one of: typedef const void inline uniform nointerpolation extern shared static volatile row_major column_major struct or a user-defined type at Assets/Shader/common.cginc(51)
I have found that the documentation at https://docs.unity3d.com/Manual/SL-BuiltinMacros.html says we should use CBUFFER_START instead, but it does not mention how to deal with the register bindings after the colon.
Some other documents mention we should use samplerCUBE instead of TextureCube, and sampler2D instead of Texture2D. However, none of them explains how to deal with the angle brackets.
Any ideas how to correct the code to work with Unity?
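In case it helps, here is the rewrite I am currently guessing at (drop the explicit register bindings, since Unity binds by name from C#, and use the CBUFFER_START/CBUFFER_END macros; the sampler swap is the one the other documents describe):

    CBUFFER_START(cbDebug)              // the ": CB_DEBUG" register binding is dropped
        float g_debug;
        float g_debugSlider0;
    CBUFFER_END

    // legacy CG-style sampler declarations, which work on all targets:
    samplerCUBE g_texCubeDiffuse;       // was: TextureCube g_texCubeDiffuse : TEX_CUBE_DIFFUSE;
    sampler2D   g_texVSM;               // was: Texture2D   g_texVSM        : TEX_VSM;

    // alternatively, DX11-style declarations (Texture2D plus a SamplerState) should compile
    // if the shader targets a modern profile, e.g. "#pragma target 5.0".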
↧
Cross-project shader support
I have one project with some vertex shaders that I am working on. I recently discovered the need to add a geometry stage to the shader. As soon as I add '#pragma geometry xxx' (or, for that matter, '#pragma target 4.0') to the .shader file in my project, the material turns pink and tells me: "shader is not supported on this GPU (none of subshaders/fallbacks are suitable)".
To me that indicates that my machine doesn't have hardware/drivers capable of running geometry shaders. However, geometry shaders compile, execute, and produce correct output in other Unity projects on the same machine, which indicates that something in Unity is amiss.
Is there a setting that determines the supported shader level, or do I just need to start a new project and copy all of my current assets into it to get geometry shaders working?
Note: in another project, 'Project Settings' -> Player and 'Project Settings' -> Graphics are set to exactly the same options, and geometry shaders work on my work machine.
↧
Setting an HLSL buffer value from C# code?
Hello,
I've been working on porting a third-party image effect HLSL shader to Unity, and there is a buffer I need to set for the shader to work correctly.
Buffer _bSamplePattern : register(t2);
My question is: is there any way to set the `Buffer` of the instanced material from C# code, or do I have to rewrite the buffer into something else?
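For context, here is the direction I'm considering, based on `Material.SetBuffer` (a sketch with assumptions: the shader-side `Buffer _bSamplePattern : register(t2)` would become `StructuredBuffer<float4> _bSamplePattern;` with the register binding dropped, and `BuildPattern()` is a stand-in for whatever fills the data):

    using UnityEngine;

    public class SamplePatternBinder : MonoBehaviour
    {
        public Material effectMaterial;
        private ComputeBuffer samplePattern;

        void OnEnable()
        {
            Vector4[] pattern = BuildPattern();                         // stand-in for the real sample pattern
            samplePattern = new ComputeBuffer(pattern.Length, sizeof(float) * 4);
            samplePattern.SetData(pattern);
            effectMaterial.SetBuffer("_bSamplePattern", samplePattern); // bound by name, no register needed
        }

        void OnDisable()
        {
            if (samplePattern != null) samplePattern.Release();
        }

        private Vector4[] BuildPattern() { return new Vector4[16]; }
    }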
Thanks,
Raul
MadGoat Studio
↧
↧
No acceptable conversion when converting cg shader to glsl
I have a simple shader where at some point I perform logical operations component-wise between boolean vectors:
float2 x_ = ( (x >= 0) && (x < a) ) ? float2(0., 0.) : float2(0.5, 0.5);
This compiles ok in HLSL. However when Unity tries to convert it to GLSL, I get:
> Shader error in 'Sprites/Advanced': '&&' : wrong operand types no operation '&&' exists that takes a left-hand operand of type 'vec2' and a right operand of type 'vec2' (or there is no acceptable conversion) at line 83 (on gles)
Is there an elegant way to perform logical operations per component so that it compiles to GLSL?
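A minimal sketch of the arithmetic workaround I'm considering (step() returns 1.0 per component where the condition holds, and the product acts as a component-wise AND; untested):

    // (x >= 0) && (x < a), per component
    float2 inRange = step(0.0, x) * (1.0 - step(a, x));
    float2 x_ = lerp(float2(0.5, 0.5), float2(0.0, 0.0), inRange);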
↧
Porting from ShaderToy (GLSL) to ShaderLab (HLSL/CG) in Unity is not giving me the desired result.
I came across this beautiful [shader][1] from [shadertoy][2] which I am trying to implement in Unity. I managed to get the blurred box background, and the random dots are generated too. But I am not able to get the curl effect shown in the original shader. Here is my code, where `_MainTex` takes the default colour black and `_MainTexBg` has this texture attached,
![alt text][3]
Shader "Unlit/MyFirstShader"
{
Properties
{
_MainTex ("Texture", 2D) = "black" {}
_MainTexBg ("Curl Pattern Texture", 2D) = "white" {}
}
SubShader
{
Tags { "RenderType"="Opaque" }
LOD 100
Pass
{
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
#include "UnityCG.cginc"
struct appdata
{
float4 vertex : POSITION;
float2 uv : TEXCOORD0;
};
struct v2f
{
float2 uv : TEXCOORD0;
float4 vertex : SV_POSITION;
};
sampler2D _MainTex;
float4 _MainTex_ST;
sampler2D _MainTexBg;
float4 _MainTexBg_ST;
float2 hash2( float n )
{
return frac(sin(float2(n,n+1.0))*float2(43758.5453123,22578.1459123));
}
// smoothstep interpolation of texture
float4 ssamp( float2 uv, float oct )
{
uv = uv.xy / oct;
//return texture( iChannel0, uv, -10.0 );
float texSize = 8.;
float2 x = uv * texSize - .5;
float2 f = frac(x);
// remove fractional part
x = x - f;
// apply smoothstep to fractional part
f = f*f*(3.0-2.0*f);
// reapply fractional part
x = x + f;
uv = (x+.5) / texSize;
return tex2D( _MainTexBg, uv );
}
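// Note (possible gotcha): in Unity CG a global initialized like this, without "static const",
// may be treated as a material uniform whose initializer is ignored, leaving e as (0, 0)
// and breaking the divisions in dx()/dy() below.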
float2 e = float2(1./256., 0.);
float4 dx( float2 uv, float oct )
{
return (ssamp(uv+e.xy,oct) - ssamp(uv-e.xy,oct)) / (2.*e.x);
}
float4 dy( float2 uv, float oct )
{
return (ssamp(uv+e.yx,oct) - ssamp(uv-e.yx,oct)) / (2.*e.x);
}
v2f vert (appdata v)
{
v2f o;
o.vertex = UnityObjectToClipPos(v.vertex);
o.uv = TRANSFORM_TEX(v.uv, _MainTex);
return o;
}
fixed4 frag (v2f i) : SV_Target
{
i.uv=1-i.uv;
fixed4 col = tex2D(_MainTex, i.uv);
col=mul(col,2.0);
col = smoothstep(0.,1.,col);
col.xyz = 1.-col.xyz;
col.xyz = mul(col.xyz,pow(1. - 1.9*dot(i.uv-.5,i.uv-.5),.07));
float4 res = float4(0.,0.,0.,0.);
float scl = _ScreenParams.x/640.;
// random paint drops
//float fr = float(_Time.y);
float period = _Time.y < 2.9 ? 30. : _Time.y < 47. ? 8. : 3.;
float2 sparkPos = hash2(_Time.y+1.11) * _ScreenParams.xy;
if( length(sparkPos-i.vertex)<5.*scl && i.vertex.x > 1. && i.vertex.y > 1. )
{
// everyones favourite colour gradient
res = res + 2.5*float4(i.uv,0.5+0.5*sin(_Time.y),1.0);
}
float2 off = 0.* (float2(128.,128.)/_ScreenParams.xy) * unity_DeltaTime;
float oct = .25;
float2 curl1 = .001*float2( dy(i.uv,oct).x, -dx(i.uv,oct).x )*oct;
oct = 5.; float sp = 0.1;
curl1 = curl1 + .0002*float2( dy(i.uv+sp*_Time.y,oct).x, -dx(i.uv+sp*_Time.y,oct).x )*oct;
off = off + curl1;
off = mul(off,.4);
res = res + .999*tex2D( _MainTexBg, i.uv - off);
return col*res;
}
ENDCG
}
}
}
I am very new to shader programming. Thanks for your help.
[1]: https://www.shadertoy.com/view/4ltXR4
[2]: https://www.shadertoy.com
[3]: /storage/temp/108999-downloads.png
↧
What is the cost of ComputeShader.SetBuffer
I would like to know whether ComputeShader.SetBuffer has any overhead.
I have a ComputeBuffer that I want to share between kernels, since it's rather large, and I will need to set the buffer on each kernel that needs to modify it.
Setting it on multiple kernels seems to work fine, but I am wondering if there are any gotchas or costs to binding the buffer to multiple kernels (e.g., does each kernel get a completely separate copy in memory, or do the kernels share the same memory?).
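For reference, the sharing pattern I mean (a sketch; kernel names, property names, and sizes are made up). My understanding, which I'd like confirmed, is that SetBuffer only records a binding of the same GPU buffer to a kernel slot rather than copying anything:

    var buffer = new ComputeBuffer(count, stride);
    int kernelA = shader.FindKernel("Simulate");
    int kernelB = shader.FindKernel("Integrate");

    shader.SetBuffer(kernelA, "Particles", buffer);   // same ComputeBuffer object...
    shader.SetBuffer(kernelB, "Particles", buffer);   // ...bound to a second kernel

    shader.Dispatch(kernelA, groups, 1, 1);
    shader.Dispatch(kernelB, groups, 1, 1);           // should see kernelA's writes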
↧
ComputeShader: Accessing array elements within a StructuredBuffer
I have been toying with ComputeShaders for a while now and have been trying to use them to offload the hair physics calculations I was doing in my game. The hair is based on a series of points connected by springs. Each strand is a list of these points.
I had success moving the physics code into a compute shader for one strand, and could calculate 10,000 hair bodies at a solid 60 fps on one strand, but multiple strands killed performance.
So instead I need to run a thread group for each hair strand and dispatch these hair strands x amount of times. So I make a structure for hair strands and fill it with a max-sized array of 'hair body' structs. I would then access the hair strand by the group ID and the hair bodies by the thread index.
Theoretically this should work, but I am not getting these results. Due to StructuredBuffer element sizes being limited to 2048 bytes, I can only store certain lengths of hair bodies per strand. When I try to change these values and read them from my C# script, they are null.
I have provided a test scenario that should reproduce my problem. I have simplified it to just the base functionality, which is setting the indexes of the array within the hair strand. The code is below:
ShaderTest.cs:
public class ShaderTest : MonoBehaviour
{
    public struct TestStruct1
    {
        public int groupID;
        public int[] threadIDs;
    }

    public ComputeShader computeShader;
    private ComputeBuffer computeBuffer;
    private int kernelID;

    void Start()
    {
        var stopWatch = new System.Diagnostics.Stopwatch();
        TestStruct1[] data = new TestStruct1[256];
        for (int i = 0; i < 256; i++)
        {
            TestStruct1 s1 = data[i];
            s1.threadIDs = new int[50];
        }
        this.kernelID = computeShader.FindKernel("CSMain");
        computeBuffer = new ComputeBuffer(256, sizeof(int) + (50 * sizeof(int)));
        computeBuffer.SetData(data);
        computeShader.SetBuffer(kernelID, "Buf", computeBuffer);
        stopWatch.Start();
        computeShader.Dispatch(kernelID, 256, 1, 1);
        computeBuffer.GetData(data);
        stopWatch.Stop();
        Debug.Log("Took: " + stopWatch.ElapsedMilliseconds + " milliseconds to compute");
        foreach (TestStruct1 d in data)
        {
            Debug.Log(d.threadIDs[0]); // Crashes here
        }
        computeBuffer.Release();
    }
}
ShaderTest.compute:
#pragma kernel CSMain

struct TestStruct1
{
    int groupID;
    int threadIDs[50];
};

RWStructuredBuffer<TestStruct1> Buf;

[numthreads(64,1,1)]
void CSMain(int3 threadID : SV_GroupThreadID, int3 groupID : SV_GroupID)
{
    if (threadID.x <= 50)
    {
        Buf[groupID.x].threadIDs[threadID.x] = 7;
    }
}
All I am trying to do in the above code is index through the buffer properly and set each threadID to 7 as proof that this structure works. For whatever reason, it is not giving me the expected results.
Any help would be greatly appreciated
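For comparison, here is a flattened variant I am also considering (a struct containing a managed int[] is not blittable, so GetData cannot fill it back in; flattening into one int buffer indexed as strand * MAX_BODIES + body sidesteps that — sketch only):

    const int STRANDS = 256, MAX_BODIES = 50;

    int[] threadIDs = new int[STRANDS * MAX_BODIES];
    ComputeBuffer buffer = new ComputeBuffer(threadIDs.Length, sizeof(int));
    buffer.SetData(threadIDs);
    computeShader.SetBuffer(kernelID, "ThreadIDs", buffer);   // compute side: RWStructuredBuffer<int> ThreadIDs;
    computeShader.Dispatch(kernelID, STRANDS, 1, 1);
    buffer.GetData(threadIDs);
    // the kernel would write ThreadIDs[groupID.x * MAX_BODIES + threadID.x]; read it back the same way here
    buffer.Release();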
↧
↧
Reuse vertices modified in a previous shader pass
I've recently started to learn shaders and I'm trying to write a shader that :
- Animates a mesh to simulate water waves;
- Reuses the standard Unity shader so that I don't have to write all the complex lighting stuff myself.
The shader modifies the vertices in the first pass, and that pass works as intended. The problem is that the subsequent passes, i.e. the Unity Standard shader passes, do not reuse the modified vertex coordinates!
![alt text][1]
As you can see on the picture, my first pass properly updates vertices, but the following passes ignore the updated values.
My question is: how can I make the following passes reuse the updated vertices? Do I have to modify every single pass of the Unity Standard shader to also update the vertices, or is there some kind of magic command that I have not learned yet?
Thanks!
[1]: /storage/temp/111040-capture.png
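For what it's worth, the closest thing I've found so far is a surface shader with a vertex modifier, which (if I understand correctly) applies the same displacement in the vertex stage of every generated pass, shadow caster included, instead of persisting vertices between passes — a rough sketch with made-up wave parameters:

    #pragma surface surf Standard vertex:vert addshadow fullforwardshadows
    #pragma target 3.0

    sampler2D _MainTex;
    float _Amplitude;
    float _Frequency;

    struct Input { float2 uv_MainTex; };

    void vert (inout appdata_full v)
    {
        // the wave displacement runs in every pass Unity generates from this surface shader
        v.vertex.y += _Amplitude * sin(_Frequency * (v.vertex.x + _Time.y));
    }

    void surf (Input IN, inout SurfaceOutputStandard o)
    {
        o.Albedo = tex2D(_MainTex, IN.uv_MainTex).rgb;
    }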
↧
converting HLSL to GLSL.
@atomicjoe Can you please explain how the HLSL-to-GLSL conversion worked for you? (GitHub thread)
↧
How to prevent shader optimizations?
I was testing shader instruction performance, and to make the results clearer I made a loop and unrolled it.
[unroll] for(int j = 0; j < TEST_COUNT; j++)
{
    col = tex2D(_MainTex, i.uv.xy);
}
But the compiler optimized it away and got rid of the whole loop; here's the ASM:
8: mov o1.xyzw, v1.xyxy
9: sample_l o2.xyzw, v1.xyxx, t0.xyzw, s0, v1.y
10: mov o3.xyzw, l(0.500000,0.500000,0.500000,0.500000)
So I did this:
[unroll] for(int j = 0; j < TEST_COUNT; j++)
{
    [isolate]
    {
        col = tex2D(_MainTex, i.uv.xy);
    }
}
And compiler said: "**unknown attribute isolate, or attribute invalid for this statement**"
Even if I add a small value to the UV, it doesn't help.
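The only other idea I have is to make every iteration observable so the compiler cannot collapse the loop — feeding the loop counter and the running result back into the sample, roughly like this ([isolate] seems to be an attribute from older/console HLSL compilers, which would explain the error):

    fixed4 col = 0;
    [unroll] for (int j = 0; j < TEST_COUNT; j++)
    {
        // uv depends on j and on the accumulated result, so no sample is redundant
        col += tex2D(_MainTex, i.uv.xy + col.xy * 0.0001 + j * 0.0001);
    }
    col /= TEST_COUNT;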
↧