I have an .hlsli file that I would like to port into Unity. However, something seems wrong: Unity does not recognize keywords such as "cbuffer", "TextureCube", and "Texture2D".
The following snippets work in the original file but not in Unity. Are there any includes or settings I am missing?
cbuffer cbDebug : CB_DEBUG
{
    float g_debug;
    float g_debugSlider0;
}

TextureCube g_texCubeDiffuse : TEX_CUBE_DIFFUSE;
Texture2D g_texVSM : TEX_VSM;
The error looks like this (cbuffer, TextureCube and Texture2D are all unrecognized):
Shader error in 'Custom/skin': Unexpected identifier "cbuffer". Expected one of: typedef const void inline uniform nointerpolation extern shared static volatile row_major column_major struct or a user-defined type at Assets/Shader/common.cginc(51)
I have found that the document https://docs.unity3d.com/Manual/SL-BuiltinMacros.html says to use CBUFFER_START instead, but it does not mention how to deal with the register semantics after the colon.
Some other documents mention using samplerCUBE instead of TextureCube, and sampler2D instead of Texture2D; however, none of them explain how to deal with the angle brackets.
Any ideas how to correct the code to work with Unity?
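For reference, a minimal sketch of what the converted declarations might look like in Unity's legacy Cg-style syntax. This assumes the register semantics (": CB_DEBUG", ": TEX_CUBE_DIFFUSE") can simply be dropped, since Unity binds constants and textures by name:

    // Sketch of a Unity-style port; register semantics are removed because
    // Unity binds uniforms and samplers by name.
    CBUFFER_START(cbDebug)
        float g_debug;
        float g_debugSlider0;
    CBUFFER_END

    samplerCUBE g_texCubeDiffuse;   // was: TextureCube ... : TEX_CUBE_DIFFUSE
    sampler2D   g_texVSM;           // was: Texture2D  ... : TEX_VSM

    // Sampling then uses the tex* intrinsics instead of object methods:
    // float4 diff = texCUBE(g_texCubeDiffuse, dir);
    // float4 vsm  = tex2D(g_texVSM, uv);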
Using Texture2DArray as RenderTarget and passing data to fragment shader
I'd like to render multiple views into an array of 2D textures, then pass this array to a fragment shader for processing.
Imagine a stack of different 2D viewpoints of the 3D scene, where the 0th texture holds the left-most view and the last element holds the right-most view.
Currently I'm just using one render texture and tiling the views, which works but has limitations in dimension sizes.
I came across a post that said 5.4 has hardware 2D texture array support, but there is no documentation on using the class Texture2DArray. I've tried but I'm not sure how to access each 2D texture element from the array to write to it, or read from it in the shader.
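For the shader side, a minimal sketch of sampling a slice, assuming the array is bound as _ViewArray and _SliceIndex is a hypothetical uniform selecting the view (the macros come from HLSLSupport.cginc and need #pragma target 3.5 or higher):

    UNITY_DECLARE_TEX2DARRAY(_ViewArray);  // declares texture + sampler pair
    float _SliceIndex;                     // hypothetical: which view to read

    fixed4 frag (v2f i) : SV_Target
    {
        // the third UV component selects the slice: 0 = left-most view
        return UNITY_SAMPLE_TEX2DARRAY(_ViewArray, float3(i.uv, _SliceIndex));
    }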
No acceptable conversion when converting Cg shader to GLSL
I have a simple shader where at some point I perform component-wise logical operations between boolean vectors:
float2 x_ = ( (x >= 0) && (x < a) ) ? float2(0., 0.) : float2(0.5, 0.5);
This compiles fine in HLSL. However, when Unity tries to convert it to GLSL, I get:
Shader error in 'Sprites/Advanced': '&&' : wrong operand types: no operation '&&' exists that takes a left-hand operand of type 'vec2' and a right operand of type 'vec2' (or there is no acceptable conversion) at line 83 (on gles)
Is there an elegant way to perform logical operations per component so that it compiles to GLSL?
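One portable sketch is to express the component-wise AND arithmetically with step(), which avoids boolean vectors entirely (assuming x and a are float2):

    // 1.0 where (x >= 0 && x < a) holds per component, 0.0 elsewhere
    float2 inside = step(0.0, x) * (1.0 - step(a, x));
    // select per component: 0.0 where inside, 0.5 otherwise
    float2 x_ = lerp(float2(0.5, 0.5), float2(0.0, 0.0), inside);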
Exporting graphically created shaders in Visual Studio to Unity [.glsl to .shader]
Hello.
Can I export a Visual Studio ".glsl" shader into Unity? Visual Studio has a great tool for creating shaders, but they cannot be converted to Unity shaders, only to ".hlsl", ".h", or ".cso".
.glsl to .shader
Moving UVs with a shader (tiling error)
With the following code I tried to animate a texture on a surface by moving its UVs. It moves, but it doesn't tile correctly: the next tile is just grey (stretched from the texture's border). How do I tell it to display the same texture over and over again? I also tried TRANSFORM_TEX in the vertex function (same behaviour). (A possible fix is sketched after the code.)
Thanks.
Shader "Custom/test" {
Properties {
_Color ("Color", Color) = (1,1,1,1)
_MainTex ("Albedo (RGB)", 2D) = "white" {}
_Offset ("Offset", Range(-2.0, 2.0)) = 0.0
}
SubShader {
Tags { "RenderType"="Opaque" }
LOD 200
CGPROGRAM
#pragma surface surf Lambert alpha:blend vertex:vert
#pragma target 3.0
sampler2D _MainTex;
float _Offset;
struct Input {
float2 uv_MainTex;
};
void vert (inout appdata_full v) {
v.texcoord.xy += _Offset;
}
fixed4 _Color;
void surf (Input IN, inout SurfaceOutput o) {
fixed4 c = tex2D (_MainTex, IN.uv_MainTex) * _Color;
o.Albedo = c.rgb;
o.Alpha = c.a;
}
ENDCG
}
FallBack "Diffuse"
}
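A hedged guess at the grey-tile symptom: if the texture's import wrap mode is Clamp, any UV outside 0..1 keeps sampling the border pixel, which looks like a grey stretch. Setting the wrap mode to Repeat in the texture import settings should make it tile; alternatively the UVs can be wrapped per pixel in the shader:

    void surf (Input IN, inout SurfaceOutput o) {
        // frac() wraps the animated UVs back into 0..1 so the pattern repeats
        fixed4 c = tex2D (_MainTex, frac(IN.uv_MainTex)) * _Color;
        o.Albedo = c.rgb;
        o.Alpha = c.a;
    }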
Custom shader samples pixels per-texture rather than from the screen
Hi all,
Thanks for reading (and sorry if this is a bit wordy); I've had a Google and a think on this but can't work out what's happening. I'm new to shaders and, after a few tutorials, have started trying to write my own, beginning with an attempt to emulate old VHS/CRT effects for a game I'm planning. So far I've written a very basic colour blur/bleed effect, which works, and I've set it as the render material for the Main Camera using SetReplacementShader(), as I intend to use it as an image effect. The shader code is here: http://pastebin.com/09NaxyKU
It works to a degree, and all the objects in the scene are rendered with their colours displaced, but it seems to be sampling on a per-texture basis rather than from the whole screen as I had expected. This means the effect on a 50x50 texture is far more obvious (and less convincing) than on a 500x500 texture, and scaling the images up or down doesn't change the effect size, it just makes the pixels look blocky. The effect also cuts off abruptly at the edge of each sprite instead of bleeding over into the background, which is what I was expecting.
I wanted the effect to apply to the whole screen equally rather than varying per texture, and I thought applying it to the camera would do that. Is there something I've missed or fundamentally not understood (more likely!)? I suspect _MainTex needs to be set to a render texture or something, but I can't for the life of me work out how to capture a render texture, apply the effect, and then display the result.
Any help would be greatly appreciated!
Thanks
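For what it's worth, a minimal sketch of the usual image-effect hookup: SetReplacementShader() swaps the shader on every object, so each object's own _MainTex is what gets sampled. To operate on the finished frame, the standard route is OnRenderImage on the camera, where Unity hands the rendered screen over as a render texture and Graphics.Blit binds it as _MainTex (class and field names here are illustrative):

    using UnityEngine;

    [ExecuteInEditMode]
    public class VHSEffect : MonoBehaviour   // attach to the Main Camera
    {
        public Material effectMaterial;      // material using the VHS shader

        void OnRenderImage(RenderTexture src, RenderTexture dst)
        {
            // src holds the whole rendered frame; Blit runs the material's
            // shader over it with src bound as _MainTex
            Graphics.Blit(src, dst, effectMaterial);
        }
    }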
HLSL clip() function doesn't work on Adreno 320
The HLSL clip() function doesn't work on Adreno 320 GPUs (for example, the Nexus 4) when compiled with Unity 5.3.5 or newer. On Adreno 330 (e.g. the Nexus 5) it works fine compiled with any version, but on Adreno 320 only older Unity versions work. I tried replacing the function with a simple test and discard, but got the same result. What is the problem?
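For reference, the discard-based replacement that was tried is presumably equivalent to this sketch (variable names are illustrative):

    // clip(c.a - _Cutoff); is equivalent to:
    if (c.a - _Cutoff < 0.0)
        discard;

Since both forms fail the same way, the problem may lie in how that driver compiles the surrounding code rather than in clip() itself.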
Shaders don't work properly on Android
Hello,
I am working on a way to create high-resolution screenshots of a tilemap on Android. To achieve this I am manually drawing the image using GetPixel/SetPixel, and subsequently feeding the resulting image through a shader to add certain details.
This works just fine on PC; however, on some Android devices (a Samsung Galaxy Tab 2, to be precise) I've run into several issues.
The first was that the image would render entirely black ( http://imgur.com/JaAFqM3 ), entirely pink ( http://imgur.com/2lgnMkA ), or transparent with a rectangle (black, pink, or blue) in the bottom-left corner ( http://imgur.com/Et1Lndb , http://imgur.com/YnYT80p , http://imgur.com/giTmdJM ).
After about two full days of research into what was causing this, I figured out it had to do with the shader using uint-type uniform variables, which are apparently not supported in GLES 2.
After fixing this, the image would REMAIN black because it was larger than 2048x2048; I fixed that by splitting the image in half (which is what the code is designed to do anyway).
After fixing that and disabling all of the working parts of the shader, it seemed to return the original image as expected, giving me the following output:
Left: http://imgur.com/egbHE6v
Right: http://imgur.com/mtxmHNV
However, when I re-enabled the working parts of the shader and reduced the quality instead of splitting, the output became this:
low-res: http://imgur.com/zdMFC8o
mid-res: http://imgur.com/7VWINg2
(mid-res is a screenshot of the app window, by the way...)
There is no code in this process that actually reads pixels from the screen, so the only way I can see this happening is if the following code is not working correctly:
// run the detail shader over this tile and write the result into finalTex
Graphics.Blit(megascreenshot[x, y], finalTex, mat);
// make the blit result the active render texture so ReadPixels reads from it
RenderTexture.active = finalTex;
// copy the active render texture back into the tile's Texture2D
megascreenshot[x, y].ReadPixels(new Rect(0, 0, megascreenshot[x, y].width, megascreenshot[x, y].height), 0, 0);
megascreenshot[x, y].Apply();
RenderTexture.active = null;
with the second line being skipped (which would result in the screen being transferred to the texture instead of the render texture).
However, there is NO output in the debug log.
for reference, here's the expected output:
Left: http://imgur.com/2sidm4c
Right: http://imgur.com/BqsizC2
Low-res:http://imgur.com/AwyQd25
Mid-res:http://imgur.com/nHtTYCR
(This is the exact same shader code as is running on the tablet, although it actually looks like my changing uints to floats has broken the rendering of the numbers.)
Full shader code:
http://pastebin.com/TrzNjbmw
Any help as to what might be going on would be much appreciated.
How can I make a shader pass that emits dynamic light?
I have been reading the docs and searching around on Google for a few hours, so I doubt there is an answer on this website, but if there is, please answer with a link. Back to my question: I have a multi-pass shader, and the objects that use it emit a very strong light, so the player and other dynamic objects need to cast shadows. Dynamic lighting is an absolute necessity for these scenes, meaning a lightmap is effectively regenerated every frame. I am aware this will be very costly for performance, so the shader will be used very sparingly, in very small scenes.
The emitted light will not have a colour variable (it will always be pure white), but it will be multiplied by an intensity value. I am not asking anyone to program the entire shader for me, but I am completely new to this type of lighting, so any snippets or documentation to look at would be very helpful. I think this is a fragment shader (it does not cast shadows or receive lighting; it is always one colour).
Thanks in advance, I apologize for my lack of knowledge in this topic.
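If it helps, the emission side is usually the easy part; here is a minimal hedged sketch of a white emissive surface output, with _Intensity as a hypothetical property. Note that emission alone does not illuminate or shadow other objects; that still requires an actual Light component (or realtime GI) on top:

    half _Intensity; // hypothetical intensity property

    void surf (Input IN, inout SurfaceOutputStandard o) {
        o.Albedo = 0;
        // pure white emission scaled by intensity
        o.Emission = fixed3(1, 1, 1) * _Intensity;
    }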
How to use SampleGrad in Unity shader
The code is in HLSL, but I can't use it.
Unity says "sampler2D object does not have methods".
The details are in [SampleGrad (DirectX HLSL Texture Object)][1].
The related article is [A Closer Look At Parallax Occlusion Mapping][2].
[1]: https://msdn.microsoft.com/en-us/library/windows/desktop/bb509698(v=vs.85).aspx
[2]: http://www.gamedev.net/page/resources/_/technical/graphics-programming-and-theory/a-closer-look-at-parallax-occlusion-mapping-r3262
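A hedged sketch of the two usual translations: with Unity's legacy sampler2D syntax, the tex2Dgrad intrinsic plays the role of SampleGrad; alternatively, DX11-style Texture2D/SamplerState declarations support .SampleGrad directly (names below are illustrative):

    sampler2D _HeightMap;

    float4 SampleHeight(float2 uv)
    {
        // legacy syntax: tex2Dgrad(tex, uv, ddx, ddy) replaces tex.SampleGrad(...)
        return tex2Dgrad(_HeightMap, uv, ddx(uv), ddy(uv));
    }

    // DX11-style alternative (requires a DX11-class shader target):
    Texture2D _HeightTex;
    SamplerState sampler_HeightTex; // Unity pairs samplers by this naming scheme

    float4 SampleHeight11(float2 uv)
    {
        return _HeightTex.SampleGrad(sampler_HeightTex, uv, ddx(uv), ddy(uv));
    }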
How can I get correct lighting on a low poly water shader?
I have been working on this low-poly water shader. I got each vertex to move up and down like I want, and I also got the shader to be transparent, but I am having problems with lighting. If Metallic is set to 1, the surface is a solid colour and I can't tell that the vertices in the centre are even moving. The light hits the plane as if it were completely flat, and an object in the centre casts a shadow that is a flat line rather than following the waves. I am very new to shader writing, so please correct me on anything else I did wrong. Here is my code (see the sketch after it):
Shader "Custom/LowPolyWater" {
Properties {
_Color ("Color", Color) = (1,1,1,1)
_MainTex("Albedo (RGB)", 2D) = "white" {}
_Glossiness("Smoothness", Range(0,1)) = 0
_Metallic("Metallic", Range(0,1)) = 0
_Speed("Speed", Range(0, 5)) = 1
_Scale("Scale", Range(0, 3)) = 0.3
_Amount("Amount", Range(0, 0.5)) = 0.1
}
SubShader {
Tags { "RenderType"="Transparent" "Queue"="Transparent" }
LOD 200
CGPROGRAM
// Physically based Standard lighting model, and enable shadows on all light types
#pragma surface surf Standard fullforwardshadows alpha
#pragma vertex vert
// Use shader model 3.0 target, to get nicer looking lighting
#pragma target 3.0
sampler2D _MainTex;
half _Glossiness;
half _Metallic;
fixed4 _Color;
half _Speed;
half _Scale;
fixed _Amount;
struct Input {
float2 uv_MainTex;
};
void vert(inout appdata_full v, out Input o) {
//Idk why i usually need this but just in case
UNITY_INITIALIZE_OUTPUT(Input, o);
//I basically plugged functions and numbers in until something worked... my favorite meathod
v.vertex.y = (sin((_Time.w * _Speed) + v.vertex.x / _Amount) + sin((_Time.w * _Speed) + v.vertex.z / _Amount)) * _Scale;
}
void surf (Input IN, inout SurfaceOutputStandard o) {
// Albedo comes from a texture tinted by color
fixed4 c = tex2D (_MainTex, IN.uv_MainTex) * _Color;
o.Albedo = c.rgb;
// Metallic and smoothness come from slider variables
o.Metallic = _Metallic;
o.Smoothness = _Glossiness;
o.Alpha = _Color.a;
}
ENDCG
}
FallBack "Diffuse"
}
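A hedged guess at the flat-lighting problem: moving vertex.y alone leaves the mesh normals pointing straight up, so the lighting still sees a flat plane. Since the displacement is analytic, the normal can be rebuilt from the wave's partial derivatives in the same vertex function (for y = f(x, z) the object-space normal is normalize(float3(-df/dx, 1, -df/dz))):

    void vert(inout appdata_full v, out Input o) {
        UNITY_INITIALIZE_OUTPUT(Input, o);
        float t = _Time.w * _Speed;
        v.vertex.y = (sin(t + v.vertex.x / _Amount) + sin(t + v.vertex.z / _Amount)) * _Scale;
        // partial derivatives of the displacement with respect to x and z
        float dYdx = cos(t + v.vertex.x / _Amount) * _Scale / _Amount;
        float dYdz = cos(t + v.vertex.z / _Amount) * _Scale / _Amount;
        v.normal = normalize(float3(-dYdx, 1, -dYdz));
    }

For a true low-poly (faceted) look the mesh also needs unshared vertices per triangle, since shared smooth normals get interpolated across faces.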
doubles and compute shaders
I have a compute shader I use to crunch the n-body gravity calculations for my project. When I use all floats in the shader, it runs fine and can process the gravity calculations for 10,000 objects in about 8 ms. However, I can't use floats, because part of the gravity equation ((G x mass1 x mass2) / d^2) can produce a number greater than a float can hold when two sun-sized masses are involved. That leads me to use doubles for that part of the calculation, which wouldn't be a problem except that it severely increases the shader's execution time, from 8 ms to 130 ms. Any input is appreciated. (A float-only rework is sketched after the code.)
// assumed data layout: xy = position, z = mass
StructuredBuffer<float4> dataIn;
RWStructuredBuffer<float4> dataOut;
int numAsteroids;

[numthreads(256,1,1)]
void GravityComp (uint3 id : SV_DispatchThreadID)
{
    uint ind = id.x;
    float2 gravResult = float2(0, 0);   // was float3(0, 0), which does not compile
    for (uint i = 0; i < (uint)numAsteroids; i++) {
        if (ind == i)
            continue;
        float dist = distance(dataIn[ind].xy, dataIn[i].xy);
        double G = (double)0.0000000000667408;
        double m1 = (double)dataIn[ind].z; // mass
        double m2 = (double)dataIn[i].z;   // mass
        double newt = (G * m1 * m2) / (double)(dist * dist);
        float acc = (float)(newt / m1);
        float2 dir = -normalize(dataIn[ind].xy - dataIn[i].xy);
        float2 grav = dir.xy * acc;
        gravResult.xy = gravResult.xy + grav.xy;
    }
    dataOut[ind].xy = gravResult.xy;
}
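A hedged observation that may remove the need for doubles entirely: the final quantity is an acceleration, and m1 cancels out of (G * m1 * m2 / d^2) / m1. G * m2 is only about 1.3e20 for a solar mass, comfortably inside float range, so the inner loop can stay in floats:

    // acc = (G*m1*m2/d^2) / m1 = G*m2/d^2 ; no double-range intermediate needed
    float acc = (0.0000000000667408 * dataIn[i].z) / (dist * dist);

This assumes SI-like units as in the original; rescaling units (e.g. mass in solar masses with a pre-multiplied G) is the other common trick.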
[Shader] Usage of #ifdef DIRECTIONAL
How can I use #ifdef DIRECTIONAL in a surf + vert shader?
Under what conditions is it defined?
Maybe it needs some .cginc, an #include, an HLSLPROGRAM block only, etc.?
Here is the full shader (a note on where DIRECTIONAL gets defined follows after it):
Shader "mitay/cutout tree" {
Properties {
_Color ("Color", Color) = (1,1,1,1)
_SpecColor ("Specular Color", Color) = (0.1, 0.1, 0.1, 1)
_MainTex ("Albedo (RGB)", 2D) = "white" {}
_BumpMap ("Bump (RGB)", 2D) = "bump" {}
_Smoothness ("Smoothness", Range(0.001,1)) = 1
_Cutoff ("Alpha cutoff", Range(0.25,0.9)) = 0.5
[MaterialToggle] _isToggled("ShakeDirection1", Float) = 0
[MaterialToggle] _isToggled2("ShakeDirec tion2", Float) = 0
_ShakeDisplacement ("Displacement", Range (0, 1.0)) = 1.0
_ShakeTime ("Shake Time", Range (0, 1.0)) = 1.0
_ShakeWindspeed ("Shake Windspeed", Range (0, 1.0)) = 1.0
_ShakeBending ("Shake Bending", Range (0, 1.0)) = 0.2
// These are here only to provide default values
[HideInInspector] _TreeInstanceColor ("TreeInstanceColor", Vector) = (1,1,1,1)
[HideInInspector] _TreeInstanceScale ("TreeInstanceScale", Vector) = (1,1,1,1)
[HideInInspector] _SquashAmount ("Squash", Float) = 1
}
    SubShader {
        Tags { "RenderType"="TreeTransparentCutout" }
        LOD 200
        Cull Off

        CGPROGRAM
        // add "addshadow" to let unity know you're displacing verts
        // this will ensure their ShadowCaster + ShadowCollector passes use the vert function and have the correct positions
        #pragma surface surf BlinnPhong fullforwardshadows vertex:vert addshadow alphatest:_Cutoff
        //#include "UnityBuiltin2xTreeLibrary.cginc"
        #pragma target 3.0

        float _isToggled;
        float _isToggled2;
        sampler2D _MainTex;
        sampler2D _BumpMap;
        fixed4 _Color;
        half _Smoothness;
        half _Glossiness;
        half _Speed;
        half _Amount;
        half _Distance;
        float _ShakeDisplacement;
        float _ShakeTime;
        float _ShakeWindspeed;
        float _ShakeBending;
        fixed4 _TreeInstanceColor;
        float4 _TreeInstanceScale;
        float4x4 _TerrainEngineBendTree;
        float4 _SquashPlaneNormal;
        float _SquashAmount;

        struct Input {
            float2 uv_MainTex;
            float2 uv_BumpMap;
        };
        fixed4 LightingNormalizedBlinnPhong (SurfaceOutput s, fixed3 lightDir, fixed3 halfDir, fixed atten)
        {
            // TODO: conditional normalization using ifdef
            fixed3 nN = normalize(s.Normal);
            fixed diff = max( 0, dot(nN, lightDir) );
            fixed nh = max( 0, dot(nN, halfDir) );
            fixed spec = pow(nh, s.Specular*128) * s.Gloss;
            fixed4 c;
            c.rgb = _LightColor0.rgb * (s.Albedo * diff + spec) * atten;
            UNITY_OPAQUE_ALPHA(c.a);
            return c;
        }

        void FastSinCos (float4 val, out float4 s, out float4 c) {
            val = val * 6.408849 - 3.1415927;
            float4 r5 = val * val;
            float4 r6 = r5 * r5;
            float4 r7 = r6 * r5;
            float4 r8 = r6 * r5;
            float4 r1 = r5 * val;
            float4 r2 = r1 * r5;
            float4 r3 = r2 * r5;
            float4 sin7 = {1, -0.16161616, 0.0083333, -0.00019841};
            float4 cos8 = {-0.5, 0.041666666, -0.0013888889, 0.000024801587};
            s = val + r1 * sin7.y + r2 * sin7.z + r3 * sin7.w;
            c = 1 + r5 * cos8.x + r6 * cos8.y + r7 * cos8.z + r8 * cos8.w;
        }

        inline float4 Squash(in float4 pos)
        {
            // To squash the tree the vertex needs to be moved in the direction
            // of the squash plane. The plane is defined by:
            //   plane point  - point lying on the plane, defined in model space
            //   plane normal - _SquashPlaneNormal.xyz
            // we're pushing the squashed tree plane in the direction of planeNormal by the amount of _SquashPlaneNormal.w
            // this squashing has to match the logic of tree billboards
            float3 planeNormal = _SquashPlaneNormal.xyz;
            // unoptimized version:
            //float3 planePoint = -planeNormal * _SquashPlaneNormal.w;
            //float3 projectedVertex = pos.xyz + dot(planeNormal, (planePoint - pos)) * planeNormal;
            // optimized version:
            float3 projectedVertex = pos.xyz - (dot(planeNormal.xyz, pos.xyz) + _SquashPlaneNormal.w) * planeNormal;
            pos = float4(lerp(projectedVertex, pos.xyz, _SquashAmount), 1);
            return pos;
        }

        void TerrainAnimateTree( inout float4 pos, float alpha )
        {
            pos.xyz *= _TreeInstanceScale.xyz;
            float3 bent = mul(_TerrainEngineBendTree, float4(pos.xyz, 0.0)).xyz;
            pos.xyz = lerp( pos.xyz, bent, alpha );
            pos = Squash(pos);
        }
        void vert (inout appdata_full v) {
            float factor = (1 - _ShakeDisplacement) * 0.5;
            const float _WindSpeed = (_ShakeWindspeed);
            const float _WaveScale = _ShakeDisplacement;
            const float4 _waveXSize = float4(0.048, 0.06, 0.24, 0.096);
            const float4 _waveZSize = float4(0.024, .08, 0.08, 0.2);
            const float4 waveSpeed = float4(1.2, 2, 1.6, 4.8);
            float4 _waveXmove = float4(0.024, 0.04, -0.12, 0.096);
            float4 _waveZmove = float4(0.006, .02, -0.02, 0.1);
            float4 waves;
            waves = v.vertex.x * _waveXSize;
            waves += v.vertex.z * _waveZSize;
            waves += _Time.x * (1 - _ShakeTime * 2 - v.color.b) * waveSpeed * _WindSpeed;
            float4 s, c;
            waves = frac(waves);
            FastSinCos(waves, s, c);
            float waveAmount = 1;
            if (_isToggled > 0)
                waveAmount = v.texcoord.y * (v.color.a + _ShakeBending);
            else
                waveAmount = v.texcoord.x * (v.color.a + _ShakeBending);
            s *= waveAmount;
            s *= normalize(waveSpeed);
            s = s * s;
            float fade = dot(s, 1.3);
            s = s * s;
            float3 waveMove = float3(0, 0, 0);
            waveMove.x = dot(s, _waveXmove);
            waveMove.z = dot(s, _waveZmove);
            v.vertex.xz -= mul((float3x3)unity_WorldToObject, waveMove).xz;
            v.color *= _TreeInstanceColor;
            float3 viewpos = mul(UNITY_MATRIX_MV, v.vertex);
            #ifdef DIRECTIONAL
            // viewpos is already declared above; redeclaring it here would be
            // a compile error, so the duplicate line was removed.
            // _AO, _Occlusion, _BaseLight and the _TerrainTree* arrays are
            // presumably declared by the commented-out UnityBuiltin2xTreeLibrary.cginc.
            float4 lightDir = 0;
            float4 lightColor = 0;
            lightDir.w = _AO;
            float4 light = UNITY_LIGHTMODEL_AMBIENT;
            for (int i = 0; i < 4; i++) {
                float atten = 1.0;
                #ifdef USE_CUSTOM_LIGHT_DIR
                lightDir.xyz = _TerrainTreeLightDirections[i];
                lightColor = _TerrainTreeLightColors[i];
                #else
                float3 toLight = unity_LightPosition[i].xyz - viewpos.xyz * unity_LightPosition[i].w;
                toLight.z *= -1.0;
                lightDir.xyz = mul((float3x3)unity_CameraToWorld, normalize(toLight));
                float lengthSq = dot(toLight, toLight);
                atten = 1.0 / (1.0 + lengthSq * unity_LightAtten[i].z);
                lightColor.rgb = unity_LightColor[i].rgb;
                #endif
                lightDir.xyz *= _Occlusion;
                float occ = dot(v.tangent, lightDir);
                occ = max(0, occ);
                occ += _BaseLight;
                light += lightColor * (occ * atten);
            }
            v.color = light * _Color.rgb * _TreeInstanceColor;
            #endif
            TerrainAnimateTree(v.vertex, v.color.w);
        }
        void surf (Input IN, inout SurfaceOutput o)
        {
            fixed4 tex = tex2D (_MainTex, IN.uv_MainTex) * _Color;
            o.Albedo = tex.rgb;
            o.Gloss = tex.a;
            o.Alpha = tex.a * _Color.a;
            o.Specular = _Smoothness;
            o.Normal = UnpackNormal(tex2D(_BumpMap, IN.uv_BumpMap));
        }
        ENDCG
    }
    Dependency "BillboardShader" = "Hidden/Nature/Tree Soft Occlusion Leaves Rendertex"
    FallBack "Diffuse"
}
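On the #ifdef DIRECTIONAL question itself, a hedged note: DIRECTIONAL, POINT, SPOT, etc. are light-variant keywords that only exist in passes compiled with the corresponding multi_compile set. A surface shader's generated forward base pass is compiled with the fwdbase variants, so #ifdef DIRECTIONAL inside its vert function should already work there. In a hand-written vert/frag pass the variants have to be requested explicitly, roughly like this sketch:

    Pass {
        Tags { "LightMode" = "ForwardBase" }
        CGPROGRAM
        #pragma vertex vert
        #pragma fragment frag
        // this is what defines DIRECTIONAL (and LIGHTMAP_ON etc.) variants
        #pragma multi_compile_fwdbase
        #include "UnityCG.cginc"
        #include "AutoLight.cginc"
        // ...
        // #ifdef DIRECTIONAL
        //     directional-light-only path
        // #endif
        ENDCG
    }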
Hlsl Pow function platform-dependent error
Hi guys,
It looks like the pow function does crazy things if the exponent is too big. For example, if it is 10, the shader simply returns 0, regardless of the actual result of the operation.
The interesting point is: how can this depend on the OS? (It actually does; the error does not happen on Linux or Mac.)
It also depends on the project it is in: the shader works correctly in one project and gives the error in another.
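A hedged guess at the cause: on GLSL targets pow(x, y) is typically compiled as exp2(y * log2(x)), which is undefined for a negative base, and different drivers resolve that differently (often to 0 or NaN); a larger exponent amplifies whatever tiny negative values sneak into the base. Guarding the base usually makes the result platform-independent:

    // assumes the intended behaviour is pow of the magnitude;
    // use max(x, 0.0) instead if negative bases should clamp to zero
    float safePow(float x, float y)
    {
        return pow(abs(x), y);
    }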
WebGL 1.0 shader loop error, workaround?
It seems WebGL doesn't support some kinds of loops in HLSL shaders; my shader only has this variant.
I want to know a workaround for this problem, if possible, and whether these issues will be resolved in the future, e.g. with the coming WebGL 2.0.
My shader uses a loop to iterate through points to print decals; it is a fragment shader.
The error log is: ERROR: 0:37: 'while' : This type of loop is not allowed.
I have already looked at https://docs.unity3d.com/Manual/webgl-graphics.html
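A hedged workaround sketch: GLSL ES 1.0 (the language behind WebGL 1.0) only guarantees loops whose trip count is known at compile time, so the usual rewrite is a for loop with a constant upper bound plus an early break (bound and uniform names here are illustrative; the strictest drivers may also reject the dynamic break, in which case the condition must guard the loop body instead):

    #define MAX_DECALS 16          // hypothetical compile-time maximum
    for (int i = 0; i < MAX_DECALS; i++) {
        if (i >= _DecalCount)      // _DecalCount: hypothetical uniform
            break;
        // ... process decal i ...
    }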
Why does this shader give different result Editor/ Android?
Hi,
I've written a simple *curved* unlit shader in HLSL that basically translates each vertex based on its distance to the camera (the magic happens in the vertex shader):
Properties
{
    _BendFactor ("Bend Factor", Vector) = (0, 0, 0, 0)
    _Color ("Color", Color) = (1,1,1,1)
    _MainTex ("Texture", 2D) = "white" {}
}
SubShader
{
    Tags { "RenderType"="Opaque" }
    LOD 100
    Lighting Off

    Pass
    {
        CGPROGRAM
        #pragma vertex vert
        #pragma fragment frag
        #include "UnityCG.cginc"

        struct appdata
        {
            float4 vertex : POSITION;
            float2 uv_tex1 : TEXCOORD0;
        };

        struct v2f
        {
            float2 uv_tex1 : TEXCOORD0;
            float4 vertex : SV_POSITION;
        };

        fixed4 _BendFactor;
        fixed4 _Color;
        sampler2D _MainTex;
        float4 _MainTex_ST;

        v2f vert (appdata v)
        {
            v2f o;
            // here I calculate the offset
            fixed4 offset = mul( unity_ObjectToWorld, v.vertex );
            offset.xyz -= _WorldSpaceCameraPos.xyz;
            offset = _BendFactor * (offset.z * offset.z);
            o.vertex = UnityObjectToClipPos( v.vertex ) + offset;
            o.uv_tex1 = TRANSFORM_TEX(v.uv_tex1, _MainTex);
            return o;
        }

        fixed4 frag (v2f i) : SV_Target
        {
            return tex2D(_MainTex, i.uv_tex1) * _Color;
        }
        ENDCG
    }
}
It worked perfectly in Unity 5.5, but I just updated to 5.6 and now I get different results in the Editor vs. on an Android device. On the Y axis, I get opposite translations:
Editor:
![alt text][1]
Huawei P9 plus:
![alt text][2]
Does anyone know what could be the issue here?
Thank you in advance!
[1]: /storage/temp/95003-1.png
[2]: /storage/temp/95005-3.png
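A hedged guess at the cause: the offset is added after UnityObjectToClipPos, i.e. in clip space, and the clip-space Y axis points in opposite directions on Direct3D-like (Editor) and OpenGL-like (Android) platforms, which a version upgrade can expose. _ProjectionParams.x carries the flip sign (1 or -1), so the offset can be corrected before adding it:

    // compensate for the platform-dependent clip-space Y flip
    offset.y *= _ProjectionParams.x;
    o.vertex = UnityObjectToClipPos(v.vertex) + offset;

Another option is to apply the bend in world or object space before the projection, which sidesteps clip-space conventions entirely.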
How to access instanceID in a shader?
SkinnedMeshRenderers aren't batched in Unity, so I want to implement GPU skinning with GPU instancing.
I tried to access the instance ID (like SV_InstanceID), but it is wrapped in preprocessor macros, so I searched the built-in shaders (version 5.6.1f) and found this comment:
// basic instancing setups
// - UNITY_VERTEX_INPUT_INSTANCE_ID Declare instance ID field in vertex shader input / output struct.
// - UNITY_GET_INSTANCE_ID (Internal) Get the instance ID from input struct.
And now I'm frustrated: the "Internal" keyword makes me uneasy. I have just one question:
***Can I use "instanceID" in shader code?***
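A hedged sketch of the documented route, which avoids the internal macro entirely: after UNITY_SETUP_INSTANCE_ID runs, the global unity_InstanceID (from UnityInstancing.cginc) is available in instanced variants:

    #pragma multi_compile_instancing

    struct appdata
    {
        float4 vertex : POSITION;
        UNITY_VERTEX_INPUT_INSTANCE_ID   // declares the instance ID field
    };

    v2f vert (appdata v)
    {
        v2f o;
        UNITY_SETUP_INSTANCE_ID(v);      // extracts the ID from the input
    #ifdef UNITY_INSTANCING_ENABLED
        uint id = unity_InstanceID;      // usable e.g. as a skinning-buffer index
    #endif
        // ...
        return o;
    }

So yes, the instance ID is usable; UNITY_GET_INSTANCE_ID is just the internal helper behind UNITY_SETUP_INSTANCE_ID.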
Is it possible to render to a texture via RWTexture2D without using Graphics.Blit?
I would like to save the output of a fragment shader to a texture so that I can reuse the data in later shaders. I originally used Graphics.Blit, but the problem is that it submits a quad to the shader, and I need to render a mesh.
My current fallback is to use a RWStructuredBuffer to store the colour values. Unfortunately, I have to do this per vertex, as I don't know in advance how many fragments there will be, so reusing the shaded data loses some accuracy.
So, I would like to know if there is a way to make a render texture the shader's target without using Graphics.Blit.
![alt text][1]
This is my current shader.
![alt text][2]
And this is the code initialising and assigning the buffers.
[1]: /storage/temp/96691-buffer-shader.png
[2]: /storage/temp/96692-shadedbuffer-setup.png
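A hedged sketch of one route: on DX11-class targets a fragment shader can write to a UAV bound from script with Graphics.SetRandomWriteTarget (the render texture needs enableRandomWrite = true; names and the UAV slot below are illustrative):

    #pragma target 5.0
    RWTexture2D<float4> _Result : register(u1); // slot 1 = SetRandomWriteTarget index

    fixed4 frag (v2f i) : SV_Target
    {
        fixed4 col = fixed4(1, 0, 0, 1);   // placeholder for the mesh's real shading
        _Result[uint2(i.vertex.xy)] = col; // i.vertex is SV_POSITION, i.e. pixel coords
        return col;
    }

The plainer alternative is Camera.targetTexture or Graphics.SetRenderTarget plus drawing the mesh, which renders actual geometry (not a quad) into the render texture.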
How to reuse HLSL shader code?
Hello!
I needed a set of shaders that curve based on the distance from the camera. Basically, I took some of the default HLSL shaders Unity provides and changed the code in the vertex shader to adjust the positions of the vertices.
It works great, but I would like to know if there is a way to centralize the vertex shader code, since it is the same in all cases, and share it across all the shaders I need, because only the fragment shader differs.
Right now, if I need to change anything in the vertex shader function, I have to change it in six different places.
Thank you!
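A hedged sketch of the usual approach: move the shared structs and vertex function into a .cginc file next to the shaders and #include it everywhere (file and function names are illustrative; the bend logic here mirrors the curved shader quoted earlier):

    // CurvedVertex.cginc
    #ifndef CURVED_VERTEX_INCLUDED
    #define CURVED_VERTEX_INCLUDED
    #include "UnityCG.cginc"

    struct appdata { float4 vertex : POSITION; float2 uv : TEXCOORD0; };
    struct v2f     { float2 uv : TEXCOORD0; float4 vertex : SV_POSITION; };

    sampler2D _MainTex;
    float4 _MainTex_ST;
    float4 _BendFactor;

    v2f curvedVert (appdata v)
    {
        v2f o;
        // shared curvature logic lives here, in one place
        float4 offset = mul(unity_ObjectToWorld, v.vertex);
        offset.xyz -= _WorldSpaceCameraPos.xyz;
        offset = _BendFactor * (offset.z * offset.z);
        o.vertex = UnityObjectToClipPos(v.vertex) + offset;
        o.uv = TRANSFORM_TEX(v.uv, _MainTex);
        return o;
    }
    #endif

Each shader then only supplies its own fragment function:

    CGPROGRAM
    #include "CurvedVertex.cginc"
    #pragma vertex curvedVert
    #pragma fragment frag
    // ... per-shader frag() here ...
    ENDCG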
(Shader) How to find the coordinates of a given color inside a ramp texture?
I'm currently working on a palette-swap feature for my game's sprites in Unity. I know that this is normally achieved by using a grayscale texture and then using a ramp texture to replace the colors, or by mapping each base color to a given channel value and then doing a lookup. BUT, since every sprite is hand drawn and painted, there are like a gazillion different RGB values, and applying those techniques is a little troublesome.
So, what I want to do is write a shader that does the following:
- Get the RGB value of the pixel being processed
- Find the coordinates of that value in a palette texture ([n-colors]x2) (this is the part I have no idea how to accomplish)
- With those coordinates, get the swap color that sits one row beneath the original color inside the palette texture
- Apply the new color to the sprite
Basically, this:
![alt text][1]
[1]: https://i.stack.imgur.com/AHNCm.png
What I need to know is how to find a color inside the palette texture, **basically a reverse tex2D(_Texture, coord)**.
Is there any way I could achieve this? If so, how efficient is it? Is there any other way?
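There is no built-in reverse lookup, but a hedged brute-force sketch is to walk the palette's key row and compare colors with a small tolerance (uniform and texture names are illustrative; tex2Dlod needs #pragma target 3.0 and keeps the loop safe on compilers that dislike gradient samples in loops):

    sampler2D _Palette;     // [n]x2: key colors on the top row, swap colors below
    float _PaletteWidth;    // hypothetical: number of palette columns

    fixed4 SwapColor(fixed4 src)
    {
        for (int i = 0; i < 64; i++) {            // constant bound for GLES
            if (i >= (int)_PaletteWidth) break;
            float u = (i + 0.5) / _PaletteWidth;  // center of column i
            fixed4 key = tex2Dlod(_Palette, float4(u, 0.75, 0, 0)); // top row
            if (distance(src.rgb, key.rgb) < 0.01)                  // tolerance match
                return tex2Dlod(_Palette, float4(u, 0.25, 0, 0));   // row beneath
        }
        return src; // no key matched: keep the original color
    }

With n colors this costs up to n texture reads per pixel, so it only scales to small palettes; precomputing a 3D lookup texture indexed by the source RGB is the usual faster alternative.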