
# HLSL convert float to int losing precision

I have the following fragment function in my shader (the rest is the Sprite-Default shader that ships with Unity):

    float4 SpriteFrag(v2f IN) : SV_Target
    {
        float4 c = tex2D(_MainTex, IN.texcoord);
        int r = int(c.r * 255);
        int g = int(c.g * 255);
        int b = int(c.b * 255);
        c.r = r / 255.0f;
        c.g = g / 255.0f;
        c.b = b / 255.0f;
        c.rgb *= c.a;
        return c;
    }

The sampled color value is `RGBA(68, 48, 36, 255)`. I would like to cast the R, G, B values to ints so I can perform bit manipulation on them, but the cast values aren't accurate. They end up as:

    r = 66
    g = 46
    b = 34

They are not consistently off by 2 either (otherwise I would just increment them and work with that). What am I doing wrong?

**Note: if I return `c` unmodified, the color values are correct.**

## Update

I feel like I'm going insane here. Even returning a hardcoded red value seems to be incorrect:

    fixed4 SpriteFrag(v2f IN) : SV_Target
    {
        float4 c = tex2D(_MainTex, IN.texcoord);
        c.r = float(48.0 / 255.0);
        c.g = 0;
        c.b = 0;
        c.rgb *= c.a;
        return c;
    }

This should set the R channel to `48`, i.e. `48/255 = 0.18823529...`, but instead it comes out as `120`, i.e. `0.4705883`. What on earth is going on here?
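For reference, here is a minimal sketch of the quantize → bit-manipulate → dequantize round trip the question describes, assuming the surrounding Sprite-Default shader supplies `_MainTex` and `v2f` as above. The function name `SpriteFragRounded` and the clear-low-bit operation are placeholders, not from the original post. One defensive change it makes: `int()` truncates toward zero, so a channel that arrives as `67.9999` quantizes to `67`; `round()` avoids that, though truncation alone would only explain an off-by-one, not the off-by-two reported. Bitwise operators on `int` require Shader Model 4.0 or later.

    // Hypothetical variant of the fragment function above.
    // Assumes _MainTex and v2f are defined by the enclosing Sprite shader.
    float4 SpriteFragRounded(v2f IN) : SV_Target
    {
        float4 c = tex2D(_MainTex, IN.texcoord);

        // Quantize each channel to an integer in [0, 255].
        // round() snaps to the nearest integer rather than truncating,
        // so 67.9999 becomes 68 instead of 67.
        int r = (int)round(c.r * 255.0);
        int g = (int)round(c.g * 255.0);
        int b = (int)round(c.b * 255.0);

        // Placeholder bit manipulation: clear the lowest bit of each channel.
        r &= ~1;
        g &= ~1;
        b &= ~1;

        // Dequantize back to the [0, 1] range the pipeline expects.
        c.r = r / 255.0;
        c.g = g / 255.0;
        c.b = b / 255.0;

        c.rgb *= c.a;
        return c;
    }

Note that this sketch only hardens the cast itself. The post's own evidence (returning `c` unmodified is correct, yet a hardcoded constant also comes out wrong) suggests the discrepancy is introduced somewhere outside the integer math.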
