[renderer] Use 16 bits for d_lightstylevalue

Even the comment says it's 8.8, so there's no need for 32 bits per value.
It seems to have made a very small improvement to my GLSL stub test, but
it's probably just noise (< 0.5%). However, having it "officially" 16
bits means that cached values can also be 16 bits, reducing struct sizes
when I rework lightmap surface data (taking the cache from 16 to 8
bytes).
This commit is contained in:
Bill Currie 2024-01-26 02:15:04 +09:00
parent c5fc34bb0b
commit 0a9cc91503
2 changed files with 2 additions and 2 deletions


@@ -127,7 +127,7 @@ extern float xscale, yscale;
 extern float xscaleinv, yscaleinv;
 extern float xscaleshrink, yscaleshrink;
-extern int d_lightstylevalue[256]; // 8.8 frac of base light value
+extern int16_t d_lightstylevalue[256]; // 8.8 frac of base light value
 extern void TransformVector (const vec3_t in, vec3_t out);
 extern void SetUpForLineScan(fixed8_t startvertu, fixed8_t startvertv,


@@ -66,7 +66,7 @@ vec4f_t r_entorigin; // the currently rendering entity in world
 // screen size info
 refdef_t r_refdef;
-int d_lightstylevalue[256]; // 8.8 fraction of base light value
+int16_t d_lightstylevalue[256]; // 8.8 fraction of base light value
 byte color_white[4] = { 255, 255, 255, 0 }; // alpha will be explicitly set
 byte color_black[4] = { 0, 0, 0, 0 }; // alpha will be explicitly set