after getting DXGI_ERROR_DEVICE_REMOVED or D3DDDIERR_DEVICEREMOVED,
GetDeviceRemovedReason can return DXGI_ERROR_DEVICE_RESET
this particular case should trigger a video restart instead of a fatal error
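a minimal sketch of the intended handling, assuming a Present call site with
access to the device (Cbuf_AddText and ri.Error are the usual id Tech 3 entry
points, the other names are placeholders; D3DDDIERR_DEVICEREMOVED comes from
the D3D DDI headers or a manual definition):

    const HRESULT hr = swapChain->Present(0, 0);
    if (hr == DXGI_ERROR_DEVICE_REMOVED || hr == D3DDDIERR_DEVICEREMOVED) {
        const HRESULT reason = device->GetDeviceRemovedReason();
        if (reason == DXGI_ERROR_DEVICE_RESET) {
            // the device was reset (e.g. TDR recovery) -> restart the video system
            Cbuf_AddText("vid_restart\n");
        } else {
            // hung device, internal driver error, etc. -> nothing to recover from
            ri.Error(ERR_FATAL, "Present failed, device removed, reason: 0x%08X", (unsigned int)reason);
        }
    }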
- r_alphaToCoverageMipBoost scales the alpha value based on the current texture LoD
this prevents the "fading with distance" effect (see the sketch after this list)
- mip 0 dimensions are tested to decide whether contrast boosting around 0.5 is enabled
this is to deal with high r_picmip configs
- improved the algorithm: sharper results with fewer temporal artefacts
- GLSL 4.00 is required for the GL3 backend to use A2C (for textureQueryLod)
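a C++ transcription of the shader-side idea (the exact formulas and the gating
direction are assumptions, not the real shader code):

    #include <algorithm>

    // mipLevel would come from textureQueryLod in GLSL 4.00
    // or Texture2D::CalculateLevelOfDetail in HLSL
    static float A2CBoostAlpha(float alpha, float mipLevel, float mipBoost, bool contrastBoost)
    {
        // counter the alpha attenuation from mipmap averaging so that
        // alpha-tested surfaces don't fade away with distance
        alpha = std::min(alpha * (1.0f + mipLevel * mipBoost), 1.0f);

        // contrast boosting around 0.5 is gated on mip 0's dimensions
        // to deal with high r_picmip configs
        if (contrastBoost) {
            alpha = std::min(std::max((alpha - 0.5f) * 2.0f + 0.5f, 0.0f), 1.0f);
        }
        return alpha;
    }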
the issues were:
- screenshots were black because of mismatched dimensions in the CopyResource call (see the sketch below)
- display was showing garbage because the scissor rect was too large
also increased the buffer's size (e.g. enough to draw all the chars in the console at 4K)
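for reference, CopyResource requires identical dimensions and formats, so the
read-back texture has to be derived from the back buffer's own description;
a minimal sketch (variable names are placeholders):

    // derive the staging texture from the back buffer so CopyResource
    // never sees mismatched dimensions or formats
    D3D11_TEXTURE2D_DESC desc;
    backBufferTexture->GetDesc(&desc);
    desc.Usage = D3D11_USAGE_STAGING;
    desc.BindFlags = 0;
    desc.CPUAccessFlags = D3D11_CPU_ACCESS_READ;
    desc.MiscFlags = 0;
    ID3D11Texture2D* readbackTexture = NULL;
    device->CreateTexture2D(&desc, NULL, &readbackTexture);
    context->CopyResource(readbackTexture, backBufferTexture);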
one of the crashes happens in R_SortDrawSurfs (the fix is sketched after this chain):
-> render command list is too full
-> RE_EndFrame returns early because it can't allocate RC_SWAP_BUFFERS
-> R_ClearFrame in RE_EndFrame doesn't get called
-> the next frame starts with r_firstSceneDrawSurf etc. not being reset to 0
-> r_firstSceneDrawSurf becomes really close to the maximum draw surface limit
-> the draw surface list is iterated incorrectly (no wrapping handled)
-> we fetch a draw surface we shouldn't
-> its sort key gets decoded and we get an invalid sorted shader index
-> we fetch a NULL shader at that index location
-> we attempt to read shader->sort
-> we crash reading address 76
-> 76 is exactly the byte offset of the sort member within the shader_t struct
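the fix, sketched against the usual id Tech 3 structure (simplified, not the
exact CNQ3 code): never let the early-out skip the per-frame reset

    // in RE_EndFrame: the swap command is optional, the frame reset is not
    swapBuffersCommand_t* cmd = (swapBuffersCommand_t*)R_GetCommandBuffer(sizeof(*cmd));
    if (cmd != NULL) {
        cmd->commandId = RC_SWAP_BUFFERS;
    }
    R_IssueRenderCommands(qtrue);
    R_ClearFrame(); // always runs, so r_firstSceneDrawSurf etc. start the next frame at 0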
- renamed r_softSprites to r_depthFade
the term's more descriptive and it helps that UE4 uses it
- fixed the GL3 fragment shader halving the depth bias
- fixed the D3D11 pixel shader only fetching depth sample 0
not fixed for GL3 yet, see the code comments for that
- added support for more blend states
- added the q3map_cnq3_depthFade general shader directive (the fade math is sketched below)
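a C++ sketch of the general depth fade idea (parameter names and the exact
math are assumptions): fade the fragment out as it approaches the opaque
geometry already in the depth buffer

    #include <algorithm>

    // convert a non-linear [0,1] depth buffer value to a view-space distance
    static float LinearizeDepth(float zwDepth, float zNear, float zFar)
    {
        return (zNear * zFar) / (zFar - zwDepth * (zFar - zNear));
    }

    // scale alpha by how far the fragment sits in front of the opaque
    // geometry behind it; fadeDist controls how soft the intersection looks
    static float DepthFadeAlpha(float alpha, float sceneDepth, float fragDepth,
                                float zNear, float zFar, float fadeDist)
    {
        const float dist = LinearizeDepth(sceneDepth, zNear, zFar) -
                           LinearizeDepth(fragDepth, zNear, zFar);
        const float fade = std::min(std::max(dist / fadeDist, 0.0f), 1.0f);
        return alpha * fade;
    }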
it happened because the color channels of blendColor weren't initialized:
the uninitialized memory could leave at least one channel holding a NaN,
and feeding a NaN into the shader logic would break it
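the fix boils down to making sure every channel starts out as a valid value:

    // before: float blendColor[4]; -> stack garbage, possibly NaN bit patterns
    float blendColor[4] = { 0.0f, 0.0f, 0.0f, 0.0f };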
the D3D11 tweaks:
- better error message formatting
- D3DDDIERR_DEVICEREMOVED from Present is a fatal error too
- "synchronized offsets" is always the automatic behavior for now
it turns out that split mode isn't always the fastest on NVIDIA
- ditched vertex colors (not wanted) and alpha tests (not needed) in the shaders
- using a Bezier fall-off to get much softer edges (see the sketch after this list)
- added support for transparent surfaces that don't write depth by adjusting the depth test
- multiplying the diffuse texture's color by its alpha in non-opaque passes
- fixed triangle rejection based on cull type and normal direction
- reflecting normals in shaders to support two-sided surfaces
- rejecting surfaces with no diffuse stage or bad blend states as early as possible
- liquids get lit more weakly than other surfaces
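for the fall-off: a cubic Bezier with control values 0, 0, 1, 1 collapses to
the smoothstep cubic, which is soft at both ends of the light's range (the
exact control points used here are an assumption):

    // t = 1 - distance / radius, clamped to [0, 1]
    static float BezierFallOff(float t)
    {
        // a cubic Bezier through 0, 0, 1, 1 reduces to 3t^2 - 2t^3
        return t * t * (3.0f - 2.0f * t);
    }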
the problem is that stb_image can and will allocate much more than it needs to
e.g. for a 2048x2048 BGR image:
it allocates an unnecessary intermediate 12 MB buffer to decode the image
instead of decoding it directly into the final 16 MB RGBA buffer
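concretely: asking stbi_load for 4 components doesn't decode straight into
RGBA, stb_image first decodes into a buffer with the file's native channel
count and then converts (the file path below is just an example):

    #include "stb_image.h"

    // req_comp = 4: for a 2048x2048 BGR image this first allocates a
    // 3-channel 12 MB buffer, then converts into a separate 16 MB RGBA buffer
    int w, h, comp;
    unsigned char* rgba = stbi_load("textures/example.tga", &w, &h, &comp, 4);
    if (rgba != NULL) {
        // upload / process the RGBA data here
        stbi_image_free(rgba);
    }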
the old CNQ3 code didn't decode greyscale properly because of a missing macro call
it also didn't range-check memory accesses at all
this is because it would draw (parts of) geometric edges with different colors, which makes visual inspections annoying
also, the final MSAA sample count is now always reported by the GL3 and GL2 backends