Rework GLimp_GetRefreshRate(), fix a stupid bug.

In the old world GLimp_GetRefreshRate() was called once at renderer
startup. Now, in the new SDL 2.0-only world, it's called every frame,
and thus the target framerate got increased by one every frame... That
led to subtle timing problems when vsync is enabled.
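
The mechanics of the bug, as a minimal sketch (simplified from the real
function; the -1 sentinel and the placeholder query are assumptions,
the increment is the hack removed in the diff below):

static int glimp_refreshRate = -1;

int
GLimp_GetRefreshRate(void)
{
	if (glimp_refreshRate == -1)
	{
		/* In the real code this is where SDL is asked for the
		   display's mode; 60 stands in for that query here. */
		glimp_refreshRate = 60;
	}

	/* The bug: this line was meant as a harmless +1 fudge applied
	   once at renderer startup. Called every frame, it increments
	   the cached value again and again: 61, 62, 63, ... */
	glimp_refreshRate++;

	return glimp_refreshRate;
}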

While here, remove the hack added for some AMD GPU drivers on Windows.
Older driver versions returned 59 on 59.95hz displays, leading to small
timing problems. This is fixed in newer versions, so we don't need to
work around it. Removing the hack gives us somewhat more overall timing
precision.

If someone really needs the hack, vid_displayrefresh can be set to 60
to get the old behaviour.
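
For example (the cvar name is taken from the text above; the set syntax
is the usual Quake II console one, typed at the console or put in a
config file):

	set vid_displayrefresh 60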
Yamagi Burmeister 2018-09-08 19:05:10 +02:00
parent 3c349d6078
commit 95bbb9900b

@@ -377,7 +377,7 @@ GLimp_GetRefreshRate(void)
 
 	if (vid_displayrefreshrate->value > -1)
 	{
-		glimp_refreshRate = ceil(vid_displayrefreshrate->value);
+		glimp_refreshRate = floor(vid_displayrefreshrate->value);
 	}
 
 	/* Do this only once. We asume that no one will change their
@@ -389,7 +389,7 @@ GLimp_GetRefreshRate(void)
 		SDL_DisplayMode mode;
 
 		int i = SDL_GetWindowDisplayIndex(window);
 
-		if(i >= 0 && SDL_GetCurrentDisplayMode(i, &mode) == 0)
+		if (i >= 0 && SDL_GetCurrentDisplayMode(i, &mode) == 0)
 		{
 			glimp_refreshRate = mode.refresh_rate;
 		}
@@ -401,12 +401,5 @@ GLimp_GetRefreshRate(void)
 		}
 	}
 
-	/* The value reported by SDL may be one frame too low, for example
-	   on my old Radeon R7 360 the driver returns 59hz for my 59.95hz
-	   display. And Quake II isn't that accurate, we loose a little bit
-	   here and there. Since it doesn't really hurt if we're running a
-	   litte bit too fast just return one frame more than we really have. */
-	glimp_refreshRate++;
-
 	return glimp_refreshRate;
 }
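
For reference, here is a rough sketch of the whole function's shape
after this commit, assembled from the hunks above. It's not the
verbatim file: the caching condition (the -1 sentinel) and the
surrounding declarations are assumptions, and vid_displayrefreshrate
and window come from the rest of the renderer backend.

static int glimp_refreshRate = -1;

int
GLimp_GetRefreshRate(void)
{
	/* A user-set vid_displayrefreshrate overrides the SDL query;
	   this commit changed the rounding from ceil() to floor(). */
	if (vid_displayrefreshrate->value > -1)
	{
		glimp_refreshRate = floor(vid_displayrefreshrate->value);
	}

	/* Query SDL only once and cache the answer. */
	if (glimp_refreshRate == -1)
	{
		SDL_DisplayMode mode;

		int i = SDL_GetWindowDisplayIndex(window);

		if (i >= 0 && SDL_GetCurrentDisplayMode(i, &mode) == 0)
		{
			glimp_refreshRate = mode.refresh_rate;
		}
	}

	/* No +1 fudge anymore: the rate is returned as queried. */
	return glimp_refreshRate;
}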