The last commits made some bigger changes to the interaction between
the GL renderers and the client. The code is now SDL 2.0 conformant;
window and context creation are strictly distinct operations. SDL is
only initialized when necessary. Since this broke the client <->
renderer API, bump its version.
There are a lot of things left to do for dark and cold winter evenings:
* The software renderer implements its own window handling and
reinitializes SDL whenever vid_restart is called. This is highly
problematic.
* vid_fullscreen is abused to communicate changes to the renderer
config throughout the code. That's a very ugly, messy and potentially
very problematic hack, but not easy to remove.
* Some function calls between the client and the renderer are
unnecessary.
The changes to the client <-> renderer interaction fixed issue #302.
* Another round of general cleanup.
* Introduce gl3_libgl cvar to force a libGL.
* Fix stencil buffer tests.
* Further untangle window <-> context stuff.
The window is now fully at client side, the context at renderer side.
This is another break of the renderer API. And at least GL1 needs to
track this, it's broken for now.
* Even more syntax and code style fixes.
* Rename functions to match their actual purpose.
* Fix comments.
* SDL initialization and shutdown is now client side only. With
SDL 1.2 finally gone there's no need to involve the renderers
in it.
This breaks the client <-> renderer API. I haven't bumped the API
version with this commit because there're likely more changes when
I'm going through the renderer side of things. The VID backend also
needs a lot of love...
It might be a good idea to move these SDL backend files into the client
and rename them. We'll decide that at a later time.
If too many of these sounds are started in one frame (for example if the
player shoots with the super shotgun into the power screen of a Brain)
things get too loud and OpenAL is forced to scale the volume of several
other sounds and the background music down. That leads to a noticeable
and annoying drop in the overall volume.
Work around that by limiting the number of sounds started. 16 was
chosen by empirical testing.
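A minimal sketch of the idea (the function and variable names are
illustrative, not the actual OpenAL backend code):

    #define MAX_SOUNDS_PER_FRAME 16   /* limit chosen by empirical testing */

    static int soundsStartedThisFrame;

    static void
    AL_MaybeStartSound(void)
    {
        if (soundsStartedThisFrame >= MAX_SOUNDS_PER_FRAME)
        {
            /* too many sounds already started this frame, skip this one */
            return;
        }

        soundsStartedThisFrame++;
        /* ... set up and start the OpenAL source as before ... */
    }

    static void
    AL_FrameReset(void)
    {
        /* called once per frame before new sounds are queued */
        soundsStartedThisFrame = 0;
    }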
SDL_WINDOW_FULLSCREEN changes the display resolution if the requested
resolution is different from the actual resolution.
SDL_WINDOW_FULLSCREEN_DESKTOP doesn't do that, it places a smaller or
bigger render area somewhere inside the fullscreen area. This is
somewhat nicer with modern high resolution flatscreens.
This commit changes vid_fullscreen 1 from SDL_WINDOW_FULLSCREEN to
SDL_WINDOW_FULLSCREEN_DESKTOP. Additionally, vid_fullscreen 2 is
implemented, it uses SDL_WINDOW_FULLSCREEN to create the fullscreen
area.
TL;DR: Use vid_fullscreen 1 to keep the current resolution or use
vid_fullscreen 2 to switch the resolution.
Implementation details: The whole fullscreen stuff is a horrible mess.
Like generations of hackers before me, I'm not desperate enough to clean
it up. GLimp_InitGraphics() is modified to take the fullscreen mode as
an integer and not as a boolean. That's a change to the renderer API.
In GLimp_InitGraphics() the needed SDL fullscreen mode flag is
determined once at the top and just used further down below. That saves
some SDL1 <-> SDL2 compatibility cruft. IsFullscreen() was modified to
return the actual fullscreen mode and not just if fullscreen is enabled.
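A minimal sketch of that flag selection (the helper name is made up,
only the SDL flags and the vid_fullscreen values are from the
description above):

    #include <SDL2/SDL.h>

    static Uint32
    FullscreenFlag(int fullscreen)
    {
        if (fullscreen == 1)
        {
            /* vid_fullscreen 1: keep the desktop resolution */
            return SDL_WINDOW_FULLSCREEN_DESKTOP;
        }

        if (fullscreen == 2)
        {
            /* vid_fullscreen 2: switch the display resolution */
            return SDL_WINDOW_FULLSCREEN;
        }

        /* vid_fullscreen 0: windowed */
        return 0;
    }

The result is determined once at the top of GLimp_InitGraphics() and
OR'ed into the window flags further down.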
One problem was that GL3_Shutdown() called several functions that use
the gl* function pointers - not a good idea if InitContext() failed
and the function pointers are all NULL. So check for that.
Similarly in GL3/R_ShutdownWindow() calling glClear() etc.
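A minimal sketch of such a guard (assuming the gl* names are
loader-provided function pointers as in the GL3 renderer, the exact
check is an assumption):

    /* in GL3_Shutdown() / GL3/R_ShutdownWindow(): */
    if (glClear != NULL)   /* the gl* pointers are NULL if InitContext() failed */
    {
        glClear(GL_COLOR_BUFFER_BIT);
        /* ... further GL cleanup ... */
    }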
Another problem was that R_SetMode() would, if R_SetMode_impl() failed,
try again with a "safe" resolution (640x480 unless we had another
working resolution before) - which is bad if we're already using that
"safe" resolution because then GLimp_InitGraphics() would check mode
and fullscreen and decide it hasn't changed and do nothing and return
true, which would make SetMode() believe everything is fine and
afterwards all hell breaks loose.
if the lightmap textures are 1024x512 instead of 128x128, all original
Q2 levels will only need one lightmap texture (instead of max 26 or so)
and even maps that needed all 127 (the 128th was the dynamic lightmap)
won't need more than 4.
This should result in fewer glBindTexture() calls.
(Note: When I wrote "1 lightmap texture" I meant 4, because where the
old renderer dynamically blended the up to 4 lightmaps/surface, I put
them in 4 textures that belong together and are all passed to and
blended in the fragment shader)
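For reference on the numbers above: a 1024x512 atlas holds
(1024/128) * (512/128) = 8 * 4 = 32 of the old 128x128 blocks, so even
the worst case of 127 old-style lightmaps fits into 127/32 rounded up =
4 atlas textures.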
the screenshot command now supports the filetype as optional argument
(just "screenshot" will use tga like before):
"screenshot png" will save the screenshot as PNG, same with jpg, png
and tga.
For jpg, you can even specify the quality, like "screenshot jpg 90"
(the quality is between 1 and 100, like with libjpeg).
To reduce duplicated code, I added Vid_WriteScreenshot() to refimport_t
and implement most of it in the client (vid.c).
The renderer still fetches the raw image data from OpenGL or whatever
and then calls ri.Vid_WriteScreenshot() which will write it to disk in
the format requested by the user.
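A rough sketch of the renderer side (the exact
ri.Vid_WriteScreenshot() parameters are an assumption, only the
division of labor between renderer and client is described above):

    static void
    R_ScreenShot(void)
    {
        int w = vid.width, h = vid.height;   /* current render size */
        byte *buffer = malloc(w * h * 3);

        if (!buffer)
        {
            return;
        }

        /* the renderer only fetches the raw RGB data... */
        glReadPixels(0, 0, w, h, GL_RGB, GL_UNSIGNED_BYTE, buffer);

        /* ...and hands it to the client, which writes tga, png or jpg
           depending on what the user requested */
        ri.Vid_WriteScreenshot(w, h, 3, buffer);

        free(buffer);
    }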
pass both normal texture and lightmap to shader instead of rendering the
level geometry again with the lightmap and GL_BLEND.
This is not done yet, some translucent surfaces are buggy now and it's
only static lightmaps. For this to work properly I'll need to add some
more shaders with and without lightmaps and use them accordingly.
For example, translucent surfaces (SURF_TRANS33/66) never have
lightmaps, and neither do perturbed ones (SURF_DRAWTURB) like water and
lava, but scrolling surfaces (SURF_FLOWING) like elevators do use
lightmaps (as long as they're not also translucent or perturbed)...
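A minimal sketch of the single-pass idea (the uniform and varying
names are made up, this is not the actual GL3 shader code):

    static const char *fragmentSrc =
        "#version 150\n"
        "uniform sampler2D tex;\n"        /* the normal surface texture */
        "uniform sampler2D lightmap;\n"   /* the (static) lightmap */
        "in vec2 texCoord;\n"
        "in vec2 lmCoord;\n"
        "out vec4 outColor;\n"
        "void main() {\n"
        "    vec4 albedo = texture(tex, texCoord);\n"
        "    vec4 light  = texture(lightmap, lmCoord);\n"
        "    outColor = albedo * light;\n"   /* one pass, no GL_BLEND */
        "}\n";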
the arguments were not used anyway, and returning true/false is clearer
than returning -1 (for error) or something else (which has no deeper
meaning anyway).
Also:
* PrepareForWindow() can now return -1 if there's an error
* suppress some warnings in Makefile
* fix error for building ref_gl.dylib on OSX
So in all the reflib code (ref_gl.dll/.so/.dylib), calls to
ri.Con_Printf(print_level, fmt, ...) have been replaced by calls to
R_Printf(print_level, fmt, ...) which uses ri.Com_VPrintf().
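A sketch of the wrapper (the refimport_t layout and the exact
prototypes are assumptions):

    #include <stdarg.h>

    void
    R_Printf(int printLevel, const char *fmt, ...)
    {
        va_list argptr;

        va_start(argptr, fmt);
        ri.Com_VPrintf(printLevel, fmt, argptr);   /* forward to the client */
        va_end(argptr);
    }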
This is largely based upon the cl_async 1 mode from KMQuake2, which in
turn is based upon r1q2. The origins of this code may be even older...
Unlike KMQuake2, the asynchronous mode is not optional, the client is
always asynchronous. Since we're mainly integrating this rather
fundamental change to simplify the complex internal timing between
client, server and refresh, there's no point in keeping it optional.
The old cl_maxfps cvar controls the network frames. 30 frames should be
enough, even Q3A doesn't use more. The new gl_maxfps cvar controls the
render frames. It's set to 95 fps by default to avoid possible remnants
of the famous 125Hz bug.
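A rough sketch of the decoupling (the variable names and the exact
timing logic are illustrative, assuming the engine's usual cvar_t and
the existing CL_SendCommand() / SCR_UpdateScreen() entry points):

    extern cvar_t *cl_maxfps;   /* throttles the network frames, default 30 */
    extern cvar_t *gl_maxfps;   /* throttles the render frames, default 95 */

    void
    CL_Frame(int msec)
    {
        static int packetDelta, renderDelta;

        packetDelta += msec;
        renderDelta += msec;

        /* network frame: process input and send commands to the server */
        if (packetDelta >= 1000 / (int)cl_maxfps->value)
        {
            CL_SendCommand();
            packetDelta = 0;
        }

        /* render frame: advance the refresh and redraw the screen */
        if (renderDelta >= 1000 / (int)gl_maxfps->value)
        {
            SCR_UpdateScreen();
            renderDelta = 0;
        }
    }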
sounds easy, right?
Except some genius decided to save CVAR_LATCH cvars in buffers of
MAX_OSPATH length into savegames.. so just changing MAX_OSPATH
breaks savegame compatibility.
Fortunately, this assumption only matters in SV_WriteServerFile() and
SV_ReadServerFile() so I worked around it by introducing a
platform-specific LATCH_CVAR_SAVELENGTH (because MAX_OSPATH was 256 on
Windows but 128 on other systems..)
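A sketch of the workaround (the surrounding savegame code is assumed,
the per-platform values are from the text above):

    /* savegames store latched cvars in fixed-size buffers, so the length
       written to disk must stay what old savegames expect, even if
       MAX_OSPATH itself changes */
    #ifdef _WIN32
     #define LATCH_CVAR_SAVELENGTH 256   /* MAX_OSPATH was 256 on Windows */
    #else
     #define LATCH_CVAR_SAVELENGTH 128   /* ...and 128 on other systems */
    #endif

    /* in SV_WriteServerFile(), for every CVAR_LATCH cvar: */
    char name[LATCH_CVAR_SAVELENGTH];
    char string[LATCH_CVAR_SAVELENGTH];

    fwrite(name, 1, sizeof(name), f);
    fwrite(string, 1, sizeof(string), f);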
it's saved in $HOME/.yq2/$mod/history.txt
While I was at it, I made the max number of lines in the history
configurable at compile time by introducing a NUM_KEY_LINES #define
This is a less intrusive variant of the old Key_ClearState() function.
When the refresher is restarted or the menu is left, this function is
called to mark all keys as "up". That works around some corner cases
where a key is still marked "down" and thus the first stroke is detected
as a repetition.
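A minimal sketch of what that function boils down to (the function
name, array name and size are assumptions):

    #define MAX_KEYS 256            /* size of the key state array (assumed) */

    static int keydown[MAX_KEYS];   /* the "key is down" state */

    void
    Key_MarkAllUp(void)
    {
        int key;

        for (key = 0; key < MAX_KEYS; key++)
        {
            keydown[key] = 0;   /* mark the key as "up" */
        }
    }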
Until now Quake 2 used keysyms aka key events for everything, including
the console and the chat window. Since key events don't reflect if the
shift key is pressed, Quake 2 needed to convert the lower case chars to
upper case chars through a hardcoded table. That led to the problem
that the keyboard layout was utilised for lower case characters only.
Solve this long standing problem by refactoring both the input backend
and the frontend's Key_Event() function to use character events for
most of the input subsystem. Character events are generated by SDL and
carry the real character.
An example:
- On German keyboards shift and . is : but Quake 2 generated <.
- Now a character event with : is generated and used.
There are at least 3 disadvantages to this approach:
- The backend needs to tell the frontend if a normal character (ASCII
32 to 126) or a special character is sent. Only normal characters can
be treated as character events.
- There may be some differences between the binding of a key seen
through the console and seen by the game. If you have a German
keyboard and bind :, the game may not react to :. This can be worked
around by editing the config file.
- Users may need to rebind some keys.
Please note that Quake 2 can handle ASCII characters only!
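A rough sketch of the split between character and key events (the
function names and the Key_Event() signature are assumptions, only the
ASCII 32 to 126 rule is from the text above):

    #include <SDL2/SDL.h>

    extern void Char_Event(int key);                         /* illustrative */
    extern void Key_Event(int key, int down, int special);   /* signature assumed */

    static void
    IN_Update(void)
    {
        SDL_Event event;

        while (SDL_PollEvent(&event))
        {
            switch (event.type)
            {
                case SDL_TEXTINPUT:
                    /* normal characters (ASCII 32 to 126) arrive as
                       character events and carry the real character */
                    if (event.text.text[0] >= 32 && event.text.text[0] < 127)
                    {
                        Char_Event(event.text.text[0]);
                    }
                    break;

                case SDL_KEYDOWN:
                case SDL_KEYUP:
                    /* special keys still go through the classic key path */
                    Key_Event(event.key.keysym.sym,
                              (event.type == SDL_KEYDOWN), 1);
                    break;

                default:
                    break;
            }
        }
    }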
In the old times the refresher was a stand-alone DLL. For performance
reasons and to avoid laggy input, parts of the input system were
implemented in this DLL. Now that the refresher is part of the main
binary and initialized at client startup, we can remove most of the
abstractions between the input system, refresher and client. Also the
input system can be treated as a normal subsystem.
Changes:
- Untangle the VID_* stuff and the IN_* stuff. The functions
called by function pointers in in_state are now called directly
and 'struct in_state' was removed.
- Remove input.h and rename the appropriate backend functions.
There's no longer a need for an abstraction layer between the
input backend and the input frontend.
- Move input initialization and shutdown into CL_Init(), like it's
already done for all other subsystems.
- Remove Key_ClearStates(). I'm pretty sure that's a leftover from
the old Win 9x backends and unnecessary.
- General cleanup.
Input devices should send key events and nothing more. The ability to
add commands into the input buffer was used by the joystick code
(removed a long time ago) and as a dirty hack to work around limitations
of DirectInput.
This function is only used in cl_view.c, so there's no need for an
external declaration. Reimplement it in a sane way that is 64 bit clean
on all platforms. This allows us to finally remove the horrible INT
macro.