This shouldn't have any noticeable impact on timing (unless the machine
is way too slow for Quake II) and saves a lot of CPU cycles: 100% load
vs. 17% load on my desktop.
Having the server in its own timing zone seems to simplify things but
introduces slight timing discrepancies. The most visible effect is that
the game runs a little bit too fast, especially in the first cl_maxfps
frames.
Therefore: Remove timeframes, they're unnecessary. Track the time since
the last (client|server) frame instead and pass it to the client and
server when they're called.
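A minimal sketch of that idea (hedged: SV_Frame()/CL_Frame() taking a
microsecond delta is an assumption about the final API, and throttling
is omitted; Sys_Microseconds() is the timer introduced further down):

    /* Track the time since the last iteration and hand the real
     * elapsed time to server and client. */
    void Qcommon_Mainloop(void)
    {
        long long oldtime = Sys_Microseconds();

        while (1)
        {
            long long newtime = Sys_Microseconds();
            long long usec = newtime - oldtime; /* since last frame */
            oldtime = newtime;

            SV_Frame(usec); /* the server sees the real elapsed time... */
            CL_Frame(usec); /* ...and so does the client */
        }
    }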
This allows us to implement the global timing without an artificial
brake that slows the game down unnecessarily. This is only partially
working, more changes and fixes are coming.
This is a no-op for now. We need this to get a much higher precision
when calculating the frame times. This changes the fixedtime cvar from
milli- to microseconds.
This is the same as the client does for its realtime. It looks at least
somewhat more correct since it prevents rounding errors. And things are
simplified a little bit since the server timing is now independent of
the global timing.
The old frame counter had two problems:
* It measured only the time of the current render frame, not the total
time spent between the last and the current render frame. Therefore the
calculated value was too high.
* It was based upon milliseconds and rather inaccurate.
This new frame counter solves both problems. The total time spent
between two render frames is measured, and the measurement is done in
microseconds.
There're three modes:
* cl_drawfps 1 displays the average frame rate calculated over the last
60 frames.
* cl_drawfps 2 displays a nice string with minimum framerate, maximum
framerate and average framerate. All three values are calculated over
the last 60 frames.
* cl_drawfps 3 is the same as number 2 but with a second line showing the
raw values.
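A sketch of the bookkeeping behind these modes (an illustration, not
the verbatim yquake2 code):

    #define AVG_FRAMES 60

    static int frametimes[AVG_FRAMES]; /* microseconds per render frame */
    static int frameindex;

    void FrameCounter_Add(int usec)
    {
        frametimes[frameindex] = usec;
        frameindex = (frameindex + 1) % AVG_FRAMES;
    }

    void FrameCounter_Stats(float *minfps, float *maxfps, float *avgfps)
    {
        int shortest = frametimes[0], longest = frametimes[0];
        long long total = 0;

        for (int i = 0; i < AVG_FRAMES; i++)
        {
            if (frametimes[i] < shortest) shortest = frametimes[i];
            if (frametimes[i] > longest) longest = frametimes[i];
            total += frametimes[i];
        }

        if (shortest == 0 || total == 0)
        {
            *minfps = *maxfps = *avgfps = 0.0f; /* not enough samples yet */
            return;
        }

        *maxfps = 1000000.0f / shortest; /* shortest frame -> highest fps */
        *minfps = 1000000.0f / longest;  /* longest frame -> lowest fps */
        *avgfps = (1000000.0f * AVG_FRAMES) / total;
    }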
TODO:
* Discuss if cl_drawfps should be renamed to cl_showfps. All other
status displays are named cl_show*.
While at it, remove several unused drawing functions.
This is the same as the well known Sys_Milliseconds() but, like the
name suggests, with microsecond precision. To be used in the upcoming
new frame counter.
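A possible POSIX implementation, analogous to Sys_Milliseconds() (the
actual yquake2 code may differ in details):

    #include <time.h>

    long long Sys_Microseconds(void)
    {
        static long long base; /* first call returns 0, like Sys_Milliseconds() */
        struct timespec ts;

        /* CLOCK_MONOTONIC is immune to clock adjustments */
        clock_gettime(CLOCK_MONOTONIC, &ts);

        long long now = (long long)ts.tv_sec * 1000000ll + ts.tv_nsec / 1000;

        if (!base)
        {
            base = now;
        }

        return now - base;
    }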
For some fucking reason, if you set an unsupported
SDL_GL_MULTISAMPLESAMPLES value on Windows (at least Win10 with Intel
GPU drivers, where 16 is unsupported), creating the window and OpenGL
context will succeed, but you'll get Microsoft's stupid GDI OpenGL
software implementation that only supports OpenGL 1.1.
Before these fixes, the GL3 renderer would just crash and the GL1
renderer would fail to load, which caused the game to run in the
background: no window, no input, but sound was playing...
Now this problem should be handled properly: if initialization fails,
the rendering backend is considered not working, the game will try the
gl1 backend next, and if that also fails it'll give up and exit.
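The fallback chain in a nutshell (a sketch; VID_TryBackend() is an
illustrative name, Com_Printf()/Com_Error() are the usual Quake II
functions):

    static const char *backends[] = {"gl3", "gl1"};

    void VID_LoadRenderer(void)
    {
        for (size_t i = 0; i < sizeof(backends) / sizeof(backends[0]); i++)
        {
            /* load the library, create window and context, ... */
            if (VID_TryBackend(backends[i]))
            {
                return;
            }

            Com_Printf("Renderer '%s' is not working, trying next one\n",
                       backends[i]);
        }

        Com_Error(ERR_FATAL, "No working renderer found, giving up\n");
    }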
Until now the video menu enforced either:
* fov set to 90 and horplus set to 1, or
* fov set to something other than 90 and horplus set to 0.
If the user had configured a different combination through the console,
the menu would reset it, even if only unrelated changes were applied.
With this change horplus is ignored by the menu and only fov is
altered. The rationale behind this is that most users want horplus
enabled and all others can disable it through the console.
This is believed to fix issue #225.
While here, reimplement the same hack for baseq2/players, lost
somewhere along the way. This is just another searchpath f*ckup. For
some reason paks have a higher priority than plain directories. We do
not want that for maps.lst and players/, since id Software decided to
put updated versions of them directly into baseq2/...
This closes issue #217.
SDL_WINDOW_FULLSCREEN changes the display resolution if the requested
resolution differs from the actual resolution.
SDL_WINDOW_FULLSCREEN_DESKTOP doesn't do that; it places a smaller or
bigger render area somewhere inside the fullscreen area. This is
somewhat nicer with modern high resolution flatscreens.
This commit changes vid_fullscreen 1 from SDL_WINDOW_FULLSCREEN to
SDL_WINDOW_FULLSCREEN_DESKTOP. Additionally vid_fullscreen 2 is
implemented; it uses SDL_WINDOW_FULLSCREEN to create the fullscreen
area.
TL;DR: Use vid_fullscreen 1 to keep the current resolution or use
vid_fullscreen 2 to switch the resolution.
Implementation details: The whole fullscreen stuff is a horrible mess.
Like generations of hackers before me I'm not desperate enough to clean
it up. GLimp_InitGraphics() is modified to take the fullscreen mode as
an integer and not as a boolean. That's a change to the renderer API.
In GLimp_InitGraphics() the needed SDL fullscreen mode flag is
determined once at the top and just used further down below, as
sketched below. That saves some SDL1 <-> SDL2 compatibility cruft.
IsFullscreen() was modified to return the actual fullscreen mode and
not just whether fullscreen is enabled.
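The flag selection at the top of GLimp_InitGraphics(), as a simplified
fragment under the new integer semantics:

    int fullscreen = (int)vid_fullscreen->value;
    Uint32 fsflag = 0;

    if (fullscreen == 1)
    {
        fsflag = SDL_WINDOW_FULLSCREEN_DESKTOP; /* keep desktop resolution */
    }
    else if (fullscreen == 2)
    {
        fsflag = SDL_WINDOW_FULLSCREEN; /* switch the display resolution */
    }

    /* fsflag is later OR'ed into the SDL_CreateWindow() flags */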
Several platforms - OpenBSD being a prominent example - don't provide a
way to get the executable path. Don't abort in that case, just return
the current dir ./ as the executable dir. This is just a workaround, of
course. The user needs to supply a script that calls ./quake2 in the
correct directory.
The big problem with the old implementation was that stdout.txt and
stderr.txt on Windows only became available when nearly all the low
level initialization was already done, regardless of whether the client
was in normal or in portable mode.
Solve this by scanning the command line for the string '-portable'. If
it's not found, stdout and stderr are redirected as early as possible.
If it's found, the global variable (*sigh*) is_portable is set to true.
It's evaluated later on to set the cvar 'portable', which in turn is
used by the filesystem to decide if the home directory should be added
to the search path.
Maybe we should remove the cvar and stick to the global variable.
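The early scan itself needs nothing but the raw command line; a sketch
(the function name is illustrative):

    #include <stdio.h>
    #include <string.h>

    int is_portable; /* evaluated later to set the 'portable' cvar */

    void Sys_RedirectStdout(int argc, char **argv)
    {
        for (int i = 1; i < argc; i++)
        {
            if (strcmp(argv[i], "-portable") == 0)
            {
                is_portable = 1;
                return; /* portable: keep stdout / stderr on the console */
            }
        }

        /* normal mode: redirect as early as possible */
        freopen("stdout.txt", "w", stdout);
        freopen("stderr.txt", "w", stderr);
    }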
While at it, change the maximum path length for qconsole.log from
MAX_QPATH to MAX_OSPATH. At least on my Linux laptop MAX_QPATH is
too short.
This commit is still untested on Windows!
A new linked list fs_rawPath with nodes of type fsRawPath_t is added.
The new function FS_BuildRawPath() fills it at filesystem initialization
with the raw search path directories. Later FS_BuildGenericSearchPath()
and FS_BuildGameSpecificSearchPath() use it to derive the actual search
directories.
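Roughly, the new list looks like this (a sketch, not the verbatim
declaration from filesystem.c):

    typedef struct fsRawPath_s {
        char path[MAX_OSPATH];    /* e.g. binary dir, basedir, home dir */
        qboolean create;          /* create the directory if it's missing */
        struct fsRawPath_s *next;
    } fsRawPath_t;

    static fsRawPath_t *fs_rawPath;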
Remove all functions that are no longer used:
* FS_AddGameDirectory()
* FS_SetGameDir()
* FS_AddHomeAsGameDirectory()
* FS_AddBinaryDirAsGameDirectory()
While at it, try to remove as many global variables from filesystem.c
as possible. Also fix a small, longstanding bug: The download code
should treat .zip and .pk3 files as pak files and not as normal
directories.
Refactor FS_SetGamedir() into FS_BuildGameSpecificSearchPath(). The new
function removes the specialized part of the search path if necessary
and creates a new specialized part based upon the given directory. It
uses the FS_AddDirToSearchPath() function added in the last commit.
This moves the code used to add a directory and its paks to the search
path into one well defined function, FS_AddDirToSearchPath(). Also
create a new function FS_BuildGenericSearchPath() that builds the
generic part of the global search path. This obsoletes several other,
specialized functions. They'll be removed in a later commit.
This prevents Windows from scaling our (fullscreen) window to crap if
the whole desktop is scaled and we're rendering more than 1080p. This
is believed to fix #208.
If that cvar is set to 1, particles aren't rendered as nice circles,
but as squares, like in the software renderer or in Quake 1.
Also documented it in cvarlist.md and fixed some typos there.
The model shadows are rendered after all entities are rendered.
This fixes shadows making entity brushes below them translucent (#194).
The model rendering code used lots of global variables, many of them
totally superfluous (esp. currententity, currentmodel).
I refactored the code to use fewer global variables (this was at least
partly needed to render the shadows later).
So this looks like lots of changes, but many of them are just using
"entity" instead of "currententity" or "model" instead of
"currentmodel".
Like GL1 gl_shadows + gl_stencilshadows: no shadow volumes, but looks
OK apart from standing over edges.
The gl_stencilshadows cvar isn't used in GL3; it always uses the
stencil buffer if available (and if gl_shadows != 0).
This still needs performance optimizations: Like the GL1
implementation it takes lots of draw calls per model; it could be done
with one per model, like when rendering the actual model.
There have been complaints that those things look too bright, so let
people configure their intensity independently of the general intensity
used for levels, monsters etc.
Fixes #189.
When applied to y, SURF_FLOWING textures are scrolled in the wrong
direction. I guess that in GL1 the offset is also applied to x.
This fixes issue #186.
Sometimes cinematics are skipped after the first frame even if the
player didn't press any key. I'm unable to reliably reproduce this,
so my educated guess is that one or more events are still waiting in
SDL's event queue.
For example, during intermission IN_Update() is not called for 5
seconds; key presses by impatient players are just added to the queue
and not processed. The first event is used to leave the intermission,
the second event skips the cinematic...
Fix this by implementing a new function IN_FlushQueue() to flush SDL's
event queue and calling it when starting cinematic playback. Yes, this
is just another layer violation. :(
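With SDL2 the function is nearly trivial; a minimal sketch:

    #include <SDL2/SDL.h>

    void IN_FlushQueue(void)
    {
        SDL_PumpEvents(); /* move pending OS events into SDL's queue */
        SDL_FlushEvents(SDL_FIRSTEVENT, SDL_LASTEVENT); /* ...and drop them */
    }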
When the client is paused (either explicitly or by entering the menu or
console) the cinematic is paused, too. Therefore no more sound samples
are generated and added to the playback queue, and the existing samples
are played over and over again. Until now these samples weren't
audible, because OpenAL marked them as processed and AL_StreamUpdate()
removed them from OpenAL's playback queues. This changed in the
previous commit: now they stay in OpenAL's queue and are audible.
Fix this by calling AL_UnqueueRawSamples() when the menu or console is
entered during cinematic playback.
Newer openal-soft versions changed the way processed buffers are
counted when in AL_STOPPED state. Previously only processed buffers
were counted, now all buffers are. Change our unqueue logic to match
this new behavior.
This was debugged and fixed by @xorw, I'm just committing the patch.
This closes issue #185.
For some reason setting the MSAA fails at window creation and not at
GL context creation. And of course SDL is unable to detect beforehand
that the requested number of MSAA samples is invalid... Implement a
workaround: Fall back to gl_msaa_samples == 0 if the window cannot be
created.
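A sketch of the workaround (window, x, y, w, h and flags stand in for
the surrounding GLimp_InitGraphics() state):

    window = SDL_CreateWindow("Yamagi Quake II", x, y, w, h, flags);

    if (!window && gl_msaa_samples->value)
    {
        Com_Printf("MSAA is unsupported: %s\n", SDL_GetError());

        Cvar_SetValue("gl_msaa_samples", 0);
        SDL_GL_SetAttribute(SDL_GL_MULTISAMPLEBUFFERS, 0);
        SDL_GL_SetAttribute(SDL_GL_MULTISAMPLESAMPLES, 0);

        window = SDL_CreateWindow("Yamagi Quake II", x, y, w, h, flags);
    }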
Resurrect support for render / refresher loadable libraries and use them to implement an experimental OpenGL 3.2 renderer. Please note that the new renderer interface is somewhat different from the original one, old render libraries will NOT work!
There's a target_secret (with targetname "t117"), but no one triggers
it - that's why the help computer shows four secrets, but you can only
get three of them.
Now when you open the door in front of the hidden secret armor
(by shooting it), it'll trigger the target_secret and you can get all
four secrets.
Fixes #182.
The internal order of the items is determined by Menu_AddItem() and
not by the y position. Without this change the cursor didn't jump from
item to item, but from the mode list box to the aspect list box,
skipping the brightness slider.
- Bump vid_gamma to 1.2 in both GL1 and GL3. A default value of 1.0 is
too dark.
- Lower gl3_overbrightbits to 1.3, the previous value of 1.5 was too
bright. This can be seen in later units, for example on mine1 some
textures blended into white.
- Lower gl3_particle_size to 40. A value of 60 may be okay, but with
gl3_particle_fade_factor 1.2 the particles take up too much screen
real estate in close range combat.
With these changes GL3 looks (at least for me) nearly the same as GL1
rendered through the removed multitexturing path.
Without capping the brightness, entity models may fade into pure white,
which looks ugly. This can be seen when several flyers fire blaster
bolts onto the player or when multiple barrels are exploding. This
change was suggested by @DanielGibson, I'm just the messenger.
I really don't see why this constraint was ever necessary. It leads to
one line of pixels not being rendered either at the bottom or the right
edge of the screen. In GL1, for whatever reason, this line is just
black; in GL3 garbage is drawn.
In SCR_DrawFieldScaled() the HUD scale factor wasn't taken into account
when calculating the screen area affected by the change. Therefore
wrong coordinates were passed to SCR_AddDirtyPoint() and a part of the
changed area wasn't marked dirty, leading to render artifacts.
This bug had been present since HUD scaling was first introduced.
The flash should only be drawn in the part of the window where actual
3D rendering happens, not in the borders added if viewsize < 100
(and apparently also a column of 1 pixel width if the resolution is
odd).
* gl3_particle_size: in GL3 the particles should be a bit bigger
because the particles fade out towards the edge, so I put it in a
separate CVar
* gl3_intensity: in GL3 the intensity can have any floating point
value, in GL1 only integers, so it gets its own CVar
* gl3_overbrightbits: gl_overbrightbits had to be 1, 2 or 4; in GL3 it
can have any floating point value.
Changed the particle scaling a bit so they look bigger.
One problem was that GL3_Shutdown() called several functions that use
the gl* function pointers - not a good idea if InitContext() failed
and the function pointers are all NULL. So check for that.
Similarly in GL3/R_ShutdownWindow() calling glClear() etc.
Another problem was that R_SetMode() would, if R_SetMode_impl() failed,
try again with a "safe" resolution (640x480, unless we had another
working resolution before) - which is bad if we're already using that
"safe" resolution, because then GLimp_InitGraphics() would check mode
and fullscreen, decide that nothing has changed, do nothing and return
true, which would make SetMode() believe everything is fine; afterwards
all hell breaks loose.
This makes the fragment shader faster by skipping lights that haven't
marked this surface in GL3_MarkLights().
This seems to improve performance at least slightly everywhere, but
it really helps *a lot* on integrated Intel GPUs like the ones on their
Sandy Bridge, Ivy Bridge and Haswell CPUs (those are the ones we
tested).
Adding dot(surfaceNormal, lightToPixelOnSurfaceNormal) to the equation;
should be Phong-y now? Looks good at least.
The Windows AMD legacy driver needed its usual manual padding...
OSX was totally weird: there were no errors or warnings from OpenGL
at all, but the dynamic lights were just not visible.
After (too long) debugging the shader I figured out that
dynLights[i].lightIntensity was always 1, and thus
'dynLights[i].lightIntensity - distLightToPos - 64' was negative and
set to 0 with max(0, ...).
I still have no idea why that happens, but removing lightIntensity from
the struct, making lightColor a vec4 and using .a for intensity
works...
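The resulting layout, as a C-side sketch (std140 rules; the names are
illustrative, not the exact declarations from the renderer):

    typedef struct {
        GLfloat origin[3];     /* light position */
        GLfloat _pad;          /* std140 pads a vec3 to 16 bytes anyway */
        GLfloat lightColor[4]; /* .rgb = color, .a = intensity */
    } gl3UniDynLight;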
Dynamic lights on normal world brushes work; on brush-based entities
they probably don't work properly yet. For this I need the model matrix
in the shader to transform vertex positions and normals to worldspace
(they already are for world brushes, but not for entities that might
rotate and move etc).
Furthermore, while the dynamic lights look nice and smooth, they might
need some fine tuning in the shader...
For this to work there are two bigger changes:
* the vertex data for brushes (gl3_3D_vtx_t) now also contains the
vertex normal
- glpoly_t contains an array of gl3_3D_vtx_t instead of 7 floats
* 3D shaders now have in vec3 normal, bound to GL3_ATTRIB_NORMAL
* There's a new UBO for light data: uniLights, containing an array of
up to 32 dynamic lights, with data copied from gl3_newrefdef.dlights
...with a Radeon 6950 using AMD's legacy driver.
For uploading UBOs it turned out that glBufferData() is faster,
sometimes a lot faster, with several drivers, especially Intel/OSX.
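What such an upload helper can look like (updateUBO() is the name used
in a later entry; the signature here is an assumption):

    static void updateUBO(GLuint ubo, GLsizeiptr size, const void *data)
    {
        glBindBuffer(GL_UNIFORM_BUFFER, ubo);

        /* recreating the buffer storage ("orphaning") avoids
         * synchronizing on the old contents */
        glBufferData(GL_UNIFORM_BUFFER, size, data, GL_DYNAMIC_DRAW);
    }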
If the lightmap textures are 1024x512 instead of 128x128, all original
Q2 levels will only need one lightmap texture (instead of up to 26 or
so) and even maps that needed all 127 (the 128th was the dynamic
lightmap) won't need more than 4.
This should result in fewer glBindTexture() calls.
(Note: When I wrote "1 lightmap texture" I meant 4, because where the
old renderer dynamically blended the up to 4 lightmaps/surface, I put
them in 4 textures that belong together and are all passed to and
blended in the fragment shader.)
Not sure if this is the very best solution...
Every surface can have up to 4 lightmaps.
I now always create 4 lightmaps (in separate textures, so the
corresponding texture coordinates are identical); the "fillers" are
set to 0, so in the shader they won't make a visible difference.
(The shader always adds up the lightmaps from 4 textures, but how much
they're actually visible depends on lmScales, which will also be set
to 0 if "unused".)
If all this turns out to be (too) slow, there could be a special case
for surfaces with only one lightmap; I /think/ that's the most common
case by far.
Adjusted to the new GL3 stuff, of course.
Also, more code for light style/lightmap scale support.
Should probably (maybe) work once we really have 4 textures
per lightmap id.
(And then of course dynamic lights are still missing.)
To be able to test if the game is running portable, all checks of the
portable cvar must be done after Cvar_Init(). Instead of redirecting
stdout and stderr as early as possible, delay the redirection until
right after Cvar_Init(). After this change the printf()s in WinMain()
aren't printed into stdout.txt, but I guess that isn't a big problem.
All interesting stuff like the search paths is still there.
Rename fs_portable to portable. It's no longer filesystem specific.
Normally Q2 writes all persistent data (the configurations, saves,
etc.) into a subdirectory in the user's $HOME. That can be a problem
when the game is installed onto a thumb drive or something like that.
Therefore provide a cvar fs_portable. When set to 1 the game uses its
binary dir as its persistent storage location.
Examples:
./quake2 +set fs_portable 1
./quake2 +set basedir ~/games/quake2 +set fs_portable 1
fs_portable is _not_ saved into the config file. It must be set at
every start!
This closes issue #158.
Some people complained about the usage of non-busy waits:
* I was told that there's an input lag with nanosleep(). I still doubt
that, but since the problem is easy to solve...
* Some Intel CPUs throttle the GPU if the selected CPU pstate is too
low. This is especially a problem on Haswell mobile CPUs. Keeping
a core busy works around that.
The screenshot command now supports the filetype as an optional
argument (just "screenshot" will use tga like before):
"screenshot png" will save the screenshot as PNG, same with jpg, png
and tga.
For jpg, you can even specify the quality, like "screenshot jpg 90"
(the quality is between 1 and 100, like with libjpeg).
To reduce duplicated code, I added Vid_WriteScreenshot() to refimport_t
and implemented most of it in the client (vid.c).
The renderer still fetches the raw image data from OpenGL or whatever
and then calls ri.Vid_WriteScreenshot(), which will write it to disk in
the format requested by the user.
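The division of labor, sketched (the exact signature of the import is
an assumption):

    void GL3_ScreenShot(void)
    {
        int w = vid.width, h = vid.height;
        unsigned char *buffer = malloc(w * h * 3);

        if (!buffer)
        {
            return;
        }

        /* the renderer only grabs the raw RGB framebuffer contents... */
        glPixelStorei(GL_PACK_ALIGNMENT, 1);
        glReadPixels(0, 0, w, h, GL_RGB, GL_UNSIGNED_BYTE, buffer);

        /* ...and the client writes them out in the requested format */
        ri.Vid_WriteScreenshot(w, h, 3, buffer);

        free(buffer);
    }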
Pass both the normal texture and the lightmap to the shader instead of
rendering the level geometry again with the lightmap and GL_BLEND.
This is not done yet; some translucent surfaces are buggy now and it's
only static lightmaps. For this to work properly I'll need to add some
more shaders with and without lightmaps and use them accordingly.
For example, translucent surfaces (SURF_TRANS33/66) never have
lightmaps, and neither do perturbed ones (SURF_DRAWTURB) like water and
lava, but scrolling surfaces (SURF_FLOWING) like elevators do use
lightmaps (as long as they're not also translucent or perturbed)...
Well, seems to work. But once the lightmaps are rendered with the
normal textured faces, maybe the dynamic part can be done in the
shader? (Might even look less blocky, because it's not limited to the
lightmap resolution then.)
That struct can/should be used with gl3state.vao3D, which expects data
as 7 floats (x,y,z, s,t, lms,lmt) - this is used for brushes, sprites,
the sky and more (not for models though).
(For rendering brushes the struct isn't used; the data already is in
that format in float arrays.)
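The 7-float layout as a struct, sketched (the real gl3_3D_vtx_t may
differ in details):

    typedef struct {
        GLfloat vtx[3];        /* x, y, z */
        GLfloat texCoord[2];   /* s, t */
        GLfloat lmTexCoord[2]; /* lms, lmt - lightmap coordinates */
    } gl3_3D_vtx_t;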
Beams are now rendered; they are used by the BFG and in some levels for
lasers etc.
...which just called R_PolyBlend(). No idea what that all was about,
and no idea why it happened in 3D mode; it's much easier after
SetGL2D(), then it can share code with GL3_Draw_FadeScreen() and
doesn't need any additional messing with transformation matrices.
* The particles look more fuzzy than in the old renderer - I think it
looks better this way ;)
* Not sure I'll keep the way they're rendered - instead of calculating
and passing the distance in GL3_ShutdownShaders() I could set the
player (camera) origin in a UBO and calculate the distance (and, based
on that, the size) in the vertex shader. I could also pass the basic
point size via UBO, it's the same for all particles...
* Deleting shader programs is a lot shorter now, using a loop and
the fact that consecutive fields of the same type in a struct have
the same memory layout as an array of that type.
Now I can just add a gl3ShaderInfo_t to gl3state, set it up in
GL3_InitShaders() and don't have to add anything to
GL3_ShutdownShaders() (this is good, I forgot that all the time and
didn't notice, as it doesn't cause visible errors)
* Of course the color attribute has 4 floats, not 2...
* I read that updating UBOs with glBufferData() is kinda slow.
I didn't change that (yet), but at least all three GL3_UpdateUBO*()
functions now call updateUBO(), which can easily be changed to do
whatever is best without touching the other three functions.
* When using gl_pointparameters, the particles always had the same size
regardless of resolution, i.e. they looked bigger (used a bigger part
of the screen) at lower resolutions. Now I scale gl_particle_size
according to the resolution, assuming the configured size looks good
at 800x600 (or generally 600px vertical).
* When not using gl_pointparameters, a textured triangle is rendered.
The texture had a resolution of 8x8 pixels and looked like a cross;
now it's 16x16 and has rounded edges, looking more like a circle.
So particles with "gl_pointparameters 0" should look much better now.
Turns out R_MYgluPerspective() was not the same as gluPerspective()
and thus not equivalent to HMM_Perspective() either. Because of this,
the weapon and the corresponding arm looked different in GL3 vs GL1.
Created GL3_MYgluPerspective() to fix that.
Also tested the optimized code in GL3_RotateForEntity() and
rotAroundAxisZYX(); use this code from now on, and cleaned it all up
by removing commented out code.
That was easy...
However - not related to this change or left vs right hand - the gun
seems to be drawn too far back, we should see more of the arm...
I wonder where that went wrong...
Introducing vertex color attributes GL3_ATTRIB_COLOR (it's used for
lighting models and to render models flat-colored; I think that's used
for the quad damage effect and similar).
Kind of a messy commit with all the shit from last weekend, finished
now.
Most importantly the common vertex attribute layout stuff using
glBindAttribLocation().
Still no 3D rendering, but in theory it should be able to load models,
bsps etc., just not render them yet.
Also moved/copied md2.c and sp2.c to gl/ and gl3/ because they use
renderer-specific types.
Only for 2D rendering, as we don't have 3D yet; also, this might need
a more flexible solution later, as some textures are not supposed to
have intensity applied.
According to the old R_Upload32*() and R_LightScaleTexture(), the ones
without mipmaps didn't get intensity. Those were it_pic and it_sky (and
the ones in the "scrap", but those were it_pic too).
When called from R_FindImage() the palette wasn't used anyway.
R_FindImage() now passes NULL.
(LoadPCX() is still called with a non-NULL palette from
Draw_GetPalette().)
The arguments were not used anyway, and returning true/false is clearer
than returning -1 (for error) or something else (which has no deeper
meaning anyway).
Also:
* PrepareForWindow() can now return -1 if there's an error
* suppress some warnings in Makefile
* fix error when building ref_gl.dylib on OSX
So in all code in the reflib (ref_gl.dll/.so/.dylib), calls to
ri.Con_Printf(print_level, fmt, ...) have been replaced by calls to
R_Printf(print_level, fmt, ...), which uses ri.Com_VPrintf().
Somehow all the printf()-like things in Q2 wrap each other, and each
one prints into a buffer and then calls the next one with ("%s", buf).
That's not very clever and kinda annoying.
As in the end everyone calls Com_Printf(), I created Com_VPrintf(),
which can be called instead with the va_list.
I also added printf-format annotations to Com_Printf() and
Com_DPrintf() and fixed places where Com_Printf() was called with the
wrong type.
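The idea in a nutshell (a sketch; the real functions also take a print
level and route the buffer to the console, logfile etc.):

    #include <stdarg.h>
    #include <stdio.h>

    void Com_VPrintf(const char *fmt, va_list argptr)
    {
        char msg[4096];

        vsnprintf(msg, sizeof(msg), fmt, argptr);
        /* ... hand msg to the console, logfile, ... */
    }

    void Com_Printf(const char *fmt, ...)
    {
        va_list argptr;

        va_start(argptr, fmt);
        Com_VPrintf(fmt, argptr);
        va_end(argptr);
    }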
...and the changes in the including files for this.
(Also removed the gl.h includes from local.h, as gl.h is already
included in qgl.h, which is included by local.h.)
When the monster was already killed by another monster or a coop
player, some references may be NULL and the game crashed. This was
observed by maraakate, who reported it in issue #164. I've just merged
his fix from q2dos.
Repeat 10 times:
- strcat() is evil
- strcat() is evil
...
While here fix another small inconsistency: Vorbis playback should
stop when switching 'shuffle' on.
The overflow was reported by @tomgreen66 in pull request #168.
teleport_time has nothing to do with teleports; it's just the time
since the last player sound. Rename it accordingly. This was suggested
by maraakate in issue #162.
In ai_checkattack() there is a check against AI_SOUND_TARGET. If the
player made a noise and the monster noticed this noise, it's true. If
that noise was more than 5 seconds ago, the monster forgets that event
and continues with its search for the player. Otherwise it informs
the surrounding monsters that something interesting has happened and
then returns false. So the problem is: Even if the monster heard the
player and can see him, it aborts at this point.
Fix this by adding an additional visibility check. Do the sound
checking only if the player is not visible, otherwise just continue.
This was reported by shoober420 and debugged by maraakate. This fix
was DanielGibson's idea. This commit fixes issue #162.
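The gist of the changed condition, simplified (the helper name is
illustrative; AI_SOUND_TARGET and visible() are from the game source):

    #include "g_local.h"

    /* Only run the sound-target handling when the enemy is actually
     * out of sight; a visible enemy falls through to the normal
     * attack checks. */
    static qboolean heard_but_not_seen(edict_t *self)
    {
        return (self->monsterinfo.aiflags & AI_SOUND_TARGET) &&
               !visible(self, self->enemy);
    }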
I don't think that this has any visible effect, but it's safer than
assuming that in multiplayer all clients enter the intermission at the
same time. This was reported by maraakate in yquake2 issue #160.
Until now autoexec.cfg was a special case. It was read several
times, whenever the 'game' cvar was altered or the client was
restarted - but only if it was in the right directory at the right
position of the internal search path... Remove this altogether and
replace it by an ordinary 'exec autoexec.cfg' at startup.
This may break some mods that depend on an autoexec.cfg if the user has
his own version in ~/.yq2/. Such mods should use default.cfg instead.
This closes issue #163.
Multitexturing was never part of any official Quake II release. It was
added in version 3.21, which was released only in source. Over the
years many developers tried to fix multitexturing, including myself.
Yamagi Quake II even had it enabled by default for several releases...
But:
* Multitexturing is poorly implemented and **slow**
* Multitexturing leads to render errors, for example in city3
* Multitexturing is orthogonal to the normal render path and adds a
lot of special cases to the renderer
Remove it for good. Ciao, it wasn't a nice time. :) The last version
before this commit was at least somewhat fixed, read: some of the worst
problems were fixed. If someone's ever going to resurrect it, it would
be a good idea to start at that point.
In baseq2 there's no need to force a certain damage texture on gunners
since there's only one. Also gunners can't wear the powershield, so
there's no need to turn it off.
This was noticed by Maraakate.
Otherwise at least one key may still be marked as down, causing an
immediate abort of playback. While here, be a little bit paranoid
and clean up the key states when focus is gained. In theory that's a
no-op.
When the video is scaled through OpenGL we can just throw it on our
vertexes and everything's alright. But when we're softscaling the video
we must consider that the video's size doesn't really match the vertex
size...
Especially in the intermission videos, the text looked broken, as parts
of the characters were missing.
This is because Draw_StretchRaw() converts the 320x240 video frame into
a 256x256 texture, without doing proper interpolation (just skipping
some pixels instead).
Now, if the GPU supports non-power-of-two texture sizes, the video
frames are uploaded as textures in their original size.
(Also fixed a harmless typo in common.h.)
This crash was found by DanielGibson, he even guessed the right fix
without having a usable coredump. ;) In boss1.bsp Macron is waiting for
the player, despawning as soon as the player moves to him. After that
the player needs to press 2 buttons, each button triggers 3 flyers. If
the player is fast enough, the first bunch of flyers may spawn before
Macron is despawned. Now there's a small chance that a flyer decides to
attack Macron... If Macron despawns at the next frame, self->enemy is
set to NULL (Macron is gone) but flyer_fire() is called nevertheless.
The correct fix would be to call flyer_fire() before Macron despawns,
but that's hard to impossible. So take the easy route and check if
self->enemy is not NULL.
This bug was "fixed" by id by removing two lines in the ground entity
check. When cleaning up the game I added them back... I don't know if
it's really correct to just remove them, but let's try it. This fixes
issue #157.
The old implementation had two problems:
* OSTYPE and ARCH are systemwide defines; overriding them may break
the global libc headers. This is a theoretical problem, I've never
seen it in practice.
* Not all systems set ARCH correctly when building in a chroot env.
For example on Linux ARCH is set to x86_64 when building in an
i386 chroot. Now the user can do something like "make YQ2ARCH=i386"
to get things right.
In the lower mines you can drop a rock onto a tank by using a lever.
This was broken; the rock exploded instead of dropping, because debris
is spawned and that debris was solid.
Now the debris is not solid anymore (like it was up to b4d16ab6b3
"Some additions to last commit:") and it works again.
The original client crashed (or survived by pure luck) when muzzle
flash offsets >210 were sent. Our fix was to bail out, but that broke
some buggy mods... So just return and print an optional debug message.
This fixes issue #153.
This step height is used by several elevators, leading to stuttering
due to mispredictions. Additionally, half-height steps weren't smoothed
by the synchronous client.
Predicting these steps leads to a heavily stuttering elevator in
hangar1. I think that steps between 94 and 98 can't appear on stairs,
so just remove them. If I'm wrong we need a hack to the hack on top of
the hack... ^^
This fixes #119. As always I've chosen the least invasive way to solve
this problem. Trying to open players/$model/tris.md2 is a hack, but it
solves the problem without changes to filesystem.c and its API.
With vsync enabled the render times of consecutive frames can diverge.
The first frame arrives right at the next display frame and is rendered
without waiting time. The next frame has to wait 20ms. This leads to
some problems with the move prediction if the client is asynchronous.
Fix this by capping the desired frame rate at the display refresh rate
(see the sketch after the list below). Also make sure that the network
framerate is never higher than the render framerate.
With this commit the timing is always correct:
* With no limit, as many frames as possible are rendered. In this case
rfps > nfps and everything's good.
* With vsync enabled, rfps > nfps or rfps == nfps holds. Also rfps
will never exceed the display refresh rate.
* On slow hardware either rfps > nfps or an implicit rfps == nfps is
given.
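The capping, sketched (GetRefreshRate() and vsync_active stand in for
however the client learns about the display and vsync):

    int rfps = (int)gl_maxfps->value;
    int refresh = GetRefreshRate();

    if (vsync_active && refresh > 0 && rfps > refresh)
    {
        rfps = refresh; /* never render faster than the display shows */
    }

    int nfps = (int)cl_maxfps->value;

    if (nfps > rfps)
    {
        nfps = rfps; /* network framerate never above render framerate */
    }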
While the bugfix in a6f4a3b made the stepping prediction work for
stairs, elevators are still stuttering. r1q2's code solves this problem
and is a little bit faster. Use it instead.
Yesterday I chose setting cl_async to 0 since I saw some movement
changes with the async client enabled. Especially when clipping against
bevels the game started to stutter and there were small rendering
problems. After some debugging I realized that this is caused by slight
inaccuracies in the move prediction. When cl_maxfps is too low, the
movement error between two render frames becomes too big, leading to
mispositions. There are two ways to solve this problem:
* Processing more client frames. Most async clients I've looked at
process 60 or even 90 render frames. I chose to stay at 60 since
I was unable to see differences with higher rates.
* Changes to pmove.c and the pmove_t struct. Some multiplayer focused
clients go that way. But there's a very high risk of breaking
singleplayer movement, and pmove_t is part of the server <-> game API.
Additionally the network code must / should be altered. So this is
unsuitable for YQ2.
Please note that there's still a change in movement. Before 4ae8706 and
when cl_async is set to 0, movement is dependent on the render
framerate. At low framerates bevel clipping doesn't work too well; at
high framerates prediction causes physics changes like the famous 125hz
bug. With cl_async set to 1 the network framerate is stable, leading to
a more consistent behaviour.
Most (all?) clients implement the synchronous and the asynchronous
client by separate code paths. Instead of doing that, we force the
asynchronous path to process one network frame for each render frame.
We're misusing the current frame to pass data from the input subsystem
to the movement prediction without a server frame. While we can use the
current frame for the movement itself, it's not finished and thus
unsuitable for stair step prediction. Also oldframe is determined
wrongly. As a result the player "jumps" over stairs.
This is a slightly revised version of id Software's original code.
Icculus code may have some advantages on broken drivers or underpowered
GPUs. Today it's just a performance hog. This is a first step in
fixing #147.
This is more than enough for everyone and prevents wasting CPU time.
Without this change as many client frames as possible are rendered and
Quake II saturates a complete core.
This is based on work submitted by Scott "pickle" Smith. It's said that
vertex arrays are somewhat faster and more compatible than the old way.
This may remove support for some very, very old GPUs like the Riva128.
This is more or less cosmetic, since gl_tex_solid_format == GL_RGB and
GL_LIGHTMAP_FORMAT == GL_RGBA. No measurable FPS change on Nvidia and
Intel. Based upon the OpenGL ES patch by Scott "pickle" Smith.
This is based upon the original OpenGL ES patch by Scott "pickle"
Smith. This change gives about the same frame rate on a 750TI but
about 3% more frames on an Ivy Bridge IGP with Mesa3D...
This is largely based upon the cl_async 1 mode from KMQuake2, which in
turn is based upon r1q2. The origins of this code may be even older...
Different from KMQuake2, the asynchronous mode is not optional; the
client is always asynchronous. Since we're mainly integrating this
rather fundamental change to simplify the complex internal timing
between client, server and refresh, there's no point in keeping it
optional.
The old cl_maxfps cvar controls the network frames. 30 frames should be
enough, even Q3A doesn't have more. The new gl_maxfps cvar controls the
render frames. It's set to 95 fps by default to avoid possible remnants
of the famous 125hz bug.
I'm not quite sure if this really makes a difference. But it's the only
idea I have regarding several "Quake II hangs at shutdown when OpenAL is
run with Pulseaudio backend" bugs.
The game code does not include common.h, so it needs to redo this
part for builds without SOURCE_DATE_EPOCH, where BUILD_DATE will
not have been passed in from the outside.
Signed-off-by: Simon McVittie <smcv@debian.org>
In Linux distributions, having the executable depend on the right
libraries and arrange for them to be installed is straightforward,
and there's a lot of infrastructure for tracking which library
version a particular executable needs, including making sure we have
a version that contains all of the symbols that were used. Loading
libopenal at runtime defeats that infrastructure.
The ability to substitute a different-but-compatible libopenal,
or operate with reduced functionality without libopenal, might
still be desirable for generic/portable binary releases.
The CMake build system already linked the executable to
${OPENAL_LIBRARY} anyway, so it is already a hard dependency in that
build system.
For deterministic/reproducible builds (where the same source and
toolchain can be verified to produce the same binary, allowing
maliciously substituted binaries to be detected) it is desirable to
take the software's idea of the build date from the build system;
otherwise, the real-time clock at the time of building affects the
result, making it non-reproducible.
SOURCE_DATE_EPOCH is a distribution-neutral specification for how
to do that. It is meant to be set by meta-build systems such as
dpkg or RPM, using a date/time that is already part of the source code,
for example the date of the latest git commit, the date in
the package's debian/changelog, or the date in the RPM spec file.
See https://reproducible-builds.org/specs/source-date-epoch/ for the
specification of SOURCE_DATE_EPOCH, or https://reproducible-builds.org/
for more information on reproducible builds in general.
The old whitelist was a leftover from the early days of YQ2. It should
run on most / all architectures, as long as SDL supports them. As
suggested by smcv in issue #138, generate the OSTYPE and ARCH defines
by the build system instead of hardcoding them.
Savegame compatibility is provided by bumping the savegame version. Old
savegames are compared against the old OSTYPE and ARCH defines, new
ones against the new defines. This compatibility code should be removed
somewhere in the distant future.
Hardware gamma is broken, especially in fullscreen, and a Mac user told me
that setting HW/screen gamma on OSX is a bad idea anyway, because it resets
the monitor calibration.
The game /should/ look ok with vid_gamma 1 (if your display is configured
properly), but if you think it's too dark set it a bit higher and do
vid_restart.
Multitexturing was enabled by default in 0f7b422. It gives a small but
on today's hardware negligible performance boost, yet caused several
problems over the years. For example gl_showtris doesn't work with it
and there's at least one render glitch in city3.bsp.
Not sure if this has any drawbacks; seems to work well so far.
No idea why id apparently deactivated this at some point, maybe to
optimize performance?
Turns out that if the console is opened while no game is currently
running, cls.key_dest is not key_console but still key_game.
Changing it to key_console in those cases blows up in interesting ways.
So in Char_Event() we send events to the console in those cases.
The first row of AZERTY keyboards (used in France and Belgium) doesn't
have numbers as keys but ², &, é, ", ', (, -, è, _, ç, à, ), =
(with small differences between France and Belgium).
For some of those keys we don't have keycodes - and neither does SDL2.
See also https://bugzilla.libsdl.org/show_bug.cgi?id=3188
As a workaround, just map those keys to 1, 2, ..., 9, 0 anyway, as
those are keys Quake2 already knows (and those chars are printed on the
keys too; for typing they're reachable via shift).
This workaround only works with SDL2, as SDL1.2 doesn't have the
scancodes we need to identify these keys.
While at it I unified the handling of SDL_KEYDOWN and SDL_KEYUP; the
code is almost identical anyway, apart from one bool argument to
Key_Event().
We track this problem in #81.
This clamps the UI scale, limiting the relative size of the UI
elements to what they would be at scale 1 in a 320x240 resolution.
Allowing bigger scales is not useful and would make it possible
for the user to shoot him-/herself in the foot by setting a
"too big" UI scale value in the menu. (Since this would mess up
the menu, it could be hard to recover from for a casual user.)
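The clamp itself is a one-liner per scale cvar; a sketch with
illustrative names:

    /* at scale 1 a 320x240 layout fills the screen exactly, so the
     * largest useful scale is the ratio to the real resolution */
    float maxscale = viddef.height / 240.0f;

    if (scale->value > maxscale)
    {
        Cvar_SetValue(scale->name, maxscale);
    }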