This allows for easy (and safe) printing of cexpr values where the type
supports it. Types that don't support printing would be those that are
too complex or possibly write-only (eg, password strings, once strings
are supported directly).
Surprisingly, only two, but they were caught because different value
fields were being used and thus the cvar was checked in multiple places.
I imagine that's not really all that common, so there may be some
inconsistencies between default value and use.
This is progress towards #23. There are still some references to
host_time and host_client (via nq's server.h), and a lot of references
to sv and svs, but this is definitely a step in the right direction.
This allows a single render pass description to be used for both
on-screen and off-screen targets. While Vulkan does allow a VkRenderPass
to be used with any compatible frame buffer, and vkparse caches a
VkRenderPass created from the same description, this allows the same
description to be used for a compatible off-screen target without any
dependence on the swapchain. However, there is a problem in the caching
when it comes to targeting outputs with different formats.
As I had suspected, it's due to a synchronization problem between the
scrap and drawing. There's actually a double problem in that data
uploaded to the scrap isn't flushed until the first frame is rendered
causing a quick init-shutdown sequence to take at least five seconds due
to the staging buffer waiting (and timing out) on a stuck fence.
Rendering just one frame "fixes" the problem (draw was one of the
earliest subsystems to get going in vulkan).
Since it is updated every frame, it needs to be as fast as possible for
the cpu code. This seems to make a difference of about 10us (~130 ->
~120) when testing in marcher. Not a huge change, but the timing
calculation was wrapped around the entire base world pass, so there was
a fair bit of overhead from bsp traversal etc.
The improved allocation overheads have been implemented for gl and sw,
and glsl no longer uses malloc. Using array textures will have to wait
as the current texture loading code doesn't support them.
Really, this won't make all that much difference because alias models
with more than one skin are quite rare, and those with animated skin
groups are even rarer. However, for those models that do have more than
one skin, it will allow for reduced allocation overheads, and, when
supported (glsl, vulkan, maybe gl), loading all the skins into an array
texture (since all skins are the same size, though external skins may
vary). That part isn't implemented yet; this just wraps the old
one-skin-at-a-time code.
While looking at the deferred attachment images with using a template in
mind, I noticed that the opaque attachment was using 8-bit color. The
problem is, it's meant to be HDRI with the compose pass crunching it
down to LDRI. Switching to 16-bit float does seem to have made a subtle
difference (hey, it's still quake data, not much HDRI in there).
That certainly makes it nicer to work with large sets, and shows one way
to be careful with allocated resources: don't allocate them in the
inherited data and use a template that needs a few things filled in to
be valid. Also, it seems that overriding values in sub-structures "just
works" :)
It simply parses the referenced plist dictionary (via @inherit =
plist.path;) into the current data block, then allows the data to be
overwritten by the current plist dictionary. This may be a bit iffy for
any allocated resources, so some care must be taken, but it seems to
work nicely.
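As a hedged illustration (the keys below are made up for the example;
only the @inherit = plist.path; mechanism itself is from this change), a
dictionary can pull in a base description and then override just the
parts that differ:
    offscreen_pass = {
        @inherit = renderpasses.main;
        color = { format = r16g16b16a16_sfloat; };
    };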
This makes much more sense as they are intimately tied to the frame
buffer on which a render pass is working. Now, just the window width
and height are stored in vulkan_ctx_t. As a side benefit,
QFV_CreateSwapchain no longer references viddef (now just palette and
conview in vulkan_draw.c to go).
While I have trouble imagining it making that much performance
difference going from 4 verts to 3 for a whopping 2 polygons, or even
from 2 triangles to 1 for each poly, using only indices for the vertices
does remove a lot of code, and better yet, some memory and buffer
allocations... always a good thing.
That said, I guess freeing up a GPU thread for something else could make
a difference.
I think I had gotten lucky with captures not being corrupt due to them
being much bigger than all but the L3 cache (and then they're over 1/2
the size), so the memory was being automatically invalidated by other
activity. Don't want to trust such luck, though.
It makes a significant difference to level load times (approximately
halves them for demo1 and demo2). Nicely, it turns out I had implemented
the rest of the staging buffer code (in particular, flushing) correctly
in that it seems there's no corruption in any of the data.
This means that a tex_t object is passed in instead of just raw bytes
and width and height, but it means the texture can specify whether it's
flipped or uses BGR instead of RGB. This fixes the upside down
screenshots for vulkan.
This fixes (*ahem*) the vulkan renderer segfaulting when attempting to
take a screenshot. However, the image is upside down. Also, remote
snapshots and demo capture are broken for the moment.
QFS_NextFilename was renamed to QFS_NextFile to reflect the fact it now
returns a QFile pointer for the newly created file (as well as the
name). This necessitated updating WritePNG to take a file pointer
instead of a file name, with the advantage that WritePNGqfs is no longer
necessary and callers have much more control over the creation of the
file.
This makes QFS_NextFile much more secure against file system race
conditions and attacks (at least in theory). If nothing else, it will
make it more robust in a multi-threaded environment.
It's not there yet as it promptly closes the file and returns only the
filename (and then only the portion within the user's directory tree).
However, this worked nicely as a test for Sys_UniqueFile.
QF currently uses unique file names for screenshots and server-side
demos (and remote snapshots), but they're generally useful.
QFS_NextFilename has been filling this role, but it is highly insecure
in its current implementation. This is the first step to taking care of
that.
The tests fail due to differences in how clang and gcc treat floating
point to unsigned integral type conversions when the values overflow. It
wouldn't be so bad if clang was consistent with conversions to 32-bit
unsigned integers, like it seems to be with conversion to 64-bit
unsigned integers.
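A minimal illustration (not from the actual tests): such out-of-range
conversions are undefined behavior in C, so the compilers are free to
differ.
    #include <stdint.h>
    #include <stdio.h>
    int
    main (void)
    {
        float       big = 1e10;            // too large for uint32_t
        uint32_t    u32 = (uint32_t) big;  // undefined: gcc and clang may differ
        uint64_t    u64 = (uint64_t) big;  // fits, so the result is well defined
        printf ("%u %llu\n", u32, (unsigned long long) u64);
        return 0;
    }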
With this, the "get QF building with clang" mini-project is done and I
won't have to panic when someone comes to me and asks if it will work.
At worst, there'll be a little bit-rot.
Only edicts themselves get a smaller alignment (4, 8 or 32 bytes,
depending on hardware and progs version). I didn't want to waste too
much memory on edict alignment for progs that don't need any better than
void *, but the zone really wants 64 bytes, and the stack might as well
have 64 bytes as well. Fixes a segfault when running v6 progs in a clang
build (clang got really aggressive with simd in zone.c).
gcc and clang have rather different swizzle builtins, but both do a nice
job of optimizing the intuitive initializer swizzle (I think gcc 8(?)
didn't do such a good job thus my use of __builtin_shuffle).
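Roughly what the initializer swizzle looks like (an illustrative sketch,
not the actual QF code):
    typedef float vec4 __attribute__ ((vector_size (16)));
    static vec4
    swizzle_zyxw (vec4 v)
    {
        // current gcc and clang both turn this into a single shuffle,
        // so no compiler-specific builtin is needed
        return (vec4) { v[2], v[1], v[0], v[3] };
    }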
clang doesn't like anything but a bare 0 as null (and in some of the
cases, it was quite right: '\0' should not be treated as a null
pointer). And the crashers were just for paranoia and probably aren't
needed any more (kept for now, though).
It seems clang defaults to unsigned for enums. Interestingly, gcc was ok
with the checks being either way. I guess gcc treats enums that *can* be
unsigned as DWIM.
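An illustrative example of the kind of check involved (not the actual
code): if the enum's underlying type is unsigned, the first comparison is
always true and clang complains, while gcc accepts it either way.
    typedef enum { thing_none, thing_a, thing_b, thing_count } thing_t;
    static int
    thing_valid (thing_t thing)
    {
        // "thing >= 0" is the part that trips clang when thing_t is unsigned
        return thing >= 0 && thing < thing_count;
    }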
Still work with gcc, of course, and I still need to fix them properly,
but now they're actually slightly easier to find as they all have vec_t
and FIXME on the same line.
Viewport and FOV updates are now separate so updating one doesn't cause
recalculations of the other. Also, perspective setup is now done
directly from the tangents of the half angles for fov_x and fov_y making
the renderers independent of fov/aspect mode. I imagine things are a bit
of a mess with view size changes, and especially screen size changes
(not supported yet anyway), and vulkan winds up updating its projection
matrices every frame, but everything that's expected to work does
(vulkan errors out for fisheye or warp due to frame buffer creation not
being supported yet).
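A hedged sketch of what building the projection from the half-angle
tangents means (column-major layout and a [0, 1] z range are assumptions
here; the per-renderer adjustments differ):
    #include <string.h>
    static void
    set_perspective (float proj[4][4], float tan_x, float tan_y,
                     float nearclip, float farclip)
    {
        float       fn = farclip / (farclip - nearclip);
        memset (proj, 0, 16 * sizeof (float));
        proj[0][0] = 1 / tan_x;         // x scale from tan (fov_x / 2)
        proj[1][1] = 1 / tan_y;         // y scale from tan (fov_y / 2)
        proj[2][2] = fn;                // map z to [0, 1]
        proj[3][2] = -nearclip * fn;
        proj[2][3] = 1;                 // w = z for the perspective divide
    }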
If the entity didn't have a known model type, R_StoreEfrags would get
stuck in an infinite loop (fortunately, that never actually happened; the
possibility was a result of making it not call Sys_Error for unknown
models).
I meant to do this a while ago but forgot about it. Things are a bit of
a mess in that the renderer knows too much about entities, but
eventually the renderer will know about only things to render (meshes,
particles, etc).
The quake-specific enums are now in the client header, and the particle
system now has a gravity field rather than getting it from
vid_render_data (which I hope to eventually get rid of entirely).
r_refdef is really meant for holding the various screen "constants" for
the software renderer rather than the more generic scene stuff. All the
fields referenced by the low level rendering code (especially assembly)
have been moved to the beginning of the struct (and nicely fit within 64
bytes). The other fields should be moved elsewhere, but not this commit.
On top of that, R_ViewChanged is much easier to read, and there are
fewer static globals.
Now GL perspective matrix setup matches that of GLSL and Vulkan, and
GL's z_up matrix matches GLSL's (as it should, since they're really
going through the same API). GL also needs the depth adjustment matrix
now. Other than having to google the docs for glFrustum, there's nothing
wrong with the function itself, but it's nice to have direct control
over the matrices.
In the process, I discovered how horribly confused I've been at times
with respect to the handedness of GL and Quake: GL is right-handed
(y-up, z-out, x-right), as is Quake itself (but z-up, y-left, x-in), but
as the perspective matrix used in the three renderers expects z-in,
having x-right and y-up makes the matrix effectively left-handed (not
for Vulkan though, because there it's y-down, x-right, z-up, so
right-handed again).
Of course, it's not as correct as glsl or sw due to using polygons and
uvs rather than a fragment shader (not that such is out of the question
since GL 3.0 is requested, but I don't feel like getting shaders going
just for a couple of post-processing effects in an obsolete renderer).
While it's not where I want it to be, it at least now no longer messes
with frame buffer binding or the view ports. This involved switching
around buffers in D_WarpScreen so that the main buffer could be bound
before post-processing.
The cvar setup for particles is a bit wonky in that the arrays get
initialized using the default max particle count but never updated.
Though things could be improved some more, this solution works (and has
been more or less copied to gl, but I couldn't reproduce the crash
there, or even the valgrind error).
The code dealing with state is a bit of a mess, but everything is
working nicely. Get around 400fps when all 6 faces need to be rendered
(no surprise: it should be about 1/6 of that for normal rendering). The
messy state handling code did not come as a surprise as I suspected
there were various mistakes in my scene rendering "recipe", and fisheye
highlighted them nicely (I'm sure getting this stuff working in Vulkan
will highlight even more issues).
Finally, after a decade :P Looks pretty good, too, and is (almost)
properly scaled to the resolution (almost because the effect is a little
squashed, but I think the sw renderer does the same).
The GLSL compiler requires any #version lines to be the first (real)
line of the program, even #line causes an error, so if the first line of
the chunk starts with #version, insert the #line directive as the second
line.
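A hedged sketch of that rule (the function and its parameters are
illustrative, not the actual shader-chunk code):
    #include <stdio.h>
    #include <string.h>
    static void
    emit_chunk (FILE *out, const char *chunk, int first_line)
    {
        if (strncmp (chunk, "#version", 8) == 0) {
            // #version must remain the first line of the program
            const char *eol = strchr (chunk, '\n');
            size_t      len = eol ? (size_t) (eol - chunk) + 1 : strlen (chunk);
            fwrite (chunk, 1, len, out);
            chunk += len;
            first_line++;               // the remaining text starts one line later
        }
        fprintf (out, "#line %d\n", first_line);
        fputs (chunk, out);
    }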
Again, gl/vulkan not working yet (on the assumption that sw would be
trickier).
Fisheye overrides water warp because updating the projection map every
frame is far too expensive.
I've added a post-process pass to the interface in order to hide the
implementation details, but I'm not sure I'm happy about how the
multi-pass rendering for cube maps is handled, or about having the frame
buffers as exposed as they are, mainly because Vulkan will make the
implementation interesting.
For now, OpenGL and Vulkan renderers are broken as I focused on getting
the software renderer working (which was quite tricky to get right).
This fixes a couple of issues: the segfault when warping the screen (due
to the scene rendering move invalidating the warp buffer), and warp
always having 320x200 resolution. There's still the problem of the
effect being too subtle at high resolution, but that's just a matter of
updating the tables and tweaking the code in D_WarpScreen.
Another issue is the Draw functions should probably write directly to
the main frame buffer or even one passed in as a parameter. This would
remove the need for binding the main buffer at the beginning and end of
the frame.
This used to be handled by R_RenderView (encompassing all of the
rendering) before the scene rendering was moved out to r_screen. This
fixes the stuck time in 32-bit nq-win.
Its guts have been moved to D_Init temporarily while I work on the
frame buffer design. This is actually a big part of that work as it
moves most of the frame buffer creation into the one place, making it
easier to ensure I get all the sub-buffers and caches created.
With what I have planned for frame buffers etc, GL 3.0 will be needed
even for the fixed-function GL renderer, and then I might even take the
GLSL renderer to 4.6 (dunno yet). This means that wgl will need to be
updated too, and I've found the info I need for that, but it's a bit
much to take on just yet.
I think the widespread use of recalc_refdef (and force_fullscreen) was
the result of a rushed merge of the renderer and video code (I do seem
to remember sprinkling them around). This cleans the two out of the
client code.
This avoids the possibility of a singularity (and thus the temptation to
use Sys_Error). While the rendering is rubbish, 0 degrees is allowed
because values less than 1 should be allowed, but where does one stop?
170 is the maximum in order to avoid any issues with (near) parallel or
inverted frustum planes (or other fun things) in the low level code.
Other than the view model (undecided on the approach) this has
R_RenderView pretty much pulled out of the low level renderers. With
this, I'll be able to focus on scene handling for a bit then getting
shadows and fisheye working (again for fisheye).
r_screen isn't really the right place, but it gets the scene rendering
out of the low-level renderers and will make it easier to sort out
later, and hopefully easier to figure out a good design for vulkan.
gl_overbright_f shouldn't need to run through any entity queues to
update the light maps as only the world model has light maps, and
hitting the world model should hit all its sub-models.
The change to using separate per-model-type entity queues resulted in
the lighting vector used for alias and iqm models being in an ephemeral
location (in the shared setup_lighting function's stack frame). This
resulted in the model rendering code getting a garbage vector due to it
being overwritten by another stack frame. What I don't get is why the
garbage varied from run to run for the same demo (demo2, the first scrag
behind the start door showed the bad lighting nicely), which made
tracking down the offending commit (and thus the code) rather
troublesome, though once I found it, it was a bit of a face-palm moment.
Move r_pcurrentvertbase into the sw renderer, cleaning up gl's use of it
(not really needed there). Not ready to move r_bsp into the main bin yet
as there are linking issues since only the low-level code references any
of its symbols.
The code is really part of scene (not a typo wrt r_screen: that is
misnamed as such, or at least SCR_UpdateScreen needs to be split into
screen (2d overlay, really) and scene updates).
This breaks fisheye rendering as the fisheye code calls the actual scene
render code multiple times, but the fisheye code is called by said scene
render code via a diversion. The fisheye needs to be moved out to the
high level scene render, but that will take some extra work for frame
buffer setup.
The two aren't compatible (but warping might be doable in the fisheye
code). The whole frame setup code needs a rework, and really, even the
buffer handling.
It being on the stack was a bad idea as R_RenderWorld returns before the
scans are rendered and thus the entity pointer winds up pointing to
abandoned stack space.
While the scheme of using our own allocated texture ids did work just
fine, fisheye
rendering uses glGenTextures which caused a texture id clash and thus
invalid operations (the cube map texture happened to be the same as the
console background texture). Sure, I could have just "fixed" the fisheye
init code, but this brings gl closer in line with glsl (which makes
extensive use of glGenTextures and glDeleteTextures). This doesn't fix
any texture leaks gl has (plenty, I imagine), but it's a step in the
right direction.
Only for gl and sw at the moment (want to merge things further before I
do anything for glsl or vulkan). However, with what I've learned getting
gl and sw to work, glsl and vulkan will be trivial.
R_RecursiveLightUpdate has been obsolete for a very long time, and
R_Mirror is just wrong (needs envmaps etc, wonder if it can be done in
the fixed function code using skyclip?)
Finally. I never liked it (felt bad adding it in the first place), and
it has caused confusion with function and global variable names, but it
did let me get the render plugins working.
They're still slightly confusing (the situation itself is confusing),
but the comments should be a little more helpful now as they are more
explicit about the orientation of the matrices and just which axis
points where.
This moves the common camera setup code out of the individual drivers,
and completely removes vup/vright/vpn from the non-software renderers.
This has highlighted the craziness around AngleVectors with it putting
+X forward, -Y right and +Z up. The main issue with this is it requires
a 90 degree pre-rotation about the Z axis to get the camera pointing in
the right direction, and that's for the native sw renderer (vulkan needs
a 90 degree pre-rotation about X, and gl and glsl need to invert an
axis, too), though at least it's just a matrix swizzle and vector
negation. However, it does mean the camera matrices can't be used
directly.
Also rename vpn to vfwd (still abbreviated, but fwd is much clearer in
meaning (to me, at least) than pn (plane normal, I guess, but which
way?)).
I'd been considering it for a while, but in the end, all the issues it
presented made me decide it wasn't worth merging and was never really
worth keeping: it was a neat proof of concept but of little actual use,
especially now everyone either has an OK GPU or would want to stick to
8-bit rendering anyway (sorry L-Havoc).
However, both it and my merge work are preserved in git history :)
16 and 32 bit rendering are disabled at the moment because there's a
weird segfault I need to fix, but the 8-bit dynamic lights are doing
weird things (for x11, too) when updating the light maps.
I got tired of having to maintain two separate software renderers, but
didn't want to just nuke sw32, so its core changes are merged into sw.
Alias model rendering is broken, but I know exactly what's wrong and how
to fix it, just need to take care due to asm.
So far, in gl and glsl, but viewposition is much clearer than r_origin
(despite being the same thing), and modelorg is just confusing (I think
it's the view position relative to the current model).
GL still has its own functions for enabling and disabling fog while
rendering, but GLSL doesn't need such (thanks to the shaders), nor will
vulkan (and the software renderers don't support fog).
This is a step towards high-level unification of the renderers, as far
as possible keeping only actual low-level implementation details in the
individual renderers (some higher level stuff, eg shadows, is expected
to be per-renderer as some things are just not feasible to implement in
all renderers). However, the idea is to move the high-level
functionality into scene rendering.
As qwaq doesn't yet do any 3d rendering, it doesn't use efrags and thus
wasn't pulling in the object file, but the various renderers were trying
to access it. And I thought plugin builds were more difficult (I had
forgotten).
Only CaptureBGR is per-renderer as the rest of the screenshot code uses
it to do the actual capture (which is target dependent). Vulkan is
currently broken due to capture being an asynchronous process and the
rest of the code expecting capture to be synchronous (also, bgr vs rgb).
The best thing is all renderers now write the same format (currently
png).
I'm not sure what the author of that code was thinking (maybe trying to
do 4 pixels at a time?), but the resulting code still did only one.
Better to remove all the casts, use the right pointer type, and keep the
code clear.
Drawing sky chains first ensures that sky surfaces correctly block parts
of the map that should not be visible (by writing the correct depth to
the depth buffer when doing box or dome skies). Writing brush models
first means that the models (ammo boxes etc) could be visible when they
should not be.
While there's currently still only the one queue, this will allow the
entities
to be multiply queued for multi-pass rendering (eg, shadows). As the
avoidance of putting an entity in the same queue more than once relies
on the entity id, all entities now come from the scene (which is stored
in cl_world in the client code for nq and qw), thus the extensive
changes in the clients.
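A hedged sketch of the queue-once idea keyed on the scene's entity id
(the struct and field names are illustrative, not the actual entity
queue code):
    static void
    queue_entity (entqueue_t *queue, int list, entity_t *ent)
    {
        if (!set_is_member (queue->queued_ents, ent->id)) {
            set_add (queue->queued_ents, ent->id);      // seen in this pass
            DARRAY_APPEND (&queue->ent_lists[list], ent);
        }
    }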
The root transform of each hierarchy can be extracted from the first
transform of the list in the hierarchy, so no information is lost. The
main reason for the change is I discovered (obvious in hindsight) that
deleting root transforms was O(n) due to keeping them in an array, thus
the use of a linked list (I don't expect a hierarchy to be in more than
one such list), and I didn't want the transforms to be in a linked list.
GL and GLSL were drawing the view model after particles instead of
before. For GL, this is likely due to avoiding fog affecting the view
model (which I think is not the right thing to do), and GLSL due to
copying GL (because I had no idea at the time). This makes the two
renderers consistent with the software renderers, and might even speed
things up a little as that's one less set of blends to do when the
particles are covered by the view model (I don't expect much
difference).
While I doubt the difference is all that significant, this should speed
up entity rendering because it cuts out a lot of branching, and
eliminates scanning the same list multiple times only to not do anything
for large chunks of the list.
Since transforms now know the scene to which they belong, and they know
when they are root and when not, getting the transform code to manage
the scene roots is the best way to keep the list of root transforms
consistent.
It turns out cam_controls is for pointing the player model in the
direction of movement rather than controlling the camera (I should add
proper camera controls).
I finally spent the time to work out what it was trying to say. Still
not sure it's clear, but what is clear is that there was probably some
disagreement at Id about the orientation of the world.
They no longer spin like crazy. I don't know how, but I must have broken
something over the years as I'm sure Seth had the code working (and I
seem to remember seeing it working). In the process, clean up a lot of
the angle mess.
It's a lot easier to read (and see the difference between modes 2 and 3)
with all the ifs removed, and the state is properly in chasestate_t now
(though not handled properly on level reset etc).
The more advanced modes are rather broken (continuous spinning), but
they may have been for a while. The bulk of the various changes were due
to renaming viewstate's origin and angles to make their meaning more
explicit.
They've been near-identical for years, now they're only one. It proved
necessary to start merging the HUD code which for now is just a few cvar
declarations (not even init), but that should be a separate set of
commits.
The actual view and projection matrices are now consistent with vulkan,
with the vulkan-gl disparity moved into adjustment matrices. The goal is
to allow the same camera data and code to be used in all renderers. The
extra matrix multiplication shouldn't be too expensive as it occurs only
when the field of view (not often, under user control) or near and far
clip distances (very rarely) change.
It holds the data for a basic 3d camera (transform, fov, near and far
clip). Not used yet as there is much work to be done in cleaning up the
client code.
Handling of view angles is a little hacky at the moment, but this gets
the chase camera code and most of the common input code into one place,
which will make cleaning up the camera code much easier.
While both matrices had positive determinants in the first place, I find
the projection matrix easier to understand without all the negatives,
and having quake-x/vulkan-z positively parallel in the z-up matrix makes
that a lot easier to think about.
Regardless of whether the sky is spinning or not, the matrix needs to be
updated with the current origin in order to get the direction vector
right in the shader. Also, it's in the update that the required x-y
plane rotation gets applied so the skies move in the correct direction.
This actually has at least two benefits: the transform id is managed by
the scene and thus does not need separate management by the Ruamoko
wrapper functions, and better memory handling of the transform objects.
Another benefit that isn't realized yet is that this is a step towards
breaking the renderers free of quake and quakeworld: although the
clients don't actually use the scene yet, it will be a good place to
store the rendering information (functions to run, etc).
I've run into a bit of an issue with transform management (really, just
need to make them owned by the scene, but that means creating a scene
for quake and quakeworld).
This is the bulk of the work for recording the resource pointer with
builtin data. I don't know how much of a difference it makes for
most things, but it's probably pretty big for qwaq-curses due to the
very high number of calls to the curses builtins.
Closes #26
The zone memory block header is 64 bytes, so allocating a single 8 byte
selector is rather wasteful. Instead, allocate selectors in large chunks
(currently 64) and divvy them out as needed. Significantly reduces
memory pressure in large Ruamoko progs.
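A hedged sketch of the chunking idea (all names here, including the
zone_alloc helper, are illustrative rather than the actual rua_obj
code):
    #define SEL_CHUNK 64
    static pr_sel_t *sel_block;     // current chunk of selectors
    static int       sel_used;      // how many have been handed out from it
    static pr_sel_t *
    new_selector (progs_t *pr)
    {
        if (!sel_block || sel_used == SEL_CHUNK) {
            // one zone allocation (and one 64-byte block header) per
            // 64 selectors instead of per selector
            sel_block = zone_alloc (pr, SEL_CHUNK * sizeof (pr_sel_t));
            sel_used = 0;
        }
        return &sel_block[sel_used++];
    }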
These add legacy support for basic float bitops (& | ^ ~). Avoiding the
instructions would require not only the source to be converted, but also
the servers (as they do access those fields), and this seemed to be too
much.
It's not enforced at this stage, and it would be easy enough to handle,
but it turns out all the standard quake and quakeworld progs never used
... for the print functions: the behavior of PF_VarString was
undocumented and so... tough :P.
I had forgotten that unsigned division was different from signed
division (rather silly of me). However, with some testing and analysis,
unsigned true modulo is not needed as it's not possible to have
negative inputs and thus it's the same as remainder.
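For example, with signed values -7 rem 3 is -1 while -7 mod 3 is 2, so
the two operations need separate handling; with unsigned values both
operands are non-negative, so 7 rem 3 and 7 mod 3 are both 1 and one
operation covers both.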
It now takes the function name to print in error message (passed on to
PR_Sprintf) and the argument number of the format string. The variable
arguments (in ...) are assumed to be immediately after the format
argument.
This loads the current return pointer into the specified register. No
offset is used (should make that an error, but for now any offset is
simply ignored). This is part of the fix for getting obj_msg_sendv to
work with return values.
With the return buffer in progs_t, it could not be addressed by the
progs on 64-bit machines (this was intentional, actually), but in order
to get obj_msg_sendv working properly, I needed a way to "bounce" the
return address of a calling function to the called function. The
cleanest solution I could think of was to add a mode to the with
instruction allowing the return pointer to be loaded into a register and
then calling the function with a 0 offset for the return value but using
the relevant register (next few commits). Testing promptly segfaulted
due to the 64-bit offset not fitting into a 32-bit value.
This gets message forwarding apparently working, though something isn't
quite right as qwaq-app doesn't update properly when I try to step
through the program, but that could be an error elsewhere.
The plan is to use the types to extract the number of parameters for a
selector when it is necessary to know the count. However, it'll probably
become useful for something else later (these things seem to always do
so).
This takes care of the problems with PR_RESET_PARAMS (which has recently
become just a wrapper for PR_SetupParams) changing the stack and causing
PR_CallFunction to save the wrong stack pointer. Message forwarding is
currently broken for Ruamoko ISA progs, but that is due to not having a
valid pr_argc. However, I do have a plan involving extracting the
parameter count from the selector, but that's something for a later
commit. Everything else seems to be ok (my little game is working
nicely).
rua_obj was skipped because that looks to be a bit more work and should
be a separate commit.
This is to avoid the stack getting mangled when calling progs functions
with parameters.
I suppose having one builtin call another was a neat idea at the time,
and really could have been fixed by simply wrapping the calls with
push/pop frame, but this is probably faster.
obj_msg_sendv needs to push the parameters onto the stack for Ruamoko
progs, but this causes problems because PR_CallFunction winds up
recording the wrong stack pointer for progs functions, and nothing
restores the stack for builtins. The handling is basically the same as
for the return pointer.
It's a bit disconcerting seeing a builtin in the top 10 when builtins
are counted by call while progs functions are counted by instruction.
Also, show the total profile after the function top-10 list.
pr_argc cannot be used in Ruamoko progs because nothing sets it. This
fixes the parse errors and resulting segfault when trying to parse the
Vulkan pipeline config.
It's currently only 4 (or even 3 for v6) words, but this fixes false
positives when checking for null pointers in Ruamoko progs due to
pr_return pointing to the return buffer and thus outside the progs
memory map resulting in an impossible to exceed value.
Since Z_Malloc uses Z_TagMalloc to do the work, this ensures the check
is always run.
Also, add the check to Z_Realloc when it needs to adjust an existing
block.
Builtins that call progs with parameters now must always wrap the call
to PR_ExecuteProgram so that the data stack is properly preserved across
the call.
I need to do an audit of all the calls to PR_ExecuteProgram.
It turns out the return pointer still needs to be saved even when a
builtin sets up a chain call to progs, but rather than the pointer being
simply restored, it needs to be saved in the call stack exactly as if
the function was called directly by progs. This fixes the invalid self
issue quite thoroughly: parameter state seems to be correct across all
calls now.
I should set up an automated test now that I know and understand the
situation.
In Ruamoko ISA progs, the param pointers point to the stack and
generally must not be manipulated by builtins, and there is no need
anyway as Ruamoko doesn't have RCALL. Fixes the mangling of .super.
When calling a builtin, normally the return pointer needs to be
restored, but if the builtin changes the call depth (usually by
effecting "return foo()" as in support for objects, but possibly
setjmp/longjmp when they are implemented), then the return pointer must
not be restored. This gets vkgen past object allocation, but it dies
when trying to send messages to super. This appears to be a compiler
bug.
Since the operand types sort out the difference between asr and shr, no
need to give them different opnames. Means qfcc doesn't need to worry
about which one it's searching for.
Yet another redundant addressing mode (since ptr + 0 can be used), so
replace it with a variable-indexed array (same as in v6p). Was forced
into noticing the problem when trying to compile Machine.r.
I abandoned the reason for doing it (adding a pile of vector types), but
I liked the cleanup. All the implementations are hand-written still, but
at least the boilerplate stuff is automated.
Of course, only in Ruamoko progs, but it works quite nicely.
global_string is now passed the absolute address of the referenced
operand. With a little groveling through the progs stack, it should be
possible to resolve pointers to locals in functions further up the
stack.
This fixes Ruamoko's return format string. It looks like it's producing
the correct address (but doesn't show all the information it should),
but the rest of the debug code needs work to handle locals.
It turned out I need locals count and params_start for debugging, so use
the progs version instead to bail early from PR_EnterFunction and
PR_LeaveFunction (which I had forgotten anyway, oops).
They now include base register index and effective address of the
operands (though it may be wrong for instructions that don't use a base
register for that operand).
This cleans up dprograms_t, making it easier to read and see what chunks
are in it (I was surprised to see only 6, the explicit pairs made it
seem to have more).
Intel hardware requires 32-byte alignment for lvec4 and dvec4.
Unfortunately, it turns out that my attempts to align progs data in qfcc
went awry due to the order in which block sizes are calculated when writing the
progs.
This makes return consistent with load, store, etc, though its
addressing mode is encoded in bits 5 and 6 of c rather than the opcode.
It turns out I had no tests for any of return's addressing modes other
than basic def references, so no tests needed changing.
The parameter defs are allocated from the parameter space using a
minimum alignment of 4, and varargs functions get a va_list struct in
place of the ...
An "args" expression is unconditionally injected into the call arguments
list at the place where ... is in the list, with arguments passed
through ... coming after the ...
Arguments get through to functions now, but there's problems with taking
the address of local variables: currently done using constant pointer
defs, which can't work for the base register addressing used in Ruamoko
progs.
With the update to test-bi's printf (and a hack to qfcc for lea),
triangle.r actually works, printing the expected results (but -1 instead
of 1 for equality, though that too is actually expected). qfcc will take
a bit longer because it seems there are some design issues in address
expressions (ambiguity, and a few other things) that have pretty much
always been there.
PR_SetupParams is new and sets up the parameter pointers so older code
that expects only up to 8 parameters will work with both v6p and Ruamoko
progs without having to check what progs are running. PR_SetupParams is
useful even when Ruamoko progs are expected as it reserves the required
space (respecting alignment) on the stack and returns a pointer to the
top (bottom? confusing) of the stack. PR_PushFrame and PR_PopFrame
need to be used around PR_SetupParams, regardless of using temp strings,
to avoid a stack leak (need to do an audit).
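A hedged sketch of the resulting pattern (the PR_SetupParams arguments
are assumed from the description above rather than copied from the
source):
    static void
    call_with_two_floats (progs_t *pr, pr_func_t func, float a, float b)
    {
        PR_PushFrame (pr);              // must bracket the setup: no stack leak
        PR_SetupParams (pr, 2, 1);      // reserve (aligned) space for two params
        P_FLOAT (pr, 0) = a;            // old-style access now works for both ISAs
        P_FLOAT (pr, 1) = b;
        PR_ExecuteProgram (pr, func);
        PR_PopFrame (pr);
    }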
This is part of the work for #26 (Record resource pointer with builtin
function data). Currently, the data pointer gets as far as the
per-instance VM function table (I don't feel like tackling the job of
converting all the builtin functions tonight). All the builtin modules
that register a resources data block pass that block on to
PR_RegisterBuiltins.
The builtin and progs function data is overlaid so the extra data
doesn't cause too much memory to be used (it's actually 8 bytes smaller
now). The plan is to pre-compute the offsets based on the parameter
size and alignment data.
This will make it possible for the engine to set up the builtins'
parameter pointers when running Ruamoko progs. At this stage, it doesn't matter
*too* much, except for varargs functions, because no builtin yet takes
anything larger than a float quaternion, but it will be critical when
double or long vec3 and vec4 values are passed.
Just 32-bit rounding to next higher power of two, and base 2 logarithm.
Most importantly, they are suitable for use in initializers as they are
constant in, constant out.
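A hedged sketch of the usual constant-expression form for the round-up
half (the actual macro names differ, and the base-2 log can be built the
same way by counting the smeared bits):
    #include <stdint.h>
    #define SMEAR1__(x) ((x) | ((x) >> 1))
    #define SMEAR2__(x) (SMEAR1__ (x) | (SMEAR1__ (x) >> 2))
    #define SMEAR4__(x) (SMEAR2__ (x) | (SMEAR2__ (x) >> 4))
    #define SMEAR8__(x) (SMEAR4__ (x) | (SMEAR4__ (x) >> 8))
    #define SMEAR16__(x) (SMEAR8__ (x) | (SMEAR8__ (x) >> 16))
    // round a non-zero 32-bit value up to the next power of two; being a
    // pure constant expression, it works in initializers and array sizes
    #define ROUND_POW2(x) (SMEAR16__ ((uint32_t) (x) - 1) + 1)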