This initializes an empty render context (no tasks or pipelines connected)
and makes it available for other subsystems to register their task
functions. Nothing is using it yet, but the test parse of rp_main_def
fails gracefully (it needs those tasks).
This just sets up the memory block and cexpr descriptors for the
parameters; parameter parsing is separate (and next). The parameters are
aligned to their size.
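A minimal sketch of the alignment rule, with a hypothetical helper name
(the real setup code differs):

#include <stddef.h>

// Round a parameter's offset up to a multiple of its own size, assuming the
// size is a power of two (hypothetical helper, for illustration only).
static size_t
align_to_size (size_t offset, size_t param_size)
{
    return (offset + param_size - 1) & ~(param_size - 1);
}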
Needed to add the render passes plitem to the cexpr symbol table, too.
All that remains is to figure out how to deal with multiview (or really
@next) and get task parsing working.
A bunch of missed struct members, incorrect parse types, and some logic
errors in the parse setup. Still not working due to problems with
vectors from plist string references and some other errors, but getting
there.
This is most useful when parsing a labeled array where the key/value
pairs go into a simple array:
key = value;
going to:
struct foo {
    const char *key;
    enumtype value;
};
This treats dictionary items as arrays ordered by key creation (ie, the
order of the key/value pairs in the dictionary is preserved). The label
is written to the specified field when parsing the struct. Both actual
arrays and single element "arrays" are supported.
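As a concrete sketch (the key names and enum values here are made up for
illustration), a dictionary like { first = one; second = two; } would come
out as:

// Hypothetical enum standing in for "enumtype" above.
typedef enum { one, two } enumtype;

// The struct from above, repeated so this sketch stands alone.
struct foo {
    const char *key;
    enumtype value;
};

// Result of parsing { first = one; second = two; }: one element per key, in
// creation order, with each label written into the "key" field.
struct foo parsed[] = {
    { "first", one },
    { "second", two },
};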
This allows having sections in a spec used for things like `properties`
that have no corresponding fields in the actual struct: the field is
ignored when parsing and no cexpr field symbol is emitted.
There's still a lot of work to do, but the basics are in. The spec will
be parsed into info structs that can then be further processed to
generate all the actual structs, generally making things a little less
timing dependent (eg, image view info refers to its image by name).
The new render pass and subpass structs have their names mangled for now
until I can switch over to the new system.
Ruamoko currently doesn't support `const`, so that's not relevant, but
recognizing `char *` (via a hack to work around what looks like a bug
with type aliasing) allows strings to be handled without having to use a
custom parser. Things are still a little clunky for custom parsers, but
this seems to be a good start.
Using the typedef name makes using structs declared as
typedef struct foo_s { ... } foo_t;
easier and cleaner. Sure, I could have written "struct foo_s" for
the output name, but I'm much more likely to look for foo_t than foo_s
when checking the generated code.
While the old system did get things going, it felt clunky to set up,
especially when it came to variations on render passes (eg, flat vs
cube-mapped). Also, much of it felt inside-out, especially the
separation of pipelines and render passes: having to specify the render
pass and subpass in the pipeline spec made the spec feel overly coupled
to the render pass setup. While this is the case in Vulkan, it is not
reflected properly in the pipeline spec. The new system will adjust the
render pass and subpass parameters of the pipeline spec as needed,
making the pipeline specs more reusable, and hopefully less error prone
as the pipelines are directly referenced by the subpasses that use them.
In addition, subpass dependencies should be much easier to set up, as
only the dependent subpass specifies the dependency, with the source
subpass of the dependency referenced by name. Frame buffer attachments
also get a similar treatment.
The new spec "format" isn't quite finalized (needs to meet the enemy
known as parsing) but it feels like a good starting place.
I suspect this is a hold-over from before the bsp thread safety changes,
but with the nicely separated queues, it's easy to pass the sky surfaces
through the depth pass as well as the translucency pass (I think the
reason for that is lighting). This prevents bits of the world being seen
through sky surfaces when the sky isn't fully opaque (like skysheet, due
to the shortcuts in the shader).
Partial because frame buffer creation isn't handled yet (using six
layers), but using a layer-capable view and shaders doesn't cause
problems (other than maybe slightly slower code).
It turns out that my laptop doesn't do multiview properly (or I've
misconfigured something; I'll check later), but the biggest issue I had
on my desktop seems to be that I had the push constants wrong: fov in
aspect, time in fov, and I had degrees instead of radians (half angle)
anyway.
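For reference, a minimal sketch of the corrected layout (the struct and
function names are hypothetical, not the renderer's actual push-constant
block):

#include <math.h>

// Hypothetical push-constant block: each value in its own slot, with the
// field of view stored as half the angle, in radians.
typedef struct {
    float fov;      // half the field of view, in radians
    float aspect;   // width / height
    float time;     // scene time, in seconds
} sky_constants_t;

static sky_constants_t
make_sky_constants (float fov_degrees, int width, int height, float time)
{
    return (sky_constants_t) {
        .fov = fov_degrees * (float) M_PI / 360,  // degrees -> half-angle radians
        .aspect = (float) width / (float) height,
        .time = time,
    };
}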
There are some missing parts from this commit, as these are the fairly
clean changes. Still missing: building a separate set of pipelines for
the new render pass (might be able to get away from that), the OIT heads
texture is flat rather than an array, view matrices aren't set up, and
the fisheye renderer isn't hooked up to the output pass (the code exists
but is messy). However, with the missing parts included, testing shows things
mostly working: the cube map is rendered correctly even though it's not
displayed correctly (incorrect view). This has definitely proven to be a
good test for Vulkan's multiview feature (very nice).
CURLOPT_PROGRESSFUNCTION was deprecated in favor of
CURLOPT_XFERINFOFUNCTION (and the parameters for the callback adjusted).
I'm not sure why this didn't trigger until now.
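The switch looks roughly like this (a sketch only; the callback body and
user-data pointer are whatever the download code already uses):

#include <curl/curl.h>

// New-style transfer-info callback: same role as the old progress callback,
// but the counters are curl_off_t instead of double.
static int
xferinfo_cb (void *clientp, curl_off_t dltotal, curl_off_t dlnow,
             curl_off_t ultotal, curl_off_t ulnow)
{
    // ... update the progress display here ...
    return 0;   // non-zero aborts the transfer
}

static void
setup_progress (CURL *curl, void *userdata)
{
    // Previously CURLOPT_PROGRESSFUNCTION / CURLOPT_PROGRESSDATA, with a
    // callback taking double parameters.
    curl_easy_setopt (curl, CURLOPT_XFERINFOFUNCTION, xferinfo_cb);
    curl_easy_setopt (curl, CURLOPT_XFERINFODATA, userdata);
    curl_easy_setopt (curl, CURLOPT_NOPROGRESS, 0L);    // make sure the callback fires
}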
While the cexpr parser itself doesn't support void functions, they have
their uses within the system, and mixing them into the list of
function overloads shouldn't break non-void functions.
The type system rewrite had lost some of the checks for function fields.
This puts the actual code in one place and covers parameters as well
as globals.
Internally, * is not really a valid operator for vectors since it can
have many meanings. This didn't cause trouble until trying to build
everything in game-source (since there's still a lot of legacy code in
there).
The precedence check changes done in
63795e790b seem to have been incorrect
(game-source/ctf produced many false positives), so putting that check
against '=' back into the code seems like a good idea (no more false
positives). That sounds a bit cargo-cult, but I'm really not sure what I
was thinking when I did the changes (probably just tired).
This applies only to the top-level scope of the function. I'm not sure
if it's right for traditional quakec code, but that can be adjusted
easily enough.
The symtab code itself cares only about global/not global for the size
of the hash table, but other code can use the symtab type for various
checks (eg, parameter shadowing).
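A rough sketch of the idea (the enum values and sizes here are
hypothetical, not qfcc's actual definitions):

#include <stddef.h>

// Hypothetical symbol-table kinds: hash sizing only cares about global vs
// not-global, but other code can key off the specific kind.
typedef enum {
    stab_global,
    stab_param,
    stab_local,
} stab_type_t;

static size_t
symtab_hash_size (stab_type_t type)
{
    return type == stab_global ? 1024 : 64;
}

static int
shadows_parameter (stab_type_t enclosing_scope)
{
    // eg, warn when a local declaration shadows a function parameter.
    return enclosing_scope == stab_param;
}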
At least with a push-parser, by the time the parser has figured out it
has an identifier, the lexer has forgotten the token, thus the annoying
and uninformative "undefined identifier " error messages. Since
identifiers should always have a value (and functions need a function
type), setting up a dummy symbol with just the identifier name
duplicated seems to do the trick. It is a bit wasteful of memory if
there are a lot of such errors between cmem resets, though.
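A minimal sketch of the trick (the types and allocation here are
hypothetical; the real code uses qfcc's symbol and cmem machinery):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

// Hypothetical stand-in for the compiler's symbol type; the point is that the
// error path gets a symbol that still remembers the identifier's name.
typedef struct symbol_s {
    const char *name;
} symbol_t;

static symbol_t *
error_symbol (const char *identifier)
{
    symbol_t *sym = calloc (1, sizeof (symbol_t));
    // Duplicate the name because the lexer's token buffer will be reused
    // before the error is reported.
    sym->name = strdup (identifier);
    return sym;
}

int
main (void)
{
    symbol_t *sym = error_symbol ("missing_func");
    fprintf (stderr, "undefined identifier %s\n", sym->name);
    free ((char *) sym->name);
    free (sym);
    return 0;
}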
I ran into the need to get at the label of a labeled array element, and
the best way seemed to be setting the name field of the plfield_t item
passed to the parser function, only to find that PL_ParseSymtab
already does this. I then decided passing the array index would also be
good, and the offset field made sense.
qfot_basic_t is necessary for getting at the width of basic value types
(int, uint, float, long, ulong, double) in order to distinguish between
scalars and vectors of those types.
I had forgotten this when doing the Ruamoko VM and qfcc changes.
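A rough sketch of the distinction (the field names are hypothetical, not
the actual qfot_basic_t layout):

// A basic type with width 1 is a scalar; anything wider is a vector of that
// base type (eg, width 4 for a four-component int or float vector).
typedef struct {
    int type;   // stand-in for the engine's basic type code
    int width;  // number of components
} basic_info_t;

static int
is_vector (const basic_info_t *info)
{
    return info->width > 1;
}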