Lots of Linux distros use different library names (libcurl-gnutls.so, etc.)
and version the symbols (curl_global_init@@CURL_LIBSSL_3), so it's more
compatible to just dlsym() the basic entry points we need and only require
that some libcurl is installed at all.
Alternatively: we could use our own libcurl build, but we'd probably have to
drop SSL support to make that sane to do.
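A rough sketch of the Makefile toggle this implies (USE_CURL_DLOPEN and the
-DUSE_CURL_DLOPEN define are assumed names used here for illustration):

  ifeq ($(USE_CURL),1)
    CLIENT_CFLAGS += -DUSE_CURL
    ifeq ($(USE_CURL_DLOPEN),1)
      # look libcurl up at runtime with dlopen()/dlsym() instead of linking it
      CLIENT_CFLAGS += -DUSE_CURL_DLOPEN
    else
      CLIENT_LIBS += $(CURL_LIBS)
    endif
  endif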
We need to be able to specify the minimum Mac OS X version outside of the
Makefile to avoid conflicting CFLAGS.
Moved -mmacosx-version-min LDFLAGS into the Makefile.
Moved -arch x86_64 from OPTIMIZEVM to CFLAGS to fix linker errors
(previously make-macosx-ub.sh passed it to CFLAGS manually).
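A sketch of the kind of override this allows, assuming a MACOSX_VERSION_MIN
variable (the actual variable name and default may differ):

  MACOSX_VERSION_MIN ?= 10.7
  BASE_CFLAGS += -mmacosx-version-min=$(MACOSX_VERSION_MIN)
  LDFLAGS += -mmacosx-version-min=$(MACOSX_VERSION_MIN)

With ?= a wrapper script can set the version from the environment without
duplicating the whole CFLAGS line.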
The goal of reproducible builds is that a rebuild of the same source
code with the same compiler, libraries, etc. should result in the same
binaries. SOURCE_DATE_EPOCH provides a standard way for build systems
to fill in the date of the latest source change, typically from a git
commit or from metadata like the debian/changelog in Debian packages.
This does not change anything if SOURCE_DATE_EPOCH is not defined;
the intention is that a larger build system, such as a Debian package
build, will define it.
Please see https://reproducible-builds.org/ for more information about
reproducible builds.
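A hedged sketch of how a Makefile can consume it (PRODUCT_DATE is an
illustrative define name; the date(1) invocation assumes GNU date):

  ifdef SOURCE_DATE_EPOCH
    # pin the build date to the latest source change for reproducibility
    BUILD_DATE := $(shell date -u -d "@$(SOURCE_DATE_EPOCH)" +%Y-%m-%d)
  else
    BUILD_DATE := $(shell date -u +%Y-%m-%d)
  endif
  BASE_CFLAGS += -DPRODUCT_DATE=\"$(BUILD_DATE)\"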
This can be used for LDFLAGS that would be inappropriate for shared
libraries, such as the "-fPIE -pie" used to link position-independent
executables. PIEs make it more difficult to exploit various classes
of security vulnerability.
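A hypothetical invocation (assuming LDFLAGS supplied on the make command
line is now applied when linking the executables but not the shared
libraries):

  make LDFLAGS="-fPIE -pie"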
A built-in GNU Make rule causes code/tools/lcc/lburg/gram.y to replace
gram.c if gram.y has a newer modified time. This causes git diff to
pick up changes to gram.c, which seems to have been manually modified
to fix warnings and may vary by Yacc used to create it. It also
requires installing a program to generate a file that already exists
in a usable state in the code repository.
So replace the built-in rule so it is only used if USE_YACC is 1
(defaults to 0). The Yacc executable name can be overridden using
`make YACC=yacc` like before.
I preferred to touch gram.c instead of installing Yacc because of the
problems it causes. It doesn't really seem like a good idea to recommend
that others do the same instead of disabling Yacc in the Makefile, though.
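A sketch of the rule override described above (illustrative; with
USE_YACC=1 the usual "%.c: %.y" rule applies and YACC still selects the
executable):

  USE_YACC ?= 0
  YACC ?= yacc

  ifneq ($(USE_YACC),1)
    # An empty pattern rule cancels GNU Make's built-in "%.c: %.y" rule,
    # so the gram.c checked into the repository is never regenerated.
    %.c: %.y
  endif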
If multiple mingw toolchains are installed, the Makefile would set CC to
every gcc executable it found. Pick only the first one instead of passing
the others to it as arguments.
"CC: /usr/bin/i686-w64-mingw32-gcc /usr/bin/i686-pc-mingw32-gcc"
The server/client VoIP protocol is handled by adding new cvars,
cl_voipProtocol and sv_voipProtocol; sv_voip and cl_voip are used to
automatically set and clear them. All users need to touch is
cl_voip / sv_voip, set to 0 or 1 just like before.
Old Speex VoIP packets in demos are skipped.
New VoIP packets are skipped in demos if sv_voipProtocol
doesn't match cl_voipProtocol.
A notable difference between the Speex and Opus usage: with Speex, the
client sent 80ms of audio at a time; with Opus, 60ms is sent at a time.
This was changed because the Opus codec supports encoding up to 60ms at
a time, and it's simpler to send only one codec frame per packet.
If TERM is not set (which can happen in autobuilders and other
batch environments), or if tput cannot determine the number of
columns for some other reason, then it can fail and not produce
any output. Prior to this change, that would result in passing
field width -4 to fmt, which is an error and causes fmt to
produce no output.
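A sketch of the kind of guard this implies, assuming the Makefile derives
a fmt width from tput (variable names are illustrative):

  # fall back to 80 columns when TERM is unset or tput fails
  TERM_COLUMNS := $(shell tput cols 2> /dev/null || echo 80)
  FMT_WIDTH := $(shell expr $(TERM_COLUMNS) - 4)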
GNU platforms (Linux, kFreeBSD, Hurd) have endian.h to determine
endianness, so all architectures except x86_64 are in fact treated
identically, except that their ARCH_STRING is different.
The ARCH_STRING must always be identical to the ARCH from the Makefile,
otherwise the engine will not find its cgame, game and ui plugins
under their expected names and startup will fail. If we pass it in
from the Makefile, then an identical value is guaranteed, and we can
get rid of an increasingly long list of defined(__some_cpu__) tests.
The one remaining quirk is that we test __x86_64__ to determine
whether to define idx64; I've kept that, but separated it from
the ARCH_STRING.
On non-Linux platforms we only support a few architectures anyway,
so keeping the list up to date is less of a burden; *BSD porters
could probably use the same technique to get support for lots of
architectures with little effort, but I have not done that here,
because I cannot test it.
Windows must continue to support preprocessor-based architecture tests
in any case, so that the MSVC solutions (which do not use the Makefile)
can continue to work. However, Windows only runs on a few CPU families,
so this shouldn't be a significant burden in practice.
When cross-compiling, the tools are compiled for the build architecture
(COMPILE_PLATFORM, COMPILE_ARCH) rather than the host architecture
(PLATFORM, ARCH), so define ARCH_STRING to COMPILE_ARCH on a GNU
COMPILE_PLATFORM.
The ARCH in the Makefile must match the ARCH_STRING in q_platform.h;
otherwise, ioquake3 will install (for instance) uiARCH.so but look for
uiARCH_STRING.so, which isn't going to go well (particularly for
the modular renderer).
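A sketch of how the define can be passed in from the Makefile (variable
names follow the description above; the exact quoting depends on how many
shell levels the flags pass through):

  # engine and game dlls: must match the Makefile's ARCH exactly
  ifeq ($(PLATFORM),linux)
    BASE_CFLAGS += -DARCH_STRING=\"$(ARCH)\"
  endif
  # build tools run on the build machine, so they get COMPILE_ARCH instead
  ifeq ($(COMPILE_PLATFORM),linux)
    TOOLS_CFLAGS += -DARCH_STRING=\"$(COMPILE_ARCH)\"
  endif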
Like i386, but unlike most (all?) other Linux platforms, uname -m on
32-bit ARM machines can have various results starting with "arm",
depending on the specific CPU version (e.g. Raspberry Pi is armv6l,
RPi2 is armv7l). Again similar to the x86 family,
it's appropriate for them to share an architecture suffix;
q_platform.h has traditionally used "arm" so let's use that.
64-bit ARM makes a clean break from this, much like 64-bit x86 does:
uname -m produces a string not starting with arm (specifically
"aarch64"), and gcc predefines __aarch64__ instead of __arm__.
As a result, it is unaffected by this change.
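A sketch of the corresponding uname -m normalization (the sed expressions
are illustrative; armv6l, armv7l, etc. collapse to "arm", while aarch64
and other strings are left untouched):

  COMPILE_ARCH := $(shell uname -m | sed -e 's/i.86/x86/' | sed -e 's/^arm.*/arm/')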
https://bugzilla.icculus.org/show_bug.cgi?id=5986
Created make-macosx-app.sh to handle creating an app bundle, either
manually or from the other scripts.
Updated make-macosx.sh to create the bundle with make-macosx-app.sh
(TODO: make-macosx-ub.sh support).
Updated the Makefile to create the bundle with make-macosx-app.sh and zip
up the resulting ioquake3.app if ARCHIVE is defined.
[As with GNU/kFreeBSD, it's treated as "Linux": all three use the GNU libc
and runtime linker, which is mostly what matters for ioquake3. -smcv]
Bug-Debian: http://bugs.debian.org/679330
Reviewed-by: Simon McVittie <smcv@debian.org>
As usual, the order of precedence is: user override, pkg-config,
or assume they're in standard locations.
In particular, Opus isn't in the default search path on Debian.
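A sketch of that precedence pattern (a user override wins, then pkg-config,
then a standard-location fallback); the Opus variable and package names
here are illustrative:

  ifeq ($(OPUS_CFLAGS)$(OPUS_LIBS),)
    # no user override: ask pkg-config, otherwise assume standard locations
    OPUS_CFLAGS := $(shell pkg-config --silence-errors --cflags opusfile || true)
    OPUS_LIBS := $(shell pkg-config --silence-errors --libs opusfile || echo -lopusfile -lopus)
  endif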
We didn't add CURL_CFLAGS to CLIENT_CFLAGS on all platforms, and didn't
use CURL_LIBS at all, so if "pkg-config --libs" returned "-L... -lcurl"
or even "/.../libcurl.a", it wouldn't work.
This lets us find a library in a non-standard library directory
(via -L in the pkg-config metadata), and allows overrides similar to
the Autoconf convention, e.g.
  make FREETYPE_CFLAGS=-I/opt/freetype/include \
       FREETYPE_LIBS="-L/opt/freetype/lib -lfreetype"
If pkg-config didn't work, assume that Freetype is in the default
location.
Linux distributions that want to link dependencies externally will
generally want to link (almost) every dependency externally; similarly,
minimal-dependency builds that want to use the embedded copies of
dependencies will generally want to do so for (almost) every dependency.
Make it easier to choose one of those by setting USE_INTERNAL_LIBS=0
or USE_INTERNAL_LIBS=1, respectively.
The default can still be overridden per-dependency; for instance,
"make USE_INTERNAL_LIBS=0 USE_INTERNAL_OPUS=1" will use the system
version of everything except Opus.
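A sketch of how the per-dependency defaults can inherit from the global
switch (the dependency names shown are examples):

  USE_INTERNAL_LIBS ?= 1
  # each dependency defaults to the global setting but can still be overridden
  USE_INTERNAL_OPUS ?= $(USE_INTERNAL_LIBS)
  USE_INTERNAL_ZLIB ?= $(USE_INTERNAL_LIBS)
  USE_INTERNAL_JPEG ?= $(USE_INTERNAL_LIBS)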