Commit graph

7 commits

Author SHA1 Message Date
Randy Heit
55142078d8 Normalize line endings 2016-03-01 09:47:10 -06:00
alexey.lysiuk
e6d468eb38 Use byte swapping functions from <libkern/OSByteOrder.h> on OS X
Remove inclusion of Core Foundation headers to avoid type conflicts with LZMA SDK.
2014-06-28 10:59:56 +03:00
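For context, a sketch of roughly what this mapping looks like; the wrapper
names follow ZDoom's m_swap.h style, but the exact bodies here are an
assumption, not the committed code:

```cpp
// Sketch only: map ZDoom-style swap helpers onto the OS X kernel macros.
// OSSwap*ToHostInt* come from <libkern/OSByteOrder.h> and replace the
// Core Foundation swappers whose types clashed with the LZMA SDK.
#include <libkern/OSByteOrder.h>
#include <cstdint>

inline short LittleShort(short x) { return (short)OSSwapLittleToHostInt16((uint16_t)x); }
inline int   LittleLong(int x)    { return (int)OSSwapLittleToHostInt32((uint32_t)x); }
inline short BigShort(short x)    { return (short)OSSwapBigToHostInt16((uint16_t)x); }
inline int   BigLong(int x)       { return (int)OSSwapBigToHostInt32((uint32_t)x); }
```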
Randy Heit
5a3b3631c3 - Added XMIDI support (including subsongs).
- Moved unaligned accessors into m_swap.h.

SVN r2859 (trunk)
2010-09-28 03:58:41 +00:00
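For context, a minimal sketch of the kind of byte-wise unaligned accessor
this commit moves into m_swap.h; the names and signatures below are
illustrative assumptions:

```cpp
#include <cstdint>

// Read multi-byte values one byte at a time so the code stays safe on
// architectures that fault on unaligned loads.
inline uint16_t ReadLittle16(const uint8_t *p)
{
    return uint16_t(p[0] | (p[1] << 8));
}

inline uint32_t ReadLittle32(const uint8_t *p)
{
    return p[0] | (p[1] << 8) | (p[2] << 16) | (uint32_t(p[3]) << 24);
}

// XMIDI is an IFF-style format, so its chunk lengths are big-endian.
inline uint32_t ReadBig32(const uint8_t *p)
{
    return (uint32_t(p[0]) << 24) | (p[1] << 16) | (p[2] << 8) | p[3];
}
```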
Randy Heit
33a397c04b - Basic Mac support: Everything compiles but does not yet link.
SVN r1780 (trunk)
2009-09-01 02:08:53 +00:00
Randy Heit
3f003e06db - Replaced the use of autoconf's WORDS_BIGENDIAN with __BIG_ENDIAN__, since
  the latter comes predefined by GCC.


SVN r1779 (trunk)
2009-08-31 21:47:29 +00:00
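For context, a sketch of the compile-time dispatch this change enables;
the swap body is illustrative, not the committed code:

```cpp
// __BIG_ENDIAN__ is predefined by GCC on big-endian targets (e.g. the
// PowerPC Macs of the era), so no configure-time test is needed.
#ifdef __BIG_ENDIAN__
inline int LittleLong(int x)   // on-disk little-endian data must be swapped
{
    unsigned u = (unsigned)x;
    return (int)((u >> 24) | ((u >> 8) & 0xFF00) | ((u << 8) & 0xFF0000) | (u << 24));
}
#else
inline int LittleLong(int x) { return x; }  // already in native order
#endif
```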
Randy Heit
dda5ddd3c2 - Ported vlinetallasm4 to AMD64 assembly. Even with the increased number of
registers AMD64 provides, this routine still needs to be written as self-
  modifying code for maximum performance. The additional registers do allow
  for further optimization over the x86 version by allowing all four pixels
  to be in flight at the same time. The end result is that AMD64 ASM is about
  2.18 times faster than AMD64 C and about 1.06 times faster than x86 ASM.
  (For further comparison, AMD64 C and x86 C are practically the same for
  this function.) Should I port any more assembly to AMD64, mvlineasm4 is the
  most likely candidate, but it's not used enough at this point to bother.
  Also, this may or may not work with Linux at the moment, since it doesn't
  have the eh_handler metadata. Win64 is easier, since I just need to
  structure the function prologue and epilogue properly and use some
  assembler directives/macros to automatically generate the metadata. And
  that brings up another point: You need YASM to assemble the AMD64 code,
  because NASM doesn't support the Win64 metadata directives.
- Added an SSE version of DoBlending. This is strictly C intrinsics.
  VC++ still throws around unnecessary register moves. GCC seems to be
  pretty close to optimal, requiring only about 2 cycles/color. They're
  both faster than my hand-written MMX routine, so I don't need to feel
  bad about not hand-optimizing this for x64 builds. (See the first sketch
  after this entry.)
- Removed an extra instruction from DoBlending_MMX, transposed two
  instructions, and unrolled it once, shaving off about 80 cycles from the
  time required to blend 256 palette entries. Why? Because I tried writing
  a C version of the routine using compiler intrinsics and was appalled by
  all the extra movq's VC++ added to the code. GCC was better, but still
  generated extra instructions. I only wanted a C version because I can't
  use inline assembly with VC++'s x64 compiler, and x64 assembly is a bit
  of a pain. (It's a pain because Linux and Windows have different calling
  conventions, and you need to maintain extra metadata for functions.) So,
  the assembly version stays and the C version stays out.
- Removed all the pixel doubling r_detail modes, since the one platform they
  were intended to assist (486) actually sees very little benefit from them.
- Rewrote CheckMMX in C and renamed it to CheckCPU.
- Fixed: CPUID function 0x80000005 is specified to return detailed L1 cache
  only for AMD processors, so we must not use it on other architectures, or
  we end up overwriting the L1 cache line size with 0 or some other number
  we don't actually understand. (See the second sketch after this entry.)


SVN r1134 (trunk)
2008-08-09 03:13:43 +00:00
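For context on the SSE DoBlending item above: a rough sketch of blending
32-bit palette entries toward a fixed color with SSE2 intrinsics. The
function name, signature, and blend formula are assumptions for
illustration, not ZDoom's actual routine:

```cpp
#include <emmintrin.h>  // SSE2 intrinsics
#include <cstdint>

// out = (in * (256 - amount) + blendColor * amount) >> 8 per 8-bit channel,
// with amount in 0..256 and count assumed to be a multiple of 4.
void BlendPalette(const uint32_t *in, uint32_t *out, int count,
                  uint32_t blendColor, int amount)
{
    const __m128i zero = _mm_setzero_si128();
    // Widen the blend color to 16-bit lanes (two pixels' worth) and
    // pre-multiply it by the blend amount once, outside the loop.
    __m128i blend = _mm_unpacklo_epi8(_mm_set1_epi32((int)blendColor), zero);
    blend = _mm_mullo_epi16(blend, _mm_set1_epi16((short)amount));
    const __m128i inv = _mm_set1_epi16((short)(256 - amount));

    for (int i = 0; i < count; i += 4)
    {
        __m128i c  = _mm_loadu_si128((const __m128i *)&in[i]);
        __m128i lo = _mm_unpacklo_epi8(c, zero);  // pixels 0-1 as 16-bit lanes
        __m128i hi = _mm_unpackhi_epi8(c, zero);  // pixels 2-3 as 16-bit lanes
        lo = _mm_srli_epi16(_mm_add_epi16(_mm_mullo_epi16(lo, inv), blend), 8);
        hi = _mm_srli_epi16(_mm_add_epi16(_mm_mullo_epi16(hi, inv), blend), 8);
        _mm_storeu_si128((__m128i *)&out[i], _mm_packus_epi16(lo, hi));
    }
}
```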
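And for the CPUID item: a hedged sketch, using GCC/Clang's <cpuid.h>
rather than ZDoom's actual CheckCPU, of gating leaf 0x80000005 on the
AMD vendor string; the function name and fallback value are assumptions:

```cpp
#include <cpuid.h>
#include <cstring>

unsigned GetL1LineSize()
{
    unsigned eax, ebx, ecx, edx;
    char vendor[13] = {0};

    if (!__get_cpuid(0, &eax, &ebx, &ecx, &edx))
        return 32;                    // assumed fallback line size

    memcpy(vendor + 0, &ebx, 4);      // vendor string order is EBX, EDX, ECX
    memcpy(vendor + 4, &edx, 4);
    memcpy(vendor + 8, &ecx, 4);

    // Leaf 0x80000005 reports detailed L1 cache info only on AMD parts;
    // on other vendors its registers may be zero or mean something else.
    if (strcmp(vendor, "AuthenticAMD") == 0 &&
        __get_cpuid(0x80000005, &eax, &ebx, &ecx, &edx))
    {
        return ecx & 0xFF;            // ECX bits 7:0 = L1D line size in bytes
    }
    return 32;
}
```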
Randy Heit
cf11cbdb30 Directory restructuring to make it easier to version projects that don't build zdoom.exe.
SVN r4 (trunk)
2006-02-24 04:48:15 +00:00