Commit graph

5 commits

Author SHA1 Message Date
Randy Heit
55142078d8 Normalize line endings 2016-03-01 09:47:10 -06:00
Randy Heit
8ce9e178d4 - Changed the four-byte fillers in asm_x86_64/tmap3.s from 0x88888888 to 0x44444444 because
newer versions of gas complained about it.

SVN r2481 (trunk)
2010-08-01 04:31:18 +00:00
Randy Heit
765ccd0100 - Remove Valgrind macros from the (Win64-only) asm_x86_64/tmap3.asm
SVN r1161 (trunk)
2008-08-12 03:20:50 +00:00
Randy Heit
c9187a0e09 - Ported asm_x86_64/tmap3.nas to AT&T syntax so it can be compiled with gas.
After finding out that gas does have directives to describe the .eh_frame
  metadata, I figured that would be significantly easier and quicker than
  trying to locate all the scattered docs I would need to read to figure out
  how to construct it by hand. Unfortunately, this now means I have to
  maintain two versions of exactly the same code. :( (But unless I add 32-bit
  color rendering in the future, the chances that I will have to touch it
  again are quite slim.)

SVN r1159 (trunk)
2008-08-12 02:49:00 +00:00
Randy Heit
dda5ddd3c2 - Ported vlinetallasm4 to AMD64 assembly. Even with the increased number of
registers AMD64 provides, this routine still needs to be written as self-
  modifying code for maximum performance. The additional registers do allow
  for further optimization over the x86 version by allowing all four pixels
  to be in flight at the same time. The end result is that AMD64 ASM is about
  2.18 times faster than AMD64 C and about 1.06 times faster than x86 ASM.
  (For further comparison, AMD64 C and x86 C are practically the same for
  this function.) Should I port any more assembly to AMD64, mvlineasm4 is the
  most likely candidate, but it's not used enough at this point to bother.
  Also, this may or may not work with Linux at the moment, since it doesn't
  have the eh_handler metadata. Win64 is easier, since I just need to
  structure the function prologue and epilogue properly and use some
  assembler directives/macros to automatically generate the metadata. And
  that brings up another point: You need YASM to assemble the AMD64 code,
  because NASM doesn't support the Win64 metadata directives.
- Added an SSE version of DoBlending. This is strictly C intrinsics.
  VC++ still throws around unnecessary register moves. GCC seems to be
  pretty close to optimal, requiring only about 2 cycles/color. They're
  both faster than my hand-written MMX routine, so I don't need to feel
  bad about not hand-optimizing this for x64 builds. (A sketch of the
  intrinsics approach appears after this entry.)
- Removed an extra instruction from DoBlending_MMX, transposed two
  instructions, and unrolled it once, shaving off about 80 cycles from the
  time required to blend 256 palette entries. Why? Because I tried writing
  a C version of the routine using compiler intrinsics and was appalled by
  all the extra movq's VC++ added to the code. GCC was better, but still
  generated extra instructions. I only wanted a C version because I can't
  use inline assembly with VC++'s x64 compiler, and x64 assembly is a bit
  of a pain. (It's a pain because Linux and Windows have different calling
  conventions, and you need to maintain extra metadata for functions.) So,
  the assembly version stays and the C version stays out.
- Removed all the pixel doubling r_detail modes, since the one platform they
  were intended to assist (486) actually sees very little benefit from them.
- Rewrote CheckMMX in C and renamed it to CheckCPU.
- Fixed: CPUID function 0x80000005 is specified to return detailed L1 cache
  information only for AMD processors, so we must not use it on other vendors'
  CPUs, or we end up overwriting the L1 cache line size with 0 or some other
  number we don't actually understand. (A sketch of the vendor check appears
  after this entry.)

SVN r1134 (trunk)
2008-08-09 03:13:43 +00:00
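
A note on the SSE DoBlending item above: the intrinsics version comes down to a widen, multiply-add, narrow loop over the palette. Below is a minimal C++ sketch of that idea using SSE2 intrinsics, assuming 32-bit BGRA palette entries and a blend amount in the range 0-256; the function name BlendPalette_SSE2, its signature, and the channel layout are illustrative assumptions, not the engine's actual DoBlending code.

#include <emmintrin.h>  // SSE2 intrinsics
#include <cstdint>

// Blend 'count' 32-bit BGRA palette entries toward (r, g, b) with amount
// 'a' in [0, 256]. Processes four entries per iteration; 'count' is assumed
// to be a multiple of 4 (a 256-entry palette satisfies this).
void BlendPalette_SSE2(const uint32_t *from, uint32_t *to, int count,
                       int r, int g, int b, int a)
{
    const int ia = 256 - a;
    // Per-channel constants in 16-bit lanes: blend color pre-multiplied by a.
    // Products are at most 255*256, so only the low 16 bits matter.
    const __m128i blend = _mm_set_epi16(
        0, (short)(r * a), (short)(g * a), (short)(b * a),
        0, (short)(r * a), (short)(g * a), (short)(b * a));
    const __m128i inv  = _mm_set1_epi16((short)ia);
    const __m128i zero = _mm_setzero_si128();

    for (int i = 0; i < count; i += 4)
    {
        __m128i src = _mm_loadu_si128((const __m128i *)(from + i));

        // Widen bytes to 16-bit lanes so the multiplies cannot overflow.
        __m128i lo = _mm_unpacklo_epi8(src, zero);
        __m128i hi = _mm_unpackhi_epi8(src, zero);

        // channel' = (channel * (256 - a) + blendchannel * a) >> 8
        lo = _mm_srli_epi16(_mm_add_epi16(_mm_mullo_epi16(lo, inv), blend), 8);
        hi = _mm_srli_epi16(_mm_add_epi16(_mm_mullo_epi16(hi, inv), blend), 8);

        // Narrow back to bytes with unsigned saturation and store four entries.
        // Note: this sketch scales the alpha lane as well; a real routine
        // would preserve it.
        _mm_storeu_si128((__m128i *)(to + i), _mm_packus_epi16(lo, hi));
    }
}

Widening each byte to a 16-bit lane keeps the multiply in range, and _mm_packus_epi16 narrows back with saturation, so no explicit clamping is needed.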
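
The CPUID fix in the last item above is essentially a vendor gate in front of extended leaf 0x80000005, whose detailed L1 data cache report (line size in bits 7-0 of ECX) is only defined by AMD; other vendors leave the leaf reserved. Here is a minimal C++ sketch of that check using the __get_cpuid helper from GCC/Clang's <cpuid.h> (MSVC's __cpuid intrinsic works similarly); the L1LineSize function and its zero fallback are hypothetical, not the engine's CheckCPU implementation.

#include <cpuid.h>   // GCC/Clang; MSVC provides __cpuid in <intrin.h>
#include <cstdio>
#include <cstring>

// Return the L1 data cache line size in bytes, or 0 if it cannot be
// determined safely. Leaf 0x80000005 is only consulted on AMD parts.
static unsigned L1LineSize()
{
    unsigned eax, ebx, ecx, edx;

    // Leaf 0: the vendor string is spread across EBX, EDX, ECX
    // ("AuthenticAMD", "GenuineIntel", ...).
    if (!__get_cpuid(0, &eax, &ebx, &ecx, &edx))
        return 0;
    char vendor[13];
    std::memcpy(vendor + 0, &ebx, 4);
    std::memcpy(vendor + 4, &edx, 4);
    std::memcpy(vendor + 8, &ecx, 4);
    vendor[12] = '\0';
    if (std::strcmp(vendor, "AuthenticAMD") != 0)
        return 0;  // leaf 0x80000005 is reserved on non-AMD CPUs

    // Extended leaf 0x80000005: on AMD, ECX bits 7-0 hold the L1 data
    // cache line size in bytes. __get_cpuid returns 0 if the leaf is
    // beyond the highest supported extended leaf.
    if (!__get_cpuid(0x80000005, &eax, &ebx, &ecx, &edx))
        return 0;
    return ecx & 0xffu;
}

int main()
{
    std::printf("L1 data cache line size: %u bytes\n", L1LineSize());
}

Gating on the vendor string first is what keeps a zero (or garbage) line size from ever reaching the caller on processors that do not implement the leaf.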