1 //===---------------------------------------------------------------------===//
2 // Random ideas for the X86 backend.
3 //===---------------------------------------------------------------------===//
6 //===---------------------------------------------------------------------===//
8 CodeGen/X86/lea-3.ll:test3 should be a single LEA, not a shift/move. The X86
9 backend knows how to three-addressify this shift, but it appears the register
10 allocator isn't even asking it to do so in this case. We should investigate
why this isn't happening; it could have a significant impact on other important
12 cases for X86 as well.
14 //===---------------------------------------------------------------------===//
16 This should be one DIV/IDIV instruction, not a libcall:
unsigned test(unsigned long long X, unsigned Y) {
        return X / Y;
}
22 This can be done trivially with a custom legalizer. What about overflow
23 though? http://gcc.gnu.org/bugzilla/show_bug.cgi?id=14224
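For reference on the overflow question: divl computes edx:eax / r/m32 and faults (#DE)
when the quotient does not fit in 32 bits, so the single-instruction form is only safe
when that can be ruled out. A hedged sketch of the condition (illustrative helper, not
existing code):

    /* A 64/32 udiv can use one divl only when the quotient fits in 32 bits,
       which holds exactly when the high word of X is less than Y. */
    int one_divl_is_safe(unsigned long long X, unsigned Y) {
      return (unsigned)(X >> 32) < Y;
    }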
25 //===---------------------------------------------------------------------===//
27 Improvements to the multiply -> shift/add algorithm:
28 http://gcc.gnu.org/ml/gcc-patches/2004-08/msg01590.html
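As a concrete (illustrative) instance of the kind of decomposition meant here, small
constant multiplies can become LEA/add sequences, e.g. with the value in %eax:

        leal    (%eax,%eax,8), %eax     # x*9  = x + 8*x
        # or, for x*10:
        leal    (%eax,%eax,4), %eax     # 5*x
        addl    %eax, %eax              # 10*x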
30 //===---------------------------------------------------------------------===//
32 Improve code like this (occurs fairly frequently, e.g. in LLVM):
33 long long foo(int x) { return 1LL << x; }
35 http://gcc.gnu.org/ml/gcc-patches/2004-09/msg01109.html
36 http://gcc.gnu.org/ml/gcc-patches/2004-09/msg01128.html
37 http://gcc.gnu.org/ml/gcc-patches/2004-09/msg01136.html
39 Another useful one would be ~0ULL >> X and ~0ULL << X.
41 One better solution for 1LL << x is:
50 But that requires good 8-bit subreg support.
52 Also, this might be better. It's an extra shift, but it's one instruction
53 shorter, and doesn't stress 8-bit subreg support.
54 (From http://gcc.gnu.org/ml/gcc-patches/2004-09/msg01148.html,
55 but without the unnecessary and.)
63 64-bit shifts (in general) expand to really bad code. Instead of using
64 cmovs, we should expand to a conditional branch like GCC produces.
66 //===---------------------------------------------------------------------===//
69 _Bool f(_Bool a) { return a!=1; }
76 (Although note that this isn't a legal way to express the code that llvm-gcc
77 currently generates for that function.)
79 //===---------------------------------------------------------------------===//
1. Dynamic programming based approach when compile time is not an issue.
85 2. Code duplication (addressing mode) during isel.
86 3. Other ideas from "Register-Sensitive Selection, Duplication, and
87 Sequencing of Instructions".
88 4. Scheduling for reduced register pressure. E.g. "Minimum Register
89 Instruction Sequence Problem: Revisiting Optimal Code Generation for DAGs"
90 and other related papers.
91 http://citeseer.ist.psu.edu/govindarajan01minimum.html
93 //===---------------------------------------------------------------------===//
95 Should we promote i16 to i32 to avoid partial register update stalls?
97 //===---------------------------------------------------------------------===//
Leave any_extend as a pseudo instruction and hint to the register
allocator. Delay codegen until post-register allocation.
Note: any_extend is now turned into an INSERT_SUBREG. We still need to teach
the coalescer how to deal with it, though.
104 //===---------------------------------------------------------------------===//
It appears icc uses push for parameter passing. We need to investigate.
108 //===---------------------------------------------------------------------===//
Only use inc/neg/not instructions on processors where they are faster than
add/sub/xor. They are slower on the P4 because they only update some of the
processor flags.
114 //===---------------------------------------------------------------------===//
116 The instruction selector sometimes misses folding a load into a compare. The
117 pattern is written as (cmp reg, (load p)). Because the compare isn't
118 commutative, it is not matched with the load on both sides. The dag combiner
should be made smart enough to canonicalize the load into the RHS of a compare
120 when it can invert the result of the compare for free.
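For illustration (registers hypothetical), an equality test against a loaded value
should fold the load regardless of which side it appears on:

        cmpl    %ecx, (%eax)            # desired: load folded into the compare
        sete    %al
        # instead of:
        # movl  (%eax), %edx
        # cmpl  %edx, %ecx
        # sete  %al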
122 //===---------------------------------------------------------------------===//
124 How about intrinsics? An example is:
125 *res = _mm_mulhi_epu16(*A, _mm_mul_epu32(*B, *C));
128 pmuludq (%eax), %xmm0
The transformation probably requires an X86-specific pass or a target-specific
DAG combiner hook.
136 //===---------------------------------------------------------------------===//
138 In many cases, LLVM generates code like this:
147 on some processors (which ones?), it is more efficient to do this:
156 Doing this correctly is tricky though, as the xor clobbers the flags.
158 //===---------------------------------------------------------------------===//
160 We should generate bts/btr/etc instructions on targets where they are cheap or
when code size is important. e.g., for:
163 void setbit(int *target, int bit) {
164 *target |= (1 << bit);
166 void clearbit(int *target, int bit) {
167 *target &= ~(1 << bit);
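A sketch of the desired output, assuming the pointer is in %eax and the bit index in
%ecx (valid because 1 << bit is only defined for bit counts 0..31, so bts/btr stay
within the pointed-to int):

        btsl    %ecx, (%eax)            # setbit:   *target |=  (1 << bit)
        btrl    %ecx, (%eax)            # clearbit: *target &= ~(1 << bit)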
170 //===---------------------------------------------------------------------===//
172 Instead of the following for memset char*, 1, 10:
174 movl $16843009, 4(%edx)
175 movl $16843009, (%edx)
178 It might be better to generate
185 when we can spare a register. It reduces code size.
187 //===---------------------------------------------------------------------===//
189 Evaluate what the best way to codegen sdiv X, (2^C) is. For X/8, we currently
192 define i32 @test1(i32 %X) {
206 GCC knows several different ways to codegen it, one of which is this:
216 which is probably slower, but it's interesting at least :)
218 //===---------------------------------------------------------------------===//
We are currently lowering large (1MB+) memmove/memcpy to rep/stosl and rep/movsl.
We should leave these as libcalls for everything over a much lower threshold,
since libc is hand-tuned for medium and large mem ops (avoiding RFO for large
stores, TLB preheating, etc.).
225 //===---------------------------------------------------------------------===//
227 Optimize this into something reasonable:
228 x * copysign(1.0, y) * copysign(1.0, z)
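"Reasonable" here presumably means plain sign-bit arithmetic. A sketch in C, assuming
IEEE-754 doubles and ignoring signaling-NaN/exception subtleties (helper name is
hypothetical):

    #include <string.h>

    double fold_copysigns(double x, double y, double z) {
      unsigned long long xi, yi, zi;
      memcpy(&xi, &x, 8);
      memcpy(&yi, &y, 8);
      memcpy(&zi, &z, 8);
      xi ^= (yi ^ zi) & 0x8000000000000000ULL;  /* flip x's sign by sign(y)^sign(z) */
      memcpy(&x, &xi, 8);
      return x;
    }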
230 //===---------------------------------------------------------------------===//
232 Optimize copysign(x, *y) to use an integer load from y.
234 //===---------------------------------------------------------------------===//
236 The following tests perform worse with LSR:
238 lambda, siod, optimizer-eval, ackermann, hash2, nestedloop, strcat, and Treesor.
240 //===---------------------------------------------------------------------===//
242 Teach the coalescer to coalesce vregs of different register classes. e.g. FR32 /
245 //===---------------------------------------------------------------------===//
247 Adding to the list of cmp / test poor codegen issues:
249 int test(__m128 *A, __m128 *B) {
250 if (_mm_comige_ss(*A, *B))
270 Note the setae, movzbl, cmpl, cmove can be replaced with a single cmovae. There
are a number of issues. 1) We are introducing a setcc between the result of the
intrinsic call and the select. 2) The intrinsic is expected to produce an i32 value,
so an any_extend (which becomes a zero extend) is added.
275 We probably need some kind of target DAG combine hook to fix this.
277 //===---------------------------------------------------------------------===//
279 We generate significantly worse code for this than GCC:
280 http://gcc.gnu.org/bugzilla/show_bug.cgi?id=21150
281 http://gcc.gnu.org/bugzilla/attachment.cgi?id=8701
There is also one case where we do worse on PPC.
285 //===---------------------------------------------------------------------===//
295 imull $3, 4(%esp), %eax
Perhaps this is what we really should generate. Is imull three or four
cycles? Note: ICC generates this:
300 leal (%eax,%eax,2), %eax
The current instruction priority is based on pattern complexity. The former is
more "complex" because it folds a load, so the latter will not be emitted.
305 Perhaps we should use AddedComplexity to give LEA32r a higher priority? We
306 should always try to match LEA first since the LEA matching code does some
307 estimate to determine whether the match is profitable.
309 However, if we care more about code size, then imull is better. It's two bytes
310 shorter than movl + leal.
312 On a Pentium M, both variants have the same characteristics with regard
313 to throughput; however, the multiplication has a latency of four cycles, as
314 opposed to two cycles for the movl+lea variant.
316 //===---------------------------------------------------------------------===//
318 __builtin_ffs codegen is messy.
320 int ffs_(unsigned X) { return __builtin_ffs(X); }
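A sketch of the tight form this could become (x86-64, input assumed in %edi):

        bsfl    %edi, %eax              # index of lowest set bit; ZF=1 if input is 0
        movl    $-1, %ecx
        cmovel  %ecx, %eax              # zero input -> -1
        incl    %eax                    # ffs is 1-based, and 0 when no bit is set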
343 Another example of __builtin_ffs (use predsimplify to eliminate a select):
345 int foo (unsigned long j) {
347 return __builtin_ffs (j) - 1;
352 //===---------------------------------------------------------------------===//
It appears gcc places string data with linkonce linkage in
.section __TEXT,__const_coal,coalesced instead of
.section __DATA,__const_coal,coalesced.
Take a look at darwin.h; there are other Darwin assembler directives that we
do not make use of.
362 define i32 @foo(i32* %a, i32 %t) {
366 cond_true: ; preds = %cond_true, %entry
367 %x.0.0 = phi i32 [ 0, %entry ], [ %tmp9, %cond_true ] ; <i32> [#uses=3]
368 %t_addr.0.0 = phi i32 [ %t, %entry ], [ %tmp7, %cond_true ] ; <i32> [#uses=1]
369 %tmp2 = getelementptr i32* %a, i32 %x.0.0 ; <i32*> [#uses=1]
370 %tmp3 = load i32* %tmp2 ; <i32> [#uses=1]
371 %tmp5 = add i32 %t_addr.0.0, %x.0.0 ; <i32> [#uses=1]
372 %tmp7 = add i32 %tmp5, %tmp3 ; <i32> [#uses=2]
373 %tmp9 = add i32 %x.0.0, 1 ; <i32> [#uses=2]
374 %tmp = icmp sgt i32 %tmp9, 39 ; <i1> [#uses=1]
375 br i1 %tmp, label %bb12, label %cond_true
377 bb12: ; preds = %cond_true
380 is pessimized by -loop-reduce and -indvars
382 //===---------------------------------------------------------------------===//
384 u32 to float conversion improvement:
386 float uint32_2_float( unsigned u ) {
387 float fl = (int) (u & 0xffff);
388 float fh = (int) (u >> 16);
393 00000000 subl $0x04,%esp
394 00000003 movl 0x08(%esp,1),%eax
395 00000007 movl %eax,%ecx
396 00000009 shrl $0x10,%ecx
397 0000000c cvtsi2ss %ecx,%xmm0
398 00000010 andl $0x0000ffff,%eax
399 00000015 cvtsi2ss %eax,%xmm1
400 00000019 mulss 0x00000078,%xmm0
401 00000021 addss %xmm1,%xmm0
402 00000025 movss %xmm0,(%esp,1)
403 0000002a flds (%esp,1)
404 0000002d addl $0x04,%esp
407 //===---------------------------------------------------------------------===//
When using the fastcc ABI, align the stack slot of a double argument on an
8-byte boundary to improve performance.
412 //===---------------------------------------------------------------------===//
416 int f(int a, int b) {
417 if (a == 4 || a == 6)
429 //===---------------------------------------------------------------------===//
431 GCC's ix86_expand_int_movcc function (in i386.c) has a ton of interesting
432 simplifications for integer "x cmp y ? a : b". For example, instead of:
435 void f(int X, int Y) {
462 int usesbb(unsigned int a, unsigned int b) {
463 return (a < b ? -1 : 0);
477 movl $4294967295, %ecx
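For reference, the branchless form this is aiming at is roughly (a sketch, with a and b
at their usual 4(%esp)/8(%esp) slots):

        movl    8(%esp), %eax
        cmpl    %eax, 4(%esp)           # CF set iff a < b (unsigned)
        sbbl    %eax, %eax              # eax = -CF = (a < b) ? -1 : 0
        ret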
481 //===---------------------------------------------------------------------===//
483 Currently we don't have elimination of redundant stack manipulations. Consider
488 call fastcc void %test1( )
489 call fastcc void %test2( sbyte* cast (void ()* %test1 to sbyte*) )
493 declare fastcc void %test1()
495 declare fastcc void %test2(sbyte*)
498 This currently compiles to:
The add/sub pair is really unneeded here.
510 //===---------------------------------------------------------------------===//
512 Consider the expansion of:
514 define i32 @test3(i32 %X) {
515 %tmp1 = urem i32 %X, 255
519 Currently it compiles to:
522 movl $2155905153, %ecx
528 This could be "reassociated" into:
530 movl $2155905153, %eax
to avoid the copy. In fact, the existing two-address stuff would do this
except that mul isn't a commutative 2-addr instruction. I guess this has
to be done at isel time based on the #uses of the mul?
538 //===---------------------------------------------------------------------===//
540 Make sure the instruction which starts a loop does not cross a cacheline
boundary. This requires knowing the exact length of each machine instruction.
542 That is somewhat complicated, but doable. Example 256.bzip2:
544 In the new trace, the hot loop has an instruction which crosses a cacheline
545 boundary. In addition to potential cache misses, this can't help decoding as I
546 imagine there has to be some kind of complicated decoder reset and realignment
547 to grab the bytes from the next cacheline.
549 532 532 0x3cfc movb (1809(%esp, %esi), %bl <<<--- spans 2 64 byte lines
550 942 942 0x3d03 movl %dh, (1809(%esp, %esi)
551 937 937 0x3d0a incl %esi
552 3 3 0x3d0b cmpb %bl, %dl
553 27 27 0x3d0d jnz 0x000062db <main+11707>
555 //===---------------------------------------------------------------------===//
In C99 mode, the preprocessor doesn't like assembly comments like #TRUNCATE.
559 //===---------------------------------------------------------------------===//
561 This could be a single 16-bit load.
564 if ((p[0] == 1) & (p[1] == 2)) return 1;
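On little-endian x86 the two byte tests fold into one 16-bit compare against 0x0201
(a sketch, pointer assumed in %eax; the possibly unaligned word load is fine on x86):

        cmpw    $0x0201, (%eax)         # p[0] == 1 (low byte) and p[1] == 2 (high byte)
        sete    %al
        movzbl  %al, %eax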
568 //===---------------------------------------------------------------------===//
570 We should inline lrintf and probably other libc functions.
572 //===---------------------------------------------------------------------===//
574 Start using the flags more. For example, compile:
576 int add_zf(int *x, int y, int a, int b) {
600 int add_zf(int *x, int y, int a, int b) {
624 //===---------------------------------------------------------------------===//
626 These two functions have identical effects:
628 unsigned int f(unsigned int i, unsigned int n) {++i; if (i == n) ++i; return i;}
629 unsigned int f2(unsigned int i, unsigned int n) {++i; i += i == n; return i;}
631 We currently compile them to:
639 jne LBB1_2 #UnifiedReturnBlock
643 LBB1_2: #UnifiedReturnBlock
653 leal 1(%ecx,%eax), %eax
656 both of which are inferior to GCC's:
674 //===---------------------------------------------------------------------===//
682 is currently compiled to:
693 It would be better to produce:
702 This can be applied to any no-return function call that takes no arguments etc.
703 Alternatively, the stack save/restore logic could be shrink-wrapped, producing
714 Both are useful in different situations. Finally, it could be shrink-wrapped
715 and tail called, like this:
722 pop %eax # realign stack.
725 Though this probably isn't worth it.
727 //===---------------------------------------------------------------------===//
729 We need to teach the codegen to convert two-address INC instructions to LEA
730 when the flags are dead (likewise dec). For example, on X86-64, compile:
732 int foo(int A, int B) {
751 ;; X's live range extends beyond the shift, so the register allocator
752 ;; cannot coalesce it with Y. Because of this, a copy needs to be
753 ;; emitted before the shift to save the register value before it is
754 ;; clobbered. However, this copy is not needed if the register
755 ;; allocator turns the shift into an LEA. This also occurs for ADD.
757 ; Check that the shift gets turned into an LEA.
758 ; RUN: llvm-as < %s | llc -march=x86 -x86-asm-syntax=intel | \
759 ; RUN: not grep {mov E.X, E.X}
761 @G = external global i32 ; <i32*> [#uses=3]
763 define i32 @test1(i32 %X, i32 %Y) {
764 %Z = add i32 %X, %Y ; <i32> [#uses=1]
765 volatile store i32 %Y, i32* @G
766 volatile store i32 %Z, i32* @G
770 define i32 @test2(i32 %X) {
771 %Z = add i32 %X, 1 ; <i32> [#uses=1]
772 volatile store i32 %Z, i32* @G
776 //===---------------------------------------------------------------------===//
778 Sometimes it is better to codegen subtractions from a constant (e.g. 7-x) with
779 a neg instead of a sub instruction. Consider:
781 int test(char X) { return 7-X; }
783 we currently produce:
790 We would use one fewer register if codegen'd as:
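(a sketch; the argument load is illustrative)

        movsbl  4(%esp), %eax
        negl    %eax
        addl    $7, %eax                # 7-X == -X + 7, all in one register
        ret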
797 Note that this isn't beneficial if the load can be folded into the sub. In
798 this case, we want a sub:
800 int test(int X) { return 7-X; }
806 //===---------------------------------------------------------------------===//
808 Leaf functions that require one 4-byte spill slot have a prolog like this:
814 and an epilog like this:
819 It would be smaller, and potentially faster, to push eax on entry and to
820 pop into a dummy register instead of using addl/subl of esp. Just don't pop
821 into any return registers :)
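A sketch of the idea for a leaf function that needs one slot (the dummy register
choice is illustrative):

_leaf:
        pushl   %eax                    # prolog: create the 4-byte slot; value is irrelevant
        ...                             # body uses (%esp) as the spill slot
        popl    %ecx                    # epilog: discard the slot into a dead, non-return register
        ret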
823 //===---------------------------------------------------------------------===//
825 The X86 backend should fold (branch (or (setcc, setcc))) into multiple
826 branches. We generate really poor code for:
828 double testf(double a) {
829 return a == 0.0 ? 0.0 : (a > 0.0 ? 1.0 : -1.0);
832 For example, the entry BB is:
837 movsd 24(%esp), %xmm1
842 jne LBB1_5 # UnifiedReturnBlock
846 it would be better to replace the last four instructions with:
852 We also codegen the inner ?: into a diamond:
854 cvtss2sd LCPI1_0(%rip), %xmm2
855 cvtss2sd LCPI1_1(%rip), %xmm3
857 ja LBB1_3 # cond_true
864 We should sink the load into xmm3 into the LBB1_2 block. This should
865 be pretty easy, and will nuke all the copies.
867 //===---------------------------------------------------------------------===//
871 inline std::pair<unsigned, bool> full_add(unsigned a, unsigned b)
872 { return std::make_pair(a + b, a + b < a); }
873 bool no_overflow(unsigned a, unsigned b)
874 { return !full_add(a, b).second; }
884 FIXME: That code looks wrong; bool return is normally defined as zext.
896 //===---------------------------------------------------------------------===//
898 Re-materialize MOV32r0 etc. with xor instead of changing them to moves if the
899 condition register is dead. xor reg reg is shorter than mov reg, #0.
901 //===---------------------------------------------------------------------===//
903 We aren't matching RMW instructions aggressively
904 enough. Here's a reduced testcase (more in PR1160):
906 define void @test(i32* %huge_ptr, i32* %target_ptr) {
907 %A = load i32* %huge_ptr ; <i32> [#uses=1]
908 %B = load i32* %target_ptr ; <i32> [#uses=1]
909 %C = or i32 %A, %B ; <i32> [#uses=1]
910 store i32 %C, i32* %target_ptr
914 $ llvm-as < t.ll | llc -march=x86-64
922 That should be something like:
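That is, roughly (assuming the x86-64 argument registers, %rdi = %huge_ptr and
%rsi = %target_ptr):

        movl    (%rdi), %eax
        orl     %eax, (%rsi)            # read-modify-write the target directly
        ret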
929 //===---------------------------------------------------------------------===//
933 bb114.preheader: ; preds = %cond_next94
934 %tmp231232 = sext i16 %tmp62 to i32 ; <i32> [#uses=1]
935 %tmp233 = sub i32 32, %tmp231232 ; <i32> [#uses=1]
936 %tmp245246 = sext i16 %tmp65 to i32 ; <i32> [#uses=1]
937 %tmp252253 = sext i16 %tmp68 to i32 ; <i32> [#uses=1]
938 %tmp254 = sub i32 32, %tmp252253 ; <i32> [#uses=1]
939 %tmp553554 = bitcast i16* %tmp37 to i8* ; <i8*> [#uses=2]
940 %tmp583584 = sext i16 %tmp98 to i32 ; <i32> [#uses=1]
941 %tmp585 = sub i32 32, %tmp583584 ; <i32> [#uses=1]
942 %tmp614615 = sext i16 %tmp101 to i32 ; <i32> [#uses=1]
943 %tmp621622 = sext i16 %tmp104 to i32 ; <i32> [#uses=1]
944 %tmp623 = sub i32 32, %tmp621622 ; <i32> [#uses=1]
949 LBB3_5: # bb114.preheader
950 movswl -68(%ebp), %eax
954 movswl -52(%ebp), %eax
957 movswl -70(%ebp), %eax
960 movswl -50(%ebp), %eax
963 movswl -42(%ebp), %eax
965 movswl -66(%ebp), %eax
969 This appears to be bad because the RA is not folding the store to the stack
970 slot into the movl. The above instructions could be:
975 This seems like a cross between remat and spill folding.
This has redundant subtractions of %eax from a stack slot. However, %ecx doesn't
change, so we could simply subtract %eax from %ecx first and then use %ecx (or
vice versa).
981 //===---------------------------------------------------------------------===//
985 %tmp659 = icmp slt i16 %tmp654, 0 ; <i1> [#uses=1]
986 br i1 %tmp659, label %cond_true662, label %cond_next715
992 jns LBB4_109 # cond_next715
994 Shark tells us that using %cx in the testw instruction is sub-optimal. It
995 suggests using the 32-bit register (which is what ICC uses).
997 //===---------------------------------------------------------------------===//
1001 void compare (long long foo) {
1002 if (foo < 4294967297LL)
1018 jne .LBB1_2 # UnifiedReturnBlock
1021 .LBB1_2: # UnifiedReturnBlock
1025 (also really horrible code on ppc). This is due to the expand code for 64-bit
1026 compares. GCC produces multiple branches, which is much nicer:
1047 //===---------------------------------------------------------------------===//
1049 Tail call optimization improvements: Tail call optimization currently
1050 pushes all arguments on the top of the stack (their normal place for
non-tail call optimized calls) that source from the caller's arguments
or that source from a virtual register (also possibly sourcing from the
caller's arguments).
1054 This is done to prevent overwriting of parameters (see example
1055 below) that might be used later.
1059 int callee(int32, int64);
1060 int caller(int32 arg1, int32 arg2) {
1061 int64 local = arg2 * 2;
1062 return callee(arg2, (int64)local);
1065 [arg1] [!arg2 no longer valid since we moved local onto it]
Moving arg1 onto the stack slot of the callee function would overwrite
arg2 of the caller.
1072 Possible optimizations:
1075 - Analyse the actual parameters of the callee to see which would
1076 overwrite a caller parameter which is used by the callee and only
1077 push them onto the top of the stack.
1079 int callee (int32 arg1, int32 arg2);
1080 int caller (int32 arg1, int32 arg2) {
1081 return callee(arg1,arg2);
1084 Here we don't need to write any variables to the top of the stack
1085 since they don't overwrite each other.
1087 int callee (int32 arg1, int32 arg2);
1088 int caller (int32 arg1, int32 arg2) {
1089 return callee(arg2,arg1);
Here we need to push the arguments because they overwrite each other.
1095 //===---------------------------------------------------------------------===//
1100 unsigned long int z = 0;
1111 gcc compiles this to:
1137 jge LBB1_4 # cond_true
1140 addl $4294950912, %ecx
1150 1. LSR should rewrite the first cmp with induction variable %ecx.
1151 2. DAG combiner should fold
1157 //===---------------------------------------------------------------------===//
1159 define i64 @test(double %X) {
1160 %Y = fptosi double %X to i64
1168 movsd 24(%esp), %xmm0
1169 movsd %xmm0, 8(%esp)
1178 This should just fldl directly from the input stack slot.
1180 //===---------------------------------------------------------------------===//
1183 int foo (int x) { return (x & 65535) | 255; }
1185 Should compile into:
        movzwl  4(%esp), %eax
        orl     $255, %eax
        ret
1199 //===---------------------------------------------------------------------===//
1201 We're codegen'ing multiply of long longs inefficiently:
1203 unsigned long long LLM(unsigned long long arg1, unsigned long long arg2) {
1207 We compile to (fomit-frame-pointer):
1215 imull 12(%esp), %esi
1217 imull 20(%esp), %ecx
1223 This looks like a scheduling deficiency and lack of remat of the load from
1224 the argument area. ICC apparently produces:
1227 imull 12(%esp), %ecx
1236 Note that it remat'd loads from 4(esp) and 12(esp). See this GCC PR:
1237 http://gcc.gnu.org/bugzilla/show_bug.cgi?id=17236
1239 //===---------------------------------------------------------------------===//
1241 We can fold a store into "zeroing a reg". Instead of:
1244 movl %eax, 124(%esp)
1250 if the flags of the xor are dead.
1252 Likewise, we isel "x<<1" into "add reg,reg". If reg is spilled, this should
1253 be folded into: shl [mem], 1
1255 //===---------------------------------------------------------------------===//
1257 This testcase misses a read/modify/write opportunity (from PR1425):
1259 void vertical_decompose97iH1(int *b0, int *b1, int *b2, int width){
1261 for(i=0; i<width; i++)
1262 b1[i] += (1*(b0[i] + b2[i])+0)>>0;
1265 We compile it down to:
1268 movl (%esi,%edi,4), %ebx
1269 addl (%ecx,%edi,4), %ebx
1270 addl (%edx,%edi,4), %ebx
1271 movl %ebx, (%ecx,%edi,4)
1276 the inner loop should add to the memory location (%ecx,%edi,4), saving
1277 a mov. Something like:
1279 movl (%esi,%edi,4), %ebx
1280 addl (%edx,%edi,4), %ebx
1281 addl %ebx, (%ecx,%edi,4)
1283 Here is another interesting example:
1285 void vertical_compose97iH1(int *b0, int *b1, int *b2, int width){
1287 for(i=0; i<width; i++)
1288 b1[i] -= (1*(b0[i] + b2[i])+0)>>0;
1291 We miss the r/m/w opportunity here by using 2 subs instead of an add+sub[mem]:
1294 movl (%ecx,%edi,4), %ebx
1295 subl (%esi,%edi,4), %ebx
1296 subl (%edx,%edi,4), %ebx
1297 movl %ebx, (%ecx,%edi,4)
1302 Additionally, LSR should rewrite the exit condition of these loops to use
a stride-4 IV, which would allow all the scales in the loop to go away.
This would result in smaller code and more efficient microops.
1306 //===---------------------------------------------------------------------===//
In SSE mode, we turn abs and neg into a load from the constant pool plus an xor
or an and instruction, for example:
1311 xorpd LCPI1_0, %xmm2
1313 However, if xmm2 gets spilled, we end up with really ugly code like this:
1316 xorpd LCPI1_0, %xmm0
1319 Since we 'know' that this is a 'neg', we can actually "fold" the spill into
1320 the neg/abs instruction, turning it into an *integer* operation, like this:
1322 xorl 2147483648, [mem+4] ## 2147483648 = (1 << 31)
1324 you could also use xorb, but xorl is less likely to lead to a partial register
1325 stall. Here is a contrived testcase:
1328 void test(double *P) {
1338 //===---------------------------------------------------------------------===//
Handling llvm.memory.barrier on pre-SSE2 CPUs should generate:
1343 lock ; mov %esp, %esp
1345 //===---------------------------------------------------------------------===//
The code generated on x86 for checking for signed overflow on a multiply done
the obvious way is much longer than it needs to be.
1350 int x(int a, int b) {
1351 long long prod = (long long)a*b;
1352 return prod > 0x7FFFFFFF || prod < (-0x7FFFFFFF-1);
1355 See PR2053 for more details.
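For reference, the one-operand imull already computes exactly this condition in
OF/CF (a sketch, with the two arguments in %eax and %ecx):

        imull   %ecx                    # edx:eax = (long long)a * b;
                                        # OF/CF set iff the product doesn't fit in 32 bits
        seto    %al
        movzbl  %al, %eax
        ret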
1357 //===---------------------------------------------------------------------===//
We should investigate using cdq/cltd (effect: edx = sar eax, 31)
1360 more aggressively; it should cost the same as a move+shift on any modern
1361 processor, but it's a lot shorter. Downside is that it puts more
1362 pressure on register allocation because it has fixed operands.
1365 int abs(int x) {return x < 0 ? -x : x;}
1367 gcc compiles this to the following when using march/mtune=pentium2/3/4/m/etc.:
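(a sketch of the cltd-based sequence; exact frame details omitted)

_abs:
        movl    4(%esp), %eax
        cltd                            # edx = eax >> 31: all zeros or all ones
        xorl    %edx, %eax
        subl    %edx, %eax              # (x ^ mask) - mask == |x|
        ret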
1375 //===---------------------------------------------------------------------===//
1378 int test(unsigned long a, unsigned long b) { return -(a < b); }
1380 We currently compile this to:
1382 define i32 @test(i32 %a, i32 %b) nounwind {
1383 %tmp3 = icmp ult i32 %a, %b ; <i1> [#uses=1]
1384 %tmp34 = zext i1 %tmp3 to i32 ; <i32> [#uses=1]
1385 %tmp5 = sub i32 0, %tmp34 ; <i32> [#uses=1]
1399 Several deficiencies here. First, we should instcombine zext+neg into sext:
1401 define i32 @test2(i32 %a, i32 %b) nounwind {
1402 %tmp3 = icmp ult i32 %a, %b ; <i1> [#uses=1]
1403 %tmp34 = sext i1 %tmp3 to i32 ; <i32> [#uses=1]
1407 However, before we can do that, we have to fix the bad codegen that we get for
1419 This code should be at least as good as the code above. Once this is fixed, we
1420 can optimize this specific case even more to:
1427 //===---------------------------------------------------------------------===//
1429 Take the following code (from
1430 http://gcc.gnu.org/bugzilla/show_bug.cgi?id=16541):
1432 extern unsigned char first_one[65536];
1433 int FirstOnet(unsigned long long arg1)
1436 return (first_one[arg1 >> 48]);
1441 The following code is currently generated:
1446 jb .LBB1_2 # UnifiedReturnBlock
1449 movzbl first_one(%eax), %eax
1451 .LBB1_2: # UnifiedReturnBlock
1455 There are a few possible improvements here:
1456 1. We should be able to eliminate the dead load into %ecx
1457 2. We could change the "movl 8(%esp), %eax" into
1458 "movzwl 10(%esp), %eax"; this lets us change the cmpl
1459 into a testl, which is shorter, and eliminate the shift.
We could also in theory eliminate the branch by using a conditional
for the address of the load, but that seems unlikely to be worthwhile
in general.
1465 //===---------------------------------------------------------------------===//
1467 We compile this function:
1469 define i32 @foo(i32 %a, i32 %b, i32 %c, i8 zeroext %d) nounwind {
1471 %tmp2 = icmp eq i8 %d, 0 ; <i1> [#uses=1]
1472 br i1 %tmp2, label %bb7, label %bb
1474 bb: ; preds = %entry
1475 %tmp6 = add i32 %b, %a ; <i32> [#uses=1]
1478 bb7: ; preds = %entry
1479 %tmp10 = sub i32 %a, %c ; <i32> [#uses=1]
1499 The coalescer could coalesce "edx" with "eax" to avoid the movl in LBB1_2
1500 if it commuted the addl in LBB1_1.
1502 //===---------------------------------------------------------------------===//
1509 cvtss2sd LCPI1_0, %xmm1
1511 movsd 176(%esp), %xmm2
1516 mulsd LCPI1_23, %xmm4
1517 addsd LCPI1_24, %xmm4
1519 addsd LCPI1_25, %xmm4
1521 addsd LCPI1_26, %xmm4
1523 addsd LCPI1_27, %xmm4
1525 addsd LCPI1_28, %xmm4
1529 movsd 152(%esp), %xmm1
1531 movsd %xmm1, 152(%esp)
1535 LBB1_16: # bb358.loopexit
1536 movsd 152(%esp), %xmm0
1538 addsd LCPI1_22, %xmm0
1539 movsd %xmm0, 152(%esp)
Rather than spilling the result of the last addsd in the loop, we should have
inserted a copy to split the interval (one for the duration of the loop, one
1543 extending to the fall through). The register pressure in the loop isn't high
1544 enough to warrant the spill.
1546 Also check why xmm7 is not used at all in the function.
1548 //===---------------------------------------------------------------------===//
1550 Legalize loses track of the fact that bools are always zero extended when in
1551 memory. This causes us to compile abort_gzip (from 164.gzip) from:
1553 target datalayout = "e-p:32:32:32-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:32:64-f32:32:32-f64:32:64-v64:64:64-v128:128:128-a0:0:64-f80:128:128"
1554 target triple = "i386-apple-darwin8"
1555 @in_exit.4870.b = internal global i1 false ; <i1*> [#uses=2]
1556 define fastcc void @abort_gzip() noreturn nounwind {
1558 %tmp.b.i = load i1* @in_exit.4870.b ; <i1> [#uses=1]
1559 br i1 %tmp.b.i, label %bb.i, label %bb4.i
1560 bb.i: ; preds = %entry
1561 tail call void @exit( i32 1 ) noreturn nounwind
1563 bb4.i: ; preds = %entry
1564 store i1 true, i1* @in_exit.4870.b
1565 tail call void @exit( i32 1 ) noreturn nounwind
1568 declare void @exit(i32) noreturn nounwind
1574 movb _in_exit.4870.b, %al
1581 //===---------------------------------------------------------------------===//
1585 int test(int x, int y) {
1597 it would be better to codegen as: x+~y (notl+addl)
1599 //===---------------------------------------------------------------------===//
1603 int foo(const char *str,...)
1605 __builtin_va_list a; int x;
1606 __builtin_va_start(a,str); x = __builtin_va_arg(a,int); __builtin_va_end(a);
1610 gets compiled into this on x86-64:
1612 movaps %xmm7, 160(%rsp)
1613 movaps %xmm6, 144(%rsp)
1614 movaps %xmm5, 128(%rsp)
1615 movaps %xmm4, 112(%rsp)
1616 movaps %xmm3, 96(%rsp)
1617 movaps %xmm2, 80(%rsp)
1618 movaps %xmm1, 64(%rsp)
1619 movaps %xmm0, 48(%rsp)
1626 movq %rax, 192(%rsp)
1627 leaq 208(%rsp), %rax
1628 movq %rax, 184(%rsp)
1631 movl 176(%rsp), %eax
1635 movq 184(%rsp), %rcx
1637 movq %rax, 184(%rsp)
1645 addq 192(%rsp), %rcx
1646 movl %eax, 176(%rsp)
1652 leaq 104(%rsp), %rax
1653 movq %rsi, -80(%rsp)
1655 movq %rax, -112(%rsp)
1656 leaq -88(%rsp), %rax
1657 movq %rax, -104(%rsp)
1661 movq -112(%rsp), %rdx
1669 addq -104(%rsp), %rdx
1671 movl %eax, -120(%rsp)
1676 and it gets compiled into this on x86:
1696 //===---------------------------------------------------------------------===//
1698 Teach tblgen not to check bitconvert source type in some cases. This allows us
1699 to consolidate the following patterns in X86InstrMMX.td:
1701 def : Pat<(v2i32 (bitconvert (i64 (vector_extract (v2i64 VR128:$src),
1703 (v2i32 (MMX_MOVDQ2Qrr VR128:$src))>;
1704 def : Pat<(v4i16 (bitconvert (i64 (vector_extract (v2i64 VR128:$src),
1706 (v4i16 (MMX_MOVDQ2Qrr VR128:$src))>;
1707 def : Pat<(v8i8 (bitconvert (i64 (vector_extract (v2i64 VR128:$src),
1709 (v8i8 (MMX_MOVDQ2Qrr VR128:$src))>;
1711 There are other cases in various td files.
1713 //===---------------------------------------------------------------------===//
1715 Take something like the following on x86-32:
1716 unsigned a(unsigned long long x, unsigned y) {return x % y;}
1718 We currently generate a libcall, but we really shouldn't: the expansion is
shorter and likely faster than the libcall. The expected code is something
like the following:
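(a sketch of the schoolbook expansion; the stack offsets assume the usual i386 cdecl
layout with x at 4(%esp)/8(%esp) and y at 12(%esp))

        movl    8(%esp), %eax           # high word of x
        xorl    %edx, %edx
        divl    12(%esp)                # edx = (high word of x) % y
        movl    4(%esp), %eax           # low word of x
        divl    12(%esp)                # cannot fault: edx < y, so the quotient fits
        movl    %edx, %eax              # x % y
        ret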
1731 A similar code sequence works for division.
1733 //===---------------------------------------------------------------------===//
These should compile to the same code, but the latter codegens to useless
1736 instructions on X86. This may be a trivial dag combine (GCC PR7061):
1738 struct s1 { unsigned char a, b; };
1739 unsigned long f1(struct s1 x) {
1742 struct s2 { unsigned a: 8, b: 8; };
1743 unsigned long f2(struct s2 x) {
1747 //===---------------------------------------------------------------------===//
1749 We currently compile this:
1751 define i32 @func1(i32 %v1, i32 %v2) nounwind {
1753 %t = call {i32, i1} @llvm.sadd.with.overflow.i32(i32 %v1, i32 %v2)
1754 %sum = extractvalue {i32, i1} %t, 0
1755 %obit = extractvalue {i32, i1} %t, 1
1756 br i1 %obit, label %overflow, label %normal
1760 call void @llvm.trap()
1763 declare {i32, i1} @llvm.sadd.with.overflow.i32(i32, i32)
1764 declare void @llvm.trap()
1771 jo LBB1_2 ## overflow
1777 it would be nice to produce "into" someday.
1779 //===---------------------------------------------------------------------===//
1783 void vec_mpys1(int y[], const int x[], int scaler) {
1785 for (i = 0; i < 150; i++)
1786 y[i] += (((long long)scaler * (long long)x[i]) >> 31);
1789 Compiles to this loop with GCC 3.x:
1794 shrdl $31, %edx, %eax
1795 addl %eax, (%esi,%ecx,4)
1800 llvm-gcc compiles it to the much uglier:
1804 movl (%eax,%edi,4), %ebx
1813 shldl $1, %eax, %ebx
1815 addl %ebx, (%eax,%edi,4)
1820 //===---------------------------------------------------------------------===//
1822 Test instructions can be eliminated by using EFLAGS values from arithmetic
1823 instructions. This is currently not done for mul, and, or, xor, neg, shl,
1824 sra, srl, shld, shrd, atomic ops, and others. It is also currently not done
for read-modify-write instructions. It is also currently not done if the
1826 OF or CF flags are needed.
1828 The shift operators have the complication that when the shift count is
1829 zero, EFLAGS is not set, so they can only subsume a test instruction if
1830 the shift count is known to be non-zero. Also, using the EFLAGS value
1831 from a shift is apparently very slow on some x86 implementations.
1833 In read-modify-write instructions, the root node in the isel match is
1834 the store, and isel has no way for the use of the EFLAGS result of the
1835 arithmetic to be remapped to the new node.
Add and subtract instructions set OF on signed overflow and CF on unsigned
1838 overflow, while test instructions always clear OF and CF. In order to
1839 replace a test with an add or subtract in a situation where OF or CF is
1840 needed, codegen must be able to prove that the operation cannot see
1841 signed or unsigned overflow, respectively.
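A minimal example of the payoff (registers illustrative):

        # for:  if ((a & b) != 0) ...
        andl    %esi, %edi              # ZF already reflects the and result
        jne     LBB0_then               # no separate "testl %edi, %edi" needed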
1843 //===---------------------------------------------------------------------===//
1845 memcpy/memmove do not lower to SSE copies when possible. A silly example is:
1846 define <16 x float> @foo(<16 x float> %A) nounwind {
1847 %tmp = alloca <16 x float>, align 16
1848 %tmp2 = alloca <16 x float>, align 16
1849 store <16 x float> %A, <16 x float>* %tmp
1850 %s = bitcast <16 x float>* %tmp to i8*
1851 %s2 = bitcast <16 x float>* %tmp2 to i8*
1852 call void @llvm.memcpy.i64(i8* %s, i8* %s2, i64 64, i32 16)
1853 %R = load <16 x float>* %tmp2
1857 declare void @llvm.memcpy.i64(i8* nocapture, i8* nocapture, i64, i32) nounwind
1863 movaps %xmm3, 112(%esp)
1864 movaps %xmm2, 96(%esp)
1865 movaps %xmm1, 80(%esp)
1866 movaps %xmm0, 64(%esp)
1868 movl %eax, 124(%esp)
1870 movl %eax, 120(%esp)
1872 <many many more 32-bit copies>
1873 movaps (%esp), %xmm0
1874 movaps 16(%esp), %xmm1
1875 movaps 32(%esp), %xmm2
1876 movaps 48(%esp), %xmm3
1880 On Nehalem, it may even be cheaper to just use movups when unaligned than to
1881 fall back to lower-granularity chunks.
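A sketch of the copy this memcpy should become (registers illustrative, both buffers
assumed 16-byte aligned here):

        movaps  (%ecx), %xmm0
        movaps  %xmm0, (%eax)
        movaps  16(%ecx), %xmm0
        movaps  %xmm0, 16(%eax)
        # ...two more such pairs for the remaining 32 bytes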
1883 //===---------------------------------------------------------------------===//