1 //===---------------------------------------------------------------------===//
2 // Random ideas for the X86 backend.
3 //===---------------------------------------------------------------------===//
5 We should add support for the "movbe" instruction, which does a byte-swapping
copy (3-addr bswap + memory support?). This is available on Atom processors.
8 //===---------------------------------------------------------------------===//
10 CodeGen/X86/lea-3.ll:test3 should be a single LEA, not a shift/move. The X86
11 backend knows how to three-addressify this shift, but it appears the register
12 allocator isn't even asking it to do so in this case. We should investigate
why this isn't happening; it could have a significant impact on other important
cases for X86 as well.
16 //===---------------------------------------------------------------------===//
18 This should be one DIV/IDIV instruction, not a libcall:
20 unsigned test(unsigned long long X, unsigned Y) {
24 This can be done trivially with a custom legalizer. What about overflow
25 though? http://gcc.gnu.org/bugzilla/show_bug.cgi?id=14224
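For reference, a rough sketch of the single-divide sequence we would like,
assuming the function simply returns X/Y and the usual i386 calling convention
(register and stack-offset choices are illustrative):

        movl    4(%esp), %eax           # low half of X
        movl    8(%esp), %edx           # high half of X
        divl    12(%esp)                # EDX:EAX / Y, quotient -> EAX
        ret

Note that divl faults if the quotient does not fit in 32 bits, which is exactly
the overflow question above.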
27 //===---------------------------------------------------------------------===//
29 Improvements to the multiply -> shift/add algorithm:
30 http://gcc.gnu.org/ml/gcc-patches/2004-08/msg01590.html
32 //===---------------------------------------------------------------------===//
34 Improve code like this (occurs fairly frequently, e.g. in LLVM):
35 long long foo(int x) { return 1LL << x; }
37 http://gcc.gnu.org/ml/gcc-patches/2004-09/msg01109.html
38 http://gcc.gnu.org/ml/gcc-patches/2004-09/msg01128.html
39 http://gcc.gnu.org/ml/gcc-patches/2004-09/msg01136.html
41 Another useful one would be ~0ULL >> X and ~0ULL << X.
43 One better solution for 1LL << x is:
52 But that requires good 8-bit subreg support.
54 Also, this might be better. It's an extra shift, but it's one instruction
55 shorter, and doesn't stress 8-bit subreg support.
56 (From http://gcc.gnu.org/ml/gcc-patches/2004-09/msg01148.html,
57 but without the unnecessary and.)
65 64-bit shifts (in general) expand to really bad code. Instead of using
66 cmovs, we should expand to a conditional branch like GCC produces.
68 //===---------------------------------------------------------------------===//
71 _Bool f(_Bool a) { return a!=1; }
78 (Although note that this isn't a legal way to express the code that llvm-gcc
79 currently generates for that function.)
81 //===---------------------------------------------------------------------===//
1. Dynamic programming based approach, when compile time is not an issue.
87 2. Code duplication (addressing mode) during isel.
88 3. Other ideas from "Register-Sensitive Selection, Duplication, and
89 Sequencing of Instructions".
90 4. Scheduling for reduced register pressure. E.g. "Minimum Register
91 Instruction Sequence Problem: Revisiting Optimal Code Generation for DAGs"
92 and other related papers.
93 http://citeseer.ist.psu.edu/govindarajan01minimum.html
95 //===---------------------------------------------------------------------===//
97 Should we promote i16 to i32 to avoid partial register update stalls?
99 //===---------------------------------------------------------------------===//
Leave any_extend as a pseudo instruction and hint to the register
allocator. Delay codegen until post-register allocation.
Note: any_extend is now turned into an INSERT_SUBREG. We still need to teach
104 the coalescer how to deal with it though.
106 //===---------------------------------------------------------------------===//
It appears that icc uses push for parameter passing. We should investigate.
110 //===---------------------------------------------------------------------===//
112 Only use inc/neg/not instructions on processors where they are faster than
add/sub/xor. They are slower on the P4 because they update only some of the
EFLAGS bits, which creates a partial flag-register dependency.
116 //===---------------------------------------------------------------------===//
118 The instruction selector sometimes misses folding a load into a compare. The
119 pattern is written as (cmp reg, (load p)). Because the compare isn't
120 commutative, it is not matched with the load on both sides. The dag combiner
should be made smart enough to canonicalize the load into the RHS of a compare
122 when it can invert the result of the compare for free.
124 //===---------------------------------------------------------------------===//
126 How about intrinsics? An example is:
127 *res = _mm_mulhi_epu16(*A, _mm_mul_epu32(*B, *C));
130 pmuludq (%eax), %xmm0
The transformation probably requires an X86-specific pass or a target-specific
DAG combiner hook.
138 //===---------------------------------------------------------------------===//
140 In many cases, LLVM generates code like this:
149 on some processors (which ones?), it is more efficient to do this:
158 Doing this correctly is tricky though, as the xor clobbers the flags.
160 //===---------------------------------------------------------------------===//
162 We should generate bts/btr/etc instructions on targets where they are cheap or
163 when codesize is important. e.g., for:
void setbit(int *target, int bit) {
  *target |= (1 << bit);
}

void clearbit(int *target, int bit) {
  *target &= ~(1 << bit);
}
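A sketch of the kind of code we would like for these on i386 (register choices
are illustrative; bts/btr here operate on the dword addressed by target):

        movl    4(%esp), %eax           # target
        movl    8(%esp), %ecx           # bit
        btsl    %ecx, (%eax)            # or btrl for clearbit
        ret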
172 //===---------------------------------------------------------------------===//
174 Instead of the following for memset char*, 1, 10:
176 movl $16843009, 4(%edx)
177 movl $16843009, (%edx)
180 It might be better to generate
187 when we can spare a register. It reduces code size.
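Roughly, the idea is to materialize the fill pattern in a register once; a
sketch, assuming the same 10-byte memset with the destination in %edx (the
exact store sequence depends on length and alignment):

        movl    $16843009, %eax         # 0x01010101
        movl    %eax, (%edx)
        movl    %eax, 4(%edx)
        movw    %ax, 8(%edx)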
189 //===---------------------------------------------------------------------===//
191 Evaluate what the best way to codegen sdiv X, (2^C) is. For X/8, we currently
194 define i32 @test1(i32 %X) {
208 GCC knows several different ways to codegen it, one of which is this:
218 which is probably slower, but it's interesting at least :)
220 //===---------------------------------------------------------------------===//
We are currently lowering large (1MB+) memmove/memcpy to rep/stosl and rep/movsl.
We should leave these as libcalls for everything over a much lower threshold,
224 since libc is hand tuned for medium and large mem ops (avoiding RFO for large
225 stores, TLB preheating, etc)
227 //===---------------------------------------------------------------------===//
229 Optimize this into something reasonable:
230 x * copysign(1.0, y) * copysign(1.0, z)
232 //===---------------------------------------------------------------------===//
234 Optimize copysign(x, *y) to use an integer load from y.
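At the source level the idea is roughly this (a sketch; the function name is
made up, and the memcpys just express the bit-level operations we want the
backend to see):

#include <string.h>

double copysign_mem(double x, const double *y) {
  unsigned long long xi, yi;
  memcpy(&xi, &x, sizeof xi);
  memcpy(&yi, y, sizeof yi);    /* should become a plain integer load from y */
  xi = (xi & 0x7fffffffffffffffULL) | (yi & 0x8000000000000000ULL);
  memcpy(&x, &xi, sizeof x);
  return x;
}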
236 //===---------------------------------------------------------------------===//
238 The following tests perform worse with LSR:
240 lambda, siod, optimizer-eval, ackermann, hash2, nestedloop, strcat, and Treesor.
242 //===---------------------------------------------------------------------===//
244 Teach the coalescer to coalesce vregs of different register classes. e.g. FR32 /
247 //===---------------------------------------------------------------------===//
249 Adding to the list of cmp / test poor codegen issues:
251 int test(__m128 *A, __m128 *B) {
252 if (_mm_comige_ss(*A, *B))
272 Note the setae, movzbl, cmpl, cmove can be replaced with a single cmovae. There
273 are a number of issues. 1) We are introducing a setcc between the result of the
intrinsic call and the select. 2) The intrinsic is expected to produce an i32 value
so an any_extend (which becomes a zero extend) is added.
277 We probably need some kind of target DAG combine hook to fix this.
279 //===---------------------------------------------------------------------===//
281 We generate significantly worse code for this than GCC:
282 http://gcc.gnu.org/bugzilla/show_bug.cgi?id=21150
283 http://gcc.gnu.org/bugzilla/attachment.cgi?id=8701
There is also one case where we do worse on PPC.
287 //===---------------------------------------------------------------------===//
297 imull $3, 4(%esp), %eax
Perhaps this is what we should really generate? Is imull three or four
300 cycles? Note: ICC generates this:
302 leal (%eax,%eax,2), %eax
304 The current instruction priority is based on pattern complexity. The former is
more "complex" because it folds a load, so the latter will not be emitted.
307 Perhaps we should use AddedComplexity to give LEA32r a higher priority? We
308 should always try to match LEA first since the LEA matching code does some
309 estimate to determine whether the match is profitable.
311 However, if we care more about code size, then imull is better. It's two bytes
312 shorter than movl + leal.
314 On a Pentium M, both variants have the same characteristics with regard
315 to throughput; however, the multiplication has a latency of four cycles, as
316 opposed to two cycles for the movl+lea variant.
318 //===---------------------------------------------------------------------===//
320 __builtin_ffs codegen is messy.
322 int ffs_(unsigned X) { return __builtin_ffs(X); }
345 Another example of __builtin_ffs (use predsimplify to eliminate a select):
347 int foo (unsigned long j) {
349 return __builtin_ffs (j) - 1;
354 //===---------------------------------------------------------------------===//
It appears gcc places string data with linkonce linkage in
357 .section __TEXT,__const_coal,coalesced instead of
358 .section __DATA,__const_coal,coalesced.
Take a look at darwin.h; there are other Darwin assembler directives that we
362 //===---------------------------------------------------------------------===//
364 define i32 @foo(i32* %a, i32 %t) {
368 cond_true: ; preds = %cond_true, %entry
369 %x.0.0 = phi i32 [ 0, %entry ], [ %tmp9, %cond_true ] ; <i32> [#uses=3]
370 %t_addr.0.0 = phi i32 [ %t, %entry ], [ %tmp7, %cond_true ] ; <i32> [#uses=1]
371 %tmp2 = getelementptr i32* %a, i32 %x.0.0 ; <i32*> [#uses=1]
372 %tmp3 = load i32* %tmp2 ; <i32> [#uses=1]
373 %tmp5 = add i32 %t_addr.0.0, %x.0.0 ; <i32> [#uses=1]
374 %tmp7 = add i32 %tmp5, %tmp3 ; <i32> [#uses=2]
375 %tmp9 = add i32 %x.0.0, 1 ; <i32> [#uses=2]
376 %tmp = icmp sgt i32 %tmp9, 39 ; <i1> [#uses=1]
377 br i1 %tmp, label %bb12, label %cond_true
379 bb12: ; preds = %cond_true
382 is pessimized by -loop-reduce and -indvars
384 //===---------------------------------------------------------------------===//
386 u32 to float conversion improvement:
388 float uint32_2_float( unsigned u ) {
389 float fl = (int) (u & 0xffff);
390 float fh = (int) (u >> 16);
395 00000000 subl $0x04,%esp
396 00000003 movl 0x08(%esp,1),%eax
397 00000007 movl %eax,%ecx
398 00000009 shrl $0x10,%ecx
399 0000000c cvtsi2ss %ecx,%xmm0
400 00000010 andl $0x0000ffff,%eax
401 00000015 cvtsi2ss %eax,%xmm1
402 00000019 mulss 0x00000078,%xmm0
403 00000021 addss %xmm1,%xmm0
404 00000025 movss %xmm0,(%esp,1)
405 0000002a flds (%esp,1)
406 0000002d addl $0x04,%esp
409 //===---------------------------------------------------------------------===//
When using the fastcc ABI, align stack slots of double arguments on an 8-byte
boundary to improve performance.
414 //===---------------------------------------------------------------------===//
418 int f(int a, int b) {
419 if (a == 4 || a == 6)
431 //===---------------------------------------------------------------------===//
433 GCC's ix86_expand_int_movcc function (in i386.c) has a ton of interesting
434 simplifications for integer "x cmp y ? a : b". For example, instead of:
437 void f(int X, int Y) {
464 int usesbb(unsigned int a, unsigned int b) {
465 return (a < b ? -1 : 0);
479 movl $4294967295, %ecx
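For reference, the sbb idiom we would prefer for usesbb is roughly this (an
i386 sketch with a at 4(%esp) and b at 8(%esp)):

        movl    4(%esp), %eax
        cmpl    8(%esp), %eax           # sets CF iff a < b (unsigned)
        sbbl    %eax, %eax              # eax = -CF = (a < b) ? -1 : 0
        ret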
483 //===---------------------------------------------------------------------===//
485 Consider the expansion of:
487 define i32 @test3(i32 %X) {
488 %tmp1 = urem i32 %X, 255
492 Currently it compiles to:
495 movl $2155905153, %ecx
501 This could be "reassociated" into:
503 movl $2155905153, %eax
507 to avoid the copy. In fact, the existing two-address stuff would do this
508 except that mul isn't a commutative 2-addr instruction. I guess this has
509 to be done at isel time based on the #uses to mul?
511 //===---------------------------------------------------------------------===//
513 Make sure the instruction which starts a loop does not cross a cacheline
boundary. This requires knowing the exact length of each machine instruction.
515 That is somewhat complicated, but doable. Example 256.bzip2:
517 In the new trace, the hot loop has an instruction which crosses a cacheline
518 boundary. In addition to potential cache misses, this can't help decoding as I
519 imagine there has to be some kind of complicated decoder reset and realignment
520 to grab the bytes from the next cacheline.
522 532 532 0x3cfc movb (1809(%esp, %esi), %bl <<<--- spans 2 64 byte lines
523 942 942 0x3d03 movl %dh, (1809(%esp, %esi)
524 937 937 0x3d0a incl %esi
525 3 3 0x3d0b cmpb %bl, %dl
526 27 27 0x3d0d jnz 0x000062db <main+11707>
528 //===---------------------------------------------------------------------===//
In C99 mode, the preprocessor doesn't like assembly comments like #TRUNCATE.
532 //===---------------------------------------------------------------------===//
534 This could be a single 16-bit load.
537 if ((p[0] == 1) & (p[1] == 2)) return 1;
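On a little-endian target the two byte tests collapse into one 16-bit compare
against 0x0201; a sketch, assuming p is passed at 4(%esp) and the function
otherwise returns 0:

        movl    4(%esp), %eax
        cmpw    $0x0201, (%eax)
        sete    %al
        movzbl  %al, %eax
        ret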
541 //===---------------------------------------------------------------------===//
543 We should inline lrintf and probably other libc functions.
545 //===---------------------------------------------------------------------===//
547 Start using the flags more. For example, compile:
549 int add_zf(int *x, int y, int a, int b) {
573 int add_zf(int *x, int y, int a, int b) {
597 //===---------------------------------------------------------------------===//
599 These two functions have identical effects:
601 unsigned int f(unsigned int i, unsigned int n) {++i; if (i == n) ++i; return i;}
602 unsigned int f2(unsigned int i, unsigned int n) {++i; i += i == n; return i;}
604 We currently compile them to:
612 jne LBB1_2 #UnifiedReturnBlock
616 LBB1_2: #UnifiedReturnBlock
626 leal 1(%ecx,%eax), %eax
629 both of which are inferior to GCC's:
647 //===---------------------------------------------------------------------===//
655 is currently compiled to:
666 It would be better to produce:
675 This can be applied to any no-return function call that takes no arguments etc.
676 Alternatively, the stack save/restore logic could be shrink-wrapped, producing
687 Both are useful in different situations. Finally, it could be shrink-wrapped
688 and tail called, like this:
695 pop %eax # realign stack.
698 Though this probably isn't worth it.
700 //===---------------------------------------------------------------------===//
702 We need to teach the codegen to convert two-address INC instructions to LEA
703 when the flags are dead (likewise dec). For example, on X86-64, compile:
705 int foo(int A, int B) {
724 ;; X's live range extends beyond the shift, so the register allocator
725 ;; cannot coalesce it with Y. Because of this, a copy needs to be
726 ;; emitted before the shift to save the register value before it is
727 ;; clobbered. However, this copy is not needed if the register
728 ;; allocator turns the shift into an LEA. This also occurs for ADD.
730 ; Check that the shift gets turned into an LEA.
731 ; RUN: llvm-as < %s | llc -march=x86 -x86-asm-syntax=intel | \
732 ; RUN: not grep {mov E.X, E.X}
734 @G = external global i32 ; <i32*> [#uses=3]
736 define i32 @test1(i32 %X, i32 %Y) {
737 %Z = add i32 %X, %Y ; <i32> [#uses=1]
738 volatile store i32 %Y, i32* @G
739 volatile store i32 %Z, i32* @G
743 define i32 @test2(i32 %X) {
744 %Z = add i32 %X, 1 ; <i32> [#uses=1]
745 volatile store i32 %Z, i32* @G
749 //===---------------------------------------------------------------------===//
751 Sometimes it is better to codegen subtractions from a constant (e.g. 7-x) with
752 a neg instead of a sub instruction. Consider:
754 int test(char X) { return 7-X; }
756 we currently produce:
763 We would use one fewer register if codegen'd as:
770 Note that this isn't beneficial if the load can be folded into the sub. In
771 this case, we want a sub:
773 int test(int X) { return 7-X; }
779 //===---------------------------------------------------------------------===//
781 Leaf functions that require one 4-byte spill slot have a prolog like this:
787 and an epilog like this:
792 It would be smaller, and potentially faster, to push eax on entry and to
793 pop into a dummy register instead of using addl/subl of esp. Just don't pop
794 into any return registers :)
796 //===---------------------------------------------------------------------===//
798 The X86 backend should fold (branch (or (setcc, setcc))) into multiple
799 branches. We generate really poor code for:
801 double testf(double a) {
802 return a == 0.0 ? 0.0 : (a > 0.0 ? 1.0 : -1.0);
805 For example, the entry BB is:
810 movsd 24(%esp), %xmm1
815 jne LBB1_5 # UnifiedReturnBlock
819 it would be better to replace the last four instructions with:
825 We also codegen the inner ?: into a diamond:
827 cvtss2sd LCPI1_0(%rip), %xmm2
828 cvtss2sd LCPI1_1(%rip), %xmm3
830 ja LBB1_3 # cond_true
837 We should sink the load into xmm3 into the LBB1_2 block. This should
838 be pretty easy, and will nuke all the copies.
840 //===---------------------------------------------------------------------===//
844 inline std::pair<unsigned, bool> full_add(unsigned a, unsigned b)
845 { return std::make_pair(a + b, a + b < a); }
846 bool no_overflow(unsigned a, unsigned b)
847 { return !full_add(a, b).second; }
857 FIXME: That code looks wrong; bool return is normally defined as zext.
869 //===---------------------------------------------------------------------===//
871 Re-materialize MOV32r0 etc. with xor instead of changing them to moves if the
872 condition register is dead. xor reg reg is shorter than mov reg, #0.
874 //===---------------------------------------------------------------------===//
878 bb114.preheader: ; preds = %cond_next94
879 %tmp231232 = sext i16 %tmp62 to i32 ; <i32> [#uses=1]
880 %tmp233 = sub i32 32, %tmp231232 ; <i32> [#uses=1]
881 %tmp245246 = sext i16 %tmp65 to i32 ; <i32> [#uses=1]
882 %tmp252253 = sext i16 %tmp68 to i32 ; <i32> [#uses=1]
883 %tmp254 = sub i32 32, %tmp252253 ; <i32> [#uses=1]
884 %tmp553554 = bitcast i16* %tmp37 to i8* ; <i8*> [#uses=2]
885 %tmp583584 = sext i16 %tmp98 to i32 ; <i32> [#uses=1]
886 %tmp585 = sub i32 32, %tmp583584 ; <i32> [#uses=1]
887 %tmp614615 = sext i16 %tmp101 to i32 ; <i32> [#uses=1]
888 %tmp621622 = sext i16 %tmp104 to i32 ; <i32> [#uses=1]
889 %tmp623 = sub i32 32, %tmp621622 ; <i32> [#uses=1]
894 LBB3_5: # bb114.preheader
895 movswl -68(%ebp), %eax
899 movswl -52(%ebp), %eax
902 movswl -70(%ebp), %eax
905 movswl -50(%ebp), %eax
908 movswl -42(%ebp), %eax
910 movswl -66(%ebp), %eax
914 This appears to be bad because the RA is not folding the store to the stack
915 slot into the movl. The above instructions could be:
920 This seems like a cross between remat and spill folding.
922 This has redundant subtractions of %eax from a stack slot. However, %ecx doesn't
923 change, so we could simply subtract %eax from %ecx first and then use %ecx (or
926 //===---------------------------------------------------------------------===//
930 %tmp659 = icmp slt i16 %tmp654, 0 ; <i1> [#uses=1]
931 br i1 %tmp659, label %cond_true662, label %cond_next715
937 jns LBB4_109 # cond_next715
939 Shark tells us that using %cx in the testw instruction is sub-optimal. It
940 suggests using the 32-bit register (which is what ICC uses).
942 //===---------------------------------------------------------------------===//
946 void compare (long long foo) {
947 if (foo < 4294967297LL)
963 jne .LBB1_2 # UnifiedReturnBlock
966 .LBB1_2: # UnifiedReturnBlock
970 (also really horrible code on ppc). This is due to the expand code for 64-bit
971 compares. GCC produces multiple branches, which is much nicer:
992 //===---------------------------------------------------------------------===//
994 Tail call optimization improvements: Tail call optimization currently
995 pushes all arguments on the top of the stack (their normal place for
non-tail call optimized calls) that source from the caller's arguments
997 or that source from a virtual register (also possibly sourcing from
999 This is done to prevent overwriting of parameters (see example
1000 below) that might be used later.
1004 int callee(int32, int64);
1005 int caller(int32 arg1, int32 arg2) {
1006 int64 local = arg2 * 2;
1007 return callee(arg2, (int64)local);
1010 [arg1] [!arg2 no longer valid since we moved local onto it]
Moving arg1 onto the stack slot of the callee function would overwrite
1017 Possible optimizations:
1020 - Analyse the actual parameters of the callee to see which would
1021 overwrite a caller parameter which is used by the callee and only
1022 push them onto the top of the stack.
1024 int callee (int32 arg1, int32 arg2);
1025 int caller (int32 arg1, int32 arg2) {
1026 return callee(arg1,arg2);
1029 Here we don't need to write any variables to the top of the stack
1030 since they don't overwrite each other.
1032 int callee (int32 arg1, int32 arg2);
1033 int caller (int32 arg1, int32 arg2) {
1034 return callee(arg2,arg1);
Here we need to push the arguments because they overwrite each other.
1040 //===---------------------------------------------------------------------===//
1045 unsigned long int z = 0;
1056 gcc compiles this to:
1082 jge LBB1_4 # cond_true
1085 addl $4294950912, %ecx
1095 1. LSR should rewrite the first cmp with induction variable %ecx.
1096 2. DAG combiner should fold
1102 //===---------------------------------------------------------------------===//
1104 define i64 @test(double %X) {
1105 %Y = fptosi double %X to i64
1113 movsd 24(%esp), %xmm0
1114 movsd %xmm0, 8(%esp)
1123 This should just fldl directly from the input stack slot.
1125 //===---------------------------------------------------------------------===//
1128 int foo (int x) { return (x & 65535) | 255; }
1130 Should compile into:
1133 movzwl 4(%esp), %eax
1144 //===---------------------------------------------------------------------===//
1146 We're codegen'ing multiply of long longs inefficiently:
1148 unsigned long long LLM(unsigned long long arg1, unsigned long long arg2) {
1152 We compile to (fomit-frame-pointer):
1160 imull 12(%esp), %esi
1162 imull 20(%esp), %ecx
1168 This looks like a scheduling deficiency and lack of remat of the load from
1169 the argument area. ICC apparently produces:
1172 imull 12(%esp), %ecx
1181 Note that it remat'd loads from 4(esp) and 12(esp). See this GCC PR:
1182 http://gcc.gnu.org/bugzilla/show_bug.cgi?id=17236
1184 //===---------------------------------------------------------------------===//
1186 We can fold a store into "zeroing a reg". Instead of:
1189 movl %eax, 124(%esp)
1195 if the flags of the xor are dead.
1197 Likewise, we isel "x<<1" into "add reg,reg". If reg is spilled, this should
1198 be folded into: shl [mem], 1
1200 //===---------------------------------------------------------------------===//
1202 This testcase misses a read/modify/write opportunity (from PR1425):
1204 void vertical_decompose97iH1(int *b0, int *b1, int *b2, int width){
1206 for(i=0; i<width; i++)
1207 b1[i] += (1*(b0[i] + b2[i])+0)>>0;
1210 We compile it down to:
1213 movl (%esi,%edi,4), %ebx
1214 addl (%ecx,%edi,4), %ebx
1215 addl (%edx,%edi,4), %ebx
1216 movl %ebx, (%ecx,%edi,4)
1221 the inner loop should add to the memory location (%ecx,%edi,4), saving
1222 a mov. Something like:
1224 movl (%esi,%edi,4), %ebx
1225 addl (%edx,%edi,4), %ebx
1226 addl %ebx, (%ecx,%edi,4)
1228 Here is another interesting example:
1230 void vertical_compose97iH1(int *b0, int *b1, int *b2, int width){
1232 for(i=0; i<width; i++)
1233 b1[i] -= (1*(b0[i] + b2[i])+0)>>0;
1236 We miss the r/m/w opportunity here by using 2 subs instead of an add+sub[mem]:
1239 movl (%ecx,%edi,4), %ebx
1240 subl (%esi,%edi,4), %ebx
1241 subl (%edx,%edi,4), %ebx
1242 movl %ebx, (%ecx,%edi,4)
1247 Additionally, LSR should rewrite the exit condition of these loops to use
a stride-4 IV, which would allow all the scales in the loop to go away.
1249 This would result in smaller code and more efficient microops.
1251 //===---------------------------------------------------------------------===//
In SSE mode, we turn abs and neg into a load from the constant pool plus an
xor or an and instruction, for example:
1256 xorpd LCPI1_0, %xmm2
1258 However, if xmm2 gets spilled, we end up with really ugly code like this:
1261 xorpd LCPI1_0, %xmm0
1264 Since we 'know' that this is a 'neg', we can actually "fold" the spill into
1265 the neg/abs instruction, turning it into an *integer* operation, like this:
1267 xorl 2147483648, [mem+4] ## 2147483648 = (1 << 31)
1269 you could also use xorb, but xorl is less likely to lead to a partial register
1270 stall. Here is a contrived testcase:
1273 void test(double *P) {
1283 //===---------------------------------------------------------------------===//
Handling llvm.memory.barrier on pre-SSE2 cpus should generate:
1288 lock ; mov %esp, %esp
1290 //===---------------------------------------------------------------------===//
The code generated on x86 for checking for signed overflow on a multiply, done
the obvious way, is much longer than it needs to be.
1295 int x(int a, int b) {
1296 long long prod = (long long)a*b;
1297 return prod > 0x7FFFFFFF || prod < (-0x7FFFFFFF-1);
1300 See PR2053 for more details.
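For reference, the check is equivalent to asking whether the high half of the
64-bit product is the sign extension of the low half, which is exactly what
imull reports in OF. A source-level sketch (function name made up; assumes
arithmetic right shift of negative ints):

int x2(int a, int b) {
  long long prod = (long long)a * b;
  /* overflow iff the high 32 bits are not the sign extension of the low 32 */
  return (int)(prod >> 32) != ((int)prod >> 31);
}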
1302 //===---------------------------------------------------------------------===//
We should investigate using cdq/cltd (effect: edx = sar eax, 31)
1305 more aggressively; it should cost the same as a move+shift on any modern
1306 processor, but it's a lot shorter. Downside is that it puts more
1307 pressure on register allocation because it has fixed operands.
1310 int abs(int x) {return x < 0 ? -x : x;}
1312 gcc compiles this to the following when using march/mtune=pentium2/3/4/m/etc.:
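Roughly, it uses the classic branchless sequence; a sketch, not necessarily
GCC's exact output:

        movl    4(%esp), %eax
        cltd                            # edx = eax >> 31 (all ones if negative)
        xorl    %edx, %eax
        subl    %edx, %eax              # (x ^ mask) - mask == abs(x)
        ret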
1320 //===---------------------------------------------------------------------===//
1323 int test(unsigned long a, unsigned long b) { return -(a < b); }
1325 We currently compile this to:
1327 define i32 @test(i32 %a, i32 %b) nounwind {
1328 %tmp3 = icmp ult i32 %a, %b ; <i1> [#uses=1]
1329 %tmp34 = zext i1 %tmp3 to i32 ; <i32> [#uses=1]
1330 %tmp5 = sub i32 0, %tmp34 ; <i32> [#uses=1]
1344 Several deficiencies here. First, we should instcombine zext+neg into sext:
1346 define i32 @test2(i32 %a, i32 %b) nounwind {
1347 %tmp3 = icmp ult i32 %a, %b ; <i1> [#uses=1]
1348 %tmp34 = sext i1 %tmp3 to i32 ; <i32> [#uses=1]
1352 However, before we can do that, we have to fix the bad codegen that we get for
1364 This code should be at least as good as the code above. Once this is fixed, we
1365 can optimize this specific case even more to:
1372 //===---------------------------------------------------------------------===//
1374 Take the following code (from
1375 http://gcc.gnu.org/bugzilla/show_bug.cgi?id=16541):
1377 extern unsigned char first_one[65536];
1378 int FirstOnet(unsigned long long arg1)
1381 return (first_one[arg1 >> 48]);
1386 The following code is currently generated:
1391 jb .LBB1_2 # UnifiedReturnBlock
1394 movzbl first_one(%eax), %eax
1396 .LBB1_2: # UnifiedReturnBlock
1400 There are a few possible improvements here:
1401 1. We should be able to eliminate the dead load into %ecx
1402 2. We could change the "movl 8(%esp), %eax" into
1403 "movzwl 10(%esp), %eax"; this lets us change the cmpl
1404 into a testl, which is shorter, and eliminate the shift.
1406 We could also in theory eliminate the branch by using a conditional
1407 for the address of the load, but that seems unlikely to be worthwhile
1410 //===---------------------------------------------------------------------===//
1412 We compile this function:
1414 define i32 @foo(i32 %a, i32 %b, i32 %c, i8 zeroext %d) nounwind {
1416 %tmp2 = icmp eq i8 %d, 0 ; <i1> [#uses=1]
1417 br i1 %tmp2, label %bb7, label %bb
1419 bb: ; preds = %entry
1420 %tmp6 = add i32 %b, %a ; <i32> [#uses=1]
1423 bb7: ; preds = %entry
1424 %tmp10 = sub i32 %a, %c ; <i32> [#uses=1]
1444 The coalescer could coalesce "edx" with "eax" to avoid the movl in LBB1_2
1445 if it commuted the addl in LBB1_1.
1447 //===---------------------------------------------------------------------===//
1454 cvtss2sd LCPI1_0, %xmm1
1456 movsd 176(%esp), %xmm2
1461 mulsd LCPI1_23, %xmm4
1462 addsd LCPI1_24, %xmm4
1464 addsd LCPI1_25, %xmm4
1466 addsd LCPI1_26, %xmm4
1468 addsd LCPI1_27, %xmm4
1470 addsd LCPI1_28, %xmm4
1474 movsd 152(%esp), %xmm1
1476 movsd %xmm1, 152(%esp)
1480 LBB1_16: # bb358.loopexit
1481 movsd 152(%esp), %xmm0
1483 addsd LCPI1_22, %xmm0
1484 movsd %xmm0, 152(%esp)
Rather than spilling the result of the last addsd in the loop, we should have
inserted a copy to split the interval (one for the duration of the loop, one
1488 extending to the fall through). The register pressure in the loop isn't high
1489 enough to warrant the spill.
1491 Also check why xmm7 is not used at all in the function.
1493 //===---------------------------------------------------------------------===//
1495 Legalize loses track of the fact that bools are always zero extended when in
1496 memory. This causes us to compile abort_gzip (from 164.gzip) from:
1498 target datalayout = "e-p:32:32:32-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:32:64-f32:32:32-f64:32:64-v64:64:64-v128:128:128-a0:0:64-f80:128:128"
1499 target triple = "i386-apple-darwin8"
1500 @in_exit.4870.b = internal global i1 false ; <i1*> [#uses=2]
1501 define fastcc void @abort_gzip() noreturn nounwind {
1503 %tmp.b.i = load i1* @in_exit.4870.b ; <i1> [#uses=1]
1504 br i1 %tmp.b.i, label %bb.i, label %bb4.i
1505 bb.i: ; preds = %entry
1506 tail call void @exit( i32 1 ) noreturn nounwind
1508 bb4.i: ; preds = %entry
1509 store i1 true, i1* @in_exit.4870.b
1510 tail call void @exit( i32 1 ) noreturn nounwind
1513 declare void @exit(i32) noreturn nounwind
1519 movb _in_exit.4870.b, %al
1526 //===---------------------------------------------------------------------===//
1530 int test(int x, int y) {
1542 it would be better to codegen as: x+~y (notl+addl)
1544 //===---------------------------------------------------------------------===//
1548 int foo(const char *str,...)
1550 __builtin_va_list a; int x;
1551 __builtin_va_start(a,str); x = __builtin_va_arg(a,int); __builtin_va_end(a);
1555 gets compiled into this on x86-64:
1557 movaps %xmm7, 160(%rsp)
1558 movaps %xmm6, 144(%rsp)
1559 movaps %xmm5, 128(%rsp)
1560 movaps %xmm4, 112(%rsp)
1561 movaps %xmm3, 96(%rsp)
1562 movaps %xmm2, 80(%rsp)
1563 movaps %xmm1, 64(%rsp)
1564 movaps %xmm0, 48(%rsp)
1571 movq %rax, 192(%rsp)
1572 leaq 208(%rsp), %rax
1573 movq %rax, 184(%rsp)
1576 movl 176(%rsp), %eax
1580 movq 184(%rsp), %rcx
1582 movq %rax, 184(%rsp)
1590 addq 192(%rsp), %rcx
1591 movl %eax, 176(%rsp)
1597 leaq 104(%rsp), %rax
1598 movq %rsi, -80(%rsp)
1600 movq %rax, -112(%rsp)
1601 leaq -88(%rsp), %rax
1602 movq %rax, -104(%rsp)
1606 movq -112(%rsp), %rdx
1614 addq -104(%rsp), %rdx
1616 movl %eax, -120(%rsp)
1621 and it gets compiled into this on x86:
1641 //===---------------------------------------------------------------------===//
1643 Teach tblgen not to check bitconvert source type in some cases. This allows us
1644 to consolidate the following patterns in X86InstrMMX.td:
1646 def : Pat<(v2i32 (bitconvert (i64 (vector_extract (v2i64 VR128:$src),
1648 (v2i32 (MMX_MOVDQ2Qrr VR128:$src))>;
1649 def : Pat<(v4i16 (bitconvert (i64 (vector_extract (v2i64 VR128:$src),
1651 (v4i16 (MMX_MOVDQ2Qrr VR128:$src))>;
1652 def : Pat<(v8i8 (bitconvert (i64 (vector_extract (v2i64 VR128:$src),
1654 (v8i8 (MMX_MOVDQ2Qrr VR128:$src))>;
1656 There are other cases in various td files.
1658 //===---------------------------------------------------------------------===//
1660 Take something like the following on x86-32:
1661 unsigned a(unsigned long long x, unsigned y) {return x % y;}
1663 We currently generate a libcall, but we really shouldn't: the expansion is
shorter and likely faster than the libcall. The expected code is something
like the two-divide sequence sketched below.
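A minimal sketch, assuming the usual i386 calling convention (x at 4(%esp) and
8(%esp), y at 12(%esp); register choices are illustrative):

        movl    12(%esp), %ecx          # y
        xorl    %edx, %edx
        movl    8(%esp), %eax           # high half of x
        divl    %ecx                    # edx = x.hi % y
        movl    4(%esp), %eax           # low half of x
        divl    %ecx                    # edx = x % y (quotient cannot overflow)
        movl    %edx, %eax
        ret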
1676 A similar code sequence works for division.
1678 //===---------------------------------------------------------------------===//
These should compile to the same code, but the latter codegens to useless
instructions on X86. This may be a trivial dag combine (GCC PR7061):
1683 struct s1 { unsigned char a, b; };
1684 unsigned long f1(struct s1 x) {
1687 struct s2 { unsigned a: 8, b: 8; };
1688 unsigned long f2(struct s2 x) {
1692 //===---------------------------------------------------------------------===//
1694 We currently compile this:
1696 define i32 @func1(i32 %v1, i32 %v2) nounwind {
1698 %t = call {i32, i1} @llvm.sadd.with.overflow.i32(i32 %v1, i32 %v2)
1699 %sum = extractvalue {i32, i1} %t, 0
1700 %obit = extractvalue {i32, i1} %t, 1
1701 br i1 %obit, label %overflow, label %normal
1705 call void @llvm.trap()
1708 declare {i32, i1} @llvm.sadd.with.overflow.i32(i32, i32)
1709 declare void @llvm.trap()
1716 jo LBB1_2 ## overflow
1722 it would be nice to produce "into" someday.
1724 //===---------------------------------------------------------------------===//
1728 void vec_mpys1(int y[], const int x[], int scaler) {
1730 for (i = 0; i < 150; i++)
1731 y[i] += (((long long)scaler * (long long)x[i]) >> 31);
1734 Compiles to this loop with GCC 3.x:
1739 shrdl $31, %edx, %eax
1740 addl %eax, (%esi,%ecx,4)
1745 llvm-gcc compiles it to the much uglier:
1749 movl (%eax,%edi,4), %ebx
1758 shldl $1, %eax, %ebx
1760 addl %ebx, (%eax,%edi,4)
1765 //===---------------------------------------------------------------------===//
1767 Test instructions can be eliminated by using EFLAGS values from arithmetic
1768 instructions. This is currently not done for mul, and, or, xor, neg, shl,
1769 sra, srl, shld, shrd, atomic ops, and others. It is also currently not done
for read-modify-write instructions. It is also currently not done if the
1771 OF or CF flags are needed.
1773 The shift operators have the complication that when the shift count is
1774 zero, EFLAGS is not set, so they can only subsume a test instruction if
1775 the shift count is known to be non-zero. Also, using the EFLAGS value
1776 from a shift is apparently very slow on some x86 implementations.
1778 In read-modify-write instructions, the root node in the isel match is
1779 the store, and isel has no way for the use of the EFLAGS result of the
1780 arithmetic to be remapped to the new node.
Add and subtract instructions set OF on signed overflow and CF on unsigned
1783 overflow, while test instructions always clear OF and CF. In order to
1784 replace a test with an add or subtract in a situation where OF or CF is
1785 needed, codegen must be able to prove that the operation cannot see
1786 signed or unsigned overflow, respectively.
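A trivial example of the kind of redundant test we would like to remove
(a sketch; the label name is illustrative):

        andl    %ecx, %eax
        testl   %eax, %eax              # redundant: andl already set ZF/SF
        je      LBB1_2

should become:

        andl    %ecx, %eax
        je      LBB1_2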
1788 //===---------------------------------------------------------------------===//
1790 memcpy/memmove do not lower to SSE copies when possible. A silly example is:
1791 define <16 x float> @foo(<16 x float> %A) nounwind {
1792 %tmp = alloca <16 x float>, align 16
1793 %tmp2 = alloca <16 x float>, align 16
1794 store <16 x float> %A, <16 x float>* %tmp
1795 %s = bitcast <16 x float>* %tmp to i8*
1796 %s2 = bitcast <16 x float>* %tmp2 to i8*
1797 call void @llvm.memcpy.i64(i8* %s, i8* %s2, i64 64, i32 16)
1798 %R = load <16 x float>* %tmp2
1802 declare void @llvm.memcpy.i64(i8* nocapture, i8* nocapture, i64, i32) nounwind
1808 movaps %xmm3, 112(%esp)
1809 movaps %xmm2, 96(%esp)
1810 movaps %xmm1, 80(%esp)
1811 movaps %xmm0, 64(%esp)
1813 movl %eax, 124(%esp)
1815 movl %eax, 120(%esp)
1817 <many many more 32-bit copies>
1818 movaps (%esp), %xmm0
1819 movaps 16(%esp), %xmm1
1820 movaps 32(%esp), %xmm2
1821 movaps 48(%esp), %xmm3
1825 On Nehalem, it may even be cheaper to just use movups when unaligned than to
1826 fall back to lower-granularity chunks.
1828 //===---------------------------------------------------------------------===//
1830 Implement processor-specific optimizations for parity with GCC on these
1831 processors. GCC does two optimizations:
1833 1. ix86_pad_returns inserts a noop before ret instructions if immediately
preceded by a conditional branch or is the target of a jump.
1835 2. ix86_avoid_jump_misspredicts inserts noops in cases where a 16-byte block of
1836 code contains more than 3 branches.
The first one is done for all AMDs, Core2, and "Generic".
The second one is done for: Atom, Pentium Pro, all AMDs, Pentium 4, Nocona,
Core 2, and "Generic".
1842 //===---------------------------------------------------------------------===//
1845 int a(int x) { return (x & 127) > 31; }
1861 This should definitely be done in instcombine, canonicalizing the range
1862 condition into a != condition. We get this IR:
1864 define i32 @a(i32 %x) nounwind readnone {
1866 %0 = and i32 %x, 127 ; <i32> [#uses=1]
1867 %1 = icmp ugt i32 %0, 31 ; <i1> [#uses=1]
1868 %2 = zext i1 %1 to i32 ; <i32> [#uses=1]
Instcombine prefers to strength-reduce relational comparisons to equality
comparisons when possible; this should be another case of that. This could
1874 be handled pretty easily in InstCombiner::visitICmpInstWithInstAndIntCst, but it
1875 looks like InstCombiner::visitICmpInstWithInstAndIntCst should really already
1876 be redesigned to use ComputeMaskedBits and friends.
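For reference, the equality form we want to canonicalize to looks like this at
the source level (a sketch; a masked value in 0..127 is greater than 31 iff
bit 5 or bit 6 is set, i.e. x & 96 is nonzero):

int a2(int x) { return (x & 96) != 0; }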
1879 //===---------------------------------------------------------------------===//
1881 int x(int a) { return (a&0xf0)>>4; }
1890 movzbl 4(%esp), %eax
1894 //===---------------------------------------------------------------------===//
1897 int x(int a) { return (a & 0x80) ? 0x100 : 0; }
1898 int y(int a) { return (a & 0x80) *2; }
1913 This is another general instcombine transformation that is profitable on all
1914 targets. In LLVM IR, these functions look like this:
1916 define i32 @x(i32 %a) nounwind readnone {
1918 %0 = and i32 %a, 128
1919 %1 = icmp eq i32 %0, 0
1920 %iftmp.0.0 = select i1 %1, i32 0, i32 256
1924 define i32 @y(i32 %a) nounwind readnone {
1927 %1 = and i32 %0, 256
Replacing an icmp+select with a shift should always be considered profitable in
instcombine.
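For reference, both functions reduce to a single shift and mask; a sketch:

int xy(int a) { return (a & 0x80) << 1; }   /* 0x100 if bit 7 set, else 0 */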
1934 //===---------------------------------------------------------------------===//
1936 Re-implement atomic builtins __sync_add_and_fetch() and __sync_sub_and_fetch
When the return value is not used (i.e. we only care about the value in
memory), x86 does not have to use add to implement these. Instead, it can use
1941 add, sub, inc, dec instructions with the "lock" prefix.
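For example, when the result of __sync_add_and_fetch(p, x) is unused, the whole
operation can be a single locked RMW; a sketch (registers illustrative):

        lock addl %eax, (%ecx)          # x in %eax, p in %ecx; no result materialized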
This is currently implemented using a bit of an instruction selection trick. The
issue is that the target-independent pattern produces one output and a chain, and
we want to map it into one that just outputs a chain. The current trick is to select
it into a MERGE_VALUES with the first definition being an implicit_def. The
proper solution is to add new ISD opcodes for the no-output variant. The DAG
combiner can then transform the node before it gets to target node selection.
Problem #2 is that we are adding a whole bunch of x86 atomic instructions when in
fact these instructions are identical to the non-lock versions. We need a way to
add target-specific information to target nodes and have this information
carried over to machine instructions. The asm printer (or JIT) can use this
information to add the "lock" prefix.