1 //===---------------------------------------------------------------------===//
2 // Random ideas for the X86 backend.
3 //===---------------------------------------------------------------------===//
5 We should add support for the "movbe" instruction, which does a byte-swapping
copy (3-addr bswap + memory support?). This is available on Atom processors.
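A byte-swapping load is the kind of source that would map to it, e.g. (an
illustrative sketch; __builtin_bswap32 is the GCC/Clang builtin):

unsigned load_be32(const unsigned *p) {
  return __builtin_bswap32(*p);   /* could be a single movbe load where available */
}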
8 //===---------------------------------------------------------------------===//
10 CodeGen/X86/lea-3.ll:test3 should be a single LEA, not a shift/move. The X86
11 backend knows how to three-addressify this shift, but it appears the register
12 allocator isn't even asking it to do so in this case. We should investigate
why this isn't happening; it could have a significant impact on other important
cases for X86 as well.
16 //===---------------------------------------------------------------------===//
18 This should be one DIV/IDIV instruction, not a libcall:
unsigned test(unsigned long long X, unsigned Y) {
        return X/Y;
}
24 This can be done trivially with a custom legalizer. What about overflow
25 though? http://gcc.gnu.org/bugzilla/show_bug.cgi?id=14224
27 //===---------------------------------------------------------------------===//
29 Improvements to the multiply -> shift/add algorithm:
30 http://gcc.gnu.org/ml/gcc-patches/2004-08/msg01590.html
32 //===---------------------------------------------------------------------===//
34 Improve code like this (occurs fairly frequently, e.g. in LLVM):
35 long long foo(int x) { return 1LL << x; }
37 http://gcc.gnu.org/ml/gcc-patches/2004-09/msg01109.html
38 http://gcc.gnu.org/ml/gcc-patches/2004-09/msg01128.html
39 http://gcc.gnu.org/ml/gcc-patches/2004-09/msg01136.html
41 Another useful one would be ~0ULL >> X and ~0ULL << X.
43 One better solution for 1LL << x is:
52 But that requires good 8-bit subreg support.
54 Also, this might be better. It's an extra shift, but it's one instruction
55 shorter, and doesn't stress 8-bit subreg support.
56 (From http://gcc.gnu.org/ml/gcc-patches/2004-09/msg01148.html,
57 but without the unnecessary and.)
65 64-bit shifts (in general) expand to really bad code. Instead of using
66 cmovs, we should expand to a conditional branch like GCC produces.
68 //===---------------------------------------------------------------------===//
71 _Bool f(_Bool a) { return a!=1; }
78 (Although note that this isn't a legal way to express the code that llvm-gcc
79 currently generates for that function.)
81 //===---------------------------------------------------------------------===//
1. Dynamic programming based approach when compile time is not an issue.
87 2. Code duplication (addressing mode) during isel.
88 3. Other ideas from "Register-Sensitive Selection, Duplication, and
89 Sequencing of Instructions".
90 4. Scheduling for reduced register pressure. E.g. "Minimum Register
91 Instruction Sequence Problem: Revisiting Optimal Code Generation for DAGs"
92 and other related papers.
93 http://citeseer.ist.psu.edu/govindarajan01minimum.html
95 //===---------------------------------------------------------------------===//
97 Should we promote i16 to i32 to avoid partial register update stalls?
99 //===---------------------------------------------------------------------===//
Leave any_extend as a pseudo instruction and hint to the register
allocator. Delay codegen until post register allocation.
Note: any_extend is now turned into an INSERT_SUBREG. We still need to teach
the coalescer how to deal with it, though.
106 //===---------------------------------------------------------------------===//
It appears icc uses push for parameter passing. Need to investigate.
110 //===---------------------------------------------------------------------===//
112 Only use inc/neg/not instructions on processors where they are faster than
add/sub/xor. They are slower on the P4 due to only updating some of the
processor flags.
116 //===---------------------------------------------------------------------===//
118 The instruction selector sometimes misses folding a load into a compare. The
119 pattern is written as (cmp reg, (load p)). Because the compare isn't
120 commutative, it is not matched with the load on both sides. The dag combiner
should be made smart enough to canonicalize the load into the RHS of a compare
122 when it can invert the result of the compare for free.
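A hypothetical example of the problem (the load ends up on the LHS of the
compare, so it is not folded unless the compare is inverted):

int cmp_load_lhs(int *p, int x) {
  return *p < x;   /* (cmp (load p), x); folding needs (cmp x, (load p)) with the condition swapped */
}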
124 //===---------------------------------------------------------------------===//
126 In many cases, LLVM generates code like this:
135 on some processors (which ones?), it is more efficient to do this:
144 Doing this correctly is tricky though, as the xor clobbers the flags.
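A typical source for this kind of sequence is a comparison returned as an int,
e.g. (an assumed example, not necessarily the one above):

int lt(int x, int y) {
  return x < y;   /* setcc result widened with movzbl vs. pre-zeroing the result register with xor */
}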
146 //===---------------------------------------------------------------------===//
148 We should generate bts/btr/etc instructions on targets where they are cheap or
149 when codesize is important. e.g., for:
void setbit(int *target, int bit) {
  *target |= (1 << bit);
}
void clearbit(int *target, int bit) {
  *target &= ~(1 << bit);
}
158 //===---------------------------------------------------------------------===//
160 Instead of the following for memset char*, 1, 10:
162 movl $16843009, 4(%edx)
163 movl $16843009, (%edx)
166 It might be better to generate
173 when we can spare a register. It reduces code size.
175 //===---------------------------------------------------------------------===//
Evaluate what the best way to codegen sdiv X, (2^C) is. For X/8, we currently compile:
180 define i32 @test1(i32 %X) {
194 GCC knows several different ways to codegen it, one of which is this:
204 which is probably slower, but it's interesting at least :)
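For reference, the usual branch-free expansion of X/8 can be written in C
roughly as below (a sketch; it assumes arithmetic right shifts on signed ints,
which is what the backend emits anyway):

int sdiv8(int X) {
  int bias = (X >> 31) & 7;   /* 7 when X is negative, 0 otherwise */
  return (X + bias) >> 3;     /* rounds toward zero, matching C signed division */
}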
206 //===---------------------------------------------------------------------===//
We are currently lowering large (1MB+) memmove/memcpy to rep/stosl and rep/movsl.
We should leave these as libcalls for everything over a much lower threshold,
since libc is hand-tuned for medium and large mem ops (avoiding RFO for large
stores, TLB preheating, etc.).
213 //===---------------------------------------------------------------------===//
215 Optimize this into something reasonable:
216 x * copysign(1.0, y) * copysign(1.0, z)
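One plausible "reasonable" form is to xor the sign bits of y and z into x
(a sketch, assuming IEEE-754 doubles; the helper name is made up):

#include <stdint.h>
#include <string.h>

double sign_combine(double x, double y, double z) {
  uint64_t xb, yb, zb;
  memcpy(&xb, &x, sizeof xb);
  memcpy(&yb, &y, sizeof yb);
  memcpy(&zb, &z, sizeof zb);
  xb ^= (yb ^ zb) & 0x8000000000000000ULL;   /* flip x's sign by sign(y) ^ sign(z) */
  memcpy(&x, &xb, sizeof x);
  return x;
}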
218 //===---------------------------------------------------------------------===//
220 Optimize copysign(x, *y) to use an integer load from y.
222 //===---------------------------------------------------------------------===//
224 The following tests perform worse with LSR:
lambda, siod, optimizer-eval, ackermann, hash2, nestedloop, strcat, and Treesort.
228 //===---------------------------------------------------------------------===//
230 Adding to the list of cmp / test poor codegen issues:
232 int test(__m128 *A, __m128 *B) {
233 if (_mm_comige_ss(*A, *B))
Note that the setae, movzbl, cmpl, and cmove can be replaced with a single cmovae. There
are a number of issues. 1) We are introducing a setcc between the result of the
intrinsic call and the select. 2) The intrinsic is expected to produce an i32 value,
so an any_extend (which becomes a zero extend) is added.
258 We probably need some kind of target DAG combine hook to fix this.
260 //===---------------------------------------------------------------------===//
262 We generate significantly worse code for this than GCC:
263 http://gcc.gnu.org/bugzilla/show_bug.cgi?id=21150
264 http://gcc.gnu.org/bugzilla/attachment.cgi?id=8701
There is also one case where we do worse on PPC.
268 //===---------------------------------------------------------------------===//
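The source for this entry appears to be a small multiply by 3 of a stack
argument, e.g. (an assumed reconstruction):

int t(int x) { return x * 3; }

for which we currently emit: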
278 imull $3, 4(%esp), %eax
Perhaps this is what we really should generate? Is imull three or four
cycles? Note: ICC generates this:
283 leal (%eax,%eax,2), %eax
285 The current instruction priority is based on pattern complexity. The former is
286 more "complex" because it folds a load so the latter will not be emitted.
288 Perhaps we should use AddedComplexity to give LEA32r a higher priority? We
289 should always try to match LEA first since the LEA matching code does some
290 estimate to determine whether the match is profitable.
292 However, if we care more about code size, then imull is better. It's two bytes
293 shorter than movl + leal.
295 On a Pentium M, both variants have the same characteristics with regard
296 to throughput; however, the multiplication has a latency of four cycles, as
297 opposed to two cycles for the movl+lea variant.
299 //===---------------------------------------------------------------------===//
301 __builtin_ffs codegen is messy.
303 int ffs_(unsigned X) { return __builtin_ffs(X); }
326 Another example of __builtin_ffs (use predsimplify to eliminate a select):
int foo (unsigned long j) {
  if (j) return __builtin_ffs (j) - 1;
  return 0;
}
335 //===---------------------------------------------------------------------===//
It appears gcc places string data with linkonce linkage in
.section __TEXT,__const_coal,coalesced instead of
.section __DATA,__const_coal,coalesced.
Take a look at darwin.h; there are other Darwin assembler directives that we
do not make use of.
343 //===---------------------------------------------------------------------===//
define i32 @foo(i32* %a, i32 %t) {
entry:
  br label %cond_true
349 cond_true: ; preds = %cond_true, %entry
350 %x.0.0 = phi i32 [ 0, %entry ], [ %tmp9, %cond_true ] ; <i32> [#uses=3]
351 %t_addr.0.0 = phi i32 [ %t, %entry ], [ %tmp7, %cond_true ] ; <i32> [#uses=1]
352 %tmp2 = getelementptr i32* %a, i32 %x.0.0 ; <i32*> [#uses=1]
353 %tmp3 = load i32* %tmp2 ; <i32> [#uses=1]
354 %tmp5 = add i32 %t_addr.0.0, %x.0.0 ; <i32> [#uses=1]
355 %tmp7 = add i32 %tmp5, %tmp3 ; <i32> [#uses=2]
356 %tmp9 = add i32 %x.0.0, 1 ; <i32> [#uses=2]
357 %tmp = icmp sgt i32 %tmp9, 39 ; <i1> [#uses=1]
358 br i1 %tmp, label %bb12, label %cond_true
bb12:          ; preds = %cond_true
  ret i32 %tmp7
}
363 is pessimized by -loop-reduce and -indvars
365 //===---------------------------------------------------------------------===//
367 u32 to float conversion improvement:
float uint32_2_float( unsigned u ) {
  float fl = (int) (u & 0xffff);
  float fh = (int) (u >> 16);
  /* recombine the halves; the high half is scaled back up by 2^16 */
  return fh * 65536.0f + fl;
}
376 00000000 subl $0x04,%esp
377 00000003 movl 0x08(%esp,1),%eax
378 00000007 movl %eax,%ecx
379 00000009 shrl $0x10,%ecx
380 0000000c cvtsi2ss %ecx,%xmm0
381 00000010 andl $0x0000ffff,%eax
382 00000015 cvtsi2ss %eax,%xmm1
383 00000019 mulss 0x00000078,%xmm0
384 00000021 addss %xmm1,%xmm0
385 00000025 movss %xmm0,(%esp,1)
386 0000002a flds (%esp,1)
387 0000002d addl $0x04,%esp
390 //===---------------------------------------------------------------------===//
When using the fastcc ABI, align the stack slot of a double argument on an
8-byte boundary to improve performance.
395 //===---------------------------------------------------------------------===//
399 int f(int a, int b) {
400 if (a == 4 || a == 6)
412 //===---------------------------------------------------------------------===//
414 GCC's ix86_expand_int_movcc function (in i386.c) has a ton of interesting
415 simplifications for integer "x cmp y ? a : b". For example, instead of:
418 void f(int X, int Y) {
445 int usesbb(unsigned int a, unsigned int b) {
  return (a < b ? -1 : 0);
}
460 movl $4294967295, %ecx
464 //===---------------------------------------------------------------------===//
466 Consider the expansion of:
468 define i32 @test3(i32 %X) {
  %tmp1 = urem i32 %X, 255
  ret i32 %tmp1
}
473 Currently it compiles to:
476 movl $2155905153, %ecx
482 This could be "reassociated" into:
484 movl $2155905153, %eax
488 to avoid the copy. In fact, the existing two-address stuff would do this
489 except that mul isn't a commutative 2-addr instruction. I guess this has
to be done at isel time based on the #uses of the mul?
492 //===---------------------------------------------------------------------===//
494 Make sure the instruction which starts a loop does not cross a cacheline
boundary. This requires knowing the exact length of each machine instruction.
496 That is somewhat complicated, but doable. Example 256.bzip2:
498 In the new trace, the hot loop has an instruction which crosses a cacheline
499 boundary. In addition to potential cache misses, this can't help decoding as I
500 imagine there has to be some kind of complicated decoder reset and realignment
501 to grab the bytes from the next cacheline.
503 532 532 0x3cfc movb (1809(%esp, %esi), %bl <<<--- spans 2 64 byte lines
504 942 942 0x3d03 movl %dh, (1809(%esp, %esi)
505 937 937 0x3d0a incl %esi
506 3 3 0x3d0b cmpb %bl, %dl
507 27 27 0x3d0d jnz 0x000062db <main+11707>
509 //===---------------------------------------------------------------------===//
In C99 mode, the preprocessor doesn't like assembly comments like #TRUNCATE.
513 //===---------------------------------------------------------------------===//
515 This could be a single 16-bit load.
518 if ((p[0] == 1) & (p[1] == 2)) return 1;
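The combined form would be something like this (a sketch; assumes a
little-endian target such as x86):

#include <string.h>

int both_match(const unsigned char *p) {
  unsigned short v;
  memcpy(&v, p, sizeof v);    /* one 16-bit load */
  return v == 0x0201;         /* p[0] == 1 && p[1] == 2 on little-endian */
}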
522 //===---------------------------------------------------------------------===//
524 We should inline lrintf and probably other libc functions.
526 //===---------------------------------------------------------------------===//
528 Use the FLAGS values from arithmetic instructions more. For example, compile:
530 int add_zf(int *x, int y, int a, int b) {
552 As another example, compile function f2 in test/CodeGen/X86/cmp-test.ll
553 without a test instruction.
555 //===---------------------------------------------------------------------===//
557 These two functions have identical effects:
559 unsigned int f(unsigned int i, unsigned int n) {++i; if (i == n) ++i; return i;}
560 unsigned int f2(unsigned int i, unsigned int n) {++i; i += i == n; return i;}
562 We currently compile them to:
570 jne LBB1_2 #UnifiedReturnBlock
574 LBB1_2: #UnifiedReturnBlock
584 leal 1(%ecx,%eax), %eax
587 both of which are inferior to GCC's:
605 //===---------------------------------------------------------------------===//
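The example here is presumably a small guarded call to a noreturn function,
something like (an assumed reconstruction):

void test(int X) {
  if (X) abort();
}

which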
613 is currently compiled to:
624 It would be better to produce:
633 This can be applied to any no-return function call that takes no arguments etc.
Alternatively, the stack save/restore logic could be shrink-wrapped, producing
something like this:
645 Both are useful in different situations. Finally, it could be shrink-wrapped
646 and tail called, like this:
653 pop %eax # realign stack.
656 Though this probably isn't worth it.
658 //===---------------------------------------------------------------------===//
660 Sometimes it is better to codegen subtractions from a constant (e.g. 7-x) with
661 a neg instead of a sub instruction. Consider:
663 int test(char X) { return 7-X; }
665 we currently produce:
672 We would use one fewer register if codegen'd as:
679 Note that this isn't beneficial if the load can be folded into the sub. In
680 this case, we want a sub:
682 int test(int X) { return 7-X; }
688 //===---------------------------------------------------------------------===//
690 Leaf functions that require one 4-byte spill slot have a prolog like this:
696 and an epilog like this:
701 It would be smaller, and potentially faster, to push eax on entry and to
702 pop into a dummy register instead of using addl/subl of esp. Just don't pop
703 into any return registers :)
705 //===---------------------------------------------------------------------===//
707 The X86 backend should fold (branch (or (setcc, setcc))) into multiple
708 branches. We generate really poor code for:
710 double testf(double a) {
  return a == 0.0 ? 0.0 : (a > 0.0 ? 1.0 : -1.0);
}
714 For example, the entry BB is:
719 movsd 24(%esp), %xmm1
724 jne LBB1_5 # UnifiedReturnBlock
728 it would be better to replace the last four instructions with:
734 We also codegen the inner ?: into a diamond:
736 cvtss2sd LCPI1_0(%rip), %xmm2
737 cvtss2sd LCPI1_1(%rip), %xmm3
739 ja LBB1_3 # cond_true
746 We should sink the load into xmm3 into the LBB1_2 block. This should
747 be pretty easy, and will nuke all the copies.
749 //===---------------------------------------------------------------------===//
753 inline std::pair<unsigned, bool> full_add(unsigned a, unsigned b)
754 { return std::make_pair(a + b, a + b < a); }
755 bool no_overflow(unsigned a, unsigned b)
756 { return !full_add(a, b).second; }
766 FIXME: That code looks wrong; bool return is normally defined as zext.
778 //===---------------------------------------------------------------------===//
782 bb114.preheader: ; preds = %cond_next94
783 %tmp231232 = sext i16 %tmp62 to i32 ; <i32> [#uses=1]
784 %tmp233 = sub i32 32, %tmp231232 ; <i32> [#uses=1]
785 %tmp245246 = sext i16 %tmp65 to i32 ; <i32> [#uses=1]
786 %tmp252253 = sext i16 %tmp68 to i32 ; <i32> [#uses=1]
787 %tmp254 = sub i32 32, %tmp252253 ; <i32> [#uses=1]
788 %tmp553554 = bitcast i16* %tmp37 to i8* ; <i8*> [#uses=2]
789 %tmp583584 = sext i16 %tmp98 to i32 ; <i32> [#uses=1]
790 %tmp585 = sub i32 32, %tmp583584 ; <i32> [#uses=1]
791 %tmp614615 = sext i16 %tmp101 to i32 ; <i32> [#uses=1]
792 %tmp621622 = sext i16 %tmp104 to i32 ; <i32> [#uses=1]
793 %tmp623 = sub i32 32, %tmp621622 ; <i32> [#uses=1]
798 LBB3_5: # bb114.preheader
799 movswl -68(%ebp), %eax
803 movswl -52(%ebp), %eax
806 movswl -70(%ebp), %eax
809 movswl -50(%ebp), %eax
812 movswl -42(%ebp), %eax
814 movswl -66(%ebp), %eax
818 This appears to be bad because the RA is not folding the store to the stack
819 slot into the movl. The above instructions could be:
824 This seems like a cross between remat and spill folding.
826 This has redundant subtractions of %eax from a stack slot. However, %ecx doesn't
change, so we could simply subtract %eax from %ecx first and then use %ecx (or
vice versa).
830 //===---------------------------------------------------------------------===//
834 %tmp659 = icmp slt i16 %tmp654, 0 ; <i1> [#uses=1]
835 br i1 %tmp659, label %cond_true662, label %cond_next715
841 jns LBB4_109 # cond_next715
843 Shark tells us that using %cx in the testw instruction is sub-optimal. It
844 suggests using the 32-bit register (which is what ICC uses).
846 //===---------------------------------------------------------------------===//
850 void compare (long long foo) {
851 if (foo < 4294967297LL)
867 jne .LBB1_2 # UnifiedReturnBlock
870 .LBB1_2: # UnifiedReturnBlock
874 (also really horrible code on ppc). This is due to the expand code for 64-bit
875 compares. GCC produces multiple branches, which is much nicer:
896 //===---------------------------------------------------------------------===//
898 Tail call optimization improvements: Tail call optimization currently
899 pushes all arguments on the top of the stack (their normal place for
non-tail call optimized calls) that source from the caller's arguments
or that source from a virtual register (also possibly sourcing from the
caller's arguments).
903 This is done to prevent overwriting of parameters (see example
904 below) that might be used later.
908 int callee(int32, int64);
909 int caller(int32 arg1, int32 arg2) {
910 int64 local = arg2 * 2;
  return callee(arg2, (int64)local);
}
914 [arg1] [!arg2 no longer valid since we moved local onto it]
918 Moving arg1 onto the stack slot of callee function would overwrite
921 Possible optimizations:
924 - Analyse the actual parameters of the callee to see which would
925 overwrite a caller parameter which is used by the callee and only
926 push them onto the top of the stack.
928 int callee (int32 arg1, int32 arg2);
929 int caller (int32 arg1, int32 arg2) {
  return callee(arg1,arg2);
}
933 Here we don't need to write any variables to the top of the stack
934 since they don't overwrite each other.
936 int callee (int32 arg1, int32 arg2);
937 int caller (int32 arg1, int32 arg2) {
  return callee(arg2,arg1);
}

Here we need to push the arguments because they overwrite each other.
944 //===---------------------------------------------------------------------===//
949 unsigned long int z = 0;
960 gcc compiles this to:
986 jge LBB1_4 # cond_true
989 addl $4294950912, %ecx
999 1. LSR should rewrite the first cmp with induction variable %ecx.
1000 2. DAG combiner should fold
1006 //===---------------------------------------------------------------------===//
define i64 @test(double %X) {
  %Y = fptosi double %X to i64
  ret i64 %Y
}
1017 movsd 24(%esp), %xmm0
1018 movsd %xmm0, 8(%esp)
1027 This should just fldl directly from the input stack slot.
1029 //===---------------------------------------------------------------------===//
1032 int foo (int x) { return (x & 65535) | 255; }
1034 Should compile into:
1037 movzwl 4(%esp), %eax
1048 //===---------------------------------------------------------------------===//
1050 We're codegen'ing multiply of long longs inefficiently:
unsigned long long LLM(unsigned long long arg1, unsigned long long arg2) {
  return arg1 * arg2;
}
1056 We compile to (fomit-frame-pointer):
1064 imull 12(%esp), %esi
1066 imull 20(%esp), %ecx
1072 This looks like a scheduling deficiency and lack of remat of the load from
1073 the argument area. ICC apparently produces:
1076 imull 12(%esp), %ecx
1085 Note that it remat'd loads from 4(esp) and 12(esp). See this GCC PR:
1086 http://gcc.gnu.org/bugzilla/show_bug.cgi?id=17236
1088 //===---------------------------------------------------------------------===//
1090 We can fold a store into "zeroing a reg". Instead of:
1093 movl %eax, 124(%esp)
1099 if the flags of the xor are dead.
1101 Likewise, we isel "x<<1" into "add reg,reg". If reg is spilled, this should
1102 be folded into: shl [mem], 1
1104 //===---------------------------------------------------------------------===//
1106 In SSE mode, we turn abs and neg into a load from the constant pool plus a xor
1107 or and instruction, for example:
1109 xorpd LCPI1_0, %xmm2
1111 However, if xmm2 gets spilled, we end up with really ugly code like this:
1114 xorpd LCPI1_0, %xmm0
1117 Since we 'know' that this is a 'neg', we can actually "fold" the spill into
1118 the neg/abs instruction, turning it into an *integer* operation, like this:
1120 xorl 2147483648, [mem+4] ## 2147483648 = (1 << 31)
1122 you could also use xorb, but xorl is less likely to lead to a partial register
1123 stall. Here is a contrived testcase:
1126 void test(double *P) {
1136 //===---------------------------------------------------------------------===//
The code generated on x86 for checking for signed overflow of a multiply, when
written the obvious way, is much longer than it needs to be.
1141 int x(int a, int b) {
1142 long long prod = (long long)a*b;
  return prod > 0x7FFFFFFF || prod < (-0x7FFFFFFF-1);
}
1146 See PR2053 for more details.
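For comparison, the short form is essentially what a single widening imul
already gives us, since it sets OF on signed overflow; with the later GCC/Clang
builtin this can be written as (an illustrative sketch):

int x2(int a, int b) {
  int prod;
  return __builtin_smul_overflow(a, b, &prod);   /* roughly imul + seto */
}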
1148 //===---------------------------------------------------------------------===//
We should investigate using cdq/cltd (effect: edx = sar eax, 31)
more aggressively; it should cost the same as a move+shift on any modern
processor, but it's a lot shorter. The downside is that it puts more
pressure on register allocation because it has fixed operands.
1156 int abs(int x) {return x < 0 ? -x : x;}
1158 gcc compiles this to the following when using march/mtune=pentium2/3/4/m/etc.:
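GCC's sequence is essentially the sign-mask idiom; expressed in C it is roughly
(an illustrative sketch, not the exact output):

int abs_branchfree(int x) {
  int mask = x >> 31;         /* what cdq/cltd computes into edx: 0 or -1 */
  return (x ^ mask) - mask;   /* conditionally negates x without a branch */
}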
1166 //===---------------------------------------------------------------------===//
1169 int test(unsigned long a, unsigned long b) { return -(a < b); }
1171 We currently compile this to:
1173 define i32 @test(i32 %a, i32 %b) nounwind {
1174 %tmp3 = icmp ult i32 %a, %b ; <i1> [#uses=1]
1175 %tmp34 = zext i1 %tmp3 to i32 ; <i32> [#uses=1]
  %tmp5 = sub i32 0, %tmp34               ; <i32> [#uses=1]
  ret i32 %tmp5
}
1190 Several deficiencies here. First, we should instcombine zext+neg into sext:
1192 define i32 @test2(i32 %a, i32 %b) nounwind {
1193 %tmp3 = icmp ult i32 %a, %b ; <i1> [#uses=1]
  %tmp34 = sext i1 %tmp3 to i32           ; <i32> [#uses=1]
  ret i32 %tmp34
}
1198 However, before we can do that, we have to fix the bad codegen that we get for
1210 This code should be at least as good as the code above. Once this is fixed, we
1211 can optimize this specific case even more to:
1218 //===---------------------------------------------------------------------===//
1220 Take the following code (from
1221 http://gcc.gnu.org/bugzilla/show_bug.cgi?id=16541):
1223 extern unsigned char first_one[65536];
int FirstOnet(unsigned long long arg1) {
  if (arg1 >> 48) return (first_one[arg1 >> 48]);
  return 0;
}
1232 The following code is currently generated:
1237 jb .LBB1_2 # UnifiedReturnBlock
1240 movzbl first_one(%eax), %eax
1242 .LBB1_2: # UnifiedReturnBlock
1246 We could change the "movl 8(%esp), %eax" into "movzwl 10(%esp), %eax"; this
1247 lets us change the cmpl into a testl, which is shorter, and eliminate the shift.
1249 //===---------------------------------------------------------------------===//
1251 We compile this function:
define i32 @foo(i32 %a, i32 %b, i32 %c, i8 zeroext %d) nounwind {
entry:
1255 %tmp2 = icmp eq i8 %d, 0 ; <i1> [#uses=1]
1256 br i1 %tmp2, label %bb7, label %bb
1258 bb: ; preds = %entry
  %tmp6 = add i32 %b, %a                  ; <i32> [#uses=1]
  ret i32 %tmp6
1262 bb7: ; preds = %entry
  %tmp10 = sub i32 %a, %c                 ; <i32> [#uses=1]
  ret i32 %tmp10
}
1284 There's an obviously unnecessary movl in .LBB0_2, and we could eliminate a
1285 couple more movls by putting 4(%esp) into %eax instead of %ecx.
1287 //===---------------------------------------------------------------------===//
1294 cvtss2sd LCPI1_0, %xmm1
1296 movsd 176(%esp), %xmm2
1301 mulsd LCPI1_23, %xmm4
1302 addsd LCPI1_24, %xmm4
1304 addsd LCPI1_25, %xmm4
1306 addsd LCPI1_26, %xmm4
1308 addsd LCPI1_27, %xmm4
1310 addsd LCPI1_28, %xmm4
1314 movsd 152(%esp), %xmm1
1316 movsd %xmm1, 152(%esp)
1320 LBB1_16: # bb358.loopexit
1321 movsd 152(%esp), %xmm0
1323 addsd LCPI1_22, %xmm0
1324 movsd %xmm0, 152(%esp)
1326 Rather than spilling the result of the last addsd in the loop, we should have
inserted a copy to split the interval (one for the duration of the loop, one
1328 extending to the fall through). The register pressure in the loop isn't high
1329 enough to warrant the spill.
1331 Also check why xmm7 is not used at all in the function.
1333 //===---------------------------------------------------------------------===//
1337 target datalayout = "e-p:32:32:32-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:32:64-f32:32:32-f64:32:64-v64:64:64-v128:128:128-a0:0:64-f80:128:128"
1338 target triple = "i386-apple-darwin8"
1339 @in_exit.4870.b = internal global i1 false ; <i1*> [#uses=2]
define fastcc void @abort_gzip() noreturn nounwind {
entry:
1342 %tmp.b.i = load i1* @in_exit.4870.b ; <i1> [#uses=1]
1343 br i1 %tmp.b.i, label %bb.i, label %bb4.i
1344 bb.i: ; preds = %entry
  tail call void @exit( i32 1 ) noreturn nounwind
  unreachable
1347 bb4.i: ; preds = %entry
1348 store i1 true, i1* @in_exit.4870.b
  tail call void @exit( i32 1 ) noreturn nounwind
  unreachable
}
1352 declare void @exit(i32) noreturn nounwind
1355 _abort_gzip: ## @abort_gzip
1358 movb _in_exit.4870.b, %al
1362 We somehow miss folding the movb into the cmpb.
1364 //===---------------------------------------------------------------------===//
int test(int x, int y) { return x-y-1; }
1380 it would be better to codegen as: x+~y (notl+addl)
1382 //===---------------------------------------------------------------------===//
1386 int foo(const char *str,...)
1388 __builtin_va_list a; int x;
 __builtin_va_start(a,str); x = __builtin_va_arg(a,int); __builtin_va_end(a);
 return x;
}
1393 gets compiled into this on x86-64:
1395 movaps %xmm7, 160(%rsp)
1396 movaps %xmm6, 144(%rsp)
1397 movaps %xmm5, 128(%rsp)
1398 movaps %xmm4, 112(%rsp)
1399 movaps %xmm3, 96(%rsp)
1400 movaps %xmm2, 80(%rsp)
1401 movaps %xmm1, 64(%rsp)
1402 movaps %xmm0, 48(%rsp)
1409 movq %rax, 192(%rsp)
1410 leaq 208(%rsp), %rax
1411 movq %rax, 184(%rsp)
1414 movl 176(%rsp), %eax
1418 movq 184(%rsp), %rcx
1420 movq %rax, 184(%rsp)
1428 addq 192(%rsp), %rcx
1429 movl %eax, 176(%rsp)
1435 leaq 104(%rsp), %rax
1436 movq %rsi, -80(%rsp)
1438 movq %rax, -112(%rsp)
1439 leaq -88(%rsp), %rax
1440 movq %rax, -104(%rsp)
1444 movq -112(%rsp), %rdx
1452 addq -104(%rsp), %rdx
1454 movl %eax, -120(%rsp)
1459 and it gets compiled into this on x86:
1479 //===---------------------------------------------------------------------===//
1481 Teach tblgen not to check bitconvert source type in some cases. This allows us
1482 to consolidate the following patterns in X86InstrMMX.td:
1484 def : Pat<(v2i32 (bitconvert (i64 (vector_extract (v2i64 VR128:$src),
1486 (v2i32 (MMX_MOVDQ2Qrr VR128:$src))>;
1487 def : Pat<(v4i16 (bitconvert (i64 (vector_extract (v2i64 VR128:$src),
1489 (v4i16 (MMX_MOVDQ2Qrr VR128:$src))>;
1490 def : Pat<(v8i8 (bitconvert (i64 (vector_extract (v2i64 VR128:$src),
1492 (v8i8 (MMX_MOVDQ2Qrr VR128:$src))>;
1494 There are other cases in various td files.
1496 //===---------------------------------------------------------------------===//
1498 Take something like the following on x86-32:
1499 unsigned a(unsigned long long x, unsigned y) {return x % y;}
1501 We currently generate a libcall, but we really shouldn't: the expansion is
shorter and likely faster than the libcall. The expected code is something
like this:
1514 A similar code sequence works for division.
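The expansion is short because, when the divisor fits in 32 bits, the remainder
can be computed with two 32-bit divides: divide the high word first, and feed
its remainder into the second divide. A C sketch of the idea (the real
expansion would simply be two divl instructions):

unsigned rem64by32(unsigned long long x, unsigned y) {
  unsigned hi = (unsigned)(x >> 32);
  unsigned lo = (unsigned)x;
  unsigned long long rest = ((unsigned long long)(hi % y) << 32) | lo;
  return (unsigned)(rest % y);   /* safe as one divl: (hi % y) < y, so the quotient fits in 32 bits */
}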
1516 //===---------------------------------------------------------------------===//
These should compile to the same code, but the latter codegens to useless
1519 instructions on X86. This may be a trivial dag combine (GCC PR7061):
1521 struct s1 { unsigned char a, b; };
1522 unsigned long f1(struct s1 x) {
1525 struct s2 { unsigned a: 8, b: 8; };
1526 unsigned long f2(struct s2 x) {
1530 //===---------------------------------------------------------------------===//
1532 We currently compile this:
1534 define i32 @func1(i32 %v1, i32 %v2) nounwind {
1536 %t = call {i32, i1} @llvm.sadd.with.overflow.i32(i32 %v1, i32 %v2)
1537 %sum = extractvalue {i32, i1} %t, 0
1538 %obit = extractvalue {i32, i1} %t, 1
1539 br i1 %obit, label %overflow, label %normal
1543 call void @llvm.trap()
1546 declare {i32, i1} @llvm.sadd.with.overflow.i32(i32, i32)
1547 declare void @llvm.trap()
1554 jo LBB1_2 ## overflow
1560 it would be nice to produce "into" someday.
1562 //===---------------------------------------------------------------------===//
void vec_mpys1(int y[], const int x[], int scaler) {
  int i;
  for (i = 0; i < 150; i++)
    y[i] += (((long long)scaler * (long long)x[i]) >> 31);
}
1572 Compiles to this loop with GCC 3.x:
1577 shrdl $31, %edx, %eax
1578 addl %eax, (%esi,%ecx,4)
1583 llvm-gcc compiles it to the much uglier:
1587 movl (%eax,%edi,4), %ebx
1596 shldl $1, %eax, %ebx
1598 addl %ebx, (%eax,%edi,4)
The issue is that we hoist the cast of "scaler" to long long outside of the
loop, so the value comes into the loop as two values, and
RegsForValue::getCopyFromRegs doesn't know how to put an AssertSext on the
constructed BUILD_PAIR which represents the cast value.
1608 //===---------------------------------------------------------------------===//
1610 Test instructions can be eliminated by using EFLAGS values from arithmetic
1611 instructions. This is currently not done for mul, and, or, xor, neg, shl,
1612 sra, srl, shld, shrd, atomic ops, and others. It is also currently not done
for read-modify-write instructions. It is also currently not done if the
1614 OF or CF flags are needed.
1616 The shift operators have the complication that when the shift count is
1617 zero, EFLAGS is not set, so they can only subsume a test instruction if
1618 the shift count is known to be non-zero. Also, using the EFLAGS value
1619 from a shift is apparently very slow on some x86 implementations.
1621 In read-modify-write instructions, the root node in the isel match is
1622 the store, and isel has no way for the use of the EFLAGS result of the
1623 arithmetic to be remapped to the new node.
Add and subtract instructions set OF on signed overflow and CF on unsigned
1626 overflow, while test instructions always clear OF and CF. In order to
1627 replace a test with an add or subtract in a situation where OF or CF is
1628 needed, codegen must be able to prove that the operation cannot see
1629 signed or unsigned overflow, respectively.
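A small example of the desired transformation (illustrative): the instruction
that computes the value already sets ZF, so the compare against zero should not
need a separate test.

int any_set(int a, int b) {
  int t = a | b;              /* the orl sets ZF */
  return (t != 0) ? t : 42;   /* the explicit test of t is redundant */
}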
1631 //===---------------------------------------------------------------------===//
1633 memcpy/memmove do not lower to SSE copies when possible. A silly example is:
1634 define <16 x float> @foo(<16 x float> %A) nounwind {
1635 %tmp = alloca <16 x float>, align 16
1636 %tmp2 = alloca <16 x float>, align 16
1637 store <16 x float> %A, <16 x float>* %tmp
1638 %s = bitcast <16 x float>* %tmp to i8*
1639 %s2 = bitcast <16 x float>* %tmp2 to i8*
1640 call void @llvm.memcpy.i64(i8* %s, i8* %s2, i64 64, i32 16)
  %R = load <16 x float>* %tmp2
  ret <16 x float> %R
}
1645 declare void @llvm.memcpy.i64(i8* nocapture, i8* nocapture, i64, i32) nounwind
1651 movaps %xmm3, 112(%esp)
1652 movaps %xmm2, 96(%esp)
1653 movaps %xmm1, 80(%esp)
1654 movaps %xmm0, 64(%esp)
1656 movl %eax, 124(%esp)
1658 movl %eax, 120(%esp)
1660 <many many more 32-bit copies>
1661 movaps (%esp), %xmm0
1662 movaps 16(%esp), %xmm1
1663 movaps 32(%esp), %xmm2
1664 movaps 48(%esp), %xmm3
1668 On Nehalem, it may even be cheaper to just use movups when unaligned than to
1669 fall back to lower-granularity chunks.
1671 //===---------------------------------------------------------------------===//
1673 Implement processor-specific optimizations for parity with GCC on these
1674 processors. GCC does two optimizations:
1676 1. ix86_pad_returns inserts a noop before ret instructions if immediately
preceded by a conditional branch or is the target of a jump.
1678 2. ix86_avoid_jump_misspredicts inserts noops in cases where a 16-byte block of
1679 code contains more than 3 branches.
The first one is done for all AMDs, Core2, and "Generic".
The second one is done for: Atom, Pentium Pro, all AMDs, Pentium 4, Nocona,
Core 2, and "Generic".
1685 //===---------------------------------------------------------------------===//
1688 int a(int x) { return (x & 127) > 31; }
1704 This should definitely be done in instcombine, canonicalizing the range
1705 condition into a != condition. We get this IR:
1707 define i32 @a(i32 %x) nounwind readnone {
1709 %0 = and i32 %x, 127 ; <i32> [#uses=1]
1710 %1 = icmp ugt i32 %0, 31 ; <i1> [#uses=1]
  %2 = zext i1 %1 to i32                  ; <i32> [#uses=1]
  ret i32 %2
}
1715 Instcombine prefers to strength reduce relational comparisons to equality
comparisons when possible; this should be another case of that. This could
1717 be handled pretty easily in InstCombiner::visitICmpInstWithInstAndIntCst, but it
1718 looks like InstCombiner::visitICmpInstWithInstAndIntCst should really already
1719 be redesigned to use ComputeMaskedBits and friends.
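Concretely, the canonical form would be something like this (equivalent,
because a masked value greater than 31 must have bit 5 or bit 6 set):

int a2(int x) { return (x & 96) != 0; }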
1722 //===---------------------------------------------------------------------===//
1724 int x(int a) { return (a&0xf0)>>4; }
1733 movzbl 4(%esp), %eax
1737 //===---------------------------------------------------------------------===//
1740 int x(int a) { return (a & 0x80) ? 0x100 : 0; }
1741 int y(int a) { return (a & 0x80) *2; }
1756 This is another general instcombine transformation that is profitable on all
1757 targets. In LLVM IR, these functions look like this:
1759 define i32 @x(i32 %a) nounwind readnone {
1761 %0 = and i32 %a, 128
1762 %1 = icmp eq i32 %0, 0
1763 %iftmp.0.0 = select i1 %1, i32 0, i32 256
1767 define i32 @y(i32 %a) nounwind readnone {
1770 %1 = and i32 %0, 256
Replacing an icmp+select with a shift should always be considered profitable in
instcombine.
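The shift form that both functions should reduce to (illustrative):

int x_shift(int a) { return (a << 1) & 0x100; }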
1777 //===---------------------------------------------------------------------===//
Re-implement atomic builtins __sync_add_and_fetch() and __sync_sub_and_fetch
properly.

When the return value is not used (i.e. we only care about the value in
memory), x86 does not have to use a fetch-and-add (xadd) to implement these.
Instead, it can use plain add, sub, inc, and dec instructions with the "lock"
prefix.
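For example, when the result is unused these should become a single lock
add/sub (or inc/dec) on the memory operand (illustrative):

void bump(volatile int *p) {
  __sync_add_and_fetch(p, 1);   /* result unused: a plain "lock addl $1, (mem)" suffices */
}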
This is currently implemented using a bit of an instruction selection trick. The
issue is that the target-independent pattern produces one output and a chain, and we
want to map it into one that just outputs a chain. The current trick is to select
1789 it into a MERGE_VALUES with the first definition being an implicit_def. The
1790 proper solution is to add new ISD opcodes for the no-output variant. DAG
1791 combiner can then transform the node before it gets to target node selection.
1793 Problem #2 is we are adding a whole bunch of x86 atomic instructions when in
1794 fact these instructions are identical to the non-lock versions. We need a way to
1795 add target specific information to target nodes and have this information
1796 carried over to machine instructions. Asm printer (or JIT) can use this
1797 information to add the "lock" prefix.
1799 //===---------------------------------------------------------------------===//
1801 _Bool bar(int *x) { return *x & 1; }
1803 define zeroext i1 @bar(i32* nocapture %x) nounwind readonly {
1805 %tmp1 = load i32* %x ; <i32> [#uses=1]
1806 %and = and i32 %tmp1, 1 ; <i32> [#uses=1]
  %tobool = icmp ne i32 %and, 0           ; <i1> [#uses=1]
  ret i1 %tobool
}
1819 Missed optimization: should be movl+andl.
1821 //===---------------------------------------------------------------------===//
1823 Consider the following two functions compiled with clang:
1824 _Bool foo(int *x) { return !(*x & 4); }
1825 unsigned bar(int *x) { return !(*x & 4); }
1842 The second function generates more code even though the two functions are
functionally identical.
1845 //===---------------------------------------------------------------------===//
1847 Take the following C code:
1848 int x(int y) { return (y & 63) << 14; }
1850 Code produced by gcc:
1856 Code produced by clang:
1862 The code produced by gcc is 3 bytes shorter. This sort of construct often
1863 shows up with bitfields.
1865 //===---------------------------------------------------------------------===//
1867 Take the following C code:
1868 int f(int a, int b) { return (unsigned char)a == (unsigned char)b; }
1870 We generate the following IR with clang:
1871 define i32 @f(i32 %a, i32 %b) nounwind readnone {
1873 %tmp = xor i32 %b, %a ; <i32> [#uses=1]
1874 %tmp6 = and i32 %tmp, 255 ; <i32> [#uses=1]
1875 %cmp = icmp eq i32 %tmp6, 0 ; <i1> [#uses=1]
  %conv5 = zext i1 %cmp to i32            ; <i32> [#uses=1]
  ret i32 %conv5
}
1880 And the following x86 code:
1887 A cmpb instead of the xorl+testb would be one instruction shorter.
1889 //===---------------------------------------------------------------------===//
1891 Given the following C code:
1892 int f(int a, int b) { return (signed char)a == (signed char)b; }
1894 We generate the following IR with clang:
1895 define i32 @f(i32 %a, i32 %b) nounwind readnone {
1897 %sext = shl i32 %a, 24 ; <i32> [#uses=1]
1898 %conv1 = ashr i32 %sext, 24 ; <i32> [#uses=1]
1899 %sext6 = shl i32 %b, 24 ; <i32> [#uses=1]
1900 %conv4 = ashr i32 %sext6, 24 ; <i32> [#uses=1]
1901 %cmp = icmp eq i32 %conv1, %conv4 ; <i1> [#uses=1]
  %conv5 = zext i1 %cmp to i32            ; <i32> [#uses=1]
  ret i32 %conv5
}
1906 And the following x86 code:
1915 It should be possible to eliminate the sign extensions.
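The sign extensions are unnecessary; equality of the low bytes can be tested
exactly as in the unsigned char case above (illustrative):

int f_nosext(int a, int b) { return ((a ^ b) & 255) == 0; }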
1917 //===---------------------------------------------------------------------===//
1919 LLVM misses a load+store narrowing opportunity in this code:
1921 %struct.bf = type { i64, i16, i16, i32 }
1923 @bfi = external global %struct.bf* ; <%struct.bf**> [#uses=2]
1925 define void @t1() nounwind ssp {
1927 %0 = load %struct.bf** @bfi, align 8 ; <%struct.bf*> [#uses=1]
1928 %1 = getelementptr %struct.bf* %0, i64 0, i32 1 ; <i16*> [#uses=1]
1929 %2 = bitcast i16* %1 to i32* ; <i32*> [#uses=2]
1930 %3 = load i32* %2, align 1 ; <i32> [#uses=1]
1931 %4 = and i32 %3, -65537 ; <i32> [#uses=1]
1932 store i32 %4, i32* %2, align 1
1933 %5 = load %struct.bf** @bfi, align 8 ; <%struct.bf*> [#uses=1]
1934 %6 = getelementptr %struct.bf* %5, i64 0, i32 1 ; <i16*> [#uses=1]
1935 %7 = bitcast i16* %6 to i32* ; <i32*> [#uses=2]
1936 %8 = load i32* %7, align 1 ; <i32> [#uses=1]
1937 %9 = and i32 %8, -131073 ; <i32> [#uses=1]
  store i32 %9, i32* %7, align 1
  ret void
}
1942 LLVM currently emits this:
1944 movq bfi(%rip), %rax
1945 andl $-65537, 8(%rax)
1946 movq bfi(%rip), %rax
1947 andl $-131073, 8(%rax)
1950 It could narrow the loads and stores to emit this:
1952 movq bfi(%rip), %rax
1954 movq bfi(%rip), %rax
1958 The trouble is that there is a TokenFactor between the store and the
1959 load, making it non-trivial to determine if there's anything between
1960 the load and the store which would prohibit narrowing.
1962 //===---------------------------------------------------------------------===//