//===---------------------------------------------------------------------===//
// Random ideas for the ARM backend.
//===---------------------------------------------------------------------===//
Reimplement 'select' in terms of 'SEL'.

* We would really like to support UXTAB16, but we need to prove that the
  add cannot overflow from the low 16-bit chunk into the high one (see the
  sketch after this list).

* Implement pre/post increment support.  (e.g. PR935)
* Coalesce stack slots!
* Implement smarter constant generation for binops with large immediates.

* Consider materializing FP constants like 0.0f and 1.0f using integer
  immediate instructions, then copying the value to the FPU.  Would that be
  slower than a load into the FPU?
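Here is a rough C model of the UXTAB16 operation mentioned above (my reading
of its semantics, with the rotation of the byte operand ignored): each 16-bit
lane of the first operand gets a zero-extended byte of the second added to it,
with no carry between the lanes.  Reusing it for a plain 32-bit add is only
valid if the low-lane add provably cannot carry into the high lane.

#include <stdint.h>

/* Approximate model of UXTAB16: two independent 16-bit adds, with no carry
   propagating from the low halfword into the high halfword. */
static uint32_t uxtab16(uint32_t rn, uint32_t rm) {
  uint16_t lo = (uint16_t)(rn & 0xffffu) + (uint8_t)(rm & 0xffu);
  uint16_t hi = (uint16_t)(rn >> 16) + (uint8_t)((rm >> 16) & 0xffu);
  return ((uint32_t)hi << 16) | lo;
}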
//===---------------------------------------------------------------------===//

Crazy idea: Consider code that uses lots of 8-bit or 16-bit values.  By the
time regalloc happens, these values are in a 32-bit register, usually with the
top bits known to be sign- or zero-extended.  If spilled, we should be able to
spill these to an 8-bit or 16-bit stack slot, zero- or sign-extending as part
of the reload.

Doing this reduces the size of the stack frame (important for thumb etc), and
also increases the likelihood that we will be able to reload multiple values
from the stack with a single load.
//===---------------------------------------------------------------------===//

The constant island pass is in good shape.  Some cleanups might be desirable,
but there is unlikely to be much improvement in the generated code.

1.  There may be some advantage to trying to be smarter about the initial
    placement, rather than putting everything at the end.

2.  There might be some compile-time efficiency to be had by representing
    consecutive islands as a single block rather than multiple blocks.

3.  Use a priority queue to sort constant pool users in inverse order of
    position so we always process the one closest to the end of the function
    first.  This may simplify CreateNewWater.
//===---------------------------------------------------------------------===//

Eliminate copysign custom expansion.  We are still generating crappy code with
default expansion + if-conversion.

//===---------------------------------------------------------------------===//
Eliminate one instruction from:

define i32 @_Z6slow4bii(i32 %x, i32 %y) {
	%tmp = icmp sgt i32 %x, %y
	%retval = select i1 %tmp, i32 %x, i32 %y
	ret i32 %retval
}
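For reference, this IR is just a signed max; an equivalent C form (the name
matches the mangled symbol, which demangles to slow4b(int, int)) is:

/* Select the larger of two signed ints. */
int slow4b(int x, int y) { return x > y ? x : y; }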
//===---------------------------------------------------------------------===//

Implement long long "X-3" with instructions that fold the immediate in.  These
were disabled due to badness with the ARM carry flag on subtracts.

//===---------------------------------------------------------------------===//
We currently compile abs:

int foo(int p) { return p < 0 ? -p : p; }

very, uh, literally.  It could be a 3 operation sequence:

  t = (p sra 31);
  res = (p xor t) - t

which would be better.  This occurs in png decode.
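A C version of that branch-free form (the function name is just for
illustration; it assumes arithmetic right shift of negative ints, which is
what ARM's asr provides):

int iabs(int p) {
  int t = p >> 31;        /* 0 for non-negative p, -1 for negative p */
  return (p ^ t) - t;     /* conditional negate without a branch */
}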
//===---------------------------------------------------------------------===//

More load / store optimizations:

1) Better representation for block transfer?  This shows up in Olden/power.
   If we can spare the registers, it would be better to use fldm and fstm
   there, but that needs a major register allocator enhancement.

2) Can we recognize the relative positions of constantpool entries, i.e. treat
   neighboring entries as loads at small offsets from a common base, so that
   the ldr's can be combined into a single ldm?  See Olden/power.
Note that for ARM v4, gcc uses ldmia to load a pair of 32-bit values
representing a 64-bit double FP constant.

3) Struct copies appear to be done field by field instead of by words, at
   least sometimes:

struct foo { int x; short s; char c1; char c2; };
void cpy(struct foo*a, struct foo*b) { *a = *b; }

In this benchmark poor handling of aggregate copies has shown up as having a
large effect on size, and possibly speed as well (we don't have a good way to
measure on ARM).
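For illustration, here is the word-oriented copy the backend could aim for
(hypothetical helper; it assumes struct foo is 8 bytes and word aligned as
above, and ignores strict-aliasing rules for the sake of the sketch).  The two
loads and two stores map naturally onto an ldm/stm pair:

/* Copy the 8-byte struct as two 32-bit words rather than field by field. */
void cpy_words(struct foo *a, struct foo *b) {
  unsigned *dst = (unsigned *)a;
  unsigned *src = (unsigned *)b;
  dst[0] = src[0];
  dst[1] = src[1];
}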
//===---------------------------------------------------------------------===//

* Consider this silly example:

double bar(double x) {
  ...
}

which compiles to code of the following shape:

	stmfd sp!, {r4, r5, r7, lr}
	...
	ldmfd sp!, {r4, r5, r7, pc}

Ignore the prologue and epilogue stuff for a second.  Note the copies to
callee-save registers and the fact that they are only used by the fmdrr
instruction.  It would have been better had the fmdrr been scheduled before
the call, placing the result in a callee-save DPR register.  The two mov ops
would then not have been necessary.
//===---------------------------------------------------------------------===//

Calling convention related stuff:

* gcc's parameter passing implementation is terrible and we suffer as a result:

struct s { double d1; int s1; };

void foo(struct s S) {
  printf("%g, %d\n", S.d1, S.s1);
}

'S' is passed via registers r0, r1, r2.  But gcc stores them to the stack and
then reloads them into r1, r2, and r3 before issuing the call (r0 contains the
address of the format string):

	...
	stmia sp, {r0, r1, r2}
	...

Instead of a stmia, an ldmia, and an ldr, wouldn't it be better to do three
moves?
* Returning an aggregate type is even worse:

struct s foo(void) {
  struct s S = {1.1, 2};
  return S;
}

	...
	@ lr needed for prologue
	ldmia r0, {r0, r1, r2}
	stmia sp, {r0, r1, r2}
	stmia ip, {r0, r1, r2}
	...

r0 (and later ip) is the hidden parameter from the caller pointing to where
the return value should be stored.  The first ldmia loads the constants into
r0, r1, r2.  The last stmia stores r0, r1, r2 into the address passed in.
However, there is one additional stmia that stores r0, r1, and r2 to some
stack location.  That store is dead.
The llvm-gcc generated code looks like this:

csretcc void %foo(%struct.s* %agg.result) {
	%S = alloca %struct.s, align 4		; <%struct.s*> [#uses=1]
	%memtmp = alloca %struct.s		; <%struct.s*> [#uses=1]
	cast %struct.s* %S to sbyte*		; <sbyte*>:0 [#uses=2]
	call void %llvm.memcpy.i32( sbyte* %0, sbyte* cast ({ double, int }* %C.0.904 to sbyte*), uint 12, uint 4 )
	cast %struct.s* %agg.result to sbyte*	; <sbyte*>:1 [#uses=2]
	call void %llvm.memcpy.i32( sbyte* %1, sbyte* %0, uint 12, uint 0 )
	cast %struct.s* %memtmp to sbyte*	; <sbyte*>:2 [#uses=1]
	call void %llvm.memcpy.i32( sbyte* %2, sbyte* %1, uint 12, uint 0 )
	ret void
}

llc ends up issuing two memcpy's (the first memcpy becomes 3 loads from the
constantpool).  Perhaps we should 1) fix llvm-gcc so the memcpy is translated
into a number of loads and stores, or 2) custom lower memcpy (of small size)
to ldmia / stmia.  I think option 2 is better, but the current register
allocator cannot allocate a chunk of registers at a time.

A feasible temporary solution is to use specific physical registers at
lowering time for small (<= 4 words?) transfer sizes.
* The ARM CSRet calling convention requires the hidden argument to be returned
  by the callee.

//===---------------------------------------------------------------------===//
We can definitely do a better job on BB placements to eliminate some branches.
It's very common to see llvm generated assembly where a block BB4 ends with a
conditional branch (beq) to a block BB3, followed by an unconditional branch
to LBB2.  If BB4 is the only predecessor of BB3, then we can emit BB3
immediately after BB4; the beq can then be eliminated and the unconditional
branch to LBB2 turned into a bne.

See McCat/18-imp/ComputeBoundingBoxes for an example.

//===---------------------------------------------------------------------===//
Pre-/post- indexed load / stores:

1) We should not make the pre/post-indexed load/store transform if the base
   pointer is guaranteed to be live beyond the load/store.  This can happen if
   the base pointer is live out of the block in which we are performing the
   optimization.

   In most cases, this is just a wasted optimization.  However, sometimes it
   can negatively impact performance because two-address code is more
   restrictive when it comes to scheduling.

   Unfortunately, liveout information is currently unavailable during DAG
   combine time.

2) Consider splitting an indexed load / store into a pair of add/sub +
   load/store to solve #1 (in TwoAddressInstructionPass.cpp).

3) Enhance LSR to generate more opportunities for indexed ops.

4) Once we add support for multiple result patterns, write indexed load
   patterns instead of C++ instruction selection code.

5) Use FLDM / FSTM to emulate indexed FP load / store.

//===---------------------------------------------------------------------===//
Implement support for some more tricky ways to materialize immediates.  For
example, 0xffff8000 can be built with a short instruction sequence instead of
being loaded from a constantpool entry.
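One candidate two-instruction sequence (my suggestion, not necessarily the one
this note originally had in mind) is an mvn of 0x7f followed by a left shift
by 8.  A quick C check of the arithmetic:

#include <assert.h>
#include <stdint.h>

int main(void) {
  /* mvn r9, #0x7f        ; r9 = 0xffffff80
     mov r9, r9, lsl #8   ; r9 = 0xffff8000 */
  uint32_t r9 = ~(uint32_t)0x7f;
  r9 <<= 8;
  assert(r9 == 0xffff8000u);
  return 0;
}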
//===---------------------------------------------------------------------===//

We sometimes generate multiple add / sub instructions to update sp in the
prologue and epilogue if the inc / dec value is too large to fit in a single
immediate operand.  In some cases, perhaps it might be better to load the
value from a constantpool instead.

//===---------------------------------------------------------------------===//
GCC generates significantly better code for this function.

int foo(int StackPtr, unsigned char *Line, unsigned char *Stack, int LineLen) {
    int i = 0;
    ...
    while (StackPtr != 0 && i < (((LineLen) < (32768))? (LineLen) : (32768)))
        Line[i++] = Stack[--StackPtr];
    ...
    while (StackPtr != 0 && i < LineLen)
        ...
}

//===---------------------------------------------------------------------===//
This should compile to the mlas instruction:

int mlas(int x, int y, int z) { return ((x * y + z) < 0) ? 7 : 13; }

//===---------------------------------------------------------------------===//
At some point, we should triage these to see if they still apply to us:

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=19598
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=18560
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=27016

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=11831
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=11826
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=11825
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=11824
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=11823
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=11820
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=10982

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=10242
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=9831
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=9760
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=9759
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=9703
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=9702
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=9663

http://www.inf.u-szeged.hu/gcc-arm/
http://citeseer.ist.psu.edu/debus04linktime.html

//===---------------------------------------------------------------------===//
gcc generates smaller code for this function at -O2 or -Os:

void foo(signed char* p) {
  ...
}

llvm decides it's a good idea to turn the repeated if...else into a
binary tree, as if it were a switch; the resulting code requires -1
compare-and-branches when *p<=2 or *p==5, the same number if *p==4
or *p>6, and +1 if *p==3.  So it should be a speed win
(on balance).  However, the revised code is larger, with 4 conditional
branches instead of 3.

More seriously, there is a byte->word extend before each comparison, where
there should be only one, and the condition codes are not remembered when the
same two values are compared twice.

//===---------------------------------------------------------------------===//
More register scavenging work:

1. Use the register scavenger to track frame indices materialized into
   registers (those that do not fit in addressing modes) to allow reuse in the
   same BB.
2. Finish scavenging for Thumb.

//===---------------------------------------------------------------------===//
More LSR enhancements possible:

1. Teach LSR about pre- and post-indexed ops to allow the iv increment to be
   merged with the load / store.
2. Allow iv reuse even when a type conversion is required.  For example, i8
   and i32 load / store addressing modes are identical.

//===---------------------------------------------------------------------===//
This:

int foo(int a, int b, int c, int d) {
  long long acc = (long long)a * (long long)b;
  acc += (long long)c * (long long)d;
  return (int)(acc >> 32);
}

should compile to use SMLAL (Signed Multiply Accumulate Long), which multiplies
two signed 32-bit values to produce a 64-bit value and accumulates this with a
64-bit value.  We currently generate the same SMLAL-free sequence with both v4
and v6.
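For reference, here is a small C model of what SMLAL computes (my sketch of
the architectural semantics: a signed 32x32->64 multiply accumulated into a
64-bit value held in a register pair):

#include <stdint.h>

/* SMLAL: {*hi:*lo} += (int64_t)rm * rs */
static void smlal(int32_t rm, int32_t rs, uint32_t *lo, uint32_t *hi) {
  uint64_t pair = ((uint64_t)*hi << 32) | *lo;
  int64_t acc = (int64_t)pair;              /* reinterpret as signed 64-bit */
  acc += (int64_t)rm * (int64_t)rs;
  *lo = (uint32_t)acc;
  *hi = (uint32_t)((uint64_t)acc >> 32);
}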
//===---------------------------------------------------------------------===//

std::pair<unsigned, bool> full_add(unsigned a, unsigned b)
{ return std::make_pair(a + b, a + b < a); }
bool no_overflow(unsigned a, unsigned b)
{ return !full_add(a, b).second; }

The boolean here is just the carry-out of the add (a + b < a exactly when the
unsigned add wraps), so no_overflow should compile down to an add that sets
the carry flag followed by a conditional move, rather than a separate compare.
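A scalar C equivalent that makes the carry explicit (this relies on the
GCC/Clang __builtin_uadd_overflow intrinsic; the function name is just for
illustration):

#include <stdbool.h>

/* True when a + b does not wrap, i.e. when the add's carry-out is clear. */
bool no_overflow_scalar(unsigned a, unsigned b) {
  unsigned sum;
  return !__builtin_uadd_overflow(a, b, &sum);
}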
//===---------------------------------------------------------------------===//

Some of the NEON intrinsics may be appropriate for more general use, either
as target-independent intrinsics or perhaps elsewhere in the ARM backend.
Some of them may also be lowered to target-independent SDNodes, and perhaps
some new SDNodes could be added.

For example, maximum, minimum, and absolute value operations are well-defined
and standard operations, both for vector and scalar types.

The current NEON-specific intrinsics for count leading zeros and count one
bits could perhaps be replaced by the target-independent ctlz and ctpop
intrinsics.  It may also make sense to add a target-independent "ctls"
intrinsic for "count leading sign bits".  Likewise, the backend could use
the target-independent SDNodes for these operations.
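A scalar C sketch of the proposed "ctls" operation, assuming it matches what
NEON's VCLS computes (the number of bits immediately below the sign bit that
are copies of it); the helper name is hypothetical:

/* Count leading sign bits, e.g. ctls(0) == 31, ctls(-1) == 31, ctls(1) == 30. */
static int ctls(int x) {
  unsigned y = x < 0 ? ~(unsigned)x : (unsigned)x;   /* reduce to the zero case */
  int n = 0;
  for (unsigned mask = 1u << 30; mask != 0 && (y & mask) == 0; mask >>= 1)
    n++;
  return n;
}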
ARMv6 has scalar saturating and halving adds and subtracts.  The same
intrinsics could possibly be used for both NEON's vector implementations of
those operations and the ARMv6 scalar versions.
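For example, here is a C model of the scalar saturating add (what ARMv6's QADD
computes): clamp to the int32 range instead of wrapping.

#include <stdint.h>

/* Signed saturating 32-bit add. */
static int32_t qadd(int32_t a, int32_t b) {
  int64_t s = (int64_t)a + (int64_t)b;
  if (s > INT32_MAX) return INT32_MAX;    /* saturate on positive overflow */
  if (s < INT32_MIN) return INT32_MIN;    /* saturate on negative overflow */
  return (int32_t)s;
}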
//===---------------------------------------------------------------------===//

ARM::MOVCCr is commutable (by flipping the condition), but we need to
implement ARMInstrInfo::commuteInstruction() to support it.

//===---------------------------------------------------------------------===//
Split out LDR (literal) from the normal ARM LDR instruction.  Also consider
splitting LDR into imm12 and so_reg forms.  This would allow us to clean up
some code; e.g. ARMLoadStoreOptimizer would not need to look at LDR (literal)
and LDR (so_reg), while ARMConstantIslandPass would only need to worry about
LDR (literal).

//===---------------------------------------------------------------------===//
We need to fix constant isel for ARMv6t2 to use MOVT.

//===---------------------------------------------------------------------===//

The constant island pass should make use of the full range of SoImm values for
LEApcrel.  Be careful though, as the last attempt caused infinite looping on
lencod.

//===---------------------------------------------------------------------===//
Predication issue.  This function:

extern unsigned array[ 128 ];
...
  y = array[ x & 127 ];
  if ( ... )
    y = 123456789 & ( y >> 2 );
  ...

compiles to code that computes the shifted value with a separate instruction
and then conditionally selects it:

	...
	ldr r1, [r2, +r1, lsl #2]
	...

It would be better to do something like this, to fold the shift into the
conditional move:

	...
	ldr r1, [r2, +r1, lsl #2]
	...

It saves an instruction and a register.

//===---------------------------------------------------------------------===//