//===---------------------------------------------------------------------===//
// Random ideas for the ARM backend (Thumb specific).
//===---------------------------------------------------------------------===//

* Add support for compiling functions in both ARM and Thumb mode, then taking
  the smaller of the two.

* Add support for compiling individual basic blocks in thumb mode, when in a
  larger ARM function. This can be used for presumed cold code, like paths
  to abort (failure path of asserts), EH handling code, etc.

* Thumb doesn't have normal pre/post increment addressing modes, but you can
  load/store 32-bit integers with pre/postinc by using load/store multiple
  instrs with a single register (see the sketch after this list).

* Make better use of high registers r8, r10, r11, r12 (ip). Some variants of
  add and cmp instructions can use high registers. Also, we can use them as
  temporaries to spill values into (also sketched after this list).

* In thumb mode, short, byte, and bool preferred alignments are currently set
  to 4 to accommodate the ISA restriction (i.e. add sp, #imm; imm must be a
  multiple of 4).
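
A sketch of the pre/postinc trick from the list above (register choices are
illustrative): a load/store multiple with a single register in the list acts
as a post-incrementing load/store.

        ldmia   r0!, {r1}       @ r1 = *r0, then r0 += 4
        stmia   r0!, {r1}       @ *r0 = r1, then r0 += 4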
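
Likewise a sketch of the high-register idea (registers again illustrative):
some 16-bit add/cmp/mov forms take high registers directly, and an otherwise
unused high register can hold a spilled value instead of a stack slot.

        add     r0, r8          @ 16-bit add reading a high register
        cmp     r0, r10         @ 16-bit compare against a high register
        mov     r12, r0         @ "spill" r0 into ip ...
        @ ... later ...
        mov     r0, r12         @ ... and restore it without touching memory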

//===---------------------------------------------------------------------===//

Potential jumptable improvements:

* If we know function size is less than (1 << 16) * 2 bytes, we can use 16-bit
  jumptable entries (e.g. (L1 - L2) >> 1). Or even smaller entries if the
  function is even smaller. This also applies to ARM.

* Thumb jumptable codegen can improve given some help from the assembler. This
  is what we generate right now:

        .set    PCRELV0, (LJTI1_0_0-(LPCRELL0+4))

Note there is another pc relative add that we can take advantage of.
        add r1, pc, #imm_8 * 4

We should be able to generate:

if the assembler can translate the add to:
        add r1, pc, #((LJTI1_0_0-(LPCRELL0+4))&0xfffffffc)

Note the assembler also does something similar for constpool loads:
        ldr r0, pc, #((LCPI1_0-(LPCRELL0+4))&0xfffffffc)

//===---------------------------------------------------------------------===//

We compile the following:

define i16 @func_entry_2E_ce(i32 %i) {
        switch i32 %i, label %bb12.exitStub [
                 i32 0, label %bb4.exitStub
                 i32 1, label %bb9.exitStub
                 i32 2, label %bb4.exitStub
                 i32 3, label %bb4.exitStub
                 i32 7, label %bb9.exitStub
                 i32 8, label %bb.exitStub
                 i32 9, label %bb9.exitStub

This currently compiles to the following Thumb code (excerpt):

        bhi     LBB1_4  @bb12.exitStub
        bne     LBB1_5  @bb4.exitStub
        bne     LBB1_6  @bb9.exitStub
        bne     LBB1_7  @bb.exitStub
LBB1_4: @bb12.exitStub
LBB1_5: @bb4.exitStub
LBB1_6: @bb9.exitStub

gcc compiles to (excerpt):

        @ lr needed for prologue
        ands    r0, r3, r2, asl r0

GCC is doing a couple of clever things here:
1. It is predicating one of the returns. This isn't a clear win though: in
   cases where that return isn't taken, it is replacing one condbranch with
   two 'ne' predicated instructions.
2. It is sinking the shift of "1 << i" into the tst, and using ands instead of
   tst. This will probably require whole function isel.
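
A sketch of what those two tricks look like together (ARM code, with
illustrative registers rather than gcc's exact output):

        ands    r0, r3, r2, asl r1      @ "1 << i" folded into the and, flags set
        movne   r0, #1                  @ predicated move ...
        bxne    lr                      @ ... and predicated return, no extra branch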

//===---------------------------------------------------------------------===//

When spilling in thumb mode and the sp offset is too large to fit in the ldr /
str offset field, we load the offset from a constpool entry and add it to sp:
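
A sketch of that sequence (registers and the constant pool label are
illustrative):

        ldr     r2, LCPI0_0     @ load the large stack offset from a constpool entry
        add     r2, sp          @ r2 = sp + offset; neither instruction touches the flags
        ldr     r1, [r2]        @ reload the spilled value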

These instructions preserve the condition codes, which is important if the
spill is between a cmp and a bcc instruction. However, we can use the
(potentially) cheaper sequence if we know it's ok to clobber the condition
register.
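
A sketch of the cheaper form (offsets illustrative); the 8-bit immediate add
is the one that clobbers the flags:

        add     r1, sp, #1020   @ largest sp-relative immediate add (255 * 4), flags untouched
        add     r1, #132        @ 8-bit immediate add (an ADDS encoding), clobbers the flags
        ldr     r1, [r1, #28]   @ reload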

This is especially bad when dynamic alloca is used: all fixed size stack
objects are then referenced off the frame pointer with negative offsets. See
oggenc for an example.

//===---------------------------------------------------------------------===//

Poor codegen for f7 in test/CodeGen/ARM/select.ll:

//===---------------------------------------------------------------------===//

Make the register allocator / spiller smarter so we can re-materialize
"mov r, imm", etc. Almost all Thumb instructions clobber the condition code.
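
For example (hypothetical spill slot and constant): instead of reloading a
spilled constant, re-emit the move, keeping in mind that the Thumb1 move
immediate also sets the flags.

        ldr     r0, [sp, #8]    @ today: reload the spilled constant from its stack slot
        mov     r0, #42         @ rematerialized form: cheaper, but this encoding also
                                @ clobbers the condition codes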

//===---------------------------------------------------------------------===//

Add ldmia, stmia support.
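
Presumably this means merging consecutive loads / stores into a single
load / store multiple, e.g. (only valid when updating the base register is
acceptable):

        ldr     r1, [r0]
        ldr     r2, [r0, #4]
        ldr     r3, [r0, #8]

could become

        ldmia   r0!, {r1, r2, r3}       @ loads the same three words, advances r0 by 12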

//===---------------------------------------------------------------------===//

Thumb load / store address mode offsets are scaled. The values kept in the
instruction operands are pre-scaled values. This probably ought to be changed
to avoid extra work when we convert Thumb2 instructions to Thumb1 instructions.

//===---------------------------------------------------------------------===//

We need to make (some of the) Thumb1 instructions predicable. That will allow
shrinking of predicated Thumb2 instructions. To allow this, we need to be able
to toggle the 's' bit since they do not set CPSR when they are inside IT blocks.
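
An illustrative example of the 's' bit issue: the same 16-bit MOV-immediate
encoding is written "movs" (and sets the flags) outside an IT block, but does
not set the flags inside one.

        cmp     r0, #0
        ite     eq
        moveq   r0, #1          @ 16-bit MOV immediate; no flag update inside the IT block
        movne   r0, #0          @ outside an IT block the same encoding would be "movs"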

//===---------------------------------------------------------------------===//

Make use of hi register variants of cmp: tCMPhir / tCMPZhir.

//===---------------------------------------------------------------------===//

Thumb1 immediate fields sometimes keep pre-scaled values. See
Thumb1RegisterInfo::eliminateFrameIndex. This is inconsistent with ARM and
Thumb2.

//===---------------------------------------------------------------------===//

Rather than having tBR_JTr print a ".align 2" and having the constant island
pass pad it, add a target-specific ALIGN instruction instead. That way,
GetInstSizeInBytes won't have to over-estimate. It can also be used by the
loop alignment pass.