//===---------------------------------------------------------------------===//
// Random notes about and ideas for the SystemZ backend.
//===---------------------------------------------------------------------===//

The initial backend is deliberately restricted to z10.  We should add support
for later architectures at some point.

--

SystemZDAGToDAGISel::SelectInlineAsmMemoryOperand() is passed "m" for all
inline asm memory constraints; it doesn't get to see the original constraint.
This means that it must conservatively treat all inline asm constraints
as the most restricted type, "R".
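
For example (a hypothetical snippet, not from the test suite), the
distinction between the "Q", "R", "S" and "T" memory constraints is
already gone by the time selection happens:

    void f(int *ptr)
    {
      /* MVHI wants a "Q"-style operand (base + 12-bit displacement,
         no index register), but the selector only ever sees "m".  */
      asm volatile ("mvhi %0, 42" : "=Q" (*ptr));
    }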

--

If an inline asm ties an i32 "r" result to an i64 input, the input
will be treated as an i32, leaving the upper bits uninitialised.
For example:

define void @f4(i32 *%dst) {
  %val = call i32 asm "blah $0", "=r,0" (i64 103)
  store i32 %val, i32 *%dst
  ret void
}

from CodeGen/SystemZ/asm-09.ll will use LHI rather than LGHI
to load 103.  This seems to be a general target-independent problem.

--

The tuning of the choice between LOAD ADDRESS (LA) and addition in
SystemZISelDAGToDAG.cpp is suspect.  It should be tweaked based on
performance measurements.

--

We don't support prefetching yet.
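
A sketch of what this affects, assuming the usual __builtin_prefetch
interface:

    void prime(const double *p)
    {
      /* Not yet lowered to PREFETCH DATA (PFD); the hint is simply
         dropped.  */
      __builtin_prefetch(p, 0, 3);   /* read, high temporal locality */
    }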

--

There is no scheduling support.

--

We don't use the BRANCH ON INDEX instructions.
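
Counted loops such as this illustrative one are the natural candidates;
the index update, limit compare and back edge could become a single
BRANCH ON INDEX:

    double sum(const double *a, long n)
    {
      double s = 0.0;
      for (long i = 0; i < n; i++)
        s += a[i];
      return s;
    }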

--

We might want to use BRANCH ON CONDITION for conditional indirect calls
and conditional returns.
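
For example (illustrative only), a guarded early return:

    void f(int *p)
    {
      /* The early return could be a BCR with a condition mask
         (in effect a conditional BR %r14) instead of branching
         around the store.  */
      if (p == 0)
        return;
      *p = 0;
    }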

--

We don't use the TEST DATA CLASS instructions.
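
Class tests like this illustrative one presumably go through ordinary
comparisons today, where a single TEST DATA CLASS could answer them:

    int is_special(double x)
    {
      /* NaN and infinity are both testable by one TEST DATA CLASS
         with a suitable class mask.  */
      return __builtin_isnan(x) || __builtin_isinf(x);
    }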

--

We could use the generic floating-point forms of LOAD COMPLEMENT,
LOAD NEGATIVE and LOAD POSITIVE in cases where we don't need the
condition codes.  For example, we could use LCDFR instead of LCDBR.
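
For instance, a plain negation never needs the CC result:

    double negate(double x)
    {
      /* LCDFR would do here; LCDBR also sets the condition code,
         which nothing consumes.  */
      return -x;
    }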

--

We don't optimize block memory operations, except using single MVCs
for memcpy and single CLCs for memcmp.

It's definitely worth using things like NC, XC and OC with
constant lengths.  MVCIN may be worthwhile too.

We should probably implement general memcpy using MVC with EXECUTE.
Likewise general memcmp using CLC with EXECUTE.  MVCLE and CLCLE
could be useful too.
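
As an illustration, a fixed-length XOR of two distinct byte blocks
maps directly onto XC with a constant length:

    void xor_block(unsigned char *restrict dst,
                   const unsigned char *restrict src)
    {
      /* A single XC with length 16 could replace this loop.  */
      for (int i = 0; i < 16; i++)
        dst[i] ^= src[i];
    }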

--

We don't use CUSE or the TRANSLATE family of instructions for string
operations.  The TRANSLATE ones are probably more difficult to exploit.

--

We don't take full advantage of builtins like fabsl because the calling
conventions require f128s to be returned by invisible reference.
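
A small example of what is blocked (illustrative only):

    long double fabs128(long double x)
    {
      /* Conceptually just LOAD POSITIVE (LPXBR) on the value, but the
         f128 argument and result live behind invisible references.  */
      return __builtin_fabsl(x);
    }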

--

ADD LOGICAL WITH SIGNED IMMEDIATE could be useful when we need to
produce a carry.  SUBTRACT LOGICAL IMMEDIATE could be useful when we
need to produce a borrow.  (Note that there are no memory forms of
ADD LOGICAL WITH CARRY and SUBTRACT LOGICAL WITH BORROW, so the high
part of 128-bit memory operations would probably need to be done
via a register.)
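
For example (illustrative), adding a small constant to a 128-bit value
in memory needs a carry from the low half into the high half:

    void inc128(unsigned __int128 *p)
    {
      /* ALSI on the low doubleword could produce the carry; the high
         half would still go through a register, as noted above.  */
      *p += 1;
    }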

--

We don't use the halfword forms of LOAD REVERSED and STORE REVERSED
(LRVH and STRVH).
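
A halfword byte swap combined with a load, as in this illustrative
function, is the obvious use case:

    unsigned short load_swapped(const unsigned short *p)
    {
      /* LRVH could fold the load and the swap into one instruction.  */
      return __builtin_bswap16(*p);
    }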

--

We could take advantage of the various ... UNDER MASK instructions,
such as ICM and STCM.
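
For instance (illustrative), assembling a 3-byte big-endian field is a
natural fit for ICM with a mask of 0111:

    unsigned int load_be24(const unsigned char *p)
    {
      /* ICM with mask 0b0111 could fetch all three bytes in one
         storage access (the top byte still needs clearing).  */
      return ((unsigned int)p[0] << 16) | (p[1] << 8) | p[2];
    }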

--

DAGCombiner doesn't yet fold truncations of extended loads.  Functions like:

    unsigned long f (unsigned long x, unsigned short *y)
    {
      return (x << 32) | *y;
    }

therefore end up as:

        sllg    %r2, %r2, 32
        llgh    %r0, 0(%r3)
        lr      %r2, %r0
        br      %r14

but truncating the load would give:

        sllg    %r2, %r2, 32
        llh     %r2, 0(%r3)
        br      %r14

--

Functions like:

define i64 @f1(i64 %a) {
  %and = and i64 %a, 1
  ret i64 %and
}

ought to be implemented as:

        lhi     %r0, 1
        ngr     %r2, %r0
        br      %r14

but two-address optimisations reverse the order of the AND and force:

        lhi     %r0, 1
        ngr     %r0, %r2
        lgr     %r2, %r0
        br      %r14

CodeGen/SystemZ/and-04.ll has several examples of this.

--

Out-of-range displacements are usually handled by loading the full
address into a register.  In many cases it would be better to create
an anchor point instead.  E.g. for:

define void @f4a(i128 *%aptr, i64 %base) {
  %addr = add i64 %base, 524288
  %bptr = inttoptr i64 %addr to i128 *
  %a = load volatile i128 *%aptr
  %b = load i128 *%bptr
  %add = add i128 %a, %b
  store i128 %add, i128 *%aptr
  ret void
}

(from CodeGen/SystemZ/int-add-08.ll) we load %base+524288 and %base+524296
into separate registers, rather than using %base+524288 as a base for both.

--

Dynamic stack allocations round the size up to a multiple of 8 bytes
and then allocate that rounded amount.  It would be simpler to subtract
the unrounded size from the copy of the stack pointer and then align
the result.  See CodeGen/SystemZ/alloca-01.ll for an example.
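
A sketch of the kind of code affected (illustrative; use() is a
stand-in for an external consumer):

    extern void use(char *);

    void f(unsigned long n)
    {
      /* n is currently rounded up to a multiple of 8 before being
         subtracted from the stack pointer; subtracting n directly and
         aligning the result would save the rounding step.  */
      char buf[n];
      use(buf);
    }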

--

Atomic loads and stores use the default compare-and-swap based implementation.
This is much too conservative in practice, since the architecture guarantees
that 1-, 2-, 4- and 8-byte loads and stores to aligned addresses are
inherently atomic.
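
For example (illustrative), an aligned 8-byte atomic load could be a
plain LG, yet today it expands to a compare-and-swap loop:

    #include <stdatomic.h>

    long load64(const _Atomic long *p)
    {
      /* Inherently atomic on this architecture; no CS loop needed.  */
      return atomic_load(p);
    }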

--

If needed, we can support 16-byte atomics using LPQ, STPQ and CDSG.
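
e.g. an atomic 16-byte load (illustrative, using the generic __atomic
builtins):

    unsigned __int128 load128(const unsigned __int128 *p)
    {
      unsigned __int128 v;
      /* LPQ could perform this load atomically given suitable
         alignment; stores would use STPQ, and read-modify-write
         sequences a CDSG loop.  */
      __atomic_load(p, &v, __ATOMIC_SEQ_CST);
      return v;
    }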

--

We might want to model all access registers and use them to spill
32-bit values.
190