Target Independent Opportunities:

//===---------------------------------------------------------------------===//

We should recognize various "overflow detection" idioms and translate them into
llvm.uadd.with.overflow and similar intrinsics.  Here is a multiply idiom:

unsigned int mul(unsigned int a, unsigned int b) {
  if ((unsigned long long)a*b > 0xffffffff)
    exit(0);
  return a*b;
}

The legalization code for mul-with-overflow needs to be made more robust before
this can be implemented though.
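
For reference, a sketch of what the example above should become, using the
existing umul.with.overflow intrinsic (block names here are illustrative):

declare { i32, i1 } @llvm.umul.with.overflow.i32(i32, i32)
declare void @exit(i32) noreturn

define i32 @mul(i32 %a, i32 %b) {
entry:
  %res = call { i32, i1 } @llvm.umul.with.overflow.i32(i32 %a, i32 %b)
  %ov = extractvalue { i32, i1 } %res, 1    ; overflow bit
  br i1 %ov, label %overflow, label %normal
overflow:
  call void @exit(i32 0)
  unreachable
normal:
  %val = extractvalue { i32, i1 } %res, 0   ; low 32 bits of the product
  ret i32 %val
}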

//===---------------------------------------------------------------------===//

Get the C front-end to expand hypot(x,y) -> llvm.sqrt(x*x+y*y) when errno and
precision don't matter (-ffast-math).  Misc/mandel will like this. :)  This isn't
safe in general, even on darwin.  See the libm implementation of hypot for
examples (which special-case when x or y is exactly zero to get signed zeros
etc. right).

//===---------------------------------------------------------------------===//

On targets with expensive 64-bit multiply, we could LSR this:

for (i = ...; ++i) {
   x = 1ULL << i;

into:
 long long tmp = 1;
 for (i = ...; ++i, tmp += tmp)
   x = tmp;

This would be a win on ppc32, but not x86 or ppc64.

//===---------------------------------------------------------------------===//

Shrink: (setlt (loadi32 P), 0) -> (setlt (loadi8 Phi), 0)

//===---------------------------------------------------------------------===//

Reassociate should turn things like:

int factorial(int X) {
  return X*X*X*X*X*X*X*X;
}

into llvm.powi calls, allowing the code generator to produce balanced
multiplication trees.

First, the intrinsic needs to be extended to support integers, and second the
code generator needs to be enhanced to lower these to multiplication trees.
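
For reference, the floating-point form of the intrinsic that exists today is:

  %r = call double @llvm.powi.f64(double %x, i32 8)

An integer variant (a hypothetical @llvm.powi.i32) is what reassociate would
need to be able to emit for the example above.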

//===---------------------------------------------------------------------===//

Interesting? testcase for add/shift/mul reassoc:

int bar(int x, int y) {
  return x*x*x+y+x*x*x*x*x*y*y*y*y;
}
int foo(int z, int n) {
  return bar(z, n) + bar(2*z, 2*n);
}

This is blocked on not handling X*X*X -> powi(X, 3) (see note above).  The issue
is that we end up getting t = 2*X, s = t*t, and don't turn this into 4*X*X,
which is the same number of multiplies and is canonical, because the 2*X has
multiple uses.  Here's a simple example:

define i32 @test15(i32 %X1) {
  %B = mul i32 %X1, 47   ; X1*47
  %C = mul i32 %B, %B
  ret i32 %C
}

//===---------------------------------------------------------------------===//

Reassociate should handle the example in GCC PR16157:

extern int a0, a1, a2, a3, a4; extern int b0, b1, b2, b3, b4;
void f () {  /* this can be optimized to four additions... */
        b4 = a4 + a3 + a2 + a1 + a0;
        b3 = a3 + a2 + a1 + a0;
        b2 = a2 + a1 + a0;
        b1 = a1 + a0;
}

This requires reassociating to forms of expressions that are already available,
something that reassoc doesn't think about yet.

//===---------------------------------------------------------------------===//

This function: (derived from GCC PR19988)
double foo(double x, double y) {
  return ((x + 0.1234 * y) * (x + -0.1234 * y));
}

compiles to:
_foo:
        movapd  %xmm1, %xmm2
        mulsd   LCPI1_1(%rip), %xmm1
        mulsd   LCPI1_0(%rip), %xmm2
        addsd   %xmm0, %xmm1
        addsd   %xmm0, %xmm2
        movapd  %xmm1, %xmm0
        mulsd   %xmm2, %xmm0
        ret

Reassociate should be able to turn it into:

double foo(double x, double y) {
  return ((x + 0.1234 * y) * (x - 0.1234 * y));
}

Which allows the multiply by constant to be CSE'd, producing:

_foo:
        mulsd   LCPI1_0(%rip), %xmm1
        movapd  %xmm1, %xmm2
        addsd   %xmm0, %xmm2
        subsd   %xmm1, %xmm0
        mulsd   %xmm2, %xmm0
        ret

This doesn't need -ffast-math support at all.  This is particularly bad because
the llvm-gcc frontend is canonicalizing the latter into the former, but clang
doesn't have this problem.

//===---------------------------------------------------------------------===//

These two functions should generate the same code on big-endian systems:

int g(int *j, int *l) { return memcmp(j, l, 4); }
int h(int *j, int *l) { return *j - *l; }

This could be done in SelectionDAGISel.cpp, along with other special cases,
for 1, 2, 4, and 8 bytes.

//===---------------------------------------------------------------------===//

It would be nice to revert this patch:
http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20060213/031986.html

and teach the dag combiner enough to simplify the code expanded before
legalize.  It seems plausible that this knowledge would let it simplify other
stuff too.

//===---------------------------------------------------------------------===//

For vector types, DataLayout.cpp::getTypeInfo() returns alignment that is equal
to the type size. It works but can be overly conservative, as the alignment of
specific vector types is target dependent.

//===---------------------------------------------------------------------===//

We should produce an unaligned load from code like this:

typedef float v4sf __attribute__((vector_size(16)));
v4sf example(float *P) {
  return (v4sf){P[0], P[1], P[2], P[3]};
}

//===---------------------------------------------------------------------===//

Add support for conditional increments, and other related patterns.  Instead
of:

        movl 136(%esp), %eax
        cmpl $0, %eax
        je LBB16_2      #cond_next
LBB16_1:        #cond_true
        incl _foo
LBB16_2:        #cond_next

emit:

        movl    _foo, %eax
        cmpl    $1, %edi
        sbbl    $-1, %eax
        movl    %eax, _foo

//===---------------------------------------------------------------------===//

Combine: a = sin(x), b = cos(x) into a,b = sincos(x).

Expand these to calls of sin/cos and stores:
      double sincos(double x, double *sin, double *cos);
      float sincosf(float x, float *sin, float *cos);
      long double sincosl(long double x, long double *sin, long double *cos);

Doing so could allow SROA of the destination pointers.  See also:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=17687

This is now easily doable with MRVs.  We could even make an intrinsic for this
if anyone cared enough about sincos.

//===---------------------------------------------------------------------===//

quantum_sigma_x in 462.libquantum contains the following loop:

      for(i=0; i<reg->size; i++) {
        /* Flip the target bit of each basis state */
        reg->node[i].state ^= ((MAX_UNSIGNED) 1 << target);
      }

Where MAX_UNSIGNED/state is a 64-bit int.  On a 32-bit platform it would be just
so cool to turn it into something like:

   long long Res = ((MAX_UNSIGNED) 1 << target);
   if (target < 32) {
     for(i=0; i<reg->size; i++)
       reg->node[i].state ^= Res & 0xFFFFFFFFULL;
   } else {
     for(i=0; i<reg->size; i++)
       reg->node[i].state ^= Res & 0xFFFFFFFF00000000ULL;
   }

... which would only do one 32-bit XOR per loop iteration instead of two.

It would also be nice to recognize that reg->size doesn't alias reg->node[i],
but this requires TBAA.

//===---------------------------------------------------------------------===//

This isn't recognized as bswap by instcombine (yes, it really is bswap):

unsigned long reverse(unsigned v) {
    unsigned t;
    t = v ^ ((v << 16) | (v >> 16));
    t &= ~0xff0000;
    v = (v << 24) | (v >> 8);
    return v ^ (t >> 8);
}
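
For reference, what instcombine should produce here is a call to the existing
byte-swap intrinsic (plus a zext to 'unsigned long' on LP64 targets):

  %swapped = call i32 @llvm.bswap.i32(i32 %v)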

//===---------------------------------------------------------------------===//

[LOOP DELETION]

We don't delete this output-free loop, because trip count analysis doesn't
realize that it is finite (if it were infinite, it would be undefined).  Not
having this blocks Loop Idiom from matching strlen and friends.

void foo(char *C) {
  int x = 0;
  while (*C)
    ++x, ++C;
}

//===---------------------------------------------------------------------===//

[LOOP RECOGNITION]

These idioms should be recognized as popcount (see PR1488):

unsigned countbits_slow(unsigned v) {
  unsigned c;
  for (c = 0; v; v >>= 1)
    c += v & 1;
  return c;
}

unsigned int popcount(unsigned int input) {
  unsigned int count = 0;
  for (unsigned int i = 0; i < 4 * 8; i++)
    count += (input >> i) & 1;
  return count;
}

This should be recognized as CLZ:  rdar://8459039

unsigned clz_a(unsigned a) {
  int i;
  for (i = 0; i < 32; i++)
    if (a & (1 << (31 - i)))
      return i;
  return 32;
}

This sort of thing should be added to the loop idiom pass.
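
For reference, the existing intrinsics these should map onto (signatures as of
this writing; newer versions of ctlz also take an is-zero-undefined flag):

  %c = call i32 @llvm.ctpop.i32(i32 %v)   ; population count
  %z = call i32 @llvm.ctlz.i32(i32 %a)    ; count leading zeros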

//===---------------------------------------------------------------------===//

These should turn into single 16-bit (unaligned?) loads on little/big-endian
processors.

unsigned short read_16_le(const unsigned char *adr) {
  return adr[0] | (adr[1] << 8);
}
unsigned short read_16_be(const unsigned char *adr) {
  return (adr[0] << 8) | adr[1];
}
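
A sketch of the desired IR for read_16_le on a little-endian target (the
mismatched-endianness function would additionally need an @llvm.bswap.i16
after the load):

define i16 @read_16_le(i8* %adr) {
  %p = bitcast i8* %adr to i16*
  %v = load i16* %p, align 1      ; single unaligned 16-bit load
  ret i16 %v
}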

//===---------------------------------------------------------------------===//

-instcombine should handle this transform:
   icmp pred (sdiv X, C1), C2
when X, C1, and C2 are unsigned.  Similarly for udiv and signed operands.

Currently InstCombine avoids this transform but will do it when the signs of
the operands and the sign of the divide match. See the FIXME in
InstructionCombining.cpp in the visitSetCondInst method after the switch case
for Instruction::UDiv (around line 4447) for more details.

The SingleSource/Benchmarks/Shootout-C++/hash and hash2 tests have examples of
this construct.

//===---------------------------------------------------------------------===//

[LOOP OPTIMIZATION]

SingleSource/Benchmarks/Misc/dt.c shows several interesting optimization
opportunities in its double_array_divs_variable function: it needs loop
interchange, memory promotion (which LICM already does), vectorization and
variable trip count loop unrolling (since it has a constant trip count). ICC
apparently produces this very nice code with -ffast-math:

..B1.70:                        # Preds ..B1.70 ..B1.69
       mulpd     %xmm0, %xmm1                                  #108.2
       mulpd     %xmm0, %xmm1                                  #108.2
       mulpd     %xmm0, %xmm1                                  #108.2
       mulpd     %xmm0, %xmm1                                  #108.2
       addl      $8, %edx                                      #
       cmpl      $131072, %edx                                 #108.2
       jb        ..B1.70       # Prob 99%                      #108.2

It would be better to count down to zero, but this is a lot better than what we
do.

//===---------------------------------------------------------------------===//

Consider:

typedef unsigned U32;
typedef unsigned long long U64;
int test (U32 *inst, U64 *regs) {
    U64 effective_addr2;
    U32 temp = *inst;
    int r1 = (temp >> 20) & 0xf;
    int b2 = (temp >> 16) & 0xf;
    effective_addr2 = temp & 0xfff;
    if (b2) effective_addr2 += regs[b2];
    b2 = (temp >> 12) & 0xf;
    if (b2) effective_addr2 += regs[b2];
    effective_addr2 &= regs[4];
    if ((effective_addr2 & 3) == 0)
        return 1;
    return 0;
}

Note that only the low 2 bits of effective_addr2 are used.  On 32-bit systems,
we don't eliminate the computation of the top half of effective_addr2 because
we don't have whole-function selection dags.  On x86, this means we use one
extra register for the function when effective_addr2 is declared as U64 than
when it is declared U32.

PHI Slicing could be extended to do this.

//===---------------------------------------------------------------------===//

Tail call elim should be more aggressive, checking to see if the call is
followed by an uncond branch to an exit block.

; This testcase is due to tail-duplication not wanting to copy the return
; instruction into the terminating blocks because there was other code
; optimized out of the function after the taildup happened.
; RUN: llvm-as < %s | opt -tailcallelim | llvm-dis | not grep call

define i32 @t4(i32 %a) {
entry:
        %tmp.1 = and i32 %a, 1          ; <i32> [#uses=1]
        %tmp.2 = icmp ne i32 %tmp.1, 0          ; <i1> [#uses=1]
        br i1 %tmp.2, label %then.0, label %else.0

then.0:         ; preds = %entry
        %tmp.5 = add i32 %a, -1         ; <i32> [#uses=1]
        %tmp.3 = call i32 @t4( i32 %tmp.5 )             ; <i32> [#uses=1]
        br label %return

else.0:         ; preds = %entry
        %tmp.7 = icmp ne i32 %a, 0              ; <i1> [#uses=1]
        br i1 %tmp.7, label %then.1, label %return

then.1:         ; preds = %else.0
        %tmp.11 = add i32 %a, -2                ; <i32> [#uses=1]
        %tmp.9 = call i32 @t4( i32 %tmp.11 )            ; <i32> [#uses=1]
        br label %return

return:         ; preds = %then.1, %else.0, %then.0
        %result.0 = phi i32 [ 0, %else.0 ], [ %tmp.3, %then.0 ],
                            [ %tmp.9, %then.1 ]
        ret i32 %result.0
}

//===---------------------------------------------------------------------===//

Tail recursion elimination should handle:

int pow2m1(int n) {
  if (n == 0)
    return 0;
  return 2 * pow2m1(n - 1) + 1;
}

Also, multiplies can be turned into SHL's, so they should be handled as if
they were associative.  "return foo() << 1" can be tail recursion eliminated.
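
For the record, a sketch of what TRE with an accumulator could produce for
pow2m1 (names here are illustrative):

int pow2m1_iter(int n) {
  int acc = 0;              /* accumulates 2*acc + 1 once per recursion level */
  for (; n != 0; --n)
    acc = 2 * acc + 1;
  return acc;
}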

//===---------------------------------------------------------------------===//

Argument promotion should promote arguments for recursive functions, like
this:

; RUN: llvm-as < %s | opt -argpromotion | llvm-dis | grep x.val

define internal i32 @foo(i32* %x) {
entry:
        %tmp = load i32* %x             ; <i32> [#uses=0]
        %tmp.foo = call i32 @foo( i32* %x )             ; <i32> [#uses=1]
        ret i32 %tmp.foo
}

define i32 @bar(i32* %x) {
entry:
        %tmp3 = call i32 @foo( i32* %x )                ; <i32> [#uses=1]
        ret i32 %tmp3
}

//===---------------------------------------------------------------------===//

We should investigate an instruction sinking pass.  Consider this silly
example in pic mode:

#include <assert.h>
void foo(int x) {
  assert(x);
  //...
}

we compile this to:
_foo:
        subl    $28, %esp
        call    "L1$pb"
"L1$pb":
        popl    %eax
        cmpl    $0, 32(%esp)
        je      LBB1_2  # cond_true
LBB1_1: # return
        # ...
        addl    $28, %esp
        ret
LBB1_2: # cond_true
...

The PIC base computation (call+popl) is only used on one path through the
code, but is currently always computed in the entry block.  It would be
better to sink the picbase computation down into the block for the
assertion, as it is the only one that uses it.  This happens for a lot of
code with early outs.

Another example is loads of arguments, which are usually emitted into the
entry block on targets like x86.  If not used in all paths through a
function, they should be sunk into the paths that use them.

In this case, whole-function isel would also handle this.

//===---------------------------------------------------------------------===//

Investigate lowering of sparse switch statements into perfect hash tables:
http://burtleburtle.net/bob/hash/perfect.html

//===---------------------------------------------------------------------===//

We should turn things like "load+fabs+store" and "load+fneg+store" into the
corresponding integer operations.  On a yonah, this loop:

double a[256];
void foo() {
  int i, b;
  for (b = 0; b < 10000000; b++)
    for (i = 0; i < 256; i++)
      a[i] = -a[i];
}

is twice as slow as this loop:

long long a[256];
void foo() {
  int i, b;
  for (b = 0; b < 10000000; b++)
    for (i = 0; i < 256; i++)
      a[i] ^= (1ULL << 63);
}

and I suspect other processors are similar.  On X86 in particular this is a
big win because doing this with integers allows the use of read/modify/write
instructions.

//===---------------------------------------------------------------------===//

DAG Combiner should try to combine small loads into larger loads when
profitable.  For example, we compile this C++ example:

struct THotKey { short Key; bool Control; bool Shift; bool Alt; };
extern THotKey m_HotKey;
THotKey GetHotKey () { return m_HotKey; }

into (-m64 -O3 -fno-exceptions -static -fomit-frame-pointer):

__Z9GetHotKeyv:                         ## @_Z9GetHotKeyv
        movq    _m_HotKey@GOTPCREL(%rip), %rax
        movzwl  (%rax), %ecx
        movzbl  2(%rax), %edx
        shlq    $16, %rdx
        orq     %rcx, %rdx
        movzbl  3(%rax), %ecx
        shlq    $24, %rcx
        orq     %rdx, %rcx
        movzbl  4(%rax), %eax
        shlq    $32, %rax
        orq     %rcx, %rax
        ret

//===---------------------------------------------------------------------===//

We should add an FRINT node to the DAG to model targets that have legal
implementations of ceil/floor/rint.

//===---------------------------------------------------------------------===//

Consider:

int test() {
  long long input[8] = {1,0,1,0,1,0,1,0};
  foo(input);
}

Clang compiles this into:

  call void @llvm.memset.p0i8.i64(i8* %tmp, i8 0, i64 64, i32 16, i1 false)
  %0 = getelementptr [8 x i64]* %input, i64 0, i64 0
  store i64 1, i64* %0, align 16
  %1 = getelementptr [8 x i64]* %input, i64 0, i64 2
  store i64 1, i64* %1, align 16
  %2 = getelementptr [8 x i64]* %input, i64 0, i64 4
  store i64 1, i64* %2, align 16
  %3 = getelementptr [8 x i64]* %input, i64 0, i64 6
  store i64 1, i64* %3, align 16

Which gets codegen'd into:

        pxor    %xmm0, %xmm0
        movaps  %xmm0, -16(%rbp)
        movaps  %xmm0, -32(%rbp)
        movaps  %xmm0, -48(%rbp)
        movaps  %xmm0, -64(%rbp)
        movq    $1, -64(%rbp)
        movq    $1, -48(%rbp)
        movq    $1, -32(%rbp)
        movq    $1, -16(%rbp)

It would be better to have 4 movq's of 0 instead of the movaps's.

//===---------------------------------------------------------------------===//

http://llvm.org/PR717:

The following code should compile into "ret int undef". Instead, LLVM
produces "ret int 0":

int f() {
  int x = 4;
  int y;
  if (x == 3) y = 0;
  return y;
}

//===---------------------------------------------------------------------===//

The loop unroller should partially unroll loops (instead of peeling them)
when code growth isn't too bad and when an unroll count allows simplification
of some code within the loop.  One trivial example is:

#include <stdio.h>
int main() {
    int nRet = 17;
    int nLoop;
    for ( nLoop = 0; nLoop < 1000; nLoop++ ) {
        if ( nLoop & 1 )
            nRet += 2;
        else
            nRet -= 1;
    }
    return nRet;
}

Unrolling by 2 would eliminate the '&1' in both copies, leading to a net
reduction in code size.  The resultant code would then also be suitable for
exit value computation.
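
For example, after unrolling by 2 (legal here since the trip count is a known
even constant), the body would simplify to something like:

    for ( nLoop = 0; nLoop < 1000; nLoop += 2 ) {
        nRet -= 1;      /* even iteration: nLoop & 1 == 0 */
        nRet += 2;      /* odd iteration:  nLoop & 1 == 1 */
    }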

//===---------------------------------------------------------------------===//

We miss a bunch of rotate opportunities on various targets, including ppc, x86,
etc.  On X86, we miss a bunch of 'rotate by variable' cases because the rotate
matching code in dag combine doesn't look through truncates aggressively
enough.  Here are some testcases reduced from GCC PR17886:

unsigned long long f5(unsigned long long x, unsigned long long y) {
  return (x << 8) | ((y >> 48) & 0xffull);
}
unsigned long long f6(unsigned long long x, unsigned long long y, int z) {
  switch(z) {
  case 1:
    return (x << 8) | ((y >> 48) & 0xffull);
  case 2:
    return (x << 16) | ((y >> 40) & 0xffffull);
  case 3:
    return (x << 24) | ((y >> 32) & 0xffffffull);
  case 4:
    return (x << 32) | ((y >> 24) & 0xffffffffull);
  default:
    return (x << 40) | ((y >> 16) & 0xffffffffffull);
  }
}

//===---------------------------------------------------------------------===//

This (and similar related idioms):

unsigned int foo(unsigned char i) {
  return i | (i<<8) | (i<<16) | (i<<24);
}

compiles into:

define i32 @foo(i8 zeroext %i) nounwind readnone ssp noredzone {
entry:
  %conv = zext i8 %i to i32
  %shl = shl i32 %conv, 8
  %shl5 = shl i32 %conv, 16
  %shl9 = shl i32 %conv, 24
  %or = or i32 %shl9, %conv
  %or6 = or i32 %or, %shl5
  %or10 = or i32 %or6, %shl
  ret i32 %or10
}

it would be better as:

unsigned int bar(unsigned char i) {
  unsigned int j = i | (i << 8);
  return j | (j << 16);
}

aka:

define i32 @bar(i8 zeroext %i) nounwind readnone ssp noredzone {
entry:
  %conv = zext i8 %i to i32
  %shl = shl i32 %conv, 8
  %or = or i32 %shl, %conv
  %shl5 = shl i32 %or, 16
  %or6 = or i32 %shl5, %or
  ret i32 %or6
}

or even i*0x01010101, depending on the speed of the multiplier.  The best way to
handle this is to canonicalize it to a multiply in IR and have codegen handle
lowering multiplies to shifts on CPUs where shifts are faster.
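
That is, a sketch of the canonical multiply form:

unsigned int baz(unsigned char i) {
  return i * 0x01010101U;   /* splat the byte into all four byte positions */
}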

//===---------------------------------------------------------------------===//

We do a number of simplifications in simplify libcalls to strength reduce
standard library functions, but we don't currently merge them together.  For
example, it is useful to merge memcpy(a,b,strlen(b)) -> strcpy.  This can only
be done safely if "b" isn't modified between the strlen and memcpy of course.

//===---------------------------------------------------------------------===//

We compile this program: (from GCC PR11680)
http://gcc.gnu.org/bugzilla/attachment.cgi?id=4487

into code that runs the same speed in fast/slow modes, but both modes run 2x
slower than when compiled with GCC (either 4.0 or 4.2):

$ llvm-g++ perf.cpp -O3 -fno-exceptions
$ time ./a.out fast
1.821u 0.003s 0:01.82 100.0%    0+0k 0+0io 0pf+0w

$ g++ perf.cpp -O3 -fno-exceptions
$ time ./a.out fast
0.821u 0.001s 0:00.82 100.0%    0+0k 0+0io 0pf+0w

It looks like we are making the same inlining decisions, so this may be raw
codegen badness or something else (haven't investigated).

//===---------------------------------------------------------------------===//

Divisibility by constant can be simplified (according to GCC PR12849) from
being a mulhi to being a mul lo (cheaper).  Testcase:

void bar(unsigned n) {
  if (n % 3 == 0)
    true();
}

This is equivalent to the following, where 2863311531 is the multiplicative
inverse of 3, and 1431655766 is ((2^32)-1)/3+1:

void bar(unsigned n) {
  if (n * 2863311531U < 1431655766U)
    true();
}

The same transformation can work with an even modulo with the addition of a
rotate: rotate the result of the multiply to the right by the number of bits
which need to be zero for the condition to be true, and shrink the compare RHS
by the same amount.  Unless the target supports rotates, though, that
transformation probably isn't worthwhile.
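
A sketch of this even-modulo form, for n % 6 == 0 (constants are the usual
Hacker's Delight ones: 0xAAAAAAAB is the inverse of the odd part 3 mod 2^32,
and 715827882 is (2^32-1)/6):

void bar6(unsigned n) {
  unsigned t = n * 0xAAAAAAABU;   /* multiply by inverse of the odd part (3) */
  t = (t >> 1) | (t << 31);       /* rotate right one bit per factor of 2 */
  if (t <= 715827882U)            /* (2^32-1)/6 */
    true();
}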

The transformation can also easily be made to work with non-zero equality
comparisons: just transform, for example, "n % 3 == 1" to "(n-1) % 3 == 0".

//===---------------------------------------------------------------------===//

Better mod/ref analysis for scanf would allow us to eliminate the vtable and a
bunch of other stuff from this example (see PR1604):

#include <cstdio>
struct test {
    int val;
    virtual ~test() {}
};

int main() {
    test t;
    std::scanf("%d", &t.val);
    std::printf("%d\n", t.val);
}

//===---------------------------------------------------------------------===//

These functions perform the same computation, but produce different assembly.

define i8 @select(i8 %x) readnone nounwind {
  %A = icmp ult i8 %x, 250
  %B = select i1 %A, i8 0, i8 1
  ret i8 %B
}

define i8 @addshr(i8 %x) readnone nounwind {
  %A = zext i8 %x to i9
  %B = add i9 %A, 6       ;; 256 - 250 == 6
  %C = lshr i9 %B, 8
  %D = trunc i9 %C to i8
  ret i8 %D
}

//===---------------------------------------------------------------------===//

From GCC Bug 24696:
int
f (unsigned long a, unsigned long b, unsigned long c)
{
  return ((a & (c - 1)) != 0) || ((b & (c - 1)) != 0);
}
int
f (unsigned long a, unsigned long b, unsigned long c)
{
  return ((a & (c - 1)) != 0) | ((b & (c - 1)) != 0);
}
Both should combine to ((a|b) & (c-1)) != 0.  Currently not optimized with
"clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

From GCC Bug 20192:
#define PMD_MASK    (~((1UL << 23) - 1))
void clear_pmd_range(unsigned long start, unsigned long end)
{
   if (!(start & ~PMD_MASK) && !(end & ~PMD_MASK))
       f();
}
The expression should optimize to something like
"!((start|end) & ~PMD_MASK)".  Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

unsigned int f(unsigned int i, unsigned int n) { ++i; if (i == n) ++i; return i; }
unsigned int f2(unsigned int i, unsigned int n) { ++i; i += i == n; return i; }

These should combine to the same thing.  Currently, the first function
produces better code on X86.

//===---------------------------------------------------------------------===//

From GCC Bug 15784:
#define abs(x) x>0?x:-x
int f(int x, int y)
{
 return (abs(x)) >= 0;
}
This should optimize to x != INT_MIN. (With -fwrapv.)  Currently not
optimized with "clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

From GCC Bug 14753:
void
rotate_cst (unsigned int a)
{
 a = (a << 10) | (a >> 22);
 if (a == 123)
   bar ();
}
void
minus_cst (unsigned int a)
{
 unsigned int tem;

 tem = 20 - a;
 if (tem == 5)
   bar ();
}
void
mask_gt (unsigned int a)
{
 /* This is equivalent to a > 15.  */
 if ((a & ~7) > 8)
   bar ();
}
void
rshift_gt (unsigned int a)
{
 /* This is equivalent to a > 23.  */
 if ((a >> 2) > 5)
   bar ();
}

All should simplify to a single comparison.  All of these are
currently not optimized with "clang -emit-llvm-bc | opt
-std-compile-opts".

//===---------------------------------------------------------------------===//

From GCC Bug 32605:
int c(int* x) {return (char*)x+2 == (char*)x;}
Should combine to 0.  Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts" (although llc can optimize it).

//===---------------------------------------------------------------------===//

int a(unsigned b) {return ((b << 31) | (b << 30)) >> 31;}
Should be combined to "((b >> 1) | b) & 1".  Currently not optimized
with "clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

unsigned a(unsigned x, unsigned y) { return x | (y & 1) | (y & 2);}
Should combine to "x | (y & 3)".  Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int a, int b, int c) {return (~a & c) | ((c|a) & b);}
Should fold to "(~a & c) | (a & b)".  Currently not optimized with
"clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int a,int b) {return (~(a|b))|a;}
Should fold to "a|~b".  Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int a, int b) {return (a&&b) || (a&&!b);}
Should fold to "a".  Currently not optimized with "clang -emit-llvm-bc
| opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int a, int b, int c) {return (a&&b) || (!a&&c);}
Should fold to "a ? b : c", or at least something sane.  Currently not
optimized with "clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int a, int b, int c) {return (a&&b) || (a&&c) || (a&&b&&c);}
Should fold to a && (b || c).  Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int x) {return x | ((x & 8) ^ 8);}
Should combine to x | 8.  Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int x) {return x ^ ((x & 8) ^ 8);}
Should also combine to x | 8.  Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int x) {return ((x | -9) ^ 8) & x;}
Should combine to x & -9.  Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

unsigned a(unsigned a) {return a * 0x11111111 >> 28 & 1;}
Should combine to "a * 0x88888888 >> 31".  Currently not optimized
with "clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

unsigned a(char* x) {if ((*x & 32) == 0) return b();}
There's an unnecessary zext in the generated code with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

unsigned a(unsigned long long x) {return 40 * (x >> 1);}
Should combine to "20 * (((unsigned)x) & -2)".  Currently not
optimized with "clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int g(int x) { return (x - 10) < 0; }
Should combine to "x <= 9" (the sub has nsw).  Currently not
optimized with "clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int g(int x) { return (x + 10) < 0; }
Should combine to "x < -10" (the add has nsw).  Currently not
optimized with "clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int f(int i, int j) { return i < j + 1; }
int g(int i, int j) { return j > i - 1; }
Should combine to "i <= j" (the add/sub has nsw).  Currently not
optimized with "clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

unsigned f(unsigned x) { return ((x & 7) + 1) & 15; }
The & 15 part should be optimized away; it doesn't change the result. Currently
not optimized with "clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

This was noticed in the entryblock for grokdeclarator in 403.gcc:

        %tmp = icmp eq i32 %decl_context, 4
        %decl_context_addr.0 = select i1 %tmp, i32 3, i32 %decl_context
        %tmp1 = icmp eq i32 %decl_context_addr.0, 1
        %decl_context_addr.1 = select i1 %tmp1, i32 0, i32 %decl_context_addr.0

tmp1 should be simplified to something like:
  (!tmp && decl_context == 1)
which further simplifies to just (decl_context == 1), since decl_context == 1
already implies !tmp.

This allows recursive simplifications; tmp1 is used all over the place in
the function, e.g. by:

        %tmp23 = icmp eq i32 %decl_context_addr.1, 0            ; <i1> [#uses=1]
        %tmp24 = xor i1 %tmp1, true             ; <i1> [#uses=1]
        %or.cond8 = and i1 %tmp23, %tmp24               ; <i1> [#uses=1]

later.

//===---------------------------------------------------------------------===//

[STORE SINKING]

Store sinking: This code:

void f (int n, int *cond, int *res) {
    int i;
    *res = 0;
    for (i = 0; i < n; i++)
        if (*cond)
            *res ^= 234; /* (*) */
}

On this function GVN hoists the fully redundant value of *res, but nothing
moves the store out.  This gives us this code:

bb:             ; preds = %bb2, %entry
        %.rle = phi i32 [ 0, %entry ], [ %.rle6, %bb2 ]
        %i.05 = phi i32 [ 0, %entry ], [ %indvar.next, %bb2 ]
        %1 = load i32* %cond, align 4
        %2 = icmp eq i32 %1, 0
        br i1 %2, label %bb2, label %bb1

bb1:            ; preds = %bb
        %3 = xor i32 %.rle, 234
        store i32 %3, i32* %res, align 4
        br label %bb2

bb2:            ; preds = %bb, %bb1
        %.rle6 = phi i32 [ %3, %bb1 ], [ %.rle, %bb ]
        %indvar.next = add i32 %i.05, 1
        %exitcond = icmp eq i32 %indvar.next, %n
        br i1 %exitcond, label %return, label %bb

DSE should sink partially dead stores to get the store out of the loop.

Here's another partially dead case:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=12395

//===---------------------------------------------------------------------===//

Scalar PRE hoists the mul in the common block up to the else:

int test (int a, int b, int c, int g) {
  int d, e;
  if (a)
    d = b * c;
  else
    d = b - c;
  e = b * c + g;
  return d + e;
}

It would be better to do the mul once, above the if, to reduce code size.
This is GCC PR38204.

//===---------------------------------------------------------------------===//

This simple function from 179.art:

int winner, numf2s;
struct { double y; int   reset; } *Y;

void find_match() {
   int i;
   winner = 0;
   for (i=0;i<numf2s;i++)
       if (Y[i].y > Y[winner].y)
           winner = i;
}

Compiles into (with clang TBAA):

for.body:                                         ; preds = %for.inc, %bb.nph
  %indvar = phi i64 [ 0, %bb.nph ], [ %indvar.next, %for.inc ]
  %i.01718 = phi i32 [ 0, %bb.nph ], [ %i.01719, %for.inc ]
  %tmp4 = getelementptr inbounds %struct.anon* %tmp3, i64 %indvar, i32 0
  %tmp5 = load double* %tmp4, align 8, !tbaa !4
  %idxprom7 = sext i32 %i.01718 to i64
  %tmp10 = getelementptr inbounds %struct.anon* %tmp3, i64 %idxprom7, i32 0
  %tmp11 = load double* %tmp10, align 8, !tbaa !4
  %cmp12 = fcmp ogt double %tmp5, %tmp11
  br i1 %cmp12, label %if.then, label %for.inc

if.then:                                          ; preds = %for.body
  %i.017 = trunc i64 %indvar to i32
  br label %for.inc

for.inc:                                          ; preds = %for.body, %if.then
  %i.01719 = phi i32 [ %i.01718, %for.body ], [ %i.017, %if.then ]
  %indvar.next = add i64 %indvar, 1
  %exitcond = icmp eq i64 %indvar.next, %tmp22
  br i1 %exitcond, label %for.cond.for.end_crit_edge, label %for.body

It is good that we hoisted the reloads of numf2s and Y out of the loop and
sunk the store to winner out.

However, this is awful on several levels: the conditional truncate in the loop
(-indvars at fault? why can't we completely promote the IV to i64?).

Beyond that, we have a partially redundant load in the loop: if "winner" (aka
%i.01718) isn't updated, we reload Y[winner].y the next time through the loop.
Similarly, the addressing that feeds it (including the sext) is redundant. In
the end we get this generated assembly:

LBB0_2:                                 ## %for.body
                                        ## =>This Inner Loop Header: Depth=1
        movsd   (%rdi), %xmm0
        movslq  %edx, %r8
        shlq    $4, %r8
        ucomisd (%rcx,%r8), %xmm0
        jbe     LBB0_4
        movl    %esi, %edx
LBB0_4:                                 ## %for.inc
        addq    $16, %rdi
        incq    %rsi
        cmpq    %rsi, %rax
        jne     LBB0_2

All things considered this isn't too bad, but we shouldn't need the movslq or
the shlq instruction, or the load folded into ucomisd every time through the
loop.

On an x86-specific topic, if the loop can't be restructured, the movl should
be a cmov.

//===---------------------------------------------------------------------===//

[STORE SINKING]

GCC PR37810 is an interesting case where we should sink the load/store reload
into the if block and outside the loop, so we don't reload/store it on the
non-call path.

for () {
  *P += 1;
  if ()
    call();
  else
    ...
}
->
tmp = *P;
for () {
  tmp += 1;
  if () {
    *P = tmp;
    call();
    tmp = *P;
  } else ...
}
*P = tmp;

We now hoist the reload after the call (Transforms/GVN/lpre-call-wrap.ll), but
we don't sink the store.  We need partially dead store sinking.

//===---------------------------------------------------------------------===//

[LOAD PRE CRIT EDGE SPLITTING]

GCC PR37166: Sinking of loads prevents SROA'ing the "g" struct on the stack
leading to excess stack traffic. This could be handled by GVN with some crazy
symbolic phi translation.  The code we get looks like (g is on the stack):

bb2:            ; preds = %bb1
..
        %9 = getelementptr %struct.f* %g, i32 0, i32 0
        store i32 %8, i32* %9, align 4
        br label %bb3

bb3:            ; preds = %bb1, %bb2, %bb
        %c_addr.0 = phi %struct.f* [ %g, %bb2 ], [ %c, %bb ], [ %c, %bb1 ]
        %b_addr.0 = phi %struct.f* [ %b, %bb2 ], [ %g, %bb ], [ %b, %bb1 ]
        %10 = getelementptr %struct.f* %c_addr.0, i32 0, i32 0
        %11 = load i32* %10, align 4

%11 is partially redundant; in BB2 it should have the value %8.

GCC PR33344 and PR35287 are similar cases.

//===---------------------------------------------------------------------===//

[LOAD PRE]

There are many load PRE testcases in testsuite/gcc.dg/tree-ssa/loadpre* in the
GCC testsuite; ones we don't get yet are (checked through loadpre25):

[CRIT EDGE BREAKING]
loadpre3.c predcom-4.c

[PRE OF READONLY CALL]
loadpre5.c

[TURN SELECT INTO BRANCH]
loadpre14.c loadpre15.c

actually a conditional increment: loadpre18.c loadpre19.c

//===---------------------------------------------------------------------===//

[LOAD PRE / STORE SINKING / SPEC HACK]

This is a chunk of code from 456.hmmer:

int f(int M, int *mc, int *mpp, int *tpmm, int *ip, int *tpim, int *dpp,
      int *tpdm, int xmb, int *bp, int *ms) {
  int k, sc;
  for (k = 1; k <= M; k++) {
      mc[k] = mpp[k-1]   + tpmm[k-1];
      if ((sc = ip[k-1]  + tpim[k-1]) > mc[k])  mc[k] = sc;
      if ((sc = dpp[k-1] + tpdm[k-1]) > mc[k])  mc[k] = sc;
      if ((sc = xmb  + bp[k])         > mc[k])  mc[k] = sc;
      mc[k] += ms[k];
  }
}

It is very profitable for this benchmark to turn the conditional stores to mc[k]
into a conditional move (select instr in IR) and allow the final store to do the
store.  See GCC PR27313 for more details.  Note that this is valid to xform even
with the new C++ memory model, since mc[k] is previously loaded and later
stored.

//===---------------------------------------------------------------------===//

[SCALAR PRE]
There are many PRE testcases in testsuite/gcc.dg/tree-ssa/ssa-pre-*.c in the
GCC testsuite.

//===---------------------------------------------------------------------===//

There are some interesting cases in testsuite/gcc.dg/tree-ssa/pred-comm* in the
GCC testsuite.  For example, we get the first example in predcom-1.c, but
miss the second one:

unsigned fib[1000];
unsigned avg[1000];

__attribute__ ((noinline))
void count_averages(int n) {
  int i;
  for (i = 1; i < n; i++)
    avg[i] = (((unsigned long) fib[i - 1] + fib[i] + fib[i + 1]) / 3) & 0xffff;
}

which compiles into two loads instead of one in the loop.

predcom-2.c is the same as predcom-1.c.

predcom-3.c is very similar but needs loads feeding each other instead of
store->load.

//===---------------------------------------------------------------------===//

[ALIAS ANALYSIS]

Type based alias analysis:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=14705

We should do better analysis of posix_memalign.  At the least, it should mark
its pointer argument nocapture; at best, we should know that the out-value
result doesn't point to anything (like malloc).  One example of this is in
SingleSource/Benchmarks/Misc/dt.c

//===---------------------------------------------------------------------===//

Interesting missed case because of control flow flattening (should be 2 loads):
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=26629
With: llvm-gcc t2.c -S -o - -O0 -emit-llvm | llvm-as |
             opt -mem2reg -gvn -instcombine | llvm-dis
we miss it because we need 1) CRIT EDGE 2) MULTIPLE DIFFERENT
VALS PRODUCED BY ONE BLOCK OVER DIFFERENT PATHS

//===---------------------------------------------------------------------===//

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=19633
We could eliminate the branch condition here, since loading from null is
undefined:

struct S { int w, x, y, z; };
struct T { int r; struct S s; };
void bar (struct S, int);
void foo (int a, struct T b)
{
  struct S *c = 0;
  if (a)
    c = &b.s;
  bar (*c, a);
}

//===---------------------------------------------------------------------===//

simplifylibcalls should do several optimizations for strspn/strcspn:

strcspn(x, "a") -> inlined loop for up to 3 letters (similarly for strspn):

size_t __strcspn_c3 (__const char *__s, int __reject1, int __reject2,
                     int __reject3) {
  register size_t __result = 0;
  while (__s[__result] != '\0' && __s[__result] != __reject1 &&
         __s[__result] != __reject2 && __s[__result] != __reject3)
    ++__result;
  return __result;
}

This should turn into a switch on the character.  See PR3253 for some notes on
codegen.

456.hmmer apparently uses strcspn and strspn a lot.  471.omnetpp uses strspn.

//===---------------------------------------------------------------------===//

simplifylibcalls should turn these snprintf idioms into memcpy (GCC PR47917):

char buf1[6], buf2[6], buf3[4], buf4[4];
int i;

int foo (void) {
  int ret = snprintf (buf1, sizeof buf1, "abcde");
  ret += snprintf (buf2, sizeof buf2, "abcdef") * 16;
  ret += snprintf (buf3, sizeof buf3, "%s", i++ < 6 ? "abc" : "def") * 256;
  ret += snprintf (buf4, sizeof buf4, "%s", i++ > 10 ? "abcde" : "defgh")*4096;
  return ret;
}

//===---------------------------------------------------------------------===//

"gas" uses this idiom:
  else if (strchr ("+-/*%|&^:[]()~", *intel_parser.op_string))
..
  else if (strchr ("<>", *intel_parser.op_string))

Those should be turned into a switch.

//===---------------------------------------------------------------------===//

252.eon contains this interesting code:

        %3072 = getelementptr [100 x i8]* %tempString, i32 0, i32 0
        %3073 = call i8* @strcpy(i8* %3072, i8* %3071) nounwind
        %strlen = call i32 @strlen(i8* %3072)    ; uses = 1
        %endptr = getelementptr [100 x i8]* %tempString, i32 0, i32 %strlen
        call void @llvm.memcpy.i32(i8* %endptr,
          i8* getelementptr ([5 x i8]* @"\01LC42", i32 0, i32 0), i32 5, i32 1)
        %3074 = call i32 @strlen(i8* %endptr) nounwind readonly

This is interesting for a couple of reasons.  First, the strlen that follows
the memcpy can be replaced with:

        %3074 = call i32 @strlen([5 x i8]* @"\01LC42") nounwind readonly

because the string was just copied into that buffer.  This, in turn, can be
constant folded to "4".

In other code, it contains:

        %endptr6978 = bitcast i8* %endptr69 to i32*
        store i32 7107374, i32* %endptr6978, align 1
        %3167 = call i32 @strlen(i8* %endptr69) nounwind readonly

Which could also be constant folded.  Whatever is producing this should probably
be fixed to leave this as a memcpy from a string.

Further, eon also has an interesting partially redundant strlen call:

bb8:            ; preds = %_ZN18eonImageCalculatorC1Ev.exit
        %682 = getelementptr i8** %argv, i32 6          ; <i8**> [#uses=2]
        %683 = load i8** %682, align 4          ; <i8*> [#uses=4]
        %684 = load i8* %683, align 1           ; <i8> [#uses=1]
        %685 = icmp eq i8 %684, 0               ; <i1> [#uses=1]
        br i1 %685, label %bb10, label %bb9

bb9:            ; preds = %bb8
        %686 = call i32 @strlen(i8* %683) nounwind readonly
        %687 = icmp ugt i32 %686, 254           ; <i1> [#uses=1]
        br i1 %687, label %bb10, label %bb11

bb10:           ; preds = %bb9, %bb8
        %688 = call i32 @strlen(i8* %683) nounwind readonly

This could be eliminated by doing the strlen once in bb8, saving code size and
improving perf on the bb8->9->10 path.

//===---------------------------------------------------------------------===//

I see an interesting fully redundant call to strlen left in 186.crafty:InputMove
which looks like:

       %movetext11 = getelementptr [128 x i8]* %movetext, i32 0, i32 0

bb62:           ; preds = %bb55, %bb53
        %promote.0 = phi i32 [ %169, %bb55 ], [ 0, %bb53 ]
        %171 = call i32 @strlen(i8* %movetext11) nounwind readonly align 1
        %172 = add i32 %171, -1         ; <i32> [#uses=1]
        %173 = getelementptr [128 x i8]* %movetext, i32 0, i32 %172

...  no stores ...
       br i1 %or.cond, label %bb65, label %bb72

bb65:           ; preds = %bb62
        store i8 0, i8* %173, align 1
        br label %bb72

bb72:           ; preds = %bb65, %bb62
        %trank.1 = phi i32 [ %176, %bb65 ], [ -1, %bb62 ]
        %177 = call i32 @strlen(i8* %movetext11) nounwind readonly align 1

Note that on the bb62->bb72 path, the %177 strlen call is partially
redundant with the %171 call.  At worst, we could shove the %177 strlen call
up into the bb65 block, moving it out of the bb62->bb72 path.  However, note
that bb65 stores to the string, zeroing out the last byte.  This means that on
that path the value of %177 is actually just %171-1.  A sub is cheaper than a
strlen!

This pattern repeats several times, basically doing:

  A = strlen(P);
  P[A-1] = 0;
  B = strlen(P);

where it is "obvious" that B = A-1.

//===---------------------------------------------------------------------===//

186.crafty has this interesting pattern with the "out.4543" variable:

call void @llvm.memcpy.i32(
        i8* getelementptr ([10 x i8]* @out.4543, i32 0, i32 0),
        i8* getelementptr ([7 x i8]* @"\01LC28700", i32 0, i32 0), i32 7, i32 1)
%101 = call @printf(i8* ...   @out.4543, i32 0, i32 0)) nounwind

It is basically doing:

  memcpy(globalarray, "string");
  printf(...,  globalarray);

Anyway, by knowing that printf just reads the memory and forward substituting
the string directly into the printf, this eliminates reads from globalarray.
Since this pattern occurs frequently in crafty (due to the "DisplayTime" and
other similar functions) there are many stores to "out".  Once all the printfs
stop using "out", all that is left is the memcpy's into it.  This should allow
globalopt to remove the "stored only" global.

//===---------------------------------------------------------------------===//

This code:

define inreg i32 @foo(i8* inreg %p) nounwind {
  %tmp0 = load i8* %p
  %tmp1 = ashr i8 %tmp0, 5
  %tmp2 = sext i8 %tmp1 to i32
  ret i32 %tmp2
}

could be dagcombine'd to a sign-extending load with a shift.
For example, on x86 this currently gets this:

        movb    (%eax), %al
        sarb    $5, %al
        movsbl  %al, %eax

while it could get this:

        movsbl  (%eax), %eax
        sarl    $5, %eax

//===---------------------------------------------------------------------===//

GCC PR31029:

int test(int x) { return 1-x == x; }     // --> return false
int test2(int x) { return 2-x == x; }    // --> return x == 1 ?

Always foldable for odd constants; what is the rule for even?

//===---------------------------------------------------------------------===//

PR 3381: GEP to field of size 0 inside a struct could be turned into GEP
for next field in struct (which is at same address).

For example: store of float into { {{}}, float } could be turned into a store to
the float directly.

//===---------------------------------------------------------------------===//

The arg promotion pass should make use of nocapture to make its alias analysis
stuff much more precise.

//===---------------------------------------------------------------------===//

The following functions should be optimized to use a select instead of a
branch (from gcc PR40072):

char char_int(int m) {if(m>7) return 0; return m;}
int int_char(char m) {if(m>7) return 0; return m;}

//===---------------------------------------------------------------------===//

int func(int a, int b) { if (a & 0x80) b |= 0x80; else b &= ~0x80; return b; }

Generates this:

define i32 @func(i32 %a, i32 %b) nounwind readnone ssp {
entry:
  %0 = and i32 %a, 128                            ; <i32> [#uses=1]
  %1 = icmp eq i32 %0, 0                          ; <i1> [#uses=1]
  %2 = or i32 %b, 128                             ; <i32> [#uses=1]
  %3 = and i32 %b, -129                           ; <i32> [#uses=1]
  %b_addr.0 = select i1 %1, i32 %3, i32 %2        ; <i32> [#uses=1]
  ret i32 %b_addr.0
}

However, it's functionally equivalent to:

         b = (b & ~0x80) | (a & 0x80);

Which generates this:

define i32 @func(i32 %a, i32 %b) nounwind readnone ssp {
entry:
  %0 = and i32 %b, -129                           ; <i32> [#uses=1]
  %1 = and i32 %a, 128                            ; <i32> [#uses=1]
  %2 = or i32 %0, %1                              ; <i32> [#uses=1]
  ret i32 %2
}

This can be generalized for other forms:

     b = (b & ~0x80) | (a & 0x40) << 1;

//===---------------------------------------------------------------------===//

These two functions produce different code. They shouldn't:

#include <stdint.h>

uint8_t p1(uint8_t b, uint8_t a) {
  b = (b & ~0xc0) | (a & 0xc0);
  return (b);
}

uint8_t p2(uint8_t b, uint8_t a) {
  b = (b & ~0x40) | (a & 0x40);
  b = (b & ~0x80) | (a & 0x80);
  return (b);
}

define zeroext i8 @p1(i8 zeroext %b, i8 zeroext %a) nounwind readnone ssp {
entry:
  %0 = and i8 %b, 63                              ; <i8> [#uses=1]
  %1 = and i8 %a, -64                             ; <i8> [#uses=1]
  %2 = or i8 %1, %0                               ; <i8> [#uses=1]
  ret i8 %2
}

define zeroext i8 @p2(i8 zeroext %b, i8 zeroext %a) nounwind readnone ssp {
entry:
  %0 = and i8 %b, 63                              ; <i8> [#uses=1]
  %.masked = and i8 %a, 64                        ; <i8> [#uses=1]
  %1 = and i8 %a, -128                            ; <i8> [#uses=1]
  %2 = or i8 %1, %0                               ; <i8> [#uses=1]
  %3 = or i8 %2, %.masked                         ; <i8> [#uses=1]
  ret i8 %3
}

//===---------------------------------------------------------------------===//

IPSCCP does not currently propagate argument-dependent constants through
functions where it does not know all of the callers.  This includes functions
with normal external linkage as well as templates, C99 inline functions etc.
Specifically, it does nothing to:

define i32 @test(i32 %x, i32 %y, i32 %z) nounwind {
entry:
  %0 = add nsw i32 %y, %z
  %1 = mul i32 %0, %x
  %2 = mul i32 %y, %z
  %3 = add nsw i32 %1, %2
  ret i32 %3
}

define i32 @test2() nounwind {
entry:
  %0 = call i32 @test(i32 1, i32 2, i32 4) nounwind
  ret i32 %0
}

It would be interesting to extend IPSCCP to be able to handle simple cases like
this, where all of the arguments to a call are constant.  Because IPSCCP runs
before inlining, trivial templates and inline functions are not yet inlined.
The results for a function + set of constant arguments should be memoized in a
map.

//===---------------------------------------------------------------------===//

The libcall constant folding stuff should be moved out of SimplifyLibcalls into
libanalysis' constantfolding logic.  This would allow IPSCCP to be able to
handle simple things like this:

static int foo(const char *X) { return strlen(X); }
int bar() { return foo("abcd"); }

//===---------------------------------------------------------------------===//

functionattrs doesn't know much about memcpy/memset.  This function should be
marked readnone rather than readonly, since it only twiddles local memory, but
functionattrs doesn't handle memset/memcpy/memmove aggressively:

struct X { int *p; int *q; };
int foo() {
 int i = 0, j = 1;
 struct X x, y;
 int **p;
 y.p = &i;
 x.q = &j;
 p = __builtin_memcpy (&x, &y, sizeof (int *));
 return **p;
}

This can be seen at:
$ clang t.c -S -o - -mkernel -O0 -emit-llvm | opt -functionattrs -S

//===---------------------------------------------------------------------===//

Missed instcombine transformation:
define i1 @a(i32 %x) nounwind readnone {
entry:
  %cmp = icmp eq i32 %x, 30
  %sub = add i32 %x, -30
  %cmp2 = icmp ugt i32 %sub, 9
  %or = or i1 %cmp, %cmp2
  ret i1 %or
}
This should be optimized to a single compare.  Testcase derived from gcc.
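
A sketch of the single compare (x == 30 folds into the unsigned range check,
since x-31 >u 8 is exactly "x not in [31,39]"):

define i1 @a(i32 %x) nounwind readnone {
entry:
  %sub = add i32 %x, -31
  %cmp = icmp ugt i32 %sub, 8
  ret i1 %cmp
}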

//===---------------------------------------------------------------------===//

Missed instcombine or reassociate transformation:
int a(int a, int b) { return (a==12)&(b>47)&(b<58); }

The sgt and slt should be combined into a single comparison. Testcase derived
from gcc.
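
A sketch of the combined form (the two bounds checks on b fold into one
unsigned compare against the range [48,57], modulo the usual nsw caveats):

int a2(int a, int b) { return (a == 12) & ((unsigned)(b - 48) < 10); }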

//===---------------------------------------------------------------------===//

Missed instcombine transformation:

  %382 = srem i32 %tmp14.i, 64                    ; [#uses=1]
  %383 = zext i32 %382 to i64                     ; [#uses=1]
  %384 = shl i64 %381, %383                       ; [#uses=1]
  %385 = icmp slt i32 %tmp14.i, 64                ; [#uses=1]

The srem can be transformed to an and because if %tmp14.i is negative, the
shift is undefined.  Testcase derived from 403.gcc.
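
Since srem and and-with-63 only differ for negative values, and those already
make the shift undefined, the rewrite would be (a sketch):

  %382 = and i32 %tmp14.i, 63                     ; [#uses=1]
  %383 = zext i32 %382 to i64                     ; [#uses=1]
  %384 = shl i64 %381, %383                       ; [#uses=1]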

//===---------------------------------------------------------------------===//

This is a range comparison on a divided result (from 403.gcc):

  %1337 = sdiv i32 %1336, 8                       ; [#uses=1]
  %.off.i208 = add i32 %1336, 7                   ; [#uses=1]
  %1338 = icmp ult i32 %.off.i208, 15             ; [#uses=1]

We already catch this (removing the sdiv) when there isn't an add; we should
handle the 'add' as well.  This is a common idiom in 403.gcc's builtin_alloca
code.  C testcase:

int a(int x) { return (unsigned)(x/16+7) < 15; }
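
With truncating division, x/16+7 is in [0,14] exactly when x is in
[-127,127], so the divide-free form would be something like (a C sketch;
the IR version would use a plain add, avoiding the signed-overflow caveat
at the ends of the int range):

int a(int x) { return (unsigned)(x + 127) < 255; }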

Another similar case involves truncations on 64-bit targets:

  %361 = sdiv i64 %.046, 8                        ; [#uses=1]
  %362 = trunc i64 %361 to i32                    ; [#uses=2]
...
  %367 = icmp eq i32 %362, 0                      ; [#uses=1]

//===---------------------------------------------------------------------===//

Missed instcombine/dagcombine transformation:
define void @lshift_lt(i8 zeroext %a) nounwind {
entry:
  %conv = zext i8 %a to i32
  %shl = shl i32 %conv, 3
  %cmp = icmp ult i32 %shl, 33
  br i1 %cmp, label %if.then, label %if.end

if.then:
  tail call void @bar() nounwind
  ret void

if.end:
  ret void
}
declare void @bar() nounwind

The shift should be eliminated.  Testcase derived from gcc.
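
Since %conv is at most 255, %shl is a multiple of 8, and a multiple of 8 is
less than 33 exactly when %a < 5.  The compare should shrink to (a sketch):

  %cmp = icmp ult i8 %a, 5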

//===---------------------------------------------------------------------===//

These compile into different code: one gets recognized as a switch and the
other doesn't, due to phase ordering issues (PR6212):

int test1(int mainType, int subType) {
  if (mainType == 7)
    subType = 4;
  else if (mainType == 9)
    subType = 6;
  else if (mainType == 11)
    subType = 9;
  return subType;
}

int test2(int mainType, int subType) {
  if (mainType == 7)
    subType = 4;
  if (mainType == 9)
    subType = 6;
  if (mainType == 11)
    subType = 9;
  return subType;
}

//===---------------------------------------------------------------------===//

The following test case (from PR6576):

define i32 @mul(i32 %a, i32 %b) nounwind readnone {
entry:
 %cond1 = icmp eq i32 %b, 0                      ; <i1> [#uses=1]
 br i1 %cond1, label %exit, label %bb.nph
bb.nph:                                           ; preds = %entry
 %tmp = mul i32 %b, %a                           ; <i32> [#uses=1]
 ret i32 %tmp
exit:                                             ; preds = %entry
 ret i32 0
}

could be reduced to:

define i32 @mul(i32 %a, i32 %b) nounwind readnone {
entry:
 %tmp = mul i32 %b, %a
 ret i32 %tmp
}

//===---------------------------------------------------------------------===//

We should use DSE + llvm.lifetime.end to delete dead vtable pointer updates.
See GCC PR34949

Something related could also be done for variables that become const after
their ctor has finished.  In these cases, globalopt (which can statically run
the constructor) could mark the global const (so it gets put in the readonly
section).  A testcase would be:

#include <complex>
using namespace std;
const complex<char> should_be_in_rodata (42,-42);
complex<char> should_be_in_data (42,-42);
complex<char> should_be_in_bss;

We currently evaluate the ctors, but the globals don't become const because
the optimizer doesn't know they "become const" after the ctor is done.  See
GCC PR4131 for more examples.

//===---------------------------------------------------------------------===//

In this code:

long foo(long x) {
  return x > 1 ? x : 1;
}

LLVM emits a comparison with 1 instead of 0.  0 would be equivalent
and cheaper on most targets.

LLVM prefers comparisons with zero over non-zero in general, but in this
case it chooses instead to keep the max operation obvious.
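
Concretely, on an LP64 target the two equivalent selects are (a sketch):

  ; current form: compare against 1
  %cmp = icmp sgt i64 %x, 1
  %max = select i1 %cmp, i64 %x, i64 1
  ; equivalent form: compare against 0 (when %x is exactly 1,
  ; both forms produce 1)
  %cmp0 = icmp sgt i64 %x, 0
  %max0 = select i1 %cmp0, i64 %x, i64 1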

//===---------------------------------------------------------------------===//

define void @a(i32 %x) nounwind {
entry:
  switch i32 %x, label %if.end [
    i32 0, label %if.then
    i32 1, label %if.then
    i32 2, label %if.then
    i32 3, label %if.then
    i32 5, label %if.then
  ]
if.then:
  tail call void @foo() nounwind
  ret void
if.end:
  ret void
}
declare void @foo()

Generated code on x86-64 (other platforms give similar results):
a:
	cmpl	$5, %edi
	ja	.LBB0_2
	cmpl	$4, %edi
	jne	.LBB0_3
.LBB0_2:
	ret
.LBB0_3:
	jmp	foo  # TAILCALL

If we wanted to be really clever, we could simplify the whole thing to
something like the following, which eliminates a branch.  The five case
values xor'ed with 1 all land in [0,4], while 4 and everything above 5 do
not:
	xorl    $1, %edi
	cmpl	$4, %edi
	ja	.LBB0_2
	jmp	foo  # TAILCALL
.LBB0_2:
	ret

//===---------------------------------------------------------------------===//

We compile this:

int foo(int a) { return (a & (~15)) / 16; }

Into:

define i32 @foo(i32 %a) nounwind readnone ssp {
entry:
  %and = and i32 %a, -16
  %div = sdiv i32 %and, 16
  ret i32 %div
}

but (X & -A)/A is X >> log2(A) when A is a power of two, so this case
should be instcombined into just "a >> 4".

We do get this at the codegen level, so something knows about it, but
instcombine should catch it earlier:

_foo:                                   ## @foo
## BB#0:                                ## %entry
	movl	%edi, %eax
	sarl	$4, %eax
	ret

//===---------------------------------------------------------------------===//

This code (from GCC PR28685):

int test(int a, int b) {
  int lt = a < b;
  int eq = a == b;
  if (lt)
    return 1;
  return eq;
}

Is compiled to:

define i32 @test(i32 %a, i32 %b) nounwind readnone ssp {
entry:
  %cmp = icmp slt i32 %a, %b
  br i1 %cmp, label %return, label %if.end

if.end:                                           ; preds = %entry
  %cmp5 = icmp eq i32 %a, %b
  %conv6 = zext i1 %cmp5 to i32
  ret i32 %conv6

return:                                           ; preds = %entry
  ret i32 1
}

it could be:

define i32 @test__(i32 %a, i32 %b) nounwind readnone ssp {
entry:
  %0 = icmp sle i32 %a, %b
  %retval = zext i1 %0 to i32
  ret i32 %retval
}

//===---------------------------------------------------------------------===//

This code can be seen in viterbi:

  %64 = call noalias i8* @malloc(i64 %62) nounwind
...
  %67 = call i64 @llvm.objectsize.i64(i8* %64, i1 false) nounwind
  %68 = call i8* @__memset_chk(i8* %64, i32 0, i64 %62, i64 %67) nounwind

llvm.objectsize.i64 should be taught about malloc/calloc, allowing it to
fold to %62.  This is a security win (overflows of malloc will get caught)
and also a performance win by exposing more memsets to the optimizer.

This occurs several times in viterbi.

Note that this would change the semantics of @llvm.objectsize, which by its
current definition always folds to a constant.  We should also make sure that
we remove checking in code like

  char *p = malloc(strlen(s)+1);
  __strcpy_chk(p, s, __builtin_objectsize(p, 0));
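
With objectsize folding to %62, the __memset_chk simplifies to a plain memset
of the allocated size (a sketch of the hoped-for result):

  %64 = call noalias i8* @malloc(i64 %62) nounwind
...
  call void @llvm.memset.p0i8.i64(i8* %64, i8 0, i64 %62, i32 1, i1 false)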

//===---------------------------------------------------------------------===//

This code (from Benchmarks/Dhrystone/dry.c):

define i32 @Func1(i32, i32) nounwind readnone optsize ssp {
entry:
  %sext = shl i32 %0, 24
  %conv = ashr i32 %sext, 24
  %sext6 = shl i32 %1, 24
  %conv4 = ashr i32 %sext6, 24
  %cmp = icmp eq i32 %conv, %conv4
  %. = select i1 %cmp, i32 10000, i32 0
  ret i32 %.
}

Should be simplified into something like:

define i32 @Func1(i32, i32) nounwind readnone optsize ssp {
entry:
  %sext = shl i32 %0, 24
  %conv = and i32 %sext, -16777216                ; 0xFF000000
  %sext6 = shl i32 %1, 24
  %conv4 = and i32 %sext6, -16777216              ; 0xFF000000
  %cmp = icmp eq i32 %conv, %conv4
  %. = select i1 %cmp, i32 10000, i32 0
  ret i32 %.
}

and then to:

define i32 @Func1(i32, i32) nounwind readnone optsize ssp {
entry:
  %conv = and i32 %0, 255                         ; 0xFF
  %conv4 = and i32 %1, 255                        ; 0xFF
  %cmp = icmp eq i32 %conv, %conv4
  %. = select i1 %cmp, i32 10000, i32 0
  ret i32 %.
}

//===---------------------------------------------------------------------===//

clang -O3 currently compiles this code

int g(unsigned int a) {
  unsigned int c[100];
  c[10] = a;
  c[11] = a;
  unsigned int b = c[10] + c[11];
  if(b > a*2) a = 4;
  else a = 8;
  return a + 7;
}

into

define i32 @g(i32 %a) nounwind readnone {
  %add = shl i32 %a, 1
  %mul = shl i32 %a, 1
  %cmp = icmp ugt i32 %add, %mul
  %a.addr.0 = select i1 %cmp, i32 11, i32 15
  ret i32 %a.addr.0
}

The icmp should fold to false.  This CSE opportunity is only available
after GVN and InstCombine have run.
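
With the icmp folded to false, the select and shifts disappear (a sketch):

define i32 @g(i32 %a) nounwind readnone {
  ret i32 15
}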

//===---------------------------------------------------------------------===//

memcpyopt should turn this:

define i8* @test10(i32 %x) {
  %alloc = call noalias i8* @malloc(i32 %x) nounwind
  call void @llvm.memset.p0i8.i32(i8* %alloc, i8 0, i32 %x, i32 1, i1 false)
  ret i8* %alloc
}

into a call to calloc.  We should make sure that we analyze calloc as
aggressively as malloc though.
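
A sketch of the target form (the exact count/size split passed to calloc is
immaterial here):

define i8* @test10(i32 %x) {
  %alloc = call noalias i8* @calloc(i32 1, i32 %x) nounwind
  ret i8* %alloc
}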

//===---------------------------------------------------------------------===//

clang -O3 doesn't optimize this:

void f1(int* begin, int* end) {
  std::fill(begin, end, 0);
}

into a memset.  This is PR8942.
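
The fill loop is equivalent to this, which is what the optimizer should form
(a C++ sketch):

void f1(int* begin, int* end) {
  __builtin_memset(begin, 0, (end - begin) * sizeof(int));
}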

//===---------------------------------------------------------------------===//

clang -O3 -fno-exceptions currently compiles this code:

void f(int N) {
  std::vector<int> v(N);

  extern void sink(void*); sink(&v);
}

into

define void @_Z1fi(i32 %N) nounwind {
entry:
  %v2 = alloca [3 x i32*], align 8
  %v2.sub = getelementptr inbounds [3 x i32*]* %v2, i64 0, i64 0
  %tmpcast = bitcast [3 x i32*]* %v2 to %"class.std::vector"*
  %conv = sext i32 %N to i64
  store i32* null, i32** %v2.sub, align 8, !tbaa !0
  %tmp3.i.i.i.i.i = getelementptr inbounds [3 x i32*]* %v2, i64 0, i64 1
  store i32* null, i32** %tmp3.i.i.i.i.i, align 8, !tbaa !0
  %tmp4.i.i.i.i.i = getelementptr inbounds [3 x i32*]* %v2, i64 0, i64 2
  store i32* null, i32** %tmp4.i.i.i.i.i, align 8, !tbaa !0
  %cmp.i.i.i.i = icmp eq i32 %N, 0
  br i1 %cmp.i.i.i.i, label %_ZNSt12_Vector_baseIiSaIiEEC2EmRKS0_.exit.thread.i.i, label %cond.true.i.i.i.i

_ZNSt12_Vector_baseIiSaIiEEC2EmRKS0_.exit.thread.i.i: ; preds = %entry
  store i32* null, i32** %v2.sub, align 8, !tbaa !0
  store i32* null, i32** %tmp3.i.i.i.i.i, align 8, !tbaa !0
  %add.ptr.i5.i.i = getelementptr inbounds i32* null, i64 %conv
  store i32* %add.ptr.i5.i.i, i32** %tmp4.i.i.i.i.i, align 8, !tbaa !0
  br label %_ZNSt6vectorIiSaIiEEC1EmRKiRKS0_.exit

cond.true.i.i.i.i:                                ; preds = %entry
  %cmp.i.i.i.i.i = icmp slt i32 %N, 0
  br i1 %cmp.i.i.i.i.i, label %if.then.i.i.i.i.i, label %_ZNSt12_Vector_baseIiSaIiEEC2EmRKS0_.exit.i.i

if.then.i.i.i.i.i:                                ; preds = %cond.true.i.i.i.i
  call void @_ZSt17__throw_bad_allocv() noreturn nounwind
  unreachable

_ZNSt12_Vector_baseIiSaIiEEC2EmRKS0_.exit.i.i:    ; preds = %cond.true.i.i.i.i
  %mul.i.i.i.i.i = shl i64 %conv, 2
  %call3.i.i.i.i.i = call noalias i8* @_Znwm(i64 %mul.i.i.i.i.i) nounwind
  %0 = bitcast i8* %call3.i.i.i.i.i to i32*
  store i32* %0, i32** %v2.sub, align 8, !tbaa !0
  store i32* %0, i32** %tmp3.i.i.i.i.i, align 8, !tbaa !0
  %add.ptr.i.i.i = getelementptr inbounds i32* %0, i64 %conv
  store i32* %add.ptr.i.i.i, i32** %tmp4.i.i.i.i.i, align 8, !tbaa !0
  call void @llvm.memset.p0i8.i64(i8* %call3.i.i.i.i.i, i8 0, i64 %mul.i.i.i.i.i, i32 4, i1 false)
  br label %_ZNSt6vectorIiSaIiEEC1EmRKiRKS0_.exit

This is just the handling of the construction of the vector.  Most surprising
here is the fact that all three null stores in %entry are dead (because we do
no cross-block DSE).

Also surprising is that %conv isn't simplified to 0 in %....exit.thread.i.i.
This is because the client of LazyValueInfo doesn't simplify all instruction
operands, just selected ones.

//===---------------------------------------------------------------------===//

clang -O3 -fno-exceptions currently compiles this code:

void f(char* a, int n) {
  __builtin_memset(a, 0, n);
  for (int i = 0; i < n; ++i)
    a[i] = 0;
}

into:

define void @_Z1fPci(i8* nocapture %a, i32 %n) nounwind {
entry:
  %conv = sext i32 %n to i64
  tail call void @llvm.memset.p0i8.i64(i8* %a, i8 0, i64 %conv, i32 1, i1 false)
  %cmp8 = icmp sgt i32 %n, 0
  br i1 %cmp8, label %for.body.lr.ph, label %for.end

for.body.lr.ph:                                   ; preds = %entry
  %tmp10 = add i32 %n, -1
  %tmp11 = zext i32 %tmp10 to i64
  %tmp12 = add i64 %tmp11, 1
  call void @llvm.memset.p0i8.i64(i8* %a, i8 0, i64 %tmp12, i32 1, i1 false)
  ret void

for.end:                                          ; preds = %entry
  ret void
}

This shouldn't need the ((zext (%n - 1)) + 1) game, and it should ideally fold
the two memsets together.

The issue with the addition only occurs in 64-bit mode, and appears to be at
least partially caused by Scalar Evolution not keeping its cache updated: it
returns the "wrong" result immediately after indvars runs, but figures out the
expected result if it is run from scratch on IR resulting from running indvars.
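
A sketch of the ideal output: the entry memset already covers the loop's
range whenever %n > 0, so the whole function collapses to

define void @_Z1fPci(i8* nocapture %a, i32 %n) nounwind {
entry:
  %conv = sext i32 %n to i64
  tail call void @llvm.memset.p0i8.i64(i8* %a, i8 0, i64 %conv, i32 1, i1 false)
  ret void
}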

//===---------------------------------------------------------------------===//

clang -O3 -fno-exceptions currently compiles this code:

struct S {
  unsigned short m1, m2;
  unsigned char m3, m4;
};

void f(int N) {
  std::vector<S> v(N);
  extern void sink(void*); sink(&v);
}

into poor code for zero-initializing 'v' when N is >0.  The problem is that
S is only 6 bytes, but each element is 8 byte-aligned.  We generate a loop and
4 stores on each iteration.  If the struct were 8 bytes, this gets turned into
a memset.

In order to handle this we have to:
  A) Teach clang to generate metadata for memsets of structs that have holes in
     them.
  B) Teach clang to use such a memset for zero init of this struct (since it has
     a hole), instead of doing elementwise zeroing.

//===---------------------------------------------------------------------===//

clang -O3 currently compiles this code:

extern const int magic;
double f() { return 0.0 * magic; }

into

@magic = external constant i32

define double @_Z1fv() nounwind readnone {
entry:
  %tmp = load i32* @magic, align 4, !tbaa !0
  %conv = sitofp i32 %tmp to double
  %mul = fmul double %conv, 0.000000e+00
  ret double %mul
}

We should be able to fold away this fmul to 0.0.  More generally, fmul(x,0.0)
can be folded to 0.0 if we can prove that the LHS is not -0.0, not a NaN, and
not an INF.  The CannotBeNegativeZero predicate in value tracking should be
extended to support general "fpclassify" operations that can return
yes/no/unknown for each of these predicates.

In this predicate, we know that uitofp is trivially never NaN or -0.0, and
we know that it isn't +/-Inf if the floating point type has enough exponent
bits to represent the largest integer value without overflowing to infinity.

//===---------------------------------------------------------------------===//

When optimizing a transformation that can change the sign of 0.0 (such as the
0.0*val -> 0.0 transformation above), it might be provable that the sign of the
expression doesn't matter.  For example, by the above rules, we can't transform
fmul(sitofp(x), 0.0) into 0.0, because x might be -1 and the result of the
expression is defined to be -0.0.

If we look at the uses of the fmul for example, we might be able to prove that
all uses don't care about the sign of zero.  For example, if we have:

  fadd(fmul(sitofp(x), 0.0), 2.0)

Since we know that x+2.0 doesn't care about the sign of any zeros in x, we can
transform the fmul to 0.0, and then the fadd to 2.0.
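
In IR terms, assuming the use analysis blesses the transform (a sketch):

  %conv = sitofp i32 %x to double
  %mul = fmul double %conv, 0.000000e+00
  %add = fadd double %mul, 2.000000e+00
  ; the fmul can become 0.0 here, after which the
  ; fadd constant-folds to 2.0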

//===---------------------------------------------------------------------===//

We should enhance memcpy/memmove/memset to allow a metadata node on them
indicating that some bytes of the transfer are undefined.  This is useful for
frontends like clang when lowering struct copies, when some elements of the
struct are undefined.  Consider something like this:

struct x {
  char a;
  int b[4];
};
void foo(struct x*P);
struct x testfunc() {
  struct x V1, V2;
  foo(&V1);
  V2 = V1;

  return V2;
}

We currently compile this to:
$ clang t.c -S -o - -O0 -emit-llvm | opt -scalarrepl -S

%struct.x = type { i8, [4 x i32] }

define void @testfunc(%struct.x* sret %agg.result) nounwind ssp {
entry:
  %V1 = alloca %struct.x, align 4
  call void @foo(%struct.x* %V1)
  %tmp1 = bitcast %struct.x* %V1 to i8*
  %0 = bitcast %struct.x* %V1 to i160*
  %srcval1 = load i160* %0, align 4
  %tmp2 = bitcast %struct.x* %agg.result to i8*
  %1 = bitcast %struct.x* %agg.result to i160*
  store i160 %srcval1, i160* %1, align 4
  ret void
}

This happens because SRoA sees that the temp alloca is being memcpy'd into
and out of, that it has holes, and that it therefore has to be conservative.
If we knew about the holes, then this could be much much better.

Having information about these holes would also improve memcpy (etc) lowering
at llc time when it gets inlined, because we can use smaller transfers.  This
also avoids partial register stalls in some important cases.

//===---------------------------------------------------------------------===//

We don't fold (icmp (add) (add)) unless the two adds only have a single use.
There are a lot of cases that we're refusing to fold in (e.g.) 256.bzip2, for
example:

 %indvar.next90 = add i64 %indvar89, 1     ;; Has 2 uses
 %tmp96 = add i64 %tmp95, 1                ;; Has 1 use
 %exitcond97 = icmp eq i64 %indvar.next90, %tmp96

We don't fold this because we don't want to introduce an overlapped live range
of the ivar.  However, we could make this more aggressive without causing
performance problems, in two ways:

1. If *either* the LHS or RHS has a single use, we can definitely do the
   transformation.  In the overlapping liverange case we're trading one register
   use for one fewer operation, which is a reasonable trade.  Before doing this
   we should verify that the llc output actually shrinks for some benchmarks.
2. If both ops have multiple uses, we can still fold it if the operations are
   both sinkable to *after* the icmp (e.g. in a subsequent block) which doesn't
   increase register pressure.

There are a ton of icmp's we aren't simplifying because of the reg pressure
concern.  Care is warranted here though because many of these are induction
variables and other cases that matter a lot to performance, like the above.
Here's a blob of code that you can drop into the bottom of visitICmp to see some
missed cases:

  { Value *A, *B, *C, *D;
    if (match(Op0, m_Add(m_Value(A), m_Value(B))) &&
        match(Op1, m_Add(m_Value(C), m_Value(D))) &&
        (A == C || A == D || B == C || B == D)) {
      errs() << "OP0 = " << *Op0 << "  U=" << Op0->getNumUses() << "\n";
      errs() << "OP1 = " << *Op1 << "  U=" << Op1->getNumUses() << "\n";
      errs() << "CMP = " << I << "\n\n";
    }
  }

//===---------------------------------------------------------------------===//

define i1 @test1(i32 %x) nounwind {
  %and = and i32 %x, 3
  %cmp = icmp ult i32 %and, 2
  ret i1 %cmp
}

Can be folded to (x & 2) == 0.

define i1 @test2(i32 %x) nounwind {
  %and = and i32 %x, 3
  %cmp = icmp ugt i32 %and, 1
  ret i1 %cmp
}

Can be folded to (x & 2) != 0.

SimplifyDemandedBits shrinks the "and" constant to 2 but instcombine misses the
icmp transform.
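
For reference, the folded forms (sketches):

define i1 @test1(i32 %x) nounwind {
  %and = and i32 %x, 2
  %cmp = icmp eq i32 %and, 0
  ret i1 %cmp
}

define i1 @test2(i32 %x) nounwind {
  %and = and i32 %x, 2
  %cmp = icmp ne i32 %and, 0
  ret i1 %cmp
}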

//===---------------------------------------------------------------------===//

This code:

typedef struct {
int f1:1;
int f2:1;
int f3:1;
int f4:29;
} t1;

typedef struct {
int f1:1;
int f2:1;
int f3:30;
} t2;

t1 s1;
t2 s2;

void func1(void)
{
s1.f1 = s2.f1;
s1.f2 = s2.f2;
}

Compiles into this IR (on x86-64 at least):

%struct.t1 = type { i8, [3 x i8] }
@s2 = global %struct.t1 zeroinitializer, align 4
@s1 = global %struct.t1 zeroinitializer, align 4
define void @func1() nounwind ssp noredzone {
entry:
  %0 = load i32* bitcast (%struct.t1* @s2 to i32*), align 4
  %bf.val.sext5 = and i32 %0, 1
  %1 = load i32* bitcast (%struct.t1* @s1 to i32*), align 4
  %2 = and i32 %1, -4
  %3 = or i32 %2, %bf.val.sext5
  %bf.val.sext26 = and i32 %0, 2
  %4 = or i32 %3, %bf.val.sext26
  store i32 %4, i32* bitcast (%struct.t1* @s1 to i32*), align 4
  ret void
}

The two or/and's should be merged into one each.
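
Merged, the body would look something like (a sketch):

  %0 = load i32* bitcast (%struct.t1* @s2 to i32*), align 4
  %bf.val = and i32 %0, 3
  %1 = load i32* bitcast (%struct.t1* @s1 to i32*), align 4
  %2 = and i32 %1, -4
  %3 = or i32 %2, %bf.val
  store i32 %3, i32* bitcast (%struct.t1* @s1 to i32*), align 4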

//===---------------------------------------------------------------------===//

Machine level code hoisting can be useful in some cases.  For example, PR9408
is about:

typedef union {
 void (*f1)(int);
 void (*f2)(long);
} funcs;

void foo(funcs f, int which) {
 int a = 5;
 if (which) {
   f.f1(a);
 } else {
   f.f2(a);
 }
}

which we compile to:

foo:                                    # @foo
# BB#0:                                 # %entry
       pushq   %rbp
       movq    %rsp, %rbp
       testl   %esi, %esi
       movq    %rdi, %rax
       je      .LBB0_2
# BB#1:                                 # %if.then
       movl    $5, %edi
       callq   *%rax
       popq    %rbp
       ret
.LBB0_2:                                # %if.else
       movl    $5, %edi
       callq   *%rax
       popq    %rbp
       ret

Note that bb1 and bb2 are the same.  This doesn't happen at the IR level
because one call is passing an i32 and the other is passing an i64.
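
Hoisting the identical tails would leave a single call (a sketch of the
hoped-for machine code):

foo:                                    # @foo
# BB#0:
       pushq   %rbp
       movq    %rsp, %rbp
       movq    %rdi, %rax
       movl    $5, %edi
       callq   *%rax
       popq    %rbp
       ret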

//===---------------------------------------------------------------------===//

I see this sort of pattern in 176.gcc in a few places (e.g. the start of
store_bit_field).  The rem should be replaced with a multiply and subtract:

  %3 = sdiv i32 %A, %B
  %4 = srem i32 %A, %B

Similarly for udiv/urem.  Note that this shouldn't be done on X86 or ARM,
which can do this in a single operation (instruction or libcall).  It is
probably best to do this in the code generator.
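
The rewrite in question (a sketch; only profitable where mul+sub beats a
second divide):

  %3 = sdiv i32 %A, %B
  %t = mul i32 %3, %B
  %4 = sub i32 %A, %t          ; same value as srem i32 %A, %B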

//===---------------------------------------------------------------------===//

unsigned foo(unsigned x, unsigned y) { return (x & y) == 0 || x == 0; }
should fold to (x & y) == 0, since x == 0 already implies (x & y) == 0.

//===---------------------------------------------------------------------===//

unsigned foo(unsigned x, unsigned y) { return x > y && x != 0; }
should fold to x > y, since x > y (unsigned) already implies x != 0.

//===---------------------------------------------------------------------===//