History log of /external/clang/test/CodeGen/x86_64-arguments.c
Revision Date Author Comments
7a1b586a383622e3287a5f3d82736ec513032744 12-Jun-2013 Eli Friedman <eli.friedman@gmail.com> Make va_arg and argument passing to varargs functions work correctly with
AVX vectors when AVX is turned on.

Fixes <rdar://problem/10513611>.
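A minimal sketch of the scenario (hypothetical names; the 256-bit vector is declared
via __vector_size__, as elsewhere in this test):

#include <stdarg.h>
typedef double v4d __attribute__((__vector_size__(32))); /* 256-bit AVX vector */

v4d first_vec_arg(int n, ...) {
  /* Built with AVX enabled; va_arg of an AVX vector is the case this change fixes. */
  va_list ap;
  va_start(ap, n);
  v4d v = va_arg(ap, v4d);
  va_end(ap);
  return v;
}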



git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@183813 91177308-0d34-0410-b5e6-96231b3b80d8
/external/clang/test/CodeGen/x86_64-arguments.c
939d83efba53994af07c7dc82b88873132a18c0d 11-Jun-2013 Eli Friedman <eli.friedman@gmail.com> Fix a very silly mistake in r183590.



git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@183720 91177308-0d34-0410-b5e6-96231b3b80d8
/external/clang/test/CodeGen/x86_64-arguments.c
eeb00624413d4a4856e66809b84c558d2cdce17f 08-Jun-2013 Eli Friedman <eli.friedman@gmail.com> Fix va_arg on x86-64 for a struct containing a single int128_t. PR16248
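A minimal sketch of the PR16248 case (hypothetical names):

#include <stdarg.h>
struct i128wrap { __int128 x; };

struct i128wrap read_one(int n, ...) {
  va_list ap;
  va_start(ap, n);
  struct i128wrap r = va_arg(ap, struct i128wrap); /* the va_arg lowering this change fixes */
  va_end(ap);
  return r;
}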

git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@183590 91177308-0d34-0410-b5e6-96231b3b80d8
/external/clang/test/CodeGen/x86_64-arguments.c
c1ea4b96adca4767991bb0a7b21052cef4db059c 15-Feb-2013 Bill Wendling <isanbard@gmail.com> Update testcases due to Attribute sorting improvements.

git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@175253 91177308-0d34-0410-b5e6-96231b3b80d8
/external/clang/test/CodeGen/x86_64-arguments.c
5e31474b9c8348e8d0404264ae6a8775e34df6ac 01-Feb-2013 Bill Wendling <isanbard@gmail.com> Update the tests.

This update coincides with r174110. That change ordered the attributes
alphabetically.


git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@174111 91177308-0d34-0410-b5e6-96231b3b80d8
/external/clang/test/CodeGen/x86_64-arguments.c
f0f7fa022414cc58c8def9fed3c73d0464afe559 29-Jan-2013 Bill Wendling <isanbard@gmail.com> Modify the tests for the (sorted) order in which the attributes now come out.


git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@173762 91177308-0d34-0410-b5e6-96231b3b80d8
/external/clang/test/CodeGen/x86_64-arguments.c
e56bb36e8eea89bae7dfe6eb6ea0455af126bf4a 07-Dec-2012 John McCall <rjmccall@apple.com> Fix the required args count for variadic blocks.

We were emitting calls to blocks as if all arguments were
required --- i.e. with signature (A,B,C,D,...) rather than
(A,B,...). This patch fixes that and accounts for the
implicit block-context argument as a required argument.
In addition, this patch changes the function type under which
we call unprototyped functions on platforms like x86-64 that
guarantee compatibility of variadic functions with unprototyped
function types; previously we would always call such functions
under the LLVM type T (...)*, but now we will call them under
the type T (A,B,C,D,...)*. This last change should have no
material effect except for making the type conventions more
explicit; it was a side-effect of the most convenient implementation.

git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@169588 91177308-0d34-0410-b5e6-96231b3b80d8
/external/clang/test/CodeGen/x86_64-arguments.c
060f34d6d12a851faa9463da522f7dff1104d0e1 28-Nov-2012 Manman Ren <mren@apple.com> ABI: comments from Eli on r168820.

rdar://12723368


git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@168821 91177308-0d34-0410-b5e6-96231b3b80d8
/external/clang/test/CodeGen/x86_64-arguments.c
f51c61c78e8487061b6a434286488fa3e5b919e5 28-Nov-2012 Manman Ren <mren@apple.com> ABI: modify CreateCoercedLoad and CreateCoercedStore to not use load or store of
the original parameter or return type.

Since we do not accurately represent the data fields of a union, we should not
directly load or store a union type.

As an example, if one field type is i8, i8, i32, i32 and another field type is i32, i32,
the first field type will be chosen to represent the union. If we load with the union's
type, the 3rd and 4th bytes will be skipped.
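A rough C-level sketch consistent with that description (hypothetical names):

union two_views {
  struct { char a, b; int c, d; } first;  /* roughly { i8, i8, i32, i32 }: two padding bytes follow b */
  struct { int x, y; } second;            /* { i32, i32 }: those same two bytes hold part of x */
};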

rdar://12723368


git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@168820 91177308-0d34-0410-b5e6-96231b3b80d8
/external/clang/test/CodeGen/x86_64-arguments.c
edfac0302490d84419eb958c812c533b8df29785 10-Mar-2012 Daniel Dunbar <daniel@zuster.org> IRgen/ABI/x86_64: Avoid passing small structs using byval sometimes.

- We do this when it is easy to determine that the backend will pass them on
the stack properly by itself.

Currently LLVM codegen is really bad in some cases with byval, for example, on
the test case here (which is derived from Sema code, which likes to pass
SourceLocations around)::

struct s47 { unsigned a; };
void f47(int,int,int,int,int,int,struct s47);
void test47(int a, struct s47 b) { f47(a, a, a, a, a, a, b); }

we used to emit code like this::

...
movl %esi, -8(%rbp)
movl -8(%rbp), %ecx
movl %ecx, (%rsp)
...

to handle moving the struct onto the stack, which is just appalling.

Now we generate::

movl %esi, (%rsp)

which seems better, no?

git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@152462 91177308-0d34-0410-b5e6-96231b3b80d8
/external/clang/test/CodeGen/x86_64-arguments.c
ee1ad99f1ced9ffee436466ef674d4541c37864e 02-Dec-2011 Eli Friedman <eli.friedman@gmail.com> When we're passing a vector with an illegal type through memory on x86-64, use byval so we're sure the backend does the right thing. Fixes va_arg with illegal vectors and an obscure ABI mismatch with __m64 vectors.



git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@145652 91177308-0d34-0410-b5e6-96231b3b80d8
/external/clang/test/CodeGen/x86_64-arguments.c
3ed7903d27f0e7e0cd3a61c165d39eca70f3cff5 01-Dec-2011 Eli Friedman <eli.friedman@gmail.com> Don't use a varargs convention for calls to unprototyped functions where one of the arguments is an AVX vector.
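A minimal sketch of the situation (hypothetical names):

typedef float v8f __attribute__((__vector_size__(32))); /* 256-bit AVX vector */

void no_proto();                     /* declared without a prototype */
void caller(v8f v) { no_proto(v); }  /* must not be lowered with a varargs convention */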



git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@145574 91177308-0d34-0410-b5e6-96231b3b80d8
/external/clang/test/CodeGen/x86_64-arguments.c
ce275675d33142c235d7027db16abe43da616ee4 29-Nov-2011 Tanya Lattner <tonic@nondot.org> Correct the code generation for function arguments of vec3 types on x86_64 when they are greater than 128 bits. This was incorrectly coercing things like long3 into a double2.
Add test case.
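A minimal sketch of the long3 case (hypothetical names, using Clang's ext_vector_type):

typedef long long3 __attribute__((ext_vector_type(3))); /* 3 x 64-bit = 192 bits on x86_64 */

long3 pass3(long3 v) { return v; }   /* was being coerced toward <2 x double> */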



git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@145312 91177308-0d34-0410-b5e6-96231b3b80d8
/external/clang/test/CodeGen/x86_64-arguments.c
8d2fe42417fcc861b3324d585dc29ac4da59bee0 18-Nov-2011 Eli Friedman <eli.friedman@gmail.com> Make va_arg on x86-64 compute alignment the same way as argument passing.

Fixes <rdar://problem/10463281>.



git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@144966 91177308-0d34-0410-b5e6-96231b3b80d8
/external/clang/test/CodeGen/x86_64-arguments.c
410ffb2bc5f072d58a73c14560345bcf77dec1cc 26-Aug-2011 John McCall <rjmccall@apple.com> Track whether an AggValueSlot is potentially aliased, and do not
emit call results into potentially aliased slots. This allows us
to properly mark indirect return slots as noalias, at the cost
of requiring an extra memcpy when assigning an aggregate call
result into an l-value. It also brings us into compliance with
the x86-64 ABI.



git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@138599 91177308-0d34-0410-b5e6-96231b3b80d8
/external/clang/test/CodeGen/x86_64-arguments.c
b8981df0ed2886dfa221f2fad6d86872c39d3549 13-Jul-2011 Bruno Cardoso Lopes <bruno.cardoso@gmail.com> Reapply r134946 with fixes. Tested on Benjamin's testcase and other test-suite failures.

git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@135091 91177308-0d34-0410-b5e6-96231b3b80d8
/external/clang/test/CodeGen/x86_64-arguments.c
548e478b8bd02b0295bc4efd0c282337f00646fd 13-Jul-2011 Bruno Cardoso Lopes <bruno.cardoso@gmail.com> Revert r134946

git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@135004 91177308-0d34-0410-b5e6-96231b3b80d8
/external/clang/test/CodeGen/x86_64-arguments.c
cd87d1e4d1b0097877b0f9c2065900717d2aacba 12-Jul-2011 Chris Lattner <sabre@nondot.org> fix an unintended behavior change in the type system rewrite, which caused us to compile
stuff like this:

typedef struct {
int x, y, z;
} foo_t;

foo_t g;

into:
%"struct.<anonymous>" = type { i32, i32, i32 }
we now get:
%struct.foo_t = type { i32, i32, i32 }

This doesn't change the behavior of the compiler, but makes the IR much easier to read.




git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@134969 91177308-0d34-0410-b5e6-96231b3b80d8
/external/clang/test/CodeGen/x86_64-arguments.c
089d8927abe73fe6a806987937d9b54b1a7a8659 12-Jul-2011 Bruno Cardoso Lopes <bruno.cardoso@gmail.com> Do the same as r134946 for arrays. Add more testcases for AVX x86_64
argument passing.

git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@134951 91177308-0d34-0410-b5e6-96231b3b80d8
/external/clang/test/CodeGen/x86_64-arguments.c
ccafadb68f5a8132a4ee23f441cf5d6976a4133b 12-Jul-2011 Bruno Cardoso Lopes <bruno.cardoso@gmail.com> Fix one x86_64 ABI issue, and fix the test to actually check for the right thing:
{ <4 x float>, <4 x float> } should continue to go through memory.

git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@134946 91177308-0d34-0410-b5e6-96231b3b80d8
/external/clang/test/CodeGen/x86_64-arguments.c
4943c15df59fdec444656a48c16e72a2077ab61f 12-Jul-2011 Bruno Cardoso Lopes <bruno.cardoso@gmail.com> Reapply r134754, which turns out to be working correctly, and
add one more testcase.

git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@134934 91177308-0d34-0410-b5e6-96231b3b80d8
/external/clang/test/CodeGen/x86_64-arguments.c
9cbe4f0ba01ec304e1e3d071c071f7bca33631c0 09-Jul-2011 Chris Lattner <sabre@nondot.org> Clang-side changes to match the LLVM IR type system rewrite patch.


git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@134831 91177308-0d34-0410-b5e6-96231b3b80d8
/external/clang/test/CodeGen/x86_64-arguments.c
528a8c7b4c39ae1c551760fd087a508a71ee9541 09-Jul-2011 Bruno Cardoso Lopes <bruno.cardoso@gmail.com> Revert x86_64 ABI changes until I have time to check the items raised by Eli.

git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@134765 91177308-0d34-0410-b5e6-96231b3b80d8
/external/clang/test/CodeGen/x86_64-arguments.c
df41b4c10ab2a0096957e415e520bd467f8b2e9e 09-Jul-2011 Bruno Cardoso Lopes <bruno.cardoso@gmail.com> Add support for AVX 256-bit in the x86_64 ABI (as in the 0.99.5 draft)

git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@134754 91177308-0d34-0410-b5e6-96231b3b80d8
/external/clang/test/CodeGen/x86_64-arguments.c
14508ff0bffee0fdfe5d336946c6db0e709099c8 02-Jul-2011 Eli Friedman <eli.friedman@gmail.com> Don't use x86_mmx where it isn't necessary.

The start of some work on getting -mno-mmx working the way we want it to.



git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@134300 91177308-0d34-0410-b5e6-96231b3b80d8
/external/clang/test/CodeGen/x86_64-arguments.c
855d227967f8332237f1f1cf8eb63a1e22d8be05 23-May-2011 Chris Lattner <sabre@nondot.org> Fix x86-64 byval passing to specify the alignment even when the code
generator will give it something sufficient. This is important because
the mid-level optimizer doesn't know what alignment is required otherwise.


git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@131879 91177308-0d34-0410-b5e6-96231b3b80d8
/external/clang/test/CodeGen/x86_64-arguments.c
67a5773ba529aebcad03fa5e7cc95555d133e93d 21-Apr-2011 John McCall <rjmccall@apple.com> The 0.98 revision of the x86-64 ABI clarified a lot of things, some
of which break strict compatibility with previous compilers. Implement
one of them and then immediately opt out on Darwin.



git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@129899 91177308-0d34-0410-b5e6-96231b3b80d8
/external/clang/test/CodeGen/x86_64-arguments.c
0fefa4175b0c9101564946f6a975ee9946c16d4b 26-Aug-2010 Chris Lattner <sabre@nondot.org> Vectors of long and ulong are also classified as INTEGER in the x86-64 ABI;
this fixes rdar://8358475, a failure of the gcc.dg/compat/vector_1 ABI
test.


git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@112205 91177308-0d34-0410-b5e6-96231b3b80d8
/external/clang/test/CodeGen/x86_64-arguments.c
473f8e723be93d84bd5fd15b094f4184802d4676 26-Aug-2010 Chris Lattner <sabre@nondot.org> 1 x ulonglong needs to be classified as INTEGER, just like 1 x longlong;
this fixes a miscompilation in the included testcase, rdar://8359248


git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@112201 91177308-0d34-0410-b5e6-96231b3b80d8
/external/clang/test/CodeGen/x86_64-arguments.c
a8b7a7d3eaa51dd200cba1e5541f2542d24d7a6e 26-Aug-2010 Chris Lattner <sabre@nondot.org> tame an assertion, fixing rdar://8357396


git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@112174 91177308-0d34-0410-b5e6-96231b3b80d8
/external/clang/test/CodeGen/x86_64-arguments.c
22fd4baf2eba2103e2b41e463f1a5f6486c398fb 26-Aug-2010 Chris Lattner <sabre@nondot.org> Finally pass "two floats in a 64-bit unit" as a <2 x float> instead of
as a double in the x86-64 ABI. This allows us to generate much better
code for certain things, e.g.:

_Complex float f32(_Complex float A, _Complex float B) {
return A+B;
}

Used to compile into (look at the integer silliness!):

_f32: ## @f32
## BB#0: ## %entry
movd %xmm1, %rax
movd %eax, %xmm1
movd %xmm0, %rcx
movd %ecx, %xmm0
addss %xmm1, %xmm0
movd %xmm0, %edx
shrq $32, %rax
movd %eax, %xmm0
shrq $32, %rcx
movd %ecx, %xmm1
addss %xmm0, %xmm1
movd %xmm1, %eax
shlq $32, %rax
addq %rdx, %rax
movd %rax, %xmm0
ret

Now we get:

_f32: ## @f32
movdqa %xmm0, %xmm2
addss %xmm1, %xmm2
pshufd $16, %xmm2, %xmm2
pshufd $1, %xmm1, %xmm1
pshufd $1, %xmm0, %xmm0
addss %xmm1, %xmm0
pshufd $16, %xmm0, %xmm1
movdqa %xmm2, %xmm0
unpcklps %xmm1, %xmm0
ret

and compile stuff like:

extern float _Complex ccoshf( float _Complex ) ;
float _Complex ccosf ( float _Complex z ) {
float _Complex iz;
(__real__ iz) = -(__imag__ z);
(__imag__ iz) = (__real__ z);
return ccoshf(iz);
}

into:

_ccosf: ## @ccosf
## BB#0: ## %entry
pshufd $1, %xmm0, %xmm1
xorps LCPI4_0(%rip), %xmm1
unpcklps %xmm0, %xmm1
movaps %xmm1, %xmm0
jmp _ccoshf ## TAILCALL

instead of:

_ccosf: ## @ccosf
## BB#0: ## %entry
movd %xmm0, %rax
movq %rax, %rcx
shlq $32, %rcx
shrq $32, %rax
xorl $-2147483648, %eax ## imm = 0xFFFFFFFF80000000
addq %rcx, %rax
movd %rax, %xmm0
jmp _ccoshf ## TAILCALL


There is still "stuff to be done" here for the struct case,
but this resolves rdar://6379669 - [x86-64 ABI] Pass and return
_Complex float / double efficiently



git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@112111 91177308-0d34-0410-b5e6-96231b3b80d8
/external/clang/test/CodeGen/x86_64-arguments.c
f47c944b5710a545d564b4d4b641a2f8bac96af3 29-Jul-2010 Chris Lattner <sabre@nondot.org> fix rdar://8251384, another case where we could access beyond the
end of a struct. This improves the case where the struct being passed
contains three floats, whether as three struct fields or as an array of
three elements. Before, we'd generate this IR for the testcase:

define float @bar(double %X.coerce0, double %X.coerce1) nounwind {
entry:
%X = alloca %struct.foof, align 8 ; <%struct.foof*> [#uses=2]
%0 = bitcast %struct.foof* %X to %1* ; <%1*> [#uses=2]
%1 = getelementptr %1* %0, i32 0, i32 0 ; <double*> [#uses=1]
store double %X.coerce0, double* %1
%2 = getelementptr %1* %0, i32 0, i32 1 ; <double*> [#uses=1]
store double %X.coerce1, double* %2
%tmp = getelementptr inbounds %struct.foof* %X, i32 0, i32 2 ; <float*> [#uses=1]
%tmp1 = load float* %tmp ; <float> [#uses=1]
ret float %tmp1
}

which compiled (with optimization) to:

_bar: ## @bar
## BB#0: ## %entry
movd %xmm1, %rax
movd %eax, %xmm0
ret

Now we produce:

define float @bar(double %X.coerce0, float %X.coerce1) nounwind {
entry:
%X = alloca %struct.foof, align 8 ; <%struct.foof*> [#uses=2]
%0 = bitcast %struct.foof* %X to %0* ; <%0*> [#uses=2]
%1 = getelementptr %0* %0, i32 0, i32 0 ; <double*> [#uses=1]
store double %X.coerce0, double* %1
%2 = getelementptr %0* %0, i32 0, i32 1 ; <float*> [#uses=1]
store float %X.coerce1, float* %2
%tmp = getelementptr inbounds %struct.foof* %X, i32 0, i32 2 ; <float*> [#uses=1]
%tmp1 = load float* %tmp ; <float> [#uses=1]
ret float %tmp1
}

and:

_bar: ## @bar
## BB#0: ## %entry
movaps %xmm1, %xmm0
ret



git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@109776 91177308-0d34-0410-b5e6-96231b3b80d8
/external/clang/test/CodeGen/x86_64-arguments.c
9e45a3de3f462785a86bba77dee168ab354d9704 29-Jul-2010 Chris Lattner <sabre@nondot.org> handle a case that Eli pointed out where we could access off the end
of a function, rdar://8249586


git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@109762 91177308-0d34-0410-b5e6-96231b3b80d8
/external/clang/test/CodeGen/x86_64-arguments.c
c10ab19fcbc70e3d3897f3c8dbacf6f89a3dfa8c 29-Jul-2010 Chris Lattner <sabre@nondot.org> In release mode, IRBuilder doesn't add names to instructions;
this will hopefully fix the osuosl clang-i686-darwin10 builder.


git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@109760 91177308-0d34-0410-b5e6-96231b3b80d8
/external/clang/test/CodeGen/x86_64-arguments.c
021c3a349d4f55cc2c7970268758bcf37b924493 29-Jul-2010 Chris Lattner <sabre@nondot.org> This is a little bit far, but optimize cases like:

struct a {
struct c {
double x;
int y;
} x[1];
};

void foo(struct a A) {
}

into:

define void @foo(double %A.coerce0, i32 %A.coerce1) nounwind {
entry:
%A = alloca %struct.a, align 8 ; <%struct.a*> [#uses=1]
%0 = bitcast %struct.a* %A to %struct.c* ; <%struct.c*> [#uses=2]
%1 = getelementptr %struct.c* %0, i32 0, i32 0 ; <double*> [#uses=1]
store double %A.coerce0, double* %1
%2 = getelementptr %struct.c* %0, i32 0, i32 1 ; <i32*> [#uses=1]
store i32 %A.coerce1, i32* %2

instead of:

define void @foo(double %A.coerce0, i64 %A.coerce1) nounwind {
entry:
%A = alloca %struct.a, align 8 ; <%struct.a*> [#uses=1]
%0 = bitcast %struct.a* %A to %0* ; <%0*> [#uses=2]
%1 = getelementptr %0* %0, i32 0, i32 0 ; <double*> [#uses=1]
store double %A.coerce0, double* %1
%2 = getelementptr %0* %0, i32 0, i32 1 ; <i64*> [#uses=1]
store i64 %A.coerce1, i64* %2

I only do this now because I never want to look at this code again :)



git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@109738 91177308-0d34-0410-b5e6-96231b3b80d8
/external/clang/test/CodeGen/x86_64-arguments.c
e2962be11e8894329d37985eccaa4f4a12dea402 29-Jul-2010 Chris Lattner <sabre@nondot.org> implement a todo: pass an eight-byte that consists of a
small integer + padding as that small integer. On code
like:

struct c { double x; int y; };
void bar(struct c C) { }

This means that we compile to:

define void @bar(double %C.coerce0, i32 %C.coerce1) nounwind {
entry:
%C = alloca %struct.c, align 8 ; <%struct.c*> [#uses=2]
%0 = getelementptr %struct.c* %C, i32 0, i32 0 ; <double*> [#uses=1]
store double %C.coerce0, double* %0
%1 = getelementptr %struct.c* %C, i32 0, i32 1 ; <i32*> [#uses=1]
store i32 %C.coerce1, i32* %1

instead of:

define void @bar(double %C.coerce0, i64 %C.coerce1) nounwind {
entry:
%C = alloca %struct.c, align 8 ; <%struct.c*> [#uses=3]
%0 = bitcast %struct.c* %C to %0* ; <%0*> [#uses=2]
%1 = getelementptr %0* %0, i32 0, i32 0 ; <double*> [#uses=1]
store double %C.coerce0, double* %1
%2 = getelementptr %0* %0, i32 0, i32 1 ; <i64*> [#uses=1]
store i64 %C.coerce1, i64* %2

which gives SRoA heartburn.

This implements rdar://5711709, a nice low number :)



git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@109737 91177308-0d34-0410-b5e6-96231b3b80d8
/external/clang/test/CodeGen/x86_64-arguments.c
800588fd230d2c37ddce8fbf4a3881352715d700 29-Jul-2010 Chris Lattner <sabre@nondot.org> Kill off the 'coerce' ABI passing form. Now 'direct' and 'extend' always
have a "coerce to" type which often matches the default lowering of Clang
type to LLVM IR type, but the coerce case can be handled by making them
not be the same.

This simplifies things and fixes issues where x86-64 ABI lowering would
return coerce after making preferred types exactly match up. This caused
us to compile:

typedef float v4f32 __attribute__((__vector_size__(16)));
v4f32 foo(v4f32 X) {
return X+X;
}

into this code at -O0:

define <4 x float> @foo(<4 x float> %X.coerce) nounwind {
entry:
%retval = alloca <4 x float>, align 16 ; <<4 x float>*> [#uses=2]
%coerce = alloca <4 x float>, align 16 ; <<4 x float>*> [#uses=2]
%X.addr = alloca <4 x float>, align 16 ; <<4 x float>*> [#uses=3]
store <4 x float> %X.coerce, <4 x float>* %coerce
%X = load <4 x float>* %coerce ; <<4 x float>> [#uses=1]
store <4 x float> %X, <4 x float>* %X.addr
%tmp = load <4 x float>* %X.addr ; <<4 x float>> [#uses=1]
%tmp1 = load <4 x float>* %X.addr ; <<4 x float>> [#uses=1]
%add = fadd <4 x float> %tmp, %tmp1 ; <<4 x float>> [#uses=1]
store <4 x float> %add, <4 x float>* %retval
%0 = load <4 x float>* %retval ; <<4 x float>> [#uses=1]
ret <4 x float> %0
}

Now we get:

define <4 x float> @foo(<4 x float> %X) nounwind {
entry:
%X.addr = alloca <4 x float>, align 16 ; <<4 x float>*> [#uses=3]
store <4 x float> %X, <4 x float>* %X.addr
%tmp = load <4 x float>* %X.addr ; <<4 x float>> [#uses=1]
%tmp1 = load <4 x float>* %X.addr ; <<4 x float>> [#uses=1]
%add = fadd <4 x float> %tmp, %tmp1 ; <<4 x float>> [#uses=1]
ret <4 x float> %add
}

This implements rdar://8248065



git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@109733 91177308-0d34-0410-b5e6-96231b3b80d8
/external/clang/test/CodeGen/x86_64-arguments.c
15842bd05bd6d3b7450385ac8f73aaee5f807e19 29-Jul-2010 Chris Lattner <sabre@nondot.org> Ignore structs that wrap vectors in IR; the abstraction shouldn't add a penalty.

Before we'd compile the example into something like:

%coerce.dive2 = getelementptr %struct.v4f32wrapper* %retval, i32 0, i32 0 ; <<4 x float>*> [#uses=1]
%1 = bitcast <4 x float>* %coerce.dive2 to <2 x double>* ; <<2 x double>*> [#uses=1]
%2 = load <2 x double>* %1, align 1 ; <<2 x double>> [#uses=1]
ret <2 x double> %2

Now we produce:

%coerce.dive2 = getelementptr %struct.v4f32wrapper* %retval, i32 0, i32 0 ; <<4 x float>*> [#uses=1]
%0 = load <4 x float>* %coerce.dive2, align 1 ; <<4 x float>> [#uses=1]
ret <4 x float> %0



git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@109732 91177308-0d34-0410-b5e6-96231b3b80d8
/external/clang/test/CodeGen/x86_64-arguments.c
0f408f5242522cbede304472e17931357c1b573d 29-Jul-2010 Chris Lattner <sabre@nondot.org> move the 'pretty 16-byte vector' inference code up so it is shared
with return values, improving code that returns __m128 etc.


git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@109731 91177308-0d34-0410-b5e6-96231b3b80d8
/external/clang/test/CodeGen/x86_64-arguments.c
4711cb065922d46bfe80383b2001ae681f74780a 29-Jul-2010 Chris Lattner <sabre@nondot.org> now that we have CGT around, we can start using preferred types
for return values too. Instead of compiling something like:

struct foo {
int *X;
float *Y;
};

struct foo test(struct foo *P) { return *P; }

to:

%1 = type { i64, i64 }

define %1 @test(%struct.foo* %P) nounwind {
entry:
%retval = alloca %struct.foo, align 8 ; <%struct.foo*> [#uses=2]
%P.addr = alloca %struct.foo*, align 8 ; <%struct.foo**> [#uses=2]
store %struct.foo* %P, %struct.foo** %P.addr
%tmp = load %struct.foo** %P.addr ; <%struct.foo*> [#uses=1]
%tmp1 = bitcast %struct.foo* %retval to i8* ; <i8*> [#uses=1]
%tmp2 = bitcast %struct.foo* %tmp to i8* ; <i8*> [#uses=1]
call void @llvm.memcpy.p0i8.p0i8.i64(i8* %tmp1, i8* %tmp2, i64 16, i32 8, i1 false)
%0 = bitcast %struct.foo* %retval to %1* ; <%1*> [#uses=1]
%1 = load %1* %0, align 1 ; <%1> [#uses=1]
ret %1 %1
}

We now get a more type-safe result:

define %struct.foo @test(%struct.foo* %P) nounwind {
entry:
%retval = alloca %struct.foo, align 8 ; <%struct.foo*> [#uses=2]
%P.addr = alloca %struct.foo*, align 8 ; <%struct.foo**> [#uses=2]
store %struct.foo* %P, %struct.foo** %P.addr
%tmp = load %struct.foo** %P.addr ; <%struct.foo*> [#uses=1]
%tmp1 = bitcast %struct.foo* %retval to i8* ; <i8*> [#uses=1]
%tmp2 = bitcast %struct.foo* %tmp to i8* ; <i8*> [#uses=1]
call void @llvm.memcpy.p0i8.p0i8.i64(i8* %tmp1, i8* %tmp2, i64 16, i32 8, i1 false)
%0 = load %struct.foo* %retval ; <%struct.foo> [#uses=1]
ret %struct.foo %0
}

That memcpy is completely terrible, but I don't know how to fix it.



git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@109729 91177308-0d34-0410-b5e6-96231b3b80d8
/external/clang/test/CodeGen/x86_64-arguments.c
ab5722e67794b3954c874a369086fc5f41ac46a5 29-Jul-2010 Chris Lattner <sabre@nondot.org> pass argument vectors in a type that corresponds to the user type if
possible. This improves the example to pass <4 x float> instead of
<2 x double> but we still get awful code, and still don't get the
return value right.


git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@109700 91177308-0d34-0410-b5e6-96231b3b80d8
/external/clang/test/CodeGen/x86_64-arguments.c
519f68cd26777c755763a644a7f7ed7ac389beb9 29-Jul-2010 Chris Lattner <sabre@nondot.org> use Get8ByteTypeAtOffset for the return value path as well so we
don't get errors similar to PR7714 on the return path.


git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@109689 91177308-0d34-0410-b5e6-96231b3b80d8
/external/clang/test/CodeGen/x86_64-arguments.c
1daf808a48d26328dd31a3275ce599cee326c957 29-Jul-2010 Chris Lattner <sabre@nondot.org> fix PR7714 by not referencing off the end of a struct when it is passed by value in
the x86-64 ABI. This also improves codegen. Some refactoring of
this code is needed.


git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@109681 91177308-0d34-0410-b5e6-96231b3b80d8
/external/clang/test/CodeGen/x86_64-arguments.c
121b3facb4e0585d23766f9c1e4fdf9018a4b217 05-Jul-2010 Chris Lattner <sabre@nondot.org> in the "coerce" case, the ABI handling code ends up making the
alloca for an argument. Make sure the argument gets the proper
decl alignment, which may be different from the type alignment.

This fixes PR7567


git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@107627 91177308-0d34-0410-b5e6-96231b3b80d8
/external/clang/test/CodeGen/x86_64-arguments.c
9c254f0415bef9a0bafe5b5026ddb54b727597b1 29-Jun-2010 Chris Lattner <sabre@nondot.org> Change X86_64ABIInfo to have ASTContext and TargetData ivars to
avoid passing ASTContext down through all the methods it has.

When classifying an argument, or argument piece, as INTEGER, check
to see if we have a pointer at exactly the same offset in the
preferred type. If so, use that pointer type instead of i64. This
allows us to compile a function taking a StringRef into something
like this:

define i8* @foo(i64 %D.coerce0, i8* %D.coerce1) nounwind ssp {
entry:
%D = alloca %struct.DeclGroup, align 8 ; <%struct.DeclGroup*> [#uses=4]
%0 = getelementptr %struct.DeclGroup* %D, i32 0, i32 0 ; <i64*> [#uses=1]
store i64 %D.coerce0, i64* %0
%1 = getelementptr %struct.DeclGroup* %D, i32 0, i32 1 ; <i8**> [#uses=1]
store i8* %D.coerce1, i8** %1
%tmp = getelementptr inbounds %struct.DeclGroup* %D, i32 0, i32 0 ; <i64*> [#uses=1]
%tmp1 = load i64* %tmp ; <i64> [#uses=1]
%tmp2 = getelementptr inbounds %struct.DeclGroup* %D, i32 0, i32 1 ; <i8**> [#uses=1]
%tmp3 = load i8** %tmp2 ; <i8*> [#uses=1]
%add.ptr = getelementptr inbounds i8* %tmp3, i64 %tmp1 ; <i8*> [#uses=1]
ret i8* %add.ptr
}

instead of this:

define i8* @foo(i64 %D.coerce0, i64 %D.coerce1) nounwind ssp {
entry:
%D = alloca %struct.DeclGroup, align 8 ; <%struct.DeclGroup*> [#uses=3]
%0 = insertvalue %0 undef, i64 %D.coerce0, 0 ; <%0> [#uses=1]
%1 = insertvalue %0 %0, i64 %D.coerce1, 1 ; <%0> [#uses=1]
%2 = bitcast %struct.DeclGroup* %D to %0* ; <%0*> [#uses=1]
store %0 %1, %0* %2, align 1
%tmp = getelementptr inbounds %struct.DeclGroup* %D, i32 0, i32 0 ; <i64*> [#uses=1]
%tmp1 = load i64* %tmp ; <i64> [#uses=1]
%tmp2 = getelementptr inbounds %struct.DeclGroup* %D, i32 0, i32 1 ; <i8**> [#uses=1]
%tmp3 = load i8** %tmp2 ; <i8*> [#uses=1]
%add.ptr = getelementptr inbounds i8* %tmp3, i64 %tmp1 ; <i8*> [#uses=1]
ret i8* %add.ptr
}

This implements rdar://7375902 - [codegen quality] clang x86-64 ABI lowering code punishing StringRef



git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@107123 91177308-0d34-0410-b5e6-96231b3b80d8
/external/clang/test/CodeGen/x86_64-arguments.c
225e286110bcc8b7b1ff8b35f0d51a10a158b18c 29-Jun-2010 Chris Lattner <sabre@nondot.org> add IR names to coerced arguments.


git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@107105 91177308-0d34-0410-b5e6-96231b3b80d8
/external/clang/test/CodeGen/x86_64-arguments.c
ce70016434ff82a29a60ef82894d934b8a23f23d 29-Jun-2010 Chris Lattner <sabre@nondot.org> Change CGCall to handle the "coerce" case where the coerce-to type
is an FCA by passing each of the elements as individual scalars. This
produces code that fast isel is less likely to reject and is easier on
the optimizers.

For example, before we would compile:
struct DeclGroup { long NumDecls; char * Y; };
char * foo(DeclGroup D) {
return D.NumDecls+D.Y;
}

to:
%struct.DeclGroup = type { i64, i64 }

define i64 @_Z3foo9DeclGroup(%struct.DeclGroup) nounwind {
entry:
%D = alloca %struct.DeclGroup, align 8 ; <%struct.DeclGroup*> [#uses=3]
store %struct.DeclGroup %0, %struct.DeclGroup* %D, align 1
%tmp = getelementptr inbounds %struct.DeclGroup* %D, i32 0, i32 0 ; <i64*> [#uses=1]
%tmp1 = load i64* %tmp ; <i64> [#uses=1]
%tmp2 = getelementptr inbounds %struct.DeclGroup* %D, i32 0, i32 1 ; <i64*> [#uses=1]
%tmp3 = load i64* %tmp2 ; <i64> [#uses=1]
%add = add nsw i64 %tmp1, %tmp3 ; <i64> [#uses=1]
ret i64 %add
}

Now we get:

%0 = type { i64, i64 }
%struct.DeclGroup = type { i64, i8* }

define i8* @_Z3foo9DeclGroup(i64, i64) nounwind {
entry:
%D = alloca %struct.DeclGroup, align 8 ; <%struct.DeclGroup*> [#uses=3]
%2 = insertvalue %0 undef, i64 %0, 0 ; <%0> [#uses=1]
%3 = insertvalue %0 %2, i64 %1, 1 ; <%0> [#uses=1]
%4 = bitcast %struct.DeclGroup* %D to %0* ; <%0*> [#uses=1]
store %0 %3, %0* %4, align 1
%tmp = getelementptr inbounds %struct.DeclGroup* %D, i32 0, i32 0 ; <i64*> [#uses=1]
%tmp1 = load i64* %tmp ; <i64> [#uses=1]
%tmp2 = getelementptr inbounds %struct.DeclGroup* %D, i32 0, i32 1 ; <i8**> [#uses=1]
%tmp3 = load i8** %tmp2 ; <i8*> [#uses=1]
%add.ptr = getelementptr inbounds i8* %tmp3, i64 %tmp1 ; <i8*> [#uses=1]
ret i8* %add.ptr
}

Elimination of the FCA inside the function is still-to-come.



git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@107099 91177308-0d34-0410-b5e6-96231b3b80d8
/external/clang/test/CodeGen/x86_64-arguments.c
faf23b72f171ef997d48c453a5a4783c5759f8fe 28-Jun-2010 Chris Lattner <sabre@nondot.org> X86-64:
pass/return structs of float/int as float/i32 instead of double/i64
to make the code generated for the ABI cleaner. Passing in the low part
of a double is the same as passing in a float.

For example, we now compile:

struct DeclGroup { float NumDecls; };
float foo(DeclGroup D);
void bar(DeclGroup *D) {
foo(*D);
}

into:

%struct.DeclGroup = type { float }

define void @_Z3barP9DeclGroup(%struct.DeclGroup* %D) nounwind {
entry:
%D.addr = alloca %struct.DeclGroup*, align 8 ; <%struct.DeclGroup**> [#uses=2]
%agg.tmp = alloca %struct.DeclGroup, align 4 ; <%struct.DeclGroup*> [#uses=2]
store %struct.DeclGroup* %D, %struct.DeclGroup** %D.addr
%tmp = load %struct.DeclGroup** %D.addr ; <%struct.DeclGroup*> [#uses=1]
%tmp1 = bitcast %struct.DeclGroup* %agg.tmp to i8* ; <i8*> [#uses=1]
%tmp2 = bitcast %struct.DeclGroup* %tmp to i8* ; <i8*> [#uses=1]
call void @llvm.memcpy.p0i8.p0i8.i64(i8* %tmp1, i8* %tmp2, i64 4, i32 4, i1 false)
%coerce.dive = getelementptr %struct.DeclGroup* %agg.tmp, i32 0, i32 0 ; <float*> [#uses=1]
%0 = load float* %coerce.dive, align 1 ; <float> [#uses=1]
%call = call float @_Z3foo9DeclGroup(float %0) ; <float> [#uses=0]
ret void
}

instead of:

%struct.DeclGroup = type { float }

define void @_Z3barP9DeclGroup(%struct.DeclGroup* %D) nounwind {
entry:
%D.addr = alloca %struct.DeclGroup*, align 8 ; <%struct.DeclGroup**> [#uses=2]
%agg.tmp = alloca %struct.DeclGroup, align 4 ; <%struct.DeclGroup*> [#uses=2]
%tmp3 = alloca double ; <double*> [#uses=2]
store %struct.DeclGroup* %D, %struct.DeclGroup** %D.addr
%tmp = load %struct.DeclGroup** %D.addr ; <%struct.DeclGroup*> [#uses=1]
%tmp1 = bitcast %struct.DeclGroup* %agg.tmp to i8* ; <i8*> [#uses=1]
%tmp2 = bitcast %struct.DeclGroup* %tmp to i8* ; <i8*> [#uses=1]
call void @llvm.memcpy.p0i8.p0i8.i64(i8* %tmp1, i8* %tmp2, i64 4, i32 4, i1 false)
%coerce.dive = getelementptr %struct.DeclGroup* %agg.tmp, i32 0, i32 0 ; <float*> [#uses=1]
%0 = bitcast double* %tmp3 to float* ; <float*> [#uses=1]
%1 = load float* %coerce.dive ; <float> [#uses=1]
store float %1, float* %0, align 1
%2 = load double* %tmp3 ; <double> [#uses=1]
%call = call float @_Z3foo9DeclGroup(double %2) ; <float> [#uses=0]
ret void
}

which is this machine code (at -O0):

__Z3barP9DeclGroup:
subq $24, %rsp
movq %rdi, 16(%rsp)
movq 16(%rsp), %rdi
leaq 8(%rsp), %rax
movl (%rdi), %ecx
movl %ecx, (%rax)
movss 8(%rsp), %xmm0
callq __Z3foo9DeclGroup
addq $24, %rsp
ret

vs this:

__Z3barP9DeclGroup:
subq $24, %rsp
movq %rdi, 16(%rsp)
movq 16(%rsp), %rdi
leaq 8(%rsp), %rax
movl (%rdi), %ecx
movl %ecx, (%rax)
movss 8(%rsp), %xmm0
movss %xmm0, (%rsp)
movsd (%rsp), %xmm0
callq __Z3foo9DeclGroup
addq $24, %rsp
ret

At -O3, it is the difference between this now:

__Z3barP9DeclGroup:
movss (%rdi), %xmm0
jmp __Z3foo9DeclGroup # TAILCALL

vs this before:

__Z3barP9DeclGroup:
movl (%rdi), %eax
movd %rax, %xmm0
jmp __Z3foo9DeclGroup # TAILCALL



git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@107048 91177308-0d34-0410-b5e6-96231b3b80d8
/external/clang/test/CodeGen/x86_64-arguments.c
6d11cdbde41aa847369db85369b2ce5f82515b06 27-Jun-2010 Chris Lattner <sabre@nondot.org> If coercing something from int or pointer type to int or pointer type
(potentially after unwrapping it from a struct) do it without going through
memory. We now compile:

struct DeclGroup {
unsigned NumDecls;
};

int foo(DeclGroup D) {
return D.NumDecls;
}

into:

%struct.DeclGroup = type { i32 }

define i32 @_Z3foo9DeclGroup(i64) nounwind ssp noredzone {
entry:
%D = alloca %struct.DeclGroup, align 4 ; <%struct.DeclGroup*> [#uses=2]
%coerce.dive = getelementptr %struct.DeclGroup* %D, i32 0, i32 0 ; <i32*> [#uses=1]
%coerce.val.ii = trunc i64 %0 to i32 ; <i32> [#uses=1]
store i32 %coerce.val.ii, i32* %coerce.dive
%tmp = getelementptr inbounds %struct.DeclGroup* %D, i32 0, i32 0 ; <i32*> [#uses=1]
%tmp1 = load i32* %tmp ; <i32> [#uses=1]
ret i32 %tmp1
}

instead of:

%struct.DeclGroup = type { i32 }

define i32 @_Z3foo9DeclGroup(i64) nounwind ssp noredzone {
entry:
%D = alloca %struct.DeclGroup, align 4 ; <%struct.DeclGroup*> [#uses=2]
%tmp = alloca i64 ; <i64*> [#uses=2]
%coerce.dive = getelementptr %struct.DeclGroup* %D, i32 0, i32 0 ; <i32*> [#uses=1]
store i64 %0, i64* %tmp
%1 = bitcast i64* %tmp to i32* ; <i32*> [#uses=1]
%2 = load i32* %1, align 1 ; <i32> [#uses=1]
store i32 %2, i32* %coerce.dive
%tmp1 = getelementptr inbounds %struct.DeclGroup* %D, i32 0, i32 0 ; <i32*> [#uses=1]
%tmp2 = load i32* %tmp1 ; <i32> [#uses=1]
ret i32 %tmp2
}

... which is quite a bit less terrifying.



git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@106975 91177308-0d34-0410-b5e6-96231b3b80d8
/external/clang/test/CodeGen/x86_64-arguments.c
e7bb777caf478ac8b096bd6a0c14d78ea8b2f5be 27-Jun-2010 Chris Lattner <sabre@nondot.org> Same patch as the previous one, on the store side. Before, we compiled this:

struct DeclGroup {
unsigned NumDecls;
};

int foo(DeclGroup D) {
return D.NumDecls;
}

to:

%struct.DeclGroup = type { i32 }

define i32 @_Z3foo9DeclGroup(i64) nounwind ssp noredzone {
entry:
%D = alloca %struct.DeclGroup, align 4 ; <%struct.DeclGroup*> [#uses=2]
%tmp = alloca i64 ; <i64*> [#uses=2]
store i64 %0, i64* %tmp
%1 = bitcast i64* %tmp to %struct.DeclGroup* ; <%struct.DeclGroup*> [#uses=1]
%2 = load %struct.DeclGroup* %1, align 1 ; <%struct.DeclGroup> [#uses=1]
store %struct.DeclGroup %2, %struct.DeclGroup* %D
%tmp1 = getelementptr inbounds %struct.DeclGroup* %D, i32 0, i32 0 ; <i32*> [#uses=1]
%tmp2 = load i32* %tmp1 ; <i32> [#uses=1]
ret i32 %tmp2
}

which caused fast isel bailouts due to the FCA load/store of %2. Now
we generate this just blissful code:

%struct.DeclGroup = type { i32 }

define i32 @_Z3foo9DeclGroup(i64) nounwind ssp noredzone {
entry:
%D = alloca %struct.DeclGroup, align 4 ; <%struct.DeclGroup*> [#uses=2]
%tmp = alloca i64 ; <i64*> [#uses=2]
%coerce.dive = getelementptr %struct.DeclGroup* %D, i32 0, i32 0 ; <i32*> [#uses=1]
store i64 %0, i64* %tmp
%1 = bitcast i64* %tmp to i32* ; <i32*> [#uses=1]
%2 = load i32* %1, align 1 ; <i32> [#uses=1]
store i32 %2, i32* %coerce.dive
%tmp1 = getelementptr inbounds %struct.DeclGroup* %D, i32 0, i32 0 ; <i32*> [#uses=1]
%tmp2 = load i32* %tmp1 ; <i32> [#uses=1]
ret i32 %tmp2
}

This avoids fast isel bailing out and is groundwork for a future patch.
This reduces bailouts on CGStmt.ll from 935 to 911.



git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@106974 91177308-0d34-0410-b5e6-96231b3b80d8
/external/clang/test/CodeGen/x86_64-arguments.c
46c54fb8ec45765a475b7b709b9aee7f94c490c2 21-Apr-2010 Daniel Dunbar <daniel@zuster.org> ABI/x86-32 & x86-64: Alignment on 'byval' must be set when the alignment
exceeds the minimum ABI alignment.

git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@102019 91177308-0d34-0410-b5e6-96231b3b80d8
/external/clang/test/CodeGen/x86_64-arguments.c
0aa1cba02fb3d08500217a46fa1287e43fdae2e1 21-Apr-2010 Daniel Dunbar <daniel@zuster.org> Convert test to FileCheck.

git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@102016 91177308-0d34-0410-b5e6-96231b3b80d8
/external/clang/test/CodeGen/x86_64-arguments.c
fb97cf24158aa7f1fd74374052f99733ef331bb9 20-Apr-2010 Chris Lattner <sabre@nondot.org> don't slap the noalias attribute on stret result arguments.
This mirrors Dan's patch for llvm-gcc in r97989, and
fixes the miscompilation in PR6525. There is some contention
over whether this is the right thing to do, but it is the
conservative answer and demonstrably fixes a miscompilation.


git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@101877 91177308-0d34-0410-b5e6-96231b3b80d8
/external/clang/test/CodeGen/x86_64-arguments.c
a5728872c7702ddd09537c95bc3cbd20e1f2fb09 15-Dec-2009 Daniel Dunbar <daniel@zuster.org> Update tests to use %clang_cc1 instead of 'clang-cc' or 'clang -cc1'.
- This is designed to make it obvious that %clang_cc1 is a "test variable"
which is substituted. It is '%clang_cc1' instead of '%clang -cc1' because it
can be useful to redefine what gets run as 'clang -cc1' (for example, to set
a default target).

git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@91446 91177308-0d34-0410-b5e6-96231b3b80d8
/external/clang/test/CodeGen/x86_64-arguments.c
2475d76920b43014e661690836642ca3c9967179 08-Nov-2009 Daniel Dunbar <daniel@zuster.org> Remove RUN: true lines.

git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@86432 91177308-0d34-0410-b5e6-96231b3b80d8
/external/clang/test/CodeGen/x86_64-arguments.c
4fcfde4d5c8f25e40720972a5543d538a0dcb220 08-Nov-2009 Daniel Dunbar <daniel@zuster.org> Eliminate &&s in tests.
- 'for i in $(find . -type f); do sed -e 's#\(RUN:.*[^ ]\) *&& *$#\1#g' $i | FileUpdate $i; done', for the curious.

git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@86430 91177308-0d34-0410-b5e6-96231b3b80d8
/external/clang/test/CodeGen/x86_64-arguments.c
55a759b8bb52e9d74c92e17543780751c5e5c5ec 23-Aug-2009 Daniel Dunbar <daniel@zuster.org> Fix a few tests to be -Asserts agnostic.
- Ugh.

git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@79860 91177308-0d34-0410-b5e6-96231b3b80d8
/external/clang/test/CodeGen/x86_64-arguments.c
8d5053c8642db9cdf37d3cf56e712f24b8d57b1f 13-Aug-2009 Daniel Dunbar <daniel@zuster.org> Update test


git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@78877 91177308-0d34-0410-b5e6-96231b3b80d8
/external/clang/test/CodeGen/x86_64-arguments.c
c36541e7bfa69cc63e2668a986bc99117559c545 21-Jul-2009 Mike Stump <mrs@apple.com> Prep for new warning.


git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@76638 91177308-0d34-0410-b5e6-96231b3b80d8
/external/clang/test/CodeGen/x86_64-arguments.c
fdf4986c4c75514df428ed71d5942252f18e129b 05-Jun-2009 Daniel Dunbar <daniel@zuster.org> ABI handling: Fix nasty thinko where IRgen could generate an out-of-bounds read
when generating a coercion for ABI handling purposes.
- This may only manifest itself when building at -O0, but the practical effect
is that other arguments may get clobbered.

- <rdar://problem/6930451> [irgen] ABI coercion clobbers other arguments


git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@72932 91177308-0d34-0410-b5e6-96231b3b80d8
/external/clang/test/CodeGen/x86_64-arguments.c
86e13eeb65397f7b64173523a40c742b2702364b 26-May-2009 Daniel Dunbar <daniel@zuster.org> When trying to pass an argument on the stack, assume LLVM will do the right
thing for non-aggregate types.
- Otherwise we unnecessarily pin values to the stack and currently end up
triggering a backend bug in one case.

- This loose cooperation with LLVM to implement the ABI is pretty ugly.

- <rdar://problem/6918722> [irgen] clang miscompile of many pointer varargs on
x86-64


git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@72419 91177308-0d34-0410-b5e6-96231b3b80d8
/external/clang/test/CodeGen/x86_64-arguments.c
3a5f5c57e0a262207f7cb721a60df3676ab5209f 22-May-2009 Daniel Dunbar <daniel@zuster.org> x86_64 ABI: Account for sret parameters consuming an integer register.
- PR4242.
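A minimal sketch of the effect (hypothetical names): with an sret return, the hidden
return pointer occupies %rdi, leaving five integer registers for the explicit arguments.

struct big { long a, b, c; };   /* classified MEMORY, so returned via a hidden sret pointer */
struct big make(int a, int b, int c, int d, int e, int f);
/* %rdi = sret pointer; a..e use %rsi, %rdx, %rcx, %r8, %r9; f goes on the stack */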


git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@72268 91177308-0d34-0410-b5e6-96231b3b80d8
/external/clang/test/CodeGen/x86_64-arguments.c
7ef455be9beb7a755d815bfbdc38d55d1ce59b86 13-May-2009 Daniel Dunbar <daniel@zuster.org> ABI handling: Fix an invalid assertion; it is possible for a valid
coercion to be specified which truncates padding bits. It would be
nice to still have the assert, but we don't have any API call for the
unpadded size of a type yet.


git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@71695 91177308-0d34-0410-b5e6-96231b3b80d8
/external/clang/test/CodeGen/x86_64-arguments.c
20e95c5eb400c864bbd4421624fdf7b25ce70f56 12-May-2009 Daniel Dunbar <daniel@zuster.org> x86-64 ABI: clang incorrectly passes union { long double, float } in
register.
- Merge algorithm was returning MEMORY as it should.
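The union in question, for reference (hypothetical names):

union ldf { long double ld; float f; };
void callee(union ldf u);   /* should be passed in memory, not in a register */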


git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@71556 91177308-0d34-0410-b5e6-96231b3b80d8
/external/clang/test/CodeGen/x86_64-arguments.c
8236bf1800641d1c296579e25218f68f74c5caac 09-May-2009 Daniel Dunbar <daniel@zuster.org> x86_64 ABI: Ignore padding bit-fields during classification.
- {return-types,single-args}-{32,64} pass the first 1k ABI tests with
bit-fields enabled.


git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@71272 91177308-0d34-0410-b5e6-96231b3b80d8
/external/clang/test/CodeGen/x86_64-arguments.c
d7d5f0223bd30dfd618762349c6209dd1d5ea3e6 24-Mar-2009 Daniel Dunbar <daniel@zuster.org> Rename clang to clang-cc.

Tests and drivers updated, still need to shuffle dirs.


git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@67602 91177308-0d34-0410-b5e6-96231b3b80d8
/external/clang/test/CodeGen/x86_64-arguments.c
100f402451da96f74ea58b1f49fc53b4fa149a57 06-Mar-2009 Daniel Dunbar <daniel@zuster.org> x86_64 ABI: Handle long double in union when upper eightbyte results
in a lone X87 class.
- PR3735.


git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@66277 91177308-0d34-0410-b5e6-96231b3b80d8
/external/clang/test/CodeGen/x86_64-arguments.c
4b87142aba52c76ff9ed7c9c2fe0067bd935a2f4 26-Feb-2009 Mike Stump <mrs@apple.com> Add end of line at end.


git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@65557 91177308-0d34-0410-b5e6-96231b3b80d8
/external/clang/test/CodeGen/x86_64-arguments.c
730f909e146b0ac5dbcf9b8be65cb8f82c68d883 26-Feb-2009 Anders Carlsson <andersca@mac.com> Add test for enum types

git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@65540 91177308-0d34-0410-b5e6-96231b3b80d8
/external/clang/test/CodeGen/x86_64-arguments.c
644f4c3fe4130c7be75d6191340ba8d857ba0730 14-Feb-2009 Daniel Dunbar <daniel@zuster.org> x86_64 ABI: Pass simple types directly when possible. This is
important both for keeping the generated LLVM IR simple and for ensuring
that integer types are passed/promoted correctly.


git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@64529 91177308-0d34-0410-b5e6-96231b3b80d8
/external/clang/test/CodeGen/x86_64-arguments.c