01d5b946acac8519d510781967bf538acdae1853
03-Mar-2016  Elliott Hughes <enh@google.com>

Remove optimized code for bzero, which was removed from POSIX in 2008.
I'll come back for the last bcopy remnant...

Bug: http://b/26407170
Change-Id: Iabfeb95fc8a4b4b3992e3cc209ec5221040e7c26

62e59646f8795afac2eb4189a153e7decde370e7
01-Mar-2016  Elliott Hughes <enh@google.com>

Improve diagnostics from the assembler __memset_chk routines.

Change-Id: Ic165043ab8cd5e16866b3e11cfba960514cbdc57

b83d6747facc5d819a0df0bcb8762477eecfd962
26-Feb-2016  Elliott Hughes <enh@google.com>

Improve FORTIFY failure diagnostics.

Our FORTIFY _chk functions' implementations were very repetitive and
verbose but not very helpful. We'd also screwed up and put the SSIZE_MAX
checks where they would never fire unless you actually had a buffer as
large as half your address space, which probably doesn't happen very
often.

Factor out the duplication and take the opportunity to actually show
details like how big the overrun buffer was, or by how much it was
overrun.

Also remove the obsolete FORTIFY event logging and the unused
__libc_fatal_no_abort.

This change doesn't improve the diagnostics from the optimized assembler
implementations.

Change-Id: I176a90701395404d50975b547a00bd2c654e1252

e1e434af12e801931abaa7dac03915ee4c2d9b15
06-Jul-2015  Christopher Ferris <cferris@google.com>

Replace bx lr with update of pc from the stack.

When there is arm assembler of this format:

    ldmxx sp!, {..., lr}   or   pop {..., lr}
    bx lr

it can be replaced with:

    ldmxx sp!, {..., pc}   or   pop {..., pc}

Change-Id: Ic27048c52f90ac4360ad525daf0361a830dc22a3

43850d19f422d4850bebf765607e4f4d4b99df2e
11-May-2015  Chih-Hung Hsieh <chh@google.com>

Use unified syntax to compile with both llvm and gcc.

All arch-arm and arch-arm64 .S files were compiled by gcc with and
without this patch. The output object files were identical. When
compiled with llvm and this patch, the output files were also identical
to gcc's output.

BUG: 18061004
Change-Id: I458914d512ddf5496e4eb3d288bf032cd526d32b
(cherry picked from commit 33f33515b503b634d9fbc57dda7123ea9cf23fc6)

33f33515b503b634d9fbc57dda7123ea9cf23fc6
11-May-2015  Chih-Hung Hsieh <chh@google.com>

Use unified syntax to compile with both llvm and gcc.

All arch-arm and arch-arm64 .S files were compiled by gcc with and
without this patch. The output object files were identical. When
compiled with llvm and this patch, the output files were also identical
to gcc's output.

BUG: 18061004
Change-Id: I458914d512ddf5496e4eb3d288bf032cd526d32b

851e68a2402fa414544e66650e09dfdaac813e51
20-Feb-2014  Elliott Hughes <enh@google.com>

Unify our assembler macros.

Our <machine/asm.h> files were modified from upstream, to the extent
that no architecture was actually using the upstream ENTRY or END
macros, assuming that architecture even had such a macro upstream.

This patch moves everyone to the same macros, with just a few tweaks
remaining in the <machine/asm.h> files, which no one should now use
directly.

I've removed most of the unused cruft from the <machine/asm.h> files,
though there's still rather a lot in the mips/mips64 ones.

Bug: 12229603
Change-Id: I2fff287dc571ac1087abe9070362fb9420d85d6d

68b67113a44311b3568027af5893e316f63ec556
16-Oct-2013  Elliott Hughes <enh@google.com>

'Avoid confusing "read prevented write" log messages' 2.

This time it's assembler.

Change-Id: Iae6369833b8046b8eda70238bb4ed0cae64269ea

eb847bc8666842a3cfc9c06e8458ad1abebebaf0
10-Oct-2013  Elliott Hughes <enh@google.com>

Fix x86_64 build, clean up intermediate libraries.

The x86_64 build was failing because clone.S had a call to
__thread_entry which was being added to a different intermediate .a on
the way to making libc.so, and the linker couldn't guarantee statically
that such a relocation would be possible:

    ld: error: out/target/product/generic_x86_64/obj/STATIC_LIBRARIES/libc_common_intermediates/libc_common.a(clone.o):
    requires dynamic R_X86_64_PC32 reloc against '__thread_entry' which
    may overflow at runtime; recompile with -fPIC

This patch addresses that by ensuring that the caller and callee end up
in the same intermediate .a. While I'm here, I've tried to clean up some
of the mess that led to this situation too. In particular, this removes
libc/private/ from the default include path (except for the DNS code),
and splits out the DNS code into its own library (since it's a weird
special case of upstream NetBSD code that's diverged so heavily it's
unlikely ever to get back in sync).

There's more cleanup of the DNS situation possible, but this is
definitely a step in the right direction, and it's more than enough to
get x86_64 building cleanly.

Change-Id: I00425a7245b7a2573df16cc38798187d0729e7c4

6861c6f85e6563695c4763e56756398c9d5f6e14
04-Oct-2013  Nick Kralevich <nnk@google.com>

Make error messages even better!

Change-Id: I72bd1eb1d526dc59833e5bc3c636171f7f9545af

59a13c122ebc4191583b67c846a95d690dcda5cf
01-Aug-2013  Christopher Ferris <cferris@google.com>

Optimize __memset_chk, __memcpy_chk. DO NOT MERGE.

This change creates assembler versions of __memcpy_chk/__memset_chk
that are implemented in the memcpy/memset assembler code. This change
avoids an extra call to memcpy/memset, instead allowing a simple fall
through to occur from the chk code into the body of the real
implementation.

Testing:

- Ran the libc_test on __memcpy_chk/__memset_chk on all nexus devices.
- Wrote a small test executable that has three calls to __memcpy_chk
  and three calls to __memset_chk. The first call's dest_len is
  length + 1, the second call's dest_len is length, and the third
  call's dest_len is length - 1. Verified that the first two calls pass
  and the third fails. Examined the logcat output on all nexus devices
  to verify that the fortify error message was sent properly.
- Benchmarked the new __memcpy_chk and __memset_chk on all systems. For
  __memcpy_chk and large copies, the savings are relatively small
  (about 1%). For small copies, the savings are large on
  cortex-a15/krait devices (between 5% and 30%). For cortex-a9 and
  small copies, the speed-up is present but relatively small (about 3%
  to 5%). For __memset_chk and large copies, the savings are also small
  (about 1%). However, all processors show larger speed-ups on small
  copies (about 30% to 100%).

Bug: 9293744

Merge from internal master.
(cherry-picked from 7c860db0747f6276a6e43984d43f8fa5181ea936)

Change-Id: I916ad305e4001269460ca6ebd38aaa0be8ac7f52

7c860db0747f6276a6e43984d43f8fa5181ea936
01-Aug-2013  Christopher Ferris <cferris@google.com>

Optimize __memset_chk, __memcpy_chk.

This change creates assembler versions of __memcpy_chk/__memset_chk
that are implemented in the memcpy/memset assembler code. This change
avoids an extra call to memcpy/memset, instead allowing a simple fall
through to occur from the chk code into the body of the real
implementation.

Testing:

- Ran the libc_test on __memcpy_chk/__memset_chk on all nexus devices.
- Wrote a small test executable that has three calls to __memcpy_chk
  and three calls to __memset_chk. The first call's dest_len is
  length + 1, the second call's dest_len is length, and the third
  call's dest_len is length - 1. Verified that the first two calls pass
  and the third fails. Examined the logcat output on all nexus devices
  to verify that the fortify error message was sent properly.
- Benchmarked the new __memcpy_chk and __memset_chk on all systems. For
  __memcpy_chk and large copies, the savings are relatively small
  (about 1%). For small copies, the savings are large on
  cortex-a15/krait devices (between 5% and 30%). For cortex-a9 and
  small copies, the speed-up is present but relatively small (about 3%
  to 5%). For __memset_chk and large copies, the savings are also small
  (about 1%). However, all processors show larger speed-ups on small
  copies (about 30% to 100%).

Bug: 9293744
Change-Id: I8926d59fe2673e36e8a27629e02a7b7059ebbc98

04954a43b362b8c817cc5859513efad0c344f412
26-Feb-2013  Christopher Ferris <cferris@google.com>

Break bionic implementations into arch versions.

Move arch specific code for arm, mips, x86 into separate makefiles. In
addition, add different arm cpu versions of memcpy/memset.

Bug: 8005082

Merge from internal master (acdde8c1cf8e8beed98c052757d96695b820b50c).

Change-Id: I04f3d0715104fab618e1abf7cf8f7eec9bec79df

7c83a1ed81a15f3e75836c1ac7d500a952f02e10
26-Feb-2013  Christopher Ferris <cferris@google.com>

Break bionic implementations into arch versions. DO NOT MERGE

Move arch specific code for arm, mips, x86 into separate makefiles. In
addition, add different arm cpu versions of memcpy/memset.

Bug: 8005082
(cherry picked from commit acdde8c1cf8e8beed98c052757d96695b820b50c)

Change-Id: I0108d432af9f6283ae99adfc92a3399e5ab3e31d

acdde8c1cf8e8beed98c052757d96695b820b50c
26-Feb-2013  Christopher Ferris <cferris@google.com>

Break bionic implementations into arch versions.

Move arch specific code for arm, mips, x86 into separate makefiles. In
addition, add different arm cpu versions of memcpy/memset.

Bug: 8005082
Change-Id: I04f3d0715104fab618e1abf7cf8f7eec9bec79df