1c8ea807ebb54c1533040b60c0a6394abc77e339 |
|
30-Sep-2014 |
Christopher Ferris <cferris@google.com> |
Clean up ARM assembly. Remove the old ARM directives. Change the non-local labels to .L labels. Add cfi directives to strcpy.S. Bug: 18157900 (cherry picked from commit c8bd2abab24afe563240297018c4fa79944f193b) Change-Id: Ifa1c3d16553d142eaa0d744af040f0352538106c
|
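Illustrative sketch only (not taken from strcpy.S; the function name and body are hypothetical): the style this cleanup moves toward, with a DWARF cfi-annotated prologue instead of the legacy ARM unwind directives, and a .L-prefixed local label that never lands in the symbol table.

        .syntax unified
        .text
        .globl  example_strcpy_like
        .type   example_strcpy_like, %function
example_strcpy_like:
        .cfi_startproc
        push    {r0, lr}                @ keep the original dst for the return value
        .cfi_def_cfa_offset 8
        .cfi_rel_offset r0, 0
        .cfi_rel_offset lr, 4
.Lcopy_loop:                            @ .L labels never become symbols
        ldrb    r3, [r1], #1            @ copy src byte by byte, including the NUL
        strb    r3, [r0], #1
        cmp     r3, #0
        bne     .Lcopy_loop
        pop     {r0, pc}                @ return the original dst
        .cfi_endproc
        .size   example_strcpy_like, .-example_strcpy_like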
c8bd2abab24afe563240297018c4fa79944f193b |
|
30-Sep-2014 |
Christopher Ferris <cferris@google.com> |
Clean up ARM assembly. Remove the old ARM directives. Change the non-local labels to .L labels. Add cfi directives to strcpy.S. Change-Id: I9bafee1ffe5d85c92d07cfa8a85338cef9759562
|
851e68a2402fa414544e66650e09dfdaac813e51 |
|
20-Feb-2014 |
Elliott Hughes <enh@google.com> |
Unify our assembler macros. Our <machine/asm.h> files were modified from upstream, to the extent that no architecture was actually using the upstream ENTRY or END macros, assuming that architecture even had such a macro upstream. This patch moves everyone to the same macros, with just a few tweaks remaining in the <machine/asm.h> files, which no one should now use directly. I've removed most of the unused cruft from the <machine/asm.h> files, though there's still rather a lot in the mips/mips64 ones. Bug: 12229603 Change-Id: I2fff287dc571ac1087abe9070362fb9420d85d6d
|
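A rough sketch of the shape such unified ENTRY/END macros take, also showing the .cfi_startproc/.cfi_endproc placement described in the 507cfe2e entry below. The real definitions live in bionic's private headers and differ in detail; the header name, alignment, and sample function here are illustrative assumptions.

/* illustrative_asm.h -- hypothetical header, not the actual bionic one */
#define ENTRY(f)                \
        .text;                  \
        .globl  f;              \
        .balign 4;              \
        .type   f, %function;   \
f:                              \
        .cfi_startproc

#define END(f)                  \
        .cfi_endproc;           \
        .size   f, . - f

/* usage in a preprocessed .S file: */
ENTRY(example_return_zero)
        mov     r0, #0
        bx      lr
END(example_return_zero)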
507cfe2e10a6c4ad61b9638820ba10bfe881a18c |
|
19-Nov-2013 |
Christopher Ferris <cferris@google.com> |
Add .cfi_startproc/.cfi_endproc to ENTRY/END. Bug: 10414953 Change-Id: I711718098b9f3cc0ba8277778df64557e9c7b2a0
|
68b67113a44311b3568027af5893e316f63ec556 |
|
16-Oct-2013 |
Elliott Hughes <enh@google.com> |
'Avoid confusing "read prevented write" log messages', part 2. This time it's the assembler. Change-Id: Iae6369833b8046b8eda70238bb4ed0cae64269ea
|
eb847bc8666842a3cfc9c06e8458ad1abebebaf0 |
|
10-Oct-2013 |
Elliott Hughes <enh@google.com> |
Fix x86_64 build, clean up intermediate libraries. The x86_64 build was failing because clone.S had a call to __thread_entry which was being added to a different intermediate .a on the way to making libc.so, and the linker couldn't guarantee statically that such a relocation would be possible.
  ld: error: out/target/product/generic_x86_64/obj/STATIC_LIBRARIES/libc_common_intermediates/libc_common.a(clone.o): requires dynamic R_X86_64_PC32 reloc against '__thread_entry' which may overflow at runtime; recompile with -fPIC
This patch addresses that by ensuring that the caller and callee end up in the same intermediate .a. While I'm here, I've tried to clean up some of the mess that led to this situation too. In particular, this removes libc/private/ from the default include path (except for the DNS code), and splits out the DNS code into its own library (since it's a weird special case of upstream NetBSD code that's diverged so heavily it's unlikely ever to get back in sync). There's more cleanup of the DNS situation possible, but this is definitely a step in the right direction, and it's more than enough to get x86_64 building cleanly.
Change-Id: I00425a7245b7a2573df16cc38798187d0729e7c4
|
6861c6f85e6563695c4763e56756398c9d5f6e14 |
|
04-Oct-2013 |
Nick Kralevich <nnk@google.com> |
Make error messages even better! Change-Id: I72bd1eb1d526dc59833e5bc3c636171f7f9545af
|
32bbf8a63bb43a540cc0f1dd5037736d10b70e0b |
|
03-Oct-2013 |
Nick Kralevich <nnk@google.com> |
libc: don't export unnecessary symbols. Symbols associated with the internal implementation of memcpy-like routines should be private. Change-Id: I2b1d1f59006395c29d518c153928437b08f93d16
|
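One common way to keep such implementation details private in hand-written assembly, sketched with hypothetical names (the mechanism bionic actually used may differ): mark the internal entry point .hidden so it never appears in libc.so's dynamic symbol table.

        .syntax unified
        .text
        .globl  example_memcpy_base     @ reachable from other assembly files in libc...
        .hidden example_memcpy_base     @ ...but not exported from the shared library
        .type   example_memcpy_base, %function
example_memcpy_base:
        bx      lr                      @ stub body; the real helper would do the copy
        .size   example_memcpy_base, .-example_memcpy_base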
16e185c9081530859c17270fbaf5798f0ea871f8 |
|
11-Sep-2013 |
Christopher Ferris <cferris@google.com> |
__memcpy_chk: Fix signed cmp of unsigned values. I accidentally did a signed comparison of the size_t values passed in for three of the _chk functions. Change them to unsigned compares, and add three new tests to verify that this failure is fixed. Bug: 10691831 Merge from internal master. (cherry-picked from 883ef2499c2ff76605f73b1240f719ca6282e554) Change-Id: Id9a96b549435f5d9b61dc132cf1082e0e30889f5
|
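A minimal illustration of the bug class (hypothetical function, not the bionic code): size_t values must be compared with unsigned condition codes. Once the top bit is set, a signed branch such as bgt orders the operands incorrectly, while bhi gives the intended result.

        .syntax unified
        .text
        .globl  example_size_check      @ r0 = count, r1 = dst_len (both size_t)
        .type   example_size_check, %function
example_size_check:
        .cfi_startproc
        cmp     r0, r1
        bhi     .Lsize_fail             @ correct: unsigned "count > dst_len"
      @ bgt     .Lsize_fail             @ wrong: signed compare misorders values >= 0x80000000
        mov     r0, #0                  @ 0 = fits in the destination
        bx      lr
.Lsize_fail:
        mov     r0, #1                  @ 1 = would overflow the destination
        bx      lr
        .cfi_endproc
        .size   example_size_check, .-example_size_check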
a57c9c084bc686a35f4f494ce23cf2a9bb3d5d00 |
|
21-Aug-2013 |
Christopher Ferris <cferris@google.com> |
Fix all debug directives. The backtrace when a fortify check failed was not correct. This change adds all of the directives necessary to get a correct backtrace, fixes the strcmp directives, and changes all labels to local labels.
Testing:
- Verify that the runtime can decode the stack for __memcpy_chk, __memset_chk, __strcpy_chk, and __strcat_chk fortify failures.
- Verify that gdb can decode the stack properly when hitting a fortify check.
- Verify that the runtime can decode the stack for a seg fault in all of the _chk functions and in memcpy/memset.
- Verify that gdb can decode the stack for a seg fault in all of the _chk functions and in memcpy/memset.
- Verify that the runtime can decode the stack for a seg fault in strcmp.
- Verify that gdb can decode the stack for a seg fault in strcmp.
Bug: 10342460
Bug: 10345269
Merge from internal master. (cherry-picked from 05332f2ce7e542d32ff4d5cd9f60248ad71fbf0d)
Change-Id: Ibc919b117cfe72b9ae97e35bd48185477177c5ca
|
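The bookkeeping these fixes require, in sketch form (hypothetical function, not the bionic sources): every instruction that moves sp while saved values are live gets a matching cfi directive, so an unwinder stopped at any point in the body, for example at a faulting load, can still locate the saved lr and produce a correct backtrace.

        .syntax unified
        .text
        .globl  example_body_with_spill
        .type   example_body_with_spill, %function
example_body_with_spill:
        .cfi_startproc
        push    {r4, r5, lr}
        .cfi_def_cfa_offset 12
        .cfi_rel_offset r4, 0
        .cfi_rel_offset r5, 4
        .cfi_rel_offset lr, 8
        sub     sp, sp, #16             @ scratch space...
        .cfi_def_cfa_offset 28          @ ...and the unwind info tracks it
        ldr     r4, [r0]                @ a fault here still unwinds to the caller
        str     r4, [sp]
        add     sp, sp, #16
        .cfi_def_cfa_offset 12
        pop     {r4, r5, pc}
        .cfi_endproc
        .size   example_body_with_spill, .-example_body_with_spill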
bd7fe1d3c4c8877ac53839169851621249289bd7 |
|
20-Aug-2013 |
Christopher Ferris <cferris@google.com> |
Update all debug directives. The libcorkscrew stack unwinder does not understand cfi directives, so add .save directives so that it can function properly. Also add the directives to strcmp.S and fix a missing set of directives in cortex-a9/memcpy_base.S. Bug: 10345269 Merge from internal master. (cherry-picked from 5f7ccea3ffab05aeceecb85c821003cf580630d3) Change-Id: If48a216203216a643807f5d61906015984987189
|
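Sketch of the dual annotation this entry describes, using hypothetical names: the ARM EHABI directives (.fnstart/.save/.fnend) feed unwinders such as libcorkscrew that do not read DWARF, while the cfi directives describe the same push for DWARF-based tools.

        .syntax unified
        .text
        .globl  example_dual_unwind
        .type   example_dual_unwind, %function
example_dual_unwind:
        .fnstart                        @ EHABI unwind region
        .cfi_startproc                  @ DWARF unwind region
        push    {r0, lr}
        .save   {r0, lr}                @ EHABI description of the push above
        .cfi_def_cfa_offset 8           @ DWARF description of the same push
        .cfi_rel_offset r0, 0
        .cfi_rel_offset lr, 4
        ldrb    r3, [r1]                @ token body: copy one byte
        strb    r3, [r0]
        pop     {r0, pc}
        .cfi_endproc
        .fnend
        .size   example_dual_unwind, .-example_dual_unwind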
883ef2499c2ff76605f73b1240f719ca6282e554 |
|
11-Sep-2013 |
Christopher Ferris <cferris@google.com> |
__memcpy_chk: Fix signed cmp of unsigned values. I accidentally did a signed comparison of the size_t values passed in for three of the _chk functions. Change them to unsigned compares, and add three new tests to verify that this failure is fixed. Bug: 10691831 Change-Id: Ia831071f7dffd5972a748d888dd506c7cc7ddba3
|
05332f2ce7e542d32ff4d5cd9f60248ad71fbf0d |
|
21-Aug-2013 |
Christopher Ferris <cferris@google.com> |
Fix all debug directives. The backtrace when a fortify check failed was not correct. This change adds all of the directives necessary to get a correct backtrace, fixes the strcmp directives, and changes all labels to local labels.
Testing:
- Verify that the runtime can decode the stack for __memcpy_chk, __memset_chk, __strcpy_chk, and __strcat_chk fortify failures.
- Verify that gdb can decode the stack properly when hitting a fortify check.
- Verify that the runtime can decode the stack for a seg fault in all of the _chk functions and in memcpy/memset.
- Verify that gdb can decode the stack for a seg fault in all of the _chk functions and in memcpy/memset.
- Verify that the runtime can decode the stack for a seg fault in strcmp.
- Verify that gdb can decode the stack for a seg fault in strcmp.
Bug: 10342460
Bug: 10345269
Change-Id: I1dedadfee207dce4a285e17a21e8952bbc63786a
|
5f7ccea3ffab05aeceecb85c821003cf580630d3 |
|
20-Aug-2013 |
Christopher Ferris <cferris@google.com> |
Update all debug directives. The libcorkscrew stack unwinder does not understand cfi directives, so add .save directives so that it can function properly. Also add the directives to strcmp.S and fix a missing set of directives in cortex-a9/memcpy_base.S. Bug: 10345269 Change-Id: I043f493e0bb6c45bd3f4906fbe1d9f628815b015
|
5f45d583b0cfb4f7bed1447e8eed003a529cc69e |
|
07-Aug-2013 |
Christopher Ferris <cferris@google.com> |
Create optimized __strcpy_chk/__strcat_chk. This change pulls the memcpy code out into a new file so that __strcpy_chk and __strcat_chk can use it with an include. The new versions of the two chk functions use assembly versions of strlen and memcpy to implement the check. This allows near parity with the assembly versions of strcpy/strcat. It also means that as memcpy implementations get faster, so do the chk functions.
Other included changes:
- Change all of the assembly labels to local labels. The other labels confuse gdb and mess up backtracing.
- Add .cfi_startproc and .cfi_endproc directives so that gdb is not confused when falling through from one function to another.
- Change all functions to use cfi directives, since they are more powerful.
- Move the memcpy_chk fail code outside of the memcpy function definition so that backtraces work properly.
- Preserve lr before the calls to __fortify_chk_fail so that the backtrace actually works.
Testing:
- Ran the bionic unit tests. Verified that all error messages in the logs are set correctly.
- Ran libc_test, replacing strcpy with __strcpy_chk and strcat with __strcat_chk.
- Ran the debugger on nexus10, nexus4, and old nexus7. Verified that the backtrace is correct for all fortify check failures, and that the backtrace is still correct when falling through from __memcpy_chk to memcpy; likewise for __memset_chk and bzero. Verified that the two different paths in the cortex-a9 memset routine that save variables to the stack still show the backtrace properly.
Bug: 9293744
(cherry-picked from 2be91915dcecc956d14ff281db0c7d216ca98af2)
Change-Id: Ia407b74d3287d0b6af0139a90b6eb3bfaebf2155
|
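The idea behind the new chk functions, condensed into a sketch (hypothetical function; the real code shares the tuned memcpy body via an include rather than a call, and __fortify_chk_fail takes arguments that are omitted here): measure the source with strlen, bounds-check against dst_len with an unsigned compare, reuse memcpy for the copy, and keep lr reachable on the stack so a failure backtrace is correct.

        .syntax unified
        .text
        .globl  example_strcpy_chk      @ r0 = dst, r1 = src, r2 = dst_len
        .type   example_strcpy_chk, %function
example_strcpy_chk:
        .cfi_startproc
        push    {r0, r1, r2, lr}        @ keep the args and lr live across the strlen call
        .cfi_def_cfa_offset 16
        .cfi_rel_offset r0, 0
        .cfi_rel_offset r1, 4
        .cfi_rel_offset r2, 8
        .cfi_rel_offset lr, 12
        mov     r0, r1
        bl      strlen                  @ r0 = strlen(src)
        add     r3, r0, #1              @ bytes needed, including the NUL
        ldr     r2, [sp, #8]            @ reload dst_len
        cmp     r3, r2
        bhi     .Lstrcpy_chk_fail       @ unsigned: needed > dst_len
        mov     r2, r3                  @ copy length for memcpy
        .cfi_remember_state
        pop     {r0, r1}                @ reload dst and src
        .cfi_adjust_cfa_offset -8
        bl      memcpy                  @ returns dst, which is also our return value
        pop     {r2, lr}                @ discard the saved dst_len, restore lr
        .cfi_adjust_cfa_offset -8
        .cfi_restore lr
        bx      lr

.Lstrcpy_chk_fail:
        .cfi_restore_state              @ lr is still described as saved on the stack here
        bl      __fortify_chk_fail      @ internal libc reporter; does not return
        .cfi_endproc
        .size   example_strcpy_chk, .-example_strcpy_chk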
59a13c122ebc4191583b67c846a95d690dcda5cf |
|
01-Aug-2013 |
Christopher Ferris <cferris@google.com> |
Optimize __memset_chk, __memcpy_chk. DO NOT MERGE. This change creates assembler versions of __memcpy_chk/__memset_chk that are implemented in the memcpy/memset assembler code. This avoids an extra call to memcpy/memset, instead allowing a simple fall-through from the chk code into the body of the real implementation.
Testing:
- Ran libc_test on __memcpy_chk/__memset_chk on all nexus devices.
- Wrote a small test executable that has three calls to __memcpy_chk and three calls to __memset_chk: the first call's dest_len is length + 1, the second's is length, and the third's is length - 1. Verified that the first two calls pass and the third fails. Examined the logcat output on all nexus devices to verify that the fortify error message was sent properly.
- Benchmarked the new __memcpy_chk and __memset_chk on all systems. For __memcpy_chk and large copies, the savings are relatively small (about 1%). For small copies, the savings are large on cortex-a15/krait devices (between 5% and 30%). For cortex-a9 and small copies, the speed-up is present but relatively small (about 3% to 5%). For __memset_chk and large copies, the savings are also small (about 1%). However, all processors show larger speed-ups on small copies (about 30% to 100%).
Bug: 9293744
Merge from internal master. (cherry-picked from 7c860db0747f6276a6e43984d43f8fa5181ea936)
Change-Id: I916ad305e4001269460ca6ebd38aaa0be8ac7f52
|
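The fall-through arrangement in sketch form (hypothetical functions; the tuned copy loop is replaced by a trivial byte loop, and __fortify_chk_fail's arguments are omitted): the chk entry point sits directly above the unchecked body, so the success path costs one unsigned compare and then falls straight into the first real instruction, while the failure code lives outside the body so its backtrace stays sensible.

        .syntax unified
        .text

        .globl  example_memcpy_chk      @ r0 = dst, r1 = src, r2 = count, r3 = dst_len
        .type   example_memcpy_chk, %function
example_memcpy_chk:
        .cfi_startproc
        cmp     r2, r3
        bhi     .Lmemcpy_chk_fail       @ unsigned: count > dst_len
        .cfi_endproc
        @ falls straight through into the unchecked body: no extra call, no branch

        .globl  example_memcpy          @ r0 = dst, r1 = src, r2 = count
        .type   example_memcpy, %function
example_memcpy:
        .cfi_startproc
        push    {r0, lr}                @ memcpy must return dst
        .cfi_def_cfa_offset 8
        .cfi_rel_offset r0, 0
        .cfi_rel_offset lr, 4
        cmp     r2, #0
        beq     .Lmemcpy_done
.Lmemcpy_loop:                          @ stand-in for the tuned copy loop
        ldrb    r3, [r1], #1
        strb    r3, [r0], #1
        subs    r2, r2, #1
        bne     .Lmemcpy_loop
.Lmemcpy_done:
        pop     {r0, pc}
        .cfi_endproc
        .size   example_memcpy, .-example_memcpy

.Lmemcpy_chk_fail:                      @ kept outside the memcpy body on purpose
        .cfi_startproc
        push    {r0, lr}                @ save lr (and keep sp 8-byte aligned) for the backtrace
        .cfi_def_cfa_offset 8
        .cfi_rel_offset r0, 0
        .cfi_rel_offset lr, 4
        bl      __fortify_chk_fail      @ internal libc reporter; does not return
        .cfi_endproc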
f0c3d909136167fdbe32b7815e5e1e02b4c35d62 |
|
07-Aug-2013 |
Christopher Ferris <cferris@google.com> |
Create optimized __strcpy_chk/__strcat_chk. This change pulls the memcpy code out into a new file so that __strcpy_chk and __strcat_chk can use it with an include. The new versions of the two chk functions use assembly versions of strlen and memcpy to implement the check. This allows near parity with the assembly versions of strcpy/strcat. It also means that as memcpy implementations get faster, so do the chk functions.
Other included changes:
- Change all of the assembly labels to local labels. The other labels confuse gdb and mess up backtracing.
- Add .cfi_startproc and .cfi_endproc directives so that gdb is not confused when falling through from one function to another.
- Change all functions to use cfi directives, since they are more powerful.
- Move the memcpy_chk fail code outside of the memcpy function definition so that backtraces work properly.
- Preserve lr before the calls to __fortify_chk_fail so that the backtrace actually works.
Testing:
- Ran the bionic unit tests. Verified that all error messages in the logs are set correctly.
- Ran libc_test, replacing strcpy with __strcpy_chk and strcat with __strcat_chk.
- Ran the debugger on nexus10, nexus4, and old nexus7. Verified that the backtrace is correct for all fortify check failures, and that the backtrace is still correct when falling through from __memcpy_chk to memcpy; likewise for __memset_chk and bzero. Verified that the two different paths in the cortex-a9 memset routine that save variables to the stack still show the backtrace properly.
Bug: 9293744
Change-Id: Id5aec8c3cb14101d91bd125eaf3770c9c8aa3f57
(cherry picked from commit 2be91915dcecc956d14ff281db0c7d216ca98af2)
|
2be91915dcecc956d14ff281db0c7d216ca98af2 |
|
07-Aug-2013 |
Christopher Ferris <cferris@google.com> |
Create optimized __strcpy_chk/__strcat_chk. This change pulls the memcpy code out into a new file so that __strcpy_chk and __strcat_chk can use it with an include. The new versions of the two chk functions use assembly versions of strlen and memcpy to implement the check. This allows near parity with the assembly versions of strcpy/strcat. It also means that as memcpy implementations get faster, so do the chk functions.
Other included changes:
- Change all of the assembly labels to local labels. The other labels confuse gdb and mess up backtracing.
- Add .cfi_startproc and .cfi_endproc directives so that gdb is not confused when falling through from one function to another.
- Change all functions to use cfi directives, since they are more powerful.
- Move the memcpy_chk fail code outside of the memcpy function definition so that backtraces work properly.
- Preserve lr before the calls to __fortify_chk_fail so that the backtrace actually works.
Testing:
- Ran the bionic unit tests. Verified that all error messages in the logs are set correctly.
- Ran libc_test, replacing strcpy with __strcpy_chk and strcat with __strcat_chk.
- Ran the debugger on nexus10, nexus4, and old nexus7. Verified that the backtrace is correct for all fortify check failures, and that the backtrace is still correct when falling through from __memcpy_chk to memcpy; likewise for __memset_chk and bzero. Verified that the two different paths in the cortex-a9 memset routine that save variables to the stack still show the backtrace properly.
Bug: 9293744
Change-Id: Id5aec8c3cb14101d91bd125eaf3770c9c8aa3f57
|
7c860db0747f6276a6e43984d43f8fa5181ea936 |
|
01-Aug-2013 |
Christopher Ferris <cferris@google.com> |
Optimize __memset_chk, __memcpy_chk. This change creates assembler versions of __memcpy_chk/__memset_chk that are implemented in the memcpy/memset assembler code. This avoids an extra call to memcpy/memset, instead allowing a simple fall-through from the chk code into the body of the real implementation.
Testing:
- Ran libc_test on __memcpy_chk/__memset_chk on all nexus devices.
- Wrote a small test executable that has three calls to __memcpy_chk and three calls to __memset_chk: the first call's dest_len is length + 1, the second's is length, and the third's is length - 1. Verified that the first two calls pass and the third fails. Examined the logcat output on all nexus devices to verify that the fortify error message was sent properly.
- Benchmarked the new __memcpy_chk and __memset_chk on all systems. For __memcpy_chk and large copies, the savings are relatively small (about 1%). For small copies, the savings are large on cortex-a15/krait devices (between 5% and 30%). For cortex-a9 and small copies, the speed-up is present but relatively small (about 3% to 5%). For __memset_chk and large copies, the savings are also small (about 1%). However, all processors show larger speed-ups on small copies (about 30% to 100%).
Bug: 9293744
Change-Id: I8926d59fe2673e36e8a27629e02a7b7059ebbc98
|
04954a43b362b8c817cc5859513efad0c344f412 |
|
26-Feb-2013 |
Christopher Ferris <cferris@google.com> |
Break bionic implementations into arch versions. Move arch-specific code for arm, mips, and x86 into separate makefiles. In addition, add different ARM CPU versions of memcpy/memset. Bug: 8005082 Merge from internal master (acdde8c1cf8e8beed98c052757d96695b820b50c). Change-Id: I04f3d0715104fab618e1abf7cf8f7eec9bec79df
|
7c83a1ed81a15f3e75836c1ac7d500a952f02e10 |
|
26-Feb-2013 |
Christopher Ferris <cferris@google.com> |
Break bionic implementations into arch versions. DO NOT MERGE. Move arch-specific code for arm, mips, and x86 into separate makefiles. In addition, add different ARM CPU versions of memcpy/memset. Bug: 8005082 (cherry picked from commit acdde8c1cf8e8beed98c052757d96695b820b50c) Change-Id: I0108d432af9f6283ae99adfc92a3399e5ab3e31d
|
acdde8c1cf8e8beed98c052757d96695b820b50c |
|
26-Feb-2013 |
Christopher Ferris <cferris@google.com> |
Break bionic implementations into arch versions. Move arch-specific code for arm, mips, and x86 into separate makefiles. In addition, add different ARM CPU versions of memcpy/memset. Bug: 8005082 Change-Id: I04f3d0715104fab618e1abf7cf8f7eec9bec79df
|