History log of /external/python/cpython3/Lib/test/tokenize_tests-utf8-coding-cookie-and-utf8-bom-sig.txt
Revision Date Author Comments
3c6938d22b7fd404bdc02576e063988876c5f700 14-Jun-2008 Martin v. Löwis <martin@v.loewis.de> Ran svneol.py
/external/python/cpython3/Lib/test/tokenize_tests-utf8-coding-cookie-and-utf8-bom-sig.txt
428de65ca99492436130165bfbaeb56d6d1daec7 18-Mar-2008 Trent Nelson <trent.nelson@snakebite.org> - Issue #719888: Updated tokenize to use a bytes API. generate_tokens has been
renamed tokenize and now works with bytes rather than strings. A new
detect_encoding function has been added for determining source file encoding
according to PEP-0263. Token sequences returned by tokenize always start
with an ENCODING token, which specifies the encoding used to decode the file.
This token is used to encode the output of untokenize back to bytes.

Credit goes to Michael "I'm-going-to-name-my-first-child-unittest" Foord from Resolver Systems for this work.
/external/python/cpython3/Lib/test/tokenize_tests-utf8-coding-cookie-and-utf8-bom-sig.txt
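
A minimal sketch of the bytes API that the commit above describes, assuming an in-memory source carrying both a UTF-8 BOM and a coding cookie (the same combination this test fixture exercises); the source bytes and variable names here are illustrative, not taken from the commit:

    import io
    import tokenize

    # Source bytes with a UTF-8 BOM followed by a PEP 263 coding cookie.
    source = b'\xef\xbb\xbf# -*- coding: utf-8 -*-\nprint("hi")\n'

    # detect_encoding() reads at most two lines and applies the PEP 263
    # rules; with a BOM present it reports 'utf-8-sig'.
    encoding, consumed = tokenize.detect_encoding(io.BytesIO(source).readline)
    print(encoding)  # 'utf-8-sig'

    # tokenize() takes a bytes readline and emits an ENCODING token first.
    tokens = list(tokenize.tokenize(io.BytesIO(source).readline))
    print(tokens[0].string)  # 'utf-8-sig'

    # untokenize() uses that leading token to encode its output back to bytes.
    print(type(tokenize.untokenize(tokens)))  # <class 'bytes'>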