eabfe8cc0e061081d1cbcef4895a99cf7520e8d1 |
|
28-Jun-2015 |
Jason R. Coombs <jaraco@jaraco.com> |
Issue #20387: Backport fix from Python 3.4
/external/python/cpython2/Lib/tokenize.py
|
bd7cf3ade36f00f7d23a0edca9be365c4ad8a762 |
|
24-Feb-2014 |
Terry Jan Reedy <tjreedy@udel.edu> |
Issue #9974: When untokenizing, use row info to insert backslash+newline. Original patches by A. Kuchling and G. Rees (#12691).
/external/python/cpython2/Lib/tokenize.py
|
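The behavior this fix restores can be sketched with the modern Python 3 `tokenize` module (same API shape as the patched Lib/tokenize.py; the source string is illustrative):

```python
import io
import tokenize

# A logical line split with an explicit backslash continuation.
source = "x = 1 + \\\n    2\n"
tokens = list(tokenize.generate_tokens(io.StringIO(source).readline))

# With full position info, untokenize() uses the row delta between
# consecutive tokens to re-insert a backslash+newline, so the result
# is valid source again.
out = tokenize.untokenize(tokens)
compiled = compile(out, "<untokenized>", "exec")
```

The restored text may differ in incidental whitespace around the backslash, but it stays syntactically valid and keeps the line continuation.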
8ab7cba924155d1ca49f972b2ded16d44c161e2a |
|
18-Feb-2014 |
Terry Jan Reedy <tjreedy@udel.edu> |
whitespace
/external/python/cpython2/Lib/tokenize.py
|
6858f00dab05437a0661a1b1a290908d9b1b0e6a |
|
18-Feb-2014 |
Terry Jan Reedy <tjreedy@udel.edu> |
Issue #8478: Untokenizer.compat now processes first token from iterator input. Patch based on lines from Georg Brandl, Eric Snow, and Gareth Rees.
/external/python/cpython2/Lib/tokenize.py
|
7751a34400d05d125de8d8f23339756f8d3f774d |
|
17-Feb-2014 |
Terry Jan Reedy <tjreedy@udel.edu> |
Untokenize: A logically incorrect assert tested user input validity. Replace it with correct logic that raises ValueError for bad input. Issues #8478 and #12691 reported the incorrect logic. Add an Untokenize test case and an initial test method.
/external/python/cpython2/Lib/tokenize.py
|
2612679ddc202ff684ea525142d2802c962b4f64 |
|
25-Nov-2013 |
Ezio Melotti <ezio.melotti@gmail.com> |
#19620: Fix typo in docstring (noticed by Christopher Welborn).
/external/python/cpython2/Lib/tokenize.py
|
7d24b1698a5590df066f829fb4eeef93186d7150 |
|
03-Nov-2012 |
Ezio Melotti <ezio.melotti@gmail.com> |
#16152: fix tokenize to ignore whitespace at the end of the code when no newline is found. Patch by Ned Batchelder.
/external/python/cpython2/Lib/tokenize.py
|
43f42fc3cb67433c88e31268767c0cab36422351 |
|
17-Jun-2012 |
Meador Inge <meadori@gmail.com> |
Issue #15054: Fix incorrect tokenization of 'b' and 'br' string literals. Patch by Serhiy Storchaka.
/external/python/cpython2/Lib/tokenize.py
|
ca2d2529cee492ea0ac2b88ba0fdeb75fff0781a |
|
15-Oct-2009 |
Benjamin Peterson <benjamin@python.org> |
some cleanups
/external/python/cpython2/Lib/tokenize.py
|
447dc1565826c879faf544cda4bdd62546545166 |
|
15-Oct-2009 |
Benjamin Peterson <benjamin@python.org> |
use floor division and add a test that exercises the tabsize codepath
/external/python/cpython2/Lib/tokenize.py
|
e537adfd080ffe57e9f202a931cf70e22213e8a4 |
|
15-Oct-2009 |
Benjamin Peterson <benjamin@python.org> |
pep8ify if blocks
/external/python/cpython2/Lib/tokenize.py
|
50bb7e12ec32da69b89a9b7cfdfdd899d2c8c685 |
|
02-Aug-2008 |
Brett Cannon <bcannon@gmail.com> |
Remove a tuple unpacking in a parameter list to remove a SyntaxWarning raised while running under -3.
/external/python/cpython2/Lib/tokenize.py
|
8456f64ce2ce992f42b65d86456baa0e5aeee459 |
|
06-Jun-2008 |
Benjamin Peterson <benjamin@python.org> |
revert 63965 for performance reasons
/external/python/cpython2/Lib/tokenize.py
|
30dc7b8ce27451f7d06099d985d93c09d55567ad |
|
06-Jun-2008 |
Benjamin Peterson <benjamin@python.org> |
use the more idiomatic while True
/external/python/cpython2/Lib/tokenize.py
|
da0c025a43bd1c7c9279475ebd8f9edee9e41a0b |
|
28-Mar-2008 |
Amaury Forgeot d'Arc <amauryfa@gmail.com> |
Issue2495: tokenize.untokenize did not insert space between two consecutive string literals: "" "" => """", which is invalid code. Will backport
/external/python/cpython2/Lib/tokenize.py
|
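The failure mode is easy to reproduce in the position-free (2-tuple) mode; a minimal sketch against the Python 3 `tokenize` API, with token values chosen for illustration:

```python
import tokenize

# Two adjacent empty string literals fed as (type, string) pairs,
# which puts untokenize() into its position-free compat mode.
pairs = [
    (tokenize.STRING, '""'),
    (tokenize.STRING, '""'),
    (tokenize.NEWLINE, "\n"),
    (tokenize.ENDMARKER, ""),
]
out = tokenize.untokenize(pairs)
# After the fix a space separates the literals, so '""""' (the start
# of a triple-quoted string) is never emitted.
```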
0aed07ad80795bd5856ed60e7edcadeb353cf5a0 |
|
17-Mar-2008 |
Eric Smith <eric@trueblade.com> |
Added PEP 3127 support to tokenize (with tests); added PEP 3127 to NEWS.
/external/python/cpython2/Lib/tokenize.py
|
14404b68d8c5a501a2f5ee6f45494865b7b38276 |
|
19-Jan-2008 |
Georg Brandl <georg@python.org> |
Fix #1679: "0x" was taken as a valid integer literal. Fixes the tokenizer, tokenize.py and int() to reject this. Patches by Malte Helmert.
/external/python/cpython2/Lib/tokenize.py
|
288e89acfc29cf857a8c5d314ba2dd3398a2eae9 |
|
18-Jan-2008 |
Christian Heimes <christian@cheimes.de> |
Added bytes and b'' as aliases for str and ''
/external/python/cpython2/Lib/tokenize.py
|
8a7e76bcfa4c6c778e1bf155b6c4e40f8232d86a |
|
02-Dec-2006 |
Raymond Hettinger <python@rcn.com> |
Add name to credits (for untokenize).
/external/python/cpython2/Lib/tokenize.py
|
39c532c0b639b72384a5f5137d3fd5f7d127d814 |
|
23-Aug-2006 |
Jeremy Hylton <jeremy@alum.mit.edu> |
Replace dead code with an assert. Now that COMMENT tokens are reliably followed by NL or NEWLINE, there is never a need to add extra newlines in untokenize.
/external/python/cpython2/Lib/tokenize.py
|
76467ba6d6cac910523373efe967339f96b10c85 |
|
23-Aug-2006 |
Jeremy Hylton <jeremy@alum.mit.edu> |
Bug fixes large and small for tokenize. Small: Always generate a NL or NEWLINE token following a COMMENT token. The old code did not generate an NL token if the comment was on a line by itself. Large: The output of untokenize() will now match the input exactly if it is passed the full token sequence. The old, crufty output is still generated if a limited input sequence is provided, where limited means that it does not include position information for tokens. Remaining bug: There is no CONTINUATION token (\) so there is no way for untokenize() to handle such code. Also, expanded the number of doctests in hopes of eventually removing the old-style tests that compare against a golden file. Bug fix candidate for Python 2.5.1. (Sigh.)
/external/python/cpython2/Lib/tokenize.py
|
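The exact round trip described in this commit can be checked directly; a small sketch using the Python 3 `tokenize` module (the snippet tokenized here is arbitrary):

```python
import io
import tokenize

source = "if x:\n    y = 1  # comment\n"
tokens = list(tokenize.generate_tokens(io.StringIO(source).readline))

# Full 5-tuples carry position info, so untokenize() reproduces the
# input exactly -- comments and whitespace included.
roundtrip = tokenize.untokenize(tokens)
```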
2463f8f831bdf7ed562a26a13a6214f203f0b037 |
|
14-Aug-2006 |
Georg Brandl <georg@python.org> |
Make tabnanny recognize IndentationErrors raised by tokenize. Add a test to test_inspect to make sure indented source is recognized correctly. (fixes #1224621)
/external/python/cpython2/Lib/tokenize.py
|
c259cc9c4c1c90efaeace2c9786786a5813cf950 |
|
30-Mar-2006 |
Guido van Rossum <guido@python.org> |
Insert a safety space after numbers as well as names in untokenize().
/external/python/cpython2/Lib/tokenize.py
|
da99d1cbfeedafd41263ac2d1b397d57c14ab28e |
|
21-Jun-2005 |
Raymond Hettinger <python@rcn.com> |
SF bug #1224621: tokenize module does not detect inconsistent dedents
/external/python/cpython2/Lib/tokenize.py
|
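A minimal sketch of the detection, using the Python 3 `tokenize` module (the malformed snippet is illustrative):

```python
import io
import tokenize

# The dedent to two spaces matches no outer indentation level,
# so tokenizing raises IndentationError instead of passing silently.
bad = "if x:\n    y = 1\n  z = 2\n"
try:
    list(tokenize.generate_tokens(io.StringIO(bad).readline))
    caught = None
except IndentationError as exc:
    caught = exc
```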
68c04534182f2c09783b6506701a8bc25c98b4a9 |
|
10-Jun-2005 |
Raymond Hettinger <python@rcn.com> |
Add untokenize() function to allow full round-trip tokenization. Should significantly enhance the utility of the module by supporting the creation of tools that modify the token stream and write back the modified result.
/external/python/cpython2/Lib/tokenize.py
|
c2a5a636545a88f349dbe3e452ffb4494b68e534 |
|
02-Aug-2004 |
Anthony Baxter <anthonybaxter@gmail.com> |
PEP-0318, @decorator-style. In Guido's words: "@ seems the syntax that everybody can hate equally" Implementation by Mark Russell, from SF #979728.
/external/python/cpython2/Lib/tokenize.py
|
68468eba635570400f607e140425a222018e56f9 |
|
27-Feb-2003 |
Guido van Rossum <guido@python.org> |
Get rid of many apply() calls.
/external/python/cpython2/Lib/tokenize.py
|
78a7aeeb1a93b0a3b850355bc7f71dab00fa755a |
|
05-Nov-2002 |
Raymond Hettinger <python@rcn.com> |
SF 633560: tokenize.__all__ needs "generate_tokens"
/external/python/cpython2/Lib/tokenize.py
|
9d6897accc49f40414fbecafeb1c65562c6e4647 |
|
24-Aug-2002 |
Guido van Rossum <guido@python.org> |
Speed up the most egregious "if token in (long tuple)" cases by using a dict instead. (Alas, using a Set would be slower instead of faster.)
/external/python/cpython2/Lib/tokenize.py
|
8ac1495a6a1d18111a626cec0c7f2eb67df3edb3 |
|
23-May-2002 |
Tim Peters <tim.peters@gmail.com> |
Whitespace normalization.
/external/python/cpython2/Lib/tokenize.py
|
d1fa3db52de5f337e9aae5f3baad16fe62da2d0f |
|
15-May-2002 |
Raymond Hettinger <python@rcn.com> |
Added docstrings excerpted from Python Library Reference. Closes patch 556161.
/external/python/cpython2/Lib/tokenize.py
|
496563a5146e2650dcf6f3595c58ff24f39a9afb |
|
01-Apr-2002 |
Tim Peters <tim.peters@gmail.com> |
Remove some now-obsolete generator future statements. I left the email pkg alone; I'm not sure how Barry would like to handle that.
/external/python/cpython2/Lib/tokenize.py
|
e98d16e8a4821efc6e268dd1c96801aee95609fa |
|
26-Mar-2002 |
Neal Norwitz <nnorwitz@gmail.com> |
Clean up x so it is not left in the module
/external/python/cpython2/Lib/tokenize.py
|
d507dab91f9790a24bd53d41d7fcf52fe89a6eff |
|
30-Aug-2001 |
Tim Peters <tim.peters@gmail.com> |
SF patch #455966: Allow leading 0 in float/imag literals. Consequences for Jython still unknown (but raised on Jython-Dev).
/external/python/cpython2/Lib/tokenize.py
|
96204f5e4916c9f4f00855ef4e76c35aa17a474e |
|
08-Aug-2001 |
Guido van Rossum <guido@python.org> |
Add new tokens // and //=, in support of PEP 238.
/external/python/cpython2/Lib/tokenize.py
|
79e75e1916c33ee8e3de4c1b6c38221f2dba315c |
|
20-Jul-2001 |
Fred Drake <fdrake@acm.org> |
Use string.ascii_letters instead of string.letters (SF bug #226706).
/external/python/cpython2/Lib/tokenize.py
|
b09f7ed6235783fca27a4f8730c4c33e0f53c16c |
|
15-Jul-2001 |
Guido van Rossum <guido@python.org> |
Preliminary support for "from __future__ import generators" to enable the yield statement. I figure we have to have this in before I can release 2.2a1 on Wednesday. Note: test_generators is currently broken, I'm counting on Tim to fix this.
/external/python/cpython2/Lib/tokenize.py
|
4efb6e964376a46aaa3acf365a6627a37af236bf |
|
30-Jun-2001 |
Tim Peters <tim.peters@gmail.com> |
Turns out Neil didn't intend for *all* of his gen-branch work to get committed. tokenize.py: I like these changes, and have tested them extensively without even realizing it, so I just updated the docstring and the docs. tabnanny.py: Also liked this, but did a little code fiddling. I should really rewrite this to *exploit* generators, but that's near the bottom of my effort/benefit scale so doubt I'll get to it anytime soon (it would be most useful as a non-trivial example of ideal use of generators; but test_generators.py has already grown plenty of food-for-thought examples). inspect.py: I'm sure Ping intended for this to continue running even under 1.5.2, so I reverted this to the last pre-gen-branch version. The "bugfix" I checked in in-between was actually repairing a bug *introduced* by the conversion to generators, so it's OK that the reverted version doesn't reflect that checkin.
/external/python/cpython2/Lib/tokenize.py
|
5ca576ed0a0c697c7e7547adfd0b3af010fd2053 |
|
19-Jun-2001 |
Tim Peters <tim.peters@gmail.com> |
Merging the gen-branch into the main line, at Guido's direction. Yay! Bugfix candidate in inspect.py: it was referencing "self" outside of a method.
/external/python/cpython2/Lib/tokenize.py
|
28c62bbdb2545eddf04ba7e2f2daa0dcedbb6052 |
|
23-Mar-2001 |
Ka-Ping Yee <ping@zesty.ca> |
Provide a StopTokenizing exception for conveniently exiting the loop.
/external/python/cpython2/Lib/tokenize.py
|
4f64c1358252836900bf3cd0f68c6f83a7ec4e44 |
|
01-Mar-2001 |
Ka-Ping Yee <ping@zesty.ca> |
Better __credits__.
/external/python/cpython2/Lib/tokenize.py
|
244c593598af4db19e410032fb10793617a895ce |
|
01-Mar-2001 |
Ka-Ping Yee <ping@zesty.ca> |
Add __author__ and __credits__ variables.
/external/python/cpython2/Lib/tokenize.py
|
40fc16059f04ee8fda0b5956cc4883eb21ca8f8c |
|
01-Mar-2001 |
Skip Montanaro <skip@pobox.com> |
final round of __all__ lists (I hope) - skipped urllib2 because Moshe may be giving it a slight facelift
/external/python/cpython2/Lib/tokenize.py
|
b08b2d316653bf22d39ad76814b5a0e7dad30c31 |
|
09-Feb-2001 |
Eric S. Raymond <esr@thyrsus.com> |
String method conversion.
/external/python/cpython2/Lib/tokenize.py
|
1ff08b1243dcb07db975640b2f3cbc82985bee81 |
|
15-Jan-2001 |
Ka-Ping Yee <ping@zesty.ca> |
Add tokenizer support and tests for u'', U"", uR'', Ur"", etc.
/external/python/cpython2/Lib/tokenize.py
|
b90f89a496676ec714e111a747344600f3988496 |
|
15-Jan-2001 |
Tim Peters <tim.peters@gmail.com> |
Whitespace normalization.
/external/python/cpython2/Lib/tokenize.py
|
de49583a0d59f806b88b0f6a869f470047b3cbce |
|
07-Oct-2000 |
Tim Peters <tim.peters@gmail.com> |
Possible fix for Skip's bug 116136 (sre recursion limit hit in tokenize.py). tokenize.py has always used naive regexps for matching string literals, and that appears to trigger the sre recursion limit on Skip's platform (he has very long single-line string literals). Replaced all of tokenize.py's string regexps with the "unrolled" forms used in IDLE, where they're known to handle even absurd (multi-megabyte!) string literals without trouble. See Friedl's book for explanation (at heart, the naive regexps create a backtracking choice point for each character in the literal, while the unrolled forms create none).
/external/python/cpython2/Lib/tokenize.py
|
e1519a1b4d8e24ab6a98083c6ec8bf4ec7594111 |
|
24-Aug-2000 |
Thomas Wouters <thomas@python.org> |
Update for augmented assignment, tested & approved by Guido.
/external/python/cpython2/Lib/tokenize.py
|
9b8d801c37fa29420848ebc1b50c601893b36287 |
|
17-Aug-2000 |
Fred Drake <fdrake@acm.org> |
Convert some old-style string exceptions to class exceptions.
/external/python/cpython2/Lib/tokenize.py
|
a90c78b9186f5ba8d91d3be0e684f81f2068c771 |
|
03-Apr-1998 |
Guido van Rossum <guido@python.org> |
Differentiate between NEWLINE token (an official newline) and NL token (a newline that the grammar ignores).
/external/python/cpython2/Lib/tokenize.py
|
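The distinction this commit introduces is visible in any token stream; a short sketch with the Python 3 `tokenize` module (the source string is illustrative):

```python
import io
import tokenize

# A newline inside brackets and a blank line are grammar-irrelevant (NL);
# the newline ending each statement is an official NEWLINE.
source = "x = (1 +\n     2)\n\ny = 3\n"
names = [tokenize.tok_name[tok.type]
         for tok in tokenize.generate_tokens(io.StringIO(source).readline)]
```

Here `names` contains two NL tokens (inside the parentheses and for the blank line) and two NEWLINE tokens (one per statement).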
fefc922cef9d89bb6e3d12dc3d2f16e3f49d7250 |
|
27-Oct-1997 |
Guido van Rossum <guido@python.org> |
New, fixed version with proper r"..." and R"..." support from Ka-Ping.
/external/python/cpython2/Lib/tokenize.py
|
3b631775b26b866e072cd3340f303bf5903af883 |
|
27-Oct-1997 |
Guido van Rossum <guido@python.org> |
Redone (by Ka-Ping) using the new re module, and adding recognition for r"..." raw strings. (And R"..." string support added by Guido.)
/external/python/cpython2/Lib/tokenize.py
|
2b1566be9d22daaaac7ad5198a415db5336bf034 |
|
04-Jun-1997 |
Guido van Rossum <guido@python.org> |
Correct typo in last line (test program invocation).
/external/python/cpython2/Lib/tokenize.py
|
de65527e4b0925692f0d75f388116b7958a390bb |
|
09-Apr-1997 |
Guido van Rossum <guido@python.org> |
Ping's latest. Fixes triple quoted strings ending in an odd number of backslashes, and other stuff I don't know.
/external/python/cpython2/Lib/tokenize.py
|
1aec32363f25693e0c3ff81feddf620850b4955d |
|
08-Apr-1997 |
Guido van Rossum <guido@python.org> |
Ka-Ping's much improved version of March 26, 1997: Ignore now accepts \f as whitespace. Operator now includes '**'. Ignore and Special now accept \n or \r\n at the end of a line. Imagnumber is new. Expfloat is corrected to reject '0e4'.
/external/python/cpython2/Lib/tokenize.py
|
b5dc5e3d7ea44ee4d029d26c98bc99deeffee346 |
|
11-Mar-1997 |
Guido van Rossum <guido@python.org> |
Added support for imaginary constants (e.g. 0j, 1j, 1.0j).
/external/python/cpython2/Lib/tokenize.py
|
b51eaa183e048a928fb363bac4404e6acf0e3bad |
|
07-Mar-1997 |
Guido van Rossum <guido@python.org> |
Fixed doc string, added __version__, fixed 1 bug.
/external/python/cpython2/Lib/tokenize.py
|
fc6f5339a99d103928bce9eda605564f2a9e8477 |
|
07-Mar-1997 |
Guido van Rossum <guido@python.org> |
Ka-Ping's version.
/external/python/cpython2/Lib/tokenize.py
|
b31c7f732aea6abf6ce24d3da7fd67b2172acec9 |
|
11-Nov-1993 |
Guido van Rossum <guido@python.org> |
* test_select.py: (some) tests for built-in select module * test_grammar.py, testall.out: added test for funny things in string literals * token.py, symbol.py: definitions used with built-in parser module. * tokenize.py: added double-quote recognition
/external/python/cpython2/Lib/tokenize.py
|
10d10ffb1b3c5d508b1f345ee40b19024e2514cc |
|
16-Mar-1992 |
Guido van Rossum <guido@python.org> |
Change the order in which Floatnumber and Intnumber are tried so it will correctly recognize floats. Fix the test program so it works again.
/external/python/cpython2/Lib/tokenize.py
|
4d8e859e8f0a209a7e999ce9cc0988156c795949 |
|
01-Jan-1992 |
Guido van Rossum <guido@python.org> |
Initial revision
/external/python/cpython2/Lib/tokenize.py
|