274a76323cb9998df0c1a4dc5b1820d70e0a1886 |
|
19-Sep-2016 |
Benjamin Peterson <benjamin@python.org> |
properly handle the single null-byte file (closes #24022)
/external/python/cpython2/Parser/tokenizer.c
|
5d7d26c403d86e9525820d872eb3e331dbc31750 |
|
14-Nov-2015 |
Serhiy Storchaka <storchaka@gmail.com> |
Issue #25388: Fixed tokenizer hang when processing undecodable source code with a null byte.
/external/python/cpython2/Parser/tokenizer.c
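The hang fixed here is observable from the public API: feeding source containing a null byte to `compile()` must fail cleanly rather than loop. A minimal sketch, assuming modern Python 3 behavior (the error type is version-dependent: `ValueError` on older releases, `SyntaxError` on newer ones):

```python
# Sketch: a null byte in source must produce a clean error, never a hang.
# Depending on the Python 3 version, the error is ValueError or SyntaxError.
try:
    compile("\x00", "<null>", "exec")
    null_rejected = False
except (ValueError, SyntaxError):
    null_rejected = True
```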
|
223546d55cddf0e1c52e114201747ab154216fd4 |
|
14-Aug-2015 |
Benjamin Peterson <benjamin@python.org> |
add missing NULL checks to get_coding_spec (closes #24854)
/external/python/cpython2/Parser/tokenizer.c
|
3eb554fc828c812a31c1a3cd9f619eacbb708010 |
|
05-Sep-2014 |
Serhiy Storchaka <storchaka@gmail.com> |
Issue #22221: Backported fixes from Python 3 (issue #18960). * Now the source encoding declaration on the second line isn't effective if the first line contains anything except a comment. This affects compile(), eval() and exec() too. * IDLE now ignores the source encoding declaration on the second line if the first line contains anything except a comment. * 2to3 and the findnocoding.py script now ignore the source encoding declaration on the second line if the first line contains anything except a comment.
/external/python/cpython2/Parser/tokenizer.c
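The second-line rule described above is observable from Python 3's `tokenize` module: a coding cookie on line 2 counts only when line 1 is blank or a comment. A sketch (note `detect_encoding` normalizes "latin-1" to "iso-8859-1"):

```python
import io
import tokenize

def sniff(source_bytes):
    """Return the encoding tokenize detects for the given byte source."""
    return tokenize.detect_encoding(io.BytesIO(source_bytes).readline)[0]

# Cookie on line 2 is honored when line 1 is a comment...
honored = sniff(b"# comment first\n# -*- coding: latin-1 -*-\n")
# ...but ignored when line 1 contains anything else.
ignored = sniff(b"import os\n# -*- coding: latin-1 -*-\n")
```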
|
24b8209a4e2b96ba58730c23f27e039af56476ac |
|
17-Jun-2014 |
Ned Deily <nad@acm.org> |
Issue #21789: fix broken link (reported by Jan Varho)
/external/python/cpython2/Parser/tokenizer.c
|
93e51aac540f0dbd795e709a6fad0b9a62a5ae73 |
|
07-Jun-2014 |
Benjamin Peterson <benjamin@python.org> |
allow the keyword else immediately after (no space) an integer (closes #21642)
/external/python/cpython2/Parser/tokenizer.c
|
22d9ee7e177bb0fcb514e898daaec98907ebde0c |
|
28-Dec-2013 |
Benjamin Peterson <benjamin@python.org> |
complain if the codec doesn't return unicode
/external/python/cpython2/Parser/tokenizer.c
|
729ad5cf561ba644322952b79051269f07bb1ec0 |
|
09-Jun-2013 |
Serhiy Storchaka <storchaka@gmail.com> |
Issue #18038: SyntaxError raised during compilation sources with illegal encoding now always contains an encoding name.
/external/python/cpython2/Parser/tokenizer.c
|
3db4161011cbf6988102228ffc7e5213e680241e |
|
24-Jun-2010 |
Stefan Krah <stefan@bytereef.org> |
Issue #9020: The Py_IS* macros from pyctype.h should generally only be used with signed/unsigned char arguments. For integer arguments, EOF has to be handled separately.
/external/python/cpython2/Parser/tokenizer.c
|
c83ea137d7e717f764e2f31fc2544f522de7d857 |
|
09-May-2010 |
Antoine Pitrou <solipsis@pitrou.net> |
Untabify C files. Will watch buildbots.
/external/python/cpython2/Parser/tokenizer.c
|
88623d76b49a553445fff037bc9cf2e79a24ceef |
|
04-Apr-2010 |
Benjamin Peterson <benjamin@python.org> |
use our own locale-independent ctype macros; requires building pyctype.o into pgen
/external/python/cpython2/Parser/tokenizer.c
|
4ceeeb09d8ff445888b24aa324bc06175d141cb9 |
|
04-Apr-2010 |
Benjamin Peterson <benjamin@python.org> |
ensure that the locale does not affect the tokenization of identifiers
/external/python/cpython2/Parser/tokenizer.c
|
6664426d7cdb63b88d973a731cc442ecba10047a |
|
10-Mar-2010 |
Victor Stinner <victor.stinner@haypocalc.com> |
Issue #3137: Don't ignore errors at startup, especially a keyboard interrupt (SIGINT). If an error occurs while importing the site module, the error is printed and Python exits. Initialize the GIL before importing the site module.
/external/python/cpython2/Parser/tokenizer.c
|
d23d3930ff1ce72263537bb050824129c6ac74f6 |
|
03-Mar-2010 |
Victor Stinner <victor.stinner@haypocalc.com> |
Issue #7820: The parser tokenizer restores all bytes in the right order if the BOM check fails. Fix an assertion in pydebug mode.
/external/python/cpython2/Parser/tokenizer.c
|
42d63847c32fda10b61c1f420402a09ddbbe95eb |
|
06-Dec-2009 |
Benjamin Peterson <benjamin@python.org> |
rewrite translate_newlines for clarity
/external/python/cpython2/Parser/tokenizer.c
|
e36199b49df77c96bad687c6681d8e54c5053b84 |
|
13-Nov-2009 |
Benjamin Peterson <benjamin@python.org> |
fix several compile() issues by translating newlines in the tokenizer
/external/python/cpython2/Parser/tokenizer.c
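The effect of translating newlines in the tokenizer is easy to see from `compile()`: source using `\r\n` or bare `\r` line endings parses the same as `\n`-terminated source. A sketch of the modern Python 3 behavior this fix led to:

```python
# Sketch: compile() accepts \r\n and bare \r line endings because the
# tokenizer translates newlines before tokenizing.
ns = {}
exec(compile("a = 1\r\nb = 2\rc = a + b\n", "<src>", "exec"), ns)
total = ns["c"]
```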
|
e3383b8e8feb86a37eef780823a17d574bcb7e7b |
|
07-Nov-2009 |
Benjamin Peterson <benjamin@python.org> |
spelling
/external/python/cpython2/Parser/tokenizer.c
|
9586cf8677225ba3bae946de4813655c3db90f88 |
|
09-Oct-2009 |
Benjamin Peterson <benjamin@python.org> |
fix some coding style
/external/python/cpython2/Parser/tokenizer.c
|
08a0bbc8461399ff7dac477c68fc6fc16156ee76 |
|
16-Jun-2009 |
Benjamin Peterson <benjamin@python.org> |
don't mask encoding errors when decoding a string #6289
/external/python/cpython2/Parser/tokenizer.c
|
110a48cf6002db9ac1f38372520c84d7e98cb396 |
|
05-Aug-2008 |
Andrew M. Kuchling <amk@amk.ca> |
#3367: revert rev. 65539: this change causes test_parser to fail
/external/python/cpython2/Parser/tokenizer.c
|
efa61bc15f325bb94d147b8641031d1774fb7a5c |
|
05-Aug-2008 |
Andrew M. Kuchling <amk@amk.ca> |
#3367 from Kristjan Valur Jonsson: If PyTokenizer_FromString() is called with an empty string, the tokenizer's line_start member never gets initialized. Later, it is compared with the token pointer 'a' in parsetok.c:193, and that comparison can result in undefined behavior.
/external/python/cpython2/Parser/tokenizer.c
|
dd96db63f689e2f0d8ae5a1436b3b3395eec7de5 |
|
09-Jun-2008 |
Gregory P. Smith <greg@mad-scientist.com> |
This reverts r63675 based on the discussion in this thread: http://mail.python.org/pipermail/python-dev/2008-June/079988.html Python 2.6 should stick with PyString_* in its codebase. The PyBytes_* names in the spirit of 3.0 are available via a #define only. See the email thread.
/external/python/cpython2/Parser/tokenizer.c
|
593daf545bd9b7e7bcb27b498ecc6f36db9ae395 |
|
26-May-2008 |
Christian Heimes <christian@cheimes.de> |
Renamed PyString to PyBytes
/external/python/cpython2/Parser/tokenizer.c
|
5216721a532c348bcc59a03c7ee206f2cb2ae497 |
|
24-Apr-2008 |
Amaury Forgeot d'Arc <amauryfa@gmail.com> |
Issue2681: the literal 0o8 was wrongly accepted, and evaluated as float(0.0). This happened only when 8 is the first digit. Credits go to Lukas Meuser.
/external/python/cpython2/Parser/tokenizer.c
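A sketch of the behavior after this fix, as it stands in modern Python 3: a digit outside the octal range in a `0o` literal is a syntax error rather than being silently accepted.

```python
# Sketch: 0o8 must be rejected by the tokenizer, not evaluated as 0.0.
try:
    compile("0o8", "<lit>", "eval")
    octal_8_accepted = True
except SyntaxError:
    octal_8_accepted = False
```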
|
d183bdd6fb77ae562714c655f0689beacdc00da1 |
|
28-Mar-2008 |
Neal Norwitz <nnorwitz@gmail.com> |
Revert r61969 which added casts to Py_CHARMASK to avoid compiler warnings. Rather than sprinkle casts throughout the code, change Py_CHARMASK to always cast its result to an unsigned char. This should ensure we do the right thing when accessing an array with the result.
/external/python/cpython2/Parser/tokenizer.c
|
d5b635f1969fdd609f0aff5669c9ec30e61be62e |
|
25-Mar-2008 |
Georg Brandl <georg@python.org> |
Make Py3k warnings consistent w.r.t. punctuation; also respect the 80-column line limit and supply more alternatives in warning messages.
/external/python/cpython2/Parser/tokenizer.c
|
9ff19b54346d39d15cdcf75e9d66ab46ea6064d6 |
|
17-Mar-2008 |
Eric Smith <eric@trueblade.com> |
Finished backporting PEP 3127, Integer Literal Support and Syntax. Added 0b and 0o literals to tokenizer. Modified PyOS_strtoul to support 0b and 0o inputs. Modified PyLong_FromString to support guessing 0b and 0o inputs. Renamed test_hexoct.py to test_int_literal.py and added binary tests. Added upper and lower case 0b, 0O, and 0X tests to test_int_literal.py
/external/python/cpython2/Parser/tokenizer.c
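The literal forms this backport added are the ones still in use today; a sketch of how they behave in modern Python:

```python
# Sketch: binary and octal literals from the PEP 3127 backport.
bin_val = 0b1010          # binary literal -> 10
oct_val = 0o17            # octal literal  -> 15
parsed = int("0b100", 2)  # int() with base 2 also accepts the 0b prefix
```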
|
c44af337ce1c0d23c8862fcc1e95d257c593dd70 |
|
27-Jan-2008 |
Neal Norwitz <nnorwitz@gmail.com> |
Add assertion that we do not blow out newl
/external/python/cpython2/Parser/tokenizer.c
|
082c9b0267e45cdff9bb8d30a4332f63bd14c58e |
|
23-Jan-2008 |
Christian Heimes <christian@cheimes.de> |
Fixed bug #1915: Python compiles with --enable-unicode=no again. However several extension methods and modules do not work without unicode support.
/external/python/cpython2/Parser/tokenizer.c
|
898f1879e1bf6fe0a0d94d2abe8e8f5b32f795c1 |
|
21-Jan-2008 |
Georg Brandl <georg@python.org> |
Add a "const" to make gcc happy.
/external/python/cpython2/Parser/tokenizer.c
|
38d1715b0da55238e0c984177848f0005ebc98cf |
|
21-Jan-2008 |
Georg Brandl <georg@python.org> |
Issue #1882: when compiling code from a string, encoding cookies in the second line of code were not always recognized correctly.
/external/python/cpython2/Parser/tokenizer.c
|
14404b68d8c5a501a2f5ee6f45494865b7b38276 |
|
19-Jan-2008 |
Georg Brandl <georg@python.org> |
Fix #1679: "0x" was taken as a valid integer literal. Fixes the tokenizer, tokenize.py and int() to reject this. Patches by Malte Helmert.
/external/python/cpython2/Parser/tokenizer.c
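Both rejection paths named in the fix are observable in modern Python 3; a sketch:

```python
# Sketch: a bare "0x" with no digits is rejected by the tokenizer...
try:
    compile("0x", "<lit>", "eval")
    bare_0x_compiles = True
except SyntaxError:
    bare_0x_compiles = False

# ...and by int() as well.
try:
    int("0x", 16)
    bare_0x_parses = True
except ValueError:
    bare_0x_parses = False
```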
|
288e89acfc29cf857a8c5d314ba2dd3398a2eae9 |
|
18-Jan-2008 |
Christian Heimes <christian@cheimes.de> |
Added bytes and b'' as aliases for str and ''
/external/python/cpython2/Parser/tokenizer.c
|
76b30d1688a7ba1ff1b01a3eb21bf4890f71d404 |
|
07-Jan-2008 |
Georg Brandl <georg@python.org> |
Fix #define ordering.
/external/python/cpython2/Parser/tokenizer.c
|
dfe5dc8455de93fddb3030416e41b92d3a0fd267 |
|
07-Jan-2008 |
Georg Brandl <georg@python.org> |
Make Python compile with --disable-unicode.
/external/python/cpython2/Parser/tokenizer.c
|
6dae85f409642f29ff37c8648f977ea24883e75e |
|
24-Nov-2007 |
Amaury Forgeot d'Arc <amauryfa@gmail.com> |
Warning "<> not supported in 3.x" should be enabled only when the -3 option is set.
/external/python/cpython2/Parser/tokenizer.c
|
02c9ab568d1458e4c1ea2ca700c5d25bb31e8002 |
|
23-Nov-2007 |
Christian Heimes <christian@cheimes.de> |
Fixed problems in the last commit. Filenames and line numbers weren't reported correctly. Backquotes still don't report the correct file. The AST nodes only contain the line number but not the file name.
/external/python/cpython2/Parser/tokenizer.c
|
729ab15370c8e7781f4781428364d203eb9f6416 |
|
23-Nov-2007 |
Christian Heimes <christian@cheimes.de> |
Applied patches #1754273 and #1754271 from Thomas Glee. The patches add deprecation warnings for backticks and <>.
/external/python/cpython2/Parser/tokenizer.c
|
9fc1b96a19ef821174f5ce37d007b68a55b9ba67 |
|
15-Oct-2007 |
Guido van Rossum <guido@python.org> |
Change a PyErr_Print() into a PyErr_Clear(), per discussion in issue 1031213.
/external/python/cpython2/Parser/tokenizer.c
|
a5136196bce72c51c79a5f961223b4645c90255c |
|
04-Sep-2007 |
Martin v. Löwis <martin@v.loewis.de> |
Patch #1031213: Decode source line in SyntaxErrors back to its original source encoding. Will backport to 2.5.
/external/python/cpython2/Parser/tokenizer.c
|
9b3a82409786af199c9491661e1d06b8dec7a04c |
|
06-Oct-2006 |
Andrew M. Kuchling <amk@amk.ca> |
Comment grammar
/external/python/cpython2/Parser/tokenizer.c
|
71e05f1e0c47670435a1552a38fedc3d44b85a1e |
|
12-Jun-2006 |
Neal Norwitz <nnorwitz@gmail.com> |
Don't truncate if size_t is bigger than uint
/external/python/cpython2/Parser/tokenizer.c
|
d21a7fffb14117e60525613040acb519c7977b5c |
|
02-Jun-2006 |
Neal Norwitz <nnorwitz@gmail.com> |
Patch #1357836: Prevent an invalid memory read from test_coding in case the done flag is set. In that case, the loop isn't entered. I wonder if rather than setting the done flag in the cases before the loop, if they should just exit early. This code looks like it should be refactored. Backport candidate (also the early break above if decoding_fgets fails)
/external/python/cpython2/Parser/tokenizer.c
|
a0b6338823362d950bbc9586a674729b50f5cddc |
|
18-Apr-2006 |
Skip Montanaro <skip@pobox.com> |
C++ compiler cleanup: cast signed to unsigned
/external/python/cpython2/Parser/tokenizer.c
|
08062d6665b6a0c30559eb8a064356ca86151caf |
|
11-Apr-2006 |
Neal Norwitz <nnorwitz@gmail.com> |
As discussed on python-dev, really fix the PyMem_*/PyObject_* memory API mismatches. At least I hope this fixes them all. This reverts part of my change from yesterday that converted everything in Parser/*.c to use PyObject_* API. The encoding doesn't really need to use PyMem_*, however, it uses new_string() which must return PyMem_* for handling the result of PyOS_Readline() which returns PyMem_* memory. If there were 2 versions of new_string() one that returned PyMem_* for tokens and one that return PyObject_* for encodings that could also fix this problem. I'm not sure which version would be clearer. This seems to fix both Guido's and Phillip's problems, so it's good enough for now. After this change, it would be good to review Parser/*.c for consistent use of the 2 memory APIs.
/external/python/cpython2/Parser/tokenizer.c
|
114900298ea26e5e25edd5d05f24648dcd5ea95b |
|
11-Apr-2006 |
Anthony Baxter <anthonybaxter@gmail.com> |
Fix the code in Parser/ to also compile with C++. This was mostly casts for malloc/realloc type functions, as well as renaming one variable called 'new' in tokenizer.c. Still lots more to be done, going to be checking in one chunk at a time or the patch will be massively huge. Still compiles ok with gcc.
/external/python/cpython2/Parser/tokenizer.c
|
2c4e4f98397bcc591ad3a551e1e57cea0e2bd986 |
|
10-Apr-2006 |
Neal Norwitz <nnorwitz@gmail.com> |
SF patch #1467512, fix double free with triple quoted string in standard build. This was the result of inconsistent use of PyMem_* and PyObject_* allocators. By changing to use PyObject_* allocator almost everywhere, this removes the inconsistency.
/external/python/cpython2/Parser/tokenizer.c
|
c9d78aa4709f5a0134bfbf280f637d96e7a6cabd |
|
27-Mar-2006 |
Tim Peters <tim.peters@gmail.com> |
Years in the making. objimpl.h, pymem.h: Stop mapping PyMem_{Del, DEL} and PyMem_{Free, FREE} to PyObject_{Free, FREE} in a release build. They're aliases for the system free() now. _subprocess.c/sp_handle_dealloc(): Since the memory was originally obtained via PyObject_NEW, it must be released via PyObject_FREE (or _DEL). pythonrun.c, tokenizer.c, parsermodule.c: I lost count of the number of PyObject vs PyMem mismatches in these -- it's like the specific function called at each site was picked at random, sometimes even with memory obtained via PyMem getting released via PyObject. Changed most to use PyObject uniformly, since the blobs allocated are predictably small in most cases, and obmalloc is generally faster than system mallocs then. If extension modules in real life prove as sloppy as Python's front end, we'll have to revert the objimpl.h + pymem.h part of this patch. Note that no problems will show up in a debug build (all calls still go thru obmalloc then). Problems will show up only in a release build, most likely segfaults.
/external/python/cpython2/Parser/tokenizer.c
|
2aa9a5dfdd2966c57036dc836ba8e91ad47ecf14 |
|
20-Mar-2006 |
Neal Norwitz <nnorwitz@gmail.com> |
Use macro versions instead of function versions when we already know the type. This will hopefully get rid of some Coverity warnings, be a hint to developers, and be marginally faster. Some asserts were added when the type is currently known, but depends on values from another function.
/external/python/cpython2/Parser/tokenizer.c
|
7eaf2aaf48c08c8bcf639f6b29c1697fd9f02dfb |
|
02-Mar-2006 |
Thomas Wouters <thomas@python.org> |
Fix crashing bug in tokenizer, when tokenizing files with non-ASCII bytes but without a specified encoding: decoding_fgets() (and decoding_feof()) can return NULL and fiddle with the 'tok' struct, making tok->buf NULL. This is okay in the other cases of calls to decoding_*(), it seems, but not in this one. This should get a test added, somewhere, but the testsuite doesn't seem to test encoding anywhere (although plenty of tests use it.) It seems to me that decoding errors in other places in the code (like at the start of a token, instead of in the middle of one) make the code end up adding small integers to NULL pointers, but happen to check for error states before using the calculated new pointers. I haven't been able to trigger any other crashes, in any case. I would nominate this file for a complete rewrite for Py3k. The whole decoding trick is too bolted-on for my tastes.
/external/python/cpython2/Parser/tokenizer.c
|
49c5da1d88f605248167f4d95b1dfe08c1f703c7 |
|
01-Mar-2006 |
Martin v. Löwis <martin@v.loewis.de> |
Patch #1440601: Add col_offset attribute to AST nodes.
/external/python/cpython2/Parser/tokenizer.c
|
6cba25666c278760a9fea024948b8add7d5c4b1a |
|
28-Feb-2006 |
Martin v. Löwis <martin@v.loewis.de> |
Change non-ASCII warning into a SyntaxError.
/external/python/cpython2/Parser/tokenizer.c
|
f5adf1eb72c755c3f6183199656f18b12a1cb952 |
|
16-Feb-2006 |
Martin v. Löwis <martin@v.loewis.de> |
Use Py_ssize_t to count the length.
/external/python/cpython2/Parser/tokenizer.c
|
18e165558b24d29e7e0ca501842b9236589b012a |
|
15-Feb-2006 |
Martin v. Löwis <martin@v.loewis.de> |
Merge ssize_t branch.
/external/python/cpython2/Parser/tokenizer.c
|
30b5c5d0116f8e670a6ca74dcb6bd076a919d681 |
|
19-Dec-2005 |
Neal Norwitz <nnorwitz@gmail.com> |
Fix SF bug #1072182, problems with signed characters. Most of these can be backported.
/external/python/cpython2/Parser/tokenizer.c
|
db83eb3170ebdf55bd1c1add94838a9aefa8c00b |
|
18-Dec-2005 |
Neal Norwitz <nnorwitz@gmail.com> |
Fix Bug #1378022, UTF-8 files with a leading BOM crashed the interpreter. Needs backport.
/external/python/cpython2/Parser/tokenizer.c
|
dee2fd54481b311ad831ac455a9192bdc0f147e3 |
|
16-Nov-2005 |
Neal Norwitz <nnorwitz@gmail.com> |
Fix some more memory leaks. Call error_ret() in decode_str(). It was called in some other places, but seemed inconsistent. It is safe to call PyTokenizer_Free() after calling error_ret().
/external/python/cpython2/Parser/tokenizer.c
|
c0d5faa9b4a763befbeab0159d2241a9ddf85b56 |
|
21-Oct-2005 |
Neal Norwitz <nnorwitz@gmail.com> |
Free coding spec (cs) if there was an error to prevent mem leak. Maybe backport candidate
/external/python/cpython2/Parser/tokenizer.c
|
40d37814166380b0fb585f818b446159cfbcec0f |
|
02-Oct-2005 |
Neal Norwitz <nnorwitz@gmail.com> |
- Fix segfault with invalid coding. - SF Bug #772896, unknown encoding results in MemoryError, which is not helpful I will only backport the segfault fix. I'll let Anthony decide if he wants the other changes backported. I will do the backport if asked.
/external/python/cpython2/Parser/tokenizer.c
|
c1f5fff2b7c51fd5420a4dfb8a2b1c297c993c10 |
|
12-Jul-2005 |
Walter Dörwald <walter@livinglogic.de> |
Apply SF patch #1101726: Fix buffer overrun in tokenizer.c when a source file with a PEP 263 encoding declaration results in a long decoded line.
/external/python/cpython2/Parser/tokenizer.c
|
4bf108d74f2e36f16f4c0c00e7791e418e2d47ff |
|
03-Mar-2005 |
Martin v. Löwis <martin@v.loewis.de> |
Patch #802188: better parser error message for non-EOL following line cont.
/external/python/cpython2/Parser/tokenizer.c
|
7df44b384a4391cfed0a4d26b7e314a06ae4d595 |
|
04-Aug-2004 |
Hye-Shik Chang <hyeshik@gmail.com> |
SF #941229: Decode source code with sys.stdin.encoding in interactive modes like non-interactive modes. This allows for non-latin-1 users to write unicode strings directly and sets Japanese users free from weird manual escaping <wink> in shift_jis environments. (Reviewed by Martin v. Loewis)
/external/python/cpython2/Parser/tokenizer.c
|
c2a5a636545a88f349dbe3e452ffb4494b68e534 |
|
02-Aug-2004 |
Anthony Baxter <anthonybaxter@gmail.com> |
PEP-0318, @decorator-style. In Guido's words: "@ seems the syntax that everybody can hate equally" Implementation by Mark Russell, from SF #979728.
/external/python/cpython2/Parser/tokenizer.c
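The `@decorator` syntax this commit added to the tokenizer is unchanged today; a minimal sketch:

```python
# Sketch: PEP 318 decorator syntax introduced by this commit.
def shout(func):
    # Wrap func so its string result is upper-cased.
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs).upper()
    return wrapper

@shout
def greet(name):
    return "hello, " + name

result = greet("world")
```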
|
eddc1449bae39414aaf7a4f63ccd3b69c4fb069e |
|
20-Nov-2003 |
Jack Jansen <jack.jansen@cwi.nl> |
Getting rid of all the code inside #ifdef macintosh too.
/external/python/cpython2/Parser/tokenizer.c
|
1fb1400d0819b2bebf17bf5810fa3f05af7235b4 |
|
17-Feb-2003 |
Marc-André Lemburg <mal@egenix.com> |
Add URL for PEP to the source code encoding warning. Remove the usage of PyErr_WarnExplicit() since this could cause sensitive information from the source files to appear in e.g. log files.
/external/python/cpython2/Parser/tokenizer.c
|
f032f86e9e828dfe1147852783aa6784e3ddf610 |
|
09-Feb-2003 |
Just van Rossum <just@letterror.com> |
Patch #680474, fixing bug #679880: compile/eval/exec refused a UTF-8 BOM mark. Added unit test.
/external/python/cpython2/Parser/tokenizer.c
|
a2e303c32d415c2921672d87268b80d6aa824e21 |
|
15-Jan-2003 |
Mark Hammond <mhammond@skippinet.com.au> |
Fix [ 665014 ] files with long lines and an encoding crash. Ensure that the 'size' arg is correctly passed to the encoding reader to prevent buffer overflows.
/external/python/cpython2/Parser/tokenizer.c
|
95292d6caa3af3196c5b9f5f95dae209815c09e5 |
|
11-Dec-2002 |
Martin v. Löwis <martin@v.loewis.de> |
Constify filenames and scripts. Fixes #651362.
/external/python/cpython2/Parser/tokenizer.c
|
e08e1bc80a526ab850ae6d1be72f49e08c34f68b |
|
02-Nov-2002 |
Neal Norwitz <nnorwitz@gmail.com> |
Fix compiler warning on HP-UX. Cast param to isalnum() to int.
/external/python/cpython2/Parser/tokenizer.c
|
566f6afe9a9de23132302020dcb4c612d5180f23 |
|
26-Oct-2002 |
Martin v. Löwis <martin@v.loewis.de> |
Patch #512981: Update readline input stream on sys.stdin/out change.
/external/python/cpython2/Parser/tokenizer.c
|
17db21ffd0659ba3eb864b161cc2c4f63849e62e |
|
03-Sep-2002 |
Tim Peters <tim.peters@gmail.com> |
Removed reliance on gcc/C99 extension.
/external/python/cpython2/Parser/tokenizer.c
|
f62a89b1e09d84fbd60e0356b87430a6ff1e352d |
|
03-Sep-2002 |
Martin v. Löwis <martin@v.loewis.de> |
Ignore encoding declarations inside strings. Fixes #603509.
/external/python/cpython2/Parser/tokenizer.c
|
84b2bed4359e27070fe2eac4b464d4a1bc6e150d |
|
16-Aug-2002 |
Guido van Rossum <guido@python.org> |
Squash a few calls to the hideously expensive PyObject_CallObject(o,a) -- replace them with slightly faster PyObject_Call(o,a,NULL). (The difference is that the latter requires a to be a tuple; the former allows other values and wraps them in a tuple if necessary; it involves two more levels of C function calls to accomplish all that.)
/external/python/cpython2/Parser/tokenizer.c
|
118ec70ea27000db428ba3e3a757f4b423670db6 |
|
15-Aug-2002 |
Skip Montanaro <skip@pobox.com> |
provide less mysterious error messages when seeing end-of-line in single-quoted strings or end-of-file in triple-quoted strings. closes patch 586561.
/external/python/cpython2/Parser/tokenizer.c
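Both error cases the patch improved still raise SyntaxError in modern Python 3 (only the message wording has evolved since); a sketch:

```python
# Sketch: the two unterminated-string cases this commit gave clearer
# messages for. Only the SyntaxError type is asserted here, since the
# exact message text varies across versions.
def fails(src):
    try:
        compile(src, "<s>", "exec")
        return False
    except SyntaxError:
        return True

eol_in_single = fails("s = 'abc")     # end-of-line inside a 'single' quote
eof_in_triple = fails('s = """abc')   # end-of-file inside a triple quote
```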
|
2863c10a86de1073d3e556779e326b6065347b2c |
|
07-Aug-2002 |
Martin v. Löwis <martin@v.loewis.de> |
Use Py_FatalError instead of abort.
/external/python/cpython2/Parser/tokenizer.c
|
019934b3ccd27463e1bed94b280acba65fb43ccf |
|
07-Aug-2002 |
Martin v. Löwis <martin@v.loewis.de> |
Fix PEP 263 code --without-unicode. Fixes #591943.
/external/python/cpython2/Parser/tokenizer.c
|
cf0a2cfdb269ba8060f86fe5fe7a2b2085edbbc1 |
|
05-Aug-2002 |
Jack Jansen <jack.jansen@cwi.nl> |
Added a cast to shut up a compiler warning.
/external/python/cpython2/Parser/tokenizer.c
|
725bb233b9492eb4b5532d84b60db5daa1e6b195 |
|
05-Aug-2002 |
Martin v. Löwis <martin@v.loewis.de> |
Add 1 to lineno in deprecation warning. Fixes #590888.
/external/python/cpython2/Parser/tokenizer.c
|
1ee99d31d980e8a6e9c9d2379900f8bd5f98a9fe |
|
04-Aug-2002 |
Martin v. Löwis <martin@v.loewis.de> |
Make pgen compile with pydebug. Duplicate normalized names, as they may be longer than the old string.
/external/python/cpython2/Parser/tokenizer.c
|
cd280fb59c6dfc84e0a138039e541ef46df7fb0b |
|
04-Aug-2002 |
Martin v. Löwis <martin@v.loewis.de> |
Group statements properly.
/external/python/cpython2/Parser/tokenizer.c
|
2c3f9c6f04009f01aec1e10a1b6f926372d0f8fe |
|
04-Aug-2002 |
Tim Peters <tim.peters@gmail.com> |
Repaired a fatal compiler error in the debug build: it's not clear what this was trying to assert, but the name it referenced didn't exist.
/external/python/cpython2/Parser/tokenizer.c
|
919603b27a5d81a4b9cd470085e046033299f48a |
|
04-Aug-2002 |
Tim Peters <tim.peters@gmail.com> |
Squash compiler warning about signed-vs-unsigned mismatch.
/external/python/cpython2/Parser/tokenizer.c
|
00f1e3f5a54adb0a7159a446edeca2e36da4092e |
|
04-Aug-2002 |
Martin v. Löwis <martin@v.loewis.de> |
Patch #534304: Implement phase 1 of PEP 263.
/external/python/cpython2/Parser/tokenizer.c
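The PEP 263 machinery this commit introduced survives in Python 3, where `compile()` still honors a coding cookie when given byte source. A sketch (b"\xe9" is U+00E9 under latin-1):

```python
# Sketch: a PEP 263 coding cookie controls how byte source is decoded.
src = b"# -*- coding: latin-1 -*-\ns = '\xe9'\n"
ns = {}
exec(compile(src, "<enc>", "exec"), ns)
decoded = ns["s"]  # the latin-1 byte comes back as one character
```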
|
7b8c7546ebc1fc3688ef95768fa8b82f0f205490 |
|
14-Apr-2002 |
Jack Jansen <jack.jansen@cwi.nl> |
Mass checkin of universal newline support. Highlights: import and friends will understand any of \r, \n and \r\n as end of line. Python file input will do the same if you use mode 'U'. Everything can be disabled by configuring with --without-universal-newlines. See PEP278 for details.
/external/python/cpython2/Parser/tokenizer.c
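The universal-newline behavior introduced here is the default for text-mode files in modern Python 3 (`newline=None`): `\r`, `\n`, and `\r\n` are all read back as `\n`. A sketch:

```python
import os
import tempfile

# Sketch: PEP 278 universal newlines, as the Python 3 default text mode.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(b"one\r\ntwo\rthree\n")   # mixed line endings on disk
with open(path, "r") as f:            # newline=None => universal newlines
    lines = f.read().split("\n")
os.remove(path)
```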
|
d507dab91f9790a24bd53d41d7fcf52fe89a6eff |
|
30-Aug-2001 |
Tim Peters <tim.peters@gmail.com> |
SF patch #455966: Allow leading 0 in float/imag literals. Consequences for Jython still unknown (but raised on Jython-Dev).
/external/python/cpython2/Parser/tokenizer.c
|
9aa70d93aae89a9404a58f32f3fcd3c72b1ee56b |
|
27-Aug-2001 |
Tim Peters <tim.peters@gmail.com> |
SF bug [#455775] float parsing discrepancy. PyTokenizer_Get: error if exponent contains no digits (3e, 2.0e+, ...).
/external/python/cpython2/Parser/tokenizer.c
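The discrepancy fixed here (exponents with no digits) is rejected on both paths in modern Python 3; a sketch:

```python
# Sketch: an exponent with no digits is an error in the tokenizer...
try:
    compile("3e", "<lit>", "eval")
    empty_exp_compiles = True
except SyntaxError:
    empty_exp_compiles = False

# ...and in float() as well.
try:
    float("2.0e+")
    empty_exp_parses = True
except ValueError:
    empty_exp_parses = False
```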
|
4668b000a1d9113394941ad39875c827634feb49 |
|
08-Aug-2001 |
Guido van Rossum <guido@python.org> |
Implement PEP 238 in its (almost) full glory. This introduces: - A new operator // that means floor division (the kind of division where 1/2 is 0). - The "future division" statement ("from __future__ import division) which changes the meaning of the / operator to implement "true division" (where 1/2 is 0.5). - New overloadable operators __truediv__ and __floordiv__. - New slots in the PyNumberMethods struct for true and floor division, new abstract APIs for them, new opcodes, and so on. I emphasize that without the future division statement, the semantics of / will remain unchanged until Python 3.0. Not yet implemented are warnings (default off) when / is used with int or long arguments. This has been on display since 7/31 as SF patch #443474. Flames to /dev/null.
/external/python/cpython2/Parser/tokenizer.c
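The operators PEP 238 introduced behave as follows in Python 3, where `/` is always true division and no `__future__` import is needed; a sketch:

```python
# Sketch: true division vs the // floor-division operator from PEP 238.
true_half = 1 / 2     # true division: 0.5
floor_half = 1 // 2   # floor division: 0
neg_floor = -7 // 2   # floors toward negative infinity: -4
```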
|
cf96de052f656c400ecfd8302d6507e4f586c365 |
|
21-Apr-2001 |
Tim Peters <tim.peters@gmail.com> |
SF bug #417587: compiler warnings compiling 2.1. Repaired *some* of the SGI compiler warnings Sjoerd Mullender reported.
/external/python/cpython2/Parser/tokenizer.c
|
8586991099e4ace18ee94163a96b8ea1bed77ebe |
|
02-Sep-2000 |
Guido van Rossum <guido@python.org> |
REMOVED all CWI, CNRI and BeOpen copyright markings. This should match the situation in the 1.6b1 tree.
/external/python/cpython2/Parser/tokenizer.c
|
434d0828d81855692d45c3fdc0905a67c17d83ba |
|
24-Aug-2000 |
Thomas Wouters <thomas@python.org> |
Support for three-character tokens (**=, >>=, <<=), which was written by Michael Hudson, and support in general for the augmented assignment syntax. The graminit.c patch is large!
/external/python/cpython2/Parser/tokenizer.c
|
23c9e0024af99379ae517b016b874d57127e9a97 |
|
22-Jul-2000 |
Thomas Wouters <thomas@python.org> |
Mass ANSIfication. Work around intrcheck.c's desire to pass 'PyErr_CheckSignals' to 'Py_AddPendingCall' by providing a (static) wrapper function that has the right number of arguments.
/external/python/cpython2/Parser/tokenizer.c
|
85f363990cbd6df21015eebdc171c533176e3407 |
|
11-Jul-2000 |
Fred Drake <fdrake@acm.org> |
Create two new exceptions: IndentationError and TabError. These are used for indentation related errors. This patch includes Ping's improvements for indentation-related error messages. Closes SourceForge patches #100734 and #100856.
/external/python/cpython2/Parser/tokenizer.c
|
dbd9ba6a6c19c3d06f5684b3384a934f740038db |
|
09-Jul-2000 |
Tim Peters <tim.peters@gmail.com> |
Nuke all remaining occurrences of Py_PROTO and Py_FPROTO.
/external/python/cpython2/Parser/tokenizer.c
|
ffcc3813d82e6b96db79f518f4e67b940a13ce64 |
|
01-Jul-2000 |
Guido van Rossum <guido@python.org> |
Change copyright notice - 2nd try.
/external/python/cpython2/Parser/tokenizer.c
|
fd71b9e9d496caa510dec56a9b69966558d6ba5d |
|
01-Jul-2000 |
Guido van Rossum <guido@python.org> |
Change copyright notice.
/external/python/cpython2/Parser/tokenizer.c
|
6da3434e037a281c771ad0fe37896920bfedd140 |
|
29-Jun-2000 |
Guido van Rossum <guido@python.org> |
Trent Mick: familiar simple Win64 patches
/external/python/cpython2/Parser/tokenizer.c
|
b18618dab7b6b85bb05b084693706e59211fa180 |
|
04-May-2000 |
Guido van Rossum <guido@python.org> |
Vladimir Marangozov's long-awaited malloc restructuring. For more comments, read the patches@python.org archives. For documentation read the comments in mymalloc.h and objimpl.h. (This is not exactly what Vladimir posted to the patches list; I've made a few changes, and Vladimir sent me a fix in private email for a problem that only occurs in debug mode. I'm also holding back on his change to main.c, which seems unnecessary to me.)
/external/python/cpython2/Parser/tokenizer.c
|
6c981ad25e38a2f02a9347310bdc680755208450 |
|
04-Apr-2000 |
Guido van Rossum <guido@python.org> |
Only write message about changed Tab size with -v.
/external/python/cpython2/Parser/tokenizer.c
|
ab5ca15f94af2d617ff68281ca542adaedb4212e |
|
31-Mar-2000 |
Guido van Rossum <guido@python.org> |
Fix by Eric Raymond: make the code that looks for various bits of tab-setting magic much smarter, more correct, and more easily extensible.
/external/python/cpython2/Parser/tokenizer.c
|
86016cb4828a73c28ea3ae7ed3c76996216c03b3 |
|
10-Mar-2000 |
Guido van Rossum <guido@python.org> |
Marc-Andre Lemburg: add new string token types u"..." and ur"..." (Unicode and raw Unicode).
/external/python/cpython2/Parser/tokenizer.c
|
d5516bc45fdd55b920838806e59a3089ac17ca93 |
|
04-Dec-1998 |
Guido van Rossum <guido@python.org> |
One more fprintf bites the dust -- use PySys_WriteStderr
/external/python/cpython2/Parser/tokenizer.c
|
6e73bf403234be6fde0ad466208a9217f5f1ed95 |
|
25-Aug-1998 |
Guido van Rossum <guido@python.org> |
Replace all calls to fprintf(stderr, ...) with PySys_WriteStderr(...).
/external/python/cpython2/Parser/tokenizer.c
|
926f13a0819eb3d40a0d0fd38ff25ef0c7d489b3 |
|
09-Apr-1998 |
Guido van Rossum <guido@python.org> |
Add checking for inconsistent tab usage
/external/python/cpython2/Parser/tokenizer.c
|
54758fa8ca97123b48b6887d13eaffa642aabc48 |
|
16-Feb-1998 |
Guido van Rossum <guido@python.org> |
Swap two statements in the dedent check loop. This makes absolutely no difference, but avoids triggering an optimizer bug in the AIX compiler where the loop unrolling does the wrong thing...
/external/python/cpython2/Parser/tokenizer.c
|
35685240a91dbb3e4fb0e0b343019ba586e531c2 |
|
16-Feb-1998 |
Guido van Rossum <guido@python.org> |
Fixed the bug in searching for triple quotes -- change the 'quote2' variable from a pointer to an index, so a realloc() of the buffer won't disturb it. Problem found by Vladimir Marangozov.
/external/python/cpython2/Parser/tokenizer.c
|
cf57d8b8c92143342b4d978c6134d307114fbf44 |
|
19-Jan-1998 |
Guido van Rossum <guido@python.org> |
tok_nextc() should return unsigned characters, to avoid mistaking '\377' for EOF.
/external/python/cpython2/Parser/tokenizer.c
|
86bea46b3d16c4ed0453e17f241ddbdfade76c98 |
|
29-Apr-1997 |
Guido van Rossum <guido@python.org> |
Another directory quickly renamed.
/external/python/cpython2/Parser/tokenizer.c
|
5026cb4dc6477c81c465c8d0f3a1e6c5344d20c5 |
|
25-Apr-1997 |
Guido van Rossum <guido@python.org> |
Now that the string-sig has settled on r"obin" strings, restrict the <letter><string> notation to 'r' and 'R'.
/external/python/cpython2/Parser/tokenizer.c
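The `r"..."` raw-string form settled on here is still the one Python uses; a sketch of what the prefix does:

```python
# Sketch: an r"..." raw string keeps backslashes literally.
raw = r"\n"      # two characters: backslash and 'n'
cooked = "\n"    # one character: a newline
```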
|
2d45be1366162fe3b8b6b5771b1968cc7a6d9256 |
|
11-Apr-1997 |
Guido van Rossum <guido@python.org> |
(Jack:) On the Mac, give syntax error on \r.
/external/python/cpython2/Parser/tokenizer.c
|
24dacb38c563bd1d76aea31ad9fd602d83cbcaec |
|
06-Apr-1997 |
Guido van Rossum <guido@python.org> |
Support for alternative string quotes (a"xx", b"xx", c"xx", ...).
/external/python/cpython2/Parser/tokenizer.c
|
408027ea4635de5c892b28c4fdf41840eb9089a4 |
|
30-Dec-1996 |
Guido van Rossum <guido@python.org> |
Rename DEBUG macro to Py_DEBUG
/external/python/cpython2/Parser/tokenizer.c
|
fd8a393086fbf43597965d5e55bec158a094a466 |
|
02-Dec-1996 |
Guido van Rossum <guido@python.org> |
Make gcc -Wall happy
/external/python/cpython2/Parser/tokenizer.c
|
d266eb460e20ded087d01a29da0a230e235afc40 |
|
25-Oct-1996 |
Guido van Rossum <guido@python.org> |
New permission notice, includes CNRI.
/external/python/cpython2/Parser/tokenizer.c
|
faa436c70bfd3279c2b2391f7f2a3069984dd48a |
|
26-Jan-1996 |
Guido van Rossum <guido@python.org> |
use only j for imaginary constants
/external/python/cpython2/Parser/tokenizer.c
|
f595fde47b25df5ea029f127dedf437a5edbc07f |
|
12-Jan-1996 |
Guido van Rossum <guido@python.org> |
changes for pow(**) and complex
/external/python/cpython2/Parser/tokenizer.c
|
3f6bb865939e7bf0587a63ac8ff067cc5486b940 |
|
21-Sep-1995 |
Guido van Rossum <guido@python.org> |
fix bogus resize length in nextc
/external/python/cpython2/Parser/tokenizer.c
|
94d32b18e0ef1d7bfa8b7bef084f4cc2cecc5d44 |
|
08-Jul-1995 |
Guido van Rossum <guido@python.org> |
ignore control-l in whitespace
/external/python/cpython2/Parser/tokenizer.c
|
2e96eb9266e4ed47d125998287bbca492335e766 |
|
14-Jun-1995 |
Guido van Rossum <guido@python.org> |
replace "\r\n" with "\n" at line end (Jim Ahlstrom)
/external/python/cpython2/Parser/tokenizer.c
|
78c0535a224697e1c7a1a4e68462d3d204e38942 |
|
17-Jan-1995 |
Guido van Rossum <guido@python.org> |
fix loop on unterminated triple quotes
/external/python/cpython2/Parser/tokenizer.c
|
b9f8d6e54d72d108648a411174e57779c212871a |
|
04-Jan-1995 |
Guido van Rossum <guido@python.org> |
Added 1995 to copyright message.
/external/python/cpython2/Parser/tokenizer.c
|
588633daa2fe7276ae4b0c99723094add0d965ff |
|
30-Dec-1994 |
Guido van Rossum <guido@python.org> |
Parser/tokenizer.c (tok_nextc): zap tok->buf after freeing; rest: abort() -> fatal(); small things
/external/python/cpython2/Parser/tokenizer.c
|
1a817c0911f6fe64905b28afa808f4841681f683 |
|
19-Sep-1994 |
Guido van Rossum <guido@python.org> |
* Parser/tokenizer.c (tok_nextc): count line numbers when parsing strings
/external/python/cpython2/Parser/tokenizer.c
|
f4b1a64a215443ac839c1453b1233860bccfdee6 |
|
29-Aug-1994 |
Guido van Rossum <guido@python.org> |
* Parser/tokenizer.c: backup over illegal newline in string literal (for "completeness" test)
/external/python/cpython2/Parser/tokenizer.c
|
8054fad890bbb6668cb2eb2fb3118222bada5975 |
|
26-Oct-1993 |
Guido van Rossum <guido@python.org> |
Changes to accept double-quoted strings on input.
/external/python/cpython2/Parser/tokenizer.c
|
a849b834f1e86bec20027654c91bb4cc74de5c8d |
|
12-May-1993 |
Guido van Rossum <guido@python.org> |
* selectmodule.c: fix (another!) two memory leaks -- this time in list2set * tokenizer.[ch]: allow continuation without \ inside () [] {}.
/external/python/cpython2/Parser/tokenizer.c
|
6ac258d381b5300e3ec935404a111e8dff4617d4 |
|
12-May-1993 |
Guido van Rossum <guido@python.org> |
* pythonrun.c: Print exception type+arg *after* stack trace instead of before it. * ceval.c, object.c: moved testbool() to object.c (now extern visible) * stringobject.c: fix bugs in and rationalize string resize in formatstring() * tokenizer.[ch]: fix non-working code for lines longer than BUFSIZ
/external/python/cpython2/Parser/tokenizer.c
|
9bfef44d97d1ae24e03717e3d59024b44626a9de |
|
29-Mar-1993 |
Guido van Rossum <guido@python.org> |
* Changed all copyright messages to include 1993. * Stubs for faster implementation of local variables (not yet finished) * Added function name to code object. Print it for code and function objects. THIS MAKES THE .PYC FILE FORMAT INCOMPATIBLE (the version number has changed accordingly) * Print address of self for built-in methods * New internal functions getattro and setattro (getattr/setattr with string object arg) * Replaced "dictobject" with more powerful "mappingobject" * New per-type function tp_hash to implement arbitrary object hashing, and hashobject() to interface to it * Added built-in functions hash(v) and hasattr(v, 'name') * classobject: made some functions static that accidentally weren't; added __hash__ special instance method to implement hash() * Added proper comparison for built-in methods and functions
/external/python/cpython2/Parser/tokenizer.c
|
bab9d0385585fcddf6ee96a4ca38dd18e3517f54 |
|
05-Apr-1992 |
Guido van Rossum <guido@python.org> |
Copyright for 1992 added
/external/python/cpython2/Parser/tokenizer.c
|
4fe872988b3dd9edf004160c44076df839f14516 |
|
26-Feb-1992 |
Guido van Rossum <guido@python.org> |
Make tabs always 8 spaces wide -- it's more portable.
/external/python/cpython2/Parser/tokenizer.c
|
943094566a61c0218f3c5bf2e71b8ed37a381350 |
|
10-Dec-1991 |
Guido van Rossum <guido@python.org> |
Add warning XXX that 09.9 isn't accepted.
/external/python/cpython2/Parser/tokenizer.c
|
baf0ebf43c45e85e9a47d25e5279c99ca9455838 |
|
24-Oct-1991 |
Guido van Rossum <guido@python.org> |
Added shift and mask ops. Allow numbers starting with a period.
/external/python/cpython2/Parser/tokenizer.c
|
fbab905ae1fe6320268301193d953d25d2acb5c1 |
|
20-Oct-1991 |
Guido van Rossum <guido@python.org> |
Added 2-char tokens and new versions of comparisons
/external/python/cpython2/Parser/tokenizer.c
|
8c11a5c759a26137de3a84086641d0ee751ffe5b |
|
27-Jul-1991 |
Guido van Rossum <guido@python.org> |
Completely ignore lines with only a newline token on them, except wholly empty lines interactively.
/external/python/cpython2/Parser/tokenizer.c
|
d6a15ada727e586dc7b2cce8115e65d0abb0d1aa |
|
25-Jun-1991 |
Guido van Rossum <guido@python.org> |
Generalize to macintosh.
/external/python/cpython2/Parser/tokenizer.c
|
f023c463d77c8321dcea935a7225562411f97bd1 |
|
05-May-1991 |
Guido van Rossum <guido@python.org> |
Added recognition of 'l' or 'L' as long integer suffix
/external/python/cpython2/Parser/tokenizer.c
|
f70e43a073b36c6f6e9894c01025243a77a452d4 |
|
19-Feb-1991 |
Guido van Rossum <guido@python.org> |
Added copyright notice.
/external/python/cpython2/Parser/tokenizer.c
|
b156d7259bd8b2c84ca7f8e8e9e8e24d4a49870d |
|
21-Dec-1990 |
Guido van Rossum <guido@python.org> |
Changes for THINK C 4.0.
/external/python/cpython2/Parser/tokenizer.c
|
3f5da24ea304e674a9abbdcffc4d671e32aa70f1 |
|
20-Dec-1990 |
Guido van Rossum <guido@python.org> |
"Compiling" version
/external/python/cpython2/Parser/tokenizer.c
|
a769172f6a872ae21e23d797cf484fc0dbdd98e7 |
|
09-Nov-1990 |
Guido van Rossum <guido@python.org> |
Increment line number for continuation lines.
/external/python/cpython2/Parser/tokenizer.c
|
85a5fbbdfea617f6cc8fae82c9e8c2b5c424436d |
|
14-Oct-1990 |
Guido van Rossum <guido@python.org> |
Initial revision
/external/python/cpython2/Parser/tokenizer.c
|