Jared Grubb added the comment:
You're right. My mistake. I thought "match" meant "the full string must match",
but in Python it means "the beginning must match".
Sorry for the noise.
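(For later readers, a quick sketch of the distinction, using re.fullmatch, which Python 3.4 later added for whole-string matching; it was not part of the original discussion.)

```python
import re

# re.match only anchors at the beginning of the string
assert re.match('ab', 'abc') is not None

# re.fullmatch (Python 3.4+) requires the entire string to match
assert re.fullmatch('ab', 'abc') is None
assert re.fullmatch('abc', 'abc') is not None

# the same effect with re.match, by anchoring the pattern at the end
assert re.match(r'ab\Z', 'abc') is None
```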
--
___
Python tracker
Jared Grubb added the comment:
Yes:
>>> re.match('.*', '')
<_sre.SRE_Match object at 0x107c6d308>
>>> re.match('.*?', '')
<_sre.SRE_Match object at 0x107c6d370>
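A minimal sketch of why both patterns match here: a zero-width match is still a match, and a captured group can legitimately be empty.

```python
import re

m = re.match('(.*)', '')
assert m is not None          # '.*' matches zero characters
assert m.span() == (0, 0)     # the match is zero-width
assert m.group(1) == ''       # the group captured the empty string
```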
New submission from Jared Grubb:
re.match matches, but the capture groups are empty. That's not possible.
Python 2.7.2 (default, Oct 11 2012, 20:14:37)
[GCC 4.2.1 Compatible Apple Clang 4.0 (tags/Apple/clang-418.0.60)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
Jared Grubb added the comment:
Ditto on a few dozen lines later:
INPLACE_TRUE_DIVIDE()
Implements in-place TOS = TOS1 / TOS when from __future__ import
division is in effect.
New submission from Jared Grubb :
In the Python 3.1 docs for the 'dis' module, the following appears:
( http://docs.python.org/3.1/library/dis.html )
BINARY_TRUE_DIVIDE()
Implements TOS = TOS1 / TOS when from __future__ import division is
in effect.
True division is always in effect in Python 3, so the "from __future__ import division" clause should be dropped from this entry.
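A hedged illustration of the point: disassembling a plain division compiled under Python 3 never involves the __future__ import. (Opcode names vary by version: BINARY_TRUE_DIVIDE through 3.10, the generic BINARY_OP from 3.11 on.)

```python
import dis
import io

# Disassemble a plain division -- no 'from __future__ import
# division' anywhere in sight.
buf = io.StringIO()
dis.dis(compile('a / b', '<demo>', 'eval'), file=buf)
listing = buf.getvalue()

# 3.0-3.10 emit BINARY_TRUE_DIVIDE; 3.11+ fold it into BINARY_OP
assert 'TRUE_DIVIDE' in listing or 'BINARY_OP' in listing
```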
New submission from Jared Grubb :
The existing text:
http://www.python.org/doc/3.0/whatsnew/3.0.html
"A new system for built-in string formatting operations replaces the %
string formatting operator. (However, the % operator is still supported;
it will be deprecated in Python 3.1 and removed from the language at
some point in the future."
Jared Grubb added the comment:
The process that you describe in msg85741 is a way of ensuring
"memcmp(&x, &y, sizeof(x))==0", and it's portable and safe and is the
Right Thing that we all want and expect. But that's not "x==y", as that
Sun paper explains.
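A sketch of the distinction in Python terms, using struct to compare the raw bits of a double (the analogue of memcmp): bitwise equality and == disagree in both directions.

```python
import struct

def same_bits(x, y):
    # compare the raw IEEE 754 bytes -- the analogue of
    # memcmp(&x, &y, sizeof(double)) == 0
    return struct.pack('<d', x) == struct.pack('<d', y)

nan = float('nan')
assert same_bits(nan, nan)        # identical bit patterns...
assert nan != nan                 # ...yet not == (NaN never equals itself)

assert 0.0 == -0.0                # == says equal...
assert not same_bits(0.0, -0.0)   # ...but the sign bits differ
```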
Jared Grubb added the comment:
I think ANY attempt to rely on eval(repr(x))==x is asking for trouble,
and it should probably be removed from the docs.
Example: The following C code can vary *even* on a IEEE 754 platform,
even in two places in the same source file (so same compile options).
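(Historical note, not part of the original message: CPython 2.7/3.1 later switched repr() to the shortest round-tripping string, so within a single CPython process the identity does hold for finite floats; NaN remains a counterexample. A quick sketch:)

```python
# Shortest-repr (CPython 2.7+ / 3.1+) round-trips finite floats:
x = 0.1 + 0.2
assert eval(repr(x)) == x

# It still cannot hold for NaN: repr gives 'nan', which is not a
# defined name in plain eval(), and NaN is != itself anyway.
try:
    eval(repr(float('nan')))
except NameError:
    pass  # 'nan' is not defined in the eval namespace
```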
New submission from Jared Grubb :
On page library/abc.html documenting abc.abstractmethod, the following
text about C++ is placed in a note:
"Note: Unlike C++’s pure virtual functions, or Java abstract methods,
these abstract methods may have an implementation. This implementation
can be called via the super() mechanism from the class that
overrides it."
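A sketch of what that note describes, in the modern abc.ABC spelling (the docs page under discussion predates this helper):

```python
import abc

class Base(abc.ABC):
    @abc.abstractmethod
    def greet(self):
        # abstract, but it still carries a usable implementation
        return 'hello from Base'

class Child(Base):
    def greet(self):
        # the abstract body is reachable through super()
        return super().greet() + ', extended'

assert Child().greet() == 'hello from Base, extended'

# Base itself remains uninstantiable
try:
    Base()
except TypeError:
    pass
```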
Jared Grubb <[EMAIL PROTECTED]> added the comment:
I actually hadn't thought of that. PyPy should use universal
newlines to its advantage; after all, it IS written in Python... Thanks
for the suggestion!
In any case, I wanted to get this bug about the standard library in your
reco
Jared Grubb <[EMAIL PROTECTED]> added the comment:
Yes, but exec(string) also gives a syntax error for \r\n:
exec('x=1\r\nprint x')
The only explanation I could find for ONLY permitting \n as newlines in
exec(string) comes from PEP 278: "There is no support for universal
newlines in strings passed to eval() or exec."
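(Later note: modern Python 3 compiles \r\n line endings in source strings without complaint, so the limitation discussed here is specific to Python 2. A sketch in Python 3 syntax:)

```python
# In current Python 3, exec() accepts \r\n newlines in source strings
ns = {}
exec('x = 1\r\ny = x + 1\r\n', ns)
assert ns['y'] == 2
```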
Jared Grubb <[EMAIL PROTECTED]> added the comment:
This is not a report on a bug in exec(), but rather a bug in the
tokenize module -- the behavior between the CPython tokenizer and the
tokenize module is not consistent. If you look in the tokenize.py
source, it contains code to recognize
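A sketch of the consistency being asked for, in the Python 3 spelling of the tokenize API: the pure-Python tokenizer should accept the same newlines the compiler does.

```python
import io
import tokenize

# tokenize should treat '\r\n' as an ordinary logical newline
toks = list(tokenize.generate_tokens(io.StringIO('x = 1\r\n').readline))
assert any(t.type == tokenize.NEWLINE for t in toks)
```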
Jared Grubb <[EMAIL PROTECTED]> added the comment:
I ran into this bug because I created a context manager in one of my own
projects, and the regression tests in test_decimal looked like a good
start for my own regression tests... when some recent changes broke MY
code, I found the test b
New submission from Jared Grubb <[EMAIL PROTECTED]>:
In Lib\test\test_decimal.py, attached is a bugfix for two bugs:
1) If the thfunc2 actually fails, then its thread will throw an
exception and never set the Events that thfunc1 is waiting for; thus,
thfunc1 never returns, causing the test to hang.
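A hedged sketch of the shape of the fix (names are illustrative, not the actual test_decimal code): set the Event in a finally block so the waiting thread is released even when the thread body raises.

```python
import threading

done = threading.Event()

def thfunc2_fixed():
    try:
        # stand-in for the real assertions, which may raise
        raise AssertionError('simulated failure in the thread body')
    except AssertionError:
        pass  # the real test would record this failure
    finally:
        done.set()  # always release the thread waiting on the Event

t = threading.Thread(target=thfunc2_fixed)
t.start()
t.join()
assert done.is_set()  # the waiter can no longer hang forever
```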
Jared Grubb added the comment:
CPython allows \ at EOF, but tokenize does not.
>>> s = 'print 1\\\n'
>>> exec s
1
>>> tokenize.tokenize(StringIO(s).readline)
1,0-1,5: NAME 'print'
1,6-1,7: NUMBER '1'
Traceback (most recent call last):
New submission from Jared Grubb:
tokenize recognizes '\n' and '\r\n' as newlines, but does not tolerate '\r':
>>> s = "print 1\nprint 2\r\nprint 3\r"
>>> open('temp.py','w').write(s)
>>> exec(open('temp.py').read())
New submission from Jared Grubb:
tokenize does not handle line joining properly, as the following string
fails the CPython tokenizer but passes the tokenize module.
Example 1:
>>> s = "if 1:\n \\\n #hey\n print 1"
>>> exec s
Traceback (most recent call last):
Changes by Jared Grubb:
--
components: Extension Modules
nosy: jaredgrubb
severity: minor
status: open
title: tokenize: mishandles line joining
type: behavior
versions: Python 2.5