On 18/05/16 17:21, Ned Batchelder wrote:
> Ideally, an empty test wouldn't be a success, but I'm not sure how
> the test runner could determine that it was empty. I guess it could
> introspect the test function to see if it had any real code in it,
> but I don't know of a test runner that does that.
On 17/05/16 12:39, Cem Karan wrote:
> Just downloaded and used a library that came with unit tests, which all
> passed.
> [...]
> I discovered they had commented out the bodies of some of the unit
> tests...
Shouldn't the unit test framework report those "empty" tests as
"todo"/"incomplete"?
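No runner I know of introspects test bodies, but the idea is implementable. A minimal sketch, assuming the heuristic "same bytecode as a function whose body is just `pass`" (docstring-only bodies may compile slightly differently, so this is deliberately conservative):

```python
def _empty():
    pass

def is_empty_test(func):
    """Heuristic: a function that compiles to the same bytecode as a
    bare 'pass' body contains no real code."""
    return func.__code__.co_code == _empty.__code__.co_code

def test_nothing():
    pass            # e.g. a test whose body was commented out

def test_real():
    assert 1 + 1 == 2

print(is_empty_test(test_nothing))  # True
print(is_empty_test(test_real))     # False
```

A runner could flag such tests as "incomplete" rather than counting them as passes.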
On 11/06/15 14:16, MRAB wrote:
harder then they anticipated.
---^
seems nicer... then having to use self everywhere...
"then"? Should be "than"... (That seems to be happening more and more
these days...)
Indeed :-)
--
Ce n'est pas parce qu'ils sont nombreux à avoir tort qu'ils ont raison!
(Coluche)
Thibault Langlois wrote:
1 > 0 == True
False
What am I missing here?
This, perhaps:
http://www.primozic.net/nl/chaining-comparison-operators-in-python/
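In short, the chaining rule the linked article explains: `a OP b OP c` evaluates as `(a OP b) and (b OP c)`, with `b` evaluated once. Spelled out for this case:

```python
# 1 > 0 == True chains as (1 > 0) and (0 == True)
print(1 > 0)            # True
print(0 == True)        # False: True equals 1, not 0
print(1 > 0 == True)    # False, because the second comparison fails
print((1 > 0) == True)  # True: parentheses break the chain
```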
Greetings,
Thomas
Jason Friedman wrote:
Can you recommend an open source project (or two) written in Python;
which covers multi project + sub project issue tracking linked across
github repositories?
Why does it need to be written in Python?
Otherwise it wouldn't be on topic here, would it?
Greetings,
Thomas
Thomas Jollans wrote:
def primes():
    yield 1
1 is not a prime number.
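A corrected sketch of the generator: start at 2 and test divisibility, so 1 is never yielded (trial division is fine for a demonstration, not for serious use):

```python
def primes():
    n = 2
    while True:
        # trial division by all candidates up to sqrt(n)
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            yield n
        n += 1

from itertools import islice
print(list(islice(primes(), 5)))  # [2, 3, 5, 7, 11]
```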
Greetings,
Thomas
--
http://mail.python.org/mailman/listinfo/python-list
John Machin wrote:
No, "complicated" is more related to unused features. In
the case of using an aeroplane to transport 3 passengers 10 km along
the autobahn, you aren't using the radar, wheel-retractability, wings,
pressurised cabin, etc. In your original notion of using a dict in
your lexer,
John Machin wrote:
Rephrasing for clarity: Don't use a data structure that is more
complicated than that indicated by your requirements.
Could you please define "complicated" in this context? In terms of
characters to type and reading, the dict is surely simpler. But I
suppose that under t
John Machin wrote:
*IF* you need to access the regex associated with a token in O(1)
time, a dict is indicated.
O(1) - Does that mean `mydict[mykey]` takes the same amount of time, no
matter how many entries mydict has? How does this magic work?
O(log n) I would understand, but
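The magic is hashing: a dict computes hash(key) and jumps straight to a bucket, so lookup cost does not depend on the number of entries (amortised O(1)). A toy illustration of the idea, not CPython's actual implementation:

```python
class ToyDict:
    """Minimal separate-chaining hash table, for illustration only."""
    def __init__(self, nbuckets=8):
        self.buckets = [[] for _ in range(nbuckets)]

    def __setitem__(self, key, value):
        bucket = self.buckets[hash(key) % len(self.buckets)]
        for i, (k, _) in enumerate(bucket):
            if k == key:
                bucket[i] = (key, value)   # overwrite existing key
                return
        bucket.append((key, value))

    def __getitem__(self, key):
        # hash(key) selects the bucket directly; no scan over all entries
        for k, v in self.buckets[hash(key) % len(self.buckets)]:
            if k == key:
                return v
        raise KeyError(key)
```

Real dicts also grow the bucket table as entries are added, keeping each bucket short on average, which is why the cost stays O(1) rather than O(log n).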
Dennis Lee Bieber wrote:
Is "[ ( name, regex ), ... ]" really "simpler" than "{ name: regex, ...
}"? Intuitively, I would consider the dictionary to be the simpler
structure.
Why, when you aren't /using/ the name to retrieve the expression...
So as soon as I start retrieving a re
John Machin wrote:
General tip: Don't use a data structure that is more complicated than
what you need.
Is "[ ( name, regex ), ... ]" really "simpler" than "{ name: regex, ...
}"? Intuitively, I would consider the dictionary to be the simpler
structure.
Greetings,
Thomas
Aaron Brady wrote:
And, if you don't intend to use 'myway' on 'listiterator's and such,
'send( None )' is equivalent to 'next( )'.
I didn't know that. But doesn't that impose a restriction somehow? It
makes it impossible to send a None to a generator.
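It remains possible to send None; it just cannot be distinguished from a plain next() on the receiving side, as this sketch shows:

```python
def echo():
    while True:
        value = yield
        print("received:", value)

gen = echo()
next(gen)        # advance to the first yield (equivalent to gen.send(None))
gen.send(42)     # prints: received: 42
gen.send(None)   # prints: received: None
next(gen)        # prints: received: None -- same as send(None)
```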
Greetings,
Thomas
alex23 wrote:
http://www.python.org/dev/peps/pep-0342/
That links to the original proposal to extend the generator behaviour
After some searching, I found this as a remark in parentheses:
"Introducing a new method instead of overloading next() minimizes
overhead for simple next() calls."
Arnaud Delobelle wrote:
If you want to simply 'set' the generator (by which I take it you mean
'change its state') without iterating it one step, then what you
need is a class with an __iter__() method. Then you can change the
state of the object between calls to next(). E.g.
class M
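The quoted example is cut off above; a reconstruction of the idea (the class and its names are my own) is an object with __iter__ whose state can be mutated between next() calls:

```python
class Multiplier:
    """Iterable whose behaviour can be changed from outside between steps."""
    def __init__(self, factor=1):
        self.factor = factor   # state, settable without iterating
        self.n = 0

    def __iter__(self):
        return self

    def __next__(self):
        self.n += 1
        return self.n * self.factor

m = Multiplier()
print(next(m))    # 1
m.factor = 10     # "set" the state without consuming a step
print(next(m))    # 20
```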
Hello,
I was playing around a bit with generators using next() and send(). And
I was wondering why an extra send() method was introduced instead of
simply allowing an argument for next().
Also, I find it a bit counter-intuitive that send(42) not only "sets"
the generator to the specified val
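Both effects are visible in a short sketch: send() delivers its argument to the paused yield expression *and* runs the generator on to the next yield:

```python
def counter():
    total = 0
    while True:
        step = yield total     # send()'s argument appears here...
        total += step or 1     # ...and execution continues to the next yield

c = counter()
print(next(c))     # 0 -- run up to the first yield
print(c.send(5))   # 5 -- 'step' becomes 5 AND the generator advances
print(next(c))     # 6 -- plain next() behaves like send(None)
```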
Steve Holden wrote:
Suppose I use the dict and I want to access the regex associated with
the token named "tokenname" (that is, no iteration, but a single
access). I could simply write tokendict["tokenname"]. But with the list
of tuples, I can't think of an equally easy way to do that. But th
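For a single keyed access the difference is concrete: the dict is one O(1) subscription, while the list of tuples needs a linear search. The names below come from the discussion; the list-based spelling is one possible equivalent:

```python
import re

tokendict = {"tokenname": re.compile(r"\d+")}
tokenlist = list(tokendict.items())

regex_d = tokendict["tokenname"]                              # O(1)
regex_l = next(r for n, r in tokenlist if n == "tokenname")   # O(n) scan
print(regex_d is regex_l)  # True
```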
John Machin wrote:
You are getting closer. A better analogy is that using a dict is like
transporting passengers along an autobahn in an aeroplane or
helicopter that never leaves the ground.
It is not a bad idea to transport passengers in an airplane, but then
the airplane should not follow
John Machin wrote:
Single-character tokens like "<" may be more efficiently handled by
doing a dict lookup after failing to find a match in the list of
(name, regex) tuples.
Yes, I will keep that in mind. For the time being, I will use only
regexes to keep the code simpler. Later, or when t
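A sketch of the suggested fallback, with illustrative token tables: try the (name, regex) list first, then fall back to a dict of single-character tokens:

```python
import re

token_res = [("NUMBER", re.compile(r"\d+")),
             ("NAME", re.compile(r"[A-Za-z]+"))]
single_chars = {"<": "LT", ">": "GT", "=": "EQ"}

def next_token(text, pos):
    for name, regex in token_res:          # regex table first
        m = regex.match(text, pos)
        if m:
            return name, m.group(), m.end()
    ch = text[pos]
    if ch in single_chars:                 # cheap dict lookup as fallback
        return single_chars[ch], ch, pos + 1
    raise SyntaxError("unexpected character %r at %d" % (ch, pos))

print(next_token("a<1", 0))  # ('NAME', 'a', 1)
print(next_token("a<1", 1))  # ('LT', '<', 2)
```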
Paul McGuire wrote:
Just be sure to account for tabs when computing the column, which this
simple-minded algorithm does not do.
Another thing I had not thought of -- thanks for the hint.
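One way to account for tabs, assuming a conventional tab size of 8: expand the text before the error offset and report the expanded length, so the column matches what an editor with the same tab size shows.

```python
def column(line_text, offset, tabsize=8):
    """1-based column of `offset` within `line_text`, counting a tab as a
    jump to the next tab stop."""
    return len(line_text[:offset].expandtabs(tabsize)) + 1

print(column("x = 1", 4))      # 5
print(column("\tx = 1", 1))    # 9 -- the tab spans columns 1-8
```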
Greetings,
Thomas
Paul McGuire wrote:
loc = data.index("list")
print data[:loc].count("\n")-1
print loc-data[:loc].rindex("\n")-1
prints 5,14
I'm sure it's non-optimal, but it *is* an algorithm that does not
require keeping track of the start of every line...
Yes, I was thinking of something like this. As l
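Spelled out as a small function (the quoted snippet is Python 2; this sketch uses the same count/rindex idea but returns 1-based results):

```python
def line_col(data, loc):
    """1-based (line, column) of character offset `loc` in `data`."""
    line = data.count("\n", 0, loc) + 1
    last_nl = data.rfind("\n", 0, loc)   # -1 when loc is on the first line
    return line, loc - last_nl

data = "first\nsecond\nthird"
print(line_col(data, data.index("third")))  # (3, 1)
```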
John Machin wrote:
On the other hand: If all my tokens are "mutually exclusive" then,
But they won't *always* be mutually exclusive (another example is
relational operators (< vs <=, > vs >=)) and AFAICT there is nothing
useful that the lexer can do with an assumption/guess/input that they
Robert Lehmann wrote:
You don't have to introduce a `next` method to your Lexer class. You
could just transform your `tokenize` method into a generator by replacing
``self.result.append`` with `yield`. It gives you the just in time part
for free while not picking your algorithm into tiny unr
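A sketch of that transformation, with an invented minimal Lexer standing in for the one under discussion; the only change to the algorithm is that ``yield`` replaces ``self.result.append``:

```python
import re

class Lexer:
    def __init__(self, token_res):
        self.token_res = token_res   # list of (name, compiled regex)

    def tokenize(self, source):
        pos = 0
        while pos < len(source):
            for name, regex in self.token_res:
                m = regex.match(source, pos)
                if m:
                    yield name, m.group()   # was: self.result.append(...)
                    pos = m.end()
                    break
            else:
                raise SyntaxError("no token matches at offset %d" % pos)

lexer = Lexer([("NUMBER", re.compile(r"\d+")),
               ("PLUS", re.compile(r"\+"))])
print(list(lexer.tokenize("1+22")))
# [('NUMBER', '1'), ('PLUS', '+'), ('NUMBER', '22')]
```

Because tokenize is now a generator, callers get tokens one at a time, just in time, without the algorithm being split across methods.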
John Machin wrote:
[...] You have TWO problems: (1) Reporting the error location as
(offset from the start of the file) instead of (line number, column
position) would get you an express induction into the User Interface
Hall of Shame.
Of course. For the actual message I would use at least
Arnaud Delobelle wrote:
Adding to John's comments, I wouldn't have source as a member of the
Lexer object but as an argument of the tokenise() method (which I would
make public). The tokenise method would return what you currently call
self.result. So it would be used like this.
mylexer =
John Machin wrote:
Be consistent with your punctuation style. I'd suggest *not* having a
space after ( and before ), as in the previous line. Read
http://www.python.org/dev/peps/pep-0008/
What were the reasons for preferring (foo) over ( foo )? This PEP gives
recommendations for coding style
Hello,
I started to write a lexer in Python -- my first attempt to do something
useful with Python (rather than trying out snippets from tutorials). It
is not complete yet, but I would like some feedback -- I'm a Python
newbie and it seems that, with Python, there is always a simpler and
better way.
Kurien Mathew wrote:
Any suggestions on a good python equivalent for the following C code:
while (loopCondition)
{
    if (condition1)
        goto next;
    if (condition2)
        goto next;
    if (condition3)
        goto next;
    stmt1;
    stmt2;
next:
    stmt3;
    stmt4;
}
while
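Since the goto only skips stmt1 and stmt2, no goto emulation is needed; inverting the combined condition is enough. A runnable sketch with illustrative stand-ins for the loop, conditions, and statements:

```python
def process(items):
    """Stand-in loop: the conditions and statements are illustrative."""
    log = []
    for n in items:                           # while (loopCondition)
        if not (n < 0 or n > 100 or n % 2):   # the three goto'd tests
            log.append("stmt1")
            log.append("stmt2")
        log.append("stmt3")                   # next: always reached
        log.append("stmt4")
    return log

print(process([3]))  # ['stmt3', 'stmt4'] -- a skipping value
print(process([4]))  # ['stmt1', 'stmt2', 'stmt3', 'stmt4']
```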