Francesco Bochicchio wrote:
> Hi all,
>
> anybody can point me to a description of how the default comparison of
> list objects (or other iterables) works?
>
> Apparently l1 < l2 is equivalent to all ( x < y for x,y in
> zip( l1, l2) ), as is shown in the following tests, but I can't find
>
On Mon, 16 Aug 2010 13:46:07 +0300, Francesco Bochicchio
wrote:
> anybody can point me to a description of how the default comparison of
> list objects (or other iterables) works?
Sequences of the same type are compared using lexicographical ordering:
http://docs.python.org/tutorial/datastructur
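Lexicographic ordering means the comparison walks both lists in parallel and the first unequal pair decides the result; it is not the same as the all(x < y ...) conjecture above. A quick sketch in modern Python (not from the thread):

```python
# Lexicographic comparison: the first unequal pair decides. If no pair
# differs, the shorter list is the smaller one.
def lex_lt(l1, l2):
    for x, y in zip(l1, l2):
        if x != y:
            return x < y
    return len(l1) < len(l2)

# Matches the built-in < on lists:
assert lex_lt([1, 2, 3], [1, 3, 2]) == ([1, 2, 3] < [1, 3, 2]) == True

# Counterexample to the all(x < y ...) conjecture: once the first pair
# decides, the later pairs are ignored.
print([1, 9, 9] < [2, 0, 0])                             # True
print(all(x < y for x, y in zip([1, 9, 9], [2, 0, 0])))  # False
```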
Hi all,
anybody can point me to a description of how the default comparison of
list objects (or other iterables) works?
Apparently l1 < l2 is equivalent to all ( x < y for x,y in
zip( l1, l2) ), as is shown in the following tests, but I can't find
it described anywhere:
>>> [1,2,3] < [1,3,2]
True
Loic <[EMAIL PROTECTED]> writes:
> I want to design a function to compare lists and return True only if
> both lists are equal considering memory location of the list.
> I suppose it would be the equivalent of comparing 2 pointers in C++
Use the "is" keyword.
print (l1 is l2)
print (l0 is l2)
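Spelled out with concrete lists (the names l0, l1, l2 are assumed from the snippet above): `is` compares object identity, while `==` compares contents.

```python
l1 = [1, 2, 3]
l2 = l1          # same object -- like copying a pointer
l0 = [1, 2, 3]   # equal contents, but a distinct object

print(l1 is l2)          # True: both names refer to the one list
print(l0 is l2)          # False: different objects, even though l0 == l2
print(id(l0) == id(l2))  # False: id() exposes the identity that `is` tests
```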
Loic wrote:
> I would like to know if it is possible, and how to do this with Python:
>
> I want to design a function to compare lists and return True only if
> both lists are equal considering memory location of the list.
> I suppose it would be the equivalent of comparing 2 pointers in C++
>
>
I would like to know if it is possible, and how to do this with Python:
I want to design a function to compare lists and return True only if
both lists are equal considering memory location of the list.
I suppose it would be the equivalent of comparing 2 pointers in C++.
Let's call this function c.
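A minimal sketch of such a function (`c` is just the poster's name for it; the built-in `is` operator already performs the pointer-style comparison):

```python
def c(a, b):
    # True only when a and b are the very same list object --
    # the C++ analogue of comparing two pointers for equality.
    return a is b

xs = [1, 2]
print(c(xs, xs))      # True: same object
print(c(xs, [1, 2]))  # False: equal contents, different object
```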
"Alex Martelli" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]
> Christian Stapfer <[EMAIL PROTECTED]> wrote:
>
>> "Alex Martelli" <[EMAIL PROTECTED]> wrote in message
>> news:[EMAIL PROTECTED]
>> > Christian Stapfer <[EMAIL PROTECTED]> wrote:
>> >
>> >> This is why we would like to
[EMAIL PROTECTED] (Alex Martelli) writes:
> implementation of the components one's considering! Rough ideas of
> *EXPECTED* run-times (big-Theta) for various subcomponents one is
> sketching are *MUCH* more interesting and important than "asymptotic
> worst-case for amounts of input tending to infinity
Christian Stapfer <[EMAIL PROTECTED]> wrote:
> "Alex Martelli" <[EMAIL PROTECTED]> wrote in message
> news:[EMAIL PROTECTED]
> > Christian Stapfer <[EMAIL PROTECTED]> wrote:
> >
> >> This is why we would like to have a way of (roughly)
> >> estimating the reasonableness of the outlines of a
> >>
"Alex Martelli" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]
> Christian Stapfer <[EMAIL PROTECTED]> wrote:
>
>> This is why we would like to have a way of (roughly)
>> estimating the reasonableness of the outlines of a
>> program's design in "armchair fashion" - i.e. without
>> ha
Christian Stapfer <[EMAIL PROTECTED]> wrote:
> This is why we would like to have a way of (roughly)
> estimating the reasonableness of the outlines of a
> program's design in "armchair fashion" - i.e. without
> having to write any code and/or test harness.
And we would also like to consume vast
Steven D'Aprano <[EMAIL PROTECTED]> writes:
> But if you are unlikely to discover this worst case behaviour by
> experimentation, you are equally unlikely to discover it in day to
> day usage.
Yes, that's the whole point. Since you won't discover it by
experimentation and you won't discover it by
Steven D'Aprano wrote:
> On Sat, 15 Oct 2005 18:17:36 +0200, Christian Stapfer wrote:
>
>
>> I'd prefer a (however) rough characterization
>> of computational complexity in terms of Big-Oh
>> (or Big-whatever) *anytime* to marketing-type
>> characterizations like this one...
>
> Oh how naive.
On Sun, 16 Oct 2005 14:07:37 -0700, Paul Rubin wrote:
> The complexity of hashing depends intricately on the the data and if
> the data is carefully constructed by someone with detailed knowledge
> of the hash implementation, it may be as bad as O(n) rather than O(1)
> or O(sqrt(n)) or anything li
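The quoted worst case is easy to provoke deliberately. The sketch below (a hypothetical `Bad` class, not from the thread) forces every key into the same hash bucket, so each dict insert and lookup degrades from average O(1) to an O(n) scan of the collision chain:

```python
class Bad:
    # Every instance hashes alike, so the dict must fall back on
    # chains of __eq__ comparisons: O(n) per operation, not O(1).
    def __init__(self, v):
        self.v = v
    def __hash__(self):
        return 42
    def __eq__(self, other):
        return isinstance(other, Bad) and self.v == other.v

keys = [Bad(i) for i in range(1000)]
d = {k: k.v for k in keys}   # quadratic overall: 1000 colliding inserts
print(len(d))                # 1000 -- still correct, just slow
print(d[Bad(500)])           # 500 -- lookup walks the collision chain
```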
On Sun, 16 Oct 2005 20:28:55 +0200, Christian Stapfer wrote:
> Experiments
> (not just in computer science) are quite
> frequently botched. How do you discover
> botched experiments?
Normally by comparing them to the results of other experiments, and being
unable to reconcile the results. You may
Ognen Duzlevski <[EMAIL PROTECTED]> writes:
> > Optimizations have a tendency to make a complete mess of Big O
> > calculations, usually for the better. How does this support your
> > theory that Big O is a reliable predictor of program speed?
>
> There are many things that you cannot predict, how
Steven D'Aprano <[EMAIL PROTECTED]> wrote:
> On Sun, 16 Oct 2005 15:16:39 +0200, Christian Stapfer wrote:
> > It turned out that the VAX compiler had been
> > clever enough to hoist his simple-minded test
> > code out of the driving loop.
> Optimizations have a tendency to make a complete me
Christian Stapfer wrote:
> "Ron Adam" <[EMAIL PROTECTED]> wrote in message
> news:[EMAIL PROTECTED]
>
>>Christian Stapfer wrote:
>>
>>>"Ron Adam" <[EMAIL PROTECTED]> wrote in message
>>>news:[EMAIL PROTECTED]
>>>
>>>
Christian Stapfer wrote:
>This discussion begins to sound
"Steven D'Aprano" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]
> On Sun, 16 Oct 2005 19:42:11 +0200, Christian Stapfer wrote:
>
>> Pauli's prediction of
>> the existence of the neutrino is another. It took
>> experimentalists a great deal of time and patience
>> (about 20 years, I am
"Steven D'Aprano" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]
> On Sun, 16 Oct 2005 15:16:39 +0200, Christian Stapfer wrote:
>
>> Come to think of an experience that I shared
>> with a student who was one of those highly
>> creative experimentalists you seem to have
>> in mind. He
On Sun, 16 Oct 2005 19:42:11 +0200, Christian Stapfer wrote:
> Pauli's prediction of
> the existence of the neutrino is another. It took
> experimentalists a great deal of time and patience
> (about 20 years, I am told) until they could finally
> muster something amounting to "experimental proof"
"Ron Adam" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]
> Christian Stapfer wrote:
>> "Ron Adam" <[EMAIL PROTECTED]> wrote in message
>> news:[EMAIL PROTECTED]
>>
>>>Christian Stapfer wrote:
>>>
>>>
This discussion begins to sound like the recurring
arguments one hears betwe
Christian Stapfer wrote:
> It turned out that the VAX compiler had been
> clever enough to hoist his simple-minded test
> code out of the driving loop. In fact, our VAX
> calculated the body of the loop only *once*
> and thus *immediately* announced that it had finished
> the whole test - the
"Fredrik Lundh" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]
> Christian Stapfer wrote:
>
>> As to the value of complexity theory for creativity
>> in programming (even though you seem to believe that
>> a theoretical bent of mind can only serve to stifle
>> creativity), the story of
Christian Stapfer wrote:
> "Ron Adam" <[EMAIL PROTECTED]> wrote in message
> news:[EMAIL PROTECTED]
>
>>Christian Stapfer wrote:
>>
>>
>>>This discussion begins to sound like the recurring
>>>arguments one hears between theoretical and
>>>experimental physicists. Experimentalists tend
>>>to overra
On Sun, 16 Oct 2005 15:16:39 +0200, Christian Stapfer wrote:
> Come to think of an experience that I shared
> with a student who was one of those highly
> creative experimentalists you seem to have
> in mind. He had just bought a new PC and
> wanted to check how fast its floating point
> unit was
Christian Stapfer wrote:
> As to the value of complexity theory for creativity
> in programming (even though you seem to believe that
> a theoretical bent of mind can only serve to stifle
> creativity), the story of the discovery of an efficient
> string searching algorithm by D.E.Knuth provides a
"Ron Adam" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]
> Christian Stapfer wrote:
>
>> This discussion begins to sound like the recurring
>> arguments one hears between theoretical and
>> experimental physicists. Experimentalists tend
>> to overrate the importance of experimental d
"Ron Adam" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]
> Christian Stapfer wrote:
>
>> This discussion begins to sound like the recurring
>> arguments one hears between theoretical and
>> experimental physicists. Experimentalists tend
>> to overrate the importance of experimental da
Christian Stapfer wrote:
> This discussion begins to sound like the recurring
> arguments one hears between theoretical and
> experimental physicists. Experimentalists tend
> to overrate the importance of experimental data
> (setting up a useful experiment, how to interpret
> the experimental data
"Steven D'Aprano" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]
> On Sat, 15 Oct 2005 18:17:36 +0200, Christian Stapfer wrote:
>
>> I'd prefer a (however) rough characterization
>> of computational complexity in terms of Big-Oh
>> (or Big-whatever) *anytime* to marketing-type
On Sat, 15 Oct 2005 18:17:36 +0200, Christian Stapfer wrote:
>>> I'd prefer a (however) rough characterization
>>> of computational complexity in terms of Big-Oh
>>> (or Big-whatever) *anytime* to marketing-type
>>> characterizations like this one...
>>
>> Oh how naive.
>
> Why is it that even co
"Steven D'Aprano" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]
> On Sat, 15 Oct 2005 06:31:53 +0200, Christian Stapfer wrote:
>
>> "jon" <[EMAIL PROTECTED]> wrote in message
>> news:[EMAIL PROTECTED]
>>>
>>> To take the heat out of the discussion:
>>>
>>> sets are blazingly fast.
>>
On Sat, 15 Oct 2005 06:31:53 +0200, Christian Stapfer wrote:
> "jon" <[EMAIL PROTECTED]> wrote in message
> news:[EMAIL PROTECTED]
>>
>> To take the heat out of the discussion:
>>
>> sets are blazingly fast.
>
> I'd prefer a (however) rough characterization
> of computational complexity in terms
"jon" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]
>
> To take the heat out of the discussion:
>
> sets are blazingly fast.
I'd prefer a (however) rough characterization
of computational complexity in terms of Big-Oh
(or Big-whatever) *anytime* to marketing-type
characterizations l
Let me begin by apologizing to Christian as I was too snippy in
my reply, and sounded even snippier than I meant to.
Christian Stapfer wrote:
> "Scott David Daniels" <[EMAIL PROTECTED]> wrote in message
> news:[EMAIL PROTECTED]
>>a "better" set implementation will win if
>>it can show better perfo
To take the heat out of the discussion:
sets are blazingly fast.
--
http://mail.python.org/mailman/listinfo/python-list
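For the record, the rough characterization being asked for does exist: CPython sets and dicts are hash-based, so membership is expected O(1) on average (O(n) in pathological collision cases), versus O(n) for a list scan. A quick sanity sketch, with illustrative timings that will vary by machine (not from the thread):

```python
import timeit

n = 100_000
data = list(range(n))
as_list, as_set = data, set(data)
probe = n - 1  # worst case for the linear list scan

t_list = timeit.timeit(lambda: probe in as_list, number=100)
t_set = timeit.timeit(lambda: probe in as_set, number=100)

# Expected: the O(n) list scan is orders of magnitude slower than the
# average-O(1) hash lookup; the exact ratio depends on the machine.
print(f"list: {t_list:.4f}s  set: {t_set:.6f}s")
```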
"Scott David Daniels" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]
> Christian Stapfer wrote:
>> "Steve Holden" <[EMAIL PROTECTED]> wrote in message
>> news:[EMAIL PROTECTED]
>>
>>>Christian Stapfer wrote:
>>>
"George Sakkis" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL P
Christian Stapfer wrote:
> "Steve Holden" <[EMAIL PROTECTED]> wrote in message
> news:[EMAIL PROTECTED]
>
>>Christian Stapfer wrote:
>>
>>>"George Sakkis" <[EMAIL PROTECTED]> wrote in message
>>>news:[EMAIL PROTECTED]
>>>
>>>
"Christian Stapfer" <[EMAIL PROTECTED]> wrote:
><[EMAIL P
"Steve Holden" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]
> Christian Stapfer wrote:
>> "George Sakkis" <[EMAIL PROTECTED]> wrote in message
>> news:[EMAIL PROTECTED]
>>
>>>"Christian Stapfer" <[EMAIL PROTECTED]> wrote:
<[EMAIL PROTECTED]> wrote:
> try to use set.
>>>
Christian Stapfer wrote:
> "George Sakkis" <[EMAIL PROTECTED]> wrote in message
> news:[EMAIL PROTECTED]
>
>>"Christian Stapfer" <[EMAIL PROTECTED]> wrote:
>>
>>
>>><[EMAIL PROTECTED]> wrote:
>>>
try to use set.
>>>
>>>Sorting the two lists and then extracting
>>>A-B, B-A, A|B, A & B and
"George Sakkis" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]
> "Christian Stapfer" <[EMAIL PROTECTED]> wrote:
>
>> <[EMAIL PROTECTED]> wrote:
>> > try to use set.
>>
>> Sorting the two lists and then extracting
>> A-B, B-A, A|B, A & B and A ^ B in one single
>> pass seems to me
On Mon, 10 Oct 2005 14:34:35 +0200, Christian Stapfer wrote:
> Sorting the two lists and then extracting
> A-B, B-A, A|B, A & B and A ^ B in one single
> pass seems to me very likely to be much faster
> for large lists.
Unless you are running a Python compiler in your head, chances are your
intui
"Christian Stapfer" <[EMAIL PROTECTED]> wrote:
> <[EMAIL PROTECTED]> wrote:
> > try to use set.
>
> Sorting the two lists and then extracting
> A-B, B-A, A|B, A & B and A ^ B in one single
> pass seems to me very likely to be much faster
> for large lists.
Why don't you implement it, test it
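The single-pass merge idea being debated can be sketched as follows: one simultaneous walk over two sorted, duplicate-free lists, producing A-B, B-A, A&B, A|B and A^B together in O(len(A)+len(B)) time after the O(n log n) sorts. The function name is illustrative, not from the thread:

```python
def merge_set_ops(l1, l2):
    # l1 and l2 must be sorted and free of duplicates.
    a_b, b_a, a_and_b, a_or_b = [], [], [], []
    i = j = 0
    while i < len(l1) and j < len(l2):
        if l1[i] < l2[j]:                       # only in A
            a_b.append(l1[i]); a_or_b.append(l1[i]); i += 1
        elif l1[i] > l2[j]:                     # only in B
            b_a.append(l2[j]); a_or_b.append(l2[j]); j += 1
        else:                                   # in both
            a_and_b.append(l1[i]); a_or_b.append(l1[i]); i += 1; j += 1
    a_b += l1[i:]; b_a += l2[j:]                # drain the leftovers
    a_or_b += l1[i:]; a_or_b += l2[j:]
    a_xor_b = sorted(a_b + b_a)                 # symmetric difference
    return a_b, b_a, a_and_b, a_or_b, a_xor_b

A = sorted(set([1, 1, 2, 3, 4]))
B = sorted(set([1, 3, 99]))
print(merge_set_ops(A, B))
# ([2, 4], [99], [1, 3], [1, 2, 3, 4, 99], [2, 4, 99])
```

Whether this beats the built-in set type in practice is exactly what the thread is arguing about; the constant factors of C-implemented hashing are hard to beat from pure Python.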
<[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]
> try to use set.
> L1 = [1,1,2,3,4]
> L2 = [1,3, 99]
> A = set(L1)
> B = set(L2)
>
> X = A-B
> print X
>
> Y = B-A
> print Y
>
> Z = A | B
> print Z
But how "efficient" is this? Could you be a bit
more expl
try to use set.
L1 = [1,1,2,3,4]
L2 = [1,3, 99]
A = set(L1)   # duplicates collapse: set([1, 2, 3, 4])
B = set(L2)   # set([1, 3, 99])
X = A-B       # in A but not B: set([2, 4])
print X
Y = B-A       # in B but not A: set([99])
print Y
Z = A | B     # union: set([1, 2, 3, 4, 99])
print Z
Cheers,
pujo
Odd-R. wrote:
> I have two lists, A and B, that may or may not be equal. If they are not
> identical, I want the output to be three new lists, X, Y and Z, where X has
> all the elements that are in A but not in B, and Y contains all the
> elements that are in B but not in A. Z will then have the elemen
I have two lists, A and B, that may or may not be equal. If they are not
identical, I want the output to be three new lists, X, Y and Z, where X has
all the elements that are in A but not in B, and Y contains all the
elements that are in B but not in A. Z will then have the elements that are
in both A and B.
48 matches