>
> Have tracked-down and communicated with the site owner/operator. He
> advised a loop-back problem which has now been blocked.
>
I believe this has been corrected in the past, more than once, though my
memory is a bit hazy now. It's not clear to me why this particular site
keeps messing up their ...
> I filter out these messages in my news setup (using gnus on Emacs) on
> the header:
>
> ("head"
> ("Injection-Info: news.bbs.nz" -1002 nil s))
>
> i.e. each message that contains "news.bbs.nz" in the "Injection-Info"
> header will be made invisible.
> This solved the problem for me.
Thanks. M...
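For anyone reading the list as mail rather than news, a rough Python analogue of that gnus score rule might look like this (a sketch only; the mbox path is hypothetical, and "filtering" here just means skipping the message):

import mailbox

# Skip any message whose Injection-Info header mentions news.bbs.nz,
# mirroring the -1002 gnus score above. 'python-list.mbox' is made up.
for msg in mailbox.mbox('python-list.mbox'):
    if 'news.bbs.nz' in msg.get('Injection-Info', ''):
        continue  # drop the looped-back duplicate
    print(msg['Subject'])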
Skip Montanaro writes:
>> This just arrived at my news server:
>>
> ...
>> I find that very curious because the post is mine but which I
>> sent out with these headers:
>>
I filter out these messages in my news setup (using gnus on Emacs) on
the header:
("head"
("Injection-Info: news.bbs.nz" -
> This just arrived at my news server:
>
...
> I find that very curious because the post is mine but which I
> sent out with these headers:
>
...
> The timezone on the date header has changed, the subject has been
> truncated, the Path and injection info are all different, and most
> crucially, the Message-ID ...
On Tue, 21 Apr 2020 21:42:42 +0000 (UTC), Eli the Bearded wrote:
> This just arrived at my news server:
>
> Path:
> reader2.panix.com!panix!goblin2!goblin.stu.neva.ru!news.unit0.net!2.eu.feeder.erje.net!4.us.feeder.erje.net!feeder.erje.net!xmission!csiph.com!news.bbs.nz!.POSTED.agency.bbs.nz!not-for-mail
This just arrived at my news server:
Path:
reader2.panix.com!panix!goblin2!goblin.stu.neva.ru!news.unit0.net!2.eu.feeder.erje.net!4.us.feeder.erje.net!feeder.erje.net!xmission!csiph.com!news.bbs.nz!.POSTED.agency.bbs.nz!not-for-mail
From: Eli the Bearded <*@eli.users.panix.com> (Eli the Bearded)
Hi,
How do I remove duplicate dicts from a list of dictionaries, based on one
of the elements in the dictionary?

l = [{"component": "software", "version": "1.2"},
     {"component": "hardware", "version": "2.2"},
     ...]
To: python-list@python.org
Subject: Duplicates
I get two messages for every post - p...@netrh.com
--
Regards,
Milt
m...@ratcliffnet.com
I am trying to delete duplicates but the job just finishes with an exit code 0
and does not delete any duplicates.
The duplicates always exist in Column F, and I want to delete the entire
row B-I.
Any ideas?
import openpyxl
wb1 = openpyxl.load_workbook('C:/dwad/SWWA
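The script is cut off above, so here is only a sketch of one way to do it with openpyxl; the file name, sheet, and header row are assumptions, and this deletes whole rows rather than clearing just B-I:

import openpyxl

wb = openpyxl.load_workbook('example.xlsx')   # hypothetical path
ws = wb.active

seen = set()
dupe_rows = []
for row in ws.iter_rows(min_row=2):           # assume row 1 is a header
    value = row[5].value                      # column F is the sixth column
    if value in seen:
        dupe_rows.append(row[0].row)
    else:
        seen.add(value)

# Delete from the bottom up so earlier deletions don't shift the indices.
for idx in reversed(dupe_rows):
    ws.delete_rows(idx)

wb.save('example_deduped.xlsx')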
... ['a.jpg', 'b.jpg', 'c.jpg', 'd.jpg', 'e.jpg', 'f.jpg', 'g.jpg'],
columns=['model', 'dtime'])

print(df.head(10))

       model   dtime
a.jpg  first   2017-01-01_112233
b.jpg  first   2017-01-01_112234
c.jpg  second  2017-01-01_112234
d.jpg  second  2017-01-01_112234
e.jpg  second  2017-01-01_112234
f.j...
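The question is truncated, but if the goal is to drop rows sharing the same dtime, pandas covers it directly; a sketch with data reconstructed from the excerpt:

import pandas as pd

df = pd.DataFrame(
    {"model": ["first", "first", "second", "second", "second"],
     "dtime": ["2017-01-01_112233", "2017-01-01_112234",
               "2017-01-01_112234", "2017-01-01_112234",
               "2017-01-01_112234"]},
    index=["a.jpg", "b.jpg", "c.jpg", "d.jpg", "e.jpg"])

# Keep only the first row for each dtime value.
print(df.drop_duplicates(subset="dtime", keep="first"))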
Grant Edwards wrote:
Does Windows even _have_ a library dependency system that lets
an application specify which versions of which libraries it
requires?

Well, you could argue that easy_install does it a bit during install.
Then there is the 'Windows Side By Side' (winsxs) system, which sorta does it ...
Lie Ryan wrote:
The only thing that package managers couldn't provide is for the
extremist bleeding edge; those that want the latest and the greatest in
the first few seconds after the developers release them. The majority of
users don't fall into that category; most users are willing to wait a ...
On 2009-12-08, Martin P. Hellwig wrote:
> - In the ideal world, a upgrade of a dependency won't break
> your program, in reality users fear upgrading dependencies
> because they don't know for sure it won't result in a dll
> hell type of problem.
In my experience with binary-based distros ...
On Tue, Dec 8, 2009 at 9:02 PM, Lie Ryan wrote:
>
> I disagree, what you should have is an Operating System with a package
> management system that addresses those issues. The package management must
> update your software and your dependencies, and keep track of
> incompatibilities between you a...
Lie Ryan wrote:
Yes, from an argumentative perspective you are right. But given the
choice of being right and alienating the vast majority of my potential
user base, I'd rather be wrong.

For me, 'although practicality beats purity' is more important than
trying to beat a dead horse that is a p...
Ben Finney wrote:
This omits the heart of the problem: There is an extra delay between
release and propagation of the security fix. When the third-party code
is released with a security fix, and is available in the operating
system, the duplicate in your application will not gain the advantage of
that fix ...
"Martin P. Hellwig" writes:
> Ben Finney wrote:
> > Along with the duplication this introduces, it also means that any bug
> > fixes — even severe security fixes — in the third-party code will not be
> > addressed in your duplicate.
> I disagree, what you need is:
> - An automated build system ...
Ben Finney wrote:
"Martin P. Hellwig" writes:
Along with the duplication this introduces, it also means that any bug
fixes — even severe security fixes — in the third-party code will not be
addressed in your duplicate.
I disagree, what you need is:
- An automated build system for your del...
... will not be
addressed in your duplicate. This defeats one of the many benefits of a
package management operating system: that libraries, updated once, will
benefit any other package depending on them.
Please reconsider policies like including duplicates of third-party
code. Don't Repeat Yourself is ...
    unqs.append(row)

print "\nUniques:\n"
for row in unqs:
    print row

print "\nDuplicates:\n"
for row in dups:
    print row

print "\n"

Result:

Originals:
['a.a', 'sn-01']
['b.b', 'sn-02']
['c.c', ...
'ccc.444', 'T400', 'pn123', 'sn444'
'ddd',     'T500', 'pn123', 'sn555'
'eee.666', 'T600', 'pn123', 'sn444'
'fff.777', 'T700', 'pn123', 'sn777'

How can I extract duplicates, checking each ...
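The question is cut off, but a plausible reading is: find the rows that share a serial number (the last field). A sketch under that assumption:

from collections import defaultdict

rows = [
    ('ccc.444', 'T400', 'pn123', 'sn444'),
    ('ddd',     'T500', 'pn123', 'sn555'),
    ('eee.666', 'T600', 'pn123', 'sn444'),
    ('fff.777', 'T700', 'pn123', 'sn777'),
]

# Group rows by their last field, then report groups with more than one row.
by_sn = defaultdict(list)
for row in rows:
    by_sn[row[-1]].append(row)

for sn, group in sorted(by_sn.items()):
    if len(group) > 1:
        print(sn, '->', group)   # sn444 -> the ccc.444 and eee.666 rows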
Hello,
I have a strange problem with pexpect:
$ cat test.py

#!/usr/bin/python
import pexpect

child = pexpect.spawn("./test.pl")
while True:
    # Forward each line of stdin to the child process and echo
    # one line of the child's reply.
    try:
        line = raw_input()
    except EOFError:
        break
    child.sendline(line)
    print child.readline().rstrip("\r\n")
child.close()
And here you're scanning the entire list _for every item_; if there are
'n' items then it's being scanned 'n' times!
The number of times each item occurred is now stored in oid_count.
ows.Next()

writeMessage(' ')
writeMessage(str(strftime("%H:%M:%S", localtime())) +
             ' generating statistics...')
dup_count = len(tmp_list)
tmp_list = list(set(tmp_list))
tmp_list.sort()
for oid in tmp_list:
    a = str(oid) + ' '
    while len(a) < 2...
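The counting code above is fragmentary; on Python 2.7+ the whole tally can be built in one pass with collections.Counter (a sketch; tmp_list stands in for the poster's OID list):

from collections import Counter

tmp_list = [1, 2, 2, 3, 3, 3]      # stand-in for the poster's OID list

oid_count = Counter(tmp_list)      # one pass, no per-item rescanning
print(oid_count.most_common())     # [(3, 3), (2, 2), (1, 1)]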
On Thu, 26 Mar 2009 16:02:20 -0400
"D'Arcy J.M. Cain" wrote:
or
l = ( randint(0,9) for x in xrange(8) )
> On Thu, 26 Mar 2009 16:00:01 -0400
> Albert Hopkins wrote:
> > > l = list()
> > > for i in xrange(8):
> > > l.append(randint(0,10))
> > ^^^
> > should have been:
> > l.append(randint(0,9))
On Mar 27, 8:14 am, paul.scipi...@aps.com wrote:
> Hi D'Arcy J.M. Cain,
>
> Thank you. I tried this and my list of 76,979 integers got reduced to a
> dictionary of 76,963 items, each item listing the integer value from the
> list, a comma, and a 1.
I doubt this very much. Please show:
(a) your ...
> ... [it should] contain only 11 items listing 11 integer values
> and the number of times they appear in my original list.

Not all of the values are 1. The 11 duplicates will be higher. Just
iterate through the dict to find all keys with values > 1.

>>> icounts
{1: 2, 2: 1, 3: 1, 4: 1, 5: 1, 6: 1, 7: 5, 8: 3, 9: 1, 10: 1, 11: 1}

Python 2.x: ...
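A sketch of that "iterate through the dict" step, using the icounts mapping from the session above:

icounts = {1: 2, 2: 1, 3: 1, 4: 1, 5: 1, 6: 1, 7: 5, 8: 3, 9: 1, 10: 1, 11: 1}

dupes = {k: v for k, v in icounts.items() if v > 1}
print(dupes)   # {1: 2, 7: 5, 8: 3} - only the values that occur more than once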
"D'Arcy J.M. Cain" writes:
> icount = {}
> for i in list_of_ints:
> icount[i] = icount.get(i, 0) + 1
from collections import defaultdict
icount = defaultdict(int)
for i in list_of_ints:
icount[i] += 1
On Thu, 26 Mar 2009 16:00:01 -0400
Albert Hopkins wrote:
> > l = list()
> > for i in xrange(8):
> >     l.append(randint(0,10))
> ^^^
> should have been:
> l.append(randint(0,9))
Or even:
l = [randint(0,9) for x in xrange(8)]
--
D'Arcy J.M. Cain
On Thu, 2009-03-26 at 15:54 -0400, Albert Hopkins wrote:
[...]
> $ cat test.py
> from random import randint
>
> l = list()
> for i in xrange(8):
>     l.append(randint(0,10))
          ^^^
should have been:
    l.append(randint(0,9))
>
> hist = dict()
> for i in l:
>     ...
On Thu, 26 Mar 2009 12:22:27 -0700
paul.scipi...@aps.com wrote:
> I'm a newbie to Python. I have a list which contains integers (about
> 80,000). I want to find a quick way to get the numbers that occur in the
> list more than once, and how many times that number is duplicated in the
> list.
Hello,
I'm a newbie to Python. I have a list which contains integers (about 80,000).
I want to find a quick way to get the numbers that occur in the list more than
once, and how many times that number is duplicated in the list. I've done this
right now by looping through the list, getting a ...
... the duplicate checking code, although fast, is executed so many times.

For a sudoku solver, you may be better dodging the problem, and
maintaining a set per row, column and box saying which numbers have
been placed already - and thus avoiding adding duplicates in the first
place. It may be better to use a ...
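A sketch of that bookkeeping, with illustrative names (none of this is from the original post):

# One set per row, column and 3x3 box; a digit is placed only if it is
# absent from all three, so duplicates never get in to begin with.
rows = [set() for _ in range(9)]
cols = [set() for _ in range(9)]
boxes = [set() for _ in range(9)]

def box_index(r, c):
    return (r // 3) * 3 + c // 3

def can_place(r, c, digit):
    return (digit not in rows[r]
            and digit not in cols[c]
            and digit not in boxes[box_index(r, c)])

def place(r, c, digit):
    rows[r].add(digit)
    cols[c].add(digit)
    boxes[box_index(r, c)].add(digit)

def unplace(r, c, digit):   # for backtracking
    rows[r].discard(digit)
    cols[c].discard(digit)
    boxes[box_index(r, c)].discard(digit)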
I have found a lot of material on removing duplicates from a list, but I
am trying to find the most efficient way to just check for the existence
of duplicates in a list. Here is the best I have come up with so far:
CheckList = [x[ValIndex] for x in self.__XRList[z ...
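The snippet is cut off, but the usual idioms for a pure existence check are these two (a sketch, not the poster's CheckList code):

def has_duplicates(seq):
    # Materialise a set once and compare sizes: simple and usually fastest.
    return len(set(seq)) != len(seq)

def has_duplicates_lazy(iterable):
    # Early-exit variant: stops at the first repeat it meets.
    seen = set()
    for item in iterable:
        if item in seen:
            return True
        seen.add(item)
    return False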
thank you everybody for your help! That worked perfectly. :) I really
appreciate the time you spent answering what is probably a pretty basic
question for you. It's nice not to be ignored.
be well,
-matt
Simon Forman wrote:
>
> Do ','.join(clean) to make a single string with commas between the
> items in the set. (If the items aren't all strings, you'll need to
> convert them to strings first.)
>
And if the items themselves could contain commas, or quote characters,
you might like to look at the csv module.
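Presumably the cut-off suggestion is the csv module, which quotes such items for you; a minimal sketch:

import csv
import sys

clean = ['plain', 'has,comma', 'has"quote']
writer = csv.writer(sys.stdout)
writer.writerow(clean)   # plain,"has,comma","has""quote"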
[EMAIL PROTECTED] wrote:
> Hello,
>
> I have some lists for which I need to remove duplicates. I found the
> sets.Sets() module which does exactly this
I think you mean that you found the sets.Set() constructor in the sets
module.

If you are using Python 2.4, use the built-in set type.
The write accepts strings only, so you may do:
out.write( repr(list(clean)) )
Notes:
- If you need the strings in a nice order, you may sort them before
saving them:
out.write( repr(sorted(clean)) )
- If you need them in the original order, you need a stable method; you
can extract the relevant co...
Hello,
I have some lists for which I need to remove duplicates. I found the
sets.Sets() module which does exactly this, but how do I get the set
back out again?
# existing input: A,B,B,C,D
# desired result: A,B,C,D
import sets
dupes = ['A', 'B', 'B', 'C', 'D']
...
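For the "how do I get the set back out" part, the usual answer is just to convert back to a list (sorting if a stable, readable order matters); with the built-in set rather than the old sets module:

dupes = ['A', 'B', 'B', 'C', 'D']
unique = sorted(set(dupes))   # back to a list, duplicates gone
print(unique)                 # ['A', 'B', 'C', 'D']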
Thanks for all the information.
And now I understand the timeit module ;)
GC-Martijn
thanks, nice job. but this benchmark is pretty deceptive:
try this:
(definition of unique2 and unique3 as above)
>>> import timeit
>>> a = range(1000)
>>> t = timeit.Timer('unique2(a)', 'from __main__ import unique2,a')
>>> t2 = timeit.Timer('stable_unique(a)', 'from __main__ import stable_unique,a')
Ow thanks, I'm a newbie and I did this test. (I don't know if this is
the best way to do a small speed test.)

import timeit

def unique2(keys):
    unique = []
    for i in keys:
        if i not in unique: unique.append(i)
    return unique

def unique3(s):
    e = {}
    ret = []
    for x in s:
        if not e.has_key(x):
            e[x] = 1
            ret.append(x)
    return ret
i suppose this one is faster (but in most cases efficiency doesn't
matter)
>>> def stable_unique(s):
...     e = {}
...     ret = []
...     for x in s:
...         if not e.has_key(x):
...             e[x] = 1
...             ret.append(x)
...     return ret
cheers,
przemek
there wasn't any information about ordering...
maybe I'll find something better which doesn't destroy the original
ordering

regards
przemek
Look at the code below:

def unique(s):
    return list(set(s))

def unique2(keys):
    unique = []
    for i in keys:
        if i not in unique: unique.append(i)
    return unique

tmp = [0,1,2,4,2,2,3,4,1,3,2]
print tmp
print unique(tmp)
print unique2(tmp)

[0, 1, 2, 4, 2, 2, 3, 4, 1, 3, 2]
...
Rubinho wrote:
> I've a list with duplicate members and I need to make each entry
> unique.
>
hi,
another possibility (my newest discovery :) ):

>>> a = [1,2,2,4,2,1,3,4]
>>> unique = dict.fromkeys(a).keys()
>>> unique
[1, 2, 3, 4]

regards
przemek
przemek drochomirecki wrote:
> def unique(s):
>     e = {}
>     for x in s:
>         if not e.has_key(x):
>             e[x] = 1
>     return e.keys()

This is basically identical in functionality to the code:

def unique(s):
    return list(set(s))

And with the new-and-improved C implementation of sets comin...
This works too, if speed isn't your thing..
>>> a = [1, 2, 3, 2, 6, 1, 3, 4, 1, 7, 5, 6, 7]
>>> a = dict((i, None) for i in a).keys()
>>> a
[1, 2, 3, 4, 5, 6, 7]
Steven D'Aprano wrote:
>
>
> Don't imagine, measure.
>
> Resist the temptation to guess. Write some test functions and time the two
> different methods. But first test that the functions do what you expect:
> there is no point having a blindingly fast bug.
That's absolutely correct. Although ...
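In that spirit, a minimal measurement sketch (the sizes and data are made up, not from the thread):

import timeit

setup = ("from random import randint\n"
         "mylist = [randint(0, 9) for _ in range(1000)]")

# Method 2: convert to a set and back.
print(timeit.timeit("list(set(mylist))", setup=setup, number=1000))

# Method 1-style: linear membership scan per item.
print(timeit.timeit(
    "unique = []\n"
    "for x in mylist:\n"
    "    if x not in unique: unique.append(x)",
    setup=setup, number=1000))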
On Wed, 14 Sep 2005 13:28:58 +0100, Will McGugan wrote:
> Rubinho wrote:
>> I can't imagine one being much faster than the other except in the case
>> of a huge list and mine's going to typically have less than 1000
>> elements.
>
> I would imagine that 2 would be significantly faster.

Don't imagine, measure. ...
Rubinho wrote:
> I can't imagine one being much faster than the other except in the case
> of a huge list and mine's going to typically have less than 1000
> elements.
To add to what others said, I'd imagine that the technique that's going
to be fastest is going to depend not only on the length ...
I do this:
def unique(keys):
    unique = []
    for i in keys:
        if i not in unique: unique.append(i)
    return unique
I don't know what is faster at the moment.
... quadratic, O(n^2), in the length n of the list if all keys are unique.

Conversion to a set just might use a better sorting algorithm than this
(i.e. n*log(n)) and throwing out duplicates (which, after sorting, are
positioned next to each other) is O(n). If conversion to a set should
turn out to be slow ...
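A sketch of that sort-then-scan idea; after sorting, equal items are adjacent, so itertools.groupby can drop the repeats in one pass:

from itertools import groupby

def unique_sorted(seq):
    # O(n log n) for the sort, then O(n) to keep one item per group.
    return [key for key, _ in groupby(sorted(seq))]

print(unique_sorted([3, 1, 2, 3, 1]))   # [1, 2, 3]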
I've a list with duplicate members and I need to make each entry
unique.

I've come up with two ways of doing it and I'd like some input on what
would be considered more pythonic (or at least best practice).

Method 1 (the traditional approach)

for x in mylist:
    if mylist.count(x) > 1:
        mylist.remove(x)

Method 2 (not so traditional)

mylist = set(mylist)
mylist = list(mylist)

Converting to a set drops all the duplicates and converting back to a
list, well, gets it back to a list which is what I want.

I can't imagine one being much faster than the other except in the case
of a huge list and mine's going to typically have less than 1000
elements.
... item.Tuesday + item.Wednesday + item.Thursday + item.Friday +
item.Saturday + item.Sunday; the order is already this preset
configuration. I want 'collect' to be static so it can compare it
against another library's hours and group it if necessary. The libraries
that fail to be duplicates of other libraries will be generated as usual
under the grouped libraries. They will have a single heading.

An example can be seen here of what I am trying to achieve:
http://www.libraries.wvu.edu/hours/summer.pdf

These are the outputs I failed to ...
... is string addition, and the result is a string. The output
you provide is in fact a list with no duplicates, i.e. there are no
two strings the same.

If order is not important to you, a structure that will give you an
'unordered list with no duplicates' is a set (available in the std
library ...
I've been un-triumphantly trying to get a list of mine to have no
repeats in it. First, I'm pulling attributes from Zope and forming a
list. Next, I'm pulling those same values and comparing them against
the same list and if the values equal each other and are not already in
the list, they appe...
Steven Bethard <[EMAIL PROTECTED]> wrote:
...
> I have a list[1] of objects from which I need to remove duplicates. I
> have to maintain the list order though, so solutions like set(lst), etc.
> will not work for me. What are my options? So far, I can see:
I think the recipe ...
John Machin wrote:
So, just to remove ambiguity, WHICH one of the bunch should be
retained? Short answer: "the first seen" is what the proverbial "man in
the street" would expect
For my purposes, it doesn't matter which instance is retained and which
are removed, so yes, retaining the first one is fine.
> You have to exhaust the iterable before yielding anything.

Last solution? All of them have essentially the same logic to decide
which items to reject.

Further, what you say is true only if you are interpreting Steven's
ambiguous(?) requirement as: remove ALL instances of a bunch of
duplicat...
Francis Girard wrote:
I think your last solution is not good unless your "list" is sorted (in
which case the solution is trivial), since you certainly do have to see
all the elements in the list before deciding that a given element is not
a duplicate. You have to exhaust the iterable before yielding anything.
Steven Bethard wrote:
I'm sorry, I assume this has been discussed somewhere already, but I
found only a few hits in Google Groups... If you know where there's a
good summary, please feel free to direct me there.
I have a list[1] of objects from which I need to remove duplicates.
You could create a class based on a list which takes a list as
argument, like this:
class uniquelist(list):
    def __init__(self, l):
        for item in l:
            self.append(item)

    def append(self, item):
        if item not in self:
            list.append(self, item)

l = [1, ...
Carl Banks wrote:
from itertools import *

[x for (x, s) in izip(iterable, repeat(set()))
 if (x not in s, s.add(x))[0]]
Wow, that's evil! Pretty cool, but for the sake of readers of my code,
I think I'll have to opt against it. ;)
STeVe
... if you know where there's a
good summary, please feel free to direct me there.

I have a list[1] of objects from which I need to remove duplicates. I
have to maintain the list order though, so solutions like set(lst), etc.
will not work for me. What are my options? So far, I can see:

def filterdups(iterable):
    ...
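The code is cut off there; a minimal reconstruction of the kind of order-preserving generator under discussion (my sketch, not necessarily Steven's exact filterdups):

def filterdups(iterable):
    # Yield each item only the first time it appears, preserving order.
    seen = set()
    for item in iterable:
        if item not in seen:
            seen.add(item)
            yield item

print(list(filterdups([3, 1, 3, 2, 1])))   # [3, 1, 2]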
I'm sorry, I assume this has been discussed somewhere already, but I
found only a few hits in Google Groups... If you know where there's a
good summary, please feel free to direct me there.
I have a list[1] of objects from which I need to remove duplicates. I
have to maintain the list order though, so solutions like set(lst), etc.
will not work for me. What are my options?