Re: JUST GOT HACKED

2013-10-03 Thread Antoon Pardon
On 02-10-13 15:17, Steven D'Aprano wrote:

> [...]
>> And you don't treat all others in the way you hope to be treated if you
>> would be in their shoes. I suspect that should you one day feel so
>> frustrated you need to vent, you will hope to get treated differently
>> than how you treat those that need to vent now. You are very selective
>> about the people in whose shoes you can imagine yourself.
> 
> I am only an imperfect human being. I don't always live up to my ideals. 
> Sometimes I behave poorly. I have a tendency to react to newbies' poor 
> questions with sarcasm. Perhaps a little bit of sarcasm is okay, but 
> there is a fine line between making a point and being unnecessarily 
> nasty. If I cross that line, I hope that somebody will call me out on it.

Steve it is not about failing. It is about not even trying. You don't
follow the principle of treating others in the way you hope to be
treated if you were in their shoes. You just mention the principle if
you think it can support your behaviour.

Your paragraph above is a nice illustration. Suppose you develop a new
interest in which you are now the newbie, and you go to a newsgroup or
forum where, as a newbie, you ask a poor question. Are you hoping they will
answer with sarcasm? I doubt that very much. Yet here you are telling
us sarcasm is okay as long as you don't cross the line, instead of
showing us that you would like to follow this principle although you
find it hard in a number of circumstances.

-- 
Antoon Pardon
-- 
https://mail.python.org/mailman/listinfo/python-list


Nodebox(v1) on the web via RapydScript

2013-10-03 Thread Salvatore DI DIO
Hello,

Nodebox is a program in the spirit of Processing, but for Python.

The first version runs only on the Mac.
Tom, the creator, has partly ported it to Javascript.

But many of you dislike Javascript.
The solution was to use a Python -> Javascript translator.

Of the two great solutions, Brython and RapydScript, I've chosen RapydScript
(Brython and RapydScript do not pursue the same goal).

You can see a preview of 'Nodebox on the Web', namely 'RapydBox', here:

http://salvatore.pythonanywhere.com/RapydBox

Regards




-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Running code from source that includes extension modules

2013-10-03 Thread Oscar Benjamin
On 2 October 2013 23:28, Michael Schwarz  wrote:
>
> I will look into that too, that sounds very convenient. But am I right, that 
> to use Cython the non-Python code needs to be written in the Cython language, 
> which means I can't just copy&paste C code into it? For my current project, 
> this is exactly what I do, because the C code I use already existed.

It's better than that. Don't copy/paste your code. Just declare it in
Cython and you can call straight into the existing C functions cutting
out most of the boilerplate involved in making C code accessible to
Python:
http://docs.cython.org/src/userguide/external_C_code.html

You'll sometimes need a short Cython wrapper function to convert from
Python types to corresponding C types. But this is about 5 lines of
easy to read Cython code vs maybe 30 lines of hard to follow C code.

Having written CPython extension modules both by hand and using Cython,
I strongly recommend using Cython.


Oscar
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Lowest Value in List

2013-10-03 Thread Peter Otten
subhabangal...@gmail.com wrote:

> Dear Group,
> 
> I am trying to work out a solution to the following problem in Python.
> 
> The Problem:
> Suppose I have three lists.
> Each list is having 10 elements in ascending order.
> I have to construct one list having 10 elements which are of the lowest
> value among these 30 elements present in the three given lists.
> 
> The Solution:
> 
> I tried to address the issue in the following ways:
> 
> a) I took three lists, like,
> list1=[1,2,3,4,5,6,7,8,9,10]
> list2=[0,1,2,3,4,5,6,7,8,9]
> list3=[-5,-4,-3,-2,-1,0,1,2,3,4]
> 
> I tried to make sum and convert them as set to drop the repeating
> elements: set_sum=set(list1+list2+list3)
> set([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, -1, -5, -4, -3, -2])
> 
> In the next step I tried to convert it back to list as,
> list_set=list(set_sum)
> gave the value as,
> [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, -1, -5, -4, -3, -2]
> 
> Now, I imported heapq as,
> import heapq
> 
> and took the result as,
> result=heapq.nsmallest(10,list_set)
> it gave as,
> [-5, -4, -3, -2, -1, 0, 1, 2, 3, 4]
> 
> b) I am thinking to work out another approach.
> I am taking the lists again as,
> 
> list1=[1,2,3,4,5,6,7,8,9,10]
> list2=[0,1,2,3,4,5,6,7,8,9]
> list3=[-5,-4,-3,-2,-1,0,1,2,3,4]
> 
> as they are in ascending order, I am trying to take first four/five
> elements of each list,like,
> 
> list1_4=list1[:4]
> list2_4=list2[:4]
> list3_4=list3[:4]
> 
> Now, I am trying to add them as,
> 
> list11=list1_4+list2_4+list3_4
> 
> thus, giving us the result
> 
> [1, 2, 3, 4, 0, 1, 2, 3, -5, -4, -3, -2]
> 
> Now, we are trying to sort the list of the set of the sum as,
> 
> sort_sum=sorted(list(set(list11)))
> 
> giving us the required result as,
> 
> [-5, -4, -3, -2, 0, 1, 2, 3, 4]
> 
> If by taking the value of each list portion as 4 gives as less number of
> elements in final value, as we are making set to avoid repeating numbers,
> we increase element count by one or two and if final result becomes more
> than 10 we take first ten.
> 
> Are these approaches fine? Or should we think some other way?
> 
> If any learned member of the group can kindly let me know how to solve it,
> I would be grateful.

A bit late to the show, here's my take. You could separate your problem into 
three simpler ones:

(1) combine multiple sequences into one big sequence
(2) filter out duplicate items
(3) find the largest items

(1) is covered by the stdlib:

import itertools

items = itertools.chain.from_iterable([list1, list2, list3])

(2) is easy assuming the items are hashable:

def unique(items):
    seen = set()
    for item in items:
        if item not in seen:
            seen.add(item)
            yield item

items = unique(items)

(3) is also covered by the stdlib:

largest = heapq.nlargest(3, items)

This approach has one disadvantage: the `seen` set in unique() may grow 
indefinitely if the sequence passed to it is "long" and has an unlimited 
number of distinct duplicates.
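Putting the three stdlib steps together on the original poster's lists gives a short pipeline (a sketch for reference; `nsmallest` is used here since the original problem asks for the lowest values, whereas the rest of this post discusses nlargest):

```python
import heapq
import itertools

def unique(items):
    # Drop duplicates lazily while preserving order (assumes hashable items).
    seen = set()
    for item in items:
        if item not in seen:
            seen.add(item)
            yield item

list1 = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
list2 = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
list3 = [-5, -4, -3, -2, -1, 0, 1, 2, 3, 4]

# (1) chain the lists, (2) deduplicate, (3) take the 10 smallest.
items = itertools.chain.from_iterable([list1, list2, list3])
result = heapq.nsmallest(10, unique(items))
print(result)  # [-5, -4, -3, -2, -1, 0, 1, 2, 3, 4]
```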

So here's an alternative using a heap and a set both limited by the length 
of the result:

import heapq

def unique_nlargest(n, items):
    items = iter(items)
    heap = []
    seen = set()
    for item in items:
        if item not in seen:
            seen.add(item)
            heapq.heappush(heap, item)
            if len(heap) > n:
                max_discard = heapq.heappop(heap)
                seen.remove(max_discard)
                break
    for item in items:
        if item > max_discard and item not in seen:
            seen.add(item)
            max_discard = heapq.heappushpop(heap, item)
            seen.remove(max_discard)
    return heap


if __name__ == "__main__":
    print(unique_nlargest(3, [1,2,3,4,5,4,3,2,1,6,2,7]))

I did not test it, so there may be bugs, but the idea behind the code is 
simple: you can remove from the set all items that are below the minimum 
item in the heap. Thus both lengths can never grow beyond n (or n+1 in my 
actual implementation).

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Goodbye: was JUST GOT HACKED

2013-10-03 Thread Steven D'Aprano
On Thu, 03 Oct 2013 09:21:08 +0530, Ravi Sahni wrote:

> On Thu, Oct 3, 2013 at 2:43 AM, Walter Hurry 
> wrote:
>> Ding ding! Nikos is simply trolling. It's easy enough to killfile him
>> but inconvenient to skip all the answers to his lengthy threads. If
>> only people would just ignore him!
> 
> Hello Walter Hurry please wait!
> 
> Did I do/say something wrong?!

Don't worry about it Ravi, you haven't done anything wrong.

Walter is not a regular here. At best he is a lurker who neither asks 
Python questions nor answers them. In the last four months, I can see 
four posts from him: three are complaining about Nikos, and one is a two-
line "Me to!" response to a post about defensive programming. 



> If one of us should go it should be me -- Im just a newbie here.

No, you are welcome here. You've posted more in just a few days than 
Walter has in months. We need more people like you.



-- 
Steven
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: JUST GOT HACKED

2013-10-03 Thread Steven D'Aprano
On Thu, 03 Oct 2013 09:01:29 +0200, Antoon Pardon wrote:

> You don't
> follow the principle of treating others in the way you hope to be 
> treated if you were in their shoes.
[...] 
> Suppose you develop a new
> interest in which you are now the newbie and you go to a newsgroup or
> forum where as a nebie you ask a poor question. Are you hoping they will
> answer with sarcasm? I doubt that very much.

Then you would be wrong. You don't know me very well at all.

If I asked a dumb question -- not an ignorant question, but a dumb 
question -- then I hope somebody will rub my nose in it. Sarcasm strikes 
me as a good balance between being too namby-pamby to correct me for 
wasting everyone's time, and being abusive.

An ignorant question would be:

"I don't understand closures, can somebody help me?"

or even:

"I wrote this function:

def f(arg=[]):
arg.append(1); return arg

and it behaves strangely. Is that a bug in Python?"

This, on the other hand, is a dumb question:

"I wrote a function to print prime numbers, and it didn't work. What did 
I do wrong?"

In the last case, the question simply is foolish. Short of mind-reading, 
how is anyone supposed to know which of the infinite number of errors I 
made? In this case, I would *much* prefer a gentle, or even not-so-
gentle, reminder of my foolishness via a sarcastic retort about looking 
in crystal balls or reading minds, than either being ignored or being 
abused.

And quite frankly, although I might *prefer* a gentle request asking for 
more information, I might *need* something harsher for the lesson to 
really sink in. Negative reinforcement is a legitimate teaching tool, 
provided it doesn't cross the line into abuse.
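As an aside, the "strange behaviour" in the default-argument example above is the well-known mutable-default gotcha: the default list is evaluated once, at function definition time, and shared across calls (a minimal demonstration added for reference):

```python
def f(arg=[]):
    # The default list is created once, when "def" executes,
    # and the same object is reused on every call.
    arg.append(1)
    return arg

first = f()
second = f()
print(first, second)  # both names refer to the same list: [1, 1] [1, 1]
```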



-- 
Steven
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Goodbye: was JUST GOT HACKED

2013-10-03 Thread Ravi Sahni
On Thu, Oct 3, 2013 at 5:05 PM, Steven D'Aprano
 wrote:
> On Thu, 03 Oct 2013 09:21:08 +0530, Ravi Sahni wrote:
>
>> On Thu, Oct 3, 2013 at 2:43 AM, Walter Hurry 
>> wrote:
>>> Ding ding! Nikos is simply trolling. It's easy enough to killfile him
>>> but inconvenient to skip all the answers to his lengthy threads. If
>>> only people would just ignore him!
>>
>> Hello Walter Hurry please wait!
>>
>> Did I do/say something wrong?!
>
> Don't worry about it Ravi, you haven't done anything wrong.
>
> Walter is not a regular here. At best he is a lurker who neither asks
> Python questions nor answers them. In the last four months, I can see
> four posts from him: three are complaining about Nikos, and one is a two-
> line "Me to!" response to a post about defensive programming.
>
>
>
>> If one of us should go it should be me -- Im just a newbie here.
>
> No, you are welcome here. You've posted more in just a few days than
> Walter has in months. We need more people like you.

Thanks for the welcome!

But no thanks for the non-welcome -- I don't figure why Walter Hurry
(or anyone else) should be unwelcome just because I am welcome.

The world (and the Python list, hopefully!) is big enough for all of us.

-- 
Ravi
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Tail recursion to while iteration in 2 easy steps

2013-10-03 Thread random832
On Wed, Oct 2, 2013, at 17:33, Terry Reedy wrote:
> 5. Conversion of apparent recursion to iteration assumes that the 
> function really is intended to be recursive.  This assumption is the 
> basis for replacing the recursive call with assignment and an implied 
> internal goto. The programmer can determine that this semantic change is 
> correct; the compiler should not assume that. (Because of Python's late 
> name-binding semantics, recursive *intent* is better expressed in Python 
> with iterative syntax than function call syntax. )

Speaking of assumptions, I would almost say that we should make the
assumption that operators (other than the __i family, and
setitem/setattr/etc) are not intended to have visible side effects. This
would open a _huge_ field of potential optimizations - including that
this would no longer be a semantic change (since relying on one of the
operators being allowed to change the binding of fact would no longer be
guaranteed).
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Tail recursion to while iteration in 2 easy steps

2013-10-03 Thread random832
On Wed, Oct 2, 2013, at 21:46, MRAB wrote:
> > The difference is that a tuple can be reused, so it makes sense for the
> > compiler to produce it as a const.  (Much like the interning of small
> > integers)  The list, however, would always have to be copied from the
> > compile-time object.  So that object itself would be a phantom, used
> > only as the template with which the list is to be made.
> >
> The key point here is that the tuple is immutable, including its items.

Hey, while we're on the subject, can we talk about frozen(set|dict)
literals again? I really don't understand why this discussion fizzles
out whenever it's brought up on python-ideas.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Tail recursion to while iteration in 2 easy steps

2013-10-03 Thread random832
On Wed, Oct 2, 2013, at 22:34, Steven D'Aprano wrote:
> You are both assuming that LOAD_CONST will re-use the same tuple 
> (1, 2, 3) in multiple places. But that's not the case, as a simple test 
> will show you:

>>> def f():
...   return (1, 2, 3)
>>> f() is f()
True

It does, in fact, re-use it when it is _the same LOAD_CONST
instruction_.
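This is easy to confirm by inspecting the function's constants table: the tuple literal is stored once in `co_consts`, so the single LOAD_CONST site always yields the same object (a quick check added for reference):

```python
def f():
    return (1, 2, 3)

# The tuple literal lives once in the code object's constants table...
print(f.__code__.co_consts)
# ...so both calls return the very same object from that one LOAD_CONST.
print(f() is f())  # True
```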
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Tail recursion to while iteration in 2 easy steps

2013-10-03 Thread Duncan Booth
Alain Ketterlin  wrote:

> Terry Reedy  writes:
> 
>> Part of the reason that Python does not do tail call optimization is
>> that turning tail recursion into while iteration is almost trivial,
>> once you know the secret of the two easy steps. Here it is.
>>
>> Assume that you have already done the work of turning a body recursive
>> ('not tail recursive') form like
>>
>> def fact(n): return 1 if n <= 1 else n * fact(n-1)
>>
>> into a tail recursion like
> [...]
> 
> How do know that either "<=" or "*" didn't rebind the name "fact" to
> something else? I think that's the main reason why python cannot apply
> any procedural optimization (even things like inlining are impossible,
> or possible only under very conservative assumption, that make it
> worthless).
> 

That isn't actually sufficient reason.

It isn't hard to imagine adding a TAIL_CALL opcode to the interpreter that 
checks whether the function to be called is the same as the current 
function and if it is just updates the arguments and jumps to the start of 
the code block. If the function doesn't match it would simply fall through 
to doing the same as the current CALL_FUNCTION opcode.

There is an issue that you would lose stack frames in any traceback. Also 
it means code for this modified Python wouldn't run on other non-modified 
interpreters, but it is at least theoretically possible without breaking 
Python's assumptions.

-- 
Duncan Booth http://kupuguy.blogspot.com
-- 
https://mail.python.org/mailman/listinfo/python-list


Multiple scripts versus single multi-threaded script

2013-10-03 Thread JL
What is the difference between running multiple python scripts and a single 
multi-threaded script? May I know what are the pros and cons of each approach? 
Right now, my preference is to run multiple separate python scripts because it 
is simpler.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Tail recursion to while iteration in 2 easy steps

2013-10-03 Thread Neil Cerutti
On 2013-10-03, Duncan Booth  wrote:
>> How do know that either "<=" or "*" didn't rebind the name
>> "fact" to something else? I think that's the main reason why
>> python cannot apply any procedural optimization (even things
>> like inlining are impossible, or possible only under very
>> conservative assumption, that make it worthless).
>
> That isn't actually sufficient reason.
>
> It isn't hard to imagine adding a TAIL_CALL opcode to the
> interpreter that checks whether the function to be called is
> the same as the current function and if it is just updates the
> arguments and jumps to the start of the code block.

Tail call optimization doesn't involve verification that the
function is calling itself; you just have to verify that the
call is in tail position.

The current frame would be removed from the stack frame and
replaced with the one that results from calling the function.

> There is an issue that you would lose stack frames in any
> traceback.

I don't think that's a major issue. Frames that got replaced
would be quite uninteresting. 

> Also it means code for this modified Python wouldn't run on
> other non-modified interpreters, but it is at least
> theoretically possible without breaking Python's assumptions.

In any case it's so easy to implement yourself I'm not sure
there's any point.
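For illustration, a self-made version could look like the following exception-based trampoline. This is a sketch, not code from the thread; the decorator name `tail_recursive` and the `recurse` helper are invented, and the tail call must be rewritten explicitly as a call to `recurse`:

```python
import functools

class _TailCall(Exception):
    """Internal signal carrying the arguments of the next tail call."""
    def __init__(self, args, kwargs):
        self.call_args = args
        self.call_kwargs = kwargs

def tail_recursive(func):
    """Run func's self tail calls as a loop instead of growing the stack."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        while True:
            try:
                return func(*args, **kwargs)
            except _TailCall as tc:
                args, kwargs = tc.call_args, tc.call_kwargs

    def recurse(*args, **kwargs):
        # Unwind the current frame and restart wrapper's loop.
        raise _TailCall(args, kwargs)

    wrapper.recurse = recurse
    return wrapper

@tail_recursive
def fact(n, acc=1):
    if n <= 1:
        return acc
    fact.recurse(n - 1, acc * n)  # the tail call, expressed as a raise

print(fact(5))  # 120
```

Because each "recursive call" raises out of the single active frame, `fact(5000)` runs fine despite the default recursion limit of 1000.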

-- 
Neil Cerutti
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: [Python-Dev] summing integer and class

2013-10-03 Thread Chris Kaynor
This list is for development OF Python, not for development IN Python. For
that reason, I will redirect this to python-list as well. My actual answer
is below.

On Thu, Oct 3, 2013 at 6:45 AM, Igor Vasilyev 
 wrote:

> Hi.
>
> Example test.py:
>
> class A():
>     def __add__(self, var):
>         print("I'm in A class")
>         return 5
> a = A()
> a+1
> 1+a


> Execution:
> python test.py
> I'm in A class
> Traceback (most recent call last):
>   File "../../test.py", line 7, in 
> 1+a
> TypeError: unsupported operand type(s) for +: 'int' and 'instance'
>
>
> So adding integer to class works fine, but adding class to integer fails.
> I could not understand why it happens. In objects/abstact.c we have the
> following function:
>

Based on the code you provided, you are only overloading the __add__
operator, which is only called when an "A" is added to something else, not
when something is added to an "A". You can also override the __radd__
method to perform the swapped addition. See
http://docs.python.org/2/reference/datamodel.html#object.__radd__ for the
documentation (it is just below the entry on __add__).

Note that for many simple cases, you could define just a single function,
which then is defined as both the __add__ and __radd__ operator. For
example, you could modify your "A" sample class to look like:

class A():
    def __add__(self, var):
        print("I'm in A")
        return 5
    __radd__ = __add__


Which will produce:
>>> a = A()
>>> a + 1
I'm in A
5
>>> 1 + a
I'm in A
5

Chris
-- 
https://mail.python.org/mailman/listinfo/python-list


feature requests

2013-10-03 Thread macker
Hi, hope this is the right group for this:

I miss two basic (IMO) features in parallel processing:

1. make `threading.Thread.start()` return `self`

I'd like to be able to `workers = [Thread(params).start() for params in 
whatever]`. Right now, it's 5 ugly, menial lines:

workers = []
for params in whatever:
    thread = threading.Thread(params)
    thread.start()
    workers.append(thread)

2. make multiprocessing pools (incl. ThreadPool) limit the size of their 
internal queues

As it is now, the queue will greedily consume its entire input, and if the 
input is large and the pool workers are slow in consuming it, this blows up 
RAM. I'd like to be able to `pool = Pool(4, max_qsize=1000)`. Same with the 
output queue (finished tasks).

Or does anyone know of a way to achieve this?
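One way to approximate the second request with the current API is to gate submissions with a semaphore that is released from the task's completion callback. This is a sketch under assumptions: `bounded_map` and `max_pending` are invented names, not a multiprocessing API, and a ThreadPool is used so the demo needs no picklable functions:

```python
import multiprocessing.pool
import threading

def bounded_map(func, iterable, processes=4, max_pending=1000):
    # Allow at most max_pending submitted-but-unfinished tasks; the
    # semaphore blocks the feeding loop whenever the limit is reached,
    # so the pool's internal queue cannot grow without bound.
    sem = threading.BoundedSemaphore(max_pending)

    def release(_):
        sem.release()

    results = []
    with multiprocessing.pool.ThreadPool(processes) as pool:
        for item in iterable:
            sem.acquire()  # blocks while max_pending tasks are in flight
            results.append(pool.apply_async(
                func, (item,), callback=release, error_callback=release))
        return [r.get() for r in results]

print(bounded_map(lambda x: x * x, range(10), processes=2, max_pending=4))
```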
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: feature requests

2013-10-03 Thread Chris Angelico
On Fri, Oct 4, 2013 at 2:12 AM, macker  wrote:
> I'd like to be able to `workers = [Thread(params).start() for params in 
> whatever]`. Right now, it's 5 ugly, menial lines:
>
> workers = []
> for params in whatever:
>     thread = threading.Thread(params)
>     thread.start()
>     workers.append(thread)

You could shorten this by iterating twice, if that helps:

workers = [Thread(params).start() for params in whatever]
for thrd in workers: thrd.start()

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Multiple scripts versus single multi-threaded script

2013-10-03 Thread Roy Smith
In article ,
 JL  wrote:

> What is the difference between running multiple python scripts and a single 
> multi-threaded script? May I know what are the pros and cons of each 
> approach? Right now, my preference is to run multiple separate python scripts 
> because it is simpler.

First, let's take a step back and think about multi-threading vs. 
multi-processing in general (i.e. in any language).

Threads are lighter-weight.  That means it's faster to start a new 
thread (compared to starting a new process), and a thread consumes fewer 
system resources than a process.  If you have lots of short-lived tasks 
to run, this can be significant.  If each task will run for a long time 
and do a lot of computation, the cost of startup becomes less of an 
issue because it's amortized over the longer run time.

Threads can communicate with each other in ways that processes can't.  
For example, file descriptors are shared by all the threads in a 
process, so one thread can open a file (or accept a network connection), 
then hand the descriptor off to another thread for processing.  Threads 
also make it easy to share large amounts of data because they all have 
access to the same memory.  You can do this between processes with 
shared memory segments, but it's more work to set up.
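The sharing described above is why one thread can hand work to another through a simple in-process queue, with no serialization or shared-memory setup (a minimal sketch added for illustration, not from the post):

```python
import threading
import queue

q = queue.Queue()
results = []

def worker():
    # Consumes items the main thread puts on the shared, in-process queue;
    # both threads see the same q and results objects directly.
    while True:
        item = q.get()
        if item is None:  # sentinel: no more work
            break
        results.append(item * 2)

t = threading.Thread(target=worker)
t.start()
for i in range(5):
    q.put(i)
q.put(None)
t.join()
print(results)  # [0, 2, 4, 6, 8]
```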

The downside to threads is that all of this sharing makes them much 
more complicated to use properly.  You have to be aware of how all the 
threads are interacting, and mediate access to shared resources.  If you 
do that wrong, you get memory corruption, deadlocks, and all sorts of 
(extremely) difficult to debug problems.  A lot of the really hairy 
problems (i.e. things like one thread continuing to use memory which 
another thread has freed) are solved by using a high-level language like 
Python which handles all the memory allocation for you, but you can 
still get deadlocks and data corruption.

So, the full answer to your question is very complicated.  However, if 
you're looking for a short answer, I'd say just keep doing what you're 
doing using multiple processes and don't get into threading.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: feature requests

2013-10-03 Thread Tim Chase
On 2013-10-04 02:21, Chris Angelico wrote:
> > workers = []
> > for params in whatever:
> > thread = threading.Thread(params)
> > thread.start()
> > workers.append(thread)  
> 
> You could shorten this by iterating twice, if that helps:
> 
> workers = [Thread(params).start() for params in whatever]
> for thrd in workers: thrd.start()

Do you mean

  workers = [Thread(params) for params in whatever]
  for thrd in workers: thrd.start()

?  ("Thread(params)" vs. "Thread(params).start()" in your list comp)

-tkc



-- 
https://mail.python.org/mailman/listinfo/python-list


Re: feature requests

2013-10-03 Thread Chris Angelico
On Fri, Oct 4, 2013 at 2:42 AM, Tim Chase  wrote:
> Do you mean
>
>   workers = [Thread(params) for params in whatever]
>   for thrd in workers: thrd.start()
>
> ?  ("Thread(params)" vs. "Thread(params).start()" in your list comp)

Whoops, copy/paste fail. Yes, that's what I meant.

Thanks for catching!

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Multiple scripts versus single multi-threaded script

2013-10-03 Thread Chris Angelico
On Fri, Oct 4, 2013 at 2:41 AM, Roy Smith  wrote:
> The downside to threads is that all of this sharing makes them much
> more complicated to use properly.  You have to be aware of how all the
> threads are interacting, and mediate access to shared resources.  If you
> do that wrong, you get memory corruption, deadlocks, and all sorts of
> (extremely) difficult to debug problems.  A lot of the really hairy
> problems (i.e. things like one thread continuing to use memory which
> another thread has freed) are solved by using a high-level language like
> Python which handles all the memory allocation for you, but you can
> still get deadlocks and data corruption.

With CPython, you don't have any headaches like that; you have one
very simple protection, a Global Interpreter Lock (GIL), which
guarantees that no two threads will execute Python code
simultaneously. No corruption, no deadlocks, no hairy problems.

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Multiple scripts versus single multi-threaded script

2013-10-03 Thread Chris Angelico
On Fri, Oct 4, 2013 at 2:01 AM, JL  wrote:
> What is the difference between running multiple python scripts and a single 
> multi-threaded script? May I know what are the pros and cons of each 
> approach? Right now, my preference is to run multiple separate python scripts 
> because it is simpler.

(Caveat: The below is based on CPython. If you're using IronPython,
Jython, or some other implementation, some details may be a little
different.)

Multiple threads can share state easily by simply referencing each
other's variables, but the cost of that is that they'll never actually
execute simultaneously. If you want your scripts to run in parallel on
multiple CPUs/cores, you need multiple processes. But if you're doing
something I/O bound (like servicing sockets), threads work just fine.

As to using separate scripts versus the multiprocessing module, that's
purely a matter of what looks cleanest. Do whatever suits your code.
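A minimal illustration of the process-based route for CPU-bound work (a sketch; the pool size and workload are arbitrary):

```python
from multiprocessing import Pool

def cpu_task(n):
    # Purely CPU-bound; with separate processes this can use several cores,
    # which threads under CPython's GIL cannot.
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    with Pool(2) as pool:
        print(pool.map(cpu_task, [100000] * 4))
```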

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Rounding off Values of dicts (in a list) to 2 decimal points

2013-10-03 Thread tripsvt
On Wednesday, October 2, 2013 10:01:16 AM UTC-7, tri...@gmail.com wrote:
> am trying to round off values in a dict to 2 decimal points but have been 
> unsuccessful so far. The input I have is like this:
> 
> y = [{'a': 80.0, 'b': 0.0786235, 'c': 10.0, 'd': 10.6742903}, {'a': 
> 80.73246, 'b': 0.0, 'c': 10.780323, 'd': 10.0}, {'a': 80.7239, 'b': 
> 0.7823640, 'c': 10.0, 'd': 10.0}, {'a': 80.7802313217234, 'b': 0.0, 'c': 
> 10.0, 'd': 10.9762304}]
> 
> I want to round off all the values to two decimal points using the ceil 
> function. Here's what I have:
> 
> def roundingVals_toTwoDeci():
>     global y
>     for d in y:
>         for k, v in d.items():
>             v = ceil(v*100)/100.0
>     return
> 
> roundingVals_toTwoDeci()
> 
> But it is not working - I am still getting the old values.


I am not sure what's going on but here's the current scenario: I get the values 
with 2 decimal places as I originally required. When I do json.dumps(), it 
works fine. The goal is to send them to a URL and so I do a urlencode. When I 
decode the urlencoded string, it gives me the same good old 2 decimal places. 
But, for some reason, at the URL, when I check, it no longer limits the values 
to 2 decimal places, but shows values like 9.10003677694312. What's going on? 
Here's the code that I have:

class LessPrecise(float):
    def __repr__(self):
        return str(self)

def roundingVals_toTwoDeci(y):
    for d in y:
        for k, v in d.iteritems():
            d[k] = LessPrecise(round(v, 2))
        return

roundingVals_toTwoDeci(y)
j = json.dumps(y)
print j

//At this point, print j gives me 

[{"a": 80.0, "b": 0.0, "c": 10.0, "d": 10.0}, {"a": 100.0, "b": 0.0, "c": 0.0, 
"d": 0.0}, {"a":  
80.0, "b": 0.0, "c": 10.0, "d": 10.0}, {"a": 90.0, "b": 0.0, "c": 0.0, "d": 
10.0}]

//then I do, 
params = urllib.urlencode({'thekey': j}) 

//I then decode params and print it and it gives me

thekey=[{"a": 80.0, "b": 0.0, "c": 10.0, "d": 10.0}, {"a": 100.0, "b": 0.0, 
"c": 0.0, "d": 
0.0}, {"a": 80.0, "b": 0.0, "c": 10.0, "d": 10.0}, {"a": 90.0, "b": 0.0, "c": 
0.0, "d": 10.0}]

However, at the URL, the values show up as 90.43278694123
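If the goal is a representation that stays at two decimals all the way to the far end, one hedged alternative (not from the thread) is to format the values as fixed-point strings before JSON-encoding, so nothing downstream can re-expose full float precision. The sample data here is invented:

```python
import json

y = [{'a': 80.0, 'b': 0.0786235}]

# Encode each value as a fixed two-decimal string; strings survive any
# re-parsing that would otherwise restore the float's full precision.
j = json.dumps([{k: format(v, '.2f') for k, v in d.items()} for d in y])
print(j)  # [{"a": "80.00", "b": "0.08"}]
```

The trade-off is that the receiver gets strings, not numbers, and must convert back if it needs arithmetic.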

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Tail recursion to while iteration in 2 easy steps

2013-10-03 Thread Ravi Sahni
On Wed, Oct 2, 2013 at 10:46 AM, rusi wrote:
> 4. There is a whole spectrum of such optimizaitons --
> 4a eg a single-call structural recursion example, does not need to push 
> return address on the stack. It only needs to store the recursion depth:
>
> If zero jump to outside return add; if > 0 jump to internal return address
>
> 4b An example like quicksort in which one call is a tail call can be 
> optimized with your optimization and the other, inner one with 4a above

I am interested in studying more this 'whole spectrum of optimizations'
Any further pointers?

Thanks

-- 
Ravi
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Rounding off Values of dicts (in a list) to 2 decimal points

2013-10-03 Thread Peter Otten
trip...@gmail.com wrote:

> On Wednesday, October 2, 2013 10:01:16 AM UTC-7, tri...@gmail.com wrote:
>> am trying to round off values in a dict to 2 decimal points but have been
>> unsuccessful so far. The input I have is like this:
>> 
>> y = [{'a': 80.0, 'b': 0.0786235, 'c': 10.0, 'd': 10.6742903}, {'a':
>> 80.73246, 'b': 0.0, 'c': 10.780323, 'd': 10.0}, {'a': 80.7239, 'b':
>> 0.7823640, 'c': 10.0, 'd': 10.0}, {'a': 80.7802313217234, 'b': 0.0,
>> 'c': 10.0, 'd': 10.9762304}]
>> 
>> I want to round off all the values to two decimal points using the ceil
>> function. Here's what I have:
>> 
>> def roundingVals_toTwoDeci():
>>     global y
>>     for d in y:
>>         for k, v in d.items():
>>             v = ceil(v*100)/100.0
>>     return
>> 
>> roundingVals_toTwoDeci()
>> 
>> But it is not working - I am still getting the old values.
> 
> I am not sure what's going on but here's the current scenario: I get the
> values with 2 decimal places as I originally required. When I do
> json.dumps(), it works fine. The goal is to send them to a URL and so I do
> a urlencode. When I decode the urlencoded string, it gives me the same
> good old 2 decimal places. But, for some reason, at the URL, when I check,
> it no longer limits the values to 2 decimal places, but shows values like
> 9.10003677694312. What's going on? Here's the code that I have:
> 
> class LessPrecise(float):
>     def __repr__(self):
>         return str(self)
> 
> def roundingVals_toTwoDeci(y):
>     for d in y:
>         for k, v in d.iteritems():
>             d[k] = LessPrecise(round(v, 2))
>         return

That should only process the first dict in the list, due to a misplaced 
return.
 
> roundingVals_toTwoDeci(y)
> j = json.dumps(y)
> print j
> 
> //At this point, print j gives me
> 
> [{"a": 80.0, "b": 0.0, "c": 10.0, "d": 10.0}, {"a": 100.0, "b": 0.0, "c":
> 0.0, "d": 0.0}, {"a":
> 80.0, "b": 0.0, "c": 10.0, "d": 10.0}, {"a": 90.0, "b": 0.0, "c": 0.0,
> "d": 10.0}]
> 
> //then I do,
> params = urllib.urlencode({'thekey': j})
> 
> //I then decode params and print it and it gives me
> 
> thekey=[{"a": 80.0, "b": 0.0, "c": 10.0, "d": 10.0}, {"a": 100.0, "b":
> 0.0, "c": 0.0, "d": 0.0}, {"a": 80.0, "b": 0.0, "c": 10.0, "d": 10.0},
> {"a": 90.0, "b": 0.0, "c": 0.0, "d": 10.0}]
> 
> However, at the URL, the values show up as 90.43278694123

Can you give the actual code, including the decoding part? Preferably you'd 
put both encoding and decoding into one small self-contained demo script.

-- 
https://mail.python.org/mailman/listinfo/python-list


ipy %run noob confusion

2013-10-03 Thread jshrager
I have some rather complex code that works perfectly well if I paste it in by 
hand to ipython, but if I use %run it can't find some of the libraries, though 
others it can. The confusion seems to have to do with matplotlib. I get it in 
stream by:

   %pylab osx

and do a bunch of stuff interactively that works just fine, for example:

  clf()

But I want it to run on a %run, but %pylab is (apparently) not allowed from a 
%run script, and importing matplotlib explicitly doesn't work...I mean, it 
imports, but then clf() is only defined in the module, not interactively. 

More confusing, if I do all the setup interactively, and the try to just run my 
script, again, clf() [etc] don't work (don't appear to exist), even though I 
can do them interactively. 

There seems to be some sort of scoping problem ... or, put more correctly, my 
problem is that I don't seem to understand the scoping, like, are %run eval'ed 
in some closed context that doesn't work the same way as ipython interactive? 
Is there any way to really do what I mean, which is: Please just read in 
commands from that script (short of getting out and passing my script through 
stdin to ipython?)

Thanks!
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: feature requests

2013-10-03 Thread Ethan Furman

On 10/03/2013 09:12 AM, macker wrote:

Hi, hope this is the right group for this:

I miss two basic (IMO) features in parallel processing:

1. make `threading.Thread.start()` return `self`

I'd like to be able to `workers = [Thread(params).start() for params in 
whatever]`. Right now, it's 5 ugly, menial lines:

 workers = []
 for params in whatever:
 thread = threading.Thread(params)
 thread.start()
 workers.append(thread)


Ugly, menial lines are a clue that a function to hide it could be useful.
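For instance (`spawn` is a hypothetical helper name, not a stdlib function):

```python
import threading

def spawn(*args, **kwargs):
    """Create, start and return a Thread in one step (hypothetical helper)."""
    thread = threading.Thread(*args, **kwargs)
    thread.start()
    return thread

# The five menial lines collapse into one comprehension:
workers = [spawn(target=print, args=(n,)) for n in range(3)]
for t in workers:
    t.join()
```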



2. make multiprocessing pools (incl. ThreadPool) limit the size of their 
internal queues

As it is now, the queue will greedily consume its entire input, and if the 
input is large and the pool workers are slow in consuming it, this blows up 
RAM. I'd like to be able to `pool = Pool(4, max_qsize=1000)`. Same with the 
output queue (finished tasks).


Have you verified that this is a problem in Python?



Or does anyone know of a way to achieve this?


You could try subclassing.
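One way to get the effect without touching the Pool internals is to throttle submission with a semaphore. `bounded_submit` is a hypothetical helper, not a multiprocessing API, and this sketch ignores the case where `func` raises (that would leak the semaphore; real code would also pass `error_callback` on Python 3):

```python
import threading
from multiprocessing.pool import ThreadPool

def bounded_submit(pool, func, iterable, max_pending=1000):
    # Throttle the producer: at most max_pending tasks queued at once.
    sem = threading.BoundedSemaphore(max_pending)

    def release(_result):
        sem.release()

    results = []
    for item in iterable:
        sem.acquire()            # blocks while max_pending tasks are in flight
        results.append(pool.apply_async(func, (item,), callback=release))
    return [r.get() for r in results]

pool = ThreadPool(4)
squares = bounded_submit(pool, lambda x: x * x, range(10), max_pending=3)
pool.close()
pool.join()
print(squares)
```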

--
~Ethan~
--
https://mail.python.org/mailman/listinfo/python-list


compare two list of dictionaries

2013-10-03 Thread Mohan L
Dear All,

I have two list of dictionaries like below:

In the dictionaries below, the value of 'ip' can be either a hostname or an
IP address.

output1=[
{'count': 3 , 'ip': 'xxx.xx.xxx.1'},
{'count': 4, 'ip': 'xxx.xx.xxx.2'},
{'count': 8, 'ip': 'xxx.xx.xxx.3'},
{'count': 10, 'ip': 'xxx.xx.xxx.4'},
{'count': 212, 'ip': 'hostname1'},
{'count': 27, 'ip': 'hostname2'},
{'count': 513, 'ip': 'hostname3'},
{'count': 98, 'ip': 'hostname4'},
{'count': 1, 'ip': 'hostname10'},
{'count': 2, 'ip': 'hostname8'},
{'count': 3, 'ip': 'xxx.xx.xxx.11'},
{'count': 90, 'ip': 'xxx.xx.xxx.12'},
{'count': 12, 'ip': 'xxx.xx.xxx.13'},
{'count': 21, 'ip': 'xxx.xx.xxx.14'},
{'count': 54, 'ip': 'xxx.xx.xxx.15'},
{'count': 34, 'ip': 'xxx.xx.xxx.16'},
{'count': 11, 'ip': 'xxx.xx.xxx.17'},
{'count': 2, 'ip': 'xxx.xx.xxx.18'},
{'count': 19, 'ip': 'xxx.xx.xxx.19'},
{'count': 21, 'ip': 'xxx.xx.xxx.20'},
{'count': 25, 'ip': 'xxx.xx.xxx.21'},
{'count': 31, 'ip': 'xxx.xx.xxx.22'},
{'count': 43, 'ip': 'xxx.xx.xxx.23'},
{'count': 46, 'ip': 'xxx.xx.xxx.24'},
{'count': 80, 'ip': 'xxx.xx.xxx.25'},
{'count': 91, 'ip': 'xxx.xx.xxx.26'},
{'count': 90, 'ip': 'xxx.xx.xxx.27'},
{'count': 10, 'ip': 'xxx.xx.xxx.28'},
{'count': 3, 'ip': 'xxx.xx.xxx.29'}]


The dictionaries below have either a hostname or an IP address, or both.

output2=(

{'hostname': 'INNCHN01', 'ip_addr': 'xxx.xx.xxx.11'},
{'hostname': 'HYDRHC02', 'ip_addr': 'xxx.xx.xxx.12'},
{'hostname': 'INNCHN03', 'ip_addr': 'xxx.xx.xxx.13'},
{'hostname': 'MUMRHC01', 'ip_addr': 'xxx.xx.xxx.14'},
{'hostname': 'n/a', 'ip_addr': 'xxx.xx.xxx.15'},
{'hostname': 'INNCHN05', 'ip_addr': 'xxx.xx.xxx.16'},
{'hostname': 'hostname1', 'ip_addr': 'n/a'},
{'hostname': 'hostname2', 'ip_addr': 'n/a'},
{'hostname': 'hostname10', 'ip_addr': ''},
{'hostname': 'hostname8', 'ip_addr': ''},
{'hostname': 'hostname200', 'ip_addr': 'xxx.xx.xxx.200'},
{'hostname': 'hostname300', 'ip_addr': 'xxx.xx.xxx.400'},

)

I am trying to get the following differences from the above dictionaries:

1). Compare the value of 'ip' in the output1 dictionaries with both 'hostname'
and 'ip_addr' in the output2 dictionaries and print their intersection. Tried
the code below:


for doc in output1:
for row in output2:
if((row["hostname"] == doc["ip"]) or (row["ip_addr"] ==
doc["ip"])):
print doc["ip"],doc["count"]

*output:*
hostname1 212
hostname2 27
hostname10 1
hostname8 2
xxx.xx.xxx.11 3
xxx.xx.xxx.12 90
xxx.xx.xxx.13 12
xxx.xx.xxx.14 21
xxx.xx.xxx.15 54
xxx.xx.xxx.16 34

2). Print the output below when the value of 'ip' in the output1 dictionaries
is not present in the output2 dictionaries (ip/hostname which is there
in output1 and not there in output2):

 xxx.xx.xxx.1 3
 xxx.xx.xxx.2 4
 xxx.xx.xxx.3  8
 xxx.xx.xxx.4  10
 hostname3  513
 hostname4  98
 xxx.xx.xxx.17  11
 xxx.xx.xxx.18  2
 xxx.xx.xxx.19  19
 xxx.xx.xxx.20  21
 xxx.xx.xxx.21  25
 xxx.xx.xxx.22  31
 xxx.xx.xxx.23  43
 xxx.xx.xxx.24  46
 xxx.xx.xxx.25  80
 xxx.xx.xxx.26  91
 xxx.xx.xxx.27  90
 xxx.xx.xxx.28  10
 xxx.xx.xxx.29  3

3). IP addresses that are present only in the output2 dictionaries.

xxx.xx.xxx.200
xxx.xx.xxx.400

Any help would be really appreciated. Thank you

Thanks
Mohan L
-- 
https://mail.python.org/mailman/listinfo/python-list


Why didn't my threads exit correctly ?

2013-10-03 Thread 李洛
Hi list,
I wrote an example script using threading as follows.
It seems to hang when the list l_ip is empty. Any suggestions for
debugging threaded code in Python?

#!/usr/bin/env python
# -*- coding: utf-8 -*-
import re
import os
import threading
from Queue import Queue
from time import sleep

l_ip = []
l_result = []
result = re.compile(r"[1-3] received")

class ping(threading.Thread):
    """ """
    def __init__(self, l_ip, l_result):
        threading.Thread.__init__(self)
        self.l_ip = l_ip
        #self.l_result = l_result

    def run(self):
        """ """
        while True:
            try:
                ip = self.l_ip.pop()
            except IndexError as e:
                print e
                break
            ping_out = os.popen(''.join(['ping -q -c3 ', ip]), 'r')
            print 'Ping ip:%s' % ip
            while True:
                line = ping_out.readline()
                if not line: break
                if result.findall(line):
                    l_result.append(ip)
                    break

queue = Queue()

for i in range(1, 110):
    l_ip.append(''.join(['192.168.1.', str(i)]))
for i in xrange(10):
    t = ping(l_ip, l_result)
    t.start()
    queue.put(t)
queue.join()
print "Result will go here."
for i in l_result:
    print 'IP %s is OK' % i

-- 
All the best!

http://luolee.me
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Rounding off Values of dicts (in a list) to 2 decimal points

2013-10-03 Thread Neil Cerutti
On 2013-10-03, trip...@gmail.com  wrote:
> thekey=[{"a": 80.0, "b": 0.0, "c": 10.0, "d": 10.0}, {"a":
> 100.0, "b": 0.0, "c": 0.0, "d": 0.0}, {"a": 80.0, "b": 0.0,
> "c": 10.0, "d": 10.0}, {"a": 90.0, "b": 0.0, "c": 0.0, "d":
> 10.0}]
>
> However, at the URL, the values show up as 90.43278694123

You'll need to convert them to strings yourself before submitting
them, by using % formatting or str.format.
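For example, formatting before dumping pins the wire format to two decimal places (Python 3 spelling of `urlencode` shown; it lives directly in `urllib` on the OP's Python 2):

```python
import json
try:
    from urllib.parse import urlencode    # Python 3
except ImportError:
    from urllib import urlencode          # Python 2

y = [{"a": 80.0, "b": 9.10003677694312}]
# Format first, so exactly two decimal places survive the whole round trip.
rounded = [{k: "%.2f" % v for k, v in d.items()} for d in y]
params = urlencode({'thekey': json.dumps(rounded)})
print(params)
```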

-- 
Neil Cerutti
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Get the selected tab in a enthought traits application

2013-10-03 Thread petmertens
Here's the answer:

from enthought.traits.api import HasTraits, Str, List, Button, Any
from enthought.traits.ui.api import View, Item
from enthought.traits.ui.api import ListEditor

class A(HasTraits):
    StringA = Str

    view = View(Item('StringA'))

class B(HasTraits):
    StringB = Str

    view = View(Item('StringB'))

class C(HasTraits):
    MyList = List(HasTraits)
    MyButton = Button(label="Test")
    SelectedTab = Any

    def _MyButton_fired(self):
        if self.SelectedTab == self.MyList[0]:
            print self.MyList[0].StringA
        if self.SelectedTab == self.MyList[1]:
            print self.MyList[1].StringB

    view = View(Item('MyList', style='custom', show_label=False,
                     editor=ListEditor(use_notebook=True, deletable=False,
                                       dock_style='tab', selected='SelectedTab')),
                Item('MyButton', show_label=False))

a = A()
b = B()
c = C()
c.MyList = [a, b]

c.configure_traits()
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Rounding off Values of dicts (in a list) to 2 decimal points

2013-10-03 Thread tripsvt
On Thursday, October 3, 2013 11:03:17 AM UTC-7, Neil Cerutti wrote:
> On 2013-10-03, trip...@gmail.com  wrote:
> 
> > thekey=[{"a": 80.0, "b": 0.0, "c": 10.0, "d": 10.0}, {"a":
> 
> > 100.0, "b": 0.0, "c": 0.0, "d": 0.0}, {"a": 80.0, "b": 0.0,
> 
> > "c": 10.0, "d": 10.0}, {"a": 90.0, "b": 0.0, "c": 0.0, "d":
> 
> > 10.0}]
> 
> >
> 
> > However, at the URL, the values show up as 90.43278694123
> 
> 
> 
> You'll need to convert them to strings yourself before submitting
> 
> them, by using % formatting or str.format.
> 
> 
> 
> -- 
> 
> Neil Cerutti

I thought the class 'LessPrecise' converts them to strings. But even when I try 
doing it directly without the class at all, as in str(round(v, 2)), it gives 
all the expected values (as in {"a": "10.1", "b": "3.4", etc.}) but at the URL, 
it gives all the decimal places - 10.78324783923783
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Why didn't my threads exit correctly ?

2013-10-03 Thread MRAB

On 03/10/2013 18:37, 李洛 wrote:

Hi list,
I write an example script using threading as follow.
It look like hang when the list l_ip is empty. And any suggestion with
debug over the threading in Python ?

#!/usr/bin/env python
# -*- coding: utf-8 -*-
import re
import os
import threading
from Queue import Queue
from time import sleep

l_ip = []
l_result = []
result = re.compile(r"[1-3] received")

class ping(threading.Thread):
    """ """
    def __init__(self, l_ip, l_result):
        threading.Thread.__init__(self)
        self.l_ip = l_ip
        #self.l_result = l_result

    def run(self):
        """ """
        while True:
            try:
                ip = self.l_ip.pop()
            except IndexError as e:
                print e
                break
            ping_out = os.popen(''.join(['ping -q -c3 ', ip]), 'r')
            print 'Ping ip:%s' % ip
            while True:
                line = ping_out.readline()
                if not line: break
                if result.findall(line):
                    l_result.append(ip)
                    break

queue = Queue()

for i in range(1, 110):
    l_ip.append(''.join(['192.168.1.', str(i)]))
for i in xrange(10):
    t = ping(l_ip, l_result)
    t.start()
    queue.put(t)
queue.join()
print "Result will go here."
for i in l_result:
    print 'IP %s is OK' % i


queue.join() will block until every item that was put has been matched by a
task_done() call, which never happens here!

You're putting the workers in the queue, whereas the normal way of
doing it is to put them into a list, the inputs into a queue, and the
outputs into another queue. The workers then 'get' from the input
queue, do some processing, and 'put' to the output queue.
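That shape — inputs in one queue, results in another, the workers themselves in a plain list — might look roughly like this (Python 3 names; the module is spelled Queue in Python 2, and the ping call is replaced by a stand-in so the sketch is self-contained):

```python
import threading
import queue          # spelled Queue in Python 2

def worker(inputs, outputs):
    while True:
        ip = inputs.get()
        if ip is None:                    # sentinel: no more work
            inputs.task_done()
            break
        outputs.put((ip, 'ok'))           # stand-in for the real ping check
        inputs.task_done()

inputs, outputs = queue.Queue(), queue.Queue()
workers = [threading.Thread(target=worker, args=(inputs, outputs))
           for _ in range(4)]
for t in workers:
    t.start()
for i in range(1, 10):
    inputs.put('192.168.1.%d' % i)
for _ in workers:
    inputs.put(None)                      # one sentinel per worker
inputs.join()         # returns once every put item has been task_done()'d
results = []
while not outputs.empty():
    results.append(outputs.get())
print(len(results))
```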

--
https://mail.python.org/mailman/listinfo/python-list


Re: Multiple scripts versus single multi-threaded script

2013-10-03 Thread Roy Smith
In article ,
 Chris Angelico  wrote:

> On Fri, Oct 4, 2013 at 2:41 AM, Roy Smith  wrote:
> > The downside to threads is that all of of this sharing makes them much
> > more complicated to use properly.  You have to be aware of how all the
> > threads are interacting, and mediate access to shared resources.  If you
> > do that wrong, you get memory corruption, deadlocks, and all sorts of
> > (extremely) difficult to debug problems.  A lot of the really hairy
> > problems (i.e. things like one thread continuing to use memory which
> > another thread has freed) are solved by using a high-level language like
> > Python which handles all the memory allocation for you, but you can
> > still get deadlocks and data corruption.
> 
> With CPython, you don't have any headaches like that; you have one
> very simple protection, a Global Interpreter Lock (GIL), which
> guarantees that no two threads will execute Python code
> simultaneously. No corruption, no deadlocks, no hairy problems.
> 
> ChrisA

Well, the GIL certainly eliminates a whole range of problems, but it's 
still possible to write code that deadlocks.  All that's really needed 
is for two threads to try to acquire the same two resources, in 
different orders.  I'm running the following code right now.  It appears 
to be doing a pretty good imitation of a deadlock.  Any similarity to 
current political events is purely intentional.

import threading
import time

lock1 = threading.Lock()
lock2 = threading.Lock()

class House(threading.Thread):
def run(self):
print "House starting..."
lock1.acquire()
time.sleep(1)
lock2.acquire()
print "House running"
lock2.release()
lock1.release()

class Senate(threading.Thread):
def run(self):
print "Senate starting..."
lock2.acquire()
time.sleep(1)
lock1.acquire()
print "Senate running"
lock1.release()
lock2.release()

h = House()
s = Senate()

h.start()
s.start()

Similarly, I can have data corruption.  I can't get memory corruption in 
the way you can get in a C/C++ program, but I can certainly have one 
thread produce data for another thread to consume, and then 
(incorrectly) continue to mutate that data after it relinquishes 
ownership.

Let's say I have a Queue.  A producer thread pushes work units onto the 
Queue and a consumer thread pulls them off the other end.  If my 
producer thread does something like:

work = {'id': 1, 'data': "The Larch"}
my_queue.put(work)
work['id'] = 3

I've got a race condition where the consumer thread may get an id of 
either 1 or 3, depending on exactly when it reads the data from its end 
of the queue (more precisely, exactly when it uses that data).

Here's a somewhat different example of data corruption between threads:

import threading
import random
import sys

sketch = "The Dead Parrot"

class T1(threading.Thread):
def run(self):
current_sketch = str(sketch)
while 1:
if sketch != current_sketch:
print "Blimey, it's changed!"
return

class T2(threading.Thread):
def run(self):
sketches = ["Piranha Brothers",
"Spanish Inquisition",
"Lumberjack"]
while 1:
global sketch
sketch = random.choice(sketches)

t1 = T1()
t2 = T2()
t2.daemon = True

t1.start()
t2.start()

t1.join()
sys.exit()
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Multiple scripts versus single multi-threaded script

2013-10-03 Thread Chris Angelico
On Fri, Oct 4, 2013 at 4:28 AM, Roy Smith  wrote:
> Well, the GIL certainly eliminates a whole range of problems, but it's
> still possible to write code that deadlocks.  All that's really needed
> is for two threads to try to acquire the same two resources, in
> different orders.  I'm running the following code right now.  It appears
> to be doing a pretty good imitation of a deadlock.  Any similarity to
> current political events is purely intentional.

Right. Sorry, I meant that the GIL protects you from all that
happening in the lower level code (even lower than the Senate, here),
but yes, you can get deadlocks as soon as you acquire locks. That's
nothing to do with threading, you can have the same issues with
databases, file systems, or anything else that lets you lock
something. It's a LOT easier to deal with deadlocks or data corruption
that occurs in pure Python code than in C, since Python has awesome
introspection facilities... and you're guaranteed that corrupt data is
still valid Python objects.

As to your corrupt data example, though, I'd advocate a very simple
system of object ownership: as soon as the object has been put on the
queue, it's "owned" by the recipient and shouldn't be mutated by
anyone else. That kind of system generally isn't hard to maintain.
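One mechanical way to enforce that ownership rule is to hand the consumer its own copy before mutating anything locally (a sketch, not the only discipline that works):

```python
import copy
import queue             # spelled Queue in Python 2

q = queue.Queue()
work = {'id': 1, 'data': "The Larch"}
q.put(copy.deepcopy(work))   # the consumer now owns this copy outright
work['id'] = 3               # producer-side mutation can no longer race
received = q.get()
print(received['id'])  # 1
```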

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Multiple scripts versus single multi-threaded script

2013-10-03 Thread Dave Angel
On 3/10/2013 12:50, Chris Angelico wrote:

> On Fri, Oct 4, 2013 at 2:41 AM, Roy Smith  wrote:
>> The downside to threads is that all of of this sharing makes them much
>> more complicated to use properly.  You have to be aware of how all the
>> threads are interacting, and mediate access to shared resources.  If you
>> do that wrong, you get memory corruption, deadlocks, and all sorts of
>> (extremely) difficult to debug problems.  A lot of the really hairy
>> problems (i.e. things like one thread continuing to use memory which
>> another thread has freed) are solved by using a high-level language like
>> Python which handles all the memory allocation for you, but you can
>> still get deadlocks and data corruption.
>
> With CPython, you don't have any headaches like that; you have one
> very simple protection, a Global Interpreter Lock (GIL), which
> guarantees that no two threads will execute Python code
> simultaneously. No corruption, no deadlocks, no hairy problems.
>
> ChrisA

The GIL takes care of the gut-level interpreter issues like reference
counts for shared objects.  But it does not avoid deadlock or hairy
problems.  I'll just show one, trivial, problem, but many others exist.

If two threads process the same global variable as follows,
myglobal = myglobal + 1

Then you have no guarantee that the value will really get incremented
twice.  Presumably there's a mutex/critsection function in the threading
module that can make this safe, but once you use it in two different
places, you raise the possibility of deadlock.
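A minimal sketch of that mutex: a single threading.Lock serializes the read-modify-write, so every increment lands (Python 3 print syntax):

```python
import threading

counter = 0
lock = threading.Lock()

def bump(n):
    global counter
    for _ in range(n):
        with lock:            # one thread at a time through read-modify-write
            counter += 1

threads = [threading.Thread(target=bump, args=(10000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000
```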

On the other hand, if you're careful to have the thread use only data
that is unique to that thread, then it would seem to be safe.  However,
you still have the same risk if you call some library that wasn't
written to be thread safe.  I'll assume that print() and suchlike are
safe, but some third party library could well use the equivalent of a
global variable in an unsafe way.



-- 
DaveA


-- 
https://mail.python.org/mailman/listinfo/python-list


Re: compare two list of dictionaries

2013-10-03 Thread MRAB

On 03/10/2013 17:11, Mohan L wrote:

Dear All,

I have two list of dictionaries like below:

In the below dictionaries the value of ip can be either hostname or ip
address.

output1=[
{'count': 3 , 'ip': 'xxx.xx.xxx.1'},
{'count': 4, 'ip': 'xxx.xx.xxx.2'},
{'count': 8, 'ip': 'xxx.xx.xxx.3'},
{'count': 10, 'ip': 'xxx.xx.xxx.4'},
{'count': 212, 'ip': 'hostname1'},
{'count': 27, 'ip': 'hostname2'},
{'count': 513, 'ip': 'hostname3'},
{'count': 98, 'ip': 'hostname4'},
{'count': 1, 'ip': 'hostname10'},
{'count': 2, 'ip': 'hostname8'},
{'count': 3, 'ip': 'xxx.xx.xxx.11'},
{'count': 90, 'ip': 'xxx.xx.xxx.12'},
{'count': 12, 'ip': 'xxx.xx.xxx.13'},
{'count': 21, 'ip': 'xxx.xx.xxx.14'},
{'count': 54, 'ip': 'xxx.xx.xxx.15'},
{'count': 34, 'ip': 'xxx.xx.xxx.16'},
{'count': 11, 'ip': 'xxx.xx.xxx.17'},
{'count': 2, 'ip': 'xxx.xx.xxx.18'},
{'count': 19, 'ip': 'xxx.xx.xxx.19'},
{'count': 21, 'ip': 'xxx.xx.xxx.20'},
{'count': 25, 'ip': 'xxx.xx.xxx.21'},
{'count': 31, 'ip': 'xxx.xx.xxx.22'},
{'count': 43, 'ip': 'xxx.xx.xxx.23'},
{'count': 46, 'ip': 'xxx.xx.xxx.24'},
{'count': 80, 'ip': 'xxx.xx.xxx.25'},
{'count': 91, 'ip': 'xxx.xx.xxx.26'},
{'count': 90, 'ip': 'xxx.xx.xxx.27'},
{'count': 10, 'ip': 'xxx.xx.xxx.28'},
{'count': 3, 'ip': 'xxx.xx.xxx.29'}]


In the below dictionaries have either hostname or ip or both.

output2=(

{'hostname': 'INNCHN01', 'ip_addr': 'xxx.xx.xxx.11'},
{'hostname': 'HYDRHC02', 'ip_addr': 'xxx.xx.xxx.12'},
{'hostname': 'INNCHN03', 'ip_addr': 'xxx.xx.xxx.13'},
{'hostname': 'MUMRHC01', 'ip_addr': 'xxx.xx.xxx.14'},
{'hostname': 'n/a', 'ip_addr': 'xxx.xx.xxx.15'},
{'hostname': 'INNCHN05', 'ip_addr': 'xxx.xx.xxx.16'},
{'hostname': 'hostname1', 'ip_addr': 'n/a'},
{'hostname': 'hostname2', 'ip_addr': 'n/a'},
{'hostname': 'hostname10', 'ip_addr': ''},
{'hostname': 'hostname8', 'ip_addr': ''},
{'hostname': 'hostname200', 'ip_addr': 'xxx.xx.xxx.200'},
{'hostname': 'hostname300', 'ip_addr': 'xxx.xx.xxx.400'},

)

trying to get the following difference from the above dictionary

1). compare the value of 'ip' in output1 dictionary with either
'hostname' and 'ip_addr' output2 dictionary and print their
intersection. Tried below code:


for doc in output1:
 for row in output2:
 if((row["hostname"] == doc["ip"]) or (row["ip_addr"] ==
doc["ip"])):
 print doc["ip"],doc["count"]

*output:*
hostname1 212
hostname2 27
hostname10 1
hostname8 2
xxx.xx.xxx.11 3
xxx.xx.xxx.12 90
xxx.xx.xxx.13 12
xxx.xx.xxx.14 21
xxx.xx.xxx.15 54
xxx.xx.xxx.16 34


1. Create a dict from output1 in which the key is the ip and the value
is the count.

2. Create a set from output2 containing all the hostnames and ip_addrs.

3. Get the intersection of the keys of the dict with the set.

4. Print the entries of the dict for each member of the intersection.


2). need to print the below output if the value of 'ip' in output1
dictionary is not there in in output2 dictionary(ip/hostname which is
there in output1 and not there in output2):

  xxx.xx.xxx.1 3
  xxx.xx.xxx.2 4
  xxx.xx.xxx.3  8
  xxx.xx.xxx.4  10
  hostname3  513
  hostname4  98
  xxx.xx.xxx.17  11
  xxx.xx.xxx.18  2
  xxx.xx.xxx.19  19
  xxx.xx.xxx.20  21
  xxx.xx.xxx.21  25
  xxx.xx.xxx.22  31
  xxx.xx.xxx.23  43
  xxx.xx.xxx.24  46
  xxx.xx.xxx.25  80
  xxx.xx.xxx.26  91
  xxx.xx.xxx.27  90
  xxx.xx.xxx.28  10
  xxx.xx.xxx.29  3


1. Get the difference between the keys of the dict and the intersection.

2. Print the entries of the dict for each member of the difference.


3). Ip address with is there only in output2 dictionary.

xxx.xx.xxx.200
xxx.xx.xxx.400


1. Create a set from output2 containing all the ip_addrs.

2. Get the difference between the set and the keys of the dict created
from output1.


Any help would be really appreciated. Thank you



--
https://mail.python.org/mailman/listinfo/python-list


Re: Tail recursion to while iteration in 2 easy steps

2013-10-03 Thread Terry Reedy

On 10/2/2013 10:34 PM, Steven D'Aprano wrote:


You are both assuming that LOAD_CONST will re-use the same tuple
(1, 2, 3) in multiple places.


No I did not. To save tuple creation time, a pre-compiled tuple is 
reused when its display expression is re-executed. If I had been 
interested in multiple occurrences of the same display, I would have tested.


>>> def f():
a = 1,'a',, 'bbb'; x = 1,'a',, 'bbb'
b = 1,'a',, 'bbb'
c = 'a'
d =  + 

>>> f.__code__.co_consts
(None, 1, 'a', , 'bbb', (1, 'a', , 'bbb'), (1, 'a', , 
'bbb'), (1, 'a', , 'bbb'), )


Empirically, ints and strings are checked for prior occurrence in 
co_consts before being added. I suspect None is too, but will not assume.


How is the issue of multiple occurrences of constants relevant to my 
topic statement? Let me quote it, with misspellings corrected.


"CPython core developers have been very conservative about what
transformations they put into the compiler." [misspellings corrected]

Aha! Your example and that above reinforce this statement. Equal tuples 
are not necessarily identical and cannot necessarily be substituted for 
each other in all code.


>>> (1, 2) == (1.0, 2.0)
True

But replacing (1.0, 2.0) with (1, 2), by only storing the latter, would 
not be valid without some possibly tricky context analysis. The same is 
true for equal numbers, and the optimizer pays attention.


>>> def g():
a = 1
b = 1.0

>>> g.__code__.co_consts
(None, 1, 1.0)

For numbers, the proper check is relatively easy:

for item in const_list:
  if type(x) is type(item) and x == item:
break  # identical item already in working list
else:
  const_list.append(x)

Writing a valid recursive function to do the same for tuples, and 
proving its validity to enough other core developers to make it 
accepted, is much harder and hardly seems worthwhile.
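The numeric check above can be exercised directly (`add_const` is an illustrative name, not a CPython function):

```python
def add_const(const_list, x):
    # Reuse an existing constant only when type AND value both match.
    for item in const_list:
        if type(x) is type(item) and x == item:
            return item          # identical constant already present
    const_list.append(x)
    return x

consts = []
add_const(consts, 1)
add_const(consts, 1.0)   # equal to 1, but a float, so stored separately
add_const(consts, 1)     # identical to the first entry: reused, not appended
print(consts)  # [1, 1.0]
```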


It would probably be easier to compare the parsed AST subtrees for the 
displays rather than the objects created from them.


---
> py> def f():
> ... a = (1, 2, 3)
> ... b = (1, 2, 3)
[snip]
> So even though both a and b are created by the same LOAD_CONST 
byte-code,


I am not sure what you mean by 'created'. LOAD_CONST puts the address of 
an object in co_consts on the top of the virtual machine stack.


> the object is not re-used (although it could be!)

It can be reused, in this case, because the constant displays are 
identical, as defined above.


> and two distinct tuples are created.

Because it is not easy to make the compiler see that only one is needed.

--
Terry Jan Reedy

--
https://mail.python.org/mailman/listinfo/python-list


Re: ipy %run noob confusion

2013-10-03 Thread Terry Reedy

On 10/3/2013 1:42 PM, jshra...@gmail.com wrote:

I have some rather complex code that works perfectly well if I paste it in by 
hand to ipython, but if I use %run it can't find some of the libraries, but 
others it can.


Ipython is a separate product built on top of Python. If no answer here, 
look for an ipython-specific list or discussion group.


--
Terry Jan Reedy

--
https://mail.python.org/mailman/listinfo/python-list


Re: ipy %run noob confusion

2013-10-03 Thread Mark Lawrence

On 03/10/2013 20:26, Terry Reedy wrote:

On 10/3/2013 1:42 PM, jshra...@gmail.com wrote:

I have some rather complex code that works perfectly well if I paste
it in by hand to ipython, but if I use %run it can't find some of the
libraries, but others it can.


Ipython is a separate product built on top of Python. If no answer here,
look for an ipython-specific list or discussion group.



Such as news.gmane.org/gmane.comp.python.ipython.user

--
Roses are red,
Violets are blue,
Most poems rhyme,
But this one doesn't.

Mark Lawrence

--
https://mail.python.org/mailman/listinfo/python-list


Re: Multiple scripts versus single multi-threaded script

2013-10-03 Thread Roy Smith
In article ,
 Chris Angelico  wrote:

> As to your corrupt data example, though, I'd advocate a very simple
> system of object ownership: as soon as the object has been put on the
> queue, it's "owned" by the recipient and shouldn't be mutated by
> anyone else.

Well, sure.  I agree with you that threading in Python is about a 
zillion times easier to manage than threading in C/C++, but there are 
still things you need to think about when using threading in Python 
which you don't need to think about if you're not using threading at 
all.  Transfer of ownership when you put something on a queue is one of 
those things.

So, I think my original statement:

> if you're looking for a short answer, I'd say just keep doing what 
> you're doing using multiple processes and don't get into threading.

is still good advice for somebody who isn't sure they need threads.

On the other hand, for somebody who is interested in learning about 
threads, Python is a great platform to learn because you get to 
experiment with the basic high-level concepts without getting bogged 
down in pthreads minutiae.  And, as Chris pointed out, if you get it 
wrong, at least you've still got valid Python objects to puzzle over, 
not a smoking pile of bits on the floor.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: wil anyone ressurect medusa and pypersist?

2013-10-03 Thread c-gschuette
On Thursday, May 16, 2013 11:15:45 AM UTC-7, vispha...@gmail.com wrote:
> www.prevayler.org in python = pypersist
> 
> 
> 
> medusa = python epoll web server and ftp server, evented and async

wow interesting

sprevayler ??

cl-prevalence
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Multiple scripts versus single multi-threaded script

2013-10-03 Thread Chris Angelico
On Fri, Oct 4, 2013 at 5:53 AM, Roy Smith  wrote:
> So, I think my original statement:
>
>> if you're looking for a short answer, I'd say just keep doing what
>> you're doing using multiple processes and don't get into threading.
>
> is still good advice for somebody who isn't sure they need threads.
>
> On the other hand, for somebody who is interested in learning about
> threads, Python is a great platform to learn because you get to
> experiment with the basic high-level concepts without getting bogged
> down in pthreads minutiae.  And, as Chris pointed out, if you get it
> wrong, at least you've still got valid Python objects to puzzle over,
> not a smoking pile of bits on the floor.

Agree wholeheartedly to both halves. I was just explaining a similar
concept to my brother last night, with regard to network/database
request handling:

1) The simplest code starts, executes, and finishes, with no threads,
fork(), or other confusions or shared state or anything. Execution can
be completely predicted by eyeballing the source code. You can pretend
that you have a dedicated CPU core that does nothing but run your
program.

2) Threaded code adds a measure of complexity that you have to get
your head around. Now you need to concern yourself with preemption,
multiple threads doing things in different orders, locking, shared
state, etc, etc. But you can still pretend that the execution of one
job will happen as a single "thing", top down, with predictable
intermediate state, if you like. (Python's threading and multiprocess
modules both follow this style, they just have different levels of
shared state.)

3) Asynchronous code adds significantly more "get your head around"
complexity, since you now have to retain state for multiple
jobs/requests in the same thread. You can't use local variables to
keep track of where you're up to. Most likely, your code will do some
tiny thing, update the state object for that request, fire off an
asynchronous request of your own (maybe to the hard disk, with a
callback when the data's read/written), and then return, back to some
main loop.

Now imagine you have a database written in style #1, and you have to
drag it, kicking and screaming, into the 21st century. Oh look, it's
easy! All you have to do is start multiple threads doing the same job!
And then you'll have some problems with simultaneous edits, so you put
some big fat locks all over the place to prevent two threads from
doing the same thing at the same time. Even if one of those threads
was handling something interactive and might hold its lock for some
number of minutes. Suboptimal design, maybe, but hey, it works right?
That's what my brother has to deal with every day, as a user of said
database... :|

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Literal syntax for frozenset, frozendict (was: Tail recursion to while iteration in 2 easy steps)

2013-10-03 Thread Ben Finney
random...@fastmail.us writes:

> Hey, while we're on the subject, can we talk about frozen(set|dict)
> literals again? I really don't understand why this discussion fizzles
> out whenever it's brought up on python-ideas.

Can you start us off by searching for previous threads discussing it,
and summarise the arguments here?

-- 
 \ “If you ever catch on fire, try to avoid seeing yourself in the |
  `\mirror, because I bet that's what REALLY throws you into a |
_o__) panic.” —Jack Handey |
Ben Finney

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Tail recursion to while iteration in 2 easy steps

2013-10-03 Thread Steven D'Aprano
On Wed, 02 Oct 2013 22:41:00 -0400, Terry Reedy wrote:

> I am referring to constant-value objects included in the code object.
>  >>> def f(): return (1,2,3)
> 
>  >>> f.__code__.co_consts
> (None, 1, 2, 3, (1, 2, 3))

Okay, now that's more clear. I didn't understand what you meant before. 
So long as we understand we're talking about a CPython implementation 
detail.


> None is present as the default return, even if not needed for a
> particular function. Every literal is also tossed in, whether needed or
> not.
> 
>> which in Python 3.3 understands tuples like (1, 2, 3), but not lists.
> 
> The byte-code does not understand anything about types. LOAD_CONST n
> simply loads the (n+1)st object in .co_consts onto the top of the stack.

Right, this is more clear to me now.

As I understand it, the contents of code objects are implementation 
details, not required for implementations. For example, IronPython 
provides a co_consts attribute, but it only contains None. Jython doesn't 
provide a co_consts attribute at all. So while it's interesting to 
discuss what CPython does, we should not be fooled into thinking that 
this is guaranteed by every Python.

I can imagine a Python implementation that compiles constants into some 
opaque object like __closure__ or co_code. In that case, it could treat 
the list in "for i in [1, 2, 3]: ..." as a constant too, since there is 
no fear that some other object could reach into the opaque object and 
change it.

Of course, that would likely be a lot of effort for very little benefit. 
The compiler would have to be smart enough to see that the list was never 
modified or returned. Seems like a lot of trouble to go to just to save 
creating a small list.
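In current CPython this is easy to check directly (an implementation detail, as discussed, so other Pythons may behave differently):

```python
def f():
    return (1, 2, 3)   # immutable literal: folded into co_consts

def g():
    return [1, 2, 3]   # mutable literal: rebuilt on every call

# The tuple is stored whole as a constant of the code object...
print((1, 2, 3) in f.__code__.co_consts)

# ...but no list ever appears there (a shared mutable constant
# would be unsafe, and lists are unhashable anyway).
print(any(isinstance(c, list) for c in g.__code__.co_consts))

# Each call to g() builds a fresh list object:
print(g() is g())
```

On CPython this prints True, False, False: the tuple is re-used, the list is not.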

More likely would be implementations that didn't re-use constants, than 
implementations that aggressively re-used everything possible.


-- 
Steven
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Literal syntax for frozenset, frozendict

2013-10-03 Thread Ethan Furman

On 10/03/2013 05:18 PM, Ben Finney wrote:

random...@fastmail.us writes:


Hey, while we're on the subject, can we talk about frozen(set|dict)
literals again? I really don't understand why this discussion fizzles
out whenever it's brought up on python-ideas.


Can you start us off by searching for previous threads discussing it,
and summarise the arguments here?


And then start a new thread.  :)

--
~Ethan~
--
https://mail.python.org/mailman/listinfo/python-list


Re: Goodbye: was JUST GOT HACKED

2013-10-03 Thread Steven D'Aprano
On Thu, 03 Oct 2013 17:31:44 +0530, Ravi Sahni wrote:

> On Thu, Oct 3, 2013 at 5:05 PM, Steven D'Aprano
>  wrote:

>> No, you are welcome here. You've posted more in just a few days than
>> Walter has in months. We need more people like you.
> 
> Thanks for the welcome!
> 
> But No thanks for the non-welcome -- I dont figure why Walter Hurry (or
> anyone else) should be unwelcome just because I am welcome.



Who said Walter was unwelcome? It's *his* choice to leave, nobody is
kicking him out.

Regards,



-- 
Steven
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Tail recursion to while iteration in 2 easy steps

2013-10-03 Thread Steven D'Aprano
On Thu, 03 Oct 2013 10:09:25 -0400, random832 wrote:

> Speaking of assumptions, I would almost say that we should make the
> assumption that operators (other than the __i family, and
> setitem/setattr/etc) are not intended to have visible side effects. This
> would open a _huge_ field of potential optimizations - including that
> this would no longer be a semantic change (since relying on one of the
> operators being allowed to change the binding of fact would no longer be
> guaranteed).

I like the idea of such optimizations, but I'm afraid that your last 
sentence seems a bit screwy to me. You seem to be saying, if we make this 
major semantic change to Python, we can then retroactively declare that 
it's not a semantic change at all, since under the new rules, it's no 
different from the new rules.

Anyway... I think that it's something worth investigating, but it's not 
as straightforward as you might hope. There almost certainly is code out 
in the world that uses operator overloading for DSLs. For instance, I've 
played around with something vaguely like this DSL:

chain = Node('spam') 
chain >> 'eggs'
chain >> 'ham'
chain.head <= 'cheese'

where I read >> as appending and <= as inserting. I was never quite happy 
with the syntax, so my experiments never went anywhere, but I expect that 
some people, somewhere, have. This is a legitimate way to use Python, and 
changing the semantics to prohibit it would be a Bad Thing.
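The Node class itself isn't shown above, but a minimal sketch of one (the behaviour here is my guess at the DSL, not Steven's actual code) makes the hazard concrete: the whole point of >> here is its visible side effect.

```python
class Node:
    """Tiny linked-list node where ``>>`` appends -- an operator
    overload used purely for its side effect."""
    def __init__(self, value):
        self.value = value
        self.next = None

    def __rshift__(self, value):
        node = self
        while node.next is not None:   # walk to the tail
            node = node.next
        node.next = Node(value)        # the side effect
        return self

chain = Node('spam')
chain >> 'eggs'   # statements whose "result" is discarded --
chain >> 'ham'    # exactly what a side-effect-free optimizer
                  # would feel entitled to eliminate

values = []
node = chain
while node is not None:
    values.append(node.value)
    node = node.next
print(values)   # ['spam', 'eggs', 'ham']
```

Under the proposed rule, an optimizer could legally drop both `chain >> ...` statements as dead expression statements, silently breaking the DSL.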

However, I can imagine something like a __future__ directive that 
enables, or disables, such optimizations on a per-module basis. In Python 
3, it would have to be disabled by default. Python 4000 could make the 
optimizations enabled by default and use the __future__ machinery to 
disable it.


-- 
Steven
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: compare two list of dictionaries

2013-10-03 Thread Mohan L
On Fri, Oct 4, 2013 at 12:14 AM, MRAB  wrote:

> On 03/10/2013 17:11, Mohan L wrote:
>
>> Dear All,
>>
>> I have two list of dictionaries like below:
>>
>> In the below dictionaries the value of ip can be either hostname or ip
>> address.
>>
>> output1=[
>> {'count': 3 , 'ip': 'xxx.xx.xxx.1'},
>> {'count': 4, 'ip': 'xxx.xx.xxx.2'},
>> {'count': 8, 'ip': 'xxx.xx.xxx.3'},
>> {'count': 10, 'ip': 'xxx.xx.xxx.4'},
>> {'count': 212, 'ip': 'hostname1'},
>> {'count': 27, 'ip': 'hostname2'},
>> {'count': 513, 'ip': 'hostname3'},
>> {'count': 98, 'ip': 'hostname4'},
>> {'count': 1, 'ip': 'hostname10'},
>> {'count': 2, 'ip': 'hostname8'},
>> {'count': 3, 'ip': 'xxx.xx.xxx.11'},
>> {'count': 90, 'ip': 'xxx.xx.xxx.12'},
>> {'count': 12, 'ip': 'xxx.xx.xxx.13'},
>> {'count': 21, 'ip': 'xxx.xx.xxx.14'},
>> {'count': 54, 'ip': 'xxx.xx.xxx.15'},
>> {'count': 34, 'ip': 'xxx.xx.xxx.16'},
>> {'count': 11, 'ip': 'xxx.xx.xxx.17'},
>> {'count': 2, 'ip': 'xxx.xx.xxx.18'},
>> {'count': 19, 'ip': 'xxx.xx.xxx.19'},
>> {'count': 21, 'ip': 'xxx.xx.xxx.20'},
>> {'count': 25, 'ip': 'xxx.xx.xxx.21'},
>> {'count': 31, 'ip': 'xxx.xx.xxx.22'},
>> {'count': 43, 'ip': 'xxx.xx.xxx.23'},
>> {'count': 46, 'ip': 'xxx.xx.xxx.24'},
>> {'count': 80, 'ip': 'xxx.xx.xxx.25'},
>> {'count': 91, 'ip': 'xxx.xx.xxx.26'},
>> {'count': 90, 'ip': 'xxx.xx.xxx.27'},
>> {'count': 10, 'ip': 'xxx.xx.xxx.28'},
>> {'count': 3, 'ip': 'xxx.xx.xxx.29'}]
>>
>>
>> In the below dictionaries have either hostname or ip or both.
>>
>> output2=(
>>
>> {'hostname': 'INNCHN01', 'ip_addr': 'xxx.xx.xxx.11'},
>> {'hostname': 'HYDRHC02', 'ip_addr': 'xxx.xx.xxx.12'},
>> {'hostname': 'INNCHN03', 'ip_addr': 'xxx.xx.xxx.13'},
>> {'hostname': 'MUMRHC01', 'ip_addr': 'xxx.xx.xxx.14'},
>> {'hostname': 'n/a', 'ip_addr': 'xxx.xx.xxx.15'},
>> {'hostname': 'INNCHN05', 'ip_addr': 'xxx.xx.xxx.16'},
>> {'hostname': 'hostname1', 'ip_addr': 'n/a'},
>> {'hostname': 'hostname2', 'ip_addr': 'n/a'},
>> {'hostname': 'hostname10', 'ip_addr': ''},
>> {'hostname': 'hostname8', 'ip_addr': ''},
>> {'hostname': 'hostname200', 'ip_addr': 'xxx.xx.xxx.200'},
>> {'hostname': 'hostname300', 'ip_addr': 'xxx.xx.xxx.400'},
>>
>> )
>>
>> trying to get the following difference from the above dictionary
>>
>> 1). compare the value of 'ip' in output1 dictionary with either
>> 'hostname' and 'ip_addr' output2 dictionary and print their
>> intersection. Tried below code:
>>
>>
>> for doc in output1:
>>  for row in output2:
>>  if((row["hostname"] == doc["ip"]) or (row["ip_addr"] ==
>> doc["ip"])):
>>  print doc["ip"],doc["count"]
>>
>> *output:*
>>
>> hostname1 212
>> hostname2 27
>> hostname10 1
>> hostname8 2
>> xxx.xx.xxx.11 3
>> xxx.xx.xxx.12 90
>> xxx.xx.xxx.13 12
>> xxx.xx.xxx.14 21
>> xxx.xx.xxx.15 54
>> xxx.xx.xxx.16 34
>>
> 1. Create a dict from output1 in which the key is the ip and the value
> is the count.
>
> 2. Create a set from output2 containing all the hostnames and ip_addrs.
>
> 3. Get the intersection of the keys of the dict with the set.
>
> 4. Print the entries of the dict for each member of the intersection.
>
>
>> 2). need to print the below output if the value of 'ip' in output1
>> dictionary is not there in in output2 dictionary(ip/hostname which is
>> there in output1 and not there in output2):
>>
>>   xxx.xx.xxx.1 3
>>   xxx.xx.xxx.2 4
>>   xxx.xx.xxx.3  8
>>   xxx.xx.xxx.4  10
>>   hostname3  513
>>   hostname4  98
>>   xxx.xx.xxx.17  11
>>   xxx.xx.xxx.18  2
>>   xxx.xx.xxx.19  19
>>   xxx.xx.xxx.20  21
>>   xxx.xx.xxx.21  25
>>   xxx.xx.xxx.22  31
>>   xxx.xx.xxx.23  43
>>   xxx.xx.xxx.24  46
>>   xxx.xx.xxx.25  80
>>   xxx.xx.xxx.26  91
>>   xxx.xx.xxx.27  90
>>   xxx.xx.xxx.28  10
>>   xxx.xx.xxx.29  3
>>
> 1. Get the difference between the keys of the dict and the intersection.
>
> 2. Print the entries of the dict for each member of the difference.



#!/usr/bin/env python
import sys


output1=[
{'count': 3 , 'ip': 'xxx.xx.xxx.1'},
{'count': 4, 'ip': 'xxx.xx.xxx.2'},
{'count': 8, 'ip': 'xxx.xx.xxx.3'},
{'count': 10, 'ip': 'xxx.xx.xxx.4'},
{'count': 212, 'ip': 'hostname1'},
{'count': 27, 'ip': 'hostname2'},
{'count': 513, 'ip': 'hostname3'},
{'count': 98, 'ip': 'hostname4'},
{'count': 1, 'ip': 'hostname10'},
{'count': 2, 'ip': 'hostname8'},
{'count': 3, 'ip': 'xxx.xx.xxx.11'},
{'count': 90, 'ip': 'xxx.xx.xxx.12'},
{'count': 12, 'ip': 'xxx.xx.xxx.13'},
{'count': 21, 'ip': 'xxx.xx.xxx.14'},
{'count': 54, 'ip': 'xxx.xx.xxx.15'},
{'count': 34, 'ip': 'xxx.xx.xxx.16'},
{'count': 11, 'ip': 'xxx.xx.xxx.17'},
{'count': 2, 'ip': 'xxx.xx.xxx.18'},
{'count': 19, 'ip': 'xxx.xx.xxx.19'},
{'count': 21, 'ip': 'xxx.xx.xxx.20'},
{'count': 25, 'ip': 'xxx.xx.xxx.21'},
{'count': 31, 'ip': 'xxx.xx.xxx.22'},
{'count': 43, 'ip': 'xxx.xx.xxx.23'},
{'count': 46, 'ip': 'xxx.xx.xxx.24'},
{'count': 80, 'ip': 'xxx.xx.xxx.25'},
{'count': 91, 'ip': 'xxx.xx.xxx.26'},
{'count': 90, 'ip': 'xxx.xx.xxx.27'},
{'count': 10, 'ip': 'xxx.xx.xxx.28'},
{'count': 3, 'ip': 'xxx.xx.xxx.29'}]
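
MRAB's four steps translate almost line-for-line into dict/set operations. A sketch using a trimmed-down copy of the data (the full output1/output2 above work the same way):

```python
output1 = [
    {'count': 3,   'ip': 'xxx.xx.xxx.1'},
    {'count': 212, 'ip': 'hostname1'},
    {'count': 3,   'ip': 'xxx.xx.xxx.11'},
]
output2 = (
    {'hostname': 'INNCHN01',  'ip_addr': 'xxx.xx.xxx.11'},
    {'hostname': 'hostname1', 'ip_addr': 'n/a'},
)

# 1. dict mapping ip -> count
counts = {d['ip']: d['count'] for d in output1}

# 2. set of every hostname and ip_addr appearing in output2
known = ({d['hostname'] for d in output2} |
         {d['ip_addr'] for d in output2})

# 3. intersection: entries of output1 also present in output2
for ip in counts.keys() & known:
    print(ip, counts[ip])

# 4. difference: entries of output1 missing from output2
for ip in counts.keys() - known:
    print(ip, counts[ip])
```

Because dict key views support set operators directly, this avoids the nested loops entirely and does each lookup in constant time.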

Re: Efficency help for a Calculator Program

2013-10-03 Thread Chris Angelico
On Fri, Oct 4, 2013 at 9:15 AM, Dennis Lee Bieber  wrote:
> On Thu, 3 Oct 2013 10:25:47 +1000, Chris Angelico 
> declaimed the following:
>
>>On Thu, Oct 3, 2013 at 9:47 AM, Dennis Lee Bieber  
>>wrote:
>>> try:
>>> numItems = int(raw_input("\n\nHow many values? "))
>>> except: #naked exception is not really good programming
>>> print "Invalid input, exiting..."
>>> sys.exit(1)
>>
>>Please don't _ever_ advocate this programming style! Wrapping
>>something in a try/except that emits a generic message and terminates
>>is a bad idea - the default behaviour, if you simply let the exception
>>happen, is to emit a very useful message and terminate. Never test for
>>any error condition you're not prepared to handle, as the BOFH advised
>>his boss.
>>
> Note: I DID include a comment that this was NOT good style.

You mentioned that bare except is a problem; I'm more looking at the
fact that the except clause simply writes a message and terminates.
They're two separate issues, both bad style.

I know _you_ know it's bad style; but someone reading over this needs
to be aware that this shouldn't normally be done.
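
To make both issues concrete (the function names here are mine, a sketch rather than anyone's actual code):

```python
import sys

def ask_count_bad(text):
    # Anti-pattern on both counts: the bare except hides *every*
    # error (even a typo inside this very function), and the handler
    # discards the useful traceback Python would have printed for free.
    try:
        return int(text)
    except:
        print("Invalid input, exiting...")
        sys.exit(1)

def ask_count_better(text):
    # Catch only the error you expect and can actually handle;
    # anything unexpected still propagates with a full traceback.
    try:
        return int(text)
    except ValueError:
        return None   # let the caller decide what "invalid" means

print(ask_count_better("42"))    # 42
print(ask_count_better("oops"))  # None
```

The second version keeps the decision about recovery with the caller, instead of unconditionally killing the process.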

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list