Re: Ubuntu package "python3" does not include tkinter

2013-04-23 Thread Rui Maciel
Chris Angelico wrote:

> 30 years ago, people weren't using Tk. 

And after 30 years have gone by, some people still don't use Tk, let alone 
Tkinter.  There is absolutely no reason to force them to install it if 
they don't need it.


> We've moved on beyond worrying about the odd kilobyte of space.

That must be the reason why you are the only one complaining about that.


Rui Maciel

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Running simultaneuos "FOR" loops

2013-04-23 Thread inshu chauhan
Yes, "simultaneously" means all three running at the same time. I looked up
zip just now, but will it not disturb my dictionaries?
And yes, the dictionaries have the same number of keys.

thanks


On Tue, Apr 23, 2013 at 12:16 PM, Chris Angelico  wrote:

> On Tue, Apr 23, 2013 at 4:40 PM, inshu chauhan 
> wrote:
> > i have to implement the below line in one of my code:
> >
> > for  p in sorted(segments.iterkeys()) and for k in
> > sorted(class_count.iterkeys()) and for j in
> sorted(pixel_count.iterkeys()):
> >
> > Its giving me a syntax error which is obvious, but how can I make all
> three
> > for loop run simultaneously or any other way to do this simultaneous work
> > ???
>
> Define simultaneously. Do the three dictionaries have the same number
> of keys? If so, look up zip() or itertools.izip; if not, you may have
> to more clearly define "simultaneous".
>
> ChrisA
> --
> http://mail.python.org/mailman/listinfo/python-list
>


Re: Ubuntu package "python3" does not include tkinter

2013-04-23 Thread Rui Maciel
Steven D'Aprano wrote:

> No, the job of the package system is to manage dependencies. It makes no
> guarantee about whether or not something will "work".

The purpose of establishing dependencies is to guarantee that once a 
software package is installed, all the necessary components needed for it to 
run properly are already present in the system or can be installed 
automatically.

http://en.wikipedia.org/wiki/Dependency_hell


Rui Maciel


Re: List Count

2013-04-23 Thread Blind Anagram
On 23/04/2013 00:28, Steven D'Aprano wrote:
> On Mon, 22 Apr 2013 22:25:50 +0100, Blind Anagram wrote:
> 
>> I have looked at solutions based on listing primes and here I have found
>> that they are very much slower than my existing solution when the sieve
>> is not large (which is the majority use case).
> 
> Yes. This is hardly surprising. Algorithms suitable for dealing with the 
> first million primes are not suitable for dealing with the first trillion 
> primes, and vice versa. We like to pretend that computer programming is 
> an abstraction, and for small enough data we often can get away with 
> that, but like all abstractions eventually it breaks and the cost of 
> dealing with real hardware becomes significant.
> 
> But I must ask, given that the primes are so widely distributed, why are 
> you storing them in a list instead of a sparse array (i.e. a dict)? There 
> are 50,847,534 primes less than or equal to 1,000,000,000, so you are 
> storing roughly 18 False values for every True value. That ratio will 
> only get bigger. With a billion entries, you are using 18 times more 
> memory than necessary.

Because the majority use case for my Prime class is for a sieve that is
not large.  I am just pushing the envelope for a minority use case so
that it still works for huge sieves, albeit inefficiently.

I accept that it is inefficient, but the fact remains that I can produce a
sieve that can yield and count a billion primes in a reasonable time.
This fails, however, when I wish to count on only part of the sieve,
because the lack of a list.count(value, limit) function can double the
memory requirement.

I would not dream of doing this job by copying a slice in any other
language that I have used so I was simply asking for advice to discover
whether this copy could be avoided whilst staying with the simple sieve
design.
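
For what it's worth, the copy can be avoided with itertools.islice; a
sketch (the count_prefix helper name is my own, and this is untested at
the billion-item scale, where per-item overhead will matter):

```python
import itertools

def count_prefix(lst, value, limit):
    """Count occurrences of value in lst[:limit] without copying the slice."""
    return sum(1 for x in itertools.islice(lst, limit) if x == value)

sieve = [True, False, True, True, False, True]
# Same result as sieve[:4].count(True), but no temporary list is built.
assert count_prefix(sieve, True, 4) == sieve[:4].count(True) == 3
```

This trades the slice's memory cost for a slower pure-Python loop, so it
only pays off when the slice itself would not fit comfortably in memory.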

Thank you for your input.

   Brian



Re: Running simultaneuos "FOR" loops

2013-04-23 Thread Gary Herron

On 04/22/2013 11:40 PM, inshu chauhan wrote:

i have to implement the below line in one of my code:

for  p in sorted(segments.iterkeys()) and for k in 
sorted(class_count.iterkeys()) and for j in 
sorted(pixel_count.iterkeys()):


Its giving me a syntax error which is obvious, but how can I make all 
three for loop run simultaneously or any other way to do 
this simultaneous work ???





Be clearer about the problem please.

Do you wish to produce a loop where:
  on pass 1, each of p, k, and j holds the first item of its respective
list, and
  on pass 2, each of p, k, and j holds the second item of its respective
list, and
  so on,
until one (or all) lists run out?

If that is what you want, then check out the zip builtin function. But 
also consider this: do you care what happens if one list runs out 
before the others?




Or is it something else you want?  Perhaps nested loops?
  for p in sorted(segments.iterkeys()):
      for k in sorted(class_count.iterkeys()):
          for j in sorted(pixel_count.iterkeys()):
              # This will be run with all possible combinations of p, k, and j.


Gary Herron






Re: Running simultaneuos "FOR" loops

2013-04-23 Thread inshu chauhan
zip isn't doing what's required.


On Tue, Apr 23, 2013 at 12:28 PM, inshu chauhan wrote:

> Yes, "simultaneously" means all three running at the same time. I looked up
> zip just now, but will it not disturb my dictionaries?
> And yes, the dictionaries have the same number of keys.
>
> thanks
>
>
> On Tue, Apr 23, 2013 at 12:16 PM, Chris Angelico  wrote:
>
>> On Tue, Apr 23, 2013 at 4:40 PM, inshu chauhan 
>> wrote:
>> > i have to implement the below line in one of my code:
>> >
>> > for  p in sorted(segments.iterkeys()) and for k in
>> > sorted(class_count.iterkeys()) and for j in
>> sorted(pixel_count.iterkeys()):
>> >
>> > Its giving me a syntax error which is obvious, but how can I make all
>> three
>> > for loop run simultaneously or any other way to do this simultaneous
>> work
>> > ???
>>
>> Define simultaneously. Do the three dictionaries have the same number
>> of keys? If so, look up zip() or itertools.izip; if not, you may have
>> to more clearly define "simultaneous".
>>
>> ChrisA
>> --
>> http://mail.python.org/mailman/listinfo/python-list
>>
>
>


Re: List Count

2013-04-23 Thread Blind Anagram
On 23/04/2013 02:47, Dave Angel wrote:
> On 04/22/2013 05:32 PM, Blind Anagram wrote:
>> On 22/04/2013 22:03, Oscar Benjamin wrote:
>>> On 22 April 2013 21:18, Oscar Benjamin 
>>> wrote:
 On 22 April 2013 17:38, Blind Anagram  wrote:
> On 22/04/2013 17:06, Oscar Benjamin wrote:
>
>> I don't know what your application is but I would say that my first
>> port of call here would be to consider a different algorithmic
>> approach. An obvious question would be about the sparsity of this
>> data
>> structure. How frequent are the values that you are trying to count?
>> Would it make more sense to store a list of their indices?
>
> Actually it is no more than a simple prime sieve implemented as a
> Python
> class (and, yes, I realize that there are plenty of these around).

 If I understand correctly, you have a list of roughly a billion
 True/False values indicating which integers are prime and which are
 not. You would like to discover how many prime numbers there are
 between two numbers a and b. You currently do this by counting the
 number of True values in your list between the indices a and b.

 If my description is correct then I would definitely consider using a
 different algorithmic approach. The density of primes from 1 to 1
 billion is about 5%. Storing the prime numbers themselves in a sorted
 list would save memory and allow a potentially more efficient way of
 counting the number of primes within some interval.
>>>
>>> In fact it is probably quicker if you don't mind using all that memory
>>> to just store the cumulative sum of your prime True/False indicator
>>> list. This would be the prime counting function pi(n). You can then
>>> count the primes between a and b in constant time with pi[b] - pi[a].
>>
>> I did wonder whether, after creating the sieve, I should simply go
>> through the list and replace the True values with a count.  This would
>> certainly speed up the prime count function, which is where the issue
>> arises.  I will try this and see what sort of performance trade-offs
>> this involves.
>>
> 
> By doing that replacement, you'd increase memory usage manyfold (maybe
> 3:1, I don't know).  As long as you're only using bools in the list, you
> only have the list overhead to consider, because all the objects
> involved are already cached (True and False exist only once each).  If
> you have integers, you'll need a new object for each nonzero count.

Thank you, Dave, you have answered a question that I was going to ask
before I even asked it!
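
For the record, the cumulative-count idea can be prototyped with Python 3's
itertools.accumulate; a sketch on a toy sieve, not the real class:

```python
import itertools

sieve = [True, False, True, True, False, True, False]
# pi[i] = number of True entries in sieve[:i+1] (a running count of primes)
pi = list(itertools.accumulate(int(v) for v in sieve))
# Primes in the half-open range (a, b] are then counted in constant time.
a, b = 1, 5
assert pi[b] - pi[a] == sieve[a + 1:b + 1].count(True) == 3
```

As Dave notes, each distinct count is its own int object, so this costs
more memory than a list of shared True/False singletons.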




Re: Running simultaneuos "FOR" loops

2013-04-23 Thread inshu chauhan
Thanks Gary.



>
> Be clearer about the problem please.
>
> Do you wish to produce a loop that:
>   on pass 1, each of p, k, and j holds the first item of its respective
> list, and
>   on pass 2, each of p, k, and j holds the second item of its respective
> list, and
>   so on,
> until one (or all) lists run out?
>

Yes, this is exactly what I want: each variable holds the corresponding
item on each pass.

>
> If that is what you want, then check out the zip builtin function.  But
> also consider this:  Do you care what happens if one list runs out before
> the others?
>

Yes, but all dictionaries have the same number of items.

>
> Or is it something else you want?  Perhaps nested loops?
>   for  p in sorted(segments.iterkeys()):
>   for k in sorted(class_count.iterkeys()):
>   for j in sorted(pixel_count.iterkeys()):
>  # This will be run with all possible combinations of p, k, and
> j
>

No, I know about nested loops but I don't want that, because all the loops
have the same number of items; the inner loops would run out earlier.


>
> Gary Herron
>
>
>
>
>
> --
> http://mail.python.org/mailman/listinfo/python-list
>
>


Re: List Count

2013-04-23 Thread Blind Anagram
On 23/04/2013 00:01, Steven D'Aprano wrote:
> On Mon, 22 Apr 2013 15:15:19 +0100, Blind Anagram wrote:
> 
>> But when using a sub-sequence, I do suffer a significant reduction in
>> speed for a count when compared with count on the full list.  When the
>> list is small enough not to cause memory allocation issues this is about
>> 30% on 100,000,000 items.  But when the list is 1,000,000,000 items, OS
>> memory allocation becomes an issue and the cost on my system rises to
>> over 600%.
> 
> Buy more memory :-)
> 
> 
>> I agree that this is not a big issue but it seems to me a high price to
>> pay for the lack of a sieve.count(value, limit), which I feel is a
>> useful function (given that memoryview operations are not available for
>> lists).
> 
> There is no need to complicate the count method for such a specialised 
> use-case. A more general solution would be to provide list views. 
> 
> Another solution might be to use arrays rather than lists. Since your 
> sieve list is homogeneous, you could possibly use an array of 1 or 0 
> bytes rather than a list of True or False bools. That would reduce the 
> memory overhead by a factor of four, and similarly reduce the overhead of 
> any copying:

I did a lot of work comparing the overall performance of the sieve when
using either lists or arrays and I found that lists were a lot faster
for the majority use case when the sieve is not large.

   Brian



Re: Running simultaneuos "FOR" loops

2013-04-23 Thread inshu chauhan
This statement is giving me the following error

Statement:
for p, k, j in zip(sorted(segments.iterkeys(), class_count.iterkeys(),
pixel_count.iterkeys())):

Error:
Traceback (most recent call last):
  File "C:\Users\inshu\Desktop\Training_segs_trial2.py", line 170, in

access_segments(segimage, data)
  File "C:\Users\inshu\Desktop\Training_segs_trial2.py", line 147, in
access_segments
for p, k, j in zip(sorted(segments.iterkeys(), class_count.iterkeys(),
pixel_count.iterkeys())):
TypeError: 'dictionary-keyiterator' object is not callable





On Tue, Apr 23, 2013 at 12:33 PM, inshu chauhan wrote:

> zip isn't doing what's required.
>
>
> On Tue, Apr 23, 2013 at 12:28 PM, inshu chauhan wrote:
>
>> Yes, "simultaneously" means all three running at the same time. I looked up
>> zip just now, but will it not disturb my dictionaries?
>> And yes, the dictionaries have the same number of keys.
>>
>> thanks
>>
>>
>> On Tue, Apr 23, 2013 at 12:16 PM, Chris Angelico wrote:
>>
>>> On Tue, Apr 23, 2013 at 4:40 PM, inshu chauhan 
>>> wrote:
>>> > i have to implement the below line in one of my code:
>>> >
>>> > for  p in sorted(segments.iterkeys()) and for k in
>>> > sorted(class_count.iterkeys()) and for j in
>>> sorted(pixel_count.iterkeys()):
>>> >
>>> > Its giving me a syntax error which is obvious, but how can I make all
>>> three
>>> > for loop run simultaneously or any other way to do this simultaneous
>>> work
>>> > ???
>>>
>>> Define simultaneously. Do the three dictionaries have the same number
>>> of keys? If so, look up zip() or itertools.izip; if not, you may have
>>> to more clearly define "simultaneous".
>>>
>>> ChrisA
>>> --
>>> http://mail.python.org/mailman/listinfo/python-list
>>>
>>
>>
>


Re: Ubuntu package "python3" does not include tkinter

2013-04-23 Thread Andrew Berg
On 2013.04.23 00:49, Steven D'Aprano wrote:
>  Obviously you cannot display an X window without 
> X, well duh, but merely importing tkinter doesn't require an X display.
Importing it doesn't. Doing anything useful with it, however, does. Would you 
consider the engine an optional part of a car? After all, the
radio would still work and you can put things in the glove compartment.

> We just disagree on where to break the packages up.
We disagree on what a dependency is. I say a dependency is something required 
in order to have any functionality that is not defined as
optional or extra by the author(s). You say it's anything required in order to 
initialize, even if there is little to no actual
functionality. Perhaps you are fond of hunting down components to make 
something work, but most people would expect a packaging system to
automatically install whatever is required to make the software they want to 
use do what it is supposed to. Or perhaps you had a dummy
package in mind that would automatically pull in Tcl/Tk and X and whatever else 
is required to make tkinter draw things on a screen as a
convenience. Of course, that brings us back to the OP's problem...

Since Linux distros already include whatever third-party software they see fit 
as part of their base (or have the OS installer install
whatever the user specifies during installation), why not have desktop 
configurations include tkinter by default when installing?
-- 
CPython 3.3.1 | Windows NT 6.2.9200 / FreeBSD 9.1


Re: Running simultaneuos "FOR" loops

2013-04-23 Thread Chris Angelico
On Tue, Apr 23, 2013 at 5:13 PM, inshu chauhan  wrote:
> This statement is giving me the following error
>
> Statement:
> for p, k, j in zip(sorted(segments.iterkeys(), class_count.iterkeys(),
> pixel_count.iterkeys())):

You probably want to sort them separately. By the way, using
iterkeys() isn't going to do much for you, since you're sorting them;
you need to see all the keys to sort them.

ChrisA


Re: Ubuntu package "python3" does not include tkinter

2013-04-23 Thread Chris Angelico
On Tue, Apr 23, 2013 at 4:48 PM, Rui Maciel  wrote:
> Chris Angelico wrote:
>
>> 30 years ago, people weren't using Tk.
>
> And after 30 years have gone by, some people still don't use Tk, let alone
> Tkinter.  There is absolutely no reason to force them to install it if
> they don't need it.

Agreed; my preference is GTK, when I do GUI work.

>> We've moved on beyond worrying about the odd kilobyte of space.
>
> That must be the reason why you are the only one complaining about that.

I'm not.

ChrisA


Re: Lists and arrays

2013-04-23 Thread 88888 Dihedral
On Tuesday, 23 April 2013 at 02:13:38 UTC+8, Ana Dionísio wrote:
> Hello!
>
> I need your help!
>
> I have an array and I need to pick some data from that array and put it
> in a list, for example:
>
> array = [a, b, c, 1, 2, 3]
>
> list = array[0] + array[3] + array[4]
>
> list: [a, 1, 2]
>
> When I do it like this: list = array[0] + array[3] + array[4] I get an
> error:
>
> "TypeError: unsupported operand type(s) for +: 'numpy.ndarray' and
> 'numpy.ndarray'"
>
> Can you help me?

The list type in Python is more versatile but definitely executes more
slowly than an array in C.

What I like is that maintaining the Python version is not as tedious and
painful as it is for the same programs in LISP.
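
To answer the original question: the selected elements are apparently
NumPy arrays themselves, so `+` attempts elementwise addition. Collecting
the elements into a list avoids that. A sketch with plain Python values
(for a real NumPy array, fancy indexing such as `array[[0, 3, 4]]` would
also work):

```python
array = ["a", "b", "c", 1, 2, 3]
# Build a list from the selected positions instead of adding the elements.
picked = [array[i] for i in (0, 3, 4)]
assert picked == ["a", 1, 2]
```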



 


percent faster than format()? (was: Re: optomizations)

2013-04-23 Thread Ulrich Eckhardt

On 23.04.2013 06:00, Steven D'Aprano wrote:

If it comes down to micro-optimizations to shave a few microseconds off,
consider using string % formatting rather than the format method.


Why? I don't see any obvious difference between the two...


Greetings!

Uli



Re: Running simultaneuos "FOR" loops

2013-04-23 Thread Ulrich Eckhardt

On 23.04.2013 09:13, inshu chauhan wrote:

This statement is giving me the following error

Statement:
for p, k, j in zip(sorted(segments.iterkeys(), class_count.iterkeys(),
pixel_count.iterkeys())):

Error:
Traceback (most recent call last):
   File "C:\Users\inshu\Desktop\Training_segs_trial2.py", line 170, in

 access_segments(segimage, data)
   File "C:\Users\inshu\Desktop\Training_segs_trial2.py", line 147, in
access_segments
 for p, k, j in zip(sorted(segments.iterkeys(), class_count.iterkeys(),
pixel_count.iterkeys())):
TypeError: 'dictionary-keyiterator' object is not callable


Which of the statements on that line causes the error? I guess asking 
yourself that question will lead you to the answer already! ;)


Any reason you quoted your own and several others' messages? Am I 
missing some reference there?


Good luck!

Uli



Re: percent faster than format()? (was: Re: optomizations)

2013-04-23 Thread Chris “Kwpolska” Warrick
On Tue, Apr 23, 2013 at 9:46 AM, Ulrich Eckhardt
 wrote:
> Am 23.04.2013 06:00, schrieb Steven D'Aprano:
>>
>> If it comes down to micro-optimizations to shave a few microseconds off,
>> consider using string % formatting rather than the format method.
>
>
> Why? I don't see any obvious difference between the two...
>
>
> Greetings!
>
> Uli
>
> --
> http://mail.python.org/mailman/listinfo/python-list

$ python -m timeit "a = '{0} {1} {2}'.format(1, 2, 42)"
1000000 loops, best of 3: 0.824 usec per loop
$ python -m timeit "a = '%s %s %s' % (1, 2, 42)"
10000000 loops, best of 3: 0.0286 usec per loop
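
One caveat worth checking: with literal operands, CPython may
constant-fold the `%` expression at compile time, which would inflate the
gap. Timing with runtime values gives a fairer comparison (a sketch using
the timeit module rather than the command line):

```python
import timeit

setup = "a, b, c = 1, 2, 42"
fmt = timeit.timeit("'{0} {1} {2}'.format(a, b, c)", setup=setup, number=100000)
pct = timeit.timeit("'%s %s %s' % (a, b, c)", setup=setup, number=100000)
# Both build the same string; % is usually still faster, just not 30x.
assert '{0} {1} {2}'.format(1, 2, 42) == '%s %s %s' % (1, 2, 42)
```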

--
Kwpolska  | GPG KEY: 5EAAEA16
stop html mail| always bottom-post
http://asciiribbon.org| http://caliburn.nl/topposting.html


Re: Running simultaneuos "FOR" loops

2013-04-23 Thread Duncan Booth
inshu chauhan  wrote:

> This statement is giving me the following error
> 
> Statement:
> for p, k, j in zip(sorted(segments.iterkeys(), class_count.iterkeys(),
> pixel_count.iterkeys())):
> 
> Error:
> Traceback (most recent call last):
>   File "C:\Users\inshu\Desktop\Training_segs_trial2.py", line 170, in
>
> access_segments(segimage, data)
>   File "C:\Users\inshu\Desktop\Training_segs_trial2.py", line 147, in
> access_segments
> for p, k, j in zip(sorted(segments.iterkeys(),
> class_count.iterkeys(), 
> pixel_count.iterkeys())):
> TypeError: 'dictionary-keyiterator' object is not callable
> 

The second argument to `sorted()` is a comparison or key function; if you 
want to sort all three key lists you need to sort them separately. Try:

for p, k, j in zip(sorted(segments),
   sorted(class_count),
   sorted(pixel_count)):

Also, you don't need to call the `iterkeys()` method: you need all the
keys in order to sort, and just treating the dict as a sequence will do
the right thing.
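
A runnable version of that, with made-up data:

```python
segments = {"b": 2, "a": 1}
class_count = {"b": 20, "a": 10}
pixel_count = {"b": 0.2, "a": 0.1}

for p, k, j in zip(sorted(segments),
                   sorted(class_count),
                   sorted(pixel_count)):
    # With identical key sets, p, k and j are the same key on every pass.
    print(p, segments[p], class_count[k], pixel_count[j])
```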

-- 
Duncan Booth http://kupuguy.blogspot.com


Re: Serial Port Issue

2013-04-23 Thread Chris “Kwpolska” Warrick
On Mon, Apr 22, 2013 at 11:34 AM, chandan kumar  wrote:
> Python Ver: 2.5

Old.  Please upgrade to 2.7.4 ASAP.

> ser=ser=serial.Serial(port=21,baudrate=9600)

That double `ser=` thing is not necessary.  It should only be `ser =
serial.Serial(port=21, baudrate=9600)`.

Look at Phil Birkelbach’s post for a possible solution.

--
Kwpolska  | GPG KEY: 5EAAEA16
stop html mail| always bottom-post
http://asciiribbon.org| http://caliburn.nl/topposting.html


Re: optomizations

2013-04-23 Thread Chris Angelico
On Tue, Apr 23, 2013 at 11:53 AM, Roy Smith  wrote:
> In article ,
>  Rodrick Brown  wrote:
>
>> I would like some feedback on possible solutions to make this script run
>> faster.
>
> If I had to guess, I would think this stuff:
>
>> line = line.replace('mediacdn.xxx.com', 'media.xxx.com')
>> line = line.replace('staticcdn.xxx.co.uk', '
>> static.xxx.co.uk')
>> line = line.replace('cdn.xxx', 'www.xxx')
>> line = line.replace('cdn.xxx', 'www.xxx')
>> line = line.replace('cdn.xx', 'www.xx')
>> siteurl = line.split()[6].split('/')[2]
>> line = re.sub(r'\bhttps?://%s\b' % siteurl, "", line, 1)
>
> You make 6 copies of every line.  That's slow.

One of those is a regular expression substitution, which is also
likely to be a hot-spot. But definitely profile.

ChrisA


Re: List Count

2013-04-23 Thread Oscar Benjamin
On 23 April 2013 08:05, Blind Anagram  wrote:
> On 23/04/2013 00:01, Steven D'Aprano wrote:
>> On Mon, 22 Apr 2013 15:15:19 +0100, Blind Anagram wrote:
>>
>>> But when using a sub-sequence, I do suffer a significant reduction in
>>> speed for a count when compared with count on the full list.  When the
>>> list is small enough not to cause memory allocation issues this is about
>>> 30% on 100,000,000 items.  But when the list is 1,000,000,000 items, OS
>>> memory allocation becomes an issue and the cost on my system rises to
>>> over 600%.
[snip]
>>
>> Another solution might be to use arrays rather than lists. Since your
>> sieve list is homogeneous, you could possibly use an array of 1 or 0
>> bytes rather than a list of True or False bools. That would reduce the
>> memory overhead by a factor of four, and similarly reduce the overhead of
>> any copying:
>
> I did a lot of work comparing the overall performance of the sieve when
> using either lists or arrays and I found that lists were a lot faster
> for the majority use case when the sieve is not large.

Okay, now I understand. I thought you were trying to optimise for
large lists, but in fact you have already optimised for small lists.
As a result you have made algorithmic choices that don't scale very
well. Since you're still concerned about performance on small lists
you don't want to rethink those choices. Instead you want a
micro-optimisation that would compensate for them.

Elsewhere you said:

> I would not dream of doing this job by copying a slice in any other
> language that I have used so I was simply asking for advice to discover
> whether this copy could be avoided whilst staying with the simple sieve
> design.

So you already knew that there would be problems with this method, but
you've chosen it anyway since it turned out to be fastest for small
lists. You could always just do a different thing when the list is
large:

def pi(self, n):
    if n < 100:
        return self.indicator[:n].count(True)
    else:
        return sum(itertools.islice(self.indicator, n))

However, if you really want to improve performance in computing pi(n)
for large n you should just use one of the existing algorithms having
sublinear space/time complexity. These also use evaluate pi(n) with
sieves but the sieve only needs to be as big as sqrt(n) rather than n
for the obvious method:
http://en.wikipedia.org/wiki/Prime-counting_function#Algorithms_for_evaluating_.CF.80.28x.29
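
The sorted-list idea from earlier in the thread can also count an interval
quickly with bisect; a sketch with a toy list of primes, not your sieve:

```python
import bisect

primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]  # sorted list of primes

def count_between(a, b):
    """Number of stored primes p with a <= p <= b, in O(log n) time."""
    return bisect.bisect_right(primes, b) - bisect.bisect_left(primes, a)

assert count_between(10, 20) == 4  # 11, 13, 17, 19
```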


Oscar


Re: xlrd 0.9.2 released!

2013-04-23 Thread Ondrej Ján
Hello. Can you please tell me how compatible this version is with older 
versions? In Fedora/CentOS we have versions 0.7 and 0.6. Can I release a 
Fedora/EPEL update to 0.9.2?

Thank you.


  
SAL

On Tuesday, 9 April 2013 at 21:38:30 UTC+2, Chris Withers wrote:
>
> Hi All, 
>
> I'm pleased to announce the release of xlrd 0.9.2: 
>
> http://pypi.python.org/pypi/xlrd/0.9.2 
>
> This release includes the following changes: 
>
> - Fix some packaging issues that meant docs and examples were missing 
> from the tarball. 
>
> - Fixed a small but serious regression that caused problems opening 
> .xlsx files. 
>
> If you find any problems, please ask about them on the 
> python...@googlegroups.com  list, or submit an issue on 
> GitHub: 
>
> https://github.com/python-excel/xlrd/issues 
>
> Full details of all things Python and Excel related can be found here: 
>
> http://www.python-excel.org/ 
>
> cheers, 
>
> Chris 
>
> -- 
> Simplistix - Content Management, Batch Processing & Python Consulting 
>  - http://www.simplistix.co.uk 
>


Re: xlrd 0.9.2 released!

2013-04-23 Thread Karim


Thx ! I will update my 0.6 version!

Cheers
Karim

On 09/04/2013 21:38, Chris Withers wrote:

Hi All,

I'm pleased to announce the release of xlrd 0.9.2:

http://pypi.python.org/pypi/xlrd/0.9.2

This release includes the following changes:

- Fix some packaging issues that meant docs and examples were missing 
from the tarball.


- Fixed a small but serious regression that caused problems opening 
.xlsx files.


If you find any problems, please ask about them on the 
python-ex...@googlegroups.com list, or submit an issue on GitHub:


https://github.com/python-excel/xlrd/issues

Full details of all things Python and Excel related can be found here:

http://www.python-excel.org/

cheers,

Chris





Re: List Count

2013-04-23 Thread Blind Anagram
On 23/04/2013 12:08, Oscar Benjamin wrote:
> On 23 April 2013 08:05, Blind Anagram  wrote:
>> On 23/04/2013 00:01, Steven D'Aprano wrote:
>>> On Mon, 22 Apr 2013 15:15:19 +0100, Blind Anagram wrote:
>>>
 But when using a sub-sequence, I do suffer a significant reduction in
 speed for a count when compared with count on the full list.  When the
 list is small enough not to cause memory allocation issues this is about
 30% on 100,000,000 items.  But when the list is 1,000,000,000 items, OS
 memory allocation becomes an issue and the cost on my system rises to
 over 600%.
> [snip]
>>>
>>> Another solution might be to use arrays rather than lists. Since your
>>> sieve list is homogeneous, you could possibly use an array of 1 or 0
>>> bytes rather than a list of True or False bools. That would reduce the
>>> memory overhead by a factor of four, and similarly reduce the overhead of
>>> any copying:
>>
>> I did a lot of work comparing the overall performance of the sieve when
>> using either lists or arrays and I found that lists were a lot faster
>> for the majority use case when the sieve is not large.
> 
> Okay, now I understand. I thought you were trying to optimise for
> large lists, but in fact you have already optimised for small lists.
> As a result you have made algorithmic choices that don't scale very
> well. Since you're still concerned about performance on small lists
> you don't want to rethink those choices. Instead you want a
> micro-optimisation that would compensate for them.
> 
> Elsewhere you said:
> 
>> I would not dream of doing this job by copying a slice in any other
>> language that I have used so I was simply asking for advice to discover
>> whether this copy could be avoided whilst staying with the simple sieve
>> design.
> 
> So you already knew that there would be problems with this method, but
> you've chosen it anyway since it turned out to be fastest for small
> lists. You could always just do a different thing when the list is
> large:

Your analysis of my rationale is sound except that I only found out that
I had a problem with counting in a subset of a list when I actually
tried this for a huge sieve.

It was only then that I discovered that there was no way of setting the
upper limit in list.count(x) and hence that I would have to create a
slice or find an alternative approach.

I then wondered why count for lists has no limits whereas count for
other objects (e.g. strings) has these.  I also wondered whether there
was an easy way of avoiding the slice, not that this is critical, but
rather because it is just nice to have a sieve that still actually works
for huge values, albeit inefficiently.

> def pi(self, n):
>     if n < 100:
>         return self.indicator[:n].count(True)
>     else:
>         return sum(itertools.islice(self.indicator, n))
>

I have looked at itertools.islice as a complete replacement for count(),
but on average it was a lot slower. However, I have not tried hybrid
strategies as you suggest - I'll take a look at this.

My thanks to you and others for the advice that has been offered.

   Brian



Re: Ubuntu package "python3" does not include tkinter

2013-04-23 Thread rusi
On Apr 23, 11:44 am, Rui Maciel  wrote:
> Steven D'Aprano wrote:
> > Nobody forces you to do anything. Python is open source, and the source
> > code is freely available.
>
> That goes both ways, with the added benefit that python-tkinter is already
> available in distro's official repositories.  If you want to install it, go
> for it.  Nothing stops you.  If you don't then you aren't forced to install
> half the packages in the repository just to have a python interpreter in
> your system.
>
> Rui Maciel

Collecting together what are the conflicting principles
-


1 Fail early Fail fast
2 Good error messages
3 No crap
4 A working system
   that is easily upgradable and keeps working
5 Package system permissive
 allows wide variation of package combinations
6 Package system strict
 Disallows error-prone situations/combinations
7 Easy on learners/noobs


Re: Remove some images from a mail message

2013-04-23 Thread Jason Friedman
This seemed to work.

#!/usr/bin/env python3
from email.mime.image import MIMEImage
from email.mime.text import MIMEText
from email.mime.multipart import MIMEMultipart
from email.iterators import typed_subpart_iterator
from email.generator import Generator, BytesGenerator
import email.iterators
import os
import sys
import tempfile

IGNORE_SET = set((588, 1279, 1275, 1576, 1272, 1591,))
IMAGE = "image"
TEXT = "text"
PLAIN = "plain"
SUBJECT = "Subject"
FROM = "From"
TO = "To"
DATE = "Date"
WRITE_BINARY = "wb"
READ_BINARY = "rb"

output_message_file_name = "/home/jason/resulting_message"
input_file_name = "/home/jason/example_message"
with open(input_file_name, READ_BINARY) as reader:
message_in = email.message_from_binary_file(reader)

message_out = MIMEMultipart()
header_field_list = (SUBJECT, FROM, DATE)
for field_name in header_field_list:
message_out[field_name] = message_in[field_name]
message_out[TO] = "jason"

temp_dir = tempfile.TemporaryDirectory()
for part in typed_subpart_iterator(message_in, maintype=IMAGE):
file_name = part.get_filename()
full_path = os.path.join(temp_dir.name, file_name)
with open(full_path, WRITE_BINARY) as writer:
writer.write(part.get_payload(decode=True))
if os.stat(full_path).st_size not in IGNORE_SET:
with open(full_path, READ_BINARY) as reader:
attachment = MIMEImage(reader.read())
message_out.attach(attachment)

for part in email.iterators.typed_subpart_iterator(message_in,
maintype=TEXT, subtype=PLAIN):
message_out.attach(part)

with open(output_message_file_name, WRITE_BINARY) as writer:
x = BytesGenerator(writer)
x.flatten(message_out)
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Running simultaneuos "FOR" loops

2013-04-23 Thread Dave Angel

On 04/23/2013 02:58 AM, inshu chauhan wrote:

Yes Simultaneously means all three running at the same time, I looked up
zip just now, but will it not disturb my dictionaries ?
And yes the dictionaries have same number of keys.



More crucially, do all the dictionaries have the *same* keys?  If so, 
then all the zip logic is unnecessary, as the p, k, and j values will 
always be identical.


If they have the same keys, then do something like:

for key in sorted(segments):
val1 = segments[key]
val2 = class_count[key]
val3 = pixel_count[key]
... do something with those values

If any of the keys is missing in one of the other dicts, you'll get an 
exception.  You might also want to do a check of the size of the 3 
dicts, just in case.
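For comparison, here is a minimal self-contained sketch of that lockstep 
iteration, with toy dictionaries invented for illustration (they stand in 
for the real segments/class_count/pixel_count):

```python
# toy dictionaries standing in for the real segments/class_count/pixel_count
segments = {"a": 1, "b": 2}
class_count = {"a": 10, "b": 20}
pixel_count = {"a": 100, "b": 200}

# sanity check on the sizes, as suggested above
assert len(segments) == len(class_count) == len(pixel_count)

rows = []
for key in sorted(segments):
    # one shared key indexes all three dicts; no zip() needed
    rows.append((key, segments[key], class_count[key], pixel_count[key]))
print(rows)  # -> [('a', 1, 10, 100), ('b', 2, 20, 200)]
```

Note that nothing here mutates the dictionaries, so they are not 
"disturbed" by the iteration.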



--
DaveA
--
http://mail.python.org/mailman/listinfo/python-list


Re: There must be a better way

2013-04-23 Thread Neil Cerutti
On 2013-04-22, Colin J. Williams  wrote:
> Since I'm only interested in one or two columns, the simpler
> approach is probably better.

Here's a sketch of how one of my projects handles that situation.
I think the index variables are invaluable documentation, and
make it a bit more robust. (Python 3, so not every bit is
relevant to you).

with open("today.csv", encoding='UTF-8', newline='') as today_file:
reader = csv.reader(today_file)
header = next(reader)
majr_index = header.index('MAJR')
div_index = header.index('DIV')
for rec in reader:
major = rec[majr_index]
rec[div_index] = DIVISION_TABLE[major]

But a csv.DictReader might still be more efficient. I never
tested. This is the only place I've used this "optimization".
It's fast enough. ;)
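For comparison, a self-contained sketch of the csv.DictReader variant of 
the same loop; the file contents and DIVISION_TABLE below are made up for 
illustration:

```python
import csv
import io

# made-up stand-ins for the real today.csv and lookup table
today_file = io.StringIO("MAJR,DIV\nMATH,\nHIST,\n")
DIVISION_TABLE = {"MATH": "Sciences", "HIST": "Humanities"}

rows = []
for rec in csv.DictReader(today_file):
    # column access by name; no manual header.index() bookkeeping
    rec["DIV"] = DIVISION_TABLE[rec["MAJR"]]
    rows.append(rec)
print(rows[0]["DIV"], rows[1]["DIV"])  # -> Sciences Humanities
```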

-- 
Neil Cerutti
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: There must be a better way

2013-04-23 Thread Oscar Benjamin
On 23 April 2013 14:36, Neil Cerutti  wrote:
> On 2013-04-22, Colin J. Williams  wrote:
>> Since I'm only interested in one or two columns, the simpler
>> approach is probably better.
>
> Here's a sketch of how one of my projects handles that situation.
> I think the index variables are invaluable documentation, and
> make it a bit more robust. (Python 3, so not every bit is
> relevant to you).
>
> with open("today.csv", encoding='UTF-8', newline='') as today_file:
> reader = csv.reader(today_file)
> header = next(reader)

I once had a bug that took a long time to track down and was caused by
using next() without an enclosing try/except StopIteration (or the
optional default argument to next).

This is a sketch of how you can get the bug that I had:

$ cat next.py
#!/usr/bin/env python

def join(iterables):
'''Join iterable of iterables, stripping first item'''
for iterable in iterables:
iterator = iter(iterable)
header = next(iterator)  # Here's the problem
for val in iterator:
yield val

data = [
['foo', 1, 2, 3],
['bar', 4, 5, 6],
[], # Whoops! Who put this empty iterable here?
['baz', 7, 8, 9],
]

for x in join(data):
print(x)

$ ./next.py
1
2
3
4
5
6

The values 7, 8 and 9 are not printed but no error message is shown.
This is because calling next on the iterator over the empty list
raises a StopIteration that is not caught in the join generator. The
StopIteration is then "caught" by the for loop that iterates over
join() causing the loop to terminate prematurely. Since the exception
is caught and cleared by the for loop there's no practical way to get
a debugger to hook into the event that causes it.

In my case this happened somewhere in the middle of a long running
process. It was difficult to pin down what was causing this as the
iteration was over non-constant data and I didn't know what I was
looking for. As a result of the time spent fixing this I'm always very
cautious about calling next() to think about what a StopIteration
would do in context.

In this case a StopIteration is raised when reading from an empty csv file:

>>> import csv
>>> with open('test.csv', 'w'): pass
...
>>> with open('test.csv') as csvfile:
... reader = csv.reader(csvfile)
... header = next(reader)
...
Traceback (most recent call last):
  File "", line 3, in 
StopIteration

If that code were called from a generator then it would most likely be
susceptible to the problem I'm describing. The fix is to use
next(reader, None) or try/except StopIteration.
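A sketch of the fixed generator using next()'s default argument, so an
empty iterable is skipped instead of silently terminating the consuming
loop:

```python
def join(iterables):
    '''Join iterable of iterables, stripping the first item of each.'''
    sentinel = object()
    for iterable in iterables:
        iterator = iter(iterable)
        if next(iterator, sentinel) is sentinel:
            continue  # empty iterable: nothing to strip, nothing to yield
        for val in iterator:
            yield val

data = [['foo', 1, 2, 3], ['bar', 4, 5, 6], [], ['baz', 7, 8, 9]]
result = list(join(data))
print(result)  # -> [1, 2, 3, 4, 5, 6, 7, 8, 9] -- 7, 8 and 9 now appear
```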


Oscar
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: There must be a better way

2013-04-23 Thread Tim Chase
On 2013-04-23 13:36, Neil Cerutti wrote:
> On 2013-04-22, Colin J. Williams  wrote:
> > Since I'm only interested in one or two columns, the simpler
> > approach is probably better.
> 
> Here's a sketch of how one of my projects handles that situation.
> I think the index variables are invaluable documentation, and
> make it a bit more robust. (Python 3, so not every bit is
> relevant to you).
> 
> with open("today.csv", encoding='UTF-8', newline='') as today_file:
> reader = csv.reader(today_file)
> header = next(reader)
> majr_index = header.index('MAJR')
> div_index = header.index('DIV')
> for rec in reader:
> major = rec[majr_index]
> rec[div_index] = DIVISION_TABLE[major]
> 
> But a csv.DictReader might still be more efficient. I never
> tested. This is the only place I've used this "optimization".
> It's fast enough. ;)

I believe the csv module does all the work at c-level, rather than
as  pure Python, so it should be notably faster.  The only times I've
had to do things by hand like that are when there are header
peculiarities that I can't control, such as mismatched case or
added/remove punctuation (client files are notorious for this).  So I
often end up doing something like

  def normalize(header):
return header.strip().upper() # other cleanup as needed

  reader = csv.reader(f)
  headers = next(reader)
  header_map = dict(
(normalize(header), i)
for i, header
in enumerate(headers)
)
  item = lambda col: row[header_map[col]].strip()
  for row in reader:
major = item("MAJR").upper()
division = item("DIV")
# ...

The function calling might add overhead, in which case one could
just use explicit indirect indexing for each value assignment:

  major = row[header_map["MAJR"]].strip().upper()

but I usually find that processing CSV files leaves me I/O bound
rather than CPU bound.

-tkc



-- 
http://mail.python.org/mailman/listinfo/python-list


distutils and libraries

2013-04-23 Thread Nick Gnedin


Folks,

I would like to install a Python module from a complete library. So, my 
question: if I already have a fully built Python module libMyModule.so, 
is there a way to use setup.py to just install it, skipping the build step?


Here are details if needed:

My build process consists of 2 steps - first, I build a static library 
libtemp.a (using CMake) that depends on 3rd party software. From that 
library I build a python module by compiling the file my_py.cpp that 
contains PyInit_MyModule function etc for proper module initialization.


I can build that module in two ways: by using CMake or distutils. CMake 
builds the module properly, finding all dependencies, and when I install 
it manually, everything works just fine - but then the problem is that 
it has to be installed manually. With distutils, when I use


module1 = Extension('ifrit',
libraries = ['temp'],
library_dirs = ['.'],
sources = ['my_py.cpp'])

the module is built and installed, but when I import it, it does not 
find the 3rd party libraries and complains about undefined symbols.


Adding all 3rd party library paths to setup.py is not an option - those 
can be installed by a user anywhere. So, ideally, I would like to do 
something like that


module1 = Extension('ifrit',
libraries = ['MyModule'],
sources = [])

where libMyModule.so is a complete Python module built by CMake, but 
that does not work because setup.py still tries to build the module from 
an already existing complete module, and just creates an empty library.


So, my question again: if I already have a fully built Python module 
libMyModule.so, is there a way to use setup.py to just install it, 
skipping the build step?


Many thanks for any help,

Nick Gnedin




--
http://mail.python.org/mailman/listinfo/python-list


Re: There must be a better way

2013-04-23 Thread Skip Montanaro
> But a csv.DictReader might still be more efficient.

Depends on what efficiency you care about.  The DictReader class is
implemented in Python, and builds a dict for every row.  It will never
be more efficient CPU-wise than instantiating the csv.reader type
directly and only doing what you need.

OTOH, the DictReader class "just works" and its usage is more obvious
when you come back later to modify your code.  It also makes the code
insensitive to column ordering (though yours seems to be as well, if
I'm reading it correctly).  On the programmer efficiency axis, I score
the DictReader class higher than the reader type.

A simple test:

##
import csv
from timeit import Timer

setup = '''import csv
lst = ["""a,b,c,d,e,f,g"""]
lst.extend(["""05:38:24,0.6326,1,0,1.0,0.0,0.0"""] * 100)
reader = csv.reader(lst)
dreader = csv.DictReader(lst)
'''

t1 = Timer("for row in reader: pass", setup)
t2 = Timer("for row in dreader: pass", setup)

print(min(t1.repeat(number=10)))
print(min(t2.repeat(number=10)))
###

demonstrates that the raw reader is, indeed, much faster than the DictReader:

0.972723007202
8.29047989845

but that's for the basic iteration.  Whatever you need to add to the
raw reader to insulate yourself from changes to the structure of the
CSV file and improve readability will slow it down, while the
DictReader will never be worse than the above.
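One possible middle ground, not benchmarked here, is to keep the raw
reader but map each row to a namedtuple built from the header, trading a
little of the raw reader's speed for DictReader-like readability (the CSV
data below is a toy stand-in):

```python
import csv
import io
from collections import namedtuple

# toy data standing in for a real CSV file
data = io.StringIO("a,b,c\n05:38:24,0.6326,1\n")
reader = csv.reader(data)
Row = namedtuple("Row", next(reader))  # build the row type from the header

rows = [Row._make(fields) for fields in reader]
print(rows[0].a, rows[0].c)  # -> 05:38:24 1
```

This stays insensitive to column ordering, like DictReader, while
building one lightweight tuple per row instead of a dict.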

Skip
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: percent faster than format()? (was: Re: optomizations)

2013-04-23 Thread Steven D'Aprano
On Tue, 23 Apr 2013 09:46:53 +0200, Ulrich Eckhardt wrote:

> Am 23.04.2013 06:00, schrieb Steven D'Aprano:
>> If it comes down to micro-optimizations to shave a few microseconds
>> off, consider using string % formatting rather than the format method.
> 
> Why? I don't see any obvious difference between the two...


Possibly the state of the art has changed since then, but some years ago 
% formatting was slightly faster than the format method. Let's try it and 
see:

# Using Python 3.3.

py> from timeit import Timer
py> setup = "a = 'spam'; b = 'ham'; c = 'eggs'"
py> t1 = Timer("'%s, %s and %s for breakfast' % (a, b, c)", setup)
py> t2 = Timer("'{}, {} and {} for breakfast'.format(a, b, c)", setup)
py> print(min(t1.repeat()))
0.8319804421626031
py> print(min(t2.repeat()))
1.2395259491167963


Looks like the format method is about 50% slower.



-- 
Steven
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: List Count

2013-04-23 Thread Steven D'Aprano
On Tue, 23 Apr 2013 08:05:53 +0100, Blind Anagram wrote:

> I did a lot of work comparing the overall performance of the sieve when
> using either lists or arrays and I found that lists were a lot faster
> for the majority use case when the sieve is not large.

And when the sieve is large?

I don't actually believe that the bottleneck is the cost of taking a list 
slice. That's pretty fast, even for huge lists, and all efforts to skip 
making a copy by using itertools.islice actually ended up slower. But 
suppose it is the bottleneck. Then *sooner or later* arrays will win over 
lists, simply because they're smaller.

Of course, "sooner or later" might be much later.

I expect that you will not find a single algorithm, or data structure, 
that works optimally for both small and huge inputs. In general, there 
are two strategies you might take:


1) Use an algorithm or data structure which is efficient for small inputs 
with small inputs, and after some cut-off size, swap to a different 
algorithm which is efficient for large inputs. That swap over may require 
a one-off conversion cost, but provided your sieve never shrinks, this 
may not matter.


2) Use only the algorithm for large inputs. For small inputs, the 
difference between the two is insignificant in absolute terms (who cares 
if the operation takes 5ms instead of 1ms?), but for large N, there is a 
clear winner.

There's nothing that says you're limited to two algorithms. You may find 
that to really optimize things, you need three or more algorithms, each 
one optimized for a particular subset of inputs. Of course, all this 
added complexity is itself very costly. Is it worth it?
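Strategy 1 might be sketched like this; the cut-off value and the byte
array representation are invented for illustration, and a real cut-off
would have to be measured on actual workloads:

```python
from array import array

CUTOFF = 1000000  # invented threshold; a real cut-off would be measured

def make_sieve(n):
    if n < CUTOFF:
        return [True] * n        # list: faster for the small, common case
    return array('b', [1]) * n   # array: one byte per flag for huge sieves

print(type(make_sieve(10)).__name__, type(make_sieve(CUTOFF)).__name__)
# -> list array
```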



-- 
Steven
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: percent faster than format()? (was: Re: optomizations)

2013-04-23 Thread Chris Angelico
On Wed, Apr 24, 2013 at 12:36 AM, Steven D'Aprano
 wrote:
> # Using Python 3.3.
>
> py> from timeit import Timer
> py> setup = "a = 'spam'; b = 'ham'; c = 'eggs'"
> py> t1 = Timer("'%s, %s and %s for breakfast' % (a, b, c)", setup)
> py> t2 = Timer("'{}, {} and {} for breakfast'.format(a, b, c)", setup)
> py> print(min(t1.repeat()))
> 0.8319804421626031
> py> print(min(t2.repeat()))
> 1.2395259491167963
>
>
> Looks like the format method is about 50% slower.

Figures on my hardware are (naturally) different, with a similar (but
slightly more pronounced) difference:

>>> sys.version
'3.3.0 (v3.3.0:bd8afb90ebf2, Sep 29 2012, 10:55:48) [MSC v.1600 32 bit (Intel)]'
>>> print(min(t1.repeat()))
1.4841416995735415
>>> print(min(t2.repeat()))
2.5459869899666074
>>> t3 = Timer("a+', '+b+' and '+c+' for breakfast'", setup)
>>> print(min(t3.repeat()))
1.5707538248576327
>>> t4 = Timer("''.join([a, ', ', b, ' and ', c, ' for breakfast'])", setup)
>>> print(min(t4.repeat()))
1.5026834416105999

So on the face of it, format() is slower than everything else by a
good margin... until you note that repeat() is doing one million
iterations, so those figures are effectively in microseconds. Yeah, I
think I can handle a couple of microseconds.

ChrisA
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: There must be a better way (correction)

2013-04-23 Thread Tim Chase
On 2013-04-23 09:30, Tim Chase wrote:
> > But a csv.DictReader might still be more efficient. I never
> > tested. This is the only place I've used this "optimization".
> > It's fast enough. ;)
> 
> I believe the csv module does all the work at c-level, rather than
> as  pure Python, so it should be notably faster.

A little digging shows that csv.DictReader is pure Python, using the
underlying _csv.reader which is written in C for speed.

-tkc



-- 
http://mail.python.org/mailman/listinfo/python-list


Finding referents with Gdb

2013-04-23 Thread Dave Butler
with gdb, can you find referents of an object given an object id?
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: percent faster than format()?

2013-04-23 Thread Ulrich Eckhardt

Am 23.04.2013 10:26, schrieb Chris “Kwpolska” Warrick:

On Tue, Apr 23, 2013 at 9:46 AM, Ulrich Eckhardt
 wrote:

Am 23.04.2013 06:00, schrieb Steven D'Aprano:


If it comes down to micro-optimizations to shave a few microseconds off,
consider using string % formatting rather than the format method.



Why? I don't see any obvious difference between the two...

[...]


$ python -m timeit "a = '{0} {1} {2}'.format(1, 2, 42)"
100 loops, best of 3: 0.824 usec per loop
$ python -m timeit "a = '%s %s %s' % (1, 2, 42)"
1000 loops, best of 3: 0.0286 usec per loop



Well, I don't question that for at least some CPython implementations 
one is faster than the other. I don't see a reason why one must be 
faster than the other though. In other words, I don't understand where 
the other one needs more time to achieve basically the same. To me, the 
only difference is the syntax, but not greatly so.


So again, why is one faster than the other? What am I missing?

Uli

--
http://mail.python.org/mailman/listinfo/python-list


Re: percent faster than format()?

2013-04-23 Thread Lele Gaifax
Ulrich Eckhardt  writes:

> So again, why is one faster than the other? What am I missing?

The .format() syntax is actually a method call, and that alone carries some
overhead. Even optimizing the lookup may give a little advantage:

>>> from timeit import Timer
>>> setup = "a = 'spam'; b = 'ham'; c = 'eggs'"
>>> t1 = Timer("'%s, %s and %s for breakfast' % (a, b, c)", setup)
>>> t2 = Timer("'{}, {} and {} for breakfast'.format(a, b, c)", setup)
>>> print(min(t1.repeat()))
>>> print(min(t2.repeat()))
>>> setup = "a = 'spam'; b = 'ham'; c = 'eggs'; f = '{}, {} and {} for breakfast'.format"
>>> t3 = Timer("f(a, b, c)", setup)
>>> print(min(t3.repeat()))
0.3076407820044551
0.44008257299719844
0.418146252995939

But building the call frame still takes its bit of time.

ciao, lele.
-- 
nickname: Lele Gaifax | Quando vivrò di quello che ho pensato ieri
real: Emanuele Gaifas | comincerò ad aver paura di chi mi copia.
l...@metapensiero.it  | -- Fortunato Depero, 1929.

-- 
http://mail.python.org/mailman/listinfo/python-list


Nested iteration?

2013-04-23 Thread Roy Smith
In reviewing somebody else's code today, I found the following
construct (eliding some details):

f = open(filename)
for line in f:
if re.search(pattern1, line):
outer_line = f.next()
for inner_line in f:
if re.search(pattern2, inner_line):
inner_line = f.next()

Somewhat to my surprise, the code worked.  I didn't know it was legal
to do nested iterations over the same iterable (not to mention mixing
calls to next() with for-loops).  Is this guaranteed to work in all
situations?
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Selenium Webdriver + Python (How to get started ??)

2013-04-23 Thread Santi
On Monday, April 22, 2013 8:24:40 AM UTC-4, arif7...@gmail.com wrote:
> Note that:- I have some experience of using Selenium IDE and Webdriver
> (Java). but no prior experience of Python.
>
> Now there is a project for which I will need to work with webdriver + Python.
>
> So far I have done following steps..
>
> Install JDK
> Setup Eclipse
> download & Installed Python v3.3.1
> Download & Installed Pydev (for Eclipse) also configured
> download & installed (Distribute + PIP)
> http://www.lfd.uci.edu/~gohlke/pythonlibs/#pip
> Installed Selenium using command prompt
>
> Running following commands from windows 7 command prompt, successfully opens
> firefox browser
>
> python
> >>>from selenium import webdriver
> >>>webdriver.Firefox()
>
> ISSUE is that, I do not know exact steps of creating a python webdriver test
> project.
>
> I create new Pydev project with a "src" folder and also used sample python
> code from internet but selenium classes cannot be recognized. I have tried
> various approaches to import libraries but none seems to work. Any one can
> guide me what i need to do step by step to successfully run a simple test via
> python webdriver!! (eclipse pydev)
>
> Thank you.

I'm guessing your PyDev setup is not configured to use pip and your 
dependencies?
How about this: 
http://stackoverflow.com/questions/4631377/unresolved-import-issues-with-pydev-and-eclipse
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Nested iteration?

2013-04-23 Thread Oscar Benjamin
On 23 April 2013 16:40, Roy Smith  wrote:
> In reviewing somebody else's code today, I found the following
> construct (eliding some details):
>
> f = open(filename)
> for line in f:
> if re.search(pattern1, line):
> outer_line = f.next()
> for inner_line in f:
> if re.search(pattern2, inner_line):
> inner_line = f.next()
>
> Somewhat to my surprise, the code worked.  I didn't know it was legal
> to do nested iterations over the same iterable (not to mention mixing
> calls to next() with for-loops).  Is this guaranteed to work in all
> situations?

For Python 3 you'd need next(f) instead of f.next(). Otherwise, yes,
this works just fine with any non-restarting iterator (i.e. so that
__iter__ just returns self rather than a new iterator).

I recently posted in another thread about why it's a bad idea to call
next() without catching StopIteration though. I wouldn't accept the
code above for that reason.


Oscar
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Nested iteration?

2013-04-23 Thread Ian Kelly
On Tue, Apr 23, 2013 at 9:40 AM, Roy Smith  wrote:
> In reviewing somebody else's code today, I found the following
> construct (eliding some details):
>
> f = open(filename)
> for line in f:
> if re.search(pattern1, line):
> outer_line = f.next()
> for inner_line in f:
> if re.search(pattern2, inner_line):
> inner_line = f.next()
>
> Somewhat to my surprise, the code worked.  I didn't know it was legal
> to do nested iterations over the same iterable (not to mention mixing
> calls to next() with for-loops).  Is this guaranteed to work in all
> situations?

Yes, although the results will be different depending on whether the
iterable stores its iteration state on itself (like a file object) or
in the iterator (like a list).  In the latter case, you would simply
have two independent simultaneous iterations of the same object.  You
can replicate the same effect in the latter case though by getting an
iterator from the object and explicitly looping over the same
iterator, like so:

i = iter(range(10))
for x in i:
if x % 4 == 1:
for y in i:
if y % 4 == 3:
print("%d + %d = %d" % (x, y, x+y))
break
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Nested iteration?

2013-04-23 Thread Peter Otten
Roy Smith wrote:

> In reviewing somebody else's code today, I found the following
> construct (eliding some details):
> 
> f = open(filename)
> for line in f:
> if re.search(pattern1, line):
> outer_line = f.next()
> for inner_line in f:
> if re.search(pattern2, inner_line):
> inner_line = f.next()
> 
> Somewhat to my surprise, the code worked.  I didn't know it was legal
> to do nested iterations over the same iterable (not to mention mixing
> calls to next() with for-loops).  Is this guaranteed to work in all
> situations?

That depends on what you mean by "all". A well-behaved iterator like 
Python's file object allows mixing of for loops and next(...) calls, but 
stupid people who deserve to burn in hell sometimes do

class MyIterable:
def __iter__(self):
 reset_internal_counter()
 return self


with the consequence that every for loop implicitly resets the iterator's 
state.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Nested iteration?

2013-04-23 Thread Chris Angelico
On Wed, Apr 24, 2013 at 1:40 AM, Roy Smith  wrote:
> In reviewing somebody else's code today, I found the following
> construct (eliding some details):
>
> f = open(filename)
> for line in f:
> if re.search(pattern1, line):
> outer_line = f.next()
> for inner_line in f:
> if re.search(pattern2, inner_line):
> inner_line = f.next()
>
> Somewhat to my surprise, the code worked.  I didn't know it was legal
> to do nested iterations over the same iterable (not to mention mixing
> calls to next() with for-loops).  Is this guaranteed to work in all
> situations?

The definition of the for loop is sufficiently simple that this is
safe, with the caveat already mentioned (that __iter__ is just
returning self). And calling next() inside the loop will simply
terminate the loop if there's nothing there, so I'd not have a problem
with code like that - for instance, if I wanted to iterate over pairs
of lines, I'd happily do this:

for line1 in f:
  line2=next(f)
  print(line2)
  print(line1)

That'll happily swap pairs, ignoring any stray line at the end of the
file. Why bother catching StopIteration just to break?

ChrisA
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Nested iteration?

2013-04-23 Thread Ian Kelly
On Tue, Apr 23, 2013 at 10:21 AM, Chris Angelico  wrote:
> The definition of the for loop is sufficiently simple that this is
> safe, with the caveat already mentioned (that __iter__ is just
> returning self). And calling next() inside the loop will simply
> terminate the loop if there's nothing there, so I'd not have a problem
> with code like that - for instance, if I wanted to iterate over pairs
> of lines, I'd happily do this:
>
> for line1 in f:
>   line2=next(f)
>   print(line2)
>   print(line1)
>
> That'll happily swap pairs, ignoring any stray line at the end of the
> file. Why bother catching StopIteration just to break?

The next() there will *not* "simply terminate the loop" if it raises a
StopIteration; for loops do not catch StopIteration exceptions that
are raised from the body of the loop.  The StopIteration will continue
to propagate until it is caught or it reaches the sys.excepthook.  In
unusual circumstances, it is even possible that it could cause some
*other* loop higher in the stack to break (i.e. if the current code is
being run as a result of the next() method being called by the looping
construct).
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Nested iteration?

2013-04-23 Thread Steven D'Aprano
On Tue, 23 Apr 2013 11:40:31 -0400, Roy Smith wrote:

> In reviewing somebody else's code today, I found the following construct
> (eliding some details):
> 
> f = open(filename)
> for line in f:
> if re.search(pattern1, line):
> outer_line = f.next()
> for inner_line in f:
>   if re.search(pattern2, inner_line):
> inner_line = f.next()
> 
> Somewhat to my surprise, the code worked.  I didn't know it was legal to
> do nested iterations over the same iterable (not to mention mixing calls
> to next() with for-loops).  Is this guaranteed to work in all
> situations?


In "all" situations? No of course not, this is Python, you can write 
nasty code that explodes the *second* time you iterate over it, but not 
the first.

class Demo:
flag = False
def __iter__(self):
if self.flag:
raise RuntimeError("don't do that!")
self.flag = True
return iter([1, 2, 3])


But under normal circumstances with normal iterables, yes, it's fine. If 
the object is a sequence, like lists or strings, each for-loop is 
independent of the others:

py> s = "ab"
py> for c in s:
... for k in s:
... print c, k
...
a a
a b
b a
b b


If the object is an iterator, each loop consumes a single value:

py> it = iter("abcd")
py> for c in it:
... for k in it:
... print c, k
...
a b
a c
a d


Each time you call next(), a single value is consumed. It doesn't matter 
whether you have one for-loop calling next() behind the scenes, or ten 
loops, or you call next() yourself, the same rule applies.


-- 
Steven
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Nested iteration?

2013-04-23 Thread Ian Kelly
On Tue, Apr 23, 2013 at 10:30 AM, Ian Kelly  wrote:
> On Tue, Apr 23, 2013 at 10:21 AM, Chris Angelico  wrote:
>> The definition of the for loop is sufficiently simple that this is
>> safe, with the caveat already mentioned (that __iter__ is just
>> returning self). And calling next() inside the loop will simply
>> terminate the loop if there's nothing there, so I'd not have a problem
>> with code like that - for instance, if I wanted to iterate over pairs
>> of lines, I'd happily do this:
>>
>> for line1 in f:
>>   line2=next(f)
>>   print(line2)
>>   print(line1)
>>
>> That'll happily swap pairs, ignoring any stray line at the end of the
>> file. Why bother catching StopIteration just to break?
>
> The next() there will *not* "simply terminate the loop" if it raises a
> StopIteration; for loops do not catch StopIteration exceptions that
> are raised from the body of the loop.  The StopIteration will continue
> to propagate until it is caught or it reaches the sys.excepthook.  In
> unusual circumstances, it is even possible that it could cause some
> *other* loop higher in the stack to break (i.e. if the current code is
> being run as a result of the next() method being called by the looping
> construct).

To expand on this, the prevailing wisdom here is that calls to next()
should always be guarded with a StopIteration exception handler.  The
one exception to this is when the next() call is inside the body of a
generator function, and the exception handler would cause the
generator to exit anyway; in that case there is little difference
between "except StopIteration: return" and letting the StopIteration
propagate to the generator object.
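A sketch of that guarded pattern, using next()'s default argument so an
odd trailing item is dropped without letting StopIteration escape the
generator:

```python
def pairs(iterable):
    _missing = object()
    it = iter(iterable)
    for first in it:
        second = next(it, _missing)  # guard: no StopIteration can escape
        if second is _missing:
            break                    # odd item at the end: ignore it
        yield first, second

print(list(pairs([1, 2, 3, 4, 5])))  # -> [(1, 2), (3, 4)]; trailing 5 dropped
```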
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Nested iteration?

2013-04-23 Thread Chris Angelico
On Wed, Apr 24, 2013 at 2:30 AM, Ian Kelly  wrote:
> On Tue, Apr 23, 2013 at 10:21 AM, Chris Angelico  wrote:
>> The definition of the for loop is sufficiently simple that this is
>> safe, with the caveat already mentioned (that __iter__ is just
>> returning self). And calling next() inside the loop will simply
>> terminate the loop if there's nothing there, so I'd not have a problem
>> with code like that - for instance, if I wanted to iterate over pairs
>> of lines, I'd happily do this:
>>
>> for line1 in f:
>>   line2=next(f)
>>   print(line2)
>>   print(line1)
>>
>> That'll happily swap pairs, ignoring any stray line at the end of the
>> file. Why bother catching StopIteration just to break?
>
> The next() there will *not* "simply terminate the loop" if it raises a
> StopIteration; for loops do not catch StopIteration exceptions that
> are raised from the body of the loop.  The StopIteration will continue
> to propagate until it is caught or it reaches the sys.excepthook.  In
> unusual circumstances, it is even possible that it could cause some
> *other* loop higher in the stack to break (i.e. if the current code is
> being run as a result of the next() method being called by the looping
> construct).

Ah, whoops, my bad. This is what I get for not checking. I know I've
done weird stuff with for loops before, but I guess it was fiddling
inside the top of it, not in its body.

I love this list. If I make a mistake, it's sure to be caught by
someone else. The record is guaranteed to be set straight. Thanks Ian!

ChrisA
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Nested iteration?

2013-04-23 Thread Steven D'Aprano
On Wed, 24 Apr 2013 02:42:41 +1000, Chris Angelico wrote:

> I love this list. If I make a mistake, it's sure to be caught by someone
> else.

No it's not!


Are-you-here-for-the-five-minute-argument-or-the-full-ten-minutes-ly y'rs,


-- 
Steven
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: List Count

2013-04-23 Thread Blind Anagram
On 23/04/2013 15:49, Steven D'Aprano wrote:
> On Tue, 23 Apr 2013 08:05:53 +0100, Blind Anagram wrote:
> 
>> I did a lot of work comparing the overall performance of the sieve when
>> using either lists or arrays and I found that lists were a lot faster
>> for the majority use case when the sieve is not large.
> 
> And when the sieve is large?

I don't know but since the majority use case is when the sieve is small,
it makes sense to choose a list.

> I don't actually believe that the bottleneck is the cost of taking a list 
> slice. That's pretty fast, even for huge lists, and all efforts to skip 
> making a copy by using itertools.islice actually ended up slower. But 
> suppose it is the bottleneck. Then *sooner or later* arrays will win over 
> lists, simply because they're smaller.

Maybe you have not noticed that, when I am discussing a huge sieve, I am
simply pushing a sieve designed primarily for small sieve lengths to the
absolute limit.  This is most definitely a minority use case.

In pushing the size of the sieve upwards, it is the slice operation that
is the first thing that 'breaks'.  This is because the slice can be
almost as big as the primary array so the OS gets driven into memory
allocation problems for a sieve that is about half the length it would
otherwise be. It still works but the cost of the slice once this point
is reached rises from about 20% to over 600% because of all the paging
going on.

The unavailable list.count(value, limit) function would hence allow the
sieve length to be up to twice as large before running into problems and
would also cut the 20% slice cost I am seeing on smaller sieve lengths.

So, all I was doing in asking for advice was to check whether there is
an easy way of avoiding the slice copy, not because this is critical,
but rather because it is a pity to limit the performance because Python
forces a (strictly unnecessary) copy in order to perform a count within
a part of a list.

In other words, the lack of a list.count(value, limit) function makes
Python less effective than it would otherwise be.  I haven't looked at
Python's C code base but I still wonder if there is a good reason for NOT
providing this?
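For anyone following along, the no-copy alternative mentioned earlier in the
thread can be written with itertools.islice. A minimal sketch (slower than the
C-level slice-and-count in CPython, but it allocates no temporary list):

```python
from itertools import islice

# A toy sieve of booleans; in the real use case this has ~1e9 entries.
sieve = [True, False, True, True, False, True]
hi = 4

# Slicing approach: materialises a temporary list of hi elements.
count_with_copy = sieve[:hi].count(True)

# islice approach: walks the first hi elements in place, constant memory.
count_no_copy = sum(islice(sieve, hi))

print(count_with_copy, count_no_copy)  # 3 3
```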

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: pip does not find packages

2013-04-23 Thread rusi
On Apr 22, 2:03 pm, Olive  wrote:
> I am using virtualenv and pip (from archlinux). What I have done:
> virtualenv was installed by my distribution. I have made a virtual 
> environment and activate it, it has installed pip, so far so good.
>
> Now I am trying to install package in the virtualenvironnement:
>
> pip install Impacket
> Downloading/unpacking Impacket
>   Could not find any downloads that satisfy the requirement Impacket
> No distributions at all found for Impacket
>
> but Impacket is found by
> pip search Impacket
> Impacket                  - Network protocols Constructors and Dissectors
>
> exactly the same happens with pcapy. With PyGTK, the pip command just hang 
> when trying to download it. What is going on? Maybe a misconfigured server? 
> Is there anything that I can do?


There is a Google Group, http://groups.google.com/group/python-virtualenv,
for things like virtualenv/pip/wheel etc.

>
> Olive

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: List Count

2013-04-23 Thread Oscar Benjamin
On 23 April 2013 17:57, Blind Anagram  wrote:
> On 23/04/2013 15:49, Steven D'Aprano wrote:
>> On Tue, 23 Apr 2013 08:05:53 +0100, Blind Anagram wrote:
>>
>>> I did a lot of work comparing the overall performance of the sieve when
>>> using either lists or arrays and I found that lists were a lot faster
>>> for the majority use case when the sieve is not large.
>>
>> And when the sieve is large?
>
> I don't know but since the majority use case is when the sieve is small,
> it makes sense to choose a list.

That's an odd comment given what you said at the start of this thread:

Blind Anagram wrote:
> I would be grateful for any advice people can offer on the fastest way
> to count items in a sub-sequence of a large list.
>
> I have a list of boolean values that can contain many hundreds of
> millions of elements for which I want to count the number of True values
> in a sub-sequence, one from the start up to some value (say hi).


>> I don't actually believe that the bottleneck is the cost of taking a list
>> slice. That's pretty fast, even for huge lists, and all efforts to skip
>> making a copy by using itertools.islice actually ended up slower. But
>> suppose it is the bottleneck. Then *sooner or later* arrays will win over
>> lists, simply because they're smaller.
>
> Maybe you have not noticed that, when I am discussing a huge sieve, I am
> simply pushing a sieve designed primarily for a small sieve lengths to
> the absolute limit.  This is most definitely a minority use case.
>
> In pushing the size of the sieve upwards, it is the slice operation that
> is the first thing that 'breaks'.  This is because the slice can be
> almost as big as the primary array so the OS gets driven into memory
> allocation problems for a sieve that is about half the length it would
> otherwise be. It still works but the cost of the slice once this point
> is reached rises from about 20% to over 600% because of all the paging
> going on.

You keep mentioning that you want it to work with a large sieve. I
would much rather compute the same quantities with a small sieve if
possible. If you were using the Lehmer/Meissel algorithm you would be
able to compute the same quantity (i.e. pi(1e9)) using a much smaller
sieve with 30k items instead of 1e9. That would fit *very* comfortably
in memory and you wouldn't even need to slice the list. Or to put it
another way, you could compute pi(~1e18) using your current sieve
without slicing or paging. If you want to lift the limit on computing
pi(x) this is clearly the way to go.
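For the curious, here is a minimal, untuned sketch of Legendre's formula (the
simplest member of the Meissel/Lehmer family): pi(x) = phi(x, a) + a - 1 with
a = pi(sqrt(x)), so only a sieve up to sqrt(x) is ever built. The function
names below are mine, not from the poster's class:

```python
from functools import lru_cache

def small_primes(n):
    # Ordinary sieve of Eratosthenes up to n (only needed up to sqrt(x)).
    sieve = [True] * (n + 1)
    sieve[:2] = [False, False]
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return [i for i, is_p in enumerate(sieve) if is_p]

def prime_pi(x):
    # Legendre: pi(x) = phi(x, a) + a - 1, a = pi(sqrt(x)), where
    # phi(m, b) counts 1..m with no prime factor among the first b primes.
    if x < 2:
        return 0
    primes = small_primes(int(x ** 0.5))
    a = len(primes)

    @lru_cache(maxsize=None)
    def phi(m, b):
        if b == 0:
            return m
        return phi(m, b - 1) - phi(m // primes[b - 1], b - 1)

    return phi(x, a) + a - 1

print(prime_pi(10000))  # 1229
```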

>
> The unavailable list.count(value, limit) function would hence allow the
> sieve length to be up to twice as large before running into problems and
> would also cut the 20% slice cost I am seeing on smaller sieve lengths.
>
> So, all I was doing in asking for advice was to check whether there is
> an easy way of avoiding the slice copy, not because this is critical,
> but rather because it is a pity to limit the performance because Python
> forces a (strictly unnecessary) copy in order to perform a count within
> a part of a list.
>
> In other words, the lack of a list.count(value, limit) function makes
> Python less effective than it would otherwise be.  I haven't looked at
> Python's C code base but I still wonder if there a good reason for NOT
> providing this?

If you feel that this is a good suggestion for an improvement to
Python consider posting it on python-ideas. I wasn't aware of the
equivalent functionality on strings but I see that the tuple.count()
function is the same as list.count().


Oscar
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: List Count

2013-04-23 Thread Blind Anagram
On 23/04/2013 18:45, Oscar Benjamin wrote:

 I did a lot of work comparing the overall performance of the sieve when
 using either lists or arrays and I found that lists were a lot faster
 for the majority use case when the sieve is not large.
>>>
>>> And when the sieve is large?
>>
>> I don't know but since the majority use case is when the sieve is small,
>> it makes sense to choose a list.
> 
> That's an odd comment given what you said at the start of this thread:
> 
> Blind Anagram wrote:
>> I would be grateful for any advice people can offer on the fastest way
>> to count items in a sub-sequence of a large list.
>>
>> I have a list of boolean values that can contain many hundreds of
>> millions of elements for which I want to count the number of True values
>> in a sub-sequence, one from the start up to some value (say hi).

At this early stage in the discussion I was simply explaining the
immediate context of the problem on which I was seeking advice.

Here I didn't think it was necessary to expand on the wider context
since there might have been a very easy way that people could suggest
for avoiding the slice copy when counting on a part of a list.

But there isn't, so the wider details then became important in
explaining why some proposals might work for the limiting case but would
not make sense within the overall context of use.  And here I have said
on more than one occasion that the huge sieve case is a minority use case.

[snip]
> You keep mentioning that you want it to work with a large sieve. I
> would much rather compute the same quantities with a small sieve if
> possible. If you were using the Lehmer/Meissel algorithm you would be
> able to compute the same quantity (i.e. pi(1e9)) using a much smaller
> sieve with 30k items instead of 1e9. that would fit *very* comfortably
> in memory and you wouldn't even need to slice the list. Or to put it
> another way, you could compute pi(~1e18) using your current sieve
> without slicing or paging. If you want to lift the limit on computing
> pi(x) this is clearly the way to go.

If prime_pi for huge numbers was really important to me I wouldn't be
using Python!

This is just one of a number of functions implemented in the class.  It
is nice to have and it is also nice for testing purposes to be able to
run it for large sieves. But it is by no means important enough to
devote dedicated code to computing it. The prime generator within the
class is far more important and is the workhorse for most uses

[snip]
>> In other words, the lack of a list.count(value, limit) function makes
>> Python less effective than it would otherwise be.  I haven't looked at
>> Python's C code base but I still wonder if there a good reason for NOT
>> providing this?
> 
> If you feel that this is a good suggestion for an improvement to
> Python consider posting it on python-ideas. I wasn't aware of the
> equivalent functionality on strings but I see that the tuple.count()
> function is the same as list.count().

To be honest, I am not really in a position to judge whether this is a
'good' suggestion.  It has turned up as potentially useful in two cases
for me but one of these (the one discussed here) is a minority use case.
 I would also be a bit worried about launching this into a group of
Python experts who will almost certainly be even more demanding of
explanations than you folk here!

   Brian

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: List Count

2013-04-23 Thread Terry Jan Reedy

On 4/23/2013 7:45 AM, Blind Anagram wrote:


> I then wondered why count for lists has no limits


Probably because no one has asked for such, at least partly because it 
is not really needed. In any case, .count(s) is a generic method. It is 
part of the definition of a Sequence. It can also be implemented for 
non-sequence collections, such as a Tree class, that allow multiple 
occurrences of an item.


> whereas count for other objects (e.g. strings) has these.

Strings (of unicode or bytes) are exceptional in multiple ways.

--
Terry Jan Reedy


--
http://mail.python.org/mailman/listinfo/python-list


Re: List Count

2013-04-23 Thread Oscar Benjamin
On 23 April 2013 19:30, Blind Anagram  wrote:
> On 23/04/2013 18:45, Oscar Benjamin wrote:
>
> [snip]
>> You keep mentioning that you want it to work with a large sieve. I
>> would much rather compute the same quantities with a small sieve if
>> possible. If you were using the Lehmer/Meissel algorithm you would be
>> able to compute the same quantity (i.e. pi(1e9)) using a much smaller
>> sieve with 30k items instead of 1e9. that would fit *very* comfortably
>> in memory and you wouldn't even need to slice the list. Or to put it
>> another way, you could compute pi(~1e18) using your current sieve
>> without slicing or paging. If you want to lift the limit on computing
>> pi(x) this is clearly the way to go.
>
> If prime_pi for huge numbers was really important to me I wouldn't be
> using Python!

I would, at least to begin with. The advantage that Python has for
this sort of thing is that it takes a minimal amount of programmer
effort to implement these kind of algorithms. This is because you
don't have to worry about nuts and bolts problems like integer
overflow and memory allocation. You also have an abundance of
different data structures to fulfil the big-O requirements of most
algorithms. It's easy to make generators that can iterate over
infinite or arbitrarily large sequences while still using constant
memory. And so on...
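As a trivial illustration of that last point, a generator describes an
unbounded sequence but produces values one at a time, so memory use stays
constant however far you iterate:

```python
from itertools import count, islice

def squares():
    # Conceptually infinite; nothing is stored between yields.
    for n in count(1):
        yield n * n

# Take just the first five values of the infinite sequence.
print(list(islice(squares(), 5)))  # [1, 4, 9, 16, 25]
```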

To implement the algorithm I mentioned in e.g. C would be considerably
more work. C would, however, perform much better using your brute
force approach and would lift the memory constraints of your program
by a small (in asymptotic terms) amount.

So I actually find that I can often get a faster program in Python
simply because it's much easier to implement fancier algorithms with
the optimal asymptotic performance that will ultimately outperform a
brute force approach whichever language is used.

Although, in saying that, those particular algorithms are probably
most naturally implemented in a functional language like Haskell with
scalable recursion.

>>> In other words, the lack of a list.count(value, limit) function makes
>>> Python less effective than it would otherwise be.  I haven't looked at
>>> Python's C code base but I still wonder if there a good reason for NOT
>>> providing this?
>>
>> If you feel that this is a good suggestion for an improvement to
>> Python consider posting it on python-ideas. I wasn't aware of the
>> equivalent functionality on strings but I see that the tuple.count()
>> function is the same as list.count().
>
> To be honest, I am not really in a position to judge whether this is a
> 'good' suggestion.  It has turned up as potentially useful in two cases
> for me but one of these (the one discussed here) is a minority use case.
>  I would also be a bit worried about launching this into a group of
> Python experts who will almost certainly be even more demanding of
> explanations than you folk here!

I wouldn't worry about that. I've never felt the need for this but
then I would probably use numpy to do what you're doing. It's
certainly not an outlandish suggestion and I'd be surprised if no one
agreed with the idea.


Oscar
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Lists and arrays

2013-04-23 Thread Denis McMahon
On Mon, 22 Apr 2013 11:13:38 -0700, Ana Dionísio wrote:

> I have an array and I need pick some data from that array and put it in
> a list, for example:
> 
> array= [a,b,c,1,2,3]
> 
> list=array[0]+ array[3]+ array[4]
> 
> list: [a,1,2]
> 
> When I do it like this: list=array[0]+ array[3]+ array[4] I get an
> error:
> 
> "TypeError: unsupported operand type(s) for +: 'numpy.ndarray' and
> 'numpy.ndarray'"
> 
> Can you help me?

arr1 = [ 'a', 'b', 'c', 1, 2, 3 ]
# populate a new list with individual members
arr2 = [ arr1[0], arr1[3], arr1[5] ]
# create a new list by adding slices together
arr3 = arr1[:1] + arr1[2:4] + arr1[5:]
print arr2
# output is: ['a', 1, 3]
print arr3
# output is: ['a', 'c', 1, 3]

-- 
Denis McMahon, denismfmcma...@gmail.com
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: List Count

2013-04-23 Thread Terry Jan Reedy

On 4/23/2013 12:57 PM, Blind Anagram wrote:


> So, all I was doing in asking for advice was to check whether there is
> an easy way of avoiding the slice copy,


And there is.


> not because this is critical,
> but rather because it is a pity to limit the performance because Python
> forces a (strictly unnecessary) copy in order to perform a count within
> a part of a list.


Python does not force that. You have been given several simple no-copy 
alternatives. They happen to be slower *with CPython* because of the 
speed difference between Python code and C code. If the same timing 
tests were done with any of the implementations that execute python code 
faster, the results would likely be different.


I think str/bytes/bytearray.count have more need for optional start, stop 
boundary parameters because a) people search in long texts and subtexts, 
more so I think than for other sequences, b) they search for substrings 
longer than 1 and hence c) the generic no-slice alternatives do not work 
for counting substrings.
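A quick illustration of the string version's optional bounds, which avoid any
slice copy:

```python
s = "banana"

print(s.count("an"))        # 2: whole string
print(s.count("an", 2))     # 1: search starts at index 2
print(s.count("an", 0, 3))  # 1: search restricted to s[0:3]
```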


That said, I do see that tuple/list.index have had start, stop 
parameters added, so doing the same for .count is conceivable. I just do 
not remember anyone else asking for such. The use case must be very 
rare. And as I said in my other post, .count(x) applies to any 
collections, but start,stop would only apply to sequences.



> In other words, the lack of a list.count(value, limit) function makes
> Python less effective than it would otherwise be.


Untrue. The alternatives are just as *effective*.


--
http://mail.python.org/mailman/listinfo/python-list


ANN: template-engine pyratemp 0.3.0/0.2.3

2013-04-23 Thread Roland Koebler
Hi,

since there were some questions about template-engines some time ago,
I would like to announce:

- I updated my comparison and benchmarks of several template-engines
  on http://www.simple-is-better.org/template/
- I have released a new version of my small and simple but powerful and
  pythonic template-engine "pyratemp":


=
pyratemp 0.3.0 / 0.2.3 released -- 2013-04-03
=

A new version of pyratemp is released, which officially adds Python 3
support; and a backport of this version to Python <=2.5:

- 0.3.0 for Python >=2.6 / 3.x
- 0.2.3 for Python <=2.5

No changes in your templates and your Python-code should be necessary,
except if you use cmp() / xrange() in your templates, which are gone
in Python 3 and pyratemp 0.3.0/0.2.3.

About pyratemp
--
pyratemp is a small, simple and powerful template-engine for Python.

Changes
---
see http://www.simple-is-better.org/template/pyratemp-latest/NEWS

The main changes are:

- Python 3 support
- added setup.py for installation via distutils
- renamed yaml2pyratemp.py to pyratemp_tool.py
- added LaTeX- and mail-header-escaping
- removed cmp(), xrange() from the template-functions

Resources
-
Homepage, documentation, download and mailing lists:
   http://www.simple-is-better.org/template/pyratemp.html

Download:
- http://www.simple-is-better.org/template/pyratemp-0.3.0.tgz
- http://www.simple-is-better.org/template/pyratemp-0.2.3.tgz

on PyPI:
- https://pypi.python.org/pypi/pyratemp/0.3.0
- https://pypi.python.org/pypi/pyratemp/0.2.3

---

Roland
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: List Count

2013-04-23 Thread Oscar Benjamin
On 23 April 2013 21:00, Terry Jan Reedy  wrote:
>
> That said, I do see that tuple/list.index have had start, stop paramaters
> added, so doing the same for .count is conceivable.

Are those new? I don't remember them not being there.

You need the start/stop parameters to be able to use index for finding
all occurrences of an item rather than just the first:

def indices(value, sequence):
    i = -1
    while True:
        try:
            i = sequence.index(value, i + 1)
        except ValueError:
            return
        yield i

I thought this was a common use case for .index() and that if the
interface had been designed more recently then it might have just been
an .indices() method that returned an iterator.
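For what it's worth, a self-contained check of that pattern (the generator is
restated here, properly indented, so the snippet runs on its own):

```python
def indices(value, sequence):
    # Yield every index at which value occurs, using index()'s start
    # parameter to resume the search just past the previous hit.
    i = -1
    while True:
        try:
            i = sequence.index(value, i + 1)
        except ValueError:
            return
        yield i

print(list(indices(1, [1, 0, 1, 1, 0])))  # [0, 2, 3]
```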


Oscar
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: List Count

2013-04-23 Thread Blind Anagram
On 23/04/2013 21:00, Terry Jan Reedy wrote:
> On 4/23/2013 12:57 PM, Blind Anagram wrote:
> 
>> So, all I was doing in asking for advice was to check whether there is
>> an easy way of avoiding the slice copy,
> 
> And there is.
> 
>> not because this is critical,
>> but rather because it is a pity to limit the performance because Python
>> forces a (strictly unnecessary) copy in order to perform a count within
>> a part of a list.
>
> Python does not force that. You have been given several simple no-copy
> alternatives. They happen to be slower *with CPython* because of the
> speed difference between Python code and C code. If the same timing
> tests were done with any of the implementations that execute python code
> faster, the results would likely be different.

Then being pedantic rather than colloquial, the lack of start, end
parameters in the function list.count(value) means that anyone wishing
to use this function on a part of a list is forced to slice the list and
thereby invoke a possibly costly copy operation, one that is, in
principle, not necessary in order to perform the underlying operation.

[snip]
>> In other words, the lack of a list.count(value, limit) function makes
>> Python less effective than it would otherwise be.
> 
> Untrue. The alternatives are just as *effective*.

Then I fear that we will have to accept that we disagree on this.

   Brian

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Nested iteration?

2013-04-23 Thread Terry Jan Reedy

On 4/23/2013 11:40 AM, Roy Smith wrote:

> In reviewing somebody else's code today, I found the following
> construct (eliding some details):
>
>     f = open(filename)
>     for line in f:
>         if re.search(pattern1, line):
>             outer_line = f.next()
>             for inner_line in f:
>                 if re.search(pattern2, inner_line):
>                     inner_line = f.next()


Did you possibly elide a 'break' after the inner_line assignment?


> Somewhat to my surprise, the code worked.


Without a break, the inner loop will continue iterating through the rest 
of the file (billions of lines?) looking for pattern2 and re-binding 
inner_line if there is another line, or raising StopIteration if there is 
not. Does this really constitute 'working'?


This is quite aside from the issue of what one wants if there is no 
pattern1, or if there is no line after the first match (probably not 
StopIteration), or if there is no pattern2.



> I didn't know it was legal to do nested iterations over the same iterable


Yes, but the effect is quite different for iterators (start where the 
outer iteration left off) and non-iterators (restart at the beginning).


r = range(2)
for i in r:
    for j in r:
        print(i, j)
# this is a common idiom to get all pairs
0 0
0 1
1 0
1 1

ri = iter(range(3))
for i in ri:
    for j in ri:
        print(i, j)
# this is somewhat deceptive as the outer loop executes just once
0 1
0 2

I personally would add a 'break' after 'outer_line = next(f)', since the 
first loop is effectively done anyway at that point, and dedent the 
second for statement. I find the following clearer:

ri = iter(range(3))
for i in ri:
    break
for j in ri:
    print(i, j)
# this makes it clear that the first loop executes just once
0 1
0 2

I would only nest if the inner loop could terminate without exhausting 
the iterator and I wanted the outer loop to then resume.


__
Terry Jan Reedy


--
http://mail.python.org/mailman/listinfo/python-list


Re: Nested iteration?

2013-04-23 Thread Joshua Landau
On 23 April 2013 21:49, Terry Jan Reedy  wrote:

> ri = iter(range(3))
> for i in ri:
>     for j in ri:
>         print(i, j)
> # this is somewhat deceptive as the outer loop executes just once
> 0 1
> 0 2
>
> I personally would add a 'break' after 'outer_line = next(f)', since the
> first loop is effectively done anyway at that point, and dedent the second
> for statement. I find the following clearer
>
> ri = iter(range(3))
> for i in ri:
>     break
> for j in ri:
>     print(i, j)
> # this makes it clear that the first loop executes just once
> 0 1
> 0 2
>
> I would only nest if the inner loop could terminate without exhausting the
> iterator and I wanted the outer loop to then resume.
>

Surely a normal programmer would think "next(ri, None)" rather than a loop
that just breaks.
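For reference, next() takes an optional default that suppresses the
StopIteration, which makes the single-step intent explicit:

```python
ri = iter(range(3))

# next() with a default never raises StopIteration; it returns the
# default (here None) once the iterator is exhausted.
i = next(ri, None)
for j in ri:
    print(i, j)
# prints:
# 0 1
# 0 2
```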
-- 
http://mail.python.org/mailman/listinfo/python-list


datetime.strptime() not padding 0's

2013-04-23 Thread Rodrick Brown
I thought I read somewhere that strptime() will pad 0's for days; for some
reason this isn't working for me and I'm wondering if I'm doing something
wrong.

>>> from datetime import datetime
>>> dt = datetime.strptime('Apr 9 2013', '%b %d %Y')
>>> dt.day
9
>>>

How can I get strptime to return 09 instead of 9?


--RB
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Nested iteration?

2013-04-23 Thread Oscar Benjamin
On 23 April 2013 17:30, Ian Kelly  wrote:
> On Tue, Apr 23, 2013 at 10:21 AM, Chris Angelico  wrote:
>> The definition of the for loop is sufficiently simple that this is
>> safe, with the caveat already mentioned (that __iter__ is just
>> returning self). And calling next() inside the loop will simply
>> terminate the loop if there's nothing there, so I'd not have a problem
>> with code like that - for instance, if I wanted to iterate over pairs
>> of lines, I'd happily do this:
>>
>> for line1 in f:
>>   line2=next(f)
>>   print(line2)
>>   print(line1)
>>
>> That'll happily swap pairs, ignoring any stray line at the end of the
>> file. Why bother catching StopIteration just to break?
>
> The next() there will *not* "simply terminate the loop" if it raises a
> StopIteration; for loops do not catch StopIteration exceptions that
> are raised from the body of the loop.  The StopIteration will continue
> to propagate until it is caught or it reaches the sys.excepthook.  In
> unusual circumstances, it is even possible that it could cause some
> *other* loop higher in the stack to break (i.e. if the current code is
> being run as a result of the next() method being called by the looping
> construct).

I don't find that the circumstances are unusual. Pretty much any time
one of the functions in the call stack is a generator this problem
will occur if StopIteration propagates.

I just thought I'd add that Python 3 has a convenient way to avoid
this problem with next() which is to use the starred unpacking syntax:

>>> numbers = [1, 2, 3, 4]
>>> first, *numbers = numbers
>>> first
1
>>> for x in numbers:
... print(x)
...
2
3
4
>>> first, *numbers = []
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ValueError: need more than 0 values to unpack

Since we get a ValueError instead of a StopIteration we don't have the
described problem.


Oscar
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: datetime.strptime() not padding 0's

2013-04-23 Thread John Gordon
In  Rodrick Brown 
 writes:

> I thought I read some where that strptime() will pad 0's for day's for some
> reason this isnt working for me and I'm wondering if i'm doing something
> wrong.

> >>> from datetime import datetime
> >>> dt = datetime.strptime('Apr 9 2013', '%b %d %Y')
> >>> dt.day
> 9

> How can I get strptime to run 09? instead of 9

dt.day is just an integer.  If you want to print it with zero padding,
use a format string:

>>> n = 9
>>> print n
9
>>> print '%02d' % n
09

or for python 3:
>>> print("{0:02d}".format(n))
09

-- 
John Gordon   A is for Amy, who fell down the stairs
gor...@panix.com  B is for Basil, assaulted by bears
-- Edward Gorey, "The Gashlycrumb Tinies"

-- 
http://mail.python.org/mailman/listinfo/python-list


Reading a CSV file

2013-04-23 Thread Ana Dionísio
Hello!

I need to read a CSV file that has "n" rows and "m" columns and if a certain 
condition is met, for example n==200, it prints all the columns in that row. 
How can I do this? I tried to save all the data in a multi-dimensional array 
but I get this error:

"ValueError: array is too big."

Thank you!
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Nested iteration?

2013-04-23 Thread Joshua Landau
On 23 April 2013 22:29, Oscar Benjamin  wrote:

> I just thought I'd add that Python 3 has a convenient way to avoid
> this problem with next() which is to use the starred unpacking syntax:
>
> >>> numbers = [1, 2, 3, 4]
> >>> first, *numbers = numbers
>

That creates a new list every time. You'll not want that over
try-next-except if you're doing this in a loop, and in addition (if you
were talking in context) your method will exhaust the iterator in the outer
loop.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: datetime.strptime() not padding 0's

2013-04-23 Thread Kyle Shannon
On Tue, Apr 23, 2013 at 3:14 PM, Rodrick Brown  wrote:
> I thought I read some where that strptime() will pad 0's for day's for some
> reason this isnt working for me and I'm wondering if i'm doing something
> wrong.
>
> >>> from datetime import datetime
> >>> dt = datetime.strptime('Apr 9 2013', '%b %d %Y')
> >>> dt.day
> 9

>
> How can I get strptime to run 09? instead of 9
>
>
> --RB
>
> --
> http://mail.python.org/mailman/listinfo/python-list
>

dt.day is an integer:

>>> from datetime import datetime
>>> dt = datetime.strptime('Apr 9 2013', '%b %d %Y')
>>> type(dt.day)
<type 'int'>

I think you are confusing strftime() with strptime():

>>> dt.strftime('%b %D %Y')
'Apr 04/09/13 2013'

or if you just want a 0 padded string for the day, use string formatting:

>>> s = '%02d' % dt.day
>>> type(s)
<type 'str'>
>>> s
'09'
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: datetime.strptime() not padding 0's

2013-04-23 Thread Kyle Shannon
On Tue, Apr 23, 2013 at 3:41 PM, Kyle Shannon  wrote:
> On Tue, Apr 23, 2013 at 3:14 PM, Rodrick Brown  
> wrote:
>> I thought I read some where that strptime() will pad 0's for day's for some
>> reason this isnt working for me and I'm wondering if i'm doing something
>> wrong.
>>
>> >>> from datetime import datetime
>> >>> dt = datetime.strptime('Apr 9 2013', '%b %d %Y')
>> >>> dt.day
>> 9
>
>>
>> How can I get strptime to run 09? instead of 9
>>
>>
>> --RB
>>
>> --
>> http://mail.python.org/mailman/listinfo/python-list
>>
>
> dt.day is an integer:
>
> >>> from datetime import datetime
> >>> dt = datetime.strptime('Apr 9 2013', '%b %d %Y')
> >>> type(dt.day)
> <type 'int'>
>
> I think you are confusing strftime() with strptime():
>
> >>> dt.strftime('%b %D %Y')
> 'Apr 04/09/13 2013'
>
> or if you just want a 0 padded string for the day, use string formatting:
>
> >>> s = '%02d' % dt.day
> >>> type(s)
> <type 'str'>
> >>> s
> '09'


Whoops, wrong cut/paste, I meant:

>>> dt.strftime('%b %d %Y')
'Apr 09 2013'
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Reading a CSV file

2013-04-23 Thread Dan Stromberg
On Tue, Apr 23, 2013 at 2:39 PM, Ana Dionísio wrote:

> Hello!
>
> I need to read a CSV file that has "n" rows and "m" columns and if a
> certain condition is met, for exameple n==200, it prints all the columns in
> that row. How can I do this? I tried to save all the data in a
> multi-dimensional array but I get this error:
>
> "ValueError: array is too big."


Use:

    csv.reader(csvfile, dialect='excel', **fmtparams)

This will allow you to iterate over the values, instead of reading them all
into memory at once.

csv.reader is documented at:

http://docs.python.org/3/library/csv.html

http://docs.python.org/2/library/csv.html
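Putting the original condition (print every column of row n == 200) together
with the streaming reader might look like this. The file name and contents are
made up for the example, and the demo file is generated first so the snippet
is self-contained:

```python
import csv

# Create a small demo file: 300 rows, 2 columns.
with open('data.csv', 'w', newline='') as f:
    csv.writer(f).writerows([[i, i * 2] for i in range(300)])

# Stream the file row by row; no multi-dimensional array is ever built.
with open('data.csv', newline='') as csvfile:
    for n, row in enumerate(csv.reader(csvfile)):
        if n == 200:
            print(row)  # every column of that row
            break
```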
-- 
http://mail.python.org/mailman/listinfo/python-list


What is the reason for defining classes within classes in Python?

2013-04-23 Thread vasudevram

Hi list,

I saw an example of defining a class within another class, here, in the docs 
for peewee, a simple ORM for Python:

http://peewee.readthedocs.org/en/latest/peewee/quickstart.html

In what way is this useful?

Thanks,
Vasudev
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Reading a CSV file

2013-04-23 Thread Ana Dionísio
Thank you, but can you explain it a little better? I am just starting in Python 
and I don't think I understood how to apply your answer.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: What is the reason for defining classes within classes in Python?

2013-04-23 Thread Ian Kelly
On Tue, Apr 23, 2013 at 3:50 PM, vasudevram  wrote:
>
> Hi list,
>
> I saw an example of defining a class within another class, here, in the docs 
> for peewee, a simple ORM for Python:
>
> http://peewee.readthedocs.org/en/latest/peewee/quickstart.html
>
> In what way is this useful?

In that particular case they're just using it as a namespace.  Django
does the same thing.
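A stripped-down sketch of that namespace idiom (a hypothetical model mimicking
the peewee/Django convention; none of these names come from the actual
libraries):

```python
class Article:
    title = "untitled"

    class Meta:
        # Nested class used purely as a namespace: it groups
        # configuration away from the field definitions above.
        table_name = "articles"
        ordering = ["-published"]

print(Article.Meta.table_name)  # articles
```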
-- 
http://mail.python.org/mailman/listinfo/python-list


using pandoc instead of rst to document python

2013-04-23 Thread Peng Yu
Hi,

I'm wondering if it is possible to use pandoc instead of rst to document
Python. Is there a documentation system that supports this format of Python
documentation?

-- 
Regards,
Peng
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: using pandoc instead of rst to document python

2013-04-23 Thread R. Michael Weylandt
On Tue, Apr 23, 2013 at 6:36 PM, Peng Yu  wrote:
> Hi,
>
> I'm wondering if it possible to use pandoc instead of rst to document
> python. Is there a documentation system support this format of python
> document?

Pandoc is a converter while rst is a format so they're not directly
comparable; pandoc can convert _to_ and _from_ rst to a wide variety
of other formats, but you still have to write documentation in one
format or another. If you want to use an rst-centric documentation
tool, you can write in, e.g., Markdown, convert to rst and then run
your other tool on it.

Michael
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Nested iteration?

2013-04-23 Thread Oscar Benjamin
On 23 April 2013 22:41, Joshua Landau  wrote:
> On 23 April 2013 22:29, Oscar Benjamin  wrote:
>>
>> I just thought I'd add that Python 3 has a convenient way to avoid
>> this problem with next() which is to use the starred unpacking syntax:
>>
>> >>> numbers = [1, 2, 3, 4]
>> >>> first, *numbers = numbers
>
>
> That creates a new list every time. You'll not want that over
> try-next-except if you're doing this in a loop, and on addition (if you were
> talking in context) your method will exhaust the iterator in the outer loop.

Oh, you're right. I'm not using Python 3 yet and I assumed without
checking that it would be giving me an iterator rather than unpacking
everything into a list.

Then the best I can think of is a helper function:

>>> def unpack(iterable, count):
...   iterator = iter(iterable)
...   for n in range(count):
... yield next(iterator)
...   yield iterator
...
>>> numbers = [1, 2, 3, 4]
>>> first, numbers = unpack(numbers, 1)
>>> first
1
>>> numbers
<generator object unpack at 0x...>
>>> list(numbers)
[2, 3, 4]
>>> first, numbers = unpack([], 1)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ValueError: need more than 0 values to unpack


Oscar
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Reading a CSV file

2013-04-23 Thread Dan Stromberg
On Tue, Apr 23, 2013 at 2:58 PM, Ana Dionísio wrote:

> Thank you, but can you explain it a little better? I am just starting in
> Python and I don't think I understood how to apply your answer
> --
> http://mail.python.org/mailman/listinfo/python-list
>

#!/usr/local/pypy-1.9/bin/pypy

import csv

def main():
with open('test.csv', 'r') as file_:
for row in csv.reader(file_, delimiter="|"):
print row

main()

# Example input:
# abc|def|ghi
# jkl|mno|pqr

In this way, you get one row at a time, instead of all rows at once.

HTH
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Reading a CSV file

2013-04-23 Thread Ana Dionísio
The condition I want to meet is in the first column, so is there a way to read 
only the first column and if the condition is true, print the rest?
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: using pandoc instead of rst to document python

2013-04-23 Thread Peng Yu
On Tue, Apr 23, 2013 at 5:40 PM, R. Michael Weylandt
 wrote:
> On Tue, Apr 23, 2013 at 6:36 PM, Peng Yu  wrote:
>> Hi,
>>
>> I'm wondering if it is possible to use pandoc instead of rst to document
>> Python. Is there a documentation system that supports this format of
>> Python documentation?

Sorry for the confusion. When I said pandoc, I meant pandoc's markdown.

http://johnmacfarlane.net/pandoc/README.html#pandocs-markdown

> Pandoc is a converter while rst is a format so they're not directly
> comparable; pandoc can convert _to_ and _from_ rst to a wide variety
> of other formats, but you still have to write documentation in one
> format or another. If you want to use an rst-centric documentation
> tool, you can write in, e.g., Markdown, convert to rst and then run
> your other tool on it.

I currently use sphinx to generate the doc (in rst). How do I configure
it to support pandoc's markdown?

-- 
Regards,
Peng
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: What is the reason for defining classes within classes in Python?

2013-04-23 Thread vasudevram
On Wednesday, April 24, 2013 3:52:57 AM UTC+5:30, Ian wrote:
> On Tue, Apr 23, 2013 at 3:50 PM, vasudevram  wrote:
> > Hi list,
> >
> > I saw an example of defining a class within another class, here, in the
> > docs for peewee, a simple ORM for Python:
> >
> > http://peewee.readthedocs.org/en/latest/peewee/quickstart.html
> >
> > In what way is this useful?
>
> In that particular case they're just using it as a namespace.  Django
> does the same thing.

Not clear. An example or more explanation might help, if you can. Thanks.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Reading a CSV file

2013-04-23 Thread Fábio Santos
The enumerate function should allow you to check whether you are in the
first iteration.

Like so:

  for row_number, row in enumerate(csv.reader(<...>)):
      if row_number == 0:
          if <your condition>:
              break
      ...

Enumerate allows you to know how far into the iteration you are.

You could use the iterator's next() method too.
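For instance, a minimal sketch of the next() approach (the delimiter and
the "special" condition value are made up for illustration), skipping the
header row up front and then filtering on the first column:

```python
import csv
import io

# io.StringIO stands in for an open CSV file, e.g. open('test.csv')
data = io.StringIO("name|a|b\nspecial|def|ghi\nother|x|y\n")

reader = csv.reader(data, delimiter="|")
header = next(reader)  # consume the first row before the loop

matches = []
for row in reader:
    if row[0] == "special":   # condition on the first column
        matches.append(row[1:])
        print(row[1:])        # print the rest of the row
```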

On 23 Apr 2013 23:53, "Ana Dionísio"  wrote:
>
> The condition I want to meet is in the first column, so is there a way to
read only the first column and if the condition is true, print the rest?
> --
> http://mail.python.org/mailman/listinfo/python-list
-- 
http://mail.python.org/mailman/listinfo/python-list


AttributeError Problem

2013-04-23 Thread animemaiden
Hi,

I'm trying to display a graph in Tkinter. The program reads a graph from a file 
and displays it on a panel. The first line in the file contains a number that 
indicates the number of vertices (n). The vertices are labeled as 0,1,…,n-1. 
Each subsequent line, with the format u x y v1 v2 …, describes that the vertex 
u is located at position (x,y) with the edges (u,v1), (u,v2), and so on.


I'm using Python 3.2.3 and I keep getting this error:

numberOfVertices = int(infile.readline().decode()) # Read the first line from 
the file
AttributeError: 'str' object has no attribute 'readline'


Here is my code:

from tkinter import * # Import tkinter
from tkinter import filedialog


def displayGraph(canvas, vertices, edges):
    radius = 3
    for vertex, x, y in vertices:
        canvas.create_text(x - 2 * radius, y - 2 * radius, text = str(vertex), tags = "graph")
        canvas.create_oval(x - radius, y - radius, x + radius, y + radius, fill = "black", tags = "graph")

    for v1, v2 in edges:
        canvas.create_line(vertices[v1][1], vertices[v1][2], vertices[v2][1], vertices[v2][2], tags = "graph")

def main():
    infile = filedialog.askopenfilename()

    numberOfVertices = int(infile.readline().decode()) # Read the first line from the file
    print(numberOfVertices)

    vertices = []
    edges = []
    for i in range(numberOfVertices):
        items = infile.readline().strip().split() # Read the info for one vertex
        vertices.append([int(items[0]), int(items[1]), int(items[2])])
        for j in range(3, len(items)):
            edges.append([int(items[0]), int(items[j])])

    print(vertices)
    print(edges)

    infile.close()  # Close the input file

    window = Tk() # Create a window
    window.title("Display a Graph") # Set title

    frame1 = Frame(window) # Hold four labels for displaying cards
    frame1.pack()
    canvas = Canvas(frame1, width = 300, height = 200)
    canvas.pack()

    displayGraph(canvas, vertices, edges)

    window.mainloop() # Create an event loop

main()

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: AttributeError Problem

2013-04-23 Thread animemaiden
On Tuesday, April 23, 2013 7:41:27 PM UTC-4, animemaiden wrote:
> [original post quoted in full; snipped]
Also, it reads data from a file.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: AttributeError Problem

2013-04-23 Thread Skip Montanaro
> numberOfVertices = int(infile.readline().decode()) # Read the first line from 
> the file
> AttributeError: 'str' object has no attribute 'readline'
...
> infile = filedialog.askopenfilename()

This is just returning a filename.  You need to open it to get a file
object.  For example:

infile = filedialog.askopenfilename()
fd = open(infile)
...
numberOfVertices = int(fd.readline().decode())

Skip
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: AttributeError Problem

2013-04-23 Thread animemaiden
On Tuesday, April 23, 2013 8:02:08 PM UTC-4, Skip Montanaro wrote:
> > numberOfVertices = int(infile.readline().decode()) # Read the first line
> > from the file
> > AttributeError: 'str' object has no attribute 'readline'
> ...
> > infile = filedialog.askopenfilename()
>
> This is just returning a filename.  You need to open it to get a file
> object.  For example:
>
> infile = filedialog.askopenfilename()
> fd = open(infile)
> ...
> numberOfVertices = int(fd.readline().decode())
>
> Skip
Thanks, but now I have this error AttributeError: 'str' object has no attribute 
'decode'
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: What is the reason for defining classes within classes in Python?

2013-04-23 Thread alex23
On Apr 24, 9:13 am, vasudevram  wrote:
> On Wednesday, April 24, 2013 3:52:57 AM UTC+5:30, Ian wrote:
> > On Tue, Apr 23, 2013 at 3:50 PM, vasudevram  wrote:
> > > I saw an example of defining a class within another class
> > > In what way is this useful?
>
> > In that particular case they're just using it as a namespace.
>
> Not clear. An example or more explanation might help, if you can. Thanks.

Namespaces are used to allow for the same label to be applied to
different concepts without the labels conflicting with each other. If
I was writing a program that dealt with the mathematics of morality, I
might want to use the sine function and refer to it in the standard
way as 'sin', and I might also want to store a value representing your
lack of goodness as 'sin'. As you can't use the same label in the same
scope to refer to two different objects, one way of dealing with this
that lets you still use what you feel are the most appropriate names
is to put them into a namespace. So you could express this as:

class Math(object):
sin = function()

class Morality(object):
sin = True

Then in your code you can clearly distinguish between the two by using
Math.sin and Morality.sin. Modules & packages are also namespaces, so
in this example we'd replace the Math class with `import math`, which
has a sin function defined within it.

A nested class definition will be defined as an attribute of the class
its defined within:

>>> class Outer(object):
... foo = 'FOO'
... class Inner(object):
... bar = 'BAR'
...
>>> Outer.Inner
<class '__main__.Inner'>
>>> Outer.Inner.bar
'BAR'

With peewee, the Model class looks for a Meta attribute and uses
attributes on it to perform some tasks, like where to retrieve/store
the model data. This allows for a way of distinguishing between
attributes used to define fields, and attributes needed for those
tasks. It also means your Models can use field names that the class
would otherwise reserve for its own internal purposes:

class DatabaseDetails(Model):
# these attributes are fields
database = CharField()
owner = CharField()

# ...but the Meta attribute isn't
class Meta:
# these attributes are used by the Model class
database = db

Here, database as a field is a text string that could contain a
database name, while DatabaseDetails.Meta.database contains a
reference to an actual database where the DatabaseDetails record would
be stored.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: AttributeError Problem

2013-04-23 Thread MRAB

On 24/04/2013 01:28, animemaiden wrote:
> On Tuesday, April 23, 2013 8:02:08 PM UTC-4, Skip Montanaro wrote:
>>> numberOfVertices = int(infile.readline().decode()) # Read the first
>>> line from the file
>>> AttributeError: 'str' object has no attribute 'readline'
>> ...
>>> infile = filedialog.askopenfilename()
>>
>> This is just returning a filename.  You need to open it to get a file
>> object.  For example:
>>
>> infile = filedialog.askopenfilename()
>> fd = open(infile)
>> ...
>> numberOfVertices = int(fd.readline().decode())
>>
>> Skip
>
> Thanks, but now I have this error AttributeError: 'str' object has no
> attribute 'decode'

You already have a string (the line) that you've read from the file, so
what are you trying to 'decode' anyway? Just remove that erroneous
method call.
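Putting Skip's and MRAB's corrections together, the opening read would look
something like this (a sketch: io.StringIO stands in for the file object
you'd get from open(filedialog.askopenfilename()), with made-up contents):

```python
import io

# Stand-in for: fd = open(filedialog.askopenfilename())
fd = io.StringIO("3\n0 50 50 1 2\n")

# A file opened in text mode already yields str, so no .decode() is needed
numberOfVertices = int(fd.readline())
print(numberOfVertices)
```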
--
http://mail.python.org/mailman/listinfo/python-list


Re: List Count

2013-04-23 Thread Steven D'Aprano
On Tue, 23 Apr 2013 17:57:17 +0100, Blind Anagram wrote:

> On 23/04/2013 15:49, Steven D'Aprano wrote:
>> On Tue, 23 Apr 2013 08:05:53 +0100, Blind Anagram wrote:
>> 
>>> I did a lot of work comparing the overall performance of the sieve
>>> when using either lists or arrays and I found that lists were a lot
>>> faster for the majority use case when the sieve is not large.
>> 
>> And when the sieve is large?
> 
> I don't know but since the majority use case is when the sieve is small,
> it makes sense to choose a list.
> 
>> I don't actually believe that the bottleneck is the cost of taking a
>> list slice. That's pretty fast, even for huge lists, and all efforts to
>> skip making a copy by using itertools.islice actually ended up slower.
>> But suppose it is the bottleneck. Then *sooner or later* arrays will
>> win over lists, simply because they're smaller.
> 
> Maybe you have not noticed that, when I am discussing a huge sieve, I am
> simply pushing a sieve designed primarily for a small sieve lengths to
> the absolute limit.  This is most definitely a minority use case.

You don't say? I hadn't noticed the first three hundred times you 
mentioned it :-P


In my opinion, it is more important to be efficient for large sieves, not 
small. As they say, for small N, everything is fast. Nobody is going to 
care about the difference between small-N taking 1ms or 10ms, but they 
will care about the difference between big-N taking 1 minute or 1 hour. 
So, as a general rule, optimize the expensive cases, and if the cheap 
cases are a little less cheap than they otherwise would have been, nobody 
will care or notice.

Of course, it's your code, and you're free to make whatever trade-offs 
between time and space and programmer effort that you feel are best. I'm 
just making a suggestion.


[...]
> In other words, the lack of a list.count(value, limit) function makes
> Python less effective than it would otherwise be.  I haven't looked at
> Python's C code base but I still wonder if there a good reason for NOT
> providing this?

The same reasons for not providing any other of an infinite number of 
potential features. You have to weigh up the potential benefits of the 
feature against the costs:

- Is this a good use of developer time to build the feature?

- Every new feature requires somebody to write it, somebody to check it, 
somebody to write tests for it, somebody to document it. Who is going to 
do all this work?

- Every new feature increases the size and complexity of the language and 
makes it harder for people to learn.

- Does this new feature actually have the advantages claimed? Many so-
called optimizations actually turn out to be pessimizations instead.

- Will the new feature be an attractive nuisance, fooling programmers 
into using it inappropriately?


In this case, I would say that adding a limit argument to list.count is 
*absolutely not* worthwhile, because it is insufficiently general. To be 
general enough to be worthwhile, you would have to add three arguments:

list.count(value, start=0, end=None, step=1)

But even there I don't think it's general enough to cover the costs. I 
believe that a better solution would be to create list views to offer a 
subset of the list without actually copying.
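In the meantime, the count-with-a-limit effect can already be had without
copying a slice, e.g. via itertools.islice (a sketch of the idea, not a
proposed list API; `count_upto` is a made-up helper name):

```python
from itertools import islice

def count_upto(lst, value, limit):
    # Count occurrences of value among the first `limit` items,
    # without materialising the copy that lst[:limit] would make
    return sum(1 for x in islice(lst, limit) if x == value)

sieve = [True, False, True, True, False, True]
print(count_upto(sieve, True, 4))  # counts only within sieve[:4] -> 3
```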


-- 
Steven
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Reading a CSV file

2013-04-23 Thread Dave Angel

On 04/23/2013 06:40 PM, Ana Dionísio wrote:

The condition I want to meet is in the first column, so is there a way to read 
only the first column and if the condition is true, print the rest?



The CSV module will read a row at a time, but nothing gets printed till 
you print it.  So starting with Dan's code,


row[0] is column one of a given row, while row[1] is the next column, 
and so on.


import csv

def main():
with open('test.csv', 'r') as file_:
for row in csv.reader(file_, delimiter="|"):
if row[0] == "special":
print row[1:] #print columns starting at the second


--
DaveA
--
http://mail.python.org/mailman/listinfo/python-list


ANN: MacroPy: bringing Macros to Python

2013-04-23 Thread Haoyi Li
MacroPy is a pure-python library that allows user-defined AST rewrites as part 
of the import process (using PEP 302). In short, it makes mucking around with 
Python's semantics so easy as to be almost trivial: you write a function that 
takes an AST and returns an AST, register it as a macro, and you're off to the 
races. To give a sense of it, I just finished implementing Scala/Groovy style 
anonymous lambdas:

map(f%(_ + 1), [1, 2, 3])
#[2, 3, 4]

reduce(f%(_ + _), [1, 2, 3])
#6

...which took about half an hour and 30 lines of code, start to finish. We're 
currently working on implementing destructuring-pattern-matching on objects 
(i.e. like in Haskell/Scala) and a clone of .NET's LINQ to SQL.

It's still very much a work in progress, but we have a list of pretty cool 
macros already done, which shows off what you can do with it. If anyone else 
was thinking about messing around with the semantics of the Python language but 
was too scared to jump into the CPython internals, this offers a somewhat 
easier path.
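To make the "function that takes an AST and returns an AST" idea concrete,
here is a toy transformer built on nothing but the stdlib ast module; this
is not MacroPy's API, just the underlying mechanism it automates at import
time:

```python
import ast

class DoubleNumbers(ast.NodeTransformer):
    """Toy rewrite rule: replace every numeric literal n with n * 2."""
    def visit_Constant(self, node):
        if isinstance(node.value, (int, float)) and not isinstance(node.value, bool):
            return ast.copy_location(ast.Constant(value=node.value * 2), node)
        return node

tree = ast.parse("result = 1 + 2")
tree = ast.fix_missing_locations(DoubleNumbers().visit(tree))

namespace = {}
exec(compile(tree, "<macro>", "exec"), namespace)
print(namespace["result"])  # (1*2) + (2*2) == 6
```

MacroPy's contribution is wiring a transformation like this into the PEP
302 import machinery, so it runs on whole modules as they are imported.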

Thanks!
-Haoyi
-- 
http://mail.python.org/mailman/listinfo/python-list


problem with saving data in a text file

2013-04-23 Thread Debashish Saha
I tried the following:

Am_cor=np.vectorize(Am_cor)

#plt.plot(t, signal, color='blue', label='Original signal')
fig=plt.figure()
plt.xlabel('Time(minute)')
plt.ylabel('$ Re()$')
plt.xlim([t[0], t[-1]])
plt.ylim((-.3*a1-a1,a1+.3*a1))
plt.grid()
plt.plot(t,Am_cor(t),'o-',label='with parallax', markersize=2)
plt.legend(loc="best")
plt.show()


f = open("Amp_cor_with_parallax(sd=.001)Ilamda=5.txt", "w")
f.write("# time(minute) \tAmp_cor_with_parallax(sd=.001) \n")  # column names
np.savetxt(f, np.array([t, Am_cor(t)]).T)
f.close()


now the error which I am getting is:

    166         else:
    167             filename = fname
--> 168             exec compile(scripttext, filename, 'exec') in glob, loc
    169     else:
    170         def execfile(fname, *where):

C:\Users\as\desktop\24_04_13\testing23_04.py in <module>()
    191 f = open("Amp_cor_with_parallax(sd=.001)Ilamda=5.txt", "w")
    192 f.write("# time(minute) \tAmp_cor_with_parallax(sd=.001) \n")  # column names
--> 193 np.savetxt(f, np.array([t, Am_cor(t)]).T)
    194 f.close()
    195

error: First argument must be a callable function.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Confusing Algorithm

2013-04-23 Thread Tim Roberts
RBotha  wrote:
>
>I'm facing the following problem:
>
>"""
>In a city of towerblocks, Spiderman can 
>“cover” all the towers by connecting the 
>first tower with a spider-thread to the top 
>of a later tower and then to a next tower 
>and then to yet another tower until he 
>reaches the end of the city. ...
>
>-Example:
>List of towers: 1 5 3 7 2 5 2
>Output: 4
>"""
>
>I'm not sure how a 'towerblock' could be defined. How square does a shape have 
>to be to qualify as a towerblock? Any help on solving this problem?

Here's your city:

  [  ]
  [  ]
  [  ][  ][  ]
  [  ][  ][  ]
  [  ][  ][  ][  ]
  [  ][  ][  ][  ][  ][  ]
  [  ][  ][  ][  ][  ][  ][  ]
 --
   A   B   C   D   E   F   G

Once you see the picture, you can see that you'd thread from B to D without
involving C.  I think you'll go A to B to D to F to G -- 4 threads.
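Reading the picture that way, one interpretation (an assumption on my part,
not stated in the problem) is that the thread path is the upper convex hull
of the tower tops, in which case the answer is the number of hull segments.
A monotone-chain sketch:

```python
def cross(o, a, b):
    # z-component of (a - o) x (b - o); >= 0 means no right turn at a
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def thread_count(heights):
    # Build the upper convex hull of the points (position, height);
    # the thread count is the number of hull segments
    points = list(enumerate(heights))
    hull = []
    for p in points:
        while len(hull) >= 2 and cross(hull[-2], hull[-1], p) >= 0:
            hull.pop()
        hull.append(p)
    return len(hull) - 1

print(thread_count([1, 5, 3, 7, 2, 5, 2]))  # 4, matching A-B-D-F-G
```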
-- 
Tim Roberts, t...@probo.com
Providenza & Boekelheide, Inc.
-- 
http://mail.python.org/mailman/listinfo/python-list


QTableWidget updating columns in a single row

2013-04-23 Thread Sara Lochtie
I have written a GUI that receives data in real time and displays it in a 
table. Every time data is sent in, it is displayed in the table in a new row. 
My problem is that I would like the new data to simply replace the old data 
in the first row.

The table has 6 columns (A, B, C, D, E, F) I want the new data to continue 
replacing the old data in the same row unless the data that goes under column A 
changes, at which point a new row would be added.

Does anyone have tips on how to approach this? I can post a portion of my code 
to get a better idea of what I have done.


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How to get JSON values and how to trace sessions??

2013-04-23 Thread Tim Roberts
webmas...@terradon.nl wrote:
> 
>But now i am wondering how to trace sessions? it is needed for a
>multiplayer game, connected to a webserver. How do i trace a PHP-session?
>I suppose i have to save a cookie with the sessionID from the webserver?

Yes.  The server will send a Set-Cookie: header with your first response.
You just need to return that in a Cookie: header with every request you
make after that.
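A minimal sketch of that round-trip using the stdlib cookie parser (Python 3
names; the session id and header value here are made up):

```python
from http.cookies import SimpleCookie

# What the server's first response might carry in its Set-Cookie: header
set_cookie_header = "PHPSESSID=abc123; Path=/; HttpOnly"

jar = SimpleCookie()
jar.load(set_cookie_header)

# The Cookie: header value to send back on every subsequent request
cookie_header = "; ".join(
    "%s=%s" % (name, morsel.value) for name, morsel in jar.items())
print(cookie_header)  # PHPSESSID=abc123
```

In practice a library session object (or http.cookiejar with urllib) does
this bookkeeping for you automatically.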

>Are their other ways to keep control over which players sends the gamedata?

Not sure what you mean.  If the web site requires cookies, this is what you
have to do.
-- 
Tim Roberts, t...@probo.com
Providenza & Boekelheide, Inc.
-- 
http://mail.python.org/mailman/listinfo/python-list