Re: John Carmack glorifying functional programming in 3k words

2012-05-03 Thread Pascal J. Bourguignon
Tim Bradshaw  writes:

> On 2012-05-02 14:44:36 +, jaialai.technol...@gmail.com said:
>
>> He may be nuts
>
> But he's right: programmers are pretty much fuckwits[*]: if you think
> that's not true you are not old enough.
>
> [*] including me, especially.

You need to watch: 
http://blog.ted.com/2012/02/29/the-only-way-to-learn-to-fly-is-to-fly-regina-dugan-at-ted2012/

-- 
__Pascal Bourguignon__ http://www.informatimago.com/
A bad day in () is better than a good day in {}.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: key/value store optimized for disk storage

2012-05-03 Thread Steve Howell
On May 2, 11:48 pm, Paul Rubin  wrote:
> Paul Rubin  writes:
> >looking at the spec more closely, there are 256 hash tables.. ...
>
> You know, there is a much simpler way to do this, if you can afford to
> use a few hundred MB of memory and you don't mind some load time when
> the program first starts.  Just dump all the data sequentially into a
> file.  Then scan through the file, building up a Python dictionary
> mapping data keys to byte offsets in the file (this is a few hundred MB
> if you have 3M keys).  Then dump the dictionary as a Python pickle and
> read it back in when you start the program.
>
> You may want to turn off the cyclic garbage collector when building or
> loading the dictionary, as it badly can slow down the construction of
> big lists and maybe dicts (I'm not sure of the latter).

I'm starting to lean toward the file-offset/seek approach.  I am
writing some benchmarks on it, comparing it to a more file-system
based approach like I mentioned in my original post.  I'll report back
when I get results, but it's already way past my bedtime for tonight.
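
(For reference, a minimal sketch of the offset-index idea Paul describes
above; the file names, record layout, and use of pickle are illustrative
assumptions rather than the benchmark code mentioned here.)

import pickle

DATA_FILE = 'values.dat'        # illustrative file names
INDEX_FILE = 'offsets.pickle'

def build(records):
    # Write every value sequentially, remembering the byte offset of each key.
    # Values are assumed not to contain newlines in this toy layout.
    offsets = {}
    with open(DATA_FILE, 'wb') as out:
        for key, value in records:
            offsets[key] = out.tell()
            out.write(value + b'\n')
    with open(INDEX_FILE, 'wb') as out:
        pickle.dump(offsets, out)

def open_reader():
    # Load the pickled key -> offset dictionary once, then seek per lookup.
    with open(INDEX_FILE, 'rb') as f:
        offsets = pickle.load(f)
    data = open(DATA_FILE, 'rb')
    def get(key):
        data.seek(offsets[key])
        return data.readline().rstrip(b'\n')
    return get

build([(b'spam', b'eggs'), (b'ham', b'bacon')])
get = open_reader()
assert get(b'ham') == b'bacon'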

Thanks for all your help and suggestions.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: key/value store optimized for disk storage

2012-05-03 Thread Kiuhnm

On 5/3/2012 10:42, Steve Howell wrote:

On May 2, 11:48 pm, Paul Rubin  wrote:

Paul Rubin  writes:

looking at the spec more closely, there are 256 hash tables.. ...


You know, there is a much simpler way to do this, if you can afford to
use a few hundred MB of memory and you don't mind some load time when
the program first starts.  Just dump all the data sequentially into a
file.  Then scan through the file, building up a Python dictionary
mapping data keys to byte offsets in the file (this is a few hundred MB
if you have 3M keys).  Then dump the dictionary as a Python pickle and
read it back in when you start the program.

You may want to turn off the cyclic garbage collector when building or
loading the dictionary, as it badly can slow down the construction of
big lists and maybe dicts (I'm not sure of the latter).


I'm starting to lean toward the file-offset/seek approach.  I am
writing some benchmarks on it, comparing it to a more file-system
based approach like I mentioned in my original post.  I'll report back
when I get results, but it's already way past my bedtime for tonight.

Thanks for all your help and suggestions.


You should really cache the reads from that file, in the hope that the 
accesses are not as random as you think. If they aren't, you should 
notice a *huge* improvement.
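
(A minimal sketch of that idea: a bounded cache in front of whatever
function does the seek-and-read; the class name and cache size here are
illustrative assumptions.)

from collections import OrderedDict

class CachedReader(object):
    def __init__(self, lookup, max_entries=100000):
        self.lookup = lookup          # function key -> value doing the seek/read
        self.cache = OrderedDict()
        self.max_entries = max_entries

    def get(self, key):
        if key in self.cache:
            self.cache[key] = self.cache.pop(key)   # mark as most recently used
            return self.cache[key]
        value = self.lookup(key)
        self.cache[key] = value
        if len(self.cache) > self.max_entries:
            self.cache.popitem(last=False)          # evict the oldest entry
        return value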


Kiuhnm
--
http://mail.python.org/mailman/listinfo/python-list


What is the use of python.cache_ubuntu?

2012-05-03 Thread Fayaz Yusuf Khan
My Ubuntu 11.04 server ran out of inodes due to too many files in 
'/tmp/python.cache_ubuntu'. Does anyone know what it does?
--
Cloud architect and hacker, Dexetra, India
fayaz.yusuf.khan_AT_gmail_DOT_com
fayaz_AT_dexetra_DOT_com
+91-9746-830-823

-- 
http://mail.python.org/mailman/listinfo/python-list


docstrings for data fields

2012-05-03 Thread Ulrich Eckhardt
Hi!

My class Foo exports a constant, accessible as Foo.MAX_VALUE. Now, with
functions I would simply add a docstring explaining the meaning of this,
but how do I do that for a non-function member? Note also that ideally,
this constant wouldn't show up inside instances of the class but only
inside the class itself.

There are decorators for static functions or class functions, similarly
there is one for instance properties but there isn't one for class
properties. Would that be a useful addition?

Uli
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: try/except in a loop

2012-05-03 Thread Jean-Michel Pichavant

Chris Kaynor wrote:

On Wed, May 2, 2012 at 12:51 PM, J. Mwebaze  wrote:
  

I have multiple objects, where any of them can serve my purpose.. However
some objects might not have some dependencies. I can not tell before hand if
the all the dependencies exsit. What i want to is begin processing from the
1st object, if no exception is raised, i am done.. if an exception is
raised, the next object is tried, etc  Something like

objs = [... ]
try:
  obj = objs[0]
  obj.make()
except Exception, e:
  try:
  obj = objs[1]
  obj.make()
  except Exception, e:
 try:
obj = objs[2]
obj.make()
 except Exception, e:
   continue

The problem is the length of the list of objs is variable... How can i do
this?




for obj in objs:
try:
obj.make()
except Exception:
continue
else:
break
else:
raise RuntimeError('No object worked')

  

For the record, an alternative solution without try block:

candidates = [obj for obj in objs
              if hasattr(obj, 'make') and callable(obj.make)]

if candidates:
    candidates[0].make()


JM
--
http://mail.python.org/mailman/listinfo/python-list


Re: try/except in a loop

2012-05-03 Thread Peter Otten
Jean-Michel Pichavant wrote:

> Chris Kaynor wrote:
>> On Wed, May 2, 2012 at 12:51 PM, J. Mwebaze  wrote:
>>   
>>> I have multiple objects, where any of them can serve my purpose..
>>> However some objects might not have some dependencies. I can not tell
>>> before hand if the all the dependencies exsit. What i want to is begin
>>> processing from the 1st object, if no exception is raised, i am done..
>>> if an exception is
>>> raised, the next object is tried, etc  Something like
>>>
>>> objs = [... ]
>>> try:
>>>   obj = objs[0]
>>>   obj.make()
>>> except Exception, e:
>>>   try:
>>>   obj = objs[1]
>>>   obj.make()
>>>   except Exception, e:
>>>  try:
>>> obj = objs[2]
>>> obj.make()
>>>  except Exception, e:
>>>continue
>>>
>>> The problem is the length of the list of objs is variable... How can i
>>> do this?
>>> 
>>
>>
>> for obj in objs:
>> try:
>> obj.make()
>> except Exception:
>> continue
>> else:
>> break
>> else:
>> raise RuntimeError('No object worked')
>>
>>   
> For the record, an alternative solution without try block:

Hmm, it's not sufficient that the method exists, it should succeed, too.

class Obj:
def make(self):
raise Exception("I'm afraid I can't do that")
objs = [Obj()]

> candidates = [obj for obj in objs if hasattr(obj, 'make') and
> callable(obj.make)]
> if candidates:
> candidates[0].make()

It is often a matter of taste, but I tend to prefer EAFP over LBYL.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: try/except in a loop

2012-05-03 Thread Jean-Michel Pichavant

Peter Otten wrote:

Jean-Michel Pichavant wrote:

  

Chris Kaynor wrote:


On Wed, May 2, 2012 at 12:51 PM, J. Mwebaze  wrote:
  
  

I have multiple objects, where any of them can serve my purpose..
However some objects might not have some dependencies. I can not tell
before hand if the all the dependencies exsit. What i want to is begin
processing from the 1st object, if no exception is raised, i am done..
if an exception is
raised, the next object is tried, etc  Something like

objs = [... ]
try:
  obj = objs[0]
  obj.make()
except Exception, e:
  try:
  obj = objs[1]
  obj.make()
  except Exception, e:
 try:
obj = objs[2]
obj.make()
 except Exception, e:
   continue

The problem is the length of the list of objs is variable... How can i
do this?



for obj in objs:
try:
obj.make()
except Exception:
continue
else:
break
else:
raise RuntimeError('No object worked')

  
  

For the record, an alternative solution without try block:



Hmm, it's not sufficient that the method exists, it should succeed, too.

class Obj:
def make(self):
raise Exception("I'm afraid I can't do that")
objs = [Obj()]

  

candidates = [obj for obj in objs if hasattr(obj, 'make') and
callable(obj.make)]
if candidates:
candidates[0].make()



It is often a matter of taste, but I tend to prefer EAFP over LBYL.

  
Could be that the OP did his job by calling the make method if it 
exists. If the method raises an exception, letting it through is a 
viable option if you cannot handle the exception.
Additionally, having a method that raises no exception is not a criterion 
for success; for instance

def make(self):
    return 42

will surely fail to do what the OP is expecting.

By the way, on an unrelated topic, using try blocks to make the code 
"robust" is never a good idea; I hope the OP is not trying to do that.


JM

--
http://mail.python.org/mailman/listinfo/python-list


Re: Create directories and modify files with Python

2012-05-03 Thread deltaquattro
I'm leaving the thread because I cannot read any posts, apart from Irmen's. 
Anyway, I would like to publicly thank all who contributed, in particular rurpy 
who solved my problem (and kindly sent me a personal email, so that I could see 
his/her post :)

Best Regards

Sergio Rossi
-- 
http://mail.python.org/mailman/listinfo/python-list


c-based version of pyPdf?

2012-05-03 Thread Chris Curvey
I'm a long-time user of the pyPdf library, but now I'm having to work with 
bigger volumes -- larger PDFs and thousands of them at a shot.  So performance 
is starting to become a problem. 

Does anyone know of an analogue to pyPdf that is faster?  (Maybe something 
based on C with Python bindings?)
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: docstrings for data fields

2012-05-03 Thread Jean-Michel Pichavant

Ulrich Eckhardt wrote:

Hi!

My class Foo exports a constant, accessible as Foo.MAX_VALUE. Now, with
functions I would simply add a docstring explaining the meaning of this,
but how do I do that for a non-function member? Note also that ideally,
this constant wouldn't show up inside instances of the class but only
inside the class itself.

There are decorators for static functions or class functions, similarly
there is one for instance properties but there isn't one for class
properties. Would that be a useful addition?

Uli
  

class Foo:
    MAX_VALUE = 42
    """The maximum value"""

epydoc supports such docstrings.

If you need native support for the Python help() function, for instance, 
document it within the class docstring:


class Foo:
    """Foo support

    Attributes:

    MAX_VALUE: the maximum value
    """
    MAX_VALUE = 42
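
As an aside, on the "class property" part of the original question: a
read-only class-level attribute can be emulated with a small descriptor.
This is only a sketch (classproperty below is not a standard-library
decorator), and documentation tools may or may not pick up its docstring:

class classproperty(object):
    """Read-only property that also works when accessed on the class."""
    def __init__(self, getter):
        self.getter = getter
        self.__doc__ = getter.__doc__
    def __get__(self, obj, cls=None):
        if cls is None:
            cls = type(obj)
        return self.getter(cls)

class Foo(object):
    @classproperty
    def MAX_VALUE(cls):
        """The maximum value accepted by Foo."""
        return 42

print(Foo.MAX_VALUE)    # -> 42, no instance needed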
--
http://mail.python.org/mailman/listinfo/python-list


Re: pyjamas / pyjs

2012-05-03 Thread Temia Eszteri
>Anyone else following the apparent hijack of the pyjs project from its
>lead developer?

Not beyond what the lead developer has been posting on the newsgroup,
no. Still a damn shame, though. When you have an unresolvable
ideological separation like that, you branch; you don't take over.

~Temia
--
When on earth, do as the earthlings do.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: c-based version of pyPdf?

2012-05-03 Thread Kushal Kumaran
On Thu, May 3, 2012 at 8:23 PM, Chris Curvey  wrote:
> I'm a long-time user of the pyPdf library, but now I'm having to work with 
> bigger volumes -- larger PDFs and thousands of them at a shot.  So 
> performance is starting to become a problem.
>
> Does anyone know of an analogue to pyPdf that is faster?  (Maybe something 
> based on C with Python bindings?)

I wonder if it is possible to use cython to speed it up.

-- 
regards,
kushal
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: pyjamas / pyjs

2012-05-03 Thread Ian Kelly
On Thu, May 3, 2012 at 5:52 AM, alex23  wrote:
> Anyone else following the apparent hijack of the pyjs project from its
> lead developer?

I've been following it but quietly since I don't use pyjs.  It
surprises me that nobody is talking much about it outside of the
thread on pyjamas-dev.  Seems to me that any credibility in the
long-term stability of the project has been pretty much shot.
-- 
http://mail.python.org/mailman/listinfo/python-list


Problems to list *all* mounted Windows partitions

2012-05-03 Thread joblack
I have a script which shows me the mounted partitions:

import wmi

# mfspace, trigger and ldisks are defined earlier in the script
c = wmi.WMI('localhost')
for disk in c.Win32_DiskPartition(DriveType=3):
    diskspace = int(disk.FreeSpace) / 100
    if diskspace < mfspace:
        trigger = True
        ldisks.append(disk.Name + '\\' +
                      str('{0:,}'.format(diskspace).replace(",", ".")) +
                      '  MByte\t *LOW_DISK_SPACE*')
    else:
        ldisks.append(disk.Name + '\\' +
                      str('{0:,}'.format(diskspace).replace(",", ".")) +
                      '  MByte\t *OK*')

Unfortunately it only shows partitions mounted to a drive letter (e.g. c:,
d:).

There is another physical partition mounted at d:\www1 which isn't
shown. Any idea how to add those partitions as well?
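
One direction that might help (an untested sketch; it assumes the same
wmi module and that the Win32_Volume WMI class is available on your
Windows version): Win32_Volume enumerates volumes by mount point, which
includes folder mount points such as d:\www1 rather than only drive
letters.

import wmi

c = wmi.WMI('localhost')
# DriveType=3 means "local disk"; Name is the mount point,
# e.g. 'C:\\' or 'D:\\www1\\'.
for vol in c.Win32_Volume(DriveType=3):
    if vol.FreeSpace is None:     # volumes without a filesystem report nothing
        continue
    print('%s  %s bytes free' % (vol.Name, vol.FreeSpace))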
-- 
http://mail.python.org/mailman/listinfo/python-list


Algorithms in Python, cont'd

2012-05-03 Thread Antti J Ylikoski

I wrote here about some straightforward ways to program D. E. Knuth's
algorithms in Python, and John Nagle answered that the value of Knuth's
book series to the programmer has been significantly diminished by the
fact that many functionalities such as sorting and hashing have either
been built into the Python language or are available in libraries.
(Apropos, as an aside, very many functionalities are notably available
in CPAN, the Comprehensive Perl Archive Network; I wonder what the
corresponding repository for the Python language would be.)

Nagle's comment is, in my opinion, very true.  So I carried out a search
and found two good sources of algorithms for the Python programmer:

1) Cormen-Leiserson-Rivest-Stein: Introduction to Algorithms, 2nd
edition, ISBN 0-262-53196-8.  A 3rd edition has also been published; I
don't know which one is the most recent.

2) Atallah-Blanton: Algorithms and Theory of Computation Handbook,
Second Edition, 2 books, ISBNs 978-1-58488-822-2 and
978-1-58488-820-8.  This one in particular is really good as a general
computer science source.

The point of this post is that, as an answer to Nagle's criticism,
numerous such more or less sophisticated algorithm reference books can
be found.

I intended to write some demonstrations in Python -- I chose the RSA
cryptosystem from Cormen et al.'s book and the linear programming
ellipsoid algorithm from Atallah-Blanton's book -- but I have not yet
done so; it would have been straightforward but too time-consuming.
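
In the meantime, here is a toy illustration of the first of those
(textbook RSA with tiny hard-coded primes), purely as a sketch for the
curious; it is not Cormen et al.'s presentation and is of course not
secure:

def egcd(a, b):
    # extended Euclid: returns (g, x, y) with a*x + b*y == g == gcd(a, b)
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

def modinv(a, m):
    g, x, _ = egcd(a, m)
    assert g == 1, "a must be coprime to m"
    return x % m

# Tiny, insecure demo parameters (hard-coded for illustration only).
p, q = 61, 53
n = p * q                  # public modulus
phi = (p - 1) * (q - 1)    # Euler's totient of n
e = 17                     # public exponent, coprime to phi
d = modinv(e, phi)         # private exponent

message = 42
cipher = pow(message, e, n)    # encryption: c = m**e mod n
plain = pow(cipher, d, n)      # decryption: m = c**d mod n
assert plain == message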

yours, and V/R, Antti J Ylikoski
Helsinki, Finland, the EU
--
http://mail.python.org/mailman/listinfo/python-list


Re: docstrings for data fields

2012-05-03 Thread mblume
On Thu, 03 May 2012 14:51:54 +0200, Ulrich Eckhardt wrote:

> Hi!
> 
> My class Foo exports a constant, accessible as Foo.MAX_VALUE. Now, with
> functions I would simply add a docstring explaining the meaning of this,
> but how do I do that for a non-function member? Note also that ideally,
> this constant wouldn't show up inside instances of the class but only
> inside the class itself.
> 
> There are decorators for static functions or class functions, similarly
> there is one for instance properties but there isn't one for class
> properties. Would that be a useful addition?
> 
> Uli

Docstring for Foo?


>>> class Foo:
...     """ exports a FOO_MAX value """
...     FOO_MAX = 42
...
>>> help(Foo)
Help on class Foo in module __main__:

class Foo
 |  exports a FOO_MAX value
 |
 |  Data and other attributes defined here:
 |
 |  FOO_MAX = 42

>>> Foo.FOO_MAX
42


HTH 
Martin
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: syntax for code blocks

2012-05-03 Thread Kiuhnm

On 5/3/2012 2:20, alex23 wrote:

On May 2, 8:52 pm, Kiuhnm  wrote:

  func(some_args, locals())


I think that's very bad. It wouldn't be safe either. What about name
clashing


locals() is a dict. It's not injecting anything into func's scope
other than a dict so there's not going to be any name clashes. If you
don't want any of its content in your function's scope, just don't use
that content.


The clashing is *inside* the dictionary itself. It contains *all* local 
functions and variables.



and how would you pass only some selected functions?


You wouldn't. You would just refer to the required functions in the
dict _in the same way you would in both your "bad python" and code
block versions.


See above.


But as you're _passing them in by name_ why not just make it
func(some_args) and pick them up out of the scope.


Because that's not clean and maintainable. It's not different from using
global variables.


...I'm beginning to suspect we're not engaging in the same
conversation.

This is very common in Python:

 from module1 import func1

 def func2(args): pass

 def main():
 # do something with func1 and func2

And I've never had problems maintaining code like this. I know
_exactly_ the scope that the functions exist within because I added
them to it. They're not _global_ because they're restricted to that
specific scope.


That's not the same thing. If a function accepts some optional 
callbacks, and you call that function more than once, you will have 
problems. You'll need to redefine some callbacks and remove others.

That's total lack of encapsulation.


_No one_ writes Python code like this. Presenting bad code as
"pythonic" is a bit of a straw man.


How can I present good code where there's no good way of doing that
without my module or something equivalent?
That was my point.


You haven't presented *any* good code or use cases.


Says who? You and some others? Not enough.


This is unintuitive, to say the least. You're effectively replacing
the common form of function definition with "with when_odd as 'n'",
then using the surrounding context manager to limit the scope.


What's so unintuitive about it? It's just "different".


Because under no circumstance does "with function_name as
string_signature" _read_ in an understandable way. It's tortuous
grammar that makes no sense as a context manager. You're asking people
to be constantly aware that there are two completely separate meanings
to 'with x as y'.


The meaning is clear from the context. I would've come up with something 
even better if only Python wasn't so rigid.



Rather than overload
one single function and push the complexity out to the caller, why not
have multiple functions with obvious names about what they do that
only take the data they need to act on?


Because that would reveal part of the implementation.
Suppose you have a complex visitor. The OOP way is to subclass, while 
the FP way is to accept callbacks. Why the FP way? Because it's more 
concise.
In any case, you don't want to reveal how the visitor walks the data 
structure or, better, the user doesn't need to know about it.



Then again, it's _really difficult_ to tell if something named
'func()' could have a real use like this.


The problem is always the same. Those functions are defined at the
module level so name clashing and many other problems are possible.


So define & use a different scope! Thankfully module level isn't the
only one to play with.


We can do OOP even in ASM, you know?


I remember a post on this ng where one would create a list of commands
and then use that list as a switch table. My module lets you do that very
easily. The syntax is:

  with func()<<  ':list':
  with 'arg':
  cmd_code
  with 'arg':
  cmd_code
  with '':
  cmd_code


I'm sorry but it is still clear-as-mud what you're trying to show
here. Can you show _one_ practical, real-world, non-toy example that
solves a real problem in a way that Python cannot?


I just did. It's just that you can't see it.

Kiuhnm
--
http://mail.python.org/mailman/listinfo/python-list


Re: syntax for code blocks

2012-05-03 Thread Temia Eszteri
> if only Python wasn't so rigid.

what.

You realize you'd have a little more luck with Python if you weren't
wielding it like a cudgel in the examples you've posted here, right?
Because it looks like you're treating the language as everything it
isn't and nothing it is this whole time. No wonder you're having
trouble making your code Pythonic.

Go with the flow.

~Temia
--
When on earth, do as the earthlings do.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: numpy (matrix solver) - python vs. matlab

2012-05-03 Thread someone

On 05/02/2012 11:45 PM, Russ P. wrote:

On May 2, 1:29 pm, someone  wrote:


If your data starts off with only 1 or 2 digits of accuracy, as in your
example, then the result is meaningless -- the accuracy will be 2-2
digits, or 0 -- *no* digits in the answer can be trusted to be accurate.


I just solved a FEM eigenvalue problem where the condition number of the
mass and stiffness matrices was something like 1e6... Result looked good
to me... So I don't understand what you're saying about 10 = 1 or 2
digits. I think my problem was accurate enough, though I don't know what
error with 1e6 in condition number, I should expect. How did you arrive
at 1 or 2 digits for cond(A)=10, if I may ask ?


As Steven pointed out earlier, it all depends on the precision you are
dealing with. If you are just doing pure mathematical or numerical
work with no real-world measurement error, then a condition number of
1e6 may be fine. But you had better be using "double precision" (64-
bit) floating point numbers (which are the default in Python, of
course). Those have approximately 12 digits of precision, so you are
in good shape. Single-precision floats only have 6 or 7 digits of
precision, so you'd be in trouble there.

For any practical engineering or scientific work, I'd say that a
condition number of 1e6 is very likely to be completely unacceptable.


So how do you explain that the natural frequencies from FEM (with 
condition number ~1e6) generally correlates really good with real 
measurements (within approx. 5%), at least for the first 3-4 natural 
frequencies?


I would say that the problem lies with the highest natural frequencies, 
they for sure cannot be verified - there's too little energy in them. 
But the lowest frequencies (the most important ones) are good, I think - 
even for high cond number.



--
http://mail.python.org/mailman/listinfo/python-list


Re: numpy (matrix solver) - python vs. matlab

2012-05-03 Thread Russ P.
On May 3, 10:30 am, someone  wrote:
> On 05/02/2012 11:45 PM, Russ P. wrote:
>
>
>
> > On May 2, 1:29 pm, someone  wrote:
>
> >>> If your data starts off with only 1 or 2 digits of accuracy, as in your
> >>> example, then the result is meaningless -- the accuracy will be 2-2
> >>> digits, or 0 -- *no* digits in the answer can be trusted to be accurate.
>
> >> I just solved a FEM eigenvalue problem where the condition number of the
> >> mass and stiffness matrices was something like 1e6... Result looked good
> >> to me... So I don't understand what you're saying about 10 = 1 or 2
> >> digits. I think my problem was accurate enough, though I don't know what
> >> error with 1e6 in condition number, I should expect. How did you arrive
> >> at 1 or 2 digits for cond(A)=10, if I may ask ?
>
> > As Steven pointed out earlier, it all depends on the precision you are
> > dealing with. If you are just doing pure mathematical or numerical
> > work with no real-world measurement error, then a condition number of
> > 1e6 may be fine. But you had better be using "double precision" (64-
> > bit) floating point numbers (which are the default in Python, of
> > course). Those have approximately 12 digits of precision, so you are
> > in good shape. Single-precision floats only have 6 or 7 digits of
> > precision, so you'd be in trouble there.
>
> > For any practical engineering or scientific work, I'd say that a
> > condition number of 1e6 is very likely to be completely unacceptable.
>
> So how do you explain that the natural frequencies from FEM (with
> condition number ~1e6) generally correlates really good with real
> measurements (within approx. 5%), at least for the first 3-4 natural
> frequencies?
>
> I would say that the problem lies with the highest natural frequencies,
> they for sure cannot be verified - there's too little energy in them.
> But the lowest frequencies (the most important ones) are good, I think -
> even for high cond number.

Did you mention earlier what "FEM" stands for? If so, I missed it. Is
it finite-element modeling? Whatever the case, note that I said, "If
you are just doing pure mathematical or numerical work with no real-
world measurement error, then a condition number of
1e6 may be fine." I forgot much more than I know about finite-element
modeling, but isn't it a purely numerical method of analysis? If that
is the case, then my comment above is relevant.

By the way, I didn't mean to patronize you with my earlier explanation
of orthogonal transformations. They are fundamental to understanding
the SVD, and I thought it might be interesting to anyone who is not
familiar with the concept.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: syntax for code blocks

2012-05-03 Thread Ian Kelly
On Thu, May 3, 2012 at 10:17 AM, Kiuhnm
 wrote:
> On 5/3/2012 2:20, alex23 wrote:
>>
>> On May 2, 8:52 pm, Kiuhnm  wrote:

      func(some_args, locals())
>>>
>>>
>>> I think that's very bad. It wouldn't be safe either. What about name
>>> clashing
>>
>>
>> locals() is a dict. It's not injecting anything into func's scope
>> other than a dict so there's not going to be any name clashes. If you
>> don't want any of its content in your function's scope, just don't use
>> that content.
>
>
> The clashing is *inside* the dictionary itself. It contains *all* local
> functions and variables.

Since all locals within a frame must have different names (or else
they would actually be the same local), they cannot clash with one
another.


>> Because under no circumstance does "with function_name as
>> string_signature" _read_ in an understandable way. It's tortuous
>> grammar that makes no sense as a context manager. You're asking people
>> to be constantly aware that there are two completely separate meanings
>> to 'with x as y'.
>
>
> The meaning is clear from the context. I would've come up with something
> even better if only Python wasn't so rigid.

It's really not very clear.  I think the biggest difficulty is that it
effectively reverses the roles of the elements in the with statement.
The usual meaning of:

with func():
do_stuff

is that func is called to set up a context, and then the block is
executed within that context.  The modified meaning is that the block
is gathered up as a code object and then passed as an argument into
func (despite that func *appears* to be called with no arguments),
which may or may not do something with it.  In the former, the
emphasis is on the code block; func is effectively an adverb.  In the
latter, func describes the main action, and the code block is the
adverb.
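
For contrast, a minimal example of the conventional reading, where the
function sets up a context around the block rather than receiving the
block itself:

import time
from contextlib import contextmanager

@contextmanager
def timed(label):
    # ordinary context manager: set up, hand control to the block, tear down
    start = time.time()
    try:
        yield
    finally:
        print('%s took %.3f seconds' % (label, time.time() - start))

with timed('summing'):         # the block runs *inside* the context
    total = sum(range(1000000))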

For that reason, I think that this would really need a brand new
syntax in order to gain any real acceptance, not just a complicated
overload of an existing statement.  For that you'll need to use a
preprocessor or a compiler patch (although I'm not denying that the
run-time module rewriting is a neat trick).
-- 
http://mail.python.org/mailman/listinfo/python-list


Lack of whitespace between contain operator ("in") and other expression tokens doesn't result in SyntaxError: bug or feature?

2012-05-03 Thread Garrett Cooper
Hi Python folks!
I came across a piece of code kicking around a sourcebase that
does something similar to the following:

>>> START 
#!/usr/bin/env python

import sys

def foo():
bar = 'abcdefg'
foo = [ 'a' ]

# Should throw SyntaxError?
for foo[0]in bar:
sys.stdout.write('%s' % foo[0])
sys.stdout.write('\n')
sys.stdout.write('%s\n' % (str(foo)))
# Should throw SyntaxError?
if foo[0]in bar:
return True
return False

sys.stdout.write('%r\n' % (repr(sys.version_info)))
sys.stdout.write('%s\n' % (str(foo())))
>>> END 

I ran it against several versions of python to ensure that it
wasn't a regression or fixed in a later release:

$ /scratch/bin/bin/python ~/test_bad_in.py
"(2, 3, 7, 'final', 0)"
abcdefg
['g']
True
$ python2.7 ~/test_bad_in.py
"sys.version_info(major=2, minor=7, micro=3, releaselevel='final', serial=0)"
abcdefg
['g']
True
$ python3.2 ~/test_bad_in.py
"sys.version_info(major=3, minor=2, micro=3, releaselevel='final', serial=0)"
abcdefg
['g']
True
$ uname -rom
FreeBSD 9.0-STABLE amd64
$

And even tried a different OS, just to make sure it wasn't a
FreeBSD thing...

% python test_bad_in.py
"(2, 6, 5, 'final', 0)"
abcdefg
['g']
True
% uname -rom
2.6.32-71.el6.x86_64 x86_64 GNU/Linux

I was wondering whether this was a parser bug or feature (seems
like a bug, in particular because it implicitly encourages bad syntax,
but I could be wrong). The grammar notes (for 2.7 at least [1]) don't
seem to explicitly require a space between 'in' and another parser
token (reserved work, expression, operand, etc), but I could be
misreading the documentation.
Thanks!
-Garrett

1. http://docs.python.org/reference/grammar.html
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Lack of whitespace between contain operator ("in") and other expression tokens doesn't result in SyntaxError: bug or feature?

2012-05-03 Thread Ian Kelly
On Thu, May 3, 2012 at 12:49 PM, Garrett Cooper  wrote:
>    I was wondering whether this was a parser bug or feature (seems
> like a bug, in particular because it implicitly encourages bad syntax,
> but I could be wrong). The grammar notes (for 2.7 at least [1]) don't
> seem to explicitly require a space between 'in' and another parser
> token (reserved work, expression, operand, etc), but I could be
> misreading the documentation.

The grammar doesn't require whitespace there.  It tends to be flexible
about whitespace wherever it's not necessary to resolve ambiguity.

>>> x = [3, 2, 1]
>>> x [0]if x [1]else x [2]
3
>>> 1 . real
1
>>> 1.5.real
1.5
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Lack of whitespace between contain operator ("in") and other expression tokens doesn't result in SyntaxError: bug or feature?

2012-05-03 Thread Garrett Cooper
On Thu, May 3, 2012 at 12:03 PM, Ian Kelly  wrote:
> On Thu, May 3, 2012 at 12:49 PM, Garrett Cooper  wrote:
>>    I was wondering whether this was a parser bug or feature (seems
>> like a bug, in particular because it implicitly encourages bad syntax,
>> but I could be wrong). The grammar notes (for 2.7 at least [1]) don't
>> seem to explicitly require a space between 'in' and another parser
>> token (reserved work, expression, operand, etc), but I could be
>> misreading the documentation.
>
> The grammar doesn't require whitespace there.  It tends to be flexible
> about whitespace wherever it's not necessary to resolve ambiguity.
>
> >>> x = [3, 2, 1]
> >>> x [0]if x [1]else x [2]
> 3
> >>> 1 . real
> 1
> >>> 1.5.real
> 1.5

Sure.. it's just somewhat inconsistent with other expectations in
other languages, and seems somewhat unpythonic.
Not really a big deal (if it was I would have filed a bug
instead), but this was definitely a bit confusing when I ran it
through the interpreter a couple of times...
Thanks!
-Garrett
-- 
http://mail.python.org/mailman/listinfo/python-list


"

2012-05-03 Thread John Nagle

  An HTML page for a major site (http://www.chase.com) has
some incorrect HTML.  It contains


Immediate need: Python Developer position in Waukesha, Wisconsin, USA-12 months contract with direct client with very good pay rate!

2012-05-03 Thread Preeti Bhattad
Hi there,
If you have USA work visa and if you reside in USA; please send the
resume to pre...@groupwaremax.com or pnbhat...@gmail.com


Title Python Developer for Test Development
Location: Waukesha, WI (53188)
Duration: 12 months

Job Description
•   Proficient in Python scripting and Pyunit.
•   Proficient in Python related packages knowledge.
•   Experience in Unix internals and working knowledge as user.
•   Expertise in Unit, Integration and System test methodologies and
techniques.
•   Excellent written and oral communication along with problem solving
skills
•   Good analytical and trouble shooting skills.
•   Knowledge of C/C++ and RTOS is desired.
•   Experience of designing a solution for Testing framework is desired.


Regards,
Preeti Bhattad | Technical Recruiter |Groupware Solution Inc
Work: (732) 543 7000 x 208 |Fax: (831) 603 4007 |
Email: pre...@groupwaremax.com |
Gmail: pnbhat...@gmail.com

-- 
http://mail.python.org/mailman/listinfo/python-list


Immediate need: Python Developer position in Waukesha, Wisconsin, USA-12 months contract with direct client with very good pay rate!

2012-05-03 Thread Preeti Bhattad
Please send the resume to preeti at groupwaremax dot com or pnbhattad
at gmail dot com


Title Python Developer for Test Development
Location: Waukesha, WI (53188)
Duration: 12 months

Job Description
•   Proficient in Python scripting and Pyunit.
•   Proficient in Python related packages knowledge.
•   Experience in Unix internals and working knowledge as user.
•   Expertise in Unit, Integration and System test methodologies and
techniques.
•   Excellent written and oral communication along with problem solving
skills
•   Good analytical and trouble shooting skills.
•   Knowledge of C/C++ and RTOS is desired.
•   Experience of designing a solution for Testing framework is desired.


Regards,
Preeti Bhattad | Technical Recruiter |Groupware Solution Inc
Work: (732) 543 7000 x 208 |Fax: (831) 603 4007 |
Email: pre...@groupwaremax.com |
Gmail: pnbhat...@gmail.com
-- 
http://mail.python.org/mailman/listinfo/python-list


RE: Lack of whitespace between contain operator ("in") and other expression tokens doesn't result in SyntaxError: bug or feature?

2012-05-03 Thread Prasad, Ramit
> > Sure.. it's just somewhat inconsistent with other expectations in
> > other languages, and seems somewhat unpythonic.
> 
>   Never done FORTRAN, have you... Classic FORTRAN even allows
> white-space INSIDE keywords.

Java tends to ignore a lot of spaces as well...though not as much
as classic FORTRAN it would seem.

class test{
public static void main( String []args ){
System.out. 
println( "test" );
for (String each : args){
System.out. println( each );
}
System.out. println( args [0] );
}

 }

Ramit


Ramit Prasad | JPMorgan Chase Investment Bank | Currencies Technology
712 Main Street | Houston, TX 77002
work phone: 713 - 216 - 5423

--

This email is confidential and subject to important disclaimers and
conditions including on offers for the purchase or sale of
securities, accuracy and completeness of information, viruses,
confidentiality, legal privilege, and legal entity disclaimers,
available at http://www.jpmorgan.com/pages/disclosures/email.  
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: numpy (matrix solver) - python vs. matlab

2012-05-03 Thread someone

On 05/03/2012 07:55 PM, Russ P. wrote:

On May 3, 10:30 am, someone  wrote:

On 05/02/2012 11:45 PM, Russ P. wrote:



For any practical engineering or scientific work, I'd say that a
condition number of 1e6 is very likely to be completely unacceptable.


So how do you explain that the natural frequencies from FEM (with
condition number ~1e6) generally correlates really good with real
measurements (within approx. 5%), at least for the first 3-4 natural
frequencies?

I would say that the problem lies with the highest natural frequencies,
they for sure cannot be verified - there's too little energy in them.
But the lowest frequencies (the most important ones) are good, I think -
even for high cond number.


Did you mention earlier what "FEM" stands for? If so, I missed it. Is
it finite-element modeling? Whatever the case, note that I said, "If


Sorry, yes: Finite Element Model.


you are just doing pure mathematical or numerical work with no real-
world measurement error, then a condition number of
1e6 may be fine." I forgot much more than I know about finite-element
modeling, but isn't it a purely numerical method of analysis? If that


I'm not sure exactly, what is the definition of a purely numerical 
method of analysis? I would guess that the answer is yes, it's a purely 
numerical method? But I also thing it's a practical engineering or 
scientific work...



is the case, then my comment above is relevant.


Uh, I just don't understand the difference:

1) "For any practical engineering or scientific work, I'd say that a 
condition number of 1e6 is very likely to be completely unacceptable."


vs.

2) "If you are just doing pure mathematical or numerical work with no 
real-world measurement error, then a condition number of, 1e6 may be fine."


I would think that FEM is a practical engineering work and also pure 
numerical work... Or something...



By the way, I didn't mean to patronize you with my earlier explanation
of orthogonal transformations. They are fundamental to understanding
the SVD, and I thought it might be interesting to anyone who is not
familiar with the concept.


Don't worry, I think it was really good and I don't think anyone 
patronized me, on the contrary, people was/is very helpful. SVD isn't my 
strongest side and maybe I should've thought a bit more about this 
singular matrix and perhaps realized what some people here already 
explained, a bit earlier (maybe before I asked). Anyway, it's been good 
to hear/read what you've (and others) have written.


Yesterday and earlier today I was at work during the day so 
answering/replying took a bit longer than I like, considering the huge 
flow of posts in the matlab group. But now I'm home most of the time, 
for the next 3 days and will check for followup posts quite frequent, I 
think...


--
http://mail.python.org/mailman/listinfo/python-list


Re: Lack of whitespace between contain operator ("in") and other expression tokens doesn't result in SyntaxError: bug or feature?

2012-05-03 Thread Dan Stromberg
On Thu, May 3, 2012 at 12:21 PM, Garrett Cooper  wrote:

> On Thu, May 3, 2012 at 12:03 PM, Ian Kelly  wrote:
> > On Thu, May 3, 2012 at 12:49 PM, Garrett Cooper 
> wrote:
> >>I was wondering whether this was a parser bug or feature (seems
> >> like a bug, in particular because it implicitly encourages bad syntax,
> >> but I could be wrong). The grammar notes (for 2.7 at least [1]) don't
> >> seem to explicitly require a space between 'in' and another parser
> >> token (reserved work, expression, operand, etc), but I could be
> >> misreading the documentation.
> >
> > The grammar doesn't require whitespace there.  It tends to be flexible
> > about whitespace wherever it's not necessary to resolve ambiguity.
> >
> > >>> x = [3, 2, 1]
> > >>> x [0]if x [1]else x [2]
> > 3
> > >>> 1 . real
> > 1
> > >>> 1.5.real
> > 1.5
>
> Sure.. it's just somewhat inconsistent with other expectations in
> other languages, and seems somewhat unpythonic.
>Not really a big deal (if it was I would have filed a bug
> instead), but this was definitely a bit confusing when I ran it
> through the interpreter a couple of times...
> Thanks!
> -Garrett
> --
> http://mail.python.org/mailman/listinfo/python-list
>

For the code prettiness police, check out pep8 and/or pylint.  I highly
value pylint for projects more than a couple hundred lines.

For the whitespace matter that's been beaten to death:
http://stromberg.dnsalias.org/~strombrg/significant-whitespace.html

I'll include one issue about whitespace here.  In FORTRAN 77, the following
two statements look very similar, but have completely different meanings,
because FORTRAN had too little significant whitespace:

DO10I=1,10
DO10I=1.10

The first is the start of a loop, the second is an assignment statement.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: numpy (matrix solver) - python vs. matlab

2012-05-03 Thread Russ P.
Yeah, I realized that I should rephrase my previous statement to
something like this:

For any *empirical* engineering or scientific work, I'd say that a
condition number of 1e6 is likely to be unacceptable.

I'd put finite elements into the category of theoretical and numerical
rather than empirical. Still, a condition number of 1e6 would bother
me, but maybe that's just me.

--Russ P.


On May 3, 3:21 pm, someone  wrote:
> On 05/03/2012 07:55 PM, Russ P. wrote:
>
>
>
> > On May 3, 10:30 am, someone  wrote:
> >> On 05/02/2012 11:45 PM, Russ P. wrote:
> >>> For any practical engineering or scientific work, I'd say that a
> >>> condition number of 1e6 is very likely to be completely unacceptable.
>
> >> So how do you explain that the natural frequencies from FEM (with
> >> condition number ~1e6) generally correlates really good with real
> >> measurements (within approx. 5%), at least for the first 3-4 natural
> >> frequencies?
>
> >> I would say that the problem lies with the highest natural frequencies,
> >> they for sure cannot be verified - there's too little energy in them.
> >> But the lowest frequencies (the most important ones) are good, I think -
> >> even for high cond number.
>
> > Did you mention earlier what "FEM" stands for? If so, I missed it. Is
> > it finite-element modeling? Whatever the case, note that I said, "If
>
> Sorry, yes: Finite Element Model.
>
> > you are just doing pure mathematical or numerical work with no real-
> > world measurement error, then a condition number of
> > 1e6 may be fine." I forgot much more than I know about finite-element
> > modeling, but isn't it a purely numerical method of analysis? If that
>
> I'm not sure exactly, what is the definition of a purely numerical
> method of analysis? I would guess that the answer is yes, it's a purely
> numerical method? But I also thing it's a practical engineering or
> scientific work...
>
> > is the case, then my comment above is relevant.
>
> Uh, I just don't understand the difference:
>
> 1) "For any practical engineering or scientific work, I'd say that a
> condition number of 1e6 is very likely to be completely unacceptable."
>
> vs.
>
> 2) "If you are just doing pure mathematical or numerical work with no
> real-world measurement error, then a condition number of, 1e6 may be fine."
>
> I would think that FEM is a practical engineering work and also pure
> numerical work... Or something...
>
> > By the way, I didn't mean to patronize you with my earlier explanation
> > of orthogonal transformations. They are fundamental to understanding
> > the SVD, and I thought it might be interesting to anyone who is not
> > familiar with the concept.
>
> Don't worry, I think it was really good and I don't think anyone
> patronized me, on the contrary, people was/is very helpful. SVD isn't my
> strongest side and maybe I should've thought a bit more about this
> singular matrix and perhaps realized what some people here already
> explained, a bit earlier (maybe before I asked). Anyway, it's been good
> to hear/read what you've (and others) have written.
>
> Yesterday and earlier today I was at work during the day so
> answering/replying took a bit longer than I like, considering the huge
> flow of posts in the matlab group. But now I'm home most of the time,
> for the next 3 days and will check for followup posts quite frequent, I
> think...

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: "

2012-05-03 Thread Ian Kelly
On Thu, May 3, 2012 at 1:59 PM, John Nagle  wrote:
>  An HTML page for a major site (http://www.chase.com) has
> some incorrect HTML.  It contains
>
>        
> which is not valid HTML, XML, or SGML.  However, most browsers
> ignore it.  BeautifulSoup treats it as the start of a CDATA section,
> and consumes the rest of the document in CDATA format.
>
>  Bug?

Seems like a bug to me.  BeautifulSoup is supposed to parse like a
browser would, so if most browsers just ignore an unterminated CDATA
section, then BeautifulSoup probably should too.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: numpy (matrix solver) - python vs. matlab

2012-05-03 Thread someone

On 05/04/2012 12:58 AM, Russ P. wrote:

Yeah, I realized that I should rephrase my previous statement to
something like this:

For any *empirical* engineering or scientific work, I'd say that a
condition number of 1e6 is likely to be unacceptable.


Still, I don't understand it. Do you have an example of this kind of 
work, if it's not FEM?



I'd put finite elements into the category of theoretical and numerical
rather than empirical. Still, a condition number of 1e6 would bother
me, but maybe that's just me.


Ok, but I just don't understand what's in the "empirical" category, sorry...

Maybe the conclusion is just that if cond(A) > 1e15 or 1e16, then the 
problem shouldn't be solved at all, and maybe this is also approximately 
where matlab has its warning threshold (I'm just guessing here)... So 
maybe I could use that limit in my future python program (once I find 
out how to get the condition number etc., but I assume this can be 
googled for with no problems)...
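
(For what it's worth, a minimal sketch of that check with numpy; the
matrix and the 1e15 threshold are illustrative assumptions, not a fixed
rule.)

import numpy as np

A = np.array([[1.0, 2.0],
              [1.0, 2.0001]])     # nearly singular, for illustration

c = np.linalg.cond(A)             # 2-norm condition number by default
print('cond(A) = %g' % c)

# Rule of thumb: roughly log10(cond(A)) significant digits are lost.
if c > 1e15:
    print('matrix is effectively singular in double precision')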


--
http://mail.python.org/mailman/listinfo/python-list


When convert two sets with the same elements to lists, are the lists always going to be the same?

2012-05-03 Thread Peng Yu
Hi,

list(a_set)

When convert two sets with the same elements to two lists, are the
lists always going to be the same (i.e., the elements in each list are
ordered the same)? Is it documented anywhere?

-- 
Regards,
Peng
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: When convert two sets with the same elements to lists, are the lists always going to be the same?

2012-05-03 Thread Dan Stromberg
If you need the same ordering in two lists, you really should sort the
lists - though your comparison function need not be that traditional.  You
might be able to get away with not sorting sometimes, but on CPython
upgrades or using different Python interpreters (Pypy, Jython), it's almost
certain the ordering will be allowed to change.

But sorting in a loop is not generally a good thing - there's almost always
a better alternative.

On Thu, May 3, 2012 at 5:36 PM, Peng Yu  wrote:

> Hi,
>
> list(a_set)
>
> When convert two sets with the same elements to two lists, are the
> lists always going to be the same (i.e., the elements in each list are
> ordered the same)? Is it documented anywhere?
>
> --
> Regards,
> Peng
> --
> http://mail.python.org/mailman/listinfo/python-list
>
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: When convert two sets with the same elements to lists, are the lists always going to be the same?

2012-05-03 Thread Tim Chase
On 05/03/12 19:36, Peng Yu wrote:
> list(a_set)
> 
> When convert two sets with the same elements to two lists, are the
> lists always going to be the same (i.e., the elements in each list are
> ordered the same)? Is it documented anywhere?

Sets are defined as unordered, which the documentation[1] confirms.
A simple test on cPython 2.6 on this box suggests that sets with 100
elements converted to lists without sorting compare as equal (i.e.,
cPython is sorting), but I don't see any guarantee that this should
hold for other implementations, so I'd sort first to ensure the
intended behavior.
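
For example, compare explicitly sorted lists rather than whatever order
list() happens to produce:

a = set(['x', 'y', 'z'])
b = set(['z', 'x', 'y'])
# Don't rely on list(a) == list(b); impose the order yourself:
assert sorted(a) == sorted(b)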

-tkc

[1]
http://docs.python.org/library/stdtypes.html#set-types-set-frozenset
"""
Being an unordered collection, sets do not record element position
or order of insertion.
"""




-- 
http://mail.python.org/mailman/listinfo/python-list


Re: key/value store optimized for disk storage

2012-05-03 Thread Miki Tebeka
> I'm looking for a fairly lightweight key/value store that works for
> this type of problem:
I'd start with a benchmark and try some of the things that are already in the 
standard library:
- bsddb
- sqlite3 (table of key, value, index key)
- shelve (though I doubt this one)

You might find that for a little effort you get enough out of one of these.
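
For instance, a minimal sqlite3 key/value sketch along the lines of the
second option (the file and table names are illustrative):

import sqlite3

conn = sqlite3.connect('kv.db')
conn.execute('CREATE TABLE IF NOT EXISTS kv '
             '(key TEXT PRIMARY KEY, value BLOB)')   # PRIMARY KEY gives the index
conn.execute('INSERT OR REPLACE INTO kv VALUES (?, ?)', ('spam', 'eggs'))
conn.commit()

row = conn.execute('SELECT value FROM kv WHERE key = ?', ('spam',)).fetchone()
print(row[0])                                        # -> eggs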

Another module, not in the standard library, is hdf5/PyTables, which in 
my experience is very fast.

HTH,
--
Miki
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: When convert two sets with the same elements to lists, are the lists always going to be the same?

2012-05-03 Thread Andrew Berg
On 5/3/2012 7:36 PM, Peng Yu wrote:
> When convert two sets with the same elements to two lists, are the
> lists always going to be the same (i.e., the elements in each list are
> ordered the same)? Is it documented anywhere?
Sets are by definition unordered, so depending on their order would not
be a good idea. If the order stays the same, it's at most an
implementation detail which may or may not be consistent across
versions, and will likely not be consistent across implementations.

-- 
CPython 3.3.0a3 | Windows NT 6.1.7601.17790
-- 
http://mail.python.org/mailman/listinfo/python-list


most efficient way of populating a combobox (in maya)

2012-05-03 Thread astan.chee Astan
Hi,
I'm making a GUI in maya using python only and I'm trying to see which
is more efficient. I'm trying to populate an optionMenuGrp / combo box
whose contents come from os.listdir(folder). Now this is fine if the
folder isn't that full but the folder has a few hundred items (almost
in the thousands), it is also on the (work) network and people are
constantly reading from it as well. Now I'm trying to write the GUI so
that it makes the interface, and using threading - Thread, populate
the box. Is this a good idea? Has anyone done this before and have
experience with any limitations on it? Is the performance not
significant?
Thanks for any advice
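
For reference, a rough sketch of the background-listing part (generic
Python only; the folder path and the way the poll hooks into the UI's
idle/timer callback are assumptions, since the Maya specifics vary):

import os
import threading
try:
    import queue                # Python 3
except ImportError:
    import Queue as queue       # Python 2

results = queue.Queue()

def list_folder(folder):
    # Runs in a worker thread so the slow network os.listdir()
    # doesn't block building the rest of the interface.
    try:
        results.put(sorted(os.listdir(folder)))
    except OSError as exc:
        results.put(exc)

worker = threading.Thread(target=list_folder, args=('/path/to/shared/folder',))
worker.daemon = True
worker.start()

def poll_and_fill():
    # Call this from the UI's own timer/idle callback; never block the UI thread.
    try:
        items = results.get_nowait()
    except queue.Empty:
        return False            # listing not ready yet, try again later
    if isinstance(items, Exception):
        raise items
    # ...append each entry in `items` to the optionMenuGrp here...
    return True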
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: syntax for code blocks

2012-05-03 Thread alex23
On May 4, 2:17 am, Kiuhnm  wrote:
> On 5/3/2012 2:20, alex23 wrote:
> > locals() is a dict. It's not injecting anything into func's scope
> > other than a dict so there's not going to be any name clashes. If you
> > don't want any of its content in your function's scope, just don't use
> > that content.
>
> The clashing is *inside* the dictionary itself. It contains *all* local
> functions and variables.

This is nonsense.

locals() produces a dict of the local scope. I'm passing it into a
function. Nothing in the local scope clashes, so the locals() dict has
no "internal clashing". Nothing is injecting it into the function's
local scope, so _there is no "internal clashing"_.

To revise, your original "pythonic" example was, effectively:

def a(): pass
def b(): pass

func_packet = {'a': a, 'b': b}
func(arg, func_packet)

My version was:

def a(): pass
def b(): pass

func_packet = locals()
func(arg, func_packet)

Now, please explain how that produces name-clashes that your version
does not.

> >> and how would you pass only some selected functions?
>
> > You wouldn't. You would just refer to the required functions in the
> > dict _in the same way you would in both your "bad python" and code
> > block versions.
>
> See above.

This is more nonsense. So calling 'a' in your dict is fine, but
calling a in the locals() returned dict isn't?


> That's not the same thing. If a function accepts some optional
> callbacks, and you call that function more than once, you will have
> problems. You'll need to redefine some callbacks and remove others.
> That's total lack of encapsulation.

Hand-wavy, no real example, doesn't make sense.

> > You haven't presented *any* good code or use cases.
>
> Says who? You and some others? Not enough.

So far, pretty much everyone who has tried to engage you on this
subject on the list. I'm sorry we're not all ZOMGRUBYBLOCKS111
like the commenters on your project page.

> The meaning is clear from the context.

Which is why pretty much every post in this thread mentioned finding
it confusing?

> I would've come up with something even better if only Python wasn't so rigid.

The inability for people to add 6 billion mini-DSLs to solve any
stupid problem _is a good thing_. It makes Python consistent and
predictable, and means I don't need to parse _the same syntax_ utterly
different ways depending on the context.

> Because that would reveal part of the implementation.
> Suppose you have a complex visitor. The OOP way is to subclass, while
> the FP way is to accept callbacks. Why the FP way? Because it's more
> concise.
> In any case, you don't want to reveal how the visitor walks the data
> structure or, better, the user doesn't need to know about it.

Again, nothing concrete, just vague intimations of your way being
better.

> > So define & use a different scope! Thankfully module level isn't the
> > only one to play with.
>
> We can do OOP even in ASM, you know?

???

> > I'm sorry but it is still clear-as-mud what you're trying to show
> > here. Can you show _one_ practical, real-world, non-toy example that
> > solves a real problem in a way that Python cannot?
>
> I just did. It's just that you can't see it.

"I don't understand this example, can you provide one." "I just did,
you didn't understand it."

Okay, done with this now.  Your tautologies and arrogance are not
clarifying your position at all, and I really don't give a damn, so
*plonk*
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: key/value store optimized for disk storage

2012-05-03 Thread Steve Howell
On May 3, 1:42 am, Steve Howell  wrote:
> On May 2, 11:48 pm, Paul Rubin  wrote:
>
> > Paul Rubin  writes:
> > >looking at the spec more closely, there are 256 hash tables.. ...
>
> > You know, there is a much simpler way to do this, if you can afford to
> > use a few hundred MB of memory and you don't mind some load time when
> > the program first starts.  Just dump all the data sequentially into a
> > file.  Then scan through the file, building up a Python dictionary
> > mapping data keys to byte offsets in the file (this is a few hundred MB
> > if you have 3M keys).  Then dump the dictionary as a Python pickle and
> > read it back in when you start the program.
>
> > You may want to turn off the cyclic garbage collector when building or
> > loading the dictionary, as it badly can slow down the construction of
> > big lists and maybe dicts (I'm not sure of the latter).
>
> I'm starting to lean toward the file-offset/seek approach.  I am
> writing some benchmarks on it, comparing it to a more file-system
> based approach like I mentioned in my original post.  I'll report back
> when I get results, but it's already way past my bedtime for tonight.
>
> Thanks for all your help and suggestions.


I ended up going with the approach that Paul suggested (except I used
JSON instead of pickle for persisting the hash).  I like it for its
simplicity and ease of troubleshooting.

My test was to write roughly 4GB of data, with 2 million keys of 2k
bytes each.

The nicest thing was how quickly I was able to write the file.
Writing tons of small files bogs down the file system, whereas the one-
big-file approach finishes in under three minutes.

Here's the code I used for testing:

https://github.com/showell/KeyValue/blob/master/test_key_value.py

Here are the results:

~/WORKSPACE/KeyValue > ls -l values.txt hash.txt
-rw-r--r--  1 steve  staff  44334161 May  3 18:53 hash.txt
-rw-r--r--  1 steve  staff  400600 May  3 18:53 values.txt

2000000 out of 2000000 records yielded (2k each)
Begin READING test
num trials 100000
time spent 39.8048191071
avg delay 0.000398048191071

real2m46.887s
user1m35.232s
sys 0m19.723s
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: most efficient way of populating a combobox (in maya)

2012-05-03 Thread Steven D'Aprano
On Thu, 03 May 2012 19:07:51 -0700, astan.chee Astan wrote:

> Hi,
> I'm making a GUI in maya using python only and I'm trying to see which
> is more efficient. I'm trying to populate an optionMenuGrp / combo box
> whose contents come from os.listdir(folder). Now this is fine if the
> folder isn't that full but the folder has a few hundred items (almost in
> the thousands), it is also on the (work) network and people are
> constantly reading from it as well. Now I'm trying to write the GUI so
> that it makes the interface, and using threading - Thread, populate the
> box. Is this a good idea? Has anyone done this before and have
> experience with any limitations on it? Is the performance not
> significant?
> Thanks for any advice


Why don't you try it and see?


It's not like populating a combobox in Tkinter with the contents of 
os.listdir requires a large amount of effort. Just try it and see whether 
it performs well enough.



-- 
Steven


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: syntax for code blocks

2012-05-03 Thread Steven D'Aprano
On Thu, 03 May 2012 19:44:57 -0700, alex23 wrote:

[snip]
> My version was:
> 
> def a(): pass
> def b(): pass
> 
> func_packet = locals()
> func(arg, func_packet)
> 
> Now, please explain how that produces name-clashes that your version
> does not.

I too am uncomfortable about passing locals() to a function, but not 
because of imaginary "name clashes". The problem as I see it is that this 
will give the function access to things the function has no need for.

While CPython doesn't allow the called function to rebind names in the 
local scope (except in the case where the local scope is also the global 
scope), that may not apply to all Python implementations. So code which 
works safely in CPython may break badly in some other implementation.

Another problem is that even in implementations where you can't rebind 
locals, the called function might mutate them instead. If any of the 
content of locals() are mutable, you're giving the function the potential 
to mutate them, whether it needs that power or not.

Let me put it this way... suppose you had a function with a signature 
like this:

def spam(a, b, c, **kwargs):
...


and you knew that spam() ignores keyword arguments that it doesn't need. 
Or at least is supposed to. Suppose you needed to make this call:

spam(23, 42, ham=None, cheese="something")


Would you do this instead?

foo = ['some', 'list', 'of', 'things']
spam(23, 42, ham=None, cheese="something", aardvark=foo)

on the basis that since aardvark will be ignored, it is perfectly safe to 
do so?

No, of course not, that would be stupid. Perhaps spam() has a bug that 
will mutate the list even though it shouldn't touch it. More importantly, 
you cause difficulty to the reader, who wonders why you are passing this 
unused and unnecessary aardvark argument to the function.

My argument is that this is equivalent to passing locals() as argument. 
Your local scope contains some arbitrary number of name bindings. Only 
some of them are actually used. Why pass all (say) 25 of them if the 
function only needs access to (say) three? To me, passing locals() as an 
argument in this fashion is a code-smell: not necessarily wrong or bad, but 
a hint that something unusual and slightly worrying is going on, and you 
should take a close look at it because there *may* be a problem.


> So far, pretty much everyone who has tried to engage you on this subject
> on the list. I'm sorry we're not all ZOMGRUBYBLOCKS111 like the
> commenters on your project page.

Goddamit, did I miss a post somewhere? What the hell is this project 
people keep talking about?



-- 
Steven
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: numpy (matrix solver) - python vs. matlab

2012-05-03 Thread Steven D'Aprano
On Thu, 03 May 2012 19:30:35 +0200, someone wrote:

> On 05/02/2012 11:45 PM, Russ P. wrote:
>> On May 2, 1:29 pm, someone  wrote:
>>
>>>> If your data starts off with only 1 or 2 digits of accuracy, as in
>>>> your example, then the result is meaningless -- the accuracy will be
>>>> 2-2 digits, or 0 -- *no* digits in the answer can be trusted to be
>>>> accurate.
>>>
>>> I just solved a FEM eigenvalue problem where the condition number of
>>> the mass and stiffness matrices was something like 1e6... Result
>>> looked good to me... So I don't understand what you're saying about 10
>>> = 1 or 2 digits. I think my problem was accurate enough, though I
>>> don't know what error with 1e6 in condition number, I should expect.
>>> How did you arrive at 1 or 2 digits for cond(A)=10, if I may ask ?
>>
>> As Steven pointed out earlier, it all depends on the precision you are
>> dealing with. If you are just doing pure mathematical or numerical work
>> with no real-world measurement error, then a condition number of 1e6
>> may be fine. But you had better be using "double precision" (64- bit)
>> floating point numbers (which are the default in Python, of course).
>> Those have approximately 12 digits of precision, so you are in good
>> shape. Single-precision floats only have 6 or 7 digits of precision, so
>> you'd be in trouble there.
>>
>> For any practical engineering or scientific work, I'd say that a
>> condition number of 1e6 is very likely to be completely unacceptable.
> 
> So how do you explain that the natural frequencies from FEM (with
> condition number ~1e6) generally correlates really good with real
> measurements (within approx. 5%), at least for the first 3-4 natural
> frequencies?

I would counter your hand-waving ("correlates really good", "within 
approx 5%" of *what*?) with hand-waving of my own:

"Sure, that's exactly what I would expect!"

*wink*

By the way, if I didn't say so earlier, I'll say so now: the 
interpretation of "how bad the condition number is" will depend on the 
underlying physics and/or mathematics of the situation. The 
interpretation of loss of digits of precision is a general rule of thumb 
that holds in many diverse situations, not a rule of physics that cannot 
be broken in this universe.

If you have found a scenario where another interpretation of condition 
number applies, good for you. That doesn't change the fact that, under 
normal circumstances when trying to solve systems of linear equations, a 
condition number of 1e6 is likely to blow away *all* the accuracy in your 
measured data. (Very few physical measurements are accurate to more than 
six digits.)
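
To make the rule of thumb concrete, here is a toy example of my own 
(nothing to do with anybody's FEM data):

import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 1.000001]])     # nearly singular
b = np.array([2.0, 2.000001])       # exact solution is x = [1, 1]
print np.linalg.cond(A)             # roughly 4e6
x1 = np.linalg.solve(A, b)
x2 = np.linalg.solve(A, b + np.array([1e-6, 0.0]))  # perturb b in the 6th digit
print abs(x2 - x1) / abs(x1)        # relative change in x is of order 1

A relative perturbation of about 1e-6 in b becomes a relative change of 
order one in x -- exactly the cond(A) ~ 4e6 amplification the rule of 
thumb predicts.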



-- 
Steven
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: syntax for code blocks

2012-05-03 Thread Chris Angelico
On Fri, May 4, 2012 at 12:44 PM, alex23  wrote:
> On May 4, 2:17 am, Kiuhnm  wrote:
>> I would've come up with something even better if only Python wasn't so rigid.
>
> The inability for people to add 6 billion mini-DSLs to solve any
> stupid problem _is a good thing_. It makes Python consistent and
> predictable, and means I don't need to parse _the same syntax_ utterly
> different ways depending on the context.

Agreed. If a language can be everything, it is nothing. Python has
value BECAUSE it is rigid. A while ago I played around with the idea
of a language that let you define your own operators... did up a spec
for how it could work. It is NOT an improvement over modern languages.

http://rosuav.com/1/?id=683

ChrisA
-- 
http://mail.python.org/mailman/listinfo/python-list


PyTextile Question

2012-05-03 Thread Josh English
I am working with an XML database and have large chunks of text in certain 
child and grandchild nodes.

Because I consider well-formed XML to wrap at 70 characters and indent 
children, I end up with a lot of extra white space in the node.text string. (I 
parse with ElementTree.)

I thought about using pytextile to convert this text to HTML for a nicer 
display option, using a wx.HTMLWindow (I don't need much in the way of fancy 
HTML for this application.)

However, when I convert my multiple-paragraph text object with textile, my 
original line breaks are preserved. Since I'm going to HTML, I don't want my 
line breaks preserved.

Example (may be munged, formatting-wise):


   This is a long multi-line description
with several paragraphs and hopefully, eventually,
proper HTML P-tags.

This is a new paragraph. It should be surrounded
by its own P-tag.

Hopefully (again), I won't have a bunch of unwanted
BR tags thrown in.




I've tried several ways of pre-processing the text in the node, but pytextile 
still gives me line breaks.
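
For concreteness, by pre-processing I mean something roughly like this (a 
sketch, not my exact code) -- join the lines inside each paragraph and keep 
blank lines as paragraph breaks:

import re
import textile

def collapse_soft_breaks(text):
    # split on blank lines, then re-join each paragraph as one long line
    paragraphs = re.split(r'\n\s*\n', text.strip())
    return '\n\n'.join(' '.join(p.split()) for p in paragraphs)

html = textile.textile(collapse_soft_breaks(node.text))  # node from ElementTree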

Any suggestions? Is there a good tutorial for PyTextile that I haven't found?

Thanks.

Josh

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: When convert two sets with the same elements to lists, are the lists always going to be the same?

2012-05-03 Thread Terry Reedy

On 5/3/2012 8:36 PM, Peng Yu wrote:

Hi,

list(a_set)

When convert two sets with the same elements to two lists, are the
lists always going to be the same (i.e., the elements in each list are
ordered the same)? Is it documented anywhere?


"A set object is an unordered collection of distinct hashable objects".
If you create a set from unequal objects with equal hashes, the 
iteration order may (should, will) depend on the insertion order, as the 
first object added with a colliding hash will be at its 'natural' 
position in the hash table while succeeding objects will be elsewhere.


Python 3.3.0a3 (default, May  1 2012, 16:46:00)
>>> hash('a')
-292766495615408879
>>> hash(-292766495615408879)
-292766495615408879
>>> a = {'a', -292766495615408879}
>>> b = {-292766495615408879, 'a'}
>>> list(a)
[-292766495615408879, 'a']
>>> list(b)
['a', -292766495615408879]

--
Terry Jan Reedy

--
http://mail.python.org/mailman/listinfo/python-list


Re: numpy (matrix solver) - python vs. matlab

2012-05-03 Thread Russ P.
On May 3, 4:59 pm, someone  wrote:
> On 05/04/2012 12:58 AM, Russ P. wrote:
>
> > Yeah, I realized that I should rephrase my previous statement to
> > something like this:
>
> > For any *empirical* engineering or scientific work, I'd say that a
> > condition number of 1e6 is likely to be unacceptable.
>
> Still, I don't understand it. Do you have an example of this kind of
> work, if it's not FEM?
>
> > I'd put finite elements into the category of theoretical and numerical
> > rather than empirical. Still, a condition number of 1e6 would bother
> > me, but maybe that's just me.
>
> Ok, but I just don't understand what's in the "empirical" category, sorry...

I didn't look it up, but as far as I know, empirical just means based
on experiment, which means based on measured data. Unless I am
mistaken, a finite element analysis is not based on measured data.
Yes, the results can be *compared* with measured data and perhaps
calibrated with measured data, but those are not the same thing.

I agree with Steven D's comment above, and I will reiterate that a
condition number of 1e6 would not inspire confidence in me. If I had a
condition number like that, I would look for a better model. But
that's just a gut reaction, not a hard scientific rule.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: syntax for code blocks

2012-05-03 Thread alex23
On May 4, 1:47 pm, Steven D'Aprano  wrote:
> I too am uncomfortable about passing locals() to a function, but not
> because of imaginary "name clashes". The problem as I see it is that this
> will give the function access to things the function has no need for.

And I would never use it in the real world. If anything, I'd rebind
via the function parameters:

def f(arg,fn1=None,fn2=None): pass

f('arg', **locals())

This way, only the aspects of the local scope that the function
explicitly asks for are provided.

But: I would _only_ do this in a context I controlled. However, that
would be the _same_ context in which the code blocks example would
also be used. I think. I'm still waiting to see an example that is
clear. I've never _ever_ found myself thinking "this code would be a
LOT clearer if I didn't have to give it a name..."

> Another problem is that even in implementations where you can't rebind
> locals, the called function might mutate them instead. If any of the
> content of locals() are mutable, you're giving the function the potential
> to mutate them, whether it needs that power or not.

This is true. But that would be the case with a provided dict too. I
wasn't suggesting someone blindly throw locals into every function and
hope for the best. I was merely stating that if you know that your
function is only going to use certain values, it doesn't matter how
many values you pass it, if it chooses to ignore them.

> My argument is that this is equivalent to passing locals() as argument.
> Your local scope contains some arbitrary number of name bindings. Only
> some of them are actually used. Why pass all (say) 25 of them if the
> function only needs access to (say) three?

Flip it: I've set up a local scope that _only_ contains the functions
I need. Why manually create a dict, repeating the name of each
function as a key, when I can just use locals()?

> To me, passing locals() as an
> argument in this fashion is a code-smell: not necessary wrong or bad, but
> a hint that something unusual and slightly worrying is going on, and you
> should take a close look at it because there *may* be a problem.

Or, conversely, I _know_ what I'm doing in the context of my own code
and it's the most elegant way to write it.

Frankly, I don't really care; I'm sick of this whole thread. We're all
taking bullshit abstractions & toy examples and none of it is
indicative of how anyone would really write code.

> > So far, pretty much everyone who has tried to engage you on this subject
> > on the list. I'm sorry we're not all ZOMGRUBYBLOCKS111 like the
> > commenters on your project page.
>
> Goddamit, did I miss a post somewhere? What the hell is this project
> people keep talking about?

https://bitbucket.org/mtomassoli/codeblocks/

The examples here are a wonder to behold as well:
http://mtomassoli.wordpress.com/2012/04/20/code-blocks-in-python/
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: key/value store optimized for disk storage

2012-05-03 Thread Paul Rubin
Steve Howell  writes:
> My test was to write roughly 4GB of data, with 2 million keys of 2k
> bytes each.

If the records are something like english text, you can compress
them with zlib and get some compression gain by pre-initializing
a zlib dictionary from a fixed english corpus, then cloning it.
That is, if your messages are a couple paragraphs, you might
say something like:

  iv = (some fixed 20k or so of records concatenated together)
  compressor = zlib(iv).clone()  # I forget what this
 # operation is actually called

  # I forget what this is called too, but the idea is you throw
  # away the output of compressing the fixed text, and sync
  # to a byte boundary
  compressor.sync()

  zout = compressor.compress(your_record).sync()
  ...

i.e. the part you save in the file is just the difference
between compress(corpus) and compress(corpus_record).  To
decompress, you initialize a compressor the same way, etc.

It's been a while since I used that trick but for json records of a few
hundred bytes, I remember getting around 2:1 compression, while starting
with an unprepared compressor gave almost no compression.
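
Sketching it from memory, the entry points are probably compressobj(), 
Compress.copy() for the "clone", and flush(Z_SYNC_FLUSH) for the 
byte-boundary sync -- something like:

import zlib

corpus = open('corpus.txt').read()   # hypothetical fixed ~20k sample of records

base = zlib.compressobj()
prefix = base.compress(corpus) + base.flush(zlib.Z_SYNC_FLUSH)
# 'prefix' is not stored with each record, but it is needed again to decompress

def compress_record(record):
    c = base.copy()                  # clone the primed compressor state
    return c.compress(record) + c.flush(zlib.Z_SYNC_FLUSH)

def decompress_record(blob):
    d = zlib.decompressobj()
    d.decompress(prefix)             # replay the corpus prefix, discard its output
    return d.decompress(blob)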
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: pyjamas / pyjs

2012-05-03 Thread John O'Hagan
On Thu, 3 May 2012 04:52:36 -0700 (PDT)
alex23  wrote:

> Anyone else following the apparent hijack of the pyjs project from its
> lead developer?
> -- 

Just read the thread on pyjamas-dev. Even without knowing anything about the
lead-up to the coup, its leader's linguistic contortions trying to justify it
("i have retired Luke of the management duties"), and his eagerness to
change the subject ("let's move into more productive areas of discussion", and
"this is the path forward; make good of the newfound power") are indicative of
a guilty conscience or an underdeveloped sense of ethics. 

He's convinced himself that his actions were technically legal; get this: "i
would recommend you terminate thought paths regarding criminal activity", and
"please don't make me further intervene or forcibly terminate additional
threats or remarks. there is no case to be had"! 

But I am having trouble imagining a scenario where sneakily acquiring the
domain name and copying all the source and mailing-list data to your own server
would be preferable to just forking, which is what everyone knows you're
supposed to do in cases where the boss is too FOSS for your taste, or whatever
the problem was. 

Seems like a great deal of hurt has occurred, both to people and to the
project, just to save the administrative hassle of forking. In the words of the
hijacker, he was going to fork but "an opportunity presented itself, and i ran
with it". Nice. 

--

John 

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: key/value store optimized for disk storage

2012-05-03 Thread Steve Howell
On May 3, 9:38 pm, Paul Rubin  wrote:
> Steve Howell  writes:
> > My test was to write roughly 4GB of data, with 2 million keys of 2k
> > bytes each.
>
> If the records are something like english text, you can compress
> them with zlib and get some compression gain by pre-initializing
> a zlib dictionary from a fixed english corpus, then cloning it.
> That is, if your messages are a couple paragraphs, you might
> say something like:
>
>   iv = (some fixed 20k or so of records concatenated together)
>   compressor = zlib(iv).clone()  # I forget what this
>                                  # operation is actually called
>
>   # I forget what this is called too, but the idea is you throw
>   # away the output of compressing the fixed text, and sync
>   # to a byte boundary
>   compressor.sync()
>
>   zout = compressor.compress(your_record).sync()
>   ...
>
> i.e. the part you save in the file is just the difference
> between compress(corpus) and compress(corpus_record).  To
> decompress, you initialize a compressor the same way, etc.
>
> It's been a while since I used that trick but for json records of a few
> hundred bytes, I remember getting around 2:1 compression, while starting
> with an unprepared compressor gave almost no compression.

Sounds like a useful technique.  The text snippets that I'm
compressing are indeed mostly English words, and 7-bit ascii, so it
would be practical to use a compression library that just uses the
same good-enough encodings every time, so that you don't have to write
the encoding dictionary as part of every small payload.

Sort of as you suggest, you could build a Huffman encoding for a
representative run of data, save that tree off somewhere, and then use
it for all your future encoding/decoding.

Is there a name to describe this technique?




-- 
http://mail.python.org/mailman/listinfo/python-list


Re: key/value store optimized for disk storage

2012-05-03 Thread Paul Rubin
Steve Howell  writes:
> Sounds like a useful technique.  The text snippets that I'm
> compressing are indeed mostly English words, and 7-bit ascii, so it
> would be practical to use a compression library that just uses the
> same good-enough encodings every time, so that you don't have to write
> the encoding dictionary as part of every small payload.

Zlib stays adaptive, the idea is just to start with some ready-made
compression state that reflects the statistics of your data.

> Sort of as you suggest, you could build a Huffman encoding for a
> representative run of data, save that tree off somewhere, and then use
> it for all your future encoding/decoding.

Zlib is better than Huffman in my experience, and Python's zlib module
already has the right entry points.  Looking at the docs,
Compress.flush(Z_SYNC_FLUSH) is the important one.  I did something like
this before and it was around 20 lines of code.  I don't have it around
any more but maybe I can write something else like it sometime.

> Is there a name to describe this technique?

Incremental compression maybe?
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: syntax for code blocks

2012-05-03 Thread Ben Finney
alex23  writes:

> The examples here are a wonder to behold as well:
> http://mtomassoli.wordpress.com/2012/04/20/code-blocks-in-python/

Wow. “What really happens is that rewrite rewrites the code, executes it
and quits.”

Please keep this far away from anything resembling Python.

-- 
 \   “If you go flying back through time and you see somebody else |
  `\   flying forward into the future, it's probably best to avoid eye |
_o__)   contact.” —Jack Handey |
Ben Finney
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: key/value store optimized for disk storage

2012-05-03 Thread Steve Howell
On May 3, 11:03 pm, Paul Rubin  wrote:
> Steve Howell  writes:
> > Sounds like a useful technique.  The text snippets that I'm
> > compressing are indeed mostly English words, and 7-bit ascii, so it
> > would be practical to use a compression library that just uses the
> > same good-enough encodings every time, so that you don't have to write
> > the encoding dictionary as part of every small payload.
>
> Zlib stays adaptive, the idea is just to start with some ready-made
> compression state that reflects the statistics of your data.
>
> > Sort of as you suggest, you could build a Huffman encoding for a
> > representative run of data, save that tree off somewhere, and then use
> > it for all your future encoding/decoding.
>
> Zlib is better than Huffman in my experience, and Python's zlib module
> already has the right entry points.  Looking at the docs,
> Compress.flush(Z_SYNC_FLUSH) is the important one.  I did something like
> this before and it was around 20 lines of code.  I don't have it around
> any more but maybe I can write something else like it sometime.
>
> > Is there a name to describe this technique?
>
> Incremental compression maybe?

Many thanks, this is getting me on the right path:

import zlib

# Prime the compressor with a shared prefix ("foobar" here), sync to a
# byte boundary, and clone it so a second record can start from the
# same primed state.
compressor = zlib.compressobj()
s = compressor.compress("foobar")
s += compressor.flush(zlib.Z_SYNC_FLUSH)

s_start = s
compressor2 = compressor.copy()

# First record continues the original compressor.
s += compressor.compress("baz")
s += compressor.flush(zlib.Z_FINISH)
print zlib.decompress(s)

# Second record reuses the cloned, primed state.
s = s_start
s += compressor2.compress("spam")
s += compressor2.flush(zlib.Z_FINISH)
print zlib.decompress(s)
-- 
http://mail.python.org/mailman/listinfo/python-list


[image-SIG] img.show() does not seem to work.

2012-05-03 Thread Neru Yumekui

I am trying to get Image.show() to work, but am struggling with it. Thus far 
I have been using PIL on Windows, where it has worked fine, but I recently 
installed it on a Linux machine, where img.show() does not seem to work 
(all other features apart from screengrab seem to work well).

When I run the following as a normal user (id est, not root)
Image.new("RGBA", (100, 100), (255, 255, 255, 0)).show()

it results in these error messages in popupboxes:

Failed to open "/tmp/tmpsVfqf4".
Error stating file '/tmp/tmpsVfqf4': 
No such file or directory.

Failed to execute default File Manager.
Input/output error.

The interpreter has the following printout
Thunar: Failed to open "/tmp/tmpFs5EZr": Error stating file '/tmp/tmpFs5EZr': 
No such file or directory

Running the same as root, nothing (visible) seems to happen, either via popups 
or in the interpreter, but no image shows up either.

At first I did not have xv installed; it tried to open the image in gimp, but 
that did not work, as it resulted in the errors above when gimp opened. So I 
installed xv, and it still tried to open gimp; then I removed gimp, and that is 
more or less how I ended up where I am now. 

I guess show() is not that important to me as I could just save the image and 
open it manually, but it would be helpful to have show() at times.  
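
The manual route I mean is just something along these lines (assuming 
xdg-open is available):

import subprocess
import Image    # or: from PIL import Image, depending on how PIL is installed

img = Image.new("RGBA", (100, 100), (255, 255, 255, 0))
img.save("/tmp/preview.png")
subprocess.call(["xdg-open", "/tmp/preview.png"])   # open with the desktop handler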
-- 
http://mail.python.org/mailman/listinfo/python-list


python sandbox question

2012-05-03 Thread viral shah
Hi

Can anyone answer these two questions:

I have two questions regarding Pysandbox:

1.) How do I achieve the functionality of eval? I understand
sandbox.execute() is equivalent to exec, but I can't find anything such
that if the code entered were 2 + 2, then it would return 4, or something
to that effect.

2.) By default, sandbox.execute() makes a passed-in environment read-only
-- i.e. if I do sandbox.execute('data.append(4)', locals={'data': [1, 2,
3]}), an error will occur. How do I make passed-in environments read-write?
-- 
http://mail.python.org/mailman/listinfo/python-list