[linked lists] Newbie - chapter 19 in "How to think like a CS in python"

2005-07-14 Thread Philip
Hi,
I'm reading "How to think like a computer scientist in python". So far,
it's been smooth sailing, but the final exercise in chapter 19 really
has me stumped. Is it just me, or did this book get very difficult, very
quickly? It says:

"As an exercise, write an implementation of the Priority Queue ADT using a
linked list. You should keep the list sorted so that removal is a constant
time operation. Compare the performance of this implementation with the
Python list implementation."

Here is the code so far:

import sys

class Node:
    def __init__(self, cargo=None, next=None, prev=None):
        self.cargo = cargo
        self.next = next
        self.prev = prev

    def __str__(self):
        return str(self.cargo)

    def printBackward(self):
        if self.next != None:
            tail = self.next
            tail.printBackward()
        print self.cargo,

class LinkedList:
    def __init__(self):
        self.length = 0
        self.head = None

    def printBackward(self):
        print "[",
        if self.head != None:
            self.head.printBackward()
        print "]"

    def addFirst(self, cargo):
        node = Node(cargo)
        node.next = self.head
        self.head = node
        self.length = self.length + 1

def printList(node):
    sys.stdout.write("[")
    while node:
        sys.stdout.write(str(node.cargo))
        if node.next != None:
            sys.stdout.write(", ")
        else:
            sys.stdout.write("]")
        node = node.next
    print

def printBackward(list):
    if list == None:
        return
    head = list
    tail = list.next
    printBackward(tail)
    print head,

def removeSecond(list):
    if list == None: return
    if list.next == None: return
    first = list
    second = list.next
    first.next = second.next
    second.next = None
    return second

def printBackwardNicely(list):
    print "[",
    if list != None:
        head = list
        tail = list.next
        printBackward(tail)
        print head,
    print "]"

class Queue:
    def __init__(self):
        self.length = 0
        self.head = None

    def isEmpty(self):
        return (self.length == 0)

    def insert(self, cargo):
        node = Node(cargo)
        node.next = None
        if self.head == None:
            self.head = node
        else:
            last = self.head
            while last.next: last = last.next
            last.next = node
        self.length = self.length + 1

    def remove(self):
        cargo = self.head.cargo
        self.head = self.head.next
        self.length = self.length - 1
        return cargo

class ImprovedQueue:
    def __init__(self):
        self.length = 0
        self.head = None
        self.last = None

    def isEmpty(self):
        return (self.length == 0)

    def insert(self, cargo):
        node = Node(cargo)
        node.next = None
        if self.length == 0:
            self.head = self.last = node
        else:
            last = self.last
            last.next = node
            self.last = node
        self.length = self.length + 1

    def remove(self):
        cargo = self.head.cargo
        self.head = self.head.next
        self.length = self.length - 1
        if self.length == 0:
            self.last = None
        return cargo

class PriorityQueue:
    def __init__(self):
        self.items = []

    def isEmpty(self):
        return self.items == []

    def insert(self, item):
        self.items.append(item)

    def remove(self):
        maxi = 0
        for i in range(1, len(self.items)):
            if self.items[i] > self.items[maxi]:
                maxi = i
        item = self.items[maxi]
        self.items[maxi:maxi+1] = []
        return item

class Golfer:
    def __init__(self, name, score):
        self.name = name
        self.score = score

    def __str__(self):
        return "%-16s: %d" % (self.name, self.score)

    def __cmp__(self, other):
        if self.score < other.score: return 1   # lower score ranks higher
        if self.score > other.score: return -1
        return 0

I figured I'd copy ImprovedQueue and tamper with the insert method
so as to traverse the linked list.
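Here's a sketch of the sorted-insert idea I think the exercise is after, in modern Python 3 syntax rather than the book's Python 2 (class names are my own):

```python
class Node:
    def __init__(self, cargo=None, next=None):
        self.cargo = cargo
        self.next = next

class LinkedPriorityQueue:
    """Keeps the list sorted, largest first, so remove() is constant time."""
    def __init__(self):
        self.head = None
        self.length = 0

    def isEmpty(self):
        return self.length == 0

    def insert(self, cargo):
        node = Node(cargo)
        if self.head is None or cargo > self.head.cargo:
            # New largest item becomes the head.
            node.next = self.head
            self.head = node
        else:
            # Walk until the next node is smaller than the new one.
            current = self.head
            while current.next is not None and current.next.cargo >= cargo:
                current = current.next
            node.next = current.next
            current.next = node
        self.length += 1

    def remove(self):
        # The largest item is always at the head: O(1).
        cargo = self.head.cargo
        self.head = self.head.next
        self.length -= 1
        return cargo
```

Insertion is O(n), but that's the trade the exercise asks for: removal becomes O(1), whereas the book's list-based PriorityQueue scans the whole list on every remove.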

Re: fcntl and siginfo_t in python

2009-04-30 Thread Philip
ma  gmail.com> writes:

> Here's something that I came up with so far. I'm having some issues with
> segfaulting if I pass a struct member by ref in ctypes (see below); if
> not, I just get a "Real-time signal 0" sent back to me.
> 
> Any ideas?

Try "SIGRTMIN+1", per http://souptonuts.sourceforge.net/code/dnotify.c.html
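Spelled out a bit, here is a Linux-only sketch of the dnotify pattern from that C example (`watch_directory` and the flag choices are my own; on modern kernels inotify has largely replaced this interface):

```python
import fcntl
import os
import signal

def watch_directory(path, handler):
    """Linux-only dnotify sketch: run `handler` when `path` changes.

    Uses SIGRTMIN+1 rather than SIGRTMIN itself, which is what cured
    the "Real-time signal 0" symptom in the linked C example.
    """
    sig = signal.SIGRTMIN + 1
    signal.signal(sig, handler)
    fd = os.open(path, os.O_RDONLY)
    # Deliver notifications via our real-time signal instead of SIGIO.
    fcntl.fcntl(fd, fcntl.F_SETSIG, sig)
    fcntl.fcntl(fd, fcntl.F_NOTIFY,
                fcntl.DN_CREATE | fcntl.DN_MODIFY | fcntl.DN_MULTISHOT)
    return fd  # keep this open; closing it cancels the watch
```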

Philip



--
http://mail.python.org/mailman/listinfo/python-list


Re: fcntl and siginfo_t in python

2009-05-06 Thread Philip
ma  gmail.com> writes:
> 
> Ok! So, I decided to write a C-extension instead of using ctypes...
> 
> This works beautifully. Now, I want to release this to the public, so
> I'm thinking of making a bit of code cleanup. Should I just pack the
> entire siginfo_t struct, right now I just use the fd, into a
> dictionary and pass it to the python callback handler function? Maybe
> there might be some more suggestions to what data structures to use,
> so I'm open right now to any of them.

Could we have a look at your working prototype?

Philip J. Tait
http://subarutelescope.org

--
http://mail.python.org/mailman/listinfo/python-list


Re: Shared memory python between two separate shell-launched processes

2011-02-11 Thread Philip
On Feb 11, 6:27 am, Adam Skutt  wrote:
> On Feb 10, 9:30 am, "Charles Fox (Sheffield)" 
> wrote:
>
> > But when I look at posix_ipc and POSH it looks like you have to fork
> > the second process from the first one, rather than access the shared
> > memory though a key ID as in standard C unix shared memory.  Am I
> > missing something?   Are there any other ways to do this?
>
> I don't see what would have given you that impression at all, at least
> with posix_ipc.  It's a straight wrapper on the POSIX shared memory
> functions, which can be used across processes when used correctly.
> Even if for some reason that implementation lacks the right stuff,
> there's always SysV IPC.
>
[some stuff snipped]
> Also, just FYI, there is no such thing as "standard C unix shared
> memory".  There are at least three different relatively widely-
> supported techniques: SysV, (anonymous) mmap, and POSIX Realtime
> Shared Memory (which normally involves mmap).  All three are
> standardized by the Open Group, and none of the three are implemented
> with perfect consistency across Unices.

Adam is 100% correct. posix_ipc doesn't require fork.

@the OP: Charles, since you refer to "standard" shared memory as being
referred to by a key, it sounds like you're thinking of SysV shared
memory. POSIX IPC objects are referred to by a string that looks like
a filename, e.g. "/my_shared_memory".
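For illustration, the same by-name model is in the standard library these days (`multiprocessing.shared_memory`, Python 3.8+); posix_ipc works analogously, with the segment addressed by its name rather than inherited across a fork:

```python
from multiprocessing import shared_memory

# Process A: create a named segment. No fork needed -- any process that
# knows the name can attach to it later.
creator = shared_memory.SharedMemory(name="my_shared_memory",
                                     create=True, size=64)
creator.buf[:5] = b"hello"

# Process B (could be a completely separate shell-launched script):
attacher = shared_memory.SharedMemory(name="my_shared_memory")
print(bytes(attacher.buf[:5]))  # b'hello'

attacher.close()
creator.close()
creator.unlink()  # remove the segment once everyone is done with it
```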

Note that there's a module called sysv_ipc which is a close cousin of
posix_ipc. I'm the author of both. IMO POSIX is easier to use.

Cheers
Philip


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Unit testing multiprocessing code on Windows

2011-02-17 Thread philip

Quoting Matt Chaput :

Does anyone know the "right" way to write a unit test for code that  
uses multiprocessing on Windows?


The problem is that with both "python setup.py tests" and  
"nosetests", when they get to testing any code that starts Processes  
they spawn multiple copies of the testing suite (i.e. the new  
processes start running tests as if they were started with "python  
setup.py tests"/"nosetests"). The test runner in PyDev works properly.


Maybe multiprocessing is starting new Windows processes by copying  
the command line of the current process? But if the command line is  
"nosetests", it's a one way ticket to an infinite explosion of  
processes.




Hi Matt,
I assume you're aware of this documentation, especially the item  
entitled "Safe importing of main module"?


http://docs.python.org/release/2.6.6/library/multiprocessing.html#windows
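The short version of that section: on Windows, multiprocessing starts workers by launching a fresh interpreter that re-imports your main module, so any process-spawning entry point must be guarded (names below are just an example):

```python
import multiprocessing

def square(x):
    # Worker function; must live at module level so Windows can import it.
    return x * x

def run_jobs():
    with multiprocessing.Pool(processes=2) as pool:
        return pool.map(square, [1, 2, 3, 4])

if __name__ == "__main__":
    # Without this guard, each spawned worker re-runs the spawning code
    # when it re-imports the module -- the infinite explosion you describe.
    print(run_jobs())
```

A test runner like nosetests is itself the main module, so the same rule applies to any test code that creates Processes.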


HTH
P

--
http://mail.python.org/mailman/listinfo/python-list


Re: pycopg2 build problems

2009-09-23 Thread philip

Quoting Wolodja Wentland :


On Wed, Sep 23, 2009 at 12:24 -0700, devaru wrote:

I'm trying to install psycopg2 on my system. I followed the
instruction in INSTALL file and gave the command
python setup.py build
running build
running build_py
running build_ext
error: No such file or directory


I ran into this some days ago. The problem is not related to the
distribution you downloaded, but to missing information about PostgreSQL
itself.

IIRC the file in question is "/usr/bin/pg_config". The file is   
probably packaged

in some lib*-dev package on your distribution.


That's the most common install problem with psycopg2 -- setup.py can't  
find (or execute) pg_config which it needs to decide how to talk to  
Postgres. I remember it giving a different (and more descriptive)  
error but I haven't used psycopg2 recently.


To the OP: if executing pg_config fails from the same command line in  
which you're running setup.py, then Wolodja is absolutely correct. You  
need to get pg_config on your path somewhere, or there might be an  
environment variable you can set to tell setup where to find it if you  
don't want it in your path.


Good luck
Philip




--- Debian example ---
$ apt-file search /usr/bin/pg_config
libpq-dev: /usr/bin/pg_config
--- snip ---

thanks for all the fish

Wolodja





--
http://mail.python.org/mailman/listinfo/python-list


PyPy3 2.1 beta 1 released

2013-07-30 Thread Philip Jenvey

PyPy3 2.1 beta 1


We're pleased to announce the first beta of the upcoming 2.1 release of
PyPy3. This is the first release of PyPy which targets Python 3 (3.2.3)
compatibility.

We would like to thank all of the people who donated_ to the `py3k proposal`_
for supporting the work that went into this and future releases.

You can download the PyPy3 2.1 beta 1 release here:

http://pypy.org/download.html#pypy3-2-1-beta-1

Highlights
==========

* The first release of PyPy3: support for Python 3, targeting CPython 3.2.3!

  - There are some `known issues`_ including performance regressions (issues
`#1540`_ & `#1541`_) slated to be resolved before the final release.

What is PyPy?
=============

PyPy is a very compliant Python interpreter, almost a drop-in replacement for
CPython 2.7.3 or 3.2.3. It's fast due to its integrated tracing JIT compiler.

This release supports x86 machines running Linux 32/64, Mac OS X 64 or Windows
32. Also this release supports ARM machines running Linux 32bit - anything with
``ARMv6`` (like the Raspberry Pi) or ``ARMv7`` (like Beagleboard,
Chromebook, Cubieboard, etc.) that supports ``VFPv3`` should work.


Cheers,
the PyPy team
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Validating string for FDQN

2011-06-06 Thread Philip Semanchuk

On Jun 6, 2011, at 8:40 PM, Eric wrote:

> Hello,
> 
> Is there a library or regex that can determine if a string is a fqdn
> (fully qualified domain name)? I'm writing a script that needs to add
> a defined domain to the end of a hostname if it isn't already a fqdn
> and doesn't contain the defined domain.

The ones here served me very well:
http://pyxml.cvs.sourceforge.net/viewvc/pyxml/xml/xml/Uri.py?revision=1.1&view=markup

bye
Philip
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: dummy, underscore and unused local variables

2011-06-13 Thread Philip Semanchuk

On Jun 13, 2011, at 11:37 AM, Tim Johnson wrote:

> NOTE: I see much on google regarding unused local variables, 
> however, doing a search for 'python _' hasn't proved fruitful.

Yes, Google's not good for searching punctuation. But 'python underscore dummy 
OR unused' might work better.

> On a related note: from the python interpreter if I do
>>>> help(_) 
> I get 
> Help on bool object:
> 
> class bool(int)
> |  bool(x) -> bool
> ..
> I'd welcome comments on this as well.
> 

In the Python interpreter, _ gives you the results of the last expression. When 
you first start the interpreter, _ is undefined.

$ python
>>> help(_)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
NameError: name '_' is not defined
>>> True
True
>>> help(_)

Help on bool object:

class bool(int)
 |  bool(x) -> bool


In your case when you asked for help(_), the last object you used must have 
been a bool.

> 
> :) I expect to be edified is so many ways, some
> of them unexpected.

That's the nice thing about this list!

Hope this helps
Philip


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: search through this list's email archives

2011-06-23 Thread Philip Semanchuk

On Jun 23, 2011, at 12:11 PM, Cathy James wrote:

> Dear All,
> 
> I looked through this forum's archives, but I can't find a way to
> search for a topic through the archive. Am I missing something?


http://www.google.com/search?q=site%3Amail.python.org%2Fpipermail%2Fpython-list%2F+++banana
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: unzip problem

2011-06-24 Thread Philip Semanchuk

On Jun 24, 2011, at 10:55 AM, Ahmed, Shakir wrote:

> Hi,
> 
> 
> 
> I am getting following error message while unziping a .zip file. Any
> help or idea is highly appreciated.
> 
> 
> 
> Error message>>>
> 
> Traceback (most recent call last):
> 
>  File "C:\Zip_Process\py\test2_new.py", line 15, in <module>
> 
>outfile.write(z.read(name))
> 
> IOError: (22, 'Invalid argument')


Start debugging with these two steps --
1) Add this just after "for name in z.namelist():"
   print name

That way you can tell which file is failing.

2) You can't tell whether you're getting an error on the write or the read 
because you've got two statements combined into one line. Change this --
   outfile.write(z.read(name))
to this --
   data = z.read(name)
   outfile.write(data)
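Putting it together, here's roughly how I'd structure the whole loop (a Python 3 sketch, untested against your files; note too that `open('T:\\test\\*.zip')` won't expand the wildcard -- `glob` does that -- and members in subdirectories need their directories created first):

```python
import glob
import os
import zipfile

def unzip_all(pattern, dest="."):
    for zip_path in glob.glob(pattern):      # open() won't expand *.zip
        with zipfile.ZipFile(zip_path) as z:
            for name in z.namelist():
                print(name)                  # shows which member fails, if one does
                target = os.path.join(dest, name)
                if name.endswith("/"):       # directory entry: just create it
                    os.makedirs(target, exist_ok=True)
                    continue
                os.makedirs(os.path.dirname(target) or ".", exist_ok=True)
                data = z.read(name)          # read and write kept separate so a
                with open(target, "wb") as f:  # traceback points at one step
                    f.write(data)

# unzip_all(r"T:\test\*.zip")
```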


Good luck
Philip


> 
> 
> 
> 
> 
> The script is here:
> 
> *
> 
> fh = open('T:\\test\\*.zip', 'rb')
> 
> z = zipfile.ZipFile(fh)
> 
> for name in z.namelist():
> 
>outfile = open(name, 'wb')
> 
> 
> 
>outfile.write(z.read(name))
> 
>print z
> 
>print outfile
> 
>outfile.close()
> 
> 
> 
> fh.close()
> 
> 
> 
> 
> 
> -- 
> http://mail.python.org/mailman/listinfo/python-list

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: wx MenuItem - icon is missing

2011-07-05 Thread Philip Semanchuk

On Jul 5, 2011, at 4:02 AM, Laszlo Nagy wrote:

> def onPopupMenu(self, evt):
>     menu = wx.Menu()
>     for title, bitmap in self.getPopupMenuItems():
>         item = wx.MenuItem(None, -1, title)
>         if bitmap:
>             item.SetBitmap(bitmap)
>         menu.AppendItem(item)
>         menu.Bind(wx.EVT_MENU, self.onPopupMenuItemSelected, item)
>     self.PopupMenu(menu, evt.GetPoint())
>     menu.Destroy()
> 
> I have read somewhere that under GTK, I have to assign the bitmap before 
> Append-ing the MenuItem to the Menu. So did I, but it doesn't work. Menu item 
> icons are not showing up in Ubuntu. On Windows 7, everything is fine. What am 
> I doing wrong?
> 
> System: Ubuntu 11 amd64
> Python: 2.7.1+
> wx.__version__ '2.8.11.0'

Hi Laszlo,
Two suggestions --

1. Post a complete example that demonstrates the problem so that we don't have 
to dummy up a wx app ourselves to try your code.

2. Ask on the wxPython mailing list.

Good luck
Philip
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: wx MenuItem - icon is missing

2011-07-05 Thread Philip Semanchuk

On Jul 5, 2011, at 3:32 PM, Laszlo Nagy wrote:

> 
>> 1. Post a complete example that demonstrates the problem so that we don't 
>> have to dummy up a wx app ourselves to try your code.
> 

[code sample snipped]

> 
> Under windows, this displays the icon for the popup menu item. Under GTK it 
> doesn't and there is no error message, no exception.


I get different results than you. 

Under Ubuntu 9.04 w with wx 2.8.9.1, when I right click I see a menu item 
called test with little icon of a calculator or something.

Under OS X 10.6 with wx 2.8.12.0 and Win XP with wx 2.8.10.1, when I right 
click I get this --

Traceback (most recent call last):
  File "x.py", line 46, in onPopupMenu
item = wx.MenuItem(None,-1,u"Test")
  File 
"/usr/local/lib/wxPython-unicode-2.8.12.0/lib/python2.6/site-packages/wx-2.8-mac-unicode/wx/_core.py",
 line 11481, in __init__
_core_.MenuItem_swiginit(self,_core_.new_MenuItem(*args, **kwargs))
wx._core.PyAssertionError: C++ assertion "parentMenu != NULL" failed at 
/BUILD/wxPython-src-2.8.12.0/src/common/menucmn.cpp(389) in wxMenuItemBase(): 
menuitem should have a menu

Hope this helps more than it confuses.

Cheers
Philip




-- 
http://mail.python.org/mailman/listinfo/python-list


Re: wx MenuItem - icon is missing

2011-07-06 Thread Philip Semanchuk

On Jul 6, 2011, at 2:25 AM, Laszlo Nagy wrote:

> 
>>> Under windows, this displays the icon for the popup menu item. Under GTK it 
>>> doesn't and there is no error message, no exception.
>> 
>> I get different results than you.
>> 
>> Under Ubuntu 9.04 w with wx 2.8.9.1, when I right click I see a menu item 
>> called test with little icon of a calculator or something.
>> 
>> Under OS X 10.6 with wx 2.8.12.0 and Win XP with wx 2.8.10.1, when I right 
>> click I get this --
>> 
>> Traceback (most recent call last):
>>   File "x.py", line 46, in onPopupMenu
>> item = wx.MenuItem(None,-1,u"Test")
>>   File 
>> "/usr/local/lib/wxPython-unicode-2.8.12.0/lib/python2.6/site-packages/wx-2.8-mac-unicode/wx/_core.py",
>>  line 11481, in __init__
>> _core_.MenuItem_swiginit(self,_core_.new_MenuItem(*args, **kwargs))
>> wx._core.PyAssertionError: C++ assertion "parentMenu != NULL" failed at 
>> /BUILD/wxPython-src-2.8.12.0/src/common/menucmn.cpp(389) in 
>> wxMenuItemBase(): menuitem should have a menu
> I guess I'll have to write to the wxPython mailing list. Seriously, adding a 
> simple menu to something is supposed to be platform independent, but we got 
> four different results on four systems. :-(

I can understand why it's frustrating, but menu items with icons on them 
aren't exactly common, so you're wandering into territory that's probably not 
so thoroughly explored (nor standard across platforms). Now that I think about 
it, I don't know that I've ever seen one under OS X, and I don't even know if 
it's supported at all.

Me, I would start by addressing the error in the traceback. wx doesn't seem 
happy with an orphan menu item; why not create a wx.Menu and assign the menu 
item to that? It might solve your icon problem; you never know.

In defense of wxPython, we have three wx apps in our project and they contain 
very little platform-specific code. To be fair, we've had to rewrite some code 
after we found that it worked on one platform but not another, but generally 
we're able to find code that works on all platforms. We have only a couple of 
places where we were forced to resort to this kind of thing:

    if wx.Platform == "__WXGTK__":
        do X
    elif wx.Platform == "__WXMAC__":
        do Y
    etc.


> Thank you for trying out though.

You're welcome. VirtualBox helped.


bye
Philip



-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Question- Getting Windows 64bits information Python 32bits

2011-07-07 Thread Philip Reynolds
On Thu, 07 Jul 2011, Andrew Berg wrote:

> On 2011.07.07 10:21 AM, António Rocha wrote:
> > I'm running Python (32b) in Windows7 (at 64bits) and I would like to
> > know how can I check if my machine is a 32b or 64b in Python. Is it
> > possible? I saw a few examples (like platform) but they only provide
> > information about Python not the machine.
> os.environ['processor_architecture']
> 
> os.environ is a dictionary of system environment variables. That exact
> key probably only exists on Windows, but I'm sure there is a similar key on
> other platforms.

$ python -c 'import platform; print platform.architecture()'
('64bit', 'ELF')

  http://docs.python.org/library/platform.html
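Worth noting: platform.architecture() reports the width of the Python binary, not the machine, which was the OP's actual question -- a 32-bit Python on 64-bit Windows still answers '32bit'. A sketch that distinguishes the two (the WOW64 environment-variable check is a Windows-specific assumption):

```python
import os
import platform
import sys

def python_is_64bit():
    # Width of the interpreter itself.
    return sys.maxsize > 2**32

def machine_is_64bit():
    # A 32-bit Python on 64-bit Windows runs under WOW64, which sets
    # PROCESSOR_ARCHITEW6432; elsewhere, trust platform.machine().
    if os.environ.get("PROCESSOR_ARCHITEW6432", "").endswith("64"):
        return True
    return platform.machine().endswith("64")
```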

Phil.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: how to get Python to insert special characters in an xml file?

2011-07-15 Thread Philip Semanchuk

On Jul 15, 2011, at 7:53 AM, hackingKK wrote:

> Hello all.
> I am currently developing a python application which reads and writes some 
> data to an xml file.
> I use the elementTree library for doing this.
> My simple question is that if I have some thing like & as in "kk & company " 
> as organisation name, how can I have Python take this as a literal string 
> including the & sign and put in the   tag?
> Even same applies while reading the file.  I would like to have the & come as 
> a part of the literal string.

Hi Krishnakant,
You don't need to do anything special to insert metacharacters like & and < and 
> into XML using ElementTree. Just treat them as normal text and ElementTree 
will change them to entity references (&amp;, etc.) when it writes your file to 
disk. 

If you're having a specific problem with this, post some code.
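For example (tag name invented for illustration) -- the & is stored literally and escaped only at serialization time:

```python
import xml.etree.ElementTree as ET

org = ET.Element("organisation")
org.text = "kk & company"   # plain literal string, no escaping needed

xml_bytes = ET.tostring(org)
print(xml_bytes)  # b'<organisation>kk &amp; company</organisation>'

# Round-trip: parsing gives you the literal text back.
assert ET.fromstring(xml_bytes).text == "kk & company"
```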

Cheers
Philip
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Deeply nested dictionaries - should I look into a database or am I just doing it wrong?

2011-07-31 Thread Philip Semanchuk

On Jul 31, 2011, at 4:04 PM, Thorsten Kampe wrote:

> * Andrew Berg (Sun, 31 Jul 2011 13:36:43 -0500)
>> On 2011.07.31 02:41 AM, Thorsten Kampe wrote:
>>> Another approach would be named tuples instead of dictionaries or
>>> flat SQL tables.
>> What would the advantage of that be?
> 
> QueueItem.x264['avs']['filter']['fft3d']['ffte'] would be 
> QueueItem.x264.avs.filter.fft3d.ffte. I recently "migrated" from a 
> syntax of - example - datetuple[fieldpositions['tm_year'][0]] (where 
> fieldpositions was a dictionary containing a list) to 
> datetuple.tm_year_start which is much more readable.
> 
> The advantage of a SQL(ite) database would be simple flat tables but 
> accessing them would be more difficult.
> 
> Even a INI config file structure could match your problem.

INI files are OK for lightweight use, but I find them very fragile. Since 
there's no specification for them, libraries don't always agree on how to read 
them. For instance, some libraries treat # as the comment character, and others 
think it is ; and others accept both. There's no standard way to specify the 
encoding, and, as would be critical to the OP who is nesting dicts inside of 
dicts, not all INI file libraries accept nested sections.

To the OP -- if you're looking to write this to disk, I recommend XML or 
SQLite. 
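To sketch the SQLite route: the nested paths flatten naturally into a two-column table (schema and helper names here are just an illustration):

```python
import sqlite3

def save_settings(conn, settings, prefix=""):
    # Flatten nested dicts into dotted paths: {"a": {"b": 1}} -> ("a.b", "1")
    for key, value in settings.items():
        path = f"{prefix}.{key}" if prefix else key
        if isinstance(value, dict):
            save_settings(conn, value, path)
        else:
            conn.execute("INSERT OR REPLACE INTO settings VALUES (?, ?)",
                         (path, str(value)))

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE settings (path TEXT PRIMARY KEY, value TEXT)")
save_settings(conn, {"x264": {"avs": {"filter": {"fft3d": {"ffte": True}}}}})
row = conn.execute("SELECT value FROM settings WHERE path = ?",
                   ("x264.avs.filter.fft3d.ffte",)).fetchone()
print(row)  # ('True',)
```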

JMHO,
Philip
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Table Driven GUI Definition?

2011-08-05 Thread Philip Semanchuk

On Aug 5, 2011, at 4:10 PM, Tim Daneliuk wrote:

> On 8/5/2011 2:05 PM, Irmen de Jong said this:
>> On 05-08-11 19:53, Tim Daneliuk wrote:
>>> I have a task where I want to create pretty simple one page visual
>>> interfaces (Graphical or Text, but it needs to run across Windows,
>>> Cygwin, Linux,*BSD, OSX ...).  These interfaces are nothing more
>>> than option checklists and text fields.  Conceptually something like:
>>> 
>>> Please Select Your Installation Options:
>>> 
>>>Windows Compatibility Services  _
>>>Linux Compatibility Services_
>>>TRS-DOS Compatibility Services  _
>>> 
>>>What Is Your email Address: ___
>>> 
>>> What I'm looking for is a way to describe such forms in a text
>>> file that can then be fed into a tool to generate the necessary
>>> pyGUI, Tkinter, (or whatever) code.   The idea is that it should
>>> be simple to generate a basic interface like this and have it
>>> only record the user's input.  Thereafter, the python code
>>> would act on the basis of those selection without any further
>>> connection to the GUI.
>>> 
>>> An added bonus would be a similar kind of thing for generating
>>> web interfaces to do this.  This might actually be a better model
>>> because then I only have to worry about a single presentation
>>> environment.
>>> 
>>> Ideas anyone?

Hi Tim
This looks pretty straightforward to me; maybe I'm missing something. It 
doesn't look trivial, but the steps seem pretty clear. Is there some part in 
particular that's giving you trouble?

Cheers
Philip



>> 
>> Yeah, HTML being the text file and a web browser being the tool to transform 
>> it into a GUI...
>> 
>> You can hook this up with a simple web server or web framework running 
>> locally to grab the submitted form results when the form is complete and 
>> process them in a piece of python code.
>> 
>> Wouldn't that work?
>> 
>> 
>> Irmen
> 
> Yup, although I'd probably use a central apache instance.  But
> I'm still curious ... is there a way to do this with a full
> GUI tool on a thick client?
> 
> 
> -- 
> 
> Tim Daneliuk
> tun...@tundraware.com
> -- 
> http://mail.python.org/mailman/listinfo/python-list

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Table Driven GUI Definition?

2011-08-05 Thread Philip Semanchuk

On Aug 5, 2011, at 6:20 PM, Tim Daneliuk wrote:

> On 8/5/2011 3:42 PM, Philip Semanchuk wrote:
>> 
>> On Aug 5, 2011, at 4:10 PM, Tim Daneliuk wrote:
>> 
>>> On 8/5/2011 2:05 PM, Irmen de Jong said this:
>>>> On 05-08-11 19:53, Tim Daneliuk wrote:
>>>>> I have a task where I want to create pretty simple one page visual
>>>>> interfaces (Graphical or Text, but it needs to run across Windows,
>>>>> Cygwin, Linux,*BSD, OSX ...).  These interfaces are nothing more
>>>>> than option checklists and text fields.  Conceptually something like:
>>>>> 
>>>>> Please Select Your Installation Options:
>>>>> 
>>>>>Windows Compatibility Services  _
>>>>>Linux Compatibility Services_
>>>>>TRS-DOS Compatibility Services  _
>>>>> 
>>>>>What Is Your email Address: ___
>>>>> 
>>>>> What I'm looking for is a way to describe such forms in a text
>>>>> file that can then be fed into a tool to generate the necessary
>>>>> pyGUI, Tkinter, (or whatever) code.   The idea is that it should
>>>>> be simple to generate a basic interface like this and have it
>>>>> only record the user's input.  Thereafter, the python code
>>>>> would act on the basis of those selection without any further
>>>>> connection to the GUI.
>>>>> 
>>>>> An added bonus would be a similar kind of thing for generating
>>>>> web interfaces to do this.  This might actually be a better model
>>>>> because then I only have to worry about a single presentation
>>>>> environment.
>>>>> 
>>>>> Ideas anyone?
>> 
>> Hi Tim
>> This looks pretty straightforward to me; maybe I'm missing something. It 
>> doesn't look trivial, but the steps seem pretty clear. Is there some part in 
>> particular that's giving you trouble?
>> 
>> Cheers
>> Philip
>> 
> 
> I want to take a text definition file that looks something this:
> 
>  Title "Please Select Your Installation Options:"
> 
> 
>  Checkbox  "Windows Compatibility Services"
>  Checkbox  "Linux Compatibility Services"
>  Checkbox  "TRS-DOS Compatibility Services"
> 
>  Inputbox   "What Is Your email Address:"
> 
> 
> And have that aut-generate the GUI interface described above for the
> selected GUI toolkit and/or an equivalent HTML page.
> 
> I know I can write a program to do this, but it seems that someone else
> may have already solved this problem.

Oh, I see. I didn't realize you were looking for a more canned solution. I 
agree that it's a problem that's been solved many times.

I've used Mako before as an HTML templating engine, but ISTR that it points out 
that it's agnostic to what it's templating. In other words, it only cares about 
what's between the Mako escape tags, it doesn't care if the surrounding text is 
HTML or XML or Python or whatever. 

So you could have a Mako template that consists mostly of Python code that 
builds a wxPython window (if wxPython is your cup of tea) and then some Mako 
commands in the middle that reads your text definition file and adds 
checkboxes, textboxes, etc. as appropriate. It's not a canned solution, but it 
does allow you to separate the boilerplate stuff from the variants.
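To make that concrete, here's a rough stdlib-only sketch of the parsing half -- turning a definition file like yours into (widget, label) pairs and emitting an HTML form. The same parsed list could drive a Tkinter or wx builder instead (all names invented for the example):

```python
import html
import shlex

def parse_form(text):
    # Each line: WIDGET "Label text"
    widgets = []
    for line in text.splitlines():
        parts = shlex.split(line)   # handles the quoted labels
        if parts:
            widgets.append((parts[0].lower(), parts[1]))
    return widgets

def to_html(widgets):
    out = ["<form>"]
    for kind, label in widgets:
        if kind == "title":
            out.append(f"<h1>{html.escape(label)}</h1>")
        elif kind == "checkbox":
            out.append(f"<label><input type='checkbox'> {html.escape(label)}</label>")
        elif kind == "inputbox":
            out.append(f"<label>{html.escape(label)} <input type='text'></label>")
    out.append("</form>")
    return "\n".join(out)

definition = '''Title "Please Select Your Installation Options:"
Checkbox "Windows Compatibility Services"
Inputbox "What Is Your email Address:"'''
print(to_html(parse_form(definition)))
```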

Hope this helps
Philip

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: WxPython and TK

2011-08-08 Thread Philip Semanchuk

On Aug 7, 2011, at 8:26 PM, azrael wrote:

> Today I found a quote from Guido.
> 
> wxPython is the best and most mature cross-platform GUI toolkit, given a 
> number of constraints. The only reason wxPython isn't the standard Python GUI 
> toolkit is that Tkinter was there first.
> -- Guido van Rossum
> 
> OK, now. Isn't it maybe time to throw out TK once and for all? Python is 
> missing one of the most important aspects of todays IT industry. GUI 
> development native library (I mean a serious one).

I don't see how removing TK from the standard library helps to fill the native 
GUI development library void that you see in 
Python. I guess you're promoting wxPython as the library to fill that void. 
Getting rid of TK is one argument, adding wxPython is a different argument. Are 
you advocating one, the other, or both?



> If I would have gotten a dollar for every time I talked to someone in a 
> company about why they dont use python for their products and I was served 
> the answer "Well it kind of sucks in GUI development", I would be a 
> millionaire.

And if I had a dollar for every "Let's replace TK with XYZ" post, I'd also be a 
millionaire. 

I don't object to your argument; criticism of standard library is how it 
advances. But you're going to have to come up with a better argument than a 5+ 
year old quote from Guido and an exaggerated claim about why people don't use 
Python. The "best Python GUI library" conversation is repeated on this list at 
least once every few months. If the subject really interests you, I recommend 
that you read the archives and see some of the arguments for and against 
various GUI toolkits. 

Cheers
Philip
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: multiprocessing timing issue

2011-08-10 Thread Philip Semanchuk

On Aug 9, 2011, at 1:07 PM, Tim Arnold wrote:

> Hi, I'm having problems with an empty Queue using multiprocessing.
> 
> The task:
> I have a bunch of chapters that I want to gather data on individually and 
> then update a report database with the results.
> I'm using multiprocessing to do the data-gathering simultaneously.
> 
> Each chapter report gets put on a Queue in their separate processes. Then 
> each report gets picked off the queue and the report database is updated with 
> the results.
> 
> My problem is that sometimes the Queue is empty and I guess it's
> because the get_data() method takes a lot of time.
> 
> I've used multiprocessing before, but never with a Queue like this.
> Any notes or suggestions are very welcome.


Hi Tim,
This might be a dumb question, but... why is it a problem if the queue is empty? 
It sounds like you figured out already that get_data() sometimes takes longer 
than your timeout. So either increase your timeout or learn to live with the 
fact that the queue is sometimes empty. I don't mean to be rude, I just don't 
understand the problem. 
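For concreteness: multiprocessing.Queue.get() raises queue.Empty when its timeout expires, so the usual response is to catch it (or lengthen the timeout) rather than treat it as an error:

```python
import multiprocessing
import queue  # multiprocessing.Queue raises queue.Empty, from this module

q = multiprocessing.Queue()
q.put("chapter report")

print(q.get(timeout=1))   # prints: chapter report

try:
    q.get(timeout=0.1)    # nothing left; the timeout expires and raises
except queue.Empty:
    print("queue was empty -- retry, or move on")
```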

Cheers
Philip

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Help needed with using SWIG wrapped code in Python

2011-08-15 Thread Philip Semanchuk

On Aug 15, 2011, at 4:08 AM, Vipul Raheja wrote:

> Hi,
> 
> I have wrapped a library from C++ to Python using SWIG. But I am facing
> problems while importing and using it in Python.

Hi Vipul,
Did you try asking about this on the SWIG mailing list?

bye
Philip


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Why no warnings when re-assigning builtin names?

2011-08-15 Thread Philip Semanchuk

On Aug 15, 2011, at 5:52 PM, Gerrat Rickert wrote:

> With surprising regularity, I see program postings (eg. on
> StackOverflow) from inexperienced Python users  accidentally
> re-assigning built-in names.
> 
> 
> 
> For example, they'll innocently call some variable, "list", and assign a
> list of items to it.
> 
> ...and if they're _unlucky_ enough, their program may actually work
> (encouraging them to re-use this name in other programs).

Or they'll assign a class instance to 'object', only to cause weird errors 
later when they use it as a base class.

I agree that this is a problem. The folks on my project who are new-ish to 
Python overwrite builtins fairly often. Since there's never been any 
consequence other than my my vague warnings that something bad might happen as 
a result, it's difficult for them to develop good habits in this regard. It 
doesn't help that Eclipse (their editor of choice) doesn't seem to provide a 
way of coloring builtins differently. (That's what I'm told, anyway. I don't 
use it.)

> If they try to use an actual keyword, both the interpreter and compiler
> are helpful enough to give them a syntax error, but I think the builtins
> should be "pseudo-reserved", and a user should explicitly have to do
> something *extra* to not receive a warning.

Unfortunately you're suggesting a change to the language which could break 
existing code. I could see a use for "from __future__ import 
squawk_if_i_reassign_a_builtin" or something like that, but the current default 
behavior has to remain as it is.

JMO,
Philip

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Why no warnings when re-assigning builtin names?

2011-08-15 Thread Philip Semanchuk

On Aug 15, 2011, at 9:32 PM, Steven D'Aprano wrote:

> On Tue, 16 Aug 2011 08:15 am Chris Angelico wrote:
> 
>> If you want a future directive that deals with it, I'd do it the other
>> way - from __future__ import mask_builtin_warning or something - so
>> the default remains as it currently is. But this may be a better job
>> for a linting script.
> 
> Agreed. It's a style issue, nothing else. There's nothing worse about:
> 
> def spam(list):
>pass
> 
> compared to
> 
> class thingy: pass
> 
> def spam(thingy):
>pass
> 
> Why should built-ins be treated as more sacred than your own objects?

Because built-ins are described in the official documentation as having a 
specific behavior, while my objects are not.

Yes, it can be useful to replace some of the builtins with one's own 
implementation, and yes, doing so fits in with Python's "we're all consenting 
adults" philosophy. But replacing (shadowing, masking -- call it what you will) 
builtins is not everyday practice. On the contrary, as the OP Gerrat pointed 
out, it's most often done unwittingly by newcomers to the language who have no 
idea that they've done anything out of the ordinary or potentially confusing. 

If a language feature is most often invoked accidentally without knowledge of 
or regard for its potential negative consequences, then it might be worth 
making it easier to avoid those accidents. 

bye,
Philip
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Why no warnings when re-assigning builtin names?

2011-08-16 Thread Philip Semanchuk

On Aug 16, 2011, at 1:15 AM, Steven D'Aprano wrote:

> On Tue, 16 Aug 2011 01:23 pm Philip Semanchuk wrote:
> 
>> 
>> On Aug 15, 2011, at 9:32 PM, Steven D'Aprano wrote:
>> 
>>> On Tue, 16 Aug 2011 08:15 am Chris Angelico wrote:
>>> 
>>>> If you want a future directive that deals with it, I'd do it the other
>>>> way - from __future__ import mask_builtin_warning or something - so
>>>> the default remains as it currently is. But this may be a better job
>>>> for a linting script.
>>> 
>>> Agreed. It's a style issue, nothing else. There's nothing worse about:
>>> 
>>> def spam(list):
>>>   pass
>>> 
>>> compared to
>>> 
>>> class thingy: pass
>>> 
>>> def spam(thingy):
>>>   pass
>>> 
>>> Why should built-ins be treated as more sacred than your own objects?
>> 
>> Because built-ins are described in the official documentation as having a
>> specific behavior, while my objects are not.
> 
> *My* objects certainly are, because I write documentation for my code. My
> docs are no less official than Python's docs.

I'm sure they are no less official to you. But you are you, and then 
there's...everyone else. =) 

I (and I think most people) give far more credibility to the Python docs than 
to the documentation of an individual. That's not a reflection on you, it 
reflects the limits of one person's ability versus organizationally produced 
docs which are heavily used, discussed, and have been iteratively developed 
over many years. 


> Sometimes shadowing is safe, sometimes it isn't. 

"Sometimes X is safe and sometimes it isn't" can be said of many, many things, 
from taking a walk down the street to juggling with knives. But it has little 
to do with whether or not Python should issue a warning in the specific case 
we're talking about.


> A warning that is off by default won't help the people who need it, because
> they don't know enough to turn the warning on.

I agree that it wouldn't help the people who need it most (absolute raw 
newcomers). But you're asserting that once one learned the incantation to 
enable the theoretical warning we're discussing, one would have graduated to a 
level where it's no longer useful. That's not the case. There's a lot of ground 
to cover between "newcomer who has learned about a particular warning" and 
"coder who regularly shadows builtins on purpose". 

I am an example. I know enough to turn the theoretical warning on, and I would 
if I could. I have never shadowed a builtin deliberately. I've done it 
accidentally plenty of times. There are 84 builtins in my version of Python and 
I don't have them all memorized. The fact that my editor colors them 
differently is the only thing I have to back up my leaky memory. Not all 
editors are so gracious.
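For the curious, you can count them yourself; in Python 3 the module is spelled builtins (Python 2 calls it __builtin__), and the exact count varies by version:

```python
import builtins

# Public names usable without any import; the count differs across versions.
names = [n for n in dir(builtins) if not n.startswith('_')]
print(len(names), "public builtin names")
print('list' in names, 'open' in names)
```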


>> Yes, it can be useful to replace some of the builtins with one's own
>> implementation, and yes, doing so fits in with Python's "we're all
>> consenting adults" philosophy. But replacing (shadowing, masking -- call
>> it what you will) builtins is not everyday practice. On the contrary, as
>> the OP Gerrat pointed out, it's most often done unwittingly by newcomers
>> to the language who have no idea that they've done anything out of the
>> ordinary or potentially confusing.
> 
> Protecting n00bs from their own errors is an admirable aim, but have you
> considered that warnings for something which may be harmless could do more
> harm than good?

Isn't the whole point of a warning to highlight behavior that's not strictly 
wrong but looks iffy? Sort of, "I can't be sure, but this looks like trouble to 
me. I hope you know what you're doing". If we are to eschew warnings in cases 
where they might be highlighting something harmless, then we would have no 
warnings at all. 

Again, shadowing builtins is not everyday practice. I have been trying to 
remember if I've ever seen it done deliberately, and I can't remember a case. 
Now, a comment like that is an invitation for people to come out of the woodwork 
with cases where they found it useful, and I would welcome some examples as I'm 
sure they'd be interesting. But I think it's safe to say that if you look at 
random samples of code, builtins are shadowed unintentionally hundreds of times 
for every time they're shadowed deliberately and usefully. 


>> If a language feature is most often invoked accidentally without knowledge
>> of or regard for its potential negative consequences, then it might be
>> worth making it easier to avoid those accidents.

Re: Why no warnings when re-assigning builtin names?

2011-08-16 Thread Philip Semanchuk

On Aug 16, 2011, at 11:12 AM, Chris Angelico wrote:

> On Tue, Aug 16, 2011 at 3:13 PM, Philip Semanchuk  
> wrote:
> 
>> One need look no further than the standard library to see a strong 
>> counterexample. grep through the Python source for " file =". I see dozens 
>> of examples of this builtin being used as a common variable name. I would 
>> call contributors to the standard library above-average coders, and we can 
>> see them unintentionally shadowing builtins many times.
>> 
> 
> There are several types of shadowing:
> 
> 1) Deliberate shadowing because you want to change the behavior of the
> name. Extremely rare.
> 2) Shadowing simply by using the name of an unusual builtin (like
> 'file') in a context where you never use it. Very common.
> 3) Unintentional shadowing where you create a variable, but then
> intend to use the builtin. This is the only one that's a problem.

Yes, but before you get to #3 you have to go through #2. The way I see it, #2 
is setting a trap, #3 is actually stepping in it. I don't want to do either. 
Neither do I like working with code that has set trap #2 for me.


Cheers
Philip
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Why no warnings when re-assigning builtin names?

2011-08-16 Thread Philip Semanchuk

On Aug 16, 2011, at 11:41 AM, Ethan Furman wrote:

> Philip Semanchuk wrote:
>> On Aug 16, 2011, at 1:15 AM, Steven D'Aprano wrote:
>>> Protecting n00bs from their own errors is an admirable aim, but have you
>>> considered that warnings for something which may be harmless could do more
>>> harm than good?
>> Isn't the whole point of a warning to highlight behavior that's not strictly
>> wrong but looks iffy? Sort of, "I can't be sure, but this looks like trouble
>> to me. I hope you know what you're doing". If we are to eschew warnings in
>> cases where they might be highlighting something harmless, then we would
>> have no warnings at all.
> 
> Sounds good to me.  ;)  Keep such things in the IDE's, and then those who 
> desire such behavior can have it there.  Do not clutter Python with such.

You wink, yet you sound serious. What's with the mixed message? Do you honestly 
advocate removing all warnings from Python, or not? I sincerely would like to 
know what you think.


>>> Perhaps. But I'm not so sure it is worth the cost of extra code to detect
>>> shadowing and raise a warning. After all, the average coder probably never
>>> shadows anything,
>> One need look no further than the standard library to see a strong
>> counterexample. grep through the Python source for " file =". I see dozens
>> of examples of this builtin being used as a common variable name. I would
>> call contributors to the standard library above-average coders, and we can
>> see them unintentionally shadowing builtins many times.
> 
> What makes you think it's unintentional?  file makes a good variable name, 
> and if you don't need it to actually open a file there's nothing wrong with 
> using it yourself.

"Unintentional" as in, "I'm using file as a variable name because it's handy" 
as opposed to intentional as in "Yes, I am deliberately changing the meaning of 
this builtin". 


>>> and for those that do, once they get bitten *once* they
>>> either never do it again or learn how to shadow safely.
>> I have done it plenty of times, never been bitten (thankfully) and still
>> do it by accident now and again.
> 
> Seems to me the real issue is somebody using a builtin, such as str or int, 
> and that they somehow manage to do this without realizing, "wait a sec', 
> that's one of my variables!"  

Yes


> I don't see that as a problem that Python needs to solve.

"need" is a strong word. Python will be fine regardless of whether this changes 
or not. I believe Python could be improved; that's all I'm arguing.

Cheers
Philip



-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Why no warnings when re-assigning builtin names?

2011-08-16 Thread Philip Semanchuk

On Aug 16, 2011, at 12:19 PM, Ethan Furman wrote:

> Philip Semanchuk wrote:
>> On Aug 16, 2011, at 11:41 AM, Ethan Furman wrote:
>>> Philip Semanchuk wrote:
>>>> If we are to eschew warnings in
>>>> cases where they might be highlighting something harmless, then we would
>>>> have no warnings at all.
>>>
>>> Sounds good to me.  ;)  Keep such things in the IDE's, and then those
>>> who desire such behavior can have it there.  Do not clutter Python with
>>> such.
>> You wink, yet you sound serious. 
> 
> The smiley is an attempt to not sound harsh.

Thanks. It's hard to know on the Internet.


>>> I don't see that as a problem that Python needs to solve.
>> "need" is a strong word. Python will be fine regardless of whether this
>> changes or not. I believe Python could be improved; that's all I'm arguing.
> 
> Python can be improved -- I don't see 'hand-holding' as an improvement.  IDEs 
> and lints can do this.

When you say "hand-holding", I hear a pejorative. That makes "I don't see 
'hand-holding' as an improvement" a tautology. Have I misheard you?

I think Python does lots of beneficial hand-holding. Garbage collection is a 
good example. $DEITY knows, people have been struggling with manual memory 
management in C and its ilk for a long time. Even though there are good tools 
to help, memory leaks still happen. Python increases our productivity by 
letting us forget about manual memory management altogether. I can manage 
memory myself with tools like valgrind, but Python makes the point moot. Is 
that hand-holding? If so, I'm all for it.

Cheers
Philip
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Why no warnings when re-assigning builtin names?

2011-08-16 Thread Philip Semanchuk

On Aug 16, 2011, at 7:29 PM, Terry Reedy wrote:

> On 8/16/2011 1:15 PM, Gerrat Rickert wrote:
> 
>> I think that best practices would suggest that one shouldn't use
>> variable
>> names that shadow builtins (except in specific, special circumstances),
>> so I don't really think this would be an annoyance at all.  The number
>> of
>> *unwanted* warnings they'd get would be pretty close to zero.  OTOH, in
>> response to a question I asked on StackOverflow, someone posted a large
>> list of times where this isn't followed in the std lib, so there seems
>> to be a precedent for just using the builtin names for anything
>> one feels like at the time.
> 
> If you run across that again and email me the link, I will take a look and 
> see if I think the issue should be raised on pydev. Of course, some modules 
> *intentionally* define an open function, intended to be accessed as 
> 'mod.open' and not as 'from mod import *; open'. Also, class/instance 
> attributes can also reuse builtin names. But 'open = ' would be 
> bad.


Hi Terry,
To generalize from your example, are you saying that there's a mild admonition 
against shadowing builtins with unrelated variable names in standard lib code?

Here's an example from Python 3.2.1's argparse.py, lines 466-473. "open" is 
shadowed on the second line.

# clean up separators for mutually exclusive groups
open = r'[\[(]'
close = r'[\])]'
text = _re.sub(r'(%s) ' % open, r'\1', text)
text = _re.sub(r' (%s)' % close, r'\1', text)
text = _re.sub(r'%s *%s' % (open, close), r'', text)
text = _re.sub(r'\(([^|]*)\)', r'\1', text)
text = text.strip()
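That assignment is harmless as long as the surrounding scope never needs the real open(). A made-up sketch of how the trap springs if it does:

```python
def summarize(path):
    open = r'[\[(]'  # shadows the builtin for the rest of this function
    # ... imagine the regex cleanup from argparse here ...
    try:
        return open(path).read()  # 'open' is now a string, not a function
    except TypeError as exc:
        return "trap sprung: %s" % exc

print(summarize("usage.txt"))
```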


Thanks
Philip



Re: Why no warnings when re-assigning builtin names?

2011-08-16 Thread Philip Semanchuk

On Aug 16, 2011, at 9:29 PM, Steven D'Aprano wrote:

> I have no objection to lint tools. But separation of concerns should apply:
> the Python compiler should just compile what I tell it to, the linter
> should warn me if I'm running with scissors.

This point (also made by Ethan) I can agree with. I haven't looked through all 
the warnings the Python compiler emits, but it seems like it currently doesn't 
dispense advice (unlike, say, gcc). It only warns about changes in the language 
& standard library. In that context, asking it to warn about shadowing builtins 
would be an expansion of scope. 

bye,
Philip


Re: Why no warnings when re-assigning builtin names?

2011-08-16 Thread Philip Semanchuk

On Aug 16, 2011, at 10:15 PM, Terry Reedy wrote:

> On 8/16/2011 8:18 PM, Philip Semanchuk wrote:
> 
>> Hi Terry,
>> To generalize from your example, are you saying that there's a mild
>> admonition against shadowing builtins with unrelated variable names in
>> standard lib code?
> 
> I would expect that there might be. I would have to check PEP8.


I was curious, so I checked. I didn't see anything specifically referring to 
builtins. This is as close as it gets:

"If a function argument's name clashes with a reserved keyword, it is generally 
better to append a single trailing underscore rather than use an abbreviation 
or spelling corruption.  Thus "print_" is better than "prnt".  (Perhaps better 
is to avoid such clashes by using a synonym.)"
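For what it's worth, PEP 8's trailing-underscore suggestion looks like this in practice (a trivial made-up example):

```python
# A trailing underscore instead of clobbering the builtin name.
def tally(list_):
    return sum(list_)

print(tally([1, 2, 3]))      # the builtin list() is untouched
print(list("ab"))
```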


bye
Philip



Re: List spam

2011-08-18 Thread Philip Semanchuk

On Aug 18, 2011, at 8:58 AM, Jason Staudenmayer wrote:

> I really like this list as part of my learning tools but the amount of spam 
> that I've been getting from it is CRAZY. Doesn't anything get scanned before 
> it sent to the list?

This has been discussed on the list a number of times before, so I'll refer you 
to the archives for details.

Basically, the mailing list receives postings from Google Groups and vice 
versa. Most of the spam comes from Google Groups. If you add a mail filter that 
deletes anything with the "Organization" header set to 
"http://groups.google.com", you won't see much spam anymore. In my experience, 
you'll also miss a number of legitimate postings. 
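The same check, sketched with the stdlib email module (the message below is made up; a real filter would live in procmail or your mail client):

```python
from email import message_from_string

# A fabricated message carrying the header in question.
raw = """From: someone@example.com
Organization: http://groups.google.com
Subject: hello

body text
"""
msg = message_from_string(raw)
from_google_groups = msg.get("Organization", "") == "http://groups.google.com"
print(from_google_groups)
```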

HTH
Philip


Re: List spam

2011-08-18 Thread Philip Semanchuk

On Aug 18, 2011, at 1:10 PM, Peter Pearson wrote:

> On Thu, 18 Aug 2011 12:15:59 -0400, gene heskett  wrote:
> [snip]
>> What is wrong with the mailing list only approach?
> 
> In the mailing-list approach, how do I search for prior discussions
> on a subject?  (I'm not particularly opposed to the mailing list,
> I'm just an NNTP follower worried about the uncertainties of change.)

I use a Google search like this:
site:mail.python.org/pipermail/python-list/  banana

Although that has its own issues, as not all messages seem to make it to that 
list (or they have the X-No-Archive bit set?)


Cheers
P


Re: Hot Girls are Looking for Sex

2011-08-19 Thread Philip Semanchuk

On Aug 19, 2011, at 4:17 PM, Matty Sarro wrote:

> That's great - but do they program in python?


Please don't repost URLs sent by a spammer. Only Google truly knows how its 
algorithm works, but the general consensus is that the more times Google sees a 
link repeated, the more credibility the link is given. By reposting links, you 
help the spammer.  





Re: Immediate Requirement for a Data Warehouse Developer

2011-08-25 Thread Philip Semanchuk

On Aug 25, 2011, at 9:24 AM, Sirisha wrote:

> Position Profile – Senior Data Warehouse Developer

As was mentioned on the list less than 24 hours ago, please don't post job 
listings to this mailing list. Use the Python jobs board instead:
http://www.python.org/community/jobs/




Re: Understanding .pth in site-packages

2011-08-27 Thread Philip Semanchuk

On Aug 27, 2011, at 12:56 PM, Josh English wrote:

> (This may be a shortened double post)
> 
> I have a development version of a library in c:\dev\XmlDB\xmldb
> 
> After testing the setup script I also have c:\python27\lib\site-packages\xmldb
> 
> Now I'm continuing to develop it and simultaneously building an application 
> with it.
> 
> I thought I could plug into my site-packages directory a file called 
> xmldb.pth with:
> 
> c:\dev\XmlDB\xmldb
> 
> which should redirect import statements to the development version of the 
> library.
> 
> This doesn't seem to work.


xmldb.pth should contain the directory that contains xmldb:
c:\dev\XmlDB

Examining sys.path at runtime probably would have helped you to debug the 
effect of your .pth file.

On another note, I don't know if the behavior of 'import xmldb' is defined when 
xmldb is present both as a directory in site-packages and also as a .pth file. 
You're essentially giving Python two choices from where to import xmldb, and I 
don't know which Python will choose. It may be arbitrary. I've looked for some 
sort of statement on this topic in the documentation, but haven't come across 
it yet. 
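A runnable sketch of the fix, with temp directories standing in for c:\dev\XmlDB and site-packages (site.addsitedir() processes .pth files the same way site-packages does):

```python
import os
import site
import sys
import tempfile

# Hypothetical layout standing in for c:\dev\XmlDB\xmldb.
devroot = tempfile.mkdtemp()
pkg = os.path.join(devroot, "XmlDB", "xmldb")
os.makedirs(pkg)
open(os.path.join(pkg, "__init__.py"), "w").close()

# The .pth file names the directory *containing* the package...
sitedir = tempfile.mkdtemp()
with open(os.path.join(sitedir, "xmldb.pth"), "w") as f:
    f.write(os.path.join(devroot, "XmlDB") + "\n")

# ...and addsitedir() adds that directory to sys.path.
site.addsitedir(sitedir)
print(os.path.join(devroot, "XmlDB") in sys.path)
```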


> Is there a better way to redirect import statements without messing with the 
> system path or the PYTHONPATH variable?

Personally I have never used PYTHONPATH.


Hope this helps
Philip




Re: Understanding .pth in site-packages

2011-08-27 Thread Philip Semanchuk

On Aug 27, 2011, at 1:57 PM, Josh English wrote:

> Philip,
> 
> Yes, the proper path should be c:\dev\XmlDB, which has the setup.py, xmldb 
> subfolder, the docs subfolder, and example subfolder, and the other text 
> files prescribed by the package development folder.
> 
> I could only get it to work, though, by renaming the xmldb folder in the 
> site-packages directory, and deleting the egg file created in the 
> site-packages directory. 
> 
> Why the egg file, which doesn't list any paths, would interfere I do not know.
> 
> But with those changes, the xmldb.pth file is being read.
> 
> So I think the preferred search order is:
> 
> 1. a folder in the site-packages directory
> 2. an Egg file (still unsure why)
> 3. A .pth file


That might be implementation-dependent, or it might even come down to something 
as simple as the order in which the operating system returns files/directories 
when asked for a listing. In other words, unless you can find something in the 
documentation (or Python's import implementation) that confirms your observed 
search order, I would not count on it working the same way with all systems, 
all Pythons, or even all directory names.




Good luck
Philip


Re: Understanding .pth in site-packages

2011-08-27 Thread Philip Semanchuk

On Aug 27, 2011, at 4:14 PM, Terry Reedy wrote:

> On 8/27/2011 2:07 PM, Philip Semanchuk wrote:
>> 
>> On Aug 27, 2011, at 1:57 PM, Josh English wrote:
>> 
>>> Philip,
>>> 
>>> Yes, the proper path should be c:\dev\XmlDB, which has the
>>> setup.py, xmldb subfolder, the docs subfolder, and example
>>> subfolder, and the other text files proscribed by the package
>>> development folder.
>>> 
>>> I could only get it to work, though, by renaming the xmldb folder
>>> in the site-packages directory, and deleting the egg file created
>>> in the site-packages directory.
>>> 
>>> Why the egg file, which doesn't list any paths, would interfere I
>>> do not know.
>>> 
>>> But with those changes, the xmldb.pth file is being read.
>>> 
>>> So I think the preferred search order is:
>>> 
>>> 1. a folder in the site-packages directory 2. an Egg file (still
>>> unsure why) 3. A .pth file
>> 
>> 
>> That might be implementation-dependent or it might even come down to
>> something as simple as the order in which the operating system
>> returns files/directories when asked for a listing.
> 
> Doc says first match, and I presume that includes first match within a 
> directory.

First match using which ordering? Do the docs clarify that?


Thanks
Philip






Re: Understanding .pth in site-packages

2011-08-27 Thread Philip Semanchuk

On Aug 27, 2011, at 6:49 PM, Josh English wrote:

> When I run: os.listdir('c:\Python27\lib\site-packages') I get the contents in 
> order, so the folders come before .pth files (as nothing comes before 
> something.)

That's one definition of "in order". =)


> I would guess Python is using os.listdir. Why wouldn't it?

If you mean that Python uses os.listdir() during import resolution, then yes I 
agree that's probable. And os.listdir() doesn't guarantee any consistent order. 
In fact, the documentation explicitly states that the list is returned in 
arbitrary order. Like a lot of things in Python, os.listdir() probably relies 
on the underlying C library which varies from system to system. (Case in point 
-- on my Mac, os.listdir() returns things in the same order as the 'ls' 
command, which is case-sensitive alphabetical, files & directories mixed -- 
different from Windows.)

So if import relies on os.listdir(), then you're relying on arbitrary 
resolution when you have a .pth file that shadows a site-packages directory. 
Those rules will probably work consistently on your particular system, but 
you're developing a habit around what is essentially an implementation quirk.  

Cheers
Philip


Re: Button Label change on EVT_BUTTON in wxpython!!!

2011-08-28 Thread Philip Semanchuk

On Aug 28, 2011, at 9:30 PM, Ven wrote:

> Some system info before proceeding further:
> 
> Platform: Mac OS X 10.7.1
> Python Version: ActiveState Python 2.7.1
> wxPython Version: [url=http://downloads.sourceforge.net/wxpython/
> wxPython2.9-osx-2.9.2.1-cocoa-py2.7.dmg]wxPython2.9-osx-cocoa-py2.7[/
> url]
> 
> I want the button label to be changed while performing a task
> 
> So, here is what I did/want:
> 
> self.run_button=wx.Button(self.panel,ID_RUN_BUTTON,label='Install')
> self.Bind(wx.EVT_BUTTON, self.OnRun,id=ID_RUN_BUTTON)
> 
> def OnRun(self,evt):
>   self.run_button.SetLabel('Installing..')
>   #call a function that does the installation task
>   installation_task()
>   #After task completion, set the button label back to "Install"
>   self.run_button.SetLabel('Install')
> 
> When I try doing this, it doesn't set the label to "Installing" while
> the task is being performed. Any suggestions how do I achieve this?


Suggestion #1: After you set the label to "Installing...", try adding 
self.run_button.Refresh() and/or self.run_button.Update().

Suggestion #2: Ask wxPython questions on the wxPython mailing list.

Good luck
Philip



Re: Help parsing a text file

2011-08-29 Thread Philip Semanchuk

On Aug 29, 2011, at 2:21 PM, William Gill wrote:

> I haven't done much with Python for a couple years, bouncing around between 
> other languages and scripts as needs suggest, so I have some minor difficulty 
> keeping Python functionality in my head, but I can 
> overcome that as the cobwebs clear.  Though I do seem to keep tripping over 
> the same Py2 -> Py3 syntax changes (old habits die hard).
> 
> I have a text file with XML-like records that I need to parse.  By XML-like I 
> mean records have proper opening and closing tags, but fields don't have 
> closing tags (they rely on line ends).  Not all fields appear in all records, 
> but they do adhere to a defined sequence.
> 
> My initial passes into Python have been very unfocused (a scatter gun of too 
> many possible directions, yielding very messy results), so I'm asking for 
> some suggestions, or algorithms (possibly even examples) that may help me 
> focus.
> 
> I'm not asking anyone to write my code, just to nudge me toward a more 
> disciplined approach to a common task, and I promise to put in the effort to 
> understand the underlying fundamentals.

If the syntax really is close to XML, would it be all that difficult to convert 
it to proper XML? Then you have nice libraries like ElementTree to use for 
parsing.
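A made-up sketch of that conversion, guessing at the field format described (record tags are proper, fields are "key: value" lines):

```python
import re
import xml.etree.ElementTree as ET

# Fabricated sample input in the style described above.
raw = """<record>
name: widget
qty: 3
</record>"""

def to_xml(text):
    """Wrap each field line in tags so ElementTree can parse the result."""
    out = []
    for line in text.splitlines():
        m = re.match(r"(\w+):\s*(.*)", line)
        if m:
            line = "<%s>%s</%s>" % (m.group(1), m.group(2), m.group(1))
        out.append(line)
    return "\n".join(out)

root = ET.fromstring(to_xml(raw))
print(root.find("name").text, root.find("qty").text)
```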


Cheers
Philip


Python Tools for Visual Studio - anyone using it?

2011-08-30 Thread Philip Semanchuk
Hi all,
I was reminded today (via Slashdot) of Python Tools for Visual Studio which was 
discussed on this list back in March 
(http://mail.python.org/pipermail/python-list/2011-March/1267662.html) and has 
reached version 1.0. Is anyone here using it? Care to share pros & cons?

Here's the URL for those who haven't heard of it before:
http://pytools.codeplex.com/

Thanks
Philip


Re: Checking against NULL will be eliminated?

2011-03-02 Thread Philip Semanchuk

On Mar 2, 2011, at 9:21 AM, Stefan Behnel wrote:

> Claudiu Popa, 02.03.2011 14:51:
>> Hello Python-list,
>> 
>> 
>> I  don't  know how to call it, but the following Python 3.2 code seems to 
>> raise a
>> FutureWarning.
>> 
>> def func(root=None):
>> nonlocal arg
>> if root:
>>arg += 1
>> The  warning is "FutureWarning: The behavior of this method will change
>> in future versions.  Use specific 'len(elem)' or 'elem is not None' test 
>> instead."
>> Why is the reason for this idiom to be changed?
> 
> Let me guess - this is using ElementTree, right?
> 
> It's not the idiom itself that changes, it's the Element class in ElementTree 
> that will likely change its behaviour in later versions.
> 
> Fix: do as it says.

And it's documented, although you might have a hard time finding it. See the 
"Caution" at the end of this section of documentation:
http://docs.python.org/py3k/library/xml.etree.elementtree.html#element-objects

I wish this behavior had been changed for Python 3.x. That warning has been in 
the ElementTree doc since before it became part of the standard lib, so it's 
not a new idea. Python 3.x seems like it would have been an ideal time to make 
the change. Oh well.
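The trap and the recommended tests, in brief:

```python
import xml.etree.ElementTree as ET

root = ET.fromstring("<root><empty/></root>")
elem = root.find("empty")

# The tests the warning recommends: the element was found, but it has
# no children -- which is exactly why a plain "if elem:" is misleading.
print(elem is not None)   # True: find() succeeded
print(len(elem))          # 0: no child elements
```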

bye
P


Re: questions about multiprocessing

2011-03-04 Thread Philip Semanchuk

On Mar 4, 2011, at 11:08 PM, Vincent Ren wrote:

> Hello, everyone, recently I am trying to learn python's
> multiprocessing, but
> I got confused as a beginner.
> 
> If I run the code below:
> 
> from multiprocessing import Pool
> import urllib2
> otasks = [
> 'http://www.php.net'
> 'http://www.python.org'
> 'http://www.perl.org'
> 'http://www.gnu.org'
> ]
> 
> def f(url):
> return urllib2.urlopen(url).read()
> 
> pool = Pool(processes = 2)
> print pool.map(f, tasks)

Hi Vincent,
I don't think that's the code you're running, because that code won't run. 
Here's what I get when I run the code you gave us:

Traceback (most recent call last):
  File "x.py", line 14, in <module>
print pool.map(f, tasks)
NameError: name 'tasks' is not defined


When I change the name of "otasks" to "tasks", I get the nonnumeric port error 
that you reported. 

Me, I would debug it by adding a print statement to f():
def f(url):
print url
return urllib2.urlopen(url).read()


Your problem isn't related to multiprocessing.
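For what it's worth, that print statement would likely reveal Python's implicit concatenation of adjacent string literals — note the missing commas in the posted list — which is a plausible source of the "nonnumeric port" error:

```python
# Missing commas: adjacent string literals are concatenated,
# so this list holds ONE malformed URL, not four.
tasks = [
    'http://www.php.net'
    'http://www.python.org'
    'http://www.perl.org'
    'http://www.gnu.org'
]
print(len(tasks))
print(tasks[0][:40])
```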

Good luck 
Philip




> 
> 
> I'll receive this message:
> 
> Traceback (most recent call last):
>   File "", line 14, in <module>
>   File "/usr/lib/python2.6/multiprocessing/pool.py", line 148, in map
> return self.map_async(func, iterable, chunksize).get()
>   File "/usr/lib/python2.6/multiprocessing/pool.py", line 422, in get
> raise self._value
> httplib.InvalidURL: nonnumeric port: ''
> 
> 
> 
> I run Python 2.6 on Ubuntu 10.10
> 
> 
> Regards
> Vincent
> 
> 



Re: multiprocessing module in async db query

2011-03-08 Thread Philip Semanchuk

On Mar 8, 2011, at 3:25 PM, Sheng wrote:

> This looks like a tornado problem, but trust me, it is almost all
> about the mechanism of multiprocessing module.

[snip]


> So the workflow is like this,
> 
> get() --> fork a subprocess to process the query request in
> async_func() -> when async_func() returns, callback_func uses the
> return result of async_func as the input argument, and send the query
> result to the client.
> 
> So the problem is that the query result as the result of sql_command
> might be too big to store them all in the memory, which in our case is
> stored in the variable "data". Can I send return from the async method
> early, say immediately after the query returns with the first result
> set, then stream the results to the browser. In other words, can
> async_func somehow notify callback_func to prepare receiving the data
> before async_func actually returns?

Hi Sheng,
Have you looked at multiprocessing.Queue objects? 


HTH
Philip







Re: multiprocessing module in async db query

2011-03-09 Thread Philip Semanchuk

On Mar 9, 2011, at 10:22 AM, Sheng wrote:

> Hi Philip,
> 
> multiprocessing.Queue is used to transfer data between processes, how
> it could be helpful for solving my problem? Thanks!

I misunderstood -- I thought transferring data between processes *was* your 
problem. If both of your functions are in the same process, I don't understand 
how multiprocessing figures into it at all.

If you want a function to start returning results before that function 
completes, and you want those results to be processed by other code *in the 
same process*, then you'll have to use threads. A Queue object for threads 
exists in the standard library too. You might find that useful.
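A minimal sketch of that producer/consumer shape with threads (all names invented):

```python
import queue       # spelled Queue in Python 2
import threading

def produce(q):
    # Stand-in for a query that yields rows incrementally.
    for row in ("row1", "row2", "row3"):
        q.put(row)     # the consumer can start before we finish
    q.put(None)        # sentinel: no more rows

q = queue.Queue()
threading.Thread(target=produce, args=(q,)).start()

rows = []
while True:
    item = q.get()     # blocks until the producer sends something
    if item is None:
        break
    rows.append(item)
print(rows)
```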

HTH
Philip


> 
> On Mar 8, 6:34 pm, Philip Semanchuk  wrote:
>> On Mar 8, 2011, at 3:25 PM, Sheng wrote:
>> 
>>> This looks like a tornado problem, but trust me, it is almost all
>>> about the mechanism of multiprocessing module.
>> 
>> [snip]
>> 
>>> So the workflow is like this,
>> 
>>> get() --> fork a subprocess to process the query request in
>>> async_func() -> when async_func() returns, callback_func uses the
>>> return result of async_func as the input argument, and send the query
>>> result to the client.
>> 
>>> So the problem is that the query result as the result of sql_command
>>> might be too big to store them all in the memory, which in our case is
>>> stored in the variable "data". Can I send return from the async method
>>> early, say immediately after the query returns with the first result
>>> set, then stream the results to the browser. In other words, can
>>> async_func somehow notify callback_func to prepare receiving the data
>>> before async_func actually returns?
>> 
>> Hi Sheng,
>> Have you looked at multiprocessing.Queue objects?
>> 
>> HTH
>> Philip
> 
> -- 
> http://mail.python.org/mailman/listinfo/python-list

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Do you monitor your Python packages in inux distributions?

2011-03-12 Thread Philip Semanchuk

On Mar 12, 2011, at 2:26 PM, s...@pobox.com wrote:

> 
> I'm one of the SpamBayes developers and in a half-assed way try to keep
> track of SB dribbles on the net via a saved Google search.  About a month
> ago I got a hit on an Ubuntu bug tracker about a SpamBayes bug.  As it turns
> out, Ubuntu distributes an outdated (read: no longer maintained) version of
> SpamBayes.  The bug had been fixed over three years ago in the current
> version.  Had I known this I could probably have saved them some trouble, at
> least by suggesting that they upgrade.
> 
> I have a question for you people who develop and maintain Python-based
> packages.  How closely, if at all, do you monitor the bug trackers of Linux
> distributions (or Linux-like packaging systems like MacPorts) for activity
> related to your packages?  How do you encourage such projects to push bug
> reports and/or fixes upstream to you?  What tools are out there to discover
> which Linux distributions have SpamBayes packages?  (I know about
> rpmfind.net, but there must be other similar sites by now.)

Hi Skip,
I use google alerts to track where my packages posix_ipc and sysv_ipc get 
mentioned, and they have been turned into packages for Fedora and I think one 
other distro the name of which escapes me at the moment. At first I was really 
pleased to see them made into distro-specific packages because I'm too lazy to 
do it myself. But then I realized the same side effect that you described -- 
the versions distributed via my Web site have moved on and added bug fixes and 
major features like Python 3 support, while the distro-specific packages are 
frozen in time.


I guess via my Google alerts I would learn if a bug was filed against one of my 
outdated packages. I only get 1-2 alerts per day, so they're easy to keep track 
of. If my packages were more popular, I might get so many alerts I'd just stop 
reading them. So far I've never seen a distro-specific bug reported against one 
of my packages. All bugs have been reported directly to me. I hope that 
continues to be the case because I don't have a good solution to the problems 
you mentioned.

Cheers
Philip


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Is there any python library that parse c++ source code statically

2011-03-13 Thread Philip Semanchuk

On Mar 13, 2011, at 11:46 AM, Stefan Behnel wrote:

> Francesco Bochicchio, 13.03.2011 10:37:
>> On 13 Mar, 10:14, kuangye  wrote:
>>> Hi, all. I need to generate other programming language source code
>>> from C++ source code for a project. To achieve this, the first step is
>>> to "understand" the c++ source code at least in formally. Thus is
>>> there any library to parse the C++ source code statically. So I can
>>> developer on this library.
>>> 
>>> Since the C++ source code is rather simple and regular. I think i can
>>> generate other language representation from C++ source code.
>> 
>> 
>> The problem is that C++ is a beast of a language and is not easy to
>> find full parsers for it.
>> I've never done it, but sometime I researched possible ways to do it.
>> The best idea I could come with
>> is doing it in 2 steps:
>> 
>>  - using gcc-xml ( http://www.gccxml.org/HTML/Index.html ) to generate
>> an xml representation of the code
>>  - using one of the many xml library for python to read the xml
>> equivalent of the code and then generate the equivalent
>>code in other languages ( where you could use a template engine,
>> but I found that the python built-in string
>>formatting libraries are quite up to the task ).
> 
> I also heard that clang is supposed to the quite useful for this kind of 
> undertaking.

I was just discussing this with some folks here at PyCon. Clang has a library 
interface (libclang):
http://clang.llvm.org/doxygen/group__CINDEX.html

There's Python bindings for it; I'm sure the author would like some company =)

https://bitbucket.org/binet/py-clang/


Cheers
P

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: calling 64 bit routines from 32 bit matlab on Mac OS X

2011-03-15 Thread Philip Semanchuk

On Mar 15, 2011, at 11:58 AM, Danny Shevitz wrote:

> Howdy,
> 
> I have run into an issue that I am not sure how to deal with, and would
> appreciate any insight anyone could offer.
> 
> I am running on Mac OS X 10.5 and have a reasonably large tool chain including
> python, PyQt, Numpy... If I do a "which python", I get "Mach-O executable 
> i386".
> 
> I need to call some commercial 3rd party C extension code that is 64 bit. Am I
> just out of luck or is there something that I can do?

Depends on how desperate you are. You could install 64-bit Python alongside the 
32-bit version, call with the 64-bit C DLL from 64-bit Python using ctypes, and 
then communicate between the 32- and 64-bit Pythons via pickled objects sent 
over an interprocess pipe. 

That solution has a Rube Goldberg-esque charm but not much else to recommend 
it. I hope you can find something better.
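For what it's worth, the pipe half of that Rube Goldberg machine is only a few lines. This sketch (Python 3, invented payload) runs a second Python as a subprocess and round-trips pickled objects over its stdin/stdout; in your case the child would be the 64-bit interpreter making the ctypes call:

```python
import pickle
import subprocess
import sys

# The child plays the role of the 64-bit Python: it unpickles a request
# from stdin, "processes" it (here just doubling each value; really this
# would be the ctypes call into the 64-bit DLL), and pickles the reply.
CHILD = r"""
import pickle, sys
payload = pickle.load(sys.stdin.buffer)
pickle.dump([x * 2 for x in payload], sys.stdout.buffer)
"""

proc = subprocess.Popen([sys.executable, "-c", CHILD],
                        stdin=subprocess.PIPE, stdout=subprocess.PIPE)
out, _ = proc.communicate(pickle.dumps([1, 2, 3]))
reply = pickle.loads(out)  # [2, 4, 6]
```

Here both ends are `sys.executable` for the sake of a runnable sketch; in the real setup you'd hard-code the path to the 64-bit interpreter.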

Cheers
Philip
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: interrupted system call w/ Queue.get

2011-03-22 Thread Philip Winston
On Feb 18, 10:23 am, Jean-Paul Calderone
 wrote:
> The exception is caused by a syscall returning EINTR.  A syscall will
> return EINTR when a signal arrives and interrupts whatever that
> syscall
> was trying to do.  Typically a signal won't interrupt the syscall
> unless you've installed a signal handler for that signal.  However,
> you can avoid the interruption by using `signal.siginterrupt` to
> disable interruption on that signal after you've installed the
> handler.
>
> As for the other questions - I don't know, it depends how and why it
> happens, and whether it prevents your application from working
> properly.

We did not try "signal.siginterrupt" because we were not installing
any signals, perhaps some library code is doing it without us knowing
about it.  Plus I still don't know what signal was causing the
problem.

Instead based on Dan Stromberg's reply (http://code.activestate.com/
lists/python-list/595310/) I wrote a drop-in replacement for Queue
called RetryQueue which fixes the problem for us:

from multiprocessing.queues import Queue
import errno

def retry_on_eintr(function, *args, **kw):
    while True:
        try:
            return function(*args, **kw)
        except IOError, e:
            if e.errno == errno.EINTR:
                continue
            else:
                raise

class RetryQueue(Queue):
    """Queue which will retry if interrupted with EINTR."""
    def get(self, block=True, timeout=None):
        return retry_on_eintr(Queue.get, self, block, timeout)

As to whether this is a bug or just our own malignant signal-related
settings I'm not sure. Certainly it's not desirable to have your
blocking waits interrupted. I did see several EINTR issues in Python
but none obviously about Queue exactly:
http://bugs.python.org/issue1068268
http://bugs.python.org/issue1628205
http://bugs.python.org/issue10956

-Philip
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Multiprocessing, shared memory vs. pickled copies

2011-04-04 Thread Philip Semanchuk

On Apr 4, 2011, at 4:20 PM, John Ladasky wrote:

> I have been playing with multiprocessing for a while now, and I have
> some familiarity with Pool.  Apparently, arguments passed to a Pool
> subprocess must be able to be pickled.  

Hi John,
multiprocessing's use of pickle is not limited to Pool. For instance, objects 
put into a multiprocessing.Queue are also pickled, as are the args to a 
multiprocessing.Process. So if you're going to use multiprocessing, you're 
going to use pickle, and you need pickleable objects. 


> Pickling is still a pretty
> vague progress to me, but I can see that you have to write custom
> __reduce__ and __setstate__ methods for your objects.

Well, that's only if one's objects don't support pickle by default. A lot of 
classes do without any need for custom __reduce__ and __setstate__ methods. 
Since you're apparently not too familiar with pickle, I don't want you to get 
the false impression that it's a lot of trouble. I've used pickle a number of 
times and never had to write custom methods for it.



> Now, I don't know that I actually HAVE to pass my neural network and
> input data as copies -- they're both READ-ONLY objects for the
> duration of an evaluate function (which can go on for quite a while).
> So, I have also started to investigate shared-memory approaches.  I
> don't know how a shared-memory object is referenced by a subprocess
> yet, but presumably you pass a reference to the object, rather than
> the whole object.   Also, it appears that subprocesses also acquire a
> temporary lock over a shared memory object, and thus one process may
> well spend time waiting for another (individual CPU caches may
> sidestep this problem?) Anyway, an implementation of a shared-memory
> ndarray is here:

There's no standard shared memory implementation for Python. The mmap module is 
as close as you get. I wrote & support the posix_ipc and sysv_ipc modules which 
give you IPC primitives (shared memory and semaphores) in Python. They work 
well (IMHO) but they're *nix-only and much lower level than multiprocessing. If 
multiprocessing is like a kitchen well stocked with appliances, posix_ipc (and 
sysv_ipc) is like a box of sharp knives.

Note that mmap and my IPC modules don't expose Python objects. They expose raw 
bytes in memory. You're still going to have to jump through some hoops (...like 
pickle) to turn your Python objects into a bytestream and vice versa.
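The closest thing to a stdlib primitive for this in later Python versions is multiprocessing.shared_memory, added in Python 3.8, and like the modules above it exposes raw bytes rather than Python objects. A minimal sketch:

```python
from multiprocessing import shared_memory  # Python 3.8+

# A named block of raw shared bytes -- no Python objects involved.
shm = shared_memory.SharedMemory(create=True, size=16)
try:
    shm.buf[:5] = b'hello'                              # "writer" side
    reader = shared_memory.SharedMemory(name=shm.name)  # attach by name
    data = bytes(reader.buf[:5])
    reader.close()
finally:
    shm.close()
    shm.unlink()  # the creator is responsible for cleanup
```

In a real program the second `SharedMemory(name=...)` attach would happen in another process, which only needs to know the block's name.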


What might be easier than fooling around with boxes of sharp knives is to 
convert your ndarray objects to Python lists. Lists are pickle-friendly and 
easy to turn back into ndarray objects once they've crossed the pickle 
boundary. 
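For instance, the round trip through a plain list looks like this. (The sketch uses the stdlib `array` module as a stand-in so it stays self-contained; with numpy you'd call `arr.tolist()` and `numpy.array(lst)` in exactly the same places.)

```python
import pickle
from array import array  # stand-in for numpy.ndarray in this sketch

a = array('d', [1.0, 2.0, 3.0])

as_list = a.tolist()             # ndarray.tolist() works the same way
payload = pickle.dumps(as_list)  # what multiprocessing would ship across

# ...on the far side of the pickle boundary:
restored = array('d', pickle.loads(payload))  # numpy.array(...) for ndarrays
```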


> When should one pickle and copy?  When to implement an object in
> shared memory?  Why is pickling apparently such a non-trivial process
> anyway?  And, given that multi-core CPU's are apparently here to stay,
> should it be so difficult to make use of them?

My answers to these questions:

1) Depends
2) In Python, almost never unless you're using a nice wrapper like shmarray.py
3) I don't think it's non-trivial =)
4) No, definitely not. Python will only get better at working with multiple 
cores/CPUs, but there's plenty of room for improvement on the status quo.

Hope this helps
Philip





-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PyThreadState_Swap crash

2011-04-04 Thread Philip Semanchuk

On Apr 4, 2011, at 9:08 AM, Wiktor Adamski wrote:

> I have 2 threads in C code using python 2.5.2. First thread creates
> new interpreter (i need several interpreters but those 2 threads use
> only one) like that:
> 
> PyEval_AcquireLock();
> threadState = Py_NewInterpreter();
> PyThreadState_Swap(threadState);
> 
> // calling python API
> 
> PyThreadState_Swap(NULL);
> PyEval_ReleaseLock();
> 
> Second thread uses interpreter created in first thread:
> 
> PyEval_AcquireLock();
> PyThreadState_Swap(threadState);
> 
> and sometimes PyThreadState_Swap crashes in debug build
> (PyGILState_GetThisThreadState() returns garbage). In release build
> that code doesn't run and so far no other problem was found.
> I call PyEval_InitThreads() at the begining of program and every
> PyEval_AcquireLock() has PyEval_ReleaseLock().
> 
> Am I allowed to use the same threadState in different threads?
> If I am, is there another problem in my code?
> Or maybe it's a bug in python - acording to documentation "Python
> still supports the creation of additional interpreters (using
> Py_NewInterpreter()), but mixing multiple interpreters and the
> PyGILState_*() API is unsupported." - I don't use PyGILState_ but it's
> used internally in PyThreadState_Swap(). I also don't use
> PyEval_RestoreThread() - comment sugests that crashing code is present
> because possibility of calling from PyEval_RestoreThread().

Hi Wiktor,
I'm sorry I don't have a solution or even a suggestion for you. I just wanted 
to point out that PyEval_AcquireLock() and PyEval_ReleaseLock() were recently 
deprecated:
http://bugs.python.org/issue10913

Obviously they'll be around for quite a while longer but given the 
ominous-but-vague warning in issue10913's description, you might want to stay 
away from them. It's frustrating for me because I've got code I can't get to 
work without them.

Good luck
Philip



-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Multiprocessing, shared memory vs. pickled copies

2011-04-04 Thread Philip Semanchuk

On Apr 4, 2011, at 9:03 PM, Dan Stromberg wrote:

> On Mon, Apr 4, 2011 at 4:34 PM, Philip Semanchuk wrote:
> 
>> So if you're going to use multiprocessing, you're going to use pickle, and
>> you need pickleable objects.
>> 
> 
> http://docs.python.org/library/multiprocessing.html#sharing-state-between-processes


Thank you, Dan. My reading comprehension skills need work.

Cheers
Philip

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Multiprocessing, shared memory vs. pickled copies

2011-04-05 Thread Philip Semanchuk

On Apr 5, 2011, at 12:58 PM, John Ladasky wrote:

> Hi Philip,
> 
> Thanks for the reply.
> 
> On Apr 4, 4:34 pm, Philip Semanchuk  wrote:
>> So if you're going to use multiprocessing, you're going to use pickle, and 
>> you
>> need pickleable objects.
> 
> OK, that's good to know.

But as Dan Stromberg pointed out, there are some pickle-free ways to 
communicate between processes using multiprocessing.

> This leads straight into my second question.  I THINK, without knowing
> for sure, that most user classes would pickle correctly by simply
> iterating through __dict__.  So, why isn't this the default behavior
> for Python?  Was the assumption that programmers would only want to
> pickle built-in classes?  

One can pickle user-defined classes:

>>> class Foo(object):
... pass
... 
>>> import pickle
>>> foo_instance = Foo()
>>> pickle.dumps(foo_instance)
'ccopy_reg\n_reconstructor\np0\n(c__main__\nFoo\np1\nc__builtin__\nobject\np2\nNtp3\nRp4\n.'


And as Robert Kern pointed out, numpy arrays are also pickle-able.

>>> import numpy
>>> pickle.dumps(numpy.zeros(3))
"cnumpy.core.multiarray\n_reconstruct\np0\n(cnumpy\nndarray\np1\n(I0\ntp2\nS'b'\np3\ntp4\nRp5\n(I1\n(I3\ntp6\ncnumpy\ndtype\np7\n(S'f8'\np8\nI0\nI1\ntp9\nRp10\n(I3\nS'<'\np11\nNNNI-1\nI-1\nI0\ntp12\nbI00\nS'\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00'\np13\ntp14\nb."

As a side note, you should always use "new style" classes, particularly since 
you're exploring the details of Python class construction. "New" is a bit of a 
misnomer now, as "new" style classes were introduced in Python 2.2. They have 
been the status quo in Python 2.x for a while now and are the only choice in 
Python 3.x.

Subclassing object gives you a new style class:
   class Foo(object):

Not subclassing object (as you did in your example) gives you an old style 
class:
   class Foo:



Cheers
Philip

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Multiprocessing, shared memory vs. pickled copies

2011-04-07 Thread Philip Semanchuk

On Apr 7, 2011, at 3:41 AM, John Ladasky wrote:

> Following up to my own post...
> 
> On Apr 6, 11:40 pm, John Ladasky  wrote:
> 
>> What's up with that?
> 
> Apparently, "what's up" is that I will need to implement a third
> method in my ndarray subclass -- namely, __reduce__.
> 
> http://www.mail-archive.com/numpy-discussion@scipy.org/msg02446.html
> 
> I'm burned out for tonight, I'll attempt to grasp what __reduce__ does
> tomorrow.
> 
> Again, I'm going to point out that, given the extent that
> multiprocessing depends upon pickling, pickling should be made
> easier.  This is Python, for goodness' sake!  I'm still surprised at
> the hoops I've got to jump through.

Hi John,
My own experience has been that when I reach a surprising level of hoop 
jumping, it usually means there's an easier path somewhere else that I'm 
neglecting. 

But if pickling subclasses of numpy.ndarray objects is what you really feel you 
need to do, then yes, I think asking on the numpy list is the best idea. 


Good luck
Philip
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: multiprocessing

2011-04-07 Thread Philip Semanchuk

On Apr 7, 2011, at 8:57 PM, Kerensa McElroy wrote:

> 
> Hi,
> 
> thanks for your response.
> 
> I checked out multiprocessing.value, however from what I can make out, it 
> works with object of only a very limited type. Is there a way to do this for 
> more complex objects? (In reality, my object is a large multi-dimensional 
> numpy array).

Elsa, 
Are you following the current thread in this list which is talking about 
sharing numpy arrays via multiprocessing?

http://mail.python.org/pipermail/python-list/2011-April/1269173.html





> Date: Wed, 6 Apr 2011 22:20:06 -0700
> Subject: Re: multiprocessing
> From: drsali...@gmail.com
> To: kerensael...@hotmail.com
> CC: python-list@python.org
> 
> 
> On Wed, Apr 6, 2011 at 9:06 PM, elsa  wrote:
> 
> Hi guys,
> 
> I want to try out some pooling of processors, but I'm not sure if it
> is possible to do what I want to do. Basically, I want to have a
> global object, that is updated during the execution of a function, and
> I want to be able to run this function several times on parallel
> processors. The order in which the function runs doesn't matter, and
> the value of the object doesn't matter to the function, but I do want
> the processors to take turns 'nicely' when updating the object, so
> there are no collisions. Here is an extremely simplified and trivial
> example of what I have in mind:
> 
> from multiprocessing import Pool
> import random
> 
> p=Pool(4)
> myDict={}
> 
> def update(value):
>     global myDict
>     index=random.random()
>     myDict[index]+=value
> 
> total=1000
> 
> p.map(update,range(total))
> 
> After, I would also like to be able to use several processors to
> access the global object (but not modify it). Again, order doesn't
> matter:
> 
> p1=Pool(4)
> 
> def getValues(index):
>     global myDict
>     print myDict[index]
> 
> p1.map(getValues,keys.myDict)
> 
> Is there a way to do this?
> 
> This should give you a synchronized wrapper around an object in shared memory:
> 
> http://docs.python.org/library/multiprocessing.html#multiprocessing.Value
> 
> 
> -- 
> http://mail.python.org/mailman/listinfo/python-list

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: renaming files in OS X

2011-04-20 Thread Philip Semanchuk

On Apr 20, 2011, at 10:02 AM,   wrote:

> Hello,
> 
> I'm considering using os.rename or shutil for renaming 
> files on OS X (Snow Leopard).  However, I've read that 
> shutil doesn't copy the resource fork or metadata for 
> the files on OS X.  I'm not sure about os.rename though.  
> I need to keep the resource fork and metadata.  Is it 
> better if I just use os.system('mv …') or is os.rename 
> safe to use?

Hi Jay,
I don't know if os.rename() does what you want, but why don't you try a simple 
test and find out? Surely an empirical test is at least as useful as an answer 
from someone like me who may or may not know what he's talking about. =)

The OS X command xattr  shows whether or not a file has extended attributes, 
which are what I think you're referring to when you say "metadata". xattr is 
written (badly) in Python; on my system it lives in /usr/bin/xattr-2.6

You might also find this helpful:
http://jonsview.com/mac-os-x-resource-forks


Hope this helps
Philip
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: De-tupleizing a list

2011-04-25 Thread Philip Semanchuk

On Apr 25, 2011, at 11:28 PM, Gnarlodious wrote:

> I have an SQLite query that returns a list of tuples:
> 
> [('0A',), ('1B',), ('2C',), ('3D',),...
> 
> What is the most Pythonic way to loop through the list returning a
> list like this?:
> 
> ['0A', '1B', '2C', '3D',...


This works for me -

result = [('0A',), ('1B',), ('2C',), ('3D',), ]
result = [row[0] for row in result]


Cheers
Philip
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Terrible FPU performance

2011-04-26 Thread Philip Semanchuk

On Apr 26, 2011, at 1:34 PM, Mihai Badoiu wrote:

> Already did.  They suggested the python list, because the asm generated code
> is really correct and the problem might be with the python running on top.

Does the same timing inconsistency appear when you use pure Python?
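A quick pure-Python check along those lines, with no particular outcome assumed (1e-310 is below the smallest normal double, so it exercises the hardware's subnormal path):

```python
import timeit

# Same multiplication, once with ordinary floats and once where one
# operand is subnormal (below ~2.2e-308 for a C double).
normal = timeit.timeit('x * y', setup='x = 1e-5; y = 0.5', number=200000)
tiny = timeit.timeit('x * y', setup='x = 1e-310; y = 0.5', number=200000)
ratio = tiny / normal  # any hardware denormal penalty shows up here
```

If `ratio` stays near 1 in pure Python but the Cython version is slow, that points at the generated C code rather than the interpreter.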

bye
Philip


> 
> On Tue, Apr 26, 2011 at 1:04 PM, Chris Colbert  wrote:
> 
>> 
>> 
>> On Tue, Apr 26, 2011 at 8:40 AM, Mihai Badoiu  wrote:
>> 
>>> Hi,
>>> 
>>> I have terrible performance for multiplication when one number gets very
>>> close to zero.  I'm using cython by writing the following code:
>>> 
>>> 
>> You should ask this question on the Cython users mailing list.
>> 
>> 
> -- 
> http://mail.python.org/mailman/listinfo/python-list

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: ElementTree XML parsing problem

2011-04-27 Thread Philip Semanchuk

On Apr 27, 2011, at 2:26 PM, Mike wrote:

> I'm using ElementTree to parse an XML file, but it stops at the second record 
> (id = 002), which contains a non-standard ascii character, ä. Here's the XML:
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> The complaint offered up by the parser is
> 
> Unexpected error opening simple_fail.xml: not well-formed (invalid token): 
> line 5, column 40

You've gotten a number of good observations & suggestions already. I would add 
that if you're saving your XML file from a text editor, make sure you're saving 
it as UTF-8 and not ISO-8859-1 or Win-1252. 
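To see why the declared encoding matters, here's a small Python 3 sketch of both the failure and the fix:

```python
import xml.etree.ElementTree as ET

# An a-umlaut saved as ISO-8859-1 is the single byte 0xE4.
latin1_bytes = '<root><rec name="\xe4"/></root>'.encode('iso-8859-1')

# With no declaration the parser assumes UTF-8, and a lone 0xE4 byte is
# not valid UTF-8 -- hence "not well-formed (invalid token)".
try:
    ET.fromstring(latin1_bytes)
    parse_failed = False
except ET.ParseError:
    parse_failed = True

# Declaring the real encoding (or re-saving the file as UTF-8) fixes it.
declared = b'<?xml version="1.0" encoding="ISO-8859-1"?>' + latin1_bytes
name = ET.fromstring(declared)[0].get('name')  # back to the 'ä' character
```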


bye
Philip

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: checking if a list is empty

2011-05-06 Thread Philip Semanchuk

On May 6, 2011, at 5:57 PM, scattered wrote:

> On May 6, 2:36 am, Jabba Laci  wrote:
>> Hi,
>> 
>> If I want to check if a list is empty, which is the more pythonic way?
>> 
>> li = []
>> 
>> (1) if len(li) == 0:
>> ...
>> or
>> (2) if not li:
>> ...
>> 
>> Thanks,
>> 
>> Laszlo
> 
> is there any problem with
> 
> (3) if li == []:
> 
> ?

What if it's not a list but a tuple or a numpy array? Often I just want to 
iterate through an element's items and I don't care if it's a list, set, etc. 
For instance, given this function definition --

def print_items(an_iterable):
if not an_iterable:
print "The iterable is empty"
else:
for item in an_iterable:
print item

I get the output I want with all of these calls:
print_items( list() )
print_items( tuple() )
print_items( set() )
print_items( numpy.array([]) )

Given this slightly different definition, only the  first call gives me the 
output I expect: 

def print_items(an_iterable):
if an_iterable == []:
print "The iterable is empty"
else:
for item in an_iterable:
print item


I find I use the former style ("if not an_iterable") almost exclusively.


bye
Philip




-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Newbie question about SQLite + Python and twitter

2011-05-25 Thread Philip Semanchuk

On May 25, 2011, at 2:17 PM, Jayme Proni Filho wrote:

> Helo guys,
> 
> I'm building a local application for twitter for my brother's store. I'm in
> the beginning and I have some newbie problems, so:
> 
> I create a table called tb_messages with int auto increment and varchar(140)
> fields;
> I did three SQL funcionts, insert_tweet, delete_tweet, select_tweet
> 
> select_tweet is use for getting messages for sending them to twitter;
> 
> My problem is: How can i make my select_tweet works at the same time that
> insert or delete funcions. I just got to work when I stop select function.
> 
> I would like to do my app works all the time.

Hi Jayme,
You need to provide a lot more information for us to be able to help you. 

Some suggestions -- 
http://www.istf.com.br/perguntas/#beprecise



bye
Philip
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Asyncio tasks getting cancelled

2018-11-05 Thread philip . m
On Mon, Nov 05, 2018 at 01:57:56PM -0700, Ian Kelly wrote:
> > Which is what I want in this case. Scheduling a new (long-running) task
> > as a side effect, but returning early oneself. The new task can't be
> > awaited right there, because the creating one should return already.
> 
> If you want to do this in the asyncio.run main coroutine, then that
> seems like a problematic design. Once the main coroutine returns, the
> event loop should be considered no longer running, and any still
> pending callbacks or futures won't resolve.

This is only true for the small example I provided. In the actual code
this is somewhere deep in the hierarchy.

> > > If the goal here is for the task created by main() to complete before
> > > the loop exits, then main() should await it, and not just create it
> > > without awaiting it.
> >
> > So if this happens somewhere deep in the hirarchy of your application
> > you would need some mechanism to pass the created tasks back up the
> > chain to the main function?
> 
> I haven't used asyncio.run yet myself, so take all this with a grain
> of salt, but it seems to me that anything that you want to resolve
> before the event loop terminates should be awaited either directly or
> indirectly by the main coroutine. From the documentation:
> 
> """
> This function always creates a new event loop and closes it at the
> end. It should be used as a main entry point for asyncio programs, and
> should ideally only be called once.
> """
> 
> So I think part of the idea with this is that the asyncio.run main
> coroutine is considered the main function of your async app. Once it
> returns, the program should be effectively done. For example, maybe
> the main coroutine spins up a web server and returns when the web
> server shuts down.

Again sorry for the confusion, but I don't think this is an issue with
restarting loops, as this isn't happening in my application.
For context:
https://github.com/ldo/dbussy/issues/13
https://gist.github.com/tu500/3232fe03bd1d85b1529c558f920b8e43

It really feels like asyncio is losing strong references to scheduled
tasks, as explicitly keeping them around helps. Also, the error
messages I'm getting are the ones from here:
https://github.com/python/cpython/blob/16c8a53490a22bd4fcde2efaf4694dd06ded882b/Lib/asyncio/tasks.py#L145
Which indicates that the tasks actually weren't even started at all?
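For illustration, the usual workaround for exactly that (the loop keeping only weak references to its tasks) is a sketch like this, with invented names:

```python
import asyncio

background_tasks = set()

def spawn(coro):
    # The loop holds only weak references to tasks, so a fire-and-forget
    # task can be garbage-collected before it finishes. Parking it in a
    # module-level set (and discarding it on completion) prevents that.
    task = asyncio.create_task(coro)
    background_tasks.add(task)
    task.add_done_callback(background_tasks.discard)
    return task

async def worker(results, n):
    await asyncio.sleep(0)
    results.append(n)

async def main():
    results = []
    for n in range(3):
        spawn(worker(results, n))
    await asyncio.sleep(0.1)  # give the spawned tasks time to finish
    return results

results = asyncio.run(main())
```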

> If that doesn't suit your program, for instance there's no core task
> to await, but you want to schedule a lot of things that need to
> resolve and that the main coroutine has no way to know about, then it
> may be the case that asyncio.run is not right for your use case and
> you should use loop.run_forever() instead. You'll still need some
> criteria for figuring out when to exit though, and it seems to me that
> whatever that is you could just bundle it up in a coroutine and await
> it from main.

Though not really related with my actual problem, so getting off topic,
but I can imagine an architecture where that would be "There aren't any
running tasks any more." or even "Never."
Also, I may be overlooking things, but I haven't found a way to add a
task before calling run_forever(), as asyncio will then say the loop
isn't running yet. So I'm not sure how you would jumpstart in that case.
-- 
https://mail.python.org/mailman/listinfo/python-list


Help with Threading

2005-01-23 Thread Philip Smith
Hi

I am fairly new to Python threading and my needs are simple(!)

I want to establish a number of threads each of which work on the same 
computationally intensive problem in different ways.

I am using the thread module rather than the threading module.

My problem is I can't see how (when one thread completes) to ensure that the 
other threads terminate immediately.

Appreciate some simple advice

Phil 
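For the archives, a sketch of the usual approach with the higher-level threading module (which is generally preferred over the bare thread module): share a threading.Event, have every worker poll it between iterations, and set it from whichever thread finishes first. Names here are invented, and the "work" is a random stand-in:

```python
import random
import threading
import time

stop = threading.Event()
winners = []

def search(name):
    # Each thread attacks the problem its own way, but checks the shared
    # Event between iterations so it can quit as soon as anyone succeeds.
    while not stop.is_set():
        if random.random() < 0.1:  # stand-in for "found the answer"
            winners.append(name)
            stop.set()             # tell the other threads to wrap up
            return
        time.sleep(0.001)

threads = [threading.Thread(target=search, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Threads can't be killed from outside in Python, so "immediately" really means "at the next Event check"; keep the work between checks short.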


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Help with Threading

2005-01-25 Thread Philip Smith

<[EMAIL PROTECTED]> wrote in message 
news:[EMAIL PROTECTED]
>I use threading.Thread as outlined in this recipe:
> http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/65448
>Thanks 


-- 
http://mail.python.org/mailman/listinfo/python-list


Elliptic Code

2005-01-28 Thread Philip Smith
Hi

Does anyone have/know of a python implementation of the elliptic curve 
factoring algorithm (lenstra) which is both:

simply and cleanly coded
functional

I'm aware of William Stein's code (from elementary number theory book) but I 
don't understand his coding style and the algorithm doesn't seem to work 
efficiently.

For that matter has anyone come across any useable math/number theory 
packages apart from nzmath or aladim?

Thanks

Phil 


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Elliptic Code

2005-01-28 Thread Philip Smith
thanks for the suggestion

I understand the algorithm quite well but how to code the multiplication 
stage most efficiently in python eludes me.

William Stein's code is obviously not high performance because in the region 
where ecm should do well (30-40 dec digits) my python implementation of the 
rho algorithm blows it away.  In terms of factoring implementations 
generally (in python) I think nzmath's mpqs is brilliant - and it has such a 
small footprint I can run it in 10 threads at once.
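For the archives, here's a bare-bones sketch of the double-and-add point multiplication (affine coordinates, prime modulus for simplicity; in ECM proper the modulus is composite and a non-invertible denominator is exactly where the factor of n appears):

```python
def inv_mod(x, p):
    # Fermat inverse; in ECM you'd use extended gcd instead, because
    # gcd(x, n) > 1 with a composite modulus is the factor you're after.
    return pow(x, p - 2, p)

def ec_add(P, Q, a, p):
    # Affine addition on y^2 = x^3 + a*x + b (mod p); None = infinity.
    if P is None:
        return Q
    if Q is None:
        return P
    x1, y1 = P
    x2, y2 = Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None
    if P == Q:
        lam = (3 * x1 * x1 + a) * inv_mod(2 * y1, p) % p
    else:
        lam = (y2 - y1) * inv_mod((x2 - x1) % p, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def ec_mul(k, P, a, p):
    # Right-to-left double-and-add: O(log k) group operations.
    R = None
    while k:
        if k & 1:
            R = ec_add(R, P, a, p)
        P = ec_add(P, P, a, p)
        k >>= 1
    return R

# Sanity check on y^2 = x^3 + 2x + 3 (mod 97): the ladder agrees with
# naive repeated addition.
a, p = 2, 97
P = (3, 6)  # on the curve: 6^2 = 3^3 + 2*3 + 3 (mod 97)
Q = ec_mul(3, P, a, p)
R = None
for _ in range(3):
    R = ec_add(R, P, a, p)
```

For speed you'd switch to Montgomery or projective coordinates to avoid the per-step inversion, but the structure stays the same.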

anyway - I'll have a look at MIRACL (I have the library but have never used 
it yet.

Phil

<[EMAIL PROTECTED]> wrote in message 
news:[EMAIL PROTECTED]
> "Philip Smith" <[EMAIL PROTECTED]> writes:
>> Does anyone have/know of a python implementation of the elliptic curve
>> factoring algorithm (lenstra) which is both:
>>
>> simply and cleanly coded
>> functional
>
> It's not in Python but take a look at Mike Scott's C++ implementation
> in MIRACL,
>
>   http://indigo.ie/~mscott/
>
> It's the simplest and most direct implementation I know of, just the
> bare essentials.  It could probably be translated into Python pretty
> straightforwardly.
>
>> I'm aware of William Stein's code (from elementary number theory
>> book) but I don't understand his coding style and the algorithm
>> doesn't seem to work efficiently.
>
> A high performance implementation means complicated code, e.g. Peter
> Montgomery has done a few of those.  If it's for instructional
> purposes I think the MIRACL version is far more understandable even if
> it's slower.
>
> If you mean you don't understand the algorithm, try Neal Koblitz's
> book "A Course in Number Theory and Cryptography".  It has no code but
> it explains the algorithm in a pretty accessible way. 


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Elliptic Code

2005-01-28 Thread Philip Smith
Quite so - but thanks for your help in any case

"Paul Rubin"  wrote in message 
news:[EMAIL PROTECTED]
> Nick Craig-Wood <[EMAIL PROTECTED]> writes:
>> >  I understand the algorithm quite well but how to code the 
>> > multiplication
>> >  stage most efficiently in python eludes me.
>>
>> You might want to look at
>>
>>   http://gmpy.sourceforge.net/
>>
>> It has very fast multiplication up to any size you like!
>
> I think he's talking about point multiplication on the elliptic curve
> group, not integer multiplication. 


-- 
http://mail.python.org/mailman/listinfo/python-list


Multiple constructors

2005-02-05 Thread Philip Smith
Call this a C++ programmer's hang-up if you like.

I don't seem to be able to define multiple versions of __init__ in my matrix 
class (i.e. to initialise either from a list of values or from 2 dimensions 
(rows/columns)).

Even if Python couldn't resolve which __init__ to use on the basis of argument 
types, surely it could do so on the basis of argument numbers?

At any rate - any suggestions on how I code this?

Thanks

Phil 
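(For the archive: the usual Python answer is one __init__ plus alternative
constructors as classmethods; the names below are illustrative, not taken
from the replies in this thread:)

```python
class Matrix:
    def __init__(self, rows):
        # the one true constructor: takes a list of row-lists
        self.rows = [list(r) for r in rows]

    @classmethod
    def from_dimensions(cls, nrows, ncols, fill=0):
        # alternative constructor: an nrows x ncols matrix of `fill`
        return cls([[fill] * ncols for _ in range(nrows)])

m1 = Matrix([[1, 2], [3, 4]])
m2 = Matrix.from_dimensions(2, 3)
print(m2.rows)  # [[0, 0, 0], [0, 0, 0]]
```

Dispatching on argument count inside a single __init__ (via *args) also
works, but named constructors keep each signature explicit.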


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Multiple constructors

2005-02-06 Thread Philip Smith
Thanks to all of you

Some useful ideas in there, even if some of them stretch my current 
knowledge of the language.

C++ to Python is a steep 'unlearning' curve...

Phil

"Philip Smith" <[EMAIL PROTECTED]> wrote in message 
news:[EMAIL PROTECTED]
> Call this a C++ programmers hang-up if you like.
>
> I don't seem to be able to define multiple versions of __init__ in my 
> matrix class (ie to initialise either from a list of values or from 2 
> dimensions (rows/columns)).
>
> Even if Python couldn't resolve the __init__ to use on the basis of 
> argument types surely it could do so on the basis of argument numbers???
>
> At any rate - any suggestions how I code this
>
> Thanks
>
> Phil
> 


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: c/c++ extensions and help()

2005-07-31 Thread Philip Austin
Robert Kern <[EMAIL PROTECTED]> writes:

> Lenny G. wrote:
>> Is there a way to make a c/c++ extension have a useful method
>> signature?  Right now, help(myCFunc) shows up like:
>> myCFunc(...)
>>   description of myCFunc
>> I'd like to be able to see:
>> myCFunc(myArg1, myArg2)
>>   description of myCFunc
>> Is this currently possible?
>
> There really isn't a way to let the inspect module know about
> extension function arguments. Just put it in the docstring.
>

The next release of boost.python should do this automatically:

(http://mail.python.org/pipermail/c++-sig/2005-July/009243.html)


>>> help(rational.lcm)

Help on built-in function lcm:

lcm(...)
C++ signature:
lcm(int, int) -> int

>>> help(rational.int().numerator)

Help on method numerator:

numerator(...) method of boost_rational_ext.int instance
C++ signature:
numerator(boost::rational {lvalue}) -> int


Regards, Phil
-- 
http://mail.python.org/mailman/listinfo/python-list


lambda and for that matter goto not forgetting sugar

2005-02-10 Thread Philip Smith
I've read with interest the continuing debate about 'lambda' and its place 
in Python.

Just to say that personally I think its an elegant and useful construct for 
many types of programming task (particularly number theory/artificial 
intelligence/genetic algorithms)

I can't think why anyone would be proposing to do away with it.  Sometimes 
an anonymous function is just what you need and surely it just reflects the 
python philosophy of everything being an object (in this case a code 
object).

Mind you my particular programming interest is algorithmic programming, I 
have no idea whether Lambda is of any relevance to eg client server 
programming.

For that matter I would find implementing the classical algorithms far 
easier if Python had 'goto'. (I'll wait for the guffaws to subside before 
mentioning that no lesser guru than Donald Knuth writes his algorithms that 
way - naturally so, because it reflects what the machine does at the base 
level.)  Please don't suggest using try/except as an alternative: the 
ugliness and inappropriateness of that to achieve a simple 'goto' is utterly 
out of keeping with the 'cleanliness' which is Python's most appealing 
feature.

(And yes - I do like spaghetti but only to eat, not in my code).

Following on naturally from that last point I would also like to 'deprecate' 
the use of the expression 'syntactic sugar' on these pages.  All high level 
languages (Python included) are nothing but syntactic sugar designed to 
conceal the ugliness of what actually gets sent to the CPU to make it all 
happen.

On a positive note though - I have found this newsgroup an invaluable aid to 
learning Python over the last few weeks and the response to queries has been 
as quick as it has been informative.

I've decided I like Python - in fact I think of it more as syntactic maple 
syrup than sugar.

Competition:  Has anyone found anything you can't do in the language?

regards to all

Phil 
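(A concrete case where the anonymous function earns its keep - a throwaway
sort key - might look like this; the example is mine, not from the post:)

```python
# sort integers by digit sum without naming a one-shot helper function
nums = [29, 10, 7, 18]
by_digit_sum = sorted(nums, key=lambda n: sum(int(d) for d in str(n)))
print(by_digit_sum)  # [10, 7, 18, 29]  (digit sums 1, 7, 9, 11)
```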


-- 
http://mail.python.org/mailman/listinfo/python-list


Derived class and deepcopy

2005-02-16 Thread Philip Smith
Hi

If I derive a class (e.g. Matrix) from list, I presume this implies the classic 
OOP 'is a' relation between the derived class and its superclass.

I therefore presume I can use the derived class in any context in which I can 
use the superclass.

In the given example I want to apply deepcopy() to the Matrix instance (on 
initialisation) to ensure that the list part is not affected by subsequent 
changes to the initialising list or Matrix, but this gives me a string of 
errors (some of which imply I'm trying to copy the class rather than the 
instance).

Anyone got any thoughts on this

Phil 
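(A sketch of what I take to be the intent - a list subclass that deep-copies
its initialiser - with the detail that usually causes the reported errors:
copy.deepcopy must be applied to the initialising data or an instance, never
to the class object itself:)

```python
import copy

class Matrix(list):
    def __init__(self, rows):
        # deep-copy the initialising data so later mutation of
        # `rows` cannot reach into this Matrix
        super().__init__(copy.deepcopy(list(rows)))

src = [[1, 2], [3, 4]]
m = Matrix(src)
src[0][0] = 99          # mutate the original nested list
print(m[0][0])          # still 1: the Matrix kept its own copy
```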


-- 
http://mail.python.org/mailman/listinfo/python-list


Beware complexity

2005-03-12 Thread Philip Smith
I wonder if anyone has any thoughts not on where Python should go but where 
it should stop?

One of the faults with languages like C++ was that so many new 
features/constructs were added that it became a nightmare, right from the 
design stage of a piece of software, to decide which of the almost infinite 
different ways of doing the same thing to use.

Result: the development of various coding standards (essentially definitions 
of what 'safe subset' of the language you intended to use in all your 
projects) to 'cripple' the overly complex language.

Conventions on type conversion are just one example.  Without strict coding 
conventions the richness of the language could, and often did, result in 
ambiguity.  In my experience C++ has also defeated its own object (e.g. 
portability) - I've given up in many cases trying to compile third-party 
libraries, because I don't have the time to collect every version of every 
compiler for every platform in existence, which is what C++ seems to demand 
(particularly if you are trying to cross-compile Unix->Windows).

Nothing wrong with coding conventions of course unless you:

a) Want to read and understand other people's code
b) Want your code to work with it

There probably isn't a language in existence that provably constrains a 
programmer to use one and only one 'top level' approach to code a particular 
'class' of problem but Python does seem to have a way of naturally 
'suggesting' this through its very simplicity.

It seems to me that from here on in the Python developers should be very 
careful about adding new features to a language which (subjectively) already 
seems capable of handling pretty much any 'class' of problem.  There is 
plenty of scope left of course for continuing to develop libraries and 
optimize performance.



-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python Scalability TCP Server + Background Game

2014-01-19 Thread Philip Werner
On Sat, 18 Jan 2014 13:19:24 +, Mark Lawrence wrote:

> On 18/01/2014 12:40, phi...@gmail.com wrote:
> 
> [snip the stuff I can't help with]
> 
> Here's the link you need to sort the problem with double spacing from
> google groups https://wiki.python.org/moin/GoogleGroupsPython

Thanks for the link. I've, hopefully, solved the issue by switching
to Pan instead of using google groups. :)

Regards,
Philip
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Python Scalability TCP Server + Background Game

2014-01-21 Thread Philip Werner
> Looking a lot more normal and readable now. Thanks!
> 
> Note that some people have experienced odd issues with Pan, possibly
> relating to having multiple instances running simultaneously. You may
> want to take care not to let it open up a duplicate copy of itself.
> 
> ChrisA

Thanks for the heads up.

It is buggy to say the least. Any other program on linux you may suggest?

Regards,
Philip
-- 
https://mail.python.org/mailman/listinfo/python-list


No overflow in variables?

2014-01-22 Thread Philip Red
Hi everyone. First of all sorry if my english is not good.
I have a question about something in Python I cannot explain:
in every programming language I know (e.g. C#), if you exceed the max value of 
a certain type (e.g. a long integer) you get an overflow. Here is a simple 
example in C#:

static void Main(string[] args)
{
Int64 x = Int64.MaxValue;
Console.WriteLine(x);   // output: 9223372036854775807
x = x * 2;
Console.WriteLine(x);   // output: -2 (overflow)
Console.ReadKey();
}

Now I do the same with Python:

x = 9223372036854775807
print(type(x))   # <class 'int'>
x = x * 2
print(x)         # 18446744073709551614
print(type(x))   # <class 'int'>

and I get the right output without overflow, and the type is always 'int'.
How does Python manage types and their values internally? Where are they 
stored?

Thank you for your help :)
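(The short answer from the thread: Python 3's int is arbitrary-precision.
CPython stores the value as a variable-length array of internal digits on the
heap, growing as needed, so there is no fixed 64-bit ceiling to overflow. A
quick way to watch the storage grow - the byte counts are CPython-specific:)

```python
import sys

x = 1
for _ in range(4):
    # sys.getsizeof reports the object's heap footprint in bytes;
    # it grows as the integer needs more internal digits
    print(x.bit_length(), sys.getsizeof(x))
    x *= 2 ** 64
```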
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: No overflow in variables?

2014-01-22 Thread Philip Red
Thank you for your answers!
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: No overflow in variables?

2014-01-22 Thread Philip Red
Thank you ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Running a .py file iteratively at the terminal

2015-01-27 Thread Philip Keogh
On Mon, 26 Jan 2015, varun...@gmail.com wrote:
> Thanks a lot Mark but that would be a bit trivial. How can I run the
> same file multiple times? Or if I need to run two commands:
> srva@hades:~$ python NFV_nw_eu_v3_14_1_15.py --output eu_v3_14_1_15
> --demand demands_v3_21_1_15.xml --xml nobel-eu.xml
>srva@hades:~$ python NFV_v3_7_10_14.py -l log --lp --xml eu_v3_14_1_15.xml 
> repeatedly, how can I do that? Can I write a script to perform this
> function?If so, can you please help me with it?  The first command
> generates an output file eu_v3 and the second file feeds it to the
> solver. This is what I intend to do multiple times. I hope I have
> explained it this time in a much better way. I'm sorry English is my
> second language and I have some problems in expressing myself at
> times.
> 
> Thank You
> 

Have you read about Bash shell brace expansion, or a one-liner loop? A
simple wrapper script could easily accomplish what you seem to be
attempting to do.

For more, see:
http://tldp.org/LDP/Bash-Beginners-Guide/html/sect_03_04.html
http://www.linuxjournal.com/content/bash-brace-expansion
http://wiki.bash-hackers.org/syntax/expansion/brace
http://tldp.org/LDP/abs/html/loops1.html
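Since both commands are themselves Python programs, a small Python driver
works as well as a bash loop. A hedged sketch - the command lists below are
placeholders; substitute the real script names and flags:

```python
import subprocess
import sys

def run_pipeline(commands, repeats=3):
    """Run each command in order, `repeats` times, stopping on failure."""
    for _ in range(repeats):
        for cmd in commands:
            subprocess.check_call(cmd)   # raises CalledProcessError on failure

# placeholders standing in for the generator and solver invocations
commands = [
    [sys.executable, "-c", "print('generate')"],
    [sys.executable, "-c", "print('solve')"],
]
run_pipeline(commands, repeats=2)
```

check_call makes a non-zero exit status abort the loop, so a failed
generation step never feeds a stale file to the solver.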
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Download Visual Studio Express 2008 now

2013-08-29 Thread Philip Inglesant

Hi Martyn,

Thanks for the good advice to download VS 2008 before M$ delete it from 
their download servers.

Unfortunately they have already done this, so many Python modules now 
can't be compiled correctly on Windows!


Best regards,

Philip
--
http://mail.python.org/mailman/listinfo/python-list


Python Front-end to GCC

2013-10-20 Thread Philip Herron
Hey,

I've been working on GCCPY since roughly November 2009, at least in
concept. It was announced as a GSoC 2010 project and also a GSoC 2011
project. I was mentored by Ian Taylor, who has been an extremely big
influence on my software development career.

Gccpy is an ahead-of-time implementation of Python on top of GCC. It
works as you would expect with a traditional compiler, such as GCC
compiling C code, or G++ compiling C++, etc.

What's interesting, and deserves a significant mention, is that my work
is heavily inspired by Paul Biggar's PhD thesis on optimizing dynamic
languages and by his work on phc, an ahead-of-time PHP compiler. I've
had so many ups and downs in this project, and I need to thank Andi
Hellmund for his contributions to it.
http://paulbiggar.com/research/#phd-dissertation

The project has taken many years, as an in-my-spare-time project, to
get to this point. It has taken me this long simply to understand and
stabilise the core fundamentals of the compiler and how it could all
work.

The release can be found here. I will probably rename the tag to the
milestone (lucy) later on.
https://github.com/redbrain/gccpy/releases/tag/v0.1-24
(Lucy is our dog btw - a German Shepherd, 6 years young, who loves to lick
your face off :) )

Documentation can be found at http://gcc.gnu.org/wiki/PythonFrontEnd.
(Although this is sparse, partially on purpose, since I do not want
people thinking this is by any means ready to compile real Python
applications.)

I've found some good success with this project in compiling Python,
though it is largely unknown to the world, simply because I am nervous
of the compiler world and, more specifically, the Python compiler
world.

But there is, at least to me, an unanswered question in current
compiler implementations: AOT vs JIT.

Is a JIT implementation of a language (not just Python) better than
traditional ahead-of-time compilation?

What I can say is that ahead-of-time compilation at least strips out
the crap needed for the user's code to be run. In my opinion, people
are forgetting the basics of how a computer works when it comes to
making code run faster: you simply need to reduce the number of
instructions that have to be executed in order to perform what needs
to be done. It's not about JIT and blah blah, keyword LLVM, keyword
instruction scheduling, keyword blah.

I could go into the arguments, but I feel I should let the project
speak for itself. It's very immature, so you really can't compare it
to anything like it, but it does compile little bits and bobs fairly
well, though there is much more work needed.

There is nothing at stake: it's simply an idea provoked by a great PhD
thesis, and I want to see how it would work out. I don't get funded or
paid. I love working on compilers and languages, but I don't have a
day job doing it, so it's my little pet to open-source. I believe it's
at least worth some research.

I would really like to hear the feedback, good and bad. I can't
describe how much work I've put into this, and how much persistence
I've had to have in light of recent reddit threads talking about my
project.

I have so many people to thank for getting to this point! Namely Ian
Taylor, Paul Biggar, Andi Hellmund, Cyril Roelandt, Robert Bradshaw,
PyBelfast, and the Linux Outlaws community. I really couldn't have got
to this point in my life without the help of these people!

Thanks!

--Phil
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Python Front-end to GCC

2013-10-21 Thread Philip Herron
Hey all,

Thanks. I've been working on this basically on my own - 95% of the compiler 
is my code - in my spare time. It's been fairly scary, all of this, for me. I 
personally find it a real source of interest to demystify compilers and what 
JIT compilation really is under the hood.

For example, if you're really interested to see what JIT compilation really 
is (not magic), look at:
http://compilers.iecc.com/comparch/article/10-03-063

And I really believe in this project, though I want to stay away from 
comparing it to any others out there, as they are much more mature than mine.

For example, it's only a few commits back that you were successfully able to 
compile:

import sys

So I think unfair comparisons could get me into trouble. What's taken so long 
is that I do not reuse the Python runtime like the others. Other projects do 
this mostly to maintain compatibility. But I wanted to try doing this 
entirely from scratch, partly for my own interest, and because the Python 
runtime was designed for an interpreter, not for compilers - at least not 
ahead-of-time ones.

A few interesting things come up. What about:

exec and eval. I didn't really have a good answer for this at my talk at 
PyCon IE 2013, but I am going to say no: I am not going to implement them. 
Partly because eval and exec, at least to me, mostly come from developing 
interpreters as a debugging exercise, so that a test doesn't have to invoke 
the program properly and can feed in strings to interpret - at least, that's 
what I have done in the past with a virtual machine I wrote before gccpy.

Most things you read about eval and exec say you shouldn't use them in 
production code, as they can lead to a lot of issues. I can see their use in 
quick, dirty hacks, but I don't think they are really part of the Python 
language; in my opinion they are just builtin functions that you can invoke. 
Another issue I had was:

class myClass: pass

I currently don't allow this, but I've rethought it and I am going to try to 
implement it properly. Before, I had a really complicated pass to guess the 
type of a struct, but I realise that with how things currently work this may 
not be necessary.

As a personal comment, I find this kind of funny - why not use a dict? But 
I can see the use for it now, and I want to implement it.

What I will say is that gccpy is moving along; with the milestones I expect 
to achieve over the first half of 2014, I reckon I could compile at least 
half of a decent Python test suite. It has taken me a long time to get used 
to the GCC code base and to settle on how I want to implement things: I've 
rewritten the runtime and the compiler at least 4 times, and I am going to 
rewrite part of the runtime again over the next week or so. I think it's a 
worthwhile project.

I don't think this will ever be on par with PyPy or CPython. As professional 
as those projects are, I really respect their work - I look up to them 
(loads). I am just a guy who likes compilers; it isn't my day job and I 
don't get paid for this. I just enjoy the challenge, and I hope you 
understand that this is my baby and I am very protective of it :).

I hope in a few months, when I start compiling test suites, to publish 
benchmarks. What I will say is that it looks pretty good right now: there is 
only 1 case so far (that I have tested) where I am not faster than CPython, 
and only because I haven't been compiling the runtime library with proper 
CFLAGS such as -O2 (I wasn't passing anything), and because I have tonnes of 
debugging malloc calls all over the show slowing it down, which is why I 
need to rewrite part of the runtime. I've been fighting GCC for 4 years; now 
I am fighting my own code ;).

Final note: I am not saying JIT is bad or not the way to do things. I 
personally think this question isn't answered, and I can see the case for 
it, though there are downsides that JIT people won't talk about.

The complexity of maintaining a JIT in a project is probably the main one, 
and optimisations at runtime make it even more mental - saying they are hard 
to do is an understatement, never mind that they aren't as common as you 
would want to believe outside of target specifics, which GCC already knows 
(-mtune*). I do believe JIT is the way forward, but I think languages need 
to be designed from that perspective - and maybe even CPUs, with some kind 
of instructions to help maintain a distinction between runtime and user code 
(making tail call optimisations easier on dynamic languages), perhaps?

Thanks.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Python Front-end to GCC

2013-10-21 Thread Philip Herron
On Monday, 21 October 2013 21:26:06 UTC+1, zipher  wrote:
> On Mon, Oct 21, 2013 at 4:08 AM, Philip Herron
>  wrote:
>
> > Thanks, i've been working on this basically on my own 95% of the compiler
> > is all my code, in my spare time. Its been fairly scary all of this for
> > me. I personally find this as a real source of interest to really
> > demystify compilers and really what Jit compilation really is under the
> > hood.
>
> So I'm curious, not having looked at your code, are you just
> translating python code into C code to make your front-end to gcc?
> Like converting "[1,2,3]" into a C linked-list data structure and
> making 1 an int (or BigNum?)?
>
> -- 
> MarkJ
> Tacoma, Washington

No, it's not like those 'compilers'. I don't really agree with a compiler 
generating C/C++ and claiming to produce native code; I don't believe that 
is truly within the statement. Compilers that do that tend to put in a lot 
of type-safety code and debugging internals at a high level to get things 
working - in other projects, that is; I am not saying Python compilers here, 
as I haven't analysed enough of them to say so.

What I mean by a front-end is just like G++, gccgo or gfortran: it all works 
the same. Each of these is a front-end; you can pass all those mental GCC 
options like -O3 -mtune -fblabla. It is implemented as part of GCC and you 
can 'bootstrap python'. You can -fdump-tree-all, etc.

What I can say is that JIT compilation is really 'mystified' in a big way 
when it comes to projects like PyPy: when it's implemented in Python, how 
can it call mmap to make an executable memory block, etc.? When it comes to 
compilation in general, I think things get even more mystified in the Python 
compiler community.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Python Front-end to GCC

2013-10-22 Thread Philip Herron
On Tuesday, 22 October 2013 09:55:15 UTC+1, Antoine Pitrou  wrote:
> Philip Herron  googlemail.com> writes:
> >
> > Its interesting a few things come up what about:
> >
> > exec and eval. I didn't really have a good answer for this at my talk at
> > PYCon IE 2013 but i am going to say no. I am not going to implement
> > these. Partly because eval and exec at least to me are mostly from
> > developing interpreters as a debugging exercise so the test doesn't have
> > to invoke the program properly and feed in strings to interpret at least
> > thats what i have done in the past with an virtual machine i wrote
> > before gccpy.
>
> If you don't implement exec() and eval() then people won't be able to use
> namedtuples, which are a common datatype factory.
>
> As for the rest: well, good luck writing an AOT compiler producing
> interesting results on average *pure* Python code. It's already been tried
> a number of times, and has generally failed. Cython mitigates the issue by
> exposing a superset of Python (including type hints, etc.).
>
> Regards
>
> Antoine.
Thanks for that interesting example. I haven't looked into how namedtuple is 
implemented, but on an initial look I am fairly sure I can implement it 
without using exec or eval. I've found this a lot in implementing my 
runtime. In the past I've used exec and eval as debug hooks into a toy 
virtual machine I wrote; I don't particularly think they are part of a 
language, nor should people really use them.

Thanks
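(For what it's worth, a namedtuple-like factory really can be built without
exec/eval, using type() and property(); a minimal sketch, nowhere near as
featureful as collections.namedtuple:)

```python
def simple_namedtuple(typename, field_names):
    """Build a tuple subclass with named read-only fields - no exec/eval."""
    def __new__(cls, *args):
        if len(args) != len(field_names):
            raise TypeError("expected %d arguments" % len(field_names))
        return tuple.__new__(cls, args)

    namespace = {"__new__": __new__, "__slots__": ()}
    for index, name in enumerate(field_names):
        # the default argument pins the current index for each property
        namespace[name] = property(lambda self, i=index: self[i])
    return type(typename, (tuple,), namespace)

Point = simple_namedtuple("Point", ["x", "y"])
p = Point(1, 2)
print(p.x, p.y)  # 1 2
```

(CPython's own namedtuple historically built a class-definition string and
exec'd it, mainly to get nice signatures and docstrings; the type()-based
route trades those niceties for avoiding exec.)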
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Python Front-end to GCC

2013-10-22 Thread Philip Herron
On Tuesday, 22 October 2013 10:14:16 UTC+1, Oscar Benjamin  wrote:
> On 22 October 2013 00:41, Steven D'Aprano
> >>> On the contrary, you have that backwards. An optimizing JIT compiler
> >>> can often produce much more efficient, heavily optimized code than a
> >>> static AOT compiler, and at the very least they can optimize different
> >>> things than a static compiler can. This is why very few people think
> >>> that, in the long run, Nuitka can be as fast as PyPy, and why PyPy's
> >>> ultimate aim to be "faster than C" is not moonbeams:
> >>
> >> That may be true but both the examples below are spurious at best. A
> >> decent AOT compiler would reduce both programs to the NULL program as
> >> noted by haypo:
> >> http://morepypy.blogspot.co.uk/2011/02/pypy-faster-than-c-on-carefully-
> > crafted.html?showComment=1297205903746#c2530451800553246683
> >
> > Are you suggesting that gcc is not a decent compiler?
>
> No.
>
> > If "optimize away to the null program" is such an obvious thing to do,
> > why doesn't the most popular C compiler in the [FOSS] world do it?
>
> It does if you pass the appropriate optimisation setting (as shown in
> haypo's comment). I should have been clearer.
>
> gcc compiles programs in two phases: compilation and linking.
> Compilation creates the object files x.o and y.o from x.c and y.c.
> Linking creates the output binary a.exe from x.o and y.o. The -O3
> optimisation setting used in the blog post enables optimisation in the
> compilation phase. However each .c file is compiled independently so
> because the add() function is defined in x.c and called in y.c the
> compiler is unable to inline it. It also can't remove it as dead code
> because although it knows that the return value isn't used it doesn't
> know if the call has side effects.
>
> You might think it's silly that gcc can't optimise across source files
> and if so you're right because actually it can if you enable link time
> optimisation with the -flto flag as described by haypo. So if I do
> that with the code from the blog post I get (using mingw gcc 4.7.2 on
> Windows):
>
> $ cat x.c
> double add(double a, double b)
> {
>   return a + b;
> }
> $ cat y.c
> double add(double a, double b);
>
> int main()
> {
>   int i = 0;
>   double a = 0;
>   while (i < 10) {
>     a += 1.0;
>     add(a, a);
>     i++;
>   }
> }
> $ gcc -O3 -flto x.c y.c
> $ time ./a.exe
>
> real    0m0.063s
> user    0m0.015s
> sys     0m0.000s
> $ time ./a.exe  # warm cache
>
> real    0m0.016s
> user    0m0.015s
> sys     0m0.015s
>
> So gcc can optimise this all the way to the null program which takes
> 15ms to run (that's 600 times faster than pypy).
>
> Note that even if pypy could optimise it all the way to the null
> program it would still be 10 times slower than C's null program:
>
> $ touch null.py
> $ time pypy null.py
>
> real    0m0.188s
> user    0m0.076s
> sys     0m0.046s
> $ time pypy null.py  # warm cache
>
> real    0m0.157s
> user    0m0.060s
> sys     0m0.030s
>
> > [...]
> >> So the pypy version takes twice as long to run this. That's impressive
> >> but it's not "faster than C".
>
> (Actually if I enable -flto with that example the C version runs 6-7
> times faster due to inlining.)
>
> > Nobody is saying that PyPy is *generally* capable of making any
> > arbitrary piece of code run as fast as hand-written C code. You'll
> > notice that the PyPy posts are described as *carefully crafted*
> > examples.
>
> They are more than carefully crafted. They are useless and misleading.
> It's reasonable to contrive of a simple CPU-intensive programming
> problem for benchmarking. But the program should do *something* even
> if it is contrived. Both programs here consist *entirely* of dead
> code. Yes it's reasonable for the pypy devs to test things like this
> during development. No it's not reasonable to showcase this as an
> example of the potential for pypy to speed up any useful computation.
>
> > I believe that, realistically, PyPy has potential to bring Python into
> > Java and .Net territories, namely to run typical benchmarks within an
> > order of magnitude of C speeds on the same benchmarks. C is a very hard
> > target to beat, because vanilla C code does *so little* compared to
> > other languages: no garbage collection, no runtime dynamism, very
> > little polymorphism. So benchmarking simple algorithms plays to C's
> > strengths, while ignoring C's weaknesses.
>
> As I said I don't want to criticise PyPy. I've just started using it
> and it is impressive. However both of those blog posts are
> misleading. Not only that but the authors must know 

Re: Python Front-end to GCC

2013-10-23 Thread Philip Herron
On Wednesday, 23 October 2013 07:48:41 UTC+1, John Nagle  wrote:
> On 10/20/2013 3:10 PM, victorgarcia...@gmail.com wrote:
> > On Sunday, October 20, 2013 3:56:46 PM UTC-2, Philip Herron wrote:
> >> I've been working on GCCPY since roughly november 2009 at least in its
> >> concept. It was announced as a Gsoc 2010 project and also a Gsoc 2011
> >> project. I was mentored by Ian Taylor who has been an extremely big
> >> influence on my software development carrer.
> >
> > Cool!
> >
> >> Documentation can be found http://gcc.gnu.org/wiki/PythonFrontEnd.
> >> (Although this is sparse partialy on purpose since i do not wan't
> >> people thinking this is by any means ready to compile real python
> >> applications)
> >
> > Is there any document describing what it can already compile and, if
> > possible, showing some benchmarks?
>
> After reading through a vast amount of drivel below on irrelevant
> topics, looking at the nonexistent documentation, and finally reading
> some of the code, I think I see what's going on here.  Here's
> the run-time code for integers:
>
> http://sourceforge.net/p/gccpy/code/ci/master/tree/libgpython/runtime/gpy-object-integer.c
>
> The implementation approach seems to be that, at runtime,
> everything is a struct which represents a general Python object.
> The compiler is, I think, just cranking out general subroutine
> calls that know nothing about type information. All the
> type handling is at run time.  That's basically what CPython does,
> by interpreting a pseudo-instruction set to decide which
> subroutines to call.
>
> It looks like integers and lists have been implemented, but
> not much else.  Haven't found source code for strings yet.
> Memory management seems to rely on the Boehm garbage collector.
> Much code seems to have been copied over from the GCC library
> for Go. Go, though, is strongly typed at compile time.
>
> There's no inherent reason this "compiled" approach couldn't work,
> but I don't know if it actually does. The performance has to be
> very low.  Each integer add involves a lot of code, including two calls
> of "strcmp (x->identifier, "Int")".  A performance win over CPython
> is unlikely.
>
> Compare Shed Skin, which tries to infer the type of Python
> objects so it can generate efficient type-specific C++ code.  That's
> much harder to do, and has trouble with very dynamic code, but
> what comes out is fast.
>
>    John Nagle

I think your analysis is probably grossly unfair for many reasons. But your 
entitled to your opinion.

Current i do not use Bohem-GC (I dont have one yet), i re-use principles from 
gccgo in the _compiler_ not the runtime. At runtime everything is a 
gpy_object_t, everything does this. Yeah you could do a little of dataflow 
analysis for some really really specific code and very specific cases and get 
some performance gains. But the problem is that the libpython.so it was 
designed for an interpreter.

So first off your comparing a project done on my own to something like cPython 
loads of developers 20 years on my project or something PyPy has funding loads 
of developers.

Where i speed up is absolutely no runtime lookups on data access. Look at 
cPython its loads of little dictionaries. All references are on the Stack at a 
much lower level than C. All constructs are compiled in i can reuse C++ native 
exceptions in the whole thing. I can hear you shouting at the email already but 
the middle crap that a VM and interpreter have to do and fast lookup is _NOT_ 
one of them. If you truely understand how an interpreter works you know you 
cant do this

Plus, you're referencing really old code on SourceForge. And I don't want to 
put out benchmarks (I would get so much grief from people that it's really 
not worth it), but I can say it is faster than everything else on the code I 
compile so far. Not only that, but you're referencing a strcmp to say it's 
slow; it isn't 100% ideal, but in my current git tree I have changed that. So 
I think it's completely unfair to reference tiny things and pretend you know 
everything about my project.

One thing people might find interesting: for a class, I do dataflow analysis 
to generate a complete type for that class, and each member function is a 
compiled function as in C++, but at a much lower level. The whole project has 
been about stripping out the machinery needed to run user code, and I have 
been successful so far. But you're comparing an in-my-spare-time project to 
people who work on their stuff full time, with loads of people, etc.

Anyway, I am just going to stay out of this from now on, but your email made 
me want to reply and rage.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: try/except/finally

2014-06-09 Thread Philip Shaw
On 2014-06-08, Dave Angel  wrote:
> Frank B  Wrote in message:
>> Ok; this is a bit esoteric.
>> 
>> So finally is executed regardless of whether an exception occurs, so states 
>> the docs.
>> 
>> But, I thought, if I  from my function first, that should take 
>> precedence.
>> 
>> au contraire
>> 
>> Turns out that if you do this:
>> 
>> try:
>>   failingthing()
>> except FailException:
>>   return 0
>> finally:
>>   return 1
>> 
>> Then finally really is executed regardless... even though you told it to 
>> return.
>> 
>> That seems odd to me.
>> 
>
> The thing that's odd to me is that a return is permissible inside
> a finally block. That return should be at top level, even with the
> finally line. And of course something else should be in the body of
> the finally block.

It does have some legitimate uses, for example:

try:
    failingThing()
finally:
    simple_cleanup()
    if that_worked():
        return
    # ...complicated cleanup with
    # lots of blocks follows here...

OTOH, it could just be that Guido didn't think of banning it when
exceptions were first added and doesn't want to introduce an
incompatibility later.
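
A minimal runnable sketch of the behaviour being discussed, showing that a
return in the finally clause overrides an earlier return from the except
clause:

```python
def f():
    try:
        raise ValueError("boom")
    except ValueError:
        return 0        # this return is set up first...
    finally:
        return 1        # ...but the finally clause's return wins

print(f())  # 1
```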
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: How to use SQLite (sqlite3) more efficiently

2014-06-09 Thread Philip Shaw
On 2014-06-06, Mark Lawrence  wrote:
> On 06/06/2014 22:58, Dave Angel wrote:
>> Chris Angelico  Wrote in message:
>>> On Sat, Jun 7, 2014 at 4:15 AM, R Johnson
>>>  wrote:
> The subject line isn't as important as a header, carried invisibly
> through, that says that you were replying to an existing post. :)

 Sorry for my ignorance, but I've never edited email headers
 before and didn't find any relevant help on Google. Could you
 please give some more details about how to do what you're
 referring to, or perhaps point me to a link that would explain
 more about it? (FYI, I read the Python mailing list on Google
 Groups, and reply to posts in Thunderbird, sending them to the
 Python-list email address.)
>>>
>>> The simple answer is: You don't have to edit headers at all. If
>>> you want something to be part of the same thread, you hit Reply
>>> and don't change the subject line. If you want something to be a
>>> spin-off thread, you hit Reply and *do* change the subject. If you
>>> want it to be a brand new thread, you don't hit Reply, you start a
>>> fresh message.  Any decent mailer will do the work for you.
>>>
>>> Replying is more than just quoting a bunch of text and copying in
>>> the subject line with "Re:" at the beginning. :)
>>>
>>
>> set up a newsgroup in Thunderbird from gmane.comp.python.general.
>>
>
> That doesn't sound right to me.  Surely you set up the newgroup
> news.gmane.org and then subscribe to the mailing lists, blog feeds
> or whatever it is that you want?
>

In usenet parlance, news.gmane.org is a newsserver, and
gmane.comp.python.general is a newsgroup.

gmane runs a series of mail<->news gateways for several mailing lists,
but there are others as well: someone also bridges the list to
the group comp.lang.python, which is where I'm reading this.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Not Responding When Dealing with Large Data

2014-06-18 Thread Philip Dexter
On Wed, Jun 18, 2014 at 1:20 PM, cutey Love  wrote:
> I'm trying to read in 10 lines of text, use some functions to edit them 
> and then return a new list.
>
> The problem is my program always goes not responding when the amount of lines 
> are a high number.
>
> I don't care how long the program takes to work, just need it to stop 
> crashing?


What are you doing with the data? Try reading in the file chunks at a time
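
A line-at-a-time sketch of that suggestion (the names process_file and
transform are illustrative, not from the original post):

```python
def process_file(in_path, out_path, transform):
    # Stream the input one line at a time so memory use stays flat
    # no matter how many lines the file has.
    with open(in_path) as src, open(out_path, "w") as dst:
        for line in src:
            dst.write(transform(line))
```

Iterating over the open file object never holds more than one line in
memory, which avoids the "not responding" freeze from loading everything
at once.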
-- 
https://mail.python.org/mailman/listinfo/python-list


PyPy3 2.3.1 released

2014-06-20 Thread Philip Jenvey
=====================
PyPy3 2.3.1 - Fulcrum
=====================

We're pleased to announce the first stable release of PyPy3. PyPy3
targets Python 3 (3.2.5) compatibility.

We would like to thank all of the people who donated_ to the `py3k proposal`_
for supporting the work that went into this.

You can download the PyPy3 2.3.1 release here:

http://pypy.org/download.html#pypy3-2-3-1

Highlights
==========

* The first stable release of PyPy3: support for Python 3!

* The stdlib has been updated to Python 3.2.5

* Additional support for the u'unicode' syntax (`PEP 414`_) from Python 3.3

* Updates from the default branch, such as incremental GC and various JIT
  improvements

* Resolved some notable JIT performance regressions from PyPy2:

 - Re-enabled the previously disabled collection (list/dict/set) strategies

 - Resolved performance of iteration over range objects

 - Resolved handling of Python 3's exception __context__ unnecessarily forcing
   frame object overhead

.. _`PEP 414`: http://legacy.python.org/dev/peps/pep-0414/

What is PyPy?
=============

PyPy is a very compliant Python interpreter, almost a drop-in replacement for
CPython 2.7.6 or 3.2.5. It's fast due to its integrated tracing JIT compiler.

This release supports x86 machines running Linux 32/64, Mac OS X 64, Windows,
and OpenBSD,
as well as newer ARM hardware (ARMv6 or ARMv7, with VFPv3) running Linux.

While we support 32 bit python on Windows, work on the native Windows 64
bit python is still stalled; we would welcome a volunteer
to `handle that`_.

.. _`handle that`: 
http://doc.pypy.org/en/latest/windows.html#what-is-missing-for-a-full-64-bit-translation

How to use PyPy?
================

We suggest using PyPy from a `virtualenv`_. Once you have a virtualenv
installed, you can follow instructions from `pypy documentation`_ on how
to proceed. This document also covers other `installation schemes`_.

.. _donated: 
http://morepypy.blogspot.com/2012/01/py3k-and-numpy-first-stage-thanks-to.html
.. _`py3k proposal`: http://pypy.org/py3donate.html
.. _`pypy documentation`: 
http://doc.pypy.org/en/latest/getting-started.html#installing-using-virtualenv
.. _`virtualenv`: http://www.virtualenv.org/en/latest/
.. _`installation schemes`: 
http://doc.pypy.org/en/latest/getting-started.html#installing-pypy


Cheers,
the PyPy team

--
Philip Jenvey

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Get named groups from a regular expression

2014-07-03 Thread Philip Shaw
On 2014-07-01, Florian Lindner  wrote:
>
> Is there a way I can extract the named groups from a regular
> expression?  e.g. given "(?P\d)" I want to get something
> like ["testgrp"].

The match object has a method called groupdict(), so you can get the
found named groups using match.groupdict().keys(). I can't remember
what happens to unnamed groups (I prefer to name every group I want),
but ISTR you can fetch them with match.group(n), where n is the capture
group's number (i.e. what you'd use to backreference it).

> Can I make the match object to return default values for named
> groups, even if no match was produced?

A lazy solution I've used was to write a default dict, then update it
with the groupdict. I doubt that's all that efficient, but the
defaults were constant strings and the program was network-bound
anyway.
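
A quick sketch of both points, using the group name from the question (the
default values here are illustrative):

```python
import re

pattern = re.compile(r"(?P<testgrp>\d)")
m = pattern.search("value: 7")
print(m.groupdict())  # {'testgrp': '7'}

# The lazy default-dict approach described above: start from the
# defaults, then overlay whatever actually matched.
defaults = {"testgrp": "0", "other": "n/a"}
groups = dict(defaults)
if m:
    groups.update(m.groupdict())
print(groups)  # {'testgrp': '7', 'other': 'n/a'}
```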
-- 
https://mail.python.org/mailman/listinfo/python-list


PyPy3 2.4.0 released

2014-10-21 Thread Philip Jenvey
======================
PyPy3 2.4 - Snow White
======================

We're pleased to announce PyPy3 2.4, which contains significant performance
enhancements and bug fixes.

You can download the PyPy3 2.4.0 release here:

http://pypy.org/download.html

PyPy3 Highlights
================

Issues reported with our previous release were fixed after reports from users on
our new issue tracker at https://bitbucket.org/pypy/pypy/issues or on IRC at
#pypy. Here is a summary of the user-facing PyPy3 specific changes:

* Better Windows compatibility, e.g. the nt module functions _getfinalpathname
  & _getfileinformation are now supported (the former is required for the
  popular pathlib library for example)

* Various fsencode PEP 383 related fixes to the posix module (readlink, uname,
  ttyname and ctermid) and improved locale handling

* Switched the default binary name on POSIX distributions to 'pypy3' (which
  symlinks to 'pypy3.2')

* Fixed a couple different crashes related to parsing Python 3 source code

Further Highlights (shared w/ PyPy2)
====================================

Benchmarks improved after internal enhancements in string and
bytearray handling, and a major rewrite of the GIL handling. This means
that external calls are now a lot faster, especially the CFFI ones. It also
means better performance in a lot of corner cases with handling strings or
bytearrays. The main bugfix is handling of many socket objects in your
program which in the long run used to "leak" memory.

We fixed a memory leak in IO in the sandbox_ code.

We welcomed more than 12 new contributors, and conducted two Google
Summer of Code projects, as well as other student projects not
directly related to Summer of Code.

* Reduced internal copying of bytearray operations

* Tweak the internal structure of StringBuilder to speed up large string
  handling, which becomes advantageous on large programs at the cost of slightly
  slower small *benchmark* type programs.

* Boost performance of thread-local variables in both unjitted and jitted code,
  this mostly affects errno handling on linux, which makes external calls
  faster.

* Move to a mixed polling and mutex GIL model that makes multithreaded jitted
  code run *much* faster

* Optimize errno handling in linux (x86 and x86-64 only)

* Remove ctypes pythonapi and ctypes.PyDLL, which never worked on PyPy

* Classes in the ast module are now distinct from structures used by
  the compiler, which simplifies and speeds up translation of our
  source code to the PyPy binary interpreter

* Win32 now links statically to zlib, expat, bzip, and openssl-1.0.1i.
  No more missing DLLs

* Many issues were resolved_ since the 2.3.1 release in June

.. _`whats-new`: http://doc.pypy.org/en/latest/whatsnew-2.4.0.html
.. _resolved: https://bitbucket.org/pypy/pypy/issues?status=resolved
.. _sandbox: http://doc.pypy.org/en/latest/sandbox.html

We have further improvements on the way: rpython file handling,
numpy linalg compatibility, as well
as improved GC and many smaller fixes.

Please try it out and let us know what you think. We especially welcome
success stories; we know you are using PyPy, so please tell us about it!

Cheers

The PyPy Team
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Numpy.array with dtype works on list of tuples not on list of lists?

2011-09-18 Thread Philip Semanchuk

On Sep 18, 2011, at 11:55 AM, Alex van der Spek wrote:

> Why does this not work?
> 
>>>> dat=[[1,2,3],[4,5,6]]
>>>> col=[('a','f4'),('b','f4'),('c','f4')]
>>>> arr=numpy.array(dat,dtype=col)
> 
> Traceback (most recent call last):
> File "<stdin>", line 1, in <module>
>   arr=numpy.array(dat,dtype=col)
> TypeError: expected a readable buffer object
> 
> But this does:
> 
>>>> dat=[(1,2,3),(4,5,6)]
>>>> arr=numpy.array(dat,dtype=col)
>>>> arr
> array([(1.0, 2.0, 3.0), (4.0, 5.0, 6.0)],
>   dtype=[('a', '<f4'), ('b', '<f4'), ('c', '<f4')])
> 
> The only difference is that the object is a list of tuples now?

I don't know why you're seeing what you're seeing, but if you don't get an 
answer here you could try asking on the numpy list. 

Good luck
Philip

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Need help with file encoding-decoding

2011-09-23 Thread Philip Semanchuk

On Sep 23, 2011, at 7:44 AM, Yaşar Arabacı wrote:

> Hi,
> 
> I'm trying to write a mass HTML downloader, and it processes files after it
> has downloaded them. I have problems with encodings and decodings. Sometimes
> I get UnicodeDecodeErrors, or I get half-pages after the processing part. Or
> more generally, some things don't feel right. Can you check my approach and
> provide me some feedback please? Here is what I am doing.
> 
> 1) send a HEAD request to file's source to get file encoding, set encoding
> variable accordingly.

Hi Yaşar
This is a pretty optimistic algorithm, at least by the statistics from 2008 
(see below). 


> 2) if server doesn't provide an encoding, set encoding variable as utf-8

This is statistically a good guess but it doesn't follow the HTTP specification.


> 4) in this step, I need to parse the content I get, because I will search
> for further links \
>I feed content to parser (subclass of HTMLParser.HTMLParser) like

Does HTMLParser.HTMLParser handle broken HTML? Because there's lots of it out 
there.

I used to run an automated site validator, and I wrote a couple of articles you 
might find interesting. One is about how to get the encoding of a Web page:
http://NikitaTheSpider.com/articles/EncodingDivination.html
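
The order of precedence discussed there can be sketched roughly like this (a
simplification for illustration, not the article's actual code):

```python
import re

def guess_encoding(content_type, body):
    # 1) charset from the HTTP Content-Type header, if the server sent one
    m = re.search(r"charset=([\w.-]+)", content_type or "", re.I)
    if m:
        return m.group(1)
    # 2) otherwise look for a META charset in the first couple of KB
    head = body[:2048].decode("ascii", errors="ignore")
    m = re.search(r'charset=["\']?([\w.-]+)', head, re.I)
    if m:
        return m.group(1)
    # 3) fall back to a guess
    return "utf-8"
```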

I also wrote an article examining the statistics I'd seen run through the 
crawler/validator. One thing I saw was that almost 2/3 of Web pages specified 
the encoding in the META HTTP-EQUIV Content-Type tag rather than in the HTTP 
Content-Type header. Mind you, this was three years ago so the character of the 
Web has likely changed since then, but probably not too dramatically.
http://NikitaTheSpider.com/articles/ByTheNumbers/fall2008.html

You can also do some straightforward debugging. Save the raw bytes you get from 
each site, and when you encounter a decode error, check the raw bytes. Are they 
really in the encoding specified? Webmasters make all kinds of mistakes. 


Hope this helps
Philip



> this -> content.decode(encoding)
> 5) open a file in binary mode: open(file_path,"wb")
> 6) I write as I read without modifing.
> 
> ##
> # After processing part
> ##
> 
> (Note: encoding variable is same as the downloading part)
> 
> 1) open local file in binary mode for reading: file_name =
> open(file_path,"rb")
> 2) decode the file contents into a variable => decoded_content =
> file_name.read().decode(encoding)
> 3) send decoded content to a parser, parser contstruct new html content. (as
> str)
> 4) open same file for writing, in binary mode, write parser's output like
> this: file_name.write(parser.output.encode(encoding))
> -- 
> http://yasar.serveblog.net/
> -- 
> http://mail.python.org/mailman/listinfo/python-list

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: void * C array to a Numpy array using Swig

2006-01-12 Thread Philip Austin
"Travis E. Oliphant" <[EMAIL PROTECTED]> writes:

> Krish wrote:

> Yes, you are right that you need to use typemaps.  It's been awhile
> since I did this kind of thing, but here are some pointers.

Also, there's http://geosci.uchicago.edu/csc/numptr



-- 
http://mail.python.org/mailman/listinfo/python-list


Multiple Polynomial Quadratic Sieve

2006-05-30 Thread Philip Smith
Just to announce that I have posted an experimental version of MPQS which I 
am hoping those of a mathematical turn of mind would care to test, comment 
on and maybe contribute to.

There is work to do but it performs very well.

The package is available via FTP at 
http://www.pythonstuff.pwp.blueyonder.co.uk/
account: python password: guest

Thanks 


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Multiple Polynomial Quadratic Sieve

2006-05-30 Thread Philip Smith
Whoops

Should have been

http://www.python.pwp.blueyonder.co.uk/


Thanks

Phil
"Philip Smith" <[EMAIL PROTECTED]> wrote in message 
news:[EMAIL PROTECTED]
> Just to announce that I have posted an experimental version of MPQS which 
> I am hoping those of a mathematical turn of mind would care to test, 
> comment on and maybe contribute to.
>
> There is work to do but it performs very well.
>
> The package is available via FTP at 
> http://www.pythonstuff.pwp.blueyonder.co.uk/
> account: python password: guest
>
> Thanks
> 


-- 
http://mail.python.org/mailman/listinfo/python-list


Pyrex newbie question

2006-06-04 Thread Philip Smith
Just starting to use pyrex on windows.

Using pyrex version 0.9.3.1.win32

Using Activestate Python 2.4.3.12

Using Mingw compiler

When I try to run the pyrex demo it fails with a message:

"undefined reference to '_imp__Py_NoneStruct' "

Anyone know why? 


-- 
http://mail.python.org/mailman/listinfo/python-list

