access to preallocated block of memory?

2005-12-14 Thread Greg Copeland
I am running Python on VxWorks.  In the course of operation, a VxWorks
task writes to a reserved area of memory.  I need access to this chunk
of memory from within Python.  Initially I thought I could simply
access it as a string, but a string would reallocate and copy this chunk
of memory, which is not something I can have as it would waste a huge
amount of memory.  We're talking about something like 40MB on a device
with limited RAM.  I have been looking at array.  It looks promising.
What's the best route to go here?  Ideally, I would like to simply pass
in the address of the reserved block and a length, and have the memory
accessible.

Is there some existing python object/facility I can use or will I need
to create a custom module?  Any tips, hints, or pointers would
certainly be appreciated!


Thanks,

Greg

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: access to preallocated block of memory?

2005-12-15 Thread Greg Copeland
So array can not map a pre-existing chunk of memory?  I did not port
the mmap module because such semantics don't exist on VxWorks.  Based
on comments thus far, it looks like mmap is my best bet here?  Any
other options?

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: why does php have a standard SQL module and Python doesn't !?

2005-12-15 Thread Greg Copeland
To build on Heiko's comments, and to be clear: Python does have a
standard interface specification, to which many SQL interfaces are
available.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: access to preallocated block of memory?

2005-12-15 Thread Greg Copeland
First, let me say thanks for answering...

> What have you gathered from people who have gone before?  Googling
> python vxworks gives about 50k hits.

And chances are, they will all be unrelated to my question.  WRS uses
python for various IDE scripting needs, but they do not use it on their
own platform.  Before I ported Python to VxWorks, AFAIK, it didn't
exist on that platform.

> Your post does not have enough info about your environment,

That's because I'm not asking someone to specifically tell me how to do
it on VxWorks.  This is because I'm sure a VxWorks-specific solution
does not exist.  I'm asking about the best road to take given a generic
Python environment.  I added the extra information just in case someone
had some insight into the platform and Python.  If a generic answer does
not exist then I'm sure I'll have to craft something myself...which is
also part of what I'm trying to determine.  The question is very
simple...aside from mmap, is there any pre-existing facility in Python
to which I can pass an address and a length and have the associated
chunk of memory exposed to Python using an existing type, without an
attempt to malloc and copy?  Unless you have a specific answer which
uses VxWorks-specific facilities, the fact that it's on VxWorks is
strictly informational.

> Once you have an access-providing object, what kind of access do you require?

It doesn't really matter.  It just so happens that it's read only
access, but I can't see how it matters.  Later on, it may require write
access too.  I simply need to access the binary data byte for byte.

> What modules/libraries do you have to give you access now from python to the 
> vxworks environment?

As I said before, I have a chunk of memory, to which I have a pointer.
I know the length of the memory.  I would like to obtain access to the
chunk of memory.  Based on everything I'm seeing, it seems the answer
is a custom module all the way, which I can probably base on the mmap
module.

Sorry for any confusion my original question may have created.  I was
rushed when I posted it.  Hopefully this posting is clearer about my
intentions and needs.

Sincerely,

Greg

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: access to preallocated block of memory?

2005-12-15 Thread Greg Copeland
I think you're getting caught up in OS/platform semantics rather than a
Python solution.  I already have access to the block of memory...I
simply need information about existing Python facilities which will
allow me to expose the block to Python as a native type, from which I
can read byte for byte and optionally write to.  As I originally said,
I have a pointer and the block's length.  I have access to it already.
I simply need to expose it to Python.  What facilities already exist to
allow this which do not cause a malloc/memcpy on said block?

Thus far, it looks like a hack on mmap is my best bet...I was hoping
that the array module would have something I could use too, which would
save me from having to write my own data-type module.
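For readers landing on this thread later: modern Python's ctypes module
(in the stdlib since 2.5, after this thread) can overlay an array type on
an existing address and length without a malloc/memcpy of the contents.
A sketch, using a locally allocated buffer to stand in for the reserved
block:

```python
import ctypes

# Illustration only: allocate a block ourselves so its address stands in
# for the reserved region; on the real system the address and length
# would come from the VxWorks side.
data = b"reserved memory!"                      # 16 bytes
backing = ctypes.create_string_buffer(data, len(data))
address = ctypes.addressof(backing)
length = len(data)

# Overlay an array type on the existing block -- no copy of the contents.
block = (ctypes.c_char * length).from_address(address)

print(block[:])        # read byte for byte
block[0] = b"R"        # write access works too
print(backing.raw)     # the underlying memory itself changed
```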

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: access to preallocated block of memory?

2005-12-15 Thread Greg Copeland
Based on the answers thus far, I suspect I'll being traveling this road
shortly.

Thanks,

Greg

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: access to preallocated block of memory?

2005-12-15 Thread Greg Copeland
Dang it.  That's what I suspected.  Thanks!

Greg

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: access to preallocated block of memory?

2005-12-16 Thread Greg Copeland
That certainly looks interesting.  I'll check it out right now.

Thanks!

Greg

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Shed Skin (Python-to-C++ Compiler) 0.0.5.9

2005-12-16 Thread Greg Copeland
I've been following this project with great interest.  If you don't
mind me asking, can you please include links, if available, when you
post updates?

Great stuff!  Keep it coming!

Greg

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: access to preallocated block of memory?

2005-12-16 Thread Greg Copeland
What license does the code use?  The PKG-INFO file says it's MIT; is
that accurate?  I'm still looking over the code, but it looks like I can
do exactly what I need with only minor changes.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: SMP, GIL and Threads

2005-12-16 Thread Greg Copeland
In situations like this, you need to guard the resource with a mutex.
In Python, things like insertions are atomic, but iterations are not.
Thus, if you wrap access with a mutex, things can be made safe.  I say
"can be" because you then have to ensure you always use the mutex to
satisfy your concurrent-access requirements.
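A minimal sketch of the pattern (the names here are my own, not from the
original post): inserts and a snapshot-style iteration both funnel
through one lock.

```python
import threading

items = {}
lock = threading.Lock()

def insert(key, value):
    # A single dict assignment is atomic under the GIL, but taking the
    # lock keeps inserts consistent with the iteration below.
    with lock:
        items[key] = value

def snapshot():
    # Iterating while another thread mutates the dict can blow up;
    # copying under the lock makes iteration safe.
    with lock:
        return dict(items)

threads = [threading.Thread(target=insert, args=(i, i * i)) for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(snapshot().items()))
```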

Greg

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: exec a string in an embedded environment

2006-01-11 Thread Greg Copeland
On Wed, 11 Jan 2006 04:29:32 -0800, Tommy R wrote:

> I work on a safety critical embedded application that runs on VxWorks.
> I have successfully ported the interpreter to VW. In my solution I have

Sure wish you would have asked...I ported Python to VxWorks
some time back.  I've been using it for some time now.

Greg

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: exec a string in an embedded environment

2006-01-12 Thread Greg Copeland
I would be happy to share my point with you.  In fact, I'm fixing a
minor memory leak (socket module; vxWorks specific) in Python 2.3.4
(ported version) today.  My port is actually on BE XScale.

Email me at g t copeland2002@@ya hoo...com and I'll be happy to talk
more with you.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Dual Core outlook

2006-02-07 Thread Greg Copeland
The short answer is, "maybe".  Because of the GIL, Python threads
serialize on CPU-bound work but not on I/O: the interpreter releases
the GIL around blocking I/O calls, so multiple threads can perform I/O
concurrently.  On the other hand, if you have two threads which are
CPU bound, only one will run at a time.

Having said that, there are plenty of ready workarounds.  One is to
write an extension module which executes your CPU-bound tasks while
releasing the GIL.  Another is to break your script into a
multiprocess model rather than a multithreaded one, using IPC as
needed.

-- 
http://mail.python.org/mailman/listinfo/python-list


inheritance with new-style classes - help

2005-05-06 Thread Greg Copeland
Okay, I have:
class Base( object ):
    def __init__( self ):
        self._attrib = "base"
        print "Base"

    def real( self ):
        print "Base.real() is calling base.virtual()"
        self.virtual()

    def virtual( self ):
        print "Base virtual()"
        pass


class Mother( Base ):
    def __init__( self ):
        print "Mother"
        super( Mother, self ).__init__()

    def virtual( self ):
        print self._attrib
        print "virtual = Mother"


class Father( Base ):
    def __init__( self ):
        print "Father"
        super( Father, self ).__init__()

    def virtual( self ):
        print self._attrib
        print "virtual = Father"


class Child( Mother, Father ):
    def __init( self ):
        print "Child"
        super( Child, self ).__init__()

        self._childAttrib = "child"

    def virtual( self ):
        print "base attribute = " + self._attrib
        print "virtual = Child"
        print "childAttrib = " + self._childAttrib


rename = Child


>>> x = rename()
Mother
Father
Base
>>> x.virtual()
base attribute = base
virtual = Child
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "/usr/tmp/python-8zAJdg.py", line 51, in virtual
AttributeError: 'Child' object has no attribute '_childAttrib'

Hmmm...interesting...but okay...let's look some more...

>>> x.__dict__
{'_attrib': 'base'}

What??!  Where the heck did self._childAttrib go?  And why?

Can someone please shine some light here?  Please?


Thanks in advance,

Greg

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: inheritance with new-style classes - help

2005-05-06 Thread Greg Copeland
BTW, this is on Python 2.3.4.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: inheritance with new-style classes - help

2005-05-06 Thread Greg Copeland
Doh!  Child's __init__ was declared as __init().  Fixing that took care
of it!  Sorry for wasting the bandwidth!
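For the archives: with the method correctly spelled `__init__`, the
cooperative super() chain runs Mother, then Father, then Base, and both
attributes land in the instance dict.  A trimmed-down, Python 3-compatible
version of the example:

```python
class Base(object):
    def __init__(self):
        self._attrib = "base"

class Mother(Base):
    def __init__(self):
        super(Mother, self).__init__()

class Father(Base):
    def __init__(self):
        super(Father, self).__init__()

class Child(Mother, Father):
    def __init__(self):  # correctly spelled this time, so it actually runs
        super(Child, self).__init__()
        self._childAttrib = "child"

x = Child()
print(x.__dict__)   # both _attrib and _childAttrib are present now
```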

Cheers,

Greg

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python project solicitation

2007-08-21 Thread Greg Copeland

On Aug 20, 9:35 pm, JoeSox <[EMAIL PROTECTED]> wrote:
> I must say this thing is pretty cool.  I had a coworker try it out and
> he ran into problems getting it to run on his Linux OS.  So I am
> really looking for some non-Windows developers to take a look at it.
> All of the info is at the project site above.
> Thanks.
>

I looked at it real quick.  You need to use os.path.join for your file
paths.  You also need to use sys.platform for Windows-specific
processing.  For example:

import os
import sys

if sys.platform == 'win32':    # note: sys.platform is 'win32', lowercase
    FS_ROOT = 'C:'
else:
    FS_ROOT = '/'

WORDNETPATH = os.path.join( FS_ROOT, 'WordNet', '2.1', 'dict' )

So on and so on.  You wrote it very MSWin-centric, so it is not a
surprise it has trouble on other platforms.  All of your file
references need to be adjusted as above using os.path.join.

Keep in mind I only looked at it real quick.  Those appear to be the
cross-platform deal killers.  Short of something I missed (could
have), it should work fine on most any other platform once you take
out the Windows-isms.


Greg

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: IDE for Python

2007-08-21 Thread Greg Copeland
On Aug 21, 5:00 am, Joel Andres Granados <[EMAIL PROTECTED]>
wrote:
> Hello list:
>
> I have tried various times to use an IDE for Python but have always
> been disappointed.
> I haven't revisited the idea in about a year and was wondering what
> the Python people use.
> I have also found http://pida.co.uk/main as a possible solution.
> Anyone tried it yet?
>
> Suggestions?
> Regards
> Joel Andres Granados


Have you tried SPE?  I don't know how it compares to PyDev but SPE is
pretty slick.  It has several other tools integrated into it,
including a pretty nice debugger.  It's fairly small and loads mucho
faster than Eclipse ever will.  Best of all SPE is written completely
in Python.

http://pythonide.blogspot.com/
http://sourceforge.net/projects/spe

Personally I use XEmacs at the expense of some of the nicer IDE
features.

Greg

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Shed Skin Python-to-C++ compiler 0.0.23

2007-08-21 Thread Greg Copeland
On Aug 20, 7:31 am, "Mark Dufour" <[EMAIL PROTECTED]> wrote:
> Hi all,
>
> I have just released Shed Skin 0.0.23. It doesn't contain the type
> inference scalability improvements I was working on, but it does have
> quite a few bug fixes and minor feature additions. Here's a list of
> changes:
>
> -support for __iadd__, __imul__ and such (except __ipow__ and __imod__)
> -some overdue set optimizations
> -fix for string formatting problem (%% did not always work)
> -extension module stability fixes
> -fix for particular inheritance problem
> -other minor bugfixes, cleanups, and error messages
>
> I could really use some systematic help in pushing Shedskin further. Some 
> ideas:
>
> -send in bug reports - these are extremely valuable and motivating to
> me, yet I don't receive many..
> -find out why test 148 is currently broken under windows
> -add datetime, re or socket support
> -look into supporting custom classes in generated extension modules
> -write a Shedskin tutorial for 'novice' programmers
> -systemically test performance and suggest and work on improvements
> -investigate replacements for std::string and __gnu_cxx::hash_set
> -perform janitorial-type work in ss.py and lib/builtin.?pp
> -support extension modules under OSX (OSX gives me accute mental RSI)
> -add more tests to unit.py
>
> Thanks,
> Mark Dufour.
> --
> "One of my most productive days was throwing away 1000 lines of code"
> - Ken Thompson


Adding socket support would certainly open the door for many common
classes of applications.  If I had my pick, I'd say sockets and then re.

BTW, I gotta say projects like shedskin and pypy are the most exciting
Python projects I'm aware of.  Please keep up the good work.  I'm so
excited about the day I can begin using shedskin for the types of
projects I use Python on.

Greg

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Shed Skin Python-to-C++ compiler 0.0.23

2007-08-21 Thread Greg Copeland
On Aug 20, 7:31 am, "Mark Dufour" <[EMAIL PROTECTED]> wrote:
> Hi all,
>
> I have just released Shed Skin 0.0.23. It doesn't contain the type
> inference scalability improvements I was working on, but it does have
> quite a few bug fixes and minor feature additions. Here's a list of
> changes:
>
> -support for __iadd__, __imul__ and such (except __ipow__ and __imod__)
> -some overdue set optimizations
> -fix for string formatting problem (%% did not always work)
> -extension module stability fixes
> -fix for particular inheritance problem
> -other minor bugfixes, cleanups, and error messages
>
> I could really use some systematic help in pushing Shedskin further. Some 
> ideas:
>
> -send in bug reports - these are extremely valuable and motivating to
> me, yet I don't receive many..
> -find out why test 148 is currently broken under windows
> -add datetime, re or socket support
> -look into supporting custom classes in generated extension modules
> -write a Shedskin tutorial for 'novice' programmers
> -systemically test performance and suggest and work on improvements
> -investigate replacements for std::string and __gnu_cxx::hash_set
> -perform janitorial-type work in ss.py and lib/builtin.?pp
> -support extension modules under OSX (OSX gives me accute mental RSI)
> -add more tests to unit.py
>
> Thanks,
> Mark Dufour.
> --
> "One of my most productive days was throwing away 1000 lines of code"
> - Ken Thompson


One more thing.  Please include a link to the current project page
when you make these postings.

Greg

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Problem with Thread.join()

2007-08-21 Thread Greg Copeland
On Aug 20, 11:12 am, "Robert Dailey" <[EMAIL PROTECTED]> wrote:
> Hey guys,
>
> Sorry for taking so long to respond. I had actually figured out what
> this issue is over on the wxPython mailing list. The issue was that I
> was attempting to configure wxPython controls from a remote thread,
> which is apparently illegal due to some state persistance issues.
>

As a rule of thumb, only one thread *ever* controls the GUI.

Greg

-- 
http://mail.python.org/mailman/listinfo/python-list


Fast socket write

2007-08-21 Thread Greg Copeland
I'm having a brain cramp right now.  I can't seem to recall the name of
a module.  I know there is a Python module which allows for optimized
socket writes on Linux.  It uses a syscall to obtain its benefit.
IIRC, it is a fast path for I/O-bound servers.

Can someone please refresh my memory?  What is the name of this
module??

Help,

Greg

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Fast socket write

2007-08-22 Thread Greg Copeland
On Aug 21, 9:40 pm, Bikal KC <[EMAIL PROTECTED]> wrote:
> Greg Copeland wrote:
> > I'm having a brain cramp right now.  I can't see to recall the name of
>
> Is your cramp gone now ? :P


I wish.  If anyone can remember the name of this module I'd really
appreciate it.

Greg

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Fast socket write

2007-09-01 Thread Greg Copeland
On Aug 22, 8:30 am, paul <[EMAIL PROTECTED]> wrote:
> Greg Copeland schrieb:
> > On Aug 21, 9:40 pm, Bikal KC <[EMAIL PROTECTED]> wrote:
> >> Greg Copeland wrote:
> >>> I'm having a brain cramp right now.  I can't seem to recall the name of
> >> Is your cramp gone now ? :P
>
> > I wish.  If anyone can remember the name of this module I'd really
> > appreciate it.
>
> http://tautology.org/software/python-modules/sendfile probably...



That's it.  Thanks guys!
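For later readers: the syscall in question is sendfile(2), and modern
Python exposes it directly as os.sendfile (3.3+) and the higher-level
socket.sendfile (3.5+), so the third-party module is no longer needed.
A small sketch:

```python
import os
import socket
import tempfile

# Write a small file to transfer.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"zero-copy hello")
    path = f.name

src, dst = socket.socketpair()
with open(path, "rb") as f:
    src.sendfile(f)   # uses the sendfile(2) syscall where the OS supports it
src.close()

# Drain the receiving end until EOF.
chunks = []
while True:
    chunk = dst.recv(1024)
    if not chunk:
        break
    chunks.append(chunk)
received = b"".join(chunks)
dst.close()
os.unlink(path)
print(received)
```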

Greg

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Shed Skin Python-to-C++ compiler 0.0.23

2007-09-01 Thread Greg Copeland
On Aug 22, 10:00 am, srepmub <[EMAIL PROTECTED]> wrote:
> > Adding socket support would certainly open the door for many common
> > classes applications.  If I had my pick, I say, sockets and then re.
>
> Thanks. Especially sockets should be not too hard to add, but I
> probably won't work on these directly myself. Let me know if you are
> interested.. :-)
>
> > BTW, I gatta say projects like shedskin and pypy are the most exciting
> > python projects I'm aware of.  Please keep of the good work.  I'm so
> > excited about the day I can begin using shedskin for the types of
> > projects I use python on.
>
> I'm practically working alone on Shedskin, so the better bet will be
> PyPy, unless I start getting more help.
>
> BTW I usually add a link to the homepage, but somehow I forgot this
> time:
>
> http://mark.dufour.googlepages.com
>
> Thanks!
> Mark Dufour.


Mark, I wish I had the time to help with your project.  I believe
PyPy, Shedskin, and pyvm (which might be dead now), are the most
interesting python projects currently going on.  In fact, I would
place them ahead of python 3000 even.

Greg

-- 
http://mail.python.org/mailman/listinfo/python-list


Pygame + PyInstaller?

2007-09-01 Thread Greg Copeland
Anyone had any luck on using PyInstaller to package up Pygame?  I
posted to the PyInstaller group some time ago and have yet to receive
a reply.  Anyone have any tips to offer here?

A like-solution which runs on Linux would also be welcome.  When
PyInstaller works, it's pretty nice.  When it doesn't, it is a real
pain to figure out what the heck it doesn't like.  So if anyone has a
less painful alternative, which runs on Linux, please let me know.


Greg

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Why is this loop heavy code so slow in Python? Possible Project Euler spoilers

2007-09-02 Thread Greg Copeland
On Sep 2, 7:20 am, Arnaud Delobelle <[EMAIL PROTECTED]> wrote:
> On Sep 2, 12:51 pm, [EMAIL PROTECTED] wrote:
>
>
>
> > I'm pretty new to python, but am very happy with it. As well as using
> > it at work I've been using it to solve various puzzles on the Project
> > Euler site -http://projecteuler.net. So far it has not let me down,
> > but it has proved surprisingly slow on one puzzle.
>
> > The puzzle is: p is the perimeter of a right angle triangle with
> > integral length sides, {a,b,c}. which value of p  < 1000, is the
> > number of solutions {a,b,c} maximised?
>
> > Here's my python code:
>
> > #!/usr/local/bin/python
>
> > solutions = [0] * 1001
> > p = 0
>
> > for a in xrange(1, 1000):
> >     for b in xrange(1, 1000 - a):
> >         for c in xrange(1, 1000 - a - b):
> >             p = a + b + c
> >             if p < 1000:
> >                 if a ** 2 + b ** 2 == c ** 2:
> >                     solutions[p] += 1
>
> > max = 0
> > maxIndex = 0
> > index = 0
> > for solution in solutions:
> >     if solution > max:
> >         max = solution
> >         maxIndex = index
> >     index += 1
>
> > print maxIndex
>
> > It takes 2 minutes and twelve seconds on a 2.4GHz Core2Duo MacBook
> > Pro. Surprised at how slow it was I implemented the same algorithm in
> > C:
>
> > #include <stdio.h>
> > #include <stdlib.h>
>
> > int main() {
> >   int* solutions = calloc(1000, sizeof(int));
>
> >   int p;
> >   for(int a = 1; a < 1000; ++a) {
> > for(int b = 1; b < 1000 - a; ++b) {
> >   for(int c = 1; c < 1000 - a - b; ++c) {
> > p = a + b + c;
> > if(p < 1000) {
> >   if(a * a + b * b == c * c) {
> > solutions[p] += 1;
> >   }
> > }
> >   }
> > }
> >   }
>
> >   int max = 0;
> >   int maxIndex = 0;
>
> >   for(int i = 0; i < 1000; ++i) {
> > if(solutions[i] > max) {
> >   max = solutions[i];
> >   maxIndex = i;
> > }
> >   }
> >   printf("%d\n", maxIndex);
> >   return 0;
>
> > }
>
> > gcc -o 39 -std=c99 -O3 39.c
>
> > The resulting executable takes 0.24 seconds to run. I'm not expecting
> > a scripting language to run faster than native code, but I was
> > surprised at how much slower it was in this case. Any ideas as to what
> > is causing python so much trouble in the above code?
>
> from math import sqrt
>
> solutions = [0] * 1001
> p = 0
>
> for a in xrange(1, 1000):
>     a2 = a*a
>     for b in xrange(1, 1000 - a):
>         c = sqrt(a2 + b*b)
>         if c == int(c) and a+b+c < 1000:
>             solutions[a+b+int(c)] += 1
>
> max = 0
> maxIndex = 0
> index = 0
> for solution in solutions:
>     if solution > max:
>         max = solution
>         maxIndex = index
>     index += 1
>
> print maxIndex
>
> --
> Arnaud

For the curious:

    O         O + P     A         A + P
    =======   =======   =======   =======
    2:22.56   0:25.65   0:00.75   0:00.20

O = Original Implementation
P = Psyco (psyco.full())
A = Arnaud's Revised Implementation

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Why does SocketServer default allow_reuse_address = false?

2007-03-07 Thread Greg Copeland
On Feb 26, 5:54 pm, "Joshua J. Kugler" <[EMAIL PROTECTED]> wrote:
> Considering that UNIX Network Programming, Vol 1 (by W. Richard Stevens)
> recommends "_All_ TCP servers should specify [SO_REUSEADDR] to allow the
> server to be restarted [if there are clients connected]," and that
> self.allow_reuse_address = False makes restarting a server a pain if there
> were connected clients, why does SocketServer default allow_reuse_address
> to False?  It's kind of bemusing to subclass ThreadingTCPServer just to
> change one variable that arguably should have been True in the first place.
>
> Is there some history to this of which I'm not aware?  Is there a good
> reason for it to default to false?
>

Yes, it is there for a good reason.  Security is the primary focus of
that option.  If you enable that option, rogue applications can assume
service processing under a number of server failure conditions.  In
other words, start your rogue, crash the primary service, and you now
have a rogue service running.  Even periodic checks will show the
server is still running.  Under a number of other configurations, it
is also possible for the rogue service to simply start and usurp some
types of IP traffic on certain OSs which would otherwise be delivered
to your real server.

Contrary to the book, blindly enabling SO_REUSEADDR is a very, very
bad idea unless you completely understand the problem domain.  I'm
sure Stevens' does understand so it makes for a good choice for him.
On the other hand, most people don't understand the implications so it
makes for a very, very poor move from a security perspective.

Long story short, it is not a bug.  It is a feature.  The proper
default is that of the OS, which is to leave SO_REUSEADDR disabled
unless you absolutely understand what you're buying by enabling it.
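If, after weighing that, you do want Stevens' behavior, the opt-in is a
single class attribute (the module is SocketServer on Python 2,
socketserver on Python 3); the handler class here is just a stand-in:

```python
import socket
import socketserver  # "SocketServer" on Python 2

class EchoHandler(socketserver.BaseRequestHandler):
    def handle(self):
        self.request.sendall(self.request.recv(1024))

class ReusableTCPServer(socketserver.ThreadingTCPServer):
    # Deliberate opt-in: sets SO_REUSEADDR before bind() so a restarted
    # server can rebind while old connections linger in TIME_WAIT.
    allow_reuse_address = True

server = ReusableTCPServer(("127.0.0.1", 0), EchoHandler)
reuse = server.socket.getsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR)
print(server.server_address, bool(reuse))
server.server_close()
```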


Greg

-- 
http://mail.python.org/mailman/listinfo/python-list


SQLAlchemy and Oracle Functions?

2007-03-08 Thread Greg Copeland
I have a need to call an Oracle function, which is not the same thing
as a stored procedure.  Can SQLAlchemy do this directly?  Indirectly?
If so, an example would be appreciated.  If not, how do I obtain the
raw cx_Oracle cursor so I can use that directly?

Thanks,

Greg

-- 
http://mail.python.org/mailman/listinfo/python-list


SQLAlchemy and Oracle Functions?

2007-03-08 Thread Greg Copeland
I'm using SQLAlchemy and have a need to call an Oracle function, which
is not the same as a stored procedure.  Can this be done directly or
indirectly with SQLAlchemy?  If so, can someone please provide an
example?  If not, how do I obtain the raw cx_Oracle cursor so I can
use callfunc directly on that?

Thanks,

Greg

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: SQLAlchemy and Oracle Functions?

2007-03-08 Thread Greg Copeland
On Mar 8, 3:35 pm, "Giles Brown" <[EMAIL PROTECTED]> wrote:
http://www.sqlalchemy.org/docs/sqlconstruction.myt#sql_whereclause_fu...
> SQLAlchemy has its own google group
>
> http://groups.google.co.uk/group/sqlalchemy
>
> You could try asking there too.
>
> Giles

Very nice.  That exactly answered my question.  It works!  Also, I
didn't know about the SQLAlchemy group, so I appreciate the heads up.

Thanks,

Greg


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: SQLAlchemy and Oracle Functions?

2007-03-08 Thread Greg Copeland
On 8 Mar, 15:35, "Giles Brown" <[EMAIL PROTECTED]> wrote:
> On 8 Mar, 22:19, "Greg Copeland" <[EMAIL PROTECTED]> wrote:
>
> > I'm using SQLAlchemy and have a need to call an Oracle function; which
> > is not the same as a stored procedure.  Can this be done directory or
> > indirectly with SQLAlchemy?  If so, can someone please provide an
> > example?  If not, how do I obtain the raw cx_Oracle cursor so I can
> > use callfunc directly on that?
>
> > Thanks,
>
> > Greg
>
> http://www.sqlalchemy.org/docs/sqlconstruction.myt#sql_whereclause_fu...
> ?
>
> SQLAlchemy has its own google group
>
> http://groups.google.co.uk/group/sqlalchemy
>
> You could try asking there too.
>
> Giles


I think I spoke too soon!  Are SQL functions which have out arguments
not allowed?  I get:
sqlalchemy.exceptions.SQLError: (DatabaseError) ORA-06572: Function
blah has out arguments.

Seems Google is having problems right now too.  I tried to join, but it
just times out, so I am currently unable to post to the SQLAlchemy
Google group.

Anything special I need to do to call an Oracle function via the func
method when it also has output parameters?


Greg

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Are Lists thread safe?

2007-03-09 Thread Greg Copeland
On Mar 9, 1:03 pm, "abcd" <[EMAIL PROTECTED]> wrote:
> Are lists thread safe?  Or do I have to use a Lock when modifying the
> list (adding, removing, etc)?  Can you point me to some documentation
> on this?
>
> thanks


Yes, there are still some holes which can bite you.  Individual
operations like adding and removing are thread safe, but don't treat
the list as locked between operations unless you specifically do your
own locking.  You still need to be on the lookout for race conditions.
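A typical race is check-then-act on the list; here is a hedged sketch of
the safe version (all names invented for illustration):

```python
import threading

shared = list(range(100))
lock = threading.Lock()

def drain(item):
    # "if item in shared: shared.remove(item)" is a check-then-act race:
    # another thread may remove item between the test and the remove().
    # Holding the lock across both steps makes the pair atomic.
    with lock:
        if item in shared:
            shared.remove(item)

# Two threads target each item; without the lock this can raise ValueError.
threads = [threading.Thread(target=drain, args=(i % 100,)) for i in range(200)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(shared)   # every item removed exactly once
```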

Greg

-- 
http://mail.python.org/mailman/listinfo/python-list


Freezing Python Apps on Linux?

2007-03-20 Thread Greg Copeland
I seem to recall several different applications which can create
standalone binaries for Python on Linux.  I know freeze.py and
cx_Freeze.py exist.  Are these still the preferred methods of creating
a standalone binary out of a Python application on Linux?

Greg

-- 
http://mail.python.org/mailman/listinfo/python-list


freeze.py and dom.minidom

2007-03-20 Thread Greg Copeland
I am attempting to freeze an application which uses the dom.minidom
parser.  When I execute my application, I get an import error of:
ImportError: No module named dom.minidom.  During the freeze process,
I can see:
freezing xml ...
freezing xml.dom ...
freezing xml.dom.NodeFilter ...
freezing xml.dom.domreg ...
freezing xml.dom.expatbuilder ...
freezing xml.dom.minicompat ...
freezing xml.dom.minidom ...
freezing xml.dom.pulldom ...
freezing xml.dom.xmlbuilder ...
freezing xml.parsers ...
freezing xml.parsers.expat ...
freezing xml.sax ...
freezing xml.sax._exceptions ...
freezing xml.sax.expatreader ...
freezing xml.sax.handler ...
freezing xml.sax.saxutils ...
freezing xml.sax.xmlreader ...

And in my target directory, I can see:
$ ls M_xml*
M_xml.c M_xml__dom.o
M_xml__sax___exceptions.o
M_xml__dom.cM_xml__dom__pulldom.c
M_xml__sax__expatreader.c
M_xml__dom__domreg.cM_xml__dom__pulldom.o
M_xml__sax__expatreader.o
M_xml__dom__domreg.oM_xml__dom__xmlbuilder.c
M_xml__sax__handler.c
M_xml__dom__expatbuilder.c  M_xml__dom__xmlbuilder.o
M_xml__sax__handler.o
M_xml__dom__expatbuilder.o  M_xml.oM_xml__sax.o
M_xml__dom__minicompat.cM_xml__parsers.c
M_xml__sax__saxutils.c
M_xml__dom__minicompat.oM_xml__parsers__expat.c
M_xml__sax__saxutils.o
M_xml__dom__minidom.c   M_xml__parsers__expat.o
M_xml__sax__xmlreader.c
M_xml__dom__minidom.o   M_xml__parsers.o
M_xml__sax__xmlreader.o
M_xml__dom__NodeFilter.cM_xml__sax.c
M_xml__dom__NodeFilter.oM_xml__sax___exceptions.c

As you can see, minidom is being frozen and M_xml__dom__minidom.o is
compiled.  Likewise, I can confirm it linked into the application.
What do I need to do to get this to work with a frozen application?

Greg

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: socket.getfqdn deadlock

2007-03-21 Thread Greg Copeland
On Mar 20, 2:23 pm, [EMAIL PROTECTED] wrote:
> Hi,
>
> I am getting deadlocks (backtrace pasted below) after a while at,
> presumably, a socket.getfqdn() call in a child process .
>
> Fwiw: This child process is created as the result of a pyro call to a
> Pyro object.
>
> Any ideas why this is happening?


Are you sure it is not timing out, waiting for a DNS reply?

Greg

-- 
http://mail.python.org/mailman/listinfo/python-list


DBAPI Loss DB Connection

2007-04-02 Thread Greg Copeland
According to the SQLAlchemy list, the DBAPI specification does not
define a standard error-reporting mechanism which would allow for
generic detection of loss of the database connection without
DB-specific exception handling.  For me, this is a requisite for
robust error handling.  This seemingly defeats the purpose of using
DBAPI, since one cannot have a robust DB application without tethering
it to a specific DBAPI DB implementation.

Assuming what I've been told is correct, what is the proper method to
request a revision to the DBAPI specification to specifically address
loss of DB connection and error handling in this regard?


Greg

-- 
http://mail.python.org/mailman/listinfo/python-list


setup.py bdist_rpm help

2007-04-17 Thread Greg Copeland
Okay, I have an application which is frozen via pyinstaller.  That is
all working great.  I now want to create an RPM using distutils'
bdist_rpm facilities.  I seem to be running into trouble.  No matter
what, I only seem to get three files within my RPM (setup.py,
README.txt, and PKG_INFO).

My application ('app') has a configuration file ('x.cfg') and a single
directory ('data') which contains various data files used during
runtime.  Can someone show me an example setup.py which will create an
RPM containing only the following: app, x.cfg, data/*?  Please note
that 'app' is the frozen application and not the normal python script
(app.py).  If it matters, I'm using Python 2.4.4 on Linux.

Thanks!

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: setup.py bdist_rpm help

2007-04-17 Thread Greg Copeland
Ahh.  I figured it out.  I resolved the issue by using a MANIFEST.in
file.
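For anyone finding this thread later: the MANIFEST.in was not posted, but
given the layout described in the original message (app, x.cfg, data/*) it
would have looked roughly like this -- exact directives may need adjusting:

```
include app
include x.cfg
recursive-include data *
```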

Greg


On Apr 17, 1:19 pm, Greg Copeland <[EMAIL PROTECTED]> wrote:
> Okay, I have an application which is frozen via pyinstaller.  That is
> all working great.  I now want to create an RPM using distutils'
> bdist_rpm facilities.  I seem to be running into trouble.  No matter
> what, I only seem to get three files within my RPM (setup.py,
> README.txt, and PKG_INFO).
>
> My application ('app') has a configuration file ('x.cfg') and a single
> directory ('data') which contains various data files used during
> runtime.  Can someone show me an example setup.py which will create an
> RPM containing only the following: app, x.cfg, data/*?  Please note
> that 'app' is the frozen application and not the normal python script
> (app.py).  If it matters, I'm using Python 2.4.4 on Linux.
>
> Thanks!


-- 
http://mail.python.org/mailman/listinfo/python-list


C or C++ with Pyparsing?

2008-11-14 Thread Greg Copeland
Anyone have a pyparsing file for parsing C/C++ they are willing to
share?
--
http://mail.python.org/mailman/listinfo/python-list


Re: Determine the best buffer sizes when using socket.send() and socket.recv()

2008-11-14 Thread Greg Copeland
On Nov 14, 9:56 am, "Giampaolo Rodola'" <[EMAIL PROTECTED]> wrote:
> Hi,
> I'd like to know if there's a way to determine which is the best
> buffer size to use when you have to send() and recv() some data over
> the network.
> I have an FTP server application which, on data channel, uses 8192
> bytes as buffer for both incoming and outgoing data.
> Some time ago I received a report from a guy [1] who stated that
> changing the buffers from 8192 to 4096 results in a drastical speed
> improvement.
> I tried to make some tests by using different buffer sizes, from 4 Kb
> to 256 Kb, but I'm not sure which one to use as default in my
> application since I noticed they can vary from different OSes.
> Is there a recommended way to determine the best buffer size to use?
>
> Thanks in advance
>
> [1]http://groups.google.com/group/pyftpdlib/browse_thread/thread/f13a82b...
>
> --- Giampaolohttp://code.google.com/p/pyftpdlib/

As you stated, the answer is obviously OS/stack dependent. Regardless,
I believe you'll likely find the best answer is between 16K-64K. Once
you consider the various TCP stack improvements which are now
available and the rapid increase of available bandwidth, you'll likely
want to use the largest buffers which do not impose scalability issues
for your system/application. Unless you have reason to use a smaller
buffer, use 64K buffers and be done with it. This helps minimize the
number of context switches and helps ensure the stack always has data
to keep pumping.

To look at it another way, using 64k buffers requires 1/8 the number
of system calls and less time actually spent in python code.
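For what it's worth, the OS defaults are easy to inspect from Python. A
quick sketch (the values printed are platform-dependent, and the 64K
request is just the suggestion from this post):

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# Ask the OS what its current kernel buffer sizes are.
print("SO_RCVBUF:", sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))
print("SO_SNDBUF:", sock.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF))

# Request 64K kernel buffers; the kernel may round, double, or cap these.
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 64 * 1024)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 64 * 1024)

BUFSIZE = 64 * 1024  # size passed to send()/recv() at the application level
```

Note the kernel buffer (SO_RCVBUF/SO_SNDBUF) and the buffer size you pass
to recv() are different knobs; the advice above is about the latter.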

If, as you say, someone actually observed a performance improvement when
changing from 8k buffers to 4k buffers, it likely has something to do
with Python's buffer allocation overhead, though even that seems contrary
to my expectation. The referenced article was not available to me, so I
was not able to follow and read it.

Another possibility is that 4k buffers require less fragmentation and are
likely to perform better on lossy connections. Is it possible he/she
was testing on a highly lossy connection? In short, performance-wise,
TCP stinks on lossy connections.
--
http://mail.python.org/mailman/listinfo/python-list


Re: Determine the best buffer sizes when using socket.send() and socket.recv()

2008-11-14 Thread Greg Copeland
On Nov 14, 1:58 pm, "Giampaolo Rodola'" <[EMAIL PROTECTED]> wrote:
> On Nov 14, 5:27 pm, Greg Copeland <[EMAIL PROTECTED]> wrote:
>
>
>
> > On Nov 14, 9:56 am, "Giampaolo Rodola'" <[EMAIL PROTECTED]> wrote:
>
> > > Hi,
> > > I'd like to know if there's a way to determine which is the best
> > > buffer size to use when you have to send() and recv() some data over
> > > the network.
> > > I have an FTP server application which, on data channel, uses 8192
> > > bytes as buffer for both incoming and outgoing data.
> > > Some time ago I received a report from a guy [1] who stated that
> > > changing the buffers from 8192 to 4096 results in a drastical speed
> > > improvement.
> > > I tried to make some tests by using different buffer sizes, from 4 Kb
> > > to 256 Kb, but I'm not sure which one to use as default in my
> > > application since I noticed they can vary from different OSes.
> > > Is there a recommended way to determine the best buffer size to use?
>
> > > Thanks in advance
>
> > > [1]http://groups.google.com/group/pyftpdlib/browse_thread/thread/f13a82b...
>
> > > --- Giampaolohttp://code.google.com/p/pyftpdlib/
>
> > As you stated, the answer is obviously OS/stack dependant. Regardless,
> > I believe you'll likely find the best answer is between 16K-64K. Once
> > you consider the various TCP stack improvements which are now
> > available and the rapid increase of available bandwidth, you'll likely
> > want to use the largest buffers which do not impose scalability issues
> > for your system/application. Unless you have reason to use a smaller
> > buffer, use 64K buffers and be done with it. This helps minimize the
> > number of context switches and helps ensure the stack always has data
> > to keep pumping.
>
> > To look at it another way, using 64k buffers requires 1/8 the number
> > of system calls and less time actually spent in python code.
>
> > If as you say someone actually observed a performance improvement when
> > changing from 8k buffers to 4k buffers, it likely has something to do
> > with python's buffer allocation overhead but even that seems contrary
> > to my expectation. The referenced article was not available to me so I
> > was not able to follow and read.
>
> > Another possibility is 4k buffers require less fragmentation and is
> > likely to perform better on lossy connections. Is it possible he/she
> > was testing on a high lossy connection? In short, performance wise,
> > TCP stinks on lossy connections.
>
> Thanks for the precious advices.
> The discussion I was talking about is this one (sorry for the broken
> link, I didn't notice 
> that):http://groups.google.com/group/pyftpdlib/browse_thread/thread/f13a82b...
>
> --- Giampaolohttp://code.google.com/p/pyftpdlib/

I read the provided link. There really isn't enough information to
explain what he observed. It is safe to say, his report is contrary to
common performance expectations and my own experience. Since he also
reported large swings in bandwidth far below his potential max, I'm
inclined to say he was suffering from some type of network
abnormality. To be clear, that's just a guess. For all we know some
script kiddie was attempting to scan/hack his system at that given
time - or any number of other variables. One can only be left making
wild assumptions about his operating environment and it's not even
clear if his results are reproducible. Lastly, keep in mind, many
people do not know how to properly benchmark simple applications, let
alone accurately measure bandwidth.

Keep in mind, python can typically saturate a 10Mb link even on fairly
low end systems so it's not likely your application was his problem.
For now, use large buffers unless you can prove otherwise.
--
http://mail.python.org/mailman/listinfo/python-list


Re: Problem with writing fast UDP server

2008-11-20 Thread Greg Copeland
On Nov 20, 9:03 am, Krzysztof Retel <[EMAIL PROTECTED]>
wrote:
> Hi guys,
>
> I am struggling writing fast UDP server. It has to handle around 10,000
> UDP packets per second. I started building that with non blocking
> socket and threads. Unfortunately my approach does not work at all.
> I wrote a simple case test: client and server. The client sends 2200
> packets within 0.137447118759 secs. The tcpdump received 2189 packets,
> which is not bad at all.
> But the server only handles 700 -- 870 packets, when it is non-
> blocking, and only 670 – 700 received with blocking sockets.
> The client and the server are working within the same local network
> and tcpdump shows pretty correct amount of packets received.
>
> I included a bit of the code of the UDP server.
>
> class PacketReceive(threading.Thread):
>     def __init__(self, tname, socket, queue):
>         self._tname = tname
>         self._socket = socket
>         self._queue = queue
>         threading.Thread.__init__(self, name=self._tname)
>
>     def run(self):
>         print 'Started thread: ', self.getName()
>         cnt = 1
>         cnt_msgs = 0
>         while True:
>             try:
>                 data = self._socket.recv(512)
>                 msg = data
>                 cnt_msgs += 1
>                 total += 1
>                 # self._queue.put(msg)
>                 print  'thread: %s, cnt_msgs: %d' % (self.getName(),
> cnt_msgs)
>             except:
>                 pass
>
> I was also using Queue, but this didn't help neither.
> Any idea what I am doing wrong?
>
> I was reading that Python socket modules was causing some delays with
> TCP server. They recomended to set up  socket option for nondelays:
> "sock.setsockopt(SOL_TCP, TCP_NODELAY, 1) ". I couldn't find any
> similar option for UDP type sockets.
> Is there anything I have to change in socket options to make it
> working faster?
> Why the server can't process all incomming packets? Is there a bug in
> the socket layer? btw. I am using Python 2.5 on Ubuntu 8.10.
>
> Cheers
> K

First and foremost, you are not being realistic here. Attempting to
squeeze 10,000 packets per second out of 10Mb/s (assumed) Ethernet is
not realistic. The maximum theoretical limit is 14,880 frames per
second, and that assumes each frame is only 84 bytes, making it useless
for data transport. Using your numbers, each frame requires (90B + 84B)
174B, which works out to a theoretical maximum of ~7,200 frames per
second. These are obviously rough numbers, but I believe you get the
point. It's late here, so I'll double-check my numbers tomorrow.
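The arithmetic behind those figures can be sketched directly. The 84 bytes
is the assumed minimum per-frame cost on Ethernet (preamble, headers, FCS
and inter-frame gap); the 90B payload comes from the original post:

```python
LINK_BPS = 10000000                  # assumed 10 Mb/s Ethernet
BYTES_PER_SEC = LINK_BPS // 8        # 1,250,000 B/s on the wire

FRAME_OVERHEAD = 84                  # minimum per-frame cost in bytes
PAYLOAD = 90                         # datagram size from the original post

# Absolute ceiling: nothing but minimum-size frames on the wire.
max_frames = BYTES_PER_SEC // FRAME_OVERHEAD              # 14,880 frames/s

# With a 90-byte payload riding in each frame.
usable_frames = BYTES_PER_SEC // (FRAME_OVERHEAD + PAYLOAD)  # ~7,200 frames/s

print(max_frames, usable_frames)
```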

In your case, you would not want to use TCP_NODELAY, even if you were
to use TCP, as it would actually limit your throughput. UDP does not
have such an option because each datagram is an ethernet frame - which
is not true for TCP as TCP is a stream. In this case, use of TCP may
significantly reduce the number of frames required for transport -
assuming TCP_NODELAY is NOT used. If you want to increase your
throughput, use larger datagrams. If you are on a reliable connection,
which we can safely assume since you are currently using UDP, use of
TCP without the use of TCP_NODELAY may yield better performance
because of its buffering strategy.

Assuming you are using 10Mb ethernet, you are nearing its frame-
saturation limits. If you are using 100Mb ethernet, you'll obviously
have a lot more elbow room, but not nearly as much as one would hope,
because 100Mb is only possible when frames are completely filled. It's
been a while since I last looked at 100Mb numbers, but it's not likely
most people will see numbers near its theoretical limits simply because
that number has so many caveats associated with it - and small frames
are its nemesis. Since you are using very small datagrams, you are
wasting a lot of potential throughput. And if you have other computers
on your network, the situation is made yet more difficult.
Additionally, many switches and/or routers also have bandwidth limits
which may or may not pose a wall for your application. And to make
matters worse, you are allocating 4K buffers to send/receive 90 bytes
of data, creating yet more work for your computer.

Options to try:
- See how TCP measures up for you.
- Attempt to place multiple data objects within a single datagram,
  thereby optimizing available ethernet bandwidth.
- You didn't say if you are CPU-bound, but you are creating a tuple and
  appending it to a list on every datagram. You may find allocating
  smaller buffers and optimizing your history accounting may help if
  you're CPU-bound.
- Don't forget, localhost does not suffer from frame limits - it's
  basically testing your memory/bus speed.
- If this is for local use only, consider using a different IPC
  mechanism - unix domain sockets or memory mapped files.
--
http://mail.python.org/mailman/listinfo/python-list


Re: Problem with writing fast UDP server

2008-11-21 Thread Greg Copeland
On Nov 21, 11:05 am, Krzysztof Retel <[EMAIL PROTECTED]>
wrote:
> On Nov 21, 4:48 pm, Peter Pearson <[EMAIL PROTECTED]> wrote:
>
> > On Fri, 21 Nov 2008 08:14:19 -0800 (PST), Krzysztof Retel wrote:
> > > I am not sure what do you mean by CPU-bound? How can I find out if I
> > > run it on CPU-bound?
>
> > CPU-bound is the state in which performance is limited by the
> > availability of processor cycles.  On a Unix box, you might
> > run the "top" utility and look to see whether the "%CPU" figure
> > indicates 100% CPU use.  Alternatively, you might have a
> > tool for plotting use of system resources.
>
> > --
> > To email me, substitute nowhere->spamcop, invalid->net.
>
> Thanks. I run it without CPU-bound

With clearer eyes, I did confirm my math above is correct. I don't
have a networking reference to provide. You'll likely have some good
results via Google. :)

If you are not CPU-bound, you are likely IO-bound. That means your
computer is waiting for IO to complete - likely on the sending side.
In this case, it likely means you have reached your ethernet bandwidth
limits available to your computer. Since you didn't correct me when I
assumed you're running 10Mb ethernet, I'll continue to assume that's a
safe assumption. So, assuming you are running on 10Mb ethernet, try
converting your application to use TCP. I'd bet, unless you have
requirements which prevent its use, you'll suddenly have enough
bandwidth (in this case, frames) to achieve your desired results.

This is untested and off the top of my head but it should get you
pointed in the right direction pretty quickly. Make the following
changes to the server:

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
 to
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

Make this:
print "Waiting for first packet to arrive...",
sock.recvfrom(BUFSIZE)

look like:
print "Waiting for first packet to arrive...",
cliSock, addr = sock.accept()

Change your calls to sock.recvfrom(BUFSIZE) to cliSock.recv(BUFSIZE).
Notice the change to "cliSock".

Keep in mind TCP is stream based, not datagram based so you may need
to add additional logic to determine data boundaries for re-assemble
of your data on the receiving end. There are several strategies to
address that, but for now I'll gloss it over.

As someone else pointed out above, change your calls to time.clock()
to time.time().

On your client, make the following changes.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
 to
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect( (remotehost,port) )

nbytes = sock.sendto(data, (remotehost,port))
 to
nbytes = sock.send(data)
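Pulling those edits together, a minimal sketch of the TCP versions might
look like the following. BUFSIZE, remotehost and port are carried over
from the original UDP script; message framing is still glossed over:

```python
import socket

BUFSIZE = 4096

def run_server(host="", port=9000):
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind((host, port))
    sock.listen(1)
    cliSock, addr = sock.accept()      # accept() replaces the first recvfrom()
    chunks = []
    while True:
        data = cliSock.recv(BUFSIZE)   # recv() on the accepted connection
        if not data:                   # an empty read means the peer closed
            break
        chunks.append(data)
    cliSock.close()
    sock.close()
    return b"".join(chunks)            # TCP is a stream, not datagrams

def run_client(remotehost, port, data):
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.connect((remotehost, port))
    nbytes = sock.send(data)           # send() replaces sendto()
    sock.close()
    return nbytes
```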

Now, rerun your tests on your network. I expect you'll be faster now
because TCP can be pretty smart about buffering. Let's say you write
sixteen 90B blocks to the socket. If they are timely enough, it is
possible all of those will be shipped across ethernet as a single
frame. So what took 16 frames via UDP can now *potentially* be done in
a single ethernet frame (assuming a 1500B MTU). I say potentially because
the exact behaviour is OS/stack and NIC-driver specific and is often
tunable to boot. Likewise, on the client end, what previously required
16 calls to recvfrom, each returning 90B, can *potentially* be
completed in a single call to recv, returning 1440B. Remember, fewer
frames means less protocol overhead, which makes more bandwidth
available to your applications. When sending 90B datagrams, you're
wasting over 48% of your available bandwidth because of protocol
overhead (actually a lot more because I'm not accounting for UDP
headers).

Because of the differences between UDP and TCP, unlike your original
UDP implementation which can receive from multiple clients, the TCP
implementation can only receive from a single client. If you need to
receive from multiple clients concurrently, look at python's select
module to take up the slack.
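A minimal select()-based sketch of that multi-client variant follows. The
function name and its parameters are my own for illustration; it collects
each client's data and returns once max_clients peers have disconnected:

```python
import select
import socket

def serve_until(port, max_clients, bufsize=4096):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", port))
    srv.listen(5)
    readers = [srv]          # sockets select() should watch for readability
    chunks = {}              # per-connection received data
    finished = []            # payloads from clients that have disconnected
    while len(finished) < max_clients:
        ready, _, _ = select.select(readers, [], [])
        for s in ready:
            if s is srv:                      # new connection pending
                conn, _addr = srv.accept()
                readers.append(conn)
                chunks[conn] = []
            else:
                data = s.recv(bufsize)
                if data:                      # normal traffic
                    chunks[s].append(data)
                else:                         # peer closed the connection
                    finished.append(b"".join(chunks.pop(s)))
                    readers.remove(s)
                    s.close()
    srv.close()
    return finished
```

A production server would also need error handling and, again, framing;
this only shows the select() bookkeeping.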

Hopefully you'll be up and running. Please report back your findings.
I'm curious as to your results.
--
http://mail.python.org/mailman/listinfo/python-list