Re: editor recommendations?

2021-03-02 Thread Russell
Ethan Furman  wrote:
> I'm currently using vim, and the primary reason I've stuck with it for so 
> long is because I can get truly black screens with it.  By which I mean that 
> I have a colorful window title bar, a light-grey menu bar, and then a 
> light-grey frame around the text-editing window (aka the only window), and a 
> nice, black-background editing area.

I use vim. It's actually extremely powerful, especially for text/code
editing. I'd recommend reading one of the many books on using vim
effectively. Also, plugins can really add a lot...

-- 
rust
0x68caecc97f6a90122e51c0692c88d9cb6b58a3dc
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: editor recommendations?

2021-03-07 Thread Russell
Dan Stromberg  wrote:
> On Tue, Mar 2, 2021 at 8:11 PM Dan Stromberg  wrote:
> 
>>
>> On Tue, Mar 2, 2021 at 8:00 PM Russell  wrote:
>>
>>> Ethan Furman  wrote:
>>> > I'm currently using vim, and the primary reason I've stuck with it for
>>> so long is because I can get truly black screens with it.  By which I mean
>>> that I have a colorful window title bar, a light-grey menu bar, and then a
>>> light-grey frame around the text-editing window (aka the only window), and
>>> a nice, black-background editing area.
>>>
>>> I use vim. It's actually extremely powerful, especially for text/code
>>> editing. I'd recommend reading one of the many books on using vim
>>> effectively. Also, plugins can really add a lot...
>>>
>>
>> On the subject of learning vim: There's an excellent vi cheat sheet
>> available on the internet.  I've put a copy of it at
>> https://stromberg.dnsalias.org/~strombrg/vi.ref.6
>>
>> vi is of course the predecessor of vim. But that cheat sheet is still
>> great for learning much of vim.
>>
> 
> I just ran across:  http://lib.ru/MAN/viref.txt
> ...which is pretty much the same thing, but converted to nice HTML.

To that end, vim also has extensive documentation built in. Just type
:help to get started. There's a pretty good tutorial accessible from the
main help screen. 

And I'll stop talking about vim in the Python group now, I promise. :)

-- 
rust
0x68caecc97f6a90122e51c0692c88d9cb6b58a3dc
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: editor recommendations?

2021-03-08 Thread Russell
Cameron Simpson  wrote:
 
>>-- Emacs outshines all other editing software in approximately the same
>>way that the noonday sun does the stars. It is not just bigger and
>>brighter; it simply makes everything else vanish.  -- Neal Stephenson

Neal Stephenson's book Cryptonomicon was the reason I became borderline
obsessed with Emacs 20 years ago. I've only come to my senses in about
the last 5 years, when I started using vim. lol!

Admittedly, Normal Mode takes some getting used to. I remember a quote
from years ago that went something like, "Vi has two modes, edit and
beep."

> 
> A novice of the temple once approached the Chief Priest with a question.
> 
>   "Master, does Emacs have the Buddha nature?" the novice asked.
> 
>   The Chief Priest had been in the temple for many years and could be relied
>   upon to know these things.  He thought for several minutes before replying.
> 
>   "I don't see why not.  It's got bloody well everything else."
> 
>   With that, the Chief Priest went to lunch.  The novice suddenly achieved
> enlightenment, several years later.
> 
> Commentary:
> 
> His Master is kind,
> Answering his FAQ quickly,
> With thought and sarcasm.
> 
> Cheers,
> Cameron Simpson 

This is awesome!

-- 
rust
0x68caecc97f6a90122e51c0692c88d9cb6b58a3dc
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: for installation of pygames.

2021-05-07 Thread Russell
mishrasamir2...@gmail.com wrote:
>Sir/madam,
> 
>I'm a user of python , so I'm requesting you to give me permission to run
>pygames  .
> 
>Thankyou
> 
>Samir Mishra
> 
> 
> 
> 
> 
>Sent from [1]Mail for Windows
> 
> References
> 
>Visible links
>1. https://go.microsoft.com/fwlink/?LinkId=550986

You have my permission.
-- 
rust
0x68caecc97f6a90122e51c0692c88d9cb6b58a3dc
-- 
https://mail.python.org/mailman/listinfo/python-list


ctypes.cdll.LoadLibrary() freezes when loading a .so that contains dlopen()

2015-02-13 Thread Russell
I have a shared library, libfoo.so, that references another .so which isn't
linked but instead loaded at runtime with
myso=dlopen("/usr/local/lib/libbar.so", RTLD_NOW);. When I try to load it with
ctypes, the call hangs and I have to Ctrl-C.

(build)[dev]$ export LD_LIBRARY_PATH=/usr/local/bin
(build)[dev]$ python
Python 2.7.6 (default, Mar 22 2014, 22:59:56)
[GCC 4.8.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import ctypes
>>> ctypes.cdll.LoadLibrary ('/usr/local/lib/libfoo.so')   <--- This call hangs and I have to Ctrl-C
^CTraceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python2.7/ctypes/__init__.py", line 443, in LoadLibrary
    return self._dlltype(name)
  File "/usr/lib/python2.7/ctypes/__init__.py", line 365, in __init__
    self._handle = _dlopen(self._name, mode)
KeyboardInterrupt
>>>


My first thought was that it couldn't find libbar.so, but if I remove that
file, python seg faults ("/usr/local/lib/libbar.so: cannot open shared object
file: No such file or directory" followed by "Segmentation fault (core
dumped)"), so it appears that it finds the dlopen() file but freezes waiting
for something.

This is on Ubuntu 14.04 server. C code is compiled with -std=gnu11 -Wall
-Werror -m64 -march=x86-64 -mavx -g -fPIC

I also get the same reaction on python3.4

Thanks in advance
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: ctypes.cdll.LoadLibrary() freezes when loading a .so that contains dlopen()

2015-02-13 Thread Russell
On Friday, February 13, 2015 at 7:27:54 PM UTC-6, Ian wrote:
> On Fri, Feb 13, 2015 at 8:39 AM, Russell  wrote:
> > I have a shared library, libfoo.so, that references another .so which isn't 
> > linked but instead loaded at runtime with 
> > myso=dlopen("/usr/local/lib/libbar.so", RTLD_NOW); when I try to load it 
> > with ctypes, the call hangs and I have to ctl-c.
> >
> > (build)[dev]$ export LD_LIBRARY_PATH=/usr/local/bin
> > (build)[dev]$ python
> > Python 2.7.6 (default, Mar 22 2014, 22:59:56)
> > [GCC 4.8.2] on linux2
> > Type "help", "copyright", "credits" or "license" for more information.
> >>>> import ctypes
> >>>> ctypes.cdll.LoadLibrary ('/usr/local/lib/libfoo.so')   <--- This call hangs and I have to Ctrl-C
> > ^CTraceback (most recent call last):
> >   File "<stdin>", line 1, in <module>
> >   File "/usr/lib/python2.7/ctypes/__init__.py", line 443, in LoadLibrary
> >return self._dlltype(name)
> >   File "/usr/lib/python2.7/ctypes/__init__.py", line 365, in __init__
> > self._handle = _dlopen(self._name, mode)
> > KeyboardInterrupt
> >>>>
> >
> >
> > My first thought was that It couldn't find libbar.so, but if I remove that 
> > file, python seg faults: /usr/local/lib/libbar.so: cannot open shared 
> > object file: No such file or directorySegmentation fault (core dumped), so 
> > it appears that it finds the dlopen() file but freezes waiting for ???
> >
> > This is on ubuntu 14.4 server. C code is compiled with -std=gnu11 -Wall 
> > -Werror -m64 -march=x86-64 -mavx -g -fPIC
> 
> This seems to work for me (Mint 17).
> 
> 
> $ cat a.c
> #include <stdio.h>
> #include <dlfcn.h>
> 
> void _init() {
>   printf("a.so._init\n");
>   dlopen("./b.so", RTLD_NOW);
> }
> $ cat b.c
> #include <stdio.h>
> 
> void _init() {
>   printf("b.so._init\n");
> }
> $ gcc -std=gnu11 -Wall -Werror -m64 -march=x86-64 -mavx -g -fPIC -shared -nostartfiles -o a.so a.c
> $ gcc -std=gnu11 -Wall -Werror -m64 -march=x86-64 -mavx -g -fPIC -shared -nostartfiles -o b.so b.c
> $ python
> Python 2.7.6 (default, Mar 22 2014, 22:59:56)
> [GCC 4.8.2] on linux2
> Type "help", "copyright", "credits" or "license" for more information.
> >>> import ctypes
> >>> ctypes.cdll.LoadLibrary('./a.so')
> a.so._init
> b.so._init
> 
> 
> 
> Can you boil down the issue that you're seeing into a minimal
> reproducible example?


Thanks Ian, that's a good way to examine my problem.
Your code worked for me also.  I've taken your code and expanded it to be more 
representative of my issue.  One twist that i found was that I'm calling back 
into a.so from b.so.   I've captured that in the following code.  Note that 
this works for me
without error!!  There must be something else in my libs, that is causing this. 
 I'm still working to find my issue but I'm just replying to thank you and give 
anyone an update  :)


(build)[w]$ python
Python 2.7.6 (default, Mar 22 2014, 22:59:56)
[GCC 4.8.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import ctypes
>>> ctypes.cdll.LoadLibrary('./a.so')
a.so._init
b.so._init
b.so recived: called from a.so
in a_function

>>>



[w]$ cat a.c
#include <stdio.h>
#include <stdlib.h>
#include <dlfcn.h>


void (*mymessage)(char *message);
void *myso;

__attribute__((constructor)) void Init(void) {
  printf("a.so._init\n");

  char* error;
  myso = dlopen("./b.so", RTLD_NOW);
  if (!myso) {
 fputs (dlerror(), stderr);
 exit(1);
  }
  mymessage = dlsym(myso, "printMessage");
  if ((error = dlerror()) != NULL)  {
  fputs(error, stderr);
  exit(1);
  }
  mymessage("called from a.so");
}

void a_function(void) {
  printf("in a_function\n");
}

__attribute__((destructor)) void Close(void) {
  dlclose(myso);
}

[w]$ cat b.c
#include <stdio.h>
extern void a_function(void);

__attribute__((constructor)) void Init(void) {
  printf("b.so._init\n");
}

void printMessage(char *message) {
  printf("b.so recived: %s\n", message);
  a_function();
}

__attribute__((destructor)) void Close(void) {

}


[w]$ cat build
/usr/bin/gcc -o ./a.os -c -std=gnu11 -Wall -Werror -Wa,-ahl=./a.os.s -m64 -march=x86-64 -mavx -g -DDEBUG -fPIC a.c
/usr/bin/gcc -o ./a.so -Wl,--export-dynamic -shared -Wl,-rpath=./ -ldl ./a.os
/usr/bin/gcc -o ./b.os -c -std=gnu11 -Wall -Werror -Wa,-ahl=./b.os.s -m64 -march=x86-64 -mavx -g -DDEBUG -fPIC b.c
/usr/bin/gcc -o ./b.so -Wl,--export-dynamic -shared -Wl,-rpath=./  b.os a.so
-- 
https://mail.python.org/mailman/listinfo/python-list


How do I dynamically create functions without lambda?

2006-01-27 Thread Russell
I want my code to be Python 3000 compliant, and hear
that lambda is being eliminated. The problem is that I
want to partially bind an existing function with a value
"foo" that isn't known until run-time:

   someobject.newfunc = lambda x: f(foo, x)

The reason a nested function doesn't work for this is
that it is, well, dynamic. I don't know how many times
or with what foo's this will be done.

Now, I am sure there are a half-dozen ways to do this.
I just want the one, new and shiny, Pythonic way. ;-)
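
For what it's worth, the usual lambda-free answer to this is
functools.partial (added in Python 2.5), which binds foo when you create
the callable. A minimal sketch (the f and foo here are stand-ins for the
ones in the post):

```python
import functools

def f(foo, x):
    # stand-in for the real f(foo, x) being partially bound
    return (foo, x)

foo = "runtime-value"  # not known until run time
newfunc = functools.partial(f, foo)  # binds foo now; x is supplied later
print(newfunc(1))  # ('runtime-value', 1)
```

Each call to functools.partial makes an independent callable, so it can be
done any number of times with different foo's.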

-- 
http://mail.python.org/mailman/listinfo/python-list


Question from a python newbie

2007-12-13 Thread Russell
I've been learning Python slowly for a few months, coming from a
C/C++, C#, Java, PHP background.  I ran across a code fragment I'm having
trouble wrapping my brain around.  I've searched the Language
Reference and was not able to find any info regarding the structure of
this code fragment:

int(text) if text.isdigit() else text

It is part of a larger lambda statement.  I do get the lambda
declaration, but having trouble with what is happening in that
fragment.  Here is the full lambda statement:

convert = lambda text: int(text) if text.isdigit() else text

Thanks for any help you can provide explaining this to me.
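
(That fragment is Python's conditional expression, new in 2.5. A small
sketch of the same thing written as a plain function, which may read more
naturally coming from C-family ternaries:)

```python
def convert(text):
    # Equivalent to: convert = lambda text: int(text) if text.isdigit() else text
    # Reads as: "int(text), if text.isdigit() is true; otherwise text".
    return int(text) if text.isdigit() else text

print(convert("42"))   # 42 (an int)
print(convert("abc"))  # abc (the string, unchanged)
```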
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Question from a python newbie

2007-12-13 Thread Russell
I suspected it was a ternary type of operator, but was unable to
confirm it.  And I didn't realize it was new to 2.5.  Perfectly clear
now. :)

Thanks!
-- 
http://mail.python.org/mailman/listinfo/python-list


How does a generator object refer to itself?

2006-05-22 Thread Russell
This is more a Python 2.5 question, since it is the send()
method that makes this so useful. The issue is how to write
a generator that refers to its own generator object. This
would be useful when passing control to some other function
or generator that is expected to return control via a send():

def me():
    ...
    nextVal = yield you(me.send)  # This is wrong!

That almost looks right, except that "me" isn't really the
generator object that is executing, it is the function that
produces the generator object. It seems somewhere I read
that some keyword ("generator"?) would work in this context,
but now I can't find where I read that. Maybe I imagined it.
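
As far as I know there is no such keyword. One workaround (a sketch in
modern syntax; on 2.5 you would write gen.next() instead of next(gen)) is
a decorator that primes the generator and sends it its own object through
the first yield:

```python
def self_aware(genfunc):
    """Prime a generator and hand it a reference to itself."""
    def wrapper(*args, **kwargs):
        gen = genfunc(*args, **kwargs)
        next(gen)       # advance to the first yield
        gen.send(gen)   # the first yield expression evaluates to gen itself
        return gen
    return wrapper

@self_aware
def me(seen):
    self = yield        # receives the generator object itself
    seen.append(self)   # e.g. pass self.send to some other function here
    yield               # suspend so the wrapper's send() returns

seen = []
gen = me(seen)
print(seen[0] is gen)  # True
```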

Thanks!

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How does a generator object refer to itself?

2006-05-26 Thread Russell
Michael wrote:
> You don't need python 2.5 at all to do this. You do need to
> have a token mutable first argument though, as you can see.

Thank you. That's a pattern similar to one we're using, where
a new object refers to the generator. The problem we're seeing
is that it seems to fool the garbage collector. We're not
positive about that. But we are suspicious.

Interesting page you have. I've bookmarked it. Thanks, again.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How does a generator object refer to itself?

2006-05-26 Thread Russell
> Why don't you use a class ?

Because we use this pattern for thousands of functions,
and don't want thousands of new classes. Right now
we use a single class that creates an instance for each
such generator. I was hoping to find a way to get even
more lightweight than that. :-)

-- 
http://mail.python.org/mailman/listinfo/python-list


Beautiful Soup Table Parsing

2012-08-08 Thread Tom Russell
I am parsing out a web page at
http://online.wsj.com/mdc/public/page/2_3021-tradingdiary2.html?mod=mdc_pastcalendar
using BeautifulSoup.

My problem is that I can parse into the table where the data I want
resides but I cannot seem to figure out how to go about grabbing the
contents of the cell next to my row header I want.

For instance this code below:

soup = BeautifulSoup(urlopen('http://online.wsj.com/mdc/public/page/2_3021-tradingdiary2.html?mod=mdc_pastcalendar'))

table = soup.find("table", {"class": "mdcTable"})
for row in table.findAll("tr"):
    for cell in row.findAll("td"):
        print cell.findAll(text=True)

brings in a list that looks like this:

[u'NYSE']
[u'Latest close']
[u'Previous close']
[u'Week ago']
[u'Issues traded']
[u'3,114']
[u'3,136']
[u'3,134']
[u'Advances']
[u'1,529']
[u'1,959']
[u'1,142']
[u'Declines']
[u'1,473']
[u'1,070']
[u'1,881']
[u'Unchanged']
[u'112']
[u'107']
[u'111']
[u'New highs']
[u'141']
[u'202']
[u'222']
[u'New lows']
[u'15']
[u'11']
[u'42']
[u'Adv. volume*']
[u'375,422,072']
[u'502,402,887']
[u'345,372,893']
[u'Decl. volume*']
[u'245,106,870']
[u'216,507,612']
[u'661,578,907']
[u'Total volume*']
[u'637,047,653']
[u'728,170,765']
[u'1,027,754,710']
[u'Closing tick']
[u'+131']
[u'+102']
[u'-505']
[u'Closing Arms (TRIN)\x86']
[u'0.62']
[u'0.77']
[u'1.20']
[u'Block trades*']
[u'3,874']
[u'4,106']
[u'4,463']
[u'Adv. volume']
[u'1,920,440,454']
[u'2,541,919,125']
[u'1,425,279,645']
[u'Decl. volume']
[u'1,149,672,387']
[u'1,063,007,504']
[u'2,812,073,564']
[u'Total volume']
[u'3,186,154,537']
[u'3,643,871,536']
[u'4,322,541,539']
[u'Nasdaq']
[u'Latest close']
[u'Previous close']
[u'Week ago']
[u'Issues traded']
[u'2,607']
[u'2,604']
[u'2,554']
[u'Advances']
[u'1,085']
[u'1,596']
[u'633']
[u'Declines']
[u'1,390']
[u'880']
[u'1,814']
[u'Unchanged']
[u'132']
[u'128']
[u'107']
[u'New highs']
[u'67']
[u'87']
[u'41']
[u'New lows']
[u'36']
[u'36']
[u'83']
[u'Closing tick']
[u'+225']
[u'+252']
[u'+588']
[u'Closing Arms (TRIN)\x86']
[u'0.48']
[u'0.46']
[u'0.69']
[u'Block trades']
[u'10,790']
[u'8,961']
[u'5,890']
[u'Adv. volume']
[u'1,114,620,628']
[u'1,486,955,619']
[u'566,904,549']
[u'Decl. volume']
[u'692,473,754']
[u'377,852,362']
[u'1,122,931,683']
[u'Total volume']
[u'1,856,979,279']
[u'1,883,468,274']
[u'1,714,837,606']
[u'NYSE Amex']
[u'Latest close']
[u'Previous close']
[u'Week ago']
[u'Issues traded']
[u'434']
[u'432']
[u'439']
[u'Advances']
[u'185']
[u'204']
[u'202']
[u'Declines']
[u'228']
[u'202']
[u'210']
[u'Unchanged']
[u'21']
[u'26']
[u'27']
[u'New highs']
[u'10']
[u'12']
[u'29']
[u'New lows']
[u'4']
[u'7']
[u'13']
[u'Adv. volume*']
[u'2,365,755']
[u'5,581,737']
[u'11,992,771']
[u'Decl. volume*']
[u'4,935,335']
[u'4,619,515']
[u'15,944,286']
[u'Total volume*']
[u'7,430,052']
[u'10,835,106']
[u'28,152,571']
[u'Closing tick']
[u'+32']
[u'+24']
[u'+24']
[u'Closing Arms (TRIN)\x86']
[u'1.63']
[u'0.64']
[u'1.12']
[u'Block trades*']
[u'75']
[u'113']
[u'171']
[u'NYSE Arca']
[u'Latest close']
[u'Previous close']
[u'Week ago']
[u'Issues traded']
[u'1,188']
[u'1,205']
[u'1,176']
[u'Advances']
[u'580']
[u'825']
[u'423']
[u'Declines']
[u'562']
[u'361']
[u'730']
[u'Unchanged']
[u'46']
[u'19']
[u'23']
[u'New highs']
[u'17']
[u'45']
[u'42']
[u'New lows']
[u'5']
[u'25']
[u'12']
[u'Adv. volume*']
[u'72,982,336']
[u'140,815,734']
[u'73,868,550']
[u'Decl. volume*']
[u'58,099,822']
[u'31,998,976']
[u'185,213,281']
[u'Total volume*']
[u'146,162,965']
[u'175,440,329']
[u'260,075,071']
[u'Closing tick']
[u'+213']
[u'+165']
[u'+83']
[u'Closing Arms (TRIN)\x86']
[u'0.86']
[u'0.73']
[u'1.37']
[u'Block trades*']
[u'834']
[u'1,043']
[u'1,593']

What I want to do is only get the data for NYSE and nothing
else, so I do not know if that's possible or not. Also I want to do
something like:

if cell.contents[0] == "Advances":
    advances = next cell or whatever??  ---> this part I am not sure how to do.

Can someone help point me in the right direction to get the first data
point for the Advances row? I have others I will get as well but
figure once I understand how to do this I can do the rest.
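
One way to sketch the "cell after the row header" idea, shown here on the
flattened text list above rather than the parse tree (each header is
followed by its three values; with BeautifulSoup itself, calling
findNext("td") on the matched header cell is the tree-based equivalent):

```python
# Flattened cell text as printed in the post (truncated for brevity).
cells = ["Issues traded", "3,114", "3,136", "3,134",
         "Advances", "1,529", "1,959", "1,142",
         "Declines", "1,473", "1,070", "1,881"]

def value_after(cells, label):
    """Return the cell immediately following the given row header."""
    return cells[cells.index(label) + 1]

print(value_after(cells, "Advances"))  # 1,529
```

To restrict this to NYSE, stop scanning when the next exchange header
("Nasdaq") is reached before looking up the label.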

Thanks,

Tom
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Generator problem: parent class not seen

2012-02-01 Thread Russell Owen
On Feb 1, 2012, at 2:34 PM, Chris Rebert wrote:

> On Wed, Feb 1, 2012 at 1:00 PM, Russell E. Owen  wrote:
>> I have an odd and very intermittent problem in Python script.
>> Occasionally it fails with this error:
>> 
>> Traceback (most recent call last):
>>  File "/Applications/APO/TTUI.app/Contents/Resources/lib/python2.7/TUI/Base/BaseFocusScript.py", line 884, in run
>>  File "/Applications/APO/TTUI.app/Contents/Resources/lib/python2.7/TUI/Base/BaseFocusScript.py", line 1690, in initAll
>> TypeError: unbound method initAll() must be called with BaseFocusScript
>> instance as first argument (got ScriptClass instance instead)
> 
>> The code looks like this:
>> 
>>def run(self, sr):
>>try:
>>self.initAll()
> 
>> I am puzzled why Python thinks the class type is wrong, given the output
>> of inspect.getclasstree. Any ideas on what might be wrong and how to
>> track it down (and why it would be so intermittent)?
> 
> What's the offending line of initAll() [#1690 in BaseFocusScript.py]
> look like? The lines preceding it would also be helpful for context.

Here you go. The offending line, #1690, is marked with ***

-- Russell

class ImagerFocusScript(BaseFocusScript):
    """..."""
    def __init__(self,
        sr,
        instName,
        imageViewerTLName = None,
        defRadius = 5.0,
        defBinFactor = 1,
        maxFindAmpl = None,
        doWindow = False,
        windowOrigin = 1,
        windowIsInclusive = True,
        doZeroOverscan = False,
        helpURL = None,
        debug = False,
    ):
        ...
        BaseFocusScript.__init__(self,
            sr = sr,
            gcamActor = gcamActor,
            instName = instName,
            imageViewerTLName = imageViewerTLName,
            defRadius = defRadius,
            defBinFactor = defBinFactor,
            maxFindAmpl = maxFindAmpl,
            doWindow = doWindow,
            windowOrigin = windowOrigin,
            windowIsInclusive = windowIsInclusive,
            helpURL = helpURL,
            debug = debug,
        )
        self.doZeroOverscan = bool(doZeroOverscan)


    def initAll(self):
        """Override the default initAll to record initial bin factor, if relevant
        """
***     BaseFocusScript.initAll(self)
        if self.exposeModel.instInfo.numBin > 0:
            self.finalBinFactor = self.exposeModel.bin.getInd(0)[0]


Also, here is BaseFocusScript:

class BaseFocusScript(object):
    """Basic focus script object.

    This is a virtual base class. The inheritor must:
    - Provide widgets
    - Provide a "run" method
    """
    cmd_Find = "find"
    cmd_Measure = "measure"
    cmd_Sweep = "sweep"

    # constants
    #DefRadius = 5.0 # centroid radius, in arcsec
    #NewStarRad = 2.0 # amount of star position change to be considered a new star
    DefFocusNPos = 5  # number of focus positions
    DefFocusRange = 200 # default focus range around current focus
    FocusWaitMS = 1000 # time to wait after every focus adjustment (ms)
    BacklashComp = 0 # amount of backlash compensation, in microns (0 for none)
    WinSizeMult = 2.5 # window radius = centroid radius * WinSizeMult
    FocGraphMargin = 5 # margin on graph for x axis limits, in um
    MaxFocSigmaFac = 0.5 # maximum allowed sigma of best fit focus as a multiple of focus range
    MinFocusIncr = 10 # minimum focus increment, in um
    def __init__(self,
        sr,
        gcamActor,
        instName,
        tccInstPrefix = None,
        imageViewerTLName = None,
        defRadius = 5.0,
        defBinFactor = 1,
        finalBinFactor = None,
        canSetStarPos = True,
        maxFindAmpl = None,
        doWindow = True,
        windowOrigin = 0,
        windowIsInclusive = True,
        helpURL = None,
        debug = False,
    ):
        """"""
        self.sr = sr
        self.sr.debug = bool(debug)
        self.gcamActor = gcamActor


    def initAll(self):
        """Initialize variables, table and graph.
        """
        # initialize shared variables
        self.doTakeFinalImage = False
        self.focDir = None
        self.currBoreXYDeg = None
        self.begBoreXYDeg = None
        self.instScale = None
        self.arcsecPerPixel = None
        self.instCtr = None
        self.instLim = None
        self.cmdMode = None
        self.focPosToRestore = None
        self.expTime = None
        self.absStarPos = None
        self.relStarPos = None
        self.binFactor = None
        self.window = None # LL pixel is 0, UR pixel is included

        self.enableCmdBtns(False)

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Generator problem: parent class not seen

2012-02-01 Thread Russell Owen
On Feb 1, 2012, at 3:35 PM, Arnaud Delobelle wrote:
> On Feb 1, 2012 9:01 PM, "Russell E. Owen"  wrote:
> >
> > I have an odd and very intermittent problem in Python script.
> > Occasionally it fails with this error:
> >
> > Traceback (most recent call last):
> >  File "/Applications/APO/TTUI.app/Contents/Resources/lib/python2.7/TUI/Base/BaseFocusScript.py", line 884, in run
> >  File "/Applications/APO/TTUI.app/Contents/Resources/lib/python2.7/TUI/Base/BaseFocusScript.py", line 1690, in initAll
> > TypeError: unbound method initAll() must be called with BaseFocusScript
> > instance as first argument (got ScriptClass instance instead)
> > self=<...>; class hierarchy=[(<class 'TUI.Base.BaseFocusScript.ImagerFocusScript'>, (<class 'TUI.Base.BaseFocusScript.BaseFocusScript'>,)), [(<...>,
> > (<...>,))]]
> >
> 
> Looks like you have loaded the same module twice.  So you have two versions 
> of your class hierarchies. You can check by printing the ids of your classes. 
> You will get classes with the same name but different ids.
> 
> Arnaud
> 

Yes! I was reloading BaseFocusScript. Oops.

In detail: script files are dynamically loaded when first requested and can be 
reloaded for debugging. I think that's safe because script files are 
self-contained (e.g. the classes in them are never subclassed or anything like 
that). But I went too far: I had my focus scripts reload BaseFocusScript, which 
is shared code, so that I could tweak BaseFocusScript while debugging focus 
scripts.

Thank you very much!

-- Russell
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: What's the best way to minimize the need of run time checks?

2016-08-11 Thread Russell Owen

On 8/10/16 3:44 PM, Juan Pablo Romero Méndez wrote:

As to why I asked that, there are several reasons: I have a very concrete
need right now to find pragmatic ways to increase code quality, reduce
number of defects, etc. in a Python code base. But also I want to
understand better the mind set and culture of Python's community.


I am late to this thread, so my apologies for duplicated answers, but I 
have two concrete suggestions:
- Unit tests. These are a hassle to write, but pay huge dividends in 
robustness of your existing code and making it safer to modify the code 
later. There are also tools to measure test coverage which are worth 
considering. I don't think it is possible to write robust code in any 
language (even compiled languages) without a good test suite.
- Always run a linter such as flake8. Most source code editors can be 
configured to do this automatically. This will not catch everything that 
a compiler would catch in a compiled language, but it will catch many 
common errors.
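
To make the first suggestion concrete, a minimal unittest module looks
like this (the function and names are illustrative, not from any
particular code base):

```python
import unittest

def mean(values):
    """Average of a non-empty sequence of numbers."""
    return sum(values) / float(len(values))

class MeanTest(unittest.TestCase):
    def test_simple(self):
        self.assertEqual(mean([1, 2, 3]), 2)

    def test_single(self):
        self.assertEqual(mean([5]), 5)

# run with: python -m unittest <modulename>
```

Coverage tools such as coverage.py can then report which lines these
tests actually exercise.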


-- Russell

--
https://mail.python.org/mailman/listinfo/python-list


Re: Anaconda with Python 3.7

2018-09-28 Thread Russell Owen
On Sep 3, 2018, gvim wrote
(in article <5b8d0122.1030...@gmail.com>):

> Anyone have any idea when Anaconda might ship a version compatible with
> Python 3.7. I sent them 2 emails but no reply.

I heard a rumor today that it will be a few more months. They are short on 
resources and are also dealing with issues with dependency management.

In any case miniconda is available for 3.7 so it is worth checking to see if 
it has the packages that you need. (And if it’s just missing a few you can 
see if pip will install those).

-- Russell


-- 
https://mail.python.org/mailman/listinfo/python-list


asyncio await different coroutines on the same socket?

2018-10-03 Thread Russell Owen

Using asyncio I am looking for a simple way to await multiple events where 
notification comes over the same socket (or other serial stream) in arbitrary 
order. For example, suppose I am communicating with a remote device that can 
run different commands simultaneously and I don't know which command will 
finish first. I want to do this:

coro1 = start(command1)
coro2 = start(command2)
asyncio.gather(coro1, coro2)

where either command may finish first. I'm hoping for a simple and
idiomatic way to read the socket and tell each coroutine it is done. So far
everything I have come up with is ugly, using multiple layers of "async
def", keeping a record of Tasks that are waiting and calling "set_result"
on those Tasks when finished. Also, Task isn't even documented to have the
set_result method (though Future is).

Is there a simple, idiomatic way to do this?

-- Russell


-- 
https://mail.python.org/mailman/listinfo/python-list


How to await multiple replies in arbitrary order (one coroutine per reply)?

2018-10-05 Thread Russell Owen


I am using asyncio and am fairly new to it. I have a stream to which I write 
commands and from which I read replies. (In this case the stream is custom 
wrapper around DDS written in C++ and pybind11). Multiple commands can run at 
the same time and I cannot predict which will finish first. I need a
different coroutine (or asyncio.Task or other awaitable object) for each 
command that is running.

Is there a simple way to handle this in asyncio? So far the best I have come 
up is the following (greatly simplified), which works but has some 
misfeatures:

import asyncio
from iolib import read_reply, write_command, TIMED_OUT

class RemoteCommand:
    def __init__(self):
        self._tasks = dict()

    def start(self, cmd, timeout):
        """Start a command"""
        cmd_id = write_command(cmd)
        task = asyncio.ensure_future(self._wait_for_command(cmd_id=cmd_id, timeout=timeout))
        self._tasks[cmd_id] = task
        if len(self._tasks) == 1:
            asyncio.ensure_future(self._handle_replies())
        return task

    async def _wait_for_command(self, cmd_id, timeout):
        """Wait for a command to finish"""
        await asyncio.sleep(timeout)
        if cmd_id in self._tasks:
            del self._tasks[cmd_id]
        return TIMED_OUT # our standard end code for timeouts

    async def _handle_replies(self):
        while True:
            cmd_id, end_code = read_reply()
            if cmd_id in self._tasks:
                task = self._tasks.pop(cmd_id)
                task.set_result(end_code)
            if not self._tasks:
                return
            await asyncio.sleep(0.1)

Misfeatures include:
- asyncio.Task is not documented to have a "set_result" method. The 
documentation says that Task is "A Future-like object that runs a Python 
coroutine" and Future does have such a method.
- When "_handle_replies" calls "task.set_result(data)" this does not seem to 
cancel the "await asyncio.sleep(timeout)" in the task, resulting in scary 
messages to stdout. I have tried saving *that* as another task and canceling 
it, but it seems clumsy and I still see scary messages.

I think what I'm looking for is a task-like thing I can create that I can end 
when *I* say it's time to end, and if I'm not quick enough then it will time 
out gracefully. But maybe there's a simpler way to do this. It doesn't seem 
like it should be difficult, but I'm stumped. Any advice would be 
appreciated.

-- Russell


-- 
https://mail.python.org/mailman/listinfo/python-list


Re: asyncio await different coroutines on the same socket?

2018-10-05 Thread Russell Owen
On Oct 3, 2018, Ian Kelly wrote
(in 
article):

> On Wed, Oct 3, 2018 at 7:47 AM Russell Owen  wrote:
> > Using asyncio I am looking for a simple way to await multiple events where
> > notification comes over the same socket (or other serial stream) in
> > arbitrary
> > order. For example, suppose I am communicating with a remote device that can
> > run different commands simultaneously and I don't know which command will
> > finish first. I want to do this:
> >
> > coro1 = start(command1)
> > coro2 = start(command2)
> > asyncio.gather(coro1, coro2)
> >
> > where either command may finish first. I’m hoping for a simple and
> > idiomatic way to read the socket and tell each coroutine it is done. So far
> > everything I have come up with is ugly, using multiple layers of "async
> > def”, keeping a record of Tasks that are waiting and calling "set_result"
> > on those Tasks when finished. Also Task isn’t even documented to have the
> > set_result method (though "future" is)
>
> Because Tasks are used to wrap coroutines, and the result of the Task
> should be determined by the coroutine, not externally.
>
> Instead of tracking tasks (that's what the event loop is for) I would
> suggest tracking futures instead. Have start(command1) return a future
> (or create a future that it will await on itself) that is not a task.
> Whenever a response from the socket is parsed, that code would then
> look up the corresponding future and call set_result on it. It might
> look something like this:
>
> class Client:
>     async def open(self, host, port):
>         self.reader, self.writer = await asyncio.open_connection(host, port)
>         asyncio.create_task(self.read_loop())
>
>     async def read_loop(self):
>         while not self.reader.at_eof():
>             response = await self.reader.read()
>             id = get_response_id(response)
>             self._futures.pop(id).set_result(response)
>
>     def start(self, command):
>         future = asyncio.Future()
>         self._futures[get_command_id(command)] = future
>         self.writer.write(command)
>         return future
>
> In this case start() is not a coroutine but its result is a future and
> can be awaited.

That is exactly what I was looking for. Thank you very much!

-- Russell

(My apologies for double posting -- I asked this question again today because 
I did not think my original question -- this one -- had gone through).


-- 
https://mail.python.org/mailman/listinfo/python-list


Is it possible to connect an awaitable to a Future, basically turning it into a Task?

2018-10-27 Thread Russell Owen

I’m using asyncio and I’d like to add an item to an object that others 
can wait on immediately and which eventually I will want to use to track a 
coroutine. In other words I want something like:

class Info:
    def __init__(self):
        self.done_task = asyncio.Future()

info = Info()
# do other stuff, but eventually
coro = ...
asyncio.connect_future(coro, info.done_task)

I can certainly live without this, it simply requires adding an additional 
task made with asyncio.ensure_future and using that to set the result of 
done_task.

But it would be a lot more elegant to just have the one future (especially if 
I have to cancel the wait, as I have to keep the extra task around so I can 
cancel it). So...just wondering if I missed something.
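
(As far as I know asyncio has no such connect_future. A hedged sketch of
the "additional task" approach described above, with the extra task hidden
in one helper; connect_future, work, and main are all hypothetical names:)

```python
import asyncio

async def connect_future(coro, fut):
    """Run coro and copy its outcome into fut.

    Hypothetical helper -- asyncio itself provides nothing by this name.
    """
    try:
        fut.set_result(await coro)
    except asyncio.CancelledError:
        fut.cancel()
        raise
    except Exception as exc:
        fut.set_exception(exc)

async def main():
    loop = asyncio.get_running_loop()
    done_task = loop.create_future()  # others can await this immediately

    async def work():
        return 42

    # Later, once the coroutine exists, hook it up to the waiting future.
    asyncio.ensure_future(connect_future(work(), done_task))
    return await done_task

print(asyncio.run(main()))  # 42
```

Cancelling done_task does not automatically cancel the helper task, so a
real implementation would likely keep a reference to it for that purpose.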

Regards,

Russell


-- 
https://mail.python.org/mailman/listinfo/python-list


Rollover/wraparound time of time.clock() under win32?

2005-09-28 Thread Russell Warren
Does anyone know how long it takes for time.clock() to roll over under
win32?

I'm aware that it uses QueryPerformanceCounter under win32... when I've
used this in the past (other languages) it is a great high-res 64-bit
performance counter that doesn't roll-over for many (many) years, but
I'm worried about how many bits Python uses for it and when it will
roll over.  I need it to take years to roll over.  I'm also aware that
the actual rollover 'time' will be dependent on
QueryPerformanceFrequency, so I guess my real question is how Python
internally tracks the counter (eg: a 32 bit float would be no good),
but the first question is easier to ask. :)

time.time() is not an option for me since I need ms'ish precision and
under win32 it is at best using the 18.2Hz interrupt, and at worst it
is way worse.

I'd like to avoid having to make direct win32api calls if at all
possible.

In general - if anyone has any other approaches (preferably platform
independent) for getting a < 1 ms resolution timer that doesn't roll
over for years, I'd like to hear it!

For now I have (for win32 and linux only):
#---
from sys import platform
if platform == 'win32':
  from time import clock as TimeStamp
else:
  from time import time as TimeStamp
print TimeStamp()
#---

This works for me as long as the clock rollover is ok and precision is
maintained.

Thanks,
Russ

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Rollover/wraparound time of time.clock() under win32?

2005-09-28 Thread Russell Warren
Thanks!  That gets me exactly what I wanted.  I don't think I would
have been able to locate that code myself.

Based on this code and some quick math it confirms that not only will
the rollover be a looong way out, but that there will not be any loss
in precision until ~ 30 years down the road.  Checking my math:

  (float(10**16 + 1) - float(10**16)) == 0
  (float(10**15 + 1) - float(10**15)) == 1
  ie: our double precision float can resolve unity differences out to
  at least 10**15
  Assuming 1 us/count we have 10**15 us / (3.15E13 us/year) = 31.7 yrs

Past this we won't roll over since the long keeps counting for a long
time, but some precision will be lost.
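The precision argument above can be checked directly in the interpreter (Python 3 syntax):

```python
# A double resolves unity differences exactly out to about 10**15,
# but not 10**16:
assert float(10**15 + 1) - float(10**15) == 1.0
assert float(10**16 + 1) - float(10**16) == 0.0

# At 1 us per count, exact microsecond resolution therefore lasts:
years = 10**15 / (1e6 * 3600 * 24 * 365)
print("%.1f years" % years)  # roughly 31.7 years
```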

For those interested, the relevant win32 time code is below.  Thanks
again!

time_clock(PyObject *self, PyObject *args)
{
    static LARGE_INTEGER ctrStart;
    static double divisor = 0.0;
    LARGE_INTEGER now;
    double diff;

    if (!PyArg_ParseTuple(args, ":clock"))
        return NULL;

    if (divisor == 0.0) {
        LARGE_INTEGER freq;
        QueryPerformanceCounter(&ctrStart);
        if (!QueryPerformanceFrequency(&freq) || freq.QuadPart == 0) {
            /* Unlikely to happen - this works on all intel
               machines at least!  Revert to clock() */
            return PyFloat_FromDouble(clock());
        }
        divisor = (double)freq.QuadPart;
    }
    QueryPerformanceCounter(&now);
    diff = (double)(now.QuadPart - ctrStart.QuadPart);
    return PyFloat_FromDouble(diff / divisor);
}

-- 
http://mail.python.org/mailman/listinfo/python-list


scope of socket.setdefaulttimeout?

2005-09-29 Thread Russell Warren
Does anyone know the scope of the socket.setdefaulttimeout call?  Is it
a cross-process/system setting or does it stay local in the application
in which it is called?

I've been testing this and it seems to stay in the application scope,
but the paranoid side of me thinks I may be missing something... any
confirmation would be helpful.

-- 
http://mail.python.org/mailman/listinfo/python-list


Threads and socket.setdefaulttimeout

2005-10-12 Thread Russell Warren
It appears that the timeout setting is contained within a process
(thanks for the confirmation), but I've realized that the function
doesn't play friendly with threads.  If I have multiple threads using
sockets and one (or more) is using timeouts, one thread affects the
other and you get unpredictable behavior sometimes.  I included a short
script at the end of this that demonstrates the threading problem.

I'm trying to get around this by forcing all locations that want to set
a timeout to use a 'safe' call immediately prior to socket creation
that locks out setting the timeout again until the lock is released.
Something like this:

try:
  SafeSetSocketTimeout(Timeout_s)
  #lock currently acquired to prevent other threads sneaking in here
  CreateSocket()
finally:
  ReleaseSocketTimeoutSettingLock()
UseSocket()

However - this is getting increasingly painful because I don't have
easy access to all of the socket creations where I'd like to do this.
The biggest pain right now is that I'm using xmlrpclib which has some
seriously/frustratingly heavy use of __ prefixes that makes getting
inside to do this at socket creation near impossible (at least I think
so).  Right now the best I can do is surround the xmlrpclib calls with
this (effectively putting the lock release after the UseSocket), but
then other threads get hung up for the duration of the call or timeout,
rather than just the simple socket creation.

It would be nice if the timeout were implemented as an argument in the
socket constructor rather than having this global method.  Is there a
reason for this?  I tried sifting through the cvs source and got lost -
couldn't even find the call definition for socket(family, type, proto)
and gave up...

Does anybody have any idea of another way to do what I need (independent
socket timeouts per thread), or have suggestions on how to break into
xmlrpclib (actually down into httplib) to do the method I was trying?

Related question: Is there some global way that I'm unaware of to make
it so that some few lines of code are atomic/uninterruptable and no
other thread can sneak in between?
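There is no way in CPython to make an arbitrary block of code uninterruptible, but the cooperative-lock approach sketched in the post can be written compactly with a `with` block. This is only a sketch of the pattern (the helper name is invented), and it only protects code paths that go through the helper:

```python
import socket
import threading

_timeout_lock = threading.Lock()

def create_socket_with_timeout(timeout):
    """Serialize setdefaulttimeout + socket creation across threads."""
    with _timeout_lock:
        old = socket.getdefaulttimeout()
        socket.setdefaulttimeout(timeout)
        try:
            # The new socket picks up the default timeout at creation.
            return socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        finally:
            socket.setdefaulttimeout(old)  # restore before releasing

s = create_socket_with_timeout(0.1)
print(s.gettimeout())  # 0.1
s.close()
```

Because the default is restored inside the lock, other threads never observe the temporary value.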

All suggestions appreciated!  Hopefully I'm just missing something
obvious.

Russ

#--- This script confirms that settimeout's affect is across threads
import threading, xmlrpclib, socket

def st():
  socket.setdefaulttimeout(0.1)

try:
  proxy = xmlrpclib.ServerProxy("http://localhost:1")
  print proxy.NonExistentCallThatShouldTimeout()
except Exception, E:  print "Exception caught: %s" % (E,)

cbThread = threading.Thread(target = st)
cbThread.start()

try:
  print proxy.NonExistentCallThatShouldTimeout()
except Exception, E:  print "Exception caught: %s" % (E,)

#Output is:
#Exception caught: (10061, 'Connection refused')
#Exception caught: timed out

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Threads and socket.setdefaulttimeout

2005-10-19 Thread Russell Warren
Thanks for the detailed response... sorry for the lag in responding to
it.

After reading and further thought, the only reason I was using
setdefaulttimeout in the first place (rather then using a direct
settimeout on the socket) was because it seemed like the only way (and
easy) of getting access to the seemingly deeply buried socket being
used by xmlrpclib.  That was prior to me using threads of course.  I
then started trying to make this solution work with thread, but it is
now too convoluted as you say.  Now I think the best solution is likely
to redirect my efforts at getting access to the socket used by
xmlrpclib so that I can set it's timeout directly.  I'm still unclear
how to do this cleanly, though.

Getting to some of your comments.

> When you say "one thread affects another", I see that your example uses
> the same function for both threads. IMHO it's much better to override
> the thread's run() method than to provide a callable at thread creating
> time. That way you can be sure each thread's execution is firmly in the
> context of the particular thread instance's namespace.
>
> having said all this, I don't think that's your issue.

Correct - the bottom code is nothing to do with my code and was only to
quickly prove that it was cross-thread.

> This seems extremely contorted, and I'm pretty sure we can find a better
> way.

Couldn't agree more!

> The threads' network calls should be yielding process control during
> their timeout period to allow other runnable threads to proceed. That's

Yep.  This is not causing me any problem.

> You are aware, I presume, that you can set a timeout on each socket
> individually using its settimeout() method?

Yes, but I momentarily had forgot about it... as mentioned I ended up
making the since-bad choice of using setdefaulttimeout to get timeouts
set on the inaccessible sockets.  Then I carried it too far...

> See above. However, this *does* require you to have access to the
> sockets, which is tricky if they are buried deep in some opaque object's
> methods.

Any help on how to crack the safe would be appreciated.

> There are locks! I suspect what you need is a threading.Rlock object,
> that a thread has to hold to be able to modify the (global) default
> timeout. This isn't a full solution to your problem, though, as you have
> correctly deduced.

Not quite what I was after, I don't think, since potentially interfering
code needs to check the lock (via acquire) to avoid conflict.  What I
guess I mean is something general for the process saying "never ever
interrupt this block of code by running code on another thread,
regardless of whether the other thread(s) check a lock".  Thinking more
about it, that seems unreasonable, so I'll drop the question.

Russ

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Accessing Postgress from Windows

2005-01-28 Thread Robby Russell
On Fri, 2005-01-28 at 14:51 -0600, Greg Lindstrom wrote:
> Hello, All-
> 
> I am running Python 2.3 on a Windows XP box and would like to access a 
> postgres database running on a Linux fileserver.  I've googled for 
> Python and Postgres but most of the stuff I saw looked stale.  I would 
> like to use a secure connection (ssl) and a direct connection, if 
> possible.  What can you recommend for the task?
> 
> Thanks...and see you all at PyCon!
> --greg
> 

You could try this mxODBC from Windows.

http://www.egenix.com/files/python/mxODBC.html

Cheers,

Robby

-- 
/*******
* Robby Russell | Owner.Developer.Geek
* PLANET ARGON  | www.planetargon.com
* Portland, OR  | [EMAIL PROTECTED]
* 503.351.4730  | blog.planetargon.com
* PHP/PostgreSQL Hosting & Development
* --- Now hosting PostgreSQL 8.0! ---
/

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: pythonic equivalent of Mathematica's FixedPoint function

2005-02-01 Thread Russell Blau
"jelle" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]
> doh...
>
> https://sourceforge.net/projects/fixedpoint
>
> pardon me
>

I don't think that Tim's FixedPoint class is doing the same thing as
Mathematica's FixedPoint function (or even anything remotely similar).
Well, except for the fact that they both operate on numbers...

You could probably write your own FixedPoint function without too much
difficulty, with the only tricky part being for it to know when to stop!

Russ

-- 
I don't actually read my hotmail account, but you can replace hotmail with
excite if you really want to reach me.



-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Generating images with text in them

2005-07-21 Thread Daren Russell
phil hunt wrote:
> I am trying to generate some images (gifs or pngs) with text in 
> them. I can use the Python Imaging Library, but it only has access 
> to the default, rather crappy, font. 
> 
> Ideally I'd like to use one of the nicer fonts that come with my X 
> Windows installation. Using Tkinter I can draw these fonts on the 
> screen; is there any way to get these fonts into a bitmapped image?
> For example, can I draw some text on a canvas and then "grab" that 
> canvas as a bitmap into PIL, and then save it as a file?
> 
> Alternately, is there a good source of PIL font files (.pil files)
> somewhere?
> 
> If the writers of the Python Imaging Library are reading this, may I 
> suggest that they add more fonts to it. Yes, that would increase 
> the size, but these days disk space is cheap and programmer time
> expensive.
> 

I've just been playing around with this.  You can use truetype fonts with:

font = ImageFont.truetype("/path/to/font.ttf", 12)

from version 1.1.4

http://www.pythonware.com/library/pil/handbook/imagefont.htm for more
details

HTH
Daren

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Pickle, __init__, and classes

2005-08-02 Thread Russell Blau
"Yin" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]
> I've created a class that reads in and processes a file in the
> initializer __init__.  The processing is fairly substantial, and thus,
> instead of processing the file every time the object is created, I
> pickle the object to a file.
>
> In subsequent creations of the object, I implement a test to see
> whether the pickled file exists.  If it does, then I unpickle the
> object.
>
> Unfortunately, the __init__ cannot return this unpickled object.
>

Try __new__().  http://docs.python.org/ref/customization.html  This isn't
the usual application of __new__, but since it returns an object it should
be ideal for your purposes.
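A hedged sketch of that idea (all names invented; note that this version pickles only the computed *data*, not the instance itself, because pickle's own reconstruction also goes through `__new__` and naively pickling `self` there can recurse unless you also define `__reduce__`):

```python
import os
import pickle
import tempfile

CACHE = os.path.join(tempfile.mkdtemp(), "result.pickle")

def expensive(text):
    return text.upper()  # stand-in for the heavy file processing

class Processed:
    """__init__ cannot return a different object, but __new__ can
    decide how the instance gets built."""

    def __new__(cls, text):
        inst = super().__new__(cls)
        if os.path.exists(CACHE):
            with open(CACHE, "rb") as f:
                inst.result = pickle.load(f)  # reuse cached work
        else:
            inst.result = expensive(text)
            with open(CACHE, "wb") as f:
                pickle.dump(inst.result, f)
        return inst

a = Processed("hello")
b = Processed("ignored")  # second construction hits the cache
print(a.result, b.result)  # HELLO HELLO
```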






-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Tar module issue

2005-02-07 Thread Russell Bungay
Hello,
> I'm using tarfile module to create an archive. For my example I'm using
> Amsn file and directory tree.
> My variables are like these ones:
> path = /home/chaica/downloads/amsn-0_94/skins/Tux/smileys/shades.gif
> fileName = amsn-0_94/skins/Tux/smileys/shades.gif
> tar.add( path, fileName )
> and while untaring my archive with tar jxvf I've random errors:
> tar: amsn-0_94/lang/genlangfiles.c: Cannot hard link to
> `amsn-0_94/lang/genlangfiles.c': No such file or directory
> I checked google and saw that errors like these ones could occur when
> you use global path while taring, but I'm not, using fileName which is
> local.
I used tarfile for the first time at the weekend and noticed one thing 
that may help.  I don't know if it is a specific solution to your 
problem, but it might be worth a try.

I noticed that if I didn't explicitly close the tarfile with tar.close() 
after I had added the files, the resultant file would sometimes not be 
written properly (even with completed execution of the whole script). 
Explicitly closing the file would make these problems go away.
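The close can also be made automatic: tarfile objects support the with-statement (Python 2.7 and later), which guarantees `close()` runs and buffered data is flushed. A small self-contained round trip:

```python
import os
import tarfile
import tempfile

tmp = tempfile.mkdtemp()
src = os.path.join(tmp, "hello.txt")
with open(src, "w") as f:
    f.write("hello")

archive = os.path.join(tmp, "out.tar.bz2")
# The with-statement guarantees tar.close(), flushing buffered data.
with tarfile.open(archive, "w:bz2") as tar:
    tar.add(src, arcname="hello.txt")

with tarfile.open(archive, "r:bz2") as tar:
    print(tar.getnames())  # ['hello.txt']
```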

I hope that helps,
R
--
http://mail.python.org/mailman/listinfo/python-list


Re: Questions about mathematical signs...

2005-02-07 Thread Russell Blau
"Dan Bishop" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]
> Jeff Epler wrote:
> > On Sun, Feb 06, 2005 at 12:26:30PM -0800, administrata wrote:
> > > Hi! I'm programming maths programs.
> > > And I got some questions about mathematical signs.
> ...
> > > 2. Inputing fractions like (a / b) + (c / d), It's tiring work too.
> > >Can it be simplified?
> >
> > Because of the rules of operator precedence,
> > a / b + c / d
> > has the same meaning as the expression you gave.
>
> And it's important to note that that meaning will change in version
> 3.0.  Until then, it's best to start every module with "from __future__
> import division".

You're right, of course, but that's like offering advice about how to shift
gears on a bicycle to someone who hasn't even figured out where the pedals
are yet!  ;-)


-- 
I don't actually read my hotmail account, but you can replace hotmail with
excite if you really want to reach me.



-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How would you program this?

2005-03-02 Thread Russell Blau
"engsol" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]
> There is a number puzzle which appears in the daily paper.
> Because I'm between Python projects, I thought it might be
> fun to write a program to solve it... a 20 minute job, max.
>
> On closer inspection, it became apparent that it's not a
> simple thing to program. How would you approach it?
>
> The puzzle: a 4 x 4 grid. The rows are summed (and given), the
> cols are summed (and given), and the two diagonals are summed,
> and given. In addition, 4 clues are given, but none of the 4 are in
> the same row or col.
>
> Example from today's paper:...solution time is 8 minutes, 1 second,
> so they say.
>
> The set of allowable numbers  is 1 thru 9
>
> Rows:
> 3 + B + C + D = 22
> E + F + 8 + H = 26
> I + J + K + 8 = 31
> M + 7 + O + P = 25
>
> Col sums:
> 24, 18, 31, 31
>
> Diag sums:
> 3 + F + K + P = 24
> M + J + 8 + D = 24
>
>
>
> The first impulse is to just brute force it with nested for loops,
> but the calculator shows the possible combinations are
> 9^12 = 5,159,780,352, which would take much too long.
>

What you have is a set of 10 linear equations in 11 variables.  Normally
that isn't enough to generate a unique solution, but the additional
constraint that all variables must have values in the range 1..9 probably
will get you to a unique solution.  I suggest you Google for techniques
for solving "simultaneous linear equations".


-- 
I don't actually read my hotmail account, but you can replace hotmail with
excite if you really want to reach me.



-- 
http://mail.python.org/mailman/listinfo/python-list


Re: using Tkinter from IDLE

2005-03-03 Thread Russell Blau
<[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]

> How do I use Tkinter from IDLE? Tkinter can be used from IDLE attached
> to python 2.2, IDLE 0.8. But I couldn't use from IDLE attached to
> python 2.3, IDLE 1.0.3. When I execute the code below:

> from Tkinter import *
> root = Tk()

> the window appears form IDLE 0.8, but not from IDLE 1.0.3.

Add the line:

root.mainloop()

at the end of your code.


-- 
I don't actually read my hotmail account, but you can replace hotmail with
excite if you really want to reach me.



-- 
http://mail.python.org/mailman/listinfo/python-list


Re: rearrange text

2005-03-03 Thread Russell Blau
"Daniel Skinner" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]
> If I have the following text
>
> var = '1,2,3,4'
>
> and I want to use the comma as a field delimeter and rearrange the
> fields to read
>
> '1,3,2,4'
>
> How would I accomplish this in python?

Well, it kind of depends on how you want to do the rearranging, whether the
data in the fields is always going to be numbers or could be some other kind
of object, etc.

In general, though, what you seem to be looking for is:

mylist = var.split(',')
rearrange(mylist)
newvar = ','.join(mylist)
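With a concrete rearrangement (swapping the middle two fields, as in the question) the whole thing is:

```python
var = '1,2,3,4'
fields = var.split(',')
fields[1], fields[2] = fields[2], fields[1]  # swap the middle two fields
newvar = ','.join(fields)
print(newvar)  # 1,3,2,4
```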


-- 
I don't actually read my hotmail account, but you can replace hotmail with
excite if you really want to reach me.



-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How would you program this?

2005-03-03 Thread Russell Blau
"Dennis Lee Bieber" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]
> On Wed, 2 Mar 2005 13:44:07 -0500, "Russell Blau" <[EMAIL PROTECTED]>
> declaimed the following in comp.lang.python:
>
> >
> > What you have is a set of 10 linear equations in 11 variables.  Normally
>
> Worse -- there are 12 unknowns in the 10 equations...

Yup, I need to grow two more fingers, I guess.


-- 
I don't actually read my hotmail account, but you can replace hotmail with
excite if you really want to reach me.




-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How do I do this? (eval() on the left hand side)

2004-12-07 Thread Russell Blau
"It's me" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]
>
> In REXX, for instance, one can do a:
>
> interpret y' = 4'
>
> Since y contains a, then the above statement amongs to:
>
> a = 4
>
> There are many situations where this is useful.   For instance, you might
be
> getting an input which is a string representing the name of a variable and
> you wish to evaluate the expression (like a calculator application, for
> instance).

In Python, the canonical advice for this situation is, "Use a dictionary."
This has a number of advantages, including keeping your user's namespace
separate from your application's namespace.  Plus it's easier to debug and
maintain the code.
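A minimal sketch of the dictionary approach (helper names invented), which keeps user "variables" out of the application's namespace entirely:

```python
# Keep user-defined variables in their own namespace, not exec.
user_vars = {}

def assign(name, value):
    user_vars[name] = value

def lookup(name):
    return user_vars[name]

y = "a"          # the *name* of the variable, held as a string
assign(y, 4)     # the equivalent of REXX:  interpret y' = 4'
print(lookup("a"))  # 4
```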

But, if you absolutely, positively have to refer to your variable
indirectly, you could do:

exec "%s = 4" % y

If y refers to the string "a", this will cause the variable a to refer to
the value 4.

-- 
I don't actually read my hotmail account, but you can replace hotmail with
excite if you really want to reach me.


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: More baby squeaking - iterators in a class

2004-12-30 Thread Russell Blau
"Bulba!" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]
> Hello Mr Everyone,
>
> From:
> http://docs.python.org/tut/node11.html#SECTION001190
>
> "Define a __iter__() method which returns an object with a next()
> method. If the class defines next(), then __iter__() can just return
> self:"
>
> The thing is, I tried to define __iter__() directly without explicit
> defining next (after all, the conclusion from this passage should
> be that it's possible).

I don't get that from the passage quoted, at all, although it is somewhat
opaque.  It says that your __iter__() method must *return an object* with a
next() method; your __iter__() method below doesn't return such an object,
but instead returns a string.  It then says that *if* your class defines
next(), which yours doesn't, __iter__() can return self.

[spaces inserted; you should note that many newsreaders strip the TAB
character...]

> class R:
>   def __init__(self, d):
> self.d=d
> self.i=len(d)
>   def __iter__(self):
> if self.i == 0:
>   raise StopIteration
> self.i -= 1
> return self.d[self.i]
>

Solution:  replace "__iter__" with "next" in the class definition above,
then add to the end:

   def __iter__(self):
 return self
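Putting the two fixes together gives the full corrected class (modernized to Python 3, where the `next()` method of the thread's era is spelled `__next__()`):

```python
class R:
    """Iterate over d in reverse order."""

    def __init__(self, d):
        self.d = d
        self.i = len(d)

    def __next__(self):
        if self.i == 0:
            raise StopIteration
        self.i -= 1
        return self.d[self.i]

    def __iter__(self):
        # The class defines __next__, so __iter__ can just return self.
        return self

print(list(R("abc")))  # ['c', 'b', 'a']
```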


-- 
I don't actually read my hotmail account, but you can replace hotmail with
excite if you really want to reach me.


-- 
http://mail.python.org/mailman/listinfo/python-list


Best IDE

2005-03-22 Thread tom . russell

If money is not an issue, what are the best options for a "Professional" IDE for Python that includes all the normal stuff PLUS a GUI Builder?? 

It's an open-ended question, but I need some opinions from those that have actually used some of this stuff. This is for a business producing in-house programs for internal use only.

Thanks,

Tom


:.
CONFIDENTIALITY : This e-mail and any attachments are confidential and may be privileged. If you are not a named recipient, please notify the sender immediately and do not disclose the contents to another person, use it for any purpose or store or copy the information in any medium.
-- 
http://mail.python.org/mailman/listinfo/python-list

Python IDE

2005-03-24 Thread tom . russell

Has anyone used BlackAdder IDE for any project small or big? Whats your opinion?

Thanks,

Tom


-- 
http://mail.python.org/mailman/listinfo/python-list

Re: How to install PIL or PILLOW on OS X Yosemite?

2015-02-19 Thread Russell Owen

On 2/15/15 8:17 PM, Ned Deily wrote:

In article ,
  KP  wrote:

just upgraded my Mac Mini to Yosemite and have never dabbled in Python on
this OS.

I see it has Python 2.7.6 installed.

When I do something like

from PIL import ImageFont, ImageDraw

it tells me that it cannot find PIL

How do I install this on Yosemite?


Suggestions: stick with Pillow which is the current, maintained fork of
the venerable PIL. Decide whether you want to use Python 3 or Python 2.
PIL/Pillow installation on OS X is more involved than on some other
platforms because it depends on a number of third-party C libraries that
are not shipped by Apple in OS X so you need to find another source for
them.  Rather than trying to build and install everything yourself or
downloading a Pillow or PIL installer, I suggest picking one of the
several fine distributors of open source packages for OS X and
installing everything you need from them (including an up-to-date Python
2 or 3) and for your future needs beyond Pillow; options include
Homebrew, MacPorts, Anaconda, Fink, and others.  Once you've installed
the base framework for the package manager you choose, installing
something like Pillow and all of its dependencies is often just a
one-line command.  It may take a little while to get used to the quirks
of the package manager you choose but, if you are going to use OS X for
development with Python or many other languages, that time spent will be
repaid many times over.


I agree that Pillow is preferable to PIL and that you may want to 
consider a 3rd party system.


If you are primarily interested in Python (and not unix-based C/C++ 
libraries and utilities then I suggest you try anaconda python.


Homebrew, MacPorts and Fink are mostly aimed at people who want to add 
missing unix libraries and tools.


If you want to stick with python.org python then a binary PIL installer 
is available here:

<http://www.astro.washington.edu/users/rowen/python/>
(I am not aware of any Pillow binaries).

-- Russell

--
https://mail.python.org/mailman/listinfo/python-list


Re: Picking apart a text line

2015-03-02 Thread Russell Owen

On 2/26/15 7:53 PM, memilanuk wrote:

So... okay.  I've got a bunch of PDFs of tournament reports that I want
to sift thru for information.  Ended up using 'pdftotext -layout
file.pdf file.txt' to extract the text from the PDF.  Still have a few
little glitches to iron out there, but I'm getting decent enough results
for the moment to move on.


...

So back to the lines of text I have stored as strings in a list.  I
think I want to convert that to a list of lists, i.e. split each line
up, store that info in another list and ditch the whitespace.  Or would
I be better off using dicts?  Originally I was thinking of how to
process each line and split it them up based on what information was
where - some sort of nested for/if mess.  Now I'm starting to think that
the lines of text are pretty uniform in structure i.e. the same field is
always in the same location, and that list slicing might be the way to
go, if a bit tedious to set up initially...?

Any thoughts or suggestions from people who've gone down this particular
path would be greatly appreciated.  I think I have a general
idea/direction, but I'm open to other ideas if the path I'm on is just
blatantly wrong.


It sounds to me as if the best way to handle all this is keep the 
information it in a database, preferably one available from the network 
and centrally managed, so whoever enters the information in the first 
place enters it there. But I admit that setting such a thing up requires 
some overhead.


Simpler alternatives include using SQLite, a simple file-based database 
system, or numpy structured arrays (arrays with named fields). Python 
includes a standard library module for sqlite and numpy is easy to install.
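For the SQLite route, a minimal sketch with the standard library's sqlite3 module (the table and column names are invented to match the tournament-report example; a real schema would follow the PDF's fields):

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # use a file path for persistence
conn.execute("CREATE TABLE results (shooter TEXT, score INTEGER)")
rows = [("Smith", 197), ("Jones", 199)]
conn.executemany("INSERT INTO results VALUES (?, ?)", rows)
top = conn.execute(
    "SELECT shooter FROM results ORDER BY score DESC LIMIT 1").fetchone()
print(top[0])  # Jones
conn.close()
```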


-- Russell

--
https://mail.python.org/mailman/listinfo/python-list


Re: Best way to calculate fraction part of x?

2015-03-26 Thread Russell Owen

On 3/24/15 6:39 PM, Jason Swails wrote:



On Mon, Mar 23, 2015 at 8:38 PM, Emile van Sebille mailto:em...@fenx.com>> wrote:

On 3/23/2015 5:52 AM, Steven D'Aprano wrote:

Are there any other, possibly better, ways to calculate the
fractional part
of a number?


float (("%6.3f" % x)[-4:])


​In general you lose a lot of precision this way...​


I suggest modf in the math library:

math.modf(x)
Return the fractional and integer parts of x. Both results carry the 
sign of x and are floats.
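For example:

```python
import math

frac, whole = math.modf(3.25)
print(frac, whole)    # 0.25 3.0

frac, whole = math.modf(-3.25)
print(frac, whole)    # -0.25 -3.0  (both parts carry the sign of x)
```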




--
https://mail.python.org/mailman/listinfo/python-list


Re: Quick question, if you please

2015-03-31 Thread Russell Owen

On 3/31/15 10:09 AM, John Kelly wrote:

Pythonites,

I received Python with another install and my update software keeps
signaling I need to install a newer version, and once I do, the older
version is still there, so I keep getting told I need to update. Should
I be able to uninstall the old version each time?

Thanks for your kind attention,
John Kelly


I would need more information to help. What operating system are you on? 
How and where are you installing Python (and what do you mean by 
"received Python with another install"?).


-- Russell

--
https://mail.python.org/mailman/listinfo/python-list


Argument Presence Checking via Identity or Boolean Operation?

2015-06-04 Thread Russell Brennan
I'm going to x-post this to stackoverflow but...

When checking a method's arguments to see whether they were set, is it
pythonic to do an identity check:

def doThis(arg1, arg2=None):
  if arg2 is None:
arg2 = myClass()


Or is it proper form to use a short-circuiting boolean:

def doThis(arg1, arg2=None):
arg2 = arg2 or myClass()


In support of the former, PEP 8 states:

Comparisons to singletons like None should always be done with is or is not
, never the equality operators. Also, beware of writing if x when you
really mean if x is not None -- e.g. when testing whether a variable or
argument that defaults to None was set to some other value. The other value
might have a type (such as a container) that could be false in a boolean
context!


On the other hand, from the Google style guide:

Use the "implicit" false if at all possible. ...


But at the same time states...

Never use == or != to compare singletons like None. Use is or is not.


Does this apply to "None" since it evaluates to False always, and/or is a
boolean comparison equivalent to ==/!= under the hood?
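A quick demonstration of the difference the PEP 8 passage warns about: with the `or` form, a caller who deliberately passes a falsy-but-valid value (an empty list here) gets it silently replaced.

```python
def with_or(arg=None):
    arg = arg or []          # replaces *any* falsy value
    return arg

def with_is_none(arg=None):
    if arg is None:          # replaces only a missing argument
        arg = []
    return arg

explicit = []                # caller deliberately passes an empty list
print(with_or(explicit) is explicit)       # False: silently replaced
print(with_is_none(explicit) is explicit)  # True: caller's object kept
```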

Thanks much,
Russ
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: tkinter resize question

2015-07-17 Thread Russell Owen

On 7/17/15 12:17 PM, nickgeova...@gmail.com wrote:

On Friday, July 17, 2015 at 1:53:19 PM UTC-5, nickge...@gmail.com wrote:

Resizing a tkinter window which contains a frame which contains a button 
widget, will not change the current size of the window, frame or button as 
recorded in their height and width attributes (at least not if they are 
resizable). What is the correct way to detect their current size?


Ok, partially answering my own question:
The geometry of the window will change (win.geometry()), but the changes do not appear to 
"propagate" to the retrieved width/height of the child widgets, frames, etc. Or 
am I incorrect with this?


I'm not seeing it. If I try the following script I see that resizing the 
widget does update frame.winfo_width() and winfo_height. (I also see 
that the requested width and height are ignored; you can omit those).


-- Russell


#!/usr/bin/env python
import Tkinter
root = Tkinter.Tk()

frame = Tkinter.Frame(root, width=100, height=50)
frame.pack(expand=True, fill="both")
def doReport(*args):
    print "frame actual width=%s, height=%s" % (
        frame.winfo_width(), frame.winfo_height())
    print "frame requested width=%s, height=%s" % (
        frame.winfo_reqwidth(), frame.winfo_reqheight())

button = Tkinter.Button(frame, text="Report", command=doReport)
button.pack()

root.mainloop()


--
https://mail.python.org/mailman/listinfo/python-list


IMAP4_SSL error

2006-01-04 Thread Russell Stewart
I'm trying to log into a secure IMAP4 server using imaplib,
and I'm getting a strange error. If I do the following (name
of mail server x'ed out in example):

 >>> import imaplib
 >>> m = imaplib.IMAP4_SSL("mail.xxx.xxx")

I get:
Traceback (most recent call last):
   File "<stdin>", line 1, in ?
   File "C:\Python24\lib\imaplib.py", line 1101, in __init__
 IMAP4.__init__(self, host, port)
   File "C:\Python24\lib\imaplib.py", line 160, in __init__
 self.open(host, port)
   File "C:\Python24\lib\imaplib.py", line 1114, in open
 self.sslobj = socket.ssl(self.sock, self.keyfile, self.certfile)
AttributeError: 'module' object has no attribute 'ssl'

Any ideas? I'm running Active State Python 2.4 in WinXP
SP2.

-- 
Russell Stewart   |  E-Mail: [EMAIL PROTECTED]
UNM CS Department |  WWW: http://www.russell-stewart.net

"The great thing about standards in the computer
industry is that there are so many to choose from"
  --Spotted on Slashdot
-- 
http://mail.python.org/mailman/listinfo/python-list


email modules and attachments that aren't there

2006-01-09 Thread Russell Bungay
Hello all,

I have written a short function, based on a recipe in the Python 
Cookbook, that sends an e-mail.  The function takes arguments that 
define who the e-mail is to, from, the subject, the body and an optional 
list of attachments.

The function also works perfectly, bar one slight problem.  If you 
attempt to send an e-mail with just a body and no attachments, the 
receiving client still thinks that there is an attachment (so far tested 
in Mozilla Thunderbird and the Yahoo! webmail client).  Although this 
clearly isn't a major problem, it is irritating and I am hoping to use 
my code at work.  Obviously I can't be sending out badly formed e-mails 
to my clients.

I can't for the life of me work out why.  I have compared my code to 
every example that I can find in the Python documentation, on the 
archives of this newsgroup and of the Python Tutor list, and one or two 
random searches but can't see what is happening.  Any advice or 
suggestions would be welcome.

Thank you for your help,

Russell Bungay
--
The Duck Quacks:
http://www-users.york.ac.uk/~rb502/ - Homepage
http://www-users.york.ac.uk/~rb502/blog/quack.shtml - Blog
http://www.flickr.com/photos/lsnduck/ - Photos

Code:

import base64
import cStringIO
import email.Message
import email.Utils
import mimetypes
import quopri
import smtplib

def sendEmail(msg_to, msg_from, msg_subject, message, attachments=[]):

    main_msg = email.Message.Message()
    main_msg['To'] = ', '.join(msg_to)
    main_msg['From'] = msg_from
    main_msg['Subject'] = msg_subject
    main_msg['Date'] = email.Utils.formatdate(localtime=1)
    main_msg['Message-ID'] = email.Utils.make_msgid()
    main_msg['Mime-version'] = '1.0'
    main_msg['Content-type'] = 'Multipart/mixed'
    main_msg.preamble = 'Mime message\n'
    main_msg.epilogue = ''

    body_encoded = quopri.encodestring(message, 1)
    body_msg = email.Message.Message()
    body_msg.add_header('Content-type', 'text/plain')
    body_msg.add_header('Content-transfer-encoding', 'quoted-printable')
    body_msg.set_payload(body_encoded)
    main_msg.attach(body_msg)

    for attachment in attachments:

        content_type, ignored = mimetypes.guess_type(attachment)
        if content_type is None:
            content_type = 'application/octet-stream'
        contents_encoded = cStringIO.StringIO()
        attach_file = open(attachment, 'rb')
        main_type = content_type[:content_type.find('/')]
        if main_type == 'text':
            cte = 'quoted-printable'
            quopri.encode(attach_file, contents_encoded, 1)
        else:
            cte = 'base64'
            base64.encode(attach_file, contents_encoded)
        attach_file.close()

        sub_msg = email.Message.Message()
        sub_msg.add_header('Content-type', content_type, name=attachment)
        sub_msg.add_header('Content-transfer-encoding', cte)
        sub_msg.set_payload(contents_encoded.getvalue())
        main_msg.attach(sub_msg)

    smtp = smtplib.SMTP(server)  # 'server' assumed defined elsewhere
    # sendmail expects a list of recipients, not a comma-joined string
    smtpfail = smtp.sendmail(msg_from, msg_to, main_msg.as_string())
    smtp.quit()
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: email modules and attachments that aren't there

2006-01-10 Thread Russell Bungay
Hello,

>> main_msg['Content-type'] = 'Multipart/mixed'
> Would it be the 'Content-Type' header?  I've no expertise in this, but
> doesn't 'multipart' mean 'has attachments'?

Brilliant, thank you.  A swift test on the number of attachments and 
changing the header suitably does the job.

Thank you for your help,

Russell
--
The Duck Quacks:
http://www-users.york.ac.uk/~rb502/ - Homepage
http://www-users.york.ac.uk/~rb502/blog/quack.shtml - Blog
http://www.flickr.com/photos/lsnduck/ - Photos
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: email modules and attachments that aren't there

2006-01-10 Thread Russell Bungay
Hello,

>>> main_msg['Content-type'] = 'Multipart/mixed'
>> Would it be the 'Content-Type' header?  I've no expertise in this, but
>> doesn't 'multipart' mean 'has attachments'?
> Brilliant, thank you.  A swift test on the number of attachments and 
> changing the header suitably does the job.

That isn't quite all there is to it, the e-mail construction needs a 
slight change as well.  Roughly working code below.

Ta,

Russell

Code:

import base64
import cStringIO
import email.Message
import email.Utils
import mimetypes
import quopri
import smtplib

def sendEmail(msg_to, msg_from, msg_subject, message, attachments=[]):

    main_msg = email.Message.Message()
    main_msg['To'] = ', '.join(msg_to)
    main_msg['From'] = msg_from
    main_msg['Subject'] = msg_subject
    main_msg['Date'] = email.Utils.formatdate(localtime=1)
    main_msg['Message-ID'] = email.Utils.make_msgid()
    main_msg['Mime-version'] = '1.0'
    main_msg.preamble = 'Mime message\n'
    main_msg.epilogue = ''

    body_encoded = quopri.encodestring(message, 1)

    if len(attachments) != 0:
        main_msg['Content-type'] = 'Multipart/mixed'
        body_msg = email.Message.Message()
        body_msg.add_header('Content-type', 'text/plain')
        body_msg.add_header('Content-transfer-encoding',
                            'quoted-printable')
        body_msg.set_payload(body_encoded)
        main_msg.attach(body_msg)
        for attachment in attachments:
            content_type, ignored = mimetypes.guess_type(attachment)
            if content_type is None:
                content_type = 'application/octet-stream'
            contents_encoded = cStringIO.StringIO()
            attach_file = open(attachment, 'rb')
            main_type = content_type[:content_type.find('/')]
            if main_type == 'text':
                cte = 'quoted-printable'
                quopri.encode(attach_file, contents_encoded, 1)
            else:
                cte = 'base64'
                base64.encode(attach_file, contents_encoded)
            attach_file.close()

        sub_msg = email.Message.Message()
        sub_msg.add_header('Content-type', content_type, name=attachment)
        sub_msg.add_header('Content-transfer-encoding', cte)
        sub_msg.set_payload(contents_encoded.getvalue())
        main_msg.attach(sub_msg)

    else:
        main_msg['Content-type'] = 'text/plain'
        main_msg['Content-transfer-encoding'] = 'quoted-printable'
        main_msg.set_payload(body_encoded)

    smtp = smtplib.SMTP('server')  # placeholder SMTP host
    # sendmail expects a list of recipients, not a comma-joined string
    smtpfail = smtp.sendmail(msg_from, msg_to, main_msg.as_string())
    smtp.quit()
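For comparison, the modern Python 3 email.message.EmailMessage API (added
in 3.6, long after this thread) makes the multipart decision automatically:
a plain body stays text/plain until an attachment is added, which is
exactly the behaviour being hand-rolled above. A minimal sketch (addresses
are hypothetical):

```python
from email.message import EmailMessage

msg = EmailMessage()
msg["To"] = "recipient@example.com"   # hypothetical addresses
msg["From"] = "sender@example.com"
msg["Subject"] = "demo"
msg.set_content("plain text body")    # stays text/plain: no phantom attachment
assert msg.get_content_type() == "text/plain"

# Adding an attachment converts the message to multipart/mixed automatically:
msg.add_attachment(b"\x00\x01\x02", maintype="application",
                   subtype="octet-stream", filename="blob.bin")
assert msg.get_content_type() == "multipart/mixed"
```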
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: email modules and attachments that aren't there

2006-01-10 Thread Russell Bungay
Russell Bungay wrote:

> for attachment in attachments:
> 

>   sub_msg = email.Message.Message()
>   sub_msg.add_header('Content-type', content_type, name=attachment)
>   sub_msg.add_header('Content-transfer-encoding', cte)
>   sub_msg.set_payload(contents_encoded.getvalue())
>   main_msg.attach(sub_msg)

These lines should of course be within the for, not outside it.  Apologies.

Russell
-- 
http://mail.python.org/mailman/listinfo/python-list


Implied instance attribute creation when referencing a class attribute

2006-01-16 Thread Russell Warren
I just ran across a case which seems like an odd exception to either
what I understand as the "normal" variable lookup scheme in an
instance/object hierarchy, or to the rules regarding variable usage
before creation.  Check this out:

>>> class foo(object):
...   I = 1
...   def __init__(self):
... print self.__dict__
... self.I += 1
... print self.__dict__
...
>>> a=foo()
{}
{'I': 2}
>>> foo.I
1
>>> a.I
2
>>> del a.I
>>> a.I
1
>>> del a.I
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
AttributeError: I
>>> non_existent_var += 1
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
NameError: name 'non_existent_var' is not defined


In this case, 'self.I += 1' clearly has inserted a surprise
behind-the-scenes step of 'self.I = foo.I', and it is this which I find
interesting.

As I understand it, asking for self.I at this point should check
self.__dict__ for an 'I' entry, and if it doesn't find it, head on up
to foo.__dict__ and look for it.

So... I initially *thought* there were two possibilities for what would
happen with the 'self.I += 1':
  1. 'self.I += 1' would get a hold of 'foo.I' and increment it
  2. I'd get an AttributeError

Both were wrong.  I thought maybe an AttributeError because trying to
modify 'self.I' at that point in the code is a bit fuzzy... ie: am I
really trying to deal with foo.I (in which case, I should properly use
foo.I) or am I trying to reference an instance attribute named I (in
which case I should really create it explicitly first or get an error
as with the non_existent_var example above... maybe with 'self.I =
foo.I').

Python is obviously assuming the latter and is "helping" you by
automatically doing the 'self.I = foo.I' for you.  Now that I know this
I (hopefully) won't make this mistake again, but doing this seems
equivalent to taking my 'non_existent_var += 1' example above and
having the interpreter interpret as "oh, you must want to deal with an
integer, so I'll just create one for you with 'non_existent_var = 0'
first".  Fortunately this is not done, so why do it with the instance
attribute reference?

Does anyone have any solid reasoning behind the Python behavior?  It
might help drive it home more so than just taking it as "that's the way
it is" and remembering it.

It gets even more confusing for me because the behaviour could be
viewed as being opposite when dealing with mutable class members.  eg:

>>> class foo(object):
...   M = [1,2,3]
...   def __init__(self):
... self.M.append(len(self.M) + 1)
... print self.M
...
>>> a=foo()
[1, 2, 3, 4]
>>> foo.M
[1, 2, 3, 4]
>>> del a.M
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
AttributeError: 'foo' object attribute 'M' is read-only

By opposite I mean that with immutable objects, a sloppy self.I
reference doesn't get you to the base class object, whereas with a
mutable one you do get to the base object (although I do recognize that
in both cases if you just remember that the interpreter will always
stuff in a 'self.x = BaseClass.x' it works as expected in both the
immutable and mutable case).

After all that, I guess it boils down to me thinking that the code
*should* interpret the attempted instance modification with one of the
two possibilities I mentioned above (although after typing this I'm now
leaning more towards an AttributeError rather than allowing 'self.I' to
be synonymous with 'foo.I' if no local override).

Russ

PS: Apologies if I mangled the "proper" terminology for talking about
this... hopefully it makes sense.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Implied instance attribute creation when referencing a class attribute

2006-01-16 Thread Russell Warren
> I can see how this can be confusing, but I think the confusion here is
> yours, not Pythons ;)

This is very possible, but I don't think in the way you describe!

> self.I += 10 is an *assignment*. Like any assignment, it causes the
> attribute in question to be created

... no it isn't.  The += is an operator.  Look at the example I
included with non_existent_var above.  If that doesn't do it for you,
pop open a clean python shell and do this one:

>>> x += 2
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
NameError: name 'x' is not defined

Note that x doesn't exists and it does not create it.  You can't
normally operate on something before it is created - Python won't
create it for you (which is why I was surprised by the class attribute
behavior in the first post).

> If you write out the longhand for += it becomes totally obvious what
> is happening and why it makes sense:

Not true as above.  The longhand for 'self.I += 1' is 'self.I = self.I
+ 1', which normally needs self.I to exist due to the RHS of this.

> So your case 1 is actually exactly what is happening! Python is
> getting a hold of foo.I and incrementing it

Nope.  My case 1 would have the 'self.I += 1' modifying the actual
class attribute, not some new instance attribute and this is definitely
NOT happening.  Maybe my example was bad?  Try this one instead:

>>> class foo(object):
...   I = 1
...   def __init__(self):
... self.I += 123455
...
>>> a=foo()
>>> a.I
123456
>>> foo.I
1
>>> del a.I
>>> a.I
1

Note that we ended up binding a new "I" to the 'a' instance with the
'self.I += 1' statement, and it started with the value of 1 (the value
of the base class attribute).  I tried to make it clear in the example
by wiping out the local copy, which then reveals the base class
attribute when you go for it again.

The fact that there is a local I being made with the value of the base
class attribute says that Python is essentially adding the line 'self.I
= foo.I' as in the code below.

>>> class foo(object):
...   I = 1
...   def __init__(self):
... self.I = foo.I  # unnecessary since python seems to do it in the next line
... self.I += 123455
...
>>> a=foo()
>>> b=foo()
>>> c=foo()
>>> print c.I, foo.I
123456 1

For kicks I added the b and c creations to show that at no time did the
+= operator get a hold of the foo base class as you state.  It stayed
untouched at 1 the whole time.  To do that you need to reference foo
itself as in the following case:

>>> class foo(object):
...   I = 0
...   def __init__(self):
... foo.I += 1
... self.I = foo.I
...
>>> a=foo()
>>> b=foo()
>>> c=foo()
>>> print a.I, b.I, c.I, foo.I
1 2 3 3
>>> del a.I
>>> a.I
3

Here it of course *did* increment the base foo attribute since it was
directly referenced.  'a.I' stays as 1 here because I rebound a new
instance attribute I on top with a copy of the base foo.I value due to
it being immutable (a bit weird to use the same name, but I'm trying to
show something) and it is what is retrieved first by Python (local
dictionary first, if not found it goes to the base class).  When I
clear I from the local __dict__ with the del, you see that future
self.I references skip out to the base class attribute since there is
no instance I attribute anymore.

A bit of a sidetrack there... still curious why python decides to
auto-create the variable for you in this particular case.  Any other
takers?

Russ

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Implied instance attribute creation when referencing a class attribute

2006-01-16 Thread Russell Warren
D'oh... I just realized why this is happening.  It is clear in the
longhand as you say, but I don't think in the way you descibed it (or
I'm so far gone right now I have lost it).

  self.I += 1

is the same as

  self.I = self.I + 1

and when python tries to figure out what the 'self.I' is on the right
hand side, it of course ends up having to move up to the base class
foo.__dict__ because there is no 'I' in self.__dict__ yet.  So it ends
up effectively being:

  self.I = foo.I + 1

which explains where the "self.I = foo.I' that I was claiming was being
done magically comes from.

What my head was thinking was that the 'self.I' lookup would move up to
get foo.__dict__['I'], and that I would effectively get 'foo.I += 1',
but this is a bit of a brain fart and is just plain wrong.

I should have seen that earlier... oh well.  I'm happy that it is
perfectly clear where it comes from, now.  It still does look odd when
you do a simplistic comparison of the behaviour of 'x += 1' and 'self.I
+= 1', but I suppose that that's just the way the lookup scheme
crumbles.  An unfortunate (and rare?) quirk, I guess.
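The whole behaviour can be condensed into a few lines (Python 3 syntax
here, though Python 2 new-style classes behave identically):

```python
class Foo(object):
    i = 1                 # class attribute

    def bump(self):
        # 'self.i += 1' reads Foo.i (instance lookup falls through to the
        # class), then binds the result as a NEW instance attribute.
        self.i += 1

f = Foo()
f.bump()
assert f.i == 2 and Foo.i == 1   # class attribute untouched
del f.i                          # remove the instance shadow...
assert f.i == 1                  # ...and lookup falls back to the class
```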

It still might be nice were python to just block out this potential
confusion with an Exception... it seems that class vs instance
attribute referencing is confusing enough for people without having
this type of potential confusion lurking around the syntax.  It seems
like  such a simple thing, but to understand the outcomes requires
knowing how the name lookup scheme works, how mutable/immutable objects
are dealt with, and what the += keystroke-saver/macro operator is
actually doing.  That this is stuff that someone coding in python
should understand could certainly be argued, though...

Russ

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Implied instance attribute creation when referencing a class attribute

2006-01-16 Thread Russell Warren
Thanks for the additional examples, David (didn't see this before my
last post).  All of it makes sense now, including those examples.

Russ

-- 
http://mail.python.org/mailman/listinfo/python-list


How to convert arbitrary objects directly to base64 without initial string conversion?

2006-07-13 Thread Russell Warren
I've got a case where I want to convert binary blocks of data (various
ctypes objects) to base64 strings.

The conversion calls in the base64 module expect strings as input, so
right now I'm converting the binary blocks to strings first, then
converting the resulting string to base64.  This seems highly
inefficient and I'd like to just go straight from binary to a base64
string.

Here is the conversion we're using from object to string...

import ctypes
def ObjAsString(obj):
  sz = ctypes.sizeof(obj)
  charArray = ctypes.c_char * sz
  buf = charArray.from_address(ctypes.addressof(obj))
  return buf.raw[:sz]

The returned string can then be sent to base64 for conversion (although
we're actually using xmlrpc.Binary), but there is obviously some waste
in here.

import base64
b64 = base64.b64encode(ObjAsString(foo))

Is there a canned/pre-existing way to convert a block of memory to a
base64 string more efficiently?  I'd like to avoid writing my own
base64 conversion routine if possible.  Anyone have any good ideas?
Even a more efficient/less clunky way of converting an arbitrary object
to a string would be appreciated.

Thanks,
Russ

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How to convert arbitrary objects directly to base64 without initial string conversion?

2006-07-13 Thread Russell Warren
> Many functions that operate on strings also accept buffer objects as 
> parameters,
> this seems also be the case for the base64.encodestring function.  ctypes 
> objects
> support the buffer interface.
>
> So, base64.b64encode(buffer(ctypes_instance)) should work efficiently.

Thanks!  I have never used (or even heard of) the buffer objects.  I'll
check it out.
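A rough Python 3 equivalent of the same idea, for later readers: buffer()
is gone there, but ctypes objects still support the buffer protocol, so
memoryview (or plain bytes) plays the same role. A sketch with a
hypothetical structure:

```python
import base64
import ctypes

class Pair(ctypes.Structure):
    # hypothetical example structure
    _fields_ = [("x", ctypes.c_uint32), ("y", ctypes.c_uint32)]

p = Pair(1, 2)
# memoryview reads the object's memory directly; no intermediate string copy
encoded = base64.b64encode(memoryview(p))
assert base64.b64decode(encoded) == bytes(p)
```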

Russ

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How to convert arbitrary objects directly to base64 without initial string conversion?

2006-07-13 Thread Russell Warren
After some digging around it appears there is not a tonne of
documentation on buffer objects, although they are clearly core and
ancient... been sifting through some hits circa 1999, long before my
python introduction.

What I can find says that buffer is deprecated (Python in a Nutshell),
or non-essential/for-older-versions (Python documentation).

At least it no longer seems terribly weird to me that I never noticed
this built-in before... I got this from the python docs in reference to
buffer and others:

"Python programmers, trainers, students and bookwriters should feel
free to bypass these functions without concerns about missing something
important".

Is buffer safe to use?  Is there an alternative?

> ctypes objects support the buffer interface

How can you tell what objects support the buffer interface?  Is
anything visible at the python level, or do you need to dig into the C
source?

Regarding documentation, I assume the C PyBufferObject is the
underlying thing for the python-level buffer?  If so, is the best place
for docs on this ancient object to glean what I can from this link:
http://www.python.org/doc/1.5.2p2/api/bufferObjects.html ?

Any help is appreciated... I'd like to understand what I can about this
object if I'm to use it... I'm wary of nasty surprises.

Russ

-- 
http://mail.python.org/mailman/listinfo/python-list


Intermittent "permission denied" errors when using os.rename and a recently deleted path??

2006-07-26 Thread Russell Warren
I've been having a hard time tracking down a very intermittent problem
where I get a "permission denied" error when trying to rename a file to
something that has just been deleted (on win32).

The code snippet that gets repeatedly called is here:

  ...
  if os.path.exists(oldPath):
os.remove(oldPath)
  os.rename(newPath, oldPath)
  ...

And I get the permission denied exception on the os.rename line.
Somehow the rename target is still locked?  I don't get it.

I found a post that seemed to refer to precisely this problem:
http://groups.google.com/group/comp.lang.python/browse_frm/thread/496625ca3b0c3874/e5c19db11d8b6d4e?lnk=gst&q=os.remove+delay&rnum=1#e5c19db11d8b6d4e

However - this post describes a case where there are multiple threads
making use of other os calls.  I am running a single threaded
application and still getting this problem.  ie: the suggested fix does
not work for me.

I'm trying to see if implementing a "trap the exception and try again,
but not too many times" hack fix will do the trick, but I'm not a big
fan of this "solution", and at this point I'm not entirely certain it
will work because confirming that it *did* work is tough (it is very
difficult to repeatably create the problem).

Does anyone know of a real solution to this problem, or know what
exactly is happening so that I can work out a proper solution?

Thanks,
Russ

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Threads vs Processes

2006-07-26 Thread Russell Warren
> Another issue is the libraries you use. A lot of them aren't
> thread safe. So you need to watch out.

This is something I have a streak of paranoia about (after discovering
that the current xmlrpclib has some thread safety issues).  Is there a
list maintained anywhere of the modules that are aren't thread safe?

Russ

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Threads vs Processes

2006-07-26 Thread Russell Warren
Oops - minor correction... xmlrpclib is fine (I think/hope).  It is
SimpleXMLRPCServer that currently has issues.  It uses
thread-unfriendly sys.exc_value and sys.exc_type... this is being
corrected.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Intermittent "permission denied" errors when using os.rename and a recently deleted path??

2006-07-26 Thread Russell Warren
> Are you running a background file accessing tool like Google Desktop
> Search or an anti-virus application? If so, try turning them off as a test.

I'm actually running both... but I would think that once os.remove
returns that the file is actually gone from the hdd.  Why would either
application be blocking access to a non-existent file?

Of course, my thinking is obviously wrong since I do get the permission
problem... I will definitely try disabling those.  Now if only I could
reproducably repeat it to make testing easier. :(

Another thing is that I certainly do want the code to work in the
presence of such tools.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Intermittent "permission denied" errors when using os.rename and a recently deleted path??

2006-07-27 Thread Russell Warren
> Does it actually tell you the target is the problem? I see an
> "OSError: [Errno 17] File exists" for that case, not a permission error.
> A permission error could occur, for example, if GDS has the source open
> or locked when you call os.rename.

No it doesn't tell me the target is the issue... you are of course
right that it could be either.  I did some looking to see if/why GDS
would lock files at any time while scanning but didn't turn up anything
useful so far.  I'd be surprised if it did as that would be one heck of
an annoying design flaw.

Anyway - the retry-on-failure workaround seems to prevent it from
happening, although it still seems very hackish and I don't like it:

  ...
  if os.path.exists(path1): os.remove(path1)
  startTime = time.clock()
  while 1:
try:
   os.rename(path2, path1)
  break
except OSError:
  if (time.clock() - startTime) > MAX_RETRY_DURATION_s:
raise
  else:
time.sleep(0)
   ...

It feels very weird to have to verify a simple operation like this, but
if it works it works.
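Worth noting for anyone reading this later: Python 3.3 added os.replace,
which atomically overwrites an existing target on both POSIX and Windows,
removing the remove-then-rename window the retry loop above guards
against. A small sketch:

```python
import os
import tempfile

with tempfile.TemporaryDirectory() as d:
    target = os.path.join(d, "data.txt")
    with open(target, "w") as f:
        f.write("old contents")

    # write the replacement to a temp name, then swap it in atomically
    newpath = os.path.join(d, "data.txt.new")
    with open(newpath, "w") as f:
        f.write("new contents")
    os.replace(newpath, target)   # no os.remove(), no race window

    with open(target) as f:
        result = f.read()
```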

Russ

-- 
http://mail.python.org/mailman/listinfo/python-list


TypeError: 'module' object is not callable (newby question)

2006-08-14 Thread Charles Russell
Why does this work from the python prompt, but fail from a script?
How does one make it work from a script?

#! /usr/bin/python
import glob
# following line works from python prompt; why not in script?
files=glob.glob('*.py')
print files

Traceback (most recent call last):
   File "./glob.py", line 2, in ?
 import glob
   File "/home/cdr/python/glob.py", line 5, in ?
 files=glob.glob('*.py')
TypeError: 'module' object is not callable
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: TypeError: 'module' object is not callable (newby question)

2006-08-14 Thread Charles Russell
Marc 'BlackJack' Rintsch wrote:

> 
> Don't call your file `glob.py` because then you import this module and not
> the `glob` module from the standard library.
> 
> Ciao,
>   Marc 'BlackJack' Rintsch

Yes, thanks.  Renaming to myglob.py solved the problem. But why does the 
conflict not occur when the code is run interactively from the python 
prompt?  Somewhat related - I haven't found the magic word to invoke a 
.py script from the python prompt (like  the command "source" in csh, 
bash, tcl?)  "import" runs the script, but then complains that it is not 
a module.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: TypeError: 'module' object is not callable (newby question)

2006-08-14 Thread Charles Russell
John Machin wrote:

> 
> Contemplate the following:
> 
> C:\junk>type glob.py
> if __name__ == "__main__":
> print "*** Being run as a script ..."
> import glob
> print "glob was imported from", glob.__file__
> print "glob.glob is", type(glob.glob)
> print "glob.glob was imported from", glob.glob.__file__
> print "(glob.glob is glob) is", glob.glob is glob
> print "--- end of script"
> else:
> print "*** Aarrgghh!! I'm being imported as", __name__
> import glob
> print "glob was imported from", glob.__file__
> print "glob.glob is", type(glob.glob)
> print "glob.glob was imported from", glob.glob.__file__
> print "(glob.glob is glob) is", glob.glob is glob
> print "--- end of import"
> 

Thanks.  Another newby question:  __name__ and __file__ appear to be 
predefined variables.  To look up their meaning in the manual, is there 
some method less clumsy than grepping the whole collection of .html 
source files?  I can't find any comprehensive index.
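One quick way to inspect them interactively -- __file__ in particular
makes shadowing problems like the glob.py one above easy to spot (Python 3
print syntax):

```python
import glob

# __name__ is the module's import name; __file__ is where it was loaded from.
print(glob.__name__)   # -> 'glob'
print(glob.__file__)   # stdlib path; a local glob.py would show up here instead
```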
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: TypeError: 'module' object is not callable (newby question)

2006-08-14 Thread Charles Russell
Charles Russell wrote:

  But why does the
> conflict not occur when the code is run interactively from the python 
> prompt?  

Because, I now realize, I had not yet created glob.py when I tried that.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: TypeError: 'module' object is not callable (newby question)

2006-08-15 Thread Charles Russell
Charles Russell wrote:
  I haven't found the magic word to invoke a
> .py script from the python prompt (like  the command "source" in csh, 
> bash, tcl?) 

Seems to be execfile()
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: TypeError: 'module' object is not callable (newby question)

2006-08-15 Thread Charles Russell
Marc 'BlackJack' Rintsch wrote:

> Here's the index of the reference manual:
> 
>   http://docs.python.org/ref/genindex.html
> 
Thanks.  When I go up a level from there, I find a pointer to the index 
right at the bottom of the table of contents, which I had overlooked.
-- 
http://mail.python.org/mailman/listinfo/python-list


Funky file contents when os.rename or os.remove are interrupted

2006-10-10 Thread Russell Warren
I've got a case where I'm seeing text files that are either all null
characters, or are trailed with nulls due to interrupted file access
resulting from an electrical power interruption on the WinXP pc.

In tracking it down, it seems that what is being interrupted is either
os.remove(), or os.rename().  Has anyone seen this behaviour, or have
any clue what is going on?

On first pass I would think that both of those calls are single step
operations (removing/changing an entry in the FAT, or FAT-like thing,
on the HDD) and wouldn't result in an intermediate, null-populated,
step, but the evidence seems to indicate I'm wrong...

Any insight from someone with knowledge of the internal operations of
os.remove and/or os.rename would be greatly appreciated, although I
expect the crux may be at the os level and not in python.

Russ

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Funky file contents when os.rename or os.remove are interrupted

2006-10-11 Thread Russell Warren
Thanks, guys... this has all been very useful information.

The machine this is happening on is already running NTFS.

The good news is that we just discovered/remembered that there is a
write-caching option (in device manager -> HDD -> properties ->
Policies tab) available in XP.  The note right beside the
write-cache-enable checkbox says:

"This setting enables write caching to improve disk performance, but a
power outage or equipment failure might result in data loss or
corruption."

Well waddya know...  write-caching was enabled on the machine.  It is
now disabled and we'll be power-cycle testing to see if it happens
again.

Regarding the comment on journaling file systems, I looked into it and
it looks like NTFS actually does do journaling to some extent, and some
effort was expended to make NTFS less susceptible to the exact problem
I'm experiencing.  I'm currently hopeful that the corrupted files we've
seen are entirely due to the mistake of having write-caching enabled
(the default).

> Then, Windows has nothing to do with it, either. It calls the routines
> of the file system driver rather directly.

It looks like that is not entirely true... this write-caching appears
to sit above the file system itself.  In any case, it is certainly not
a Python issue!

One last non-python question... a few things I read seemed to vaguely
indicate that the journaling feature of NTFS is an extension/option.
Wording could also indicate a simple feature, though.  Are there
options you can set on your file system (aside from block size and
partition)?!  I've certainly never heard of that, but want to be sure.
I definitely need this system to be as crash-proof as possible.

Thanks again,
Russ

-- 
http://mail.python.org/mailman/listinfo/python-list


Recommended way to fix core python distribution issues in your own apps?

2006-06-19 Thread Russell Warren
I've got a case where I need to tweak the implementation of a default
python library due to what I consider to be an issue in the library.

What is the best way to do this and make an attempt to remain
compatible with future releases?

My specific problem is with the clock used in the threading.Event and
threading.Timer.  It currently uses time.time, which is affected by
changes in system time.  eg: if you change the system clock somehow at
some time (say, with an NTP broadcast) you may get a surprise in the
timing of your code execution.

What I do right now is basically this:

import sys
import time
import threading
if sys.platform == 'win32':
  threading._time = time.clock

in which case I'm simply forcing the internal clock used in the
Event/Timer code to use a time-independent performance timer rather
than the system time.

I figured this is a much better way to do it than snagging a private
copy of threading.py and making a direct change to it, but am curious
if anyone has a better way of doing this type of thing?  For example, I
have no way of guaranteeing that this hack will work come a change to
2.5 or later.
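As a postscript for modern readers: Python 3.3 later introduced
time.monotonic(), and current threading uses it internally, so this hack
became unnecessary. The property being sought, in miniature:

```python
import time

# monotonic() never goes backwards, even if the system clock is reset,
# which is exactly the property wall-clock time.time() lacks.
start = time.monotonic()
time.sleep(0.05)
elapsed = time.monotonic() - start
assert elapsed > 0.04
```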

Thanks,
Russ

-- 
http://mail.python.org/mailman/listinfo/python-list


Is Queue.Queue.queue.clear() thread-safe?

2006-06-22 Thread Russell Warren
I'm guessing no, since it skips down through any Lock semantics, but
I'm wondering what the best way to clear a Queue is then.

Esentially I want to do a "get all" and ignore what pops out, but I
don't want to loop through a .get until empty because that could
potentially end up racing another thread that is more-or-less blindly
filling it asynchronously.

Worst case I think I can just borrow the locking logic from Queue.get
and clear the deque inside this logic, but would prefer to not have to
write wrapper code that uses mechanisms inside the object that might
change in the future.
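For what it's worth, the "borrow the locking logic" version is only a few lines.  This is a sketch in Python 3 spelling (the module is Queue in 2.x), and it leans on q.mutex, q.queue, and q.unfinished_tasks, all implementation details that could change:

```python
import queue

def clear_queue(q):
    """Atomically discard everything currently in q."""
    with q.mutex:                    # the same lock Queue.get/put acquire
        dropped = len(q.queue)       # q.queue is the underlying deque
        q.queue.clear()
        # Keep join()/task_done() bookkeeping roughly consistent: the
        # dropped items will never be task_done()'d.
        q.unfinished_tasks = max(q.unfinished_tasks - dropped, 0)
        q.not_full.notify_all()      # clearing makes room for blocked put()s
```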

Also - I can of course come up with some surrounding architecture to
avoid this concern altogether, but a thread-safe Queue clear would do
the trick and be a nice and short path to solution.

If QueueInstance.queue.clear() isn't thread safe... what would be the
best way to do it?  Also, if not, why is the queue deque not called
_queue to warn us away from it?

Any other comments appreciated!

Russ

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: better Python IDE? Mimics Maya's script editor?

2006-06-22 Thread Russell Warren
Check out the Wing IDE - www.wingware.com .

As part of its general greatness it has a "debug probe" which lets you
execute code snippets on active data in mid-debug execution.

It doesn't have precisely what you are after... you can't (yet)
highlight code segments and say "run this, please", but I think it
might almost have what you want for general workflow improvement.

The main drawback is that it is a commercial product, albeit "cheap".
The extra drawback is that the debug probe feature requires the
professional version which is "less cheap", but still < $200.  Well
worth it for professional development IMO.

They have a great demo policy... you should check it out.  I tried
several different IDEs (I've become accustomed to using IDEs over
supe'd up text editors) and Wing was/is my favorite.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Is Queue.Queue.queue.clear() thread-safe?

2006-06-27 Thread Russell Warren
Thanks guys.  This has helped decipher a bit of the Queue mechanics for
me.

Regarding my initial clear method hopes... to be safe, I've
re-organized some things to make this a little easier for me.  I will
still need to clear out junk from the Queue, but I've switched it so
that least I can stop the accumulation of new data in the Queue while
I'm clearing it.  ie: I can just loop on .get until it is empty without
fear of a race, rather than needing a single atomic clear.
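The drain loop itself is then trivial (Python 3 spelling):

```python
import queue

def drain(q):
    """Pop everything currently in q, returning items in arrival order.
    Safe once producers are stopped; no atomic clear required."""
    items = []
    while True:
        try:
            items.append(q.get_nowait())
        except queue.Empty:
            return items
```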

My next Queue fun is to maybe provide the ability to stuff things back
on the queue that were previously popped, although I'll probably try
and avoid this, too (maybe with a secondary "oops" buffer).

If curious why I want stuff like this, I've got a case where I'm
collecting data that is being asynchronously streamed in from a piece
of hardware.  Queue is nice because I can just have a collector thread
running and stuffing the Queue while other processing happens on a
different thread.  The incoming data *should* have start and stop
indications within the stream to define segments in the stream, but
stream/timing irregularities can sometimes either cause junk, or cause
you to want to rewind the extraction a bit (eg: in mid stream-assembly
you might realize that a stop condition was missed, but can deduce
where it should have been).  Fun.

Russ

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: static object

2007-01-03 Thread Russell Owen
In article <[EMAIL PROTECTED]>,
 meelab <[EMAIL PROTECTED]> wrote:

> Dear All,
> 
> I am looking for a way to create a "static object" or a "static class" -
> terms might be inappropriate - having for instance:
> 
> class StaticClass:
> .
> .
> 
> and then
> staticObject1 = StaticClass()
> staticObject2 = StaticClass()
> 
> so that staticObject1 and staticObject2 refers exactly to the same
> instance of object.

Personally I do the following (in its own module). There may be a better 
way, but this is simple and it works:

_theSingleton = None

def getSingleton():
    global _theSingleton
    if not _theSingleton:
        _theSingleton = _Singleton()
    return _theSingleton

class _Singleton:
    def __init__(self, ...):
        ...
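Another common spelling keeps everything inside the class by overriding __new__ (a sketch; requires a new-style class):

```python
class Singleton(object):
    _instance = None

    def __new__(cls):
        # Create the single instance on first call; return it ever after.
        if cls._instance is None:
            cls._instance = super(Singleton, cls).__new__(cls)
        return cls._instance
```

Singleton() then always hands back the same object, and callers can't accidentally bypass the accessor function.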


-- Russell
-- 
http://mail.python.org/mailman/listinfo/python-list


wxPython Conventions

2006-01-30 Thread Jared Russell
I've recently decided to try my hand at GUI programming with wxPython,
and I've got a couple questions about the general conventions regarding
it.

To mess around with it, I decided to create a small app to check my
Gmail.  I want something that will just sit in my system tray checking
for new emails every ten minutes or so.  As such, I have no need for an
actual window anywhere.  So I'm wondering if I should still use a Frame
or not.  From playing around with it, it seems it's unnecessary, but
I'm admittedly unfamiliar with what would be considered proper.

My other question involved the proper location of specific functions.
Take for instance the functions meant for logging in and actually
checking for new email.  Would it be better to put them in my App
class, my Frame class (if the answer to the above question is that yes,
I should use a Frame regardless), or in an entirely separate class?  To
me, it seems most natural to put them in the App class, but I'm not
sure if it would be better to avoid clutter and stick them somewhere
else.  And if I would stick them in some other class, where should I
keep the reference to the instance of that class?  In my App or Frame?

Like I said, I'm just beginning my experience with wxPython, so any
help would be appreciated.  I've looked through all the demos and
searched the group, but nothing else seems to pertain to my specific
questions.

Jared

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: wxPython Conventions

2006-02-05 Thread Jared Russell
Thanks for all the replies.  I'm admittedly new to GUI programming, so
I'm making sure to read up on the MVC pattern and related things like
the observer pattern.  I appreciate the help.

Jared

-- 
http://mail.python.org/mailman/listinfo/python-list


Finding the public callables of self

2006-02-09 Thread Russell Warren
Is there any better way to get a list of the public callables of self
other than this?

myCallables = []
classDir = dir(self)
for s in classDir:
  attr = self.__getattribute__(s)
  if callable(attr) and (not s.startswith("_")):
myCallables.append(s) #collect the names (not funcs)

I don't mean a shorter list comprehension or something that just drops
the line count, but whether or not I need to go at it through dir and
__getattribute__.  This seems a bit convoluted and with python it often
seems there's something already canned to do stuff like this when I do
it.  At first I thought self.__dict__ would do it, but callable methods
seem to be excluded so I had to resort to dir, and deal with the
strings it gives me.

Thanks,
Russ

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Finding the public callables of self

2006-02-09 Thread Russell Warren
> import inspect
> myCallables = [name for name, value in inspect.getmembers(self) if not
> name.startswith('_') and callable(value)]

Thanks.  I forgot about the inspect module.  Interestingly, you've also
answered my question more than I suspect you know!  Check out the code
for inspect.getmembers():

def getmembers(object, predicate=None):
"""Return all members of an object as (name, value) pairs sorted by
name.
Optionally, only return members that satisfy a given predicate."""
results = []
for key in dir(object):
value = getattr(object, key)
if not predicate or predicate(value):
results.append((key, value))
results.sort()
return results

Seems familiar!  The fact that this is using dir(), getattr(), and
callable() seems to tell me there is no better way to do it.  I guess
my method wasn't as indirect as I thought!

And thanks for the reminder about getattr() instead of
__getattribute__() and other streamlining tips.
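For the record, you can also pass callable as the predicate and let getmembers do that part of the filtering (a quick sketch):

```python
import inspect

class Demo(object):
    def pub(self):
        return "hi"

    def _priv(self):
        pass

    data = 42   # not callable, so the predicate filters it out

# getmembers applies the predicate to each value; the name check still
# handles the leading-underscore convention.
names = [name for name, value in inspect.getmembers(Demo(), callable)
         if not name.startswith("_")]
```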

Russ

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: absolute removal of '\n' and the like

2006-02-10 Thread Russell Blau

"S Borg" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]
>  If I have a string, what is the strongest way to assure the
> removal of any line break characters?
>
> Line break characters must always be the last character in a line, so
> would
> this:str = linestring[:-1]
>
>  work?

Er, yes, if you don't mind (a) mangling any string that *doesn't* have a
newline as the last character, and (b) messing up any subsequent part of
your program that tries to use the built-in str() function (because you just
reassigned that name to something else).

I'd suggest:

foo = linestring.rstrip("\n")

You can also add to the quoted string any other characters you want to have
stripped; for example,

foo = linestring.rstrip("\n\r\t")

Or if you want to strip off *all* whitespace characters, just leave it out:

foo = linestring.rstrip()
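To make the differences concrete:

```python
line = "some text\r\n"

# rstrip's argument is a *set of characters* to strip, not a suffix string:
only_nl = line.rstrip("\n")     # the "\r" survives
nl_cr = line.rstrip("\n\r")     # both stripped, in any order
default = line.rstrip()         # no argument: all trailing whitespace
```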

Russ




-- 
http://mail.python.org/mailman/listinfo/python-list


Profiling/performance monitoring in win32

2006-02-17 Thread Russell Warren
The application we're working on at my company currently has about
eleventy billion independent python applications/process running and
talking to each other on a win32 platform.  When problems crop up and
we have to drill down to figure out who is to blame and how, we
currently are using the (surprisingly useful) perfmon tool that comes
with Windows.

Perfmon does a pretty decent job, but is pretty raw and sparse on the
usability front.  Does anyone know of any alternative Windows (not CPU)
profilers out there?  Commercial packages are OK.  Of course, one that
can introspect its way into what a Python app is doing would be a bonus
(but truthfully I'm probably just adding that last bit to make this
post a bit more appropriate for c.l.py :) ).

Thanks!

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Lisp-like macros in Python?

2007-05-01 Thread Chris Russell
On May 1, 5:10 pm, sturlamolden <[EMAIL PROTECTED]> wrote:
> Hello
>
> The Lisp crowd always brags about their magical macros. I was
> wondering if it is possible to emulate some of the functionality in
> Python using a function decorator that evals Python code in the stack
> frame of the caller. The macro would then return a Python expression
> as a string. Granted, I know more Python than Lisp, so it may not work
> exactly as you expect.

The 'magical macros' of Lisp are executed at compile time, allowing
arbitrary code transformations without any loss of run-time
efficiency.  If you want to hack this together in Python you should
write a preprocessor that allows Python code *to be run in future*,
interspersed with Python code *to be executed immediately*, and
replaces the executed code with its output.  The immediately executed
code should be able to make use of any existing code or definitions
that are marked as to-be-compiled in the future.

This should be quite doable in Python (I think; I haven't really
looked at it) because it has a REPL and everything that implies, but
you'd have to implement Lispy macros as some kind of def_with_macros
which immediately produces a string equivalent to the macro-expanded
function definition and then evaluates it.
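A very crude sketch of that "produce the expanded source text, then evaluate it" idea; def_with_macros here is purely hypothetical and captures none of the hygiene or power of real Lisp macros:

```python
def def_with_macros(name, expr):
    """Hypothetical 'macro expansion': build the function's source text
    up front, then exec it so later code sees an ordinary function."""
    src = "def %s(x):\n    return %s\n" % (name, expr)
    namespace = {}
    exec(src, namespace)      # evaluate the expanded definition once
    return namespace[name]

# Expansion (the string building) happens here, not on each call:
square = def_with_macros("square", "x * x")
```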

Good luck in doing anything useful with these macros in a language
with non-uniform syntax however.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: The meaning of a = b in object oriented languages

2007-09-17 Thread Russell Wallace
Summercool wrote:
> so most or all object oriented language do assignment by reference?
> is there any object oriented language actually do assignment by
> value?  I kind of remember in C++, if you do
> 
> Animal a, b;
> 
> a = b will actually be assignment by value.
> while in Java, Python, and Ruby, there are all assignment by
> reference.  ("set by reference")
> 
> Is that the case: if a is an object, then b = a is only copying the
> reference?

Yes, your understanding is exactly correct; C++ will assign by value 
unless you explicitly use pointers, but the other languages will assign 
by reference (except for primitive types).
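In Python terms that looks like:

```python
a = [1, 2, 3]
b = a                  # copies the reference, not the list
b.append(4)            # mutation is visible through both names
same_object = a is b   # two names, one object
```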

-- 
"Always look on the bright side of life."
To reply by email, replace no.spam with my last name.
-- 
http://mail.python.org/mailman/listinfo/python-list


logging module and trailing newlines

2007-10-02 Thread Russell Warren
I was just setting up some logging in a make script and decided to
give the built-in logging module a go, but I just found out that the
base StreamHandler always puts a newline at the end of each log.
There is a comment in the code that says  "The record is then written
to the stream with a trailing newline [N.B. this may be removed
depending on feedback]"... I guess there wasn't the feedback to drive
the change.

All I'm after is the ability to log things like...

Compiling 'shrubbery.c'...  [DONE]

where the "[DONE]" was added later in time than the "Compiling...",
and the output goes to both stdout and to a log file.  ie: I want to
tee my print statements and keep the ability to skip the trailing
newline.  I had rolled my own primitive version, then decided to try
the logging module for kicks.

Anyone have a suggestion on how to get logging to work like this?  Or
know of a way to tee in Windows without forcing other users to install
a tee package?

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: logging module and trailing newlines

2007-10-03 Thread Russell Warren
Both are very good responses... thanks!  I had forgotten the ease of
"monkey-patching" in python and the Stream class is certainly cleaner
than the way I had been doing it.

On Oct 3, 3:15 am, Peter Otten <[EMAIL PROTECTED]> wrote:
> Russell Warren wrote:
> > All I'm after is the ability to log things like...
>
> > Compiling 'shrubbery.c'...  [DONE]
>
> > where the "[DONE]" was added later in time than the "Compiling...", and
> > the output goes to both stdout and to a log file.  ie: I want to tee my
> > print statements and keep the ability to skip the trailing newline.  I had
> > rolled my own primitive version than decided to try the logging module for
> > kicks.
>
> > Anyone have a suggestion on how to get logging to work like this?  Or know
> > of a way to tee in Windows without forcing other users to install a tee
> > package?
>
> (1) Logging
>
> If you are too lazy to subclass you can monkey-patch:
>
> >>> import logging
> >>> def emit(self, record):
> ...     msg = self.format(record)
> ...     fs = "%s" if getattr(record, "continued", False) else "%s\n"
> ...     self.stream.write(fs % msg)
> ...     self.flush()
> ...
> >>> logging.StreamHandler.emit = emit
> >>> continued = dict(continued=True)
> >>> logging.error("Compiling... ", extra=continued); logging.error("[Done]")
>
> ERROR:root:Compiling... ERROR:root:[Done]
>
> (2) Teeing
>
> "Primitive", but should work:
>
> >>> class Stream(object):
> ...     def __init__(self, *streams):
> ...         self.streams = streams
> ...     def write(self, s):
> ...         for stream in self.streams:
> ...             stream.write(s)
> ...     def flush(self):
> ...         for stream in self.streams:
> ...             stream.flush()
> ...
> >>> import sys
> >>> stream = Stream(sys.stdout, sys.stderr)
> >>> print >> stream, "Compiling...",
>
> Compiling...Compiling...>>>
>
> I'd probably go with the latter.
>
> Peter


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Looking for a good Python environment

2007-11-11 Thread Russell Warren
> While we're at it, do any of these debuggers implement a good way to
> debug multi-threaded Python programs?

Wing now has multi-threaded debugging.

I'm a big Wing (pro) fan.  To be fair, my huge IDE evaluation was
approx 2 years ago... at the time, as far as full-featured
professional IDEs go, it was IMO really only Wing and Komodo who
could compete.  The others were
left in the dust.  Unfortunately both cost money, but it became clear
that at least in this instance you get what you pay for.  Not a big
deal for me because as far as professional development costs the cost
is ridiculously low and I use it professionally, but I could see
balking at the cost if you're strictly a hobbyist... although I would
pay, as I'd be lost without my Wing.  At the time, I much preferred
Wing to Komodo, but haven't tried Komodo more than sparingly since
then.  My bet is that the situation would still be similar since Wing
has done nothing but get better over time.  The support crew at Wing
are great, too... the mailing list is excellent and the Wing
developers typically respond very quickly to any support requests, and
even feature requests (I've had a few things added due to the mailing
list).

The biggest missing feature in Wing at the moment is integrating GUI
development.  If you are into that, you may want to look elsewhere.
Any GUI stuff I do I use wxPython and after starting with a template
builder I just manually code the GUIs... painful at times, especially
when you just want to whip up something small, but I've gotten used to
it.  Now that I type this, though, I think I'll go looking for what's
new!  Maybe Boa is less buggy now?  Hmm.

Prior to taking on my "find the ultimate IDE" quest I was using SPE
and it was free and quite decent, just not comparable to Wing.

http://pythonide.stani.be/

A quick look at the current state of SPE shows that it now has multi-
threaded debugging via WinPDB (what I used to use for debugging thread
issues).  Interesting.  Worth a look to see if it is integrated well.

-- 
http://mail.python.org/mailman/listinfo/python-list


Tkinter weirdness on Windows

2007-12-15 Thread Russell Blau
I have some Tkinter programs that I run on two different machines.  On Machine 
W, which runs Python 2.5.1 on Windows XP, these programs run just fine.  On 
Machine H, which runs Python 2.5.1 on Windows XP, however, the same programs 
crash regularly.  The crashes are not Python exceptions, but rather are 
reported by Windows as errors in pythonw.exe.  (Of course, the error messages 
themselves contain absolutely no useful information.)  This happens whether I 
run them from the command prompt or from IDLE (although IDLE itself never 
crashes).  Further, the crashes occur at unpredictable times; sometimes the 
program will crash almost immediately upon startup, while at other times it 
will run for a while and then crash.
 
A couple of other points that may or may not have something to do with it.  (1) 
Both programs use the threading module to launch new threads from the GUI.  (2) 
 Machine H, but not Machine W, also has Python 2.2 installed on it.
 
I do recall seeing a message at some point that suggested that conflicts in the 
MS VC++ runtime DLLs might cause this sort of problem, but I haven't been able 
to find that information through a search, so I'm not sure which particular 
DLLs to look for.  Any help in tracking down the source of this problem would 
be appreciated.
 
Russ
 
-- 
http://mail.python.org/mailman/listinfo/python-list

Re: Python; jump to a concrete line

2007-12-20 Thread Russell Blau
"Horacius ReX" <[EMAIL PROTECTED]> wrote in message 
news:[EMAIL PROTECTED]
> Hi, sorry but after looking for information, I still did not get how,
> when reading a text file in python, can one jump to a concrete line
> and then read the different data (separated by spaces). In each line
> there is different number of columns so sometimes i get kind of "index
> out" error. Is there a better way to read the different data on each
> row and avoiding to know the exact number of columns ?

Have you considered using the file.readlines() method?
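A minimal sketch (the file name and contents here are invented):

```python
import os
import tempfile

# Write a small throwaway file so the example is self-contained.
path = os.path.join(tempfile.mkdtemp(), "data.txt")
with open(path, "w") as f:
    f.write("a b c\n1 2\nx y z w\n")

with open(path) as f:
    lines = f.readlines()     # one string per line, newline included

fields = lines[2].split()     # jump straight to line 3...
# ...and split() with no argument copes with any number of columns.
```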

Russ



-- 
http://mail.python.org/mailman/listinfo/python-list


Python/Tkinter DLL conflicts on Windows

2007-12-26 Thread Russell Blau
I have some Tkinter programs that I run on two different machines.  On 
Machine W, which runs Python 2.5.1 on Windows XP, these programs run fine. 
On Machine H, which runs Python 2.5.1 on Windows XP, however, the same 
programs crash regularly.  The crashes are not Python exceptions, but rather 
are reported by Windows as errors in pythonw.exe.  (Of course, the error 
messages themselves contain absolutely no useful information.)  This happens 
whether I run them from the command prompt or from IDLE (although IDLE 
itself never crashes).  Further, the crashes occur at unpredictable times; 
sometimes the program will crash almost immediately upon startup, while at 
other times it will run for a while and then crash.

Also, I'm not sure whether this has anything to do with my problem, but 
Machine H also has Python 2.2 installed on it, while Machine W does not.

I recall seeing a message at some point that suggested that conflicts in the 
MS VC runtime DLLs might cause this sort of problem, but I haven't been able 
to find that information through a search, so I'm not sure which particular 
DLLs to look for.  Any help in tracking down the source of this problem 
would be appreciated.

Russ



-- 
http://mail.python.org/mailman/listinfo/python-list


Speed of shutil.copy vs os.system("copy src dest") in win32

2006-04-26 Thread Russell Warren
I just did a comparison of the copying speed of shutil.copy against the
speed of a direct windows copy using os.system.  I copied a file that
was 1083 KB.

I'm very interested to see that the shutil.copy copyfileobj
implementation of hacking through the file and writing a new one is
significantly faster... any clue as to why this is?  I figure I'm
missing something here.

Does os.system launch a cmd shell every time?

>>> import timeit
>>> timeit.Timer(stmt=r'shutil.copy(r"c:\windows\ntbtlog.txt", r"c:\temp")', setup="import shutil").repeat(repeat=5, number=100)
[0.99285104671434965, 0.68337121058721095, 0.84528340892575216,
0.87780765432398766, 0.8709894693311071]
>>> timeit.Timer(stmt=r'os.system(r"copy c:\windows\ntbtlog.txt c:\temp")', setup="import os").repeat(repeat=5, number=100)
[2.8546278926514788, 2.3763950446300441, 2.609580241377,
2.4392499605455669, 2.4446956247265916]

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: win32com short path name on 2k

2006-04-26 Thread Russell Warren
I've been driven crazy by this type of thing in the past.  In my case
it was with the same application (not two like you), but on different
machines, with all supposedly having the same OS load.  In some cases I
would get short path names and in others I would get long path names.
I could never figure it out any logical explanation for the behaviour
so I just worked around it (*cough* hack *cough*).

To help you generally get around it, you may not know about these two
functions (both in kernel32)...

GetShortPathName - http://tinyurl.com/nxzkl
GetLongPathName - http://tinyurl.com/r4ey4

You can work out a scheme with these where it doesn't matter what mood
Windows is in when you ask it for a path.

In my case I had another big problem, and that was that it would also
arbitrarily decide to change the case of critical paths.. eg:
"C:\Windows" on one machine, and "C:\windows" on another.  That also
drove me bonkers and resulted in some lost years/hair.

-- 
http://mail.python.org/mailman/listinfo/python-list


Popping from the middle of a deque + deque rotation speed

2006-04-28 Thread Russell Warren
Does anyone have an easier/faster/better way of popping from the middle
of a deque than this?

class mydeque(deque):
  def popmiddle(self, pos):
self.rotate(-pos)
ret = self.popleft()
self.rotate(pos)
return ret

I do recognize that this is not the intent of a deque, given the
clearly non-"double-ended" nature.  I'm using a deque in a place where
99.999% of the time it will be a FIFO, but occasionally I will want to
pop from the middle.

I initially wrote that thinking that the rotate duration would be
independent of the rotation distance, but...

>>> import timeit
>>> s = "from collections import deque; d = deque(xrange(1000000))"
>>> timeit.Timer(stmt="d.rotate(1)", setup = s).timeit(number=100000)
0.1372316872675583
>>> timeit.Timer(stmt="d.rotate(1000)", setup = s).timeit(number=100000)
3.5050192133357996
>>> timeit.Timer(stmt="d.rotate(10000)", setup = s).timeit(number=100000)
32.756590851630563
>>> timeit.Timer(stmt="d.rotate(100000)", setup = s).timeit(number=100000)
325.59845064107299
>>> timeit.Timer(stmt="d.rotate(999999)", setup = s).timeit(number=100000)
0.14491059617921564

Boy was I wrong.  Given that it scales linearly it looks like it
cut-pastes the rotation an element at a time!  At least it recognizes
the shortest rotation path, though.

On first guess of how the deque is implemented I would have thought
that rotation could be achieved simply by diddling some pointers, but I
could see how that would mess with popping efficiency (seems you'd have
to remap memory in the event of a pop after rotation).  Worst case I
figured a rotate would just do a single shot memory remapping of the
deque contents so that the speed was the same regardless of rotation
size...

My guessing/figuring skills clearly need some work.

What's up with this deque rotation?  If I were to hazard one more guess
I'd think it is trying to conserve transient memory usage during
rotation... in my (poor) mental scheme it seems that cutting/relocating
could take 50% more memory than the deque itself for a full rotation.

I should stop guessing.  Or at least figure out how to find the source
code for the deque implementation...

Should I avoid using deques with large iterables?

Thanks,
Russ

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Popping from the middle of a deque + deque rotation speed

2006-05-01 Thread Russell Warren
Thanks for the responses.

> It seems to work with my Python2.4 here.  If you're
> interested in efficiency, I'll leave their comparison as an
> exercise to the reader... :)

Ok, exercise complete! :)  For the record, they are pretty much the
same speed...

>>> s = """
... from collections import deque
... class mydeque(deque):
...   def popmiddle(self, pos):
...     self.rotate(-pos)
...     ret = self.popleft()
...     self.rotate(pos)
...     return ret
... d = mydeque(xrange(1000000))
... """
>>> timeit.Timer(stmt="x=d.popmiddle(1000)", setup = s).timeit(number=100000)
5.4620059253340969
>>> s2 = """
... from collections import deque
... class mydeque(deque):
...   def popmiddle(self, pos):
...     ret = self[pos]
...     del(self[pos])
...     return ret
... d = mydeque(xrange(1000000))
... """
>>> timeit.Timer(stmt="x=d.popmiddle(1000)", setup = s2).timeit(number=100000)
5.3937888754018104

Thanks for the alternative solution.

Russ

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Popping from the middle of a deque + deque rotation speed

2006-05-01 Thread Russell Warren
> So does the speed of the remaining 0.001 cases really matter?  Note
> that even just indexing into a deque takes O(index) time.

It doesn't matter as much, of course, but I was looking to make every
step as efficient as possible (while staying in python).

As to indexing into a deque being O(index)... I didn't realize that.
It is certainly something to keep in mind, though... looping through
the contents of a deque would obviously be a bad idea with this being
the case!  I wonder if the generator for the deque helps reduce this?
Will check later.

Proof of the O(n) for indexing into a deque (not that I doubted Tim #2!
:)...

>>> import timeit
>>> s = "from collections import deque; d = deque(xrange(1000000))"
>>> timeit.Timer(stmt="x=d[1]", setup = s).timeit(number=100000)
0.14770257113683627
>>> timeit.Timer(stmt="x=d[100000]", setup = s).timeit(number=100000)
1.4016418287799155

Russ

-- 
http://mail.python.org/mailman/listinfo/python-list


Is crawling the stack "bad"? Why?

2008-02-24 Thread Russell Warren
I've got a case where I would like to know exactly what IP address a
client made an RPC request from.  This info needs to be known inside
the RPC function.  I also want to make sure that the IP address
obtained is definitely the correct one for the client being served by
the immediate function call.  That is kind of dumb/obvious to say, but
I do just to highlight that it could be a problem for an RPC server
allowing multiple simultaneous connections on multiple threads.  ie: I
can't set some simple "current_peer_info" variable when the connection
is made and let the RPC function grab that value later since by the
time it does it could easily be wrong.

In order to solve this I toyed with a few schemes, but have (so far)
settled on crawling up the stack from within the RPC call to a point
where the precise connection info that triggered the RPC call to run
could be determined.  This makes sure (I think!) that I get the exact
connection info in the event of a lot of simultaneous executions on
different threads.  It seems hackish, though.  I frequently find that
I take the long way around to do something only to find out later that
there is a nice and tight pythonic way to get it done.  This seems
like it might be one of those cases and the back of my mind keeps
trying to relegate this into the realm of cheat code that will cause
me major pain later.  I can't stop thinking of the old days of slapping
gotos all over code to fix something "quickly" rather than
restructuring properly.  Crawling around the stack in non-debugger
code always seems nasty to me, but it sure seems to work nicely in
this case...

To illustrate this scheme I've got a short program using
SimpleXMLRPCServer to do it.  The code is below.  If you run it you
should get an output something like:

RPC call came in on: ('127.0.0.1', 42264)

Does anyone have a better way of doing this?  Anyone want to warn me
off of crawling the stack to get this type of info?  The docstring for
sys._getframe already warns me off by saying "This function should be
used for internal and specialized purposes only", but without
providing any convincing argument why that is the case.  I'd love to
hear a reasonable argument... the only thing I can think of is that it
starts dipping into lower level language behavior and might cause
problems if your aren't careful.  Which is almost as vague as "for
internal and specialized purposes only".

I'm very curious to hear what you python wizards have to say.



import SimpleXMLRPCServer, xmlrpclib, threading, sys

def GetCallerNameAndArgs(StackDepth = 1):
  """This function returns a tuple (a,b) where:
a = The name of the calling function
b = A dictionary with the arg values in order
  """
  f = sys._getframe(StackDepth + 1) #+1 to account for this call
  callerName = f.f_code.co_name
  #get the arg count for the frame...
  argCount = f.f_code.co_argcount
  #get a tuple with the local vars in the frame (puts the args first)...
  localVars = f.f_code.co_varnames
  #now get the tuple of just the args...
  argNames = localVars[:argCount]
  #now to make a dictionary of args and values...
  argDict = {}
  for key in argNames:
argDict[key] = f.f_locals[key]
  return (callerName, argDict)

def GetRpcClientConnectionInfo():
  #Move up the stack to the right point to figure out client info...
  requestHandler = GetCallerNameAndArgs(4)[1]["self"]
  usedSocket = requestHandler.connection
  return str(usedSocket.getpeername())

def StartSession():
  return "RPC call came in on: %s" % GetRpcClientConnectionInfo()

class DaemonicServerLaunchThread(threading.Thread):
def __init__(self, RpcServer, **kwargs):
threading.Thread.__init__(self, **kwargs)
self.setDaemon(1)
self.server = RpcServer
def run(self):
self.server.serve_forever()

rpcServer = SimpleXMLRPCServer.SimpleXMLRPCServer(("", 12390), \
logRequests = False)
rpcServer.register_function(StartSession)
slt = DaemonicServerLaunchThread(rpcServer)
slt.start()

sp = xmlrpclib.ServerProxy("http://localhost:12390")
print sp.StartSession()
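For contrast, one stash that is safe under threading is threading.local; a stripped-down sketch of the idea outside any real RPC machinery (handle_request and rpc_responder are made-up stand-ins):

```python
import threading

# Thread-local storage: each thread sees only the `peer` it stored itself,
# so concurrent request handlers can't clobber each other.
_request_info = threading.local()

def handle_request(client_addr):
    """Stand-in for a per-connection request handler."""
    _request_info.peer = client_addr
    return rpc_responder()

def rpc_responder():
    # No stack crawling: just read this thread's stashed value.
    return "RPC call came in on: %s" % (_request_info.peer,)
```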
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Is crawling the stack "bad"? Why?

2008-02-24 Thread Russell Warren
Argh... the code wrapped... I thought I made it narrow enough.  Here
is the same code (sorry), but now actually pasteable.

---

import SimpleXMLRPCServer, xmlrpclib, threading, sys

def GetCallerNameAndArgs(StackDepth = 1):
  """This function returns a tuple (a,b) where:
a = The name of the calling function
b = A dictionary with the arg values in order
  """
  f = sys._getframe(StackDepth + 1) #+1 to account for this call
  callerName = f.f_code.co_name
  #get the arg count for the frame...
  argCount = f.f_code.co_argcount
  #get a tuple with the local vars in the frame (args first)...
  localVars = f.f_code.co_varnames
  #now get the tuple of just the args...
  argNames = localVars[:argCount]
  #now to make a dictionary of args and values...
  argDict = {}
  for key in argNames:
argDict[key] = f.f_locals[key]
  return (callerName, argDict)

def GetRpcClientConnectionInfo():
  #Move up the stack to the location to figure out client info...
  requestHandler = GetCallerNameAndArgs(4)[1]["self"]
  usedSocket = requestHandler.connection
  return str(usedSocket.getpeername())

def StartSession():
  return "RPC call came in on: %s" % GetRpcClientConnectionInfo()

class DaemonicServerLaunchThread(threading.Thread):
def __init__(self, RpcServer, **kwargs):
threading.Thread.__init__(self, **kwargs)
self.setDaemon(1)
self.server = RpcServer
def run(self):
self.server.serve_forever()

rpcServer = SimpleXMLRPCServer.SimpleXMLRPCServer(("", 12390), \
logRequests = False)
rpcServer.register_function(StartSession)
slt = DaemonicServerLaunchThread(rpcServer)
slt.start()

sp = xmlrpclib.ServerProxy("http://localhost:12390";)
print sp.StartSession()
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Is crawling the stack "bad"? Why?

2008-02-24 Thread Russell Warren
> That is just madness.

What specifically makes it madness?  Is it because sys._getframe is "for
internal and specialized purposes only"? :)
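For reference, the whole trick rests on sys._getframe(n), which returns the frame object n levels up the current thread's call stack. A minimal sketch of that mechanism (Python 3 syntax, unlike the 2008-era code in this thread; the names are made up for illustration):

```python
import sys

def who_called_me():
    # Frame 0 is who_called_me itself; frame 1 is its caller.
    caller = sys._getframe(1)
    return caller.f_code.co_name

def some_handler():
    return who_called_me()

result = some_handler()
print(result)  # some_handler
```

The leading underscore is the point of contention: CPython documents sys._getframe as an implementation detail, so other interpreters are free not to provide it.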

> The incoming ip address is available to the request handler, see the
> SocketServer docs

I know... that is exactly where I get the address, just in a mad way.

> Write a request handler that stashes that info somewhere that rpc
> responders can access it in a sane way.

That is exactly where I started (creating my own request handler,
snagging the IP address and stashing it), but I couldn't come up with
a stash location that would work for a threaded server.  This is the
problem I was talking about with the "current_peer_info" scheme.  How
is the RPC responder function supposed to know what is the right
stash, given that when threaded there could be multiple stashes at a
time?  The IP needs to get down to the exact function execution that
is responding to the client... how do I do that?

I had my options as:

1) stash the IP address somewhere where the RPC function could get it
2) pass the IP down the dispatch chain to be sure it gets to the
target

I couldn't come up with a way to get 1) to work.  Then, trying to
accomplish 2) I reluctantly started messing with different schemes
involving my own versions of do_POST, _marshaled_dispatch, and
_dispatch in order to pass the IP directly down the stack.  After some
pain at this (those dispatches are weird) I decided it was way too
much of a hack.  Then I thought "why not go up the stack to fetch it
rather than trying to mess with the nice/weird dispatch chain to send
it down".  I now had a third option...

3) Go up the stack to fetch the exact IP for the thread

After realizing this I had my working stack crawl code only a few
minutes later (I had GetCallerNameAndArgs already).  Up the stack has
a clear path.  Down was murky and involved trampling on code I didn't
want to override.  The result is much cleaner than what I was doing
and it worked, albeit with the as yet unfounded "crawling the stack is
bad" fear still there.

I should also point out that I'm not tied to SimpleXMLRPCServer, it is
just a convenient example.  I think any RPC protocol and dispatcher
scheme would have the same problem.

I'd be happy to hear about a clean stashing scheme (or any other
alternative) that works for a threaded server.

My biggest specific fear at the moment is that sys._getframe will do
funky things with multiple threads, but given that my toy example is
executing in a server on its own thread and it traces perfectly I'm
less worried.  Come to think of it, I wonder what happens when you
crawl up to and past thread creation?  Hmm.
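A quick experiment answers that last question (Python 3 syntax; a sketch with made-up names, not code from this thread): each thread has its own stack, so walking f_back from inside a worker thread bottoms out in threading's bootstrap frames and never reaches the code that created the thread.

```python
import sys
import threading

def frames_to_top():
    # Walk f_back until it is None; a thread only ever sees its own stack.
    names = []
    f = sys._getframe()
    while f is not None:
        names.append(f.f_code.co_name)
        f = f.f_back
    return names

result = {}

def worker():
    result["names"] = frames_to_top()

def main():
    t = threading.Thread(target=worker)
    t.start()
    t.join()

main()
print(result["names"])
```

In CPython the list ends in threading internals (e.g. run and _bootstrap_inner); main never appears, so a crawl from inside a request-handling thread cannot wander into another thread's frames.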


Re: Is crawling the stack "bad"? Why?

2008-02-25 Thread Russell Warren
> How about a dictionary indexed by by the thread name.

Ok... a functional implementation doing precisely that is at the
bottom of this (using thread.get_ident), but making it possible to
hand around this info cleanly seems a bit convoluted.  Have I made it
more complicated than I need to?  There must be a better way?  It sure
is a heck of a lot less straightforward than having a reasonably tight
CrawlUpStackToGetClientIP function call.  But then nothing is more
straightforward than a simple goto, either...
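Stripped of the XML-RPC machinery, the suggestion is just a module-level dict keyed by the calling thread's ident (thread.get_ident in Python 2; spelled threading.get_ident in modern Python). A rough sketch, with illustrative names:

```python
import threading

stash = {}  # key: thread ident, value: per-request info (e.g. client IP)

def remember(info):
    stash[threading.get_ident()] = info

def recall():
    return stash[threading.get_ident()]

out = {}

def worker(tag):
    remember("client-" + tag)   # would be set by the request handler
    out[tag] = recall()         # would be read by the RPC responder

threads = [threading.Thread(target=worker, args=(t,)) for t in ("a", "b")]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(out)
```

Each thread reads back only what it stored, which is exactly the property the stash needs; the full listing at the bottom of this message wires the same idea into SimpleXMLRPCServer.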

So I ask again, what is wrong with crawling the stack?

> What happens if you want to switch to pypy?

If it doesn't work when I decide to switch implementations for some
reason, I just fix it when my unit tests tell me it is busted.  No?
Aren't there also Python implementations that don't have threading in
them, and that would fail using thread.get_ident?  It seems hard to
satisfy all implementations.

> the threading.local class seems defined for that purpose, not that I've ever
> used it ;)

I hadn't heard of that... it seems very useful, but in this case I
think it just saves me the trouble of making a stash dictionary...
unless successive calls to threading.local return the same instance?
I'll have to try that, too.
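The experiment is easy to run (Python 3 syntax; a sketch, not code from this thread): successive threading.local() calls give distinct instances, and one shared instance keeps a separate attribute namespace per thread.

```python
import threading

a = threading.local()
b = threading.local()
print(a is b)  # False: each call makes a new, independent instance

shared = threading.local()
seen = {}

def worker(tag, value):
    shared.x = value       # this attribute is visible only to this thread
    seen[tag] = shared.x

t1 = threading.Thread(target=worker, args=("t1", 1))
t2 = threading.Thread(target=worker, args=("t2", 2))
t1.start()
t2.start()
t1.join()
t2.join()
print(seen)  # each thread read back only its own value
```

So threading.local saves the dictionary bookkeeping, but something still has to hand the object to both the request handler and the responder.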

---

import xmlrpclib, threading, sys, thread
from SimpleXMLRPCServer import SimpleXMLRPCServer, \
   SimpleXMLRPCRequestHandler

class RpcContainer(object):
  def __init__(self):
self._Handlers = {} #keys = thread IDs, values=requestHandlers
  def _GetRpcClientIP(self):
connection = self._Handlers[thread.get_ident()].connection
ip = connection.getpeername()[0]
return ip
  def WhatIsMyIP(self):
return "Your IP is: %s" % self._GetRpcClientIP()

class ThreadCapableRequestHandler(SimpleXMLRPCRequestHandler):
  def do_POST(self, *args, **kwargs):
#make the handler available to the RPCs, indexed by threadID...
self.server.RpcContainer._Handlers[thread.get_ident()] = self
SimpleXMLRPCRequestHandler.do_POST(self, *args, **kwargs)

class MyXMLRPCServer(SimpleXMLRPCServer):
  def __init__(self, RpcContainer, *args, **kwargs):
self.RpcContainer = RpcContainer
SimpleXMLRPCServer.__init__(self, *args, **kwargs)

class DaemonicServerLaunchThread(threading.Thread):
def __init__(self, RpcServer, **kwargs):
threading.Thread.__init__(self, **kwargs)
self.setDaemon(1)
self.server = RpcServer
def run(self):
self.server.serve_forever()

container = RpcContainer()
rpcServer = MyXMLRPCServer( \
  RpcContainer = container,
  addr = ("", 12390),
  requestHandler = ThreadCapableRequestHandler,
  logRequests = False)
rpcServer.register_function(container.WhatIsMyIP)
slt = DaemonicServerLaunchThread(rpcServer)
slt.start()

sp = xmlrpclib.ServerProxy("http://localhost:12390";)
print sp.WhatIsMyIP()


Re: Is crawling the stack "bad"? Why?

2008-02-25 Thread Russell Warren
Thanks Ian... I didn't know about threading.local before but have been
experimenting and it will likely come in quite handy in the future.
For this particular case it does basically seem like a replacement for
the thread-ID-indexed dictionary, though.  I.e., I'll still need to set
up the RpcContainer, custom request handler, and custom server in
order to get the info handed around properly.  I will likely go with
this approach since it lets me customize other aspects at the same
time, but for client IP determination alone I still half think that
the stack crawler is cleaner.

No convincing argument yet on why crawling the stack is considered
bad?  I kind of hoped to come out of this with a convincing argument
that would stick with me...
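For what it's worth, the argument usually cited is fragility: a crawl like GetCallerNameAndArgs(4) hard-codes a frame depth, and anything that inserts a frame (a decorator, a refactoring, a different dispatch path) silently changes what the crawl finds. A toy demonstration (Python 3 syntax; the names are made up for illustration):

```python
import functools
import sys

def caller_name(depth=1):
    # Fragile: assumes an exact number of frames above this call.
    return sys._getframe(depth + 1).f_code.co_name

def handler():
    return caller_name()

def dispatch():
    return handler()

before = dispatch()   # "dispatch", as intended

def logged(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    return wrapper

handler = logged(handler)   # one extra frame sneaks into the stack

after = dispatch()    # now "wrapper": the crawl silently broke
print(before, "->", after)
```

Nothing in the interpreter warns about the breakage; the unit-test answer from earlier in the thread is really the only safety net.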

On Feb 25, 12:30 pm, Ian Clark <[EMAIL PROTECTED]> wrote:
> On 2008-02-25, Russell Warren <[EMAIL PROTECTED]> wrote:
>
>
>
> >> the threading.local class seems defined for that purpose, not that I've 
> >> ever
> >> used it ;)
>
> > I hadn't heard of that... it seems very useful, but in this case I
> > think it just saves me the trouble of making a stash dictionary...
> > unless successive calls to threading.local return the same instance?
> > I'll have to try that, too.
>
> No, successive calls to threading.local() will return different objects.
> So, you call it once to get your 'data store' and then use that one
> object from all your threads. It takes care of making sure each thread
> gets its own data.
>
> Here is your example, but using threading.local instead of your own
> version of it. :)
>
> Ian
>
> import xmlrpclib, threading, sys, thread
> from SimpleXMLRPCServer import SimpleXMLRPCServer, 
> SimpleXMLRPCRequestHandler
>
> thread_data = threading.local()
>
> class RpcContainer(object):
>   def __init__(self):
> self._Handlers = {} #keys = thread IDs, values=requestHandlers
>   def _GetRpcClientIP(self):
> #connection = self._Handlers[thread.get_ident()].connection
> connection = thread_data.request.connection
> ip = connection.getpeername()[0]
> return ip
>   def WhatIsMyIP(self):
> return "Your IP is: %s" % self._GetRpcClientIP()
>
> class ThreadCapableRequestHandler(SimpleXMLRPCRequestHandler):
>   def do_POST(self, *args, **kwargs):
> #make the handler available to the RPCs, indexed by threadID...
> thread_data.request = self
> SimpleXMLRPCRequestHandler.do_POST(self, *args, **kwargs)
>
> class MyXMLRPCServer(SimpleXMLRPCServer):
>   def __init__(self, RpcContainer, *args, **kwargs):
> self.RpcContainer = RpcContainer
> SimpleXMLRPCServer.__init__(self, *args, **kwargs)
>
> class DaemonicServerLaunchThread(threading.Thread):
> def __init__(self, RpcServer, **kwargs):
> threading.Thread.__init__(self, **kwargs)
> self.setDaemon(1)
> self.server = RpcServer
> def run(self):
> self.server.serve_forever()
>
> container = RpcContainer()
> rpcServer = MyXMLRPCServer(
>  RpcContainer = container,
>  addr = ("", 12390),
>  requestHandler = ThreadCapableRequestHandler,
>  logRequests = False)
> rpcServer.register_function(container.WhatIsMyIP)
> slt = DaemonicServerLaunchThread(rpcServer)
> slt.start()
>
> sp = xmlrpclib.ServerProxy("http://localhost:12390";)
> print sp.WhatIsMyIP()


