Warning when new attributes are added to classes at run time

2006-07-19 Thread Matthew Wilson

I sometimes inadvertently create a new attribute on an object rather
than update the value bound to an existing attribute.  For example:

In [5]: class some_class(object):
   ...:  def __init__(self, a=None):
   ...:  self.a = a
   ...:

In [6]: c = some_class(a=1)

In [7]: c.a
Out[7]: 1

In [8]: c.A = 2

I meant to update c.a but I created a new c.A.  I make this mistake
probably hourly.

I suspect adding attributes at run time can be a beautiful thing, but in
this particular instance, I'm only using this feature to hurt myself.

I wrote a simple class that will warn me when I make this mistake in the
future:

import warnings

class C(object):

    warn_on_new_attributes = True
    standard_attributes = []

    def __setattr__(self, name, value):

        if self.warn_on_new_attributes \
           and name != 'warn_on_new_attributes' \
           and name not in self.standard_attributes:

            warnings.warn("%s has no standard attribute %s."
                          % (self.__class__.__name__, name))

        self.__dict__[name] = value


class C1(C):

    standard_attributes = ['a1', 'a2']


class C2(C):

    warn_on_new_attributes = False

# Do some simple testing.
c11 = C1()
c11.a1 = 1
c11.a2 = 2
c11.a3 = 3
c11.a4 = 4

# Disable warnings for this instance.
c12 = C1()
c12.warn_on_new_attributes = False
c12.a1 = 1
c12.a2 = 2
c12.a3 = 3
c12.a4 = 4

c11.a5 = 5

# Use an object that has warnings disabled by default.
c2 = C2()
c2.a1 = 1
c2.a2 = 2
c2.a3 = 3
c2.a4 = 4

# enable warnings for this object.
c2.warn_on_new_attributes = True
c2.a1 = 1
c2.a5 = 5


All comments are welcome.  Is there a better way of implementing the
above class, OR, is this approach generally wrong-headed?  Am I the only
one that makes this mistake?

TIA

-- 
A better way of running series of SAS programs:
http://overlook.homelinux.net/wilsonwiki/SasAndMakefiles
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Warning when new attributes are added to classes at run time

2006-07-20 Thread Matthew Wilson
On Thu 20 Jul 2006 04:32:28 AM EDT, Bruno Desthuilliers wrote:
>> self.__dict__[name] = value
> Make it:  
>   object.__setattr__(self, name, value)
>
> Your approach will lead to strange results if you mix it with properties
> or other descriptors...

Thanks!

>> class C1(C):
>> 
>> standard_attributes = ['a1', 'a2']
>
> DRY violation here. And a potential problem with inheritance (as always
> with class attributes).

Considering I had to look up what DRY meant before replying to this
message, I may be missing your point.  Is the repeat here that each
subclass has to define its own list of standard attributes?   Or, is it
that the standard_attributes list holds strings, but I could build that
list up by looking at my existing attributes?

If you're feeling charitable, can you explain what you mean a little
more?
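For what it's worth, one way to read the DRY complaint is that the attribute names get written twice: once in __init__ and once in standard_attributes.  A sketch of one way around that (my own guess at the idea, not Bruno's actual suggestion; the class and method names here are mine): treat whatever __init__ binds as the standard set, and only warn about names created after initialization.

```python
import warnings

class WarnOnNewAttributes(object):
    "Warn when an attribute is created after __init__ has finished."
    _initialized = False

    def mark_initialized(self):
        # Call this at the end of each subclass's __init__.
        object.__setattr__(self, '_initialized', True)

    def __setattr__(self, name, value):
        # After initialization, any name not already in the instance
        # dict is a brand-new attribute, so warn about it.
        if self._initialized and name not in self.__dict__:
            warnings.warn("%s has no standard attribute %s."
                          % (self.__class__.__name__, name))
        object.__setattr__(self, name, value)

class C1(WarnOnNewAttributes):
    def __init__(self, a1=None, a2=None):
        self.a1 = a1
        self.a2 = a2
        self.mark_initialized()
```

No separate name list to maintain: updating c.a1 stays silent, while c.a3 = 3 warns.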

TIA




Need advice on how to improve this function

2006-08-20 Thread Matthew Wilson
I wrote a function that converts a tuple of tuples into html.  For
example:

In [9]: x
Out[9]:
('html',
 ('head', ('title', 'this is the title!')),
 ('body',
  ('h1', 'this is the header!'),
  ('p', 'paragraph one is boring.'),
  ('p',
   'but paragraph 2 ',
   ('a', {'href': 'http://example.com'}, 'has a link'),
   '!')))


In [10]: as_html(x, sys.stdout)
<html>

<head>

<title>this is the title!</title>

</head>

<body>

<h1>this is the header!</h1>

<p>paragraph one is boring.</p>

<p>but paragraph 2 <a href="http://example.com">has a link</a>!</p>

</body>

</html>

I'd like to know ways to make it better (more efficient, able to deal
with enormous-size arguments, etc).  How would I write this as a
generator?

Here's the definition for as_html:

def as_html(l, s):
    "Convert a list or tuple into html and write it to stream s."
    if isinstance(l, (tuple, list)):
        tagname = l[0]
        if isinstance(l[1], dict):
            attributes = ' '.join(['%s="%s"' % (k, l[1][k]) for k in l[1]])
            s.write('<%s %s>' % (tagname, attributes))
        else:
            s.write('<%s>' % tagname)
        if tagname in ('html', 'head', 'body'):
            s.write('\n\n')
        for ll in l[1:]:
            as_html(ll, s)
        s.write('</%s>' % tagname)
        if tagname not in ('a', 'b', 'ul'):
            s.write('\n\n')
    elif isinstance(l, str):
        s.write(l)
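On the generator question: one way is to yield markup fragments instead of writing to a stream, and let the caller stream or join them.  A sketch (the name iter_html is mine, and it skips the html/head/body newline cosmetics of the original):

```python
def iter_html(node):
    "Yield HTML fragments for a nested tuple/list document."
    if isinstance(node, str):
        yield node
    elif isinstance(node, (tuple, list)):
        tagname = node[0]
        children = node[1:]
        if children and isinstance(children[0], dict):
            # Optional attribute dict in position 1, as in the original.
            attrs = ' '.join('%s="%s"' % (k, v)
                             for k, v in children[0].items())
            yield '<%s %s>' % (tagname, attrs)
            children = children[1:]
        else:
            yield '<%s>' % tagname
        for child in children:
            for fragment in iter_html(child):
                yield fragment
        yield '</%s>' % tagname
```

Then `for fragment in iter_html(x): s.write(fragment)` keeps memory flat no matter how enormous the document is.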


All comments welcome. TIA



random.jumpahead: How to jump ahead exactly N steps?

2006-06-21 Thread Matthew Wilson
The random.jumpahead documentation says this:

Changed in version 2.3: Instead of jumping to a specific state, n steps
ahead, jumpahead(n) jumps to another state likely to be separated by
many steps..

I really want a way to get to the Nth value in a random series started
with a particular seed.  Is there any way to quickly do what jumpahead
apparently used to do?

I devised this function, but I suspect it runs really slowly:

def trudgeforward(n):
    '''Advance the random generator's state by n calls.'''
    for _ in xrange(n): random.random()

So any speed tips would be very appreciated.
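As far as I know the stdlib exposes no fast, exact jumpahead for the Mersenne Twister, but the trudging at least doesn't have to disturb the global generator: a private random.Random instance makes "the Nth value of the stream seeded with S" reproducible.  A sketch (nth_value is my name):

```python
import random

def nth_value(seed, n):
    "Return the nth random() value (0-based) of the stream started with seed."
    rng = random.Random(seed)   # private generator; leaves random.* alone
    for _ in range(n):          # still O(n), just isolated and repeatable
        rng.random()
    return rng.random()
```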

TIA




I wish I could add docstrings to vars.

2006-09-12 Thread Matthew Wilson

I build a lot of elaborate dictionaries in my interpreter, and then I
forget exactly how they work.  It would be really nice to be able to add
notes to the dictionary.

Is there some way to do this now?

Matt




Code to add docstrings to classes

2006-09-12 Thread Matthew Wilson
On Tue 12 Sep 2006 10:06:27 AM EDT, Neil Cerutti wrote:
> Writing a thin wrapper around the dictionary might be beneficial,
> and would also furnish a place for the docstrings. 

I wrote a function that hopefully does just that.  I'm not very savvy at
doing this class-factory stuff, so any advice would be welcome.

def vd(C):

    """Return a subclass of class C that has an instance-level attribute _vardoc."""

    class VDC(C):

        def __init__(self, *args, **kwargs):

            vardoc = kwargs.pop('vardoc', None)
            if vardoc:
                assert isinstance(vardoc, str), "vardoc must be a string!"
            # Always bind _vardoc so __repr__ can't hit an AttributeError.
            self._vardoc = vardoc

            C.__init__(self, *args, **kwargs)

        def __repr__(self):

            if self._vardoc:
                return self._vardoc + "\n" + C.__repr__(self)
            else:
                return C.__repr__(self)

    return VDC


def test_vd():

    i = vd(int)(6)
    i._vardoc = "integer!"
    assert isinstance(i, int)

    d = vd(dict)(a=1, b=2, c=i, vardoc="dict!")
    assert d['a'] == 1
    assert d['c'] == 6
    assert isinstance(d, dict)





eval(repr(object)) hardly ever works

2006-09-13 Thread Matthew Wilson

I understand that the idea of an object's __repr__ method is to return a
string representation that can then be eval()'d back to life, but it
seems to me that it doesn't always work.

For example it doesn't work for instances of the object class:

In [478]: eval(repr(object()))

   File "<string>", line 1
     <object object at 0x...>
     ^
SyntaxError: invalid syntax

It seems to work for types like integers and dictionaries and lists,
but not for much else.

Any thoughts?




Re: eval(repr(object)) hardly ever works

2006-09-13 Thread Matthew Wilson
On Wed 13 Sep 2006 10:38:03 AM EDT, Steve Holden wrote:
> That's intentional. Would you have it return the code of all the methods 
> when you take the repr() of a class?

I don't think that would be required.  Couldn't you return a string with
a call to the constructor inside?  That's what sets.Set seems to do:

In [510]: from sets import Set

In [511]: s = Set()

In [512]: s.add('baloney')

In [513]: repr(s)
Out[513]: "Set(['baloney'])"

In [514]: eval(repr(s))
Out[514]: Set(['baloney'])

> regards
>   Steve

PS: I read your python web programming book a few years ago.
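The Set behavior is easy to copy in your own classes: have __repr__ return a constructor call built from the instance's state.  A sketch with a made-up Point class:

```python
class Point(object):
    def __init__(self, x, y):
        self.x = x
        self.y = y

    def __repr__(self):
        # %r uses repr() of each field, so strings come out quoted.
        return 'Point(%r, %r)' % (self.x, self.y)

    def __eq__(self, other):
        return (self.x, self.y) == (other.x, other.y)

p = Point(1, 'north')
assert eval(repr(p)) == p
```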





How to iterate through a sequence, grabbing subsequences?

2006-09-29 Thread Matthew Wilson
I wrote a function that I suspect may already exist as a python builtin,
but I can't find it:

def chunkify(s, chunksize):
    "Yield sequence s in chunks of size chunksize."
    for i in range(0, len(s), chunksize):
        yield s[i:i+chunksize]

I wrote this because I need to take a string of a really, really long
length and process 4000 bytes at a time.

Is there a better solution?
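There is no stdlib builtin for this (itertools only grew a similar recipe, batched, much later, in Python 3.12).  But itertools.islice gives a variant that also works on iterables that can't be sliced, such as file objects and generators.  A sketch (chunkify_any is my name):

```python
from itertools import islice

def chunkify_any(iterable, chunksize):
    "Yield lists of up to chunksize items from any iterable."
    iterator = iter(iterable)
    while True:
        chunk = list(islice(iterator, chunksize))
        if not chunk:           # iterator exhausted
            return
        yield chunk
```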

Matt



How to query a function and get a list of expected parameters?

2006-09-29 Thread Matthew Wilson
I'm writing a function that accepts a function as an argument, and I
want to know all the parameters that this function expects.  How can
I find this out in my program, not by reading the source?

For example, I would want to know for the function below that I have to
pass in two things:

def f(x1, x2): return x1 * x2

It would be nice to get the names also.
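The inspect module answers this.  In the Python of this era it was spelled inspect.getargspec; modern Pythons spell it inspect.signature.  A sketch with the newer name:

```python
import inspect

def f(x1, x2):
    return x1 * x2

# The parameter names, in declaration order:
params = list(inspect.signature(f).parameters)
assert params == ['x1', 'x2']
# ...and how many arguments you must pass:
assert len(params) == 2
```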

All help is welcome.

TIA

Matt





How to coerce a list of vars into a new type?

2006-10-02 Thread Matthew Wilson

I want to verify that three parameters can all be converted into
integers, but I don't want to modify the parameters themselves.

This seems to work:

def f(a, b, c):

    a, b, c = [int(x) for x in (a, b, c)]

Originally, I had a bunch of assert isinstance(a, int) statements at the
top of my code, but I decided that I would rather just check if the
parameters can be converted into integers.

The new a, b, and c are all different objects, with different id values.
Anyhow, this all seems like black magic to me.  Can anyone explain what
is going on?

Is it as simple as call-by-value?
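Close: Python passes references to objects, and assignment inside the function merely rebinds the function's local names, so the caller's bindings never change.  A minimal demonstration (to_ints is my name):

```python
def to_ints(a, b, c):
    # This rebinds the *local* names a, b, c to new int objects;
    # the caller's variables still point at the original strings.
    a, b, c = [int(x) for x in (a, b, c)]
    return a, b, c

x, y, z = '1', '2', '3'
result = to_ints(x, y, z)
assert result == (1, 2, 3)
assert (x, y, z) == ('1', '2', '3')   # caller's bindings untouched
```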






How can I make a class that can be converted into an int?

2006-10-02 Thread Matthew Wilson
What are the internal methods that I need to define on any class so that
this code can work?

c = C("three")

i = int(c) # i is 3

I can handle the part of mapping "three" to 3, but I don't know what
internal method is called when int(c) happens.

For string conversion, I just define the __str__ method.  What's the
equivalent for int?  For float, too, while I'm at it?
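The int() builtin calls __int__, and float() calls __float__, mirroring the way str() calls __str__.  A sketch, with a hypothetical word-to-number table handling the "three" part:

```python
class C(object):
    words = {'one': 1, 'two': 2, 'three': 3}   # hypothetical mapping

    def __init__(self, word):
        self.word = word

    def __int__(self):
        # int(instance) lands here.
        return self.words[self.word]

    def __float__(self):
        # float(instance) lands here.
        return float(self.words[self.word])

c = C("three")
assert int(c) == 3
assert float(c) == 3.0
```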

TIA

Matt



Itertools question: how to call a function n times?

2007-07-19 Thread Matthew Wilson
I want to write a function that, each time it gets called, returns a
random choice of 1 to 5 words from a list of words.

I can write this easily using for loops and random.choice(wordlist) and
random.randint(1, 5).

But I want to know how to do this using itertools, since I don't like
manually doing stuff like:

phrase = list()
for i in range(random.randint(1, 5)):

    phrase.append(random.choice(wordlist))

It just seems slow. 
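A list comprehension says the same thing without the manual append loop, and the two-argument form of iter() plus islice gives an itertools-flavored variant.  A sketch (both function names are mine):

```python
import random
from itertools import islice

def phrase(wordlist):
    "1 to 5 random words, as a plain list comprehension."
    return [random.choice(wordlist) for _ in range(random.randint(1, 5))]

def phrase_islice(wordlist):
    "The same thing from iterator tools: an endless word stream, sliced."
    # iter(callable, sentinel) calls the callable forever, since
    # random.choice never returns the fresh sentinel object.
    words = iter(lambda: random.choice(wordlist), object())
    return list(islice(words, random.randint(1, 5)))
```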

All advice welcome.

TIA

Matt

-- 
A better way of running series of SAS programs:
http://tplus1.com/wilsonwiki/SasAndMakefiles
-- 
http://mail.python.org/mailman/listinfo/python-list


Using closures and partial functions to eliminate redundant code

2007-09-26 Thread Matthew Wilson
I wrote some code to create a user and update a user on a remote box by
sending emails to that remote box.  When I was done, I realized that my
create_user function and my update_user function were effectively
identical except for different docstrings and a single different value
inside:

### VERSION ONE

def create_user(username, userpassword, useremail):
    "Send an email that will create a user in the remote system."

    # Build email
    email_body = """
USERNAME = %s
USERPASSWORD = %s
USEREMAIL = %s
""" % (username, userpassword, useremail)

    # send it.
    send_email(subject="CREATE", body=email_body)


def update_user(username, userpassword, useremail):
    "Send an email that will update a user's password in the remote system."

    # Build email
    email_body = """
USERNAME = %s
USERPASSWORD = %s
USEREMAIL = %s
""" % (username, userpassword, useremail)

    # send it.
    send_email(subject="UPDATE", body=email_body)

### END

Then I came up with this approach to avoid all that redundant text:

### VERSION TWO

def _f(mode):

    if mode not in ("create", "update"):
        raise ValueError("mode must be create or update!")

    def _g(username, userpassword, useremail):

        # Build email
        email_body = """
USERNAME = %s
USERPASSWORD = %s
USEREMAIL = %s
""" % (username, userpassword, useremail)

        # send it.
        send_email(subject=mode.upper(), body=email_body)

    # Seems goofy, but what other ways are there?
    docstrings = {
        'create': "Send an email that will create a user in the remote system.",
        'update': "Send an email that will update a user's password in the remote system.",
    }

    _g.__doc__ = docstrings[mode]

    return _g

# Then I created my functions like this:

v2_create_user = _f("create")
v2_update_user = _f("update")

### END


Finally, I came up with this approach:

### VERSION THREE

from functools import partial

def _h(mode, username, userpassword, useremail):

    if mode not in ("create", "update"):
        raise ValueError("mode must be create or update!")

    # Build email
    email_body = """
USERNAME = %s
USERPASSWORD = %s
USEREMAIL = %s
""" % (username, userpassword, useremail)

    # send it.
    send_email(subject=mode.upper(), body=email_body)

# I can't figure out how to set up the docstring on these.

v3_create_user = partial(_h, mode="create")
v3_update_user = partial(_h, mode="update")

### END

I'm interested to hear how other people deal with really similar code.
The similarity just bugs me.  However, I wonder if using stuff like
closures or partial function application is needlessly showy.

Also, I hope someone here can help me figure out how to attach a
meaningful docstring to my version-three code.
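On the version-three docstring question: partial objects accept attribute assignment (they carry an instance __dict__, though they don't create __name__ or __doc__ automatically), so the docstring can be attached right after each partial is built.  A sketch, with a stub _h standing in since send_email isn't defined here:

```python
from functools import partial

def _h(mode, username):
    "Stub standing in for the real email-building function."
    return (mode.upper(), username)

v3_create_user = partial(_h, mode="create")
v3_create_user.__doc__ = ("Send an email that will create a user "
                          "in the remote system.")

v3_update_user = partial(_h, mode="update")
v3_update_user.__doc__ = ("Send an email that will update a user's "
                          "password in the remote system.")

# Keyword calls avoid clashing with the pre-bound mode argument.
assert v3_create_user(username="alice") == ("CREATE", "alice")
assert "create" in v3_create_user.__doc__
```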

Thanks in advance!


Matt


Need recommendations on mock object packages

2007-10-17 Thread Matthew Wilson
What are the most popular, easiest to use, and most powerful mock
object packages out there?

Thanks in advance.

Matt


I can't get minimock and nosetests to play nice

2007-10-18 Thread Matthew Wilson


I'm curious if anyone has ever tried using nosetests along with
minimock.

I'm trying to get the two to play nice, but I'm not making progress.  I
also wonder if I'm using minimock incorrectly.

Here's the code I want to test, saved in a file dtfun.py.

class Chicken(object):
   "I am a chicken."
   def x(self): return 1
   def z(self): return 1

def g():

   """
   Verify that we call method x on an instance of the Chicken class.

   # First set up the mockery.
   >>> from minimock import Mock
   >>> Chicken = Mock('Chicken')
   >>> Chicken.mock_returns = Mock('instance_of_chicken')

   Now this stuff is the real test.
   >>> g()
   Called Chicken()
   Called instance_of_chicken.x()
   """

   # This is what the function does.
   c = Chicken()
   c.x()

if __name__ == "__main__":

   # First set up the mockery.
   from minimock import Mock
   Chicken = Mock('Chicken')
   Chicken.mock_returns = Mock('instance_of_chicken')

   # Now run the tests.
   import doctest
   doctest.testmod()

Here are the results when I run the code using doctest.testmod (which
passes) and nosetests --with-doctest (KABOOM):

$ python dtfun.py # Nothing is a good thing here.

$ nosetests --with-doctest dtfun.py
F
======================================================================
FAIL: Doctest: dtfun.g
----------------------------------------------------------------------
Traceback (most recent call last):
 File "doctest.py", line 2112, in runTest
   raise self.failureException(self.format_failure(new.getvalue()))
AssertionError: Failed doctest test for dtfun.g
 File "/home/matt/svn-checkouts/scratch/python/dtfun/dtfun.py", line 13,
in g

----------------------------------------------------------------------
File "/home/matt/svn-checkouts/scratch/python/dtfun/dtfun.py", line
22, in dtfun.g
Failed example:
   g()
Expected:
   Called Chicken()
   Called instance_of_chicken.x()
Got nothing


----------------------------------------------------------------------
Ran 1 test in 0.015s

FAILED (failures=1)

It seems like nose isn't building the mock objects.

Any ideas?

Matt


python logging config file doesn't allow filters?

2007-10-26 Thread Matthew Wilson
The python logging module is a beautiful masterpiece.  I'm studying
filters and the config-file approach.  Is it possible to define a filter
somehow and then refer to it in my config file?
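As far as I can tell, fileConfig has no section for filters (the later dictConfig, added in 2.7, does), so the usual pattern is to load the config file and then attach filter objects to the named loggers or handlers in code.  A sketch (the filter class and the 'myapp' logger name are hypothetical):

```python
import logging
import logging.config

class OnlyInteresting(logging.Filter):
    "Drop every record whose message doesn't mention 'interesting'."
    def filter(self, record):
        # Return a true value to let the record through.
        return 'interesting' in record.getMessage()

# After logging.config.fileConfig('logging.conf') has run, attach the
# filter to a logger (or handler) that the config file defined:
log = logging.getLogger('myapp')
log.addFilter(OnlyInteresting())
```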

TIA

Matt


assertions to validate function parameters

2007-01-25 Thread Matthew Wilson
Lately, I've been writing functions like this:

def f(a, b):

    assert a in [1, 2, 3]
    assert b in [4, 5, 6]

The point is that I'm checking the type and the values of the
parameters.

I'm curious how this does or doesn't fit into python's duck-typing
philosophy.

I find that when I detect invalid parameters overtly, I spend less time
debugging.

Are other people doing things like this?  Any related commentary is
welcome.
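One caveat worth a sketch: assert statements vanish entirely under python -O, so for validating caller input, an explicit raise survives optimization and gives a friendlier error message:

```python
def f(a, b):
    # Unlike assert, these checks survive python -O and name the culprit.
    if a not in (1, 2, 3):
        raise ValueError("a must be 1, 2, or 3, not %r" % (a,))
    if b not in (4, 5, 6):
        raise ValueError("b must be 4, 5, or 6, not %r" % (b,))
    return a + b
```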

Matt



Can I undecorate a function?

2007-01-29 Thread Matthew Wilson
The decorator as_string returns the decorated function's value as a
string.  In some instances, though, I want to access just the function f
and catch the values before they've been decorated.

Is this possible?

def as_string(f):
    def anon(*args, **kwargs):
        y = f(*args, **kwargs)
        return str(y)
    return anon

@as_string
def f(x):
    return x * x
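It is possible if the decorator keeps a reference to the original function around; stashing it as an attribute on the wrapper (here called __wrapped__, the name functools later standardized) is the usual trick.  A sketch:

```python
def as_string(f):
    def anon(*args, **kwargs):
        y = f(*args, **kwargs)
        return str(y)
    anon.__wrapped__ = f        # keep a handle to the undecorated function
    return anon

@as_string
def f(x):
    return x * x

assert f(3) == '9'              # decorated: stringified
assert f.__wrapped__(3) == 9    # undecorated: the raw value
```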

Matt




Need help writing coroutine

2007-11-07 Thread Matthew Wilson
I'm working on two coroutines -- one iterates through a huge stream, and
emits chunks in pieces.  The other routine takes each chunk, then scores
it as good or bad and passes that score back to the original routine, so
it can make a copy of the stream with the score appended on.

I have the code working, but it just looks really ugly.  Here's a vastly
simplified version.  One function yields some numbers, and the other
function tells me if they are even or odd.

def parser():
    "I just parse and wait for feedback."
    for i in 1, 2, 3, 4, 5:
        score = (yield i)
        if score:
            print "%d passed!" % i

def is_odd(n):
    "I evaluate each number n, and return True if I like it."
    if n and n % 2: return True

def m():
    try:
        number_generator = parser()
        i = None
        while 1:
            i = number_generator.send(is_odd(i))
    except StopIteration: pass

and here's the results when I run this:
In [90]: m()
1 passed!
3 passed!
5 passed!

So, clearly, the code works.  But it is nonintuitive for the casual
reader.

I don't like the while 1 construct, I don't like manually
trapping the StopIteration exception, and this line is really ugly:

i = number_generator.send(is_odd(i))

I really like the old for i in parser(): deal, but I can't figure out
how to use .send(...) with that.

Can anyone help me pretty this up?  I want to make this as intuitive as
possible.
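One way to get the for-loop feel back is to bury the send/StopIteration plumbing in a small driver generator, so callers iterate over (item, score) pairs.  A sketch (with_scores is my name, and the parser here skips the printing so the driver owns all the control flow):

```python
def with_scores(gen, score):
    """Drive a send-style generator: feed each item's score back in,
    and yield (item, score) pairs for an ordinary for loop."""
    try:
        item = next(gen)
        while True:
            s = score(item)
            yield item, s
            item = gen.send(s)      # feedback goes back into the parser
    except StopIteration:
        pass

def parser():
    for i in (1, 2, 3, 4, 5):
        score = (yield i)

def is_odd(n):
    return bool(n % 2)
```

With that, m() collapses to `for i, passed in with_scores(parser(), is_odd): ...` with no visible send() or exception handling.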

TIA

Matt



I don't understand what is happening in this threading code

2008-01-18 Thread Matthew Wilson
In this code, I tried to kill my thread object by setting a variable on it
to False.

Inside the run method of my thread object, it checks a different
variable.

I've already rewritten this code to use semaphores, but I'm just curious
what is going on.

Here's the code:

import logging, threading, time
logging.basicConfig(level=logging.DEBUG,
                    format="%(threadName)s: %(message)s")

class Waiter(threading.Thread):
    def __init__(self, hot_food):
        super(Waiter, self).__init__()
        self.should_keep_running = True
        self.hot_food = hot_food

    def run(self):
        while self.should_keep_running:
            logging.debug("Inside run, the id of should_keep_running is %s."
                          % id(self.should_keep_running))
            self.hot_food.acquire()

def cook_food(hot_food):
    i = 5
    while i >= 0:
        logging.debug("I am cooking food...")
        time.sleep(1)
        hot_food.release()
        logging.debug("Andiamo!")
        i -= 1

def main():

    hot_food = threading.Semaphore(value=0)

    chef = threading.Thread(name="chef", target=cook_food, args=(hot_food, ))
    chef.start()

    w = Waiter(hot_food)
    logging.debug("Initially, the id of w.should_keep_running is %s."
                  % id(w.should_keep_running))
    w.start()
    logging.debug("After start, the id of w.should_keep_running is %s."
                  % id(w.should_keep_running))

    # Wait for the chef to finish work.
    chef.join()

    # Now try to kill off the waiter by setting a variable inside the waiter.
    w.should_keep_running = False
    logging.debug("Now, the id of w.should_keep_running is %s."
                  % id(w.should_keep_running))

if __name__ == "__main__":
    main()

And here's what I get when I execute it.  I have to suspend the process
with CTRL-Z and then kill -9 it.

$ python foo.py
MainThread: Initially, the id of w.should_keep_running is 135527852.
MainThread: After start, the id of w.should_keep_running is 135527852.
chef: I am cooking food...
Thread-1: Inside run, the id of should_keep_running is 135527852.
chef: Andiamo!
chef: I am cooking food...
Thread-1: Inside run, the id of should_keep_running is 135527852.
chef: Andiamo!
chef: I am cooking food...
Thread-1: Inside run, the id of should_keep_running is 135527852.
chef: Andiamo!
chef: I am cooking food...
Thread-1: Inside run, the id of should_keep_running is 135527852.
chef: Andiamo!
chef: I am cooking food...
Thread-1: Inside run, the id of should_keep_running is 135527852.
chef: Andiamo!
chef: I am cooking food...
Thread-1: Inside run, the id of should_keep_running is 135527852.
chef: Andiamo!
Thread-1: Inside run, the id of should_keep_running is 135527852.
MainThread: Now, the id of w.should_keep_running is 135527840.

[1]+  Stopped python foo.py

$ kill -9 %1

[1]+  Stopped python foo.py

The memory address of should_keep_running seems to change when I set it
from True to False, and inside the run method, I keep checking the old
location.

I am totally baffled what this means.

Like I said earlier, I already rewrote this code to use semaphores, but
I just want to know what is going on here.
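On the changing id specifically (this is not the whole story of why the thread hangs): True and False are two distinct singleton objects, so rebinding the attribute from one to the other necessarily changes the id.  Nothing moved in memory; the name simply points at a different object.  A demonstration:

```python
flag = True
id_true = id(flag)

flag = False                   # rebinds the name to a different object
assert id(flag) != id_true
assert id(flag) == id(False)   # namely, the False singleton
assert id_true == id(True)     # True itself never moved
```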

Any explanation is welcome.

TIA

Matt


PyYAML: How to register my yaml.YAMLObject subclasses?

2008-10-11 Thread Matthew Wilson
I suspect the solution to my problem is something really trivial.

I wrote a module called pitz that contains a class Issue:

>>> pitz.Issue.yaml_tag
u'ditz.rubyforge.org,2008-03-06/issue'

Then I try to load a document with that same tag, but I get a
ConstructorError:

ConstructorError: could not determine a constructor for the tag
'!ditz.rubyforge.org,2008-03-06/issue'

I think there's some namespace/scoping problem, because when I define
the same class in my interpreter and then run yaml.load(issue_file),
everything works OK.

So, how do I tell yaml to build pitz.Issue instances when it runs across
that tag?

Do I need to use add_constructor?  If so, I'd *really* appreciate some
example usage.

Thanks!

Matt


Need advice on python importing

2008-10-17 Thread Matthew Wilson
I started with a module with a bunch of classes that represent database
tables.  A lot of these classes have methods that use other classes
inside, sort of like this:

class C(object):
    @classmethod
    def c1(cls, a):
        return a

class D(object):
    def d1(self, a):
        return a + C.c1(a)

Notice how the d1 method on class D uses a classmethod c1 on C.  C is in
the same module, so it's globally available.

I moved my classes C and D into different files, and then I noticed that
D now needed to import C first.  That worked fine until I wrote
interdependent classes that couldn't import each other.

So I passed in everything as parameters like this:

def d1(self, C, a):

That works fine, but now I've got some methods that have six
parameters (or more) that are entirely just for this purpose.  So, now
instead of passing these in as parameters, I'm passing them into my
initializer and binding them in to self.

I wanted to use functools.partial to bind on these parameters like this:

d1 = functools.partial(d1, C=C)

But partial objects don't get the self parameter passed in, so using
partial is effectively similar to decorating methods with
staticmethod.  Here's a link to the documentation on partial that
explains this:

http://www.python.org/doc/2.5.2/lib/partial-objects.html

So, partials won't work.

I suspect that there are more elegant solutions for this.

All thoughts are welcome.


Re: Need advice on python importing

2008-10-17 Thread Matthew Wilson
On Fri 17 Oct 2008 04:52:47 PM EDT, Steve Holden wrote:
> Matthew Wilson wrote:
>> I started with a module with a bunch of classes that represent database
>> tables.  A lot of these classes have methods that use other classes
>> inside, sort of like this:
>> 
>> class C(object):
>>     @classmethod
>>     def c1(cls, a):
>>         return a
>> 
>> class D(object):
>>     def d1(self, a):
>>         return a + C.c1(a)
>> 
>> Notice how the d1 method on class D uses a classmethod c1 on C.  C is in
>> the same module, so it's globally available.
>> 
>> I moved my classes C and D into different files, and then I noticed that
>> D now needed to import C first.  That worked fine until I wrote
>> interdependent classes that couldn't import each other.
>> 
>> So I passed in everything as parameters like this:
>> 
>> def d1(self, C, a):
>> 
>> That works fine, but now I've got some methods that have six
>> parameters (or more) that are entirely just for this purpose.  So, now
>> instead of passing these in as parameters, I'm passing them into my
>> initializer and binding them in to self.
>> 
>> I wanted to use functools.partial to bind on these parameters like this:
>> 
>> d1 = functools.partial(d1, C=C)
>> 
>> But partial objects don't get the self parameter passed in, so using
>> partial is effectively similar to decorating methods with
>> staticmethod.  Here's a link to the documentation on partial that
>> explains this:
>> 
>> http://www.python.org/doc/2.5.2/lib/partial-objects.html
>> 
>> So, partials won't work.
>> 
>> I suspect that there are more elegant solutions for this.
>> 
>> All thoughts are welcome.
>
> Explain why you are using classmethods instead of regular methods in the
> first case (I appreciate your actual code will be rather more complex
> than your examples).

Hi Steve, I'm using SQLObject classes, and joining one table with
another requires either the class or an instance of the class.  So, I
could pass in instances of all the classes, and I'd be at the same
point.  I just don't like seeing functions with lots and lots of
parameters.

Thanks for the feedback!


Very simple WSGI question

2008-11-16 Thread Matthew Wilson
I want to write some middleware to notice when the inner app returns a
500 status code.  I'm sure there are already sophisticated loggers that
do this sort of thing, but I'm using this as a learning exercise.

Right now, I wrapped the start_response callable.  So when the WSGI
application calls the start response callable, I look at the first arg
passed in and do my test.

What's the right way to do this?
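Wrapping start_response is indeed the standard way to see the status line.  Packaged as middleware, it might look like this sketch (StatusWatcher is my name):

```python
class StatusWatcher(object):
    "WSGI middleware that notices 500 responses from the wrapped app."

    def __init__(self, app):
        self.app = app
        self.last_status = None

    def __call__(self, environ, start_response):
        def watching_start_response(status, headers, exc_info=None):
            # The inner app hands us the status line first.
            self.last_status = status
            if status.startswith('500'):
                pass  # log it, email someone, etc.
            return start_response(status, headers, exc_info)
        return self.app(environ, watching_start_response)
```

The middleware stays transparent: it forwards everything, including exc_info, to the real start_response.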

Matt


Can I import from a directory that starts with a dot (.) ?

2009-04-13 Thread Matthew Wilson
I want to have .foo directory that contains some python code.  I can't
figure out how to import code from that .foo directory.  Is this even
possible?

TIA

Matt


Need help with setup.py and data files

2009-04-21 Thread Matthew Wilson
I'm working on a package that includes some files that are meant to be
copied and edited by people using the package.

My project is named "pitz" and it is a bugtracker.  Instead of using a
config file to set the options for a project, I want to use python
files.

When somebody installs pitz, I want to save some .py files somewhere so
that when they run my pitz-setup script, I can go find those .py files
and copy them into their working directory.

I have two questions: 

1. Do I need to write my setup.py file to specify that the .py files in
   particular directory need to be treated like data, not code?  For
   example, I don't want the installer to hide those files inside an
   egg.

2. How can I find those .py files later and copy them?



How to walk up parent directories?

2009-05-03 Thread Matthew Wilson
Is there already a tool in the standard library to let me walk up from a
subdirectory to the top of my file system?

In other words, I'm looking for something like:

>>> for x in walkup('/home/matt/projects'):
... print(x)
/home/matt/projects
/home/matt
/home
/

I know I could build something like this with various os.path
components, but I'm hoping I don't have to.
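I don't believe the stdlib has this (os.walk only goes down), but iterating os.path.dirname to a fixed point is all it takes.  A sketch of the walkup from the example:

```python
import os.path

def walkup(path):
    "Yield path, then each parent directory, ending at the root."
    path = os.path.abspath(path)
    while True:
        yield path
        parent = os.path.dirname(path)
        if parent == path:      # dirname('/') == '/', so we're done
            return
        path = parent
```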

TIA


Matt


Re: How to walk up parent directories?

2009-05-04 Thread Matthew Wilson
On Sun 03 May 2009 09:24:59 PM EDT, Ben Finney wrote:
> Not every simple function belongs in the standard library :-)

Thanks for the help with this!  Maybe I'm overestimating how often
people need this walkup function.

Matt


How should I use grep from python?

2009-05-07 Thread Matthew Wilson
I'm writing a command-line application and I want to search through lots
of text files for a string.  Instead of writing the python code to do
this, I want to use grep.

This is the command I want to run:

$ grep -l foo dir

In other words, I want to list all files in the directory dir that
contain the string "foo".

I'm looking for the "one obvious way to do it" and instead I found no
consensus.  I could use os.popen, commands.getstatusoutput, the
subprocess module, backticks, etc.

As of May 2009, what is the recommended way to run an external process
like grep and capture STDOUT and the error code?
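For the record, subprocess is the consensus answer here.  A sketch (grep_l is my name; note grep needs -r as well as -l to descend into a directory, and this assumes a system grep is on the PATH):

```python
import subprocess

def grep_l(pattern, directory):
    "Return (matching filenames, grep's exit code)."
    proc = subprocess.Popen(['grep', '-lr', pattern, directory],
                            stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE)
    stdout, stderr = proc.communicate()
    # grep exit codes: 0 = matches found, 1 = no matches, 2 = error.
    return stdout.splitlines(), proc.returncode
```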


TIA

Matt


Re: How should I use grep from python?

2009-05-07 Thread Matthew Wilson
On Thu 07 May 2009 09:09:53 AM EDT, Diez B. Roggisch wrote:
> Matthew Wilson wrote:
>> 
>> As of May 2009, what is the recommended way to run an external process
>> like grep and capture STDOUT and the error code?
>
> subprocess. Which becomes pretty clear when reading it's docs:

Yeah, that's what I figured, but I wondered if there was already
something newer and shinier aiming to bump subprocess off its throne.

I'll just stick with subprocess for now.  Thanks for the feedback!


Re: How should I use grep from python?

2009-05-07 Thread Matthew Wilson
On Thu 07 May 2009 09:25:52 AM EDT, Tim Chase wrote:
> While it doesn't use grep or external processes, I'd just do it 
> in pure Python:

Thanks for the code!

I'm reluctant to take that approach for a few reasons:

1. Writing tests for that code seems like a fairly large amount of work.
I think I'd need to either mock out lots of stuff or create a bunch
of temporary directories and files for each test run.

I don't intend to test that grep works like it says it does.  I'll
just test that my code calls a mocked-out grep with the right options
and arguments, and that my code behaves nicely when my mocked-out
grep returns errors.

2. grep is crazy fast.  For a search through just a few files, I doubt
it would matter, but when searching through a thousand files (which is
likely) I suspect that an all-python approach might lag behind.  I'm
speculating here, though.

3. grep has lots and lots of cute options.  I don't want to think about
implementing stuff like --color, for example.  If I just pass all the
heavy lifting to grep, I'm already done.

On the other hand, your solution is platform-independent and has no
dependencies.  Mine depends on an external grep command.

Thanks again for the feedback!

Matt

--
http://mail.python.org/mailman/listinfo/python-list


I need help building a data structure for a state diagram

2009-05-24 Thread Matthew Wilson
I'm working on a really simple workflow for my bug tracker.  I want
filed bugs to start in an UNSTARTED status.  From there, they can go to
STARTED.

From STARTED, bugs can go to FINISHED or ABANDONED.

I know I can easily hard-code this stuff into some if-clauses, but I
expect to need to add a lot more statuses over time and a lot more
relationships.

This seems like a crude state diagram.  So, has anyone on this list done
similar work?

How should I design this so that users can add arbitrary new statuses
and then define how to get to and from those statuses?
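Here's the sort of thing I have in mind — a sketch where the statuses and
their allowed transitions live in one dict, so adding a status means
adding data rather than another if-clause:

```python
# Map each status to the statuses it may legally move to.  Users could
# extend this dict (or load it from a config file) to add new statuses.
TRANSITIONS = {
    'UNSTARTED': ['STARTED'],
    'STARTED': ['FINISHED', 'ABANDONED'],
    'FINISHED': [],
    'ABANDONED': [],
}

def change_status(current, new):
    """Return the new status, or raise ValueError if the move is illegal."""
    if new not in TRANSITIONS.get(current, []):
        raise ValueError("can't go from %s to %s" % (current, new))
    return new
```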

TIA

Matt
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: I need help building a data structure for a state diagram

2009-05-24 Thread Matthew Wilson
On Sun 24 May 2009 03:42:01 PM EDT, Kay Schluehr wrote:
>
> General answer: you can encode finite state machines as grammars.
> States as non-terminals and transition labels as terminals:
>
> UNSTARTED: 'start' STARTED
> STARTED: 'ok' FINISHED | 'cancel' ABANDONED
> ABANDONED: 'done'
> FINISHED: 'done'
>
> In some sense each state-machine is also a little language.


I've never formally studied grammars, but I've worked through trivial
stuff that uses BNF to express ideas like 

 <sentence> ::= <noun-phrase> <verb-phrase>

I don't really understand how to apply that notion to this statement:

UNSTARTED: 'start' STARTED

That doesn't seem to be BNF, and that's all I know about grammar stuff.

Can you explain a little more?  This idea of using grammars for my
workflow sounds *really* fun and I'd love to learn this stuff, but I
could benefit from some more explanation.


Matt
-- 
http://mail.python.org/mailman/listinfo/python-list


How can I get access to the function called as a property?

2009-05-24 Thread Matthew Wilson
I use a @property decorator to turn some methods on a class into
properties.  I want to be able to access some of the attributes of the
original function, but I don't know how to get to it.
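To make the question concrete: accessing the property on the class
(rather than on an instance) returns the property object itself, and its
fget attribute turns out to be the original function:

```python
class C(object):
    @property
    def a(self):
        "Docstring I want to reach."
        return 1

c = C()
value = c.a            # runs the function -- no way to the attributes here
original = C.a.fget    # the property descriptor still holds the function
doc = original.__doc__
```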

Any ideas?

Matt
-- 
http://mail.python.org/mailman/listinfo/python-list


How to test python snippets in my documents?

2009-05-26 Thread Matthew Wilson
I'm using a homemade script to verify some code samples in my
documentation.  Here it is:

#! /usr/bin/env python2.6

# vim: set expandtab ts=4 sw=4 filetype=python:

import doctest, os, sys

def main(s):
    "Run doctest.testfile(s, None)"

    return doctest.testfile(s, None)

if __name__ == "__main__":
    for x in sys.argv[1:]:
        print "testing code excerpts in %s..." % x
        print main(x)


The script checks all the files listed as arguments.  This is OK, but is
there anything better?

-- 
http://mail.python.org/mailman/listinfo/python-list


In metaclass, when to use __new__ vs. __init__?

2008-05-12 Thread Matthew Wilson
I have been experimenting with metaclasses lately.  It seems possible to
define a metaclass by subclassing type and then redefining either
__init__ or __new__.

Here's the signature for __init__:

def __init__(cls, name, bases, d):

and here's __new__:

def __new__(meta, classname, bases, d):

Every metaclass I have found monkeys with d, which is available in both
methods.  So when is it better to use one vs the other?
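Here's a toy example of the two styles; calling each metaclass directly
sidesteps the __metaclass__ syntax so the difference stands out.  In
__new__ the class doesn't exist yet, so d can still be edited; in
__init__ the class object is already built and can only be mutated:

```python
class MetaNew(type):
    def __new__(meta, classname, bases, d):
        # Runs before the class exists: we can still edit d.
        d['added_by_new'] = True
        return super(MetaNew, meta).__new__(meta, classname, bases, d)

class MetaInit(type):
    def __init__(cls, name, bases, d):
        # Runs after the class object exists: mutate cls directly.
        super(MetaInit, cls).__init__(name, bases, d)
        cls.added_by_init = True

# Calling a metaclass directly builds a class, in any Python version.
C1 = MetaNew('C1', (object,), {})
C2 = MetaInit('C2', (object,), {})
```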

Thanks for the help.

Matt

--
Programming, life in Cleveland, growing vegetables, other stuff.
http://blog.tplus1.com
--
http://mail.python.org/mailman/listinfo/python-list


defaultdict.fromkeys returns a surprising defaultdict

2008-06-03 Thread Matthew Wilson
I used defaultdict.fromkeys to make a new defaultdict instance, but I
was surprised by behavior:

>>> b = defaultdict.fromkeys(['x', 'y'], list)

>>> b
defaultdict(None, {'y': <type 'list'>, 'x': <type 'list'>})

>>> b['x']
<type 'list'>

>>> b['z']
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
KeyError: 'z'

I think that what is really going on is that fromkeys makes a regular
dictionary, and then hands it off to the defaultdict constructor.

I find this confusing, because now I have a defaultdict that raises a
KeyError.
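Here's a minimal demonstration of the mismatch, plus the workaround I'm
using for now (passing the factory to the constructor instead of to
fromkeys):

```python
from collections import defaultdict

# fromkeys fills in the keys but never sets default_factory:
b = defaultdict.fromkeys(['x', 'y'], list)
assert b.default_factory is None   # so missing keys raise KeyError

# Passing the factory to the constructor gives a dict that really
# defaults; the second argument seeds the initial keys.
c = defaultdict(list, dict.fromkeys(['x', 'y'], 0))
```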

Do other people find this intuitive?

Would it be better if defaultdict.fromkeys raised a
NotImplementedException?

Or would it be better to redefine how defaultdict.fromkeys works, so
that it first creates the defaultdict, and then goes through the keys?

All comments welcome.  If I get some positive feedback, I'm going to try
to submit a patch.

Matt
--
http://mail.python.org/mailman/listinfo/python-list


Mutually referencing imports -- impossible?

2008-07-13 Thread Matthew Wilson
I started off with a module that defined a class Vehicle, and then
subclasses Car and Motorcycle.

In the Car class,  for some bizarre reason, I instantiated a Motorcycle.
Please pretend that this can't be avoided for now.

Meanwhile, my Motorcycle class instantiated a Car as well.

Then I moved the Car and Motorcycle classes into separate files.  Each
imported the Vehicle module.

Then I discovered that my Car module failed because the global
Motorcycle wasn't defined.  The same problem happened in my Motorcycle
module.  Car and Motorcycle can't both import each other.

In the beginning, when all three (Vehicle, Car, and Motorcycle) were
defined in the same file, everything worked fine.

I don't know how to split them out in separate files now though and I
really wish I could because the single file is enormous.
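Here's a self-contained sketch of the one workaround I've seen suggested:
defer each cross-import into the method that needs it, so neither module
needs the other at import time.  (The sketch writes the three toy modules
to a temp directory just so it runs as-is; in real life they'd be
ordinary files.)

```python
import os
import sys
import tempfile
import textwrap

src = {
    'vehicle': "class Vehicle(object): pass",
    'car': textwrap.dedent("""
        from vehicle import Vehicle

        class Car(Vehicle):
            def make_sidekick(self):
                from motorcycle import Motorcycle  # deferred import
                return Motorcycle()
        """),
    'motorcycle': textwrap.dedent("""
        from vehicle import Vehicle

        class Motorcycle(Vehicle):
            def make_support_car(self):
                from car import Car  # deferred import
                return Car()
        """),
}

# Write the toy modules somewhere importable.
d = tempfile.mkdtemp()
for name, code in src.items():
    with open(os.path.join(d, name + '.py'), 'w') as f:
        f.write(code)
sys.path.insert(0, d)

from car import Car
sidekick = Car().make_sidekick()
```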

Any ideas?

Matt




--
http://mail.python.org/mailman/listinfo/python-list


How to package a logging.config file?

2008-07-13 Thread Matthew Wilson
I'm working on a package that uses the standard library logging module
along with a .cfg file.

In my code, I use
logging.config.fileConfig('/home/matt/mypackage/matt.cfg') to load in
the logging config file.

However, it seems really obvious to me that this won't work when I share
this package with others.

I can't figure out what path to use when I load my .cfg file.

Any ideas?

Matt
--
http://mail.python.org/mailman/listinfo/python-list


Re: How to package a logging.config file?

2008-07-15 Thread Matthew Wilson
On Mon 14 Jul 2008 09:25:19 AM EDT, Vinay Sajip wrote:
> Is your package a library or an application? If it's a library, you
> should avoid configuring logging using a config file - this is because
> logging configuration is process-wide, and if multiple libraries use
> fileConfig to configure their logging, you may get unexpected results.

I thought that the point of using logging.getLogger('foo.bar.baz') was
to allow each module/class/function to choose from the available
configurations.

So, I can define a really weird logger and still be a good citizen.

As long as I don't tweak the root logger, is it safe to use a config
file?

Matt

--
http://mail.python.org/mailman/listinfo/python-list


Re: Factory for Struct-like classes

2008-08-14 Thread Matthew Wilson
On Thu 14 Aug 2008 11:19:06 AM EDT, Larry Bates wrote:
> eliben wrote:
>> Hello,
>> 
>> I want to be able to do something like this:
>> 
>> Employee = Struct(name, salary)
>> 
>> And then:
>> 
>> john = Employee('john doe', 34000)
>> print john.salary

I find something like this useful, especially if any time I tried to
cram in an attribute that wasn't allowed, the class raises an exception.

One way to do it is to make a function that defines a class inside and
then returns it.  See the code at the end of this post for an example.

I couldn't figure out how to do this part though:

>> Employee = Struct(name, salary)

I have to do this instead (notice that the args are strings):

>> Employee = Struct('name', 'salary')

Anyway, here's the code:

def struct_maker(*args):

    class C(object):
        arglist = args

        def __init__(self, *different_args):

            # Catch too few/too many args.
            if len(self.arglist) != len(different_args):
                raise ValueError("I need exactly %d args (%s)"
                    % (len(self.arglist), list(self.arglist)))

            for a, b in zip(self.arglist, different_args):
                setattr(self, a, b)

        def __setattr__(self, k, v):
            "Prevent any attributes except the first ones."
            if k in self.arglist:
                object.__setattr__(self, k, v)
            else:
                raise ValueError("%s ain't in %s"
                    % (k, list(self.arglist)))

    return C

And here it is in action:

In [97]: Employee = struct_maker('name', 'salary')

In [98]: matt = Employee('Matt Wilson', 11000)

In [99]: matt.name, matt.salary

Out[99]: ('Matt Wilson', 11000)

In [100]: matt.invalid_attribute = 99
---
ValueError: invalid_attribute ain't in ['name', 'salary']


Matt
--
http://mail.python.org/mailman/listinfo/python-list


How to measure speed improvements across revisions over time?

2010-05-10 Thread Matthew Wilson
I know how to use timeit and/or profile to measure the current run-time
cost of some code.

I want to record the time used by some original implementation, then
after I rewrite it, I want to find out if I made stuff faster or slower,
and by how much.

Other than me writing down numbers on a piece of paper on my desk, does
some tool that does this already exist?

If it doesn't exist, how should I build it?

Matt

-- 
http://mail.python.org/mailman/listinfo/python-list


Need help using callables and setup in timeit.Timer

2010-05-12 Thread Matthew Wilson
I want to time some code that depends on some setup.  The setup code
looks a little like this:

>>> b = range(1, 1001)

And the code I want to time looks vaguely like this:

>>> sorted(b)

Except my code uses a different function than sorted.  But that ain't
important right now.

Anyhow, I know how to time this code as long as I pass in strings to timeit:

>>> import timeit
>>> t = timeit.Timer("sorted(b)", "b = range(1, 1001)")
>>> min(t.repeat(3, 100))

How do I use a setup callable and have it put stuff into the namespace
that the stmt callable can access?

In other words, I want to do the same thing as above, but using
callables as the parameters to timeit.Timer, not strings of python code.
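The closest thing I've found to a callable setup is a factory function:
run the setup inside the factory and let the stmt callable close over the
results.  A sketch:

```python
import timeit

def make_timer():
    b = list(range(1, 1001))                # the "setup", run once here
    return timeit.Timer(lambda: sorted(b))  # stmt closes over b

t = make_timer()
best = min(t.repeat(3, 100))
```

Timer has accepted a callable stmt since Python 2.6, but a callable setup
runs in a separate namespace, which is why the closure trick helps.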

Here's my long-run goal.  I want to reuse code from my unit tests to
test performance:

import unittest
class TestSorting(unittest.TestCase):

def setUp(self):
self.b = range(1, 1001)

def test_sorted(self):
sorted(self.b)

I expect to have to do a little work.  The timeit setup will need to
make an instance of TestSorting and somehow the stmt code will have to
use that particular instance.

Once I understand how to have the timeit setup put stuff into the same
namespace as the timeit stmt, I'll attack how to translate
unittest.TestCase instances into something that can feed right into
timeit.Timer.

Matt



-- 
http://mail.python.org/mailman/listinfo/python-list


Re: indexing lists/arrays question

2010-05-13 Thread Matthew Wilson
On Thu 13 May 2010 10:36:58 AM EDT, a wrote:
> this must be easy but its taken me a couple of hours already
>
> i have
>
> a=[2,3,3,4,5,6]
>
> i want to know the indices where a==3 (ie 1 and 2)
>
> then i want to reference these in a
>
> ie what i would do in IDL is
>
> b=where(a eq 3)
> a1=a(b)


There's several solutions.  Here's one:

It is a recipe for madness to use a list of integers and then talk
about the position of those integers, so I renamed your list to use
strings.

>>> a = ['two', 'three', 'three', 'four','five', 'six']

Now I'll use the enumerate function to iterate through each element and
get its position::

>>> for position, element in enumerate(a):
...     print position, element
...
0 two
1 three
2 three
3 four
4 five
5 six

And now filter:

>>> for position, element in enumerate(a):
...     if element == 'three':
...         print position, element

1 three
2 three

And now do something different besides printing:

>>> b = []
>>> for position, element in enumerate(a):
...     if element == 'three':
...         b.append(position)

And now we can rewrite the whole thing from scratch to use a list
comprehension:

>>> [position for (position, element) in enumerate(a) if element == 'three']
[1, 2]

HTH

Matt

-- 
http://mail.python.org/mailman/listinfo/python-list


Where should I store docs in my project?

2009-06-09 Thread Matthew Wilson
I used paster to create a project named pitz.  I'm writing a bunch of
user documentation.  Where should I put it?

The project looks a little like this:

/home/matt/projects/pitz
    setup.py
    pitz/
        __init__.py  # has my project code
    docs/  # has my reST files
    tests/  # has some tests

Is there a convention for where to put the docs folder?
-- 
http://mail.python.org/mailman/listinfo/python-list


Is this pylint error message valid or silly?

2009-06-18 Thread Matthew Wilson
Here's the code that I'm feeding to pylint:

$ cat f.py
from datetime import datetime

def f(c="today"):

if c == "today":
c = datetime.today()

return c.date()


And here's what pylint says:

$ pylint -e f.py
No config file found, using default configuration
* Module f
E: 10:f: Instance of 'str' has no 'date' member (but some types could
not be inferred)

Is this a valid error message?  Is the code above bad?  If so, what is
the right way?

I changed from using a string as the default to None, and then pylint
didn't mind:


$ cat f.py 
from datetime import datetime

def f(c=None):

if c is None:
c = datetime.today()

return c.date()

$ pylint -e f.py 
No config file found, using default configuration

I don't see any difference between using a string vs None.  Both are
immutable.  I find the string much more informative, since I can write
out what I want.

Looking for comments.

Matt
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Is this pylint error message valid or silly?

2009-06-19 Thread Matthew Wilson
On Fri 19 Jun 2009 02:55:52 AM EDT, Terry Reedy wrote:
>> if c == "today":
>> c = datetime.today()
>
> Now I guess that you actually intend c to be passed as a datetime 
> object. You only used the string as a type annotation, not as a real 
> default value. Something like 'record_date = None' is better.

Thanks for the feedback.  I think I should have used a more obvious
string in my original example and a more descriptive parameter name.

So, pretend that instead of 

c="today"

I wrote 

record_date="defaults to today's date".   

I know my way is unorthodox, but I think it is a little bit more obvious
to the reader than

record_date=None

The None is a signal to use a default value, but that is only apparent
after reading the code.

Thanks again for the comments.

Matt
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Rich comparison methods don't work in sets?

2009-06-19 Thread Matthew Wilson
On Fri 19 Jun 2009 03:02:44 PM EDT, Gustavo Narea wrote:
> Hello, everyone.
>
> I've noticed that if I have a class with so-called "rich comparison"
> methods
> (__eq__, __ne__, etc.), when its instances are included in a set,
> set.__contains__/__eq__ won't call the .__eq__ method of the elements
> and thus
> the code below:
> """
> obj1 = RichComparisonClass()
> obj2 = RichComparisonClass()

What does 

>>> obj1 is obj2

return? I don't know anything about set internals.

Matt
-- 
http://mail.python.org/mailman/listinfo/python-list


Does cProfile include IO wait time?

2009-07-04 Thread Matthew Wilson
I have a command-line script that loads about 100 yaml files.  It takes
2 or 3 seconds.  I profiled my code and I'm using pstats to find what is
the bottleneck.

Here's the top 10 functions, sorted by internal time:

In [5]: _3.sort_stats('time').print_stats(10)
Sat Jul  4 13:25:40 2009    pitz_prof

         756872 function calls (739759 primitive calls) in 8.621 CPU seconds

   Ordered by: internal time
   List reduced from 1700 to 10 due to restriction <10>

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
    15153    0.446    0.000    0.503    0.000 build/bdist.linux-i686/egg/yaml/reader.py:134(forward)
    30530    0.424    0.000    0.842    0.000 build/bdist.linux-i686/egg/yaml/scanner.py:142(need_more_tokens)
    98037    0.423    0.000    0.423    0.000 build/bdist.linux-i686/egg/yaml/reader.py:122(peek)
     1955    0.415    0.000    1.265    0.001 build/bdist.linux-i686/egg/yaml/scanner.py:1275(scan_plain)
    69935    0.381    0.000    0.381    0.000 {isinstance}
    18901    0.329    0.000    3.908    0.000 build/bdist.linux-i686/egg/yaml/scanner.py:113(check_token)
     5414    0.277    0.000    0.794    0.000 /home/matt/projects/pitz/pitz/__init__.py:34(f)
    30935    0.258    0.000    0.364    0.000 build/bdist.linux-i686/egg/yaml/scanner.py:276(stale_possible_simple_keys)
    18945    0.192    0.000    0.314    0.000 /usr/local/lib/python2.6/uuid.py:180(__cmp__)
     2368    0.172    0.000    1.345    0.001 build/bdist.linux-i686/egg/yaml/parser.py:268(parse_node)

I expected to see a bunch of my IO file-reading code in there, but I
don't.  So this makes me think that the profiler measures CPU time, not
wall-clock time.

I'm not an expert on python profiling, and the docs seem sparse.  Can I
rule out IO as the bottleneck here?  How do I see the IO consequences?
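One experiment that might settle it: cProfile.Profile accepts a custom
timer callable, so passing time.time makes every number wall-clock
seconds, and time spent blocked on IO would then show up.  A sketch, with
a stand-in workload instead of my real YAML loading:

```python
import cProfile
import io
import pstats
import time

pr = cProfile.Profile(time.time)   # wall-clock timer instead of the default
pr.enable()
total = sum(range(100000))         # stand-in for the real workload
pr.disable()

s = io.StringIO()
pstats.Stats(pr, stream=s).sort_stats('time').print_stats(5)
report = s.getvalue()
```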

TIA

Matt
-- 
http://mail.python.org/mailman/listinfo/python-list


How to refer to data files without hardcoding paths?

2009-09-05 Thread Matthew Wilson
When a python package includes data files like templates or images,
what is the orthodox way of referring to these in code?

I'm working on an application installable through the Python package
index.  Most of the app is just python code, but I use a few jinja2
templates.  Today I realized that I'm hardcoding paths in my app.  They
are relative paths based on os.getcwd(), but at some point, I'll be
running scripts that use this code, these open(...) calls will fail.

I found several posts that talk about using __file__ and then walking
to nearby directories.

I also came across pkg_resources, and that seems to work, but I don't
think I understand it all yet.

Matt

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How to refer to data files without hardcoding paths?

2009-09-08 Thread Matthew Wilson
On Mon 07 Sep 2009 10:57:01 PM EDT, Gabriel Genellina wrote:
> I prefer
> to use pkgutil.get_data(packagename, resourcename) because it can handle
> those cases too.

I didn't know about pkgutil until now.  I thought I had to use setuptools
to do that kind of stuff.  Thanks!
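For anyone finding this later, here's a runnable sketch.  It fakes up a
throwaway package in a temp directory just so the call can execute; in a
real project the data file would ship inside your package:

```python
import os
import pkgutil
import sys
import tempfile

# Build a tiny package containing one data file.
d = tempfile.mkdtemp()
pkg = os.path.join(d, 'mypkg')
os.mkdir(pkg)
open(os.path.join(pkg, '__init__.py'), 'w').close()
with open(os.path.join(pkg, 'hello.txt'), 'w') as f:
    f.write('hi')
sys.path.insert(0, d)

# get_data resolves the name relative to the package, even when the
# package lives inside a zipped egg.
data = pkgutil.get_data('mypkg', 'hello.txt')
```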

Matt

-- 
http://mail.python.org/mailman/listinfo/python-list


Question about unpickling dict subclass with custom __setstate__

2009-09-10 Thread Matthew Wilson
I subclassed the dict class and added a __setstate__ method because I
want to add some extra steps when I unpickle these entities.  This is a
toy example of what I am doing:

class Entity(dict):

def __setstate__(self, d):

log.debug("blah...")

Based on my experiments, the data in d *IS NOT* the data stored in my
instances when I do stuff like:

e = Entity()
e['a'] = 1

Instead, the stuff in d is the data stored when I do stuff like:

e.fibityfoo = 99

Here's my question:

Is there anything I have to do to make sure that my real dictionary data
is correctly reloaded during the unpickle phase?  In other words, should
I run super(Entity, self).__setstate__(d) or something like that?
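Here's the toy example fleshed out with the answer I've tentatively
settled on: dict has no __setstate__ of its own, so updating
self.__dict__ by hand restores the attributes, while pickle restores the
mapping items separately (via the dictitems part of the reduce protocol):

```python
import pickle

class Entity(dict):
    def __setstate__(self, d):
        # d is the instance __dict__ (e.fibityfoo and friends), NOT the
        # mapping contents, so restore the attributes explicitly.
        self.__dict__.update(d)

e = Entity()
e['a'] = 1          # mapping data: pickle restores this on its own
e.fibityfoo = 99    # attribute data: this is what arrives in __setstate__

e2 = pickle.loads(pickle.dumps(e, 2))
```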

TIA

Matt

-- 
http://mail.python.org/mailman/listinfo/python-list


How do I begin debugging a python memory leak?

2009-09-16 Thread Matthew Wilson
I have a web app based on TurboGears 1.0.  In the last few days, as
traffic and usage has picked up, I noticed that the app went from using
4% of my total memory all the way up to 50%.

I suspect I'm loading data from the database and somehow preventing
garbage collection.

Are there any tools that might help me out with this?
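Since posting I've learned that the stdlib gc module can at least take a
census of live objects; third-party tools like objgraph and guppy/heapy
go further.  A sketch — a type whose count only ever grows between
snapshots is a good leak suspect:

```python
import gc
from collections import Counter

gc.collect()   # collect real garbage first so the census is honest

# Count live objects the collector knows about, grouped by type name.
counts = Counter(type(o).__name__ for o in gc.get_objects())
top_ten = counts.most_common(10)
```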



-- 
http://mail.python.org/mailman/listinfo/python-list


Any logger created before calling logging.config.dictCOnfig is not configured

2013-03-06 Thread W. Matthew Wilson
It seems that any logger I create BEFORE calling
logging.config.dictConfig does not get configured.

Meanwhile, if I configure the logger like I always have, by just setting
handlers on root, everything works fine, including the loggers that were
created BEFORE I configure logging.

I make a lot of module-level log instances.

I wrote a simple script to show my problem.  Run it like:

$ python scratch.py code

and then

$ python scratch.py dict

and see how the logging output is different.

### SCRIPT START

import argparse
import logging
import logging.config

log1 = logging.getLogger('scratch')

def configure_logging_with_dictConfig():

    d = {
        'formatters': {
            'consolefmt': {
                'format': ('%(asctime)s %(levelname)-10s %(process)-6d '
                           '%(name)-24s %(lineno)-4d %(message)s')}},

        'handlers': {
            'console': {
                'class': 'logging.StreamHandler',
                'formatter': 'consolefmt',
                'level': 'DEBUG'}},

        'root': {
            'handlers': ['console'],
            'level': 'DEBUG'},

        'version': 1}

    logging.config.dictConfig(d)

def configure_logging_with_code():

    # Set the root logger to DEBUG.
    logging.root.setLevel(logging.DEBUG)

    # Now write a custom formatter, so that we get all those different
    # things.
    f = logging.Formatter(
        '%(asctime)s '
        '%(levelname)-10s '
        '%(process)-6d '
        '%(filename)-24s '
        '%(lineno)-4d '
        '%(message)s '
    )

    # Set up a stream handler for DEBUG stuff (and greater).
    sh = logging.StreamHandler()
    sh.setLevel(logging.DEBUG)
    sh.setFormatter(f)
    logging.root.addHandler(sh)

def set_up_arguments():

    ap = argparse.ArgumentParser()
    ap.add_argument('how_to_configure', choices=['code', 'dict'])
    return ap.parse_args()

if __name__ == '__main__':

    args = set_up_arguments()

    if args.how_to_configure == 'code':
        configure_logging_with_code()

    elif args.how_to_configure == 'dict':
        configure_logging_with_dictConfig()

    log1.debug('debug from log1')

    # log2 is created AFTER I configure logging.
    log2 = logging.getLogger('log2')
    log2.debug('debug from log2')

    # Try to figure out what is the difference!  Nothing jumps out at me.

    print "log1.root.level: {0}".format(log1.root.level)
    print "log1.root.handlers: {0}".format(log1.root.handlers)

    print "log1.parent.level: {0}".format(log1.parent.level)
    print "log1.parent.handlers: {0}".format(log1.parent.handlers)

    print "log1.level: {0}".format(log1.level)
    print "log1.handlers: {0}".format(log1.handlers)
    print "log1.propagate: {0}".format(log1.propagate)
    print "log1.getEffectiveLevel(): {0}".format(log1.getEffectiveLevel())

### SCRIPT END
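For reference, dictConfig's schema includes a disable_existing_loggers
key that defaults to True, which silences every logger created before the
call — exactly the behavior the script demonstrates.  A minimal config
that keeps the early loggers working:

```python
import logging
import logging.config

early_logger = logging.getLogger('scratch')  # created BEFORE configuration

logging.config.dictConfig({
    'version': 1,
    # The default, True, would disable early_logger.
    'disable_existing_loggers': False,
    'handlers': {'console': {'class': 'logging.StreamHandler',
                             'level': 'DEBUG'}},
    'root': {'handlers': ['console'], 'level': 'DEBUG'},
})

early_logger.debug('debug from early_logger')  # now reaches the handler
```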


-- 
W. Matthew Wilson
m...@tplus1.com
http://tplus1.com
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Missing logging output in Python

2013-03-12 Thread W. Matthew Wilson
I made that code into a program like this:

### BEGIN

import logging

def configure_logging():

    logging.basicConfig(level=logging.DEBUG,
                        format=('%(asctime)s %(name)-12s '
                                '%(levelname)8s %(message)s'),
                        datefmt='%Y-%m-%d\t%H:%M:%s',
                        filename='/tmp/logfun.log', filemode='a')

    # define a Handler that writes INFO messages to sys.stderr
    console = logging.StreamHandler()
    console.setLevel(logging.INFO)

    # set a format that is clearer for console use
    formatter = logging.Formatter('%(name)-12s: %(levelname)-8s '
                                  '%(message)s')

    # tell the handler to use this format
    console.setFormatter(formatter)

    # add the handler to the root logger
    logging.getLogger('').addHandler(console)

if __name__ == '__main__':

    configure_logging()

    logging.debug('a')
    logging.info('b')
    logging.warn('c')
    logging.error('d')
    logging.critical('e')

### END

and when I run the program, I get INFO and greater messages to stderr:

$ python logfun.py
root        : INFO     b
root        : WARNING  c
root        : ERROR    d
root        : CRITICAL e

and I get this stuff in the log file:

$ cat /tmp/logfun.log
2013-03-12 07:31:1363087862 root            DEBUG a
2013-03-12 07:31:1363087862 root             INFO b
2013-03-12 07:31:1363087862 root          WARNING c
2013-03-12 07:31:1363087862 root            ERROR d
2013-03-12 07:31:1363087862 root         CRITICAL e

In other words, your code works!  Maybe you should check permissions on the
file you are writing to.

Matt




On Fri, Mar 8, 2013 at 9:07 AM,  wrote:

> Hi,
>
> I would like to enable loggin in my script using the logging module that
> comes with Python 2.7.3.
>
> I have the following few lines setting up logging in my script, but for
> whatever reason  I don't seem to get any output to stdout  or to a file
> provided to the basicConfig method.
>
> Any ideas?
>
> # cinfiguring logging
> logging.basicConfig(level=logging.DEBUG, format='%(asctime)s %(name)-12s
> %(levelname)8s %(message)s',
> datefmt='%Y-%m-%d\t%H:%M:%s',
> filename=config["currentLoop"], filemode='a')
> # define a Handler that writes INFO messages to sys.stderr
> console = logging.StreamHandler()
> console.setLevel(logging.INFO)
> # set format that is cleaber for console use
> formatter = logging.Formatter('%(name)-12s: %(levelname)-8s %(message)s')
> # tell the handler to use this format
> console.setFormatter(formatter)
> # add the handler to the root logger
> logging.getLogger('').addHandler(console)
> --
> http://mail.python.org/mailman/listinfo/python-list
>



-- 
W. Matthew Wilson
m...@tplus1.com
http://tplus1.com
-- 
http://mail.python.org/mailman/listinfo/python-list