INFO messages to sys.stderr
> console = logging.StreamHandler()
> console.setLevel(logging.INFO)
> # set a format that is cleaner for console use
> formatter = logging.Formatter('%(name)-12s: %(levelname)-8s %(message)s')
> # tell the handler to use this format
>
print "log.root.level: {0}".format(log1.root.level)
print "log.root.handlers: {0}".format(log1.root.handlers)
print "log1.parent.level: {0}".format(log1.parent.level)
print "log1.parent.handlers: {0}".format(log1.parent.handlers)
print "log1.level: {0}".format(log1.level)
print "log1.handlers: {0}".format(log1.handlers)
print "log1.propagate: {0}".format(log1.propagate)
print "log1.getEffectiveLevel(): {0}".format(log1.getEffectiveLevel())
### SCRIPT END
--
W. Matthew Wilson
m...@tplus1.com
http://tplus1.com
--
http://mail.python.org/mailman/listinfo/python-list
On Thu 13 May 2010 10:36:58 AM EDT, a wrote:
> this must be easy but it's taken me a couple of hours already
>
> i have
>
> a=[2,3,3,4,5,6]
>
> i want to know the indices where a==3 (ie 1 and 2)
>
> then i want to reference these in a
>
> ie what i would do in IDL is
>
> b=where(a eq 3)
> a1=a(b)
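In Python, a list comprehension over enumerate() covers both steps of the IDL idiom; a minimal sketch:

```python
a = [2, 3, 3, 4, 5, 6]

# indices where the value equals 3 (IDL's where(a eq 3))
b = [i for i, x in enumerate(a) if x == 3]

# index back into a with those positions (IDL's a(b))
a1 = [a[i] for i in b]

print(b)   # [1, 2]
print(a1)  # [3, 3]
```

(With numpy installed, `numpy.where(a == 3)` on an array is closer still to the IDL spelling.)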
I want to time some code that depends on some setup. The setup code
looks a little like this:
>>> b = range(1, 1001)
And the code I want to time looks vaguely like this:
>>> sorted(b)
Except my code uses a different function than sorted. But that ain't
important right now.
Anyhow, I
I know how to use timeit and/or profile to measure the current run-time
cost of some code.
I want to record the time used by some original implementation, then
after I rewrite it, I want to find out if I made stuff faster or slower,
and by how much.
Other than me writing down numbers on a piece o
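Short of pencil and paper, timeit can run both versions under identical setup and let you compare the numbers; a sketch, with the setup string and the "new" implementation standing in as assumptions for the real code:

```python
import timeit

setup = "b = list(range(1, 1001))"

# time the original and the rewrite under the same setup
old = timeit.timeit("sorted(b)", setup=setup, number=1000)
new = timeit.timeit("sorted(b, reverse=True)", setup=setup, number=1000)

print("old: %.6fs  new: %.6fs  ratio: %.2f" % (old, new, new / old))
```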
I have a web app based on TurboGears 1.0. In the last few days, as
traffic and usage have picked up, I noticed that the app went from using
4% of my total memory all the way up to 50%.
I suspect I'm loading data from the database and somehow preventing
garbage collection.
Are there any tools that
I subclassed the dict class and added a __setstate__ method because I
want to add some extra steps when I unpickle these entities. This is a
toy example of what I am doing:
class Entity(dict):
def __setstate__(self, d):
log.debug("blah...")
Based on my experiments, the
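One likely finding from such experiments: pickle only calls __setstate__ when the pickled object carried some state, so a dict subclass with an empty instance __dict__ can skip it entirely. A sketch where the instance has state, so the hook fires (the attribute names are made up):

```python
import pickle

class Entity(dict):
    def __init__(self, *args, **kwargs):
        dict.__init__(self, *args, **kwargs)
        self.created = True  # instance state, so pickle has something to restore

    def __setstate__(self, state):
        # extra steps on unpickling go here
        self.__dict__.update(state)
        self.fixed_up = True

e = Entity({"a": 1})
e2 = pickle.loads(pickle.dumps(e))
print(e2)           # {'a': 1}
print(e2.fixed_up)  # True
```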
On Mon 07 Sep 2009 10:57:01 PM EDT, Gabriel Genellina wrote:
> I prefer
> to use pkgutil.get_data(packagename, resourcename) because it can handle
> those cases too.
I didn't know about pkgutil until now. I thought I had to use setuptools to
do that kind of stuff. Thanks!
Matt
When a python package includes data files like templates or images,
what is the orthodox way of referring to these in code?
I'm working on an application installable through the Python package
index. Most of the app is just python code, but I use a few jinja2
templates. Today I realized that I'm
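pkgutil.get_data(package, resource) reads a file relative to the package's __init__.py and returns bytes, regardless of how the package was installed. A sketch that builds a throwaway package to demonstrate (the package and file names are made up):

```python
import os
import sys
import tempfile
import pkgutil

# build a throwaway package with one data file
root = tempfile.mkdtemp()
pkg_dir = os.path.join(root, "mypkg")
os.mkdir(pkg_dir)
open(os.path.join(pkg_dir, "__init__.py"), "w").close()
with open(os.path.join(pkg_dir, "template.html"), "w") as f:
    f.write("<h1>hello</h1>")

sys.path.insert(0, root)

# resource path is relative to the package directory
data = pkgutil.get_data("mypkg", "template.html")
print(data)  # b'<h1>hello</h1>'
```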
I have a command-line script that loads about 100 yaml files. It takes
2 or 3 seconds. I profiled my code and I'm using pstats to find what is
the bottleneck.
Here's the top 10 functions, sorted by internal time:
In [5]: _3.sort_stats('time').print_stats(10)
Sat Jul 4 13:25:40 2009
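For reference, the same kind of report can be produced in a few lines with cProfile plus pstats; a sketch with a throwaway function standing in for the yaml loading:

```python
import cProfile
import io
import pstats

def slow():
    # stand-in for the real workload
    return sorted(range(100000), reverse=True)

pr = cProfile.Profile()
pr.enable()
slow()
pr.disable()

buf = io.StringIO()
pstats.Stats(pr, stream=buf).sort_stats("time").print_stats(10)
print(buf.getvalue())
```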
On Fri 19 Jun 2009 03:02:44 PM EDT, Gustavo Narea wrote:
> Hello, everyone.
>
> I've noticed that if I have a class with so-called "rich comparison"
> methods
> (__eq__, __ne__, etc.), when its instances are included in a set,
> set.__contains__/__eq__ won't call the .__eq__ method of the elements
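Worth noting: set membership hashes first and only calls __eq__ on a hash match, so __hash__ has to agree with __eq__ or __contains__ never gets as far as your comparison method. A minimal sketch:

```python
class Point(object):
    def __init__(self, x, y):
        self.x, self.y = x, y

    def __eq__(self, other):
        return (self.x, self.y) == (other.x, other.y)

    def __ne__(self, other):
        return not self == other

    def __hash__(self):
        # must agree with __eq__: equal objects need equal hashes
        return hash((self.x, self.y))

s = {Point(1, 2)}
print(Point(1, 2) in s)  # True
```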
On Fri 19 Jun 2009 02:55:52 AM EDT, Terry Reedy wrote:
>> if c == "today":
>> c = datetime.today()
>
> Now I guess that you actually intend c to be passed as a datetime
> object. You only used the string as a type annotation, not as a real
> default value. Something li
Here's the code that I'm feeding to pylint:
$ cat f.py
from datetime import datetime
def f(c="today"):
if c == "today":
c = datetime.today()
return c.date()
And here's what pylint says:
$ pylint -e f.py
No config file found, using defau
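The usual fix for what pylint is unhappy about is a None sentinel instead of a string that means "compute this at call time"; a sketch:

```python
from datetime import datetime, date

def f(c=None):
    # None means "use today"; avoids comparing a datetime to a string
    if c is None:
        c = datetime.today()
    return c.date()

print(f())                       # today's date
print(f(datetime(2009, 6, 19)))  # 2009-06-19
```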
I used paster to create a project named pitz. I'm writing a bunch of
user documentation. Where should I put it?
The project looks a little like this:
/home/matt/projects/pitz
setup.py
pitz/
__init__.py # has my project code
docs/ # has my reST files
I'm using a homemade script to verify some code samples in my
documentation. Here it is:
#! /usr/bin/env python2.6
# vim: set expandtab ts=4 sw=4 filetype=python:
import doctest, os, sys
def main(s):
"Run doctest.testfile(s, None)"
return doctest.testfile(s, None)
I use a @property decorator to turn some methods on a class into
properties. I want to be able to access some of the attributes of the
original function, but I don't know how to get to it.
Any ideas?
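The property object keeps the original function as its fget attribute (fset and fdel for setters and deleters), so it's reachable on the class; a sketch:

```python
class C(object):
    @property
    def size(self):
        "The size, with a docstring worth reaching."
        return 42

# accessing the property on the CLASS gives the property object itself
original = C.size.fget

print(original.__name__)  # size
print(original.__doc__)   # The size, with a docstring worth reaching.
print(original(C()))      # 42
```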
Matt
On Sun 24 May 2009 03:42:01 PM EDT, Kay Schluehr wrote:
>
> General answer: you can encode finite state machines as grammars.
> States as non-terminals and transition labels as terminals:
>
> UNSTARTED: 'start' STARTED
> STARTED: 'ok' FINISHED | 'cancel' ABANDONED
> ABANDONED: 'done'
> FINISHED: 'd
I'm working on a really simple workflow for my bug tracker. I want
filed bugs to start in an UNSTARTED status. From there, they can go to
STARTED.
From STARTED, bugs can go to FINISHED or ABANDONED.
I know I can easily hard-code this stuff into some if-clauses, but I
expect to need to add a lo
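Short of a grammar, the legal transitions can live in a plain dict keyed by (state, action), which stays easy to extend as the workflow grows; a sketch:

```python
transitions = {
    ("UNSTARTED", "start"): "STARTED",
    ("STARTED", "ok"): "FINISHED",
    ("STARTED", "cancel"): "ABANDONED",
}

def advance(state, action):
    "Return the next state, or complain about an illegal transition."
    try:
        return transitions[(state, action)]
    except KeyError:
        raise ValueError("can't %r from %s" % (action, state))

print(advance("UNSTARTED", "start"))  # STARTED
print(advance("STARTED", "cancel"))   # ABANDONED
```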
On Thu 07 May 2009 09:25:52 AM EDT, Tim Chase wrote:
> While it doesn't use grep or external processes, I'd just do it
> in pure Python:
Thanks for the code!
I'm reluctant to take that approach for a few reasons:
1. Writing tests for that code seems like a fairly large amount of work.
I think I
On Thu 07 May 2009 09:09:53 AM EDT, Diez B. Roggisch wrote:
> Matthew Wilson wrote:
>>
>> As of May 2009, what is the recommended way to run an external process
>> like grep and capture STDOUT and the error code?
>
> subprocess. Which becomes pretty clear when readi
I'm writing a command-line application and I want to search through lots
of text files for a string. Instead of writing the python code to do
this, I want to use grep.
This is the command I want to run:
$ grep -l foo dir
In other words, I want to list all files in the directory dir that
contain
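With subprocess, capturing stdout and the exit code looks like the sketch below (note `grep -l foo dir` needs -r to recurse into a directory; this assumes grep is installed, and builds a throwaway directory to search):

```python
import os
import subprocess
import tempfile

# throwaway directory with one matching file
d = tempfile.mkdtemp()
with open(os.path.join(d, "a.txt"), "w") as f:
    f.write("foo bar\n")

p = subprocess.Popen(["grep", "-rl", "foo", d],
                     stdout=subprocess.PIPE,
                     stderr=subprocess.PIPE)
out, err = p.communicate()

print(out)           # the files that matched, one per line
print(p.returncode)  # 0 means grep found at least one match
```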
On Sun 03 May 2009 09:24:59 PM EDT, Ben Finney wrote:
> Not every simple function belongs in the standard library :-)
Thanks for the help with this! Maybe I'm overestimating how often
people need this walkup function.
Matt
Is there already a tool in the standard library to let me walk up from a
subdirectory to the top of my file system?
In other words, I'm looking for something like:
>>> for x in walkup('/home/matt/projects'):
... print(x)
/home/matt/projects
/home/matt
/home
/
I know I
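There's nothing in the standard library quite like this, but os.path.dirname gets there in a few lines; a sketch (the output matches the example above on a POSIX path):

```python
import os

def walkup(path):
    "Yield path, then each of its parents, up to the filesystem root."
    path = os.path.abspath(path)
    while True:
        yield path
        parent = os.path.dirname(path)
        if parent == path:  # dirname('/') == '/', so we've hit the root
            return
        path = parent

for x in walkup("/home/matt/projects"):
    print(x)
```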
I'm working on a package that includes some files that are meant to be
copied and edited by people using the package.
My project is named "pitz" and it is a bugtracker. Instead of using a
config file to set the options for a project, I want to use python
files.
When somebody installs pitz, I wan
I want to have .foo directory that contains some python code. I can't
figure out how to import code from that .foo directory. Is this even
possible?
TIA
Matt
I want to write some middleware to notice when the inner app returns a
500 status code. I'm sure there are already sophisticated loggers that
do this sort of thing, but I'm using this as a learning exercise.
Right now, I wrapped the start_response callable. So when the WSGI
application calls the
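One shape that wrapping can take: the middleware hands the inner app a substitute start_response that inspects the status line before passing everything through. A sketch with a toy failing app (all names are made up):

```python
class StatusWatcher(object):
    "WSGI middleware that records every 500 status line it sees."

    def __init__(self, app):
        self.app = app
        self.seen = []

    def __call__(self, environ, start_response):
        def watching_start_response(status, headers, exc_info=None):
            if status.startswith("500"):
                self.seen.append(status)
            return start_response(status, headers, exc_info)
        return self.app(environ, watching_start_response)

# toy inner app that always fails
def bad_app(environ, start_response):
    start_response("500 Internal Server Error",
                   [("Content-Type", "text/plain")])
    return [b"boom"]

wrapped = StatusWatcher(bad_app)
body = wrapped({}, lambda status, headers, exc_info=None: None)
print(wrapped.seen)  # ['500 Internal Server Error']
```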
On Fri 17 Oct 2008 04:52:47 PM EDT, Steve Holden wrote:
> Matthew Wilson wrote:
>> I started with a module with a bunch of classes that represent database
>> tables. A lot of these classes have methods that use other classes
>> inside, sort of like this:
>
I started with a module with a bunch of classes that represent database
tables. A lot of these classes have methods that use other classes
inside, sort of like this:
class C(object):
@classmethod
def c1(cls, a):
return a
class D(object):
def d1(self, a
I suspect the solution to my problem is something really trivial.
I wrote a module called pitz that contains a class Issue:
>>> pitz.Issue.yaml_tag
u'ditz.rubyforge.org,2008-03-06/issue'
Then I try to load a document with that same tag, but I get a
ConstructorError:
ConstructorError
On Thu 14 Aug 2008 11:19:06 AM EDT, Larry Bates wrote:
> eliben wrote:
>> Hello,
>>
>> I want to be able to do something like this:
>>
>> Employee = Struct(name, salary)
>>
>> And then:
>>
>> john = Employee('john doe', 34000)
>> print john.salary
I find something like this useful, especially
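collections.namedtuple (new in Python 2.6) does exactly this out of the box; a sketch:

```python
from collections import namedtuple

Employee = namedtuple("Employee", ["name", "salary"])

john = Employee("john doe", 34000)
print(john.salary)  # 34000
print(john)         # Employee(name='john doe', salary=34000)
```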
On Mon 14 Jul 2008 09:25:19 AM EDT, Vinay Sajip wrote:
> Is your package a library or an application? If it's a library, you
> should avoid configuring logging using a config file - this is because
> logging configuration is process-wide, and if multiple libraries use
> fileConfig to configure thei
I'm working on a package that uses the standard library logging module
along with a .cfg file.
In my code, I use
logging.config.fileConfig('/home/matt/mypackage/matt.cfg') to load in
the logging config file.
However, it seems really obvious to me that this won't work when I share
this package wit
I started off with a module that defined a class Vehicle, and then
subclasses Car and Motorcycle.
In the Car class, for some bizarre reason, I instantiated a Motorcycle.
Please pretend that this can't be avoided for now.
Meanwhile, my Motorcycle class instantiated a Car as well.
Then I moved th
I used defaultdict.fromkeys to make a new defaultdict instance, but I
was surprised by behavior:
>>> b = defaultdict.fromkeys(['x', 'y'], list)
>>> b
defaultdict(None, {'y': <type 'list'>, 'x': <type 'list'>})
>>> b['x']
<type 'list'>
>>> b['z']
-
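The surprise is that fromkeys is inherited unchanged from dict: it stores `list` (the type itself) as each value and leaves default_factory as None. Passing the factory to the constructor is what wires up the on-demand behavior; a sketch of both:

```python
from collections import defaultdict

# fromkeys stores `list` as each VALUE; default_factory stays None
b = defaultdict.fromkeys(["x", "y"], list)
print(b.default_factory)  # None -- so b['z'] still raises KeyError
print(b["x"])             # the `list` type object, not an empty list

# the constructor is what sets default_factory
c = defaultdict(list)
print(c["z"])  # []
```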
I have been experimenting with metaclasses lately. It seems possible to
define a metaclass by either subclassing type and then either redefining
__init__ or __new__.
Here's the signature for __init__:
def __init__(cls, name, bases, d):
and here's __new__:
def __new__(meta, classname,
In this code, I tried to kill my thread object by setting a variable on it
to False.
Inside the run method of my thread object, it checks a different
variable.
I've already rewritten this code to use semaphores, but I'm just curious
what is going on.
Here's the code:
import logging, threading,
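For the record, a plain attribute checked in run() usually does work (attribute reads and writes are atomic enough under the GIL), but threading.Event makes the intent explicit; a sketch:

```python
import threading
import time

class Worker(threading.Thread):
    def __init__(self):
        threading.Thread.__init__(self)
        self.please_stop = threading.Event()

    def run(self):
        # check the flag on every pass through the loop
        while not self.please_stop.is_set():
            time.sleep(0.01)

w = Worker()
w.start()
w.please_stop.set()
w.join()
print(w.is_alive())  # False
```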
I'm working on two coroutines -- one iterates through a huge stream, and
emits chunks in pieces. The other routine takes each chunk, then scores
it as good or bad and passes that score back to the original routine, so
it can make a copy of the stream with the score appended on.
I have the code wo
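The two-way conversation works with generator .send(): the chunker yields a chunk, and whatever the driver sends back becomes the value of the yield expression. A sketch with a made-up scoring rule:

```python
def chunker(stream, size, scored):
    "Yield chunks; whatever gets sent back in is that chunk's score."
    for i in range(0, len(stream), size):
        score = yield stream[i:i + size]
        scored.append(score)

scored = []
gen = chunker("abcdef", 2, scored)
chunk = next(gen)  # prime the generator to reach the first yield
while True:
    try:
        # deliver the score, receive the next chunk
        chunk = gen.send("good" if "a" in chunk else "bad")
    except StopIteration:
        break

print(scored)  # ['good', 'bad', 'bad']
```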
The python logging module is a beautiful masterpiece. I'm studying
filters and the config-file approach. Is it possible to define a filter
somehow and then refer to it in my config file?
TIA
Matt
I'm curious if anyone has ever tried using nosetests along with
minimock.
I'm trying to get the two to play nice and not making progress. I
also wonder if I'm using minimock incorrectly.
Here's the code I want to test, saved in a file dtfun.py.
class Chicken(object):
"I am a chicke
What are the most popular, easiest to use, and most powerful mock
object packages out there?
Thanks in advance.
Matt
I wrote some code to create a user and update a user on a remote box by
sending emails to that remote box. When I was done, I realized that my
create_user function and my update_user function were effectively
identical except for different docstrings and a single different value
inside:
### V
I want to write a function that each time it gets called, it returns a
random choice of 1 to 5 words from a list of words.
I can write this easily using for loops and random.choice(wordlist) and
random.randint(1, 5).
But I want to know how to do this using itertools, since I don't like
manually d
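One itertools spelling: an endless generator of random picks, cut down to size with islice, so no manual loop counter appears anywhere; a sketch:

```python
import random
from itertools import islice

def word_stream(wordlist):
    "An endless stream of random picks from wordlist."
    while True:
        yield random.choice(wordlist)

def some_words(wordlist):
    "Return 1 to 5 random words from wordlist."
    return list(islice(word_stream(wordlist), random.randint(1, 5)))

words = some_words(["spam", "eggs", "ham"])
print(words)
```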
The decorator as_string returns the decorated function's value as
string. In some instances I want to access just the function f,
though, and catch the values before they've been decorated.
Is this possible?
def as_string(f):
def anon(*args, **kwargs):
y = f(*args, **kwargs)
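It is possible: have the decorator stash a reference to the original on the wrapper. A sketch, with the attribute name `undecorated` made up for illustration:

```python
import functools

def as_string(f):
    @functools.wraps(f)
    def anon(*args, **kwargs):
        return str(f(*args, **kwargs))
    anon.undecorated = f  # keep a handle on the original function
    return anon

@as_string
def add(a, b):
    return a + b

print(add(1, 2))              # '3'
print(add.undecorated(1, 2))  # 3
```

In Python 3.2+, functools.wraps also records the original automatically as `anon.__wrapped__`, so the manual attribute becomes optional.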
Lately, I've been writing functions like this:
def f(a, b):
assert a in [1, 2, 3]
assert b in [4, 5, 6]
The point is that I'm checking the type and the values of the
parameters.
I'm curious how this does or doesn't fit into python's duck-typing
philosophy.
I find that when I detect inv
What are the internal methods that I need to define on any class so that
this code can work?
c = C("three")
i = int(c) # i is 3
I can handle the part of mapping "three" to 3, but I don't know what
internal method is called when int(c) happens.
For string conversion, I just define the __str__ me
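The int() counterpart of __str__ is __int__; a sketch with a made-up word-to-number table:

```python
class C(object):
    words = {"one": 1, "two": 2, "three": 3}

    def __init__(self, word):
        self.word = word

    def __int__(self):
        # int(c) calls this, just as str(c) calls __str__
        return self.words[self.word]

c = C("three")
i = int(c)
print(i)  # 3
```

(float() and complex() have the matching hooks __float__ and __complex__.)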
I want to verify that three parameters can all be converted into
integers, but I don't want to modify the parameters themselves.
This seems to work:
def f(a, b, c):
a, b, c = [int(x) for x in (a, b, c)]
Originally, I had a bunch of assert isinstance(a, int) statements at the
top of
I'm writing a function that accepts a function as an argument, and I
want to know to all the parameters that this function expects. How can
I find this out in my program, not by reading the source?
For example, I would want to know for the function below that I have to
pass in two things:
def f(
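The inspect module answers this; in current Python the spelling is inspect.signature (the thread-era equivalent was inspect.getargspec, since removed). A sketch:

```python
import inspect

def f(a, b, c=3):
    return a + b + c

sig = inspect.signature(f)
print(list(sig.parameters))         # ['a', 'b', 'c']
print(sig.parameters["c"].default)  # 3
```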
I wrote a function that I suspect may already exist as a python builtin,
but I can't find it:
def chunkify(s, chunksize):
"Yield sequence s in chunks of size chunksize."
for i in range(0, len(s), chunksize):
yield s[i:i+chunksize]
I wrote this because I need to take a string of a
On Wed 13 Sep 2006 10:38:03 AM EDT, Steve Holden wrote:
> That's intentional. Would you have it return the code of all the methods
> when you take the repr() of a class?
I don't think that would be required. Couldn't you return a string with
a call to the constructor inside? That's what sets.Se
I understand that idea of an object's __repr__ method is to return a
string representation that can then be eval()'d back to life, but it
seems to me that it doesn't always work.
For example it doesn't work for instances of the object class:
In [478]: eval(repr(object()))
--
On Tue 12 Sep 2006 10:06:27 AM EDT, Neil Cerutti wrote:
> Writing a thin wrapper around the dictionary might be beneficial,
> and would also furnish a place for the docstrings.
I wrote a function that hopefully does just that. I'm not very savvy at
doing this class-factory stuff, so any advice w
I build a lot of elaborate dictionaries in my interpreter, and then I
forget exactly how they work. It would be really nice to be able to add
notes to the dictionary.
Is there some way to do this now?
Matt
--
A better way of running series of SAS programs:
http://overlook.homelinux.net/wilso
I wrote a function that converts a tuple of tuples into html. For
example:
In [9]: x
Out[9]:
('html',
('head', ('title', 'this is the title!')),
('body',
('h1', 'this is the header!'),
('p', 'paragraph one is boring.'),
('p',
'but paragraph 2 ',
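A recursive renderer covers the nesting: strings pass through, tuples become a tag wrapped around the rendered children. A sketch (no attribute handling or escaping, and the sample tuple is abbreviated):

```python
def to_html(node):
    "Render a ('tag', child, child, ...) tuple tree as html."
    if isinstance(node, str):
        return node
    tag = node[0]
    inner = "".join(to_html(child) for child in node[1:])
    return "<%s>%s</%s>" % (tag, inner, tag)

x = ("html",
     ("head", ("title", "this is the title!")),
     ("body", ("h1", "this is the header!")))
print(to_html(x))
```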
On Thu 20 Jul 2006 04:32:28 AM EDT, Bruno Desthuilliers wrote:
>> self.__dict__[name] = value
> Make it:
> object.__setattr__(self, name, value)
>
> Your approach will lead to strange results if you mix it with properties
> or other descriptors...
Thanks!
>> cl
I sometimes inadvertently create a new attribute on an object rather
update a value bound to an existing attribute. For example:
In [5]: class some_class(object):
...: def __init__(self, a=None):
...: self.a = a
...:
In [6]: c = some_class(a=1)
In
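Besides overriding __setattr__, __slots__ is the built-in way to make a typo'd attribute name blow up instead of silently creating a new attribute; a sketch:

```python
class SomeClass(object):
    __slots__ = ("a",)  # only 'a' may ever be assigned

    def __init__(self, a=None):
        self.a = a

c = SomeClass(a=1)
c.a = 2  # fine: 'a' is a declared slot
try:
    c.b = 3  # typo for c.a: no such slot
except AttributeError as e:
    print(e)
```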
The random.jumpahead documentation says this:
Changed in version 2.3: Instead of jumping to a specific state, n steps
ahead, jumpahead(n) jumps to another state likely to be separated by
many steps.
I really want a way to get to the Nth value in a random series started
with a particu