Re: complaints about no replies last week

2009-03-31 Thread Arnaud Delobelle


Arnaud Delobelle wrote:

> prueba...@latinmail.com writes:
> [...]
> > I myself asked about how to write a library to efficiently do union
> > and intersection of sets containing time intervals some time ago on
> > this list and got little to no answers. It is a tricky problem. Since
> > I was getting paid I got an O(n*n) solution working. People on this
> > list on the other hand do not get paid and answer whatever strikes
> > their fancy. Sometimes the question is hard or confusing and nobody is
> > motivated enough to answer.
>
> I wasn't around when you posted this I guess. Do you mean intervals sets
> on the (real) number line such as:
>
>   1-3, 6-10 meaning all numbers between 1 and 3 and all numbers
>   between 6 and 10.
>
> In this case I think you can achieve union and intersection in O(nlogn)
> where n is the total number of intervals in the interval sets to unify
> or intersect. There is an implementation below. I have chosen a very
> simple data structure for interval sets: an interval set is the list of
> its endpoints. E.g.
>
> 1-3, 6-10 is the list [1, 3, 6, 10]
>
> This means that I can't specify whether an interval is closed or open.
> So in the implementation below all intervals are assumed to be open.
> The method could be made to work for any kind of intervals with the same
> complexity, there would just be a few more LOC.  I'm focusing on the
> principle - here it is:
>
>
> --
> # Implementation of union and intersection of interval sets.
>
> from itertools import *
>
> def combine(threshold, intsets):
>     endpoints = sorted(chain(*imap(izip, intsets, repeat(cycle([1, -1])))))
>     height = 0
>     compound = []
>     for x, step in endpoints:
>         old_height = height
>         height += step
>         if max(height, old_height) == threshold:
>             compound.append(x)
>     return compound
>
> def union(*intsets):
>     return combine(1, intsets)
>
> def intersection(*intsets):
>     return combine(len(intsets), intsets)
>
> # tests
>
> def pretty(a):
>     a = iter(a)
>     return ', '.join("%s-%s" % (a, b) for a, b in izip(a, a))
>
> tests = [
>     ([1, 5, 10, 15], [3, 11, 13, 20]),
>     ([2, 4, 6, 8], [4, 7, 10, 11]),
>     ([0, 11], [5, 10, 15, 25], [7, 12, 13, 15]),
> ]
>
> for intsets in tests:
>     print "sets: ", "; ".join(imap(pretty, intsets))
>     print "union: ", pretty(union(*intsets))
>     print "intersection: ", pretty(intersection(*intsets))
>     print "-"*20
> --
>
> Is this what you were looking for?
>
> --
> Arnaud

I realised after posting last night that I must be

(1) solving the wrong problem
(2) solving it badly

- My implementation of the combine() function above is O(nlogn)
(because of the sorted() call) whereas it could be O(n) by iterating
over the interval sets in parallel, hence (2).  This would make
union() and intersection() O(n).
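
For two interval sets, the parallel walk could look something like this
untested sketch (merging the two endpoint lists instead of calling
sorted(), and tagging alternate endpoints with +1/-1 as before):

def merge_endpoints(a, b):
    """Yield (endpoint, step) pairs from two interval sets in order."""
    ai = bi = 0
    while ai < len(a) or bi < len(b):
        take_a = bi >= len(b) or (ai < len(a) and a[ai] <= b[bi])
        if take_a:
            yield a[ai], 1 if ai % 2 == 0 else -1
            ai += 1
        else:
            yield b[bi], 1 if bi % 2 == 0 else -1
            bi += 1

combine() would then consume these pairs directly, exactly as it does
with the sorted endpoints now.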

- As the problem was solved by the OP in O(n^2) I must be solving the
wrong problem (1).

I apologise for this.

However it was a nice and compact implementation IMHO :)

--
Arnaud
--
http://mail.python.org/mailman/listinfo/python-list


Re: Thoughts on language-level configuration support?

2009-03-31 Thread rustom
I am not sure I understand your solution. I certainly think that the
problem is big, very much bigger than is appreciated.
Think of the hoopla in the RoR world about convention-over-
configuration.

On the other hand I feel that emacs is becoming messier and messier
because it has taken up something like your idea.  Originally there
was only setq (lisp for assignment).  Now there is the whole customize-
mess.  Then again I guess it's not the idea that is wrong but its
current state of implementation.  To elaborate on this mess would be
too OT for this list.  Nevertheless it's a good starting point for the
kind of thing you are talking of.
--
http://mail.python.org/mailman/listinfo/python-list


Re: Deleteing empty directories

2009-03-31 Thread CinnamonDonkey
Steven you are right, isDirEmpty() isn't even used. That's what
happens when you try to get a last minute thread going 5 minutes
before home time! ;-)



Thanx for the responses guys! It's been very useful :)
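
For the record, an untested sketch of the os.walk() approach Steven
suggests below (walking bottom-up so children are handled before their
parents):

import os

def remove_empty_dirs(root):
    for dirpath, dirnames, filenames in os.walk(root, topdown=False):
        # re-check with listdir(): empty subdirectories may just have been removed
        if not os.listdir(dirpath):
            print "Deleting Empty Dir '%s'" % dirpath
            os.rmdir(dirpath)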




On 30 Mar, 16:38, Steven D'Aprano  wrote:
> On Mon, 30 Mar 2009 08:14:55 -0700, CinnamonDonkey wrote:
> > My understanding was that rmtree removes a whole tree not just the empty
> > directories?
>
> So it seems:
>
> >>> os.mkdir('die-die-die')
> >>> os.mkdir('die-die-die/stuff')
> >>> shutil.rmtree('die-die-die')
>
> I think what you want is os.removedirs().
>
> >>> os.makedirs('root/die-die-die/empty/empty/empty')
> >>> os.listdir('root')
>
> ['keep', 'die-die-die']
> >>> os.removedirs('root/die-die-die/empty/empty/empty')
> >>> os.listdir('root')
>
> ['keep']
>
> > def isDirEmpty( path ):
> >     if not os.path.isdir( path ):
> >         return False
>
> >     contents = os.listdir( path )
>
> >     if len(contents) == 0:
> >         return True
>
> >     return False
>
> That can be simplified to:
>
> # untested
> def isDirEmpty(path):
>     return os.path.isdir(path) and not len(os.listdir(path))
>
> > def RecurseTree( path ):
> >     if not os.path.isdir( path ):
> >         return False
>
> What if it is a symbolic link to a directory?
>
> >     contents = os.listdir( path )
>
> >     if len(contents) == 0:
> >         print "Deleting Empty Dir '%s'" % (path,) #shutil.rmtree(path)
>
> Why do you go to the trouble of defining isDirEmpty() and then not use it?
>
> >     else:
> >         for item in contents:
> >             investigate = "%s\\%s" % (path, item)
> >             if os.path.isdir(investigate):
> >                 RecurseTree( investigate )
>
> As soon as you start recursively walking over directories, you should use
> os.walk. It will almost certainly do what you want.
>
> > if __name__ == '__main__':
> >     RecurseTree( r"c:\temp" )
>
> > But I'm not sure what the max recursion depth is in python?
>
> By default in my version:
>
> >>> sys.getrecursionlimit()
>
> 1000
>
> but it can be changed.
>
> > Plus I think this could be more efficient.
>
> Probably, but why do you care? The file I/O probably will take 99% of the
> time, and I doubt you can improve that.
>
> Of course I could be wrong, so profile, profile, profile, and find out
> where the time really is being spent.
>
> --
> Steven

--
http://mail.python.org/mailman/listinfo/python-list


Re: Thoughts on language-level configuration support?

2009-03-31 Thread jfager
On Mar 31, 2:54 am, David Stanek  wrote:
> On Mon, Mar 30, 2009 at 9:40 AM, jfager  wrote:
> > I've written a short post on including support for configuration down
> > at the language level, including a small preliminary half-functional
> > example of what this might look like in Python, available at
> >http://jasonfager.com/?p=440.
>
> > The basic idea is that a language could offer syntactic support for
> > declaring configurable points in the program.  The language system
> > would then offer an api to allow the end user to discover a programs
> > configuration service, as well as a general api for providing
> > configuration values.
>
> What value does this have over simply having a configuration file.

"Simply having a configuration file" - okay.  What format?  What if
the end user wants to keep their configuration info in LDAP?  Did the
library I'm including make the same decisions, or do I have to do some
contortions to adapt?  Didn't I write basically this  exact same code
for the last umpteen projects I worked on, just schlepping around
config objects?


> In your load testing application you could have easily checked for the
> settings in a config object.

Not really easily, no.  It would have been repeated boilerplate across
many different test cases (actually, that's what we started with and
refactored away), instead of a simple declaration that delegated the
checking to the test runner.


> I think that the discover-ability of
> configuration can be handled with example configs and documentation.

Who's keeping that up to date?  Who's making sure it stays in sync
with the code?  Why even bother, if you could get it automatically
from the code?


> --
> David
> blog:http://www.traceback.org
> twitter:http://twitter.com/dstanek

--
http://mail.python.org/mailman/listinfo/python-list


Re: usb mass storage device detection

2009-03-31 Thread Tim Golden

prakash jp wrote:

Hi all,

I am interested in detecting usb mass storage devices, are there any scripts
in python to do so? Thanks in advance.


What? Detecting their presence in your pocket? :)

Which operating system are you using? It tends to
make a difference: these things are quite OS-specific.
If it's Windows, WMI is usually a good bet, altho' it
does depend on exactly what you're trying to do.

TJG
--
http://mail.python.org/mailman/listinfo/python-list


Re: unpack the source tarball on Windows

2009-03-31 Thread Michael Torrie
Mensanator wrote:
> Thanks. Still had to untar the ball, but I also downloaded a
> trial version of Winzip which took care of that.

Right.  The proper command is:

tar -xvjf tarball.tar.bz2

The recommended GUI for all things archival on Windows I think has to be
7zip.  And it's not cursed shareware either.  Open source.

--
http://mail.python.org/mailman/listinfo/python-list


Re: Thoughts on language-level configuration support?

2009-03-31 Thread CTO
On the one hand, I can 110% see why you want to reduce boilerplate
code and provide a discoverable, common mechanism for automating the
two and three-quarters parsers that a lot of applications have to
write to handle a config file, CLI, and/or registry values, but why
introduce a syntax for it? A module would do just fine in terms of
function. Are you worried about the look of it, or do you want to make
a change to make it seem more "mainstream"? I don't see the
rationale.

--
http://mail.python.org/mailman/listinfo/python-list


Re: Thoughts on language-level configuration support?

2009-03-31 Thread Steven D'Aprano
On Mon, 30 Mar 2009 23:06:50 -0700, jfager wrote:

> On Mar 30, 9:31 pm, "Rhodri James"  wrote:
...
>> This would be a interesting idea, but ultimately no more than a veneer
>> over the current set of configuration possibilities.  Quite how such a
>> system would tell whether to get configuration data from command line
>> parameters, a config file somewhere, or just pull bytes off the Z-wave
>>  from Mars, I'm not at all clear.
> 
> Not a veneer over; a foundation under.  As I acknowledged in my blog
> post, this would require a small bit of bootstrapping, to configure
> which end-user configuration system was used.  But this would simply
> point to an adapter, that would map from the desired configuration
> interface into the canonical standard api provided by the language
> system.

Let's talk about a practical example. The ls command has the API that the 
"-l" switch means "show me long options". You're suggesting that users 
should not interact with the user-interface, but directly with the 
implementation. You are, essentially, assuming that there is a one-to-one 
correspondence between data that the user inputs and variables in the 
implementation, and that users can understand the implementation.

But that's not necessarily the case. The switch -l might affect a dozen 
different variables. So instead of the user needing to learn *one* 
command line option (or click on one checkbox in a GUI, or whatever), you 
expect him to reason "I want to see a long display of my files, so I need 
to set the value of line_width to 47, date_style to 3, show_perms to 
True, and format_into_columns to -1".

I don't think that's going to fly. Separation of interface and 
implementation is a Good Thing.


Or consider another scenario:

def ls():
    if '-l' in sys.argv:
        file_iterator = SimpleFilenameWriter()
    else:
        file_iterator = DetailedFilenameWriter()
    #...


Under your proposal, the user would somehow have to create the 
appropriate instance and feed it to your program. That's simply not 
practical! So you still need some sort of proxy variable, virtually 
identically as you do now:


conf make_long_list = True

def ls(optionlist):
    if make_long_list:
        file_iterator = SimpleFilenameWriter()
    else:
        file_iterator = DetailedFilenameWriter()
    #...



> The problem with the current set of configuration possibilities is that
> there's nothing really constant between them, unless the programmer
> explicitly codes it, even though they're basically accomplishing the
> same thing.  There's nothing amazingly special about this proposal, it's
> just saying:  that basic thing that we do in a lot of different ways,
> let's make as much of that as possible standard.

Over-generalization actually makes things more complicated. I think 
you're over-generalizing.


...
>> You've just specified a different way in which you have to do this, one
>> that's a good deal less visible in the code
> 
> Why would it be less visible?  If it's syntax, you would know exactly
> where it was just by looking.

It could be *anywhere* in your project. It could be in some random 
library that you imported, and the user discovers that they can modify 
variables you didn't even know existed.


> Actually, you get the best of both worlds.  You get to see clearly in
> the code how the configured values are actually used, and you get the
> gathered summary of configurable values from the "discoverability"
> implementation.

I've learned to distrust "discovery", ever since I learned that my 
doctests, which were supposedly all running without error, in fact hadn't 
been discovered at all, and not one single test was running. So even if 
you could get this working, I'd be luke-warm on the idea.



-- 
Steven
--
http://mail.python.org/mailman/listinfo/python-list


Re: Creating huge data in very less time.

2009-03-31 Thread John Machin
On Mar 31, 4:44 pm, "venutaurus...@gmail.com"
 wrote:
> Hello all,
>             I've a requirement where I need to create around 1000
> files under a given folder with each file size of around 1GB. The
> constraints here are each file should have random data and no two
> files should be unique even if I run the same script multiple times.
> Moreover the filenames should also be unique every time I run the
> script.One possibility is that we can use Unix time format for the
> file   names with some extensions. Can this be done within few minutes
> of time.

You should be able to write a simple script to create 1000 files with
unique names and each containing 1GB of doesn't-matter-what data and
find out for yourself how long that takes. If it takes much longer
than a "few" (how many is a few?) minutes, then it's pointless
worrying about other constraints like "no two files should be
unique" (whatever that means) and "random data" (why do you want to
create 1000GB of random data??) because imposing them certainly won't
make it run faster.
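
Something along these lines would do as a starting point for that timing
experiment (untested sketch; the file names and sizes are just
placeholders):

import time

CHUNK = 'x' * (1024 * 1024)          # 1 MB of don't-care data
t0 = time.time()
for i in xrange(1000):
    f = open('junk_%d_%04d.dat' % (int(t0), i), 'wb')
    for _ in xrange(1024):           # 1024 x 1 MB ~= 1 GB per file
        f.write(CHUNK)
    f.close()
print time.time() - t0, "seconds"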

> Is it possble only using threads or can be done in any other
> way. This has to be done in Windows.
>
> Please mail back for any queries you may have,
>

This looks VERY SIMILAR to a question you asked about 12 days ago ...

--
http://mail.python.org/mailman/listinfo/python-list


Re: Relative Imports, why the hell is it so hard?

2009-03-31 Thread Kay Schluehr
On 31 Mrz., 04:55, "Gabriel Genellina"  wrote:
> En Mon, 30 Mar 2009 21:15:59 -0300, Aahz  escribió:
>
> > In article ,
> > Gabriel Genellina  wrote:
>
> >> I'd recommend the oposite - use relative (intra-package) imports when
> >> possible. Explicit is better than implicit - and starting with 2.7 -when
> >> "absolute" import semantics will be enabled by default- you'll *have* to
> >> use relative imports inside a package, or fail.
>
> > Really?  I thought you would still be able to use absolute imports; you
> > just won't be able to use implied relative imports instead of explicit
> > relative imports.
>
> You're right, I put it wrongly. To make things clear, inside a package
> "foo" accessible thru sys.path, containing a.py and b.py:
>
> site-packages/
>foo/
>  a.py
>  b.py
>  __init__.py
>
> Currently, the "a" module can import "b" this way:
>
>  from foo import b
> import foo.b
>  from . import b
> import b
>
> When implicit relative imports are disabled ("from __future__ import
> absolute_import", or after 2.7 supposedly) the last one won't find b.py
> anymore.
> (I hope I put it right this time).
>
> --
> Gabriel Genellina

So it even breaks more code which is great ;)

Do you know of any near or far past attempts to re-design the import
system from the ground up? I do not mean a rather faithful and
accessible reconstruction such as Brett Cannons work but a radical re-
design which starts with a domain model and does not end with Loaders,
Importers and Finders which are actually services that pretend to be
objects.

Kay
--
http://mail.python.org/mailman/listinfo/python-list


Re: Thoughts on language-level configuration support?

2009-03-31 Thread jfager
On Mar 31, 3:08 am, rustom  wrote:
> I am not sure I understand your solution.

Any questions, please ask.


> I certainly think that the
> problem is big, very much bigger than is appreciated.
> Think of the hoopla in the RoR world about convention-over-
> configuration.

Certainly, it's a big problem.  I'm not saying this will solve it
completely, or anything like that.  I just want to identify the most
common, basic needs that can be solved once and provided as a service
to the programmer and the end user, and to get rid of some of the
repetitive work around dealing with configuration.



> On the other hand I feel that emacs is becoming messier and messier
> because it has taken up something like your idea.  Originally there
> was only setq (lisp for assignment).  Now there is the whole customize-
> mess.  Then again I guess its not the idea that is wrong but its
> current state of implementation.  To elaborate on this mess would be
> too OT for this list.  Nevertheless its a good starting point for the
> kind of thing you are talking of.

I don't think emacs is a great parallel, for a couple of reasons.
First, the customize system seems weird and out of place in a world
where the entrenched configuration mechanism is 'program it directly
in your .emacs file' - by the time you know enough emacs to be able to
improve the customize interface, you don't want to use it anymore.
Also, the lack of namespacing in elisp means there's not a great way
to automatically name and organize these points, so again it falls to
the individual programmers to decide, and they inevitably decide on
something slightly different from each other.
--
http://mail.python.org/mailman/listinfo/python-list


Any other web mail accessor like libgmail?

2009-03-31 Thread Ken
Are there other python wrappers such as libhotmail or libyahoomail?

curiously ask. :p
--
http://mail.python.org/mailman/listinfo/python-list


Re: Windows command line not displaying print commands

2009-03-31 Thread John Machin
On Mar 31, 11:42 am, Terry Reedy  wrote:
> JonathanB wrote:
> > Ok, I'm sure this is really simple, but I cannot for the life of me
> > get any print statements from any of my python scripts to actually
> > print when I call them from the windows command line. What am I doing
> > wrong?
>
> > hello.py:
> > print "Hello World!"
>
> > command line:
> > E:\Python\dev>python hello.py
>
> > E:\Python\dev>
>
> > I'm using Python 2.6.1
>
> I suspect that it opens the window, prints to it, and closes it in a
> blink of an eye.

What window? He's *already* in a Command Prompt window, he's typing a
command "python hello.py", and getting only a blank line and another
prompt.


>  If so, adding an input prompt after the print will
> stop the window from closing until you respond to the prompt.
>
> a = input("hit return to close")
>
> tjr

--
http://mail.python.org/mailman/listinfo/python-list


Re: Thoughts on language-level configuration support?

2009-03-31 Thread jfager
On Mar 31, 3:30 am, CTO  wrote:
> On the one hand, I can 110% see why you want to reduce boilerplate
> code and provide a discoverable, common mechanism for automating the
> two and three-quarters parsers that a lot of applications have to
> write to handle a config file, CLI, and/or registry values, but why
> introduce a syntax for it? A module would do just fine in terms of
> function. Are you worried about the look of it, or do you want to make
> a change to make it seem more "mainstream"? I don't see the
> rationale.

Syntax is kind of a rubbery term.  I just mean that there should be a
clear and easy way to do it, that it should be considered a basic
service, and that if the best way to satisfy all the goals is to
integrate it directly into the language, that shouldn't be shied away
from.

The example that I have on my blog post, I consider that 'syntax',
even though it's implemented as a function, mainly just because it
digs into the bytecode and modifies the normal way a function is
evaluated (the function's value is determined by where the output
would go).
--
http://mail.python.org/mailman/listinfo/python-list


Re: Cyclic GC rules for subtyped objects with tp_dictoffset

2009-03-31 Thread Hrvoje Niksic
[ Questions such as this might be better suited for the capi-sig list,
  http://mail.python.org/mailman/listinfo/capi-sig ]

BChess  writes:

> I'm writing a new PyTypeObject that is base type, supports cyclic
> GC, and has a tp_dictoffset.  If my type is sub-typed by a python
> class, what exactly are the rules for how I'm supposed to treat my
> PyDict object with regards to cyclic GC?  Do I still visit it in my
> traverse () function if I'm subtyped?  Do I decrement the refcount
> upon dealloc?  By the documentation, I'm assuming I should always be
> using _PyObject_GetDictPtr() to be accessing the dictionary, which I
> do.  But visiting the dictionary in traverse() in the case it's
> subtyped results in a crash in weakrefobject.c.  I'm using Python
> 2.5.

First off, if your class is intended only as a base class, are you
aware that simply inheriting from a dictless class adds a dict
automatically?  For example, the base "object" type has no dict, but
inheriting from it automatically adds one (unless you override that
using __slots__).  Having said that, I'll assume that the base class
is usable on its own and its direct instances need to have a dict as
well.

I'm not sure if this kind of detail is explicitly documented, but as
far as the implementation goes, the answer to your question is in
Objects/typeobject.c:subtype_traverse.  That function gets called to
traverse instances of heap types (python subclasses of built-in
classes such as yours).  It contains code like this:

     if (type->tp_dictoffset != base->tp_dictoffset) {
         PyObject **dictptr = _PyObject_GetDictPtr(self);
         if (dictptr && *dictptr)
             Py_VISIT(*dictptr);
     }

According to this, the base class is responsible for visiting its dict
in its tp_traverse, and the subtype only visits the dict it added
(which is why its location differs).  Note that visiting an object
twice still shouldn't cause a crash; objects may be and are visited an
arbitrary number of times, and it's up to the GC to ignore those it
has already seen.  So it's possible that you have a bug elsewhere in
the code.

As far as the decrementing goes, the rule of thumb is: if you created
it, you get to decref it.  subtype_dealloc contains very similar
logic:

    /* If we added a dict, DECREF it */
    if (type->tp_dictoffset && !base->tp_dictoffset) {
        PyObject **dictptr = _PyObject_GetDictPtr(self);
        if (dictptr != NULL) {
            PyObject *dict = *dictptr;
            if (dict != NULL) {
                Py_DECREF(dict);
                *dictptr = NULL;
            }
        }
    }

So, if the subtype added a dict, it was responsible for creating it
and it will decref it.  If the dict was created by you, it's up to you
to dispose of it.
--
http://mail.python.org/mailman/listinfo/python-list


Re: Creating huge data in very less time.

2009-03-31 Thread Steven D'Aprano
On Mon, 30 Mar 2009 22:44:41 -0700, venutaurus...@gmail.com wrote:

> Hello all,
> I've a requirement where I need to create around 1000
> files under a given folder with each file size of around 1GB. The
> constraints here are each file should have random data and no two files
> should be unique even if I run the same script multiple times. 

I don't understand what you mean. "No two files should be unique" means 
literally that only *one* file is unique, the others are copies of each 
other.

Do you mean that no two files should be the same?


> Moreover
> the filenames should also be unique every time I run the script. One
> possibility is that we can use Unix time format for the file   names
> with some extensions. 

That's easy. Start a counter at 0, and every time you create a new file, 
name the file by that counter, then increase the counter by one.


> Can this be done within few minutes of time. Is it
> possble only using threads or can be done in any other way. This has to
> be done in Windows.

Is it possible? Sure. In a couple of minutes? I doubt it. 1000 files of 
1GB each means you are writing 1TB of data to a HDD. The fastest HDDs can 
reach about 125 MB per second under ideal circumstances, so that will 
take at least 8 seconds per 1GB file or 8000 seconds in total. If you try 
to write them all in parallel, you'll probably just make the HDD waste 
time seeking backwards and forwards from one place to another.
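
A quick back-of-envelope check of that estimate, in plain Python:

total_bytes = 1000 * 1024**3    # 1000 files of 1 GB each
rate = 125 * 1024**2            # ~125 MB/s, ideal sustained write
print total_bytes // rate       # -> 8192 seconds, i.e. well over two hours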



-- 
Steven

--
http://mail.python.org/mailman/listinfo/python-list


Re: An inheritance question: getting the name of the "one up" class

2009-03-31 Thread Steven D'Aprano
On Tue, 31 Mar 2009 01:29:50 -0300, Gabriel Genellina wrote:

>> Oh, and while the gurus are at it, what would be the advantage (if any)
>> of changing, say
>>Primate.__init__(self)
>> to
>> super(Human, self).__init__()
> 
> None, if you use single inheritance everywhere.


But there's no disadvantage to using super with single inheritance (and 
new-style classes).



> super is very tricky; see:
> http://fuhm.net/super-harmful/
> and
> http://www.artima.com/weblogs/viewpost.jsp?thread=236275


As I understand it, the trickiness only comes about when you have diamond 
diagrams in your MRO.



-- 
Steven


--
http://mail.python.org/mailman/listinfo/python-list


Re: Windows command line not displaying print commands

2009-03-31 Thread John Machin
On Mar 31, 9:57 am, JonathanB  wrote:
> On Mar 30, 6:28 pm, John Machin  wrote:
>
> > On Mar 31, 8:37 am, Irmen de Jong  wrote:
> > > Does just typing:
>
> > >    python
>
> Yes, just typing python takes me to my interactive prompt
>
> > > Or do you have a module in your E:\Python\dev directory called 'os', 
> > > 'sys' or something
> > > else that may clobber one of the default library modules.
>
> The only module in the directory is called pyfind.py

So what do you classify hello.py as? A script?

Please tell us what other files are in the directory.


>
>
>
> > or perhaps there's a file named python.bat that does nothing.
>
> > What directory is Python installed in? What does your Windows PATH
> > look like? Is this your very first attempt to do anything at all with
> > Python or have you managed to get any output from a Python script
> > before? If the latter, what have you changed in your environment? Does
> > E: refer to a removable disk?
>
> Unfortunately, this problem is on my work computer, so I'm not in
> front of it right now. I've done the development on this in
> PortablePython, but I have python installed in C:/Python25 and that
> should be in my path (I went though and added it). I've never run a
> script that output to the command line before, only django apps.
> Django will output stuff though, which makes me wonder if I've somehow
> borked my stdout in the script. Not sure how I could have done that,
> but I'll post the script I've written in the next post just in case
> I'm somehow messing up the calls (although "print var" seems fairly
> user-proof...). E: does refer to a removable disc.

If hello.py doesn't print,  then the problem is unlikely to be in your
big script.

I suspect that your best approach would be to (a) ensure that you have
the latest release of Portable Python [there was one in the last few
days] and (b) ask the author for help.

Other things to try that might diagnose where the problem really is:
just follow my example below.


| C:\junk>python -c "print 9876"
| 9876
|
| C:\junk>python
| Python 2.5.4 (r254:67916, Dec 23 2008, 15:10:54) [MSC v.1310 32 bit
(Intel)] on
| win32
| Type "help", "copyright", "credits" or "license" for more
information.
| >>> print "hello"
| hello
| >>> print 9876
| 9876
| >>> ^Z
|
|
| C:\junk>copy con test1.py
| print 9876
| ^Z
| 1 file(s) copied.
|
| C:\junk>python test1.py
| 9876
|
| C:\junk>copy con test2.py
| 1 / 0
| ^Z
| 1 file(s) copied.
|
| C:\junk>python test2.py
| Traceback (most recent call last):
|   File "test2.py", line 1, in 
| 1 / 0
| ZeroDivisionError: integer division or modulo by zero
|

HTH,
John
--
http://mail.python.org/mailman/listinfo/python-list


Re: Ordered Sets

2009-03-31 Thread pataphor
On Mon, 30 Mar 2009 19:57:39 -0700 (PDT)
Alex_Gaynor  wrote:

> I really like the Ordered Set class(I've been thinking about one ever
> since ordered dict hit the std lib), is there any argument against
> adding one to the collections module?  I'd be willing to write a PEP
> up for it.

Suppose the implementation would use a circular linked list. Then the
iteration could start from any point, the thing is symmetric after all.
But this would also mean we could add items to the left of that new
starting point, since that would now be the 'end' of the circle. This
is something different from merely remembering insertion order. How do
you feel about that?

P.
--
http://mail.python.org/mailman/listinfo/python-list


Re: An inheritance question: getting the name of the "one up" class

2009-03-31 Thread Michele Simionato
On Mar 31, 5:13 am, "Nick"  wrote:
> Oh, and while the gurus are at it, what would be the advantage (if any) of
> changing, say
>    Primate.__init__(self)
> to
>     super(Human, self).__init__()

What others said.  In Python 3.0 you would have a bigger advantage,
since you can just write

super().__init__()

without repetition.
I normally use super, because it is the recommended solution by Guido.
This is not to say that I am perfectly happy, but my gripe is more
against multiple inheritance than against super itself.
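
A minimal sketch of the difference (Primate here is just a stand-in base
class, not anybody's real code):

class Primate(object):
    def __init__(self):
        print "Primate.__init__"

class Human(Primate):
    def __init__(self):
        # Python 2.x: the class name has to be repeated
        super(Human, self).__init__()
        # in Python 3.0 the same call can simply be: super().__init__()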
--
http://mail.python.org/mailman/listinfo/python-list


Re: Thoughts on language-level configuration support?

2009-03-31 Thread jfager
On Mar 31, 3:40 am, Steven D'Aprano
 wrote:
> On Mon, 30 Mar 2009 23:06:50 -0700, jfager wrote:
> > On Mar 30, 9:31 pm, "Rhodri James"  wrote:
> ...
> >> This would be a interesting idea, but ultimately no more than a veneer
> >> over the current set of configuration possibilities.  Quite how such a
> >> system would tell whether to get configuration data from command line
> >> parameters, a config file somewhere, or just pull bytes off the Z-wave
> >>  from Mars, I'm not at all clear.
>
> > Not a veneer over; a foundation under.  As I acknowledged in my blog
> > post, this would require a small bit of bootstrapping, to configure
> > which end-user configuration system was used.  But this would simply
> > point to an adapter, that would map from the desired configuration
> > interface into the canonical standard api provided by the language
> > system.
>
> Let's talk about a practical example. The ls command has the API that the
> "-l" switch means "show me long options". You're suggesting that users
> should not interact with the user-interface, but directly with the
> implementation.  You are, essentially, assuming that there is a one-to-one
> correspondence between data that the user inputs and variables in the
> implementation, and that users can understand the implementation.

No, not at all.  I'm saying that the programmer shouldn't have to care
what the end-user's interface is, not that there's no end-user interface at
all.  For the program 'ls', why should I, the programmer, care at all
how the end user actually specifies that they want 'long options'?  I
might think or know that a command line argument is the 'best' way,
but why should I even worry about it?  I should just tell them that
option exists, and then they can choose how to give me a value for it
in whatever way they please.



> But that's not necessarily the case. The switch -l might affect a dozen
> different variables. So instead of the user needing to learn *one*
> command line option (or click on one checkbox in a GUI, or whatever), you
> expect him to reason "I want to see a long display of my files, so I need
> to set the value of line_width to 47, date_style to 3, show_perms to
> True, and format_into_columns to -1".

Not at all, not even a little bit.  Why wouldn't you just say some
variable 'long-lines' is configurable, and then use that to do all the
work you would have done by manually parsing the command-line
arguments for the -l flag?  You, the programmer, still have complete
control over what is or isn't visible to the end user, it's not like
I'm advocating that every variable in the system automagically become
end-user configurable without programmer input.


> I don't think that's going to fly. Separation of interface and
> implementation is a Good Thing.

Agreed, that's the whole point of this.  Use the interface provided by
the language, then let the end user provide their own implementation
(of course, there will be basic ones provided out of the box) of how
to specify their configuration values.


>
> Or consider another scenario:
>
> def ls():
>     if '-l' in sys.argv:
>         file_iterator = SimpleFilenameWriter()
>     else:
>         file_iterator = DetailedFilenameWriter()
>     #...
>
> Under your proposal, the user would somehow have to create the
> appropriate instance and feed it to your program. That's simply not
> practical! So you still need some sort of proxy variable, virtually
> identically as you do now:
>
> conf make_long_list = True
>
> def ls(optionlist):
>     if make_long_list:
>         file_iterator = SimpleFilenameWriter()
>     else:
>         file_iterator = DetailedFilenameWriter()
>     #...

This second one is what I intended (no optionlist param needed,
though).  I'm curious - why do you think this is a bad thing?  One toy
example looks pretty similar to how you would do things now, and
you're ready to throw out the whole concept?  Notice that you did
actually gain something that I think is significant - you no longer
have any hardcoded reference to the fact that make_long_list is
defined as a command line parameter.



>
> > The problem with the current set of configuration possibilities is that
> > there's nothing really constant between them, unless the programmer
> > explicitly codes it, even though they're basically accomplishing the
> > same thing.  There's nothing amazingly special about this proposal, it's
> > just saying:  that basic thing that we do in a lot of different ways,
> > let's make as much of that as possible standard.
>
> Over-generalization actually makes things more complicated. I think
> you're over-generalizing.

Keep throwing out examples of where this makes things more
complicated, it's good to work through all the concerns.



> >> You've just specified a different way in which you have to do this, one
> >> that's a good deal less visible in the code
>
> > Why would it be less visible?  If it's syntax, you would know exactly
> > where it was just by looking.
>
> 

Re: Thoughts on language-level configuration support?

2009-03-31 Thread Kay Schluehr
On 30 Mrz., 15:40, jfager  wrote:
> I've written a short post on including support for configuration down
> at the language level, including a small preliminary half-functional
> example of what this might look like in Python, available 
> athttp://jasonfager.com/?p=440.
>
> The basic idea is that a language could offer syntactic support for
> declaring configurable points in the program.  The language system
> would then offer an api to allow the end user to discover a programs
> configuration service, as well as a general api for providing
> configuration values.
>
> The included example implements the first bit and hints at the third,
> defining a function that looks up what variable its output will be
> assigned to and tries to find a corresponding value from a
> configuration source.  It's very preliminary, but I hope it gives a
> flavor of the general idea.
>
> Any thoughts or feedback would be greatly appreciated.

The problem with your idea is that those declared declaration points
can be overlooked no matter how much syntactical support is added.
Let's say a resource file is loaded and there are a few of the config-
properties declared in modules you have written. Now an object wants
to access a resource defined in the file and fails because the
resource providing property could not be found since the property
defining module wasn't loaded yet and the property couldn't register
itself. That's why things are centralized as in optparse and the
workflow is  designed upfront or things are implemented locally and
individual units have to take care of their own.
--
http://mail.python.org/mailman/listinfo/python-list


Re: Thoughts on language-level configuration support?

2009-03-31 Thread David Stanek
On Tue, Mar 31, 2009 at 3:19 AM, jfager  wrote:
>
> "Simply having a configuration file" - okay.  What format?  What if
> the end user wants to keep their configuration info in LDAP?  Did the
> library I'm including make the same decisions, or do I have to do some
> contortions to adapt?  Didn't I write basically this  exact same code
> for the last umpteen projects I worked on, just schlepping around
> config objects?
>

Ah I see your point here. During PyCon I was trying to add the ability
to inject configuration into objects that are constructed by the
snake-guice framework. The code is not yet in the Subversion
repository, but I did brain dump a little documentation[0]. It is
still very much a work in progress.

0. http://code.google.com/p/snake-guice/wiki/InjectingConfiguration

-- 
David
blog: http://www.traceback.org
twitter: http://twitter.com/dstanek
--
http://mail.python.org/mailman/listinfo/python-list


Re: PyFits for Windows?

2009-03-31 Thread W. eWatson

W. eWatson wrote:

W. eWatson wrote:
It looks like PyFits downloads are for Linux. Isn't there anything 
available for Win (xp)?
I'm now on the scipy mail list. Things look hopeful, according to the 
first respondent, to meet my criteria mentioned in another sub-thread to 
this one:

"I'm hoping the use of this library will be relative simple for my
purposes, which are basically to write an image to a fits file with a
somewhat simple header, which might include lat/long, date, image size,
date-time, and a comment."

Apparently, the first chapter or several pages or so of a manual 
distributed with PyFits is enough.


The link I mentioned in another sub-thread here about the U of Wash. is 



--
   W. eWatson

 (121.015 Deg. W, 39.262 Deg. N) GMT-8 hr std. time)
  Obz Site:  39° 15' 7" N, 121° 2' 32" W, 2700 feet

Web Page: 

--
http://mail.python.org/mailman/listinfo/python-list


Re: Ordered Sets

2009-03-31 Thread pataphor
On Tue, 31 Mar 2009 10:33:26 +0200
pataphor  wrote:

> On Mon, 30 Mar 2009 19:57:39 -0700 (PDT)
> Alex_Gaynor  wrote:
> 
> > I really like the Ordered Set class(I've been thinking about one
> > ever since ordered dict hit the std lib), is there any argument
> > against adding one to the collections module?  I'd be willing to
> > write a PEP up for it.
> 
> Suppose the implementation would use a circular linked list. Then the
> iteration could start from any point, the thing is symmetric after
> all. But this would also mean we could add items to the left of that
> new starting point, since that would now be the 'end' of the circle.
> This is something different from merely remembering insertion order.
> How do you feel about that?

And in case that didn't confuse you enough, how about this method?

def move(self,key1,key2):
#self ==> key1,(key2 ... end), (key1+1... key2-1)
links = self.links
if set([key1,key2]) and self :
start = self.start
a = links[key1][1]
b = links[key2][0]
c  = links[start][0]
links[key1][1] = key2
links[key2][0] = key1
links[a][0] = c
links[c][1] = a
links[b][1] = start
links[start][0] = b

This takes [key2:]  (isn't pseudo slice notation wonderful?) and
inserts it after key1.

for example:

R = OrderedSet(range(10))
print(list(R))
R.move(3,7)
print(list(R))

gives:

[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
[0, 1, 2, 3, 7, 8, 9, 4, 5, 6]

All in O(1) of course. 

P.
--
http://mail.python.org/mailman/listinfo/python-list


Re: Creating huge data in very less time.

2009-03-31 Thread andrea
On 31 Mar, 12:14, "venutaurus...@gmail.com" 
wrote:
>
> That time is reasonable. The randomness should be in such a way that
> MD5 checksum of no two files should be the same.The main reason for
> having such a huge data is for doing stress testing of our product.


If randomness is not necessary (as I understood), you can just create
one single file and then modify one bit of it iteratively, 1000
times.
It's enough to make the checksum change.
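
Something like this untested sketch, say (it assumes a 1GB file called
template.bin has already been created once):

import shutil

for i in xrange(1000):
    name = 'copy_%04d.bin' % i
    shutil.copyfile('template.bin', name)
    f = open(name, 'r+b')
    f.write('%d' % i)    # overwrite a few leading bytes so the MD5 differs
    f.close()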

Is there a way to create a file that big without actually writing
anything in python (just give me the garbage that is already on the
disk)?
--
http://mail.python.org/mailman/listinfo/python-list


Re: Creating huge data in very less time.

2009-03-31 Thread venutaurus...@gmail.com
On Mar 31, 1:15 pm, Steven D'Aprano
 wrote:
> On Mon, 30 Mar 2009 22:44:41 -0700, venutaurus...@gmail.com wrote:
> > Hello all,
> >             I've a requirement where I need to create around 1000
> > files under a given folder with each file size of around 1GB. The
> > constraints here are each file should have random data and no two files
> > should be unique even if I run the same script multiple times.
>
> I don't understand what you mean. "No two files should be unique" means
> literally that only *one* file is unique, the others are copies of each
> other.
>
> Do you mean that no two files should be the same?
>
> > Moreover
> > the filenames should also be unique every time I run the script. One
> > possibility is that we can use Unix time format for the file   names
> > with some extensions.
>
> That's easy. Start a counter at 0, and every time you create a new file,
> name the file by that counter, then increase the counter by one.
>
> > Can this be done within few minutes of time. Is it
> > possble only using threads or can be done in any other way. This has to
> > be done in Windows.
>
> Is it possible? Sure. In a couple of minutes? I doubt it. 1000 files of
> 1GB each means you are writing 1TB of data to a HDD. The fastest HDDs can
> reach about 125 MB per second under ideal circumstances, so that will
> take at least 8 seconds per 1GB file or 8000 seconds in total. If you try
> to write them all in parallel, you'll probably just make the HDD waste
> time seeking backwards and forwards from one place to another.
>
> --
> Steven

That time is reasonable. The randomness should be in such a way that
MD5 checksum of no two files should be the same.The main reason for
having such a huge data is for doing stress testing of our product.
--
http://mail.python.org/mailman/listinfo/python-list


Authorize.net integration problem

2009-03-31 Thread Lakshman
I am trying to integrate Authorize.net SIM API into django views.

I am facing a problem in the fingerprint generation. I am repeatedly
getting that the fingerprint generated doesn't match the one the
server generates.

I have generated the md5 hash with the key provided as specified in
the SIM documentation.

Here is the code:

params = {
'x_login' : '4ffrBT36La',
'x_amount' : '100.00',
'x_show_form' : 'PAYMENT_FORM',
'x_type' : 'AUTH_CAPTURE',
'x_method' : 'CC',
'x_fp_sequence' : '123',
'x_version' : '3.1',
'x_relay_response' : 'FALSE',
}
params['x_fp_timestamp'] = int(time.time())

msg = '^'.join([params['x_login'],
   str(params['x_fp_sequence']),
   str(params['x_fp_timestamp']),
   str(params['x_amount'])
   ])+'^'

fingerprint = hmac.new('9LyEU8t87h9Hj49Y',msg).hexdigest()


I would be glad if some one that has dealt with this earlier, points
out what the glitch is. Thanks in advance.
--
http://mail.python.org/mailman/listinfo/python-list


How to pass one HTML values to another HTML

2009-03-31 Thread Kalyan
hi
   Using Python and Google App Engine, how can I pass values from one HTML
page to another?  I am very new to Python programming.

Example :
 In one HTML page I enter Name and Address fields and submit the page; at that
point I want to see those two values in another HTML page.  Please reply.

Thanks in advance

-- 
Regards
kalyan
--
http://mail.python.org/mailman/listinfo/python-list


[ANN] Data Plotting Library DISLIN 9.5

2009-03-31 Thread Helmut Michels

Dear Python users,

I am pleased to announce version 9.5 of the data plotting software
DISLIN.

DISLIN is a high-level and easy to use plotting library for
displaying data as curves, bar graphs, pie charts, 3D-colour plots,
surfaces, contours and maps. Several output formats are supported
such as X11, VGA, PostScript, PDF, CGM, WMF, HPGL, TIFF, GIF, PNG,
BMP and SVG.

The software is available for the most C, Fortran 77 and Fortran 90/95
compilers. Plotting extensions for the interpreting languages Perl,
Python and Java are also supported.

DISLIN distributions and manuals in PDF, PostScript and HTML format
are available from the DISLIN home page

 http://www.dislin.de

and via FTP from the server

 ftp://ftp.gwdg.de/pub/grafik/dislin

All DISLIN distributions are free for non-commercial use. Licenses
for commercial use are available from the site http://www.dislin.de.

 ---
  Helmut Michels
  Max Planck Institute for
  Solar System Research   Phone: +49 5556 979-334
  Max-Planck-Str. 2   Fax  : +49 5556 979-240
  D-37191 Katlenburg-Lindau   Mail : mich...@mps.mpg.de
--
http://mail.python.org/mailman/listinfo/python-list


Re: Creating huge data in very less time.

2009-03-31 Thread Tim Chase

venutaurus...@gmail.com wrote:

On Mar 31, 1:15 pm, Steven D'Aprano

The fastest HDDs can reach about 125 MB per second under
ideal circumstances, so that will take at least 8 seconds
per 1GB file or 8000 seconds in total.


That time is reasonable. 


You did catch the bit about "the *fastest* HDDs" (my emphasis). 
Unless you've got some massive RAID or Gig-E/Fiberchanel SAN, you 
likely don't have these ideal conditions.  Additionally, I've 
seen "125MB/sec" as the read speeds -- sustained write speeds are 
often lower.  Under more real-world testing, you'll likely get 
throughput closer to 30-70MB/sec.  Call that roughly half the 
throughput, and you're up to 16,000 seconds, or about 4.5hr.  A 
far cry from the "few minutes of time" you first mentioned...


And this doesn't take into consideration the OS overhead of the 
filesystem type.  Some filesystem types are optimized for large 
sequential access, while others work better with smaller files. 
You then have things like OS permission overhead, directory-path 
overhead, and other disk I/O going on at the same time.


-tkc





--
http://mail.python.org/mailman/listinfo/python-list


First project in python, want someone to hold my hand for 2 hours for $100

2009-03-31 Thread googleaccount
Hey, I have to generate this really big matrix from some data. It's
extremely straightforward for someone who has the slightest idea what
they are doing. I'd really like to learn how to do this but I've
gotten impatient with the tutorials because this should be so
straightforward. Email me or ideally send me a msg on freenode, my
username is steve186.

Thanks.
--
http://mail.python.org/mailman/listinfo/python-list


UnknownTimeZoneError

2009-03-31 Thread Brian
I'm running App Engine with Django. I'm having troubles executing
timezone conversion via pytz. I have looked at the Google example
implementation. The following works in IDLE:

>>> import pytz
>>> from pytz import common_timezones
>>> from pytz import timezone
>>> import datetime
>>> timestamp = datetime.datetime.utcnow()
>>> print timestamp

2009-03-22 11:02:41.578000
>>> translated = timestamp.replace(tzinfo=pytz.utc).astimezone(timezone('US/Central'))
>>> print translated

2009-03-22 06:02:41.578000-05:00

I have tried to run the following in my app (modules imported, too):

def tz(request):
    timestamp = datetime.datetime.utcnow()
    translated = timestamp.replace(tzinfo=pytz.utc).astimezone(
        timezone('US/Central'))
    return respond(request, user, 'tz',
                   {'translated': translated, 'timestamp': timestamp})

When I pull up the page, I get the following error:
UnknownTimeZoneError at /tz
'US/Central'
Request Method: GET
Request URL:http://localhost:8080/tz
Exception Type: UnknownTimeZoneError
Exception Value:'US/Central'

Does this not work in App Engine for some reason?
--
http://mail.python.org/mailman/listinfo/python-list


Re: Creating huge data in very less time.

2009-03-31 Thread Tim Chase

andrea wrote:

On 31 Mar, 12:14, "venutaurus...@gmail.com" 
wrote:

That time is reasonable. The randomness should be in such a way that
MD5 checksum of no two files should be the same.The main reason for
having such a huge data is for doing stress testing of our product.



In randomness is not necessary (as I understood) you can just create
one single file and then modify one bit of it iteratively for 1000
times.
It's enough to make the checksum change.

Is there a way to create a file to big withouth actually writing
anything in python (just give me the garbage that is already on the
disk)?


Not exactly AFAIK, but this line of thinking does remind me of 
sparse files[1] if your filesystem supports them:


  f = file('%i.txt' % i, 'wb')
  data = str(i) + '\n'
  f.seek(1024*1024*1024 - len(data))
  f.write(data)
  f.close()

On FS's that support sparse files, it's blindingly fast and 
creates a virtual file of that size without the overhead of 
writing all the bits to the file.  However, this same 
optimization may also throw off any benchmarking you do, as it 
doesn't have to read a gig off the physical media.  This may be a 
good metric for hash calculation across such files, but not a 
good metric for I/O.


-tkc

[1]
http://en.wikipedia.org/wiki/Sparse_file



--
http://mail.python.org/mailman/listinfo/python-list


Re: Authorize.net integration problem

2009-03-31 Thread andrew cooke

have you printed msg and checked it is formatted correctly?  i have no
idea what the protocol is, but your use of join and string concatenation
in the generation of msg looks unusual to me.

andrew

Lakshman wrote:
> I am trying to integrate Authorize.net SIM API into django views.
>
> I am facing a problem in the fingerprint generation. I am repeatedly
> getting that the fingerprint generated doesn't match the one the
> server generates.
>
> I have generated the md5 hash with the key provided as specified in
> the SIM documentation.
>
> Here is the code:
>
> params = {
> 'x_login' : '4ffrBT36La',
> 'x_amount' : '100.00',
> 'x_show_form' : 'PAYMENT_FORM',
> 'x_type' : 'AUTH_CAPTURE',
> 'x_method' : 'CC',
> 'x_fp_sequence' : '123',
> 'x_version' : '3.1',
> 'x_relay_response' : 'FALSE',
> }
> params['x_fp_timestamp'] = int(time.time())
>
> msg = '^'.join([params['x_login'],
>str(params['x_fp_sequence']),
>str(params['x_fp_timestamp']),
>str(params['x_amount'])
>])+'^'
>
> fingerprint = hmac.new('9LyEU8t87h9Hj49Y',msg).hexdigest()
>
>
> I would be glad if some one that has dealt with this earlier, points
> out what the glitch is. Thanks in advance.
> --
> http://mail.python.org/mailman/listinfo/python-list
>
>


--
http://mail.python.org/mailman/listinfo/python-list


Hands on Python - Problem with Local Cgi Server

2009-03-31 Thread Gary Wood
I have the DOS box with the message 
Localhost CGI server started 

But when I try this 
Back in the www directory, 


  1. Open the web link http://localhost:8080/adder.html (preferably in a new 
window, separate from this tutorial). 
  2. You should see an adder form in your browser again. Note that the web 
address no longer includes 'cs.luc.edu'. Instead it starts with 
'localhost:8080', to reference the local Python server you started. Fill out 
the form and test it as before. 
  3. Look at the console window. You should see a log of the activity with the 
server. Close the server window. 
  4. Reload the web link http://localhost:8080/adder.html. You should get an 
error, since you refer to localhost, but you just stopped the local server.

I get the Windows Error 


Failed to Connect

  The connection was refused when attempting to contact localhost:8080.

  Though the site seems valid, the browser was unable to establish a connection.




*  Could the site be temporarily unavailable? Try again later.


*  Are you unable to browse other sites?  Check the computer's network 
connection.


*  Is your computer or network protected by a firewall or proxy? Incorrect 
settings can interfere with Web browsing.


The py file brings up the DOS box as if it's running ok 


'''Run a local cgi server from the current directory that treats *.cgi files
as executable python cgi scripts.'''

import http.server, sys, os

class CGIExtHTTPRequestHandler(http.server.CGIHTTPRequestHandler):
    '''This request handler mimics the Loyola server, which looks for CGI files
    to end in '.cgi' and be in any directory as opposed to the CGIHTTPServer
    expectation that the cgi script are of the form /cgi-bin/*.py.'''

    def is_python(self, path):
        """Test whether argument path is a Python script: allow .cgi"""
        return path.lower().endswith('.cgi')

    def is_cgi(self):
        '''As on xenon, go by extension only.'''
        base = self.path
        query = ''
        i = base.find('?')
        if i != -1:
            query = base[i:]
            base = base[:i]
        if not base.lower().endswith('.cgi'):
            return False
        [parentDirs, script] = base.rsplit('/', 1)
        self.cgi_info = (parentDirs, script+query)
        return True


def run_server():
    dirName = os.getcwd()
    blanks = dirName.count(' ')
    if 0 < blanks:  # server cannot handle blanks in path names
        print("""The path to this directory contains {blanks} space(s):
{dirName}
Either rename directories to remove the blanks or
move this directory to a place with no blanks in the path.
Aborting the local server run!""".format(**locals()))
        input("Press return after reading this message.")
        return

    server_addr = ('localhost', 8080)
    cgiServer = http.server.HTTPServer(server_addr, CGIExtHTTPRequestHandler)
    sys.stderr.write('Localhost CGI server started\n.')
    cgiServer.serve_forever()

run_server()

--
http://mail.python.org/mailman/listinfo/python-list


Re: Ordered Sets

2009-03-31 Thread Alex_Gaynor
On Mar 31, 5:52 am, pataphor  wrote:
> On Tue, 31 Mar 2009 10:33:26 +0200
>
> pataphor  wrote:
> > On Mon, 30 Mar 2009 19:57:39 -0700 (PDT)
> > Alex_Gaynor  wrote:
>
> > > I really like the Ordered Set class(I've been thinking about one
> > > ever since ordered dict hit the std lib), is there any argument
> > > against adding one to the collections module?  I'd be willing to
> > > write a PEP up for it.
>
> > Suppose the implementation would use a circular linked list. Then the
> > iteration could start from any point, the thing is symmetric after
> > all. But this would also mean we could add items to the left of that
> > new starting point, since that would now be the 'end' of the circle.
> > This is something different from merely remembering insertion order.
> > How do you feel about that?
>
> And in case that didn't confuse you enough, how about this method?
>
>     def move(self,key1,key2):
>         #self ==> key1,(key2 ... end), (key1+1... key2-1)
>         links = self.links
>         if set([key1,key2]) and self :
>             start = self.start
>             a = links[key1][1]
>             b = links[key2][0]
>             c  = links[start][0]
>             links[key1][1] = key2
>             links[key2][0] = key1
>             links[a][0] = c
>             links[c][1] = a
>             links[b][1] = start
>             links[start][0] = b
>
> This takes [key2:]  (isn't pseudo slice notation wonderful?) and
> inserts it after key1.
>
> for example:
>
>     R = OrderedSet(range(10))
>     print(list(R))
>     R.move(3,7)
>     print(list(R))
>
> gives:
>
> [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
> [0, 1, 2, 3, 7, 8, 9, 4, 5, 6]
>
> All in O(1) of course.
>
> P.

My inclination would be to more or less *just* have it implement the
set API, the way ordered dict does in 2.7/3.1.
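
Something as small as this sketch (mine, untested beyond the basics)
would already cover most of that -- O(1) add and membership test, O(n)
discard:

class OrderedSet(object):
    def __init__(self, iterable=()):
        self._seen = set()
        self._order = []
        for item in iterable:
            self.add(item)
    def add(self, item):
        if item not in self._seen:
            self._seen.add(item)
            self._order.append(item)
    def discard(self, item):
        if item in self._seen:
            self._seen.remove(item)
            self._order.remove(item)
    def __contains__(self, item):
        return item in self._seen
    def __iter__(self):
        return iter(self._order)
    def __len__(self):
        return len(self._seen)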

Alex
--
http://mail.python.org/mailman/listinfo/python-list


Re: Authorize.net integration problem

2009-03-31 Thread Lakshman Prasad
Yup. Unusual, it is.

But that's how their string specification syntax is. It includes a ^ at the
end.


On Tue, Mar 31, 2009 at 6:13 PM, andrew cooke  wrote:

>
> have you printed msg and checked it is formatted correctly?  i have no
> idea what the protocol is, but your use of join and string concatenation
> in the generation of msg looks unusual to me.
>
> andrew
>
> Lakshman wrote:
> > I am trying to integrate Authorize.net SIM API into django views.
> >
> > I am facing a problem in the fingerprint generation. I am repeatedly
> > getting that the fingerprint generated doesn't match the one the
> > server generates.
> >
> > I have generated the md5 hash with the key provided as specified in
> > the SIM documentation.
> >
> > Here is the code:
> >
> > params = {
> > 'x_login' : '4ffrBT36La',
> > 'x_amount' : '100.00',
> > 'x_show_form' : 'PAYMENT_FORM',
> > 'x_type' : 'AUTH_CAPTURE',
> > 'x_method' : 'CC',
> > 'x_fp_sequence' : '123',
> > 'x_version' : '3.1',
> > 'x_relay_response' : 'FALSE',
> > }
> > params['x_fp_timestamp'] = int(time.time())
> >
> > msg = '^'.join([params['x_login'],
> >str(params['x_fp_sequence']),
> >str(params['x_fp_timestamp']),
> >str(params['x_amount'])
> >])+'^'
> >
> > fingerprint = hmac.new('9LyEU8t87h9Hj49Y',msg).hexdigest()
> >
> >
> > I would be glad if some one that has dealt with this earlier, points
> > out what the glitch is. Thanks in advance.
> > --
> > http://mail.python.org/mailman/listinfo/python-list
> >
> >
>
>
>


-- 
Regards,
Lakshman
becomingguru.com
lakshmanprasad.com
--
http://mail.python.org/mailman/listinfo/python-list


Re: Thoughts on language-level configuration support?

2009-03-31 Thread jfager
On Mar 31, 5:57 am, Kay Schluehr  wrote:
> On 30 Mrz., 15:40, jfager  wrote:
>
>
>
> > I've written a short post on including support for configuration down
> > at the language level, including a small preliminary half-functional
> > example of what this might look like in Python, available 
> > athttp://jasonfager.com/?p=440.
>
> > The basic idea is that a language could offer syntactic support for
> > declaring configurable points in the program.  The language system
> > would then offer an api to allow the end user to discover a program's
> > configuration service, as well as a general api for providing
> > configuration values.
>
> > The included example implements the first bit and hints at the third,
> > defining a function that looks up what variable its output will be
> > assigned to and tries to find a corresponding value from a
> > configuration source.  It's very preliminary, but I hope it gives a
> > flavor of the general idea.
>
> > Any thoughts or feedback would be greatly appreciated.
>
> The problem with your idea is that those declared declaration points
> can be overlooked no matter how much syntactical support is added.
> Let's say a resource file is loaded and there are a few of the config-
> properties declared in modules you have written. Now an object wants
> to access a resource defined in the file and fails because the
> resource providing property could not be found since the property
> defining module wasn't loaded yet and the property couldn't register
> itself. That's why things are centralized as in optparse and the
> workflow is  designed upfront or things are implemented locally and
> individual units have to take care of their own.

What is a resource file?  How am I accessing the "resource-providing
property" of an unloaded module? That is, if a module isn't loaded,
how do I know about its properties?  Or conversely, if I know about
its properties, why isn't the module loaded?  In other words, why
would accessing a configuration point be any different than accessing
any other name in a module?
--
http://mail.python.org/mailman/listinfo/python-list


Re: Authorize.net integration problem

2009-03-31 Thread Aahz
In article ,
Lakshman   wrote:
>
>I am facing a problem in the fingerprint generation. I am repeatedly
>getting that the fingerprint generated doesn't match the one the
>server generates.

How are you getting this?  Server error?  You're not giving us enough
information.
-- 
Aahz (a...@pythoncraft.com)   <*> http://www.pythoncraft.com/

"Debugging is twice as hard as writing the code in the first place.
Therefore, if you write the code as cleverly as possible, you are, by
definition, not smart enough to debug it."  --Brian W. Kernighan
--
http://mail.python.org/mailman/listinfo/python-list


Re: Windows command line not displaying print commands

2009-03-31 Thread JonathanB
I think I found the problem. I recently removed Python 2.5 and
replaced it with 2.6. When I got in, I tried to run some django
commands and even they weren't producing output. On a hunch, I tried
to uninstall 2.6 and reinstall it, since now even django wasn't
producing output. When I tried, it told me that I couldn't because it
wasn't installed. I had to delete the folder and manually go through
and delete every instance of "python" in my registry. However, when I
reinstalled 2.6, it worked. Some of the registry entries were still
pointing to the defunct Python25 path rather than Python26. Now both
the simple hello.py script and the bigger script that I really wanted
to get working are producing output.

I apologize for the confusion caused by going the wrong direction with
my troubleshooting (from the simplest possible script to the more
complex script), next time I will be more sensible in my
troubleshooting.
--
http://mail.python.org/mailman/listinfo/python-list


Re: Thoughts on language-level configuration support?

2009-03-31 Thread jfager
On Mar 31, 6:02 am, David Stanek  wrote:
> On Tue, Mar 31, 2009 at 3:19 AM, jfager  wrote:
>
> > "Simply having a configuration file" - okay.  What format?  What if
> > the end user wants to keep their configuration info in LDAP?  Did the
> > library I'm including make the same decisions, or do I have to do some
> > contortions to adapt?  Didn't I write basically this  exact same code
> > for the last umpteen projects I worked on, just schlepping around
> > config objects?
>
> Ah I see your point here. During PyCon I was trying to add the ability
> to inject configuration into objects that are constructed by the
> snake-guice framework. The code is not yet in the Subversion
> repository, but I did brain dump a little documentation[0]. It is
> still very much a work in progress.
>
> 0.http://code.google.com/p/snake-guice/wiki/InjectingConfiguration

This is getting close :)  I think it would be nice if you didn't have
to come up with your own names (so that projects across different
authors would share more or less the same naming structure), and if
those names didn't encode their expectation of a particular end-user
configuration scheme.



>
> --
> David
> blog:http://www.traceback.org
> twitter:http://twitter.com/dstanek

--
http://mail.python.org/mailman/listinfo/python-list


Listing all python modules robustly

2009-03-31 Thread Brian
I've used the C api to write a method that can call any python module
function. I would like to extend the interface to allow dynamically listing
all python modules, and for a given module all functions, and for a given
function all argument types and the return types if possible.
Starting with the modules, I came up with this bit of code:

from pkgutil import walk_packages;

modules=[]

modules = [item[1] for item in walk_packages()]


I then took this to my various systems for testing. It works fine on OSX and
one of my Ubuntu boxes, but two of the Ubuntu boxes fail, each with a
different error message.  The failure is due to python importing every
single module and some of the modules failing. One example is the
UniConverter package - towards the end of __init__.py it calls sys.exit(0)
which kills the interpreter completely.

I've spent a ton of time trying to rewrite walk_packages to be robust to
failure, but so far without luck. Any advice is appreciated.
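
One way to sidestep the imports entirely (a sketch; it only lists names
reachable from sys.path and does not recurse into packages, so it is not a
full replacement for walk_packages):

import pkgutil

# iter_modules only scans the path entries, so a package calling
# sys.exit() in its __init__.py cannot kill the interpreter here.
names = sorted(name for _, name, _ in pkgutil.iter_modules())
print len(names), "top-level modules/packages found"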

/Brian
--
http://mail.python.org/mailman/listinfo/python-list


Re: Creating huge data in very less time.

2009-03-31 Thread Dave Angel
I wrote a tiny DOS program called resize that simply did a seek out to a 
(user specified) point, and wrote zero bytes. One (documented) side 
effect of DOS was that writing zero bytes would truncate the file at 
that point.  But it also worked to extend the file to that point without 
writing any actual data.  The net effect was that it adjusted the FAT 
table, and none of the data.  It was used frequently for file recovery, 
unformatting, etc.   And it was very fast.


Unfortunately, although the program still ran under NT (which includes 
Win 2000, XP, ...), the security system insists on zeroing all the 
intervening sectors, which takes much time, obviously.


Still, if the data is not important (make the first sector unique, and 
the rest zeroes), this would probably be the fastest way to get all 
those files created.  Just write the file name in the first sector 
(since we'll separately make sure the filename is unique), and then 
seek out to a billion, and write one more byte.  I won't assume that 
writing zero bytes would work for Unix.
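
A rough sketch of that idea in Python (the file name and the 10**9 size are
arbitrary; as noted above, NTFS will still zero the intervening bytes, so
this is only fast where the filesystem supports sparse or lazily extended
files):

import os

def make_big_file(name, size=10**9):
    with open(name, 'wb') as f:
        f.write(name)              # unique first "sector": the file name
        f.seek(size - 1)           # jump out to the requested size
        f.write('\0')              # one byte at the end sets the length
    return os.path.getsize(name)

print make_big_file('stress_000.dat')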


andrea wrote:

On 31 Mar, 12:14, "venutaurus...@gmail.com" 
wrote:
  

That time is reasonable. The randomness should be in such a way that
MD5 checksum of no two files should be the same.The main reason for
having such a huge data is for doing stress testing of our product.




In randomness is not necessary (as I understood) you can just create
one single file and then modify one bit of it iteratively for 1000
times.
It's enough to make the checksum change.

Is there a way to create a file to big withouth actually writing
anything in python (just give me the garbage that is already on the
disk)?

  

--
http://mail.python.org/mailman/listinfo/python-list


RE: Cannot register to submit a bug report

2009-03-31 Thread John Posner
 >>  >> We can try to debug this :)
 >>  >> 
 >>  >> > E-mail message checked by Spyware Doctor (6.0.0.386)
 >>  >> > Database version: 
 >>  >> 5.12060http://www.pctools.com/en/spyware-doctor-antivirus/
 >>  >> 
 >>  >> Any chance it's Spyware Doctor or some anti-virus flagging 
 >>  >> the message and hiding it?

I said:

 >> Thanks for the suggestion, but that's probably not it. No 
 >> message appears on my ISP mail server (Yahoo), either. 
 >> That's beyond the reach of my machine's Spyware Doctor.
 >> 
 >> I plan to take Terry's suggestion: send a message to the Webmaster.

My ISP (AT&T/Yahoo) was blocking email from the Python bug-tracker: "The
sending system has been identified as a source of spam". I took a suggestion
from Martin Lowis on the "tracker-discuss" list: register under a different
email address. That solution worked fine.





E-mail message checked by Spyware Doctor (6.0.0.386)
Database version: 5.12080
http://www.pctools.com/en/spyware-doctor-antivirus/
--
http://mail.python.org/mailman/listinfo/python-list


Re: win32com python AttributeError!

2009-03-31 Thread Mike Driscoll
On Mar 30, 11:17 pm, Michael  wrote:
> Hi Python-list -
>
> Has anyone figured this out from Rebecca:
>
> Hi, I am having trouble with win32com for python.  I get the following
> error when I try to issue any command after using Dispatch.
>
> >>> xl=win32com.client.Dispatch("Excel.Application")
> >>> xl.Visible=0
>
> Traceback (most recent call last):
>   File "", line 1, in ?
>     xl.Visible=0
>   File "D:\Python22\Lib\site-packages\win32com\client\dynamic.py",
> line 504, in __setattr__
>     raise AttributeError, "Property '%s.%s' can not be set." %
> (self._username_, attr)
> AttributeError: Property 'Excel.Application.Visible' can not be set.
>
>
>
> I have programs that I used to use all the time and they simply won't
> run.  Is this an error with python or win32com or my setup?
>
> Thanks,
> -rebecca
>
> I have the same problem.
>
> Thus,
>
> myWord = Dispatch("Word.Application")
> myWord.Visible = 1   # or, True
>
> opens a word document but
>
> myExcel = Dispatch("Excel.Application")
> myExcel.Visible = 1    # or, True
>
> causes (as Rebecca notes above):
>
> AttributeError: Property 'Excel.Application.Visible' can
> not be set.
>
> Thanks,
>
> Michael

This works fine for me on Windows XP and Python 2.5. It looks like
Rebecca is using Python 2.2, which might be the issue. I would also
upgrade to the latest PyWin32 as well. I'm using 212.

- Mike
--
http://mail.python.org/mailman/listinfo/python-list


Re: Thoughts on language-level configuration support?

2009-03-31 Thread David Stanek
On Tue, Mar 31, 2009 at 10:01 AM, jfager  wrote:
> On Mar 31, 6:02 am, David Stanek  wrote:
>> On Tue, Mar 31, 2009 at 3:19 AM, jfager  wrote:
>>
>> > "Simply having a configuration file" - okay.  What format?  What if
>> > the end user wants to keep their configuration info in LDAP?  Did the
>> > library I'm including make the same decisions, or do I have to do some
>> > contortions to adapt?  Didn't I write basically this  exact same code
>> > for the last umpteen projects I worked on, just schlepping around
>> > config objects?
>>
>> Ah I see your point here. During PyCon I was trying to add the ability
>> to inject configuration into objects that are constructed by the
>> snake-guice framework. The code is not yet in the Subversion
>> repository, but I did brain dump a little documentation[0]. It is
>> still very much a work in progress.
>>
>> 0.http://code.google.com/p/snake-guice/wiki/InjectingConfiguration
>
> This is getting close :)  I think it would be nice if you didn't have
> to come up with your own names (so that projects across different
> authors would share more or less the same naming structure), and if
> those names didn't encode their expectation of a particular end-user
> configuration scheme.
>

For my purpose I am writing the glue infrastructure that allows
components to be put together within an application. What I am missing
is a schema-like way to define configuration files. I have debated
starting a project to do that, but at this time I'm already
overextended :-)


-- 
David
blog: http://www.traceback.org
twitter: http://twitter.com/dstanek
--
http://mail.python.org/mailman/listinfo/python-list


RE: Style question - defining immutable class data members

2009-03-31 Thread John Posner
I said:

 >> > My intent was to fix an obvious omission: a special case 
 >> was discussed in
 >> > the "Augmented assignment statements" section, but an 
 >> almost-identical
 >> > special case was omitted from the "Assignment statements" section.

After finally getting registered at bugs.python.org (as described in another
thread), I submitted  my suggested change to the Python documentation. It's
issue 5621.

-John





E-mail message checked by Spyware Doctor (6.0.0.386)
Database version: 5.12080
http://www.pctools.com/en/spyware-doctor-antivirus/
--
http://mail.python.org/mailman/listinfo/python-list


Re: Ordered Sets

2009-03-31 Thread pataphor
On Tue, 31 Mar 2009 06:03:13 -0700 (PDT)
Alex_Gaynor  wrote:

> My inclination would be to more or less *just* have it implement the
> set API, the way ordered dict does in 2.7/3.1.

As far as I can tell all that would be needed is read/write access to
two key variables: The iterator start position and the underlying map.
There is no need for more than basic set API since people can use
those two variables to subclass their own iterators.

P.
--
http://mail.python.org/mailman/listinfo/python-list


methods and class methods

2009-03-31 Thread Zach Goscha
I just learned python programming and am wondering how to change a method to
a class method. Also what are the differences between a method and class
method.

Thanks in advance
- Zach (Freshman student in High school)
--
http://mail.python.org/mailman/listinfo/python-list


Re: An inheritance question: getting the name of the "one up" class

2009-03-31 Thread Nick
Thanks for the replies. This has given me some incentive to start looking at 
Python 3. Oh, and thanks for the articles on super().


Nick 


--
http://mail.python.org/mailman/listinfo/python-list


Re: methods and class methods

2009-03-31 Thread Daniel Fetchinson
> I just learned python programming and is wondering how to change a method to
> a class method.

class x( object ):
    @classmethod
    def i_will_be_a_class_method( cls ): pass

> Also what are the differences between a method and class method.

A class method receives the class as its first argument while an
ordinary method receives the instance as its first argument. This fact
is reflected in the convention that for class methods the first
argument typically is called cls while for ordinary methods it's
called self.
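
A small illustration of the difference (a sketch; the Greeter class and its
names are made up for the example):

class Greeter(object):
    greeting = "hello"

    def greet(self, name):             # ordinary method: gets the instance
        return "%s, %s" % (self.greeting, name)

    @classmethod
    def make_shouty(cls):              # class method: gets the class itself
        cls.greeting = "HELLO"

g = Greeter()
print g.greet("world")                 # hello, world
Greeter.make_shouty()                  # callable on the class, no instance needed
print g.greet("world")                 # HELLO, world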

Cheers,
Daniel


-- 
Psss, psss, put it down! - http://www.cafepress.com/putitdown
--
http://mail.python.org/mailman/listinfo/python-list


regex negative lookbehind assertion not working correctly?

2009-03-31 Thread Gabriel Rossetti

Hello everyone,

I am trying to write a regex pattern to match an ID in a URL only if it 
is not a given ID. Here's an example, the ID not to match is 
"14522XXX98", if my URL is "/profile.php?id=14522XXX99" I want it to 
match and if it's "/profile.php?id=14522XXX98" I want it not to. I tried 
this:


>>> re.search(r"/profile.php\?id=(\d+)(?"/profile.php?id=14522XXX98").groups()

('14522XXX9',)

which should not match, but it does, then I tried this :

>>> re.search(r"/profile.php\?id=(\d+)(?"/profile.php?id=14522XXX99").groups()

('14522XXX99',)

which should match and it does. I then tried uring /positive lookbehind 
assertion/ instead and it does this :


>>> re.search(r"/profile.php\?id=(\d+)(?<=14522XXX98)", 
"/profile.php?id=14522XXX98").groups()

('14522XXX98',)

which matches as it should and then I tried this :

>>> re.search(r"/profile.php\?id=(\d+)(?<=14522XXX98)", 
"/profile.php?id=14522XXX99").groups()

Traceback (most recent call last):
 File "", line 1, in 
AttributeError: 'NoneType' object has no attribute 'groups'

which doesn't match as it should. Could someone please explain why the 
negative lookbehind assertion is not working as I understand it? Also, 
notice how the last digit of the first expression is not matched, I get 
('14522XXX9',) instead of ('14522XXX98',), why? It does on the others


Thank you,
Gabriel
--
http://mail.python.org/mailman/listinfo/python-list


Re: Authorize.net integration problem

2009-03-31 Thread Stephen Chapman
Are they expecting the results in a specific order... because as you 
probably know a dictionary is never in the order that you add the items.


Lakshman Prasad wrote:

Yup. Unusual, it is.

But thats how their string specification syntax is. It includes a ^ at 
the end.



On Tue, Mar 31, 2009 at 6:13 PM, andrew cooke > wrote:



have you printed msg and checked it is formatted correctly?  i
have no
idea what the protocol is, but your use of join and string
concatenation
in the generation of msg looks unusual to me.

andrew

Lakshman wrote:
> I am trying to integrate Authorize.net SIM API into django views.
>
> I am facing a problem in the fingerprint generation. I am repeatedly
> getting that the fingerprint generated doesn't match the one the
> server generates.
>
> I have generated the md5 hash with the key provided as specified in
> the SIM documentation.
>
> Here is the code:
>
> params = {
> 'x_login' : '4ffrBT36La',
> 'x_amount' : '100.00',
> 'x_show_form' : 'PAYMENT_FORM',
> 'x_type' : 'AUTH_CAPTURE',
> 'x_method' : 'CC',
> 'x_fp_sequence' : '123',
> 'x_version' : '3.1',
> 'x_relay_response' : 'FALSE',
> }
> params['x_fp_timestamp'] = int(time.time())
>
> msg = '^'.join([params['x_login'],
>str(params['x_fp_sequence']),
>str(params['x_fp_timestamp']),
>str(params['x_amount'])
>])+'^'
>
> fingerprint = hmac.new('9LyEU8t87h9Hj49Y',msg).hexdigest()
>
>
> I would be glad if some one that has dealt with this earlier, points
> out what the glitch is. Thanks in advance.
> --
> http://mail.python.org/mailman/listinfo/python-list
>
>





--
Regards,
Lakshman
becomingguru.com 
lakshmanprasad.com 


--
http://mail.python.org/mailman/listinfo/python-list
  


--
http://mail.python.org/mailman/listinfo/python-list


Re: An inheritance question: getting the name of the "one up" class

2009-03-31 Thread Gabriel Genellina
En Tue, 31 Mar 2009 05:16:47 -0300, Steven D'Aprano  
 escribió:



On Tue, 31 Mar 2009 01:29:50 -0300, Gabriel Genellina wrote:


Oh, and while the gurus are at it, what would be the advantage (if any)
of changing, say
   Primate.__init__(self)
to
super(Human, self).__init__()


None, if you use single inheritance everywhere.


But there's no disadvantage to using super with single inheritance (and
new-style classes).


It's ok *if* you follow the guidelines at the end of the "harmful"  
document.



super is very tricky; see:
http://fuhm.net/super-harmful/
and
http://www.artima.com/weblogs/viewpost.jsp?thread=236275


As I understand it, the trickiness only comes about when you have diamond
diagrams in your MRO.


With multiple inheritance, you *always* have a diamond diagram - every  
class inherits from object.
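
A tiny example of that point (a sketch): two new-style bases already give a
diamond through object, and cooperative super() calls still visit each
__init__ exactly once.

class A(object):
    def __init__(self):
        print "A"
        super(A, self).__init__()

class B(object):
    def __init__(self):
        print "B"
        super(B, self).__init__()

class C(A, B):
    def __init__(self):
        print "C"
        super(C, self).__init__()

C()                                          # prints C, A, B
print [cls.__name__ for cls in C.__mro__]    # ['C', 'A', 'B', 'object']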


--
Gabriel Genellina

--
http://mail.python.org/mailman/listinfo/python-list


Re: complaints about no replies last week

2009-03-31 Thread pruebauno
On Mar 31, 2:56 am, Arnaud Delobelle  wrote:
> Arnaud Delobelle wrote:
> > prueba...@latinmail.com writes:
> > [...]
> > > I myself asked about how to write a library to efficiently do union
> > > and intersection of sets containing time intervals some time ago on
> > > this list and got little to no answers. It is a tricky problem. Since
> > > I was getting paid I got an O(n*n) solution working. People on this
> > > list on the other hand do not get paid and answer whatever strikes
> > > their fancy. Sometimes the question is hard or confusing and nobody is
> > > motivated enough to answer.
>
> > I wasn't around when you posted this I guess. Do you mean intervals sets
> > on the (real) number line such as:
>
> >       1-3, 6-10 meaning all numbers between 1 and 3 and all numbers
> >       between 6 and 10.
>
> > In this case I think you can achieve union and intersection in O(nlogn)
> > where n is the total number of intervals in the interval sets to unify
> > or intersect. There is an implementation below. I have chosen a very
> > simple data structure for interval sets: an interval set is the list of
> > its endpoints. E.g.
>
> >     1-3, 6-10 is the list [1, 3, 6, 10]
>
> > This means that I can't specify whether an interval is closed or open.
> > So in the implementation below all intervals are assumed to be open.
> > The method could be made to work for any kind of intervals with the same
> > complexity, there would just be a few more LOC.  I'm focusing on the
> > principle - here it is:
>
> > --
> > # Implementation of union and intersection of interval sets.
>
> > from itertools import *
>
> > def combine(threshold, intsets):
> >     endpoints = sorted(chain(*imap(izip, intsets, repeat(cycle([1,-1])
> >     height = 0
> >     compound = []
> >     for x, step in endpoints:
> >         old_height = height
> >         height += step
> >         if max(height, old_height) == threshold:
> >             compound.append(x)
> >     return compound
>
> > def union(*intsets):
> >     return combine(1, intsets)
>
> > def intersection(*intsets):
> >     return combine(len(intsets), intsets)
>
> > # tests
>
> > def pretty(a):
> >     a = iter(a)
> >     return ', '.join("%s-%s" % (a, b) for a, b in izip(a, a))
>
> > tests = [
> >     ([1, 5, 10, 15], [3, 11, 13, 20]),
> >     ([2, 4, 6, 8], [4, 7, 10, 11]),
> >     ([0, 11], [5, 10, 15, 25], [7, 12, 13, 15]),
> >     ]
>
> > for intsets in tests:
> >     print "sets: ", "; ".join(imap(pretty, intsets))
> >     print "union: ", pretty(union(*intsets))
> >     print "intersection: ", pretty(intersection(*intsets))
> >     print "-"*20
> > --
>
> > Is this what you were looking for?
>
> > --
> > Arnaud
>
> I realised after posting last night that I must be
>
> (1) solving the wrong problem
> (2) solving it badly
>
> - My implementation of the combine() function above is O(nlogn)
> (because of the sorted() call) whereas it could be O(n) by iterating
> over the interval in the parallel manner, hence (2).  This would make
> union() and intersection() O(n).
>
> - As the problem was solved by the OP in O(n^2) I must be solving the
> wrong problem (1).
>
> I apologise for this.
>
> However it was a nice and compact implementation IMHO :)
>
> --
> Arnaud

I am pretty sure the problem can be solved in O(n log n). I just
wasn't feeling overly smart when I was writing the algorithm. N is on
average 4 and it had eventually to be implemented inside a framework
using C++ anyway, so it is pretty fast. I can’t believe that no
programmer has come across the same kind of problem before, yet my
Google fu didn’t do anything for me.

Well since I attracted a couple people's attention I will describe the
problem in more detail. Describing the problem properly is probably as
hard as solving it, so excuse me if I struggle a bit.

The problem is for a health insurance company and involves the period
of time a person is covered. Most insurance companies allow not only
for the main member to be insured but his family: the spouse and the
dependents (children). This additional coverage costs extra but less
than a full new insurance. So for example if Alice buys an insurance
worth at 100 dollars a month, she can insure her husband Bob for an
additional 50 dollars. Under certain circumstances Alice may go off
the insurance and only Bob stays. In that case the price goes back to
100 dollars or maybe there is a deal for 80 or something like that. In
other words the cost of the insurance is dependent on the combination
of family members that participate in it. Additionally not only do we
have different family compositions but also different insurance
products. So you can get medical, dental and vision insurance.

All that data is stored in a database that is not very tidy and looks
something like this:

First Day of Coverage, Last Day of Coverage, Relationship, Product
5/3/2005, 5/3/2005, D, M
9/10/

Re: Introducing Python to others

2009-03-31 Thread David C. Ullrich
In article 
<039360fb-a29c-4f43-b6e0-ba97fb598...@z23g2000prd.googlegroups.com>,
 Mensanator  wrote:

> On Mar 26, 11:42 am, "andrew cooke"  wrote:
> > David C. Ullrich wrote:
> > > In article ,
> > >  "Paddy O'Loughlin"  wrote:
> >
> > > Here's my favorite thing about Python (you'd of course
> > > remark that it's just a toy example, doing everything
> > > in as dumb but easily understood way as possible):
> >
> > > x=[1,2]
> >
> > > print x+x
> >
> > > class Vector():
> > >   def __init__(self, data):
> > >     self.data = data
> > >   def __repr__(self):
> > >     return repr(self.data)
> > >   def __add__(self, other):
> > >     return Vector([self.data[0]+other.data[0],
> > >                   self.data[1]+other.data[1]])
> >
> > > x = Vector([1,2])
> >
> > > print x+x
> >
> > that's cute, but if you show them 2.6 or 3 it's even cuter:
> >
> > >>> from operator import add
> > >>> class Vector(list):
> >
> > ...   def __add__(self, other):
> > ...     return map(add, self, other)
> > ...>>> x = Vector([1,2])
> > >>> x+x
> >
> > [2, 4]
> >
> > andrew
> 
> Mind if I ask a question? In DU's code, both operands have to
> be instances of the Vector class?

Yes, in the code I posted. That code was not meant to be
an example of the right way to do anything, just an
illustration of how wonderful things like __add__ can be.

> >>> x = Vector([1,2])
> >>> x+x
> [2, 4]
> >>> x+[3,3]
> 
> Traceback (most recent call last):
>   File "", line 1, in 
> x+[3,3]
>   File "", line 7, in __add__
> return SV([self.data[0]+other.data[0],self.data[1]+other.data[1]])
> AttributeError: 'list' object has no attribute 'data'
> 
> 
> Whereas with your version, "other" just has to be an iterable.
> 
> >>> x = Vector([1,2])
> >>> x+x
> [2, 4]
> >>> x+[3,3]
> [4, 5]
> >>> x+(9,9)
> [10, 11]
> >>> x+{3:4,4:9}
> [4, 6]
> 
> Although it does require the same number of elements (because that's
> required by map and could be changed if necessary).
> 
> >>> x+[3,3,3]
> 
> Traceback (most recent call last):
>   File "", line 1, in 
> x+[3,3,3]
>   File "", line 3, in __add__
> return map(add,self,other)
> TypeError: unsupported operand type(s) for +: 'NoneType' and 'int'
> 
> 
> What would you have to do to make this work?
> 
> >>> x+x+x  # expecting [3,6]
> [2, 4, 1, 2]

-- 
David C. Ullrich
--
http://mail.python.org/mailman/listinfo/python-list


Re: regex negative lookbehind assertion not working correctly?

2009-03-31 Thread andrew cooke

it is working - it's making the final "8" not be matched.

don't you want lookahead rather than lookbehind?  or force an end of string?

andrew



Gabriel Rossetti wrote:
> Hello everyone,
>
> I am trying to write a regex pattern to match an ID in a URL only if it
> is not a given ID. Here's an example, the ID not to match is
> "14522XXX98", if my URL is "/profile.php?id=14522XXX99" I want it to
> match and if it's "/profile.php?id=14522XXX98" I want it not to. I tried
> this:
>
>  >>> re.search(r"/profile.php\?id=(\d+)(?<!14522XXX98)",
> "/profile.php?id=14522XXX98").groups()
> ('14522XXX9',)
>
> which should not match, but it does, then I tried this :
>
>  >>> re.search(r"/profile.php\?id=(\d+)(?<!14522XXX98)",
> "/profile.php?id=14522XXX99").groups()
> ('14522XXX99',)
>
> which should match and it does. I then tried uring /positive lookbehind
> assertion/ instead and it does this :
>
>  >>> re.search(r"/profile.php\?id=(\d+)(?<=14522XXX98)",
> "/profile.php?id=14522XXX98").groups()
> ('14522XXX98',)
>
> which matches as it should and then I tried this :
>
>  >>> re.search(r"/profile.php\?id=(\d+)(?<=14522XXX98)",
> "/profile.php?id=14522XXX99").groups()
> Traceback (most recent call last):
>   File "", line 1, in 
> AttributeError: 'NoneType' object has no attribute 'groups'
>
> which doesn't match as it should. Could someone please explain why the
> negative lookbehind assertion is not working as I understand it? Also,
> notice how the last digit of the first expression is not matched, I get
> ('14522XXX9',) instead of ('14522XXX98',), why? It does on the others
>
> Thank you,
> Gabriel
> --
> http://mail.python.org/mailman/listinfo/python-list
>
>


--
http://mail.python.org/mailman/listinfo/python-list


Re: Introducing Python to others

2009-03-31 Thread David C. Ullrich
In article ,
 Scott David Daniels  wrote:

> Mensanator wrote:
> > On Mar 26, 11:42 am, "andrew cooke"  wrote:
> >> ...
> >> that's cute, but if you show them 2.6 or 3 it's even cuter:
> >>
> > from operator import add
> > class Vector(list):
> >> ...   def __add__(self, other):
> >> ... return map(add, self, other)
> >> ...>>> x = Vector([1,2])
> > x+x
> >> [2, 4]
> > 
> > What would you have to do to make this work?
> > 
>  x+x+x  # expecting [3,6]
> > [2, 4, 1, 2]
> > 
> 
>  class Vector(list):
>  def __add__(self, other):
>  return type(self)(x + y for x, y in zip(self, other))

Question: I would have thought it would be 

  return type(self)([x + y for x, y in zip(self, other)])

What's this thing that looks like a list comprehension but isn't?

Comment:

I didn't mean to start a big deal, but as long as it's started:
Of course returning that list as in Andrew's example is not what
we want. Someone said we should return a Vector instead. That's
probably what the demo should do, but in stuff like this that
I actually _use_ I tend to do something like what you do here
(with very different spelling, since type(self) wouldn't work
in the bad old days.) The reason of course being that we want
subclasses to return instances of the subclass automatically.

On the other hand I have this vague feeling that explicitly
inspecting the type like this is "wrong" - I've always wondered
whether this is the "right" way to do it. ???

>  def __sub__(self, other):
>  return type(self)(x - y for x, y in zip(self, other))
>  def __repr__(self):
>  return '%s(%s)' % (
>  type(self).__name__, list.__repr__(self))
> 
>  x = Vector([1,2])
>  x + x + x
> 
> --Scott David Daniels
> scott.dani...@acm.org

-- 
David C. Ullrich
--
http://mail.python.org/mailman/listinfo/python-list


Re: Introducing Python to others

2009-03-31 Thread andrew cooke
David C. Ullrich wrote:
> In article ,
>  Scott David Daniels  wrote:
>
>> Mensanator wrote:
>> > On Mar 26, 11:42 am, "andrew cooke"  wrote:
>> >> ...
>> >> that's cute, but if you show them 2.6 or 3 it's even cuter:
>> >>
>> > from operator import add
>> > class Vector(list):
>> >> ...   def __add__(self, other):
>> >> ... return map(add, self, other)
>> >> ...>>> x = Vector([1,2])
>> > x+x
>> >> [2, 4]
>> >
>> > What would you have to do to make this work?
>> >
>>  x+x+x  # expecting [3,6]
>> > [2, 4, 1, 2]
>> >
>>
>>  class Vector(list):
>>  def __add__(self, other):
>>  return type(self)(x + y for x, y in zip(self, other))
>
> Question: I would have thought it would be
>
>   return type(self)([x + y for x, y in zip(self, other)])
>
> What's this thing that looks like a list comprehension but isn't?

it's a generator expression. 
http://docs.python.org/3.0/reference/expressions.html#index-3735
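
A quick illustration (sketch): the surrounding call supplies the
parentheses, so no extra brackets are needed.

nums = [1, 2, 3]
print list(x + 1 for x in nums)     # generator expression: [2, 3, 4]
print list([x + 1 for x in nums])   # list comprehension passed in: [2, 3, 4]
print sum(x * x for x in nums)      # 14; works with any callable taking an iterable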

andrew

--
http://mail.python.org/mailman/listinfo/python-list


Re: Listing all python modules robustly

2009-03-31 Thread Brian
Turns out that the Twisted framework provides better introspective support
than standard python, so problem solved!

http://twistedmatrix.com/documents/8.2.0/api/twisted.python.modules.html#walkModules

On Tue, Mar 31, 2009 at 8:12 AM, Brian  wrote:

> I've used the C api to write a method that can call any python module
> function. I would like to extend the interface to allow dynamically listing
> all python modules, and for a given module all functions, and for a given
> function all argument types and the return types if possible.
> Starting with the modules, I came up with this bit of code:
>
> from pkgutil import walk_packages;
>
> modules=[]
>
> modules = [item[1] for item in walk_packages()]
>
>
> I then took this to my various systems for testing. It works fine on OSX
> and one of my Ubuntu boxes, but two of the Ubuntu boxes fail, each with a
> different error message.  The failure is due to python importing every
> single module and some of the modules failing. One example is the
> UniConverter package - towards the end of __init__.py it calls sys.exit(0)
> which kills the interpreter completely.
>
> I've spent a ton of time trying to rewrite walk_packages to be robust to
> failure, but so far without luck. Any advice is appreciated.
>
> /Brian
>
--
http://mail.python.org/mailman/listinfo/python-list


Detecting Binary content in files

2009-03-31 Thread ritu
Hi,

I'm wondering if Python has a utility to detect binary content in
files? Or if anyone has any ideas on how that can be accomplished? I
haven't been able to find any useful information to accomplish this
(my other option is to fire off a perl script from within my python
script that will tell me whether the file is binary), so any pointers
will be appreciated.

Thanks,
Ritu
--
http://mail.python.org/mailman/listinfo/python-list


Re: Creating huge data in very less time.

2009-03-31 Thread Grant Edwards
On 2009-03-31, Dave Angel  wrote:

> I wrote a tiny DOS program called resize that simply did a
> seek out to a (user specified) point, and wrote zero bytes.
> One (documented) side effect of DOS was that writing zero
> bytes would truncate the file at that point.  But it also
> worked to extend the file to that point without writing any
> actual data.  The net effect was that it adjusted the FAT 
> table, and none of the data.  It was used frequently for file
> recovery, unformatting, etc.  And it was very fast.
>
> Unfortunately, although the program still ran under NT (which includes 
> Win 2000, XP, ...), the security system insists on zeroing all the 
> intervening sectors, which takes much time, obviously.

Why would it even _allocate_ intervening sectors?  That's pretty
brain-dead.

>> Is there a way to create a file to big withouth actually writing
>> anything in python (just give me the garbage that is already on the
>> disk)?

No.  That would be a monstrous security hole.

-- 
Grant Edwards   grante Yow! I'm having a MID-WEEK
  at   CRISIS!
   visi.com
--
http://mail.python.org/mailman/listinfo/python-list


urllib2 problem, data param not working?

2009-03-31 Thread Gabriel Rossetti

Hello everyone,

I am having a problem with urllib2, when I do this :

   post = urllib.urlencode(post)
   request = urllib2.Request(url, post)
   response = urllib2.urlopen(request)

or this :

   post = urllib.urlencode(post)
   response = urllib2.urlopen(url, post)

or this :

   post = urllib.urlencode(post)
   request = urllib2.Request(url)
   response = urllib2.urlopen(request, post)

it doesn't work, it's like if the post params weren't added, and if I do 
this :


   post = urllib.urlencode(post)
   request = url + '?' + post
   response = urllib2.urlopen(request)

it works as expected, can anyone explain what is going on? I know that 
if I don't add the data ('post' in my case) param it uses an HTTP GET, 
could that be why it works when I add them manually?
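
One way to check what will actually be sent, without touching the network
(a sketch; the url and post values here are stand-ins for the ones in the
snippets above):

import urllib, urllib2

url = 'http://example.com/endpoint'            # stand-in URL
post = urllib.urlencode({'key': 'value'})      # stand-in POST fields

request = urllib2.Request(url, post)
print request.get_method()        # 'POST' when data is supplied, 'GET' otherwise
print request.get_full_url()
print request.get_data()          # the urlencoded body that will be sent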


Thank you,
Gabriel
--
http://mail.python.org/mailman/listinfo/python-list


Re: Relative Imports, why the hell is it so hard?

2009-03-31 Thread s4g
Hi,

I was looking for a nice idiom for interpackage imports as I found
this thread.
Here come a couple of solutions I came up with. Any discussion is
welcome.

I assume the same file structure

\ App
| main.py
+--\subpack1
| | __init__.py
| | module1.py
|
+--\subpack2
| | __init__.py
| | module2.py


When you run main.py all imports relative to \App work fine, so the
only problem is running a module from within a subpackage as a script.
I therefore assume we want to run module1.py as a script, which wants
to import module2.

I hope the following solutions are self-evident

= solution 1
--> in module1.py
import sys, os
if __name__ == '__main__':
    sys.path.append(os.path.normpath(__file__+'/../..'))

import subpack2.module2

= solution 2
--> in subpack1/__init__.py
import sys, os

_top_package_level = 1   # with current package being level 0

_top_package = os.path.normpath(__file__ + '/..'*(_top_package_level+1))
if _top_package not in sys.path:
    sys.path.append(_top_package)

--> in module1 or any module in the package, which requires import
relative to the package top
import __init__
import subpack2.module2


= solution 3
--> in package_import.py, somewhere on the PYTHONPATH ( perhaps in
standard lib ;)

import sys, os

def set_top_package(module, level):
    _top_package = os.path.normpath(module + '/..'*(level+1))
    if _top_package not in sys.path:
        sys.path.append(_top_package)

class absolute_import(object):
    def __init__(self, module, level):
        self.level = level
        self.module = module

    def __enter__(self):
        sys.path.insert( 0,
            os.path.normpath(self.module + '/..'*(self.level+1))
        )

    def __exit__(self, exc_type, exc_val, exc_tb):
        del sys.path[0]

--> in module1
import package_import
package_import.set_top_package(__file__, 1)
import subpack2.module2

--> or in module1
import package_import
with package_import.absolute_import(__file__, 1):
    import subpack2.module2
...


--
http://mail.python.org/mailman/listinfo/python-list


Re: Detecting Binary content in files

2009-03-31 Thread Matt Nordhoff
ritu wrote:
> Hi,
> 
> I'm wondering if Python has a utility to detect binary content in
> files? Or if anyone has any ideas on how that can be accomplished? I
> haven't been able to find any useful information to accomplish this
> (my other option is to fire off a perl script from within m python
> script that will tell me whether the file is binary), so any pointers
> will be appreciated.
> 
> Thanks,
> Ritu

There isn't any perfect test. The usual heuristic is to check if there
are any NUL bytes in the file:

>>> '\0' in some_string

That can fail, of course. UTF-16-encoded text will have tons of NUL
bytes, and some binary files may not have any.
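
A sketch of that heuristic (the paths are just examples; reading only the
first block is the usual shortcut for large files):

def looks_binary(filename, blocksize=1024):
    # Heuristic: treat the file as binary if the first block contains a NUL byte.
    with open(filename, 'rb') as f:
        return '\0' in f.read(blocksize)

print looks_binary('/bin/ls'), looks_binary('/etc/passwd')   # typically: True False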
-- 
--
http://mail.python.org/mailman/listinfo/python-list


Re: Creating huge data in very less time.

2009-03-31 Thread Grant Edwards
On 2009-03-31, Steven D'Aprano  wrote:

[writing a bunch of files with a bunch of random data in each]

>> Can this be done within few minutes of time. Is it possble
>> only using threads or can be done in any other way. This has
>> to be done in Windows.
>
> Is it possible? Sure. In a couple of minutes? I doubt it. 1000
> files of 1GB each means you are writing 1TB of data to a HDD.

1TB of random data.  Damn.  I don't know where you're going to
be able to find an entropy source that can produce that much
data in a reasonable amount of time. Typical entropy sources in
Desktop OSes based on keystrokes and network packets can
probably only manage a few hundred bits/second best case.

Even a hardware solution like those in some chipsets can't do
more than about 100K bits/second.

If your motherboard has a hardware RNG that'll do 100Kbps,
you're looking at about 3.8 years to generate 1TB of random
data.

Of course it's possible the OP doesn't really require random
data...

-- 
Grant Edwards   grante Yow! Do you have exactly
  at   what I want in a plaid
   visi.compoindexter bar bat??
--
http://mail.python.org/mailman/listinfo/python-list


Re: Creating huge data in very less time.

2009-03-31 Thread Terry Reedy

venutaurus...@gmail.com wrote:


That time is reasonable. The randomness should be in such a way that
MD5 checksum of no two files should be the same.The main reason for
having such a huge data is for doing stress testing of our product.


For most purposes (other than stress testing the HD and HD read 
routines), I suspect you would be better off directly piping the data 
into your product (or a special version of it).


--
http://mail.python.org/mailman/listinfo/python-list


Re: Creating huge data in very less time.

2009-03-31 Thread Tim Chase

Is there a way to create a file to big withouth actually writing
anything in python (just give me the garbage that is already on the
disk)?


No.  That would be a monstrous security hole.


Sure...just install 26 hard-drives and partition each up into 40 
1-GB unformatted partitions, and then read directly from 
/dev/hd[a-z][0-39]




-tkc
(ponders to self, "does logical partitioning allow for that many 
partitions on a disk?")





--
http://mail.python.org/mailman/listinfo/python-list


Printing Out Called Function Calls

2009-03-31 Thread Victor Subervi
Hi;
Due to screwy problems at my server farm that they refuse to fix, I need to
call lines that execute code from other files, like this:
theContent += `tidBits[i][y][:-2]`
but what that returns is this (as an example):
tableTop(348,180)
when I need it to execute the fn tableTop. What to do?
TIA,
Victor
--
http://mail.python.org/mailman/listinfo/python-list


Writing to Console on mac OS X

2009-03-31 Thread RGK
I'm on mac os x 10.4.11 running python 2.5.2, and Django 1.0, but this 
is a python question.


When doing django/mod_python stuff, I can write to the Apache error_log 
file with


sys.stderr.write("SOMETHING I WANT TO KNOW")

which had me wondering if there's not a means for a misc. python program 
to write to the Mac OS X console?   That would be much nicer than having 
to open up the error log and inspect stuff, as then I could see debug 
info stream past on a console window.


(This is console, as in the "console" run from /Applications/Utilities, 
not the bash "Terminal")


Any help or suggestions appreciated. Thx.

Ross.
--
http://mail.python.org/mailman/listinfo/python-list


Re: Detecting Binary content in files

2009-03-31 Thread Benjamin Kaplan
On Tue, Mar 31, 2009 at 12:23 PM, ritu  wrote:

> Hi,
>
> I'm wondering if Python has a utility to detect binary content in
> files? Or if anyone has any ideas on how that can be accomplished? I
> haven't been able to find any useful information to accomplish this
> (my other option is to fire off a perl script from within m python
> script that will tell me whether the file is binary), so any pointers
> will be appreciated.
>


All files are binary. The question is whether every byte in the file
represents a (whole or part of a) character or whether some of the data
represents a different data type. You could theoretically have a "binary"
file that appears in a text editor to be perfect English. How does your perl
script tell the difference?



http://mail.python.org/mailman/listinfo/python-list
>
--
http://mail.python.org/mailman/listinfo/python-list


Re: regex negative lookbehind assertion not working correctly?

2009-03-31 Thread MRAB

Gabriel Rossetti wrote:

Hello everyone,

I am trying to write a regex pattern to match an ID in a URL only if it 
is not a given ID. Here's an example, the ID not to match is 
"14522XXX98", if my URL is "/profile.php?id=14522XXX99" I want it to 
match and if it's "/profile.php?id=14522XXX98" I want it not to. I tried 
this:


 >>> re.search(r"/profile.php\?id=(\d+)(?"/profile.php?id=14522XXX98").groups()

('14522XXX9',)

which should not match, but it does, then I tried this :


[snip]
How can '(\d+)' be capturing '14522XXX9'? '\d' matches only digits!

Anyway, your basic problem is that it initially matches '14522XXX98',
but then the lookbehind rejects that, so it backtracks and releases the
last character, giving '14522XXX9', which is not rejected because
'14522XXX9' isn't '14522XXX98'.

Try putting a '\b' after the '\d+' to reject partial IDs.
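
A quick check of that suggestion (a sketch using all-digit stand-ins
1452298/1452299 in place of the masked IDs from the original post):

import re

pattern = r"/profile\.php\?id=(\d+)\b(?<!1452298)"
print re.search(pattern, "/profile.php?id=1452299").groups()   # ('1452299',)
print re.search(pattern, "/profile.php?id=1452298")            # None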
--
http://mail.python.org/mailman/listinfo/python-list


Re: create a log level for python logging module

2009-03-31 Thread dj
On Mar 30, 4:18 pm, Vinay Sajip  wrote:
> On Mar 30, 4:13 pm, dj  wrote:
>
>
>
> > I am trying to create a log level called userinfo for the python logging
> > module. I read the source code and tried to register the level to the
> > logging namespace with the following source:
>
> >     from logging import Logger
>
> >     # create the custom log level
> >     class userinfo(Logger):
> >         def userinfo(self, msg, *args, **kwargs):
> >             if self.isEnabledFor(WARNING):
> >                 self._log(WARNING, msg, args, **kwargs)
>
> >     # Register log level in the logging.Logger namespace
> >     Logger.userinfo = userinfo
>
> > As I am sure you guessed, it did not work. If you know how this is
> > done or know what I am doing work or can provide a link to example
> > code (because I have not been able to locate any), I would greatly
> > appreciate it.
> > My sincere and heartfelt thanks in advance.
>
> See the example script at
>
> http://dpaste.com/hold/21323/
>
> which contains, amongst other things, an illustration of how to use
> custom logging levels in an application.
>
> Regards,
>
> Vinay Sajip

I got the code setup, however, I still get an error for my custom log
level.

### Python code
###


import sys, logging

# log levels
CRITICAL = 50
ERROR = 40
WARNING = 30
USERINFO =25 # my custom log level
INFO = 20
DEBUG  = 10

# define the range
LEVEL_RANGE = range(DEBUG, CRITICAL +1)

# level names

log_levels = {

CRITICAL : 'critical',
ERROR : 'error',
WARNING : 'warning',
USERINFO : 'userinfo',
INFO : 'info',
DEBUG : 'debug',

}

# associate names with our levels.
for lvl in log_levels.keys():
    logging.addLevelName(lvl, log_levels[lvl])



# setup a log instance
logger = logging.getLogger('myLog')
logger.setLevel(CRITICAL)
hdlr = logging.StreamHandler()
hdlr.setLevel(CRITICAL)
logger.addHandler(hdlr)

# give it a try
print 'write logs'
logger.critical('this a critical log message')
logger.userinfo('this is a userinfo log message')   #call custom log
level

# Output from my interpreter
##

Python 2.6 (r26:66721, Oct  2 2008, 11:35:03) [MSC v.1500 32 bit
(Intel)]
Type "help", "copyright", "credits" or "license" for more information.
>>>
Evaluating log_level_test.py
write logs
this a critical log message
AttributeError: Logger instance has no attribute 'userinfo'
>>>

I would love to know what I am doing wrong. Thanks again for your
help, it is really appreciated.
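
For comparison, a minimal sketch of one way to make logger.userinfo()
resolvable: attach a helper method to Logger, much like the originally
quoted code did (a sketch only, not necessarily the approach in the linked
example script):

import logging

USERINFO = 25
logging.addLevelName(USERINFO, 'userinfo')

def userinfo(self, msg, *args, **kwargs):
    # helper so logger.userinfo(...) exists, mirroring logger.info/debug/...
    if self.isEnabledFor(USERINFO):
        self._log(USERINFO, msg, args, **kwargs)

logging.Logger.userinfo = userinfo

logger = logging.getLogger('myLog')
logger.setLevel(USERINFO)
logger.addHandler(logging.StreamHandler())
logger.userinfo('this is a userinfo log message')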



--
http://mail.python.org/mailman/listinfo/python-list


Re: Printing Out Called Function Calls

2009-03-31 Thread andrew cooke
Victor Subervi wrote:
> Hi;
> Due to screwy problems at my server farm that they refuse to fix, I need
> to
> call lines that execute code from other files, like this:
> theContent += `tidBits[i][y][:-2]`
> but what that returns is this (as an example):
> tableTop(348,180)
> when I need it to execute the fn tableTop. What do?


change server farms?

use eval() or exec()?

andrew

--
http://mail.python.org/mailman/listinfo/python-list


Re: Cyclic GC rules for subtyped objects with tp_dictoffset

2009-03-31 Thread BChess
On Mar 31, 12:27 am, Hrvoje Niksic  wrote:
> [ Questions such as this might be better suited for the capi-sig list,
>  http://mail.python.org/mailman/listinfo/capi-sig]
>
> BChess writes:
> > I'm writing a new PyTypeObject that is base type, supports cyclic
> > GC, and has a tp_dictoffset.  If my type is sub-typed by a python
> > class, what exactly are the rules for how I'm supposed to treat my
> > PyDict object with regards to cyclic GC?  Do I still visit it in my
> > traverse () function if I'm subtyped?  Do I decrement the refcount
> > upon dealloc?  By the documentation, I'm assuming I should always be
> > using _PyObject_GetDictPtr() to be accessing the dictionary, which I
> > do.  But visiting the dictionary in traverse() in the case it's
> > subtyped results in a crash in weakrefobject.c.  I'm using Python
> > 2.5.
>
> First off, if your class is intended only as a base class, are you
> aware that simply inheriting from a dictless class adds a dict
> automatically?  For example, the base "object" type has no dict, but
> inheriting from it automatically adds one (unless you override that
> using __slots__).  Having said that, I'll assume that the base class
> is usable on its own and its direct instances need to have a dict as
> well.
>
> I'm not sure if this kind of detail is explicitly documented, but as
> far as the implementation goes, the answer to your question is in
> Objects/typeobject.c:subtype_traverse.  That function gets called to
> traverse instances of heap types (python subclasses of built-in
> classes such as yours).  It contains code like this:
>
>      if (type->tp_dictoffset != base->tp_dictoffset) {
>          PyObject **dictptr = _PyObject_GetDictPtr(self);
>              if (dictptr && *dictptr)
>                  Py_VISIT(*dictptr);
>      }
>
> According to this, the base class is responsible for visiting its dict
> in its tp_traverse, and the subtype only visits the dict it added
> (which is why its location differs).  Note that visiting an object
> twice still shouldn't cause a crash; objects may be and are visited an
> arbitrary number of times, and it's up to the GC to ignore those it
> has already seen.  So it's possible that you have a bug elsewhere in
> the code.
>
> As far as the decrementing goes, the rule of thumb is: if you created
> it, you get to decref it.  subtype_dealloc contains very similar
> logic:
>
>         /* If we added a dict, DECREF it */
>         if (type->tp_dictoffset && !base->tp_dictoffset) {
>                 PyObject **dictptr = _PyObject_GetDictPtr(self);
>                 if (dictptr != NULL) {
>                         PyObject *dict = *dictptr;
>                         if (dict != NULL) {
>                                 Py_DECREF(dict);
>                                 *dictptr = NULL;
>                         }
>                 }
>         }
>
> So, if the subtype added a dict, it was responsible for creating it
> and it will decref it.  If the dict was created by you, it's up to you
> to dispose of it.

My confusion stemmed from the fact that I wasn't actually,
technically, allocating a PyDict in this space.
PyObject_GenericSetAttr() does that automatically when it finds that
it's NULL.  But that seems to be the same as if I had made it myself
-- so I'm to dealloc either way.

Thank you for the very in-depth answer, though.  You're right: the
problem was elsewhere. The crash stemmed from using a negative number
for tp_dictoffset.  This doesn't seem to do the right thing when
subtyping -- the tp_dictoffset was pointing to the same memory as the
offset specified in the subtype's tp_weaklistoffset.  I missed the
sentence in the documentation that negative tp_dictoffsets should only
be used for variable-length objects.  Using a positive offset instead
worked like a charm.

Thanks again,
Ben
--
http://mail.python.org/mailman/listinfo/python-list


Re: Detecting Binary content in files

2009-03-31 Thread Josh Dukes
There might be another way but off the top of my head:

#!/usr/bin/env python

def isbin(filename):
    fd=open(filename,'rb')
    for b in fd.read():
        if ord(b) > 127:
            fd.close()
            return True
    fd.close()
    return False

for f in ['/bin/bash', '/etc/passwd']:
    print "%s is binary: " % f, isbin(f)


Of course this would detect unicode files as being binary and maybe
that's not what you want. How are you thinking about doing it in
perl exactly? 


On Tue, 31 Mar 2009 09:23:05 -0700 (PDT)
ritu  wrote:

> Hi,
> 
> I'm wondering if Python has a utility to detect binary content in
> files? Or if anyone has any ideas on how that can be accomplished? I
> haven't been able to find any useful information to accomplish this
> (my other option is to fire off a perl script from within m python
> script that will tell me whether the file is binary), so any pointers
> will be appreciated.
> 
> Thanks,
> Ritu
> --
> http://mail.python.org/mailman/listinfo/python-list


-- 

Josh Dukes
MicroVu IT Department
--
http://mail.python.org/mailman/listinfo/python-list


Re: Detecting Binary content in files

2009-03-31 Thread Josh Dukes
s/if ord(b) > 127/if ord(b) > 127 or ord(b) < 32/


On Tue, 31 Mar 2009 10:19:44 -0700
Josh Dukes  wrote:

> There might be another way but off the top of my head:
> 
> #!/usr/bin/env python
> 
> def isbin(filename):
>     fd=open(filename,'rb')
>     for b in fd.read():
>         if ord(b) > 127:
>             fd.close()
>             return True
>     fd.close()
>     return False
> 
> for f in ['/bin/bash', '/etc/passwd']:
>     print "%s is binary: " % f, isbin(f)
> 
> 
> Of course this would detect unicode files as being binary and maybe
> that's not what you want. How are you thinking about doing it in
> perl exactly? 
> 
> 
> On Tue, 31 Mar 2009 09:23:05 -0700 (PDT)
> ritu  wrote:
> 
> > Hi,
> > 
> > I'm wondering if Python has a utility to detect binary content in
> > files? Or if anyone has any ideas on how that can be accomplished? I
> > haven't been able to find any useful information to accomplish this
> > (my other option is to fire off a perl script from within m python
> > script that will tell me whether the file is binary), so any
> > pointers will be appreciated.
> > 
> > Thanks,
> > Ritu
> > --
> > http://mail.python.org/mailman/listinfo/python-list
> 
> 


-- 

Josh Dukes
MicroVu IT Department
--
http://mail.python.org/mailman/listinfo/python-list


Re: Does Python have certificate?

2009-03-31 Thread Aahz
In article ,
Paddy3118   wrote:
>
>The Academy of Research into Science Education being a true leader in
>the field offers acclaimed accreditation for Python programmers. Those
>who pass our strict exams and pay our modest fees will earn our
>prestigious certification.
>
>Those who show promise can advance to our Winter Improve Python to
>Expert program, for an additional fee, and, be given expert tutoring
>to help you gain our exemplary A.R.S.E./W.I.P.E certification which is
>guaranteed to attract certain types of employers by its name alone.

+1 QOTW
-- 
Aahz (a...@pythoncraft.com)   <*> http://www.pythoncraft.com/

"Debugging is twice as hard as writing the code in the first place.
Therefore, if you write the code as cleverly as possible, you are, by
definition, not smart enough to debug it."  --Brian W. Kernighan
--
http://mail.python.org/mailman/listinfo/python-list


Re: Cannot register to submit a bug report

2009-03-31 Thread Terry Reedy

John Posner wrote:


My ISP (AT&T/Yahoo) was blocking email from the Python bug-tracker: "The
sending system has been identified as a source of spam".


I hope you were able to suggest to them that that identification must be 
an error.  Frustrating given the spam sources that somehow do not get 
identified.


> I took a suggestion

from Martin Lowis on the "tracker-discuss" list: register under a different
email address. That solution worked fine.


Better than waiting for AT&T to wise up. ;-)

--
http://mail.python.org/mailman/listinfo/python-list


Re: Ordered Sets

2009-03-31 Thread Alex_Gaynor
On Mar 31, 11:06 am, pataphor  wrote:
> On Tue, 31 Mar 2009 06:03:13 -0700 (PDT)
>
> Alex_Gaynor  wrote:
> > My inclination would be to more or less *just* have it implement the
> > set API, the way ordered dict does in 2.7/3.1.
>
> As far as I can tell all that would be needed is read/write access to
> two key variables: The iterator start position and the underlying map.
> There is no need for more than basic set API since people can use
> those two variables to subclass their own iterators.
>
> P.

The only issue with that is if we ever moved it to a C implementation
we'd probably use a more conventional linked list.

Alex
--
http://mail.python.org/mailman/listinfo/python-list
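
As a rough, self-contained sketch of what is being discussed in this thread (an
illustration only, not the proposal itself): an insertion-ordered set can be built
from a dict for O(1) membership plus a list that records insertion order. Removal
is O(n), which is exactly the weakness a linked-list-based C version would address.

class OrderedSet(object):
    def __init__(self, iterable=()):
        self._map = {}          # item -> True, for O(1) membership tests
        self._items = []        # items in insertion order
        for item in iterable:
            self.add(item)

    def add(self, item):
        if item not in self._map:
            self._map[item] = True
            self._items.append(item)

    def discard(self, item):
        if item in self._map:
            del self._map[item]
            self._items.remove(item)    # O(n): the weak spot

    def __contains__(self, item):
        return item in self._map

    def __iter__(self):
        return iter(self._items)

    def __len__(self):
        return len(self._items)

print list(OrderedSet("abracadabra"))   # ['a', 'b', 'r', 'c', 'd']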


Re: Detecting Binary content in files

2009-03-31 Thread Josh Dukes
or rather:

#!/usr/bin/env python
import string

def isbin(filename):
    fd = open(filename, 'rb')
    for b in fd.read():
        if b not in string.printable and b not in string.whitespace:
            fd.close()
            return True
    fd.close()
    return False

for f in ['/bin/bash', '/etc/passwd']:
    print "%s is binary: " % f, isbin(f)


whatever... basically it's what everyone else said, every file is
binary, so it all depends on your definition of binary.

On Tue, 31 Mar 2009 10:23:51 -0700
Josh Dukes  wrote:

> s/if ord(b) > 127/if ord(b) > 127 or ord(b) < 32/
> 
> 
> On Tue, 31 Mar 2009 10:19:44 -0700
> Josh Dukes  wrote:
> 
> > There might be another way but off the top of my head:
> > 
> > #!/usr/bin/env python
> > 
> > def isbin(filename):
> >     fd = open(filename, 'rb')
> >     for b in fd.read():
> >         if ord(b) > 127:
> >             fd.close()
> >             return True
> >     fd.close()
> >     return False
> > 
> > for f in ['/bin/bash', '/etc/passwd']:
> >     print "%s is binary: " % f, isbin(f)
> > 
> > 
> > Of course this would detect unicode files as being binary and maybe
> > that's not what you want. How are you thinking about doing it in
> > perl exactly? 
> > 
> > 
> > On Tue, 31 Mar 2009 09:23:05 -0700 (PDT)
> > ritu  wrote:
> > 
> > > Hi,
> > > 
> > > I'm wondering if Python has a utility to detect binary content in
> > > files? Or if anyone has any ideas on how that can be
> > > accomplished? I haven't been able to find any useful information
> > > to accomplish this (my other option is to fire off a perl script
> > > from within m python script that will tell me whether the file is
> > > binary), so any pointers will be appreciated.
> > > 
> > > Thanks,
> > > Ritu
> > > --
> > > http://mail.python.org/mailman/listinfo/python-list
> > 
> > 
> 
> 


-- 

Josh Dukes
MicroVu IT Department
--
http://mail.python.org/mailman/listinfo/python-list


Re: Printing Out Called Function Calls

2009-03-31 Thread Victor Subervi
>
> change server farms?
>

Really. But I imagine they are all trash for the price I pay.

>
> use eval() or exec()?
>
eval worked, exec did not. Thanks!

>
> andrew
>
>
--
http://mail.python.org/mailman/listinfo/python-list


Re: Relative Imports, why the hell is it so hard?

2009-03-31 Thread Kay Schluehr
On 31 Mrz., 18:48, s4g  wrote:
> Hi,
>
> I was looking for a nice idiom for interpackage imports as I found
> this thread.
> Here come a couple of solutions I came up with. Any discussion is
> welcome.
>
> I assume the same file structure
>
> \ App
> | main.py
> +--\subpack1
> | | __init__.py
> | | module1.py
> |
> +--\subpack2
> | | __init__.py
> | | module2.py
>
> When you run main.py all imports relative to \App work fine, so the
> only problem is running a module from within a subpackage as a script.
> I therefore assume we want to run module1.py as a script, which wants
> to import module2.
>
> I hope the following solutions are self-evident
>
> = solution 1
> --> in module1.py
> import sys, os
> if __name__ == '__main__':
>     sys.path.append(os.path.normpath(__file__+'/../..'))
>
> import subpack2.module2
>
> = solution 2
> --> in subpack1/__init__.py
> import sys, os
>
> _top_package_level = 1   # with current package being level 0
>
> _top_package = os.path.normpath(__file__ + '/..'*(_top_package_level
> +1))
> if _top_package not in sys.path:
>     sys.path.append(_top_package)
>
> --> in module1 or any module in the package, which requires import
> relative to the package top
> import __init__
> import subpack2.module2
>
> = solution 3
> --> in package_import.py, somewhere on the PYTHONPATH ( perhaps in
> standard lib ;)
>
> import sys, os
>
> def set_top_package(module, level):
>     _top_package = os.path.normpath(module + '/..'*(level+1))
>     if _top_package not in sys.path:
>         sys.path.append(_top_package)
>
> class absolute_import(object):
>     def __init__(self, module, level):
>         self.level = level
>         self.module = module
>
>     def __enter__(self):
>         sys.path.insert( 0,
>             os.path.normpath(self.module + '/..'*(self.level+1))
>             )
>
>     def __exit__(self, exc_type, exc_val, exc_tb):
>         del sys.path[0]
>
> --> in module1
> import package_import
> package_import.set_top_package(__file__, 1)
> import subpack2.module2
>
> --> or in module1
> import package_import
> with package_import.absolute_import(__file__, 1):
>     import subpack2.module2
>     ...

This and similar solutions ( see Istvan Alberts ) point me to a
fundamental problem of the current import architecture. Suppose you
really want to run a module as a script without a prior import from a
module path:

...A\B\C> python my_module.py

then the current working directory C is added to sys.path which means
that the module finder searches in C but C isn't a known package.
There is no C package in sys.modules even if the C directory is
"declared" as a package by placing an __init__.py file in it. Same
goes of course with B and A. Although the ceremony has been performed
basically correct the interpreter god is not pacified and doesn't
respond. But why not? Because it looks up for *living* imported
packages in the module cache ( in sys.modules ).

I don't think there is any particular design idea behind it. The
module cache is just a simple flat dictionary; a no-brainer to
implement and efficient for look ups. But it counteracts a domain
model. All you are left with is those Finders, Loaders and Importers
in Brett Cannons importlib. Everything remains deeply mysterious and I
don't wonder that it took long for him to work this out.

--
http://mail.python.org/mailman/listinfo/python-list


Re: create a log level for python logging module

2009-03-31 Thread MRAB

dj wrote:

On Mar 30, 4:18 pm, Vinay Sajip  wrote:

On Mar 30, 4:13 pm, dj  wrote:




I am trying to create a log level called userinfo for the python logging module. I read
the source code and tried to register the level to the logging namespace with the
following source:

from logging import Logger

# create the custom log level
class userinfo(Logger):
    def userinfo(self, msg, *args, **kwargs):
        if self.isEnabledFor(WARNING):
            self._log(WARNING, msg, args, **kwargs)

# Register log level in the logging.Logger namespace
Logger.userinfo = userinfo
Has I am sure you guessed, it did not work. If you know how this is
done or know what I am doing work or can provide a link to example
code (because I have not been able to locate any), I would greatly
appreciate it.
My sincere and heartfelt thanks in advance.

See the example script at

http://dpaste.com/hold/21323/

which contains, amongst other things, an illustration of how to use
custom logging levels in an application.

Regards,

Vinay Sajip


I got the code setup, however, I still get an error for my custom log
level.

### Python code
###


import sys, logging

# log levels
CRITICAL = 50
ERROR = 40
WARNING = 30
USERINFO =25 # my custom log level
INFO = 20
DEBUG  = 10

# define the range
LEVEL_RANGE = range(DEBUG, CRITICAL +1)

# level names

log_levels = {

CRITICAL : 'critical',
ERROR : 'error',
WARNING : 'warning',
USERINFO : 'userinfo',
INFO : 'info',
DEBUG : 'debug',

}

# associate names with our levels.
for lvl in log_levels.keys():
logging.addLevelName(lvl, log_levels[lvl])



# setup a log instance
logger = logging.getLogger('myLog')
logger.setLevel(CRITICAL)
hdlr = logging.StreamHandler()
hdlr.setLevel(CRITICAL)
logger.addHandler(hdlr)

# give it a try
print 'write logs'
logger.critical('this a critical log message')
logger.userinfo('this is a userinfo log message')   #call custom log
level

# Output from my interpreter
##

Python 2.6 (r26:66721, Oct  2 2008, 11:35:03) [MSC v.1500 32 bit
(Intel)]
Type "help", "copyright", "credits" or "license" for more information.
Evaluating log_level_test.py
write logs
this a critical log message
AttributeError: Logger instance has no attribute 'userinfo'

I would love to know what I am doing wrong. Thanks again for your
help, it is really appreciated.


I think that custom levels don't get their own method; you have to use:

logger.log(USERINFO, 'this is a userinfo log message')

although you could add it yourself with, say:

setattr(logger, 'userinfo', lambda *args: logger.log(USERINFO, *args))
--
http://mail.python.org/mailman/listinfo/python-list
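
Pulling the pieces of this thread together, here is a minimal end-to-end sketch
(an illustration only, assuming Python 2.x logging) of a working "userinfo" level:
register the level name, attach a helper method to the Logger class, and log
through it.

import logging

USERINFO = 25                                  # between INFO (20) and WARNING (30)
logging.addLevelName(USERINFO, 'userinfo')

def userinfo(self, msg, *args, **kwargs):
    if self.isEnabledFor(USERINFO):
        self._log(USERINFO, msg, args, **kwargs)

logging.Logger.userinfo = userinfo             # now every logger has .userinfo()

logging.basicConfig(level=USERINFO)
log = logging.getLogger('myLog')
log.userinfo('this is a userinfo log message')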


Re: Creating huge data in very less time.

2009-03-31 Thread Irmen de Jong

venutaurus...@gmail.com wrote:

On Mar 31, 1:15 pm, Steven D'Aprano
 wrote:

On Mon, 30 Mar 2009 22:44:41 -0700, venutaurus...@gmail.com wrote:

Hello all,
I've a requirement where I need to create around 1000
files under a given folder with each file size of around 1GB. The
constraints here are each file should have random data and no two files
should be unique even if I run the same script multiple times.

I don't understand what you mean. "No two files should be unique" means
literally that only *one* file is unique, the others are copies of each
other.

Do you mean that no two files should be the same?


Moreover
the filenames should also be unique every time I run the script. One
possibility is that we can use Unix time format for the file   names
with some extensions.

That's easy. Start a counter at 0, and every time you create a new file,
name the file by that counter, then increase the counter by one.


Can this be done within few minutes of time. Is it
possble only using threads or can be done in any other way. This has to
be done in Windows.

Is it possible? Sure. In a couple of minutes? I doubt it. 1000 files of
1GB each means you are writing 1TB of data to a HDD. The fastest HDDs can
reach about 125 MB per second under ideal circumstances, so that will
take at least 8 seconds per 1GB file or 8000 seconds in total. If you try
to write them all in parallel, you'll probably just make the HDD waste
time seeking backwards and forwards from one place to another.

--
Steven


That time is reasonable. The randomness should be in such a way that
MD5 checksum of no two files should be the same.The main reason for
having such a huge data is for doing stress testing of our product.



Does it really need to be *files* on the *hard disk*?

What nobody has suggested yet is that you can *simulate* the files by making a large set 
of custom file-like object and feed that to your application. (If possible!)

The object could return a 1 GB byte stream consisting of a GUID followed by 
random bytes
(or just millions of A's, because you write that the only requirement is to have a 
different MD5 checksum).
That way you have no need of a 1 terabyte hard drive and the huge wait time to create 
the actual files...


--irmen
--
http://mail.python.org/mailman/listinfo/python-list
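
A rough sketch of the simulation idea above (assumptions: the consuming code only
needs read(), and a GUID prefix is enough to give every stream a distinct MD5).
A fake 1 GB "file" could then look like this:

import uuid

class FakeBigFile(object):
    """File-like object: a unique GUID followed by filler bytes."""
    def __init__(self, size=1024 ** 3):
        self._head = uuid.uuid4().hex       # 32 chars, unique per instance
        self._remaining = size

    def read(self, n=-1):
        if n < 0 or n > self._remaining:
            n = self._remaining
        self._remaining -= n
        head, self._head = self._head[:n], self._head[n:]
        return head + 'A' * (n - len(head))

f = FakeBigFile()
print len(f.read(4096)), 'bytes read; the first 32 are unique per "file"'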


Re: udp package header

2009-03-31 Thread Artur M. Piwko
In the darkest hour on Tue, 24 Mar 2009 00:50:10 + (UTC),
R. David Murray  screamed:
>> I got a problem. I want to send a udp package and get this package (server and 
>> client). It's easy in python but I want to look at the udp header; how can I 
>> do that?
>
> The English word is 'packet'.
>
> If you are on Linux you can use raw sockets for this.
>

With a little drawback - raw sockets require root privilege.

-- 
[ Artur M. Piwko : Pipen : AMP29-RIPE : RLU:100918 : From == Trap! : SIG:210B ]
[ 19:19:19 user up 12028 days,  7:14,  1 user, load average: 0.36, 0.28, 0.54 ]

   God is an atheist.
--
http://mail.python.org/mailman/listinfo/python-list
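
For what it's worth, a small sketch of inspecting the UDP header via a raw socket
(assumptions: Linux, Python 2, root privileges; with IPPROTO_UDP the kernel hands
over the IP header, then the 8-byte UDP header, then the payload):

import socket
import struct

s = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_UDP)
packet, addr = s.recvfrom(65535)

ihl = (ord(packet[0]) & 0x0F) * 4            # IP header length in bytes
src_port, dst_port, length, checksum = struct.unpack('!HHHH', packet[ihl:ihl + 8])
print src_port, dst_port, length, checksum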


custom handler does not write to log file

2009-03-31 Thread dj
It seems that you can create custom handlers and add them to the
logging.handlers namespace(http://mail.python.org/pipermail/python-
list/2008-May/493826.html.)
But for reasons beyond my understanding my log file (test.log) is not
written to.

  my handler class
###
import logging.handlers


# create my handler class
class MyHandler(logging.handlers.RotatingFileHandler):
    def __init__(self, filename):
        logging.handlers.RotatingFileHandler.__init__(self, filename,
            maxBytes=10485760, backupCount=5)


# Register handler in the "logging.handlers" namespace
logging.handlers.MyHandler = MyHandler

  test app.py
##
import logging
import logging.handlers

from myhandler import MyHandler

# log file path
LOG_FILE_PATH='H:/python_experiments/logging/test.log'  # log file
path
#log file formatter
myformatter = logging.Formatter('%(asctime)s %(levelname)s %(filename)
s %(lineno)d %(message)s')

# setup a log instance for myHandler
logger2 = logging.getLogger('myLog2')
logger2.setLevel(logging.CRITICAL)
hdlr2 = logging.handlers.MyHandler(LOG_FILE_PATH)
hdlr2.setFormatter(myformatter)
hdlr2.setLevel(logging.CRITICAL)
logger2.addHandler(hdlr2)

# give it a try
print 'using myHandler'
logger2.debug('this is a test of myHandler')
print 'after logger using myHandler'

Thanks in advance for your help.
--
http://mail.python.org/mailman/listinfo/python-list


Re: Thoughts on language-level configuration support?

2009-03-31 Thread Lorenzo Gatti
On 31 Mar, 09:19, jfager  wrote:
> On Mar 31, 2:54 am, David Stanek  wrote:
>
> > On Mon, Mar 30, 2009 at 9:40 AM, jfager  wrote:
> > >http://jasonfager.com/?p=440.
>
> > > The basic idea is that a language could offer syntactic support for
> > > declaring configurable points in the program.  The language system
> > > would then offer an api to allow the end user to discover a programs
> > > configuration service, as well as a general api for providing
> > > configuration values.

A configuration "service"? An "end user" that bothers to discover it?
API for "providing" configuration "values"? This suggestion, and the
companion blog post, seem very distant from the real world for a
number of reasons.

1) Users want to supply applications with the least amount of useful
configuration information as rarely and easily as possible, not to use
advanced tools to satisfy an application's crudely expressed
configuration demands.

Reducing inconvenience for the user entails sophisticated and mostly
ad hoc techniques: deciding without asking (e.g. autoconf looking into
C compiler headers and trying shell commands or countless applications
with "user profiles" querying the OS for the current user's home
directory), asking when the software is installed (e.g. what 8 bit
character encoding should be used in a new database), designing
sensible and safe defaults.

2) Practical complex configuration files (or their equivalent in a DB,
a LDAP directory, etc.) are more important and more permanent than the
applications that use them; their syntax and semantics should be
defined by external specifications (such as manuals and examples), not
in the code of a particular implementation.

User documentation is necessary, and having a configuration mechanism
that isn't subject to accidents when the application is modified is
equally important.

3) Configuration consisting of values associated with individual
variables is an unusually simple case. The normal case is translating
between nontrivial sequential, hierarchical or reticular data
structures in the configuration input and quite different ones in the
implementation.

4) Your actual use case seems to be providing a lot of tests with a
replacement for the "real" configuration of the actual application.
Branding variables as "configuration" all over the program isn't an
useful way to help the tests and the actual application build the same
data structures in different ways.

> > What value does this have over simply having a configuration file.
>
> "Simply having a configuration file" - okay.  What format?  What if
> the end user wants to keep their configuration info in LDAP?

Wait a minute. Reading the "configuration" from a live LDAP directory
is a major feature, with involved application specific aspects (e.g.
error handling) and a solid justification in the application's
requirements (e.g. ensuring up to date authentication and
authorization data), not an interchangeable configuration provider and
certainly not something that the user can replace.

Deciding where the configuration comes from is an integral part of the
design, not something that can or should be left to the user: there
can be value in defining common object models for various sources of
configuration data and rules to combine them, like e.g. in the Spring
framework for Java, but it's only a starting point for the actual
design of the application's configuration.

> > In your load testing application you could have easily checked for the
> > settings in a config object.
>
> Not really easily, no.  It would have been repeated boilerplate across
> many different test cases (actually, that's what we started with and
> refactored away), instead of a simple declaration that delegated the
> checking to the test runner.

A test runner has no business configuring tests beyond calling generic
setup and teardown methods; tests can be designed smartly and factored
properly to take care of their own configuration without repeating
"boilerplate".

> > I think that the discover-ability of
> > configuration can be handled with example configs and documentation.
>
> Who's keeping that up to date?  Who's making sure it stays in sync
> with the code?  Why even bother, if you could get it automatically
> from the code?

It's the code that must remain in sync with the documentation, the
tests, and the actual usage of the application. For example, when did
you last see incompatible changes in Apache's httpd.conf?

You seem to think code is central and actual use and design is a
second class citizen. You say in your blog post: "Users shouldn’t have
to pore through the code to find all the little bits they can tweak".
They shouldn't because a well designed application has adequate
documentation of what should be configured in the form of manuals,
wizards, etc. and they shouldn't because they don't want to tweak
little bits, not even if they have to.

Regards,

Lorenzo Gatti
--
http://mail.python.org/mailman/listinfo/python-list


RE: Cannot register to submit a bug report

2009-03-31 Thread John Posner
Terry Reedy said: 

 >> > My ISP (AT&T/Yahoo) was blocking email from the Python bug-tracker:
"The
 >> > sending system has been identified as a source of spam".
 >> 
 >> I hope you were able to suggest to them that that 
 >> identification must be 
 >> an error.  Frustrating given the spam sources that somehow 
 >> do not get 
 >> identified.

The AT&T web site carefully explained that only administrators, not mere
mortals, would be able to submit a "this is not spam" appeal. So I forwarded
the appropriate info to Martin Lowis, my Tracker-discuss benefactor.

 >> > I took a suggestion
 >> > from Martin Lowis on the "tracker-discuss" list: register under a
different
 >> > email address. That solution worked fine.
 >> 
 >> Better than waiting for AT&T to wise up. ;-)

Fuggedaboudit! :-)






--
http://mail.python.org/mailman/listinfo/python-list


Re: Writing to Console on mac OS X

2009-03-31 Thread Irmen de Jong

RGK wrote:
I'm on mac os x 10.4.11 running python 2.5.2, and Django 1.0, but this 
is a python question.


When doing django/mod_python stuff, I can write to the Apache error_log 
file with


sys.stderr.write("SOMETHING I WANT TO KNOW")

which had me wondering if there's not a means for a misc. python program 
to write to the Mac OS X console?   That would be much nicer than having 
to open up the error log and inspect stuff, as then I could see debug 
info stream past on a console window.


(This is console, as in the "console" run from /Applications/Utilities, 
not the bash "Terminal")


Any help or suggestions appreciated. Thx.

Ross.


Yeah, use the syslog facility, for instance:

import syslog
syslog.openlog("django")
syslog.syslog(syslog.LOG_ALERT, "Here is my syslog alert message")


It seems that anything below alert level isn't shown in the console.
I don't know how to change this.


You might want to consider using the Python logging module instead?

--irmen
--
http://mail.python.org/mailman/listinfo/python-list


Re: Writing to Console on mac OS X

2009-03-31 Thread RGK


Thanks for the pointer Irmen. That works fine.

Also my unfamiliarity with the console app is showing - I just learned 
that there is a navigation pane activated by the 'logs' icon that allows 
me to see various system logs, including the Apache ones :p


You're right, I've heard a bit about the Python logging module, but 
never looked into it. This is a good reason to take a look.


Thanks again & Regards,
Ross.

Irmen de Jong wrote:

RGK wrote:
I'm on mac os x 10.4.11 running python 2.5.2, and Django 1.0, but this 
is a python question.


When doing django/mod_python stuff, I can write to the Apache 
error_log file with


sys.stderr.write("SOMETHING I WANT TO KNOW")

which had me wondering if there's not a means for a misc. python 
program to write to the Mac OS X console?   That would be much nicer 
than having to open up the error log and inspect stuff, as then I 
could see debug info stream past on a console window.


(This is console, as in the "console" run from 
/Applications/Utilities, not the bash "Terminal")


Any help or suggestions appreciated. Thx.

Ross.


Yeah, use the syslog facility, for instance:

import syslog
syslog.openlog("django")
syslog.syslog(syslog.LOG_ALERT, "Here is my syslog alert message")


It seems that anything below alert level isn't shown in the console.
I don't know how to change this.


You might want to consider using the Python logging module instead?

--irmen

--
http://mail.python.org/mailman/listinfo/python-list
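
If anyone wants to try the logging-module route Irmen mentions, a minimal sketch
might look like the following (one assumption: on OS X the syslog daemon listens
on the /var/run/syslog socket, which SysLogHandler can be pointed at):

import logging
import logging.handlers

logger = logging.getLogger("django")
handler = logging.handlers.SysLogHandler(address="/var/run/syslog")
logger.addHandler(handler)
logger.setLevel(logging.DEBUG)

logger.critical("Here is my syslog alert message, via the logging module")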


Re: Relative Imports, why the hell is it so hard?

2009-03-31 Thread Terry Reedy

Kay Schluehr wrote:

On 31 Mrz., 18:48, s4g  wrote:



This and similar solutions ( see Istvan Alberts ) point me to a
fundamental problem of the current import architecture. Suppose you
really want to run a module as a script without a prior import from a
module path:

...A\B\C> python my_module.py

then the current working directory C is added to sys.path which means
that the module finder searches in C but C isn't a known package.
There is no C package in sys.modules even if the C directory is
"declared" as a package by placing an __init__.py file in it. Same
goes of course with B and A.


Nothing is added to sys.modules, except the __main__ module, unless 
imported (which so are on startup).



Although the ceremony has been performed
basically correct the interpreter god is not pacified and doesn't
respond.


But the import 'ceremony' has not been performed.


But why not? Because it looks up for *living* imported
packages in the module cache ( in sys.modules ).

I don't think there is any particular design idea behind it. The
module cache is just a simple flat dictionary; a no-brainer to
implement and efficient for look ups.


This all dates to the time before packages and imports from zip files 
and such.


> But it counteracts a domain model.

What is that?


All you are left with is those Finders, Loaders and Importers
in Brett Cannons importlib. Everything remains deeply mysterious and I
don't wonder that it took long for him to work this out.


And your proposal is?

tjr

--
http://mail.python.org/mailman/listinfo/python-list


Re: Relative Imports, why the hell is it so hard?

2009-03-31 Thread Kay Schluehr
On 31 Mrz., 20:50, Terry Reedy  wrote:

> Nothing is added to sys.modules, except the __main__ module, unless
> imported (which so are on startup).

Yes. The startup process is opaque but at least user defined modules
are not accidentally imported.

>
> > Although the ceremony has been performed
> > basically correct the interpreter god is not pacified and doesn't
> > respond.
>
> But the import 'ceremony' has not been performed.

There is no import ceremony. Imports are just stated in the source.
There is a package ceremony for whatever reasons.

> > But why not? Because it looks up for *living* imported
> > packages in the module cache ( in sys.modules ).
>
> > I don't think there is any particular design idea behind it. The
> > module cache is just a simple flat dictionary; a no-brainer to
> > implement and efficient for look ups.
>
> This all dates to the time before packages and imports from zip files
> and such.
>
>  > But it counteracts a domain model.
>
> What is that?

Object oriented programming.

>
> > All you are left with is those Finders, Loaders and Importers
> > in Brett Cannons importlib. Everything remains deeply mysterious and I
> > don't wonder that it took long for him to work this out.
>
> And your proposal is?

I have still more questions than answers.
--
http://mail.python.org/mailman/listinfo/python-list


Re: regex negative lookbehind assertion not working correctly?

2009-03-31 Thread Gabriel Rossetti

MRAB wrote:

Gabriel Rossetti wrote:

Hello everyone,

I am trying to write a regex pattern to match an ID in a URL only if 
it is not a given ID. Here's an example, the ID not to match is 
"14522XXX98", if my URL is "/profile.php?id=14522XXX99" I want it to 
match and if it's "/profile.php?id=14522XXX98" I want it not to. I 
tried this:


 >>> re.search(r"/profile.php\?id=(\d+)(?<!14522XXX98)", "/profile.php?id=14522XXX98").groups()

('14522XXX9',)

which should not match, but it does, then I tried this :


[snip]
How can '(\d+)' be capturing '14522XXX9'? '\d' matches only digits!

:-), yes, I had replaced the digits for the example (originally longer, etc)


Anyway, your basic problem is that it initially matches '14522XXX98',
but then the lookbehind rejects that, so it backtracks and releases the
last character, giving '14522XXX9', which is not rejected because
'14522XXX9' isn't '14522XXX98'.

Try putting a '\b' after the '\d+' to reject partial IDs.


That did it, thanks a lot, I would never have found that.

Gabriel
--
http://mail.python.org/mailman/listinfo/python-list
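
To make the fix concrete, a short sketch with made-up IDs (not the original data):
the trailing \b stops the engine from backtracking into a shorter, partial ID, so
the excluded ID is rejected outright.

import re

blocked = "1452298"
pattern = r"/profile.php\?id=(\d+)\b(?<!%s)" % blocked

print re.search(pattern, "/profile.php?id=1452299").groups()   # ('1452299',)
print re.search(pattern, "/profile.php?id=1452298")            # None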


Re: Detecting Binary content in files

2009-03-31 Thread Dave Angel

All files are binary, but probably by binary you mean non-text.

There are lots of ways to decide if a file is non-text, but I don't know 
of any "standard" way.  You can detect a file as not-ascii by simply 
searching for any character greater than 0x7f.  But that doesn't handle 
a UTF-8 file, which is an 8bit  text file representing Unicode.


The way I've seen done many times is to search for regular occurrence of 
the end-of-line character, and the lack of nulls.   Most "binary" files 
will have more nulls than linefeeds, and any null could be considered a 
marker for a non-text file.


If you're happy with your particular perl script, probably it could be 
readily translated to Python.


ritu wrote:

Hi,

I'm wondering if Python has a utility to detect binary content in
files? Or if anyone has any ideas on how that can be accomplished? I
haven't been able to find any useful information to accomplish this
(my other option is to fire off a perl script from within m python
script that will tell me whether the file is binary), so any pointers
will be appreciated.

Thanks,
Ritu

  

--
http://mail.python.org/mailman/listinfo/python-list
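
A quick sketch of that heuristic (a convention rather than a standard: NUL bytes
are taken as a marker for non-text, and a long sample with no line breaks is
treated as suspicious):

def looks_binary(filename, sample_size=8192):
    fd = open(filename, 'rb')
    sample = fd.read(sample_size)
    fd.close()
    if not sample:
        return False                # empty files pass as text
    if '\0' in sample:
        return True                 # text files almost never contain NULs
    # text tends to have line breaks every few hundred bytes
    return '\n' not in sample and len(sample) > 512

for name in ['/bin/bash', '/etc/passwd']:
    print "%s is binary: " % name, looks_binary(name)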


Re: Re: Creating huge data in very less time.

2009-03-31 Thread Dave Angel
The FAT file system does not support sparse files.  They were added in 
NTFS, in the Windows 2000 timeframe, to my recollection.


Don't try to install NTFS on a floppy.

Grant Edwards wrote:

On 2009-03-31, Dave Angel  wrote:

  

I wrote a tiny DOS program called resize that simply did a
seek out to a (user specified) point, and wrote zero bytes.
One (documented) side effect of DOS was that writing zero
bytes would truncate the file at that point.  But it also
worked to extend the file to that point without writing any
actual data.  The net effect was that it adjusted the FAT 
table, and none of the data.  It was used frequently for file

recovery, unformatting, etc.  And it was very fast.

Unfortunately, although the program still ran under NT (which includes 
Win 2000, XP, ...), the security system insists on zeroing all the 
intervening sectors, which takes much time, obviously.



Why would it even _allocate_ intervening sectors?  That's pretty
brain-dead.

  

Is there a way to create a file this big without actually writing
anything in python (just give me the garbage that is already on the
disk)?
  


No.  That would be a monstrous security hole.

  

--
http://mail.python.org/mailman/listinfo/python-list
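
For completeness, a sketch of the seek-and-extend trick being discussed
(assumptions: on a Unix filesystem the unwritten range stays sparse and the call
is nearly instant; on NTFS, as noted above, the OS zero-fills the gap, so it is
safe but no longer fast):

def make_big_file(path, size):
    f = open(path, 'wb')
    f.seek(size - 1)        # jump to the last byte...
    f.write('\0')           # ...and write one byte to fix the file length
    f.close()

make_big_file('big.dat', 1024 ** 3)   # 1 GB logical size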


Re: please include python26_d.lib in the installer

2009-03-31 Thread Compie
On 27 mrt, 17:01, Carl Banks  wrote:
> OTOH, it's possible that SWIG and Python just happen to use the same
> macro to indicate debugging mode.  So I think you raise a valid point
> that this can be problematic.  Perhaps something like _Py_DEBUG should
> be used instead.

This would be a good solution IMHO. I'm not the only one facing this
problem. The internet is full of people looking for this file...
http://www.google.com/search?q=python25_d.lib+error+cannot

_DEBUG is automatically defined by Visual Studio when you build the
Debug version of a project.
http://msdn.microsoft.com/en-us/library/0b98s6w8.aspx

So I'm proposing: please use _PYTHON_DEBUG for this purpose. Would
this cause any problems?

Johan.
--
http://mail.python.org/mailman/listinfo/python-list


Re: custom handler does not write to log file

2009-03-31 Thread dj
On Mar 31, 1:13 pm, dj  wrote:
> It seems that you can create custom handlers and add them to the
> logging.handlers namespace(http://mail.python.org/pipermail/python-
> list/2008-May/493826.html.)
> But for reasons beyond my understanding my log file (test.log) is not
> written to.
>
>   my handler class
> ###
> import logging.handlers
>
> # create my handler class
> class MyHandler(logging.handlers.RotatingFileHandler):
>     def __init__(self, filename):
>         logging.handlers.RotatingFileHandler.__init__(self, filename,
>             maxBytes=10485760, backupCount=5)
>
> # Register handler in the "logging.handlers" namespace
> logging.handlers.MyHandler = MyHandler
>
>   test app.py
> ##
> import logging
> import logging.handlers
>
> from myhandler import MyHandler
>
> # log file path
> LOG_FILE_PATH='H:/python_experiments/logging/test.log'  # log file
> path
> #log file formatter
> myformatter = logging.Formatter('%(asctime)s %(levelname)s %(filename)
> s %(lineno)d %(message)s')
>
> # setup a log instance for myHandler
> logger2 = logging.getLogger('myLog2')
> logger2.setLevel(logging.CRITICAL)
> hdlr2 = logging.handlers.MyHandler(LOG_FILE_PATH)
> hdlr2.setFormatter(myformatter)
> hdlr2.setLevel(logging.CRITICAL)
> logger2.addHandler(hdlr2)
>
> # give it a try
> print 'using myHandler'
> logger2.debug('this is a test of myHandler')
> print 'after logger using myHandler'
>
> Thanks in advance for your help.

Kindly ignore this message; it turns out the problem was a
misunderstanding of the severity of the logging levels.
My bad. Thanks anyway :-).
--
http://mail.python.org/mailman/listinfo/python-list
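
For anyone who hits the same thing: the handler and logger above were set to
CRITICAL, so a debug() call is simply filtered out. A tiny illustration:

import logging

logging.basicConfig(level=logging.CRITICAL)
log = logging.getLogger('myLog2')
log.debug('dropped: DEBUG (10) is below the CRITICAL (50) threshold')
log.critical('emitted: CRITICAL passes the threshold')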

