My tests were run in python 2.6.5.
I'm having trouble understanding when variables are added to
namespaces. I thought I understood it, but my nested function
examples below have me very confused.
In each test function below I have an x variable (so "x" is in the
namespace of each test function). I also have a nested function in
e
On Nov 29, 11:09 am, Christian Heimes wrote:
> The feature is available in Python 3.x:
>
> >>> a, b, *c = 1, 2, 3, 4, 5
> >>> a, b, c
> (1, 2, [3, 4, 5])
> >>> a, *b, c = 1, 2, 3, 4, 5
> >>> a, b, c
>
> (1, [2, 3, 4], 5)
Interesting... especially the recognition of how both ends work with
the "a,
Is there a reason that this is fine:
>>> def f(a,b,c):
... return a+b+c
...
>>> f(1, *(2,3))
6
but the code below is not?
>>> x = (3, 4)
>>> (1, 2, *x) == (1, 2, 3, 4)
Traceback (most recent call last):
File "", line 1, in
invalid syntax: , line 1, pos 8
Why does it only work when unpacking
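(For context, a minimal sketch of the workaround, not from the original thread: on the 2.x interpreters discussed here, * unpacking is only accepted in call argument lists, so building the tuple takes plain concatenation; Python 3.5's PEP 448 later lifted this restriction.)

x = (3, 4)
assert (1, 2) + x == (1, 2, 3, 4)   # works on any version
# (1, 2, *x) == (1, 2, 3, 4)        # only valid syntax on Python 3.5+ (PEP 448)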
I'm running python 2.5.1 and it seems that SimpleXmlRpcServer is not
setup to support the base datetime module in the same way xmlrpclib
has been with "use_datetime". I see that someone (Virgil Dupras) has
recently submitted a fix to address this, but I don't want to patch my
python distro. I wan
> OK, if you crawl the stack I will seek you out and hit you with a big
> stick. Does that affect your decision-making?
How big a stick? :)
> Seriously, crawling the stack introduces the potential for disaster in
> your program, since there is no guarantee that the calling code will
> provide the
convincing argument yet on why crawling the stack is considered
bad? I kind of hoped to come out of this with a convincing argument
that would stick with me...
On Feb 25, 12:30 pm, Ian Clark <[EMAIL PROTECTED]> wrote:
> On 2008-02-25, Russell Warren <[EMAIL PROTECTED]> wrote:
> How about a dictionary indexed by the thread name.
Ok... a functional implementation doing precisely that is at the
bottom of this (using thread.get_ident), but making it possible to
hand around this info cleanly seems a bit convoluted. Have I made it
more complicated than I need to? There
> That is just madness.
What specifically makes it madness? Is it because sys._getframe is "for
internal and specialized purposes only"? :)
> The incoming ip address is available to the request handler, see the
> SocketServer docs
I know... that is exactly where I get the address, just in a mad way.
Argh... the code wrapped... I thought I made it narrow enough. Here
is the same code (sorry), but now actually pasteable.
---
import SimpleXMLRPCServer, xmlrpclib, threading, sys

def GetCallerNameAndArgs(StackDepth = 1):
    """This function returns a tuple (a,b) where:
    a = The name of the ca
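The posted code is cut off above; a rough sketch of a caller-inspection helper along these lines (not necessarily the original implementation) could be:

import sys

def GetCallerNameAndArgs(StackDepth=1):
    # Sketch only: return (name, locals) for the frame StackDepth levels
    # above the caller, using the CPython-specific sys._getframe().
    frame = sys._getframe(StackDepth + 1)
    return (frame.f_code.co_name, frame.f_locals)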
I've got a case where I would like to know exactly what IP address a
client made an RPC request from. This info needs to be known inside
the RPC function. I also want to make sure that the IP address
obtained is definitely the correct one for the client being served by
the immediate function call
> While we're at it, do any of these debuggers implement a good way to
> debug multi-threaded Python programs?
Wing now has multi-threaded debugging.
I'm a big Wing (pro) fan. To be fair, when I undertook my huge IDE
evaluation it was approximately 2 years ago... at the time, as far
as what
Both are very good responses... thanks! I had forgotten the ease of
"monkey-patching" in python and the Stream class is certainly cleaner
than the way I had been doing it.
On Oct 3, 3:15 am, Peter Otten <[EMAIL PROTECTED]> wrote:
> Russell Warren wrote:
> > All I'
I was just setting up some logging in a make script and decided to
give the built-in logging module a go, but I just found out that the
base StreamHandler always puts a newline at the end of each log.
There is a comment in the code that says "The record is then written
to the stream with a trailing newline
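A minimal sketch of the kind of workaround being discussed, assuming a subclass is acceptable (this is not the stdlib's handler, just an illustration):

import logging

class NoNewlineStreamHandler(logging.StreamHandler):
    # Sketch: same spirit as StreamHandler.emit, but without appending "\n".
    def emit(self, record):
        try:
            msg = self.format(record)
            self.stream.write(msg)   # no trailing newline here
            self.flush()
        except Exception:
            self.handleError(record)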
Thanks, guys... this has all been very useful information.
The machine this is happening on is already running NTFS.
The good news is that we just discovered/remembered that there is a
write-caching option (in device manager -> HDD -> properties ->
Policies tab) available in XP. The note right b
I've got a case where I'm seeing text files that are either all null
characters, or are trailed with nulls due to interrupted file access
resulting from an electrical power interruption on the WinXP pc.
In tracking it down, it seems that what is being interrupted is either
os.remove() or os.rename()
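One common mitigation (not something from this thread, just a sketch) is to force the data out of the OS cache before closing, so a power cut afterwards cannot leave a half-written file:

import os

def write_durably(path, data):
    f = open(path, "wb")
    try:
        f.write(data)
        f.flush()              # flush Python's buffer
        os.fsync(f.fileno())   # ask the OS to commit the data to disk
    finally:
        f.close()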
> Does it actually tell you the target is the problem? I see an
> "OSError: [Errno 17] File exists" for that case, not a permission error.
> A permission error could occur, for example, if GDS has the source open
> or locked when you call os.rename.
No, it doesn't tell me the target is the issue
> Are you running a background file accessing tool like Google Desktop
> Search or an anti-virus application? If so, try turning them off as a test.
I'm actually running both... but I would think that once os.remove
returns that the file is actually gone from the hdd. Why would either
applica
Oops - minor correction... xmlrpclib is fine (I think/hope). It is
SimpleXMLRPCServer that currently has issues. It uses
thread-unfriendly sys.exc_value and sys.exc_type... this is being
corrected.
> Another issue is the libraries you use. A lot of them aren't
> thread safe. So you need to watch out.
This is something I have a streak of paranoia about (after discovering
that the current xmlrpclib has some thread safety issues). Is there a
list maintained anywhere of the modules that are are
I've been having a hard time tracking down a very intermittent problem
where I get a "permission denied" error when trying to rename a file to
something that has just been deleted (on win32).
The code snippet that gets repeatedly called is here:
...
if os.path.exists(oldPath):
    os.remove(o
After some digging around it appears there is not a tonne of
documentation on buffer objects, although they are clearly core and
ancient... been sifting through some hits circa 1999, long before my
python introduction.
What I can find says that buffer is deprecated (Python in a Nutshell),
or non-e
> Many functions that operate on strings also accept buffer objects as
> parameters,
> this also seems to be the case for the base64.encodestring function. ctypes
> objects
> support the buffer interface.
>
> So, base64.b64encode(buffer(ctypes_instance)) should work efficiently.
Thanks! I have ne
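For illustration, a sketch of the buffer-based approach on Python 2.x; the Point structure here is hypothetical:

import base64, ctypes

class Point(ctypes.Structure):          # hypothetical example structure
    _fields_ = [("x", ctypes.c_double), ("y", ctypes.c_double)]

p = Point(1.0, 2.0)
encoded = base64.b64encode(buffer(p))   # no intermediate string copy

# round-trip back into a fresh structure
raw = base64.b64decode(encoded)
q = Point()
ctypes.memmove(ctypes.addressof(q), raw, ctypes.sizeof(q))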
I've got a case where I want to convert binary blocks of data (various
ctypes objects) to base64 strings.
The conversion calls in the base64 module expect strings as input, so
right now I'm converting the binary blocks to strings first, then
converting the resulting string to base64. This seems h
Thanks guys. This has helped decipher a bit of the Queue mechanics for
me.
Regarding my initial clear method hopes... to be safe, I've
re-organized some things to make this a little easier for me. I will
still need to clear out junk from the Queue, but I've switched it so
that at least I can stop t
Check out the Wing IDE - www.wingware.com .
As part of its general greatness it has a "debug probe" which lets you
execute code snippets on active data in mid-debug execution.
It doesn't have precisely what you are after... you can't (yet)
highlight code segments and say "run this, please", but
I'm guessing no, since it skips down through any Lock semantics, but
I'm wondering what the best way to clear a Queue is then.
Essentially I want to do a "get all" and ignore what pops out, but I
don't want to loop through a .get until empty because that could
potentially end up racing another thre
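For reference, a sketch of one way to empty a Queue.Queue atomically by holding its internal lock; it leans on CPython's Queue internals (q.mutex, q.queue, q.not_full), so treat it as a hack rather than a supported API:

import Queue

def clear_queue(q):
    q.mutex.acquire()
    try:
        q.queue.clear()        # q.queue is the underlying deque
        q.not_full.notify()    # wake any producer blocked on a full queue
    finally:
        q.mutex.release()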
I've got a case where I need to tweak the implementation of a default
python library due to what I consider to be an issue in the library.
What is the best way to do this and make an attempt to remain
compatible with future releases?
My specific problem is with the clock used in the threading.Event
> So does the speed of the remaining 0.001 cases really matter? Note
> that even just indexing into a deque takes O(index) time.
It doesn't matter as much, of course, but I was looking to make every
step as efficient as possible (while staying in python).
As to indexing into a deque being O(index)
Thanks for the responses.
> It seems to work with my Python2.4 here. If you're
> interested in efficiency, I'll leave their comparison as an
> exercise to the reader... :)
Ok, exercise complete! :) For the record, they are pretty much the
same speed...
>>> s = """
... from collections import d
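The pasted timing session is cut off; a sketch of the kind of comparison it describes (the container size and iteration counts here are arbitrary) might be:

import timeit

setup = """
from collections import deque
d = deque(range(1000))
lst = list(range(1000))
"""

# pop from the middle, then put the item back so the size stays constant
deque_stmt = "d.rotate(-500); x = d.popleft(); d.rotate(500); d.append(x)"
list_stmt = "lst.append(lst.pop(500))"

print timeit.Timer(deque_stmt, setup).timeit(10000)
print timeit.Timer(list_stmt, setup).timeit(10000)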
Does anyone have an easier/faster/better way of popping from the middle
of a deque than this?
class mydeque(deque):
    def popmiddle(self, pos):
        self.rotate(-pos)
        ret = self.popleft()
        self.rotate(pos)
        return ret
I do recognize that this is not the intent of a deque, given the
clear
I've been driven crazy by this type of thing in the past. In my case
it was with the same application (not two like you), but on different
machines, with all supposedly having the same OS load. In some cases I
would get short path names and in others I would get long path names.
I could never fig
I just did a comparison of the copying speed of shutil.copy against the
speed of a direct windows copy using os.system. I copied a file that
was 1083 KB.
I'm very interested to see that the shutil.copy copyfileobj
implementation of hacking through the file and writing a new one is
significantly f
Yes, I definitely should have done that for that case. I'm not
entirely sure why I didn't. If I had, though, I may not have been
prompted to ask the question and get all the other great little
tidbits!
Thanks guys - all great responses that answered my question in a few
different ways with the addition of some other useful tidbits!
This is a nice summary:
> In general the idea is to move the test from 'every time I need to do
> something' to 'once when some name is defined'.
Gotta love the resp
> the collections module was added in 2.4
Ah... sorry about that. I should have checked my example more closely.
What I'm actually doing is rebinding some Queue.Queue behaviour in a
"safe" location like this:
def _get(self):
    ret = self.queue.popleft()
    DoSomethingSimple()
    return ret
And se
After some digging it seems that python does not have any equivalent to
C's #if directives, and I don't get it...
For example, I've got a bit of python 2.3 code that uses
collections.deque.pop(0) in order to pop the leftmost item. In python
2.4 this is no longer valid - there is no argument on po
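The usual Python stand-in for a C #if is to pick the implementation once, at import time, either by a feature test or by checking sys.version_info; a sketch:

try:
    from collections import deque          # available from 2.4 on
    def pop_left(container):
        return container.popleft()
except ImportError:
    def pop_left(container):               # 2.3 fallback using a plain list
        return container.pop(0)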
The application we're working on at my company currently has about
eleventy billion independent python applications/process running and
talking to each other on a win32 platform. When problems crop up and
we have to drill down to figure out who is to blame and how, we
currently are using the (surp
> import inspect
> myCallables = [name for name, value in inspect.getmembers(self) if not
> name.startswith('_') and callable(value)]
Thanks. I forgot about the inspect module. Interestingly, you've also
answered my question more than I suspect you know! Check out the code
for inspect.getmembers
Is there any better way to get a list of the public callables of self
other than this?
myCallables = []
classDir = dir(self)
for s in classDir:
    attr = self.__getattribute__(s)
    if callable(attr) and (not s.startswith("_")):
        myCallables.append(s) #collect the names (n
Thanks for the additional examples, David (didn't see this before my
last post). All of it makes sense now, including those examples.
Russ
D'oh... I just realized why this is happening. It is clear in the
longhand as you say, but I don't think in the way you described it (or
I'm so far gone right now I have lost it).
self.I += 1
is the same as
self.I = self.I + 1
and when python tries to figure out what the 'self.I' is on the right
> I can see how this can be confusing, but I think the confusion here is
> yours, not Pythons ;)
This is very possible, but I don't think in the way you describe!
> self.I += 10 is an *assignment*. Like any assignment, it causes the
> attribute in question to be created
... no it isn't. The +=
I just ran across a case which seems like an odd exception to either
what I understand as the "normal" variable lookup scheme in an
instance/object hierarchy, or to the rules regarding variable usage
before creation. Check this out:
>>> class foo(object):
...     I = 1
...     def __init__(self):
...
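The quoted session is truncated; a minimal reconstruction of the behaviour being discussed (not the original transcript) is:

>>> class foo(object):
...     I = 1                  # class attribute
...     def __init__(self):
...         self.I += 1        # reads foo.I, then binds an instance attribute
...
>>> obj = foo()
>>> obj.I                      # new instance attribute shadows the class one
2
>>> foo.I                      # the class attribute itself is untouched
1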
Thanks for the detailed response... sorry for the lag in responding to
it.
After reading and further thought, the only reason I was using
setdefaulttimeout in the first place (rather than using a direct
settimeout on the socket) was because it seemed like the only way (and
easy) of getting access t
It appears that the timeout setting is contained within a process
(thanks for the confirmation), but I've realized that the function
doesn't play friendly with threads. If I have multiple threads using
sockets and one (or more) is using timeouts, one thread affects the
other and you get unpredictable
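A per-socket timeout avoids that cross-thread coupling; a sketch (host and port below are placeholders):

import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.settimeout(5.0)                     # applies to this socket only
try:
    s.connect(("example.com", 80))    # placeholder host/port
finally:
    s.close()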
Does anyone know the scope of the socket.setdefaulttimeout call? Is it
a cross-process/system setting or does it stay local in the application
in which it is called?
I've been testing this and it seems to stay in the application scope,
but the paranoid side of me thinks I may be missing something
Thanks! That gets me exactly what I wanted. I don't think I would
have been able to locate that code myself.
Based on this code and some quick math it confirms that not only will
the rollover be a looong way out, but that there will not be any loss
in precision until ~ 30 years down the road. C
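A back-of-envelope sketch of that math (the frequency below is an assumption; QueryPerformanceFrequency reports the real per-machine value):

frequency = 3579545                     # ticks/sec, a common ACPI timer rate
seconds_per_year = 60 * 60 * 24 * 365
rollover_years = 2 ** 64 / float(frequency) / seconds_per_year
print rollover_years                    # roughly 163,000 years at this rate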
Does anyone know how long it takes for time.clock() to roll over under
win32?
I'm aware that it uses QueryPerformanceCounter under win32... when I've
used this in the past (other languages) it is a great high-res 64-bit
performance counter that doesn't roll-over for many (many) years, but
I'm worr