I'm writing a database connectivity module to be used by other modules and
leveraging the jaydebeapi module.

From what I can tell jaydebeapi contains no built-in timeout capability, so
then I turned to https://pypi.org/project/timeout-decorator/.

My goal is to have a default timeout of,
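A minimal sketch of that combination, assuming the timeout-decorator package's
signal-based decorator and a reachable JDBC driver; the connection parameters,
function name and the 10-second default are placeholders, not anything from the
original post:

import jaydebeapi
import timeout_decorator

DEFAULT_TIMEOUT = 10  # seconds; hypothetical default

@timeout_decorator.timeout(DEFAULT_TIMEOUT, use_signals=True)
def connect_with_timeout(jclassname, url, driver_args, jars=None):
    # Raises timeout_decorator.TimeoutError if the JDBC driver hangs.
    # use_signals=True relies on SIGALRM, so it only works in the main
    # thread on Unix; pass use_signals=False to run in a subprocess instead.
    return jaydebeapi.connect(jclassname, url, driver_args, jars)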
>
>
> Hey All,
>
> I have a set of bash and python scripts that all interact with a remote
> git repository.
>
>
> https://gitpython.readthedocs.io/en/stable/reference.html?highlight=cache#git.index.fun.read_cache
> https://pypi.org/project/git-credential-helpers/
>
> But neither means appears to h
> Hey All,
>
> I have a set of bash and python scripts that all interact with a remote
> git repository.
>
This does not exactly answer your question, but whenever I have wanted to
interact with (popular) software via Python I have checked to see if
someone has already written that code for me.
h
I can issue the following commands in git to effectively enable the caching of
credentials in memory for a predefined amount of time off the Linux shell:

git config credential.helper 'cache --timeout=300'

However, what is the best way to do so via Python?
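One option is simply to run the same git command from Python. A sketch,
assuming git is on PATH and the repository path is known; the function name and
the 300-second value just mirror the shell command above:

import subprocess

def enable_credential_cache(repo_path, seconds=300):
    # Equivalent to: git config credential.helper 'cache --timeout=300'
    subprocess.run(
        ["git", "config", "credential.helper", "cache --timeout=%d" % seconds],
        cwd=repo_path,
        check=True,
    )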
On 2020-12-05 23:42:11 +0100, sjeik_ap...@hotmail.com wrote:
> Timeout: no idea. But check out re.compile and re.iterfind as they might
> speed things up.

I doubt that compiling regular expressions helps the OP much. Compiled
regular expressions are cached, but more importantly, if a
Hi,
Timeout: no idea. But check out re.compile and re.iterfind as they might
speed things up. I often compile a regex once upon import, then use it in
functions
Hi Team,

I have noticed that when our regex does not give a result, it takes more time
to return a response (an empty response).

My question is: can we set a timeout parameter (in seconds/milliseconds) with
re.find or anywhere in the code, to avoid CPU consumption if the regex takes more
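The stdlib re module has no timeout parameter (the third-party regex module
does accept one, if installing it is an option). A common workaround, sketched
below with placeholder pattern/text names rather than anything from this thread,
is to run the match in a worker process and stop waiting after a deadline:

import re
from multiprocessing import Pool, TimeoutError as MPTimeoutError

def _search(args):
    pattern, text = args
    m = re.search(pattern, text)
    return m.group(0) if m else None

def search_with_timeout(pattern, text, seconds):
    # The pool is terminated on exit, so a runaway regex does not keep
    # burning CPU after we give up on it.
    with Pool(processes=1) as pool:
        result = pool.apply_async(_search, ((pattern, text),))
        try:
            return result.get(timeout=seconds)
        except MPTimeoutError:
            return None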
On Mon, Apr 16, 2018 at 1:33 PM, Jugurtha Hadjar wrote:
> On 04/15/2018 12:01 PM, Ho Yeung Lee wrote:
>>
>> while 1:
>>     runner = os.popen("tracert -d www.hello.com")
>>     o = runner.read()
>>     print(o)
>>     runner.close()
>>
>> after running for over 1 hour, it stops and does not return from one of the
>> tracert calls. How can I set a timeout and know that this is a timeout?
>
> import signal
> from contextlib import contextmanager
>
> @contextmanager
> def timeout(duration, handler):
>     # arm SIGALRM for `duration` seconds and clear it on exit
>     signal.signal(signal.SIGALRM, handler)
>     signal.alarm(duration)
>     try:
>         yield
>     finally:
>         signal.alarm(0)
There are a number of answers on Stackoverflow, for example
https://stackoverflo
Try using the 'subprocess' module instead.
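A sketch of that suggestion; the command is the one from the original post and
the 10-minute limit is arbitrary:

import subprocess

try:
    result = subprocess.run(
        ["tracert", "-d", "www.hello.com"],
        capture_output=True,
        text=True,
        timeout=600,  # give up after 10 minutes
    )
    print(result.stdout)
except subprocess.TimeoutExpired:
    print("tracert did not finish within the timeout")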
"Frank Millman" wrote in message news:ov5v3s$bv7$1...@blaine.gmane.org...
Below is a simple asyncio loop that runs two background tasks.
[...]
There are two ways to wait until a future is complete -

    asyncio.wait([a sequence of futures])
    asyncio.wait_for(a single future)

Both take an optional timeout.

If I use the first method without a timeout, the cancellation completes
and the loop stops.

If I use the second method without a timeout, the future is cancelled, but
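A small sketch (not Frank's original loop; the sleep-based task and the
one-second timeouts are placeholders) showing how the two calls treat a
timeout differently:

import asyncio

async def background():
    await asyncio.sleep(10)  # stands in for a long-running background task

async def main():
    task = asyncio.ensure_future(background())

    # asyncio.wait returns (done, pending) when the timeout expires;
    # it does not cancel the pending futures by itself.
    done, pending = await asyncio.wait([task], timeout=1)

    # asyncio.wait_for cancels the future and raises asyncio.TimeoutError
    # when the timeout expires.
    try:
        await asyncio.wait_for(task, timeout=1)
    except asyncio.TimeoutError:
        print("wait_for timed out and cancelled the task")

asyncio.get_event_loop().run_until_complete(main())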
On 2017-08-25, Chris Angelico wrote:
> On Sat, Aug 26, 2017 at 6:16 AM, Jon Ribbens
> wrote:
>> I said it in the majority of the posts I've made in this thread.
>> I said it in the post you were responding to just now. I'm using
>> threads. Now I've said it again.
>
> You said WHY you are using
On 2017-08-25, dieter wrote:
> This may no longer work. Long ago, I was often plagued
> by such EINTR exceptions, and I wished heavily that in those
> cases the IO operation would be automatically resumed. In recent times,
> I have no longer seen such exceptions - and I concluded that m
On 2017-08-25, Chris Angelico wrote:
> That looks like an exception to me. Not a "process is now terminated".
> That's what happened when I pressed Ctrl-C (the IP address was
> deliberately picked as one that doesn't currently exist on my network,
> so it took time).
Ok yes, so ctrl-C is sending
Chris Angelico writes:
> ...
> That looks like an exception to me. Not a "process is now terminated".
> That's what happened when I pressed Ctrl-C (the IP address was
> deliberately picked as one that doesn't currently exist on my network,
> so it took time).
What Jon argues about: signals are de
match any known thread id). Can be called with exc=NULL to clear an
existing async exception. This raises no exceptions.

    int PyThreadState_SetAsyncExc(long id, PyObject *exc);

Together with a "read timeout", you can implement a total
timeout for your requests: you perform your re
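For reference, that C-level call is also reachable from pure Python via ctypes.
The sketch below is the well-known (and fragile) recipe rather than anything
suggested in this thread, and the exception is only delivered when the target
thread next executes Python bytecode, so it will not interrupt a blocking
C-level recv():

import ctypes

def raise_in_thread(thread, exc_type):
    # Ask the interpreter to raise exc_type inside a threading.Thread (best effort).
    tid = ctypes.c_long(thread.ident)
    n = ctypes.pythonapi.PyThreadState_SetAsyncExc(tid, ctypes.py_object(exc_type))
    if n > 1:
        # More than one thread state was modified: undo and bail out.
        ctypes.pythonapi.PyThreadState_SetAsyncExc(tid, None)
        raise SystemError("PyThreadState_SetAsyncExc affected multiple threads")
    return n == 1  # False if the thread id was not found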
  File "...ages/requests/sessions.py", line 609, in send
    r = adapter.send(request, **kwargs)
  File "/usr/local/lib/python3.7/site-packages/requests/adapters.py", line 423, in send
    timeout=timeout
  File "/usr/local/lib/python3.7/site-packages/requests/packages/urllib3/connectionpool.py"
On Thursday, August 24, 2017 at 5:02:12 PM UTC+1, Chris Angelico wrote:
>
> (Caveat: I have no idea how this works on Windows. I do expect,
> though, that it will abort the connection without terminating the
> process, just like it does on Unix.)
>
> ChrisA
There was a big thread "cross platform
Chris Angelico :
> On Fri, Aug 25, 2017 at 3:40 AM, Marko Rauhamaa wrote:
>> Signals are an arcane Unix communication method. I strongly recommend
>> against using signals for anything but terminating a process, and even
>> then you have to be extra careful.
>>
>> I have seen code that uses signa
Chris Angelico :
> On Fri, Aug 25, 2017 at 12:17 AM, Jon Ribbens
> wrote:
>> By that, do you mean "kill the process"? That's obviously not a
>> sensible answer in general, especially given we were including
>> processes which have no terminal or user sitting there watching them.
>
> Only in the s
On 2017-08-24, Chris Angelico wrote:
> On Thu, Aug 24, 2017 at 9:43 PM, Jon Ribbens
> wrote:
>> Where did you explain how it can be done without help? As far as I'm
>> aware, you can't close the socket without help since you can't get
>> access to it, and as you mentioned even if you were to do
> ...'s failed without actually cancelling it; and (b) you
> could just close the underlying socket without help, but it might
> mess up future queries that end up getting put onto the same socket.
> It's not that you CAN'T do this without help (which is the case for a
> "time between bytes" timeout), but that having help would allow
> requests *itself* to benefit.

I don't understand - in the above paragraph you first explain how
it cannot be done w
Chris Angelico :
> What I said was that you don't need threading or alarms because most
> of the time you can let the user use SIGINT. And without the (utterly
> totally useless) threading that you have here, it works flawlessly:
> Ctrl-C instantly breaks the recv call.
Oh, if you give up threadi
On Thu, Aug 24, 2017 at 2:59 AM, Marko Rauhamaa wrote:
> Chris Angelico :
>
>> But also, this honestly isn't as big an issue as you might think. If
>> the user thinks a program has been running for too long, s/he can hit
>> Ctrl-C. Voila! Signal is sent, which aborts a socket read,
>
> Well, no, i
  File "/usr/lib64/python3.5/threading.py", line 1070, in _wait_for_tstate_lock
    elif lock.acquire(block, timeout):
KeyboardInterrupt

The program hangs, though, and "nc" doesn't terminate
On Thu, Aug 24, 2017 at 12:52 AM, Jon Ribbens wrote:
> OK cool, so circling back to where you were - which is the same place
> that the 'requests' developers are - which is the claim that requests
> does not need to provide an "overall timeout" feature because
it WILL abort the connection.
OK cool, so circling back to where you were - which is the same place
that the 'requests' developers are - which is the claim that requests
does not need to provide an "overall timeout" feature because you
can cancel the request yourself is untrue since
Jon Ribbens :
> Yes, what I was interested to learn was how the monitoring thread can
> "cut off" the requesting thread.

In general, that cannot be done. Often, you resort to a dirty trick
whereby the monitoring thread closes the I/O object the requesting thread is
waiting on, triggering an immediate
On 2017-08-23, Chris Angelico wrote:
> On Wed, Aug 23, 2017 at 9:10 PM, Jon Ribbens wrote:
>> I am interested to learn what you mean by "with a thread". How would
>> one execute a requests, er, request in a thread with a proper timeout?
>
> Assuming that by "proper timeout" you mean "limit the entire
> download's wall time": Use one thread to do the request, and another
> thread to monitor it. Generally, the monitoring thread is you
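A sketch of that thread-based approach using the stdlib; the function name and
timeout values are my own, and note the caveat raised elsewhere in the thread:
the worker thread keeps running even after the caller stops waiting for it.

import concurrent.futures
import requests

_pool = concurrent.futures.ThreadPoolExecutor(max_workers=4)

def get_with_wall_time_limit(url, overall_timeout, per_socket_timeout=10):
    # The per-socket timeout still guards connect/read stalls; the outer
    # result(timeout=...) caps the total wall time as seen by the caller.
    future = _pool.submit(requests.get, url, timeout=per_socket_timeout)
    try:
        return future.result(timeout=overall_timeout)
    except concurrent.futures.TimeoutError:
        # The worker thread is still running; we merely stop waiting for it.
        raise TimeoutError("%s took longer than %s seconds" % (url, overall_timeout))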
On 2017-08-22, Chris Angelico wrote:
> On Wed, Aug 23, 2017 at 5:06 AM, Jon Ribbens
> wrote:
>> I have no idea what you mean here. The only sane way to implement the
>> request timeout is to provide both types of timeout.
>
> You could provide both, but since on
Skip Montanaro writes:
> ...
> Given the semantics of timeouts which percolate up from the socket
> level, I agree with Chris. It has a particular meaning, that
> implemented by the underlying socket layer. Unfortunately, the word
> "timeout" can take on related (but different) meanings, depending on
> context. We can discuss how to implement the timeout which means, "the
> maximum amount of time it should tak
On Wed, Aug 23, 2017 at 5:06 AM, Jon Ribbens wrote:
>> You can always add in the overall timeout separately. If the low-level
>> timeout were implemented that way, there would be no way to externally
>> add the other form of timeout. Therefore the only sane way to
>> imple
On 2017-08-22, Chris Angelico wrote:
> The low-level timeout will distinguish between those. If you want a
> high-level timeout across the entire job, you can do that too, but
> then you have to figure out exactly how long is "too long". Let's say
> you set a thirty
On 2017-08-22, Chris Angelico wrote:
> """
> Once your client has connected to the server and sent the HTTP
> request, the read timeout is the number of seconds the client will
> wait for the server to send a response. (Specifically, it's the number
> of seconds that the client will wait between bytes sent from the
> server. In 99.9% of cases, this is the time before the server sends
> the first byte).
> """
On 2017-08-22, Chris Angelico wrote:
> On Wed, Aug 23, 2017 at 2:58 AM, Jon Ribbens
> wrote:
>> Yes. There is no timeout feature that can be used to limit the total
>> time a 'requests' request takes. Some people might think that this is
>> a serious flaw in th
I'm using the requests module with timeouts to fetch URLs, for example:

response = requests.get("http://www.google.com/", timeout=10)

I understand the timeout value in this case applies both to creating the
connection and fetching the remote content. Can the server dribble
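For reference, requests also documents a two-value form of the parameter, so
the connect and read phases can be limited separately; the numbers below are
arbitrary examples:

import requests

# A single number applies to the connect phase and to each wait-between-bytes
# during the read phase - it is not a cap on the whole download.
r = requests.get("http://www.google.com/", timeout=10)

# A (connect, read) tuple sets the two phases independently.
r = requests.get("http://www.google.com/", timeout=(3.05, 27))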
Good day,

I'm trying to run multiple SSH clients using the AsyncSSH library. This is an
example from the manual, which works fine: http://pastebin.com/zh4zymeQ

The problem is that it is not possible to directly set a connect timeout in
the run_client() function. However, the manual says:

"asyncio ca
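One way to bound the connection phase, sketched here on the assumption that
wrapping the AsyncSSH connect coroutine in asyncio.wait_for is acceptable; the
host, port and 5-second limit are placeholders:

import asyncio
import asyncssh

async def run_client(host, port=22, connect_timeout=5):
    try:
        conn = await asyncio.wait_for(
            asyncssh.connect(host, port=port), timeout=connect_timeout
        )
    except asyncio.TimeoutError:
        print("%s: connection timed out after %s seconds" % (host, connect_timeout))
        return
    async with conn:
        result = await conn.run("uname -a", check=True)
        print(result.stdout, end="")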
up to an hour (!), at either the urllib2.urlopen or response.read stages.
Apparently the urllib2 and socket timeouts are not effective here - how do I
fix this?

import urllib2
import socket
from sys import exc_info as sysExc_info

timeout = 2
socket.setdefaulttimeout(timeout)

try:
    req = urllib2.Request(url, None, headers)
    response = urllib2.urlopen(req, timeout=timeout)
    html = response.read()
except
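One way to cap the whole fetch even when each individual read succeeds just
before its socket timeout - a sketch, not from the thread; the chunk size and
deadlines are arbitrary:

import time
import urllib2

def fetch_with_deadline(url, per_read_timeout=2, overall_deadline=30):
    # Each read() is bounded by the socket timeout; the wall-clock deadline
    # bounds the total time even if the server dribbles data slowly.
    response = urllib2.urlopen(url, timeout=per_read_timeout)
    deadline = time.time() + overall_deadline
    chunks = []
    while True:
        if time.time() > deadline:
            raise IOError("overall deadline exceeded for %s" % url)
        chunk = response.read(8192)
        if not chunk:
            break
        chunks.append(chunk)
    return "".join(chunks)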
On Friday, October 16, 2015 at 2:44:53 AM UTC-7, lucasfneves14 wrote:
> How did you do it?
I took the advice of just being myself.
lucasfneves14 writes:
> How did you do it?
That's an impressive reply gap.
If anyone's wondering, this is apparently in reply to this from March:
http://thread.gmane.org/gmane.comp.python.general/774441
On 2015-10-16, lucasfneves14 wrote:
> How did you do it?
I just climbed in and pushed the button. Same as always.
On 16/10/2015 10:44, lucasfneves14 wrote:
How did you do it?
I conned my way in, nobody suspected it.
How did you do it?
Here's a traceback I generated by catching a SIGINT and printing an exception:

Traceback (most recent call last):
  File "/usr/lib/python2.7/SocketServer.py", line 295, in _handle_request_noblock
    self.process_request(request, client_address)
  File "/usr/lib/python2.7/SocketServer.py", line
Hello,

I have written a script using bottle.py. The app works fine most of the time.
Sometimes, though, the server takes a long time to respond and the web browser
eventually drops the connection to the server after a certain time (timeout),
showing this page:

"""
Connection reset

Th
On 29/03/2015 12:20, bobbdeep wrote:
How do I add a port to the list of open ports on my server ?
Ask the system administrator.
On 29/03/2015 09:57, bobbydeep wrote:
From the error (10060) it looks like Windows but it would be nice if
you could say which Python and OS you were using.
I haven't looked at your code but just taking at face value that it does
work internally.
server_address = ('my-server-ipadress', 199
Changed server code to the following,

from socket import *

HOST = ''
PORT = 8080
serversocket = socket(AF_INET, SOCK_STREAM)
serversocket.bind((HOST, PORT))
serversocket.listen(5)
while True:
    (clientsocket, address) = serversocket.accept()
    print("Got client request from", address)
    clien
I am trying to communicate between a server and client using TCP sockets.

Server code:

import socket
import sys

# Create a TCP/IP socket
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# Bind the socket to the port
server_address = ('my-server-ipadress', 1999)
print
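On the client side, an explicit socket timeout makes the failure visible
quickly instead of waiting for the OS-level error 10060. A sketch; the
host/port are the placeholders from the thread and the 5-second value is
arbitrary:

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.settimeout(5)  # fail fast instead of hanging on an unreachable server
try:
    sock.connect(('my-server-ipadress', 1999))
    sock.sendall(b'hello')
    data = sock.recv(1024)
    print('received', data)
except socket.timeout:
    print('connection attempt timed out')
finally:
    sock.close()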
On Sat, Aug 9, 2014 at 4:35 AM, Neil D. Cerutti wrote:
> Doesn't any natural looking use of blocking=False suffer from the same race
> condition? What's the correct way to use it?
Actually, I don't know. I try to avoid any form of thread locking
where possible, and I don't remember the last time
>> happen between those two. Are you able to set the callback to be a
>> "release if acquired" atomic operation?
>>
>
> Doesn't any natural looking use of blocking=False suffer from the same
> race condition? What's the correct way to use it?
>
>
On 8/8/2014 2:35 PM, Neil D. Cerutti wrote:

Here's another attempt at context managing:

import contextlib

@contextlib.contextmanager
def release_if_acquired(lock, blocking=True, timeout=-1):
    acquired = lock.acquire(blocking, timeout)
    if acquired:
        yield acquired
        lock.release()
    else:
        yield acquired
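A possible usage sketch for that helper; the lock and the 0.5-second timeout
are just examples:

import threading

lock = threading.Lock()

with release_if_acquired(lock, timeout=0.5) as acquired:
    if acquired:
        ...  # do the protected work; the lock is released when the block finishes
    else:
        ...  # timed out waiting for the lock; fall back or retry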
On Sat, Aug 9, 2014 at 2:05 AM, Neil D. Cerutti wrote:
> Perhaps defer release, a la a common Go pattern:
>
> with contextlib.ExitStack() as stack:
>     acquired = lock.acquire(blocking=False)
>     if acquired:
>         stack.callback(lock.release)
>     do_stuff

There's a race condition i
On 08/08/2014 04:51 AM, cool-RR wrote:
If I want to acquire a `threading.Lock` using the context manager protocol,
is it possible to specify the `blocking` and `timeout` arguments that
`acquire` would usually take?
Not that I know of, but why would you want to? There's no built-i
Hi all,
If I want to acquire a `threading.Lock` using the context manager protocol, is
it possible to specify the `blocking` and `timeout` arguments that `acquire`
would usually take?
Thanks,
Ram.
Running Python 2.6 and 2.7 on Windows 7 and Server 2012
Event::wait causes a delay when used with a timeout that is not triggered
because event is set in time. I don't understand why.
Can someone explain?
The following program shows this;
'''Shows that using a timeout in E
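A measurement sketch (not the poster's original program; the 0.1-second setter
delay and the 10-second timeout are arbitrary). On Python 2.x a timed
Event.wait() is implemented as a polling loop inside Condition.wait, which is
the likely source of the extra delay:

import threading
import time

def measure(use_timeout):
    event = threading.Event()
    set_time = []

    def setter():
        time.sleep(0.1)
        set_time.append(time.time())
        event.set()

    threading.Thread(target=setter).start()
    if use_timeout:
        event.wait(10)   # timeout is much larger than the 0.1 s delay
    else:
        event.wait()
    return time.time() - set_time[0]

print("wait() wake-up latency:        %.4f s" % measure(False))
print("wait(timeout) wake-up latency: %.4f s" % measure(True))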
error comes from
> local system, you can try to decrease it via resolv.conf:
>
>     nameserver 1.2.3.4
>     options timeout:2
>
> It's just an idea... but maybe...
>
> Note that the 2 seconds for DNS timeout in resolv.conf may sound crazy,
> because the _real_slow_ DNS answe
> I'm working on a software gateway running on a Raspberry Pi, that forwards
> data received through a radio link to the network.
>
> https://github.com/Jerome-github/oem_gateway
>
> This can be sending HTTP requests every 3 seconds, so a 10 second timeout
> is an issue.

I was not at home when I wrote my last message. Now back home, I could try on
my own D