My software project (www.pythonscad.org) is working fine in most cases;
however, I am right now isolating a scenario which makes it crash
reliably.
It does not happen with Python 3.11.6 (and possibly below); it happens with
3.12 and above.
It does not happen when not using threads.
However, due to the architecture of the program I am forced to evaluate some
parts in the main thread and some parts in a dedicated thread. The thread is
started with QThread (Qt 5.0), and I am quite sure that the program flows do
not overlap.
On 10Jan2023 18:32, MRAB wrote:
> I don't like how you're passing Thread...start as an argument. IMHO, it
> would be better/cleaner to pass a plain function, even if the only
> thing that function does is to start the thread.

Yes, and this is likely the thing causing the cited exception "threads
can only be started once". Your setup of the button with the action
defined as:

    Thread().start

creates a _single_ new Thread _when you define the button_, and makes
the button callback try to start it. On the second press it tries to
start that same Thread again.
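That failure mode can be sketched without any GUI at all; plain callables stand in for the button commands here, and work() is a placeholder for the real task:

```python
import threading

def work():
    pass  # placeholder for the real task

# Broken: ONE Thread object is created when the callback is defined;
# a second "button press" re-starts that same object and raises
# RuntimeError: threads can only be started once
broken_command = threading.Thread(target=work).start

# Fix: pass a plain function that builds a fresh Thread on every call
def start_worker():
    threading.Thread(target=work).start()
```

`start_worker()` can be invoked any number of times, while `broken_command()` works exactly once and raises RuntimeError on the second call.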
On 2023-01-10 14:57, Abhay Singh wrote:
Here is the entire code snippet of the same.
Please help

    def change_flag(top_frame, bottom_frame, button1, button2, button3,
                    button4, controller):
        global counter, canvas, my_image, chosen, flag, directory
        canvas.delete('all')
        button5['state'] = DISABLED
        counter += 1
        chosen, options_text = func
*Resending this message after subscribing to the python mailing list*
---------- Forwarded message ----------
From: Souvik Ghosh
Date: Sat, Dec 11, 2021 at 5:10 PM
Subject: I/O-bound threads get no chance to run alongside small CPU-bound
threads with the new GIL
To:
Hello PSF,
I'm Souvik Ghosh from
Dear All,
Threads shut down in Python 2.7 but not in Python 3.6.9 (or 3.x) while
making an SSH connection using the Paramiko/PySSH module or a socket.
Executing code qa-test-execute.py in Py 2.7 (Ubuntu 14.04.6 LTS):
Command 1:
sudo python ./qa-test-execute.py
Output 1:
2021-05-24 23:35:59,889
Hello,
Recently I have been trying to use a reentrant read-write lock in my
project but discovered several problems when writing test cases.
All the relevant material can be found on the following locations
https://stackoverflow.com/questions/58410610/calling-condition-wait-inside-thread-causes-r
> to see things broken down
> by CPU core (there are 32 of them, probably counting hyperthreads as
> different cores), but the CPU use is in the teens or so.

If you had many CPU-bound Python threads, then with 32 cores each core
might show as 3 % busy (the sum of all the Python threads can't use more
than one core's worth of CPU, because of the GIL).
On 17/07/2019 09.58, Barry Scott wrote:
>
>> On 16 Jul 2019, at 20:48, Dan Stromberg wrote:
>>
>>
>>
>> A question arises though: Does threading.active_count() only show Python
>> threads created with the threading module? What about threads created wit
> > It's a large CPython 2.x/3.x codebase with quite a few dependencies.
> >
> > I'm not sure what's causing the slowness yet. The CPU isn't getting hit
> > hard, and I/O on the system appears to be low - but throughput is poor.
> > I'm wondering if it could be CPU-bound Python threads causing the problem
> > (because of the threading+GIL thing).
>
> Does top show the process using 100% CPU?

The non-dependency Python portions don't appear to have much in the way of
threading going on based on a quick grep, but csysdig says a process
running the code has around 32 threads running - the actual t
On 09/04/2019 14:08, Shakti Kumar wrote:
Hello Team,
In due course I've felt the need for a way to kill a thread gracefully, by
releasing all occupied resources.
A bit of searching online shows me that killing a thread depends very much
on the underlying platform support, and is something not advised; however I
face this problem when one o
I had yet another program where I accidentally had more than one
thread enter pdb at once, leaving me with the "pdb's battling for
the keyboard" syndrome. So I extended pdb to recognize and handle
threads. I added:
"jobs"
List threads, with one current one being the o
Python at a time it seems to work fine. But there seems to be a
> problem with the module importing when several Python threads are active.
>
> I get a variety of errors indeterministically, usually indicating that some
> symbol hasn't been imported. This occurs both in my own c
On Thursday, 21 December 2017 00:33:54 UTC+1, Lawrence D’Oliveiro wrote:
> On Thursday, December 21, 2017 at 5:13:33 AM UTC+13, geoff...@gmail.com wrote:
> >
> > I have a multithreaded application using an embedded Python 3.6.4 ...
>
> Avoid multithreading if you can. Is your application CPU-bou
. But there seems to be a problem
with the module importing when several Python threads are active.
I get a variety of errors indeterministically, usually indicating that some
symbol hasn't been imported. This occurs both in my own code and in the
standard library. The most frequent is probably
Are they independent in a way that each thread will spawn a new process and
won't share resources?
--
https://mail.python.org/mailman/listinfo/python-list
Use multiprocessing since you want to do multiple things at once
https://pymotw.com/2/multiprocessing/basics.html If I understand you
correctly, once the string is found you would terminate the process, so you
would have to signal the calling portion of the code using a Manager dictionary
or l
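The Manager-dictionary signalling suggested above can be sketched as follows; the chunked data, the `search` worker, and `find_first` are all illustrative names, not from the original post:

```python
import multiprocessing as mp

def search(chunk, needle, found):
    # worker: scan one chunk; on a hit, flag the shared Manager dict
    # so the caller knows which line matched
    for line in chunk:
        if needle in line:
            found["line"] = line
            return

def find_first(chunks, needle):
    with mp.Manager() as mgr:
        found = mgr.dict()          # shared state visible to all workers
        procs = [mp.Process(target=search, args=(c, needle, found))
                 for c in chunks]
        for p in procs:
            p.start()
        for p in procs:
            p.join()
        return found.get("line")    # None if no chunk contained the needle

if __name__ == "__main__":
    result = find_first([["alpha", "beta"], ["gamma", "the needle here"]],
                        "needle")
    print(result)
```

Terminating workers early on the first hit (rather than joining them all) would need an extra flag check inside the loop or `p.terminate()` from the parent.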
I've been unsuccessfully looking for an alternative to signals that works
in threads.
After updating a script of mine from being single-threaded to being
multi-threaded, I realised that signals do not work in threads.
I've used signals to handle blocking operations that possibly ta
5.043s
I can expect this result when I run some processes in parallel on
different CPUs, but this code uses threads, so the GIL should prevent the
two task() functions from being executed in parallel. What am I missing?
--
Marco Buttu
INAF-Osservatorio Astronomico di Cagliari
Via della Scienza n. 5, 09047 Sela
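The likely resolution of Marco's puzzle (the answer is not shown in this excerpt) is that the GIL only serializes Python bytecode: blocking calls release it. If task() spends its time in such a call (time.sleep() here as a stand-in), two threads do overlap:

```python
import threading
import time

def task():
    time.sleep(1)  # blocking call: releases the GIL while waiting

start = time.perf_counter()
threads = [threading.Thread(target=task) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.perf_counter() - start
print(f"{elapsed:.3f}s")  # close to 1 s, not 2 s: the sleeps overlap
```

Pure-bytecode loops in task() would instead serialize and take roughly the sum of their individual times.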
Hello List,
I came across some threading code in Some Other place recently and wanted to
sanity-check my assumptions.
The code (below) creates a number of threads; each thread takes the last (index
-1) value from a global list of integers, increments it by one and appends the
new value to the
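The shared-list pattern described here (read the last element, increment, append) is not atomic even under the GIL; the interpreter can switch threads between the read and the append. A sketch of the race, with worker counts chosen for illustration:

```python
import threading

values = [0]

def worker(n):
    for _ in range(n):
        # three separate steps: read, add, append; a thread switch can
        # occur between the read and the append, duplicating values
        last = values[-1]
        values.append(last + 1)

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Each loop iteration appends exactly once, so the length is always
# 40001 - but duplicated reads mean the final element is often < 40000.
print(len(values), values[-1])
```

Wrapping the read-increment-append in a threading.Lock makes the final element deterministically 40000.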
On 2016-10-27 07:33 AM, jmp wrote:
On 10/27/2016 12:22 PM, pozz wrote:
(blocking) thread. The blocking function read returns *immediately* when
all the bytes are received. And I think during blocking time, the
thread isn't consuming CPU clocks.
Threads do consume CPU clocks.
Sometimes
On 10/27/2016 02:55 PM, Chris Angelico wrote:
On Thu, Oct 27, 2016 at 11:33 PM, jmp wrote:
On 10/27/2016 01:43 PM, Chris Angelico wrote:
Blocked threads don't consume CPU time. Why would they?
ChrisA
Agreed. My point being that a blocked thread achieves nothing, except
parallelism
On 10/27/2016 01:43 PM, Chris Angelico wrote:
Blocked threads don't consume CPU time. Why would they?
ChrisA
Agreed. My point being that a blocked thread achieves nothing, except
parallelism, i.e. other threads can be processed.
To be more specific, if you compute factorial(51354)
On Thu, Oct 27, 2016 at 10:56 PM, pozz wrote:
> Yes of course, but when the backend thread calls the *blocking* function
> pyserial.read(), it *doesn't* consume CPU clocks (at least, I hope).
> The low-level implementation of pyserial.read() should move the thread in a
> "suspend" or "waiting" sta
> all the bytes are received. And I think during blocking time, the
> thread isn't consuming CPU clocks.

Threads do consume CPU clocks.
An operation within a thread will not consume fewer CPU clocks; however,
the scheduler will interrupt the thread and give other
threads/operations a chance to process as well.
Threads implement parallelism
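The claim that a blocked thread costs (almost) no CPU can be checked directly by comparing process CPU time against wall time around a blocking call; time.sleep() stands in for a blocking pyserial.read() here:

```python
import threading
import time

def blocked():
    time.sleep(0.5)  # stands in for a blocking pyserial.read()

cpu_before = time.process_time()   # CPU time, not wall-clock time
t = threading.Thread(target=blocked)
t.start()
t.join()
cpu_used = time.process_time() - cpu_before
print(f"CPU time consumed while blocked: {cpu_used:.3f}s")  # near zero
```

Half a second of wall time passes, but essentially no CPU time is charged to the process: the OS parks the blocked thread.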
Here is an example about threads and PyQT
https://www.youtube.com/watch?v=ivcxZSHL7jM&index=2
Il 26/10/2016 16:18, jmp wrote:
On 10/26/2016 02:45 PM, pozz wrote:
Il 26/10/2016 13:16, jmp wrote:
[...]
I suggest you write a GUI that makes synchronous calls to a remote
application, if possible. If the remote app is in Python, you have
access to remote protocols already written for you; Pyro is one of them,
so you can skip the low-level communication part.
Chris Angelico :
> Python-the-language doesn't permit those kinds of rewrites.
[Citation needed]
Is there something here, perhaps?
https://docs.python.org/3/library/concurrency.html
Marko
On Thu, Oct 27, 2016 at 1:42 AM, Marko Rauhamaa wrote:
> Chris Angelico :
>> And since Python doesn't rewrite the code, you don't have a problem.
>
> Do you mean Python or CPython?
>
> And how do you know?
Both, and I know because Python-the-language doesn't permit those
kinds of rewrites. PyPy d
Chris Angelico :
> And since Python doesn't rewrite the code, you don't have a problem.
Do you mean Python or CPython?
And how do you know?
Marko
On Thu, Oct 27, 2016 at 1:21 AM, Marko Rauhamaa wrote:
> Analogous code in C or Java would not be guaranteed to finish if func1()
> and func2() were in different execution contexts. In fact, it would be
> almost guaranteed to hang.
>
> That is because the compiler can see that "active" cannot chan
Chris Angelico :
> On Thu, Oct 27, 2016 at 12:37 AM, Marko Rauhamaa wrote:
>> I don't know what "Global state is shared across all threads" means
>> in this context. It sounds like something that would be true for,
>> say, Java and C as well. Howev
there is no "volatile"
>>> in Python so you can't coordinate Python threads safely without
>>> proper synchronization. If you set a variable in one thread and read
>>> it in another thread, the latter might never see the change.
>>
>> Incor
Chris Angelico :
> On Wed, Oct 26, 2016 at 11:58 PM, Marko Rauhamaa wrote:
>> I can't think of a valid program that could take advantage of this
>> primitive guarantee of Python's. For example, there is no "volatile"
>> in Python so you can't coor
pozz :
> The real problem is that retrieving status from remote device is a
> slow operation. If the GUI thread blocks waiting for the answer, the
> GUI blocks and the user complains.

Correct. Obnoxious, blocking APIs abound.

However, I have usually used processes (instead of threads) to
encapsulate blocking APIs. Processes have neater resource isolation and
a better-behaving life cycle. For example, you can actually kill a
process while you can't kill a thread.
pozz :
> Il 26/10/2016 13:27, Antoon Pardon ha scritto:
>> Op 26-10-16 om 12:22 schreef pozz:
>>> Is it safe to access this variable from two different threads?
>>> Should I implement a safer and more complex mechanism? If yes, what
>>> mechanism?
>>
>
> I have some concerns even in using self.comm_active. It is a boolean
> variable accessed by the GUI thread (inside the Start/Stop button
> handlers) and the backend thread (in the "while self.comm_active"
> instruction).
> Is it safe to access this variable from two different threads? Should I
> implement a safer and more complex mechanism? If yes, what mechanism?

Accessing from multiple threads shou

from http://nedbatchelder.com/blog/201204/two_problems.html
Some people,
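One common, explicitly-synchronized alternative to a bare boolean stop flag is threading.Event; a sketch of the Start/Stop scenario from the question (the name comm_active is taken from the post, the rest is illustrative):

```python
import threading
import time

comm_active = threading.Event()  # replaces the bare boolean attribute

def backend():
    # stands in for "while self.comm_active": is_set() is thread-safe
    while comm_active.is_set():
        time.sleep(0.05)   # stands in for one poll of the remote device

# GUI "Start" handler
comm_active.set()
worker = threading.Thread(target=backend)
worker.start()

time.sleep(0.2)            # the GUI runs for a while

# GUI "Stop" handler
comm_active.clear()
worker.join()
print("backend stopped cleanly")
```

In CPython a plain boolean attribute happens to work too (attribute reads/writes are atomic under the GIL), but Event documents the intent and stays correct if the polling loop later waits with `comm_active.wait(timeout)` instead of sleeping.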
Il 26/10/2016 09:13, pozz ha scritto:
> [...]
What is the best approach to use in my scenario (GUI and backend
communication)?
I just found this[1] page, where the thread approach is explained with
the following code:
---
import threading
import time
from gi.repository import GLib, Gt
The application starts sending "GET
STATUS" requests to the remote device, waiting for its response. When the
response arrives, the GUI widgets are refreshed with the new status. The
"GET STATUS" requests are sent at a regular frequency (polling mode).
I thought to split the application
That bug is: if you control-C the top-level process, all the
>> subprocesses are left running.
>>
>> I've been thinking about making it catch SIGINT, SIGTERM and SIGHUP,
>> and having it SIGKILL its active subprocesses upon receiving one of
>> these signals.
>>
> However, it's multithreaded, and I've heard that in CPython, threads
> and signals don't mix well.

Python does confuse matters, but both threads and signals are
problematic entities under Linux. You need to be very well versed in the
semantics of both operating system concepts (man 7 pthreads
Dan Stromberg writes:
> That bug is: if you control-C the top-level process, all the
> subprocesses are left running.
Are you setting the daemon flag?
Generally, expect SIGINT to be handled by the main thread, and code
accordingly. But I've never used the low-level thread and _
and having it SIGKILL its active subprocesses upon receiving one of
these signals.
However, it's multithreaded, and I've heard that in CPython, threads
and signals don't mix well.
Is this still an issue in CPython 3.5? If yes, how can I work around it?
Thanks!
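The advice given in this thread (CPython delivers signals to the main thread) suggests a workaround: install the handler in the main thread and have it kill the tracked subprocesses. A POSIX-only sketch; the `procs` list and `reap` handler are illustrative names:

```python
import signal
import subprocess
import sys

procs = []  # active subprocesses, registered as they are spawned

def reap(signum, frame):
    # CPython delivers signals to the main thread, so this handler runs
    # there regardless of how many worker threads exist
    for p in procs:
        p.kill()               # SIGKILL on POSIX
    sys.exit(128 + signum)

# POSIX-only: SIGHUP does not exist on Windows
for sig in (signal.SIGINT, signal.SIGTERM, signal.SIGHUP):
    signal.signal(sig, reap)

procs.append(subprocess.Popen(["sleep", "60"]))
print("handler installed for", len(procs), "subprocess(es)")
```

Worker threads must not call signal.signal() themselves; in CPython only the main thread may set handlers.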
On 9/27/2016 12:01 AM, srinivas devaki wrote:
How does Python switch execution and maintain context (i.e. function stack
etc.) for coroutines, and why is it less costly than switching threads,
which do almost the same thing? Both are handled by the Python interpreter
itself (the event loop for coroutines and GIL scheduling for threading), so
where does
            self.status = 'P'
        except serial.SerialException:
            # This looks like a job for try/finally, actually
            self.status = 'Z'  # Dead
            self.alive = False
            raise

Then your main thread, instead of just sleeping forever, does this:

    while True:
        time.sleep(1)
It works, but sometimes it seems to block. I think I haven't used the
threads correctly.

Seems a fairly reasonable model. From what I'm seeing here, you start
a thread to read from each serial port, but then those threads will
make blocking writes to all the other serial ports. Is it possible
that one of them is getting full?
When I do this ki
Il 17/09/2015 15:04, Dennis Lee Bieber ha scritto:
On Thu, 17 Sep 2015 12:00:08 + (UTC), alister
declaimed the following:
I can see the data being transmitted snowballing & running away in a +ve
feedback loop very easily.
Especially if a few of the remote devices are configured
Il 17/09/2015 14:00, alister ha scritto:
I would like to know more about how many serial ports are connected
One real serial port and two virtual serial ports, created by com0com
(it's a free virtual serial port for Windows).
what the equipment they are connected to does and expects.
Ra
thread to manage the
> receiving. When a byte is received, I call the .write() method for all
> the other ports.
>
> It works, but sometimes it seems to block. I think I haven't used
> correctly the threads.
>
> Below is my code, I hope someone can help me.
>
> C
When a byte is received, I call the .write() method for all
the other ports.
It works, but sometimes it seems to block. I think I haven't used the
threads correctly.
Below is my code; I hope someone can help me.
Consider that I'm a newbie in Python and I have never used threads before.

import serial
import threading
import
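One way to avoid the cross-blocking writes diagnosed in the reply above is to decouple reading from writing with a queue: reader threads only enqueue, and a single drain loop does all the writes. A sketch with plain dicts standing in for pyserial ports (the Bridge class and the port structure are illustrative, not from the post):

```python
import queue
import threading

class Bridge:
    """Fan out bytes from any port to all the others via a queue."""

    def __init__(self, ports):
        self.ports = ports
        self.q = queue.Queue()

    def reader(self, src):
        # each reader only enqueues; it never blocks on another port's write
        for chunk in src["incoming"]:
            self.q.put((src, chunk))
        self.q.put(None)  # sentinel: this reader is done

    def run(self):
        threads = [threading.Thread(target=self.reader, args=(p,))
                   for p in self.ports]
        for t in threads:
            t.start()
        done = 0
        while done < len(self.ports):
            item = self.q.get()
            if item is None:
                done += 1
                continue
            src, chunk = item
            for p in self.ports:      # single writer: no cross-blocking
                if p is not src:
                    p["written"].append(chunk)
        for t in threads:
            t.join()

ports = [{"incoming": [b"a"], "written": []},
         {"incoming": [b"b"], "written": []}]
Bridge(ports).run()
print(ports[0]["written"], ports[1]["written"])
```

With real pyserial ports, `src["incoming"]` becomes a loop over `port.read()`, and the drain loop calls `port.write(chunk)`; a full write buffer then only stalls the one writer, not every reader.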
Ryan Stuart writes:
> My point is malloc, something further up (down?) the stack, is making
> modifications to shared state when threads are involved. Modifying
> shared state makes it infinitely more difficult to reason about the
> correctness of your software.
If you're sayin
On Thu, Feb 26, 2015 at 4:16 AM, Mark Lawrence wrote:
> IIRC the underlying JET engine was replaced by SQL Server years ago. Maybe
> not the best technology in the world, but you'd be hard pushed to do worse
> than JET :)
The way I understood it, MS Access could connect to a variety of
database ba
On Wed, Feb 25, 2015 at 9:37 AM, Mark Lawrence wrote:
> On 25/02/2015 06:02, Ian Kelly wrote:
>>
>>
>> Is the name of that database program "Microsoft Access" perchance?
>>
>
> Are you referring to the GUI, the underlying database engine, both, or what?
The engine. In theory it supports concurren
On 25/02/2015 06:02, Ian Kelly wrote:
Is the name of that database program "Microsoft Access" perchance?
Are you referring to the GUI, the underlying database engine, both, or what?
--
My fellow Pythonistas, ask not what our language can do for you, ask
what you can do for our language.
Mar
On Tue, Feb 24, 2015 at 10:54 PM, Chris Angelico wrote:
> On Wed, Feb 25, 2015 at 4:46 PM, Marko Rauhamaa wrote:
>> Marcos Almeida Azevedo :
>>
>>> Synchronized methods in Java really makes programming life simpler.
>>> But I think it is standard practice to avoid this if there is a
>>> lighter a
Marcos Almeida Azevedo :
> Synchronized methods in Java really makes programming life simpler.
> But I think it is standard practice to avoid this if there is a
> lighter alternative as synchronized methods are slow. Worse case I
> used double checked locking.
I have yet to see code whose perform
> shared memory.
> I don't understand what you mean about malloc.

My point is that malloc, something further up (down?) the stack, is making
modifications to shared state when threads are involved. Modifying shared
state makes it infinitely more difficult to reason about the correctness of
your
Chris Angelico :
> Actually, you can quite happily have multiple threads messing with the
> underlying file descriptors, that's not a problem. (Though you will
> tend to get interleaved output. But if you always produce output in
> single blocks of text that each contain one lin
> all have the same issue.
>
> Re stdin/stdout: obviously you can't have multiple threads messing with
> the same fd's; that's the same thing as data sharing.
Ryan Stuart writes:
> I'm not sure what else to say really. It's just a fact of life that
> Threads by definition run in the same memory space and hence always
> have the possibility of nasty unforeseen problems. They are unforeseen
> because it is extremely difficult (may
Hi Tom,
Tom Kent gmail.com> writes:
>
> I'm getting an error output when I call the C-API's Py_Finalize() from a
> different C-thread than I made a Python call on.
Can you please post a bug on https://bugs.python.org ?
Be sure to upload your example there.
Thank you
Antoine.
Thanks, I read up on the subprocess module; this answered most of my
questions. Thanks a lot for the replies.
On Thu, Jan 8, 2015 at 9:46 AM, Terry Reedy wrote:
> On 1/7/2015 9:00 PM, Ganesh Pal wrote:
>
>> Hi friends,
>>
>> I'm trying to use threads to achieve the below work
I'm getting an error output when I call the C-API's Py_Finalize() from a
different C-thread than I made a python call on.
The error I'm seeing is:
Exception ignored in:
Traceback (most recent call last):
File "C:\Python34-32\Lib\threading.py", line 1289, in _shutdown
assert tlock.locked()
On 1/7/2015 9:00 PM, Ganesh Pal wrote:
Hi friends,
I'm trying to use threads to achieve the below work flow
1. Start a process , while its running grep for a string 1
2. Apply the string 1 to the command in step 1 and exit step 2
3. Monitor the stdout of step1 and print success if t
whether or not grep
succeeded -- so you can redirect stdout and stderr to os.devnull and
avoid using .communicate(). Also, if you can't use .communicate(),
but need to access stdout, this is the most common reason to need
threads with subprocess.
-- Devin
On Wed, Jan 7, 2015 at 8:00 PM, Ganesh P
Hi friends,
I'm trying to use threads to achieve the below work flow
1. Start a process , while its running grep for a string 1
2. Apply the string 1 to the command in step 1 and exit step 2
3. Monitor the stdout of step1 and print success if the is pattern found
Questions:
1. Can the
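The workflow in steps 1-3 (start a process, watch its stdout for a pattern, report success) is commonly done with a reader thread over subprocess.Popen; a sketch in which the child command and the pattern "string 1" are placeholders for the real ones:

```python
import subprocess
import sys
import threading

def watch(proc, pattern, result):
    # reader thread: scan the child's stdout line by line for the pattern
    for line in proc.stdout:
        if pattern in line:
            result["found"] = line.strip()
            break

# a Python one-liner stands in for the real long-running process
proc = subprocess.Popen(
    [sys.executable, "-c", "print('starting'); print('string 1 ready')"],
    stdout=subprocess.PIPE, text=True)

result = {}
t = threading.Thread(target=watch, args=(proc, "string 1", result))
t.start()
t.join()
proc.wait()
print("success" if "found" in result else "not found")
```

If the caller must keep doing other work, skip the join() and poll `result` instead; the thread exists precisely so the blocking readline loop doesn't stall the rest of the program.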
On 11/13/2014 7:51 PM, satishmlm...@gmail.com wrote in 4 different threads:
> How to get file descriptors of sys.stdin, sys.stdout and sys.stderr?
> fileno() is not supported. Is it only in 3.1? What is the workaround?
> io.UnsupportedOperation: fileno
> How to give a file descriptor number t
On 12/05/14 07:33, lgabiot wrote:
But AFAIK the Python GIL (and on smaller or older computers that have
only one core) does not permit true parallel execution of two threads. I
believe it is quite like the way multiple processes are handled by an OS
on a single-CPU computer: process A has x CPU
Le 12/05/14 10:14, lgabiot wrote:
So if I follow you, if the PyAudio part is "non-blocking" there would be
a way to make it work without the two-threads thing. I'm back to the
PyAudio doc, and will try to get my head around the callback method,
which might be the good lead.
after filling the buffer), then you definitely need two threads for this.
But AFAIK the Python GIL (and on smaller or older computers that have only
one core) does not permit true parallel execution of two threads.

Not for code that runs in the *interpreter*, but it certainly allows
> 000 s for instance), since while doing the calculation, no audio would
> be ingested (unless PyAudio possesses some kind of internal concurrency
> system). Which leads me to think that a buffer (queue) and separate
> threads (producer and consumer) are necessary for this task.

This sounds lik
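The buffer-plus-two-threads design described above is the standard queue.Queue producer/consumer pattern; a sketch with a fake audio source (the integer chunks and the doubling "calculation" are placeholders):

```python
import queue
import threading

buf = queue.Queue(maxsize=64)   # bounded: producer blocks if consumer lags

def producer():
    # stands in for the PyAudio capture loop/callback
    for chunk in range(5):
        buf.put(chunk)
    buf.put(None)               # sentinel: end of stream

def consumer(out):
    # stands in for the (possibly slow) calculation
    while True:
        chunk = buf.get()
        if chunk is None:
            break
        out.append(chunk * 2)

results = []
threads = [threading.Thread(target=producer),
           threading.Thread(target=consumer, args=(results,))]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)  # [0, 2, 4, 6, 8]
```

The bounded queue is what keeps audio from being dropped: while the consumer is busy, captured chunks simply accumulate in the buffer instead of being lost.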