Re: Subprocess Popen confusion

2020-05-14 Thread Peter Otten
Dick Holmes wrote:

> I'm trying to
> communicate using a continuing dialog between two
> processes on the same system.

I think pexpect

https://pexpect.readthedocs.io/en/stable/index.html

does this naturally, but I don't know if Windows support is sufficient for 
your needs.
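
For the GDB/MI case mentioned below, an untested sketch of the kind of dialog 
pexpect supports (the binary path ./a.out and the prompt handling are 
assumptions):

# Untested sketch: a continuing dialog with GDB/MI via pexpect.
# The binary path (./a.out) and prompt pattern are assumptions; on Windows,
# pexpect.popen_spawn.PopenSpawn would be needed instead of pexpect.spawn.
import pexpect

child = pexpect.spawn("gdb --interpreter=mi", encoding="utf-8")
child.expect_exact("(gdb)")                    # wait for the MI prompt
child.sendline("-file-exec-and-symbols ./a.out")
child.expect_exact("(gdb)")
print(child.before)                            # MI output for that command
child.sendline("-gdb-exit")
child.close()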

> I've looked at various mechanisms and the
> class that seems to fit my needs is Popen in the subprocess module, but
> I can't seem to get more than a single round-trip message through Popen.
> I first call Popen then poll using the identifier returned from the call
> and the poll seems to work. I then call the communicate function passing
> None as the value to send to the companion process stdin. I get the
> expected result, but I also get "Exception condition detected on fd 0\n"
> and "error detected on stdin\n". Subsequent attempts to
> read/write/communicate with the subprocess fail because the file (stdxx
> PIPE) is closed.
> 
> I can't tell from the documentation if the communicate function is a
> one-time operation. 

Yes, communicate() is a one-off; its docstring


"""Interact with process: Send data to stdin and close it.
Read data from stdout and stderr, until end-of-file is
reached.  Wait for process to terminate.
...
"""

seems pretty clear. What would you improve?

> I have tried using read but the read call doesn't
> return (I'm using winpdb-reborn to monitor the operations).

Try readline(). Deadlocks may happen ;)
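
Something along these lines may work for a continuing dialog: write to stdin 
and read stdout line by line instead of calling communicate(). An untested 
sketch for the GDB/MI case; -list-features and -gdb-exit are real MI commands, 
but the prompt handling is an assumption and may need adjusting on Windows:

# Untested sketch of a repeated round trip with GDB/MI via Popen.
import subprocess

proc = subprocess.Popen(
    ["gdb", "--interpreter=mi"],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
    text=True,      # text mode (Python 3.7+)
    bufsize=1,      # line buffered on our side
)

def read_until_prompt():
    """Collect output lines until GDB prints its "(gdb)" prompt."""
    lines = []
    while True:
        line = proc.stdout.readline()   # may block -- beware deadlocks
        if not line or line.strip() == "(gdb)":
            return lines
        lines.append(line.rstrip("\n"))

read_until_prompt()                     # consume the startup banner

def send(command):
    proc.stdin.write(command + "\n")
    proc.stdin.flush()
    return read_until_prompt()

print(send("-list-features"))
send("-gdb-exit")
proc.wait()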
 
> I'm using Python 3.7, Windows 10, winpdb-reborn 2.0.0, rpdb2 1.5.0. If
> it makes any difference, I'm trying to communicate with GDB using the MI
> interpreter.
> 
> Thoughts and advice appreciated!
> 
> Dick


-- 
https://mail.python.org/mailman/listinfo/python-list


[RELEASE] Python 3.8.3 is now available

2020-05-14 Thread Łukasz Langa
On behalf of the entire Python development community, and the currently serving 
Python release team in particular, I’m pleased to announce the release of 
Python 3.8.3, the third maintenance release of Python 3.8. You can find it here:

https://www.python.org/downloads/release/python-383/ 


It contains two months' worth of bug fixes. Detailed information about all 
changes made in 3.8.3 can be found in its change log. Note that compared to 
3.8.2, version 3.8.3 also contains the changes introduced in 3.8.3rc1.

The Python 3.8 series is the newest feature release of the Python language, and 
it contains many new features and optimizations. See the “What’s New in Python 
3.8” document for more 
information about features included in the 3.8 series.

Maintenance releases for the 3.8 series will continue at regular bi-monthly 
intervals, with 3.8.4 planned for mid-July 2020.
One more thing

Unless blocked on any critical issue, Monday May 18th will be the release date 
of Python 3.9.0 beta 1. It’s a special release for us because this is when we 
lock the feature set for Python 3.9. If you can help test the currently 
available alpha release, that would be very helpful:

https://www.python.org/downloads/release/python-390a6/ 

We hope you enjoy the new Python release!

Thanks to all of the many volunteers who help make Python Development and these 
releases possible! Please consider supporting our efforts by volunteering 
yourself or through organization contributions to the Python Software 
Foundation.

https://www.python.org/psf/ 


Your friendly release team,
Ned Deily @nad 
Steve Dower @steve.dower 
Łukasz Langa @ambv 
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Subprocess Popen confusion

2020-05-14 Thread Terry Reedy

On 5/13/2020 11:13 PM, Dick Holmes wrote:

I'm trying to
communicate using a continuing dialog between two
processes on the same system


Try multiprocessing and pipes.
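
A tiny, untested sketch of a request/reply dialog over a Pipe (this assumes 
both ends are Python processes you control):

# Minimal sketch: a two-way dialog between two Python processes over a Pipe.
from multiprocessing import Process, Pipe

def worker(conn):
    while True:
        msg = conn.recv()
        if msg is None:          # sentinel: shut down
            break
        conn.send(msg.upper())   # reply to each request

if __name__ == "__main__":
    parent_end, child_end = Pipe()
    p = Process(target=worker, args=(child_end,))
    p.start()
    parent_end.send("hello")
    print(parent_end.recv())     # -> 'HELLO'
    parent_end.send(None)
    p.join()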


--
Terry Jan Reedy

--
https://mail.python.org/mailman/listinfo/python-list


PROBLEM WITH PYTHON IDLE

2020-05-14 Thread aduojo samson
Hello, my name is Samson Haruna and I am from Nigeria. I have a problem with
my Python 3.8: any time I click on it to write my Python code I get this
error message: "IDLE subprocess didn't make connection". I have uninstalled
and reinstalled several times, and I have even deleted it and downloaded a new
one. It works for some time, then it stops, giving me the above-mentioned
error message. What can I do?

Samson.A.Haruna
08106587039
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: php to python code converter

2020-05-14 Thread MRAB

On 2020-05-13 17:53, Palash Bauri wrote:

That gives a 404 🤷‍♂️. Are you sure you pasted the right link?

~Palash Bauri

On Wed, 13 May 2020, 9:07 pm ,  wrote:


On Friday, 8 May 2009 16:08:25 UTC+5:30, bvidinli  wrote:
> if anybody needs:
> http://code.google.com/p/phppython/

$username = "username";
$password = "password";
$hostname = "localhost";

$dbhandle = mysql_connect($hostname, $username, $password) or
die("Unable to connect to MySQL");
$selected = mysql_select_db("dropdownvalues", $dbhandle) or
die("Could not select examples");
$choice = mysql_real_escape_string($_GET['choice']);

$query = "SELECT * FROM dd_vals WHERE category='$choice'";

$result = mysql_query($query);

while ($row = mysql_fetch_array($result)) {
echo "" . $row{'dd_val'} . "";
}


Look at the date of the original post. It says "8 May 2009". That's over 
11 years ago!


Since then, Google Code has ceased to exist.
--
https://mail.python.org/mailman/listinfo/python-list


Re: PROBLEM WITH PYTHON IDLE

2020-05-14 Thread Souvik Dutta
What is your OS? Where did you download it from? What type of package
did you download?
Souvik flutter dev

On Thu, May 14, 2020, 5:36 PM aduojo samson  wrote:

> Hello, my name is Samson Haruna and I am from Nigeria.I have a problem with
> my python 3.8, any time I click on it to write my python code I get this
> error message "IDLE subprocess didn't make connection". I have uninstalled
> and reinstalled several times, I have even deleted and downloaded a new
> one, it works for some time then it stops giving me the above mentioned
> error message. what can I do.
>
> Samson.A.Haruna
> 08106587039
> --
> https://mail.python.org/mailman/listinfo/python-list
>
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: PROBLEM WITH PYTHON IDLE

2020-05-14 Thread Mats Wichmann
On 5/14/20 5:15 AM, aduojo samson wrote:
> Hello, my name is Samson Haruna and I am from Nigeria.I have a problem with
> my python 3.8, any time I click on it to write my python code I get this
> error message "IDLE subprocess didn't make connection". I have uninstalled
> and reinstalled several times, I have even deleted and downloaded a new
> one, it works for some time then it stops giving me the above mentioned
> error message. what can I do.

Did you read the instructions in the link presented when it gives you
that error?

https://docs.python.org/3/library/idle.html#startup-failure

Does this help?
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: php to python code converter

2020-05-14 Thread Jon Ribbens via Python-list
On 2020-05-14, MRAB  wrote:
> Look at the date of the original post. It says "8 May 2009". That's over 
> 11 years ago!
>
> Since then, Google Code has ceased to exist.

Disgraceful, all URLs should continue to work for at least as long as
this one has: http://info.cern.ch/hypertext/WWW/TheProject.html ;-)
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Subprocess Popen confusion

2020-05-14 Thread Dick Holmes
In article , 
__pete...@web.de says...
> 
> Dick Holmes wrote:
> 
> > I'm trying to
> > communicate using a continuing dialog between two
> > processes on the same system.
> 
> I think pexpect
> 
> https://pexpect.readthedocs.io/en/stable/index.html
> 
> does this naturally, but I don't know if Windows support is sufficient for 
> your needs.
> 
> > I've looked at various mechanisms and the
> > class that seems to fit my needs is Popen in the subprocess module, but
> > I can't seem to get more than a single round-trip message through Popen.
> > I first call Popen then poll using the identifier returned from the call
> > and the poll seems to work. I then call the communicate function passing
> > None as the value to send to the companion process stdin. I get the
> > expected result, but I also get "Exception condition detected on fd 0
> > \\n" and "error detected on stdin\\n". Subsequent attempts to
> > read/write/communicate with the subprocess fail because the file (stdxx
> > PIPE) is closed.
> > 
> > I can't tell from the documentation if the communicate function is a
> > one-time operation. 
> 
> Yes, communicate() is one-off, 
> 
> 
> """Interact with process: Send data to stdin and close it.
> Read data from stdout and stderr, until end-of-file is
> reached.  Wait for process to terminate.
> ...
> """
> 
> seems pretty clear. What would you improve?

Peter - thanks for the clarification. I'm using the 3.6.5 CHM 
documentation and it doesn't mention the phrase "and close it".
> 
> > I have tried using read but the read call doesn't
> > return (I'm using winpdb-reborn to monitor the operations).
> 
> Try readline(). Deadlocks may happen ;)
>  
> > I'm using Python 3.7, Windows 10, winpdb-reborn 2.0.0, rpdb2 1.5.0. If
> > it makes any difference, I'm trying to communicate with GDB using the MI
> > interpreter.
> > 


-- 
https://mail.python.org/mailman/listinfo/python-list


Fwd: Removing python installation

2020-05-14 Thread Shawn Hoffman
I've somehow wound up in a situation where I have both 3.7.5 and 3.7.6
installed, and the py.exe launcher can find both of them, and defaults
to the older one:

>py -0p
Installed Pythons found by py Launcher for Windows
 -3.7-64"C:\Program Files (x86)\Microsoft Visual
Studio\Shared\Python37_64\python.exe" *
 -3.7-64C:\Users\shawn\AppData\Local\Programs\Python\Python37\python.exe

As you can see, the 3.7.5 install is from Visual Studio. I want to
remove this python installation, however while uninstalling it via the
VS Installer GUI appears to work, none of the files are removed. Only
the json file VS Installer uses to track the package is removed. In
the VS Installer logs, I see:

Skipping uninstall of 'CPython3.Exe.x64,version=3.7.5,chip=x64'
because it is permanent.

which seems suspicious.

Additionally, in the aforementioned json file I can see the installer
being used is "python-3.7.5-amd64.exe" from
https://go.microsoft.com/fwlink/?linkid=2109129 , with args:
"/quiet /log \"[LogFile]\" InstallAllUsers=1 CompileAll=1
Include_symbols=1 TargetDir=\"[SharedInstallDir]\\Python37_64\""

So, I've downloaded this installer and tried to run it with the
/uninstall option. Again, the uninstall appears to complete OK, but
the files are not removed.
The uninstall log is here:
https://gist.github.com/shuffle2/3c3aa736f5cf9579e6e4a4a33b1ad81d

Is there some "clean" way to remove this VS-installed 3.7.5 (and not
break the 3.7.6 install)?

Thanks,
-Shawn
-- 
https://mail.python.org/mailman/listinfo/python-list


Decorators with arguments?

2020-05-14 Thread Christopher de Vidal
Help please? I'm creating an MQTT-to-Firestore bridge and I know a decorator
would help, but I'm stumped on how to create one. I've used decorators before,
but not with arguments.

The Firestore collection.on_snapshot() method invokes a callback and sends
it three parameters (collection_snapshot, changes, and read_time). I need
the callback to also know the name of the collection so that I can publish
to the equivalent MQTT topic name. I had thought to add a fourth parameter
and I believe a decorator is the right approach but am stumped how to add
that fourth parameter. How would I do this with the code below?

#!/usr/bin/env python3
from google.cloud import firestore
import firebase_admin
from firebase_admin import credentials
import json
import mqtt

firebase_admin.initialize_app(credentials.Certificate("certs/firebase.json"))
db = firestore.Client()
mqtt.connect()


def load_json(contents):
    try:
        return json.loads(contents)
    except (json.decoder.JSONDecodeError, TypeError):
        return contents


def on_snapshot(col_name, col_snapshot, changes, read_time):
    data = dict()
    for doc in col_snapshot:
        serial = doc.id
        contents = load_json(doc.to_dict()['value'])
        data[serial] = contents
    for change in changes:
        serial = change.document.id
        mqtt_topic = col_name + '/' + serial
        contents = data[serial]
        if change.type.name in ['ADDED', 'MODIFIED']:
            mqtt.publish(mqtt_topic, contents)
        elif change.type.name == 'REMOVED':
            mqtt.publish(mqtt_topic, None)


# Start repeated code section
# TODO Better to use decorators but I was stumped on how to pass arguments
def door_status_on_snapshot(col_snapshot, changes, read_time):
    on_snapshot('door_status', col_snapshot, changes, read_time)


door_status_col_ref = db.collection('door_status')
door_status_col_watch = door_status_col_ref.on_snapshot(door_status_on_snapshot)


# Repetition...
def cpu_temp_on_snapshot(col_snapshot, changes, read_time):
    on_snapshot('cpu_temp', col_snapshot, changes, read_time)


cpu_temp_col_ref = db.collection('cpu_temp')
cpu_temp_col_watch = cpu_temp_col_ref.on_snapshot(cpu_temp_on_snapshot)
# End repeated code section

# Start repeated code section
door_status_col_watch.unsubscribe()
cpu_temp_col_watch.unsubscribe()
# Repetition...
# End repeated code section
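
For comparison, functools.partial can pre-bind the collection name without a
custom decorator; a minimal, untested sketch using the names defined above:

# Untested sketch: functools.partial pre-binds col_name, so the generic
# on_snapshot above can serve every collection without per-collection shims.
from functools import partial

watches = {}
for name in ('door_status', 'cpu_temp'):
    watches[name] = db.collection(name).on_snapshot(partial(on_snapshot, name))

# ... later ...
for watch in watches.values():
    watch.unsubscribe()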

Christopher de Vidal

Would you consider yourself a good person? Have you ever taken the 'Good
Person' test? It's a fascinating five minute quiz. Google it.
-- 
https://mail.python.org/mailman/listinfo/python-list


Multithread and locking issue

2020-05-14 Thread Stephane Tougard



Hello,

A multithreaded program written in Python is connected to a Postgres
database. To avoid concurrent-access issues with the database, it starts
a thread that receives all SQL requests via queue.put and queue.get (it
only does inserts, so there is no issue with the return value of the SQL request).

As long as it runs with 10 threads, no issues. At 100 threads, the
software is blocked by what I think is a locking issue.

I guess Python multithreading and queues work well enough to handle 100
threads with no issue (correct me if I'm wrong here), so I guess the
problem is in my code.

The function (thread) who handles SQL requests.

def execute_sql(q):
    print("Start SQL Thread")
    while True:
        try:
            data = q.get(True,5)
        except:
            print("No data")
            continue

        print("RECEIVED SQL ORDER")
        print(data)
        print("END")
        if data == "EXIT":
            return
        try:
            request = data['request']
            arg = data['arg']
            ref.execute(request,arg)
        except:
            print("Can not execute SQL request")
            print(data)


The code to send the SQL request.

sql = dict()
sql['request'] = "update b2_user set credit = credit -%s where id = %s"
sql['arg'] = (i,username,)
try:
    q.put(sql,True,5)
except:
    print("Can not insert data")

The launch of the SQL thread (nothing fancy here).

q = qu.Queue()
t = th.Thread(target = execute_sql, args = (q,))
t.start()


Any idea ?
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Decorators with arguments?

2020-05-14 Thread Cameron Simpson

On 14May2020 08:28, Christopher de Vidal  wrote:

Help please? Creating an MQTT-to-Firestore bridge and I know a decorator
would help but I'm stumped how to create one. I've used decorators before
but not with arguments.

The Firestore collection.on_snapshot() method invokes a callback and sends
it three parameters (collection_snapshot, changes, and read_time). I need
the callback to also know the name of the collection so that I can publish
to the equivalent MQTT topic name. I had thought to add a fourth parameter
and I believe a decorator is the right approach but am stumped how to add
that fourth parameter. How would I do this with the code below?


To start with, I'm not convinced a decorator is a good approach here.  
I'd use a small class.


Maybe you could provide an example of how you think the code would look 
_with_ a decorator (ignoring the implementation of the decorator itself) 
so that we can see what you'd like to use?


The thing about a decorator is that in normal use you use it to define a 
new named function; each function would tend to be different in what it 
does, otherwise you'd only have one function.


It sounds like you want a named function per collection name, but with 
the _same_ inner code (on_snapshot). But decorators, being applied to 
multiple functions, are usually for situations where the inner code 
varies and the decorator is just making a shim to call it in a 
particular way. You've got _one_ function and just want to attach it to 
multiple collection names, kind of the inverse.


To clear my mind, I'll lay out the class approach (untested). Then if I 
think you can do this with a decorator I'll try to sketch one.


    class MQTTAdaptor:

        def __init__(self, fbdb, collection_name):
            self.fbdb = fbdb
            self.collection_name = collection_name
            self.fbref = None
            self.subscribe()

        def subscribe(self):
            assert self.fbref is None
            col_ref = self.fbdb.collection(self.collection_name)
            self.fbref = col_ref.on_snapshot(self.on_snapshot)

        def unsubscribe(self):
            self.fbref.unsubscribe()
            self.fbref = None

        def on_snapshot(self, col_snapshot, changes, read_time):
            col_name = self.collection_name
            data = {}
            for doc in col_snapshot:
                serial = doc.id
                contents = load_json(doc.to_dict()['value'])
                data[serial] = contents
            for change in changes:
                serial = change.document.id
                mqtt_topic = col_name + '/' + serial
                contents = data[serial]
                if change.type.name in ['ADDED', 'MODIFIED']:
                    mqtt.publish(mqtt_topic, contents)
                elif change.type.name == 'REMOVED':
                    mqtt.publish(mqtt_topic, None)
                else:
                    warning("unhandled change type: %r" % change.type.name)

    adaptors = []
    for collection_name in 'cpu_temp', 'door_status':
        adaptors.append(MQTTAdaptor(db, collection_name))
    # ... run for a while ...
    for adaptor in adaptors:
        adaptor.unsubscribe()

I've broken out the subscribe/unsubscribe as standalone methods just in 
case you want to resubscribe an adaptor later (eg turn them on and off).


So here we've got a little class that keeps the state (the subscription 
ref and the collection name) and has its own FB style on_snapshot which 
passes stuff on to MQTT.


If you want to do this with a decorator you've got a small problem: it 
is easy to make a shim like your on_snapshot callback, but if you want 
to do a nice unsubscribe at the end then you need to keep the ref 
around somewhere. The class above provides a place to keep that.


With a decorator you need it to know where to store that ref. You can 
use a global registry (ugh) or you could make one (just a dict) and pass 
it to the decorator as well. We'll use a global and just use it in the 
decorator directly, since we'll use "db" the same way.


   # the registry
   adaptors = {}

    @adapt('cpu_temp')
    def cpu_temp_on_snapshot(collection_name, col_snapshot, changes, read_time):
        ... your existing code here ...

and then to subscribe:

   cpu_temp_col_ref = db.collection('cpu_temp')
   cpu_temp_col_watch = cpu_temp_col_ref.on_snapshot(cpu_temp_on_snapshot)

but then for the door_status you want the same function repeated:

    @adapt('door_status')
    def door_on_snapshot(collection_name, col_snapshot, changes, read_time):
        ... your existing code here ...

and the same longhand subscription.

You can see this isn't any better - you're writing out on_snapshot 
longhand every time.  Now, a decorator just accepts a function as its 
argument and returns a new function to be used in its place. So we could 
define on_snapshot once and decorate it:


    def named_snapshot(collection_name, col_snapshot, changes, read_time):
        ... the general function code here again ...

    cpu_temp_on_snapshot = adapt('cpu_temp')(named_snapshot)
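
One possible shape for the adapt factory itself, under the same assumptions 
(the global adaptors registry and db from above); an untested illustration, 
not a definitive implementation:

    def adapt(collection_name):
        def decorator(func):
            def shim(col_snapshot, changes, read_time):
                # supply the extra argument Firestore doesn't pass itself
                return func(collection_name, col_snapshot, changes, read_time)
            watch = db.collection(collection_name).on_snapshot(shim)
            adaptors[collection_name] = watch   # keep the ref for unsubscribe later
            return shim
        return decorator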

Re: Multithread and locking issue

2020-05-14 Thread Cameron Simpson

On 14May2020 22:36, Stephane Tougard  wrote:

A multithreaded software written in Python is connected with a Postgres
database. To avoid concurrent access issue with the database, it starts
a thread who receive all SQL request via queue.put and queue.get (it
makes only insert, so no issue with the return of the SQL request).


Just a remark that PostgreSQL lets callers work in parallel; just make 
multiple connections. But if you're coming from just one connection, then yes, 
serialising the SQL queries is good.



As long as it runs with 10 threads, no issues. At 100 threads, the
software is blocked by what I think is a locking issue.


Or a timeout or busy loop. More print statements will help you.


I guess Python multithreading and queue are working good enough that it
can handle 100 threads with no issue (give me wrong here),


Yep, that is just fine.


so I guess
the problem is in my code.

The function (thread) who handles SQL requests.

def execute_sql(q):
   print("Start SQL Thread")
   while True:
   try:
   data = q.get(True,5)
   except:


2 problems here:

1: a "bare except" catches _any_ exception instead of just the one you 
want (Empty). Don't do that - catch _only_ what you expect to handle.


2: since you're catching anything you never know why an exception 
occurred - at least report it:


    except Exception as e:
        print("No data: %s" % e)
        continue


   print("No data")
   continue


The first thing I would do is get rid of the timeout (5) parameter. And 
since block=True is the default, also that parameter:


   data = q.get()

This never times out, so there's no need to test for a timeout.

I would also put a print() _immediately_ before _and_ after the q.get() 
so that you know it was called and that it completed, and what it got.


To indicate no more SQL, send a sentinel such as None, and test for that:

    data = q.get()
    if data is None:
        break


   print("RECEIVED SQL ORDER")
   print(data)
   print("END")
   if data == "EXIT":
   return


Ah, here's your sentinel. I prefer None or some other special value 
rather than a magic string ("EXIT"). Partly because some other queue you 
use might be processing arbitrary strings, so a string won't do.



   try:
   request = data['request']
   arg = data['arg']
   ref.execute(request,arg)
   except:


Another bare except. Catch specific exceptions and report the exception!


   print("Can not execute SQL request")
   print(data)

The code to send the SQL request.

   sql = dict()
   sql['request'] = "update b2_user set credit = credit -%s where id = 
%s"
   sql['arg'] = (i,username,)
   try:
   q.put(sql,True,5)
   except:


Again the bare except. But just drop the timeout and go:

   q.put(sql)

and forget the try/except.


   print("Can not insert data")

The launch of the SQL thread (nothing fancy here).

q = qu.Queue()
t = th.Thread(target = execute_sql, args = (q,))
t.start()


That looks superficially good. Take out the timeouts, put in more 
print()s, and see what happens.


You also need to send the end-of-SQL sentinel:

   q.put(None)

or:

   q.put("EXIT")

depending on what you decide to use.

My problem with the timeouts is that they make your code far less 
predictable. If you get rid of them then your code must complete or 
deadlock, there's no fuzzy timeouts-may-occur middle ground. Timeouts 
are also difficult to choose correctly (if "correct" is even a term 
which is meaningful), and it is often then better to not try to choose 
them at all.
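
Putting those suggestions together, the consumer might look roughly like this.
An untested sketch: `cursor` stands in for the `ref` object from the original 
code, and the sample values are placeholders:

    import queue
    import threading

    def execute_sql(q, cursor):
        print("SQL thread started")
        while True:
            data = q.get()                 # blocks, no timeout
            if data is None:               # sentinel: no more SQL
                break
            print("received:", data)
            try:
                cursor.execute(data['request'], data['arg'])
            except Exception as e:         # ideally catch psycopg2.Error here
                print("cannot execute SQL request:", e, data)

    q = queue.Queue()
    t = threading.Thread(target=execute_sql, args=(q, cursor))
    t.start()

    q.put({'request': "update b2_user set credit = credit - %s where id = %s",
           'arg': (1, 'some_user')})       # placeholder values
    q.put(None)                            # tell the worker to finish
    t.join()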


Cheers,
Cameron Simpson 
--
https://mail.python.org/mailman/listinfo/python-list


Re: Multithread and locking issue

2020-05-14 Thread MRAB

On 2020-05-14 23:36, Stephane Tougard wrote:



Hello,

A multithreaded software written in Python is connected with a Postgres
database. To avoid concurrent access issue with the database, it starts
a thread who receive all SQL request via queue.put and queue.get (it
makes only insert, so no issue with the return of the SQL request).

As long as it runs with 10 threads, no issues. At 100 threads, the
software is blocked by what I think is a locking issue.

I guess Python multithreading and queue are working good enough that it
can handle 100 threads with no issue (give me wrong here), so I guess
the problem is in my code.

The function (thread) who handles SQL requests.

def execute_sql(q):
 print("Start SQL Thread")
 while True:
 try:
 data = q.get(True,5)
 except:
 print("No data")
 continue
 
 print("RECEIVED SQL ORDER")

 print(data)
 print("END")
 if data == "EXIT":
 return
 try:
 request = data['request']
 arg = data['arg']
 ref.execute(request,arg)
 except:
 print("Can not execute SQL request")
 print(data)


The code to send the SQL request.

 sql = dict()
 sql['request'] = "update b2_user set credit = credit -%s where id = 
%s"
 sql['arg'] = (i,username,)
 try:
 q.put(sql,True,5)
 except:
 print("Can not insert data")

The launch of the SQL thread (nothing fancy here).

q = qu.Queue()
t = th.Thread(target = execute_sql, args = (q,))
t.start()


Any idea ?

Are there 100 threads running execute_sql? Do you put 100 "EXIT" 
messages into the queue, one for each thread?


The "bare excepts" are a bad idea because they catch _all_ exceptions, 
even the ones that might occur due to bugs, such as NameError 
(misspelled variable or function).

--
https://mail.python.org/mailman/listinfo/python-list


Re: Multithread and locking issue

2020-05-14 Thread Stephane Tougard
On 2020-05-14, MRAB  wrote:
> On 2020-05-14 23:36, Stephane Tougard wrote:
>> 
> Are there 100 threads running execute_sql? Do you put 100 "EXIT" 
> messages into the queue, one for each thread?

Nope, the EXIT comes from the main thread at the very end, once all the
other threads are already dead and the program is ready to exit. It's
just to ensure that the SQL thread dies as well.

> The "bare excepts" are a bad idea because they catch _all_ exceptions, 
> even the ones that might occur due to bugs, such as NameError 
> (misspelled variable or function).

At first, I did not use a timeout or try/except. The thing is that I
never catch any exception anyway; the program just gets stuck. If I
Ctrl-C it, it shows an "acquire_lock()" status.

So I added the timeout to ensure that it does not get stuck on the
get/put, and the except to see if the problem comes from there. It does
not come from there. I can remove all that; the program still gets stuck
after 150-200 threads have been launched and have died.

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Multithread and locking issue

2020-05-14 Thread Chris Angelico
On Fri, May 15, 2020 at 9:20 AM Cameron Simpson  wrote:
>
> On 14May2020 22:36, Stephane Tougard  wrote:
> >A multithreaded software written in Python is connected with a Postgres
> >database. To avoid concurrent access issue with the database, it starts
> >a thread who receive all SQL request via queue.put and queue.get (it
> >makes only insert, so no issue with the return of the SQL request).
>
> Just a remark that PostgreSQL ets callers work in parallel, just make
> multiple connections.

Seconded. If you know how many threads you're going to have, just open
that many connections. If not, there's a connection-pooling feature as
part of psycopg2 (if I'm not mistaken). This would be far far easier
to work with than a fragile queueing setup.
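
Roughly like this, with psycopg2's ThreadedConnectionPool (untested; the DSN,
pool size, and query are placeholders based on the earlier post):

# Sketch: one pooled connection per piece of work instead of a SQL queue.
from psycopg2.pool import ThreadedConnectionPool

pool = ThreadedConnectionPool(1, 100, dsn="dbname=mydb user=me password=secret")

def debit(i, username):
    conn = pool.getconn()
    try:
        with conn, conn.cursor() as cur:   # commits (or rolls back) on exit
            cur.execute(
                "update b2_user set credit = credit - %s where id = %s",
                (i, username),
            )
    finally:
        pool.putconn(conn)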

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Multithread and locking issue

2020-05-14 Thread Stephane Tougard
On 2020-05-15, Chris Angelico  wrote:

> Seconded. If you know how many threads you're going to have, just open
> that many connections. If not, there's a connection-pooling feature as
> part of psycopg2 (if I'm not mistaken). This would be far far easier
> to work with than a fragile queueing setup.

I've done it like that (more or less), and it works fine.

I note that the queuing module of Python is "fragile".
-- 
https://mail.python.org/mailman/listinfo/python-list