ModuleNotFoundError with click module

2019-11-30 Thread Tim Johnson

Using Linux Ubuntu 16.04 with the bash shell.
I am a retired Python programmer, but not terribly current.
I have moderate bash experience.

Having installed pgadmin4 via apt, I get the following traceback when
pgadmin4 is invoked:

Traceback (most recent call last):
  File "setup.py", line 17, in <module>
    from pgadmin.model import db, User, Version, ServerGroup, Server, \
  File "/usr/share/pgadmin4/web/pgadmin/__init__.py", line 19, in <module>
    from flask import Flask, abort, request, current_app, session, url_for
  File "/usr/local/lib/python3.7/site-packages/flask/__init__.py", line 21, in <module>
    from .app import Flask
  File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 34, in <module>
    from . import cli
  File "/usr/local/lib/python3.7/site-packages/flask/cli.py", line 25, in <module>
    import click
ModuleNotFoundError: No module named 'click'


If I invoke python3 (/usr/local/bin/python3), version 3.7.2 and invoke
>>> import click
click is imported successfully.

In this invocation, sys.path is:
['', '/usr/local/lib/python37.zip', '/usr/local/lib/python3.7',
'/usr/local/lib/python3.7/lib-dynload',
'/home/tim/.local/lib/python3.7/site-packages',
'/usr/local/lib/python3.7/site-packages']

$PYTHONPATH is empty when the bash shell is invoked

$PATH as follows:
/home/tim/bin:/home/tim/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin

A click.py can be found at
/usr/local/lib/python3.7/site-packages/pipenv/patched/piptools/

That click.py in turn imports click, presumably the package,
which appears to be at
/usr/local/lib/python3.7/site-packages/pipenv/vendor/click


Setting PYTHONPATH to each of the various paths above has
failed to resolve the ModuleNotFoundError.

The same issue occurs when attempting the install from a virtual environment.
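For what it's worth, one way to narrow this down (a stdlib-only sketch):
the apt-packaged pgadmin4 launcher may be running a different interpreter
than the /usr/local/bin/python3 tested above. Running the snippet below
under each candidate interpreter shows which one can actually see click:

```python
import importlib.util
import sys

# which interpreter is actually executing this script
print("interpreter:", sys.executable)

# locate the module without importing it; find_spec returns None
# when this interpreter cannot see the package at all
spec = importlib.util.find_spec("click")
print("click found at:", spec.origin if spec else None)
```

If the interpreter named on the pgadmin4 launcher's shebang line prints
None, click needs to be installed for that interpreter specifically
(e.g. with that interpreter's own `-m pip install click`).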

Any help will be appreciated.
thanks
tim

--
https://mail.python.org/mailman/listinfo/python-list


Mixed Python/C debugging

2019-11-30 Thread Skip Montanaro
After at least ten years away from Python's run-time interpreter &
byte code compiler, I'm getting set to familiarize myself with that
again. This will, I think, entail debugging a mixed Python/C
environment. I'm an Emacs user and am aware that GDB since 7.0 has
support for debugging at the Python code level. Is Emacs+GDB my best
bet? Are there any Python IDEs which support C-level breakpoints and
debugging?

Thanks,

Skip
-- 
https://mail.python.org/mailman/listinfo/python-list


Pickle caching objects?

2019-11-30 Thread José María Mateos

Hi,

I just asked this question on the IRC channel but didn't manage to get a 
response, though some people replied with suggestions that expanded this 
question a bit.


I have a program that has to read some pickle files, perform some 
operations on them, and then return. The pickle objects I am reading all 
have the same structure, which consists of a single list with two 
elements: the first one is a long list, the second one is a numpy 
object.


I found out that, after calling that function, the memory taken by the
Python executable (monitored using htop -- the entire thing runs on
Python 3.6 on Ubuntu 16.04, a pretty standard conda installation with a
few packages installed directly using `conda install`) increases in
proportion to the size of the pickle object being read. My intuition is
that that memory should be freed once the function returns.


Does pickle keep a cache of objects in memory after they have been 
returned? I thought that could be the answer, but then someone suggested 
to measure the time it takes to load the objects. This is a script I 
wrote to test this; nothing(filepath) just loads the pickle file, 
doesn't do anything with the output and returns how long it took to 
perform the load operation.


---
import glob
import pickle
import timeit
import os
import psutil

def nothing(filepath):
    start = timeit.default_timer()
    with open(filepath, 'rb') as f:
        _ = pickle.load(f)
    return timeit.default_timer() - start

if __name__ == "__main__":

    filelist = glob.glob('/tmp/test/*.pk')

    for i, filepath in enumerate(filelist):
        print("Size of file {}: {}".format(i, os.path.getsize(filepath)))
        print("First call:", nothing(filepath))
        print("Second call:", nothing(filepath))
        print("Memory usage:", psutil.Process(os.getpid()).memory_info().rss)
        print()
---

This is the output of the second time the script was run, to avoid any 
effects of potential IO caches:


---
Size of file 0: 11280531
First call: 0.1466723980847746
Second call: 0.10044755204580724
Memory usage: 49418240

Size of file 1: 8955825
First call: 0.07904054620303214
Second call: 0.07996074995025992
Memory usage: 49831936

Size of file 2: 43727266
First call: 0.37741047400049865
Second call: 0.38176894187927246
Memory usage: 49758208

Size of file 3: 31122090
First call: 0.271301960805431
Second call: 0.27462846506386995
Memory usage: 49991680

Size of file 4: 634456686
First call: 5.526095286011696
Second call: 5.558765463065356
Memory usage: 539324416

Size of file 5: 3349952658
First call: 29.50982437795028
Second call: 29.461691531119868
Memory usage: 3443597312

Size of file 6: 9384929
First call: 0.0826977719552815
Second call: 0.08362263604067266
Memory usage: 3443597312

Size of file 7: 422137
First call: 0.0057482069823890924
Second call: 0.005949910031631589
Memory usage: 3443597312

Size of file 8: 409458799
First call: 3.562588643981144
Second call: 3.6001368327997625
Memory usage: 3441451008

Size of file 9: 44843816
First call: 0.3913297887245
Second call: 0.398518088972196
Memory usage: 3441451008
---

Notice that memory usage increases noticeably, especially on files 4 and
5, the biggest ones, and doesn't come down as I would expect it to. But
the loading time is the same on the first and second call, so I think I
can rule out any pickle caching mechanism.
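One stdlib way to check whether the bytes are still referenced by live
Python objects (as opposed to pages the allocator is merely holding on
to) is tracemalloc, which counts only live allocations. A minimal
sketch with synthetic data in place of the pickle files:

```python
import tracemalloc

tracemalloc.start()

# stand-in for a loaded pickle: ~10 MB of small objects
data = [bytearray(1024) for _ in range(10_000)]
current, peak = tracemalloc.get_traced_memory()

del data
after_free, _ = tracemalloc.get_traced_memory()
tracemalloc.stop()

# 'current' is large while data is alive; 'after_free' collapses
# once the objects are gone, even if htop's RSS number does not
print("alive:", current, "peak:", peak, "after del:", after_free)
```

If the traced figure drops after the load while RSS stays high, the
objects themselves were freed and the gap is allocator behavior, not a
pickle cache.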


So I guess now my question is: can anyone give me any pointers as to why 
is this happening? Any help is appreciated.


Thanks,

--
José María (Chema) Mateos || https://rinzewind.org/
--
https://mail.python.org/mailman/listinfo/python-list


Re: Pickle caching objects?

2019-11-30 Thread Chris Angelico
On Sun, Dec 1, 2019 at 12:15 PM José María Mateos  wrote:
> print("Memory usage:", psutil.Process(os.getpid()).memory_info().rss)
>
> Notice that memory usage increases noticeably specially on files 4 and
> 5, the biggest ones, and doesn't come down as I would expect it to. But
> the loading time is constant, so I think I can disregard any pickle
> caching mechanisms.
>
> So I guess now my question is: can anyone give me any pointers as to why
> is this happening? Any help is appreciated.
>

I can't answer your question authoritatively, but I can suggest a
place to look. Python's memory allocator doesn't always return memory
to the system when objects are freed up, for various reasons,
including the fact that memory is obtained from the OS in whole pages
and arenas. But it internally knows which parts are in use and which
parts aren't. You're seeing the RSS go down slightly at some points,
which would be the times when entire pages can be released; but other
than that, what you'll end up with is a sort of high-water mark with
lots of unused space inside it.

So what you're seeing isn't actual objects being cached, but just
memory ready to be populated with future objects.
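A small stdlib sketch of that distinction: sys.getallocatedblocks()
reflects the interpreter's internal accounting and drops when objects
are freed, while the peak RSS the OS reports (via the Unix-only
resource module; ru_maxrss is kilobytes on Linux) keeps the
high-water mark:

```python
import gc
import resource
import sys

before = sys.getallocatedblocks()
data = [bytearray(64) for _ in range(100_000)]  # ~6 MB of small objects
during = sys.getallocatedblocks()

del data
gc.collect()
after = sys.getallocatedblocks()

# the interpreter's own block count falls back down after the del...
print("blocks before/during/after:", before, during, after)

# ...but the peak RSS the OS has seen never shrinks
print("peak RSS (kB):", resource.getrusage(resource.RUSAGE_SELF).ru_maxrss)
```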

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Pickle caching objects?

2019-11-30 Thread Richard Damon
On 11/30/19 5:05 PM, José María Mateos wrote:
> Hi,
>
> I just asked this question on the IRC channel but didn't manage to get
> a response, though some people replied with suggestions that expanded
> this question a bit.
>
> I have a program that has to read some pickle files, perform some
> operations on them, and then return. The pickle objects I am reading
> all have the same structure, which consists of a single list with two
> elements: the first one is a long list, the second one is a numpy object.
>
> I found out that, after calling that function, the memory taken by the
> Python executable (monitored using htop -- the entire thing runs on
> Python 3.6 on an Ubuntu 16.04, pretty standard conda installation with
> a few packages installed directly using `conda install`) increases in
> proportion to the size of the pickle object being read. My intuition
> is that that memory should be free upon exiting.
>
> Does pickle keep a cache of objects in memory after they have been
> returned? I thought that could be the answer, but then someone
> suggested to measure the time it takes to load the objects. This is a
> script I wrote to test this; nothing(filepath) just loads the pickle
> file, doesn't do anything with the output and returns how long it took
> to perform the load operation.
>

> Notice that memory usage increases noticeably specially on files 4 and
> 5, the biggest ones, and doesn't come down as I would expect it to.
> But the loading time is constant, so I think I can disregard any
> pickle caching mechanisms.
>
> So I guess now my question is: can anyone give me any pointers as to
> why is this happening? Any help is appreciated.
>
> Thanks,
>
Python likely doesn't return the memory it has gotten from the OS back
to the OS just because it isn't using it at the moment. This is actually
very common behavior, as getting new memory from the OS is somewhat
expensive, and memory released will commonly be needed again shortly.

There is also the fact that to return memory, a block needs to be
totally unused, and it isn't hard for a few small pieces to still be
left in use.

You are asking for information about how much memory Python has
gotten from the OS, which is different from how much it is actively
using: when objects go away, their memory is returned to the free pool
INSIDE Python, to be used for other requests before asking the OS for more.

-- 
Richard Damon

-- 
https://mail.python.org/mailman/listinfo/python-list


"Don't install on the system Python"

2019-11-30 Thread John Ladasky
Long-time Ubuntu user here.

For years, I've read warnings about not installing one's personal stack of 
Python modules on top of the system Python.  It is possible to corrupt the OS, 
or so I've gathered.

Well, I've never heeded this advice, and so far nothing bad has happened to me. 
 I don't like Anaconda, or virtual environments in general.  I don't like 
heavyweight IDE's.  I like to be able to type "python3" at the command prompt 
and be sure what I'll be getting.  I have multiple user accounts on a system 
that I manage, and I want every user account to have access to the same modules.

Maybe the modules that I require are safe to install on the system Python, I'm 
not sure.  My must-haves are mostly scientific computing and data management 
modules: Numpy, Scipy, Scikit-learn, Matplotlib, Pandas, Biopython, and 
Tensorflow.  I also use PyQt5 from time to time.

Can anyone provide concrete examples of problems arising from installing 
modules on top of the system Python?  Am I courting disaster?
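As a concrete way to see where the two package worlds live, this
stdlib-only sketch prints the distro-managed site-packages directory
(which apt/dpkg owns, and which a root `pip install` also writes into)
next to the per-user directory that `pip install --user` uses:

```python
import site
import sysconfig

# directory owned by the distro's package manager; 'sudo pip install'
# writes here too, which is how pip and apt can clobber each other's files
print("system site-packages:", sysconfig.get_paths()["purelib"])

# per-user directory used by 'pip install --user'; safe from apt,
# but (unlike the system directory) per-account rather than shared
print("user site-packages:", site.getusersitepackages())
```

For the shared-across-accounts requirement, a virtual environment in a
world-readable location (e.g. somewhere under /opt -- that path is just
an assumption) gives every user the same modules without touching the
interpreter apt depends on.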

Thanks for your advice.
-- 
https://mail.python.org/mailman/listinfo/python-list