Return

2021-12-07 Thread vani arul
Hey There,
Can someone help me understand how a Python function can return a value
without using return in the code?
It is not about explicit or implicit function calls.


Thanks
Vani
-- 
https://mail.python.org/mailman/listinfo/python-list


For a hierarchical project, the EXE file generated by "pyinstaller" does not start.

2021-12-07 Thread Mohsen Owzar
Hi all, 

I have a problem with "pyinstaller".
When I compile a single Python file, an EXE file is created in the "dist" 
directory, with which I can start the program and the GUI appears after a few 
seconds.
But when I try to compile my project with "pyinstaller
Relais_LastDauerTester.py" and generate an EXE file from it, an ".exe" file is
also created in the "dist" directory, but nothing happens when I run that
.exe file, whether from Windows Explorer with a double click or from the DOS
shell.
The mouse pointer changes briefly to an hourglass and then back, and nothing
happens.
The only difference between these two programs is that the first is just a 
single file and my project is structured hierarchically. I tried to demonstrate 
the structure of the project below with two "DIR" DOS commands.
In PyCharm you only need to run the top file "Relais_LastDauerTester.py", so I 
used this file for "pyinstaller".
But it looks like I have to give (I believe) an extra argument for a 
hierarchical project.
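For example, something along these lines (the extra arguments below are only
a guess at what a package-structured project might need, not a confirmed fix):

```
# --paths adds the package directory to the module search path,
# --debug=all makes the bootloader report what goes wrong at startup
pyinstaller --clean --paths=Relais_LastDauerTester --debug=all Relais_LastDauerTester.py
```

Running the result from an already-open cmd window (rather than a double
click) also keeps any traceback visible.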
I googled all weekend to find a workable solution. But all I found was for 
converting a ".py" file and not a project like mine.
Some suggestions said to add a "pause" or "input()" command to the code
to prevent the DOS shell from disappearing too quickly, which is out of the
question for me.
Do you have any idea what it could be?
I thank you in advance for any useful advice that could help me.

Greeting
Mohsen

PS 
I work on a PC with 
1201 INFO: PyInstaller: 4.7 
1202 INFO: Python: 3.9.5 
1282 INFO: Platform: Windows-10-10.0.18363-SP0 

Project Structure from PyCharm
&&& 
...\SW\Relais_LastDauerTester_V0.5 
&&& 
 .idea 
 Logfiles 
 Relais_LastDauerTester 
276 Relais_LastDauerTester.py 
 Screenshotfiles 
405 settings.ini 

&&& 
...\SW\Relais_LastDauerTester_V0.5\Relais_LastDauerTester 
&&& 
9’308  GPIOControl.py 
90’618   GUI_View.py 
998 main.py 
28’625  TestControl.py 
269  __init__.py 
   __pycache__ 


Simplified project files with the "import" lines 
*** 
Relais_LastDauerTester.py
*** 
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
from Relais_LastDauerTester.main import main

if __name__ == "__main__":
    main()

*** 
main.py 
*** 
import sys

from PyQt5.QtCore import *
from PyQt5.QtWidgets import QApplication
from .GUI_View import MainWindow

def main():
    app = QApplication(sys.argv)

    window = MainWindow()
    window.show()

    sys.exit(app.exec_())

if __name__ == '__main__':
    main()

*** 
GUI_View.py 
*** 
import sys
import subprocess

import PyQt5.QtGui as qtg
from PyQt5.QtWidgets import (QLabel, QPushButton, QLineEdit, QCheckBox,
                             QWidget, QVBoxLayout, QHBoxLayout, QGridLayout,
                             QDialog, QFileDialog)
from .TestControl import *

class MainWindow(QWidget):
    def __init__(self):
        super().__init__()

    def createMainWindow(self):
        ...

    def exitMainWindow(self):
        ...

    def ChangeToPrefWindow(self):
        self.prefwindow.show()
        self.hide()

class PrefWindow(QWidget):
    def __init__(self, parent=None):
        super().__init__()
        self.parent = parent
        ...
        self.createPrefWindow()

    def ChangeToMainWindow(self):
        ...

    def createPrefWindow(self):
        ...

class CustomLineEdit(QLineEdit):
    clicked = pyqtSignal()

    def mousePressEvent(self, QMouseEvent):
        self.clicked.emit()

class Keypad_Window_New(QDialog):
    def __init__(self, num=0, parent=None):
        super().__init__(parent)
        self.parent = parent
        ...

***
TestControl.py
***
from PyQt5.QtCore import *
from .GPIOControl import GPIOControl

class WorkerSignals(QObject):
    signal_Update_Label = pyqtSignal()

class TestControl(QRunnable):
    signals = WorkerSignals()

    def __init__(self, parent=None):
        super().__init__()
        self.parent = parent
        ...

***
GPIOControl.py
***
class GPIOControl:
    def my_print(self, args):
        if print_allowed == 1:
            print(args)

    def __init__(self):
-- 
https://mail.python.org/mailman/listinfo/python-list


Urllib.request vs. Requests.get

2021-12-07 Thread Julius Hamilton
Hey,

I am currently working on a simple program which scrapes text from webpages
via a URL, then segments it (with Spacy).

I’m trying to refine my program to use just the right tools for the job,
for each of the steps.

Requests.get works great, but I’ve seen people use urllib.request.urlopen()
in some examples. It appealed to me because it seemed lower level than
requests.get, so it just makes the program feel leaner and purer and more
direct.

However, requests.get works fine on this url:

https://juno.sh/direct-connection-to-jupyter-server/

But urllib returns a “403 forbidden”.

Could anyone please comment on what the fundamental differences are between
urllib vs. requests, why this would happen, and if urllib has any option to
prevent this and get the page source?

Thanks,
Julius
-- 
https://mail.python.org/mailman/listinfo/python-list


Odd locale error that has disappeared on reboot.

2021-12-07 Thread Chris Green
I have a very short Python program that runs on one of my Raspberry
Pis to collect temperatures from a 1-wire sensor and write them to a
database:-

#!/usr/bin/python3
#
# read temperature from 1-wire sensor and store in database with date and time
#
import sqlite3
import time

ftxt = str(open("/sys/bus/w1/devices/28-01204e1e64c3/w1_slave").read(100))
temp = (float(ftxt[ftxt.find("t=") + 2:])) / 1000
#
# insert date, time and temperature into the database
#
tdb = sqlite3.connect("/home/chris/.cfg/share/temperature/temperature.db")
cr = tdb.cursor()
dt = time.strftime("%Y-%m-%d %H:%M")
cr.execute(
    "INSERT INTO temperatures (DateTime, Temperature) VALUES(?, round(?, 2))",
    (dt, temp),
)
tdb.commit()
tdb.close()

It's run by cron every 10 minutes.


At 03:40 last night it suddenly started throwing the following error every
time it ran:-

Fatal Python error: initfsencoding: Unable to get the locale encoding
LookupError: unknown encoding: UTF-8

Current thread 0xb6f8db40 (most recent call first):
Aborted

Running the program from the command line produced the same error.
Restarting the Pi system has fixed the problem.


What could have caused this?  I certainly wasn't around at 03:40! :-)
There aren't any automatic updates enabled on the system, the only 
thing that might have been going on was a backup as that Pi is also
my 'NAS' with a big USB drive connected to it.  The backups have been
running without problems for more than a year.  Looking at the system
logs shows that a backup was started at 03:35 so I suppose that *could*
have provoked something but I fail to understand how.

-- 
Chris Green
·
-- 
https://mail.python.org/mailman/listinfo/python-list


HTML extraction

2021-12-07 Thread Julius Hamilton
Hey,

Could anyone please comment on the purest way simply to strip HTML tags
from the internal text they surround?

I know Beautiful Soup is a convenient tool, but I’m interested to know what
the most minimal way to do it would be.

People say you usually don’t use Regex for a second order language like
HTML, so I was thinking about using xpath or lxml, which seem like very
pure, universal tools for the job.

I did find an example for doing this with the re module, though.

Would it be fair to say that to just strip the tags, Regex is fine, but you
need to build a tree-like object if you want the ability to select which
nodes to keep and which to discard?

Can xpath / lxml do that?

What are the chief differences between xpath / lxml and Beautiful Soup?

Thanks,
Julius
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: HTML extraction

2021-12-07 Thread Chris Angelico
On Wed, Dec 8, 2021 at 4:55 AM Julius Hamilton
 wrote:
>
> Hey,
>
> Could anyone please comment on the purest way simply to strip HTML tags
> from the internal text they surround?
>
> I know Beautiful Soup is a convenient tool, but I’m interested to know what
> the most minimal way to do it would be.

That's definitely the best and most general way, and would still be my
first thought most of the time.

> People say you usually don’t use Regex for a second order language like
> HTML, so I was thinking about using xpath or lxml, which seem like very
> pure, universal tools for the job.
>
> I did find an example for doing this with the re module, though.
>
> Would it be fair to say that to just strip the tags, Regex is fine, but you
> need to build a tree-like object if you want the ability to select which
> nodes to keep and which to discard?

Obligatory reference:

https://stackoverflow.com/questions/1732348/regex-match-open-tags-except-xhtml-self-contained-tags

> Can xpath / lxml do that?
>
> What are the chief differences between xpath / lxml and Beautiful Soup?
>

I've never directly used lxml, mainly because bs4 offers all the same
advantages and more, with about the same costs. However, if you're
looking for a no-external-deps option, Python *does* include an HTML
parser in the standard library:

https://docs.python.org/3/library/html.parser.html

If your purpose is extremely simple (like "strip tags, search for
text"), then it should be easy enough to whip up something using that
module. No external deps, not a lot of code, pretty straight-forward.
On the other hand, if you're trying to do an "HTML to text"
conversion, you'd probably need to be aware of which tags are
block-level and which are inline content, so that (for instance)
"<p>Hello</p><p>world</p>" would come out as two separate
paragraphs of text, whereas the same thing with <span> tags would become
just "Hello world". But for the most part, handle_data will probably
do everything you need.
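A minimal sketch of that simple "strip tags" case, using only the stdlib
module above (TagStripper is a made-up name, not part of html.parser):

```python
from html.parser import HTMLParser

class TagStripper(HTMLParser):
    """Collects only the text between tags, discarding the tags themselves."""
    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        # called for each run of text outside of tags
        self.chunks.append(data)

    def text(self):
        return "".join(self.chunks)

stripper = TagStripper()
stripper.feed("<p>Hello</p> <p>world</p>")
print(stripper.text())  # Hello world
```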

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Urllib.request vs. Requests.get

2021-12-07 Thread Chris Angelico
On Wed, Dec 8, 2021 at 4:51 AM Julius Hamilton
 wrote:
>
> Hey,
>
> I am currently working on a simple program which scrapes text from webpages
> via a URL, then segments it (with Spacy).
>
> I’m trying to refine my program to use just the right tools for the job,
> for each of the steps.
>
> Requests.get works great, but I’ve seen people use urllib.request.urlopen()
> in some examples. It appealed to me because it seemed lower level than
> requests.get, so it just makes the program feel leaner and purer and more
> direct.
>
> However, requests.get works fine on this url:
>
> https://juno.sh/direct-connection-to-jupyter-server/
>
> But urllib returns a “403 forbidden”.
>
> Could anyone please comment on what the fundamental differences are between
> urllib vs. requests, why this would happen, and if urllib has any option to
> prevent this and get the page source?
>

*Fundamental* differences? Not many. The requests module is designed
to be easy to use, whereas urllib is designed to be basic and simple.
Not really a fundamental difference, but perhaps indicative.

I'd recommend doing the query with requests, and seeing exactly what
headers are being sent. Most likely, there'll be something that you
need to add explicitly when using urllib that the server is looking
for (maybe a user agent or something). Requests uses Python's logging
module to configure everything, so it should be a simple matter of
setting log level to DEBUG and sending the request.
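An offline way to see part of the difference: urllib's default opener
announces a Python User-Agent, which is exactly the kind of header a server
may key on (a sketch; the exact version string depends on your Python):

```python
import urllib.request

# The default opener identifies itself as "Python-urllib/X.Y";
# some servers/CDNs reject that User-Agent outright with a 403.
opener = urllib.request.build_opener()
print(opener.addheaders)   # e.g. [('User-agent', 'Python-urllib/3.9')]

# Setting the header on the Request overrides the default.
req = urllib.request.Request(
    "https://juno.sh/direct-connection-to-jupyter-server/",
    headers={"User-Agent": "Mozilla/5.0 (compatible; sketch/1.0)"},
)
print(req.get_header("User-agent"))   # Mozilla/5.0 (compatible; sketch/1.0)
```

(Request normalizes header names, so the stored key is "User-agent".)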

TBH though, I'd just recommend using requests, unless you specifically
need to avoid the dependency :)

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Urllib.request vs. Requests.get

2021-12-07 Thread Paul Bryan
Cloudflare, for whatever reason, appears to be rejecting the `User-
Agent` header that urllib is providing:`Python-urllib/3.9`. Using a
different `User-Agent` seems to get around the issue:

import urllib.request

req = urllib.request.Request(
    url="https://juno.sh/direct-connection-to-jupyter-server/",
    method="GET",
    headers={"User-Agent": "Workaround/1.0"},
)

res = urllib.request.urlopen(req)

Paul

On Tue, 2021-12-07 at 12:35 +0100, Julius Hamilton wrote:
> Hey,
> 
> I am currently working on a simple program which scrapes text from
> webpages
> via a URL, then segments it (with Spacy).
> 
> I’m trying to refine my program to use just the right tools for the
> job,
> for each of the steps.
> 
> Requests.get works great, but I’ve seen people use
> urllib.request.urlopen()
> in some examples. It appealed to me because it seemed lower level
> than
> requests.get, so it just makes the program feel leaner and purer and
> more
> direct.
> 
> However, requests.get works fine on this url:
> 
> https://juno.sh/direct-connection-to-jupyter-server/
> 
> But urllib returns a “403 forbidden”.
> 
> Could anyone please comment on what the fundamental differences are
> between
> urllib vs. requests, why this would happen, and if urllib has any
> option to
> prevent this and get the page source?
> 
> Thanks,
> Julius

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: For a hierarchical project, the EXE file generated by "pyinstaller" does not start.

2021-12-07 Thread Chris Angelico
On Wed, Dec 8, 2021 at 4:49 AM Mohsen Owzar  wrote:
> ***
> GPIOControl.py
> ***
> class GPIOControl:
>     def my_print(self, args):
>         if print_allowed == 1:
>             print(args)
>
>     def __init__(self):

Can't much help with your main question as I don't do Windows, but one
small side point: Instead of having a my_print that checks if printing
is allowed, you can conditionally replace the print function itself.

if not print_allowed:
    def print(*args, **kwargs): pass
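A runnable sketch of that replacement (print_allowed here is a stand-in for
the flag in the original code):

```python
import builtins

print_allowed = False  # stand-in for the original module-level flag

if not print_allowed:
    # Shadow print for this module only; builtins.print is untouched.
    def print(*args, **kwargs):
        pass

print("this produces no output")       # swallowed by the shadow
builtins.print("this still prints")    # the real built-in is still reachable
```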

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Return

2021-12-07 Thread Roland Mueller via Python-list
Hello

ti 7. jouluk. 2021 klo 19.47 vani arul (arulvan...@gmail.com) kirjoitti:

> Hey There,
> Can someone help to understand how a python function can return value with
> using return in the code?
> It is not not about explicit or implicit function call.
>
>
Not sure whether I understood your question: I have a simple example about
return.
  * f() and h() explicitly return something
  * g() and h() return None

BTW, the Python documentation tool pydoc also has an article about return.

#!/usr/bin/python

def f():
    return 42
def g():
    pass
def h():
    return

if __name__ == "__main__":
    print(f"f(): {f()}")
    print(f"g(): {g()}")
    print(f"h(): {h()}")

Result:
f(): 42
g(): None
h(): None

Pydoc:
$ pydoc return

BR,
Roland


> Thanks
> Vani
> --
> https://mail.python.org/mailman/listinfo/python-list
>
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: HTML extraction

2021-12-07 Thread Roland Mueller via Python-list
Hello,

ti 7. jouluk. 2021 klo 20.08 Chris Angelico (ros...@gmail.com) kirjoitti:

> On Wed, Dec 8, 2021 at 4:55 AM Julius Hamilton
>  wrote:
> >
> > Hey,
> >
> > Could anyone please comment on the purest way simply to strip HTML tags
> > from the internal text they surround?
> >
> > I know Beautiful Soup is a convenient tool, but I’m interested to know
> what
> > the most minimal way to do it would be.
>
> That's definitely the best and most general way, and would still be my
> first thought most of the time.
>
> > People say you usually don’t use Regex for a second order language like
> > HTML, so I was thinking about using xpath or lxml, which seem like very
> > pure, universal tools for the job.
> >
> > I did find an example for doing this with the re module, though.
> >
> > Would it be fair to say that to just strip the tags, Regex is fine, but
> you
> > need to build a tree-like object if you want the ability to select which
> > nodes to keep and which to discard?
>
> Obligatory reference:
>
>
> https://stackoverflow.com/questions/1732348/regex-match-open-tags-except-xhtml-self-contained-tags
>
> > Can xpath / lxml do that?
> >
> > What are the chief differences between xpath / lxml and Beautiful Soup?
> >
>
> I've never directly used lxml, mainly because bs4 offers all the same
> advantages and more, with about the same costs. However, if you're
> looking for a no-external-deps option, Python *does* include an HTML
> parser in the standard library:
>
>
But isn't bs4 only for SOAP content?
Can bs4 or lxml cope with HTML code that does not comply with XML as the
following fragment?

A
B


BR,
Roland


> https://docs.python.org/3/library/html.parser.html
>
> If your purpose is extremely simple (like "strip tags, search for
> text"), then it should be easy enough to whip up something using that
> module. No external deps, not a lot of code, pretty straight-forward.
> On the other hand, if you're trying to do an "HTML to text"
> conversion, you'd probably need to be aware of which tags are
> block-level and which are inline content, so that (for instance)
> "<p>Hello</p><p>world</p>" would come out as two separate
> paragraphs of text, whereas the same thing with <span> tags would become
> just "Hello world". But for the most part, handle_data will probably
> do everything you need.
>
> ChrisA
> --
> https://mail.python.org/mailman/listinfo/python-list
>
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: HTML extraction

2021-12-07 Thread Chris Angelico
On Wed, Dec 8, 2021 at 7:55 AM Roland Mueller
 wrote:
>
> Hello,
>
> ti 7. jouluk. 2021 klo 20.08 Chris Angelico (ros...@gmail.com) kirjoitti:
>>
>> On Wed, Dec 8, 2021 at 4:55 AM Julius Hamilton
>>  wrote:
>> >
>> > Hey,
>> >
>> > Could anyone please comment on the purest way simply to strip HTML tags
>> > from the internal text they surround?
>> >
>> > I know Beautiful Soup is a convenient tool, but I’m interested to know what
>> > the most minimal way to do it would be.
>>
>> That's definitely the best and most general way, and would still be my
>> first thought most of the time.
>>
>> > People say you usually don’t use Regex for a second order language like
>> > HTML, so I was thinking about using xpath or lxml, which seem like very
>> > pure, universal tools for the job.
>> >
>> > I did find an example for doing this with the re module, though.
>> >
>> > Would it be fair to say that to just strip the tags, Regex is fine, but you
>> > need to build a tree-like object if you want the ability to select which
>> > nodes to keep and which to discard?
>>
>> Obligatory reference:
>>
>> https://stackoverflow.com/questions/1732348/regex-match-open-tags-except-xhtml-self-contained-tags
>>
>> > Can xpath / lxml do that?
>> >
>> > What are the chief differences between xpath / lxml and Beautiful Soup?
>> >
>>
>> I've never directly used lxml, mainly because bs4 offers all the same
>> advantages and more, with about the same costs. However, if you're
>> looking for a no-external-deps option, Python *does* include an HTML
>> parser in the standard library:
>>
>
> But isn't bs4 only for SOAP content?
> Can bs4 or lxml cope with HTML code that does not comply with XML as the 
> following fragment?
>
> A
> B
> 
>
> BR,
> Roland
>

Check out the bs4 docs for some of the things you can do with it :)
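As one concrete data point, even the stdlib parser mentioned earlier in the
thread copes with that kind of tag soup; the unclosed <li> fragment below is
a stand-in example of non-XML-compliant HTML (a sketch of the stdlib
behaviour, not of bs4 itself):

```python
from html.parser import HTMLParser

class TagLogger(HTMLParser):
    """Records start tags and text so we can see the parser's view of tag soup."""
    def __init__(self):
        super().__init__()
        self.events = []

    def handle_starttag(self, tag, attrs):
        self.events.append(("start", tag))

    def handle_data(self, data):
        self.events.append(("data", data))

p = TagLogger()
p.feed("<ul><li>A<li>B</ul>")   # the <li> tags are never closed: still parses
print(p.events)
```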

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Return

2021-12-07 Thread dn via Python-list
On 08/12/2021 09.45, Roland Mueller via Python-list wrote:
> Hello
> 
> ti 7. jouluk. 2021 klo 19.47 vani arul (arulvan...@gmail.com) kirjoitti:
> 
>> Hey There,
>> Can someone help to understand how a python function can return value with
>> using return in the code?
>> It is not not about explicit or implicit function call.
>>
>>
> Not sure whether I understood your question: I have a simple example about
> return.
>   * f() and h() explicitely return something
>   * g() and h() return None
> 
> BTW, also Python documentation tool pydoc has an article about return.
> 
> #!/usr/bin/python
> 
> def f():
>     return 42
> def g():
>     pass
> def h():
>     return
> 
> if __name__ == "__main__":
>     print(f"f(): {f()}")
>     print(f"g(): {g()}")
>     print(f"h(): {h()}")
> 
> Result:
> f(): 42
> g(): None
> h(): None
> 
> Pydoc:
> $ pydoc return
> 
> BR,
> Roland
> 
> 
>> Thanks
>> Vani


plus Python, unlike some other languages, allows us to return multiple
values, either as a collection or as an implied-tuple:

def function_list():
    a_list = [ i for i in range( 9 ) ]
    return a_list

def function_multiples():
    a = 1
    b = 2
    c = 3
    return a, b, c

thus:

x, y, z = function_multiples()
-- 
Regards,
=dn
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Return

2021-12-07 Thread Chris Angelico
On Wed, Dec 8, 2021 at 9:04 AM dn via Python-list
 wrote:
>
> plus Python, unlike some other languages, allows us to return multiple
> values, either as a collection or as an implied-tuple:
>
> def function_list():
>     a_list = [ i for i in range( 9 ) ]
>     return a_list
>
> def function_multiples():
>     a = 1
>     b = 2
>     c = 3
>     return a, b, c
>
> thus:
>
> x, y, z = function_multiples()

Not sure what you mean by "implied". You're returning a tuple formed
from three values, and then unpacking that into three destinations.
Since, at a technical level, a function can only return one value,
returning a tuple is the standard way to return more than one thing.
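The tuple becomes visible as soon as you skip the unpacking step (reusing
function_multiples from above):

```python
def function_multiples():
    # the commas after "return" pack the values into one tuple
    a, b, c = 1, 2, 3
    return a, b, c

result = function_multiples()     # no unpacking: one tuple
print(type(result))               # <class 'tuple'>
print(result)                     # (1, 2, 3)

x, y, z = function_multiples()    # unpacking that tuple into three names
print(x + y + z)                  # 6
```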

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Return

2021-12-07 Thread dn via Python-list
On 08/12/2021 11.07, Chris Angelico wrote:
> On Wed, Dec 8, 2021 at 9:04 AM dn via Python-list
>  wrote:
>>
>> plus Python, unlike some other languages, allows us to return multiple
>> values, either as a collection or as an implied-tuple:
>>
>> def function_list():
>>     a_list = [ i for i in range( 9 ) ]
>>     return a_list
>>
>> def function_multiples():
>>     a = 1
>>     b = 2
>>     c = 3
>>     return a, b, c
>>
>> thus:
>>
>> x, y, z = function_multiples()
> 
> Not sure what you mean by "implied". You're returning a tuple formed
> from three values, and then unpacking that into three destinations.
> Since, at a technical level, a function can only return one value,
> returning a tuple is the standard way to return more than one thing.


How's it going @Chris?
(we have another 'overseas-speaker' scheduled for next week's PUG
meeting. Rodrigo Girão Serrão will 'beam-in' from Portugal. He presented
at EuroPython. His topic with us will be "Python's Objects" - firstly at
an intro-level for people who've not built a custom-class previously,
and thereafter to more-advanced folk - details upon request...)


Back to tuples: You are (strictly) correct.

As we both know, a lot of people think that the parentheses 'make' the
tuple, whereas in fact it is the commas that do.
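A quick illustration of that point:

```python
t1 = 1, 2, 3          # the commas build the tuple
t2 = (1, 2, 3)        # the parentheses only group it
single = 42,          # one trailing comma: a 1-tuple
grouped = (42)        # parentheses alone: still just an int

print(t1 == t2)            # True
print(type(single))        # <class 'tuple'>
print(type(grouped))       # <class 'int'>
```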

I'd estimate the OP to be in a learning situation/converting-over from
another language, so allowance for lax terminology/definitions.
-- 
Regards,
=dn
-- 
https://mail.python.org/mailman/listinfo/python-list