Re: Embedding Python in C

2019-07-24 Thread Jesse Ibarra
On Tuesday, July 23, 2019 at 2:20:45 PM UTC-6, Stefan Behnel wrote:
> Jesse Ibarra schrieb am 22.07.19 um 18:12:
> > On Saturday, July 20, 2019 at 1:11:51 PM UTC-6, Stefan Behnel wrote:
> >> Jesse Ibarra schrieb am 20.07.19 um 04:12:
> >>> Sorry, I am not understanding. Smalltalk VW 8.3 does not support Python.
> >>> I can only call Python code through the C/Python API.
> >>
> >> Ok, but that doesn't mean you need to write code that uses the C-API of
> >> Python. All you need to do is:
> >>
> >> 1) Start up a CPython runtime from Smalltalk (see the embedding example I
> >> posted) and make it import an extension module that you write (e.g. using
> >> the "inittab" mechanism [1]).
> >>
> >> 2) Use Cython to implement this extension module to provide an interface
> >> between your Smalltalk code and your Python code. Use the Smalltalk C-API
> >> from your Cython code to call into Smalltalk and exchange data with it.
> >>
> >> Now you can execute Python code inside of Smalltalk and make it call back and
> >> forth into your Smalltalk code, through the interface module. And there is
> >> no need to use the Python C-API for anything beyond step 1), which is about
> >> 5 lines of Python C-API code if you write it yourself. Everything else can
> >> be implemented in Cython and Python.
> >>
> >> Stefan
> >>
> >>
> >> [1]
> >> https://docs.python.org/3/extending/embedding.html?highlight=PyImport_appendinittab#extending-embedded-python
> > 
> > This cleared up so much, @Stefan, thank you. I just need some clarification 
> > if you don't mind.
> >  
> > In (1), when you say "import an extension module that you write", do you 
> > mean the module that gets imported with "import emb"? Is that going to be 
> > written in Cython or as a standalone .c file?
> 
> Yes. In Cython.
> 
> 
> > In (2), what do you mean when you say "Use the Smalltalk C-API from your 
> > Cython code to call into Smalltalk and exchange data with it."?
> 
> Not sure what part exactly you are asking about, but you somehow have to
> talk to the Smalltalk runtime from your Cython/Python code if you want to
> interact with it. I assume that this will be done through the C API that
> Smalltalk provides.
> 
> Just in case, did you check if there is already a bridge for your purpose?
> A quick web search turned up this; not sure if it helps.
> 
> https://github.com/ObjectProfile/PythonBridge
> 
> Stefan

Yes, I think that can be done through the "inittab" mechanism you recommended. 
I will try it. 

Yes, I am trying to implement the same thing as shown in the GitHub link, but 
instead of Pharo I will use the VisualWorks IDE.
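
As a first sketch, the step-2 interface module might start out like the
following (illustrative names only - I'm guessing at the shape here; the real
module would declare the VisualWorks C entry points with `cdef extern` and call
them instead of the stub dictionary below):

```python
# emb.pyx -- sketch of the step-2 interface module (hypothetical names).
# In real code the Smalltalk side would be declared via something like
#   cdef extern from "smalltalk_capi.h": ...
# and invoked instead of the _handlers stub.

_handlers = {}

def register(name, func):
    """Expose a Python callable to the Smalltalk side under `name`."""
    _handlers[name] = func

def call_from_smalltalk(name, *args):
    """Entry point the Smalltalk runtime would invoke (via the C API)."""
    return _handlers[name](*args)

def call_into_smalltalk(selector, *args):
    """Reverse direction: would forward to the VisualWorks C API."""
    raise NotImplementedError("wrap the Smalltalk C API here")
```

The Smalltalk side only needs the ~5 lines of C-API embedding code Stefan
mentioned (inittab registration plus interpreter start-up); everything above
stays in Cython/Python.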

Thank you
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Ebook for Understanding Financial Accounting, Canadian Edition by Burnley

2019-07-24 Thread helu . careers
On Friday, January 19, 2018 at 2:42:17 PM UTC-5, vaibhavb...@gmail.com wrote:
> I'm looking for the PDF file for Understanding Financial Accounting, Canadian Edition.

Hey buddy, have you got the PDF file and the solutions manual for this 
textbook?


Re: Hermetic environments

2019-07-24 Thread Eli the Bearded
In comp.lang.python, DL Neil wrote:
> Is Python going 'the right way' with virtual environments?
...
> Am I 'getting away with it', perhaps because my work-pattern doesn't 
> touch some 'gotcha' or show-stopper?
> 
> Why, if so much of 'the rest of the world' is utilising "containers", 
> both for mobility and for growth, is the Python eco-system following its 
> own path?

I'm going to speculate that even inside containers, some people will use
multiple virtual environments. It could be that the app and the
monitoring for that app are developed by different branches of the
company and have different requirements.

But I think a lot of the use of virtual environments is in dev
environments where a developer wants to have multiple closed settings
for doing work. On the dev branch, newer versions of things can be
tested, but a production environment can be retained for hotfixes to
deployed code.

Or because the different microservices being used are each at different
update levels and need their own environments.

> Is there something about dev (and ops) using Python venvs which is a 
> significant advantage over a language-independent (even better: an 
> OpSys-independent) container?

I'm not a big fan of language-dependent virtual environments because
they only capture the needs of a particular language. Very often code
works with things that are outside of that language, even if it is only
system libraries.

Elijah
--
interested in hearing other voices on this


Re: Proper shebang for python3

2019-07-24 Thread Barry Scott



> On 23 Jul 2019, at 00:13, Cameron Simpson  wrote:
> 
> Why _any_ modern system has anything other than /bin in the base install 
> escapes me.  In the distant past /sbin and a distinct /usr with its own bin 
> had their values, but these days? Bah!


On Fedora it's all in /usr these days, with symlinks to the old locations.

$ ls -l / | grep usr
lrwxrwxrwx.   1 root root     7 Feb 11 13:47 bin -> usr/bin/
lrwxrwxrwx.   1 root root     7 Feb 11 13:47 lib -> usr/lib/
lrwxrwxrwx.   1 root root     9 Feb 11 13:47 lib64 -> usr/lib64/
lrwxrwxrwx.   1 root root     8 Feb 11 13:47 sbin -> usr/sbin/
drwxr-xr-x.  13 root root  4096 May  5 17:22 usr/
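
From Python you can check for this merged layout in a few lines - a sketch
that only tests the bin -> usr/bin symlink shown above, nothing
Fedora-specific:

```python
# Detect the /usr merge: on merged systems /bin is a symlink that
# resolves inside /usr, exactly as the listing above shows.
import os

def is_usr_merged(root="/"):
    bindir = os.path.join(root, "bin")
    target = os.path.join(root, "usr", "bin")
    return os.path.islink(bindir) and \
        os.path.realpath(bindir) == os.path.realpath(target)
```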

You can read about why here:
https://www.freedesktop.org/wiki/Software/systemd/TheCaseForTheUsrMerge/
and also https://fedoraproject.org/wiki/Features/UsrMove


Barry



Re: Proper shebang for python3

2019-07-24 Thread Cameron Simpson

On 24Jul2019 21:36, Barry Scott wrote:
> > On 23 Jul 2019, at 00:13, Cameron Simpson wrote:
> > Why _any_ modern system has anything other than /bin in the base
> > install escapes me.  In the distant past /sbin and a distinct /usr
> > with its own bin had their values, but these days? Bah!
>
> On Fedora it's all in /usr these days, with symlinks to the old
> locations.
>
> $ ls -l / | grep usr
> lrwxrwxrwx.   1 root root     7 Feb 11 13:47 bin -> usr/bin/
> lrwxrwxrwx.   1 root root     7 Feb 11 13:47 lib -> usr/lib/
> lrwxrwxrwx.   1 root root     9 Feb 11 13:47 lib64 -> usr/lib64/
> lrwxrwxrwx.   1 root root     8 Feb 11 13:47 sbin -> usr/sbin/
> drwxr-xr-x.  13 root root  4096 May  5 17:22 usr/

That is some progress, hooray. Then there's just sbin -> bin to go. They 
could merge lib and lib64 too if they embedded an architecture signature 
in library filenames.

> You can read about why here:
> https://www.freedesktop.org/wiki/Software/systemd/TheCaseForTheUsrMerge/
> and also https://fedoraproject.org/wiki/Features/UsrMove

Thanks for these references.

Cheers,
Cameron Simpson 


Re: Hermetic environments

2019-07-24 Thread Cameron Simpson

On 24Jul2019 19:59, Eli the Bearded <*@eli.users.panix.com> wrote:
> In comp.lang.python, DL Neil wrote:
> > Is Python going 'the right way' with virtual environments?
> ...
> > Am I 'getting away with it', perhaps because my work-pattern doesn't
> > touch some 'gotcha' or show-stopper?
> >
> > Why, if so much of 'the rest of the world' is utilising "containers",
> > both for mobility and for growth, is the Python eco-system following its
> > own path?
>
> I'm going to speculate that even inside containers, some people will use
> multiple virtual environments. It could be that the app and the
> monitoring for that app are developed by different branches of the
> company and have different requirements.
>
> But I think a lot of the use of virtual environments is in dev
> environments where a developer wants to have multiple closed settings
> for doing work. On the dev branch, newer versions of things can be
> tested, but a production environment can be retained for hotfixes to
> deployed code.
>
> Or because the different microservices being used are each at different
> update levels and need their own environments.

Yeah. In a recent former life we were maintaining some APIs with many 
releases (point releases every sprint, for those APIs changing that 
sprint). The customers could stick with older API revisions if they had 
special requirements (or simply lacked their own dev time to verify a 
successful forward version shift), so there were multiple historic 
versions in play in the field.

> > Is there something about dev (and ops) using Python venvs which is a
> > significant advantage over a language-independent (even better: an
> > OpSys-independent) container?
>
> I'm not a big fan of language-dependent virtual environments because
> they only capture the needs of a particular language. Very often code
> works with things that are outside of that language, even if it is only
> system libraries.

The advantage of the language-dependent venv is that it is self 
contained. You can update, say, the Python component of the project 
independently of some adjacent other language. This might all be 
contained within a larger environment which itself is snapshotted for 
release purposes.

In my current life I'm working on a project with a Python API and a 
JavaScript front end. A release involves building a clean versioned 
directory on the server machine; it contains a specific Python venv 
inside it; the upper layer is the encapsulation. Example:

  STAGING -> app/version2
  app-version
    venv/
    webapp/javascript-here...
    ...
  app-version2
    venv/
    webapp/javascript-here...
    ...

I still want the venv because it encapsulates the Python arena's state 
of play.
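
For the record, building one of these versioned release directories, venv
included, only takes a few lines with the stdlib `venv` module - a sketch with
made-up paths, using `with_pip=False` just to keep the example quick:

```python
# Build app-<version>/ containing a fresh venv/ and a webapp/ area,
# then repoint the STAGING symlink at the new release.
import os
import venv

def make_release(root, version):
    app_dir = os.path.join(root, "app-%s" % version)
    os.makedirs(os.path.join(app_dir, "webapp"), exist_ok=True)
    # the per-release venv, as in the layout above
    venv.EnvBuilder(with_pip=False).create(os.path.join(app_dir, "venv"))
    staging = os.path.join(root, "STAGING")
    if os.path.lexists(staging):
        os.remove(staging)
    os.symlink(app_dir, staging)  # STAGING -> app-<version>
    return app_dir
```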


Cheers,
Cameron Simpson 


Re: Proper shebang for python3

2019-07-24 Thread Michael Torrie
On 7/24/19 4:20 PM, Cameron Simpson wrote:
> That is some progress, hooray. Then there's just sbin -> bin to go. 

I suppose in the olden days sbin was for static binaries, usable in
single user mode for recovering the system without the main drive
mounted.  In more recent times, binaries that are mostly applicable to
the super user go there.  I don't see why you would want to merge those.
A normal user rarely has need of much in /sbin.  Already /bin has way
too much stuff in it (although I don't see any other way to practically
do it without ridiculous PATHs searching all over the disk).

Having said that, I note that on my CentOS 7 workstation, sbin seems to
be in the path by default. So that negates my argument I suppose.
Although I might have made that change myself.


Re: Proper shebang for python3

2019-07-24 Thread Cameron Simpson

On 24Jul2019 20:24, Michael Torrie wrote:
> On 7/24/19 4:20 PM, Cameron Simpson wrote:
> > That is some progress, hooray. Then there's just sbin -> bin to go.
>
> I suppose in the olden days sbin was for static binaries, usable in
> single user mode for recovering the system without the main drive
> mounted.

Yep. Happy days.

> In more recent times, binaries that are mostly applicable to
> the super user go there.  I don't see why you would want to merge those.
> A normal user rarely has need of much in /sbin.

I say unto you "ifconfig". And, frankly, _any_ sbin command which can 
be meaningfully run as nonroot, particularly for reporting.

I have always found this "oh it's for root" distinction pretty vacuous, 
and outstandingly annoying when basic system querying stuff isn't in the 
default $PATH because of this.  Maybe it is because I've been a sysadmin 
for many years, but most physical machines are personal machines these 
days anyway - we're our own sysadmins.

> Already /bin has way too much stuff in it (although I don't see any
> other way to practically do it without ridiculous PATHs searching all
> over the disk).

Like modules with many names, the number of things in /bin or /usr/bin 
is generally irrelevant. Nobody does an "ls" in there without expecting 
a fair amount of stuff - like imports, we invoke commands by name. Who 
_cares_ how many names there are?

> Having said that, I note that on my CentOS 7 workstation, sbin seems to
> be in the path by default. So that negates my argument I suppose.
> Although I might have made that change myself.

I have historically had to add it to my $PATH on most platforms.

Cheers,
Cameron Simpson 


Re: Hermetic environments

2019-07-24 Thread DL Neil

On 25/07/19 10:31 AM, Cameron Simpson wrote:
> On 24Jul2019 19:59, Eli the Bearded <*@eli.users.panix.com> wrote:
> > In comp.lang.python, DL Neil wrote:
> > > Is Python going 'the right way' with virtual environments?
> > ...
> > > Am I 'getting away with it', perhaps because my work-pattern doesn't
> > > touch some 'gotcha' or show-stopper?
> > >
> > > Why, if so much of 'the rest of the world' is utilising "containers",
> > > both for mobility and for growth, is the Python eco-system following its
> > > own path?
> >
> > I'm going to speculate that even inside containers, some people will use
> > multiple virtual environments. It could be that the app and the
> > monitoring for that app are developed by different branches of the
> > company and have different requirements.
> >
> > But I think a lot of the use of virtual environments is in dev
> > environments where a developer wants to have multiple closed settings
> > for doing work. On the dev branch, newer versions of things can be
> > tested, but a production environment can be retained for hotfixes to
> > deployed code.
> >
> > Or because the different microservices being used are each at different
> > update levels and need their own environments.
>
> Yeah. In a recent former life we were maintaining some APIs with many
> releases (point releases every sprint, for those APIs changing that
> sprint). The customers could stick with older API revisions if they had
> special requirements (or simply lacked their own dev time to verify a
> successful forward version shift), so there were multiple historic
> versions in play in the field.
>
> > > Is there something about dev (and ops) using Python venvs which is a
> > > significant advantage over a language-independent (even better: an
> > > OpSys-independent) container?
> >
> > I'm not a big fan of language-dependent virtual environments because
> > they only capture the needs of a particular language. Very often code
> > works with things that are outside of that language, even if it is only
> > system libraries.
>
> The advantage of the language-dependent venv is that it is self
> contained. You can update, say, the Python component of the project
> independently of some adjacent other language. This might all be
> contained within a larger environment which itself is snapshotted for
> release purposes.
>
> In my current life I'm working on a project with a Python API and a
> JavaScript front end. A release involves building a clean versioned
> directory on the server machine; it contains a specific Python venv
> inside it; the upper layer is the encapsulation. Example:
>
>   STAGING -> app/version2
>   app-version
>     venv/
>     webapp/javascript-here...
>     ...
>   app-version2
>     venv/
>     webapp/javascript-here...
>     ...
>
> I still want the venv because it encapsulates the Python arena's state
> of play.

Do you use a VCS, eg git or Subversion? Thus, have you disciplined 
yourself to check in work, and subsequently NOT to work on your (old) 
copy, but to check out a fresh copy?

Similarly, rather than adding a second environment or updates to an 
existing (prod) VM, it is a 'discipline' to make a copy of the 
appropriate VM and work with the (new) copy! (either upgrading some 
component of the source, the Python eco-system, or the OpSys)

Copying/backing-up a VM is a rapid operation. So, why would you prefer 
to set up a second and separate py-env within an existing environment? 
(and lose the "hermetic seal" - face the version collisions/combinations 
both philosophies seek to avoid)

NB a problem I experienced yesterday was that VMs are differentiated by 
versionNR and date - but in 'client-language' the date was not when the 
VM was created, but when she last used it. Users!

(nothing is perfect - and yes, I found it by 'relative addressing' the dates)
--
Regards =dn


Re: Hermetic environments

2019-07-24 Thread Cameron Simpson

On 25Jul2019 15:45, DL Neil wrote:
> > In my current life I'm working on a project with a Python API and a
> > JavaScript front end. A release involves building a clean versioned
> > directory on the server machine; it contains a specific Python venv
> > inside it; the upper layer is the encapsulation. Example:
> >
> >  STAGING -> app/version2
> >  app-version
> >    venv/
> >    webapp/javascript-here...
> >    ...
> >  app-version2
> >    venv/
> >    webapp/javascript-here...
> >    ...
> >
> > I still want the venv because it encapsulates the Python arena's
> > state of play.
>
> Do you use a VCS, eg git or Subversion? Thus, have you disciplined
> yourself to check in work, and subsequently NOT to work on your (old)
> copy, but to check out a fresh copy?

Well of course. Mercurial or git. That's what makes the deploy process 
fairly easy, one deploys from a revision with a release tag.

> Similarly, rather than adding a second environment or updates to an
> existing (prod) VM, it is a 'discipline' to make a copy of the
> appropriate VM and work with the (new) copy! (either upgrading some
> component of the source, the Python eco-system, or the OpSys)
>
> Copying/backing-up a VM is a rapid operation. So, why would you prefer
> to set up a second and separate py-env within an existing environment?

1: it is smaller and much lower overhead. How many _live_ VMs are you 
keeping around?

2: no VMs in play at this end.

Cheers,
Cameron Simpson 


Re: Hermetic environments

2019-07-24 Thread DL Neil

On 25/07/19 4:43 PM, Cameron Simpson wrote:
> On 25Jul2019 15:45, DL Neil wrote:
> > > In my current life I'm working on a project with a Python API and a
> > > JavaScript front end. A release involves building a clean versioned
> > > directory on the server machine; it contains a specific Python venv
> > > inside it; the upper layer is the encapsulation. Example:
> > >
> > >  STAGING -> app/version2
> > >  app-version
> > >    venv/
> > >    webapp/javascript-here...
> > >    ...
> > >  app-version2
> > >    venv/
> > >    webapp/javascript-here...
> > >    ...
> > >
> > > I still want the venv because it encapsulates the Python arena's
> > > state of play.
> >
> > Do you use a VCS, eg git or Subversion? Thus, have you disciplined
> > yourself to check in work, and subsequently NOT to work on your (old)
> > copy, but to check out a fresh copy?
>
> Well of course. Mercurial or git. That's what makes the deploy process
> fairly easy, one deploys from a revision with a release tag.
>
> > Similarly, rather than adding a second environment or updates to an
> > existing (prod) VM, it is a 'discipline' to make a copy of the
> > appropriate VM and work with the (new) copy! (either upgrading some
> > component of the source, the Python eco-system, or the OpSys)
> >
> > Copying/backing-up a VM is a rapid operation. So, why would you prefer
> > to set up a second and separate py-env within an existing environment?
>
> 1: it is smaller and much lower overhead. How many _live_ VMs are you
> keeping around?

A pyenv is significantly smaller, but if we're including non-Python 
components in some system, 'size' increases accordingly. However, the 
'saving' in either copy-time or storage-space is not significant.

The important point here is that the time taken in copying the env 
might be considerably less than installing and verifying a new version 
of Python, pip-ing, and upgrading non-Py components; and thereafter the 
copy and upgrade tasks are likely insignificant within the next 
sprint's-worth of dev-effort!

A VM provides the "hermetic" insulation under discussion, with no 
more/less effort than any/either of the v-env-s. (and includes the same 
advantages for the wider environment of the application)

> 2: no VMs in play at this end.

Dozens - haven't counted (until you "made me look"...). Every 
production system, for every client (don't currently have any new 
clients who are only in 'dev').

NB Am assuming by "keeping around" you are asking about 'prod' versions, 
ie actually running systems.

However, every 'prod' is backed by (usually) two others: 'dev' and 
(acceptance/user) 'test' (some of the latter are also installed on 
clients' premises or on their networks/cloud). Then, there are some 
extra 'dev's which probably feature a PoC experiment or the like, 
completed, but yet to make it out of the client's "backlog".

OTOH if the client has not 'returned' for some time, it's not as if 
their VMs are sitting 'spooled-up' on one of my machines!

A quick 'find' shows quantities > three-digits. However, there are 
likely duplicates because of an active task to migrate between disks 
and archive old projects' 'paper work' - a task which I am assiduously 
finishing... Yeah right!

Now you've worried me. Am I keeping track? Should I be putting VMs 
(which in turn likely contain a git client and tree) into some central 
git tree? Do I need to take a tea-break???

NB as mentioned earlier, VMs include dates and versionNRs in their 
name/label (cf DNS, etc) and such appears on docs, even invoices.

Oh, that reminds me that I have an additional VM for (most of) my 
clients - which holds business docs, planning docs, maybe a SCRUM/Kanban 
board, mind-maps, etc, etc.

Also, if I'm supervising an in-house or external dev-team, we might 
have 'minor-version' VMs (which we refer to as "snapshots", even though 
this means something else in some VM-speak) of 'where-we-are-up-to' 
part-projects (the equivalent of me taking a whole branch from vcs for 
code-review purposes) - these don't tend to be 'kept' long.

NB I don't work exclusively in Python, eg DB and web work, so my 
definition of "VM" is a lot wider than Python venv-s! Also, I'm not a 
full-time programmer.

In the Python world, back when starting to embrace Py3, I became 
nervous of the idea that I was developing in a separate Python version 
to that which ran aspects of my (Fedora-Linux) machines. VirtualBox 
offered an escape from such; and one VM led to another... As the saying 
goes: what were once habits became vices!

BTW this is still a question: why the (venv) Python-only way cf 'other 
ideas for containerisation'; it is not intended as an exercise in 
self-justification or one-upmanship!

--
Regards =dn