On Thu, 3 Feb 2022 at 23:16, Greg Ewing wrote:
>
> On 4/02/22 5:07 am, Albert-Jan Roskam wrote:
> > On Feb 3, 2022 17:01, Dan Stromberg wrote:
> >
> > What profiler do you recommend
>
> If it runs for that long, just measuring execution time should
> be enough. Python comes with a "timeit" module to help with that,
> or you can use whatever your OS provides.
On 4/02/22 5:07 am, Albert-Jan Roskam wrote:
On Feb 3, 2022 17:01, Dan Stromberg wrote:
What profiler do you recommend
If it runs for that long, just measuring execution time should
be enough. Python comes with a "timeit" module to help with
that, or you can use whatever your OS provides.
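For example, a minimal timing harness with the timeit module might look like this (a sketch; check_digits is a made-up stand-in for whatever you actually want to measure):

    import timeit

    def check_digits():
        # Stand-in for the real workload.
        return sum(i % 11 for i in range(10000))

    # Run the callable a fixed number of times and report total seconds.
    print(timeit.timeit(check_digits, number=1000))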
On Feb 3, 2022 17:01, Dan Stromberg wrote:
> The best answer to "is this slower on
> Pypy" is probably to measure.
> Sometimes it makes sense to rewrite C
> extension modules in pure python for pypy.
Hi Dan, thanks. What profiler do you recommend?
The best answer to "is this slower on Pypy" is probably to measure.
Sometimes it makes sense to rewrite C extension modules in pure python for
pypy.
On Thu, Feb 3, 2022 at 7:33 AM Albert-Jan Roskam wrote:
>Hi,
>I inherited a fairly large codebase that I need to maintain. Parts
of the program (e.g.
a modulo 11 digit check) are implemented in Cython. Should I use pure
Python instead when using Pypy? I compiled the Cython modules for pypy and
they work, but I'm afraid they might just slow things down.
Thanks!
Albert-Jan
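For reference, a pure-Python modulo 11 check digit is a very small function. This is only a sketch with ISBN-10 style weights assumed, since the real Cython routine isn't shown; it's the kind of tight integer loop that PyPy's JIT usually handles well on its own:

    def mod11_check_digit(digits):
        # Weighted modulo 11 check digit (ISBN-10 style weights, chosen as an example).
        weights = range(len(digits) + 1, 1, -1)
        total = sum(w * int(d) for w, d in zip(weights, digits))
        return (11 - total % 11) % 11

    print(mod11_check_digit("12345678"))   # -> 9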
Anthony Flury via Python-list schrieb am 21.12.18 um 09:06:
> I thought I would look at a side by side comparison of CPython, nuitka and
> PyPy
Interesting choice. Why nuitka?
> *The functionality under test*
>
> I have a library (called primelib) which implements a Sieve of Eratosthenes
I thought I would look at a side by side comparison of CPython, nuitka
and PyPy
*The functionality under test*
I have a library (called primelib) which implements a Sieve of
Eratosthenes in pure Python - it was originally written as part of my
Project Euler attempts
Not only does it build
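For readers who don't know primelib, a minimal pure-Python Sieve of Eratosthenes (a sketch, not the primelib code) looks like this, and is exactly the kind of loop PyPy tends to speed up:

    def sieve(limit):
        # Return all primes below `limit` using the Sieve of Eratosthenes.
        is_prime = [True] * limit
        is_prime[0:2] = [False, False]
        for n in range(2, int(limit ** 0.5) + 1):
            if is_prime[n]:
                # Cross off multiples of n, starting at n*n.
                is_prime[n * n::n] = [False] * len(is_prime[n * n::n])
        return [n for n, p in enumerate(is_prime) if p]

    print(sieve(30))   # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]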
On 18 June 2018 at 22:18, Etienne Robillard wrote:
> Hi,
>
> Quick question: Does anyone of you know what is the effect of enabling
> gc.enable() in sitecustomize.py when using PyPy? Can it reduce latency for
> long-lived WSGI applications?
>
gc is enabled by default.
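In other words, calling gc.enable() in sitecustomize.py should be a no-op, because the collector is already on. What you can control is when collections run; whether that helps latency for a long-lived WSGI application depends entirely on the workload. A sketch:

    import gc

    print(gc.isenabled())   # True by default on both CPython and PyPy

    # One possible latency tactic (shown only as a sketch, not a recommendation):
    # suppress automatic collection while handling a request, then collect
    # explicitly between requests.
    gc.disable()
    try:
        pass            # handle the request here
    finally:
        gc.collect()
        gc.enable()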
On 2018-06-18 at 22:47, William ML Leslie wrote:
On 18 June 2018 at 22:18, Etienne Robillard wrote:
Hi,
Quick question: Does anyone of you know what is the effect of enabling
gc.enable() in sitecustomize.py when using PyPy? Can it reduce latency for
long-lived WSGI applications?
gc is enabled by default.
sync with
the schema definition.
peace,
Etienne
On Fri, 8 Jun 2018 at 03:29, Etienne Robillard <tkad...@yandex.com> wrote:
Yo people I'm doing a nightly hacking sprint for django-hotsauce
on pypy
and got some cool bugs I would like to share:
Tracebac
seems to me really similar to https://github.com/zopefoundation/ZEO/pull/96
try to upgrade to ZEO 5.1.2
mauro.
On Fri, 8 Jun 2018 at 03:29, Etienne Robillard wrote:
> Yo people I'm doing a nightly hacking sprint for django-hotsauce on pypy
> and got some cool bugs I would li
Yo people I'm doing a nightly hacking sprint for django-hotsauce on pypy
and got some cool bugs I would like to share:
Traceback (most recent call last):
File "/usr/local/bin/schevo", line 11, in <module>
load_entry_point('libschevo', 'console_scripts',
directory
tests/benchmarks/lib/django_sqlite/myapp # Test app for benchmarking
templates, views, and core django api
tests/benchmarks/lib/django_sqlite/polls # Tutorial app for benchmarking
django orm/sqlite on pypy and cpython
tests/benchmarks/uwsgi # Testsuite for uWSGI
tests/benchmarks/django1
orm with
>> sqlite3 against ZODB databases?
>> I'm interested in comparing raw sqlite3 performance versus ZODB (schevo).
>> I would like to make specific testsuite(s) for benchmarking django 1.11.7,
>> django 2.0, pypy, etc.
>>
>> What do you think?
>>
and measure the speed/latency of the django orm
with sqlite3 against ZODB databases?
I'm interested in comparing raw sqlite3 performance versus ZODB
(schevo). I would like to make specific testsuite(s) for benchmarking
django 1.11.7, django 2.0, pypy, etc.
What do you think?
Etienne
suite(s) for benchmarking
django 1.11.7, django 2.0, pypy, etc.
What do you think?
Etienne
--
Etienne Robillard
tkad...@yandex.com
https://www.isotopesoftware.ca/
On 2018-01-31 05:21, Ned Batchelder wrote:
On 1/30/18 3:58 PM, Etienne Robillard wrote:
Hi Ned,
On 2018-01-30 15:14, Ned Batchelder wrote:
I'm curious what you had to change for PyPy? (Unless it's a Py2/Py3
thing as Chris mentions.)
Please take a look at the changesets:
On 1/30/18 3:58 PM, Etienne Robillard wrote:
Hi Ned,
On 2018-01-30 15:14, Ned Batchelder wrote:
I'm curious what you had to change for PyPy? (Unless it's a Py2/Py3
thing as Chris mentions.)
Please take a look at the changesets:
https://bitbucket.org/tkadm30/libsche
> if os.environ.get('SCHEVO_OPTIMIZE', '0') == '1':
> ...
>
> Anyways, what do you think about the weakref case?
>
I think, without any real facts to justify it, that this sort of thing
is *probably* an unintended compatibility break. So if you c
On 2018-01-30 16:38, Ned Batchelder wrote:
I'm confused by this:
-if os.environ.get('SCHEVO_OPTIMIZE', '1') == '1':
+if os.environ.get('SCHEVO_OPTIMIZE', '1') == True:
I was also curious about this: when does os.environ.get return
anything but a string?
I was probably high when I coded it.
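For the record, os.environ.get returns a string (or whatever default you pass in), so the == True comparison can never succeed:

    import os

    os.environ['SCHEVO_OPTIMIZE'] = '1'
    value = os.environ.get('SCHEVO_OPTIMIZE', '1')
    print(type(value))       # <class 'str'>
    print(value == '1')      # True
    print(value == True)     # False -- a str never compares equal to the bool True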
On 1/30/18 4:08 PM, Chris Angelico wrote:
On Wed, Jan 31, 2018 at 7:58 AM, Etienne Robillard wrote:
Hi Ned,
On 2018-01-30 15:14, Ned Batchelder wrote:
I'm curious what you had to change for PyPy? (Unless it's a Py2/Py3 thing
as Chris mentions.)
Please take a look at the
On Wed, Jan 31, 2018 at 7:58 AM, Etienne Robillard wrote:
> Hi Ned,
>
>
> On 2018-01-30 15:14, Ned Batchelder wrote:
>>
>> I'm curious what you had to change for PyPy? (Unless it's a Py2/Py3 thing
>> as Chris mentions.)
>
>
> Please take a
Hi Ned,
On 2018-01-30 15:14, Ned Batchelder wrote:
I'm curious what you had to change for PyPy? (Unless it's a Py2/Py3
thing as Chris mentions.)
Please take a look at the changesets:
https://bitbucket.org/tkadm30/libschevo/commits/745d1aeab5c6ee0d336790cf13d16f327e10
On 1/30/18 2:35 PM, Etienne Robillard wrote:
Hi,
I managed to patch Schevo and Durus to run under PyPy 5.9. However,
I'm afraid the changes are breaking Python 2.7 compatibility.
I'm curious what you had to change for PyPy? (Unless it's a Py2/Py3
thing as Chris mentions.)
I
Hi Chris,
On 2018-01-30 14:53, Chris Angelico wrote:
If you're supporting Python 3, I don't think there's any problem with
saying "Python 2.7 support ceases as of Schevo v4.0, so if you need Py
2.7 use Schevo 3.x". (It's not as if the old versions will suddenly
cease working or anything.)
On Wed, Jan 31, 2018 at 6:35 AM, Etienne Robillard wrote:
> Hi,
>
> I managed to patch Schevo and Durus to run under PyPy 5.9. However, I'm
> afraid the changes are breaking Python 2.7 compatibility.
>
> I'm not sure how I should distribute my changes to the resp
Hi,
I managed to patch Schevo and Durus to run under PyPy 5.9. However, I'm
afraid the changes are breaking Python 2.7 compatibility.
I'm not sure how I should distribute my changes to the respective projects.
Since I decided to use more PyPy in my Django projects, should I drop
On Thursday 10 November 2016 17:43, breamore...@gmail.com wrote:
> On Thursday, November 10, 2016 at 1:09:31 AM UTC, Steve D'Aprano wrote:
>>
>
> [snipped]
>
> Steven, there is no need to be rude or condescending.
Indeed, and if I thought you were sincere, or knew what you were objecting to,
On Thu, Nov 10, 2016 at 12:16 PM, BartC wrote:
> But now that I was about to use it, another problem. The Ubuntu Python is
> 2.7. The Windows one has both 2.7 and 3.4 (and my IDE can select either).
>
> The bit of code I wanted to run has Py3-style print functions. I tried
> 'import six' as someon
On 10/11/2016 01:16, BartC wrote:
I suppose I can get rid of the prints for the test I wanted to do, or
find out how to do the same thing under Py2 print. Or install Py3 on
Ubuntu, which is a big job and I've no idea how to switch between them.
Some good news, it turned out Ubuntu had both Pyt
On 10/11/2016 00:38, Michael Torrie wrote:
On 11/09/2016 02:10 PM, BartC wrote:
Good point, I use Ubuntu under Windows. It should be child's play,
except... 'sudo apt-get install numpy' or 'python-numpy' doesn't work.
Something is wrong with your setup then. Because both python-numpy and
python3-numpy are in the standard Ubuntu repositories.
On Thu, 10 Nov 2016 11:38 am, Michael Torrie wrote:
> On 11/09/2016 02:10 PM, BartC wrote:
>> Good point, I use Ubuntu under Windows. It should be child's play,
>> except... 'sudo apt-get install numpy' or 'python-numpy' doesn't work.
>
> Something is wrong with your setup then.
Bart has been
On Thu, Nov 10, 2016 at 11:38 AM, Michael Torrie wrote:
> On 11/09/2016 02:10 PM, BartC wrote:
>> Good point, I use Ubuntu under Windows. It should be child's play,
>> except... 'sudo apt-get install numpy' or 'python-numpy' doesn't work.
>
> Something is wrong with your setup then.
Or with his e
On 11/09/2016 02:10 PM, BartC wrote:
> Good point, I use Ubuntu under Windows. It should be child's play,
> except... 'sudo apt-get install numpy' or 'python-numpy' doesn't work.
Something is wrong with your setup then. Because both python-numpy and
python3-numpy are in the standard Ubuntu repositories.
On Thu, Nov 10, 2016 at 8:35 AM, wrote:
> I don't actually use pip much myself, I use Synaptic Package Manager. Unless
> you need a package from the PSF repository that Canonical doesn't have,
> Synaptic should be fine for you. If you want to run the Python3 version of
> pip from the command
On Wednesday, November 9, 2016 at 1:10:34 PM UTC-8, BartC wrote:
> On 09/11/2016 19:44, j...@i...edu wrote:
> Good point, I use Ubuntu under Windows. It should be child's play,
> except... 'sudo apt-get install numpy' or 'python-numpy' doesn't work.
>
> 'pip' doesn't work; it needs to be installed
On 09/11/2016 19:44, jlada...@itu.edu wrote:
On Wednesday, November 9, 2016 at 5:03:30 AM UTC-8, BartC wrote:
On 05/11/2016 17:10, Mr. Wrobel wrote:
1. What I have found is modified python interpreter - pypy -
http://pypy.org that does not require any different approach to develop
your code
On Wednesday, November 9, 2016 at 5:03:30 AM UTC-8, BartC wrote:
> On 05/11/2016 17:10, Mr. Wrobel wrote:
>
> > 1. What I have found is modified python interpreter - pypy -
> > http://pypy.org that does not require any different approach to develop
> > your code.
On 05/11/2016 17:10, Mr. Wrobel wrote:
1. What I have found is modified python interpreter - pypy -
http://pypy.org that does not require any different approach to develop
your code.
2. And: Gpu based computing powered by Nvidia (NumbaPro compiler):
https://developer.nvidia.com/how-to-cuda
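As a rough illustration of the second route, here is a sketch using the open-source Numba JIT (the project the NumbaPro compiler was built on); it assumes numba and numpy are installed and targets the CPU rather than CUDA:

    import numpy as np
    from numba import njit

    @njit                     # JIT-compile this function to machine code
    def total(values):
        s = 0.0
        for v in values:
            s += v * v
        return s

    data = np.arange(1000000, dtype=np.float64)
    print(total(data))        # first call compiles; later calls run at native speed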
On Wed, 9 Nov 2016 06:35 pm, John Ladasky wrote:
[...]
> I work a lot with a package called GROMACS, which does highly iterative
> calculations to simulate the motions of atoms in complex molecules.
> GROMACS can be built to run on a pure-CPU platform (taking advantage of
> multiple cores, if you
On 08.11.16 at 02:23, Steve D'Aprano wrote:
But as far as I know, they [NVidia] are not the only manufacturer of GPUs, and
they
are the only ones who support IEEE 754. So this is *exactly* the situation
I feared: incompatible GPUs with varying support for IEEE 754 making it
difficult or impossible.
On Monday, November 7, 2016 at 5:23:25 PM UTC-8, Steve D'Aprano wrote:
> On Tue, 8 Nov 2016 05:47 am, j...@i...edu wrote:
> > It has been very important for the field of computational molecular
> > dynamics (and probably several other fields) to get floating-point
> > arithmetic working right on GP
On Tue, 8 Nov 2016 05:47 am, jlada...@itu.edu wrote:
> On Saturday, November 5, 2016 at 6:39:52 PM UTC-7, Steve D'Aprano wrote:
>> On Sun, 6 Nov 2016 09:17 am, Mr. Wrobel wrote:
>>
>>
>> I don't have any experience with GPU processing. I expect that it will be
>> useful for some things, but for number-crushing and numeric work, I am
On 11/05/2016 11:10 AM, Mr. Wrobel wrote:
> Hi,
>
> Some skeptics asked me why there is a reason to use Python instead of
> any other "not interpreted" languages, like Objective-C. As my
> explanation, I have answered that there is a lot of useful APIs,
> language is modern, has advanced obje
On Saturday, November 5, 2016 at 8:58:36 PM UTC-4, Steve D'Aprano wrote:
> On Sun, 6 Nov 2016 08:17 am, Ben Bacarisse wrote:
>
> > Steve D'Aprano writes:
>
> >> Here's the same program in Objective C:
> >>
> >> --- cut ---
> >>
> >> #import <Foundation/Foundation.h>
> >>
> >> int main (int argc, const char * argv[])
> >
On Saturday, November 5, 2016 at 6:39:52 PM UTC-7, Steve D'Aprano wrote:
> On Sun, 6 Nov 2016 09:17 am, Mr. Wrobel wrote:
>
>
> I don't have any experience with GPU processing. I expect that it will be
> useful for some things, but for number-crushing and numeric work, I am
> concerned that GPUs r
"Mr. Wrobel" writes:
> ...
> However the same skeptics told me that, ok, we believe that it is true,
> however the code execution is much slower than any other compiled
> language.
However, in many cases "code execution speed" is not the primary concern.
In my experience, "development speed" is fa
Steve D'Aprano writes:
> On Sun, 6 Nov 2016 08:17 am, Ben Bacarisse wrote:
>
>> Steve D'Aprano writes:
>
>>> Here's the same program in Objective C:
>>>
>>> --- cut ---
>>>
>>> #import <Foundation/Foundation.h>
>>>
>>> int main (int argc, const char * argv[])
>>> {
>>> NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
On Sun, 6 Nov 2016 09:17 am, Mr. Wrobel wrote:
> However, the most important is the second part of my question.
>
> What do you think about using GPU processing or pypy?
I don't have any experience with GPU processing. I expect that it will be
useful for some things, but for number-crushing and numeric work, I am
On Sun, 6 Nov 2016 08:17 am, Ben Bacarisse wrote:
> Steve D'Aprano writes:
>> Here's the same program in Objective C:
>>
>> --- cut ---
>>
>> #import <Foundation/Foundation.h>
>>
>> int main (int argc, const char * argv[])
>> {
>> NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
>> NSLog (@"Hello World!");
>> [pool drain];
>> return 0;
>> }
"Mr. Wrobel" writes:
> ... However, the most important is the second part of my question.
>
> What do you think about using GPU processing or pypy?
Sorry, I don't have enough experience of them to offer any useful advice.
--
Ben.
nd then not much) but I
think this is the closest comparison.
Well, indeed. However, the most important is the second part of my question.
What do you think about using GPU processing or pypy?
Steve D'Aprano writes:
> On Sun, 6 Nov 2016 04:10 am, Mr. Wrobel wrote:
>
>> Hi,
>>
>> Some skeptics asked me why there is a reason to use Python instead of
>> any other "not interpreted" languages, like Objective-C.
>
> Here's the "Hello World" program in Python:
>
> --- cut ---
>
> print("Hello World")
On Sun, 6 Nov 2016 04:10 am, Mr. Wrobel wrote:
> Hi,
>
> Some skeptics asked me why there is a reason to use Python instead of
> any other "not interpreted" languages, like Objective-C.
Here's the "Hello World" program in Python:
--- cut ---
print("Hello World")
--- cut ---
Here's the same
s to speed up python's code.
1. What I have found is modified python interpreter - pypy -
http://pypy.org that does not require any different approach to develop
your code.
2. And: Gpu based computing powered by Nvidia (NumbaPro compiler):
https://developer.nvidia.com/how-to-cuda-pytho
ected
from the Linux or OS X version), but unfortunately there's no 64-bit PyPy
implementation for Windows.
Replacing abs(z) by sqrt(r*r+i*i) avoids the problem and is even faster still.
Interesting! Now beware that a "real" hypot function does approximately
the following:
def m
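That is, something along these lines (a sketch of the general scaling trick, not necessarily the exact definition that followed): factor out the larger magnitude so the squares can't overflow or underflow.

    import math

    def my_hypot(x, y):
        # Scale by the larger magnitude so x*x + y*y cannot overflow/underflow.
        x, y = abs(x), abs(y)
        if x < y:
            x, y = y, x
        if x == 0.0:
            return 0.0
        r = y / x
        return x * math.sqrt(1.0 + r * r)

    print(my_hypot(3e200, 4e200))   # 5e+200, where sqrt(x*x + y*y) would overflow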
On 20-9-2016 22:38, Irmen de Jong wrote:
> Hi,
>
> I've stumbled across a peculiar performance issue with Pypy across some
> different platforms. It was very visible in some calculation-heavy code
> that I wrote that uses Python's complex number type to calc
On 21-9-2016 1:20, Chris Kaynor wrote:
>
> Regarding the performance decrease, it may be worthwhile to push the report
> to a PyPy specific forum - a PyPy developer will probably see it here, but
> you may get a faster response on a forum specific to PyPy.
You're right.
I do
On Tue, Sep 20, 2016 at 3:59 PM, Chris Angelico wrote:
> On Wed, Sep 21, 2016 at 8:50 AM, Irmen de Jong wrote:
> >> Dunno if it's the cause or not, but you're running a 32-bit PyPy on a
> >> 64-bit Windows. I could well imagine that that has some odd
> >> significance.
On Wed, Sep 21, 2016 at 8:50 AM, Irmen de Jong wrote:
>> Dunno if it's the cause or not, but you're running a 32-bit PyPy on a
>> 64-bit Windows. I could well imagine that that has some odd
>> significance.
>>
>> ChrisA
>
>
> Perhaps. Though I can
z
>>
>> The test code I've been using is here:
>> https://gist.github.com/irmen/c6b12b4cf88a6a4fcf5ff721c7089078
>>
>> Test results:
>> function: mandel / iterations
>> Mac mini, Pypy 5.4.1 (64-bit): 0.81 sec / 0.65 sec
>> Linux, Pypy 5.1 (32-bit): 1.06 sec / 0.64 sec
ng is here:
> https://gist.github.com/irmen/c6b12b4cf88a6a4fcf5ff721c7089078
>
> Test results:
> function: mandel / iterations
> Mac mini, Pypy 5.4.1 (64-bit): 0.81 sec / 0.65 sec
> Linux, Pypy 5.1 (32-bit): 1.06 sec / 0.64 sec
> Windows, Pypy 5.4.1 (32-bit
Hi,
I've stumbled across a peculiar performance issue with Pypy across some
different platforms. It was very visible in some calculation-heavy code
that I wrote that uses Python's complex number type to calculate the
well-known Mandelbrot set. Pypy running the code on my Windows
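For context, the inner loop being timed is essentially the classic escape-time iteration over complex numbers; this is a simplified sketch, not the actual benchmark from the gist linked elsewhere in the thread:

    def mandel(c, max_iter=256):
        # Return the iteration count at which |z| escapes, or max_iter if it never does.
        z = 0j
        for n in range(max_iter):
            z = z * z + c
            if abs(z) > 2.0:     # abs(z) is the call whose speed varied between builds
                return n
        return max_iter

    print(mandel(-1 + 0j), mandel(1 + 0j))   # 256 (bounded) and 2 (escapes quickly)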
On Thursday, June 9, 2016 at 1:36:56 AM UTC-4, Lawrence D’Oliveiro wrote:
> On Wednesday, June 8, 2016 at 10:39:00 PM UTC+12, rocky wrote:
>
> > In addition to the example programs which give the classic arithmetic
> > expression evaluator, I now include the beginnings of a full Python 2.6
> > lan
s however long been neglected. John stopped working on it around 2002 or
so without having put it on PyPI. Since it wasn't its own package, you had
to find some other project (with possibly varying versions of the program) and
copy the spark.py file from that or download the "0.
On 08/06/2016 19:32, rocky wrote:
..
Sorry that should have been 1998 which would make more sense for the 7th
conference if the 1st one was around 2001. I've corrected the date in [1]
https://pypi.python.org/pypi/spark_parser/1.3.0
The automated tests in the package just don't catch
On Wednesday, June 8, 2016 at 10:39:00 PM UTC+12, rocky wrote:
> In addition to the example programs which give the classic arithmetic
> expression evaluator, I now include the beginnings of a full Python 2.6
> language.
Does anybody bother with LR(k) parsers any more?
Robin Becker wrote:
"Python was conceived in the late 1980s[1] and its implementation was
started in December 1989[2] by Guido van Rossum at CWI in the Netherlands"
so that Aycock's paper must have been at the -1st Python Conference
When the time machine was invented, Guido thought it would
On Wednesday, June 8, 2016 at 12:50:57 PM UTC-4, Robin Becker wrote:
> On 08/06/2016 11:38, rocky wrote:
> ...
> > [1] https://pypi.python.org/pypi/spark_parser/1.3.0
> ...
> the page above shows one can implement a time travel machine as it boldly
> states
>
> "The original version o
On 08/06/2016 11:38, rocky wrote:
...
[1] https://pypi.python.org/pypi/spark_parser/1.3.0
...
the page above shows one can implement a time travel machine as it boldly states
"The original version of this was written by John Aycock and was described in
his 1988 paper: “Compiling Li
For those who are interested in experimenting with parser systems in Python,
there has been one around for a long while. But in my opinion it was a bit
lacking in graded examples demonstrating how to use it.
So recently, in spark_parser 1.3.0 [1], I've beefed up the examples a
little. In a
using PyPy to increase speed.
I found http://project-trains.tumblr.com/post/102076598295/multiprocessing-pypy,
but there it says that the procedure only works with CPython 3.4.
I wonder if there is any clean, direct way to do this.
Appreciate any help.
On Mon, Nov 2, 2015 at 8:27 AM, LJ wrote:
> I'm wondering if there is a way in which I can use PyPy to solve just the
> subproblems in parallel, and return to CPython for the overall routines.
>
You could. What you'd have would be a setup where the subprocess is
utterly independent
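A sketch of that kind of setup (the worker file name and the JSON protocol are invented for illustration): the CPython driver, which can keep using gurobipy, serializes a subproblem, runs it under a separate PyPy process, and reads the result back.

    # driver script, run under CPython
    import json, subprocess

    subproblem = {"coeffs": [1.0, 2.5, 3.0]}
    proc = subprocess.Popen(["pypy", "solve_subproblem.py"],   # assumes pypy is on PATH
                            stdin=subprocess.PIPE, stdout=subprocess.PIPE)
    out, _ = proc.communicate(json.dumps(subproblem).encode())
    print(json.loads(out))

    # solve_subproblem.py, run under PyPy, would read JSON on stdin and write
    # a JSON result on stdout, e.g.:
    #   import json, sys
    #   data = json.load(sys.stdin)
    #   print(json.dumps({"total": sum(data["coeffs"])}))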
By the way, I'm using Python 2.7.
Thanks
eneck of my procedure.
I'm wondering if there is a way in which I can use PyPy to solve just the
subproblems in parallel, and return to CPython for the overall routines. The
reason behind this is that Gurobipy (the Python interface for the Gurobi
optimization solver) is not compatible with PyPy, a
In a message of Tue, 29 Sep 2015 18:58:19 -0700, LJ writes:
>Hi All,
>
>I use gurobipy to model a large scale optimization problem. Is there a way to
>use pypy with the gurobipy library? Has anyone done this?
>
>Thanks.
I don't think so. I think that gurobipy depends
Hi All,
I use gurobipy to model a large scale optimization problem. Is there a way to
use pypy with the gurobipy library? Has anyone done this?
Thanks.
Read all about it http://morepypy.blogspot.co.uk/2015/05/cffi-10-beta-1.html
--
My fellow Pythonistas, ask not what our language can do for you, ask
what you can do for our language.
Mark Lawrence
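For anyone who hasn't tried it, ABI-level cffi use looks roughly like this (a sketch; assumes cffi 1.0+ is installed and a POSIX C library is available):

    from cffi import FFI

    ffi = FFI()
    ffi.cdef("int atoi(const char *s);")   # declare the C function we want
    C = ffi.dlopen(None)                    # load the standard C library (POSIX only)
    print(C.atoi(b"42"))                    # -> 42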
Steven D'Aprano writes:
> An interesting point of view: threading is harmful because it removes
> determinism from your program.
> http://radar.oreilly.com/2007/01/threads-considered-harmful.html
Concurrent programs are inherently nondeterministic because they respond
to i/o events that can happ
rticle were threads, multiprocessing, old-fashioned async (callback
hell), and asyncio (still contorted and relies on Python 3 coroutines).
If you eliminate threads because of data sharing and asyncio because you
need Python 2 compatibility, you're left with multiprocessing if you
want to avo
On Thu, Feb 26, 2015 at 4:16 AM, Mark Lawrence wrote:
> IIRC the underlying JET engine was replaced by SQL Server years ago. Maybe
> not the best technlogy in the world, but you'd be hard pushed to do worse
> than JET :)
The way I understood it, MS Access could connect to a variety of
database ba
On 25/02/2015 17:00, Ian Kelly wrote:
On Wed, Feb 25, 2015 at 9:37 AM, Mark Lawrence wrote:
On 25/02/2015 06:02, Ian Kelly wrote:
Is the name of that database program "Microsoft Access" perchance?
Are you referring to the GUI, the underlying database engine, both, or what?
The engine. I
On Wed, Feb 25, 2015 at 9:37 AM, Mark Lawrence wrote:
> On 25/02/2015 06:02, Ian Kelly wrote:
>>
>>
>> Is the name of that database program "Microsoft Access" perchance?
>>
>
> Are you referring to the GUI, the underlying database engine, both, or what?
The engine. In theory it supports concurren
On 25/02/2015 06:02, Ian Kelly wrote:
Is the name of that database program "Microsoft Access" perchance?
Are you referring to the GUI, the underlying database engine, both, or what?
--
My fellow Pythonistas, ask not what our language can do for you, ask
what you can do for our language.
Mark Lawrence
On Tue, Feb 24, 2015 at 10:54 PM, Chris Angelico wrote:
> On Wed, Feb 25, 2015 at 4:46 PM, Marko Rauhamaa wrote:
>> Marcos Almeida Azevedo :
>>
>>> Synchronized methods in Java really make programming life simpler.
>>> But I think it is standard practice to avoid this if there is a
>>> lighter a
On Wed, Feb 25, 2015 at 1:46 PM, Marko Rauhamaa wrote:
> Marcos Almeida Azevedo :
>
> > Synchronized methods in Java really make programming life simpler.
> > But I think it is standard practice to avoid this if there is a
> > lighter alternative as synchronized methods are slow. Worst case I
>
On Wed, Feb 25, 2015 at 4:46 PM, Marko Rauhamaa wrote:
> Marcos Almeida Azevedo :
>
>> Synchronized methods in Java really make programming life simpler.
>> But I think it is standard practice to avoid this if there is a
>> lighter alternative as synchronized methods are slow. Worst case I
>> use
Marcos Almeida Azevedo :
> Synchronized methods in Java really make programming life simpler.
> But I think it is standard practice to avoid this if there is a
> lighter alternative as synchronized methods are slow. Worst case I
> used double-checked locking.
I have yet to see code whose perform
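For comparison, the closest Python equivalent of a synchronized method is an explicit lock around the critical section; a minimal sketch:

    import threading

    class Counter(object):
        def __init__(self):
            self._lock = threading.Lock()
            self.value = 0

        def increment(self):
            # Rough analogue of a Java synchronized method: one thread at a time.
            with self._lock:
                self.value += 1

    counter = Counter()
    threads = [threading.Thread(target=counter.increment) for _ in range(10)]
    for t in threads: t.start()
    for t in threads: t.join()
    print(counter.value)   # 10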
On Tue Feb 24 2015 at 3:32:47 PM Paul Rubin wrote:
> Ryan Stuart writes:
> Sure, the shared memory introduces the possibility of some bad errors,
> I'm just saying that I've found that by staying with a certain
> straightforward style, it doesn't seem difficult in practice to avoid
> those errors.
Chris Angelico :
> Actually, you can quite happily have multiple threads messing with the
> underlying file descriptors, that's not a problem. (Though you will
> tend to get interleaved output. But if you always produce output in
> single blocks of text that each contain one line with a trailing
>
>
> But as you say, there are good reasons for wishing to stick to CPython over
> Jython, IronPython, PyPy or Stackless, let alone less mature or experimental
> implementations. I get that.
Yes, not everyone is running Python 3.5 in production, I totally get
that :) But if I'
Paul Rubin wrote:
>> With threads in a single process, this isn't a problem. They all
>> access the same memory space, so they can all share state. As soon as
>> you go to separate processes, these considerations become serious.
>
> Right, that's a limitation of processes compared to threads.
>
people are not using the bleeding edge version of Python, and even
those who do, aren't usually using it in production. There are still plenty
of people using Python 2.3 in production, and even a few using 1.5.
But as you say, there are good reasons for wishing to stick to CPython over
Jython, IronPython, PyPy or Stackless, let alone less mature or experimental
implementations. I get that.
On Tue, Feb 24, 2015 at 4:27 PM, Paul Rubin wrote:
>> Sure, your code might not be making any mutations (that you know of),
>> but malloc definitely is [1], and that's just the tip of the iceberg.
>> Other things like buffers for stdin and stdout, DNS resolution etc.
>> all have the same issue.
>
Ryan Stuart writes:
> I'm not sure what else to say really. It's just a fact of life that
> Threads by definition run in the same memory space and hence always
> have the possibility of nasty unforeseen problems. They are unforeseen
> because it is extremely difficult (maybe impossible?) to try an
Chris Angelico writes:
> So, you would have to pass code to the other process, probably. What
> about this:
> y = 4
> other_thread_queue.put(lambda x: x*y)
the y in the lambda is a free variable that's a reference to the
surrounding mutable context, so that's at best dubious. You could use:
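Presumably something like early binding, where a default argument captures y's current value (a sketch, not the original suggestion):

    import queue

    other_thread_queue = queue.Queue()
    y = 4
    # The default argument captures y's *current value*; rebinding y later won't matter.
    other_thread_queue.put(lambda x, y=y: x * y)
    y = 99
    f = other_thread_queue.get()
    print(f(3))   # 12, not 297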
Ethan Furman wrote:
> On 02/22/2015 11:41 PM, Steven D'Aprano wrote:
>
>> If you want *CPython* to work without a GIL, well, are you volunteering
>> to do the work? It is a massive job, and the core devs aren't terribly
>> interested. Probably because they understand that the GIL is not often an
Arrgh! I forgot to warn you that you need a very recent version of
virtualenv to work with PyPy. I am very sorry about that. Glad to
see that things are working now.
Laura
Dave Farrance wrote on 23.02.2015 at 15:13:
> Dave Cook wrote:
>> On 2015-02-22, Dave Farrance wrote:
>>
>>> It's still quicker to do a re-write in the more cumbersome C
>>
>> You should try Cython.
>
> I did try Cython when I was trying to figure out what to do about the slow
> speed. My initi