Re: Force virtualenv pip to be used
On Monday, November 7, 2016 at 1:07:12 AM UTC+11, Chris Angelico wrote:
> On Sun, Nov 6, 2016 at 10:19 PM, Peter Otten <__pete...@web.de> wrote:
> > Chris Angelico wrote:
> >
> >> On Sun, Nov 6, 2016 at 9:17 PM, Alec Taylor wrote:
> >>> Running Ubuntu 16.10 with Python 2.7.12+ (default one) and virtualenv
> >>> 15.0.3 (`sudo -H pip install virtualenv`). What am I doing wrong?
> >>>
> >>> $ virtualenv a && . "$_"/bin/activate && pip --version
> >>
> >> I'm pretty sure virtualenv (like venv, about which I'm certain)
> >> creates something that you have to 'source' into your shell, rather
> >> than running in the classic way:
> >>
> >>     source env/bin/activate
> >
> > I think this is what the
> >
> >     . "$_"/bin/activate
> >
> > part of Alec's command is supposed to do.
> >
> > Yes, that's a dot, not grit on Tim's screen ;)
>
> Yep, I see that now. Guess my screen's dirty again. Sorry!
>
> There are a few possibilities still.
>
> 1) You *are* running all this from /tmp, right? "virtualenv a" creates
> a subdirectory off the current directory, and then you look for /tmp/a.
>
> 2) Is there an esoteric interaction between the bash "&&" and the
> source command?
>
> 3) virtualenv could behave differently from venv. It's a third-party
> package that works by hacks, compared to the properly-integrated venv
> module.
>
> Further research is required.
>
> ChrisA

venv is a Python 3 thing; I'm using Python 2. I will probably experiment
with Python 3 as well and aim for cross-compatibility, but I'd still use
virtualenv so I get consistent tooling.

And the issue has been found: a PYTHONPATH that was already set caused the
problem.
--
https://mail.python.org/mailman/listinfo/python-list
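The PYTHONPATH diagnosis above can be demonstrated in a few lines: entries
from PYTHONPATH are prepended to sys.path of every interpreter, including one
inside a virtualenv, so a stray PYTHONPATH can shadow the virtualenv's own
packages. A minimal sketch (the directory name is hypothetical and need not
even exist):

```python
import os
import subprocess
import sys

# Launch a child interpreter with PYTHONPATH pointing at a made-up
# directory, then inspect the child's sys.path.
env = dict(os.environ, PYTHONPATH="/tmp/not_a_real_dir")
out = subprocess.check_output(
    [sys.executable, "-c", "import sys; print(sys.path)"],
    env=env,
)
paths = eval(out.decode())

# The PYTHONPATH entry appears in sys.path ahead of site-packages, so a
# 'pip' package sitting there would shadow the virtualenv's own copy.
print("/tmp/not_a_real_dir" in paths)  # True
```

Unsetting PYTHONPATH (or using `python -E`, which ignores it) restores the
virtualenv's own modules.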
Re: Confused with installing per-user in Windows
On Mon, Nov 7, 2016 at 1:11 AM, ddbug wrote:
>
> In Windows, the user-local directory for scripts is %APPDATA%\Python\Scripts.
> It is not in PATH by default and finding it is hard (because Microsoft made
> it hidden in their infinite wisdom).

POSIX "~/.local" is hidden as well, by convention, so I don't see how the
directory's being hidden is relevant. In Windows 10, showing hidden files and
folders in Explorer takes just two clicks -- three if the ribbon is
collapsed. It's not much harder in Windows 7, but you have to know how to set
folder options. Or enter %appdata% in the location bar, or shell:appdata:

http://www.winhelponline.com/blog/shell-commands-to-access-the-special-folders

In cmd use "dir /a" to list all files or "dir /ad" to list all directories.
You can use set "DIRCMD=/a" if you prefer to always list all files. (FYI, /a
doesn't mean [a]ll; it's an [a]ttribute filter; an empty filter yields all
files and directories.) In PowerShell it's "gci -fo" or "gci -fo -ad". To be
verbose, the latter is "Get-ChildItem -Force -Attributes Directory".

I do think using %APPDATA% is a mistake, but not because it's hidden. A
--user install should be using %LOCALAPPDATA%, which is excluded from the
user's roaming profile. It was also a mistake to dump everything into a
common "Scripts" directory, since the Python version number isn't appended to
script names. This was fixed in 3.5, which instead uses
"%APPDATA%\Python\Python35\Scripts".

> 1. would it be good if python interpreter could JUST find user-local
> scripts - by default or by some easy configuration option?

That would be surprising behavior if it were enabled by default. It was
already suggested that you add the user scripts directory to PATH and run the
.py file directly. However, calling ShellExecuteEx to run a file by
association isn't always an option. That's why pip and setuptools create EXE
wrappers for script entry points when installing source and wheel
distributions.
Consider packaging your scripts in a wheel that defines entry points. For
example:

spam_scripts\spam.py:

    import sys

    def main():
        print('spam%d%d' % sys.version_info[:2])
        return 42

    # unused classic entry point
    if __name__ == '__main__':
        sys.exit(main())

setup.py:

    from setuptools import setup

    setup(name='spam_scripts',
          version='1.0',
          description='...',
          url='...',
          author='...',
          author_email='...',
          license='...',
          packages=['spam_scripts'],
          entry_points={
              'console_scripts': ['spam=spam_scripts.spam:main'],
          })

build:

    > python setup.py bdist_wheel --universal
    > cd dist

installation (3.5.2 on this system):

    > pip install --user spam_scripts-1.0-py2.py3-none-any.whl

installation (latest 2.x):

    > py -2 -m pip install --user spam_scripts-1.0-py2.py3-none-any.whl

Usage:

    >%APPDATA%\Python\Python35\Scripts\spam.exe
    spam35
    >echo %errorlevel%
    42

    >%APPDATA%\Python\Scripts\spam.exe
    spam27
    >echo %errorlevel%
    42
--
https://mail.python.org/mailman/listinfo/python-list
Re: constructor classmethods
On Thursday, 3 November 2016 at 14:45:49 UTC, Ethan Furman wrote:
> On 11/03/2016 01:50 AM, teppo wrote:
>
> > The guide is written with C++ in mind, yet the concepts stand for any
> > programming language really. Read it through and think about it. If
> > you come back to this topic and say: "yeah, but it's c++", then you
> > haven't understood it.
>
> The ideas (loose coupling, easy testing) are certainly applicable in
> Python -- the specific methods talked about in that paper, however, are
> not.

Please elaborate. Which ones explicitly?

> To go back to the original example:
>
>     def __init__(self, ...):
>         self.queue = Queue()
>
> we have several different (easy!) ways to do dependency injection:
>
> * inject a mock Queue into the module
> * make queue a default parameter
>
> If it's just testing, go with the first option:
>
>     import the_module_to_test
>     the_module_to_test.Queue = MockQueue
>
> and away you go.

This is doable, but how would you inject the queue (if we use Queue as an
example) with different variations, such as full, empty, half-full,
half-empty? :) For different tests.

> If the class in question has legitimate, non-testing, reasons to specify
> different Queues, then make it a default argument instead:
>
>     def __init__(self, ..., queue=None):
>         if queue is None:
>             queue = Queue()
>         self.queue = queue

I already stated that this is fine, as long as the number of arguments
stays at a manageable level. Although I do think testing is a good enough
reason to have it injected anytime. For consistency, it makes sense to have
the same way to create all objects. I wouldn't suggest using that mechanism
in public APIs, just in internal components.
> or, if it's just for testing but you don't want the hassle of injecting a
> MockQueue into the module itself:
>
>     def __init__(self, ..., _queue=None):
>         if _queue is None:
>             _queue = Queue()
>         self.queue = _queue
>
> or, if the queue is only initialized (and not used) during __init__ (so
> you can replace it after construction with no worries):
>
>     class Example:
>         def __init__(self, ...):
>             self.queue = Queue()
>
>     ex = Example()
>     ex.queue = MockQueue()
>     # proceed with test

This I wouldn't recommend. It generates useless work when things start to
change, especially in large code bases. Don't touch the internal stuff of a
class in tests (or anywhere else).

> The thing each of those possibilities have in common is that the normal
> use-case of just creating the thing and moving on is the very simple:
>
>     my_obj = Example(...)
>
> To sum up: your concerns are valid, but using c++ (and many other
> language) idioms in Python does not make good Python code.

This is not necessarily just a C++ idiom, but the factory method design
pattern. Although there are many other benefits the current pattern would
offer, in this case it is primarily used just for convenience (although it
seems it is not seen as such) and to give more flexibility in writing
tests.

Br, Teppo
--
https://mail.python.org/mailman/listinfo/python-list
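The default-argument pattern under discussion answers the full/empty/
half-full question directly: each test builds a queue in whatever state it
needs and passes it in. A minimal sketch, using the stdlib Queue and a
hypothetical Worker class (not from the thread's actual code base):

```python
from queue import Queue  # Python 3; on Python 2 this was the Queue module

class Worker:
    """Hypothetical class using the default-argument injection pattern."""
    def __init__(self, queue=None):
        if queue is None:
            queue = Queue()
        self.queue = queue

# Production code just calls the class:
w = Worker()
print(w.queue.empty())  # True

# A test builds a half-full queue and injects it:
half_full = Queue(maxsize=4)
half_full.put("a")
half_full.put("b")
t = Worker(queue=half_full)
print(t.queue.qsize())  # 2
print(t.queue.full())   # False
```

An empty, full, or half-empty variant is built the same way; the class under
test never needs to know it is being tested.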
Re: [PyQT] After MessageBox app quits...why?
Demosthenes Koptsis writes:

> Hello, i have a PyQT systray app with a menu and two actions.
>
> Action1 is Exit and action2 display a MessageBox with Hello World
> message.
>
> When i click OK to MessageBox app quits...why?
>
> http://pastebin.com/bVA49k1C

I haven't done anything with Qt in a while but apparently you need to call

    QtGui.QApplication.setQuitOnLastWindowClosed(False)

before trayIcon.show().
--
https://mail.python.org/mailman/listinfo/python-list
Re: constructor classmethods
On Thursday, 3 November 2016 at 14:47:18 UTC, Chris Angelico wrote:
> On Thu, Nov 3, 2016 at 7:50 PM, wrote:
> > A little bit of background related to this topic. It all starts from
> > this article:
> > http://misko.hevery.com/attachments/Guide-Writing%20Testable%20Code.pdf
> >
> > The guide is written with C++ in mind, yet the concepts stand for any
> > programming language really. Read it through and think about it. If you
> > come back to this topic and say: "yeah, but it's c++", then you haven't
> > understood it.
>
> I don't have a problem with something written for C++ (though I do
> have a problem with a thirty-eight page document on how to make your
> code testable - TLDR), but do bear in mind that a *lot* of C++ code
> can be simplified when it's brought to Python.

I know the differences between C++ and Python, as I have done programming
in both languages for years. By the way, generally all design patterns, not
just this one, are aimed at improving maintenance work, testability or
flexibility, and there are tons of books around these topics. This is a
good place to look at too: https://sourcemaking.com/

> One Python feature that C++ doesn't have, mentioned already in this
> thread, is the way you can have a ton of parameters with defaults, and
> you then specify only those you want, as keyword args:
>
>     def __init__(self, important_arg1, important_arg2,
>                  queue=None, cache_size=50, whatever=...):
>         pass
>
>     MyClass("foo", 123, cache_size=75)
>
> I can ignore all the arguments that don't matter, and provide only the
> one or two that I actually need to change. Cognitive load is
> drastically reduced, compared to the "alternative constructor"
> pattern, where I have to remember not to construct anything in the
> normal way.

If writing two constructors starts to feel cumbersome, it's always possible
to have a metaclass or decorator write at least the basic one for you. For
example, a decorator can generate __init__ for you, and from that same
information many other useful magic functions.
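As a sketch of that last point, a class decorator can generate a basic
__init__ from a list of field names. Everything here is a hypothetical
illustration, not an existing library:

```python
def auto_init(*fields):
    """Class decorator that writes a simple keyword-based __init__."""
    def decorate(cls):
        def __init__(self, **kwargs):
            # Assign each declared field from the keyword arguments,
            # defaulting to None when the caller omits it.
            for name in fields:
                setattr(self, name, kwargs.get(name))
        cls.__init__ = __init__
        return cls
    return decorate

@auto_init("queue", "cache_size")
class Example:
    pass

e = Example(queue="q", cache_size=75)
print(e.queue, e.cache_size)  # q 75
```

The same field list could drive __repr__ or __eq__ as well, which is the
"many other useful magic functions" point above.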
I see this more as a trade-off: writing one factory method helps me write
more stable and comprehensive tests, versus having to write more brittle
tests whose maintenance becomes a burden to developers (which can become a
real pain if things are done poorly in the beginning and the project moves
forward). Python makes things much easier, but care still needs to be
taken.

Finally, some wise words to think about (these are all tied up together):

"As an engineer, you should constantly work to make your feedback loops
shorter in time and/or wider in scope." — @KentBeck

Br, Teppo
--
https://mail.python.org/mailman/listinfo/python-list
Re: [PyQT] After MessageBox app quits...why?
Because there is no window open, the app quits by default. You have to do
two things:

1) Make sure that your SystemTray class has a QWidget parent:

    w = QtGui.QWidget()
    trayIcon = SystemTrayIcon(QtGui.QIcon("virtualdvd.png"), w)

2) Set quit-on-last-window-closed to False:

    app.setQuitOnLastWindowClosed(False)

Example:

    def main():
        app = QtGui.QApplication(sys.argv)
        app.setQuitOnLastWindowClosed(False)
        w = QtGui.QWidget()
        trayIcon = SystemTrayIcon(QtGui.QIcon("virtualdvd.png"), w)
        trayIcon.show()
        sys.exit(app.exec_())

    if __name__ == '__main__':
        main()

On 11/07/2016 10:49 AM, Anssi Saari wrote:
> Demosthenes Koptsis writes:
> > Hello, i have a PyQT systray app with a menu and two actions.
> > Action1 is Exit and action2 display a MessageBox with Hello World
> > message. When i click OK to MessageBox app quits...why?
> > http://pastebin.com/bVA49k1C
>
> I haven't done anything with Qt in a while but apparently you need to
> call QtGui.QApplication.setQuitOnLastWindowClosed(False) before
> trayIcon.show().
--
https://mail.python.org/mailman/listinfo/python-list
Re: [PyQT] After MessageBox app quits...why?
I answered my own question.

On 11/07/2016 11:44 AM, Demosthenes Koptsis wrote:
> Because there is no window open, the app quits by default. You have to
> do two things:
>
> 1) Make sure that your SystemTray class has a QWidget parent:
>
>     w = QtGui.QWidget()
>     trayIcon = SystemTrayIcon(QtGui.QIcon("virtualdvd.png"), w)
>
> 2) Set quit-on-last-window-closed to False:
>
>     app.setQuitOnLastWindowClosed(False)
>
> [...]
--
https://mail.python.org/mailman/listinfo/python-list
Re: constructor classmethods
>> If the class in question has legitimate, non-testing, reasons to specify
>> different Queues, then make it a default argument instead:
>>
>>     def __init__(self, ..., queue=None):
>>         if queue is None:
>>             queue = Queue()
>>         self.queue = queue
>
> I already stated that this is fine, as long as the number of arguments
> stays at a manageable level. Although I do think testing is a good
> enough reason to have it injected anytime. For consistency, it makes
> sense to have the same way to create all objects. I wouldn't suggest
> using that mechanism in public APIs, just in internal components.

But if the number of arguments is not manageable, you need to change the
design anyway, for the sake of cleanness. So YAGNI.
--
https://mail.python.org/mailman/listinfo/python-list
Re: constructor classmethods
On 11/07/2016 12:46 AM, teppo.p...@gmail.com wrote:
> On Thursday, 3 November 2016 at 14:45:49 UTC, Ethan Furman wrote:
>> On 11/03/2016 01:50 AM, teppo wrote:
>>
>>> The guide is written with C++ in mind, yet the concepts stand for any
>>> programming language really. Read it through and think about it. If
>>> you come back to this topic and say: "yeah, but it's c++", then you
>>> haven't understood it.
>>
>> The ideas (loose coupling, easy testing) are certainly applicable in
>> Python -- the specific methods talked about in that paper, however,
>> are not.
>
> Please elaborate. Which ones explicitly?

The ones in place solely to make testing easier. Others, such as passing an
Engine into Car instead of making Car create its own, are valid. To compare
to what I said elsewhere, the exact Engine used is *not* an implementation
detail -- any particular Car could have a range of Engines that work with
it, and which is used is not determined by the car itself.

>> To go back to the original example:
>>
>>     def __init__(self, ...):
>>         self.queue = Queue()
>>
>> we have several different (easy!) ways to do dependency injection:
>>
>> * inject a mock Queue into the module
>> * make queue a default parameter
>>
>> If it's just testing, go with the first option:
>>
>>     import the_module_to_test
>>     the_module_to_test.Queue = MockQueue
>>
>> and away you go.
>
> This is doable, but how would you inject the queue (if we use Queue as
> an example) with different variations, such as full, empty, half-full,
> half-empty? :) For different tests.

In this case I would go with my last example:

    ex = Example()
    test_queue = generate_half_empty_queue()
    ex.queue = test_queue
    # do the testing

>> If the class in question has legitimate, non-testing, reasons to
>> specify different Queues, then make it a default argument instead:
>>
>>     def __init__(self, ..., queue=None):
>>         if queue is None:
>>             queue = Queue()
>>         self.queue = queue
>
> I already stated that this is fine, as long as the number of arguments
> stays at a manageable level.

How is having 15 arguments in a .create() method better than having 15
arguments in __init__() ?

> Although I do think testing is a good enough reason to have it injected
> anytime. For consistency, it makes sense to have the same way to create
> all objects. I wouldn't suggest using that mechanism in public APIs,
> just in internal components.

And that consistent way is to just call the class -- not to call
class.create().

>> or, if it's just for testing but you don't want the hassle of injecting
>> a MockQueue into the module itself:
>>
>>     def __init__(self, ..., _queue=None):
>>         if _queue is None:
>>             _queue = Queue()
>>         self.queue = _queue
>>
>> or, if the queue is only initialized (and not used) during __init__ (so
>> you can replace it after construction with no worries):
>>
>>     class Example:
>>         def __init__(self, ...):
>>             self.queue = Queue()
>>
>>     ex = Example()
>>     ex.queue = MockQueue()
>>     # proceed with test
>
> This I wouldn't recommend. It generates useless work when things start
> to change, especially in large code bases. Don't touch the internal
> stuff of a class in tests (or anywhere else).

So, if you use the create() method, and it sets up internal data
structures, how do you test them? In other words, if create() makes that
queue, then how do you test with a half-empty queue?

>> The thing each of those possibilities has in common is that the normal
>> use-case of just creating the thing and moving on is the very simple:
>>
>>     my_obj = Example(...)
>>
>> To sum up: your concerns are valid, but using c++ (and many other
>> language) idioms in Python does not make good Python code.
>
> This is not necessarily simply a c++ idiom, but a factory method design
> pattern [...]

Not all design patterns make sense in every language.

--
~Ethan~
--
https://mail.python.org/mailman/listinfo/python-list
Re: [Theory] How to speed up python code execution / pypy vs GPU
On Saturday, November 5, 2016 at 6:39:52 PM UTC-7, Steve D'Aprano wrote:
> On Sun, 6 Nov 2016 09:17 am, Mr. Wrobel wrote:
>
> I don't have any experience with GPU processing. I expect that it will
> be useful for some things, but for number-crunching and numeric work, I
> am concerned that GPUs rarely provide correctly rounded IEEE-754 maths.
> That means that they are accurate enough for games where a few visual
> glitches don't matter, but they risk being inaccurate for serious work.
>
> I fear that doing numeric work in GPUs will be returning to the 1970s,
> when every computer was incompatible with every other computer, and it
> was almost impossible to write cross-platform, correct, accurate numeric
> code.

Hi Steve,

You, Jason Swails, myself, and several others had a discussion about the
state of GPU arithmetic and IEEE-754 compliance just over a year ago:

https://groups.google.com/forum/#!msg/comp.lang.python/Gt_FzFlES8A/r_3dbW5XzfkJ;context-place=forum/comp.lang.python

It has been very important for the field of computational molecular
dynamics (and probably several other fields) to get floating-point
arithmetic working right on GPU architectures. I don't know anything about
other manufacturers of GPUs, but NVidia announced IEEE-754 double-precision
arithmetic for their GPUs in 2008, and it's been included in the standard
since CUDA 2.0.

If floating-point math wasn't working on GPUs, I suspect that a lot of
people in the scientific community would be complaining. Do you have any
new information that would lead you to doubt what we said in the discussion
we had last year?
--
https://mail.python.org/mailman/listinfo/python-list
Lua tutorial help for Python programmer?
I just got Lua scripting dumped in my lap as a way to do some server-side
scripting in Redis. The very most basic stuff isn't too hard (i = 1,
a = {x = 4, ...}, for i = 1, 10, 2 do ... end), but as soon as I get beyond
that, I find it difficult to formulate questions which coax Google into
useful suggestions. Is there an equivalent to the python-tutor, python-help,
or even this list (python-list/comp.lang.python) for people to ask Lua
questions from the perspective of a Python programmer? Maybe an idiom
translation table?

A couple of questions to show what sort of (terribly basic) stuff I'm
after.

1. print(tbl) where tbl is a Lua table prints something useless like

       table: 0x3c73310

   How can I print a table in one go so I see all its keys and values?

2. The redis-py package helpfully converts the result of HGETALL to a
   Python dictionary. On the server, the Lua code just sees an interleaved
   list (array?) of the key/value pairs, e.g.,

       "a" "1" "b" "2" "c" "hello"

   I'd dictify that in Python easily enough:

       dict(zip(result[::2], result[1::2]))

   and get {"a": "1", "b": "2", "c": "hello"}. Skimming the Lua reference
   manual, I didn't see anything like dict() and zip(). I suspect I'm
   thinking like a Python programmer when I shouldn't be. Is there a Lua
   idiom which tackles this problem in a straightforward manner, short of
   a numeric for loop?

As you can see, this is pretty trivial stuff, mostly representing things
which are just above the level of the simplest tutorial.

Thanks,
Skip
--
https://mail.python.org/mailman/listinfo/python-list
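For what it's worth, the pairing in question 2 can also be written as an
explicit index loop, which is the form that translates almost directly into
a Lua numeric for (roughly: for i = 1, #result, 2 do t[result[i]] =
result[i+1] end). A sketch of both forms side by side:

```python
# A flat HGETALL-style reply: keys and values interleaved.
result = ["a", "1", "b", "2", "c", "hello"]

# The slice-and-zip one-liner from the question:
d1 = dict(zip(result[::2], result[1::2]))

# The explicit loop that maps onto a Lua numeric for
# (mind the 0-based indexing here versus Lua's 1-based):
d2 = {}
for i in range(0, len(result), 2):
    d2[result[i]] = result[i + 1]

print(d1 == d2 == {"a": "1", "b": "2", "c": "hello"})  # True
```

The loop form is less slick in Python, but it is the one whose shape
survives translation.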
Re: [Theory] How to speed up python code execution / pypy vs GPU
On Saturday, November 5, 2016 at 8:58:36 PM UTC-4, Steve D'Aprano wrote:
> On Sun, 6 Nov 2016 08:17 am, Ben Bacarisse wrote:
>
> > Steve D'Aprano writes:
> >
> >> Here's the same program in Objective C:
> >>
> >> --- cut ---
> >>
> >> #import <Foundation/Foundation.h>
> >>
> >> int main (int argc, const char * argv[])
> >> {
> >>     NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
> >>     NSLog (@"Hello, World!");
> >>     [pool drain];
> >>     return 0;
> >> }
> >>
> >> --- cut ---
> >>
> >> Which would you rather write?
> >
> > That's a rather odd comparison. Why not
> >
> >     #import <stdio.h>
> >
> >     int main()
> >     {
> >         printf("Hello world\n");
> >         return 0;
> >     }
> >
> Because that's not Objective-C? (This is not a rhetorical question.)
>
> I'm not an Objective-C expert, but to my eye, that doesn't look like
> Objective-C. It looks like plain old regular C.
>
> Here's where I stole the code from:
>
> https://www.binpress.com/tutorial/objectivec-lesson-1-hello-world/41
>
> and it's not too dissimilar from the versions here:
>
> http://rosettacode.org/wiki/Hello_world/Text#Objective-C
>
> > ? It's decades since I wrote any Objective-C (and then not much) but I
> > think this is the closest comparison.
>
> --
> Steve

It is Objective-C. You are mistaken in taking the NS prefixes on function
names as part of the Objective-C language. They are not; they come from the
NeXTSTEP/Sun implementation. Because they are used everywhere, a lot of
people tend to think that they are part of Objective-C -- but they are not,
they are just libraries.
--
https://mail.python.org/mailman/listinfo/python-list
Why are there so many Python Installers? Windows only :)
Hi folks, an interesting blog from Steve Dower giving the history of the little beasties http://stevedower.id.au/blog/why-so-many-python-installers/ Kindest regards. Mark Lawrence. -- https://mail.python.org/mailman/listinfo/python-list
Re: [Theory] How to speed up python code execution / pypy vs GPU
On 11/05/2016 11:10 AM, Mr. Wrobel wrote:
> Hi,
>
> Some skeptics asked me why there is any reason to use Python instead of
> any other "not interpreted" language, like Objective-C. As my
> explanation, I answered that there are a lot of useful APIs, the
> language is modern, has an advanced object-oriented architecture, and,
> most importantly, it is dynamic and the support is simply great.
>
> However, the same skeptics told me: OK, we believe that is true, but
> the code execution is much slower than any compiled language.
>
> I must tell you that is the reason I started to dig into the internet,
> searching for methods to speed up Python code.

I'm reminded of the old adage: premature optimization is the root of all
evil. Trying to find ways of speeding up Python code is interesting, but it
may be less helpful to the general case than you think.

It's undeniable that given a particular CPU-bound algorithm implemented in
both Objective-C and Python, the ObjC version will finish faster. Probably
an order of magnitude faster. But even in this situation, the question is,
does it matter? In almost all cases, 90% of the execution time of a program
is spent in 10% of the code. Once you isolate that 10%, you can bring other
tools to bear, even while you use Python. For example, that critical 10%
could be compiled to binary code with Cython. Or you may find that using a
library such as numpy to do fast linear algebra is the way to go.

Many modern tasks are actually IO-bound rather than CPU-bound, so Python
would be just as fast as any other language. That's why Python is
well-placed in web development, where scaling involves more than simply CPU
execution time.

I don't think you will have any luck in persuading your colleagues to
replace Objective-C with Python by chasing ways of speeding up Python. But
you may be able to show them the places where Python really shines, and
they may come around to the idea of using Python in certain places more
often.
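A sketch of the "isolate the 10% first" advice: profile before reaching for
Cython or numpy. The function names here are invented for illustration:

```python
import cProfile
import io
import pstats

def hot(n):
    # Deliberately quadratic: stands in for the 10% of the code that
    # burns 90% of the time.
    total = 0
    for i in range(n):
        for j in range(n):
            total += i * j
    return total

def cold(n):
    return sum(range(n))

# Profile a run of the whole "program".
pr = cProfile.Profile()
pr.enable()
hot(300)
cold(300)
pr.disable()

# The stats name the expensive function; that function (and only that
# function) is the candidate for Cython, numpy, or a rewrite.
buf = io.StringIO()
pstats.Stats(pr, stream=buf).sort_stats("cumulative").print_stats()
report = buf.getvalue()
print("hot" in report)  # True
```

In a real program the profile output, sorted by cumulative time, makes the
hot spot obvious long before any rewriting starts.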
-- https://mail.python.org/mailman/listinfo/python-list
Re: [Theory] How to speed up python code execution / pypy vs GPU
On Tue, 8 Nov 2016 05:47 am, jlada...@itu.edu wrote:

> On Saturday, November 5, 2016 at 6:39:52 PM UTC-7, Steve D'Aprano wrote:
>> On Sun, 6 Nov 2016 09:17 am, Mr. Wrobel wrote:
>>
>> I don't have any experience with GPU processing. I expect that it will
>> be useful for some things, but for number-crunching and numeric work, I
>> am concerned that GPUs rarely provide correctly rounded IEEE-754 maths.
>> That means that they are accurate enough for games where a few visual
>> glitches don't matter, but they risk being inaccurate for serious work.
>>
>> I fear that doing numeric work in GPUs will be returning to the 1970s,
>> when every computer was incompatible with every other computer, and it
>> was almost impossible to write cross-platform, correct, accurate
>> numeric code.
>
> Hi Steve,
>
> You, Jason Swails, myself, and several others had a discussion about the
> state of GPU arithmetic and IEEE-754 compliance just over a year ago.

I don't know why you think I was part of this discussion -- I made one
comment early in the thread, and took part in none of the subsequent
comments. If I had read any of the subsequent comments in the thread, I
don't remember them.

> https://groups.google.com/forum/#!msg/comp.lang.python/Gt_FzFlES8A/r_3dbW5XzfkJ;context-place=forum/comp.lang.python

For those who dislike GoogleGroups, here's the official archive:

https://mail.python.org/pipermail/python-list/2015-February/686683.html

> It has been very important for the field of computational molecular
> dynamics (and probably several other fields) to get floating-point
> arithmetic working right on GPU architectures. I don't know anything
> about other manufacturers of GPUs, but NVidia announced IEEE-754
> double-precision arithmetic for their GPUs in 2008, and it's been
> included in the standard since CUDA 2.0.

That's excellent news, and well done to NVidia. But as far as I know,
they're not the only manufacturer of GPUs, and they are the only ones who
support IEEE 754.

So this is *exactly* the situation I feared: incompatible GPUs with varying
support for IEEE 754, making it difficult or impossible to write correct
numeric code across GPU platforms.

Perhaps it doesn't matter? Maybe people simply don't bother to use anything
but NVidia GPUs for numeric computation, and treat the other GPUs as toys
only suitable for games.

> If floating-point math wasn't working on GPUs, I suspect that a lot of
> people in the scientific community would be complaining.

I don't. These are scientists, not computational mathematics computer
scientists. In the 1980s, the authors of the "Numerical Recipes in ..."
books, William H Press et al, wrote a comment about the large number of
scientific papers and simulations which should be invalidated due to the
poor numeric properties of the default pseudo-random number generators
available at the time. I see no reason to think that the numeric
programming sophistication of the average working scientist or Ph.D.
student has improved since then.

The average scientist cannot even be trusted to write an Excel spreadsheet
without errors that invalidate their conclusion:

https://www.washingtonpost.com/news/wonk/wp/2016/08/26/an-alarming-number-of-scientific-papers-contain-excel-errors/

let alone complex floating-point numeric code. Sometimes those errors can
change history: the best, some might say *only*, evidence for the austerity
policies which have been destroying the economies in Europe for almost a
decade now is simply a programming error.

http://www.bloomberg.com/news/articles/2013-04-18/faq-reinhart-rogoff-and-the-excel-error-that-changed-history

These are not new problems: dubious numeric computations have plagued
scientists and engineers for decades, there is still a huge publication
bias against negative results, most papers are written but not read, and
even of those which are read, most are wrong.

http://journals.plos.org/plosmedicine/article?id=10.1371/journal.pmed.0020124

Especially in fast-moving fields of science where there is money to be
made, like medicine and genetics. There the problems are much, much worse.

Bottom line: I'm very glad that NVidia now supports IEEE 754 maths, and
that reduces my concerns: at least users of one common GPU can be expected
to have correctly rounded results for basic arithmetic operations.

--
Steve
“Cheer up,” they said, “things could be worse.” So I cheered up, and sure
enough, things got worse.
--
https://mail.python.org/mailman/listinfo/python-list