extending PATH on Windows?
I need to extend the PATH environment variable on Windows. So far, I use:

    system('setx PATH "%PATH%;'+bindir+'"')

The problem: in a new process (cmd.exe), PATH contains a lot of duplicate
elements. As far as I have understood, Windows builds the PATH environment
variable from a system component and a user component. With the setx command
from above I have copied the system PATH into the user PATH component.

Is there a better way to extend the PATH environment variable for the user?
It must be persistent, not only for the current process.

-- 
Ullrich Horlacher          Server und Virtualisierung
Rechenzentrum IZUS/TIK     E-Mail: horlac...@tik.uni-stuttgart.de
Universitaet Stuttgart     Tel: ++49-711-68565868
Allmandring 30a            Fax: ++49-711-682357
70550 Stuttgart (Germany)  WWW: http://www.tik.uni-stuttgart.de/
--
https://mail.python.org/mailman/listinfo/python-list
Re: Make a unique filesystem path, without creating the file
On 16 Feb 2016 05:57, "Ben Finney" wrote:
> Cameron Simpson writes:
>
> > I've been watching this for a few days, and am struggling to
> > understand your use case.
>
> Yes, you're not alone. This surprises me, which is why I'm persisting.
>
> > Can you elaborate with a concrete example and its purpose which would
> > work with a mktemp-ish official function?
>
> An example::
>
>     import io
>     import tempfile
>
>     names = tempfile._get_candidate_names()
>
>     def test_frobnicates_configured_spungfile():
>         """ ‘foo’ should frobnicate the configured spungfile. """
>         fake_file_path = os.path.join(tempfile.gettempdir(), names.next())
>         fake_file = io.BytesIO("Lorem ipsum, dolor sit amet".encode("utf-8"))
>         patch_builtins_open(
>             when_accessing_path=fake_file_path,
>             provide_file=fake_file)
>         system_under_test.config.spungfile_path = fake_file_path
>         system_under_test.foo()
>         assert_correctly_frobnicated(fake_file)

If you're going to patch open to return a fake file when asked to open
fake_file_path, why do you care whether there is a real file of that name?

-- 
Oscar
Will file be closed automatically in a "for ... in open..." statement?
I know

    with open('foo.txt') as f:
        ...do something...

will close the file automatically when the "with" block ends.

I also saw code in a book:

    for line in open('foo.txt'):
        ...do something...

but it didn't mention if the file will be closed automatically or not when
the "for" block ends. Is there any document talking about this? And how can
I tell whether a file is open or not?

--Jach Fong
Re: Will file be closed automatically in a "for ... in open..." statement?
On Tue, Feb 16, 2016 at 7:39 PM, wrote:
> I know
>
>     with open('foo.txt') as f:
>         ...do something...
>
> will close the file automatically when the "with" block ends.
>
> I also saw codes in a book:
>
>     for line in open('foo.txt'):
>         ...do something...
>
> but it didn't mention if the file will be closed automatically or not
> when the "for" block ends. Is there any document talking about this? and
> how to know if a file is in "open" or not?

The file will be closed when the open file object is disposed of. That will
happen at some point after there are no more references to it. You're
guaranteed that it stays around for the entire duration of the 'for' loop
(the loop keeps track of the thing it's iterating over), but exactly when
after that is not guaranteed. In current versions of CPython, the garbage
collector counts references, so the file will be closed immediately; but
other Python interpreters, and future versions of CPython, may not behave
the same way.

So the file will *probably* be closed *reasonably* promptly, but unlike the
"with" case, you have no guarantee that it'll be immediate. For small
scripts, it probably won't even matter, though. You're unlikely to run out
of file handles, and the only time it would matter is if you're opening,
closing, and then reopening the file - for example:

    fn = input("Name a file to frobnosticate: ")
    with open(fn) as f:
        data = []
        for line in f:
            data = frobnosticate(data, line)
    with open(fn, "w") as f:
        f.writelines(data)

For this to work reliably, the file MUST be closed for reading before it's
opened for writing. The context managers are important. But this is pretty
unusual.

Of course, since it's so little trouble to use the 'with' block, it's
generally worth just using it everywhere. Why run the risk? :)

ChrisA
who often forgets to use 'with' anyway
Re: Will file be closed automatically in a "for ... in open..." statement?
On 16Feb2016 00:39, jf...@ms4.hinet.net wrote:
> I know
>
>     with open('foo.txt') as f:
>         ...do something...
>
> will close the file automatically when the "with" block ends.

Yes, because open is a context manager - they're great for reliably tidying
up in the face of exceptions or "direct" departure from the block, such as
a "return" statement.

> I also saw codes in a book:
>
>     for line in open('foo.txt'):
>         ...do something...
>
> but it didn't mention if the file will be closed automatically or not
> when the "for" block ends. Is there any document talking about this? and
> how to know if a file is in "open" or not?

This does not reliably close the file. In CPython (the common
implementation, and likely what you are using), objects are reference
counted, and when the interpreter notices an object's count go to zero, the
object's __del__ method is called before releasing the object's memory. For
an open file, __del__ _does_ call close if the file is open. However, only
reference counting Pythons will call __del__ promptly - other systems rely
on garbage collectors to detect unused objects.

In the for loop above, the interpreter obtains an iterator from the open
file which returns lines of text. That iterator has a reference to the open
file, and the for loop has a reference to the iterator. Therefore the file
remains referenced while the loop runs. At the end of the loop the iterator
is discarded, reducing its references to zero. That in turn triggers
releasing the open file, dropping its references to zero. In CPython, that
in turn will fire the open file's __del__, which will close the file. In
other Pythons, not necessarily that promptly.

Also, there are plenty of ways to phrase this where the file reference
doesn't go to zero. For example:

    f = open(...)
    for line in f:
        ...

You can see that it is easy to forget to close the file here (especially if
you have an exception or exit the function precipitously). Try to use the
"with open(...) as f:" formulation when possible. It is much better.
Cheers,
Cameron Simpson
Re: Will file be closed automatically in a "for ... in open..." statement?
On 2/16/2016 3:39 AM, jf...@ms4.hinet.net wrote:
> I know
>
>     with open('foo.txt') as f:
>         ...do something...
>
> will close the file automatically when the "with" block ends.
>
> I also saw codes in a book:
>
>     for line in open('foo.txt'):
>         ...do something...

Some books were originally written before 'with context_manager' was added,
or a little later, before it became the normal thing to do.

-- 
Terry Jan Reedy
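[Editorial aside: for readers who want the deterministic close of "with"
spelled out, here is a minimal runnable sketch of the try/finally that
"with open(...)" is equivalent to. The temporary file and its contents are
invented for illustration.]

```python
import os
import tempfile

# create a small sample file to iterate over (illustrative only)
path = os.path.join(tempfile.mkdtemp(), 'foo.txt')
with open(path, 'w') as f:
    f.write('one\ntwo\n')

# the try/finally equivalent of "for line in open(path)" that
# guarantees the file is closed promptly, like "with" does
f = open(path)
try:
    lines = [line.strip() for line in f]
finally:
    f.close()

assert f.closed
print(lines)
```

Unlike the bare `for line in open(path)` form, the close here does not
depend on the interpreter's garbage-collection strategy.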
Re: extending PATH on Windows?
* Ulli Horlacher (Tue, 16 Feb 2016 08:30:59 + (UTC))
> I need to extend the PATH environment variable on Windows.
>
> So far, I use:
>
>     system('setx PATH "%PATH%;'+bindir+'"')
>
> The problem: In a new process (cmd.exe) PATH contains a lot of double
> elements. As far as I have understood, Windows builds the PATH
> environment variable from a system component and a user component. With
> the setx command from above I have copied the system PATH into the user
> PATH component.
>
> Is there a better way to extend the PATH environment variable for the
> user? It must be persistent, not only for the current process.

`os.system` should be `subprocess.call` on modern systems (actually
`subprocess.run` if you have Python 3.5). Since `setx` doesn't seem to have
unique and add options, you basically have two options:

1. Add the path component yourself into HKEY_CURRENT_USER and make sure
   it's not there already (pure Python).
2. a) use a shell that offers that capability with `set`:
      https://jpsoft.com/help/set.htm (TCC/LE is free)
   b) use a dedicated environment variable editor:
      http://www.rapidee.com/en/command-line

Thorsten
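[Editorial aside: the "make sure it's not there already" step is portable
string handling, independent of the registry. A minimal sketch - the
function name and sample entries are my own:]

```python
import os

def dedupe_path(pathstr):
    """Drop empty and duplicate entries from a PATH-style string,
    preserving the order of first appearance."""
    seen = []
    for part in pathstr.split(os.pathsep):
        if part and part not in seen:
            seen.append(part)
    return os.pathsep.join(seen)

# duplicates collapse; order of first appearance is kept
sample = os.pathsep.join(['C:\\bin', 'C:\\tools', 'C:\\bin'])
print(dedupe_path(sample))
```

Running a value through this before writing it back avoids the doubled
entries the original poster saw with setx.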
Re: extending PATH on Windows?
Thorsten Kampe wrote:
> * Ulli Horlacher (Tue, 16 Feb 2016 08:30:59 + (UTC))
> > I need to extend the PATH environment variable on Windows.
>
> 1. Add the path component yourself into HKEY_CURRENT_USER and make
>    sure it's not there already (pure Python).

Preferred!
What is HKEY_CURRENT_USER? Another environment variable?

-- 
Ullrich Horlacher
Re: Unable to insert data into MongoDB.
Thanks a lot. Will implement that. Although I am able to do it using just 2
scripts as well.

On Monday, February 15, 2016 at 5:34:08 PM UTC+1, Peter Otten wrote:
> Arjun Srivatsa wrote:
> > Hi Peter.
> >
> > Thank you for the reply.
> >
> > This is the read_server code:
> >
> >     import socket
> >     from pymongo import MongoClient
> >     #import datetime
> >     import sys
> >
> >     # Connection to server (PLC) on port 27017
> >     host = "10.52.124.135"
> >     port = 27017
> >
> >     s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
> >     s.connect((host, port))
> >     sys.stdout.write(s.recv(1024))
> >
> > And the write_db code:
> >
> >     from pymongo import MongoClient
> >     import datetime
> >     import socket
> >     import sys
> >
> >     client = MongoClient('mongodb://localhost:27017/')
> >     db = client.test_database
> >
> >     mongodoc = { "data": 'data', "date": datetime.datetime.utcnow() }
> >     values = db.values
> >     values_id = values.insert_one(mongodoc).inserted_id
> >
> > So, both these scripts work independently. While read_server shows the
> > output of the actual data from the PLC, write_db inserts the sample
> > data into MongoDB.
> >
> > I am not sure as to how to combine these both and get the desired
> > output.
>
> What I mean is once you have working scripts
>
>     connect_to_mongodb()
>     while True:
>         record = make_fake_data()
>         insert_record_into_mongodb(record)
>
> and
>
>     connect_to_server()
>     while True:
>         record = read_record_from_server()
>         print(record)
>
> you can combine the code in a third script to
>
>     connect_to_server()
>     connect_to_mongodb()
>     while True:
>         record = read_record_from_server()
>         insert_record_into_mongodb(record)
>
> and be fairly sure that the combination works, too.
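[Editorial aside: Peter's combined loop can be exercised end-to-end with
stand-in functions before wiring in the real socket and pymongo calls.
Everything below is a stub invented for illustration:]

```python
# stand-ins for the real connect/read/insert functions (illustrative)
incoming = ['reading-1', 'reading-2', 'reading-3']
stored = []

def connect_to_server():
    pass  # would open the TCP socket to the PLC

def connect_to_mongodb():
    pass  # would create the MongoClient and select the collection

def read_record_from_server():
    # would be s.recv(1024); returns None when the feed is exhausted
    return incoming.pop(0) if incoming else None

def insert_record_into_mongodb(record):
    # would be values.insert_one({...})
    stored.append(record)

connect_to_server()
connect_to_mongodb()
while True:
    record = read_record_from_server()
    if record is None:
        break
    insert_record_into_mongodb(record)

print(stored)
```

Once this skeleton works, each stub can be replaced with the body of the
corresponding working script.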
Multiple Assignment a = b = c
Hi,

    a = b = c

As an assignment doesn't return anything, I ruled out a = b = c being a
chained assignment like a = (b = c). So I thought a = b = c is resolved as

    a, b = [c, c]

At least I had fixed in my mind that every assignment-like operation in
Python is done with references, and the references are then bound to the
named variables, like

    globals()['a'] = result()

but today I learned that this is not the case, with great pain (7 hours of
debugging):

    class Mytest(object):
        def __init__(self, a):
            self.a = a
        def __getitem__(self, k):
            print('__getitem__', k)
            return self.a[k]
        def __setitem__(self, k, v):
            print('__setitem__', k, v)
            self.a[k] = v

    roots = Mytest([0, 1, 2, 3, 4, 5, 6, 7, 8])
    a = 4
    roots[4] = 6
    a = roots[a] = roots[roots[a]]

The above program's output is

    __setitem__ 4 6
    __getitem__ 4
    __getitem__ 6
    __setitem__ 6 6

But the output that I expected is

    __setitem__ 4 6
    __getitem__ 4
    __getitem__ 6
    __setitem__ 4 6

So isn't it counter-intuitive compared to all other Python operations, like
how we teach how Python performs a swap operation? I just want to get a
better idea around this.

-- 
Regards
Srinivas Devaki
Junior (3rd yr) student at Indian School of Mines, (IIT Dhanbad)
Computer Science and Engineering Department
ph: +91 9491 383 249
telegram_id: @eightnoteight
Installation error, compiling from source on Oracle Linux
I'm installing an app that requires Carbon and some other Python 2.7
features. The version of Oracle Linux we're using comes with 2.6. I've read
that it is not a good idea to directly update the O/S as it "may break
things", so I'm doing make altinstall.

I've downloaded Python-2.7.11, downloaded zlib-1.2.8, and done

    ./configure --prefix=/root/Python-2.7.8 --with-libs=/usr/local/lib --disable-ipv6

However, I get an error while compiling with make altinstall:

    gcc -pthread -c -fno-strict-aliasing -g -O2 -DNDEBUG -g -fwrapv -O3
        -Wall -Wstrict-prototypes -I. -IInclude -I./Include
        -DPy_BUILD_CORE -o Modules/python.o ./Modules/python.c
    In file included from Include/Python.h:58,
                     from ./Modules/python.c:3:
    Include/pyport.h:256:13: error: #error "This platform's pyconfig.h
        needs to define PY_FORMAT_LONG_LONG"
    make: *** [Modules/python.o] Error 1

I CAN compile without zlib, but then pip gives an error:

    python2.7 get-pip.py
    Traceback (most recent call last):
      File "get-pip.py", line 19017, in
        main()
      File "get-pip.py", line 194, in main
        bootstrap(tmpdir=tmpdir)
      File "get-pip.py", line 82, in bootstrap
        import pip
    zipimport.ZipImportError: can't decompress data; zlib not available

I'd use findRPM but that seems to be 2.7.8, not 2.7.11, and it seems
reasonable, if I'm building this, to build the most recent version.

Any ideas? I have web searched this; I found a bug that was closed in 2014,
and I just got all new source *right now*. I'm doing a very vanilla install
on Oracle Linux Server release 6.7.

Thank you,

== John ==
Re: Multiple Assignment a = b = c
Hi Srinivas,

On 16.02.2016 13:46, srinivas devaki wrote:
> Hi,
>
>     a = b = c
>
> as an assignment doesn't return anything, i ruled out a = b = c as
> chained assignment, like a = (b = c)
> SO i thought, a = b = c is resolved as
>
>     a, b = [c, c]
>
> at-least i fixed in my mind that every assignment like operation in
> python is done with references and then the references are binded to
> the named variables. like globals()['a'] = result()
> but today i learned that this is not the case with great pain (7 hours
> of debugging.)
>
>     class Mytest(object):
>         def __init__(self, a):
>             self.a = a
>         def __getitem__(self, k):
>             print('__getitem__', k)
>             return self.a[k]
>         def __setitem__(self, k, v):
>             print('__setitem__', k, v)
>             self.a[k] = v
>
>     roots = Mytest([0, 1, 2, 3, 4, 5, 6, 7, 8])
>     a = 4
>     roots[4] = 6
>     a = roots[a] = roots[roots[a]]
>
> the above program's output is
>
>     __setitem__ 4 6
>     __getitem__ 4
>     __getitem__ 6
>     __setitem__ 6 6
>
> But the output that i expected is
>
>     __setitem__ 4 6
>     __getitem__ 4
>     __getitem__ 6
>     __setitem__ 4 6
>
> SO isn't it counter intuitive from all other python operations.
> like how we teach on how python performs a swap operation???
> I just want to get a better idea around this.

I think the tuple assignment you showed basically nails it. First, the rhs
is evaluated. Second, the lhs is evaluated from left to right. Completely
wrong?

Best,
Sven
Re: Multiple Assignment a = b = c
On 16.02.2016 14:05, Sven R. Kunze wrote:
> Hi Srinivas,
>
> I think the tuple assignment you showed basically nails it. First, the
> rhs is evaluated. Second, the lhs is evaluated from left to right.
> Completely wrong?

As you mentioned swapping, the following two statements do the same (as you
suggested at the beginning):

    a,b=b,a=4,5
    (a,b),(b,a)=(4,5),(4,5)

Best,
Sven
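[Editorial aside: the left-to-right rule for multiple assignment targets
can be checked directly. A small runnable sketch with a logging dict
subclass - the class and key names are my own:]

```python
events = []

class Recorder(dict):
    """A dict that logs every __setitem__ call, in order."""
    def __setitem__(self, key, value):
        events.append((key, value))
        super().__setitem__(key, value)

d = Recorder()
x = d['first'] = d['second'] = 99

# the rhs (99) is evaluated once, then the targets are assigned
# left to right: x, then d['first'], then d['second']
print(events)  # [('first', 99), ('second', 99)]
```

This is exactly why `a = roots[a] = ...` in the original post uses the
*new* value of `a` for the subscript store.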
Re: extending PATH on Windows?
On Tue, Feb 16, 2016 at 2:30 AM, Ulli Horlacher wrote:
>
> So far, I use:
>
>     system('setx PATH "%PATH%;'+bindir+'"')
>
> The problem: In a new process (cmd.exe) PATH contains a lot of double
> elements. As far as I have understood, Windows builds the PATH
> environment variable from a system component and a user component. With
> the setx command from above I have copied the system PATH into the user
> PATH component.

setx broadcasts a WM_SETTINGCHANGE [1] message that notifies Explorer to
reload its environment from the registry, so the user doesn't have to start
a new session. It also decides whether to use REG_SZ or REG_EXPAND_SZ
depending on the presence of multiple "%" characters in the string.

[1]: https://msdn.microsoft.com/en-us/library/ms725497

But as you note it's no good for extending an existing value, especially
not for PATH or a value that references other "%variables%" that you want
to remain unexpanded. To do this right, you have to at least use winreg to
query the user's PATH value from the registry. But then you may as well
replace setx completely. Here's a little something to get you started.
    import os
    import sys
    import types
    import ctypes

    user32 = ctypes.WinDLL('user32', use_last_error=True)

    try:
        import winreg
    except ImportError:
        import _winreg as winreg

    def extend_path(new_paths, persist=True):
        if isinstance(new_paths, getattr(types, 'StringTypes', str)):
            new_paths = [new_paths]
        new_paths = [os.path.abspath(p) for p in new_paths]
        paths = [p for p in os.environ.get('PATH', '').split(os.pathsep) if p]
        for p in new_paths:
            if p not in paths:
                paths.append(p)
        os.environ['PATH'] = os.pathsep.join(paths)
        if persist:
            _persist_path(new_paths)

    def _persist_path(new_paths):
        if sys.version_info[0] == 2:
            temp_paths = []
            for p in new_paths:
                if isinstance(p, unicode):
                    temp_paths.append(p)
                else:
                    temp_paths.append(p.decode('mbcs'))
            new_paths = temp_paths
        with winreg.OpenKey(winreg.HKEY_CURRENT_USER, 'Environment', 0,
                            winreg.KEY_QUERY_VALUE |
                            winreg.KEY_SET_VALUE) as hkey:
            try:
                user_path, dtype = winreg.QueryValueEx(hkey, 'PATH')
            except WindowsError as e:
                ERROR_FILE_NOT_FOUND = 0x0002
                if e.winerror != ERROR_FILE_NOT_FOUND:
                    raise
                paths = []
            else:
                if dtype in (winreg.REG_SZ, winreg.REG_EXPAND_SZ):
                    paths = [p for p in user_path.split(os.pathsep) if p]
                else:
                    paths = []
            for p in new_paths:
                if p not in paths:
                    paths.append(p)
            pathstr = os.pathsep.join(paths)
            if pathstr.count('%') < 2:
                dtype = winreg.REG_SZ
            else:
                dtype = winreg.REG_EXPAND_SZ
            winreg.SetValueEx(hkey, 'PATH', 0, dtype, pathstr)
        _broadcast_change(u'Environment')

    def _broadcast_change(lparam):
        HWND_BROADCAST = 0xFFFF
        WM_SETTINGCHANGE = 0x001A
        SMTO_ABORTIFHUNG = 0x0002
        ERROR_TIMEOUT = 0x05B4
        wparam = 0
        if not user32.SendMessageTimeoutW(
                HWND_BROADCAST, WM_SETTINGCHANGE, wparam,
                ctypes.c_wchar_p(lparam), SMTO_ABORTIFHUNG, 1000, None):
            err = ctypes.get_last_error()
            if err != ERROR_TIMEOUT:
                raise ctypes.WinError(err)
Re: asyncio - run coroutine in the background
If you're handling coroutines there is an asyncio facility for "background
tasks". The ensure_future [1] will take a coroutine, attach it to a Task,
and return a future to you that resolves when the coroutine is complete.
The coroutine you schedule with that function will not cause your current
coroutine to wait unless you await the future it returns.

[1] https://docs.python.org/3/library/asyncio-task.html#asyncio.ensure_future

On Mon, Feb 15, 2016, 23:53 Chris Angelico wrote:
> On Mon, Feb 15, 2016 at 6:39 PM, Paul Rubin wrote:
> > "Frank Millman" writes:
> >> The benefit of my class is that it enables me to take the coroutine
> >> and run it in another thread, without having to re-engineer the whole
> >> thing.
> >
> > Threads in Python don't get you parallelism either, of course.
>
> They can. The only limitation is that, in CPython (and some others), no
> two threads can concurrently be executing Python byte-code. The instant
> you drop into a C-implemented function, it can release the GIL and let
> another thread start running. Obviously this happens any time there's
> going to be a blocking API call (eg if one thread waits on a socket
> read, others can run), but it can also happen with computational work:
>
>     import numpy
>     import threading
>
>     def thread1():
>         arr = numpy.zeros(1, dtype=numpy.int64)
>         while True:
>             print("1: %d" % arr[0])
>             arr += 1
>             arr = (arr * arr) % 142957
>
>     def thread2():
>         arr = numpy.zeros(1, dtype=numpy.int64)
>         while True:
>             print("2: %d" % arr[0])
>             arr += 2
>             arr = (arr * arr) % 142957
>
>     threading.Thread(target=thread1).start()
>     thread2()
>
> This will happily keep two CPU cores occupied. Most of the work is being
> done inside Numpy, which releases the GIL before doing any work.
>
> So it's not strictly true that threading can't parallelise Python code
> (and as mentioned, it depends on your interpreter - Jython can, I
> believe, do true multithreading), but just that there are limitations on
> what can execute concurrently.
>
> ChrisA
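[Editorial aside: a minimal runnable sketch of the ensure_future pattern
Kevin describes. The coroutine names are invented, asyncio.sleep stands in
for real I/O, and asyncio.run is the modern entry point - on the Python
3.4/3.5 of this thread you would use loop.run_until_complete instead:]

```python
import asyncio

async def background_job():
    # stands in for a slow, io-bound task
    await asyncio.sleep(0.01)
    return 'job finished'

async def main():
    # schedule the coroutine as a Task without waiting for it
    task = asyncio.ensure_future(background_job())
    # ... main is free to do other work here; the task runs
    # whenever main yields to the event loop ...
    # await only when (and if) the result is actually needed
    return await task

result = asyncio.run(main())
print(result)
```

If the result is never needed, the returned future can simply be ignored
and the task still runs to completion on the loop.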
Re: extending PATH on Windows?
* Ulli Horlacher (Tue, 16 Feb 2016 12:38:44 + (UTC))
> > Thorsten Kampe wrote:
> > * Ulli Horlacher (Tue, 16 Feb 2016 08:30:59 + (UTC))
> > > I need to extend the PATH environment variable on Windows.
> >
> > 1. Add the path component yourself into HKEY_CURRENT_USER and make
> >    sure it's not there already (pure Python).
>
> Preferred!
> What is HKEY_CURRENT_USER? Another environment variable?

It's a hive in the Windows registry and the equivalent of `~/.*` in Linux
terms (HKEY_LOCAL_MACHINE[/Software] being the equivalent of `/etc`). The
fact that you're asking indicates that you should read about that in
advance. The task itself is definitely not that hard. Maybe someone has
already asked at StackOverflow. But the devil's in the detail.

Some things to consider:

- Python is not by default installed on Windows, so you have to use a way
  to run your script without it (PyInstaller, for instance).
- by default there is no entry in HKCU, so you have to create it first
  (under HKEY_CURRENT_USER\Environment).
- you need to create the correct type (REG_SZ, null-terminated string).
- Windows paths are semicolon separated (not colon).
- Windows-only module for the Registry:
  https://docs.python.org/3/library/winreg.html

Thorsten
Re: extending PATH on Windows?
* Ulli Horlacher (Tue, 16 Feb 2016 12:38:44 + (UTC))

By the way: there is a script called `win_add2path.py` in your Python
distribution which "is a simple script to add Python to the Windows search
path. It modifies the current user (HKCU) tree of the registry." That
should do most of what you want.

Thorsten
Re: asyncio - run coroutine in the background
"Kevin Conway" wrote in message
news:CAKF=+dim8wzprvm86_v2w5-xsopcchvgm0hy8r4xehdyzy_...@mail.gmail.com...

> If you're handling coroutines there is an asyncio facility for
> "background tasks". The ensure_future [1] will take a coroutine, attach
> it to a Task, and return a future to you that resolves when the
> coroutine is complete. The coroutine you schedule with that function
> will not cause your current coroutine to wait unless you await the
> future it returns.
>
> [1] https://docs.python.org/3/library/asyncio-task.html#asyncio.ensure_future

Thank you Kevin! That works perfectly, and is much neater than my effort.

Frank
Re: asyncio - run coroutine in the background
Kevin Conway:

> If you're handling coroutines there is an asyncio facility for
> "background tasks". The ensure_future [1] will take a coroutine,
> attach it to a Task, and return a future to you that resolves when the
> coroutine is complete.

Ok, yes, but those "background tasks" monopolize the CPU once they are
scheduled to run.

If your "background task" doesn't need a long time to run, just call the
function in the foreground and be done with it. If it does consume time,
you need to delegate it to a separate process so the other tasks remain
responsive.

Marko
Re: asyncio - run coroutine in the background
"Marko Rauhamaa" wrote in message news:87d1rwpwo2@elektro.pacujo.net...

> Kevin Conway:
>
> > If you're handling coroutines there is an asyncio facility for
> > "background tasks". The ensure_future [1] will take a coroutine,
> > attach it to a Task, and return a future to you that resolves when
> > the coroutine is complete.
>
> Ok, yes, but those "background tasks" monopolize the CPU once they are
> scheduled to run.
>
> If your "background task" doesn't need a long time to run, just call the
> function in the foreground and be done with it. If it does consume time,
> you need to delegate it to a separate process so the other tasks remain
> responsive.

I will explain my situation - perhaps you can tell me if it makes sense.

My background task does take a long time to run - about 10 seconds - but
most of that time is spent waiting for database responses, which is handled
in another thread.

You could argue that the database thread should rather be handled by
another process, and that is definitely an option if I find that response
times are affected.

So far my response times have been very good, even with database activity
in the background. However, I have not simulated a large number of
concurrent users. That could throw up the kinds of problem that you are
concerned about.

Frank
Re: asyncio - run coroutine in the background
> Ok, yes, but those "background tasks" monopolize the CPU once they are
> scheduled to run.

This is true if the coroutines are cpu bound. If that is the case then a
coroutine is likely the wrong choice for that code to begin with.
Coroutines, in asyncio land, are primarily designed for io bound work.

> My background task does take a long time to run - about 10 seconds - but
> most of that time is spent waiting for database responses, which is
> handled in another thread.

Something else to look into is an asyncio driver for your database
connections. Threads aren't inherently harmful, but using them to achieve
async networking when running asyncio is a definite code smell since that
is precisely the problem asyncio is supposed to solve for.

On Tue, Feb 16, 2016, 08:37 Frank Millman wrote:
> "Marko Rauhamaa" wrote in message
> news:87d1rwpwo2@elektro.pacujo.net...
> >
> > Kevin Conway:
> >
> > > If you're handling coroutines there is an asyncio facility for
> > > "background tasks". The ensure_future [1] will take a coroutine,
> > > attach it to a Task, and return a future to you that resolves when
> > > the coroutine is complete.
> >
> > Ok, yes, but those "background tasks" monopolize the CPU once they
> > are scheduled to run.
> >
> > If your "background task" doesn't need a long time to run, just call
> > the function in the foreground and be done with it. If it does
> > consume time, you need to delegate it to a separate process so the
> > other tasks remain responsive.
>
> I will explain my situation - perhaps you can tell me if it makes sense.
>
> My background task does take a long time to run - about 10 seconds - but
> most of that time is spent waiting for database responses, which is
> handled in another thread.
>
> You could argue that the database thread should rather be handled by
> another process, and that is definitely an option if I find that
> response times are affected.
>
> So far my response times have been very good, even with database
> activity in the background. However, I have not simulated a large
> number of concurrent users. That could throw up the kinds of problem
> that you are concerned about.
>
> Frank
Re: Multiple Assignment a = b = c
On Tue, 16 Feb 2016 11:46 pm, srinivas devaki wrote:

> Hi,
>
> a = b = c
>
> as an assignment doesn't return anything, i ruled out a = b = c as
> chained assignment, like a = (b = c)
> SO i thought, a = b = c is resolved as
> a, b = [c, c]

That is one way of thinking of it. A better way would be:

    a = c
    b = c

except that isn't necessarily correct for complex assignments involving
attribute access or item assignment. A better way is:

    _temp = c
    a = _temp
    b = _temp
    del _temp

except the name "_temp" isn't actually used.

> at-least i fixed in my mind that every assignment like operation in
> python is done with references and then the references are binded to
> the named variables.
> like globals()['a'] = result()

That's broadly correct.

> but today i learned that this is not the case with great pain (7 hours
> of debugging.)
>
>     class Mytest(object):
>         def __init__(self, a):
>             self.a = a
>         def __getitem__(self, k):
>             print('__getitem__', k)
>             return self.a[k]
>         def __setitem__(self, k, v):
>             print('__setitem__', k, v)
>             self.a[k] = v
>
>     roots = Mytest([0, 1, 2, 3, 4, 5, 6, 7, 8])
>     a = 4
>     roots[4] = 6
>     a = roots[a] = roots[roots[a]]

`roots[4] = 6` will give "__setitem__ 4 6", as you expect.

On the right hand side, you have:

    roots[roots[a]]

which evaluates `roots[a]` first, giving "__getitem__ 4". That returns 6,
as you expect. So now you have `roots[6]`, which gives "__getitem__ 6", as
you expect, and returns 6.

The left hand side has:

    a = roots[a] = ...

which becomes:

    a = roots[a] = 6

which behaves like:

    a = 6
    roots[a] = 6

So you end up with:

    a = roots[6] = 6

which gives "__setitem__ 6 6", **not** "__setitem__ 4 6" like you expected.

Here is a simpler demonstration:

    py> L = [0, 1, 2, 3, 4, 5, 6]
    py> a = L[a//100] = 500
    py> print a
    500
    py> print L
    [0, 1, 2, 3, 4, 500, 6]

Let's look at the byte-code generated by the statement:

    a = L[a] = x

The exact byte-code used will depend on the version of Python you have, but
for 2.7 it looks like this:

    py> from dis import dis
    py> code = compile("a = L[a] = x", "", "exec")
    py> dis(code)
      1           0 LOAD_NAME                0 (x)
                  3 DUP_TOP
                  4 STORE_NAME               1 (a)
                  7 LOAD_NAME                2 (L)
                 10 LOAD_NAME                1 (a)
                 13 STORE_SUBSCR
                 14 LOAD_CONST               0 (None)
                 17 RETURN_VALUE

Translated to English:

- evaluate the expression `x` and push the result onto the stack;
- duplicate the top value on the stack;
- pop the top value off the stack and assign to name `a`;
- evaluate the name `L`, and push the result onto the stack;
- evaluate the name `a`, and push the result onto the stack;
- perform the item assignment (STORE_SUBSCR, i.e. `L[a] = x`) with the top
  three items from the stack.

> SO isn't it counter intuitive from all other python operations.
> like how we teach on how python performs a swap operation???

No. Let's look at the equivalent swap:

    py> L = [10, 20, 30, 40, 50]
    py> a = 3
    py> a, L[a] = L[a], a
    Traceback (most recent call last):
      File "", line 1, in
    IndexError: list assignment index out of range

This is equivalent to:

    _temp1 = L[a]   # 40 pushed onto the stack
    _temp2 = a      # 3 pushed onto the stack
    a = _temp1      # a = 40 (rotate the stack, and pull the top item, 40)
    L[a] = _temp2   # L[40] = 3

which obviously fails.

Here's the byte-code:

    py> code = compile("a, L[a] = L[a], a", "", "exec")
    py> dis(code)
      1           0 LOAD_NAME                0 (L)
                  3 LOAD_NAME                1 (a)
                  6 BINARY_SUBSCR
                  7 LOAD_NAME                1 (a)
                 10 ROT_TWO
                 11 STORE_NAME               1 (a)
                 14 LOAD_NAME                0 (L)
                 17 LOAD_NAME                1 (a)
                 20 STORE_SUBSCR
                 21 LOAD_CONST               0 (None)
                 24 RETURN_VALUE

If you do the swap in the other order, it works:

    py> L = [10, 20, 30, 40, 50]
    py> a = 3
    py> L[a], a = a, L[a]
    py> print a
    40
    py> print L
    [10, 20, 30, 3, 50]

In all cases, the same rule applies:

- evaluate the right hand side from left-most to right-most, pushing the
  values onto the stack;
- perform assignments on the left hand side, from left-most to right-most.

-- 
Steven
Re: asyncio - run coroutine in the background
"Kevin Conway" wrote in message
news:CAKF=+dhXZ=yax8stawr_gjx3tg8yujprjg-7ym2_brv2kxm...@mail.gmail.com...

> > My background task does take a long time to run - about 10 seconds -
> > but most of that time is spent waiting for database responses, which
> > is handled in another thread.
>
> Something else to look into is an asyncio driver for your database
> connections. Threads aren't inherently harmful, but using them to
> achieve async networking when running asyncio is a definite code smell
> since that is precisely the problem asyncio is supposed to solve for.

Maybe I have not explained very well. I am not using threads to achieve
async networking. I am using asyncio in a client/server environment, and it
works very well.

If a client request involves a database query, I use a thread to perform
that so that it does not slow down the other users. I usually want the
originating client to block until I have a response, so I use 'await'.
However, occasionally the request takes some time, and it is not necessary
for the client to wait for the response, so I want to unblock the client
straight away, run the task in the background, and then notify the client
when the task is complete. This is where your suggestion of 'ensure_future'
does the job perfectly.

I would love to drive the database asynchronously, but of the three
databases I use, only psycopg2 seems to have asyncio support. As my
home-grown solution (using queues) seems to be working well so far, I am
sticking with that until I start to experience responsiveness issues. If
that happens, my first line of attack will be to switch from threads to
processes.

Hope this makes sense.

Frank
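[Editorial aside: the thread-offload arrangement Frank describes is
essentially what loop.run_in_executor provides out of the box. A minimal
sketch - the fake blocking query stands in for the real database call, and
asyncio.run is the modern entry point:]

```python
import asyncio
import time

def blocking_query(sql):
    # stand-in for a blocking database call (illustrative only)
    time.sleep(0.05)
    return [('row', 1)]

async def handle_request():
    loop = asyncio.get_event_loop()
    # run the blocking call in the default thread pool; the event
    # loop stays free to serve other clients while we await it
    rows = await loop.run_in_executor(None, blocking_query, 'SELECT 1')
    return rows

rows = asyncio.run(handle_request())
print(rows)
```

Dropping the `await` and keeping the future (or wrapping the call with
ensure_future) gives the fire-and-notify-later behaviour discussed above.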
Re: asyncio - run coroutine in the background
On Wed, 17 Feb 2016 01:17 am, Marko Rauhamaa wrote: > Ok, yes, but those "background tasks" monopolize the CPU once they are > scheduled to run. Can you show some code demonstrating this? -- Steven -- https://mail.python.org/mailman/listinfo/python-list
Re: asyncio - run coroutine in the background
On Wed, Feb 17, 2016 at 2:21 AM, Frank Millman wrote: > I would love to drive the database asynchronously, but of the three > databases I use, only psycopg2 seems to have asyncio support. As my > home-grown solution (using queues) seems to be working well so far, I am > sticking with that until I start to experience responsiveness issues. If > that happens, my first line of attack will be to switch from threads to > processes. And this is where we demonstrate divergent thought processes. *My* first line of attack if hybrid async/thread doesn't work would be to mandate a PostgreSQL backend, not to switch to hybrid async/process :) Is the added value of "you get three options of database back-end" worth the added cost of "but now my code is massively more complex"? ChrisA -- https://mail.python.org/mailman/listinfo/python-list
Re: asyncio - run coroutine in the background
"Chris Angelico" wrote in message news:captjjmqmie4groqnyvhwahcn2mwqeyqxt5kvfivotrhqy-s...@mail.gmail.com... On Wed, Feb 17, 2016 at 2:21 AM, Frank Millman wrote: > I would love to drive the database asynchronously, but of the three > databases I use, only psycopg2 seems to have asyncio support. As my > home-grown solution (using queues) seems to be working well so far, I am > sticking with that until I start to experience responsiveness issues. If > that happens, my first line of attack will be to switch from threads to > processes. And this is where we demonstrate divergent thought processes. *My* first line of attack if hybrid async/thread doesn't work would be to mandate a PostgreSQL backend, not to switch to hybrid async/process :) Is the added value of "you get three options of database back-end" worth the added cost of "but now my code is massively more complex"? Then we will have to agree to diverge ;-) If I ever get my app off the ground, it will be an all-purpose, multi-company, multi-currency, multi-everything accounting/business system. There is a massive market out there, and a large percentage of that is Microsoft-only shops. I have no intention of cutting myself off from that market before I even start. I am very happy with my choice of 3 databases - 1. sqlite3 - ideal for demo purposes and for one-man businesses 2. Sql Server for those that insist on it 3. PostgreSQL for every one else, and my recommendation if asked Frank -- https://mail.python.org/mailman/listinfo/python-list
Re: Multiple Assignment a = b = c
On Tue, Feb 16, 2016 at 6:35 PM, Sven R. Kunze wrote:
>
> First, the rhs is evaluated.
> Second, the lhs is evaluated from left to right.

Great, I will remember these two lines :)

On Tue, Feb 16, 2016 at 8:46 PM, Steven D'Aprano wrote:
> _temp = c
> a = _temp
> b = _temp
> del _temp
>
> except the name "_temp" isn't actually used.

So it is like first right most expression is evaluated and then lhs is
evaluated from left to right.

> py> from dis import dis
> py> code = compile("a = L[a] = x", "", "exec")
> py> dis(code)
>   1           0 LOAD_NAME                0 (x)
>               3 DUP_TOP
>               4 STORE_NAME               1 (a)
>               7 LOAD_NAME                2 (L)
>              10 LOAD_NAME                1 (a)
>              13 STORE_SUBSCR
>              14 LOAD_CONST               0 (None)
>              17 RETURN_VALUE
>
> Translated to English:
>
> - evaluate the expression `x` and push the result onto the stack;
>
> - duplicate the top value on the stack;
>
> - pop the top value off the stack and assign to name `a`;
>
> - evaluate the name `L`, and push the result onto the stack;
>
> - evaluate the name `a`, and push the result onto the stack;
>
> - call setattr with the top three items from the stack.

thank-you so much, for explaining how to find the underlying details.

>> SO isn't it counter intuitive from all other python operations.
>> like how we teach on how python performs a swap operation???
>
> No. Let's look at the equivalent swap:
>
> In all cases, the same rule applies:
>
> - evaluate the right hand side from left-most to right-most, pushing the
> values onto the stack;
>
> - perform assignments on the left hand side, from left-most to right-most.

uhh, i related it with swap because I was thinking variables are binded,
like first of all for all lhs assignments get their references or names
and then put the value of rhs in them.

as `a` is a name, so the rhs reference is copied to the a
`roots[a]` is a reference to an object, so it is initialized with the
reference of rhs.

anyway I got it, and all my further doubts are cleared from that compiled
code. I tried some other examples and understood how it works.
thanks a lot.

--
Regards
Srinivas Devaki
Junior (3rd yr) student at Indian School of Mines, (IIT Dhanbad)
Computer Science and Engineering Department
ph: +91 9491 383 249
telegram_id: @eightnoteight

-- https://mail.python.org/mailman/listinfo/python-list
Re: asyncio - run coroutine in the background
Steven D'Aprano :
> On Wed, 17 Feb 2016 01:17 am, Marko Rauhamaa wrote:
>
>> Ok, yes, but those "background tasks" monopolize the CPU once they
>> are scheduled to run.
>
> Can you show some code demonstrating this?

Sure:

   #!/usr/bin/env python3

   import asyncio, time

   def main():
       asyncio.get_event_loop().run_until_complete(asyncio.wait([
           background_task(),
           looping_task() ]))

   @asyncio.coroutine
   def looping_task():
       while True:
           yield from asyncio.sleep(1)
           print(int(time.time()))

   @asyncio.coroutine
   def background_task():
       yield from asyncio.sleep(4)
       t = time.time()
       while time.time() - t < 10:
           pass

   if __name__ == '__main__':
       main()

which prints:

   1455642629
   1455642630
   1455642631
   1455642642   <== gap
   1455642643
   1455642644
   1455642645

Marko

-- https://mail.python.org/mailman/listinfo/python-list
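For contrast, the standard way to keep such a CPU-bound stretch from stalling the loop is to push it into a thread with `run_in_executor`. The following is a hedged sketch, not from Marko's post: timings are shortened and it uses the later `async`/`await` syntax. A concurrent ticker records timestamps to show the loop keeps running:

```python
import asyncio
import time

def cpu_bound():
    # The busy-wait from the demo above, shortened to 0.2 seconds.
    t = time.time()
    while time.time() - t < 0.2:
        pass
    return "done"

async def ticker(ticks):
    # Records a timestamp every 50 ms; it can only do so if the
    # event loop is not blocked.
    for _ in range(4):
        await asyncio.sleep(0.05)
        ticks.append(time.monotonic())

async def main():
    loop = asyncio.get_running_loop()
    ticks = []
    # The blocking work runs in the default thread pool executor,
    # so the ticker keeps getting scheduled while it runs.
    result, _ = await asyncio.gather(
        loop.run_in_executor(None, cpu_bound),
        ticker(ticks),
    )
    return result, ticks

result, ticks = asyncio.run(main())
print(result, len(ticks))   # done 4
```

Unlike the demo, the ticker's timestamps here show no multi-second gap, because the busy loop no longer monopolizes the event loop thread.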
Re: asyncio - run coroutine in the background
Marko Rauhamaa : > Sure: Sorry for the multiple copies. Marko -- https://mail.python.org/mailman/listinfo/python-list
Re: asyncio - run coroutine in the background
"Frank Millman" : > I would love to drive the database asynchronously, but of the three > databases I use, only psycopg2 seems to have asyncio support. Yes, asyncio is at its infancy. There needs to be a moratorium on blocking I/O. Marko -- https://mail.python.org/mailman/listinfo/python-list
Re: Make a unique filesystem path, without creating the file
On Tue, 16 Feb 2016 04:56 pm, Ben Finney wrote:

> An example::
>
> import io
> import tempfile
> names = tempfile._get_candidate_names()

I'm not sure that calling a private function of the tempfile module is
better than calling a deprecated function.

> def test_frobnicates_configured_spungfile():
> """ ‘foo’ should frobnicate the configured spungfile. """
>
> fake_file_path = os.path.join(tempfile.gettempdir(), names.next())

At this point, you have a valid pathname, but no guarantee whether it
refers to a real file on the file system or not. That's the whole problem
with tempfile.makepath -- it can return a file name which is not in use,
but by the time it returns to you, you cannot guarantee that it still
doesn't exist.

Now, since this is a test which doesn't actually open that file, it
doesn't matter. There's no actual security vulnerability here. So your
test doesn't actually require that the file is unique, or that it doesn't
actually exist. (Which is good, because you can't guarantee that it
doesn't exist.)

So why not just pick a random bunch of characters?

    chars = list(string.ascii_letters)
    random.shuffle(chars)
    fake_file_path = ''.join(chars[:10])

> fake_file = io.BytesIO("Lorem ipsum, dolor sit
> amet".encode("utf-8"))
>
> patch_builtins_open(
> when_accessing_path=fake_file_path,
> provide_file=fake_file)

There's nothing apparent in this that requires that fake_file_path not
actually exist, which is good since (as I've pointed out before) you
cannot guarantee that it doesn't exist. One could just as easily, and
just as correctly, write:

    patch_builtins_open(
        when_accessing_path='/foo/bar/baz',
        provide_file=fake_file)

and regardless of whether /foo/bar/baz actually exists or not, you are
guaranteed to get the fake file rather than the real file. So I question
whether you actually need this tempfile.makepath function at all.

*But* having questioned it, for the sake of the argument I'll assume you
do need it, and continue accordingly.
> system_under_test.config.spungfile_path = fake_file_path
> system_under_test.foo()
> assert_correctly_frobnicated(fake_file)
>
> So the test case creates a fake file, makes a valid filesystem path to
> associate with it, then patches the ‘open’ function so that it will
> return the fake file when that specific path is requested.
>
> Then the test case alters the system under test's configuration, giving
> it the generated filesystem path for an important file. The test case
> then calls the function about which the unit test is asserting
> behaviour, ‘system_under_test.foo’. When that call returns, the test
> case asserts some properties of the fake file to ensure the system under
> test actually accessed that file.

Personally, I think it would be simpler and easier to understand if,
instead of patching open, you allowed the test to read and write real
files:

    file_path = '/tmp/spam'
    system_under_test.config.spungfile_path = file_path
    system_under_test.foo()
    assert_correctly_frobnicated(file_path)
    os.unlink(file_path)

In practice, I'd want to only unlink the file if the test passes. If it
fails, I'd want to look at the file to see why it wasn't frobnicated.

I think that a correctly-working filesystem is a perfectly reasonable
prerequisite for the test, just like a working CPU, memory, power supply,
operating system and Python interpreter. You don't have to guard against
every imaginable failure ("fixme: test may return invalid results if the
speed of light changes by more than 0.0001%"), and you might as well take
advantage of real files for debugging.

But that's my opinion, and if you have another, that's your personal
choice.
> With a supported standard library API for this – ‘tempfile.makepath’ for
> example – the generation of the filesystem path would change from four
> separate function calls, one of which is a private API::
>
> names = tempfile._get_candidate_names()
> fake_file_path = os.path.join(tempfile.gettempdir(), names.next())
>
> to a simple public function call::
>
> fake_file_path = tempfile.makepath()

Nobody doubts that your use of tempfile.makepath is legitimate for your
use-case. But it is *not* legitimate for the tempfile module, and it is a
mistake that it was added in the first place, hence the deprecation.

Assuming that your test suite needs this function, your test library, or
test suite, should provide that function, not tempfile. I believe it is
unreasonable to expect the tempfile module to keep a function which is a
security risk in the context of "temp files" just because it is useful for
some completely unrelated use-cases. After all, your use of this doesn't
actually have anything to do with temporary files. It is a mocked
*permanent* file, not a real temporary one.

> This whole thread began because I expected s
Re: asyncio - run coroutine in the background
On 16/02/2016 17:15, Marko Rauhamaa wrote: Marko Rauhamaa : Sure: Sorry for the multiple copies. Marko I thought perhaps background jobs were sending them :) -- Robin Becker -- https://mail.python.org/mailman/listinfo/python-list
Re: Multiple Assignment a = b = c
On 2/16/2016 7:46 AM, srinivas devaki wrote:

Hi,
a = b = c
as an assignment doesn't return anything, i ruled out a = b = c as
chained assignment, like a = (b = c)
SO i thought, a = b = c is resolved as a, b = [c, c]

https://docs.python.org/3/reference/simple_stmts.html#assignment-statements

"An assignment statement evaluates the expression list (remember that this
can be a single expression or a comma-separated list, the latter yielding
a tuple) and assigns the single resulting object to each of the target
lists, from left to right."

a = b = c is the same as tem = c; a = tem; b = tem. This does not work if
tem is an iterator.

>>> def g():
	yield 1
	yield 2

>>> a,b = c,d = g()
Traceback (most recent call last):
  File "", line 1, in
    a,b = c,d = g()
ValueError: not enough values to unpack (expected 2, got 0)
>>> a, b
(1, 2)
>>> c,d
Traceback (most recent call last):
  File "", line 1, in
    c,d
NameError: name 'c' is not defined

at-least i fixed in my mind that every assignment like operation in python
is done with references and then the references are binded to the named
variables. like globals()['a'] = result()
but today i learned that this is not the case with great pain(7 hours of
debugging.)

class Mytest(object):
    def __init__(self, a):
        self.a = a
    def __getitem__(self, k):
        print('__getitem__', k)
        return self.a[k]
    def __setitem__(self, k, v):
        print('__setitem__', k, v)
        self.a[k] = v

roots = Mytest([0, 1, 2, 3, 4, 5, 6, 7, 8])
a = 4
roots[4] = 6
a = roots[a] = roots[roots[a]]

tem = roots[roots[a]]
a = tem
roots[a] = tem

the above program's output is

__setitem__ 4 6
__getitem__ 4
__getitem__ 6
__setitem__ 6 6

-- Terry Jan Reedy

-- https://mail.python.org/mailman/listinfo/python-list
Re: Make a unique filesystem path, without creating the file
Oscar Benjamin writes: > If you're going to patch open to return a fake file when asked to open > fake_file_path why do you care whether there is a real file of that > name? I don't, and have been saying explicitly many times in this thread that I do not care whether the file exists. Somehow that is still not clear? -- \ “Nothing exists except atoms and empty space; everything else | `\is opinion.” —Democritus, c. 460 BCE – 370 BCE | _o__) | Ben Finney -- https://mail.python.org/mailman/listinfo/python-list
Re: Make a unique filesystem path, without creating the file
Steven D'Aprano writes: > On Tue, 16 Feb 2016 04:56 pm, Ben Finney wrote: > > > names = tempfile._get_candidate_names() > > I'm not sure that calling a private function of the tempfile module is > better than calling a deprecated function. Agreed, which is why I'm seeking a public API that is not deprecated. > So why not just pick a random bunch of characters? > > chars = list(string.ascii_letters) > random.shuffle(chars) > fake_file_path = ''.join(chars[:10]) This (an equivalent) is already implemented, internally to ‘tempfile’ and tested and maintained and more robust than me re-inventing the wheel. > Yes, but the system doesn't try to enforce the filesystem's rules, > does it? The test case I'm writing should not be prone to failure if the system happens to perform some arbitrary validation of filesystem paths. ‘tempfile’ already knows how to generate filesystem paths, I want to use that and not have to get it right myself. > and your system shouldn't care. If it does, this test case should not fail. > Since your test doesn't know what filesystem your code will be running > on, you can't make any assumptions about what paths are valid or not > valid. That implies that ‘tempfile._get_candidate_names’ would generate paths that would potentially be invalid. Is that what you intend to imply? > > Almost. I want the filesystem paths to be valid because the system > > under test expects them, it may perform its own validation, > > If the system tries to validate paths, it is broken. This is “you don't want what you say you want”, and seeing the justifications presented I don't agree. -- \ “I must say that I find television very educational. The minute | `\ somebody turns it on, I go to the library and read a book.” | _o__)—Groucho Marx | Ben Finney -- https://mail.python.org/mailman/listinfo/python-list
I m facing some problem while opening the interpreter. how can I resolve the issue?
Please guide me. #Chinmay Sent from Mail for Windows 10 -- https://mail.python.org/mailman/listinfo/python-list
[Glitch?] Python has just stopped working
I woke up two days ago to find out that python literally won't work any
more. I have looked everywhere, asked multiple Stack Overflow questions,
and am ready to give up. Whenever I run python (3.5), I get the following
message:

Fatal Python error: Py_initialize: unable to load the file system codec
ImportError: No module named 'encodings'

Current thread 0x2168 (most recent call first):

If there's anything you know that I could do to fix this, then please tell
me. I've tried uninstalling and repairing, so it's not those. Thanks!

-- https://mail.python.org/mailman/listinfo/python-list
Re: [Glitch?] Python has just stopped working
Am 16.02.16 um 17:19 schrieb Theo Hamilton:
> I woke up two days ago to find out that python literally won't work any
> more. I have looked everywhere, asked multiple Stack Overflow questions,
> and am ready to give up. Whenever I run python (3.5), I get the following
> message:
>
> Fatal Python error: Py_initialize: unable to load the file system codec
> ImportError: No module named 'encodings'

Can it be that you have just set a strange locale? What happens if you run
it as

    LANG=C python

?

Christian

-- https://mail.python.org/mailman/listinfo/python-list
Re: I m facing some problem while opening the interpreter. how can I resolve the issue?
On 16/02/2016 19:55, Chinmaya Choudhury wrote: Please guide me. #Chinmay Sent from Mail for Windows 10 Please read http://catb.org/~esr/faqs/smart-questions.html and possibly http://www.sscce.org/, then try asking again. -- My fellow Pythonistas, ask not what our language can do for you, ask what you can do for our language. Mark Lawrence -- https://mail.python.org/mailman/listinfo/python-list
Re: I m facing some problem while opening the interpreter. how can I resolve the issue?
On Wed, 17 Feb 2016 01:25:52 +0530, Chinmaya Choudhury wrote: > Please guide me. > #Chinmay > > Sent from Mail for Windows 10 open it correctly -- The temperature of the aqueous content of an unremittingly ogled culinary vessel will not achieve 100 degrees on the Celsius scale. -- https://mail.python.org/mailman/listinfo/python-list
Re: I m facing some problem while opening the interpreter. how can I resolve the issue?
On Tue, Feb 16, 2016 at 2:55 PM, Chinmaya Choudhury wrote: > Please guide me. > #Chinmay Dear Cousin Muscle, I have a serious trouble with Tom. Need you help at once, Jerry. (C) > > Sent from Mail for Windows 10 > > -- > https://mail.python.org/mailman/listinfo/python-list -- https://mail.python.org/mailman/listinfo/python-list
Re: [Glitch?] Python has just stopped working
On Tue, Feb 16, 2016 at 10:19 AM, Theo Hamilton wrote:
> Whenever I run python (3.5), I get the following message:
>
> Fatal Python error: Py_initialize: unable to load the file system codec
> ImportError: No module named 'encodings'
>
> Current thread 0x2168 (most recent call first):

The interpreter can't find the standard library, which is a symptom of
having PYTHONHOME set to some other directory:

C:\>set PYTHONHOME=C:\

C:\>py -3.5
Fatal Python error: Py_Initialize: unable to load the file system codec
ImportError: No module named 'encodings'

Current thread 0x0940 (most recent call first):

Generally this environment variable is unnecessary for normal use and
shouldn't be set permanently.

-- https://mail.python.org/mailman/listinfo/python-list
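A quick way to check from inside any working Python (or added to a script) whether these variables are set in the launching environment. This is a trivial diagnostic sketch, not from the post:

```python
import os

# Print the interpreter-related variables that commonly cause this error;
# ideally both report "<not set>" on a normal installation.
for var in ("PYTHONHOME", "PYTHONPATH"):
    print(var, "=", os.environ.get(var, "<not set>"))
```

On Windows the same check can be done in cmd.exe with `set PYTHON`, which lists every environment variable starting with "PYTHON".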
Passing data across callbacks in ThreadPoolExecutor
What is the pattern for chaining execution of tasks with ThreadPoolExecutor? Callbacks is not an adequate facility as each task I have will generate new output. Thanks, jlc -- https://mail.python.org/mailman/listinfo/python-list
Not intalling
I'm trying to install the latest version of Python. The first time it
didn't install successfully; I tried again, and now it has installed but
is not working. I'm sending a screenshot and log file. I tried
reinstalling again and again but the result is the same.

-- https://mail.python.org/mailman/listinfo/python-list
Re: Not intalling
SoNu KuMaR writes: > I'm trying to install the latest version of python . First time it > didn't install successfully ,i tried again now it have installed but > not working. What exactly did you try? What details about the host can you describe so we know what may be peculiar to the problem? > I'm sending screenshot and log file. I tried reinstallation again and > again but result is same. You will need to copy and paste the actual text; graphic images are not suitable for this forum. -- \ “The optimist thinks this is the best of all possible worlds. | `\ The pessimist fears it is true.” —J. Robert Oppenheimer | _o__) | Ben Finney -- https://mail.python.org/mailman/listinfo/python-list
Re: Make a unique filesystem path, without creating the file
Ben Finney writes:

> Cameron Simpson writes:
>
>> I've been watching this for a few days, and am struggling to
>> understand your use case.
>
> Yes, you're not alone. This surprises me, which is why I'm persisting.
>
>> Can you elaborate with a concrete example and its purpose which would
>> work with a mktemp-ish official function?
>
> An example::

Let me present another example that might strike some as more
straightforward. If I want to create a temporary file, I can call
mkstemp(). If I want to create a temporary directory, I can call
mkdtemp(). Suppose that instead of a file or a directory, I want a FIFO
or a socket.

A FIFO is created by passing a pathname to os.mkfifo(). A socket is
created by passing a pathname to an AF_UNIX socket's bind() method. In
both cases, the pathname must not name anything yet (not even a symbolic
link), otherwise the call will fail. So in the FIFO case, I might write
something like the following:

    def make_temp_fifo(mode=0o600):
        while True:
            path = tempfile.mktemp()
            try:
                os.mkfifo(path, mode=mode)
            except FileExistsError:
                pass
            else:
                return path

mktemp() is convenient here, because I don't have to worry about whether
I should be using "/tmp" or "/var/tmp" or "c:\temp", or whether the
TMPDIR environment variable is set, or whether I have permission to
create entries in those directories. It just gives me a pathname without
making me think about the rest of that stuff. Yes, I have to defend
against the possibility that somebody else creates something with the
same name first, but as you can see, I did that, and it wasn't rocket
science.

So is there something wrong with the above code? Other than the fact that
the documentation says something scary about mktemp()? It looks to me
like mktemp() provides some real utility, packaged up in a way that is
orthogonal to the type of file system entry I want to create, the
permissions I want to give to that entry, and the mode I want to use to
open it.
It looks like a useful, albeit low-level, primitive that it is perfectly reasonable for the tempfile module to supply. And yet the documentation condemns it as "deprecated", and tells me I should use mkstemp() instead. (As if that would be of any use in the situation above!) It looks like anxiety that some people might use mktemp() in a stupid way has caused an over-reaction. Let the documentation warn about the problem and point to prepackaged solutions in the common cases of making files and directories, but I see no good reason to deprecate this useful utility. -- Alan Bawden -- https://mail.python.org/mailman/listinfo/python-list
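The socket case Alan mentions follows the same retry pattern. Below is a POSIX-only sketch with an invented helper name, where `bind()` plays the role of the atomic, fail-if-exists creation step that `os.mkfifo()` plays for FIFOs:

```python
import os
import socket
import tempfile

def make_temp_unix_socket():
    # Same retry loop as the FIFO case: bind() fails with OSError
    # (EADDRINUSE) if anything already exists at that pathname.
    while True:
        path = tempfile.mktemp()
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            sock.bind(path)
        except OSError:
            sock.close()
        else:
            return sock, path

sock, path = make_temp_unix_socket()
print(os.path.exists(path))   # True: the socket entry now exists on disk
sock.close()
os.unlink(path)
```

As with the FIFO example, `mktemp()` only supplies the candidate pathname; the operation that actually claims the name is the one that fails safely if the name is taken.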
Re: Will file be closed automatically in a "for ... in open..." statement?
Thanks for these detailed explanations. Both statements will close the file
automatically sooner or later and, when considering the exceptions, "with"
is better. Hope my understanding is right.

But, just curious, how do you know the "for" will do it? I can't find any
document about it from any of the sources I know. Very depressed:-(

--Jach

-- https://mail.python.org/mailman/listinfo/python-list
Re: Will file be closed automatically in a "for ... in open..." statement?
On Wed, Feb 17, 2016 at 3:04 PM, wrote:
> Thanks for these detailed explanation. Both statements will close file
> automatically sooner or later and, when considering the exceptions,
> "with" is better. Hope my understanding is right.
>
> But, just curious, how do you know the "for" will do it? I can't find
> any document about it from every sources I know. Very depressed:-(

It's not the 'for' loop that does it. The for loop is kinda like this:

    _temp = open("foo.txt")
    _temp.read()
    # do stuff, do stuff
    _temp = None

When you stop holding onto an object, Python can get rid of it. When that
happens is not promised, though - and if you have a reference loop, it
might hang around for a long time. But when a file object is disposed of,
the underlying file will get closed.

ChrisA

-- https://mail.python.org/mailman/listinfo/python-list
Re: Will file be closed automatically in a "for ... in open..." statement?
On 02/16/2016 11:04 PM, jf...@ms4.hinet.net wrote:
> Thanks for these detailed explanation. Both statements will close file
> automatically sooner or later and, when considering the exceptions,
> "with" is better. Hope my understanding is right.
>
> But, just curious, how do you know the "for" will do it? I can't find
> any document about it from every sources I know. Very depressed:-(
>
> --Jach

First-- IMO, don't depend on it. Instead, use something like:

    with open('foo.txt') as f:
        for line in f:
            pass  # do something here

It's one extra indent and one extra line, but it's cleaner.

To answer your question, technically, it might not-- it really depends
upon your implementation of Python. It just so happens that the most
popular version of Python (CPython, the reference implementation) will
garbage collect the file object right away.

HOWEVER. The reason the "for" will PROBABLY result in file closure is
because as soon as the for loop exits, there is no reason to hold onto
the object returned by "open", so it is disposed. When file objects are
disposed, they are closed.

IMO, don't depend on this behaviour; it's bad form.

-- https://mail.python.org/mailman/listinfo/python-list
threading - Doug Hellman stdlib book, Timer() subclassing etc
In Doug Hellman's book on the stdlib, he does:

import threading
import logging

logging.basicConfig(level=logging.DEBUG,
                    format='(%(threadName)-10s) %(message)s',
                    )

class MyThreadWithArgs(threading.Thread):

    def __init__(self, group=None, target=None, name=None,
                 args=(), kwargs=None, verbose=None):
        threading.Thread.__init__(self, group=group, target=target,
                                  name=name, verbose=verbose)
        self.args = args
        self.kwargs = kwargs
        return

    def run(self):
        logging.debug('running with %s and %s', self.args, self.kwargs)
        return

for i in range(5):
    t = MyThreadWithArgs(args=(i,), kwargs={'a': 'A', 'b': 'B'})
    t.start()

1. Shouldn't def run() also include a call to the target function?

2. How does a call to a function_target result in a thread being created?
Normally you'd have to call a function in pthreads (OS call). One can sort
of figure that t.start() hides the actual OS call, but when we implement
run().. somehow, magically there's no OS call? WTH! ??

Then in the Timer example in the next section, how is the whole
delay/cancel bit implemented? We do t1.start so the 3 second counter
starts ticking somewhere - where? And how does he cancel that?

import threading
import time
import logging

logging.basicConfig(level=logging.DEBUG,
                    format='(%(threadName)-10s) %(message)s',
                    )

def delayed():
    logging.debug('worker running')
    return

t1 = threading.Timer(3, delayed)
t1.setName('t1')
t2 = threading.Timer(3, delayed)
t2.setName('t2')

logging.debug('starting timers')
t1.start()
t2.start()

logging.debug('waiting before canceling %s', t2.getName())
time.sleep(2)
logging.debug('canceling %s', t2.getName())
t2.cancel()
logging.debug('done')

-- https://mail.python.org/mailman/listinfo/python-list
Re: I m facing some problem while opening the interpreter. how can I resolve the issue?
On Wednesday 17 February 2016 06:55, Chinmaya Choudhury wrote: > Please guide me. > #Chinmay > > Sent from Mail for Windows 10 How can we help you when we don't know what problem you have? Is the computer turned on? Is the mouse plugged in? Are you double-clicking the icon on the desktop? What happens? Do you get an error message or does the computer suddenly reboot? What does the error message say? -- Steve -- https://mail.python.org/mailman/listinfo/python-list
Re: Will file be closed automatically in a "for ... in open..." statement?
On Wednesday 17 February 2016 15:04, jf...@ms4.hinet.net wrote:

> Thanks for these detailed explanation. Both statements will close file
> automatically sooner or later and, when considering the exceptions,
> "with" is better. Hope my understanding is right.
>
> But, just curious, how do you know the "for" will do it? I can't find
> any document about it from every sources I know. Very depressed:-(

This has nothing to do with "for". You would get exactly the same
behaviour without "for":

    f = open("some file", "r")
    x = f.read(20)
    x = f.read(30)
    x = f.read()

So long as the variable "f" is in-scope, the file will stay open. If the
above code is in a function, "f" goes out of scope when the function
returns. If the above code is at the top level of the module, "f" will
stay in scope forever, or until you either delete the variable from the
scope:

    del f

or re-assign to something else:

    f = "hello world"

At this point, after the function has exited, and the variable has
completely gone out of scope, what happens? The garbage collector will:

- reclaim the memory used by the object;
- close the file.

BUT there is no promise WHEN the file will be closed. It might be
immediately, or it might be when the application shuts down. If you want
the file to be closed immediately, you must:

- use a with statement;
- or explicitly call f.close()

otherwise you are at the mercy of the interpreter, which will close the
file whenever it wants.

-- Steve

-- https://mail.python.org/mailman/listinfo/python-list
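The two options Steven lists can be checked directly with the file object's `closed` attribute. A small sketch (the temporary directory is just to have a safe place to write):

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.txt")

# Option 1: 'with' closes the file the moment the block exits.
with open(path, "w") as f:
    f.write("hello\n")
print(f.closed)   # True

# Without 'with', the file stays open until close() is called
# (or until the interpreter eventually collects the object).
f2 = open(path)
data = f2.read()
print(f2.closed)  # False

# Option 2: an explicit close() also takes effect immediately.
f2.close()
print(f2.closed)  # True
```

The `f.closed` check after the `with` block is the observable guarantee: no reliance on when the garbage collector happens to run.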