memory consumption
Hello everyone! I'm experiencing problems with memory consumption.

I have a class which does an ETL job. What's happening inside:
- fetch existing objects from the DB via SQLAlchemy
- iterate over the raw data
- create new / update existing objects
- commit the changes

Before processing the data I create an internal cache (dictionary) and store all existing objects in it. Every 10,000 items I do a bulk insert and flush. At the end I run the commit command.

Problem. Before executing, my interpreter process weighs ~100Mb; after the first run memory increases up to 500Mb, and after the second run it weighs 1Gb. If I continue to run this class, memory won't increase, so I think it's not a memory leak, but rather that Python won't release the allocated memory back to the OS. Maybe I'm wrong.

What I tried after executing:
- gc.collect()
- created snapshots with tracemalloc and searched for garbage, diff = snapshot_before_run - snapshot_after_run
- searched with the "objgraph" library for references to the internal cache (dictionary containing elements from the DB)
- cleared the cache (dictionary)
- db.session.expire_all()

This class is a periodic celery task. So once each worker has executed this class at least two times, every celery worker needs 1Gb of RAM. Before celery there was a cron script and this class was executed via an API call, and the problem was the same. So no matter how I run it, the interpreter consumes 1Gb of RAM after two runs.

I see a few solutions to this problem:
1. Execute this class in a separate process. But I had a few errors when the same SQLAlchemy connection was shared between different processes.
2. Restart the celery worker after executing this task by throwing an exception.
3. Use a separate queue for such tasks, but then the worker will stay idle most of the time.

All of this looks like a crutch. Do I have any other options?

I'm using:
Python - 3.6.13
Celery - 4.1.0
Flask-RESTful - 0.3.6
Flask-SQLAlchemy - 2.3.2

Thanks in advance!
--
https://mail.python.org/mailman/listinfo/python-list
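For readers trying to picture the flow: a minimal, self-contained sketch of the kind of cache-plus-bulk-insert loop described above. All names (Item, run_etl, the key/value columns) are made up, it assumes SQLAlchemy 1.4+, and it is not the poster's actual code.

from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Item(Base):
    __tablename__ = "items"
    id = Column(Integer, primary_key=True)
    key = Column(String, unique=True)
    value = Column(String)

def run_etl(session, raw_rows, batch_size=10_000):
    # Preload every existing object into an in-memory cache keyed by a natural key.
    cache = {obj.key: obj for obj in session.query(Item)}
    pending = []
    for i, row in enumerate(raw_rows, 1):
        obj = cache.get(row["key"])
        if obj is None:
            pending.append(Item(key=row["key"], value=row["value"]))  # create new
        else:
            obj.value = row["value"]                                  # update existing
        if i % batch_size == 0:
            session.bulk_save_objects(pending)  # bulk insert every batch_size items
            session.flush()
            pending.clear()
    session.bulk_save_objects(pending)
    session.commit()
    cache.clear()  # drop references to the ORM objects once the run is done

if __name__ == "__main__":
    engine = create_engine("sqlite://")  # in-memory DB just to make the sketch runnable
    Base.metadata.create_all(engine)
    session = sessionmaker(bind=engine)()
    run_etl(session, [{"key": "a", "value": "1"}, {"key": "b", "value": "2"}])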
Re: memory consumption
Hello Lars! Thanks for your interest.

The problem is that all celery workers require 1Gb of RAM each even in the idle state. They hold this memory constantly, and when they do something useful they grab more memory. I think 8Gb+ in the idle state is quite a lot for my app.

> Did it crash your system or prevent other processes from having enough memory?

Yes. Moreover, sometimes the corporate watchdog just kills my app.
--
https://mail.python.org/mailman/listinfo/python-list
Re: memory consumption
Monday, March 29, 2021 at 15:57:43 UTC+3, Julio Oña:
> It looks like the problem is on celery.
> The mentioned issue is still open, so not sure if it was corrected.
>
> https://manhtai.github.io/posts/memory-leak-in-celery/

As I mentioned in my first message, I tried to run this task (class) via Flask API calls, without Celery. And the results are the same. The Flask worker receives the API call and executes MyClass().run() inside the view. After a few calls the worker size increases to 1Gb of RAM. In production I have 8 workers, so in idle they will hold 8Gb.
--
https://mail.python.org/mailman/listinfo/python-list
Re: memory consumption
Monday, March 29, 2021 at 17:19:02 UTC+3, Stestagg:
> On Mon, Mar 29, 2021 at 2:32 PM Alexey wrote:
> Some questions here to help understand more:
>
> 1. Do you have any actual problems caused by running 8 celery workers
> (beyond high memory reports)? What are they?

No. Everything works fine.

> 2. Can you try a test with 16 or 32 active workers (i.e. number of
> workers=2x available memory in GB), do they all still end up with 1gb
> usage? or do you get any other memory-related issues running this?

Yes. They will consume 1Gb each. It doesn't matter how many workers I have, they behave exactly the same. We can even forget about Flask and Celery. If I run this code in a Python console, the behavior will remain the same.
--
https://mail.python.org/mailman/listinfo/python-list
Re: memory consumption
Monday, March 29, 2021 at 19:37:03 UTC+3, Dieter Maurer:
> Alexey wrote at 2021-3-29 06:26 -0700:
> >Monday, March 29, 2021 at 15:57:43 UTC+3, Julio Oña:
> >> It looks like the problem is on celery.
> >> The mentioned issue is still open, so not sure if it was corrected.
> >>
> >> https://manhtai.github.io/posts/memory-leak-in-celery/
> >
> >As I mentioned in my first message, I tried to run
> >this task (class) via Flask API calls, without Celery.
> >And the results are the same. The Flask worker receives the API call and
> >executes MyClass().run() inside the view. After a few calls
> >the worker size increases to 1Gb of RAM. In production I have 8 workers,
> >so in idle they will hold 8Gb.
> Depending on your system (this works for `glibc` systems),
> you can instruct the memory management via the envvar
> `MALLOC_ARENA_MAX` to use a common memory pool (called "arena")
> for all threads.
> It is known that this can drastically reduce memory consumption
> in multi thread systems.

I tried this variable. No luck. Thanks anyway.
--
https://mail.python.org/mailman/listinfo/python-list
Re: memory consumption
Monday, March 29, 2021 at 19:56:52 UTC+3, Stestagg:
> > > 2. Can you try a test with 16 or 32 active workers (i.e. number of
> > > workers=2x available memory in GB), do they all still end up with 1gb
> > > usage? or do you get any other memory-related issues running this?
> > Yes. They will consume 1Gb each. It doesn't matter how many workers I
> > have,
> > they behave exactly the same. We can even forget about Flask and Celery.
> > If I run this code in Python console, behavior will remain the same.
> >
> Woah, funky, so you got to a situation where your workers were allocating
> 2x more ram than your system had available? and they were still working?
> Were you hitting lots of swap?
>
> If no big swap load, then it looks like no problem, it's just that the
> metrics you're looking at aren't saying what they appear to be.
>
> > --
> > https://mail.python.org/mailman/listinfo/python-list

I'm sorry, I didn't understand your question right. If I have 4 workers, they require 4Gb in the idle state and some extra memory when they execute other tasks. If I increase the worker count up to 16, they'll eat all the memory I have (16GB) on my machine and will crash as soon as the system starts swapping.
--
https://mail.python.org/mailman/listinfo/python-list
Re: memory consumption
Tuesday, March 30, 2021 at 18:43:51 UTC+3, Marco Ippolito:
> Have you tried to identify where in your code the surprising memory
> allocations
> are made?

Yes.

> You could "bisect search" by adding breakpoints:
>
> https://docs.python.org/3/library/functions.html#breakpoint
>
> At which point does the problem start manifesting itself?

The problem spot is my cache (dict). I simplified my code to just load all the objects into this dict and then clear it. After loading, "top" was showing resident memory usage at 3.3Gb, and immediately after I did self.__cache.clear() the memory dropped to 1Gb. Then I tried to find any references to this dict, with no luck. I also tried "del self.__cache".

For debugging I use PyCharm.
--
https://mail.python.org/mailman/listinfo/python-list
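The experiment described above can be reproduced standalone with a sketch like this (Linux-only, since it reads /proc/self/status; the dict contents are just stand-ins for the cached ORM objects):

def rss_mb():
    # Current resident set size of this process, in MB (Linux-specific).
    with open("/proc/self/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1]) / 1024.0  # the value is reported in kB
    return float("nan")

print(f"baseline: {rss_mb():.0f} MB")
cache = {i: ("x" * 100, [i] * 10) for i in range(2_000_000)}  # stand-in for cached objects
print(f"loaded:   {rss_mb():.0f} MB")
cache.clear()
print(f"cleared:  {rss_mb():.0f} MB")  # often stays well above the baseline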
Re: memory consumption
Tuesday, March 30, 2021 at 18:43:54 UTC+3, Alan Gauld:
> On 29/03/2021 11:12, Alexey wrote:
> The first thing you really need to tell us is which
> OS you are using? Memory management varies wildly
> depending on OS. Even different flavours of *nix
> do it differently.

I'm using Ubuntu (5.8.0-45-generic #51~20.04.1-Ubuntu) in development and CentOS 7 in production.

> However, most do it effectively, so you as a programmer
> shouldn't have to worry too much provided you aren't
> leaking, which you don't think you are.
> > and after second run it weighs 1Gb. If I will continue
> > to run this class, memory wont increase, so I think
> > it's not a memory leak, but rather Python wont release
> > allocated memory back to OS. Maybe I'm wrong.
> A 1GB process on modern computers is hardly a big problem?
> Most machines have 4G and many have 16G or even 32G
> nowadays.

In the case of one worker it's OK. But when 8 workers are holding 8Gb of garbage it becomes a problem, and I can't ignore it due to company rules.
--
https://mail.python.org/mailman/listinfo/python-list
Re: memory consumption
Wednesday, March 31, 2021 at 01:20:06 UTC+3, Dan Stromberg:
> On Tue, Mar 30, 2021 at 1:25 AM Alexey wrote:
> >
> > I'm sorry. I didn't understand your question right. If I have 4 workers,
> > they require 4Gb
> > in idle state and some extra memory when they execute other tasks. If I
> > increase workers
> > count up to 16, they`ll eat all the memory I have (16GB) on my machine and
> > will crash as soon
> > as system get swapped.
> >
> What if you increase the machine's (operating system's) swap space? Does
> that take care of the problem in practice?

I can't do that because it will affect other containers running on this host. In my opinion it may significantly reduce their performance.
--
https://mail.python.org/mailman/listinfo/python-list
Re: memory consumption
Wednesday, March 31, 2021 at 05:45:27 UTC+3, cameron...@gmail.com:
> Since everyone is talking about vague OS memory use and not at all about
> working set size of Python objects, let me ...
> On 29Mar2021 03:12, Alexey wrote:
> >I'm experiencing problems with memory consumption.
> >
> >I have a class which is doing ETL job. What's happening inside:
> > - fetching existing objects from DB via SQLAlchemy
> Do you need to? Or do you only need to fetch their ids? Or do you only
> need to fetch a subset of the objects?

I really need all the objects because I'm performing update and create operations. If I fetch them on the fly, this will take hours or even days to complete.

> It is easy to accidentally suck in way too many db session entity
> objects, or at any rate, more than you need to.
> > - iterate over raw data
> Can you prescan the data to determine which objects you care about,
> reducing the number of objects you need to obtain?

In this case I still need to iterate over the raw and old data. As I said before, if I try it without caching it'll take days.

> > - create new/update existing objects
> Depending what you're doing, you may not need to "create new/update
> existing objects". You could collate changes and do an UPSERT (the
> incantation varies a little depending on the SQL dialect behind
> SQLAlchemy).

Good advice.

> > - commit changes
>
> Do you discard the SQLAlchemy session after this? Otherwise it may lurk
> and hold onto the objects.

Commit doesn't forget the objects. I tried expire_all() and expunge_all(). Should I try rollback?

> For my current client we have a script to import historic data from a
> legacy system. It has many of the issues you're dealing with: the naive
> (ORM) way consumes gads of memory, and can be very slow too (updating
> objects in an ad hoc manner tends to do individual UPDATE SQL commands,
> very latency laden).
>
> I wrote a generic batch UPSERT function which took an accrued list of
> changes and prepared a PostgreSQL INSERT...ON CONFLICT statement. The
> main script hands it the accrued updates and it runs batches (which lets
> us do progress reporting). Orders of magnitude faster, _and_ does not
> require storing the db objects.
>
> On the subject of "fetching existing objects from DB via SQLAlchemy": you
> may not need to do that, either. Can you identify _which_ objects are of
> interest? Associated with the same script I've got a batch_select
> function: it takes an iterable of object ids and collects them in
> batches, where before we were really scanning the whole db because we
> had an arbitrary scattering of relevant object ids from the raw data.

I'll try to analyze whether it's possible to rewrite the code this way.

> It basically collected ids into batches, and ran a SELECT...WHERE id in
> (batch-of-ids). It's really fast considering, and also scales _way_ down
> when the set of arbitrary ids is small.
>
> I'm happy to walk through the mechanics of these with you; the code at
> this end is Django's ORM, but I prefer SQLAlchemy anyway - the project
> dictated the ORM here.
> >Before processing data I create internal cache(dictionary) and store all
> >existing objects in it.
> >Every 10,000 items I do bulk insert and flush. At the end I run commit
> >command.
> Yah. I suspect the session data are not being released. Also, SQLAlchemy
> may be caching sessions or something across runs, since this is a celery
> worker which survives from one task to the next.

I tried to dig in this direction. I created a few graphs with "objgraph", but it has so many references under the hood. I'll try to measure the size of the session object before and after building the cache.

> You could try explicitly creating a new SQLAlchemy session around your
> task.
> >Problem. Before executing, my interpreter process weighs ~100Mb, after first
> >run memory increases up to 500Mb
> >and after second run it weighs 1Gb. If I will continue to run this class,
> >memory wont increase, so I think
> >it's not a memory leak, but rather Python wont release allocated memory back
> >to OS. Maybe I'm wrong.
> I don't know enough about Python's "release OS memory" phase. But
> reducing the task memory footprint will help regardless.

Definitely. I'll think about it. Thank you!
--
https://mail.python.org/mailman/listinfo/python-list
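This is not Cameron's actual code, but a sketch of the batched UPSERT and batched SELECT approach he describes, using SQLAlchemy Core with the PostgreSQL dialect (table and column names are invented; assumes SQLAlchemy 1.4+):

from sqlalchemy import Column, Integer, MetaData, String, Table, select
from sqlalchemy.dialects.postgresql import insert

metadata = MetaData()
items = Table(
    "items", metadata,
    Column("id", Integer, primary_key=True),
    Column("key", String, unique=True),
    Column("value", String),
)

def batched(iterable, size):
    batch = []
    for x in iterable:
        batch.append(x)
        if len(batch) >= size:
            yield batch
            batch = []
    if batch:
        yield batch

def upsert_rows(conn, rows, batch_size=1000):
    # rows is an iterable of dicts like {"key": ..., "value": ...}.
    # One INSERT ... ON CONFLICT (key) DO UPDATE per batch; no ORM objects kept around.
    for batch in batched(rows, batch_size):
        stmt = insert(items).values(batch)
        stmt = stmt.on_conflict_do_update(
            index_elements=[items.c.key],
            set_={"value": stmt.excluded.value},
        )
        conn.execute(stmt)

def select_by_keys(conn, keys, batch_size=1000):
    # SELECT ... WHERE key IN (batch) -- only fetch the rows that are actually needed.
    for batch in batched(keys, batch_size):
        yield from conn.execute(select(items).where(items.c.key.in_(batch)))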
Re: memory consumption
Wednesday, March 31, 2021 at 06:54:52 UTC+3, Inada Naoki:
> First of all, I recommend upgrading your Python. Python 3.6 is a bit old.

I was thinking about that.

> As you say, Python can not return the memory to the OS until the whole
> arena becomes unused.
> If your task releases all objects allocated during the run, Python can
> release the memory.
> But if your task keeps at least one object, it may prevent releasing
> the whole arena (256KB).
>
> Python manages only small (~256 bytes) objects. Larger objects are
> allocated by malloc().
> And glibc malloc may not be efficient for some usage. jemalloc is better
> for many use cases.
>
> You can get some hints from sys._debugmallocstats(). It prints
> obmalloc (allocator for small objects) stats to stderr.
> Try printing stats before and after 1st run, and after 2nd run. And
> post it in this thread if you can. (no sensible information in the
> stats).
>
> That is all I can advise.

** Before first run:

[per-size-class block statistics omitted]

# arenas allocated total           =        776
# arenas reclaimed                 =        542
# arenas highwater mark            =        234
# arenas allocated current         =        234
234 arenas * 262144 bytes/arena    = 61,341,696

# bytes in allocated blocks        = 59,737,176
# bytes in available blocks        =
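A tiny sketch of how the suggested before/after stats dump can be wired in (the task call is a hypothetical placeholder):

import sys

def dump_malloc_stats(label):
    print(f"--- {label} ---", file=sys.stderr)
    sys._debugmallocstats()  # prints obmalloc arena/pool/block statistics to stderr

dump_malloc_stats("before first run")
# MyClass().run()   # hypothetical task invocation
dump_malloc_stats("after first run")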
Re: memory consumption
Wednesday, March 31, 2021 at 11:52:43 UTC+3, Marco Ippolito:
> > > At which point does the problem start manifesting itself?
> > The problem spot is my cache(dict). I simplified my code to just load
> > all the objects to this dict and then clear it.
> What's the memory utilisation just _before_ performing this load? I am
> assuming
> it's much less than this 1 GB you can't seem to drop under after you run your
> `.clear()`.

Around 100Mb before the first run.

> > After loading "top"
>
> You may be using `top` in command line mode already but in case you aren't,
> consider sorting processes whose command name is `python` (or whatever filter
> selects your program) by RSS, like so, for easier collection of
> machine-readable statistics:

I'm using the following command to highlight what I need - top -c -p $(pgrep -d',' -f python) - and then I sort by RSS and switch to Mb by pressing 'e'.

> # ps -o rss,ppid,pid,args --sort -rss $(pgrep python)
> RSS PPID PID COMMAND
> 32836 14130 14377 python3
> 10644 14540 14758 python3
>
> > For debugging I use Pycharm
> Sounds good, you can then use the GUI to set the breakpoint and consult
> external statistics-gathering programs (like the `ps` invocation above) as
> you
> step through your code.
>
> Pycharm also allows you to see which variables are in scope in a particular
> stack frame, so you'll have an easier time reasoning about garbage collection
> in terms of which references might be preventing GC.

That's what I tried in the first place, and I see no references to this dict. I'll try that one more time anyway.
--
https://mail.python.org/mailman/listinfo/python-list
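An alternative to eyeballing top/ps is to read the worker's own RSS from inside the process; this small sketch assumes the third-party psutil package is installed (it is not mentioned in the thread):

import os
import psutil

proc = psutil.Process(os.getpid())
print(f"RSS: {proc.memory_info().rss / 2**20:.1f} MB")  # the same number top shows in the RES column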
Re: memory consumption
Wednesday, March 31, 2021 at 14:16:30 UTC+3, Inada Naoki:
> > ** Before first run:
> > # arenas allocated total = 776
> > # arenas reclaimed = 542
> > # arenas highwater mark = 234
> > # arenas allocated current = 234
> > 234 arenas * 262144 bytes/arena = 61,341,696
> > ** After first run:
> > # arenas allocated total = 47,669
> > # arenas reclaimed = 47,316
> > # arenas highwater mark = 10,114
> > # arenas allocated current = 353
> > 353 arenas * 262144 bytes/arena = 92,536,832
> > ** After second run:
> > # arenas allocated total = 63,635
> > # arenas reclaimed = 63,238
> > # arenas highwater mark = 10,114
> > # arenas allocated current = 397
> > 397 arenas * 262144 bytes/arena = 104,071,168
> OK, memory allocated by obmalloc is 61MB -> 92MB -> 104MB.
>
> Memory usage is increasing, but it is much smaller than 1GB. 90% of the memory
> is allocated by malloc().
>
> You should try jemalloc. Trying jemalloc is not hard. You don't need
> to rebuild Python.
> Google "jemalloc LD_PRELOAD".
>
> --
> Inada Naoki

With jemalloc it looks like a memory leak :D After the first run it grabs 980Mb, after the second run 1.4Gb, then 2.6Gb and so on.
--
https://mail.python.org/mailman/listinfo/python-list
Re: memory consumption
Wednesday, March 31, 2021 at 18:17:46 UTC+3, Dieter Maurer:
> Alexey wrote at 2021-3-31 02:43 -0700:
> >Wednesday, March 31, 2021 at 06:54:52 UTC+3, Inada Naoki:
> > ...
> >> You can get some hints from sys._debugmallocstats(). It prints
> >> obmalloc (allocator for small objects) stats to stderr.
> >> Try printing stats before and after 1st run, and after 2nd run. And
> >> post it in this thread if you can. (no sensible information in the
> >> stats).
> `glibc` has similar functions to monitor the memory allocation
> at the C level: `mallinfo[2]`, `malloc_stats`, `malloc_info`.
>
> The `mallinfo` functions can be called via `ctypes`.
> Provided your `glibc` has `mallinfo2`, I recommend its use.
>
> In order to use `malloc_info` from Python, you need
> a C extension. I have one implemented via `cython`. Let me know,
> if you are interested.

I think I found something. I'll return tomorrow with an update.
--
https://mail.python.org/mailman/listinfo/python-list
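A sketch of calling mallinfo2() via ctypes, as suggested above; it assumes a glibc 2.33+ system, and the struct layout is taken from the glibc documentation:

import ctypes
import ctypes.util

class Mallinfo2(ctypes.Structure):
    # All fields of glibc's struct mallinfo2 are size_t.
    _fields_ = [(name, ctypes.c_size_t) for name in (
        "arena", "ordblks", "smblks", "hblks", "hblkhd",
        "usmblks", "fsmblks", "uordblks", "fordblks", "keepcost")]

libc = ctypes.CDLL(ctypes.util.find_library("c"))
libc.mallinfo2.restype = Mallinfo2

info = libc.mallinfo2()
print("arena (main heap, bytes):    ", info.arena)
print("uordblks (in use, bytes):    ", info.uordblks)
print("fordblks (free, bytes):      ", info.fordblks)
print("keepcost (releasable, bytes):", info.keepcost)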
Re: memory consumption
Found it. As I said before, the problem was lurking in the cache. A few days ago I read about circular references and things like that, and I thought to myself that it might be the case. To build the cache I was using lots of 'setdefault' methods chained together

self.__cache.setdefault(cluster_name, {}).setdefault(database_name, {})...

and instead of writing long lines I decided to split it up to increase readability

cluster = self.__cache.setdefault(cluster_name, {})
database = cluster.setdefault(database_name, {})
...

and I guess that was the problem.

The first thing I did was rewrite this back into a single line. And it helped. In the morning I tried a different approach and decided to clear the cache in a different way. So instead of doing self.__cache.clear(), self.__cache = None or even 'del self.__cache' I did:

for item in list(self.__cache.keys()):
    del self.__cache[item]

and again the effect was positive. As a result I decided to rewrite all the methods that build, update and read the cache without 'setdefault', and to use a "for loop" instead of dict.clear().

I don't understand the underlying processes and why I had these issues. In my opinion, if you're leaving the function scope you shouldn't have to worry about what variables you're leaving there. There are other guys at StackOverflow who have similar problems with big dictionaries and memory, but they're using different approaches.

Thank you everyone for your time and advice! I think the problem is solved, and I hope someone will find this helpful.
--
https://mail.python.org/mailman/listinfo/python-list
Re: memory consumption
Thursday, April 1, 2021 at 14:57:29 UTC+3, Barry:
> > On 31 Mar 2021, at 09:42, Alexey wrote:
> >
> > Wednesday, March 31, 2021 at 01:20:06 UTC+3, Dan Stromberg:
> >>> On Tue, Mar 30, 2021 at 1:25 AM Alexey wrote:
> >>>
> >>>
> >>> I'm sorry. I didn't understand your question right. If I have 4 workers,
> >>> they require 4Gb
> >>> in idle state and some extra memory when they execute other tasks. If I
> >>> increase workers
> >>> count up to 16, they`ll eat all the memory I have (16GB) on my machine
> >>> and
> >>> will crash as soon
> >>> as system get swapped.
> >>>
> >> What if you increase the machine's (operating system's) swap space? Does
> >> that take care of the problem in practice?
> >
> > I can`t do that because it will affect other containers running on this
> > host.
> > In my opinion it may significantly reduce their performance.
> Assuming this is a modern linux then you should have control groups that allow
> you to set limits on memory and swap for each container.
>
> Are you running with systemd?

I really don't know.
--
https://mail.python.org/mailman/listinfo/python-list
Re: memory consumption
Thursday, April 1, 2021 at 15:27:01 UTC+3, Chris Angelico:
> On Thu, Apr 1, 2021 at 10:56 PM Alexey wrote:
> >
> > Found it. As I said before the problem was lurking in the cache.
> > Few days ago I read about circular references and things like that and
> > I thought to myself that it might be the case. To build the cache I was
> > using lots of 'setdefault' methods chained together
> >
> > self.__cache.setdefault(cluster_name, {}).setdefault(database_name, {})...
> >
> > and instead of writing long lines I decided to divide it to increase
> > readability
> >
> > cluster = self.__cache.setdefault(cluster_name, {})
> > database = cluster.setdefault(database_name, {})
> > ...
> > and I guess that was the problem.
> >
> > First thing I did was to rewrite this back to a single line.
> If the cache is always and only used in this way, it might be cleaner
> to use a defaultdict(dict) instead of the setdefault calls. Or, since
> this appears to be a two-level cache:
>
> self.__cache = defaultdict(lambda: defaultdict(dict))
>
> and then you can simply reference
> self.__cache[cluster_name][database_name] to read or update the cache.

I agree.

> Having that be more efficient than either self.__cache=None or del
> self.__cache (which will be equivalent), I can understand. But better
> than clearing the dict? Seems very odd.

In this particular case 'cache.clear()' just doesn't work (in the context of releasing memory). If someone can tell me why, I'll be very thankful.

> Ideally, though, you'd want to NOT have those reference loops. I
> presume the database objects need to have a reference to whatever
> 'self' is, but perhaps the cache can be done externally to the object,
> which would make all the references one-way instead of circular. But
> that's something only you can investigate.

I did some refactoring and changed my code already.
--
https://mail.python.org/mailman/listinfo/python-list
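A tiny runnable illustration of the suggested two-level defaultdict cache (the cluster/database names are hypothetical):

from collections import defaultdict

cache = defaultdict(lambda: defaultdict(dict))

cache["cluster-1"]["db-a"]["obj-42"] = {"state": "ok"}  # no setdefault chain needed
print(cache["cluster-1"]["db-a"]["obj-42"])             # {'state': 'ok'}
print(len(cache["cluster-2"]))                          # 0 -- inner dicts appear on first access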
Re: memory consumption
Thursday, April 1, 2021 at 16:02:15 UTC+3, Barry:
> > On 1 Apr 2021, at 13:46, Marco Ippolito wrote:
> >
> >
> >> > What if you increase the machine's (operating system's) swap space? Does
> >> > that take care of the problem in practice?
> >>>
> >>> I can`t do that because it will affect other containers running on this
> >>> host.
> >>> In my opinion it may significantly reduce their performance.
> >>
> >> Assuming this is a modern linux then you should have control groups that
> >> allow
> >> you to set limits on memory and swap for each container.
> >>
> >> Are you running with systemd?
> >
> > If their problem is that their memory goes from `` to `` and
> > then
> > back down to `` rather than ``, how could `cgroups` have helped
> > in
> > that case?
> >
> > I suspect the high watermark of `` needs to be reachable still and,
> > secondly, that a forceful constraint whilst running would crash the
> > container?
> >
> > How else could one approach it?
> >
> I was responding to the assertion that adding swap to the system would impact
> other containers.
> The solution I have used is to set service/container resource limits to
> ensure they work as expected.
>
> I was not suggesting this as a fix for the memory leak.

I think it's good advice anyway. Thanks!
--
https://mail.python.org/mailman/listinfo/python-list
Re: memory consumption
Thursday, April 1, 2021 at 17:21:59 UTC+3, Mats Wichmann:
> On 4/1/21 5:50 AM, Alexey wrote:
> > Found it. As I said before the problem was lurking in the cache.
> > Few days ago I read about circular references and things like that and
> > I thought to myself that it might be the case. To build the cache I was
> > using lots of 'setdefault' methods chained together
> >
> > self.__cache.setdefault(cluster_name, {}).setdefault(database_name, {})...
> >
> > and instead of writing long lines I decided to divide it to increase
> > readability
> >
> > cluster = self.__cache.setdefault(cluster_name, {})
> > database = cluster.setdefault(database_name, {})
> > ...
> > and I guess that was the problem.
> I guess it is worth mentioning here that there are people who feel you
> shouldn't use setdefault() this way. setdefault is primarily a "getter",
> which has the side effect of filling in a dict entry if one did not exist
> before returning it, and there are some folks who feel that using it primarily
> as a "setter" is abusing the interface...
>
> Do with that what you will :)

Ok )
--
https://mail.python.org/mailman/listinfo/python-list
Re: memory consumption
Thursday, April 1, 2021 at 15:46:21 UTC+3, Marco Ippolito:
> I suspect the high watermark of `` needs to be reachable still and,
> secondly, that a forceful constraint whilst running would crash the
> container?

Exactly.
--
https://mail.python.org/mailman/listinfo/python-list
Re: memory consumption
Thursday, April 1, 2021 at 15:56:23 UTC+3, Marco Ippolito:
> > > Are you running with systemd?
> >
> > I really don't know.
> An example of how to check:
>
> ```
> $ readlink /sbin/init
> /lib/systemd/systemd
> ```
>
> You want to check which program runs as PID 1.

Thank you, Marco.
--
https://mail.python.org/mailman/listinfo/python-list
Re: memory consumption
> I had the (mis)pleasure of dealing with a multi-terabyte postgresql
> instance many years ago and figuring out why random scripts were eating
> up system memory became quite common.
>
> All of our "ETL" scripts were either written in Perl, Java, or Python
> but the results were always the same: if a process grew to using 1gb of
> memory (as in your case), then it never "released" it back to the OS. What
> this basically means is that your script at one time did in fact
> use/need 1GB of memory. That becomes the "high watermark" and in most
> cases usage will stay at that level. And if you think about it, it makes
> sense. Your python program went through the trouble of requesting memory
> space from the OS; it makes no sense for it to give it back to the OS as,
> if it needed 1GB in the past, it will probably need 1GB in the future, so
> you will just waste time with syscalls. Even the glibc docs state that
> calling free() does not necessarily mean that the OS will allocate the
> "freed" memory back to the global memory space.
>
> There are basically two things you can try. First, try working in
> smaller batch sizes. 10,000 is a lot, try 100. Second, as you hinted,
> try moving the work to a separate process. The simple way to do this
> would be to move away from modules that use threads and instead use
> something that creates child processes with fork().

Thank you! I decided to use a separate process, because despite some improvements and positive effects, when it runs within Celery in the production environment there is still significant overhead. Another problem occurred with "Using Connection Pools with Multiprocessing or os.fork()", but I figured it out with 'Engine.dispose()' and 'pool_pre_ping'. Solutions can be found in the official SQLAlchemy documentation.
--
https://mail.python.org/mailman/listinfo/python-list
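For reference, a minimal sketch of the Engine.dispose() / pool_pre_ping combination mentioned above, following the SQLAlchemy guidance on pools and os.fork(); the DSN and the child's work are placeholders, not the poster's configuration:

import multiprocessing
from sqlalchemy import create_engine, text

# pool_pre_ping makes the pool test each connection before handing it out.
engine = create_engine("postgresql://user:pass@localhost/db", pool_pre_ping=True)

def run_task():
    # The forked child inherited the parent's pool; discard it so the child
    # opens its own connections instead of reusing inherited sockets.
    engine.dispose()
    with engine.connect() as conn:
        conn.execute(text("SELECT 1"))  # placeholder for the real ETL work

if __name__ == "__main__":
    p = multiprocessing.Process(target=run_task)
    p.start()
    p.join()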
errors building python 2.7.3
Hi!

I've tried to build Python 2.7.3rc2 on cygwin and got the following errors:

$ CFLAGS=-I/usr/include/ncursesw/ CPPFLAGS=-I/usr/include/ncursesw/ ./configure
$ make
...
gcc -shared -Wl,--enable-auto-image-base build/temp.cygwin-1.7.11-i686-2.7/Python-2.7.3rc2/Modules/_io/bufferedio.o build/temp.cygwin-1.7.11-i686-2.7/Python-2.7.3rc2/Modules/_io/bytesio.o build/temp.cygwin-1.7.11-i686-2.7/Python-2.7.3rc2/Modules/_io/fileio.o build/temp.cygwin-1.7.11-i686-2.7/Python-2.7.3rc2/Modules/_io/iobase.o build/temp.cygwin-1.7.11-i686-2.7/Python-2.7.3rc2/Modules/_io/_iomodule.o build/temp.cygwin-1.7.11-i686-2.7/Python-2.7.3rc2/Modules/_io/stringio.o build/temp.cygwin-1.7.11-i686-2.7/Python-2.7.3rc2/Modules/_io/textio.o -L/usr/local/lib -L. -lpython2.7 -o build/lib.cygwin-1.7.11-i686-2.7/_io.dll
build/temp.cygwin-1.7.11-i686-2.7/Python-2.7.3rc2/Modules/_io/bufferedio.o: In function `_set_BlockingIOError':
/Python-2.7.3rc2/Modules/_io/bufferedio.c:579: undefined reference to `__imp__PyExc_BlockingIOError'
/Python-2.7.3rc2/Modules/_io/bufferedio.c:579: undefined reference to `__imp__PyExc_BlockingIOError'
build/temp.cygwin-1.7.11-i686-2.7/Python-2.7.3rc2/Modules/_io/bufferedio.o: In function `_buffered_check_blocking_error':
/Python-2.7.3rc2/Modules/_io/bufferedio.c:595: undefined reference to `__imp__PyExc_BlockingIOError'
collect2: ld returned 1 exit status
building '_curses' extension
gcc -fno-strict-aliasing -I/usr/include/ncursesw/ -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -I. -IInclude -I./Include -I/usr/include/ncursesw/ -I/Python-2.7.3rc2/Include -I/Python-2.7.3rc2 -c /Python-2.7.3rc2/Modules/_cursesmodule.c -o build/temp.cygwin-1.7.11-i686-2.7/Python-2.7.3rc2/Modules/_cursesmodule.o
/Python-2.7.3rc2/Modules/_cursesmodule.c: In function ‘PyCursesWindow_EchoChar’:
/Python-2.7.3rc2/Modules/_cursesmodule.c:810:18: error: dereferencing pointer to incomplete type
/Python-2.7.3rc2/Modules/_cursesmodule.c: In function ‘PyCursesWindow_NoOutRefresh’:
/Python-2.7.3rc2/Modules/_cursesmodule.c:1238:22: error: dereferencing pointer to incomplete type
/Python-2.7.3rc2/Modules/_cursesmodule.c: In function ‘PyCursesWindow_Refresh’:
/Python-2.7.3rc2/Modules/_cursesmodule.c:1381:22: error: dereferencing pointer to incomplete type
/Python-2.7.3rc2/Modules/_cursesmodule.c: In function ‘PyCursesWindow_SubWin’:
/Python-2.7.3rc2/Modules/_cursesmodule.c:1448:18: error: dereferencing pointer to incomplete type
/Python-2.7.3rc2/Modules/_cursesmodule.c: In function ‘PyCursesWindow_Refresh’:
/Python-2.7.3rc2/Modules/_cursesmodule.c:1412:1: warning: control reaches end of non-void function
/Python-2.7.3rc2/Modules/_cursesmodule.c: In function ‘PyCursesWindow_NoOutRefresh’:
/Python-2.7.3rc2/Modules/_cursesmodule.c:1270:1: warning: control reaches end of non-void function
/Python-2.7.3rc2/Modules/_cursesmodule.c: In function ‘PyCursesWindow_EchoChar’:
/Python-2.7.3rc2/Modules/_cursesmodule.c:817:1: warning: control reaches end of non-void function
...
Failed to build these modules: _curses _io

Then I tried to see if the problem is solved, fetched the source from https://bitbucket.org/python_mirrors/releasing-2.7.3, and got another one:

$ CFLAGS=-I/usr/include/ncursesw/ CPPFLAGS=-I/usr/include/ncursesw/ ./configure
$ make
gcc -Wno-unused-result -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -I/usr/include/ncursesw/ -I. -I./Include -I/usr/include/ncursesw/ -DPy_BUILD_CORE -c ./Modules/signalmodule.c -o Modules/signalmodule.o
./Modules/signalmodule.c: In function ‘fill_siginfo’:
./Modules/signalmodule.c:734:5: error: ‘siginfo_t’ has no member named ‘si_band’
Makefile:1456: recipe for target `Modules/signalmodule.o' failed
make: *** [Modules/signalmodule.o] Error 1

Reporting here, because bugs.python.org refuses connections currently.

Just in case:
CYGWIN_NT-6.1-WOW64 ... 1.7.11(0.260/5/3) 2012-02-24 14:05 i686 Cygwin
gcc version 4.5.3 (GCC)

--
Alex
--
http://mail.python.org/mailman/listinfo/python-list
Re: errors building python 2.7.3
On 28.03.2012 14:50, Alexey Luchko wrote:
> Hi!
> I've tried to build Python 2.7.3rc2 on cygwin and got the following errors:
>
> $ CFLAGS=-I/usr/include/ncursesw/ CPPFLAGS=-I/usr/include/ncursesw/ ./configure
> $ make
> [...]
> Failed to build these modules: _curses _io

The same happens with Python 2.7.2.

CYGWIN_NT-6.1-WOW64 ... 1.7.11(0.260/5/3) 2012-02-24 14:05 i686 Cygwin
gcc version 4.5.3 (GCC)

--
Alex
--
http://mail.python.org/mailman/listinfo/python-list
Re: errors building python 2.7.3
JFI: Reported as
http://bugs.python.org/issue14437
http://bugs.python.org/issue14438

--
Regards,
Alex
--
http://mail.python.org/mailman/listinfo/python-list
Re: errors building python 2.7.3
On 28.03.2012 18:42, David Robinow wrote:
> On Wed, Mar 28, 2012 at 7:50 AM, Alexey Luchko wrote:
>> I've tried to build Python 2.7.3rc2 on cygwin and got the following errors:
>>
>> $ CFLAGS=-I/usr/include/ncursesw/ CPPFLAGS=-I/usr/include/ncursesw/
>> ./configure
> I haven't tried 2.7.3 yet, so I'll describe my experience with 2.7.2
> I use /usr/include/ncurses rather than /usr/include/ncursesw
> I don't remember what the difference is but ncurses seems to work.

I've tried ncurses too. It does not matter.

--
Alex
--
http://mail.python.org/mailman/listinfo/python-list
Re: errors building python 2.7.3
On 29.03.2012 21:29, David Robinow wrote:
> Have you included the patch to Include/py_curses.h? If you don't know
> what that is, download the cygwin src package for Python-2.6 and look
> at the patches. Not all of them are still needed.

Thanks for the hint. With cygwin's 2.6.5-ncurses-abi6.patch it works with both ncurses and ncursesw.

--
Alex
--
http://mail.python.org/mailman/listinfo/python-list
PyDoc - Python Documentation Plugin for Eclipse
Greets!

Since I'm new to Python, I've decided to create a handy plugin for the Eclipse SDK, which is my primary dev environment. Practically, the plugin is a simple html archive of the Python documentation website running inside Eclipse, so you can call it using the Eclipse help system. As of now it is pretty large (~7 mb), but I'm planning to optimize it in the near future.

For more information, please visit:

http://pydoc.tk/

or

https://sourceforge.net/projects/pydoc/

Advice is appreciated!

Contact e-mail: ahaidam...@gmail.com
--
http://mail.python.org/mailman/listinfo/python-list
Re: PyDoc - Python Documentation Plugin for Eclipse
On Sun, 10 Jun 2012 05:02:35 -0500, Andrew Berg wrote: > On 6/10/2012 4:22 AM, Alexey Gaidamaka wrote: >> Practically the plugin is a simple html archive from python >> documentation website running >> inside Eclipse so you can call it using Eclipse help system. As for now >> it is pretty large (~7 mb), but i'm planning to optimize it in near >> future. > Rather than archive documentation, why not use a simple static page that > points to the different sections for each version of Python on > docs.python.org? The 2.7.3 documentation is mostly useless to me since > I'm using 3.3 (and of course there are some using 2.6 or 3.2 or 3.1...), > but I can easily access it from a link in the page you've archived. Not > only would this reduce the size of the plugin to almost nothing, but it > would prevent the documentation from being outdated. > >> For more information, please visit: >> https://sourceforge.net/projects/pydoc/ > Why isn't it installed like other Eclipse plugins? Is it even possible > to update the plugin via Eclipse? > > > This does look like a very useful plugin, though. Great idea. Thanx! All that you've mentioned is planned in the next versions of the plugin. -- http://mail.python.org/mailman/listinfo/python-list
Re: PyDoc - Python Documentation Plugin for Eclipse
On Sun, 10 Jun 2012 12:14:15 +0300, Alexey Gaidamaka wrote:
> Greets!
>
> Since I'm new to Python, I've decided to create a handy plugin for the
> Eclipse SDK, which is my primary dev environment. Practically, the plugin
> is a simple html archive of the Python documentation website running
> inside Eclipse, so you can call it using the Eclipse help system. As of now
> it is pretty large (~7 mb), but I'm planning to optimize it in the near
> future.
>
> For more information, please visit:
>
> http://pydoc.tk/
>
> or
>
> https://sourceforge.net/projects/pydoc/
>
>
> Advice is appreciated!
>
> Contact e-mail:
>
> ahaidam...@gmail.com

Hi there again!

I've made some minor changes in the plugin. Now it works with the online documentation from docs.python.org and supports the online docs for 2.6, 2.7, 3.2 and 3.3. The plugin weighs around 8KB because all the static content was deleted.

Additional info is here: http://pydoc.tk/news.html
--
http://mail.python.org/mailman/listinfo/python-list
Re: PyDoc - Python Documentation Plugin for Eclipse
On Sun, 10 Jun 2012 15:37:50 +, Alexey Gaidamaka wrote:
> On Sun, 10 Jun 2012 05:02:35 -0500, Andrew Berg wrote:
>> On 6/10/2012 4:22 AM, Alexey Gaidamaka wrote:
>>> Practically the plugin is a simple html archive from python
>>> documentation website running inside Eclipse so you can call it using
>>> Eclipse help system. As for now it is pretty large (~7 mb), but i'm
>>> planning to optimize it in near future.
>> Rather than archive documentation, why not use a simple static page
>> that points to the different sections for each version of Python on
>> docs.python.org? The 2.7.3 documentation is mostly useless to me since
>> I'm using 3.3 (and of course there are some using 2.6 or 3.2 or
>> 3.1...), but I can easily access it from a link in the page you've
>> archived. Not only would this reduce the size of the plugin to almost
>> nothing, but it would prevent the documentation from being outdated.
>>
>>> For more information, please visit:
>>> https://sourceforge.net/projects/pydoc/
>> Why isn't it installed like other Eclipse plugins? Is it even possible
>> to update the plugin via Eclipse?
>>
>>
>> This does look like a very useful plugin, though. Great idea.
>
> Thanx! All that you've mentioned is planned in the next versions of the
> plugin.

http://pydoc.tk/news.html

TBD:
1. updating and installing plugin through standard Eclipse "work with..." dialogue
2. reduce size of the original plugin that utilizes static content
--
http://mail.python.org/mailman/listinfo/python-list
replacing `else` with `then` in `for` and `try`
Hello,

what do you think about the idea of replacing "`else`" with "`then`" in the contexts of `for` and `try`?

It seems clear that it should be "then" rather than "else." Compare also "try ... then ... finally" with "try ... else ... finally". Currently, with "else", it is almost impossible to guess the meaning without looking into the documentation.

Of course, it should not be changed in Python 3, maybe in Python 4 or 5, but in Python 3 `then` could be an alias of `else` in these contexts.

Alexey.
--
https://mail.python.org/mailman/listinfo/python-list
Re: replacing `else` with `then` in `for` and `try`
On Thu, 2017-11-02 at 08:29 +1100, Chris Angelico wrote:
> On Thu, Nov 2, 2017 at 8:23 AM, Ned Batchelder
> > wrote:
> >
> >
> > Apart from the questions of backward compatibility etc (Python is
> > unlikely
> > to ever go through another shift like the 2/3 breakage), are you
> > sure "then"
> > is what you mean? This won't print "end":
> >
> > for i in range(10):
> >     print(i)
> > else:
> >     print(end)
>
> Well, it'll bomb with NameError when it tries to look up the *name*
> end. But it will run that line of code - if you quote it, it will
> work.

You see how people are confused over "for ... else".

Alexey.
--
https://mail.python.org/mailman/listinfo/python-list
Re: replacing `else` with `then` in `for` and `try`
On Wed, 2017-11-01 at 21:30 +, Stefan Ram wrote:
>
> In languages like Algol 68, »then« is used for a clause
> that is to be executed when the main condition of an
> if-statement /is/ true, so this might cause some confusion.
>

Sure, and `else` is used for a clause that is to be executed when the main condition of `if` is false. So, in

try:
    do_something
except:
    catch_exception
else:
    continue_doing_something

when no exception occurs in `do_something`, is `do_something` more true, or more false?

Alexey.
--
https://mail.python.org/mailman/listinfo/python-list
Re: replacing `else` with `then` in `for` and `try`
On Thu, 2017-11-02 at 08:21 +1100, Chris Angelico wrote:
>
> With try/except/else, it's "do this, and if an exception happens, do this, else do this". So else makes perfect sense.

Indeed, I forgot about `except`. I agree that "try/then/except/finally" would be better than "try/except/then/finally", but "try/except/else/finally" does not make perfect sense IMHO.

Alexey.
--
https://mail.python.org/mailman/listinfo/python-list
Re: replacing `else` with `then` in `for` and `try`
On Thu, 2017-11-02 at 16:31 +, Jon Ribbens wrote:
> On 2017-11-02, Steve D'Aprano wrote:
> > On Fri, 3 Nov 2017 12:39 am, Jon Ribbens wrote:
> > > Why would we want to make the language worse? It is fairly
> > > obvious
> > > what 'else' means,
> >
> > Yes, obvious and WRONG.
>
> Nope, obvious and right.
>

I suppose that to continue this way we'd need at some point to define somehow the meaning of "obvious."

> > > whereas 'then' has an obvious meaning that is in
> > > fact the opposite of what it would actually do.
> >
> > Er... is today opposite day? Because 'then' describes precisely
> > what it
> > actually does.
>
> No, 'then' describes the opposite of what it does. The word 'then'
> implies something that always happens next, whereas 'else' conveys
> the correct meaning, which is something that happens if the course
> of the preceding piece of code did not go as expected.
>

Jon, I get from this that for you, when there is no exception in `try`, or no `break` in a loop, things did not go as expected. Either we need to agree that what is "expected" is subjective, or agree on some kind of formal or informal common meaning for it, because I would not have put it this way.

'Then' describes what happens next indeed, unless some extraordinary situation prevents it from happening, for example:

try:
    go_to_the_bakery()
then:
    buy_croissants(2)
except BakeryClosed:
    go_to_the_grocery()
    buy_baguette(1)
finally:
    come_back()

I know this is a poor program example (why not use a boolean return value instead of an exception, etc.), and I know that currently in Python `except` must precede `else`; it is just to illustrate the choice of terms.

Best regards,
Alexey.
--
https://mail.python.org/mailman/listinfo/python-list
Re: replacing `else` with `then` in `for` and `try`
On Fri, 2017-11-03 at 22:03 +1100, Chris Angelico wrote:
> On Fri, Nov 3, 2017 at 8:48 PM, Alexey Muranov com> wrote:
> > 'Then' describes what happens next indeed, unless some
> > extraordinary
> > situation prevents it from happening, for example:
> >
> > try:
> >     go_to_the_bakery()
> > then:
> >     buy_croissants(2)
> > except BakeryClosed:
> >     go_to_the_grocery()
> >     buy_baguette(1)
> > finally:
> >     come_back()
> >
> > I know this is a poor program example (why not use a boolean
> > return value
> > instead of an exception, etc.), and I know that currently in Python
> > `except`
> > must precede `else`; it is just to illustrate the choice of terms.
>
> What is the semantic difference between that code and the same
> without the "then:"?

Chris, the semantic difference is that their meanings/behaviours are not identical (I imply that `then` here does what `else` currently does). I agree however that from a practical viewpoint the difference will probably never be observable (unless the person enters the bakery and asks for croissants, but during this time the baker exits the bakery and closes it to go on vacation).

I can try to think of a better example if you give me any good use case for `else` in `try`. I have searched online, and on StackOverflow in particular, but didn't find anything better than this:

* https://stackoverflow.com/a/6051978

People seem very shy when it comes to giving a real-life example of `else` in `try`.

Alexey.
--
https://mail.python.org/mailman/listinfo/python-list
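For illustration, one commonly cited real-life shape of try/except/else (a sketch added here, not an example taken from the thread): keep only the risky call in `try`, and put the code that should run only when no exception occurred in `else`.

import json

def load_config(path):
    try:
        f = open(path)
    except FileNotFoundError:
        return {}                 # fall back to defaults
    else:
        with f:
            return json.load(f)   # runs only if open() succeeded

print(load_config("missing.json"))  # -> {}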
Problem/bug with class definition inside function definition
I have discovered the following bug or problem: it looks like I am forced to choose different names for class attributes and function arguments, and I see no workaround. Am I missing some special syntax feature?

Alexey.

---

x = 42

class C1:
    y = x  # Works

class C2:
    x = x  # Works

# ---

def f1(a):
    class D:
        b = a  # Works
    return D

def f2(a):
    class D:
        a = a  # Does not work <<<<<
    return D

def f3(a):
    class D:
        nonlocal a
        a = a  # Does not work either <<<<<
    return D

# ---

def g1(a):
    def h():
        b = a  # Works
        return b
    return h

def g2(a):
    def h():
        a = a  # Does not work (as expected)
        return a
    return h

def g3(a):
    def h():
        nonlocal a
        a = a  # Works
        return a
    return h

# ---

if __name__ == "__main__":
    assert C1.y == 42
    assert C2.x == 42

    assert f1(13).b == 13

    try:
        f2(13)  # NameError
    except NameError:
        pass
    except Exception as e:
        raise Exception(
            'Unexpected exception raised: '
            '{}'.format(type(e).__name__)
        )
    else:
        raise Exception('No exception')

    try:
        f3(13).a  # AttributeError
    except AttributeError:
        pass
    except Exception as e:
        raise Exception(
            'Unexpected exception raised: '
            '{}'.format(type(e).__name__)
        )
    else:
        raise Exception('No exception')

    assert g1(13)() == 13

    try:
        g2(13)()  # UnboundLocalError
    except UnboundLocalError:
        pass
    except Exception as e:
        raise Exception(
            'Unexpected exception raised: '
            '{}'.format(type(e).__name__)
        )
    else:
        raise Exception('No exception')

    assert g3(13)() == 13
--
https://mail.python.org/mailman/listinfo/python-list
Re: Problem/bug with class definition inside function definition
To be more exact, I do see a few workarounds, for example:

def f4(a):
    b = a
    class D:
        a = b  # Works
    return D

But this is not what I was hoping for.

Alexey.

On Tue, 8 May, 2018 at 12:02 AM, Alexey Muranov wrote:
> [...]
--
https://mail.python.org/mailman/listinfo/python-list
Re: Problem/bug with class definition inside function definition
Sorry, I was confused. I would say that this mostly works as expected, though the difference between

x = 42

class C:
    x = x  # Works

and

def f2(a):
    class D:
        a = a  # Does not work <<<<<
    return D

is still surprising to me.

Otherwise, probably the solution with

def f(a):
    _a = a
    class D:
        a = _a
    return D

is good enough, if Python does not allow referring "simultaneously" to variables from different scopes when they have the same name.

Alexey.

On Tue, 8 May, 2018 at 12:21 AM, Alexey Muranov wrote:
> [...]
--
https://mail.python.org/mailman/listinfo/python-list
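The scoping rule behind this (an illustration added here, not a reply from the thread) can be demonstrated with a short runnable sketch: a name that is *assigned* somewhere in a class body is looked up in the class namespace, then in the globals, then in builtins; the enclosing function's locals are skipped, unlike in a nested function.

x = 42

def f(a):
    class D:
        b = a    # `a` is only referenced here, so the enclosing f()'s `a` is visible
        x = x    # `x` is assigned here, so the lookup skips f() and finds the global x
    return D

def g(a):
    class D:
        a = a    # `a` is assigned here, so the lookup skips g()'s scope;
                 # there is no global `a`, hence NameError
    return D

D = f(13)
print(D.b, D.x)   # 13 42

try:
    g(13)
except NameError as e:
    print("g(13) raised:", e)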
Syntax for one-line "nonymous" functions in "declaration style"
When you need a simple function in Python, there is a choice between a normal function declaration and an assignment of an anonymous function (defined by a lambda expression) to a variable:

def f(x): return x*x

or

f = lambda x: x*x

It would be however more convenient to be able to write instead just

f(x) = x*x

(like in Haskell and such).

Has this idea been discussed before? I do not see any conflicts with the existing syntax.

The following would also work:

incrementer(m)(n) = n + m

instead of

incrementer = lambda m: lambda n: n + m

Alexey.
--
https://mail.python.org/mailman/listinfo/python-list
Re: Syntax for one-line "nonymous" functions in "declaration style"
On Wed., March 27, 2019 at 10:10 AM, Paul Moore wrote:
> On Wed, 27 Mar 2019 at 08:25, Alexey Muranov wrote:
> > When you need a simple function in Python, there is a choice between a
> > normal function declaration and an assignment of an anonymous function
> > (defined by a lambda expression) to a variable:
> >
> > def f(x): return x*x
> >
> > or
> >
> > f = lambda x: x*x
> >
> > It would be however more convenient to be able to write instead just
> >
> > f(x) = x*x
>
> Why? Is saving a few characters really that helpful? So much so that
> it's worth adding a *third* method of defining functions, which would
> need documenting, adding to training materials, etc, etc?

Because I think I would prefer to write it this way. (Almost no new documentation or tutorials would be needed IMHO.)

Alexey.
--
https://mail.python.org/mailman/listinfo/python-list
Re: Syntax for one-line "nonymous" functions in "declaration style"
On Wed., March 27, 2019 at 5:00 PM, python-list-requ...@python.org wrote:
> On 27/03/19 09:21, Alexey Muranov wrote:
> > When you need a simple function in Python, there is a choice between a
> > normal function declaration and an assignment of an anonymous function
> > (defined by a lambda expression) to a variable:
> >
> > def f(x): return x*x
> >
> > or
> >
> > f = lambda x: x*x
> >
> > It would be however more convenient to be able to write instead just
> >
> > f(x) = x*x
> >
> > (like in Haskell and such).
> >
> > Have this idea been discussed before? I do not see any conflicts with
> > the existing syntax. The following would also work:
>
> I don't know. Something like the following is already legal:
>
> f(x)[n] = x * n
>
> And it does something completely different.

Thanks for pointing out this example, but so far I do not see any issue with this. Of course assignment (to an identifier) is a completely different type of operation than in-place mutation (of an object) with __setitem__, etc.

In

<...>[<...>] = <...>

the part to the left of "[<...>] =" is an expression that is to be evaluated, and only its value matters. Here "[]=" can be viewed as a method call, which is distinguished by the context from the "[]" method call (__getitem__).

In

<identifier> = <...>

the <identifier> is not evaluated.

I still think that

()...() = <...>

is unambiguous. The following seems possible too:

a[m][n](x)(y) = m*x + n*y

It would be the same as

a[m][n] = lambda x: lambda y: m*x + n*y

Here a[m] is evaluated, and on the result the method "[]=" (__setitem__) is called.

Basically, "()...()=" seems to technically fit all contexts where "=" fits...

Alexey.
--
https://mail.python.org/mailman/listinfo/python-list
Re: Syntax for one-line "nonymous" functions in "declaration style"
On jeu., Mar 28, 2019 at 5:00 PM, python-list-requ...@python.org wrote: So my opinion is that lambda expressions should only be used within larger expressions and never directly bound. It would however be more convenient to be able to write instead just f(x) = x*x Given my view above, this is, standing alone, strictly an abbreviation of the equivalent def statement. I am presuming that a proper implementation would result in f.__name__ == 'f'.

No, after some thought, i think it should be an abbreviation of "f = lambda x: x*x", and f.__name__ would still be '<lambda>'. But i see your point about never assigning lambdas directly, it makes sense. But sometimes i do assign short lambdas directly to a variable.

Is the convenience and (very low) frequency of applicability worth the inconvenience of confusing the meaning of '=' and complicating the implementation? I do not see any conflicts with the existing syntax. It heavily conflicts with existing syntax. The current meaning of target_expression = object_expression is
1. Evaluate object_expression in the existing namespace to an object, prior to any new bindings and independent of the target_expression.
2. Evaluate target_expression in the existing namespace to one or more targets.
3. Bind object to target or iterate target to bind to multiple targets.

I do not think so. In "x = 42" the variable x is not evaluated. All examples of the proposed syntax i can think of are currently illegal, so i suppose there are no conflicts. (I would appreciate a counterexample, if any.)

Thanks for the reference to PEP 8, this is indeed an argument against.

Alexey.

-- https://mail.python.org/mailman/listinfo/python-list
Re: Syntax for one-line "nonymous" functions in "declaration style"
On jeu., mars 28, 2019 at 5:00 PM, python-list-requ...@python.org wrote: On 2019-03-27 10:42 a.m., Paul Moore wrote: On Wed, 27 Mar 2019 at 12:27, Alexey Muranov wrote: On mer., mars 27, 2019 at 10:10 AM, Paul Moore wrote: On Wed, 27 Mar 2019 at 08:25, Alexey Muranov wrote: Whey you need a simple function in Python, there is a choice between a normal function declaration and an assignment of a anonymous function (defined by a lambda-expression) to a variable: def f(x): return x*x or f = lambda x: x*x It would be however more convenient to be able to write instead just f(x) = x*x Why? Is saving a few characters really that helpful? So much so that it's worth adding a *third* method of defining functions, which would need documenting, adding to training materials, etc, etc? Because i think i would prefer to write it this way. That's not likely to be sufficient reason for changing a language that's used by literally millions of people. (Almost no new documentation or tutorials would be needed IMHO.) Documentation would be needed to explain how the new construct worked, for people who either wanted to use it or encountered it in other people's code. While it may be obvious to you how it works, it likely won't be to others, and there will probably be edge cases you haven't considered that others will find and ask about. For what it's worth, if I encountered "f(x) = x * x" in code, my first thought would be that Python somehow added a way to return an assignable reference from a function, rather than this being an anonymous function declaration. So documentation of that syntax would 100% be required Alex The thing to the right of the assignment symbol represents a value (an object), but the thing to the left does not represent a value, it represents a place for a value. What would an "assignable reference" mean? Say, variable "x" holds an "assignable reference", what can be done next? Alexey. -- https://mail.python.org/mailman/listinfo/python-list
Re: Syntax for one-line "nonymous" functions in "declaration style"
On jeu., mars 28, 2019 at 8:57 PM, Terry Reedy wrote: But i see your point about never assigning lambdas directly, it makes sense. But sometimes i do assign short lambdas directly to variable. Is the convenience and (very low) frequency of applicability worth the inconvenience of confusing the meaning of '=' and complicating the implementation? I do not see any conflicts with the existing syntax. It heavily conflicts with existing syntax. The current meaning of target_expression = object_expression is 1. Evaluate object_expression in the existing namespace to an object, prior to any new bindings and independent of the target_expression. 2. Evaluate target_expression in the existing namespace to one or more targets. 3. Bind object to target or iterate target to bind to multiple targets. I do not thick so. In "x = 42" the variable x is not evaluated. All examples of the proposed syntax i can think of are currently illegal, so i suppose there is no conflicts. (I would appreciate a counterexample, if any.) You are talking about syntax conflicts, I am talking about semantic conflict, which is important for human understanding. I believe there is no semantic conflict either, or could you be more specific? Alexey. -- https://mail.python.org/mailman/listinfo/python-list
Re: Syntax for one-line "nonymous" functions in "declaration style"
On jeu., mars 28, 2019 at 8:57 PM, Terry Reedy wrote: On 3/28/2019 12:29 PM, Alexey Muranov wrote: On jeu., Mar 28, 2019 at 5:00 PM, python-list-requ...@python.org wrote: So my opinion is that lambda expressions should only be used within larger expressions and never directly bound. It would however be more convenient to be able to write instead just f(x) = x*x Given my view above, this is, standing alone, strictly an abbreviation of the equivalent def statement. I am presuming that a proper implementation would result in f.__name__ == 'f'. No, after some thought, i think it should be an abbreviation of "f = lambda x: x*x", and f.__name__ would still be '<lambda>'.

Throwing the name away is foolish. Testing functions is another situation in which function names are needed for proper reports.

My idea however was to have it as an exact synonym of an assignment of a lambda. Assignment is an assignment, it should not modify the attributes of the value that is being assigned.

But i see your point about never assigning lambdas directly, it makes sense. But sometimes i do assign short lambdas directly to a variable.

Is the convenience and (very low) frequency of applicability worth the inconvenience of confusing the meaning of '=' and complicating the implementation?

I do not see any conflicts with the existing syntax.

It heavily conflicts with existing syntax. The current meaning of target_expression = object_expression is
1. Evaluate object_expression in the existing namespace to an object, prior to any new bindings and independent of the target_expression.
2. Evaluate target_expression in the existing namespace to one or more targets.
3. Bind object to target or iterate target to bind to multiple targets.

I do not think so. In "x = 42" the variable x is not evaluated. All examples of the proposed syntax i can think of are currently illegal, so i suppose there are no conflicts. (I would appreciate a counterexample, if any.)

You are talking about syntax conflicts, I am talking about a semantic conflict, which is important for human understanding.

Thanks for the reference to PEP 8, this is indeed an argument against.

The situation in which assigning lambda expressions is more tempting is when assigning to attributes or dicts.

    def double(x): return x*x
    C.double = double
    d['double'] = double

versus

    C.double = lambda x: x*x
    d['double'] = lambda x: x*x

These are some of the examples i had in mind as well:

    C.double(x) = x*x
    d['double'](x) = x*x

Alexey.

-- https://mail.python.org/mailman/listinfo/python-list
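For what it's worth, a short illustration of the __name__ point being debated here, using a plain dict and no new syntax:

    def double(x):
        return x * x

    d = {}
    d['double_def'] = double
    d['double_lambda'] = lambda x: x * x

    print(d['double_def'].__name__)     # double
    print(d['double_lambda'].__name__)  # <lambda>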
Re: Syntax for one-line "nonymous" functions in "declaration style"
On jeu., mars 28, 2019 at 5:00 PM, python-list-requ...@python.org wrote: So documentation of that syntax would 100% be required Regarding documentation, i believe there would be 3 lines to add:

    <name>(<params>) = <expression>

is syntactic sugar for

    <name> = lambda <params>: <expression>

Alexey.

-- https://mail.python.org/mailman/listinfo/python-list
Re: Syntax for one-line "nonymous" functions in "declaration style"
On ven., Mar 29, 2019 at 4:51 PM, python-list-requ...@python.org wrote: On Thu, Mar 28, 2019 at 2:30 PM Alexey Muranov wrote: On jeu., mars 28, 2019 at 8:57 PM, Terry Reedy wrote: > Throwing the name away is foolish. Testing functions is another > situation in which function names are needed for proper reports. My idea however was to have it as an exact synonym of an assignment of a lambda. Assignment is an assignment, it should not modify the attributes of the value that is being assigned. There could perhaps be a special case for lambda expressions such that, when they are directly assigned to a variable, Python would use the variable name as the function name. I expect this could be accomplished by a straightforward transformation of the AST, perhaps even by just replacing the assignment with a def statement. If this will happen, that is, if in Python assigning a lambda-defined function to a variable will mutate the function's attributes, or else, if in some "random" syntactically-determined cases f = ... will stop being the same as evaluating the right-hand side and assigning the result to the "f" variable, it will be a fairly good extra reason for me to go away from Python. Since this could just as easily be applied to lambda though, I'm afraid it doesn't offer much of a case for the "f(x)" syntactic sugar. I did not get this. My initial idea was exactly about introducing syntactic sugar for better readability. I've already understood that the use cases contradict PEP 8 recommendations. Alexey. -- https://mail.python.org/mailman/listinfo/python-list
Re: Syntax for one-line "nonymous" functions in "declaration style"
On dim., Mar 31, 2019 at 6:00 PM, python-list-requ...@python.org wrote: On Sat, Mar 30, 2019, 5:32 AM Alexey Muranov wrote: On ven., Mar 29, 2019 at 4:51 PM, python-list-requ...@python.org wrote: > > There could perhaps be a special case for lambda expressions such > that, > when they are directly assigned to a variable, Python would use the > variable name as the function name. I expect this could be > accomplished by > a straightforward transformation of the AST, perhaps even by just > replacing > the assignment with a def statement. If this will happen, that is, if in Python assigning a lambda-defined function to a variable will mutate the function's attributes, or else, if is some "random" syntactically-determined cases f = ... will stop being the same as evaluating the right-hand side and assigning the result to "f" variable, it will be a fairly good extra reason for me to go away from Python. Is there a particular reason you don't like this? It's not too different from the syntactic magic Python already employs to support the 0-argument form of super(). I do not want any magic in a programming language i use, especially if it breaks simple rules. I do not like 0-argument `super()` either, but at least I do not have to use it. I am suspicious of `__class__` too. But here only identifiers are hacked, not the assignment operator. (I suppose the hack can be unhacked by using your own meta-class with a custom `__prepare__`.) Neither i like how a function magically turns into a generator if the keyword `yield` appears somewhere within its definition. Those are the things i don't like the most in Python. Alexey. -- https://mail.python.org/mailman/listinfo/python-list
Re: Syntax for one-line "nonymous" functions in "declaration style"
On lun., avril 1, 2019 at 6:00 PM, python-list-requ...@python.org wrote: On Sun, Mar 31, 2019 at 1:09 PM Alexey Muranov wrote: On dim., Mar 31, 2019 at 6:00 PM, python-list-requ...@python.org wrote: > On Sat, Mar 30, 2019, 5:32 AM Alexey Muranov > > wrote: > >> >> On ven., Mar 29, 2019 at 4:51 PM, python-list-requ...@python.org >> wrote: >> > >> > There could perhaps be a special case for lambda expressions such >> > that, >> > when they are directly assigned to a variable, Python would use >> the >> > variable name as the function name. I expect this could be >> > accomplished by >> > a straightforward transformation of the AST, perhaps even by just >> > replacing >> > the assignment with a def statement. >> >> If this will happen, that is, if in Python assigning a >> lambda-defined >> function to a variable will mutate the function's attributes, or >> else, >> if is some "random" syntactically-determined cases >> >> f = ... >> >> will stop being the same as evaluating the right-hand side and >> assigning the result to "f" variable, it will be a fairly good extra >> reason for me to go away from Python. >> > > Is there a particular reason you don't like this? It's not too > different > from the syntactic magic Python already employs to support the > 0-argument > form of super(). I do not want any magic in a programming language i use, especially if it breaks simple rules. I do not like 0-argument `super()` either, but at least I do not have to use it. Well, you wouldn't have to use my suggestion either, since it only applies to assignments of the form "f = lambda x: blah". As has already been stated, the preferred way to do this is with a def statement. So just use a def statement for this, and it wouldn't affect you (unless you *really* want the function's name to be "" for some reason). I only see a superficial analogy with `super()`, but perhaps it is because you did not give much details of you suggestion. Not only i do not have to use `super()` (i do not have to use Python either), but the magic behaviour of `super` is explained by special implicit environments in which some blocks of code are executed. Though this looks somewhat hackish, it gives me no clue of how your idea of mutating objects during assignment is supposed to work. On the other hand, i do use assignment in Python, and you seem to propose to get rid of assignment or to break it. Note that foo.bar = baz and foo[bar] = baz are not assignments but method calls, but foo = bar it an assignment (if i understand the current model correctly). Do you propose to desugar it into a method/function call and to get rid of assignments in the language completely? Will the user be able to override this method? Something like: setvar("foo", bar) # desugaring of foo = bar Would the assignment operation remain in the language under a different name? Maybe, foo <- bar ? I am so perplexed by the proposed behaviour of `f = lambda...`, that i need to ask the followng: am i right to expact that 1. f = lambda x: x, g = lambda x: x*x 2. (f, g) = (lambda x: x, lambda x: x*x) 3. (f, g) = _ = (lambda x: x, lambda x: x*x) 4. f = (lambda x: x)(lambda x: x) g = (lambda x: x)(lambda x: x*x) Will all have the same net effect? I suppose in any case that return lambda x: and result = lambda x: return result would not return the same result, which is not what i want. I tried to imagine what semantics of the language could cause your proposed behaviour of `f = lambda...` and couldn't think of anything short of breaking the language. 
That said, that's also the reason why this probably wouldn't happen. Why go to the trouble of fixing people's lambda assignments for them when the preferred fix would be for them to do it themselves by replacing them with def statements? It is not fixing, it is breaking. Alexey. -- https://mail.python.org/mailman/listinfo/python-list
Generator definition syntax (was: Syntax for one-line "nonymous" functions)
On mar., Apr 2, 2019 at 4:31 AM, python-list-requ...@python.org wrote: Re: ">> Neither i like how a function magically turns into a generator if the keyword `yield` appears somewhere within its definition. I agree, there should have been a required syntactic element on the "def" line as well to signal it immediately to the reader. It won't stop me from using them, though." One way to save people looking at the code from having to look through a function for a yield statement to see if it is a generator would be to add a """doc string""" immediately after the function def, saying that it is a generator and describing what it does. I realize I'm calling on the programmer to address this issue by adding doc strings. Nonetheless adding doc strings is a good habit to get in to. --- Joseph S. And even if Python did not have docstrings, the programmer could still use comments to tell a fellow programmer what kind of code the fellow programmer is looking at. Even languages like Brainfuck have comments :). Alexey. -- https://mail.python.org/mailman/listinfo/python-list
Re: Syntax for one-line "nonymous" functions in "declaration style"
On Mon, Apr 1, 2019 at 3:52 PM Alexey Muranov gmail.com> wrote: > > I only see a superficial analogy with `super()`, but perhaps it is > because you did not give much details of you suggestion. No, it's because the analogy was not meant to be anything more than superficial. Both are constructs of syntactic magic that aid readability at a high level but potentially obscure the details of execution (in relatively unimportant ways) when examined at a low level. Since i understand that the "super() magic" is just evaluation in a predefined environment, it does not look so very magic. I do not know if Python can already manipulate blocks of code and environments as first-class objects, but in any case this does not look to be too far from the its normal behaviour/semantics. Moreover, without this "magic", `super()` would have just produced an error. So this magic did not change behaviour of something that worked before, it made "magically" work something that did not work before (but i am still not excited about it). On the contrary, i have no idea of how in the current semantics executing an assignment can mutate the assigned value. > On the other hand, i do use assignment in Python, and you seem to > propose to get rid of assignment or to break it. I thought the proposal was clear and succinct. "When [lambda expressions] are directly assigned to a variable, Python would use the variable name as the function name." That's all. I don't know where you got the idea I was proposing "to get rid of assignment". I suppose we use the term "assignment operation" differently. By assignment i mean evaluating the expressions in the right-hand side and assigning (binding?) the value to the variable or location described in the left-hand side. I believed this was the usual meaning of "assignment"... The behaviour you describe cannot happen during assignment in this sense. Maybe it was from my talk of implementing this by replacing the assignment with an equivalent def statement in the AST. Bear in mind that the def statement is already just a particular kind of assignment: it creates a function and assigns it to a name. The only difference between the original assignment and the def statement that replaces it is in the __name__ attribute of the function object that gets created. The proposal just makes the direct lambda assignment and the def "assignment" to be fully equivalent. `def` is not an assignment, it is more than that. Alexey. -- https://mail.python.org/mailman/listinfo/python-list
Re: Syntax for one-line "nonymous" functions in "declaration style"
On mar., Apr 2, 2019 at 6:00 PM, python-list-requ...@python.org wrote: On Tue, Apr 2, 2019 at 1:43 AM Alexey Muranov wrote: > On Mon, Apr 1, 2019 at 3:52 PM Alexey Muranov gmail.com> > wrote: > > > > I only see a superficial analogy with `super()`, but perhaps it is > > because you did not give much details of you suggestion. > > No, it's because the analogy was not meant to be anything more than > superficial. Both are constructs of syntactic magic that aid > readability at > a high level but potentially obscure the details of execution (in > relatively unimportant ways) when examined at a low level. Since i understand that the "super() magic" is just evaluation in a predefined environment, it does not look so very magic. It's the reason why this doesn't work: superduper = super class A: def f(self): return 42 class B(A): def f(self): return superduper().f() B().f() Traceback (most recent call last): File "", line 1, in File "", line 3, in f RuntimeError: super(): __class__ cell not found But this does: class C(A): def f(self): return superduper().f() not super C().f() 42 I don't know, seems magical to me. Moreover, without this "magic", `super()` would have just produced an error. So this magic did not change behaviour of something that worked before, it made "magically" work something that did not work before (but i am still not excited about it). I'm curious how you feel about this example then (from the CPython 3.7.2 REPL; results from different Python implementations or from scripts that comprise a single compilation unit may vary)? 372 is 372 True b = 372; b is 372 True b = 372 b is 372 False > Maybe it was from my talk of implementing this by replacing the > assignment > with an equivalent def statement in the AST. Bear in mind that the def > statement is already just a particular kind of assignment: it creates > a > function and assigns it to a name. The only difference between the > original > assignment and the def statement that replaces it is in the __name__ > attribute of the function object that gets created. The proposal just > makes > the direct lambda assignment and the def "assignment" to be fully > equivalent. `def` is not an assignment, it is more than that. def is an assignment where the target is constrained to a single variable and the expression is constrained to a newly created function object (optionally "decorated" first with one or more composed function calls). The only ways in which: @decorate def foo(blah): return stuff is more than: foo = decorate(lambda blah: stuff) are: 1) the former syntactically allows statements inside the function body, not just expressions; 2) the former syntactically allows annotations on the function; and 3) the former syntactically sets a function name and the latter doesn't. In other words, all of the differences ultimately boil down to syntax. Sorry, i do not feel like continuing this discussion for much longer, or we need to concentrate on some specific statement on which we disagree. I clarified what i meant by an assignment, and i believe it to be a usual meaning. 1. `def` is not an assignment, there is no left-hand side or right-hand side. I was talking about the normal assignment by which anyone can bind any value to any variable. 2. If i execute an assignment statement foo = ... and instead of evaluating the right-hand side and assigning the value to "foo" variable Python does something else, i consider the assignment operation ( = ) broken, as it does not do assignment (and only assignment). 
I've said more on this in previous messages.

3. About the examples with `372 is 372`: Python gives no guarantees about the id's of numerical objects, or about the id's of many other types of immutable objects. The user is not supposed to rely on their equality or inequality. Any time Python's interpreter encounters two immutable objects that it finds identical, it is free to allocate a single object for both; this does not change the guaranteed semantics of the program. The '__name__' attribute of an object, as well as most (or all) other attributes, is a part of the object's value/contents, so there is no analogy with the id's.

I am sorry, but except maybe for one or two more very specific questions, I am probably not going to continue.

Alexey.

-- https://mail.python.org/mailman/listinfo/python-list
Re: Syntax for one-line "nonymous" functions in "declaration style"
On mer., Apr 3, 2019 at 6:00 PM, python-list-requ...@python.org wrote: On Wed, Apr 3, 2019 at 3:55 AM Alexey Muranov wrote: I clarified what i meant by an assignment, and i believe it to be a usual meaning. 1. `def` is not an assignment, there is no left-hand side or right-hand side. I was talking about the normal assignment by which anyone can bind any value to any variable. Actually, a 'def' statement DOES perform assignment. It does a bit more than that, but it definitely is assignment. You can easily check the CPython disassembly: A command that performs an assignment among other things is not an assignment command itself. Alexey. -- https://mail.python.org/mailman/listinfo/python-list
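For readers who want to see the disassembly referred to above, something along these lines shows it (the exact opcode names vary between CPython versions, so the comments are indicative rather than exact):

    import dis

    dis.dis(compile("def f(x): return x * x", "<example>", "exec"))
    # The module-level bytecode builds the function object (MAKE_FUNCTION)
    # and then binds it to the name 'f' (STORE_NAME), i.e. the def statement
    # ends in an ordinary name binding.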
Re: Why is a JIT compiler faster than a byte-compiler
http://www-106.ibm.com/developerworks/linux/library/l-psyco.html?t=gr,lnxw03=PsycoC http://gnosis.cx/publish/programming/charming_python_b9.html On Sat, 26 Mar 2005 11:22:03 +0100, Torsten Bronger <[EMAIL PROTECTED]> wrote: > Hallöchen! > > "dodoo" <[EMAIL PROTECTED]> writes: > > > http://www-900.ibm.com/developerworks/cn/linux/sdk/python/charm-28/index_eng.shtml > > I can't reach it. Is there an alternative URL? > > Tschö, > Torsten. > > -- > Torsten Bronger, aquisgrana, europa vetus > -- > http://mail.python.org/mailman/listinfo/python-list > -- Best regards, Alexey. -- http://mail.python.org/mailman/listinfo/python-list
Permission denied when opening a file that was created concurrently by os.rename (Windows)
Hello! I've hit a strange problem that I reduced to the following test case:

* Run several Python processes in parallel that spin in the following loop:

    while True:
        if os.path.isfile(fname):
            with open(fname, 'rb') as f:
                f.read()
            break

* Then, run another process that creates a temporary file and then renames it to the name that the other processes are expecting.

* Now, some of the reading processes occasionally fail with a "Permission denied" OSError.

I was able to reproduce it on two Windows 7 64-bit machines. It seems that when the file appears on the filesystem it is still unavailable for reading, but I have no idea how that can happen. Both source and destination files are in the same directory, and the destination doesn't exist before calling os.rename. Everything I could find indicates that os.rename should be atomic under these conditions even on Windows, so nobody should be able to observe the destination in an inaccessible state.

I know that I can work around this problem by removing the useless os.path.isfile() check and wrapping open() with try-except, but I'd like to know the root cause of the problem. Please share your thoughts.

The test case is attached, the main file is test.bat. Python is expected to be in PATH. Stderr of the readers is redirected to *.log. You may need to run it several times to hit the issue.

Alexey Izbyshev, research assistant, ISP RAS

-- https://mail.python.org/mailman/listinfo/python-list
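A sketch of the try/except workaround mentioned above (the attempt count and delay are arbitrary, and this only works around the symptom rather than explaining the root cause):

    import time

    def read_when_ready(fname, attempts=50, delay=0.1):
        # Retry instead of trusting os.path.isfile(): on the affected
        # machines the freshly renamed file may briefly refuse to open.
        for _ in range(attempts):
            try:
                with open(fname, 'rb') as f:
                    return f.read()
            except (IOError, OSError):
                time.sleep(delay)
        raise IOError("gave up waiting for %r" % fname)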
Teatro.io - features preview for web-applications in one click
During developing web-projects, manager always need to test new features. Typically, this is done using test servers. Often, manager cannot run a test server himself to see new features and has to ask the developers for the help, distracting them from their work. Besides purchased test equipment is not necessary 90% of the time. Teatro automatically creates a test server in the cloud for each new feature at the time it was added to the project. The manager doesn't need to configure anything himself - when developers have prepared a new feature which they want to show to their manager, they make changes to the code. Teatro sees these changes, launches a personal test server for this concrete feature and sends a unique link to the manager. It only remains to follow the link and start testing. Teatro deploys a dedicated cloud server for each Pull Request (change in the code) on GitHub, and provides a comment with the link to the server. No need to do anything manually - Teatro automatically creates server specially for each branch. Service monitors the new commits and updates code on the server automatically. Thus, you can run any number of parallel test servers under each feature and only when it is needed. Servers are not "live" all the time, and are suspended during idle time and do not waste resources. Access to the service is provided on a monthly subscription basis. Pricing depends on a maximum number of parallel servers and projects. Our features: - Everything is set automatically, without the participation of developers and system administrators - You can test a few features parallel at any time - Teatro lets you save money, because no longer need to have your own equipment and spend valuable human resources for serving test infrastructure We offer special prices for early adopters. If you're interested in it, write me at ale...@teatro.io -- https://mail.python.org/mailman/listinfo/python-list
Re: comments on runpy module
Example script.py:

    """
    def f(arg):
        return g(arg)

    def g(arg):
        return arg
    """

Reading Lib/runpy.py I've found that the temporary module created inside the run_path() call is destroyed right after the script.py code is executed in the resulting namespace.

I've got an idea. It would be nice if there existed such a way to use it:

    with runpy.run_path("script.py") as a_namespace:
        a_namespace["f"]("abc")

-- Regards, Alex.
-- http://mail.python.org/mailman/listinfo/python-list
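A sketch of a wrapper that would give this "with" form today (the name run_path_ns is made up); note that it only keeps the copied namespace alive for the duration of the block, it does not by itself stop runpy from tearing down the temporary module's globals:

    import runpy
    from contextlib import contextmanager

    @contextmanager
    def run_path_ns(path):
        namespace = runpy.run_path(path)
        try:
            yield namespace
        finally:
            namespace.clear()

    # with run_path_ns("./script.py") as ns:
    #     ns["g"]("abc")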
comments on runpy module
Hi! I've just had fun with the runpy module in Python 2.7. I'm writing to share it :)

What I've tried is to "load" a Python script using runpy.run_path(), take a function from the resulting namespace and call it with arbitrary arguments. All the functions in the namespace seem to be ok: repr(namespace["f"]) gives "<function f at 0x...>". But if f() refers to the module's namespace (I supposed it is the same as the returned one), all the values appear to be None.

Example script.py:

    """
    def f(arg):
        return g(arg)

    def g(arg):
        return arg
    """

Then running main.py:

    """
    import runpy
    namespace = runpy.run_path("./script.py")
    print namespace["f"]
    print namespace["g"]
    print namespace["f"]("abc")
    """

gives such an output:

    """
    Traceback (most recent call last):
      File "main.py", line 7, in <module>
        print namespace["f"]("abc")
      File "./script.py", line 2, in f
        return g(arg)
    TypeError: 'NoneType' object is not callable
    """

Reading Lib/runpy.py I've found that the temporary module created inside the run_path() call is destroyed right after the script.py code is executed in the resulting namespace. I suppose that it is either an issue or a feature that should be documented :)

-- Have a good time and a good mood! Alex.
-- http://mail.python.org/mailman/listinfo/python-list
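A workaround that keeps the functions' globals alive is to execute the script into a dictionary you own (a sketch for the Python 2.7 case described above; in Python 3, exec(open(path).read(), namespace) plays the same role):

    namespace = {}
    execfile("./script.py", namespace)
    print namespace["f"]("abc")   # prints "abc": g is still reachable via the shared globals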
Relative import bug or not?
After reading PEP-0328 I wanted to give relative imports a try:

    # somepkg/__init__.py

    # somepkg/test1.py
    from __future__ import absolute_import
    from . import test2

    if __name__ == "__main__":
        print "Test"

    # somepkg/test2.py

But it complains:

    C:\1\somepkg>test1.py
    Traceback (most recent call last):
      File "C:\1\somepkg\test1.py", line 1, in <module>
        from . import test2
    ValueError: Attempted relative import in non-package

Does this mean that packages that implement self-tests are not allowed to use relative imports? Or is it just a bug? I can understand that I can use "import test2" when it's __main__, but when it's not, it now complains that there is no module test2 with absolute_import on.

PEP-0328 also has this phrase: "Relative imports use a module's __name__ attribute to determine that module's position in the package hierarchy. If the module's name does not contain any package information (e.g. it is set to '__main__') then relative imports are resolved as if the module were a top level module, regardless of where the module is actually located on the file system.", but maybe my English knowledge is not really good, because I can't understand what should actually happen here ("relative imports are resolved as if the module were a top level module")... :-/

So is it a bug, or am I doing something wrong?

-- http://mail.python.org/mailman/listinfo/python-list
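This is the documented behaviour rather than a bug. On Python 2.6 and later (PEP 366), the usual way to keep such a self-test is to run the file as a submodule of its package, so that __package__ is set and the relative import has something to resolve against:

    # From the directory that contains somepkg (here C:\1), run:
    #
    #     python -m somepkg.test1
    #
    # Running C:\1\somepkg\test1.py directly leaves __name__ == "__main__"
    # with no package information, which is exactly the failing case the
    # quoted PEP paragraph describes.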
Re: A friendlier, sugarier lambda -- a proposal for Ruby-like blocks in python
[EMAIL PROTECTED] wrote:
> Compared to the Python I know and love, Ruby isn't quite the same.
> However, it has at least one terrific feature: "blocks".

Well, I particularly like how Boo (http://boo.codehaus.org) has done it:

    func(a, b, c) def(p1, p2, p3):
        stmts

I was so attached to these "nameless" def-forms that I was even shocked when I found that this doesn't work in Python:

    f = def(a, b):
        return a*b

Another good feature of Boo, btw.

-- http://mail.python.org/mailman/listinfo/python-list
Re: A friendlier, sugarier lambda -- a proposal for Ruby-like blocks in python
[EMAIL PROTECTED] wrote:
> but maybe it reduces code readability a bit for people
> that have just started to program:
>
> mul2 = def(a, b):
>     return a * b
>
> Instead of:
>
> def mul2(a, b):
>     return a * b

For such simple cases, yes. What about:

    button.click += def(obj):
        # do stuff

You obviously can't:

    def button.click(obj):
        # do stuff

:-)

And if you make an intermediate function and then assign it somewhere, it "pollutes the namespace": it's still left there, unneeded.

-- http://mail.python.org/mailman/listinfo/python-list
Re: chained attrgetter
On Oct 25, 10:00 pm, "David S." <[EMAIL PROTECTED]> wrote:
> Does something like operator.getattr exist to perform a chained attr
> lookup?

Do you mean something like

    class cattrgetter:
        def __init__(self, name):
            self.names = name.split('.')

        def __call__(self, obj):
            for name in self.names:
                obj = getattr(obj, name)
            return obj

?

-- http://mail.python.org/mailman/listinfo/python-list
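Worth noting for later readers: from Python 2.6 onwards, operator.attrgetter itself accepts dotted names, so the chained lookup no longer needs a custom class (the nested owner/address/city objects below are made up for illustration):

    from operator import attrgetter

    class NS(object):
        def __init__(self, **kw):
            self.__dict__.update(kw)

    order = NS(owner=NS(address=NS(city='Paris')))

    get_city = attrgetter('owner.address.city')   # dotted path, Python 2.6+
    assert get_city(order) == 'Paris'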
Re: [Zopyrus] A python IDE for teaching that supports cyrillic i/o
Hello, Kirill. You wrote on 18 November 2006 at 22:22:48:

>> > Could anyone suggest me a simple IDE suitable for teaching Python as a
>> > first programming language to high school students?
>>
>> Does it have to be an IDE? Wouldn't it be better to use a simple text
>> editor + command line?
>
> Preferably. I believe that using an editor + command line will only make
> things worse because the console and the GUI have different encodings under
> Windows. So a student who has written a script in a GUI editor and saved
> it in UTF-8 would see garbage in the console.

Is it all right if I write in Russian? It seems we are all friends here :)

I have also been thinking about choosing a decent IDE. Besides an editor and a way to run scripts, an IDE should also have a proper debugger, version control integration, code completion... Under Windows, the most decent one I have seen so far seems to be Komodo. Even then, it glitches now and then, hangs, and of the version control systems it only knows CVS and SVN; it cannot handle SourceOffSite :( For someone used to a proper IDE like the one in Visual Studio, that is rough :)

-- Regards, Alexey mailto:[EMAIL PROTECTED]
-- http://mail.python.org/mailman/listinfo/python-list
Re: Python assignment loop
On May 21, 8:12 am, "Silver Rock" <[EMAIL PROTECTED]> wrote:
> yes, that is the way I a solving the problem. using lists. so it seems
> that there is no way around it then..

There's at least one way to do it that I can think of straight away:

    selfmodule = __import__(__name__, None, None, (None,))
    setattr(selfmodule, "varname", value)

But I can't say it's anywhere near elegant.

-- http://mail.python.org/mailman/listinfo/python-list
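For reference, the same effect can be had without re-importing the module (a sketch; "varname" and the value 42 are just placeholders):

    import sys

    value = 42
    setattr(sys.modules[__name__], "varname", value)  # the module object itself
    globals()["varname"] = value                       # equivalent at module level

    assert varname == 42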
Import of egg packages installed with easy_install
Hi. There's a package already installed with easy_install, let's say flup, into the home directory:

    $ ls -la ~/python/lib/python2.5/site-packages/
    total 176
    drwxr-xr-x 3   4096 Nov 29 18:57 .
    drwxr-xr-x 3   4096 Nov 29 18:51 ..
    -rw-r--r-- 1    208 Nov 29 18:57 easy-install.pth
    -rw-r--r-- 1 134573 Nov 29 18:51 flup-1.0.1-py2.5.egg
    -rw-r--r-- 1   2362 Nov 29 18:51 site.py
    -rw-r--r-- 1   1853 Nov 29 18:51 site.pyc

    $ cat ~/python/lib/python2.5/site-packages/easy-install.pth
    import sys; sys.__plen = len(sys.path)
    ./flup-1.0.1-py2.5.egg
    import sys; new=sys.path[sys.__plen:]; del sys.path[sys.__plen:]; p=getattr(sys,'__egginsert',0); sys.path[p:p]=new; sys.__egginsert = p+len(new)

    $ echo $PYTHONPATH
    /usr/lib64/portage/pym:/home/username/python/lib64/python2.5/site-packages

    $ python
    Python 2.5.2 (r252:60911, Nov 13 2008, 15:01:36)
    [GCC 4.1.2 (Gentoo 4.1.2 p1.1)] on linux2
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import flup

No errors. Then I create a simple CGI script:

    #!/usr/bin/python
    print "Content-type: text/plain"; print

    import sys
    sys.path.insert(0, '/home/username/python/lib64/python2.5/site-packages')
    print sys.path

    import flup

The browser says:

    ['/home/username/python/lib64/python2.5/site-packages', '/home/username/http', '/usr/lib64/python25.zip', '/usr/lib64/python2.5', '/usr/lib64/python2.5/plat-linux2', '/usr/lib64/python2.5/lib-tk', '/usr/lib64/python2.5/lib-dynload', '/usr/lib64/python2.5/site-packages']

and in the error log:

    [Sat Nov 29 19:41:15 2008] [error] Traceback (most recent call last):
    [Sat Nov 29 19:41:15 2008] [error]   File "path.cgi", line 9, in <module>
    [Sat Nov 29 19:41:15 2008] [error]     import flup
    [Sat Nov 29 19:41:15 2008] [error] ImportError: No module named flup

If you run it from the console, you get the same output, but one more path also appears:

    /home/username/python/lib64/python2.5/site-packages/flup-1.0.1-py2.5.egg

As I understand it, that is actually the problem, but I can't see why sys.path doesn't contain this path when the script is invoked over HTTP.

-- BRGDS. Alexey Vlasov.
-- http://mail.python.org/mailman/listinfo/python-list
Re: Import of egg packages installed with easy_install
Hi Diez.

On Mon, Dec 01, 2008 at 08:09:29PM +0100, Diez B. Roggisch wrote:
> It's not sufficient to simply add your local site-packages, you must
> add it using the site module's addsitedir function, like this:
>
> import site
> site.addsitedir("/home/username/python/lib/python2.5/site-packages")

Thanks for the help.

-- BRGDS. Alexey Vlasov.
-- http://mail.python.org/mailman/listinfo/python-list
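Putting the advice together with the CGI script from the original post, the fix would look roughly like this (a sketch; paths as in the earlier messages):

    #!/usr/bin/python
    print "Content-type: text/plain"; print

    import site
    # addsitedir processes easy-install.pth, which is what actually puts
    # flup-1.0.1-py2.5.egg onto sys.path (a bare sys.path.insert does not)
    site.addsitedir('/home/username/python/lib/python2.5/site-packages')

    import flup
    print flup.__file__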
Re: Problem with lower() for unicode strings in russian
Martin, thanks for the fast reply, now everything is ok!

On Oct 6, 1:30 am, "Martin v. Löwis" <[EMAIL PROTECTED]> wrote:
> > I have a set of strings (all letters are capitalized) at utf-8,
>
> That's the problem. If these are really utf-8 encoded byte strings,
> then .lower likely won't work. It uses the C library's tolower API,
> which works on a byte level, i.e. can't work for multi-byte encodings.
>
> What you need to do is to operate on Unicode strings. I.e. instead of
>
>     s.lower()
>
> do
>
>     s.decode("utf-8").lower()
>
> or (if you need byte strings back)
>
>     s.decode("utf-8").lower().encode("utf-8")
>
> If you find that you write the latter, I recommend that you redesign
> your application. Don't use byte strings to represent text, but use
> Unicode strings all the time, except at the system boundary (where
> you decode/encode as appropriate).
>
> There are some limitations with Unicode .lower also, but I don't
> think they apply to Russian (specifically, SpecialCasing.txt is
> not considered).
>
> HTH,
> Martin

-- http://mail.python.org/mailman/listinfo/python-list
Problem with lower() for unicode strings in russian
Hi! I have a set of strings (all letters are capitalized) in utf-8, Russian language. I need to lowercase them, but my_string.lower() doesn't work. See the sample script:

    # -*- coding: utf-8 -*-
    [skip]
    s1 = self.title
    s2 = self.title.lower()
    print s1 == s2

returns True. I have no problems with lower() for English letters, or with something like u'russian_letters_here'.lower(), but I don't need constants, I need to modify variables, and there are no changes when I apply the lower() function to my strings.

-- http://mail.python.org/mailman/listinfo/python-list
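For completeness, a minimal Python 2 snippet showing the behaviour and the fix from Martin's reply above (assuming the source file and the terminal are both UTF-8):

    # -*- coding: utf-8 -*-
    s = 'ПРИВЕТ'                       # utf-8 encoded byte string
    print s.lower() == s                # True: byte-level lower() leaves the Cyrillic bytes alone
    u = s.decode('utf-8')               # unicode object
    print u.lower().encode('utf-8')     # привет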