PDF with non-Latin-1 characters
I want to generate a report, and PDF fits my needs perfectly. However, there is an issue with using a different encoding in the document. I tried PyPS with no success. I need a library that can produce PDFs with an arbitrary set of fonts (and possibly embed them into the document). What would you suggest? -- http://mail.python.org/mailman/listinfo/python-list
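One possible approach, not mentioned in the post itself (so treat the library choice and file names as assumptions), is ReportLab: it can register a TrueType font, embed it in the output, and then draw text in whatever characters that font covers. A minimal sketch:

    # Sketch using ReportLab; the font path and output name are placeholders.
    from reportlab.pdfgen import canvas
    from reportlab.pdfbase import pdfmetrics
    from reportlab.pdfbase.ttfonts import TTFont

    pdfmetrics.registerFont(TTFont('DejaVu', 'DejaVuSans.ttf'))  # embeds the TTF in the PDF
    c = canvas.Canvas('report.pdf')
    c.setFont('DejaVu', 12)
    c.drawString(72, 720, u'Non-Latin-1 text: Привет, světe')
    c.save()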
Re: how to stop python
or if u want explicit exit of program then use: import sys sys.exit(1) or raise SystemExit, 'message' Dan wrote: > bruce bedouglas at earthlink.net posted: > > > perl has the concept of "die". does python have anything > > similar. how can a python app be stopped? > > I see this sort of statement a lot in Perl: > open(FH, "myfile.txt") or die ("Could not open file"); > > I've no idea why you're asking for the Python equivalent to die, but if > it's for this sort of case, you don't need it. Usually in Python you > don't need to explicitly check for an error. The Python function will > raise an exception instead of returning an error code. If you want to > handle the error, enclose it in a try/except block. But if you just > want the program to abort with an error message so that you don't get > silent failure, it will happen automatically if you don't catch the > exception. So the equivalent Python example looks something like this: > fh = file("myfile.txt") > > If the file doesn't exist, and you don't catch the exception, you get > something like this: > $ ./foo.py > Traceback (most recent call last): >File "./foo.py", line 3, in ? > fh = file("myfile.txt") > IOError: [Errno 2] No such file or directory: 'myfile.txt' > > /Dan -- http://mail.python.org/mailman/listinfo/python-list
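Putting the options above together in one small Python 3 sketch (note the Python 2 form `raise SystemExit, 'message'` becomes `raise SystemExit('message')` in Python 3):

    import sys

    def main():
        try:
            fh = open("myfile.txt")          # raises OSError (IOError) if the file is missing
        except OSError as exc:
            # Prints the message to stderr and exits with status 1,
            # equivalent to raise SystemExit("Could not open file: ...").
            sys.exit("Could not open file: %s" % exc)
        print(fh.readline())

    if __name__ == "__main__":
        main()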
Python - forking an external process?
Hi,

I have a Python script where I want to fork and run an external command (or set of commands). For example, after doing , I then want to run ssh to a host, hand control back to the user, and have my script terminate. Or I might want to run ssh to a host, less a certain textfile, then exit.

What's the idiomatic way of doing this within Python? Is it possible to do with subprocess?

Cheers, Victor

(I did see this SO post - http://stackoverflow.com/questions/6011235/run-a-program-from-python-and-have-it-continue-to-run-after-the-script-is-kille, but it's a bit older, and I was going to see what the current idiomatic way of doing this is). -- http://mail.python.org/mailman/listinfo/python-list
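One standard-library possibility for the "start it and let my script die" case (a sketch, not necessarily the answer the thread settled on; the command is a placeholder and this is Unix-only, Python 3.3+):

    import subprocess

    # Start the child in its own session so it is detached from this script
    # and keeps running after the script terminates.
    proc = subprocess.Popen(
        ['sleep', '300'],                 # placeholder long-running command
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
        start_new_session=True,
    )
    print('started pid', proc.pid)
    # The script can now exit; the child carries on.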
Re: Python - forking an external process?
Hi, Hmm, this script is actually written using the Cliff framework (https://github.com/dreamhost/cliff). I was hoping to keep the whole approach fairly simple, without needing to pull in too much external stuff, or set anything up. There's no way to do it with just Python core is there? Also, what's this improvement you mentioned? Cheers, Victor On Wednesday, 3 July 2013 13:59:19 UTC+10, rusi wrote: > On Wednesday, July 3, 2013 9:17:29 AM UTC+5:30, Victor Hooi wrote: > > > Hi, > > > > > > I have a Python script where I want to run fork and run an external command > > > (or set of commands). > > > For example, after doing , I then want to run ssh to a host, handover > > > control back to the user, and have my script terminate. > > > > Seen Fabric? > > http://docs.fabfile.org/en/1.6/ > > > > Recently -- within the last month methinks -- there was someone who posted a > supposed improvement to it (forget the name) -- http://mail.python.org/mailman/listinfo/python-list
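For the "hand control over to ssh and have the script go away" case there is a core-only option as well (a sketch under the assumption of a Unix-like system; the host name is a placeholder):

    import os
    import subprocess

    host = 'example.com'   # placeholder host

    # Option 1: run ssh in the foreground; the script resumes (and can exit)
    # only after the interactive ssh session ends.
    subprocess.call(['ssh', host])

    # Option 2: replace the current Python process with ssh entirely.
    # execvp never returns, so the script is effectively terminated here.
    os.execvp('ssh', ['ssh', host])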
Re: how to insert random error in a programming
Try fuzzing. Examples: http://pypi.python.org/pypi/fusil/ http://peachfuzzer.com/ Victor -- http://mail.python.org/mailman/listinfo/python-list
Creating different classes dynamically?
Hi,

I have a directory tree with various XML configuration files. I then have separate classes for each type, which all inherit from a base. E.g.

    class AnimalConfigurationParser:
        ...

    class DogConfigurationParser(AnimalConfigurationParser):
        ...

    class CatConfigurationParser(AnimalConfigurationParser):

I can identify the type of configuration file from the root XML tag.

I'd like to walk through the directory tree, and create different objects based on the type of configuration file:

    for root, dirs, files in os.walk('./'):
        for file in files:
            if file.startswith('ml') and file.endswith('.xml') and 'entity' not in file:
                with open(os.path.join(root, file), 'r') as f:
                    try:
                        tree = etree.parse(f)
                        root = tree.getroot()
                        print(f.name)
                        print(root.tag)
                        # Do something to create the appropriate type of parser
                    except xml.parsers.expat.ExpatError as e:
                        print('Unable to parse file {0} - {1}'.format(f.name, e.message))

I have a dict with the root tags - I was thinking of mapping these directly to the functions - however, I'm not sure if that's the right way to do it? Is there a more Pythonic way of doing this?

    root_tags = {
        'DogRootTag': DogConfigurationParser(),
        'CatRootTag': CatConfigurationParser(),
    }

Cheers,
Victor -- http://mail.python.org/mailman/listinfo/python-list
TypeError: 'in ' requires string as left operand, not Element
Hi,

I'm getting a strange error when I try to run the following:

    for root, dirs, files in os.walk('./'):
        for file in files:
            if file.startswith('ml') and file.endswith('.xml') and 'entity' not in file:
                print(root)
                print(file)
                with open(os.path.join(root, file), 'r') as f:
                    print(f.name)
                    try:
                        tree = etree.parse(f)
                        root = tree.getroot()
                        print(f.name)
                        print(root.tag)
                    except xml.parsers.expat.ExpatError as e:
                        print('Unable to parse file {0} - {1}'.format(f.name, e.message))

The error is:

    Traceback (most recent call last):
      File "foo.py", line 275, in 
        marketlink_configfiles()
      File "foo.py", line 83, in bar
        with open(os.path.join(root, file), 'r') as f:
      File "C:\Python27\lib\ntpath.py", line 97, in join
        if path[-1] in "/\\":
    TypeError: 'in ' requires string as left operand, not Element

Cheers,
Victor -- http://mail.python.org/mailman/listinfo/python-list
Re: TypeError: 'in ' requires string as left operand, not Element
Hi, Ignore me - PEBKAC...lol. I used "root" both for the os.walk, and also for the root XML element. Thanks anyhow =). Cheers, Victor On Monday, 10 December 2012 11:52:34 UTC+11, Victor Hooi wrote: > Hi, > > > > I'm getting a strange error when I try to run the following: > > > > for root, dirs, files in os.walk('./'): > > for file in files: > > if file.startswith('ml') and file.endswith('.xml') and 'entity' > not in file: > > print(root) > > print(file) > > with open(os.path.join(root, file), 'r') as f: > > print(f.name) > > try: > > tree = etree.parse(f) > > root = tree.getroot() > > print(f.name) > > print(root.tag) > > except xml.parsers.expat.ExpatError as e: > > print('Unable to parse file {0} - {1}'.format(f.name, > e.message)) > > > > The error is: > > > > Traceback (most recent call last): > > File "foo.py", line 275, in > > marketlink_configfiles() > > File "foo.py", line 83, in bar > > with open(os.path.join(root, file), 'r') as f: > > File "C:\Python27\lib\ntpath.py", line 97, in join > > if path[-1] in "/\\": > > TypeError: 'in ' requires string as left operand, not Element > > > > Cheers, > > Victor -- http://mail.python.org/mailman/listinfo/python-list
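For anyone hitting the same traceback: the fix is simply to stop reusing the name root for the XML element, so the value from os.walk stays a string. A small sketch (the name xml_root is mine, not from the original script):

    import os
    import xml.etree.ElementTree as etree

    for root, dirs, files in os.walk('./'):
        for file in files:
            if file.startswith('ml') and file.endswith('.xml') and 'entity' not in file:
                with open(os.path.join(root, file), 'r') as f:
                    tree = etree.parse(f)
                    xml_root = tree.getroot()   # renamed: 'root' from os.walk is left untouched
                    print(f.name, xml_root.tag)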
Re: Creating different classes dynamically?
heya, Dave: Ahah, thanks =). You're right, my terminology was off, I want to dynamically *instantiate*, not create new classes. And yes, removing the brackets worked =). Cheers, Victor On Monday, 10 December 2012 11:53:30 UTC+11, Dave Angel wrote: > On 12/09/2012 07:35 PM, Victor Hooi wrote: > > > Hi, > > > > > > I have a directory tree with various XML configuration files. > > > > > > I then have separate classes for each type, which all inherit from a base. > > E.g. > > > > > > class AnimalConfigurationParser: > > > ... > > > > > > class DogConfigurationParser(AnimalConfigurationParser): > > > ... > > > > > > class CatConfigurationParser(AnimalConfigurationParser): > > > > > > > > > I can identify the type of configuration file from the root XML tag. > > > > > > I'd like to walk through the directory tree, and create different objects > > based on the type of configuration file: > > > > > > for root, dirs, files in os.walk('./'): > > > for file in files: > > > if file.startswith('ml') and file.endswith('.xml') and 'entity' > > not in file: > > > with open(os.path.join(root, file), 'r') as f: > > > try: > > > tree = etree.parse(f) > > > root = tree.getroot() > > > print(f.name) > > > print(root.tag) > > > # Do something to create the appropriate type of > > parser > > > except xml.parsers.expat.ExpatError as e: > > > print('Unable to parse file {0} - > > {1}'.format(f.name, e.message)) > > > > > > I have a dict with the root tags - I was thinking of mapping these directly > > to the functions - however, I'm not sure if that's the right way to do it? > > Is there a more Pythonic way of doing this? > > > > > > root_tags = { > > >'DogRootTag': DogConfigurationParser(), > > > 'CatRootTag': CatConfigurationParser(), > > > } > > > > > > Cheers, > > > Victor > > > > Your subject line says you want to create the classes dynamically, but > > that's not what your code implies. if you just want to decide which > > class to INSTANTIATE dynamically, that's easily done, and you have it > > almost right. In your dict you should leave off those parentheses. > > > > > > > > Then the parser creation looks something like: > >parser_instance = root_tags[root.tag] (arg1, arg2) > > > > where the arg1, arg2 are whatever arguments the __init__ of these > > classes expects. > > > > (untested) > > > > -- > > > > DaveA -- http://mail.python.org/mailman/listinfo/python-list
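Putting Dave's suggestion together in a self-contained sketch (class names are from the thread; the XML handling is stubbed out, and the constructor argument is invented for illustration):

    class AnimalConfigurationParser:
        def __init__(self, source_name):
            self.source_name = source_name

    class DogConfigurationParser(AnimalConfigurationParser):
        pass

    class CatConfigurationParser(AnimalConfigurationParser):
        pass

    root_tags = {
        'DogRootTag': DogConfigurationParser,   # note: no parentheses -- store the class itself
        'CatRootTag': CatConfigurationParser,
    }

    def make_parser(root_tag, source_name):
        try:
            cls = root_tags[root_tag]
        except KeyError:
            raise ValueError('Unknown configuration type: {0}'.format(root_tag))
        return cls(source_name)            # instantiate the chosen class here

    parser = make_parser('DogRootTag', 'ml_dogs.xml')
    print(type(parser).__name__)           # DogConfigurationParser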
Using regexes versus "in" membership test?
Hi,

I have a script that trawls through log files looking for certain error conditions. These are identified via certain keywords (all different) in those lines. I then process those lines using regex groups to extract certain fields.

Currently, I'm using a for loop to iterate through the file, and a dict of regexes:

    breaches = {
        'type1': re.compile(r'some_regex_expression'),
        'type2': re.compile(r'some_regex_expression'),
        'type3': re.compile(r'some_regex_expression'),
        'type4': re.compile(r'some_regex_expression'),
        'type5': re.compile(r'some_regex_expression'),
    }
    ...
    with open('blah.log', 'r') as f:
        for line in f:
            for breach in breaches:
                results = breaches[breach].search(line)
                if results:
                    self.logger.info('We found an error - {0} - {1}'.format(results.group('errorcode'), results.group('errormsg')))
                    # We do other things with other regex groups as well.

(This isn't the *exact* code, but it shows the logic/flow fairly closely.)

For completeness, my regexes could possibly be tuned; they look something like this:

    (?P\d{2}:\d{2}:\d{2}.\d{9})\s*\[(?P\w+)\s*\]\s*\[(?P\w+)\s*\]\s*\[{0,1}\]{0,1}\s*\[(?P\w+)\s*\]\s*level\(\d\) broadcast\s*\(\[(?P\w+)\]\s*\[(?P\w+)\]\s*(?P\w{4}):(?P\w+) failed order: (?P\w+) (?P\d+) @ (?P[\d.]+), error on update \(\d+ : Some error string. Active Orders=(?P\d+) Limit=(?P\d+)\)\)

(Feel free to suggest any tuning, if you think they need it.)

My question is - I've heard that using the "in" membership operator is substantially faster than using Python regexes. Is this true? What is the technical explanation for this? And what sort of performance characteristics are there between the two? (I couldn't find much in the way of docs for "in", just the brief mention here - http://docs.python.org/2/reference/expressions.html#not-in )

Would I be substantially better off using a list of strings and using "in" against each line, then using a second pass of regex only on the matched lines?

(Log files are compressed, I'm actually using bz2 to read them in; uncompressed size is around 40-50 GB.)

Cheers,
Victor -- http://mail.python.org/mailman/listinfo/python-list
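A minimal sketch of the two-pass idea asked about at the end (the keyword strings and patterns here are made up for illustration, not the real ones):

    import re

    # Hypothetical keyword/pattern pairs: a cheap substring per breach type,
    # plus the full compiled regex used only on candidate lines.
    checks = [
        ('failed order', re.compile(r'some_regex_expression')),
        ('error on update', re.compile(r'some_regex_expression')),
    ]

    with open('blah.log', 'r') as f:
        for line in f:
            for keyword, pattern in checks:
                if keyword in line:                   # fast substring pre-filter
                    results = pattern.search(line)    # expensive regex only when the keyword hit
                    if results:
                        print('matched %s: %s' % (keyword, line.rstrip()))
                    break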
Re: Using regexes versus "in" membership test?
Hi, That was actually *one* regex expression...lol. And yes, it probably is a bit convoluted. Thanks for the tip about using VERBOSE - I'll use that, and comment my regex - that's a useful tip. Are there any other general pointers you might give for that regex? The lines I'm trying to match look something like this: 07:40:05.793627975 [Info ] [SOME_MODULE] [SOME_FUNCTION] [SOME_OTHER_FLAG] [RequestTag=0 ErrorCode=3 ErrorText="some error message" ID=0:0x Foo=1 Bar=5 Joe=5] Essentially, I'd want to strip out the timestamp, logging-level, module, function etc - and possibly the tag-value pairs? And yes, based on what you said, I probably will use the "in" loop first outside the regex - the lines I'm searching for are fairly few compared to the overall log size. Cheers, Victor On Thursday, 13 December 2012 12:09:33 UTC+11, Steven D'Aprano wrote: > On Wed, 12 Dec 2012 14:35:41 -0800, Victor Hooi wrote: > > > > > Hi, > > > > > > I have a script that trawls through log files looking for certain error > > > conditions. These are identified via certain keywords (all different) in > > > those lines > > > > > > I then process those lines using regex groups to extract certain fields. > > [...] > > > Also, my regexs could possibly be tuned, they look something like this: > > > > > > (?P\d{2}:\d{2}:\d{2}.\d{9})\s*\[(?P\w+)\s* > > \]\s*\[(?P\w+)\s*\]\s*\[{0,1}\]{0,1}\s*\[(?P\w+)\s*\] > > \s*level\(\d\) broadcast\s*\(\[(?P\w+)\]\s*\[(?P\w+)\] > > \s*(?P\w{4}):(?P\w+) failed order: (?P\w+) (? > > P\d+) @ (?P[\d.]+), error on update \(\d+ : Some error > > string. Active Orders=(?P\d+) Limit=(?P\d+)\)\) > > > > > > (Feel free to suggest any tuning, if you think they need it). > > > > "Tuning"? I think it needs to be taken out and killed with a stake to the > > heart, then buried in concrete! :-) > > > > An appropriate quote: > > > > Some people, when confronted with a problem, think "I know, > > I'll use regular expressions." Now they have two problems. > > -- Jamie Zawinski > > > > Is this actually meant to be a single regex, or did your email somehow > > mangle multiple regexes into a single line? > > > > At the very least, you should write your regexes using the VERBOSE flag, > > so you can use non-significant whitespace and comments. There is no > > performance cost to using VERBOSE once they are compiled, but a huge > > maintainability benefit. > > > > > > > My question is - I've heard that using the "in" membership operator is > > > substantially faster than using Python regexes. > > > > > > Is this true? What is the technical explanation for this? And what sort > > > of performance characteristics are there between the two? > > > > Yes, it is true. The technical explanation is simple: > > > > * the "in" operator implements simple substring matching, > > which is trivial to perform and fast; > > > > * regexes are an interpreted mini-language which operate via > > a complex state machine that needs to do a lot of work, > > which is complicated to perform and slow. > > > > Python's regex engine is not as finely tuned as (say) Perl's, but even in > > Perl simple substring matching ought to be faster, simply because you are > > doing less work to match a substring than to run a regex. > > > > But the real advantage to using "in" is readability and maintainability. > > > > As for the performance characteristics, you really need to do your own > > testing. 
Performance will depend on what you are searching for, where you > > are searching for it, whether it is found or not, your version of Python, > > your operating system, your hardware. > > > > At some level of complexity, you are better off just using a regex rather > > than implementing your own buggy, complicated expression matcher: for > > some matching tasks, there is no reasonable substitute to regexes. But > > for *simple* uses, you should prefer *simple* code: > > > > [steve@ando ~]$ python -m timeit \ > > > -s "data = 'abcd'*1000 + 'xyz' + 'abcd'*1000" \ > > > "'xyz' in data" > > 10 loops, best of 3: 4.17 usec per loop > > > > [steve@ando ~]$ python -m timeit \ > > > -s "data = 'abcd'*100
Re: Using regexes versus "in" membership test?
Heya, See my original first post =): > Would I be substantially better off using a list of strings and using "in" > against each line, then using a second pass of regex only on the matched > lines? Based on what Steven said, and what I know about the logs in question, it's definitely better to do it that way. However, I'd still like to fix up the regex, or fix any glaring issues with it as well. Cheers, Victor On Thursday, 13 December 2012 17:19:57 UTC+11, Chris Angelico wrote: > On Thu, Dec 13, 2012 at 5:10 PM, Victor Hooi wrote: > > > Are there any other general pointers you might give for that regex? The > > lines I'm trying to match look something like this: > > > > > > 07:40:05.793627975 [Info ] [SOME_MODULE] [SOME_FUNCTION] > > [SOME_OTHER_FLAG] [RequestTag=0 ErrorCode=3 ErrorText="some error message" > > ID=0:0x Foo=1 Bar=5 Joe=5] > > > > > > Essentially, I'd want to strip out the timestamp, logging-level, module, > > function etc - and possibly the tag-value pairs? > > > > If possible, can you do a simple test to find out whether or not you > > want a line and then do more complex parsing to get the info you want > > out of it? For instance, perhaps the presence of the word "ErrorCode" > > is all you need to check - it wouldn't hurt if you have a few percent > > of false positives that get discarded during the parse phase, it'll > > still be quicker to do a single string-in-string check than a complex > > regex to figure out if you even need to process the line at all. > > > > ChrisA -- http://mail.python.org/mailman/listinfo/python-list
Using mktime to convert date to seconds since epoch - omitting elements from the tuple?
Hi, I'm using pysvn to checkout a specific revision based on date - pysvn will only accept a date in terms of seconds since the epoch. I'm attempting to use time.mktime() to convert a date (e.g. "2012-02-01) to seconds since epoch. According to the docs, mktime expects a 9-element tuple. My question is, how should I omit elements from this tuple? And what is the expected behaviour when I do that? For example, (zero-index), element 6 is the day of the week, and element 7 is the day in the year, out of 366 - if I specify the earlier elements, then I shouldn't really need to specify these. However, the docs don't seem to talk much about this. I just tried testing putting garbage numbers for element 6 and 7, whilst specifying the earlier elements: > time.mktime((2012, 5, 5, 23, 59, 59, 23424234, 5234234 ,0 )) It seems to have no effect what numbers I set 6 and 7 to - is that because the earlier elements are set? How should I properly omit them? Is this all documented somewhere? What is the minimum I need to specify? And what happens to the fields I don't specify? Cheers, Victor -- http://mail.python.org/mailman/listinfo/python-list
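A short sketch of what works in practice: fill in the fields you know, pass 0 for the weekday/yearday slots (mktime ignores them and recomputes them itself), and -1 for the DST flag so the C library works it out:

    import time

    # (year, month, day, hour, minute, second, weekday, yearday, isdst)
    # weekday and yearday are ignored by mktime; isdst=-1 means "unknown, figure it out".
    seconds = time.mktime((2012, 2, 1, 0, 0, 0, 0, 0, -1))
    print(seconds)

    # Cross-check by round-tripping the same date through strptime:
    seconds2 = time.mktime(time.strptime('2012-02-01', '%Y-%m-%d'))
    print(seconds == seconds2)   # True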
Re: os.path.realpath(path) bug on win7 ?
It looks like the following issue: http://bugs.python.org/issue14094 Victor Le 6 janv. 2013 07:59, "iMath" <2281570...@qq.com> a écrit : > os.path.realpath(path) bug on win7 ? > > Temp.link is a Symbolic link > Its target location is C:\test\test1 > But > >>> os.path.realpath(r'C:\Users\SAMSUNG\Temp.link\test2') > 'C:\\Users\\SAMSUNG\\Temp.link\\test2' > > I thought the return value should be ' C:\\test\\test1\\test2' > > Is it a bug ? anyone can clear it to me ? > > > -- > http://mail.python.org/mailman/listinfo/python-list > > -- http://mail.python.org/mailman/listinfo/python-list
Searching through two logfiles in parallel?
Hi, I'm trying to compare two logfiles in Python. One logfile will have lines recording the message being sent: 05:00:06 Message sent - Value A: 5.6, Value B: 6.2, Value C: 9.9 the other logfile has line recording the message being received 05:00:09 Message received - Value A: 5.6, Value B: 6.2, Value C: 9.9 The goal is to compare the time stamp between the two - we can safely assume the timestamp on the message being received is later than the timestamp on transmission. If it was a direct line-by-line, I could probably use itertools.izip(), right? However, it's not a direct line-by-line comparison of the two files - the lines I'm looking for are interspersed among other loglines, and the time difference between sending/receiving is quite variable. So the idea is to iterate through the sending logfile - then iterate through the receiving logfile from that timestamp forwards, looking for the matching pair. Obviously I want to minimise the amount of back-forth through the file. Also, there is a chance that certain messages could get lost - so I assume there's a threshold after which I want to give up searching for the matching received message, and then just try to resync to the next sent message. Is there a Pythonic way, or some kind of idiom that I can use to approach this problem? Cheers, Victor -- http://mail.python.org/mailman/listinfo/python-list
Re: Searching through two logfiles in parallel?
Hi Oscar, Thanks for the quick reply =). I'm trying to understand your code properly, and it seems like for each line in logfile1, we loop through all of logfile2? The idea was that it would remember it's position in logfile2 as well - since we can assume that the loglines are in chronological order - we only need to search forwards in logfile2 each time, not from the beginning each time. So for example - logfile1: 05:00:06 Message sent - Value A: 5.6, Value B: 6.2, Value C: 9.9 05:00:08 Message sent - Value A: 3.3, Value B: 4.3, Value C: 2.3 05:00:14 Message sent - Value A: 1.0, Value B: 0.4, Value C: 5.4 logfile2: 05:00:09 Message received - Value A: 5.6, Value B: 6.2, Value C: 9.9 05:00:12 Message received - Value A: 3.3, Value B: 4.3, Value C: 2.3 05:00:15 Message received - Value A: 1.0, Value B: 0.4, Value C: 5.4 The idea is that I'd iterate through logfile 1 - I'd get the 05:00:06 logline - I'd search through logfile2 and find the 05:00:09 logline. Then, back in logline1 I'd find the next logline at 05:00:08. Then in logfile2, instead of searching back from the beginning, I'd start from the next line, which happens to be 5:00:12. In reality, I'd need to handle missing messages in logfile2, but that's the general idea. Does that make sense? (There's also a chance I've misunderstood your buf code, and it does do this - in that case, I apologies - is there any chance you could explain it please?) Cheers, Victor On Tuesday, 8 January 2013 09:58:36 UTC+11, Oscar Benjamin wrote: > On 7 January 2013 22:10, Victor Hooi wrote: > > > Hi, > > > > > > I'm trying to compare two logfiles in Python. > > > > > > One logfile will have lines recording the message being sent: > > > > > > 05:00:06 Message sent - Value A: 5.6, Value B: 6.2, Value C: 9.9 > > > > > > the other logfile has line recording the message being received > > > > > > 05:00:09 Message received - Value A: 5.6, Value B: 6.2, Value C: 9.9 > > > > > > The goal is to compare the time stamp between the two - we can safely > > assume the timestamp on the message being received is later than the > > timestamp on transmission. > > > > > > If it was a direct line-by-line, I could probably use itertools.izip(), > > right? > > > > > > However, it's not a direct line-by-line comparison of the two files - the > > lines I'm looking for are interspersed among other loglines, and the time > > difference between sending/receiving is quite variable. > > > > > > So the idea is to iterate through the sending logfile - then iterate > > through the receiving logfile from that timestamp forwards, looking for the > > matching pair. Obviously I want to minimise the amount of back-forth > > through the file. > > > > > > Also, there is a chance that certain messages could get lost - so I assume > > there's a threshold after which I want to give up searching for the > > matching received message, and then just try to resync to the next sent > > message. > > > > > > Is there a Pythonic way, or some kind of idiom that I can use to approach > > this problem? 
> > > > Assuming that you can impose a maximum time between the send and > > recieve timestamps, something like the following might work > > (untested): > > > > def find_matching(logfile1, logfile2, maxdelta): > > buf = {} > > logfile2 = iter(logfile2) > > for msg1 in logfile1: > > if msg1.key in buf: > > yield msg1, buf.pop(msg1.key) > > continue > > maxtime = msg1.time + maxdelta > > for msg2 in logfile2: > > if msg2.key == msg1.key: > > yield msg1, msg2 > > break > > buf[msg2.key] = msg2 > > if msg2.time > maxtime: > > break > > else: > > yield msg1, 'No match' > > > > > > Oscar -- http://mail.python.org/mailman/listinfo/python-list
// compile python core //
Hi I would like to know how to compile the python core. I am going to remove some modules of it to have a thin python. Where could I find further information about it? I would be grateful for your suggestions Waldemar -- http://mail.python.org/mailman/listinfo/python-list
Embedding a thin python
Hi All, I would like to remove some modules for embedding a thin python. how to do that? I would be grateful for your suggestions -- http://mail.python.org/mailman/listinfo/python-list
Argparse, and linking to methods in Subclasses
Hi,

I have a simple Python script to perform operations on various types of in-house servers:

    manage_servers.py

Operations are things like check, build, deploy, configure, verify etc. Types of server are just different types of in-house servers we use.

We have a generic server class, then specific types that inherit from that:

    class Server
        def configure_logging(self, loggin_file):
            ...
        def check(self):
            ...
        def deploy(self):
            ...
        def configure(self):
            ...
        def __init__(self, hostname):
            self.hostname = hostname
            logging = self.configure_logging(LOG_FILENAME)

    class SpamServer(Server):
        def check(self):
            ...

    class HamServer(Server):
        def deploy(self):
            ...

My question is how to link that all up to argparse?

Originally, I was using argparse subparsers for the operations (check, build, deploy) and another argument for the type:

    subparsers = parser.add_subparsers(help='The operation that you want to run on the server.')
    parser_check = subparsers.add_parser('check', help='Check that the server has been setup correctly.')
    parser_build = subparsers.add_parser('build', help='Download and build a copy of the execution stack.')
    parser_build.add_argument('-r', '--revision', help='SVN revision to build from.')
    ...
    parser.add_argument('type_of_server', action='store', choices=types_of_servers, help='The type of server you wish to create.')

Normally, you'd link each subparser to a method - and then pass in the type_of_server as an argument. However, that's slightly backwards due to the classes - I need to create an instance of the appropriate Server class, then call the operation method inside of that.

Any ideas of how I could achieve the above? Perhaps a different design pattern for Servers? Or a way to use argparse in this situation?

Thanks, Victor -- http://mail.python.org/mailman/listinfo/python-list
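One way to wire this up (a sketch, not from the thread: the hostname positional and the type names 'spam'/'ham' are invented so the example runs end to end): give each subparser a set_defaults(operation=...), keep a dict mapping type names to classes, then look both up after parse_args().

    import argparse

    class Server(object):
        def __init__(self, hostname):
            self.hostname = hostname
        def check(self):
            print('Checking {0}'.format(self.hostname))
        def build(self, revision=None):
            print('Building {0} at revision {1}'.format(self.hostname, revision))

    class SpamServer(Server):
        pass

    class HamServer(Server):
        pass

    types_of_servers = {'spam': SpamServer, 'ham': HamServer}   # hypothetical names

    parser = argparse.ArgumentParser()
    parser.add_argument('type_of_server', choices=sorted(types_of_servers),
                        help='The type of server you wish to create.')
    parser.add_argument('hostname', help='Host to operate on.')

    subparsers = parser.add_subparsers(help='The operation to run on the server.')
    parser_check = subparsers.add_parser('check', help='Check the server setup.')
    parser_check.set_defaults(operation='check')
    parser_build = subparsers.add_parser('build', help='Build the execution stack.')
    parser_build.add_argument('-r', '--revision', help='SVN revision to build from.')
    parser_build.set_defaults(operation='build')

    args = parser.parse_args(['spam', 'host1', 'check'])   # demo args; use parse_args() for real
    server = types_of_servers[args.type_of_server](args.hostname)
    getattr(server, args.operation)()    # -> SpamServer('host1').check()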
Using argparse to call method in various Classes?
Hi,

I'm attempting to use argparse to write a simple script to perform operations on various types of servers:

    manage_servers.py

Operations are things like check, build, deploy, configure, verify etc. Types of server are just different types of inhouse servers we use.

We have a generic server class, and specific types that inherit from that:

    class Server
        def configure_logging(self, loggin_file):
            ...
        def check(self):
            ...
        def deploy(self):
            ...
        def configure(self):
            ...
        def __init__(self, hostname):
            self.hostname = hostname
            logging = self.configure_logging(LOG_FILENAME)

    class SpamServer(Server):
        def check(self):
            ...

    class HamServer(Server):
        def deploy(self):
            ...

My question is how to link that all up to argparse?

Originally, I was using argparse subparsers for the operations (check, build, deploy) and another argument for the type:

    subparsers = parser.add_subparsers(help='The operation that you want to run on the server.')
    parser_check = subparsers.add_parser('check', help='Check that the server has been setup correctly.')
    parser_build = subparsers.add_parser('build', help='Download and build a copy of the execution stack.')
    parser_build.add_argument('-r', '--revision', help='SVN revision to build from.')
    ...
    parser.add_argument('type_of_server', action='store', choices=types_of_servers, help='The type of server you wish to create.')

Normally, you'd link each subparser to a method - and then pass in the type_of_server as an argument. However, that's slightly backwards due to the classes - I need to create an instance of the appropriate Server class, then call the operation method inside of that - not a generic check/build/configure module.

Any ideas of how I could achieve the above? Perhaps a different design pattern for Servers? Or any way to mould argparse to work with this?

Thanks, Victor -- http://mail.python.org/mailman/listinfo/python-list
// about building python //
Hi All, I'd like to embbed a thin python in one application of mine i'm developing so I need to know the module dependencies because I'm going to remove some modules. I also need to know the best way to rebuild the python core once these modules have been removed. So, could you provide me some pointers in the right direction? Thanks in advance... -- http://mail.python.org/mailman/listinfo/python-list
Re: Trying to learn about metaclasses
Hi Steven, I too am just learning about metaclasses in Python and I found the example you posted to be excellent. I played around with it and noticed that the issue seems to be the double-underscore in front of the fields (cls.__fields = {}). If you change this parameter to use the single-underscore, the code works perfectly. I think that because of the double-underscore, the name of the attribute "fields" gets mangled by the interpreter and is not inherited from the parent class in its accessible form. Now, I am not sure if the code posted uses an earlier version of Python where these rule are different or if there is a more correct way to achieve this. I will follow this discussion to see if someone has a better answer. -victor -- http://mail.python.org/mailman/listinfo/python-list
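A tiny demonstration of the mangling being described (nothing metaclass-specific; it applies to any class attribute with two leading underscores):

    class Parent(object):
        __fields = {}        # name-mangled to _Parent__fields

    class Child(Parent):
        def show(self):
            # Inside Child's body this mangles to self._Child__fields,
            # which does not exist, so the lookup fails.
            return self.__fields

    print('_Parent__fields' in vars(Parent))   # True
    try:
        Child().show()
    except AttributeError as exc:
        print(exc)   # ... has no attribute '_Child__fields'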
Can Epydoc be used to document decorated function in Python?
Hello, is there a way to use epydoc to document a Python function that has been decorated? @decorator def func1(): """ My function func1 """ print "In func1" The resulting output of epydoc is that func1 gets listed as a variable with no description. I am using Epydoc v3.0.1. Thanks -victor -- http://mail.python.org/mailman/listinfo/python-list
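One common cause (an assumption here, since the decorator body isn't shown) is a decorator that returns a wrapper function without copying the wrapped function's __name__ and __doc__, which documentation tools generally rely on; functools.wraps is the usual fix for that part of the problem:

    import functools

    def decorator(func):
        @functools.wraps(func)           # copies __name__, __doc__, etc. onto the wrapper
        def wrapper(*args, **kwargs):
            return func(*args, **kwargs)
        return wrapper

    @decorator
    def func1():
        """My function func1"""
        print("In func1")

    print(func1.__name__)   # func1
    print(func1.__doc__)    # My function func1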
Communicating from Flash to Python
Hi; I have an AS3 script that is supposed to communicate with a python script and I don't think it is. The python script is to email. How can I trouble-shoot this? Beno -- http://mail.python.org/mailman/listinfo/python-list
Re: Communicating from Flash to Python
On Fri, Mar 4, 2011 at 2:57 AM, Godson Gera wrote: > You can use PyAMF http://pyamf.org > Thanks! Beno -- http://mail.python.org/mailman/listinfo/python-list
Absolutely Insane Problem with Gmail
Hi;
I have this code:

    #!/usr/bin/python
    import sys, os, string
    import cgitb; cgitb.enable()
    import cgi
    cwd = os.getcwd()
    dirs = string.split(cwd, '/')
    dirs = dirs[1:-1]
    backLevel = '/' + string.join(dirs, '/')
    sys.path.append(cwd)
    sys.path.append(backLevel)
    import string
    form = cgi.FieldStorage()
    # ... all the fields here
    subject = 'Order For Maya 2012'
    msg = 'First Name: %s\nLast Name: %s\nEmail Address: %s\nAddress2: %s, City: %s\nState: %s\nCountry: %s\nZip: %s\nPhone: %s\nFax: %s\nMessage: %s\n' % (firstNameText, lastNameText, emailText, addrText, addr2Text, cityText, stateText, countryText, zipText, faxText, messageText)
    ### LOOK AT THESE TWO LINES
    ourEmail = 'myemaila...@gmail.com'
    ourEmail = 'q...@xxx.com'

    def my_mail():
        emailOne()
        emailTwo()

    def emailOne():
        from simplemail import Email
        Email(
            from_address = ourEmail,
            to_address = emailText,
            subject = 'Thank you for your order!',
            message = msg
        ).send()

    def emailTwo():
        from simplemail import Email
        Email(
    #       from_address = emailText,
    #       to_address = ourEmail,
            from_address = ourEmail,
            to_address = emailText,
            subject = 'Order for Maya 2012',
            message = msg
        ).send()

    print '''Content-type: text/html
    '''
    my_mail()
    print '''
    '''

Now what's absolutely crazy about this is that if I use my online form and enter my gmail address I get the email confirmations. However, if I get rid of that garbage value for ourEmail and use the other one, which is the _very_same_gmail_address, I get nothing!! No email. Ditto if I uncomment those lines in emailTwo and delete the next two lines. What on earth could be doing this??? TIA, Beno -- http://mail.python.org/mailman/listinfo/python-list
Re: Absolutely Insane Problem with Gmail
On Sat, Mar 5, 2011 at 11:11 PM, Littlefield, Tyler wrote: > >ourEmail = ' > myemaila...@gmail.com' > > >ourEmail = ' > q...@xxx.com' > > You redefine this twice. > Right. The second definition, of course, overwrites the first. That is deliberate. I simply comment out the second when I'm testing. The second is, of course, bogus. But it works while the first doesn't!!! WHY??? > You also don't define a variable down lower. > ># to_address = ourEmail, > > > from_address = ourEmail, > > to_address = emailText, > I could be wrong, but emailText isn't defined. > No, in fact, emailText *is* defined. And it, too, works, *unless* it's going to a gmail address!! In fact, I just now tested it, commenting out the second bogus email address, and using another gmail address but different than the one defined as ourEmail, and everything works as expected. Therefore, it appears that gmail, for whatever reason, filters out emails send to the same address from which they are sent. Thanks, Beno -- http://mail.python.org/mailman/listinfo/python-list
How Translate This PHP
Hi; How do I translate this PHP code? if($ok){ echo "returnValue=1"; }else{ echo "returnValue=0"; } In other words, when the email successfully sends, send back both the name of the variable and its value. TIA, Beno -- http://mail.python.org/mailman/listinfo/python-list
Re: How Translate This PHP
On Sun, Mar 6, 2011 at 10:53 AM, Noah Hall wrote: > On Sun, Mar 6, 2011 at 2:45 PM, Victor Subervi > wrote: > > Hi; > > How do I translate this PHP code? > > > > if($ok){ > > echo "returnValue=1"; > > }else{ > > echo "returnValue=0"; > > } > > From the code provided - > > if ok: >print 'returnValue=1' > else: >print 'returnValue=0' > Ah. I thought I had to "return" something! Thanks, Beno -- http://mail.python.org/mailman/listinfo/python-list
changing to function what works like a function
Hi everyone, I understand that the goal of Python is to make programming easy (and, of course, powerful at the same time). I think one way to do that is to eliminate unnecessary syntax exceptions. One is the following: for a complex number "z", to get the real and imaginary parts, you type: "z.real" and "z.imag". At the same time, the most obvious way would be to call it like a function, say: "real(z)" and, respectively, "imag(z)" - just like it was changed from " print 'something' " to " print('something') ". What do you think? There are more examples like this. -- http://mail.python.org/mailman/listinfo/python-list
strange behaviour with while-else statement
Hi and please help me understand if it is a bug, or..,as someone said, there's a 'bug' in my understanding: (Python 3.2 (r32:88445, Feb 20 2011, 21:29:02) [MSC v.1500 32 bit (Intel)] on win32) (windows vista, the regular windows python installer) It's about the following code: while True: s = input('Enter something : ') if s == 'quit': break print('Length of the string is', len(s)) print('Done') when I put it in a editor, and run it from there ("Run module F5"), it runs fine; but when I try to type it in the python shell (IDLE), or in the python command line, it gives errors, though I tried to edit it differently: >>> while True: s = input('Enter something : ') if s == 'quit': break print('Length of the string is', len(s)) print('Done') SyntaxError: invalid syntax >>> while True: s = input('Enter something : ') if s == 'quit': break print('Length of the string is', len(s)) print('Done') SyntaxError: unindent does not match any outer indentation level >>> The only way I noticed it would accept is to set the last "print" statement directly under/in the block of "while": which is not the intention. (According to the docs, while statement should work without the "else" option). Is this a bug? Thanks, Victor p.s. the example is taken from http://swaroopch.com/notes/Python_en:Control_Flow -- http://mail.python.org/mailman/listinfo/python-list
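The interactive prompt compiles one top-level statement at a time, so the final print('Done') cannot be typed as part of the same entry as the while block; one way to enter it that does work is to close the loop with a blank line first and then type the last statement separately (shown here with the standard console prompts; IDLE behaves the same way but without the ... continuation markers):

    >>> while True:
    ...     s = input('Enter something : ')
    ...     if s == 'quit':
    ...         break
    ...     print('Length of the string is', len(s))
    ...
    Enter something : quit
    >>> print('Done')
    Done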
Re: changing to function what works like a function
Well, thank you all for being honest ☺ What I conclude is that you, the programmers, don’t really care about those who are new to programming: for most people out of the programming world, I think it is simpler to be able to write: real(z), just as you write: sin(z), abs(z), (z)^2 etc. -- http://mail.python.org/mailman/listinfo/python-list
Re: How Translate This PHP
On Sun, Mar 6, 2011 at 12:00 PM, Noah Hall wrote: > On Sun, Mar 6, 2011 at 3:11 PM, Victor Subervi > wrote: > > Ah. I thought I had to "return" something! > > Well, based on what you asked, you would've, but based on the code, > all it was doing is printing "returnValue - value" > > Of course, a better way of doing it would be to use formatting - > > For example, > > print 'returnValue=%d' % ok > Thanks. Beno -- http://mail.python.org/mailman/listinfo/python-list
Re: Don't Want Visitor To See Nuttin'
On Wed, Mar 9, 2011 at 5:33 PM, Ian wrote: > On 09/03/2011 21:01, Victor Subervi wrote: > >> The problem is that it prints "Content-Type: text/html" to the screen >> > If you can see what is intended to be a header, then it follows that you > are not sending the header correctly. > > Sorry - can't tell you how to send a header. You don't say what framework > you are using. > Framework? Python on CentOS, if that's what you're asking. From what I know of python, one always begins a web page with something like this: print "Content-Type: text/html" print print ''' http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd";> and this has worked in the past, so I'm surprised it doesn't work here. Don't understand what I've done wrong, nor why it prints the first line to screen. TIA, Beno -- http://mail.python.org/mailman/listinfo/python-list
Re: Don't Want Visitor To See Nuttin'
On Thu, Mar 10, 2011 at 8:50 PM, Benjamin Kaplan wrote: > > print "Content-Type: text/html" > > print > > print ''' > > > "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd";> > > > > > > > > and this has worked in the past, so I'm surprised it doesn't work here. > > Don't understand what I've done wrong, nor why it prints the first line > to > > screen. > > TIA, > > Beno > > > > Typically, people developing web applications use a framework such as > Django or TurboGears (or web.py or CherryPy or any of a dozen others) > rather than just having the CGI scripts print stuff out. Rather than > having your Python script just print out a page, you make a template > and then have a templating engine fill in the blanks with the values > you provide. They'll also protect you from things like Injection > attacks and cross-site scripting (if you don't know what those are, > you're probably vulnerable to them). > > ok. I'm looking into Django. I'm ok for injections and I think most of my data is sanitized. Now, can someone please address my question? See above. TIA, Beno -- http://mail.python.org/mailman/listinfo/python-list
Re: Don't Want Visitor To See Nuttin'
On Fri, Mar 11, 2011 at 3:54 AM, Ian Kelly wrote: > On Wed, Mar 9, 2011 at 2:01 PM, Victor Subervi > wrote: > > Maya 2012: Transform At the Source > > Yow. You're designing a Maya 2012 website to help some travel company > bilk gullible people out of thousands of dollars? I would be ashamed > to have anything to do with this. > Um...just for the record, these guys have ben featured on the FRONT PAGES OF: The Wall Street Journal The Los Angeles Times The San Francisco Chronicle and have appeared on: Eye-To-Eye with Connie Chung CNN's Travel Guide and National Geographic's Travel Magazine called them "the graddaddy of metaphysical tours." If you'll go to the "About Us" page you'll see their photo with the Dalai Lama. They're ligit :) Beno -- http://mail.python.org/mailman/listinfo/python-list
Re: Don't Want Visitor To See Nuttin'
On Fri, Mar 11, 2011 at 4:26 AM, Dennis Lee Bieber wrote: > On Thu, 10 Mar 2011 18:00:10 -0800 (PST), alex23 > declaimed the following in gmane.comp.python.general: > > > He's comp.lang.python's version of Sisyphus. Or maybe Sisyphus' > > boulder...I forget where I was going with this. > > The boulder -- given that we are the ones suffering... > OK, fine, don't respond. The page works. I'm changing names and email addresses. CU as someone else. Bye, Beno -- http://mail.python.org/mailman/listinfo/python-list
I don't understand generator.send()
    #! /usr/bin/env python

    def ints():
        i=0
        while True:
            yield i
            i += 1

    gen = ints()
    while True:
        i = gen.next()
        print i
        if i==5:
            r = gen.send(2)
            print "return:",r
        if i>10:
            break

I thought the send call would push the value "2" at the front of the queue. Instead it coughs up the 2, which seems senseless to me.

1/ How should I view the send call? I'm reading the manual and don't get it.

2/ Is there a way to push something in the generator object? So that it becomes the next yield expression? In my code I was hoping to get 0,1,2,3,4,5,2,6,7 as yield expressions.

Victor. -- Victor Eijkhout -- eijkhout at tacc utexas edu -- http://mail.python.org/mailman/listinfo/python-list
Re: I don't understand generator.send()
Chris Angelico wrote: > For what you're doing, there's a little complexity. If I understand, > you want send() to be like an ungetc call... you could do that like > this: > > > def ints(): >i=0 >while True: >sent=(yield i) >if sent is not None: > yield None # This becomes the return value from gen.send() > yield sent # This is the next value yielded >i += 1 I think this will serve my purposes. Thanks everyone for broadening my understanding of generators. Victor. -- Victor Eijkhout -- eijkhout at tacc utexas edu -- http://mail.python.org/mailman/listinfo/python-list
Re: English Idiom in Unix: Directory Recursively
Harrison Hill wrote: > No need - I have the Dictionary definition of recursion here: > > Recursion: (N). See recursion. If you tell a joke, you have to tell it right. Recursion: (N). See recursion. See also tail recursion. Victor. -- Victor Eijkhout -- eijkhout at tacc utexas edu -- http://mail.python.org/mailman/listinfo/python-list
Package containing C sources
I am going to create a Python wrapper around a generally useful C library. So the wrapper needs to contain some C code to glue them together. Can I upload a package containing C sources to PyPi? If not, what is the proper way to distribute it? -- Victor Porton - http://portonvictor.org -- https://mail.python.org/mailman/listinfo/python-list
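Yes - source distributions uploaded to PyPI can include C sources, which get compiled when the package is installed. A minimal sketch of a setup.py that ships the pure-Python part and the C glue in one package (all names here are placeholders):

    from setuptools import setup, Extension

    setup(
        name='mylib-wrapper',                 # placeholder project name
        version='0.1',
        packages=['mylib_wrapper'],           # the pure-Python part
        ext_modules=[
            Extension(
                'mylib_wrapper._glue',        # C glue module, built at install time
                sources=['src/glue.c'],
                libraries=['mylib'],          # link against the existing C library
            ),
        ],
    )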
Re: Package containing C sources (Posting On Python-List Prohibited)
Lawrence D’Oliveiro wrote: > On Wednesday, January 31, 2018 at 6:13:00 PM UTC+13, Victor Porton wrote: >> I am going to create a Python wrapper around a generally useful C >> library. So the wrapper needs to contain some C code to glue them >> together. > > Not necessarily. It’s often possible to implement such a wrapper entirely > in Python, using ctypes <https://docs.python.org/3/library/ctypes.html>. But if I will find that I need C code, do I need to package it separately? So I would get three packages: the C library, the C wrapper for Python, and the Python code. Can this be done with just two packages: the C library and C wrapper and Python in one package? -- Victor Porton - http://portonvictor.org -- https://mail.python.org/mailman/listinfo/python-list
Handle SIGINT in C and Python
I need to assign a real C signal handler to SIGINT. This handler may be called during poll() waiting for data. For this reason I cannot use Python signals because "A Python signal handler does not get executed inside the low-level (C) signal handler. Instead, the low-level signal handler sets a flag which tells the virtual machine to execute the corresponding Python signal handler at a later point(for example at the next bytecode instruction)." I want after my signal handler for SIGINT executed to raise KeyboardInterrupt (as if I didn't installed my own signal handler). Is this possible? How? -- Victor Porton - http://portonvictor.org -- https://mail.python.org/mailman/listinfo/python-list
Re: Handle SIGINT in C and Python
Victor Porton wrote: > I need to assign a real C signal handler to SIGINT. > > This handler may be called during poll() waiting for data. For this reason > I cannot use Python signals because "A Python signal handler does not get > executed inside the low-level (C) signal handler. Instead, the low-level > signal handler sets a flag which tells the virtual machine to execute the > corresponding Python signal handler at a later point(for example at the > next bytecode instruction)." > > I want after my signal handler for SIGINT executed to raise > KeyboardInterrupt (as if I didn't installed my own signal handler). > > Is this possible? How? I've found a solution: I can use PyOS_setsig() from Python implementation to get the old (Python default) OS-level signal handler, while I assign the new one. -- Victor Porton - http://portonvictor.org -- https://mail.python.org/mailman/listinfo/python-list
Re: Handle SIGINT in C and Python (Posting On Python-List Prohibited)
Lawrence D’Oliveiro wrote: > On Wednesday, January 31, 2018 at 8:58:18 PM UTC+13, Victor Porton wrote: >> For this reason I >> cannot use Python signals because "A Python signal handler does not get >> executed inside the low-level (C) signal handler. Instead, the low-level >> signal handler sets a flag which tells the virtual machine to execute the >> corresponding Python signal handler at a later point(for example at the >> next bytecode instruction)." > > Why is that a problem? As I already said, I need to handle the signal (SIGCHLD) while poll() waits for i/o. You can read the sources (and the file HACKING) of the C library which I am wrapping into Python here: https://github.com/vporton/libcomcom -- Victor Porton - http://portonvictor.org -- https://mail.python.org/mailman/listinfo/python-list
Help to debug my free library
LibComCom is a C library which passes a string as stdin of an OS command and stores its stdout in another string. I wrote this library recently: https://github.com/vporton/libcomcom Complete docs are available at https://vporton.github.io/libcomcom-docs/ Now I am trying to make Python bindings to the library: https://github.com/vporton/libcomcom-python I use CFFI (API level). However testing my Python bindings segfaults in an unknown reason. Please help to debug the following: $ LD_LIBRARY_PATH=.:/usr/local/lib:/usr/lib:/lib \ PYTHONPATH=build/lib.linux-x86_64-2.7/ python test2.py Traceback (most recent call last): Segmentation fault (here libcomcom.so is installed in /usr/local/lib) -- Victor Porton - http://portonvictor.org -- https://mail.python.org/mailman/listinfo/python-list
Re: Handle SIGINT in C and Python (Posting On Python-List Prohibited)
Lawrence D’Oliveiro wrote: > On Wednesday, January 31, 2018 at 9:55:45 PM UTC+13, Victor Porton wrote: >> Lawrence D’Oliveiro wrote: >> >>> On Wednesday, January 31, 2018 at 8:58:18 PM UTC+13, Victor Porton >>> wrote: >>>> For this reason I >>>> cannot use Python signals because "A Python signal handler does not get >>>> executed inside the low-level (C) signal handler. Instead, the >>>> low-level signal handler sets a flag which tells the virtual machine to >>>> execute the corresponding Python signal handler at a later point(for >>>> example at the next bytecode instruction)." >>> >>> Why is that a problem? >> >> As I already said, I need to handle the signal (SIGCHLD) while poll() >> waits for i/o. > > The usual behaviour for POSIX is that the call is aborted with EINTR after > you get the signal. That poll() is interrupted does not imply that Python will run its pythonic signal handler at the point of interruption. That is a problem. -- Victor Porton - http://portonvictor.org -- https://mail.python.org/mailman/listinfo/python-list
Re: Help to debug my free library
Chris Angelico wrote: > On Thu, Feb 1, 2018 at 5:58 AM, Victor Porton wrote: >> LibComCom is a C library which passes a string as stdin of an OS command >> and stores its stdout in another string. > > Something like the built-in subprocess module does? I was going to write: "It seems that subprocess module can cause deadlocks. For example, if it first writes a long string to "cat" command input (going to read cat's stdout later), then "cat" would block because its output is not read while writing input." But after reading the docs it seems that Popen.communicate() does the job. Well, I first wrote in Java. For Java there was no ready solution. Later I switched to Python and haven't checked the standard libraries. So, please help me to make sure if Popen.communicate() is a solution for my problem (namely that it does not deadlock, as in "cat" example above). I am interested in both Python 2.7 and 3.x. > ChrisA -- Victor Porton - http://portonvictor.org -- https://mail.python.org/mailman/listinfo/python-list
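A quick way to convince yourself about the deadlock question is the "cat" test itself: communicate() feeds stdin and drains stdout concurrently, so even a payload far larger than any pipe buffer round-trips. A sketch that works on both 2.7 and 3.x (Unix, since it runs the external cat command):

    import subprocess

    payload = b'x' * (10 * 1024 * 1024)    # 10 MB, much larger than a pipe buffer

    p = subprocess.Popen(['cat'],
                         stdin=subprocess.PIPE,
                         stdout=subprocess.PIPE)
    out, err = p.communicate(payload)       # no deadlock: write and read happen concurrently
    print(len(out) == len(payload))         # True
    print(p.returncode)                     # 0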
Re: Help to debug my free library
wxjmfa...@gmail.com wrote: > Le mercredi 31 janvier 2018 20:13:06 UTC+1, Chris Angelico a écrit : >> On Thu, Feb 1, 2018 at 5:58 AM, Victor Porton wrote: >> > LibComCom is a C library which passes a string as stdin of an OS >> > command and stores its stdout in another string. >> >> Something like the built-in subprocess module does? >> >> ChrisA > > Do you mean the buggy subprocess module (coding of characters) ? Please elaborate: which bugs it has? in which versions? > Yes, there is a working workaround : QtCore.QProcess(). -- Victor Porton - http://portonvictor.org -- https://mail.python.org/mailman/listinfo/python-list
Re: Help to debug my free library
Dennis Lee Bieber wrote: > On Wed, 31 Jan 2018 20:58:56 +0200, Victor Porton > declaimed the following: > >>LibComCom is a C library which passes a string as stdin of an OS command >>and stores its stdout in another string. >> >>I wrote this library recently: >>https://github.com/vporton/libcomcom >> >>Complete docs are available at >>https://vporton.github.io/libcomcom-docs/ >> >>Now I am trying to make Python bindings to the library: >>https://github.com/vporton/libcomcom-python >> > > Debug -- no help, I'm not a fluent Linux programmer... > > But based upon the description of this library, I would have to ask: > > "What does this library provide that isn't already in the Python standard > library?" > > "Why would I want to use this library instead of, say > subprocess.Popen().communicate()?" (or the older Popen* family) > > {Though .communicate() is a one-shot call -- sends one packet to > subprocess' stdin, reads to EOF, and waits for subprocess to terminate. If > one needs interactive control, one needs explicit write/read calls, > although those can deadlock if the caller and subprocess aren't written > for such interaction} If it "sends one packet", this would lead to a deadlock (even for "cat" command). Hopefully, you are wrong here. I found that Popen.communicate() seems to solve my problem. (Previously I programmed in Java and found no native solution. For this reason I created my LibComCom.) -- Victor Porton - http://portonvictor.org -- https://mail.python.org/mailman/listinfo/python-list
Re: Handle SIGINT in C and Python (Posting On Python-List Prohibited)
Lawrence D’Oliveiro wrote: > On Thursday, February 1, 2018 at 8:10:24 AM UTC+13, Victor Porton wrote: >> Lawrence D’Oliveiro wrote: >> >>> The usual behaviour for POSIX is that the call is aborted with EINTR >>> after you get the signal. >> >> That poll() is interrupted does not imply that Python will run its >> pythonic signal handler at the point of interruption. That is a problem. > > * Python calls poll() > * poll() aborted with EINTR > * Python runs your signal handler > > Versus native C code: > > * your code calls poll() > * poll() aborted with EINTR > * your signal handler is run > > Where is there a fundamental difference? I meant to call poll() from C code, not Python code. In this case when poll() is aborted with EINTR, the pythonic signal handler does not run. -- Victor Porton - http://portonvictor.org -- https://mail.python.org/mailman/listinfo/python-list
Re: Handle SIGINT in C and Python (Posting On Python-List Prohibited)
Lawrence D’Oliveiro wrote: > On Thursday, February 1, 2018 at 5:57:58 PM UTC+13, Victor Porton wrote: >> I meant to call poll() from C code, not Python code. > > Do you need to use C code at all? Python is quite capable of handling this > <https://docs.python.org/3/library/select.html>. I already concluded that I can use Popen.communicate() instead of my library. So the issue is closed. -- Victor Porton - http://portonvictor.org -- https://mail.python.org/mailman/listinfo/python-list
Dependency injection: overriding defaults
I am writing a library, a command line utility which uses the library, and a daemon which uses the library. I am going to use dependency_injector package. Consider loggers: For the core library the logger should default to stderr. For the command line utility, we use the default logger of the library. For the server, the log should go to a file (not to stderr). Question: How to profoundly make my software to use the appropriate logger, dependently on whether it is a command line utility or the daemon? -- Victor Porton - http://portonvictor.org -- https://mail.python.org/mailman/listinfo/python-list
Object-oriented gettext
I want to write a multiuser application which uses multiple languages (one language for logging and a language per user). https://docs.python.org/3/library/gettext.html describes a procedural gettext interface. The language needs to be switched before each gettext() call. I want an object oriented interface like: english.gettext("Word") == "Word" russian.gettext("Word") == "Слово" That is, I do no want to write any language-switching code, but the language should depend on the object (like "english" and "russian" in the above example). What is the best way to do this? Should I write an object-oriented wrapper around gettext package? -- Victor Porton - http://portonvictor.org -- https://mail.python.org/mailman/listinfo/python-list
Advice on where to define dependency injection providers
I define ExecutionContext in xmlboiler.core.execution_context module. ExecutionContext is meant to contain a common "environment" suitable for different kinds of tasks. Currently ExecutionContext contains a logger and a translator of messages: class ExecutionContext(object): def __init__(self, logger, translations): """ :param logger: logger :param translations: usually should be gettext.GNUTranslations """ self.logger = logger self.translations = translations Now I want to define some "provides" using dependency_injector.providers module. Where (in which module) should I define default factories for loggers, translations, and execution contexts? (Default logger should log to stderr, default translations should respect LANG environment variable.) The only quite clear thing is that providers should be defined somewhere in xmlboiler.core.* namespace, because it is the namespace for the core library (to be used by several applications). Should I define providers in xmlboiler.core.execution_context module or in something like xmlboiler.core.execution_context_build, or maybe in something like xmlboiler.core.providers.execution_context? -- Victor Porton - http://portonvictor.org -- https://mail.python.org/mailman/listinfo/python-list
Re: Object-oriented gettext
Victor Porton wrote: > I want to write a multiuser application which uses multiple languages (one > language for logging and a language per user). > > https://docs.python.org/3/library/gettext.html describes a procedural > gettext interface. The language needs to be switched before each gettext() > call. > > I want an object oriented interface like: > > english.gettext("Word") == "Word" > russian.gettext("Word") == "Слово" > > That is, I do no want to write any language-switching code, but the > language should depend on the object (like "english" and "russian" in the > above example). > > What is the best way to do this? > > Should I write an object-oriented wrapper around gettext package? Oh, I see that gettext.translation() seems to do the job. -- Victor Porton - http://portonvictor.org -- https://mail.python.org/mailman/listinfo/python-list
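For the record, a sketch of the object-per-language usage (the domain name and locale directory are placeholders, and .mo files must exist under them for real translations; fallback=True keeps the sketch runnable without them):

    import gettext

    # Each call returns an independent translation object; no global switching needed.
    english = gettext.translation('myapp', localedir='locale',
                                  languages=['en'], fallback=True)
    russian = gettext.translation('myapp', localedir='locale',
                                  languages=['ru'], fallback=True)

    print(english.gettext('Word'))
    print(russian.gettext('Word'))   # translated if locale/ru/LC_MESSAGES/myapp.mo exists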
Re: Dependency injection: overriding defaults
dieter wrote:

> Victor Porton writes:
>
>> I am writing a library, a command line utility which uses the library,
>> and a daemon which uses the library. I am going to use the
>> dependency_injector package.
>>
>> Consider loggers:
>>
>> For the core library the logger should default to stderr.
>>
>> For the command line utility, we use the default logger of the library.
>>
>> For the server, the log should go to a file (not to stderr).
>>
>> Question: How do I properly make my software use the appropriate
>> logger, depending on whether it is a command line utility or the
>> daemon?
>
> I would distinguish between the common library and distinct
> applications (command line utility, daemon). The applications
> configure the logging system (differently) while the library uses
> uniform logging calls.

You have essentially just repeated my requirements. But HOW to do this (using the dependency_injector module)?

-- Victor Porton - http://portonvictor.org
-- https://mail.python.org/mailman/listinfo/python-list
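One way this is usually expressed with dependency_injector is to declare the library's default as a provider and let each application override it at startup; a rough sketch, with purely illustrative logger and file names:

    import logging
    import sys

    from dependency_injector import providers

    # In the library: the default provider builds a stderr logger.
    def _stderr_logger():
        logger = logging.getLogger('mylib')
        logger.addHandler(logging.StreamHandler(sys.stderr))
        return logger

    logger_provider = providers.Singleton(_stderr_logger)

    # In the daemon's startup code: replace the default with a file logger.
    def _file_logger():
        logger = logging.getLogger('mylib')
        logger.addHandler(logging.FileHandler('daemon.log'))
        return logger

    logger_provider.override(providers.Singleton(_file_logger))

The command line utility simply never calls override(), so it keeps the library default.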
$srcdir and $datadir
In GNU software written in C, $srcdir and $datadir are accessible to C code through the generated config.h file. What is the right way to configure such directories for a Python program?

-- Victor Porton - http://portonvictor.org
-- https://mail.python.org/mailman/listinfo/python-list
Re: $srcdir and $datadir
First, I've already solved my problem using setuptools, pkg_resources.resource_stream(), and an environment variable to specify the path to data files.

Ben Finney wrote:

> Victor Porton writes:
>
>> In GNU software written in C $srcdir and $datadir are accessible to C
>> code through the generated config.h file.
>
> For what purpose?

I want my program to work both when it is installed (using $datadir) and when it is not yet installed (using $srcdir).

> Given that the source may not be at that location after the program is
> compiled – especially, after the program is moved to a different machine
> – what meaning does ‘$srcdir’ have when the program is running?

On a GNU system there is support for storing a value (such as the source directory) in the program itself. If it is moved to a different machine, $srcdir remains the same.

> What “data directory” is specified by ‘$datadir’, and why is it assumed
> there is exactly one?

It may be exactly one, but it can contain subdirectories.

>> What is the right way to configure directories for a Python program?
>
> We'll need to know what those concepts mean, to be able to discuss the
> equivalent (if any) in a Python environment.

-- Victor Porton - http://portonvictor.org
-- https://mail.python.org/mailman/listinfo/python-list
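A sketch of the pkg_resources route, with a made-up package name, resource path and environment variable; it works both from an installed package and from a source checkout because the path is resolved relative to the package itself:

    import os
    import pkg_resources

    # Read a data file shipped inside the package.
    with pkg_resources.resource_stream('mypackage', 'data/schema.xml') as f:
        data = f.read()

    # Optional override, e.g. for running against not-yet-installed data files.
    data_dir = os.environ.get('MYPACKAGE_DATA_DIR')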
What is more Pythonic: subclass or adding functionality to base class?
The following is a real code fragment:

    from abc import ABCMeta, abstractmethod

    class PredicateParser(object, metaclass=ABCMeta):
        """Parses a given predicate (which may participate in several
        relationships) of a given RDF node."""
        def __init__(self, predicate):
            self.predicate = predicate

        @abstractmethod
        def parse(self, parse_context, graph, node):
            pass

Now I need to add a new field on_error (taking one of three enumerated values) to some objects of this class. What is more Pythonic?

1. Create a subclass PredicateParserWithError and add the additional field on_error to this class.

2. Add the on_error field to the base class, setting it to None by default if the class's user does not need this field.

-- Victor Porton - http://portonvictor.org
-- https://mail.python.org/mailman/listinfo/python-list
Found a problem
Hello, Python community! My name is Victor Dib, and I am a Brazilian programming student. I contacted you earlier today, and you asked me to subscribe to your mailing list first. That is now done, so I hope you can look into my case.

I am a beginner-to-intermediate user of the Python language, but I recently ran into a problem while using it. While writing a function that finds perfect numbers, I noticed that the language did not seem to be appending new integers to a list correctly. I will forward images to demonstrate my problem. I had trouble with version 3.9.2 of the language, but I also used version 3.8.7 (since it is marked as stable on the language's website) and had the same trouble. I believe there is no problem in my logic, so I would like you to take a look and make sure it is not a problem in the language. I would hate for a language as amazing as Python not to work as it should. So please, check my case! Thank you!

The program I wrote to find perfect numbers:

    def num_perf_inf(n):
        divisors = []
        perfects = []
        limit = n - 1
        for i in range(1, limit):
            dividend = i + 1
            for j in range(i):
                divisor = j + 1
                if dividend % divisor == 0:
                    divisors.append(divisor)
            print(f'Divisors of {i + 1}: {divisors}')
            divisors.pop()
            if sum(divisors) == dividend:
                perfects.append(i)
            divisors.clear()
        print(perfects)

    num_perf_inf(28)

And the result of running this code, on Python 3.9.2 and on Python 3.8.7 (as mentioned, I tested both versions of the language):

    Divisors of 2: [1]
    Divisors of 3: [1]
    Divisors of 4: [1, 2]
    Divisors of 5: [1]
    Divisors of 6: [1, 2, 3]
    Divisors of 7: [1]
    Divisors of 8: [1, 2, 4]
    Divisors of 9: [1, 3]
    Divisors of 10: [1, 2, 5]
    Divisors of 11: [1]
    Divisors of 12: [1, 2, 3, 4, 6]
    Divisors of 13: [1]
    Divisors of 14: [1, 2, 7]
    Divisors of 15: [1, 3, 5]
    Divisors of 16: [1, 2, 4, 8]
    Divisors of 17: [1]
    Divisors of 18: [1, 2, 3, 6, 9]
    Divisors of 19: [1]
    Divisors of 20: [1, 2, 4, 5, 10]
    Divisors of 21: [1, 3, 7]
    Divisors of 22: [1, 2, 11]
    Divisors of 23: [1]
    Divisors of 24: [1, 2, 3, 4, 6, 8, 12]
    Divisors of 25: [1, 5]
    Divisors of 26: [1, 2, 13]
    Divisors of 27: [1, 3, 9]
    [23]

-- https://mail.python.org/mailman/listinfo/python-list
Two variants of class hierarchy
I am developing software which shows hierarchical information (a tree), including issues and comments from BitBucket (comments are sub-nodes of issues, thus it forms a tree).

There are two kinds of objects in the hierarchy: a. with a (possibly long) paginated list of childs; b. with a short list of strings, each string being associated with a child object.

I have two variants of class inheritance in mind. Please help me decide which is better.

The first one declares only one base class, but some of its methods remain unimplemented (raise NotImplementedError) even in derived classes. The second one defines two distinct base classes, HierarchyLevelWithPagination (for objects of the above described kind "a") and HierarchyLevelWithShortList (for objects of the above described kind "b"), but uses multiple inheritance.

    # VARIANT 1:

    class HierarchyLevel(object):
        def get_node(self):
            return None
        def childs(self, url, page, per_page=None):
            raise NotImplementedError()
        def short_childs(self):
            raise NotImplementedError()

    class BitBucketHierarchyLevel(HierarchyLevel):
        ...

    # A implements only childs() but not short_childs()
    class A(BitBucketHierarchyLevel):
        ...

    # B implements only short_childs() but not childs()
    class B(BitBucketHierarchyLevel):
        ...

    ## OR ##

    # VARIANT 2:

    class HierarchyLevel(object):
        def get_node(self):
            return None

    class HierarchyLevelWithPagination(HierarchyLevel):
        def childs(self, url, page, per_page=None):
            raise NotImplementedError()

    class HierarchyLevelWithShortList(HierarchyLevel):
        def short_childs(self):
            raise NotImplementedError()

    ## THEN ##

    # code specific for BitBucket
    class BitBucketHierarchyLevel(HierarchyLevel):
        ...

    # diamonds:
    class A(BitBucketHierarchyLevel, HierarchyLevelWithPagination):
        ...

    class B(BitBucketHierarchyLevel, HierarchyLevelWithShortList):
        ...

-- Victor Porton - http://portonvictor.org
-- https://mail.python.org/mailman/listinfo/python-list
C3 MRO
Do I understand correctly that C3 applies to particular methods, and thus does not fail if it works for every defined method, even if it could fail after the addition of a new method?

Also, at which point does it fail: at the definition of a class, or at the call of a particular "wrong" method?

-- Victor Porton - http://portonvictor.org
-- https://mail.python.org/mailman/listinfo/python-list
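One thing that is easy to check empirically: CPython computes the MRO for the class as a whole when the class statement executes, so an inconsistent hierarchy fails at class definition time, before any method is ever looked up. A minimal example:

    class A:
        pass

    class B(A):
        pass

    # This raises TypeError ("Cannot create a consistent method resolution
    # order (MRO) ...") immediately, even though no method is defined or called.
    class C(A, B):
        pass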
Re: Two variants of class hierarchy
Peter Otten wrote: > Victor Porton wrote: > >> I am developing software which shows hierarchical information (tree), >> including issues and comments from BitBucket (comments are sub-nodes of >> issues, thus it forms a tree). >> >> There are two kinds of objects in the hierarchy: a. with a (possibly >> long) paginated list of childs; b. with a short list of strings, each >> string being associated with a child object. >> >> I have two variants of class inheritance in mind. Please help to decide >> which is better. >> >> The first one declares only one base class, but some its method remain >> unimplemented (raise NotImplementedError) even in derived classes. > > Two observations: > > In Python you can also use "duck-typing" -- if you don't want to share > code between the classes there is no need for an inhertitance tree at all. I know, but explicit inheritance serves as a kind of documentation for readers of my code. > Pagination is a matter of display and will be handled differently in a PDF > document or web page, say. I would not make it part of the data structure. Not in my case, because the data I receive is already paginated. I am not going to "re-paginate" it in another way. -- Victor Porton - http://portonvictor.org -- https://mail.python.org/mailman/listinfo/python-list
Extra base class in hierarchy
Consider:

    class FinalTreeNode(object):
        def childs(self):
            return []

    class UsualTreeNode(FinalTreeNode):
        def childs(self):
            return ...

In this structure UsualTreeNode derives from FinalTreeNode. Is it better to introduce an extra base class?

    class BaseTreeNode(object):
        def childs(self):
            return []

    # The same functionality as BaseTreeNode, but logically distinct
    class FinalTreeNode(BaseTreeNode):
        pass

    # Not derived from FinalTreeNode, because it is not logically final
    class UsualTreeNode(BaseTreeNode):
        def childs(self):
            return ...

-- Victor Porton - http://portonvictor.org
-- https://mail.python.org/mailman/listinfo/python-list
Which of two variants of code is better?
Which of two variants of code to construct an "issue comment" object (about BitBucket issue comments) is better?

1.

    obj = IssueComment(Issue(IssueGroup(repository, 'issues'), id1), id2)

or

2.

    list = [('issues', IssueGroup), (id1, Issue), (id2, IssueComment)]
    obj = construct_subobject(repository, list)

(`construct_subobject` is to be defined in such a way that "1" and "2" do the same.)

Would you advise me to write such a construct_subobject function, or just to use the direct coding as in "1"?

-- Victor Porton - http://portonvictor.org
-- https://mail.python.org/mailman/listinfo/python-list
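For reference, a construct_subobject with that behaviour can be very small; a sketch, with the spec applied innermost first so that variant 2 builds exactly the object from variant 1:

    def construct_subobject(root, spec):
        # spec is a list of (identifier, cls) pairs, innermost level first.
        obj = root
        for identifier, cls in spec:
            obj = cls(obj, identifier)
        return obj

Whether that indirection is worth it mostly depends on how many of these chains appear in the code base.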
Re: Which of two variants of code is better?
Ned Batchelder wrote:

> On Monday, November 21, 2016 at 12:48:25 PM UTC-5, Victor Porton wrote:
>> Which of two variants of code to construct an "issue comment" object
>> (about BitBucket issue comments) is better?
>>
>> 1.
>>
>> obj = IssueComment(Issue(IssueGroup(repository, 'issues'), id1), id2)
>>
>> or
>>
>> 2.
>>
>> list = [('issues', IssueGroup), (id1, Issue), (id2, IssueComment)]
>> obj = construct_subobject(repository, list)
>>
>> (`construct_subobject` is to be defined in such a way that "1" and "2"
>> do the same.)
>>
>> Would you advise me to make such a construct_subobject function or
>> just to use the direct coding as in "1"?
>
> Neither of these seem very convenient. I don't know what an IssueGroup is,

It is a helper object which helps to paginate issues.

> so I don't know why I need to specify it. To create a comment on an
> issue, why do I need id2, which seems to be the id of a comment?

It does not create a comment. It is preparing to load the comment.

> How about this:
>
> obj = IssueComment(repo=repository, issue=id1)
>
> or:
>
> obj = repository.create_issue_comment(issue=id1)

Your code is too specialized. I want all my code to follow the same patterns. (And I am not going to define helper methods like yours, because we do not use this often enough to be worth a specific method.)

-- Victor Porton - http://portonvictor.org
-- https://mail.python.org/mailman/listinfo/python-list
Re: Python for WEB-page !?
Ionut Predoiu wrote:

> I am a beginner in programming. I want to know which version of Python I
> must learn to use, beside the basic language, because I want to integrate
> into my site one page in which users can do calculations based on my
> formulas already written behind it (the users will only complete some
> fields, and after pushing the "Calculate" button they will see the results
> in the form of tables, graphics, and so on ...). Please take into account
> that behind it there will be more mathematical equations/formulas, so
> speed must be taken into account.

Consider PyPy. I never used it, but they say it is faster than the usual CPython interpreter.

> I am waiting with great interest for your feedback.
>
> Thanks to all members of the community for support and advice.
> Keep in touch.
> Kind regards.

-- Victor Porton - http://portonvictor.org
-- https://mail.python.org/mailman/listinfo/python-list
Recommended format for --log-level option
What is the recommended format for --log-level (or --loglevel?) command line option? Is it a number or NOTSET|DEBUG|INFO|WARNING|ERROR|CRITICAL? Or should I accept both numbers and these string constants? -- Victor Porton - http://portonvictor.org -- https://mail.python.org/mailman/listinfo/python-list
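A common approach is to accept the symbolic names (case-insensitively) and optionally bare numbers, and translate them with the logging module itself; a sketch using argparse:

    import argparse
    import logging

    _NAMES = {'NOTSET', 'DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL'}

    def log_level(value):
        # "10" -> 10, "info"/"INFO" -> logging.INFO, anything else -> error.
        if value.isdigit():
            return int(value)
        name = value.upper()
        if name not in _NAMES:
            raise argparse.ArgumentTypeError('invalid log level: %r' % value)
        return getattr(logging, name)

    parser = argparse.ArgumentParser()
    parser.add_argument('--log-level', type=log_level, default=logging.WARNING)
    args = parser.parse_args(['--log-level', 'info'])
    logging.basicConfig(level=args.log_level)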
The next major Python version will be Python 8
Hi,

Python 3 becomes more and more popular and is close to a dangerous point where it can become more popular than Python 2. The PSF decided that it's time to elaborate a new secret plan to ensure that Python users suffer again with a new major release breaking all their legacy code.

The PSF is happy to announce that the new Python release will be Python 8!

Why the version 8? It's just to be greater than Perl 6 and PHP 7, but it's also a mnemonic for PEP 8. By the way, each minor release will now multiply the version by 2. With Python 8 released in 2016 and one release every two years, we will beat Firefox 44 in 2022 (Python 64) and Windows 2003 in 2032 (Python 2048).

A major release requires a major change to justify a version bump: the new killer feature is that it's no longer possible to import a module which does not respect PEP 8. It ensures that all your code is pure. Example:

$ python8 -c 'import keyword'
Lib/keyword.py:16:1: E122 continuation line missing indentation or outdented
Lib/keyword.py:16:1: E265 block comment should start with '# '
Lib/keyword.py:50:1: E122 continuation line missing indentation or outdented
(...)
ImportError: no pep8, no glory

Good news: since *no* module of the current standard library of Python 3 respects PEP 8, the standard library will be simplified to one unique module, which is new in Python 8: pep8. The standard library will move to the Python Cheeseshop (PyPI), to reply to an old and popular request.

DON'T PANIC! You are still able to import your legacy code into Python 8, you just have to rename all your modules to add a "_noqa" suffix to the filename. For example, rename utils.py to utils_noqa.py. A side effect is that you have to update all imports. For example, replace "import django" with "import django_noqa".

After a study by the PSF, this is the best option to split the Python community again and make sure that all users are angry. The plan is that in 10 years, at least 50% of the 77,000 packages on the Python cheeseshop will be updated to get the "_noqa" tag. After 2020, the PSF will start to sponsor trolls to harass users of the legacy Python 3 to force them to migrate to Python 8.

Python 8 is a work-in-progress (it's still an alpha version); the standard library has not been removed yet. Fortunately, trying to import any module of the standard library already fails. Don't hesitate to propose more ideas to make Python 8 more incompatible with Python 3!

Note: The change is already effective in the default branch of Python: https://hg.python.org/cpython/rev/9aedec2dbc01

Have fun,
Victor

-- https://mail.python.org/mailman/listinfo/python-list
Language improvement: Get more from the `for .. else` clause
tl;dr:

1. Add `StopAsyncIteration.value`, with the same semantics as `StopIteration.value` (documented in PEP 380).
2. Capture `StopIteration.value` and `StopAsyncIteration.value` in the `else` clauses of the `for` and `async for` statements respectively.

Note: I already have a proof-of-concept implementation:

repository: https://github.com/Victor-Savu/cpython
branch: feat/else_capture

Dear members of the Python list,

I am writing to discuss and get the community's opinion on the following two ideas:

1. Capture the `StopIteration.value` in the `else` clause of the `for .. else` statement:

Generators raise StopIteration on the return statement. The exception captures the return value. The `for` statement catches the `StopIteration` exception to know when to jump to the optional `else` statement, but discards the enclosed return value. I want to propose an addition to the Python syntax which gives the option to capture the return value in the `else` statement of the `for` loop:

```
def holy_grenade():
    yield 'One ...'
    yield 'Two ...'
    yield 'Five!'
    return ('Galahad', 'Three')

for count_ in holy_grenade():
    print(f"King Arthur: {count_}")
else knight, correction:  # << new capture syntax here
    print(f"{knight}: {correction}, Sir!")
    print(f"King Arthur: {correction}!")
```

prints:

```
King Arthur: One ...
King Arthur: Two ...
King Arthur: Five!
Galahad: Three, Sir!
King Arthur: Three!
```

Of course, the capture expression is optional, and omitting it preserves the current behavior, making this proposed change backwards compatible. Should the iterator end without raising the StopIteration exception, the value `None` will be implicitly passed to the capture expression. In the example above, this will result in:

```
TypeError: 'NoneType' object is not iterable
```

because of the attempt to de-structure the result into `knight` and `correction`.

2. Add a `StopAsyncIteration.value` member which can be used to transfer information about the end of the asynchronous iteration, in the same way the `StopIteration.value` member is used (as documented in PEP 380). Capture this value in the `else` clause of the `async for` statement in the same way as proposed for `StopIteration.value` in the previous point.

You can find a working proof-of-concept implementation of the two proposed changes in my fork of the semi-official cpython repository on GitHub:

repository: https://github.com/Victor-Savu/cpython
branch: feat/else_capture

Disclaimer: My Internet searching skills have failed me and I could not find any previous discussion of either of the two topics. If you are aware of such a discussion, I would be grateful if you could point it out.

I look forward to your feedback, ideas, and (hopefully constructive) criticism!

Best regards,
Victor

-- https://mail.python.org/mailman/listinfo/python-list
Re: "for/while ... break(by any means) ... else" make sense?
There are many posts trying to explain the else after for or while. Here is my take on it:

There are three ways of getting out of a (for/while) loop: a throw, a break, or the iterator gets exhausted. The question is, how can we tell which way we exited? For the throw, we have the except clause. This leaves us to differentiate between a break and normal exhaustion of the iterator. That is what the else clause is for: we enter its body iff the loop iterator was exhausted.

A lot of discussion goes around the actual keyword used: else. Opinions may differ, but I for one would have chosen 'then' as a keyword to mark something that naturally happens as part of the for statement but after the looping is over; assuming break jumps out of the entire statement, it makes sense that it skips the 'then' body as well. (In the same way, I prefer 'catch' to 'except' as a correspondent to 'throw', but all of this is just bikeshedding.)

At a language design level, the decision was made to reuse one of the existing keywords, and for better or worse, 'else' was chosen, which can be thought of as having no relation to the other use of the same keyword in the 'if' statement. The only rationale behind this was to save one keyword. The search analogy often used for justifying 'else' is (to me) totally bogus, since the same argument can be used to support replacing the keyword 'for' by the keyword 'find' and having looping only as a side-effect of a search.

I hope this gives you some sense of closure.

Best,
VS

-- https://mail.python.org/mailman/listinfo/python-list
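A small demonstration of that reading, for the record:

    for n in [1, 2, 3]:
        if n == 99:
            break
    else:
        print('iterator exhausted, no break happened')   # this prints

    for n in [1, 2, 3]:
        if n == 2:
            break
    else:
        print('never reached: the loop ended via break')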
Re: Language improvement: Get more from the `for .. else` clause
Sure. Simple use-case: decorate the yielded values and the return value of a generator. Right now, with `yield from` you can only decorate the return value, whereas with a for loop you can decorate the yielded values, but you sacrifice the returned value altogether.

```
def ret_decorator(target_generator):
    returned_value = yield from target_generator
    return decorate_ret(returned_value)

def yield_decorator(target_generator):
    for yielded_value in target_generator:
        yield decorate_yield(yielded_value)
    # Nothing to return: the target's return value was
    # consumed by the for loop.
```

With the proposed syntax, you can decorate both:

```
def decorator(target_generator):
    for yielded_value in target_generator:
        yield decorate_yield(yielded_value)
    else returned_value:
        return decorate_ret(returned_value)
```

Please let me know if you are interested in a more concrete case such as a domain-specific application (I can think of progress bars, logging, transfer rate statistics ...).

Best,
VS

On Mon, Jun 27, 2016 at 5:06 PM, Michael Selik wrote:
> On Mon, Jun 27, 2016 at 12:53 AM Victor Savu <
> victor.nicolae.s...@gmail.com> wrote:
>
>> capture the [StopIteration] value in the `else` statement of the `for`
>> loop
>
> I'm having trouble thinking of a case when this new feature is necessary.
> Can you show a more realistic example?

-- https://mail.python.org/mailman/listinfo/python-list
Idle not opening
Hey there,

After successfully installing Python 3.8.2 (64-bit) on my system (Windows 10, 64-bit OS), IDLE is not opening. I've tried uninstalling and reinstalling it, but still the same result. Looking forward to a fix please. Thanks

-- https://mail.python.org/mailman/listinfo/python-list
Re: Metaclasses, decorators, and synchronization
You could do it with a metaclass, but I think that's probably overkill.

It's not really efficient as it's doing test/set of an RLock all the time, but hey - you didn't ask for efficient. :)

 1  import threading
 2
 3  def synchronized(func):
 4      def innerMethod(self, *args, **kwargs):
 5          if not hasattr(self, '_sync_lock'):
 6              self._sync_lock = threading.RLock()
 7          self._sync_lock.acquire()
 8          print 'acquired %r' % self._sync_lock
 9          try:
10              return func(self, *args, **kwargs)
11          finally:
12              self._sync_lock.release()
13              print 'released %r' % self._sync_lock
14      return innerMethod
15
16  class Foo(object):
17      @synchronized
18      def mySyncMethod(self):
19          print "blah"
20
21
22  f = Foo()
23  f.mySyncMethod()

If you used a metaclass, you could save yourself the hassle of adding a sync_lock in each instance, but you could also do that by just using a plain old base class and making sure you call the base class's __init__ to add in the sync lock.

vic

On 9/24/05, Michael Ekstrand <[EMAIL PROTECTED]> wrote:
> I've been googling around for a bit trying to find some mechanism for
> doing in Python something like Java's synchronized methods. In the
> decorators PEP, I see examples using a hypothetical synchronized
> decorator, but haven't stumbled across any actual implementation of
> such a decorator. I've also found Synch.py, but that seems to use
> pre-2.2 metaclasses from what I have read.
>
> Basically, what I want to do is something like this:
>
> class MyClass:
>     __metaclass__ = SynchronizedMeta
>     @synchronized
>     def my_sync_method():
>         pass
>
> where SychronizedMeta is some metaclass that implements synchronization
> logic for classes bearing some synchronized decorator (probably also
> defined by the module defining SynchronizedMeta).
>
> After looking in the Cheeseshop, the Python source distribution, and
> Google, I have not been able to find anything that implements this
> functionality. If there isn't anything, that's fine, I'll just write it
> myself (and hopefully be able to put it in the cheeseshop), but I'd
> rather avoid duplicating effort solving previously solved problems...
> So, does anyone know of anything that already does this? (or are there
> some serious problems with my design?)
>
> TIA,
> Michael
>
> --
> http://mail.python.org/mailman/listinfo/python-list

--
"Never attribute to malice that which can be adequately explained by stupidity." - Hanlon's Razor

-- http://mail.python.org/mailman/listinfo/python-list
Re: Metaclasses, decorators, and synchronization
Hmmm, well that's obvious enough. This is why I shouldn't write code off the cuff on c.l.p :)

OTOH - if I just assign the RLock in the base class's initializer, is there any problem?

vic

On 9/26/05, Jp Calderone <[EMAIL PROTECTED]> wrote:
> On Sun, 25 Sep 2005 23:30:21 -0400, Victor Ng <[EMAIL PROTECTED]> wrote:
> >You could do it with a metaclass, but I think that's probably overkill.
> >
> >It's not really efficient as it's doing test/set of an RLock all the
> >time, but hey - you didn't ask for efficient. :)
>
> There's a race condition in this version of synchronized which can allow
> two or more threads to execute the synchronized function simultaneously.
>
> > 1  import threading
> > 2
> > 3  def synchronized(func):
> > 4      def innerMethod(self, *args, **kwargs):
> > 5          if not hasattr(self, '_sync_lock'):
>
> Imagine two threads reach the above test at the same time - they both
> discover there is no RLock protecting this function. They both enter
> this suite to create one.
>
> > 6              self._sync_lock = threading.RLock()
>
> Now one of them zooms ahead, creating the RLock and acquiring it on the
> next line. The other one finally manages to get some runtime again
> afterwards and creates another RLock, clobbering the first.
>
> > 7          self._sync_lock.acquire()
>
> Now it proceeds to this point and acquires the newly created RLock.
> Woops. Two threads now think they are allowed to run this function.
>
> > 8          print 'acquired %r' % self._sync_lock
> > 9          try:
> > 10             return func(self, *args, **kwargs)
>
> And so they do.
>
> > 11         finally:
> > 12             self._sync_lock.release()
> > 13             print 'released %r' % self._sync_lock
>
> Of course, when the second gets to the finally suite, it will explode,
> since it will be releasing the same lock the first thread to get here
> has already released.
>
> > 14     return innerMethod
> > 15
> > 16 class Foo(object):
> > 17     @synchronized
> > 18     def mySyncMethod(self):
> > 19         print "blah"
> > 20
> > 21
> > 22 f = Foo()
> > 23 f.mySyncMethod()
>
> To avoid this race condition, you need to serialize lock creation. This
> is exactly what Twisted's implementation does. You can read that version
> at <http://svn.twistedmatrix.com/cvs/trunk/twisted/python/threadable.py?view=markup&rev=13745>.
>
> The code is factored somewhat differently: the functionality is presented
> as pre- and post-execution hooks, and there is no function decorator. The
> concept is the same, however.
>
> Jp
> --
> http://mail.python.org/mailman/listinfo/python-list

--
"Never attribute to malice that which can be adequately explained by stupidity." - Hanlon's Razor

-- http://mail.python.org/mailman/listinfo/python-list
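Following Jp's point about serializing lock creation, here is a sketch of a variant that does so with a single module-level lock (written in the same Python 2 style as the code above; it is not Twisted's actual implementation):

    import threading

    _creation_lock = threading.Lock()

    def synchronized(func):
        def innerMethod(self, *args, **kwargs):
            # Only one thread at a time may install the per-instance RLock,
            # so two threads can no longer clobber each other's lock.
            _creation_lock.acquire()
            try:
                if not hasattr(self, '_sync_lock'):
                    self._sync_lock = threading.RLock()
            finally:
                _creation_lock.release()
            self._sync_lock.acquire()
            try:
                return func(self, *args, **kwargs)
            finally:
                self._sync_lock.release()
        return innerMethod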
Re: calling a dylib in python on OS X
You can use Pyrex which will generate a C module for you. vic On 22 Oct 2005 23:40:17 -0700, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote: > Hi > > Is there something similar to python's windll for calling DLLs on win32 > but meant for calling dylib's on OS X? > > thanks, > > r.s. > > -- > http://mail.python.org/mailman/listinfo/python-list > -- "Never attribute to malice that which can be adequately explained by stupidity." - Hanlon's Razor -- http://mail.python.org/mailman/listinfo/python-list
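ctypes is another route on OS X, if your Python has it available; a sketch using the system math library (the library name is just an example):

    import ctypes
    import ctypes.util

    # find_library('m') resolves to something like /usr/lib/libm.dylib.
    libm = ctypes.CDLL(ctypes.util.find_library('m'))
    libm.cos.restype = ctypes.c_double
    libm.cos.argtypes = [ctypes.c_double]
    print libm.cos(0.0)   # 1.0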
Re: python gui using boa
ash wrote: > Thanks Steve, i found out the solution to the problem. but a good > tutorial on sizers is still missing. Try this article I wrote a while back. It should at least help you get started. The code samples are written in C++, but they are trivially translated to python. (Change -> to ., change this to self, get rid of "new" and *) http://neume.sourceforge.net/sizerdemo/ -- Brian -- http://mail.python.org/mailman/listinfo/python-list
Re: python gui using boa
ash wrote:
> I have another query for you - how can i make a captionless frame
> draggable in wxWindows?

If you look at the wxPython demo, there's a Shaped Window demo under Miscellaneous that does this. The key portion is on line 86 in my version:

#v+
    def OnMouseMove(self, evt):
        if evt.Dragging() and evt.LeftIsDown():
            x, y = self.ClientToScreen(evt.GetPosition())
            fp = (x - self.delta[0], y - self.delta[1])
            self.Move(fp)
#v-

-- Brian

-- http://mail.python.org/mailman/listinfo/python-list
Re: Wheel-reinvention with Python
[EMAIL PROTECTED] wrote: > Torsten Bronger wrote: >> I've been having a closer look at wxPython which is not Pythonic at >> all and bad documented. Probably I'll use it nevertheless. > Aye. Couldn't agree more. You know, whenever someone mentions wxPython being badly documented, I have to wonder whether they know about the nearly 2000 page PDF of wxWidgets documentation, which is available in html at http://www.wxwidgets.org/manuals/2.6.1/wx_contents.html wxPython has the same API as wxWidgets, except where indicated in that manual. If in doubt, you can also consult http://wxpython.org/docs/api/ And of course, the gaps are filled in by the wxPython wiki: http://wiki.wxpython.org/ I apologize if you already know about these things, but I find myself continually surprised that "wxPython is badly documented" has become conventional wisdom when I have never found that to be the case. -- Brian -- http://mail.python.org/mailman/listinfo/python-list
Re: [newbie]search string in tuples
Viper Jack wrote:
> but i want check on several object inside the tuple so i'm trying this:
>
> list=["airplane","car","boat"]

Note that this is actually a list, not a tuple as your subject suggests. For the difference, take a look at this:
http://www.python.org/doc/faq/general.html#why-are-there-separate-tuple-and-list-data-types

Also, it's generally considered bad form to shadow built-in type names like list. This prevents you from using methods in the list scope and makes for potentially confusing bugs.

> while select != list[0] or list[1] or list[2]:

This actually behaves as though you wrote this:

    while (select != list[0]) or list[1] or list[2]:

Since list[1] always evaluates to a true value (non-empty strings are true), the while loop body will always execute. Fortunately, python has a very simple way of doing what you want:

    vehicles = ("airplane", "car", "boat")
    select = vars
    while select not in vehicles:
        select = raw_input("Wich vehicle?")

-- Brian

-- http://mail.python.org/mailman/listinfo/python-list
Re: Sizers VS window size
Deltones wrote: > However, if I add this part from the tutorial, I get a much smaller > window. Why is there an interference with the result I want when > adding the sizer code? [snip] > self.sizer.Fit(self) As noted in the the docs for Fit(): "Tell the sizer to resize the window to match the sizer's minimal size." Take this call out and the size should be as you expect. -- Brian -- http://mail.python.org/mailman/listinfo/python-list
Is there way to determine which class a method is bound to?
I'm doing some evil things in Python and I would find it useful to determine which class a method is bound to when I'm given a method pointer.

For example:

    class Foo(object):
        def somemeth(self):
            return 42

    class Bar(Foo):
        def othermethod(self):
            return 42

Is there some way I can have something like:

    findClass(Bar.somemeth)

that would return the 'Foo' class, and

    findClass(Bar.othermethod)

would return the 'Bar' class?

vic

-- http://mail.python.org/mailman/listinfo/python-list
Re: Is there way to determine which class a method is bound to?
No - that doesn't work, im_class gives me the current class - in the case of inheritance, I'd like to get the super class which provides 'bar'. I suppose I could walk the __bases__ to find the method using the search routine outlined in: http://www.python.org/2.2/descrintro.html but I was hoping for an automatic way of resolving this. vic On Fri, 25 Feb 2005 14:54:34 +, Richie Hindle <[EMAIL PROTECTED]> wrote: > > [vic] > > I'm doing some evil things in Python and I would find it useful to > > determine which class a method is bound to when I'm given a method > > pointer. > > Here you go: > > >>> class Foo: > ... def bar(self): > ... pass > ... > >>> Foo.bar.im_class > > >>> Foo().bar.im_class > > >>> > > -- > Richie Hindle > [EMAIL PROTECTED] > > -- > http://mail.python.org/mailman/listinfo/python-list > -- http://mail.python.org/mailman/listinfo/python-list
Re: Is there way to determine which class a method is bound to?
Awesome! I didn't see the getmro function in inspect - that'll do the trick for me. I should be able to just look up the method name in each of the class's __dict__ attributes.

vic

On Fri, 25 Feb 2005 16:29:25 +0100, Peter Otten <[EMAIL PROTECTED]> wrote:
> Victor Ng wrote:
>
> > I'm doing some evil things in Python and I would find it useful to
> > determine which class a method is bound to when I'm given a method
> > pointer.
> >
> > For example:
> >
> > class Foo(object):
> >     def somemeth(self):
> >         return 42
> >
> > class Bar(Foo):
> >     def othermethod(self):
> >         return 42
> >
> > Is there some way I can have something like :
> >
> >    findClass(Bar.somemeth)
> >
> > that would return the 'Foo' class, and
> >
> >    findClass(Bar.othermethod)
> >
> > would return the 'Bar' class?
> >
> > vic
>
> >>> import inspect
> >>> class Foo(object):
> ...     def foo(self): pass
> ...
> >>> class Bar(Foo):
> ...     def bar(self): pass
> ...
> >>> def get_imp_class(method):
> ...     return [t for t in inspect.classify_class_attrs(method.im_class) if
> t[-1] is method.im_func][0][2]
> ...
> >>> [get_imp_class(m) for m in [Bar().foo, Bar().bar, Bar.foo, Bar.bar]]
> [<class '__main__.Foo'>, <class '__main__.Bar'>, <class '__main__.Foo'>,
> <class '__main__.Bar'>]
>
> but with this approach you will get into trouble as soon as you are using
> the same function to define multiple methods. There may be something in the
> inspect module more apt to solve the problem -- getmro() perhaps?
>
> Peter
>
> --
> http://mail.python.org/mailman/listinfo/python-list

-- http://mail.python.org/mailman/listinfo/python-list
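Putting that together, a small sketch of findClass based on getmro and the __dict__ lookup (written in the same Python 2 style as the thread, using im_class):

    import inspect

    def findClass(method):
        # Walk the MRO of the class the method was accessed through and
        # return the first class whose own __dict__ defines that name.
        for klass in inspect.getmro(method.im_class):
            if method.__name__ in klass.__dict__:
                return klass
        return None

With the Foo/Bar example from earlier in the thread, findClass(Bar.somemeth) returns Foo and findClass(Bar.othermethod) returns Bar.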
Re: Is there way to determine which class a method is bound to?
So I went digging through the documentation more and found the following: http://docs.python.org/ref/types.html There's a section titled "User-defined methods" which covers all the im_self, im_class attributes and what they are responsible for. vic On 25 Feb 2005 10:42:06 -0800, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote: > Another way is to make a simple metaclass, setting an attribute (like > defining_class, or something) on each function object in the class > dictionary. > > -- > http://mail.python.org/mailman/listinfo/python-list > -- --- "Never attribute to malice that which can be adequately explained by stupidity." - Hanlon's Razor -- http://mail.python.org/mailman/listinfo/python-list
Preserving the argspec of a function after generating a closure
Is there a way to preserve the argspec of a function after wrapping it in a closure?

I'm looking for a general way to say "wrap function F in a closure", such that inspect.getargspec on the closure would return the same (args, varargs, varkw, defaults) tuple as the enclosed function.

The typical code I'm using is something like this:

    def wrapFunc(func):
        def tmpWrapper(*args, **kwargs):
            return func(*args, **kwargs)
        tmpWrapper.func_name = func.func_name
        return tmpWrapper

This preserves the function name - how do I do more?

vic

--
"Never attribute to malice that which can be adequately explained by stupidity." - Hanlon's Razor

-- http://mail.python.org/mailman/listinfo/python-list
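For the name/docstring part, functools.wraps (available from Python 2.5 on) does the copying for you, though it still does not make inspect.getargspec report the wrapped function's signature; the third-party decorator module is the usual answer if you need that too. A sketch of the functools route and its limitation:

    import functools
    import inspect

    def wrapFunc(func):
        @functools.wraps(func)
        def tmpWrapper(*args, **kwargs):
            return func(*args, **kwargs)
        return tmpWrapper

    def f(a, b=1, *rest, **options):
        pass

    g = wrapFunc(f)
    print g.__name__               # 'f'
    print inspect.getargspec(g)    # still the wrapper's (*args, **kwargs)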
Where to handle try-except - close to the statement, or in outer loop?
Hi,

I have a general question regarding try-except handling in Python.

Previously, I was putting the try-except blocks quite close to where the errors occurred. A somewhat contrived example:

    if __name__ == "__main__":
        my_pet = Dog('spot', 5, 'brown')
        my_pet.feed()
        my_pet.shower()

and then, in each of the methods (feed(), shower()), I'd open up files, open database connections etc. And I'd wrap each statement there in its own individual try-except block. (I'm guessing I should wrap the whole lot in a single try-except, and handle each exception there?)

However, the author here:

http://stackoverflow.com/a/3644618/139137

suggests that it's a bad habit to catch an exception as early as possible, and that you should handle it at an outer level. From reading other posts, this seems to be the consensus as well.

However, how does this work if you have multiple methods which can throw the same types of exceptions? For example, if both feed() and shower() above need to write to files, when you get your IOError, how do you distinguish where it came from? (E.g. if you wanted to print a friendly error message, saying "Error writing to file while feeding.", or if you otherwise wanted to handle it differently.)

Would I wrap all of the calls in a try-except block?

    try:
        my_pet.feed()
        my_pet.shower()
    except IOError as e:
        # Do something to handle exception?

Can anybody recommend any good examples that show current best practices for exception handling, for programs with moderate complexity? (I.e. anything more than the examples in the tutorial, basically.)

Cheers,
Victor

-- https://mail.python.org/mailman/listinfo/python-list
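One way to keep the handling at the outer level and still know which step failed is to give each call its own narrow try block (or attach context when re-raising); a small sketch built on the Dog example, where the message text is of course only an illustration:

    def main():
        my_pet = Dog('spot', 5, 'brown')
        try:
            my_pet.feed()
        except IOError as e:
            print("Error writing to file while feeding: %s" % e)
            return 1
        try:
            my_pet.shower()
        except IOError as e:
            print("Error writing to file while showering: %s" % e)
            return 1
        return 0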
Re: [Python-ideas] Unicode stdin/stdout
Why do you need to force the UTF-8 encoding? Your locale is not correctly configured? It's better to set PYTHONIOENCODING rather than replacing sys.stdout/stderr at runtime. There is an open issue to add a TextIOWrapper.set_encoding() method: http://bugs.python.org/issue15216 Victor -- https://mail.python.org/mailman/listinfo/python-list
Using try-catch to handle multiple possible file types?
Hi, I have a script that needs to handle input files of different types (uncompressed, gzipped etc.). My question is regarding how I should handle the different cases. My first thought was to use a try-catch block and attempt to open it using the most common filetype, then if that failed, try the next most common type etc. before finally erroring out. So basically, using exception handling for flow-control. However, is that considered bad practice, or un-Pythonic? What other alternative constructs could I also use, and pros and cons? (I was thinking I could also use python-magic which wraps libmagic, or I can just rely on file extensions). Other thoughts? Cheers, Victor -- https://mail.python.org/mailman/listinfo/python-list
Re: Using try-catch to handle multiple possible file types?
Hi,

Is either approach (try-excepts, or using libmagic) considered more idiomatic? What would you guys prefer yourselves?

Also, is it possible to use either approach with a context manager ("with"), without duplicating lots of code? For example:

    try:
        with gzip.open('blah.txt', 'rb') as f:
            for line in f:
                print(line)
    except IOError as e:
        with open('blah.txt', 'rb') as f:
            for line in f:
                print(line)

I'm not sure how to do this without needing to duplicate the processing lines (everything inside the with). And using:

    try:
        f = gzip.open('blah.txt', 'rb')
    except IOError as e:
        f = open('blah.txt', 'rb')
    finally:
        for line in f:
            print(line)

won't work, since the exception won't get thrown until you actually try to read the file. Plus, I'm under the impression that I should be using context managers where I can.

Also, on another note, python-magic will return a string as a result, e.g.:

    gzip compressed data, was "blah.txt", from Unix, last modified: Wed Nov 20 10:48:35 2013

I suppose it's enough to just do a:

    if "gzip compressed data" in results:

or is there a better way?

Cheers,
Victor

On Tuesday, 19 November 2013 20:36:47 UTC+11, Mark Lawrence wrote:
> On 19/11/2013 07:13, Victor Hooi wrote:
>>
>> So basically, using exception handling for flow-control.
>>
>> However, is that considered bad practice, or un-Pythonic?
>
> If it works for you use it, practicality beats purity :)
>
> --
> Python is the second best programming language in the world.
> But the best has yet to be invented. Christian Tismer
>
> Mark Lawrence

-- https://mail.python.org/mailman/listinfo/python-list
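One way to avoid duplicating the with-block is to push the detection into a small helper that returns an already-open file object, for example by sniffing the two-byte gzip magic number instead of relying on exceptions or the extension; a sketch:

    import gzip

    def open_maybe_gzip(path, mode='rb'):
        # gzip streams start with the magic bytes 1f 8b.
        with open(path, 'rb') as f:
            magic = f.read(2)
        if magic == b'\x1f\x8b':
            return gzip.open(path, mode)
        return open(path, mode)

    with open_maybe_gzip('blah.txt') as f:
        for line in f:
            print(line)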
Understanding relative imports in package - and running pytest with relative imports?
Hi,

Ok, this is a topic that I've never really understood properly, so I'd like to find out the "proper" way of doing things.

Say I have a directory structure like this:

    furniture/
        __init__.py
        chair/
            __init__.py
            config.yaml
            build_chair.py
        common/
            __init__.py
            shared.py
        table/
            __init__.py
            config.yaml
            create_table.sql
            build_table.py

The package is called furniture, and we have modules chair, common and table underneath that.

build_chair.py and build_table.py are supposed to import from common/shared.py using relative imports, e.g.:

    from ..common.shared import supplies

However, if you then try to run the scripts build_chair.py or build_table.py, they'll complain about:

    ValueError: Attempted relative import in non-package

After some Googling:

http://stackoverflow.com/questions/11536764/attempted-relative-import-in-non-package-even-with-init-py
http://stackoverflow.com/questions/72852/how-to-do-relative-imports-in-python
http://stackoverflow.com/questions/1198/getting-attempted-relative-import-in-non-package-error-in-spite-of-having-init
http://stackoverflow.com/questions/14664313/attempted-relative-import-in-non-package-although-packaes-with-init-py-in
http://melitamihaljevic.blogspot.com.au/2013/04/python-relative-imports-hard-way.html

the advice seems to be either to run it from the parent directory of furniture with:

    python -m furniture.chair.build_chair

or to have a main.py outside of the package directory, run that, and have it import things.

However, I don't see how having a separate single main.py outside my package would work with keeping my code tidy/organised, or how it'd work with the other files (config.yaml, or create_table.sql) which are associated with each script.

A third way I thought of was just to create a setup.py and install the package into site-packages - and then everything will work? However, I don't think that solves my problem of understanding how things work, or getting my directory structure right.

Although apparently running a script inside a package is an anti-pattern? (https://mail.python.org/pipermail/python-3000/2007-April/006793.html)

How would you guys organise the code above?

Also, if I have tests (say with pytest) inside furniture/table/tests/test_table.py, how would I run these as well? If I run py.test from there, I get the same:

    $ py.test
    from ..table.build_table import Table
    E   ValueError: Attempted relative import in non-package

(The above is just an extract.)

Assuming I use pytest, where should my tests be in the directory structure, and how should I be running them?

Cheers,
Victor

-- https://mail.python.org/mailman/listinfo/python-list
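For what it's worth, the "third way" (a setup.py plus an editable install) is often the least painful: it makes the scripts, the data files and the tests all resolve the furniture package the same way. A sketch following the layout above:

    # setup.py, placed next to the furniture/ directory
    from setuptools import setup, find_packages

    setup(
        name='furniture',
        version='0.1',
        packages=find_packages(),
        package_data={'': ['*.yaml', '*.sql']},
    )

After a `pip install -e .` in that directory, `python -m furniture.chair.build_chair` works from anywhere, and running py.test from the project root can import the package (give the tests directory an __init__.py so pytest resolves test_table.py as furniture.table.tests.test_table).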
Python String Formatting - passing both a dict and string to .format()
Hi,

I'm trying to use Python's new style string formatting with a dict and string together.

For example, I have the following dict and string variable:

    my_dict = { 'cat': 'ernie', 'dog': 'spot' }
    foo = 'lorem ipsum'

If I want to just use the dict, it all works fine:

    '{cat} and {dog}'.format(**my_dict)
    'ernie and spot'

(I'm also curious how the above ** works in this case).

However, if I try to combine them:

    '{cat} and {dog}, {}'.format(**my_dict, foo)
    ...
    SyntaxError: invalid syntax

I also tried with:

    '{0['cat']} {1} {0['dog']}'.format(my_dict, foo)
    ...
    SyntaxError: invalid syntax

However, I found that if I take out the single quotes around the keys it then works:

    '{0[cat]} {1} {0[dog]}'.format(my_dict, foo)
    "ernie lorem ipsum spot"

I'm curious - why does this work? Why don't the dictionary keys need quotes around them, like when you normally access a dict's elements?

Also, is this the best practice to pass both a dict and string to .format()? Or is there another way that avoids needing to use positional indices? ({0}, {1} etc.)

Cheers,
Victor

-- https://mail.python.org/mailman/listinfo/python-list
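For the record, the combination does work when the positional argument comes before the keyword expansion in the call, or when the extra value gets its own name; a quick sketch:

    my_dict = { 'cat': 'ernie', 'dog': 'spot' }
    foo = 'lorem ipsum'

    # Positional arguments must come before **my_dict in the call.
    '{cat} and {dog}, {}'.format(foo, **my_dict)
    # -> 'ernie and spot, lorem ipsum'

    # Or name the extra value so no positional index is needed at all.
    '{cat} and {dog}, {extra}'.format(extra=foo, **my_dict)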
Python and PEP8 - Recommendations on breaking up long lines?
Hi,

I'm running pep8 across my code, and getting warnings about my long lines (> 80 characters).

I'm wondering what's the recommended way to handle the below cases, and fit under 80 characters.

First example - multiple context handlers:

    with open(self.full_path, 'r') as input, open(self.output_csv, 'ab') as output:

and in my case, with indents, the 80-character mark is just before the ending "as output".

What's the standard recognised way to split this across multiple lines, so that I'm under 80 characters?

I can't just split after the "as input," as that isn't valid syntax, and there's no convenient parentheses for me to split over.

Is there a standard Pythonic way?

Second example - long error messages:

    self.logger.error('Unable to open input or output file - %s. Please check you have sufficient permissions and the file and parent directory exist.' % e)

I can use triple quotes:

    self.logger.error(
        """Unable to open input or output file - %s. Please check you
        have sufficient permissions and the file and parent directory
        exist.""" % e)

However, that will introduce newlines in the message, which I don't want.

I can use backslashes:

    self.logger.error(
        'Unable to open input or output file - %s. Please check you\
        have sufficient permissions and the file and parent directory\
        exist.' % e)

which won't introduce newlines.

Or I can put them all as separate strings, and trust Python to glue them together:

    self.logger.error(
        'Unable to open input or output file - %s. Please check you'
        'have sufficient permissions and the file and parent directory'
        'exist.' % e)

Which way is the recommended Pythonic way?

Third example - long comments:

    """ NB - We can't use Psycopg2's parametised statements here, as
    that automatically wraps everything in single quotes.
    So s3://my_bucket/my_file.csv.gz would become s3://'my_bucket'/'my_file.csv.gz'.
    Hence, we use Python's normal string formating - this could
    potentially exposes us to SQL injection attacks via the config.yaml
    file.
    I'm not aware of any easy ways around this currently though - I'm
    open to suggestions though.
    See
    http://stackoverflow.com/questions/9354392/psycopg2-cursor-execute-with-sql-query-parameter-causes-syntax-error
    for further information. """

In this case, I'm guessing using triple quotes (""") is a better idea with multi-line comments, right?

However, I've noticed that I can't seem to put in line-breaks inside the comment without triggering a warning. For example, trying to put in another empty line in between lines 6 and 7 above causes a warning.

Also, how would I split up the long URLs? Breaking it up makes it annoying to use the URL. Thoughts?

Cheers,
Victor

-- https://mail.python.org/mailman/listinfo/python-list
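For the first example, one common workaround is a plain backslash continuation after the comma, since the with statement gives you no parentheses to break on (contextlib's nested/ExitStack are the other usual routes, where available); a sketch:

    with open(self.full_path, 'r') as input, \
            open(self.output_csv, 'ab') as output:
        ...

For the glued-string variant, note that each piece needs a trailing space, otherwise the words run together when Python concatenates the literals.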
Re: Python and PEP8 - Recommendations on breaking up long lines?
Hi,

Also, I forgot two other examples that are causing me grief:

    cur.executemany("INSERT INTO foobar_foobar_files VALUES (?)",
                    [[os.path.relpath(filename, foobar_input_folder)] for filename in filenames])

I've already broken it up using the parentheses, not sure what's the tidy way to break it up again to fit under 80. In this case, the 80-character mark is hitting me around the "for filename" towards the end.

and:

    if os.path.join(root, file) not in previously_processed_files and os.path.join(root, file)[:-3] not in previously_processed_files:

In this case, the 80-character mark is actually partway through "previously_processed_files" (the first occurrence)...

Cheers,
Victor

On Thursday, 28 November 2013 12:57:13 UTC+11, Victor Hooi wrote:
> Hi,
>
> I'm running pep8 across my code, and getting warnings about my long lines
> (> 80 characters).
>
> I'm wondering what's the recommended way to handle the below cases, and
> fit under 80 characters.
>
> First example - multiple context handlers:
>
>     with open(self.full_path, 'r') as input, open(self.output_csv, 'ab') as output:
>
> and in my case, with indents, the 80-character mark is just before the
> ending "as output".
>
> What's the standard recognised way to split this across multiple lines,
> so that I'm under 80 characters?
>
> I can't just split after the "as input," as that isn't valid syntax, and
> there's no convenient parentheses for me to split over.
>
> Is there a standard Pythonic way?
>
> Second example - long error messages:
>
>     self.logger.error('Unable to open input or output file - %s. Please
>     check you have sufficient permissions and the file and parent
>     directory exist.' % e)
>
> I can use triple quotes:
>
>     self.logger.error(
>         """Unable to open input or output file - %s. Please check you
>         have sufficient permissions and the file and parent directory
>         exist.""" % e)
>
> However, that will introduce newlines in the message, which I don't want.
>
> I can use backslashes:
>
>     self.logger.error(
>         'Unable to open input or output file - %s. Please check you\
>         have sufficient permissions and the file and parent directory\
>         exist.' % e)
>
> which won't introduce newlines.
>
> Or I can put them all as separate strings, and trust Python to glue them
> together:
>
>     self.logger.error(
>         'Unable to open input or output file - %s. Please check you'
>         'have sufficient permissions and the file and parent directory'
>         'exist.' % e)
>
> Which way is the recommended Pythonic way?
>
> Third example - long comments:
>
>     """ NB - We can't use Psycopg2's parametised statements here, as
>     that automatically wraps everything in single quotes.
>     So s3://my_bucket/my_file.csv.gz would become s3://'my_bucket'/'my_file.csv.gz'.
>     Hence, we use Python's normal string formating - this could
>     potentially exposes us to SQL injection attacks via the config.yaml
>     file.
>     I'm not aware of any easy ways around this currently though - I'm
>     open to suggestions though.
>     See
>     http://stackoverflow.com/questions/9354392/psycopg2-cursor-execute-with-sql-query-parameter-causes-syntax-error
>     for further information. """
>
> In this case, I'm guessing using triple quotes (""") is a better idea
> with multi-line comments, right?
>
> However, I've noticed that I can't seem to put in line-breaks inside the
> comment without triggering a warning. For example, trying to put in
> another empty line in between lines 6 and 7 above causes a warning.
>
> Also, how would I split up the long URLs? Breaking it up makes it
> annoying to use the URL. Thoughts?
>
> Cheers,
>
> Victor

-- https://mail.python.org/mailman/listinfo/python-list
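For these two, pulling the repeated expression out into a name and giving the list comprehension its own line is usually enough to get under the limit; a sketch:

    relative_paths = [[os.path.relpath(filename, foobar_input_folder)]
                      for filename in filenames]
    cur.executemany("INSERT INTO foobar_foobar_files VALUES (?)",
                    relative_paths)

    path = os.path.join(root, file)
    if (path not in previously_processed_files
            and path[:-3] not in previously_processed_files):
        ...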
Re: [Python-Dev] [RELEASED] Python 3.4.0 release candidate 1
Hi,

It would be nice to also give the link to the whole changelog in your emails and on the website:
http://docs.python.org/3.4/whatsnew/changelog.html

Congrats on your RC1 release :-) It's always hard to make developers stop adding "new minor" changes before the final version :-)

Victor

2014-02-11 8:43 GMT+01:00 Larry Hastings :
>
> On behalf of the Python development team, I'm delighted to announce
> the first release candidate of Python 3.4.
>
> This is a preview release, and its use is not recommended for
> production settings.
>
> Python 3.4 includes a range of improvements of the 3.x series, including
> hundreds of small improvements and bug fixes. Major new features and
> changes in the 3.4 release series include:
>
> * PEP 428, a "pathlib" module providing object-oriented filesystem paths
> * PEP 435, a standardized "enum" module
> * PEP 436, a build enhancement that will help generate introspection
>   information for builtins
> * PEP 442, improved semantics for object finalization
> * PEP 443, adding single-dispatch generic functions to the standard library
> * PEP 445, a new C API for implementing custom memory allocators
> * PEP 446, changing file descriptors to not be inherited by default
>   in subprocesses
> * PEP 450, a new "statistics" module
> * PEP 451, standardizing module metadata for Python's module import system
> * PEP 453, a bundled installer for the *pip* package manager
> * PEP 454, a new "tracemalloc" module for tracing Python memory allocations
> * PEP 456, a new hash algorithm for Python strings and binary data
> * PEP 3154, a new and improved protocol for pickled objects
> * PEP 3156, a new "asyncio" module, a new framework for asynchronous I/O
>
> Python 3.4 is now in "feature freeze", meaning that no new features will be
> added. The final release is projected for mid-March 2014.
>
> To download Python 3.4.0rc1 visit:
>
> http://www.python.org/download/releases/3.4.0/
>
> Please consider trying Python 3.4.0rc1 with your code and reporting any
> new issues you notice to:
>
> http://bugs.python.org/
>
> Enjoy!
>
> --
> Larry Hastings, Release Manager
> larry at hastings.org
> (on behalf of the entire python-dev team and 3.4's contributors)
> ___
> Python-Dev mailing list
> python-...@python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> https://mail.python.org/mailman/options/python-dev/victor.stinner%40gmail.com

-- https://mail.python.org/mailman/listinfo/python-list
Re: [Python-ideas] How the heck does async/await work in Python 3.5
See also Doug Hellmann's article on asyncio, from his series of "Python 3 Module of the Week" articles:
https://pymotw.com/3/asyncio/index.html

Victor

2016-02-23 22:25 GMT+01:00 Joao S. O. Bueno :
> Today I also stumbled on this helpful "essay" from Brett Cannon about
> the same subject
>
> http://www.snarky.ca/how-the-heck-does-async-await-work-in-python-3-5
>
> On 23 February 2016 at 18:05, Sven R. Kunze wrote:
>> On 20.02.2016 07:53, Christian Gollwitzer wrote:
>>
>> If you have difficulties with the overall concept, and if you are open to
>> discussions in another language, take a look at this video:
>>
>> https://channel9.msdn.com/Shows/C9-GoingNative/GoingNative-39-await-co-routines
>>
>> MS has added coroutine support with very similar syntax to VC++ recently,
>> and the developer tries to explain it to the "stackful" programmers.
>>
>> Because of this thread, I finally finished an older post collecting valuable
>> insights from last year's discussions regarding concurrency modules available
>> in Python: http://srkunze.blogspot.com/2016/02/concurrency-in-python.html It
>> appears to me that it would fit here well.
>>
>> @python-ideas
>> Back then, the old thread ("Concurrency Modules") was basically meant
>> to result in something useful. I hope the post covers the essence of the
>> discussion.
>> Some even suggested putting the table into the Python docs. I am unaware of
>> the formal procedure here but I would be glad if somebody could point me in
>> the right direction if the survey table is wanted in the docs.
>>
>> Best,
>> Sven
>>
>> ___
>> Python-ideas mailing list
>> python-id...@python.org
>> https://mail.python.org/mailman/listinfo/python-ideas
>> Code of Conduct: http://python.org/psf/codeofconduct/
> ___
> Python-ideas mailing list
> python-id...@python.org
> https://mail.python.org/mailman/listinfo/python-ideas
> Code of Conduct: http://python.org/psf/codeofconduct/

-- https://mail.python.org/mailman/listinfo/python-list
Re: [Python-Dev] Python 2.x and 3.x use survey, 2014 edition
2014-12-11 15:47 GMT+01:00 Giampaolo Rodola' :
> I still think the only *real* obstacle remains the lack of important
> packages such as twisted, gevent and pika which haven't been ported yet.

Twisted core works on Python 3, right now. Contribute to Twisted if you want to port more code... Or start something new with asyncio (with trollius, it works on Python 2 too).

The development branch of gevent supports Python 3, especially if you don't use monkey patching. Ask the developers to release a version, at least with "experimental" Python 3 support.

I don't know pika. I read "Pika Python AMQP Client Library". You may take a look at https://github.com/dzen/aioamqp if you would like to play with asyncio.

> With those ones ported switching to Python 3 *right now* is not only
> possible and relatively easy, but also convenient.

Victor

-- https://mail.python.org/mailman/listinfo/python-list