Re: unittest: assertRaises() with an instance instead of a type
Ben Finney wrote:
> Steven D'Aprano writes:
>> (By the way, I have to question the design of an exception with error
>> codes. That seems pretty poor design to me. Normally the exception *type*
>> acts as equivalent to an error code.)
>
> Have a look at Python's built-in OSError. The various errors from the
> operating system can only be distinguished by the numeric code the OS
> returns, so that's what to test on in one's unit tests.

The core devs are working to fix that:

$ python3.2 -c'open("does-not-exist")'
Traceback (most recent call last):
  File "<string>", line 1, in <module>
IOError: [Errno 2] No such file or directory: 'does-not-exist'

$ python3.3 -c'open("does-not-exist")'
Traceback (most recent call last):
  File "<string>", line 1, in <module>
FileNotFoundError: [Errno 2] No such file or directory: 'does-not-exist'

$ python3.2 -c'open("unwritable", "w")'
Traceback (most recent call last):
  File "<string>", line 1, in <module>
IOError: [Errno 13] Permission denied: 'unwritable'

$ python3.3 -c'open("unwritable", "w")'
Traceback (most recent call last):
  File "<string>", line 1, in <module>
PermissionError: [Errno 13] Permission denied: 'unwritable'

http://www.python.org/dev/peps/pep-3151/
--
http://mail.python.org/mailman/listinfo/python-list
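The PEP 3151 hierarchy shown above lets tests and callers catch a subclass directly instead of inspecting `errno`. A small sketch contrasting the two styles; the `kind_of_error_*` names are invented for illustration, and the path is assumed not to exist in the working directory:

```python
import errno

def kind_of_error_pre33(path):
    # Pre-3.3 style: one IOError/OSError type, distinguished by errno.
    try:
        open(path)
    except EnvironmentError as e:   # IOError and OSError merged in 3.3
        if e.errno == errno.ENOENT:
            return 'missing'
        if e.errno == errno.EACCES:
            return 'forbidden'
        raise

def kind_of_error_33(path):
    # 3.3+ style (PEP 3151): the exception *type* carries the meaning.
    try:
        open(path)
    except FileNotFoundError:
        return 'missing'
    except PermissionError:
        return 'forbidden'

assert kind_of_error_pre33('does-not-exist') == 'missing'
assert kind_of_error_33('does-not-exist') == 'missing'
```

Because `FileNotFoundError` subclasses `OSError`, the pre-3.3 style keeps working on 3.3+ unchanged.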
Re: unittest: assertRaises() with an instance instead of a type
Am 28.03.2012 20:07, schrieb Steven D'Aprano:
> First off, that is not Python code. "catch Exception" gives a syntax
> error.

Old C++ habits... :|

> Secondly, that is not the right way to do this unit test. You are
> testing two distinct things, so you should write it as two separate
> tests:
> [..code..]
> If foo does *not* raise an exception, the unittest framework will
> handle the failure for you. If it raises a different exception, the
> framework will also handle that too. Then write a second test to check
> the exception code:
> [...]
> Again, let the framework handle any unexpected cases.

Sorry, you got it wrong, it should be three tests:

1. Make sure foo() raises an exception.
2. Make sure foo() raises the right exception.
3. Make sure the errorcode in the exception is right.

Or maybe you should in between verify that the exception raised actually
contains an errorcode? And that the errorcode can be equality-compared
to the expected value? :>

Sorry, I disagree that these steps should be separated. It would blow up
the code required for testing, increasing the maintenance burden. Which
leads back to a solution that uses a utility function, like the one you
suggested or the one I was looking for initially.

> (By the way, I have to question the design of an exception with error
> codes. That seems pretty poor design to me. Normally the exception
> *type* acts as equivalent to an error code.)

True. Normally. I'm adapting to a legacy system though, similar to
OSError, and that system simply emits error codes; the easiest way to
handle them is by wrapping them.

Cheers!

Uli
--
http://mail.python.org/mailman/listinfo/python-list
Re: unittest: assertRaises() with an instance instead of a type
Ulrich Eckhardt wrote:
> True. Normally. I'm adapting to a legacy system though, similar to
> OSError, and that system simply emits error codes; the easiest way to
> handle them is by wrapping them.

If you have

    err = some_func()
    if err:
        raise MyException(err)

the effort to convert it to

    exc = lookup_exception(some_func())
    if exc:
        raise exc

is small. A fancy way is to use a decorator:

    # untested
    def code_to_exception(table):
        def deco(f):
            def g(*args, **kw):
                err = f(*args, **kw)
                exc = table[err]
                if exc is not None:
                    raise exc
            return g
        return deco

    class MyError(Exception):
        pass

    class HyperspaceBypassError(MyError):
        pass

    @code_to_exception({42: HyperspaceBypassError, 0: None})
    def some_func(...):
        # ...

--
http://mail.python.org/mailman/listinfo/python-list
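For comparison, here is a self-contained, runnable sketch of the same decorator idea (note that the factory has to return the inner decorator itself); `legacy_call` and the code table are invented stand-ins for a real legacy API:

```python
def code_to_exception(table):
    """Raise the exception class mapped to a function's return code."""
    def deco(f):
        def g(*args, **kw):
            err = f(*args, **kw)
            exc = table.get(err)   # unknown codes fall through silently here
            if exc is not None:
                raise exc
        return g
    return deco   # the factory hands back the decorator itself

class MyError(Exception):
    pass

class HyperspaceBypassError(MyError):
    pass

@code_to_exception({42: HyperspaceBypassError, 0: None})
def legacy_call(code):
    # Stand-in for a legacy API that reports failure via return codes.
    return code

legacy_call(0)   # code 0 maps to None: no exception raised
try:
    legacy_call(42)
except HyperspaceBypassError:
    pass             # code 42 was translated into an exception
```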
Re: unittest: assertRaises() with an instance instead of a type
Am 28.03.2012 20:26, schrieb Terry Reedy:
> On 3/28/2012 8:28 AM, Ulrich Eckhardt wrote:
>> with self.assertRaises(MyException(SOME_FOO_ERROR)):
>>     foo()
>
> I presume that if this worked the way you want, all attributes would
> have to match. The message part of builtin exceptions is allowed to
> change, so hard-coding an exact expected message makes tests fragile.
> This is a problem with doctest.

I would have assumed that comparing two exceptions leaves out messages
that are intended for the user, not as part of the API. However, my
expectations aren't met anyway, because ...

>> This of course requires the exception to be equality-comparable.
>
> Equality comparison is by id. So this code will not do what you want.
>
> >>> Exception('foo') == Exception('foo')
> False

Yikes! That was unexpected and completely changes my idea. Any clue
whether this is intentional? Is identity the fallback when no equality
is defined for two objects?

Thanks for your feedback!

Uli
--
http://mail.python.org/mailman/listinfo/python-list
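The fallback behavior discussed above is easy to verify: by default, equality on exception instances falls back to identity, even when the constructor arguments match:

```python
e1 = Exception('foo')
e2 = Exception('foo')

assert e1 != e2                    # distinct instances: unequal despite equal args
assert e1 == e1                    # an object always equals itself
assert (e1 == e2) == (e1 is e2)    # default __eq__ degenerates to 'is'

# The constructor arguments themselves ARE value-comparable:
assert e1.args == e2.args == ('foo',)
```

So a test that wants to compare exceptions by content can fall back to comparing `args` or specific attributes instead of the instances themselves.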
tabs/spaces (was: Re: unittest: assertRaises() with an instance instead of a type)
Am 28.03.2012 20:26, schrieb Terry Reedy:
> On 3/28/2012 8:28 AM, Ulrich Eckhardt wrote:
> [...]
>> # call testee and verify results
>> try:
>>     ...call function here...
>> except exception_type as e:
>>     if exception is not None:
>>         self.assertEqual(e, exception)
>
> Did you use tabs? They do not get preserved indefinitely, so they are
> bad for posting.

I didn't consciously use tabs; actually I would rather avoid them. That
said, my posting looks correctly indented in my "sent" folder and also
in the copy received from my newsserver. What could also have an
influence is line endings. I'm using Thunderbird on win32 here, acting
as news client to comp.lang.python. Or maybe it's your software (or
maybe some software in between) that fails to preserve formatting.

*shrug*

Uli
--
http://mail.python.org/mailman/listinfo/python-list
Re: errors building python 2.7.3
JFI: reported as

http://bugs.python.org/issue14437
http://bugs.python.org/issue14438

--
Regards,
Alex
--
http://mail.python.org/mailman/listinfo/python-list
Re: errors building python 2.7.3
On 28.03.2012 18:42, David Robinow wrote: > On Wed, Mar 28, 2012 at 7:50 AM, Alexey Luchko wrote: >> I've tried to build Python 2.7.3rc2 on cygwin and got the following errors: >> >> $ CFLAGS=-I/usr/include/ncursesw/ CPPFLAGS=-I/usr/include/ncursesw/ >> ./configure > I haven't tried 2.7.3 yet, so I'll describe my experience with 2.7.2 > I use /usr/include/ncurses rather than /usr/include/ncursesw > I don't remember what the difference is but ncurses seems to work. I've tried ncurses too. It does not matter. -- Alex -- http://mail.python.org/mailman/listinfo/python-list
Re: tabs/spaces (was: Re: unittest: assertRaises() with an instance instead of a type)
In article <0ved49-hie@satorlaser.homedns.org>, Ulrich Eckhardt wrote: > I didn't consciously use tabs, actually I would rather avoid them. That > said, my posting looks correctly indented in my "sent" folder and also > in the copy received from my newsserver. What could also have an > influence is line endings. I'm using Thunderbird on win32 here, acting > as news client to comp.lang.python. Or maybe it's your software (or > maybe some software in between) that fails to preserve formatting. > > *shrug* Oh noes! The line eater bug is back! -- http://mail.python.org/mailman/listinfo/python-list
Re: Number of languages known [was Re: Python is readable] - somewhat OT
On Wed, Mar 28, 2012 at 9:33 PM, Chris Angelico wrote: > On Thu, Mar 29, 2012 at 11:59 AM, Rodrick Brown > wrote: >> The best skill any developer can have is the ability to pickup languages >> very quickly and know what tools work well for which task. > > Definitely. Not just languages but all tools. The larger your toolkit > and the better you know it, the more easily you'll be able to grasp > the tool you need. The thing that bothers me is that people spend time and mental energy on a wide variety of syntax when the semantics are ~90% identical in most cases (up to organization). We would be better off if all the time that was spent on learning syntax, memorizing library organization and becoming proficient with new tools was spent learning the mathematics, logic and engineering sciences. Those solve problems, languages are just representations. Unfortunately, programming languages seem to have become a way to differentiate yourself and establish sub-cultural membership. All the cool kids are using XYZ, people who use LMN are dorks! Who cares about sharing or compatibility! Human nature is depressingly self-defeating. -- http://mail.python.org/mailman/listinfo/python-list
Re: Python is readable
In article , Nathan Rice wrote: >> >> http://www.joelonsoftware.com/articles/fog18.html > >I read that article a long time ago, it was bullshit then, it is >bullshit now. The only thing he gets right is that the Shannon >information of a uniquely specified program is proportional to the >code that would be required to generate it. Never mind that if a Thank you for drawing my attention to that article. It attacks the humbug software architects. Are you one of them? I really liked that article. >program meets a specification, you shouldn't care about any of the >values used for unspecified parts of the program. If you care about >the values, they should be specified. So, if Joel had said that the >program was uniquely specified, or that none of the things that >weren't specified require values in the programming language, he might >have been kinda, sorta right. Of course, nobody cares enough to >specify every last bit of minutiae in a program, and specifications >change, so it is pretty much impossible to imagine either case ever >actually occurring. I wonder if you're not talking about a different article. Groetjes Albert -- -- Albert van der Horst, UTRECHT,THE NETHERLANDS Economic growth -- being exponential -- ultimately falters. albert@spe&ar&c.xs4all.nl &=n http://home.hccnet.nl/a.w.m.van.der.horst -- http://mail.python.org/mailman/listinfo/python-list
Re: Number of languages known [was Re: Python is readable] - somewhat OT
On Fri, Mar 30, 2012 at 12:44 AM, Nathan Rice wrote: > We would be better off if all the time that was spent on learning > syntax, memorizing library organization and becoming proficient with > new tools was spent learning the mathematics, logic and engineering > sciences. Those solve problems, languages are just representations. Different languages are good at different things. REXX is an efficient text parser and command executor. Pike allows live updates of running code. Python promotes rapid development and simplicity. PHP makes it easy to add small amounts of scripting to otherwise-static HTML pages. C gives you all the power of assembly language with all the readability of... assembly language. SQL describes a database request. You can't merge all of them without making a language that's suboptimal at most of those tasks - probably, one that's woeful at all of them. I mention SQL because, even if you were to unify all programming languages, you'd still need other non-application languages to get the job done. Keep the diversity and let each language focus on what it's best at. ChrisA who has lots and lots of hammers, so every problem looks like... lots and lots of nails. -- http://mail.python.org/mailman/listinfo/python-list
Re: "convert" string to bytes without changing data (encoding)
On 2012-03-28 23:37, Terry Reedy wrote:
> 2. Decode as if the text were latin-1 and ignore the non-ascii 'latin-1'
> chars. When done, encode back to 'latin-1' and the non-ascii chars will
> be as they originally were.

... actually, in the beginning of my quest, I ran into a decoding
exception trying to read data as "latin1" (which was more or less what I
had expected anyway, because byte values between 128 and 160 are not
defined there). Obviously, I must have misinterpreted something there; I
just ran a little test:

    l = [i for i in range(256)]
    b = bytes(l)
    s = b.decode('latin1')
    b = s.encode('latin1')
    s = b.decode('latin1')
    for c in s:
        print(hex(ord(c)), end=' ')
        if (ord(c)+1) % 16 == 0:
            print("")
    print()

... and got all the original bytes back. So it looks like I tried to
solve a problem that did not exist to start with (the problems I ran
into then were pretty real, though ;-)

> 3. Decode using encoding = 'ascii', errors='surrogate_escape'. This
> reversibly encodes the unknown non-ascii chars as 'illegal' non-chars
> (using the surrogate-pair second-half code units). This is probably the
> safest in that invalid operations on the non-chars should raise an
> exception. Re-encoding with the same setting will reproduce the original
> hi-bit chars. The main danger is passing the illegal strings out of your
> local sandbox.

Unfortunately, this is a very well-kept secret unless you know that
something with that name exists. The options currently mentioned in the
documentation are not really helpful, because the non-decodable bytes
will be lost. With some trying, I got it to work, too (the option is
named "surrogateescape" without the "_", and in Python 3.1 it exists,
but not as a keyword argument:
"s = b.decode('utf-8', 'surrogateescape')" ...)

Thank you very much for your constructive advice!

Regards,
Peter
--
http://mail.python.org/mailman/listinfo/python-list
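A compact round-trip check of both approaches discussed above (Python 3.1+; note the handler name is `surrogateescape`, with no underscore):

```python
data = bytes(range(256))

# Approach 2: latin-1 maps all 256 byte values one-to-one to the first
# 256 code points, so the round trip is lossless.
text = data.decode('latin-1')
assert text.encode('latin-1') == data

# Approach 3: ascii + surrogateescape smuggles the high bytes through
# as lone surrogate code points and restores them on encode.
text = data.decode('ascii', 'surrogateescape')
assert text.encode('ascii', 'surrogateescape') == data
```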
Re: unittest: assertRaises() with an instance instead of a type
On 3/29/2012 3:28 AM, Ulrich Eckhardt wrote:
>> Equality comparison is by id. So this code will not do what you want.
>>
>> >>> Exception('foo') == Exception('foo')
>> False
>
> Yikes! That was unexpected and completely changes my idea. Any clue
> whether this is intentional? Is identity the fallback when no equality
> is defined for two objects?

Yes. The Library Reference 4.3. Comparisons (for built-in classes) puts
it this way: "Objects of different types, except different numeric
types, never compare equal. Furthermore, some types (for example,
function objects) support only a degenerate notion of comparison where
any two objects of that type are unequal." In other words, 'a==b' is the
same as 'a is b'. That is also the default for user-defined classes, but
I am not sure where that is documented, if at all.

--
Terry Jan Reedy
--
http://mail.python.org/mailman/listinfo/python-list
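A class that wants value-based comparison has to opt in by defining `__eq__` itself; a minimal sketch for an error-code-carrying exception (the class name and codes are illustrative, not from the thread):

```python
class CodedError(Exception):
    """Exception wrapping a legacy numeric error code."""
    def __init__(self, errorcode):
        super().__init__(errorcode)
        self.errorcode = errorcode

    def __eq__(self, other):
        # Compare by type and code, not by identity.
        return (type(self) is type(other)
                and self.errorcode == other.errorcode)

    def __hash__(self):
        # Defining __eq__ suppresses the inherited __hash__, so restore one.
        return hash((type(self), self.errorcode))

assert CodedError(2) == CodedError(2)    # value equality now works
assert CodedError(2) != CodedError(13)
```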
Re: tabs/spaces
On 03/29/2012 03:18 AM, Ulrich Eckhardt wrote:
> Am 28.03.2012 20:26, schrieb Terry Reedy:
>> On 3/28/2012 8:28 AM, Ulrich Eckhardt wrote:
>> [...]
>>> # call testee and verify results
>>> try:
>>>     ...call function here...
>>> except exception_type as e:
>>>     if exception is not None:
>>>         self.assertEqual(e, exception)
>>
>> Did you use tabs? They do not get preserved indefinitely, so they are
>> bad for posting.
>
> I didn't consciously use tabs, actually I would rather avoid them. That
> said, my posting looks correctly indented in my "sent" folder and also
> in the copy received from my newsserver. What could also have an
> influence is line endings. I'm using Thunderbird on win32 here, acting
> as news client to comp.lang.python. Or maybe it's your software (or
> maybe some software in between) that fails to preserve formatting.
>
> *shrug*
>
> Uli

More likely, you failed to tell Thunderbird to send it as text. HTML
messages will read differently on HTML-aware readers than on the
standard text readers. They also take maybe triple the space and
bandwidth.

In Thunderbird 3.1.19: in Edit->Preferences, Composition->General,
Configure Text Format Behavior -> Send Options. In that dialog, under
Text Format, choose "Convert the message to plain text". Then in the tab
called "Plain text domains", add python.org.

--
DaveA
--
http://mail.python.org/mailman/listinfo/python-list
Re: tabs/spaces
On 3/29/2012 3:18 AM, Ulrich Eckhardt wrote:
> Am 28.03.2012 20:26, schrieb Terry Reedy:
>> On 3/28/2012 8:28 AM, Ulrich Eckhardt wrote:
>> [...]
>>> # call testee and verify results
>>> try:
>>>     ...call function here...
>>> except exception_type as e:
>>>     if exception is not None:
>>>         self.assertEqual(e, exception)
>>
>> Did you use tabs? They do not get preserved indefinitely, so they are
>> bad for posting.
>
> I didn't consciously use tabs, actually I would rather avoid them. That
> said, my posting looks correctly indented in my "sent" folder and also
> in the copy received from my newsserver. What could also have an
> influence is line endings. I'm using Thunderbird on win32 here, acting
> as news client to comp.lang.python.

I am using Thunderbird, win64, as news client for gmane. The post looked
fine as originally received. The indents only disappeared when I hit
reply and the >s were added. That does not happen, in general, for other
messages. Unfortunately I cannot go back and read that message as
received, because the new version of Tbird is misbehaving and deleting
read messages on close even though I asked to keep them 6 months. I will
look immediately when I next see indents disappearing.

--
Terry Jan Reedy
--
http://mail.python.org/mailman/listinfo/python-list
Re: "convert" string to bytes without changing data (encoding)
Steven D'Aprano wrote:
> Your reaction is to make an equally unjustified estimate of Evan's
> mindset, namely that he is not just wrong about you, but *deliberately
> and maliciously* lying about you in the full knowledge that he is wrong.

No, Evan in his own words admitted that his post was meant to be harsh,
"a bit harsher than it deserves", showing his malicious intent. He made
accusations that were neither supported by anything I've said in this
thread nor by the code I actually write. His accusations about me were
completely made up; he was not telling the truth and had no reasonable
basis to believe he was telling the truth. He was maliciously lying and
I'm completely justified in saying so.

Just to make it clear to all you zealots: I've not once advocated
writing any sort of "risky code" in this thread. I have not once
advocated writing any style of code in this thread. Just because I
refuse to drink the "it's impossible to represent strings as a series of
bytes" kool-aid doesn't mean that I'm a heretic that must oppose
everything you believe in.

Ross Ridge

--
 l/  //   Ross Ridge -- The Great HTMU
[oo][oo]  rri...@csclub.uwaterloo.ca
-()-/()/  http://www.csclub.uwaterloo.ca/~rridge/
 db  //
--
http://mail.python.org/mailman/listinfo/python-list
Re: unittest: assertRaises() with an instance instead of a type
Steven D'Aprano wrote:
> On Wed, 28 Mar 2012 14:28:08 +0200, Ulrich Eckhardt wrote:
>> Hi!
>>
>> I'm currently writing some tests for the error handling of some code.
>> In this scenario, I must make sure that both the correct exception is
>> raised and that the contained error code is correct:
>>
>>     try:
>>         foo()
>>         self.fail('exception not raised')
>>     catch MyException as e:
>>         self.assertEqual(e.errorcode, SOME_FOO_ERROR)
>>     catch Exception:
>>         self.fail('unexpected exception raised')
>
> Secondly, that is not the right way to do this unit test. You are
> testing two distinct things, so you should write it as two separate
> tests:

I have to disagree -- I do not see the advantage of writing a second
test that *will* fail if the first test fails, as opposed to bundling
both tests together and having one failure.

~Ethan~
--
http://mail.python.org/mailman/listinfo/python-list
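Since Python 2.7/3.1, `assertRaises` used as a context manager supports exactly this bundling: one test checks both the exception type and the error code. A sketch with placeholder names (`MyException`, `foo`, and the code are stand-ins for the OP's real definitions):

```python
import unittest

SOME_FOO_ERROR = 42

class MyException(Exception):
    def __init__(self, errorcode):
        super().__init__(errorcode)
        self.errorcode = errorcode

def foo():
    # Stand-in for the function under test.
    raise MyException(SOME_FOO_ERROR)

class FooTest(unittest.TestCase):
    def test_foo_error(self):
        # Fails if nothing is raised, or if the wrong type is raised...
        with self.assertRaises(MyException) as cm:
            foo()
        # ...and the caught exception is available for further checks.
        self.assertEqual(cm.exception.errorcode, SOME_FOO_ERROR)
```

Run it with `python -m unittest` as usual; unexpected exception types still propagate and are reported as errors by the framework.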
Re: question about file handling with "with"
On Wed, 28 Mar 2012 11:31:21 +0200, Jabba Laci wrote: > Is the following function correct? Is the input file closed in order? > > def read_data_file(self): > with open(self.data_file) as f: > return json.loads(f.read()) Yes. The whole point of being able to use a file as a context manager is so that the file will be closed immediately upon leaving the with statement, whether by falling off the end, "return", an exception, or whatever. IOW, it's like calling .close() immediately after the "with" block, only more so, i.e. it will also handle cases that an explicit .close() misses. -- http://mail.python.org/mailman/listinfo/python-list
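The closing behavior described above is directly observable via the file object's `closed` flag; a small sketch (the temp-file setup and the `handles` list exist only so we can inspect the inner file object afterwards):

```python
import json
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), 'data.json')
with open(path, 'w') as f:
    json.dump({'answer': 42}, f)
assert f.closed            # the with block closed it on normal exit

handles = []

def read_data_file(path):
    with open(path) as f:
        handles.append(f)              # keep a reference for inspection
        return json.loads(f.read())    # returning still triggers the close

assert read_data_file(path) == {'answer': 42}
assert handles[0].closed               # closed despite the early return
```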
Re: Re: Re: Re: "convert" string to bytes without changing data (encoding)
On 01/-10/-28163 01:59 PM, Ross Ridge wrote:
> Evan Driscoll wrote:
>> People like you -- who write to assumptions which are not even
>> remotely guaranteed by the spec -- are part of the reason software
>> sucks.
>> ...
>> This email is a bit harsher than it deserves -- but I feel not by
>> much.
>
> I don't see how you could feel the least bit justified. Well-meaning,
> if unhelpful, lies about the nature of Python strings in order to try
> to convince someone to follow what you think are good programming
> practices is one thing. Maliciously lying about someone else's code
> that you've never seen is another thing entirely.

I'm not even talking about code that you or the OP has written. I'm
talking about your suggestion that I can in fact say what the internal
byte string representation of strings is in any given build of Python 3.
Aside from the questionable truth of this assertion (there's no
guarantee that an implementation uses one consistent encoding or data
structure representation consistently), that's of no consequence,
because you can't depend on what the representation is. So why even
bring it up?

Also irrelevant is:

> In practice the number of ways that CPython (the only Python 3
> implementation) represents strings is much more limited. Pretending
> otherwise really isn't helpful.

If you can't depend on CPython's implementation (and, I would argue,
your code is broken if you do), then it *is* helpful. Saying that "you
can just look at what CPython does" is what is unhelpful.

That said, looking again, I did misread your post that I sent that harsh
reply to; I was looking at it perhaps a bit too much through the lens of
the CPython comment above, and interpreting it as "I can say what the
internal representation is in CPython, so just give me that", and
launched into my spiel. If that's not what was intended, I retract my
statement.
As long as everyone is clear on the fact that Python 3 implementations can use whatever encoding and data structures they want, perhaps even different encodings or data structures for equal strings, and that as a consequence saying "what's the internal representation of this string" is a meaningless question as far as Python itself is concerned, I'm happy. Evan -- http://mail.python.org/mailman/listinfo/python-list
Re: Number of languages known [was Re: Python is readable] - somewhat OT
On Thu, Mar 29, 2012 at 10:03 AM, Chris Angelico wrote: > You can't merge all of them without making a language that's > suboptimal at most of those tasks - probably, one that's woeful at all > of them. I mention SQL because, even if you were to unify all > programming languages, you'd still need other non-application > languages to get the job done. Not really. You can turn SQL (or something equivalent) into a subset of your programming language, like C# does with LINQ, or like Scheme does with macros. The Scheme approach generalizes to programming languages in general with even some fairly alien semantics (e.g. you can do prolog using macros and first-class continuations). In fact, for a more difficult target, I even recently saw an implementation of Python in Common-Lisp that uses reader macros to compile a subset of Python to equivalent Common-Lisp code: http://common-lisp.net/project/clpython/ On the other hand, even similar languages are really hard to run in the same VM: imagine the hoops you'd have to jump through to get libraries written in Python 2 and 3 to work together. For a more concrete example, take the attempt to make elisp and guile work together in guilemacs: http://www.red-bean.com/guile/notes/emacs-lisp.html But this has nothing to do with being "suboptimal at most tasks". It's easy to make a language that can do everything C can do, and also everything that Haskell can do. I can write an implementation of this programming language in one line of bash[*]. The easy way is to make those features mutually exclusive. We don't have to sacrifice anything by including more features until we want them to work together. With that in mind, the interesting languages to "merge" aren't things like SQL or regular expressions -- these are so easy to make work with programming languages, that we do it all the time already (via string manipulation, but first-class syntax would also be easily possible). 
The hard problems are when trying to merge in the semantics of languages that only "make sense" because they have drastically different expectations of the world. The example that comes to mind is Haskell, which relies incredibly strongly on the lack of side effects. How do you merge Haskell and Python? Well, you can't. As soon as you add side-effects, you can no longer rely on the weak equivalence of things executed eagerly versus lazily, and the semantics of Haskell go kaput. So the only actual effort (that I am aware of) to implement side-effects with Haskell *deliberately* makes mutability and laziness mutually exclusive. Anything else is impossible. The effort mentioned here is called Disciple, and the relevant thesis is very fun reading, check it out: http://www.cse.unsw.edu.au/~benl/papers/thesis/lippmeier-impure-world.pdf

I guess what I really want to say is that the world looks, to me, to be more optimistic than so many people think it is. If we wanted to, we could absolutely take the best features from a bunch of things. This is what C++ does, this is what Scheme does, this is what D does. They sometimes do it in different ways, they have varying tradeoffs, but this isn't a hard problem except when it is, and the examples you mentioned are actually the easy cases.

We can merge Python and C, while keeping roughly the power of both: it's called Cython. We can merge Python and PHP, in that PHP adds nothing incompatible with Python technically (it'd be a lot of work, and there would be many tears shed because it's insane) -- but Python Server Pages probably add the feature you want. We could merge SQL and Python; arguably we already do, via e.g. SQLAlchemy's query API (etc.) or DBAPI2's string API. These can all become subsets of a language that interoperate well with the rest of the language with no problems. 
These are non-issues: the reasons for not doing so are not technical, they are political or sociological (e.g., "bloat the language", "there should be one obvious way to do it", "PHP's mixing of business logic with presentation logic is bad", etc.) There _are_ times when this is technical, and there are specific areas of this that have technical difficulties, but... that's different, and interesting, and being actively researched, and not really impossible either. I don't know. This is maybe a bit too rant-y and disorganized; if so I apologize. I've been rethinking a lot of my views on programming languages lately. :) I hope at least the links help make this interesting to someone. -- Devin [*] A "language" is really just a set of programs that compile. If we assume that the set of haskell and C programs are disjoint, then we can create a new language that combines both of them, by trying the C (or Haskell) compiler first, and then running the other if that should fail. This is really an argument from the absurd, though. I just said it 'cause it sounds awesome. -- http://mail.python.org/mailman/listinfo/python-list
Re: "convert" string to bytes without changing data (encoding)
On 3/29/2012 11:30 AM, Ross Ridge wrote:
> No, Evan in his own words admitted that his post was meant to be harsh,

I agree that he should have restrained and censored his writing.

> Just because I refuse to drink the "it's impossible to represent
> strings as a series of bytes" kool-aid

I do not believe *anyone* has made that claim. Is this meant to be a
wild exaggeration? As wild as Evan's?

In my first post on this thread, I made three truthful claims.

1. A 3.x text string is logically a sequence of unicode 'characters'
(codepoints).

2. The Python language definition does not require that a string be
bytes or become bytes unless and until it is explicitly encoded.

3. The intentionally hidden byte implementation of strings on byte
machines is version and system dependent. The bytes used for a
particular character are (in 3.3) context dependent.

As it turns out, the OP had mistakenly assumed that the hidden byte
implementation of 3.3 strings was both well-defined and something
(utf-8) that it is not and (almost certainly) never will be. Guido and
most other devs strongly want string indexing (and hence slice endpoint
finding) to be O(1). So all of the above is moot as far as the OP's
problem is concerned. I already gave him the three standard solutions.

--
Terry Jan Reedy
--
http://mail.python.org/mailman/listinfo/python-list
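Claim 3 is observable from the outside in CPython 3.3+ (PEP 393): per-character storage depends on the widest character in the string. This is an implementation detail of CPython, not a language guarantee, so the exact sizes below are not portable:

```python
import sys

ascii_s = 'a' * 100
bmp_s = '\u20ac' * 100        # EURO SIGN: 2 bytes/char in CPython 3.3+
astral_s = '\U0001F600' * 100  # emoji: 4 bytes/char

# Three strings of equal length with different memory footprints: the
# byte layout is hidden and context dependent, so "the" byte
# representation of a string is ill-defined at the language level.
assert len(ascii_s) == len(bmp_s) == len(astral_s) == 100
assert sys.getsizeof(ascii_s) < sys.getsizeof(bmp_s) < sys.getsizeof(astral_s)
```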
RE: RE: Advise of programming one of my first programs
From: Anatoli Hristov [mailto:toli...@gmail.com]
Sent: Wednesday, March 28, 2012 5:36 PM
To: Prasad, Ramit
Cc: python-list@python.org
Subject: Re: RE: Advise of programming one of my first programs

>>>> Um, at least by my understanding, the use of Pickle is also
>>>> dangerous if you are not completely sure what is being passed in:
>>>
>>> Oh goodness yes. pickle is exactly as unsafe as eval is. Try running
>>> this code:
>>>
>>> from pickle import loads
>>> loads("c__builtin__\neval\n(c__builtin__\nraw_input\n(S'py>'\ntRtR.")
>>
>> It might be as dangerous, but which is more likely to cause problems
>> in real world scenarios?
>
> Guys this is really something that is not that important at this time
> for me

“My Eyes! The goggles do nothing!”

Ramit

Ramit Prasad | JPMorgan Chase Investment Bank | Currencies Technology
712 Main Street | Houston, TX 77002
work phone: 713 - 216 - 5423

--
This email is confidential and subject to important disclaimers and
conditions including on offers for the purchase or sale of securities,
accuracy and completeness of information, viruses, confidentiality,
legal privilege, and legal entity disclaimers, available at
http://www.jpmorgan.com/pages/disclosures/email.
--
http://mail.python.org/mailman/listinfo/python-list
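The reason pickle is as unsafe as eval, as quoted above, is that unpickling can invoke arbitrary callables via the `__reduce__` protocol. A harmless, self-contained demonstration of the mechanism (using `print` where an attacker would substitute something destructive):

```python
import contextlib
import io
import pickle

class Innocuous:
    def __reduce__(self):
        # An attacker's payload would return (os.system, ('...',)) here.
        return (print, ("unpickling ran a call of the payload's choosing",))

payload = pickle.dumps(Innocuous())

# pickle.loads() executes the call baked into the payload.
buf = io.StringIO()
with contextlib.redirect_stdout(buf):
    pickle.loads(payload)
assert "payload's choosing" in buf.getvalue()
```

This is why the pickle documentation warns to never unpickle data from an untrusted source; for untrusted input, a data-only format such as JSON is the safer choice.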
RE: "convert" string to bytes without changing data (encoding)
>>> Technically, ASCII goes up to 256 but they are not A-z letters.
>>
>> Technically, ASCII is 7-bit, so it goes up to 127.
>
> No, ASCII only defines 0-127. Values >= 128 are not ASCII.
>
> From https://en.wikipedia.org/wiki/ASCII:
>
> ASCII includes definitions for 128 characters: 33 are non-printing
> control characters (now mostly obsolete) that affect how text and
> space is processed and 95 printable characters, including the space
> (which is considered an invisible graphic).

Doh! I was mistaking extended ASCII for ASCII. Thanks for the
correction.

Ramit

Ramit Prasad | JPMorgan Chase Investment Bank | Currencies Technology
712 Main Street | Houston, TX 77002
work phone: 713 - 216 - 5423

> -----Original Message-----
> From: python-list-bounces+ramit.prasad=jpmorgan@python.org
> [mailto:python-list-bounces+ramit.prasad=jpmorgan@python.org] On
> Behalf Of MRAB
> Sent: Wednesday, March 28, 2012 2:50 PM
> To: python-list@python.org
> Subject: Re: "convert" string to bytes without changing data (encoding)
>
> On 28/03/2012 20:02, Prasad, Ramit wrote:
>>>> The right way to convert bytes to strings, and vice versa, is via
>>>> encoding and decoding operations.
>>>
>>> If you want to dictate to the original poster the correct way to do
>>> things then you don't need to do anything more than that. You don't
>>> need to pretend like Chris Angelico that there isn't a direct mapping
>>> from his Python 3 implementation's internal representation of strings
>>> to bytes in order to label what he's asking for as being "silly".
>>
>> It might be technically possible to recreate the internal
>> implementation, or get the byte data. That does not mean it will make
>> any sense or be understood in a meaningful manner. I think Ian
>> summarized it very well:
>>
>>> You can't generally just "deal with the ascii portions" without
>>> knowing something about the encoding. Say you encounter a byte
>>> greater than 127. Is it a single non-ASCII character, or is it the
>>> leading byte of a multi-byte character? If the next character is less
>>> than 127, is it an ASCII character, or a continuation of the previous
>>> character? For UTF-8 you could safely assume ASCII, but without
>>> knowing the encoding, there is no way to be sure. If you just assume
>>> it's ASCII and manipulate it as such, you could be messing up
>>> non-ASCII characters.
--
http://mail.python.org/mailman/listinfo/python-list
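The 7-bit limit discussed above is directly visible in Python's strict `ascii` codec: all 128 defined code points round-trip, and byte value 128 (0x80) is rejected:

```python
# All 128 ASCII code points decode fine...
assert bytes(range(128)).decode('ascii') == ''.join(chr(i) for i in range(128))

# ...but byte value 0x80 is outside ASCII and raises an error.
try:
    b'\x80'.decode('ascii')
    decoded = True
except UnicodeDecodeError:
    decoded = False
assert not decoded
```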
Re: Number of languages known [was Re: Python is readable] - somewhat OT
On Thu, Mar 29, 2012 at 10:03 AM, Chris Angelico wrote: > On Fri, Mar 30, 2012 at 12:44 AM, Nathan Rice > wrote: >> We would be better off if all the time that was spent on learning >> syntax, memorizing library organization and becoming proficient with >> new tools was spent learning the mathematics, logic and engineering >> sciences. Those solve problems, languages are just representations. > > Different languages are good at different things. REXX is an efficient > text parser and command executor. Pike allows live updates of running > code. Python promotes rapid development and simplicity. PHP makes it > easy to add small amounts of scripting to otherwise-static HTML pages. > C gives you all the power of assembly language with all the > readability of... assembly language. SQL describes a database request. Here's a thought experiment. Imagine that you have a project tree on your file system which includes files written in many different programming languages. Imagine that the files can be assumed to be contiguous for our purposes, so you could view all the files in the project as one long chunk of data. The directory and file names could be interpreted as statements in this data, analogous to "in the context of somedirectory" or "in the context of somefile with sometype". Any project configuration files could be viewed as declarative statements about contexts, such as "in xyz context, ignore those" or "in abc context, any that is actually a this". Imagine the compiler or interpreter is actually part of your program (which is reasonable since it doesn't do anything by itself). Imagine the build management tool is also part of your program in pretty much the same manner. Imagine that your program actually generates another program that will generate the program the machine runs. I hope you can follow me here, and further I hope you can see that this is a completely valid description of what is actually going on (from a different perspective). 
In the context of the above thought experiment, it should be clear that we currently have something that is a structural analog of a single programming metalanguage (or rather, one per computer architecture), with many domain specific languages constructed above that to simplify tasks in various contexts. The model I previously proposed is not fantasy, it exists, just not in a form usable by human beings. Are machine instructions the richest possible metalanguage? I really doubt it. Let's try another thought experiment... Imagine that instead of having machine instructions as the common metalanguage, we pushed the point of abstraction closer to something programmers can reasonably work with: abstract syntax trees. Imagine all programming languages share a common abstract syntax tree format, with nodes generated using a small set of human intelligible semantic primes. Then, a domain specific language is basically a context with a set of logical implications. By associating a branch of the tree to one (or the union of several) context, you provide a transformation path to machine instructions via logical implication. If implications of a union context for the nodes in the branch are not compatible, this manifests elegantly in the form of a logical contradiction. What does pushing the abstraction point that far up provide? For one, you can now reason across language boundaries. A compiler can tell me if my prolog code and my python code will behave properly together. Another benefit is that you make explicit the fact that your parser, interpreter, build tools, etc are actually part of your program, from the perspective that your program is actually another program that generates programs in machine instructions. By unifying your build chain, it makes deductive inference spanning steps and tools possible, and eliminates some needless repetition. 
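Python's standard library already exposes a per-language version of this idea: the `ast` module turns source text into a manipulable syntax tree. A single-language illustration, not the unified cross-language tree imagined above:

```python
import ast

# Parse one statement into Python's own abstract syntax tree.
tree = ast.parse("total = price * quantity")

assign = tree.body[0]               # the Assign node for the statement
print(type(assign).__name__)        # Assign
print(type(assign.value).__name__)  # BinOp
print(ast.dump(assign.value.op))    # Mult()
```

The nesting mirrors the structure of the code rather than its spelling, which is the property the "common tree format" argument leans on.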
This also greatly simplifies code reuse, since you only need to generate a syntax tree of the proper format and associate the correct context to it. It also simplifies learning languages, since people only need to understand the semantic primes in order to read anything. Of course, this describes Lisp to some degree, so I still need to provide some answers. What is wrong with Lisp? I would say that the base syntax being horrible is probably the biggest issue. Beyond that, transformations on lists of data are natural in Lisp, but graph transformations are not, making some things awkward. Additionally, because Lisp tries to nudge you towards programming in a functional style, it can be un-intuitive to learn. Programming is knowledge representation, and state is a natural concept that many people desire to model, so making it a second class citizen is a mistake. If I were to re-imagine Lisp for this purpose, I would embrace state and an explicit notion of temporal order. Rather than pretending it didn't exist, I would focus on logical and mathematical machinery necessary to allow powerful deductive
Re: "convert" string to bytes without changing data (encoding)
Ross Ridge wrote: > Just because I refuse to drink the > "it's impossible to represent strings as a series of bytes" kool-aid Terry Reedy wrote: >I do not believe *anyone* has made that claim. Is this meant to be a >wild exaggeration? As wild as Evan's? Sorry, it would've been more accurate to label the flavour of kool-aid Chris Angelico was trying to push as "it's impossible ... without encoding": What is a string? It's not a series of bytes. You can't convert it without encoding those characters into bytes in some way. >In my first post on this thread, I made three truthful claims. I'm not objecting to every post made in this thread. If your post had been made before the original poster had figured it out on his own, I would've hoped he would have found it much more convincing than what I quoted above. Ross Ridge -- l/ // Ross Ridge -- The Great HTMU [oo][oo] rri...@csclub.uwaterloo.ca -()-/()/ http://www.csclub.uwaterloo.ca/~rridge/ db // -- http://mail.python.org/mailman/listinfo/python-list
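For the record, the disputed claim is mechanical to demonstrate: in Python 3, going from str to bytes always passes through some chosen encoding, and different choices produce different byte sequences (a small sketch):

```python
s = "naïve"

# str -> bytes requires picking an encoding; there is no encoding-free path.
utf8_bytes = s.encode("utf-8")
utf16_bytes = s.encode("utf-16-le")

print(utf8_bytes)    # b'na\xc3\xafve'
print(utf16_bytes)   # b'n\x00a\x00\xef\x00v\x00e\x00'
print(utf8_bytes.decode("utf-8") == s)   # True
```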
Re: errors building python 2.7.3
On Thu, Mar 29, 2012 at 6:55 AM, Alexey Luchko wrote: > On 28.03.2012 18:42, David Robinow wrote: >> On Wed, Mar 28, 2012 at 7:50 AM, Alexey Luchko wrote: >>> I've tried to build Python 2.7.3rc2 on cygwin and got the following >>> errors: >>> >>> $ CFLAGS=-I/usr/include/ncursesw/ CPPFLAGS=-I/usr/include/ncursesw/ >>> ./configure >> I haven't tried 2.7.3 yet, so I'll describe my experience with 2.7.2 >> I use /usr/include/ncurses rather than /usr/include/ncursesw >> I don't remember what the difference is but ncurses seems to work. > > I've tried ncurses too. It does not matter. Have you included the patch to Include/py_curses.h ? If you don't know what that is, download the cygwin src package for Python-2.6 and look at the patches. Not all of them are still necessary for 2.7 but some are. -- http://mail.python.org/mailman/listinfo/python-list
Re: Number of languages known [was Re: Python is readable] - somewhat OT
On 03/29/12 12:48, Nathan Rice wrote: Of course, this describes Lisp to some degree, so I still need to provide some answers. What is wrong with Lisp? I would say that the base syntax being horrible is probably the biggest issue. Do you mean something like: ((so (describes Lisp (to degree some) (of course)) still-need (provide I some-answers)) (is wrong what (with Lisp)) (would-say I ((is (base-syntax being-horrible) (probably-biggest issue) nah...can't fathom what's wrong with that... «grins, ducks, and runs» -tkc -- http://mail.python.org/mailman/listinfo/python-list
Re: Number of languages known [was Re: Python is readable] - somewhat OT
Agreed with your entire first chunk 100%. Woohoo! High five. :) On Thu, Mar 29, 2012 at 1:48 PM, Nathan Rice wrote: > transformations on lists of data are natural in Lisp, but graph > transformations are not, making some things awkward. Eh, earlier you make some argument towards lisp being a universal metalanguage. If it can simulate prolog, it can certainly grow a graph manipulation form. You'd just need to code it up as a macro or function :p > Additionally, > because Lisp tries to nudge you towards programming in a functional > style, it can be un-intuitive to learn. I think you're thinking of Scheme here. Common Lisp isn't any more functional than Python, AFAIK (other than having syntactic heritage from the lambda calculus?) Common-Lisp does very much embrace state as you later describe, Scheme much less so (in that it makes mutating operations more obvious and more ugly. Many schemes even outlaw some entirely. And quoted lists default to immutable (rgh)). > I'm all for diversity of language at the level of minor notation and > vocabulary, but to draw an analogy to the real world, English and > Mandarin are redundant, and the fact that both exist creates a > communication barrier for BILLIONS of people. That doesn't mean that > biologists shouldn't be able to define words to describe biological > things, if you want to talk about biology you just need to learn the > vocabulary. That also doesn't mean that mathematicians shouldn't > be able to use notation to structure complex statements, if you want > to do math you need to man up and learn the notation (of course, I > have issues with some mathematical notation, but there is no reason > you should cry about things like set builder). Well, what sort of language differences make for English vs Mandarin? Relational algebraic-style programming is useful, but definitely a large language barrier to people that don't know any SQL. I think this is reasonable. 
(It would not matter even if you gave SQL python-like syntax, the mode of thinking is different, and for a good reason.) -- Devin -- http://mail.python.org/mailman/listinfo/python-list
Re: Number of languages known [was Re: Python is readable] - somewhat OT
On Thu, Mar 29, 2012 at 2:53 PM, Devin Jeanpierre wrote: > Agreed with your entire first chunk 100%. Woohoo! High five. :) Damn, then I'm not trolling hard enough ಠ_ಠ > On Thu, Mar 29, 2012 at 1:48 PM, Nathan Rice > wrote: >> transformations on lists of data are natural in Lisp, but graph >> transformations are not, making some things awkward. > > Eh, earlier you make some argument towards lisp being a universal > metalanguage. If it can simulate prolog, it can certainly grow a graph > manipulation form. You'd just need to code it up as a macro or > function :p Well, a lisp-like language. I would also argue that if you are using macros to do anything, the thing you are trying to do should classify as "not natural in lisp" :) I'm really thinking here more in terms of a general graph reactive system, matching patterns in an input graph and modifying the graph in response. There are a lot of systems that can be modeled as a graph that don't admit a nested list (tree) description. By having references to outside the nesting structure you've just admitted that you need a graph rather than a list, so why not be honest about it and work in that context from the get-go. >> Additionally, >> because Lisp tries to nudge you towards programming in a functional >> style, it can be un-intuitive to learn. > > I think you're thinking of Scheme here. Common Lisp isn't any more > functional than Python, AFAIK (other than having syntactic heritage > from the lambda calculus?) > > Common-Lisp does very much embrace state as you later describe, Scheme > much less so (in that it makes mutating operations more obvious and > more ugly. Many schemes even outlaw some entirely. And quoted lists > default to immutable (rgh)). I find it interesting that John McCarthy invented both Lisp and the situation calculus. As for set/setq, sure, you can play with state, but it is verbose, and there is no inherent notion of temporal locality. 
Your program's execution order forms a nice lattice when run on hardware, that should be explicit in software. If I were to do something crazy like take the union of two processes that can potentially interact, with an equivalence relation between some time t1 in the first process and a time t2 in the second (so that you can derive a single partial order), the computer should be able to tell if I am going to shoot myself in the foot, and ideally suggest the correct course of action. > Well, what sort of language differences make for English vs Mandarin? > Relational algebraic-style programming is useful, but definitely a > large language barrier to people that don't know any SQL. I think this > is reasonable. (It would not matter even if you gave SQL python-like > syntax, the mode of thinking is different, and for a good reason.) I don't think they have to be. You can view functions as names for temporally ordered sequence of declarative implication statements. Databases just leave out the logic (this is hyperbole, I know), so you have to do it in client code. I don't feel that a database necessarily has to be a separate entity, that is just an artifact of the localized, specialized view of computation. As stronger abstractions are developed and concurrent, distributed computation is rigorously systematized, I think we'll go full circle. -- http://mail.python.org/mailman/listinfo/python-list
Re: Number of languages known [was Re: Python is readable] - somewhat OT
On Fri, Mar 30, 2012 at 3:42 AM, Devin Jeanpierre wrote: > On Thu, Mar 29, 2012 at 10:03 AM, Chris Angelico wrote: >> You can't merge all of them without making a language that's >> suboptimal at most of those tasks - probably, one that's woeful at all >> of them. I mention SQL because, even if you were to unify all >> programming languages, you'd still need other non-application >> languages to get the job done. > ... > But this has nothing to do with being "suboptimal at most tasks". It's > easy to make a language that can do everything C can do, and also > everything that Haskell can do. I can write an implementation of this > programming language in one line of bash[*]. The easy way is to make > those features mutually exclusive. We don't have to sacrifice anything > by including more features until we want them to work together. Of course it's POSSIBLE. You can write everything in Ook if you want to. But any attempt to merge all programming languages into one will either: 1) Allow different parts of a program to be written in different subsets of this universal language, which just means that you've renamed all the languages but kept their distinctions (so a programmer still has to learn all of them); or 2) Shoehorn every task into one language, equivalent to knowing only one language and using that for everything. Good luck with that. The debate keeps on coming up, but it's not just political decisions that maintain language diversity. ChrisA -- http://mail.python.org/mailman/listinfo/python-list
Re: "convert" string to bytes without changing data (encoding)
On Fri, Mar 30, 2012 at 5:00 AM, Ross Ridge wrote: > Sorry, it would've been more accurate to label the flavour of kool-aid > Chris Angelico was trying to push as "it's impossible ... without > encoding": > > What is a string? It's not a series of bytes. You can't convert > it without encoding those characters into bytes in some way. I still stand by that statement. Do you try to convert a "dictionary of filename to open file object" into a "series of bytes" inside Python? It doesn't matter that, on some level, it's *stored as* a series of bytes; the actual object *is not* a series of bytes. There is no logical equivalency, ergo it is illogical and nonsensical to expect to turn one into the other without some form of encoding. Python does include an encoding that can handle lists and dictionaries. It's called Pickle, and it returns (in Python 3) a bytes object - which IS a series of bytes. It doesn't simply return some internal representation. ChrisA -- http://mail.python.org/mailman/listinfo/python-list
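Chris's Pickle example, sketched. (Plain ints stand in for the open file objects, since pickle refuses to serialize those; the filenames are made up):

```python
import pickle

# A dict is not a series of bytes; pickling *encodes* it into one.
d = {"spam.txt": 1, "eggs.txt": 2}
blob = pickle.dumps(d)

print(type(blob))                 # <class 'bytes'>
print(pickle.loads(blob) == d)    # True
```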
Re: Python is readable
On Thu, Mar 29, 2012 at 9:44 AM, Albert van der Horst wrote: > In article , > Nathan Rice wrote: >>> >>> http://www.joelonsoftware.com/articles/fog18.html >> >>I read that article a long time ago, it was bullshit then, it is >>bullshit now. The only thing he gets right is that the Shannon >>information of a uniquely specified program is proportional to the >>code that would be required to generate it. Never mind that if a > > Thank you for drawing my attention to that article. > It attacks the humbug software architects. > Are you one of them? > I really liked that article. I read the first paragraph, remembered that I had read it previously and stopped. I accidentally remembered something from another Joel article as being part of that article (read it at http://www.joelonsoftware.com/items/2007/12/03.html). I don't really have anything to say on Joel's opinions about why people can or should code; they're his and he is entitled to them. I feel they are overly reductionist (this isn't a black/white thing) and have a bit of luddite character to them. I will bet you everything I own that the only reason Joel is alive today is because of some mathematical abstraction he would be all too happy to discount as meaningless (because, to him, it is). Of course, I will give Joel one point: too many things related to programming are 100% hype, without any real substance; if his article had been about bullshit software hype and he hadn't fired the broadsides at the very notion of abstraction, I wouldn't have anything to say. Anyhow, if you want the "ugh rock good caveman smash gazelle put in mouth make stomach pain go away" meaning, here it is: Programs are knowledge. The reverse is not true, because programming is an infantile area of human creation, mere feet from the primordial tide pool from whence it spawned. 
We have a very good example of what a close to optimal outcome is: human beings - programs that write themselves, all knowledge forming programs, strong general artificial intelligence. When all knowledge is also programs, we will have successfully freed ourselves from necessary intellectual drudgery (the unnecessary kind will still exist). We will be able to tell computers what we want on our terms, and they will go and do it, checking in with us from time to time if they aren't sure what we really meant in the given context. If we have developed advanced robotics, we will simultaneously be freed from most manual labor. The only thing left for Joel to do will be to lounge about, being "creative" while eating mangos that were picked, packed, shipped and unloaded by robots, ordered by his computer assistant because it knows that he likes them, then delivered, prepared and served by more robots. The roadblocks in the path include the ability to deal with uncertainty, understand natural languages and the higher order characteristics of information. Baby steps to deal with these roadblocks are to explicitly forbid uncertainty, simplify the language used, and explicitly state higher order properties of information. The natural evolution of the process is to find ways to deal with ambiguity, correctly parse more complex language and automatically deduce higher order characteristics of information. Clearly, human intelligence demonstrates that this is not an impossible pipe dream. You may not be interested in working towards making this a reality, but I can pretty much guarantee on the scale of human achievement, it is near the top. -- http://mail.python.org/mailman/listinfo/python-list
RE: Advise of programming one of my first programs
>>From the Zen of Python, "Simple is better than complex." It is a good
>>programming mentality.

>Complex is better than complicated. :p

Absolutely! Too bad your version would be considered the more "complicated" version ;)

>With the main navigation menu I will only have the option to select a nickname
>and when a nickname is selected then it loads Details of the contact and from
>loaded details I can choice Edit or back to main screen, like I did it the
>first time, or else I can do it => when 'e' pressed to ask for a nickname and
>then edit it.

I was trying to simplify it to "guide" you to a more correct solution without feeding you the answer. Maybe I should have given you the explanation first to explain why you should be doing it a different way. Going back to your original program (and your modifications to it), the original menu can cause crashing in more complicated programs and thus is considered bad style. It was basically using recursion (I will touch on this later) but without any of the benefits. It was endlessly branching instead of using a simple loop. Sort of like the following Menu/submenu example.

    Menu
      submenu
        Menu
          submenu
            Menu
              __ad infinitum__

How does this matter? Let's look at some simpler code below.

    print 'start'
    function_a()     # a function is called
        print 'a'    # code inside the function
        print 'b'    # code inside the function
    a = ' something '
    print a
    function_b()     # another function call
        print 'c'    # code inside a different function
        print 'd'    # code inside a different function
    print 'end'

Let us pretend we are the computer, who executes one line at a time and so basically goes through a list of commands. The list we are going to execute is the following:

    print 'start'
    function_a()
    print 'a'
    print 'b'
    a = ' something '
    print a
    function_b()
    print 'c'
    print 'd'
    print 'end'

How does the computer know to execute "a = ' something '" after "print 'b'"? It does it by storing the location where it was before it proceeds to the function call. 
That way when the end of the function is reached it returns to the previous spot. In essence:

    print 'start'
    function_a()     __store this location so I can come back__
        print 'a'
        print 'b'
        __return to previous location__
    a = ' something '
    print a
    function_b()     __store this location so I can come back__
        print 'c'
        print 'd'
        __return to previous location__
    print 'end'

Now what happens if "function_a" calls "function_a"? By the way, the term for this type of call is recursion.

    print 'start'
    function_a()     __store this location so I can come back__
        print 'a'
        print 'b'
        function_a() __store this location so I can come back__
            print 'a'
            print 'b'
            function_a() __store this location so I can come back__
                print 'a'
                print 'b'
                function_a() __store this location so I can come back__
                    print 'a'
                    print 'b'
                    function_a() __store this location so I can come back__
                        *until the program ends*

Now each __store__ action takes up memory, and when the computer (or your program) runs out of memory your computer crashes. Your application is trivial and more likely to be ended by the user instead of going through the tens of thousands, if not hundreds of thousands, of calls that Python will let you take, but it is a bad practice and a habit to avoid. A real-world program would use more memory and quit even faster than yours. Recursion has its place in programming, but not in this case! What you need is a simple loop. That is why I provided you with the menu I did. The following menu sounds like what you want; there were a couple different ways I could have done this. In this version, if you type anything when asked for a menu choice that is not 'e' or 'q', the program will automatically ask you for the next book choice. 
    def mmenu():
        # load tbook here
        while True:
            book = get_book_choice()
            details( tbook, book )
            choicem = get_menu_choice()
            if choicem == 'e' or choicem == 'E':
                edit( tbook, book )
                # save tbook here
            elif choicem == 'Q' or choicem == 'q':
                break  # end loop to exit program

Ramit

Ramit Prasad | JPMorgan Chase Investment Bank | Currencies Technology
712 Main Street | Houston, TX 77002
work phone: 713 - 216 - 5423
--
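The crash Ramit describes is easy to provoke directly: modern Python cuts the endless-menu pattern off with a RecursionError once the stored return locations (stack frames) hit a limit, well before real memory damage is done (the Python 2 of this era raised RuntimeError instead). A toy reproduction:

```python
import sys

def endless_menu(depth=0):
    # Each call stores a return location (a stack frame)...
    return endless_menu(depth + 1)

try:
    endless_menu()
except RecursionError:
    # ...and Python gives up once the frame stack hits its limit.
    print("recursion limit hit; allowed depth is about", sys.getrecursionlimit())
```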
RE: Number of languages known [was Re: Python is readable] - somewhat OT
> >> You can't merge all of them without making a language that's > >> suboptimal at most of those tasks - probably, one that's woeful at all > >> of them. I mention SQL because, even if you were to unify all > >> programming languages, you'd still need other non-application > >> languages to get the job done. > > ... > > But this has nothing to do with being "suboptimal at most tasks". It's > > easy to make a language that can do everything C can do, and also > > everything that Haskell can do. I can write an implementation of this > > programming language in one line of bash[*]. The easy way is to make > > those features mutually exclusive. We don't have to sacrifice anything > > by including more features until we want them to work together. > > Of course it's POSSIBLE. You can write everything in Ook if you want > to. But any attempt to merge all programming languages into one will > either: > > 1) Allow different parts of a program to be written in different > subsets of this universal language, which just means that you've > renamed all the languages but kept their distinctions (so a programmer > still has to learn all of them); or > > 2) Shoehorn every task into one language, equivalent to knowing only > one language and using that for everything. Good luck with that. In a much simpler context, isn't this what .NET's CLR does? Except that instead of converting each language into each other it converts everything into a different language. I have trouble in my mind seeing how what you suggest would not end up with badly coded versions of a translated program. Never yet seen a program that could convert from one paradigm/language directly to another (and do it well/maintainable). > The debate keeps on coming up, but it's not just political decisions > that maintain language diversity. Not a bad thing in my opinion. A tool for each problem, but I can see the appeal of a multi-tool language. 
Ramit

Ramit Prasad | JPMorgan Chase Investment Bank | Currencies Technology
712 Main Street | Houston, TX 77002
work phone: 713 - 216 - 5423
--
http://mail.python.org/mailman/listinfo/python-list
Re: Number of languages known [was Re: Python is readable] - somewhat OT
On Thu, Mar 29, 2012 at 4:33 PM, Chris Angelico wrote: > Of course it's POSSIBLE. You can write everything in Ook if you want > to. But any attempt to merge all programming languages into one will > either: In that particular quote, I was saying that the reason that you claimed we can't merge languages was not a correct reason. You are now moving the goalposts, in that you've decided to abandon your original point. Also you are now discussing the merger of all programming languages, whereas I meant to talk about pairs of programming languages, such as SQL and Python. Merging all programming languages is ridiculous. Even merging two, Haskell and C, is impossible without running into massive world-bending problems. (Yes, these problems are interesting, but no, they can't be solved without running into your "issue 1" -- this is in fact a proven theorem.) > 1) Allow different parts of a program to be written in different > subsets of this universal language, which just means that you've > renamed all the languages but kept their distinctions (so a programmer > still has to learn all of them); or Yes. I mentioned this. It is not entirely useless (if you're going to use the other language _anyway_, like SQL or regexps, might as well have it be checked at compile-time same as your outer code), but in a broad sense it's a terrible idea. Also, programmers would have to learn things regardless. You can't avoid this, that's what happens when you add features. The goal in integrating two languages is, well, integration, not reducing learning. > 2) Shoehorn every task into one language, equivalent to knowing only > one language and using that for everything. Good luck with that. This isn't true for the "merge just two languages" case, which is what I meant to talk about. > The debate keeps on coming up, but it's not just political decisions > that maintain language diversity. Are you disagreeing with me, or somebody else? I never said that. 
Yes, I said that in some cases, e.g. SQL/Python, because there are no technical issues, it must be something political or stylistic. I wasn't saying that the only reason we don't merge languages in is political. As a matter of fact, the very next paragraph begins with "There _are_ times when this is technical". ("political" is a bad word for it, because it covers things that are just plain bad ideas (but, subjectively). For example, there's nothing technically challenging about adding an operator that wipes the user's home directory.) -- Devin -- http://mail.python.org/mailman/listinfo/python-list
Re: Number of languages known [was Re: Python is readable] - somewhat OT
On Thu, Mar 29, 2012 at 3:50 PM, Nathan Rice wrote: > Well, a lisp-like language. I would also argue that if you are using > macros to do anything, the thing you are trying to do should classify > as "not natural in lisp" :) You would run into disagreement. Some people feel that the lisp philosophy is precisely that of extending the language to do anything you want, in the most natural way. At least, I disagree, but my lisp thoughts are the result of indoctrination of the Racket crowd. I don't know how well they represent the larger lisp community. But you should definitely take what I say from the viewpoint of the sort of person that believes that the whole purpose of lisps is to embed new syntax and new DSLs via macros. Without macros, there's no point of having this despicable syntax (barring maybe pedagogy and other minor issues). > I'm really thinking here more in terms of a general graph reactive > system here, matching patterns in an input graph and modifying the > graph in response. There are a lot of systems that can be modeled as > a graph that don't admit a nested list (tree) description. By having > references to outside the nesting structure you've just admitted that > you need a graph rather than a list, so why not be honest about it and > work in that context from the get-go. I don't see any issue in defining a library for working with graphs. If it's useful enough, it could be added to the standard library. There's nothing all that weird about it. Also, most representations of graphs are precisely via a tree-like non-recursive structure. For example, as a matrix, or adjacency list, etc. We think of them as deep structures, but implement them as flat, shallow structures. Specialized syntax (e.g. from macros) can definitely bridge the gap and let you manipulate them in the obvious way, while admitting the usual implementation. > I don't think they have to be. 
You can view functions as names for > temporally ordered sequence of declarative implication statements. > Databases just leave out the logic (this is hyperbole, I know), so you > have to do it in client code. I don't feel that a database > necessarily has to be a separate entity, that is just an artifact of > the localized, specialized view of computation. As stronger > abstractions are developed and concurrent, distributed computation is > rigorously systematized, I think we'll go full circle. Maybe I'm too tired, but this went straight over my head, sorry. Perhaps you could be a bit more explicit about what you mean by the implications/logic? -- Devin -- http://mail.python.org/mailman/listinfo/python-list
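Devin's aside that graphs are usually implemented as flat, shallow structures can be made concrete with an adjacency dict (the node names here are made up):

```python
# A directed graph containing a cycle, stored as a flat adjacency dict:
# no node ever holds a direct reference to another node object.
graph = {
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],   # edge back to "a" closes a cycle
}

def has_edge(g, u, v):
    """Edge test is plain membership in a list -- no traversal required."""
    return v in g.get(u, [])

print(has_edge(graph, "c", "a"))   # True
print(has_edge(graph, "a", "a"))   # False
```

The deep structure lives in the interpretation of the dict, not in the dict itself, which is the sense in which a "tree-like non-recursive" representation carries a graph.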
Re: "convert" string to bytes without changing data (encoding)
On Thu, 29 Mar 2012 17:36:34 +, Prasad, Ramit wrote: >> > Technically, ASCII goes up to 256 but they are not A-z letters. >> > >> Technically, ASCII is 7-bit, so it goes up to 127. > >> No, ASCII only defines 0-127. Values >=128 are not ASCII. >> >> >From https://en.wikipedia.org/wiki/ASCII: >> >> ASCII includes definitions for 128 characters: 33 are non-printing >> control characters (now mostly obsolete) that affect how text and >> space is processed and 95 printable characters, including the space >> (which is considered an invisible graphic). > > > Doh! I was mistaking extended ASCII for ASCII. Thanks for the > correction. There actually is no such thing as "extended ASCII" -- there is a whole series of many different "extended ASCIIs". If you look at the encodings available in (for example) Thunderbird, many of the ISO-8859-* and Windows-* encodings are "extended ASCII" in the sense that they extend ASCII to include bytes 128-255. Unfortunately they all extend ASCII in a different way (hence they are different encodings). -- Steven -- http://mail.python.org/mailman/listinfo/python-list
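Steven's point in code: one byte in the 128-255 range, three different "extended ASCIIs", three different characters, while everything below 128 stays agreed upon (the encodings chosen are just convenient examples):

```python
b = bytes([0xD0])   # a single byte above the ASCII range

print(b.decode("latin-1"))      # 'Ð' (ISO-8859-1)
print(b.decode("iso-8859-7"))   # 'Π' (Greek)
print(b.decode("cp1251"))       # 'Р' (Cyrillic, Windows)

# Below 128 they all agree -- that is what makes them "extended ASCII".
print(bytes([0x41]).decode("latin-1") == bytes([0x41]).decode("cp1251"))  # True
```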
Re: "convert" string to bytes without changing data (encoding)
On Thu, 29 Mar 2012 11:30:19 -0400, Ross Ridge wrote: > Steven D'Aprano wrote: >>Your reaction is to make an equally unjustified estimate of Evan's >>mindset, namely that he is not just wrong about you, but *deliberately >>and maliciously* lying about you in the full knowledge that he is wrong. > > No, Evan in his own words admitted that his post was meant to be harsh, > "a bit harsher than it deserves", showing his malicious intent. Being harsher than it deserves is not synonymous with being malicious. You are making assumptions about Evan's mental state that are not supported by the evidence. Evan may believe that by "punishing" (for some feeble sense of punishment) you harshly, he is teaching you better behaviour that will be to your own benefit; or that it will act as a warning to others. Either way he may believe that he is actually doing good. And then he entirely undermined his own actions by admitting that he was over-reacting. This suggests that, in fact, he wasn't really motivated by either malice or beneficence but mere frustration. It is quite clear that Evan let his passions about writing maintainable code get the best of him. His rant was more about "people like you" than you personally. Evan, if you're reading this, I think you owe Ross an apology for flying off the handle. Ross, I think you owe Evan an apology for unjustified accusations of malice. > He made > accusations that were neither supported by anything I've said Now that is not actually true. Your posts have defended the idea that copying the raw internal byte representation of strings is a reasonable thing to do. You even claimed to know how to do so, for any version of Python (but so far have ignored my request for you to demonstrate). > in this > thread nor by the code I actually write. His accusations about me were > completely made up, he was not telling the truth and had no reasonable > basis to believe he was telling the truth.
He was maliciously lying and > I'm completely justified in saying so. No, they were not completely made up. Your posts give many signs of being somebody who might very well write code to the implementation rather than the interface. Whether you are or not is a separate question, but your posts in this thread indicate that you very likely could be. If this is not the impression you want to give, then you should reconsider your posting style. Ross, to be frank, your posting style in this thread has been cowardly and pedantic, an obnoxious combination. Please take this as constructive criticism and not an attack -- you have alienated people in this thread, leading at least one person to publicly kill-file your future posts. I choose to assume you aren't aware of why that is rather than that you are doing so deliberately. Without actually coming out and making a clear, explicit statement that you approve or disapprove of the OP's attempt to use implementation details, you *imply* support without explicitly giving it; you criticise others for saying it can't be done without demonstrating that it can be done. If this is a deliberate rhetorical trick, then shame on you for being a coward without the conviction to stand behind concrete expressions of your opinion. If not, then you should be aware that you are using a rhetorical style that will make many people predisposed to think you are a twat. You *might* have said: "Guys, you're technically wrong about this. This is how you can retrieve the internal representation of a string as a sequence of bytes: ...code... but you shouldn't use this in production code because it is fragile and depends on implementation details that may break in PyPy and Jython and IronPython." But you didn't. You *might* have said: "Wrong, you can convert a string into a sequence of bytes without encoding or decoding: ...code... but don't do this." But you didn't.
Instead you puffed yourself up as a big shot who was more technically correct than everyone else, but without *actually* demonstrating that you can do what you said you can do. You labelled as "bullshit" our attempts to discourage the OP from his misguided approach. If your intention was to put people off-side, you succeeded very well. If not, you should be aware that you have, and consider how you might avoid this in the future. -- Steven -- http://mail.python.org/mailman/listinfo/python-list
Re: Python is readable
On Thu, 29 Mar 2012 14:37:09 -0400, Nathan Rice wrote: > On Thu, Mar 29, 2012 at 9:44 AM, Albert van der Horst > wrote: >> In article , Nathan >> Rice wrote: http://www.joelonsoftware.com/articles/fog18.html > Of course, I will give Joel one point: too many things related to > programming are 100% hype, without any real substance; if his article > had been about bullshit software hype and he hadn't fired the broadsides > at the very notion of abstraction He did no such thing. I challenge you to find me one place where Joel has *ever* claimed that "the very notion of abstraction" is meaningless or without use. -- Steven -- http://mail.python.org/mailman/listinfo/python-list
Re: Number of languages known [was Re: Python is readable] - somewhat OT
On Thu, Mar 29, 2012 at 7:37 PM, Devin Jeanpierre wrote: > On Thu, Mar 29, 2012 at 3:50 PM, Nathan Rice > wrote: >> Well, a lisp-like language. I would also argue that if you are using >> macros to do anything, the thing you are trying to do should classify >> as "not natural in lisp" :) > > You would run into disagreement. Some people feel that the lisp > philosophy is precisely that of extending the language to do anything > you want, in the most natural way. That is some people's lisp philosophy, though I wouldn't say that is a universal. Just like I might say my take on python's philosophy is "keep it simple, stupid" but others could disagree. > At least, I disagree, but my lisp thoughts are the result of > indoctrination of the Racket crowd. I don't know how well they > represent the larger lisp community. But you should definitely take > what I say from the viewpoint of the sort of person that believes that > the whole purpose of lisps is to embed new syntax and new DSLs via > macros. Without macros, there's no point of having this despicable > syntax (barring maybe pedagogy and other minor issues). Heh, I think you can have a homoiconic language without nasty syntax, but I won't get into that right now. >> I'm really thinking here more in terms of a general graph reactive >> system here, matching patterns in an input graph and modifying the >> graph in response. There are a lot of systems that can be modeled as >> a graph that don't admit a nested list (tree) description. By having >> references to outside the nesting structure you've just admitted that >> you need a graph rather than a list, so why not be honest about it and >> work in that context from the get-go. > > I don't see any issue in defining a library for working with graphs. > If it's useful enough, it could be added to the standard library. > There's nothing all that weird about it. 
Graphs are the more general and expressive data structure; I think if anything you should special-case the less general form. > Also, most representations of graphs are precisely via a tree-like > non-recursive structure. For example, as a matrix, or adjacency list, > etc. We think of them as deep structures, but implement them as flat, > shallow structures. Specialized syntax (e.g. from macros) can > definitely bridge the gap and let you manipulate them in the obvious > way, while admitting the usual implementation. We do a lot of things because they are efficient. That is why Gaussian distributions are everywhere in statistics, people approximate nonlinear functions with sums of kernels, etc. It shouldn't be the end goal though, unless it really is the most expressive way of dealing with things. My personal opinion is that graphs are more expressive, and I think it would be a good idea to move towards modeling knowledge and systems with graphical structures. >> I don't think they have to be. You can view functions as names for >> temporally ordered sequence of declarative implication statements. >> Databases just leave out the logic (this is hyperbole, I know), so you >> have to do it in client code. I don't feel that a database >> necessarily has to be a separate entity, that is just an artifact of >> the localized, specialized view of computation. As stronger >> abstractions are developed and concurrent, distributed computation is >> rigorously systematized, I think we'll go full circle. > > Maybe I'm too tired, but this went straight over my head, sorry. > Perhaps you could be a bit more explicit about what you mean by the > implications/logic? Well, the Curry-Howard correspondence says that every function can be seen as a named implication of outputs given inputs, with the code for that function being a representation of its proof. Since pretty much every function is a composition of many smaller functions, this holds down to the lowest level.
Even imperative statements can be viewed as functions in this light, if you assume discrete time, and view every function or statement as taking the state of the world at T as an implicit input and returning as an implicit output the state of the world at T+1. Thus, every function (and indeed pretty much all code) can be viewed as a named collection of implication statements in a particular context :) -- http://mail.python.org/mailman/listinfo/python-list
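Nathan's "state of the world at T -> state at T+1" framing can be made concrete with a toy sketch (my own illustration, not code from the thread): each imperative statement becomes a pure function on a state dict, and running a program is just folding those functions over an initial state.

```python
from functools import reduce

# Each "statement" is a pure function: state at time T -> state at T+1.
def assign_x_one(state):
    return {**state, "x": 1}

def increment_x(state):
    return {**state, "x": state["x"] + 1}

def run(program, initial_state):
    # A program is the left-to-right composition of its statements.
    return reduce(lambda state, stmt: stmt(state), program, initial_state)

final = run([assign_x_one, increment_x, increment_x], {})
# final == {"x": 3}
```

The imperative fragment `x = 1; x += 1; x += 1` and this functional fold compute the same thing; the "world" is simply made an explicit argument.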
Re: Number of languages known [was Re: Python is readable] - somewhat OT
On Thu, 29 Mar 2012 13:48:40 -0400, Nathan Rice wrote: > Here's a thought experiment. Imagine that you have a project tree on > your file system which includes files written in many different > programming languages. Imagine that the files can be assumed to be > contiguous for our purposes, so you could view all the files in the > project as one long chunk of data. The directory and file names could > be interpreted as statements in this data, analogous to "in the context > of somedirectory" or "in the context of somefile with sometype". Any > project configuration files could be viewed as declarative statements > about contexts, such as "in xyz context, ignore those" or "in abc > context, any that is actually a this". Imagine the compiler or > interpreter is actually part of your program (which is reasonable since > it doesn't do anything by itself). Imagine the build management tool is > also part of your program in pretty much the same manner. Imagine that > your program actually generates another program that will generate the > program the machine runs. I hope you can follow me here, and further I > hope you can see that this is a completely valid description of what is > actually going on (from a different perspective). [...] > What does pushing the abstraction point that far up provide? I see why you are so hostile towards Joel Spolsky's criticism of Architecture Astronauts: you are one of them. Sorry Nathan, I don't know how you breathe that high up. For what it's worth, your image of "everything from the compiler on up is part of your program" describes both Forth and Hypercard to some degree, both of which I have used and like very much. I still think you're sucking vacuum :( -- Steven -- http://mail.python.org/mailman/listinfo/python-list
Re: Python is readable
> He did no such thing. I challenge you to find me one place where Joel has > *ever* claimed that "the very notion of abstraction" is meaningless or > without use. "When great thinkers think about problems, they start to see patterns. They look at the problem of people sending each other word-processor files, and then they look at the problem of people sending each other spreadsheets, and they realize that there's a general pattern: sending files. That's one level of abstraction already. Then they go up one more level: people send files, but web browsers also "send" requests for web pages. And when you think about it, calling a method on an object is like sending a message to an object! It's the same thing again! Those are all sending operations, so our clever thinker invents a new, higher, broader abstraction called messaging, but now it's getting really vague and nobody really knows what they're talking about any more. Blah. When you go too far up, abstraction-wise, you run out of oxygen. Sometimes smart thinkers just don't know when to stop, and they create these absurd, all-encompassing, high-level pictures of the universe that are all good and fine, but don't actually mean anything at all." To me, this directly indicates he views higher order abstractions skeptically, and assumes because he does not see meaning in them, they don't hold any meaning. Despite Joel's beliefs, new advances in science are in many ways the result of advances in mathematics brought on by very deep abstraction. Just as an example, Von Neumann's treatment of quantum mechanics with linear operators in Hilbert spaces utilizes very abstract mathematics, and without it we wouldn't have modern electronics. I'm 100% behind ranting on software hype. Myopically bashing the type of thinking that resulted in the computer the basher is writing on, not so much. 
If he had said "if you're getting very high up, find very smart people and talk to them to make sure you're not in wing nut territory" I could have given him a pass. I really wish people wouldn't try to put Joel up on a pedestal. The majority of his writings seem like sensationalist spins on tautological statements, self-aggrandizement or Luddite trolling. At least Stephen Wolfram has cool shit to back up his ego; Fog Creek makes decent but overpriced debuggers/version control/issue trackers... From my perspective, Stack Overflow is the first really interesting thing Joel had his hand in, and I suspect Jeff Atwood was probably the reason for it, since SO doesn't look like anything Fog Creek ever produced prior to that. -- http://mail.python.org/mailman/listinfo/python-list
Re: unittest: assertRaises() with an instance instead of a type
On Thu, 29 Mar 2012 09:08:30 +0200, Ulrich Eckhardt wrote:
> Am 28.03.2012 20:07, schrieb Steven D'Aprano:
>> Secondly, that is not the right way to do this unit test. You are
>> testing two distinct things, so you should write it as two separate
>> tests:
> [..code..]
>> If foo does *not* raise an exception, the unittest framework will
>> handle the failure for you. If it raises a different exception, the
>> framework will also handle that too.
>>
>> Then write a second test to check the exception code:
> [...]
>> Again, let the framework handle any unexpected cases.
>
> Sorry, you got it wrong, it should be three tests:
> 1. Make sure foo() raises an exception.
> 2. Make sure foo() raises the right exception.
> 3. Make sure the errorcode in the exception is right.
>
> Or maybe you should in between verify that the exception raised actually
> contains an errorcode? And that the errorcode can be equality-compared
> to the expected value? :>

Of course you are free to slice it even finer if you like:

testFooWillRaiseSomethingButIDontKnowWhat
testFooWillRaiseMyException
testFooWillRaiseMyExceptionWithErrorcode
testFooWillRaiseMyExceptionWithErrorcodeWhichSupportsEquality
testFooWillRaiseMyExceptionWithErrorcodeEqualToFooError

Five tests :) To the degree that the decision of how finely to slice tests is a matter of personal judgement and/or taste, I was wrong to say "that is not the right way". I should have said "that is not how I would do that test". I believe that a single test is too coarse, and three or more tests is too fine, but two tests is just right. Let me explain how I come to that judgement. If you take a test-driven development approach, the right way to test this is to write testFooWillFail once you decide that foo() should raise MyException but before foo() actually does so. You would write the test, the test would fail, and you would fix foo() to ensure it raises the exception. Then you leave the now passing test in place to detect regressions.
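As a concrete sketch (not code from the thread), that testFooWillFail step can lean on unittest's assertRaises, which handles both the "no exception" and "wrong exception" outcomes for you. DummyError and foo below are stand-ins for the thread's hypothetical MyException and foo():

```python
import unittest

class DummyError(Exception):
    """Stand-in for the thread's hypothetical MyException."""

def foo():
    # Stand-in for the code under test; here it already raises.
    raise DummyError("boom")

class FooTests(unittest.TestCase):
    def testFooWillFail(self):
        # Fails if foo() raises nothing; errors if it raises
        # something other than DummyError. Either way the
        # framework reports it -- no manual self.fail() needed.
        with self.assertRaises(DummyError):
            foo()
```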
Then you do the same for the errorcode. Hence two tests. Since running tests is (usually) cheap, you never bother going back to remove tests which are made redundant by later tests. You only remove them if they are made redundant by changes to the code. So even though the first test is made redundant by the second (if the first fails, so will the second), you don't remove it. Why not? Because it guards against regressions. Suppose I decide that errorcode is no longer needed, so I remove the test for errorcode. If I had earlier also removed the independent test for MyException being raised, I've now lost my only check against regressions in foo(). So: never remove tests just because they are redundant. Only remove them when they are obsolete due to changes in the code being tested. Even when I don't actually write the tests in advance of the code, I still write them as if I were. That usually makes it easy for me to decide how fine-grained the tests should be: since there was never a moment when I thought MyException should have an errorcode attribute, but not know what that attribute would be, I don't need a *separate* test for the existence of errorcode. (I would only add such a separate test if there was a bug that sometimes the errorcode does not exist. That would be a regression test.) The question of the exception type is a little more subtle. There *is* a moment when I knew that foo() should raise an exception, but before I decided what that exception would be. ValueError? TypeError? Something else? I can write the test before making that decision:

def testFooRaises(self):
    try:
        foo()
    except:  # catch anything
        pass
    else:
        self.fail("foo didn't raise")

However, the next step is broken: I have to modify foo() to raise an exception, and there is no "raise" equivalent to the bare "except", no way to raise an exception without specifying an exception type. I can use a bare raise, but only in response to an existing exception.
So to raise an exception at all, I need to decide what exception that will be. Even if I start with a placeholder "raise BaseException", and test for that, when I go back and change the code to "raise MyException" I should change the test, not create a new test. Hence there is no point is testing for "any exception, I don't care what" since I can't write code corresponding to that test case. Hence, I end up with two tests, not three and certainly not five. -- Steven -- http://mail.python.org/mailman/listinfo/python-list
Re: unittest: assertRaises() with an instance instead of a type
On Thu, 29 Mar 2012 08:35:16 -0700, Ethan Furman wrote:
> Steven D'Aprano wrote:
>> On Wed, 28 Mar 2012 14:28:08 +0200, Ulrich Eckhardt wrote:
>>
>>> Hi!
>>>
>>> I'm currently writing some tests for the error handling of some code.
>>> In this scenario, I must make sure that both the correct exception is
>>> raised and that the contained error code is correct:
>>>
>>>     try:
>>>         foo()
>>>         self.fail('exception not raised')
>>>     catch MyException as e:
>>>         self.assertEqual(e.errorcode, SOME_FOO_ERROR)
>>>     catch Exception:
>>>         self.fail('unexpected exception raised')
>>
>> Secondly, that is not the right way to do this unit test. You are
>> testing two distinct things, so you should write it as two separate
>> tests:
>
> I have to disagree -- I do not see the advantage of writing a second
> test that *will* fail if the first test fails as opposed to bundling
> both tests together, and having one failure.

Using that reasoning, your test suite should contain *one* ginormous test containing everything:

def testDoesMyApplicationWorkPerfectly(self):
    # TEST ALL THE THINGS!!!
    ...

since *any* failure in any part will cause cascading failures in every other part of the software which relies on that part. If you have a tree of dependencies, a failure in the root of the tree will cause everything to fail, and so by your reasoning, everything should be in a single test. I do not agree with that reasoning, even when the tree consists of two items: an exception and an exception attribute. The problem of cascading test failures is a real one. But I don't believe that the solution is to combine multiple conceptual tests into a single test. In this case, the code being tested covers two different concepts:

1. foo() will raise MyException. Hence one test for this.
2. When foo() raises MyException, the exception instance will include an errorcode attribute with a certain value. This is conceptually separate from #1 above, even though it depends on it.

Why is it conceptually separate?
Because there may be cases where the caller cares about foo() raising MyException, but doesn't care about the errorcode. Hence errorcode is dependent but separate, and hence a separate test. -- Steven -- http://mail.python.org/mailman/listinfo/python-list
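For what it's worth, the second, errorcode-specific test can also lean on the framework: assertRaises used as a context manager exposes the caught exception instance. This is a sketch with stand-in names, since MyException and SOME_FOO_ERROR are hypothetical in the thread:

```python
import unittest

SOME_FOO_ERROR = 42  # stand-in value for the thread's error code

class DummyError(Exception):
    """Stand-in for MyException, carrying an error code."""
    def __init__(self, errorcode):
        super().__init__(errorcode)
        self.errorcode = errorcode

def foo():
    # Stand-in for the code under test.
    raise DummyError(SOME_FOO_ERROR)

class FooErrorcodeTests(unittest.TestCase):
    def testFooErrorcode(self):
        # The context manager records the exception instance, so the
        # errorcode attribute can be checked after the with-block.
        with self.assertRaises(DummyError) as cm:
            foo()
        self.assertEqual(cm.exception.errorcode, SOME_FOO_ERROR)
```

Because the exception check and the attribute check live in separate test methods (this one and the testFooWillFail style test), a change to the errorcode contract only invalidates one of them.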
Re: Python is readable
On Thu, 29 Mar 2012 22:26:38 -0400, Nathan Rice wrote: >> He did no such thing. I challenge you to find me one place where Joel >> has *ever* claimed that "the very notion of abstraction" is meaningless >> or without use. [snip quote] > To me, this directly indicates he views higher order abstractions > skeptically, Yes he does, and so we all should, but that's not the claim you made. You stated that he "fired the broadsides at the very notion of abstraction". He did no such thing. He fired a broadside at (1) software hype based on (2) hyper-abstractions which either don't solve any problems that people care about, or don't solve them any better than more concrete solutions. > and assumes because he does not see meaning in them, they > don't hold any meaning. You are making assumptions about his mindset that not only aren't justified by his comments, but are *contradicted* by his comments. He repeatedly describes the people coming up with these hyper-abstractions as "great thinkers", "clever thinkers", etc. who are seeing patterns in what people do. He's not saying that they're dummies. He's saying that they're seeing patterns that don't mean anything, not that the patterns aren't there. > Despite Joel's beliefs, new advances in science > are in many ways the result of advances in mathematics brought on by > very deep abstraction. Just as an example, Von Neumann's treatment of > quantum mechanics with linear operators in Hilbert spaces utilizes very > abstract mathematics, and without it we wouldn't have modern > electronics. I doubt that very much. The first patent for the transistor was made in 1925, a year before von Neumann even *started* working on quantum mechanics. In general, theory *follows* practice, not the other way around: parts of quantum mechanics theory followed discoveries made using the transistor: http://en.wikipedia.org/wiki/History_of_the_transistor The Romans had perfectly functioning concrete without any abstract understanding of chemistry. 
If we didn't have QM, we'd still have advanced electronics. Perhaps not *exactly* the electronics we have now, but we'd have something. We just wouldn't understand *why* it works, and so be less capable of *predicting* useful approaches and more dependent on trial-and-error. Medicine and pharmaceuticals continue to be discovered even when we can't predict the properties of molecules. My aunt makes the best damn lasagna you've ever tasted without any overarching abstract theory of human taste. And if you think that quantum mechanics is more difficult than understanding human perceptions of taste, you are badly mistaken. In any case, Spolsky is not making a general attack on abstract science. Your hyperbole is completely unjustified. -- Steven -- http://mail.python.org/mailman/listinfo/python-list
Re: Python is readable
>>> He did no such thing. I challenge you to find me one place where Joel >>> has *ever* claimed that "the very notion of abstraction" is meaningless >>> or without use. > [snip quote] >> To me, this directly indicates he views higher order abstractions >> skeptically, > > Yes he does, and so we all should, but that's not the claim you made. You > stated that he "fired the broadsides at the very notion of abstraction". > He did no such thing. He fired a broadside at (1) software hype based on > (2) hyper-abstractions which either don't solve any problems that people > care about, or don't solve them any better than more concrete solutions. Mathematics is all about abstraction. There are theories and structures in mathematics that have probably gone over a hundred years before being applied. As an analogy, just because a spear isn't useful while farming doesn't mean it won't save your life when you venture into the woods and come upon a bear. >> and assumes because he does not see meaning in them, they >> don't hold any meaning. > > You are making assumptions about his mindset that not only aren't > justified by his comments, but are *contradicted* by his comments. He > repeatedly describes the people coming up with these hyper-abstractions > as "great thinkers", "clever thinkers", etc. who are seeing patterns in > what people do. He's not saying that they're dummies. He's saying that > they're seeing patterns that don't mean anything, not that the patterns > aren't there. He is basically saying they are too clever for their own good, as a result of being fixated upon purely intellectual constructs. If math was a failed discipline I might be willing to entertain that notion, but quite the opposite, it is certainly the most successful area of study. > >> Despite Joel's beliefs, new advances in science >> are in many ways the result of advances in mathematics brought on by >> very deep abstraction. 
Just as an example, Von Neumann's treatment of >> quantum mechanics with linear operators in Hilbert spaces utilizes very >> abstract mathematics, and without it we wouldn't have modern >> electronics. > > I doubt that very much. The first patent for the transistor was made in > 1925, a year before von Neumann even *started* working on quantum > mechanics. The electronic properties of silicon (among other materials) are an obvious example of what quantum theory provides for us. We might have basic circuits, but we wouldn't have semiconductors. > In general, theory *follows* practice, not the other way around: parts of > quantum mechanics theory followed discoveries made using the transistor: You do need data points to identify an explanatory mathematical structure. > The Romans had perfectly functioning concrete without any abstract > understanding of chemistry. If we didn't have QM, we'd still have > advanced electronics. Perhaps not *exactly* the electronics we have now, > but we'd have something. We just wouldn't understand *why* it works, and > so be less capable of *predicting* useful approaches and more dependent > on trial-and-error. Medicine and pharmaceuticals continue to be > discovered even when we can't predict the properties of molecules. The stochastic method, while useful, is many orders of magnitude less efficient than analytically closed solutions. Not having access to closed form solutions would have put us back hundreds of years at least. > My aunt makes the best damn lasagna you've ever tasted without any > overarching abstract theory of human taste. And if you think that quantum > mechanics is more difficult than understanding human perceptions of > taste, you are badly mistaken. Taste is subjective, and your aunt probably started from a good recipe and tweaked it for local palates. That recipe could easily be over a hundred years old.
An overarching mathematical theory of human taste/mouth perception, if such a silly thing were to exist, would be able to generate new recipes that were perfect for a given person's tastes very quickly. Additionally, just to troll this point some more (fun times!), I would argue that there is an implicit theory of human taste (chefs refer to it indirectly as gastronomy) that is very poorly organized and lacks any sort of scientific rigor. Nonetheless, enough empirical observations about pairings of flavors, aromas and textures have been made to guide the creation of new recipes. Gastronomy doesn't need to be organized or rigorous because fundamentally it isn't very important. > In any case, Spolsky is not making a general attack on abstract science. > Your hyperbole is completely unjustified. The mathematics of the 20th century, (from the early 30s onward) tend to get VERY abstract, in just the way Joel decries. Category theory, model theory, modern algebraic geometry, topos theory, algebraic graph theory, abstract algebras and topological complexes are all very difficult to understand because they seem so incredibly abstract, yet most of
Re: Number of languages known [was Re: Python is readable] - somewhat OT
>> Here's a thought experiment. Imagine that you have a project tree on >> your file system which includes files written in many different >> programming languages. Imagine that the files can be assumed to be >> contiguous for our purposes, so you could view all the files in the >> project as one long chunk of data. The directory and file names could >> be interpreted as statements in this data, analogous to "in the context >> of somedirectory" or "in the context of somefile with sometype". Any >> project configuration files could be viewed as declarative statements >> about contexts, such as "in xyz context, ignore those" or "in abc >> context, any that is actually a this". Imagine the compiler or >> interpreter is actually part of your program (which is reasonable since >> it doesn't do anything by itself). Imagine the build management tool is >> also part of your program in pretty much the same manner. Imagine that >> your program actually generates another program that will generate the >> program the machine runs. I hope you can follow me here, and further I >> hope you can see that this is a completely valid description of what is >> actually going on (from a different perspective). > [...] >> What does pushing the abstraction point that far up provide? > > I see why you are so hostile towards Joel Spolsky's criticism of > Architecture Astronauts: you are one of them. Sorry Nathan, I don't know > how you breathe that high up. > > For what it's worth, your image of "everything from the compiler on up is > part of your program" describes both Forth and Hypercard to some degree, > both of which I have used and like very much. I still think you're > sucking vacuum :( We live in a world where the tools that are used are based on tradition (read that as backwards compatibility if it makes you feel better) and serve as a mechanism for deriving personal identity.
The world is backwards and retarded in many, many ways; this problem is interesting to me because it actually cuts across a much larger tract than is immediately obvious. People throughout history have had the mistaken impression that the world as it existed for them was the pinnacle of human development. Clearly all of those people were tragically deluded, and I suspect that is the case here as well. -- http://mail.python.org/mailman/listinfo/python-list