Finding evidence among programmers can be tricky. You may be up against a considerable selection bias in a lot of cases:
* You pick among experts in the field. Newcomers are unlikely to have the same problems as experts.

* The set of people who want a certain feature tends to be the more vocal one. The set of people who don't really care can be far larger, but since they are not vocal, you never hear from them.

* Languages with dynamic types tend to be more "operator heavy" because they have to be. You need a distinct operator for certain pairings of types, whereas a language with static types can overload one operator and figure out what to do from the input types (implicitly).

Personal opinion: I'm hesitant when people suggest syntactic sugar in programming languages, because it adds very little expressive power; it only saves some typing. After all, because it is sugar, there is a way to write the same thing without the construct. Granted, it is nice to save a few keystrokes. But that has to be weighed against how often those keystrokes are actually saved. If the argument is just less typing, you could save more keystrokes by changing func to fn and while to whl.

A construction such as =&& may help if the language you are looking at has a less sophisticated optimization engine. It becomes a "macro" which the compiler can easily expand into a "faster variant". Yet once you have optimizations in place, with propagation of values, the benefit of the "macro" can disappear (a small Go sketch of the expansion follows below).

Another point worth mentioning is that whenever you add more grammar to a language, you make it harder to add more grammar later. We have an artificial limit in our keyboards and in which characters are easy to type by default, so languages tend to pick among those characters. Whenever you claim a digram or trigram, you may deprive yourself of a later extension. It is a bit more involved than that, because lexers and parsers also have limitations in which languages they can easily recognize; a seemingly innocent addition can wreak havoc down the road. Finally, since grammar is limited, using it for syntactic sugar removes your ability to use it for other constructions later. C++11 seems to attach reference semantics to && in some overloaded forms, so =&& could carry a different meaning there. If you had already used it for boolean values, you might have to reject such a newer construction on the grounds that it would be confusing.

Productivity in a programming language is weird and non-linear. Often it is not the code itself that sets the limit, but rather the idea and solution you are cooking up in your mind. Slightly paraphrasing Peter Naur[0]: programmers build up knowledge in their heads of what they are working on, and the program is merely a projection of that knowledge. Crucially, we tend to value simple, elegant programs, so the program is almost always a well-chosen subset of that built-up knowledge. This is especially true when the programmer is highly skilled, since they are able to cut away the parts that are not important to the solution. We revel in the short and elegant program, but the real amazement to the seasoned programmer is every line of code not written, yet still accounted for. Thus, we value emergent behaviors of software.

In this light, productivity tends to be only loosely tied to the verbosity of the written code. After all, most of the time should be spent thinking about the problem to solve, not typing up the solution.
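For concreteness, here is a minimal sketch, in today's Go, of the "macro expansion" mentioned above. It assumes the proposed =&& behaves like Ruby's &&= (the right-hand side is evaluated only when the left-hand side is already true), which is my reading of the thread below; the function name is invented for illustration.

package main

import "fmt"

func expensiveCheck() bool {
        fmt.Println("expensiveCheck ran")
        return false
}

func main() {
        ok := true

        // Hypothetical sugar: ok =&& expensiveCheck()
        // Written without the sugar, the expansion is simply:
        if ok {
                ok = expensiveCheck()
        }

        fmt.Println(ok) // false; expensiveCheck ran exactly once
}

The sugar saves two lines and a pair of braces, and nothing else; whether that is worth a new operator is exactly the trade-off above.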
If you find yourself furiously typing away at the keyboard for a given problem, chances are the problem has a better solution[1].

[0] http://pages.cs.wisc.edu/~remzi/Naur.pdf

[1] Aside: the reason we often end up doing lots of mundane work is that the people who came before us decided on other abstractions, which turn out to be slightly suboptimal for our purposes. A rather innocuous choice between JSON and protobufs, for example, can have far-reaching consequences for the quality of the software in the long run, because the dynamic nature and weak typing of JSON tend to virally infect a code base. This can only be solved by additional mundane typing, debugging and bug fixing. (A short Go sketch of this effect follows at the end of this message.)

On Sat, May 6, 2017 at 1:07 AM <occi...@esperanto.org> wrote:

> On Thursday, May 4, 2017 at 22:44:58 UTC+2, Ian Lance Taylor wrote:
>>
>> I assert without evidence that for many people it would be confusing
>> to see a function call alone on the right hand side of an `=` when
>> that function may or may not be called.
>>
>
> Various languages like Perl, PHP, Ruby & zsh have the two operators I
> propose. If you can find no evidence among programmers of those, your
> point seems moot. Also there is discussion (e.g. on Stackoverflow) about
> this for various languages like C, C++, C# & Java, so people do want this!
>
> regards – Da
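Postscript, the sketch promised in footnote [1]: a minimal illustration of the "viral" effect in Go. The Config type, field names and sample data are invented for illustration; the point is only how the weakly typed shape spreads work into every use site.

package main

import (
        "encoding/json"
        "fmt"
)

// Config is an invented example type for this sketch.
type Config struct {
        Retries int    `json:"retries"`
        Host    string `json:"host"`
}

func main() {
        data := []byte(`{"retries": 3, "host": "example.com"}`)

        // Weakly typed: every use site has to repeat the type assertion
        // and its error handling, and that pattern spreads through callers.
        var loose map[string]interface{}
        if err := json.Unmarshal(data, &loose); err != nil {
                fmt.Println("decode failed:", err)
                return
        }
        retries, ok := loose["retries"].(float64) // JSON numbers arrive as float64
        if !ok {
                fmt.Println("retries missing or not a number")
                return
        }
        fmt.Println(int(retries))

        // Typed: the shape is checked once, at the boundary, and the rest
        // of the program works with plain ints and strings.
        var cfg Config
        if err := json.Unmarshal(data, &cfg); err != nil {
                fmt.Println("decode failed:", err)
                return
        }
        fmt.Println(cfg.Retries, cfg.Host)
}

Nothing here is protobuf-specific; the point is that the weak shape has to be re-checked at every use site, which is the mundane typing, debugging and bug fixing the footnote refers to.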