Re: try & catch / repeating code - DRY
On 2018-05-22 18:34:34 +, Ali Çehreli said:
> An idiom known in C++ circles is a Lippincott function: https://cppsecrets.blogspot.ca/2013/12/using-lippincott-function-for.html Just wanted to mention that it can be a part of a clean solution.

Thanks, and I assume that D has the same property WRT exception re-throwing as C++, right?

-- 
Robert M. Münch
http://www.saphirion.com
smarter | better | faster
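[Editor's note: a minimal D sketch of the Lippincott idea linked above, with hypothetical exception types (ParseError, IoError, handleError are illustrative names, not from the thread). Unlike C++, D has no bare `throw;` to re-raise the in-flight exception, so the caught exception is passed to the handler explicitly.]

```d
import std.stdio;

class ParseError : Exception { this(string m) { super(m); } }
class IoError : Exception { this(string m) { super(m); } }

// Lippincott-style handler: the shared catch logic lives in one function,
// and every call site's catch block just forwards the exception to it.
string handleError(Exception e)
{
    if (cast(ParseError) e) return "parse problem: " ~ e.msg;
    if (cast(IoError) e) return "io problem: " ~ e.msg;
    return "other: " ~ e.msg;
}

void main()
{
    string report;
    try
        throw new ParseError("bad token");
    catch (Exception e)
        report = handleError(e);    // one place to maintain the dispatch
    assert(report == "parse problem: bad token");
    writeln(report);
}
```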
Re: Efficient idiom for fastest code
On Wednesday, 23 May 2018 at 02:24:08 UTC, IntegratedDimensions wrote:
> In some cases the decision holds for continuous ranges. For some 0 <= n <= N the decision is constant, but n is arbitrary (determined by unknown factors at compile time). One can speed up the routine by using something akin to a simplified strategy pattern, where one uses functions/delegates/lambdas to code a faster version without the test:
>
> for(int i = 0; i < N; i++)
> {
>     d = (){ if(decision(i)) A; else d = () { B; }; d(); }
> }
>
> this code basically reduces to
>
> for(int i = 0; i < N; i++) { B; }
>
> Once the decision fails (and we assume that once it fails, it always fails in this particular case), it kicks in the "faster" code. Suppose decision is very slow.

I would just do

int i = 0;
for (; decision(i) && i < N; i++) { A; }
for (; i < N; i++) { B; }

This could be turned into a mixin template with something like this:

mixin template forSplit(alias condition, alias A, alias B)
{
    void execute()
    {
        int i = 0;
        for (; condition(i) && i < N; i++) { A(); }
        for (; i < N; i++) { B(); }
    }
}

and to use it in code (assuming N is defined in the scope):

mixin forSplit!((int i) => (decision(i)), {A;}, {B;}) loop;
loop.execute();

I haven't measured anything, but I would assume that delegates come with an overhead that you just don't need here. In fact, when trying to use

auto d = (int i) {};
d = (int i) { if (decision(i)) A; else d = (int i) { B; }; };
for (int i = 0; i < N; i++) { d(i); }

all I got was "cannot access frame of function D main", which sums up my experiences with lambdas in D so far.

While PGO and the branch predictor are good, they don't help much here. Not executing an expression is out of scope for them. All they do is prevent pipeline flushes. Also, I think you overestimate what the compiler can do.
My decision function to do some testing was this:

bool decision(int a) pure
out (result) { assert(result == (a < 10)); }
do
{
    import std.algorithm, std.range;
    // stolen from https://dlang.org/library/std/parallelism.html
    enum n = 1_000_000;
    enum delta = 1.0 / n;
    alias getTerm = (int i)
    {
        immutable x = (i - 0.5) * delta;
        return delta / (1.0 + x * x);
    };
    immutable pi = 4.0 * reduce!"a + b"(n.iota.map!getTerm);
    return a < 3 * pi;
}

With N=100 I got a speedup of ~10 (ldc -O3 -release), even though this function is pure and could be optimized a lot. It calculated pi for every single call. And optimizing the decision function isn't even the point of that question.
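[Editor's note: the split-loop idiom from the answer above, written out as a self-contained sketch. decision, and the stand-ins for A and B, are toy placeholders. Note the bounds check is placed before the predicate here, so decision is never evaluated with i == N.]

```d
import std.stdio;

// Toy stand-in: the predicate holds for a prefix of the range,
// then fails forever, which is the case discussed in the thread.
bool decision(int i) pure { return i < 10; }

void main()
{
    enum N = 20;
    int[] trace;
    int i = 0;
    // Phase 1: pay for decision(i) only while it still holds...
    for (; i < N && decision(i); ++i)
        trace ~= 0;                    // stands in for "A"
    // Phase 2: ...then run the rest without any predicate calls.
    for (; i < N; ++i)
        trace ~= 1;                    // stands in for "B"
    assert(trace.length == N);
    assert(trace[0] == 0 && trace[$ - 1] == 1);
    writeln("switched phases at i = 10");
}
```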
Re: try & catch / repeating code - DRY
On 05/23/2018 12:47 AM, Robert M. Münch wrote:
> On 2018-05-22 18:34:34 +, Ali Çehreli said:
>> An idiom known in C++ circles is a Lippincott function: https://cppsecrets.blogspot.ca/2013/12/using-lippincott-function-for.html Just wanted to mention that it can be a part of a clean solution.
>
> Thanks, and I assume that D has the same property WRT exception re-throwing as C++, right?

I think you have to catch and rethrow explicitly:

import std.stdio;

void main() {
    try {
        try {
            throw new Exception("Yo");
        } catch (Exception e) {
            writeln("Rethrowing");
            throw e;
        }
    } catch (Exception e) {
        writeln(e.msg);
    }
}

Rethrowing
Yo

Keeping in mind that it's possible to catch Throwable as well, but it's considered less sanitary because it would catch Errors as well, which is supposed to mean "unrecoverable error". There are long discussions about whether one should do that or not...

Ali
Re: try & catch / repeating code - DRY
On Wednesday, May 23, 2018 04:07:25 Ali Çehreli via Digitalmars-d-learn wrote:
> On 05/23/2018 12:47 AM, Robert M. Münch wrote:
> > On 2018-05-22 18:34:34 +, Ali Çehreli said:
> >> An idiom known in C++ circles is a Lippincott function:
> >> https://cppsecrets.blogspot.ca/2013/12/using-lippincott-function-for.html
> >>
> >> Just wanted to mention that it can be a part of a clean solution.
> >
> > Thanks, and I assume that D has the same property WRT exception
> > re-throwing as C++, right?
>
> I think you have to catch and rethrow explicitly:
>
> import std.stdio;
>
> void main() {
>     try {
>         try {
>             throw new Exception("Yo");
>         } catch (Exception e) {
>             writeln("Rethrowing");
>             throw e;
>         }
>     } catch (Exception e) {
>         writeln(e.msg);
>     }
> }
>
> Rethrowing
> Yo
>
> Keeping in mind that it's possible to catch Throwable as well but it's considered less sanitary because it would catch Errors as well, which is supposed to mean "unrecoverable error". There are long discussions about whether one should do that or not...

The short answer to that would be that you should never do it. The long answer gets considerably more complicated, and while it _can_ make sense under certain circumstances when you're very careful, it's a minefield of potential problems such that no one who isn't a very advanced D user who really knows what they're doing should even consider it. Increasingly, I tend to think that D should not have had Errors or any Throwables other than exceptions and should have just printed something useful and exited in a way that created a core dump in any case that's supposed to be non-recoverable. :| Either way, I think that we should be clear that doing anything involving catching anything that isn't an Exception or derived from Exception is fraught with peril and only for advanced users.

- Jonathan M Davis
Re: assertNotThrown (and asserts in general)
On Monday, 21 May 2018 at 19:44:17 UTC, Jonathan M Davis wrote:
> Walter wants to use assertions to then have the compiler make assumptions about the code and optimize based on it, but he hasn't implemented anything like that, and there are a number of arguments about why it's a very bad idea - in particular, if it allows the compiler to have undefined behavior if the assertion would have failed if it were left in. So, what is actually going to happen with that is unclear. There are folks who want additional performance benefits by allowing assertions to work as hints to the compiler, and there are folks who want them to truly just be for debugging purposes, because they don't want the compiler to then generate code that makes the function behave even more badly when the assertion would have failed but had been compiled out.

If your code is based on untrue assumptions, you probably have a bug anyway. If you used asserts and an optimization brought it in, you will at least find it as soon as you remove the release flag. It shouldn't be a problem to make it a compiler flag for those who don't want it: defaulted to true with -O3 but able to be turned off with -fno-assert-optimize or something like that.

> Personally, my big concern is that it can't introduce undefined behavior, or it would potentially violate memory safety in @safe code, which would then mean that using assertions in @safe code could make your code effectively @system, which would defeat the whole purpose of @safe.

Fair point, that probably limits the optimizations that can be done. If I have an assert that an array has 10 elements when it actually has only 3 and do some operations on it, that could read/write to memory I have never allocated. However, some optimizations should still be possible in SafeD, like ignoring if conditions whose results are known at compile time if the asserts are true.
Or loop unrolling and auto-vectorization without checking for the rest should also be possible if you have an assert that the length of an array is divisible by something. Neither of them should be able to add unsafe instructions. The worst that could happen is relying on a wrong value to access an element of an array and failing a bounds check.

> assertNotThrown doesn't use any assertions. It explicitly throws an AssertError (which is what a failed assertion does when it's not compiled out). assertNotThrown would have to use a version(assert) block to version the checks to try and mirror what the assert statement does. However, assertNotThrown is specifically intended for unit tests. IIRC, assertions in unit tests are left in when compiled with -unittest (otherwise, compiling with -release and -unittest - like Phobos does for one of its passes as part of its unittest build - would not work), but I don't think that the assertions outside of unittest blocks get left in in that case, so using version(assert) on assertThrown or assertNotThrown might break them. I'm not sure. Regardless, using them for testing what assertions do is just wrong. You need to test actual assert statements if that's what you want to be testing.

Okay, clearly a misunderstanding on my side then. Thanks for clarifying that.
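[Editor's note: a small sketch of the `version (assert)` mechanism mentioned above. The versioned block is compiled only when assertions are active (it disappears under -release), which is how a helper could mirror what a plain assert statement does. checkPositive is a hypothetical helper, not a Phobos function.]

```d
// The version (assert) block exists only in builds where assert is live,
// so this helper is compiled out under -release, just like assert itself.
void checkPositive(int x)
{
    version (assert)
    {
        import core.exception : AssertError;
        if (x <= 0)
            throw new AssertError("x must be positive");
    }
}

void main()
{
    checkPositive(3);   // fine in any build
    // In a non-release build, checkPositive(0) would throw AssertError;
    // with -release the whole check is compiled out.
}
```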
Re: Locking data
On 24/05/2018 1:20 AM, Malte wrote:
> On Tuesday, 22 May 2018 at 21:45:07 UTC, IntegratedDimensions wrote:
>> an idea to lock data by removing the reference: class A { Lockable!Data data; } [...]
>
> This sounds like you are looking for is an atomic swap. Afaik it doesn't exist in the standard library. You could use asm for the XCHG, but that would make your code x86 dependent. I think the easiest way would be to just use a mutex and tryLock.

What are you talking about? :p
http://dpldocs.info/experimental-docs/core.atomic.cas.1.html
Re: Locking data
On Tuesday, 22 May 2018 at 21:45:07 UTC, IntegratedDimensions wrote:
> an idea to lock data by removing the reference: class A { Lockable!Data data; } [...]

This sounds like what you are looking for is an atomic swap. Afaik it doesn't exist in the standard library. You could use asm for the XCHG, but that would make your code x86-dependent. I think the easiest way would be to just use a mutex and tryLock.
Re: Locking data
On Wednesday, 23 May 2018 at 13:24:35 UTC, rikki cattermole wrote:
> On 24/05/2018 1:20 AM, Malte wrote:
>> On Tuesday, 22 May 2018 at 21:45:07 UTC, IntegratedDimensions wrote:
>>> an idea to lock data by removing the reference: class A { Lockable!Data data; } [...]
>> This sounds like you are looking for is an atomic swap. Afaik it doesn't exist in the standard library. You could use asm for the XCHG, but that would make your code x86 dependent. I think the easiest way would be to just use a mutex and tryLock.
>
> What are you talking about? :p http://dpldocs.info/experimental-docs/core.atomic.cas.1.html

That is compare-and-set. To make an exchange using cas, I first have to read the value, then write to it expecting it to still be the value I read before. That is more instructions than just a swap. If a cas fails, I have to redo everything. An exchange never fails; I just might not get the result I would like to have (null instead of a pointer).
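[Editor's note: to make the comparison concrete, here is a sketch of an unconditional swap built from core.atomic.cas. The loop re-reads and retries until the cas succeeds, which is exactly the extra work relative to a single XCHG that the post describes. atomicSwap is an illustrative name, not a standard-library API.]

```d
import core.atomic;

// Emulate an atomic exchange with a cas retry loop. Unlike XCHG,
// this can spin under contention before it finally succeeds.
size_t atomicSwap(shared(size_t)* slot, size_t newVal)
{
    size_t old;
    do
        old = atomicLoad(*slot);          // read the current value...
    while (!cas(slot, old, newVal));      // ...and retry if it changed
    return old;
}

void main()
{
    shared size_t x = 1;
    immutable old = atomicSwap(&x, 2);
    assert(old == 1);
    assert(atomicLoad(x) == 2);
}
```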
each & opApply
This is a question about the usage of ´each´ https://dlang.org/phobos/std_algorithm_iteration.html#each with a type where different opApply overloads are defined. Say, I have something like this:

´´´
void main()
{
    import std.stdio : writeln;
    import std.algorithm : each;
    auto c = Container();
    c.arr1.length = 50;
    c.arr2.length = 5;
    c.each!((a, b) => writeln(a, b));
    //c.each!(a => writeln(a)); // why does this line not compile?
}

struct El1{}
struct El2{}
struct Container
{
    El1[] arr1;
    El2[] arr2;
    //http://ddili.org/ders/d.en/foreach_opapply.html
    int opApply(int delegate(ref El1, ref El2) operations){ assert(0); }
    int opApply(int delegate(ref El2) operations){ assert(0); }
    int opApply(int delegate(ref El1) operations){ assert(0); }
    int opApply(int delegate(ref El2, ref El1) operations){ assert(0); }
}
´´´

The compilation error on the last line in main is:

/usr/local/opt/dmd/include/dlang/dmd/std/algorithm/iteration.d(966,21): Error: template `D main.__lambda2` cannot deduce function from argument types `!()(El1, El2)`, candidates are:
source/app.d(12,13): `app.main.__lambda2`
source/app.d(12,6): Error: template instance `app.main.each!((a) => writeln(a)).each!(Container)` error instantiating

So... I get the idea that ´each´ looks only at the first opApply overload, right? Is there any possibility to convince it to use a specific one? Say, for the last line in main, to use the third overload of opApply?

By the way, iterating via foreach works as expected: each of

´´´
foreach(El1 el; c){}
foreach(El2 el; c){}
foreach(El1 el1, El2 el2; c){}
foreach(El2 el1, El1 el2; c){}
´´´

compiles and iterates as it should.
Re: Locking data
On 24/05/2018 1:29 AM, Malte wrote: On Wednesday, 23 May 2018 at 13:24:35 UTC, rikki cattermole wrote: On 24/05/2018 1:20 AM, Malte wrote: On Tuesday, 22 May 2018 at 21:45:07 UTC, IntegratedDimensions wrote: an idea to lock data by removing the reference: class A { Lockable!Data data; } [...] This sounds like you are looking for is an atomic swap. Afaik it doesn't exist in the standard library. You could use asm for the XCHG, but that would make your code x86 dependent. I think the easiest way would be to just use a mutex and tryLock. What are you talking about? :p http://dpldocs.info/experimental-docs/core.atomic.cas.1.html That is Compare-and-set. To make an exchange using cas I first have to read the value, then write to it expecting to be still the value I read before. That are more instructions than just a swap. If a cas fails, I have to redo everything. An exchange never fails, I just might not get the result I would like to have (null instead of pointer). So you want a load + store as swap in a single function (that is optimized). In that case, please create an issue on bugzilla (issues.dlang.org).
Re: each & opApply
On 5/23/18 9:37 AM, Alex wrote:
> This is a question is about usage of ´each´ https://dlang.org/phobos/std_algorithm_iteration.html#each with a type where different opApply overloads are defined. Say, I have something like this:
>
> ´´´
> void main()
> {
>     import std.stdio : writeln;
>     import std.algorithm : each;
>     auto c = Container();
>     c.arr1.length = 50;
>     c.arr2.length = 5;
>     c.each!((a, b) => writeln(a, b));
>     //c.each!(a => writeln(a)); // why this line does not compile?
> }
>
> struct El1{}
> struct El2{}
> struct Container
> {
>     El1[] arr1;
>     El2[] arr2;
>     //http://ddili.org/ders/d.en/foreach_opapply.html
>     int opApply(int delegate(ref El1, ref El2) operations){ assert(0); }
>     int opApply(int delegate(ref El2) operations){ assert(0); }
>     int opApply(int delegate(ref El1) operations){ assert(0); }
>     int opApply(int delegate(ref El2, ref El1) operations){ assert(0); }
> }
> ´´´
>
> The compilation error on the last line in the main is:
>
> /usr/local/opt/dmd/include/dlang/dmd/std/algorithm/iteration.d(966,21): Error: template `D main.__lambda2` cannot deduce function from argument types `!()(El1, El2)`, candidates are:
> source/app.d(12,13): `app.main.__lambda2`
> source/app.d(12,6): Error: template instance `app.main.each!((a) => writeln(a)).each!(Container)` error instantiating
>
> So... I get the idea, that ´each´ looks only on the first opApply overload, right?

Apparently, but that's not very good. IMO, it should use the same rules as foreach. In which case, BOTH lines should fail to compile.

> Is there any possibility, to convince it to use a specific one? Say, for the last line in the main, to use the third overload of opApply? By the way, iterating via foreach works as expected: each of
>
> ´´´
> foreach(El1 el; c){}
> foreach(El2 el; c){}
> foreach(El1 el1, El2 el2; c){}
> foreach(El2 el1, El1 el2; c){}
> ´´´
>
> compiles and iterates as it should.

Right, but not foreach(el1, el2; c), which is the equivalent of your each call.

-Steve
Re: each & opApply
On 05/23/2018 06:49 AM, Steven Schveighoffer wrote:
> Apparently, but that's not very good. IMO, it should use the same rules as foreach. In which case, BOTH lines should fail to compile.
>
> -Steve

I think this is a compiler bug (limitation), which I think has been reported already (or similar ones where definition order matters). The outcome is different when one reorders the opApply definitions. It looks like only the first one is successful.

Ali
Re: each & opApply
On Wednesday, 23 May 2018 at 13:49:45 UTC, Steven Schveighoffer wrote:
> Right, but not foreach(el1, el2; c), which is the equivalent of your each call.

Yes. I tried this in the first place and got a compiler error. But it seemed logical to me that if I define two opApply overloads which both match two arguments, then I need to specify which one I want to use. I achieved this by specifying the types inside the foreach... concisely enough for me :) So... I'm looking how to do the same with ´each´, as defining the type of the lambda didn't help.
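[Editor's note: one workaround, sketched below with Container reduced to a single field, is to bypass each and call the desired opApply overload directly with a fully typed delegate; ordinary overload resolution then picks the right one.]

```d
struct El1 { int v; }

struct Container
{
    El1[] arr1;

    // A working opApply for El1 elements (the thread's versions assert(0)).
    int opApply(int delegate(ref El1) dg)
    {
        foreach (ref e; arr1)
            if (auto r = dg(e))
                return r;
        return 0;
    }
}

void main()
{
    auto c = Container([El1(1), El1(2)]);
    int sum;
    // The explicit parameter type selects the overload; each is not involved.
    c.opApply(delegate int(ref El1 e) { sum += e.v; return 0; });
    assert(sum == 3);
}
```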
Re: each & opApply
On 5/23/18 9:59 AM, Alex wrote:
> On Wednesday, 23 May 2018 at 13:49:45 UTC, Steven Schveighoffer wrote:
>> Right, but not foreach(el1, el2; c), which is the equivalent of your each call.
>
> Yes. I tried this in the first place and get a compiler error. But it seemed logical to me, that if I define two opApply overloads, which both matches two arguments, then I need to specify which one I want to use. I achieved this by specifying the types inside the foreach... concisely enough for me :) So... I'm looking how to do the same with ´each´, as defining the type of the lambda didn't help.

In your example, you did not define the types for the lambda (you used (a, b) => writeln(a, b)). But I suspect `each` is not going to work even if you did. In essence, `each` does not know what the lambda requires, especially if it is a typeless lambda. So it essentially needs to replicate what foreach would do -- try each of the overloads, and if one matches, use it; if none or more than one matches, fail. I suspect it's more complex, and I'm not sure that it can be done with the current tools. But it's definitely a bug that it doesn't work when you specify the types.

-Steve
Re: each & opApply
On Wednesday, 23 May 2018 at 14:19:31 UTC, Steven Schveighoffer wrote:
> On 5/23/18 9:59 AM, Alex wrote:
>> On Wednesday, 23 May 2018 at 13:49:45 UTC, Steven Schveighoffer wrote:
>>> Right, but not foreach(el1, el2; c), which is the equivalent of your each call.
>>
>> Yes. I tried this in the first place and get a compiler error. But it seemed logical to me, that if I define two opApply overloads, which both matches two arguments, then I need to specify which one I want to use. I achieved this by specifying the types inside the foreach... concisely enough for me :) So... I'm looking how to do the same with ´each´, as defining the type of the lambda didn't help.
>
> In your example, you did not define the types for the lambda (you used (a, b) => writeln(a, b)). But I suspect `each` is not going to work even if you did.

Yep. Tried this...

> In essence, `each` does not know what the lambda requires, especially if it is a typeless lambda. So it essentially needs to replicate what foreach would do -- try each of the overloads, and if one matches, use it, if none or more than one matches, fail. I suspect it's more complex, and I'm not sure that it can be done with the current tools. But it's definitely a bug that it doesn't work when you specify the types.

Ah... ok. Then, let me file a bug...
Re: each & opApply
On Wednesday, 23 May 2018 at 14:24:18 UTC, Alex wrote: Ah... ok. Then, let me file a bug... Bug filed. https://issues.dlang.org/show_bug.cgi?id=18898
Re: Is HibernateD dead?
On Monday, 7 May 2018 at 18:10:14 UTC, Matthias Klumpp wrote:
> On Saturday, 5 May 2018 at 09:32:32 UTC, Brian wrote:
>> On Thursday, 3 May 2018 at 10:27:47 UTC, Pasqui23 wrote:
>>> Last commit on https://github.com/buggins/hibernated was almost a year ago. So what is the status of HibernateD? Should I use it if I need an ORM? Or would I risk unpatched security risks?
>>
>> You can use Entity & Database library: https://github.com/huntlabs/entity https://github.com/huntlabs/database
>
> I've tried both a while back, and they are still inferior to Hibernated (no surprise there, both projects are very new). [...]

I've looked at this again today, and Entity now seems to have OneToMany/ManyToMany relations (for 18 days), which is great news! I might need to play with this a little again. In any case, if I do port my D code to another ORM, I want the next port to be the last time I ever do that, because it's a lot of work with quite some risk of breakage. It's also really sad that the existing ORMs don't share a common database abstraction library, but well, different people do things differently. In any case, many thanks to Vadim Lopatin for merging the existing PRs into ddbc and Hibernated for now! That makes life easier already :-)
Re: Locking data
On Wednesday, 23 May 2018 at 13:36:20 UTC, rikki cattermole wrote:
> On 24/05/2018 1:29 AM, Malte wrote:
>> On Wednesday, 23 May 2018 at 13:24:35 UTC, rikki cattermole wrote:
>>> On 24/05/2018 1:20 AM, Malte wrote:
>>>> On Tuesday, 22 May 2018 at 21:45:07 UTC, IntegratedDimensions wrote:
>>>>> an idea to lock data by removing the reference: class A { Lockable!Data data; } [...]
>>>> This sounds like you are looking for is an atomic swap. Afaik it doesn't exist in the standard library. You could use asm for the XCHG, but that would make your code x86 dependent. I think the easiest way would be to just use a mutex and tryLock.
>>> What are you talking about? :p http://dpldocs.info/experimental-docs/core.atomic.cas.1.html
>> That is Compare-and-set. To make an exchange using cas I first have to read the value, then write to it expecting to be still the value I read before. That are more instructions than just a swap. If a cas fails, I have to redo everything. An exchange never fails, I just might not get the result I would like to have (null instead of pointer).
>
> So you want a load + store as swap in a single function (that is optimized). In that case, please create an issue on bugzilla (issues.dlang.org).

No, as I said, that is already one instruction on X86: https://www.felixcloutier.com/x86/XCHG.html Just being able to use that instruction with the standard library would be good. You could also use it with compiler intrinsics. Something like

import ldc.intrinsics;

T* tryGetPtr(T)(T** a)
{
    return cast(T*) llvm_atomic_rmw_xchg!size_t(cast(shared(size_t)*) a, 0);
}

void restorePtr(T)(T** a, T* b)
{
    llvm_atomic_rmw_xchg!size_t(cast(shared(size_t)*) a, cast(size_t) b);
}

I would just go with mutexes unless you really need to go that low level though, much saner.
How to convert ubyte[] to uint?
read fails with both uint and ulong on a 64-bit platform:

Error: template std.bitmanip.read cannot deduce function from argument types !(ulong)(ubyte[8]), candidates are:
C:\ldc2-1.9.0-windows-x64\bin\..\import\std\bitmanip.d(3213,3): std.bitmanip.read(T, Endian endianness = Endian.bigEndian, R)(ref R range) if (canSwapEndianness!T && isInputRange!R && is(ElementType!R : const(ubyte)))

code:

import digestx.fnv;
import std.bitmanip : read;

FNV64 fnv64;
fnv64.start();
fnv64.put(cast(ubyte[]) word);
ubyte[8] arr = fnv64.finish();
auto h = arr.read!ulong;
return cast(uint) h;
Re: How to convert ubyte[] to uint?
On Wednesday, May 23, 2018 19:36:07 Dr.No via Digitalmars-d-learn wrote: > read fails with both uint and ulong on 64bit platform: > > Error: template std.bitmanip.read cannot deduce function from > argument types !(ulong)(ubyte[8]), candidates are: > C:\ldc2-1.9.0-windows-x64\bin\..\import\std\bitmanip.d(3213,3): > std.bitmanip.read(T, Endian endianness = Endian.bigEndian, > R)(ref R range) if (canSwapEndianness!T && isInputRange!R && > is(ElementType!R : const(ubyte))) > > code: > > import digestx.fnv; > import std.bitmanip : read; > FNV64 fnv64; > fnv64.start(); > fnv64.put(cast(ubyte[])word); > ubyte[8] arr = fnv64.finish(); > auto h = arr.read!ulong; > return cast(uint)h; As the template constraint in the error message says, read requires an input range. Static arrays are not input ranges. You need to give it a dynamic array - and since read takes its argument by reference, you can't simply slice the static array and pass it. You need a variable that's a dynamic array. - Jonathan M Davis
Re: How to convert ubyte[] to uint?
On Wednesday, 23 May 2018 at 19:49:27 UTC, Jonathan M Davis wrote:
> On Wednesday, May 23, 2018 19:36:07 Dr.No via Digitalmars-d-learn wrote:
> [...]
> As the template constraint in the error message says, read requires an input range. Static arrays are not input ranges. You need to give it a dynamic array - and since read takes its argument by reference, you can't simply slice the static array and pass it. You need a variable that's a dynamic array.
>
> - Jonathan M Davis

Sorry, the error message wasn't clear to me. When I use a dynamic array I get:

slice of static array temporary returned by fnv64.finish() assigned to longer lived variable arr

What should I use instead?
Re: How to convert ubyte[] to uint?
On 5/23/18 3:53 PM, Dr.No wrote:
> On Wednesday, 23 May 2018 at 19:49:27 UTC, Jonathan M Davis wrote:
>> On Wednesday, May 23, 2018 19:36:07 Dr.No via Digitalmars-d-learn wrote:
>> [...]
>> As the template constraint in the error message says, read requires an input range. Static arrays are not input ranges. You need to give it a dynamic array - and since read takes its argument by reference, you can't simply slice the static array and pass it. You need a variable that's a dynamic array.
>>
>> - Jonathan M Davis
>
> sorry, the error message wasn't clear to me. When I use dynamic arrays I get: slice of static array temporary returned by fnv64.finish() assigned to longer lived variable arr
>
> What should use instead of?

I'm guessing you wrote:

ubyte[] arr = fnv64.finish();

?? You want:

auto arrtmp = fnv64.finish();
auto arr = arrtmp[];

Basically, what you were doing is allocating some stack space to hold a static array that immediately goes out of scope, and then storing a slice to it.

-Steve
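[Editor's note: an alternative that avoids the range/static-array mismatch entirely. std.bitmanip.bigEndianToNative accepts a ubyte[T.sizeof] static array directly, so the result of finish() can be converted without any temporary slice. The byte values below are stand-ins for a real digest.]

```d
import std.bitmanip : bigEndianToNative;

void main()
{
    // Stand-in for fnv64.finish(), which returns ubyte[8].
    ubyte[8] arr = [0, 0, 0, 0, 0, 0, 0, 42];
    ulong h = bigEndianToNative!ulong(arr);  // works on the static array itself
    assert(h == 42);
    uint truncated = cast(uint) h;           // same final cast as in the post
    assert(truncated == 42);
}
```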
Re: Efficient idiom for fastest code
On Wednesday, 23 May 2018 at 10:55:02 UTC, Malte wrote: On Wednesday, 23 May 2018 at 02:24:08 UTC, IntegratedDimensions wrote: [...] I would just do [...] [...] Thanks, I didn't think about using a for loop like that. While it is not the most general it does solve the specific case for a simple step/toggled decision.
Re: Efficient idiom for fastest code
On Wednesday, 23 May 2018 at 03:12:52 UTC, IntegratedDimensions wrote:
> I knew someone was going to say that and I forgot to say DON'T! Saying to profile when I clearly said these ARE cases where they are slow is just moronic. Please don't use default answers to arguments. This was a general question about cases on how to attack a problem WHEN profiling says I need to optimize. Your SO 101 answer sucks! Sorry!
>
> To prove to you that your answer is invalid: I profile my code, it says that it is very slow and shows that it is due to the decision checking... I then have to come here and write up a post trying to explain how to solve the problem. I then get a post telling me I should profile. I then respond I did profile and that this is my problem. A lot of wasted energy when it is better to know a general attack strategy. Yes, some of us can judge if code is needed to be optimized before profiling. It is not difficult. Giving a generic answer that always does not apply and is obvious to anyone trying to do optimization is not helpful. Everyone today pretty much does not even optimize code anymore... this isn't 1979. It's not ok to keep repeating the same mantra. I guess we should turn this in to a meme?
>
> The reason I'm getting on to you is that the "profile before optimization" sounds a bit grade school, specially since I wasn't talking anything about profiling but a general programming pattern to speed up code, which is always valid but not always useful (and hence that is when profiling comes in).

I'm going to ignore the tone of your response, but I am going to say that responding like that isn't going to get you very far. Don't expect others to do likewise. Assuming that your decision function is indeed the bottleneck, you'll see I did actually provide some hints as to how to optimise the case where decision is pure. Even if you can't convince the compiler to inline and expression-combine, as in the case for the other answer, you can memoize it (look in std.functional).
One of the great things about D is that you can force lots of computation to happen at compile time, so in the case where decision is impure, factoring it into pure and impure parts and `enum x = pureFunc(args);`ing the part that can be evaluated at compile time can make a large difference, if you can't convince the optimiser to do it for you. If you still can't do that, consider JITting it (https://github.com/ldc-developers/druntime/blob/e3bfc5fb780967f1b6807039ff00b2ccaf4b03d9/src/ldc/attributes.d#L78) with `-enable-dynamic-compile`, or running the loop in parallel with std.parallelism.
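[Editor's note: the memoize suggestion from std.functional, sketched with a toy predicate standing in for the expensive pure decision function from the thread. Repeated calls with the same argument hit the cache instead of recomputing.]

```d
import std.functional : memoize;

// Toy stand-in for the expensive pure predicate discussed above.
bool decision(int a) pure
{
    return a < 10;  // imagine the pi computation here
}

// memoize caches results keyed by the argument.
alias fastDecision = memoize!decision;

void main()
{
    foreach (i; 0 .. 1000)
        cast(void) fastDecision(i % 5);  // only 5 distinct evaluations needed
    assert(fastDecision(3));
    assert(!fastDecision(42));
}
```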
Re: try & catch / repeating code - DRY
On 2018-05-22 18:33:06 +, Jacob Carlborg said:
> You can always create a function that takes a delegate or lambda and handles the exception in the function. Here are three versions of the same thing, depending on how you want the call site to look.

Hi, great! Thanks for the examples... BTW: Is there a place where such generic and fundamental examples are collected?

void handleException1(alias dg)()
{
    try dg();
    catch (Exception e) { /* handle exception */ }
}

void handleException2(lazy void dg)
{
    try dg();
    catch (Exception e) { /* handle exception */ }
}

void handleException3(scope void delegate () dg)
{
    try dg();
    catch (Exception e) { /* handle exception */ }
}

void main()
{
    handleException1!({ writeln("asd"); });
    handleException1!(() => writeln("asd"));
    handleException2(writeln("asd"));
    handleException3({ writeln("asd"); });
}

What is exactly the difference between handleException1 and 3?

-- 
Robert M. Münch
http://www.saphirion.com
smarter | better | faster