I think that it's not necessarily non-idiomatic to use closures instead of 
interfaces in Go; it's more that Go has had interfaces longer than it has 
had closures, so more code has been written that way.

In Angular 2+ you have the option of embedding the HTML, CSS and TS code 
in one file, or instead having four files, with the main file importing 
the other three. I don't like the split for several reasons, but mainly 
because it creates more complex interlinking between the files, and if 
your application is big enough, all that extra import work also costs 
real time, though in Go that cost is small in the first place thanks to 
how efficient its compiler is.

The way I see it, closures and interfaces are two ways to do exactly the 
same thing: binding namespaces together. Interfaces require more accessory 
declarations and lines of code. That's not a huge overhead, and for 
exported identifiers there is one upside, though I don't think it is that 
great: golint (not gofmt, which only formats) wants to see a doc comment 
on every exported function. Which is a nice idea in theory, but in my 
opinion, if you need comments to explain what a function does, 
your naming scheme sucks.
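
To show what I mean, here is a minimal sketch of the two styles, using a 
throwaway counter of my own invention (none of these names come from my 
actual code):

    // Interface style: a type declaration, a concrete implementation and
    // a method set, each of which golint wants documented if exported.
    type Counter interface {
        Next() int
    }

    type counter struct{ n int }

    func (c *counter) Next() int { c.n++; return c.n }

    // Closure style: the state is captured by the returned function, so
    // there is no type or method set to declare, and the pointer passing
    // is implicit.
    func NewCounter() func() int {
        n := 0
        return func() int { n++; return n }
    }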

That's also why I created distinct comparators with meaningful names 
instead of a single comparator with named return values. 
b.Compare(data, cursor) == EQUAL is not just longer than 
b.IsEqual(data, cursor), it's harder to read. Technically it may also 
consume more code cache space to implement the extra operations, though 
on the other side there is a small overhead to having three functions 
instead of one. The compare function has to handle three cases, most 
concisely expressed as 

    switch {
    case b.Store(c.index) > d:
        return GREATER
    case b.Store(c.index) < d:
        return LESSER
    }
    return EQUAL

If my function only needs to know whether the values are equal (for 
example during a search tree walk), it has just done two comparison/branch 
operations for absolutely no reason, and if I reorder the cases to benefit 
search, I raise the cost of deciding which direction to step next when a 
node doesn't match.
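
For the search tree walk, a dedicated equality comparator is trivial. A 
minimal sketch, with Bucket and Cursor as hypothetical stand-ins for my 
actual structures:

    // Bucket is a hypothetical container of sorted values; Store returns
    // the value held at the given index.
    type Bucket struct{ data []int }

    func (b *Bucket) Store(i int) int { return b.data[i] }

    // Cursor marks a position within the bucket.
    type Cursor struct{ index int }

    // IsEqual answers the one question a search walk asks at each node,
    // with a single comparison instead of the two a full three-way
    // compare can burn.
    func (b *Bucket) IsEqual(d int, c Cursor) bool {
        return b.Store(c.index) == d
    }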

I worked for a little while on the C++ server application for the Steem 
network node, where I intended to remove a whole swathe of code relating 
to protocol changes at various hard forks. I ran across so many poorly 
ordered if/then chains (not even using switch!) performing unnecessary 
comparisons in the common case that, to me, it explained the ballooning 
amount of time required to replay the blockchain into the shared database 
file: literally over a week at that point, almost nine months ago, and 
last I heard 270GB of memory is required, because the playback takes a 
stupid amount of time (effectively an eternity) unless the shared file is 
stored in a ramdisk.

So, just to be clear, I am not avoiding OOPish techniques because I am a 
believer; I don't think convention should dictate effective programming 
either. Closures are a more compact notation than interfaces, and for me 
that compactness matters as much as not wasting cycles for the sake of 
some arbitrary model, one that does not improve maintainability and 
readability enough to justify overhead that stacks up the more the 
approach is used.

Heaps certainly have some disadvantages compared to bucket sorts and 
reference-based trees, but the one thing they do have is data locality, 
which can be a huge advantage because it avoids CPU cache misses. Cache 
memory on my i5 CPU writes at 14GB/s; the DDR4-2400 main memory in my 
system writes at 4GB/s, and that is for linear writes. So not only does a 
conventional binary tree architecture put a very high bandwidth load on 
main memory, it also seeks the data randomly, which adds even more delay 
because memory is optimised for sequential access.
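
To make the locality point concrete, here is a sketch of the two layouts 
(my own illustration, not code from any project mentioned here):

    // Reference-based tree: each step of a walk dereferences a pointer
    // that can land anywhere in memory, so every node risks a cache miss.
    type treeNode struct {
        value       int
        left, right *treeNode
    }

    // Array-backed heap: one contiguous slice, with the children of the
    // node at index i sitting at 2*i+1 and 2*i+2, so a walk down the
    // heap touches memory the prefetcher has likely already loaded.
    type heap []int

    func (h heap) children(i int) (left, right int) {
        return 2*i + 1, 2*i + 2
    }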

To illustrate my point, here is an article I found on how data locality 
brought massive performance benefits to a reverse proxy server: 
https://queue.acm.org/detail.cfm?id=1814327 
which I found via this Stack Overflow topic: 
https://stackoverflow.com/questions/6531543/efficient-implementation-of-binary-heaps

In that case, exploiting the properties of CPU and memory caching yielded 
a net boost of nearly 10x. Based on the ratio of cache to memory write 
speed on my system (14/4, about 3.5), I can't see how I won't get, at bare 
minimum, something close to a 3x improvement over reference-based binary 
trees, and potentially a similar improvement over bucket sorting (which is 
how Cuckoo Cycle searches for cycles in hash chains). 

I don't know what result it will actually have, but I am building it 
anyway, and I will use closures because I personally prefer the 
notation.

On Wednesday, 25 April 2018 10:48:28 UTC+3, rog wrote:
>
> On 25 April 2018 at 08:05, Louki Sumirniy
> <louki.sumir...@gmail.com> wrote:
> >
> > https://stackoverflow.com/questions/6531543/efficient-implementation-of-binary-heaps
> >
> > Pretty much what I'm working on here is this, except with left to
> > right sort instead of vertical. I think this guy's work will help me
> > iron out the performance issues.
>
> You do know that heaps aren't a great data structure for searching, right?
>
> > Another thing, that is more on topic more specifically, is that
> > collections of interface methods introduce a significant overhead,
> > compared to simply passing the pointer to the structure as a
> > parameter. I am thinking that maybe a way to hide this parameter
> > passing is by using closures, which bind in the namespace from a
> > hypothetical initialiser function, without actually having to specify
> > the pointer passing across. The structure is created and allocated by
> > the initialising function (constructor, if you like) and because all
> > of the functions are declared within the namespace as closures, the
> > compiler implicitly passes the pointer to the struct without having
> > to specify it.
>
> You don't need to do this. You're still thinking in traditional OO
> terms. I'd suggest trying to think in a more Go like way instead. FWIW
> I tried to point you in a more sensible direction with this code:
> https://play.golang.org/p/wv0T8T8Ynns
>
> > I don't know exactly what FP paradigm says about structures and
> > collections of functions exactly, as prima facie it looks like OOP.
> > But closures are a uniquely FP mechanism, and really they are just a
> > way to merge namespaces, and by doing this we don't have to pass the
> > pointer to the function explicitly, as it is already accessible.
>
> What gives you the idea that closures are a uniquely FP mechanism?
>
>   rog.
>
