On Sun, 12 Nov 2017 15:45:40 +0000 Jesper Louis Andersen <jesper.louis.ander...@gmail.com> wrote:

> To elaborate on Jan's point:
>
> If you extended capacity every time you called append, you would have to
> reallocate and copy data quite a lot. So a clever system adds a little
> extra capacity at the end to avoid such copies whenever an append
> happens. It is a trade-off between space usage and processing time. From
> a complexity point of view, you are amortizing the capacity allocation
> over several appends, which yields better (amortized) complexity,
> turning O(n^2) into O(n). A proof sketch: associate a credit invariant
> (or potential) with the excess capacity and note it never goes below 0.
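A minimal Go sketch of the amortized growth being described (the exact growth factors and capacity sequence are a runtime implementation detail and vary between Go versions):

```go
package main

import "fmt"

func main() {
	var s []int
	prevCap := cap(s)
	// Append many elements. The runtime reallocates the backing array
	// only when capacity is exhausted, so reallocations are rare
	// (roughly O(log n) of them) and the total copying work is O(n),
	// i.e. O(1) amortized per append.
	for i := 0; i < 1000; i++ {
		s = append(s, i)
		if cap(s) != prevCap {
			fmt.Printf("len=%d: cap grew %d -> %d\n", len(s), prevCap, cap(s))
			prevCap = cap(s)
		}
	}
}
```

Running this prints only a handful of growth events for a thousand appends, which is the amortization at work: most appends find spare capacity already paid for by an earlier reallocation.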
It will be worse than O(n), right? Old elements will likely have to be copied to the new, bigger slice.

> Typically, systems grow exponentially up to a point and then start using
> something different, when the added power of two becomes too large.
> Either you extend by a constant amount, or you extend by a fibonacci
> factor, depending on the runtime. In short: you start trading the other
> way once the space usage grows too large. The optimization is there to
> handle the common case: extending a slice with a small set of data in
> some loop.

If you allocate 50% more space each time (or use a fibonacci factor), allocation is still exponential in nature but wastes less space. For the loop case it is better to use the loop count as an allocation hint.

[For expanding a map there are similar considerations, but you have to expand it at roughly 2/3 of full capacity.]
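A sketch of the allocation-hint advice above, assuming Go (how much the naive version over-allocates depends on the runtime version, but the hinted version always allocates exactly once):

```go
package main

import "fmt"

func main() {
	n := 10000

	// Naive: let append grow the slice on demand. The backing array is
	// reallocated and copied several times as the slice outgrows it.
	var naive []int
	for i := 0; i < n; i++ {
		naive = append(naive, i)
	}

	// Hinted: the loop count is known up front, so pass it as a capacity
	// hint. The backing array is allocated once and never reallocated.
	hinted := make([]int, 0, n)
	for i := 0; i < n; i++ {
		hinted = append(hinted, i)
	}

	fmt.Println(len(naive), cap(naive))   // cap is typically > n: growth slack
	fmt.Println(len(hinted), cap(hinted)) // cap == n: no copies, no wasted space
}
```

The same hint works for maps: `make(map[K]V, n)` sizes the table for n entries up front, avoiding the incremental rehashing that otherwise kicks in as the load factor is exceeded.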