On 2019-01-25 22:58, Travis Griggs wrote:
Yesterday, I was pondering how to implement groupby, more in the vein of how
Kotlin, Swift, Objc, Smalltalk do it, where order doesn’t matter. For example:
from collections import defaultdict

def groupby(iterable, groupfunc):
    result = defaultdict(list)
    for each in iterable:
        result[groupfunc(each)].append(each)
    return result
original = [1, 2, 3, 4, 5, 1, 2, 4, 2]
groupby(original, lambda x: str(x)) ==> {'1': [1, 1], '2': [2, 2, 2], '3': [3], '4': [4, 4], '5': [5]}
Easy enough, but I found myself obsessing about doing it with a reduce. At one
point, I lost sight of whether that was even a better idea or not (the above is
pretty simple); I just wanted to know if I could do it. My naive attempt didn’t
work so well:
grouped = reduce(
    lambda grouper, each: grouper[str(each)].append(each),
    allValues,
    defaultdict(list))
Since the result of the append() function is None, the second reduction fails,
because the accumulator ceases to be a dictionary.
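A minimal reproduction of that failure, assuming allValues is just a short list of ints:

    from collections import defaultdict
    from functools import reduce

    allValues = [1, 2]
    try:
        reduce(
            lambda grouper, each: grouper[str(each)].append(each),
            allValues,
            defaultdict(list))
    except TypeError as e:
        # After the first step the accumulator is None (append's return
        # value), so the second step does None[str(each)] and blows up.
        print(e)  # 'NoneType' object is not subscriptable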
I persisted and came up with the following piece of evil, using a tuple to move
the dict reference from reduction to reduction while also forcing the (ignored)
side effect of updating the same dict:
grouped = reduce(
    lambda accum, each: (accum[0], accum[0][str(each)].append(each)),
    allValues,
    (defaultdict(list), None))[0]
My question, purely for the sake of learning python3 fu/enlightenment: is there a
simpler way to do this with a reduce? I get that there are lots of ways to do a
groupby. The pursuit here is the simplest/cleverest/sneakiest way to do
it with reduce, especially if the property that groupfunc (str() in this
example) is called only once per item is preserved.
How about this:
grouped = lambda iterable, groupfunc: dict(reduce(
    lambda accum, each: accum[groupfunc(each)].append(each) or accum,
    iterable,
    defaultdict(list)))
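A quick check, assuming the imports from earlier in the thread (defaultdict from
collections, reduce from functools) and the grouped lambda above, reusing the
original list:

    original = [1, 2, 3, 4, 5, 1, 2, 4, 2]
    print(grouped(original, str))
    # {'1': [1, 1], '2': [2, 2, 2], '3': [3], '4': [4, 4], '5': [5]}

Since list.append returns None, the "... or accum" always evaluates to the
accumulator, the dict() call turns the defaultdict back into a plain dict, and
groupfunc is still called exactly once per item.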