On Sat, 17 Dec 2016 08:31 pm, Peter Otten wrote:
> Steve D'Aprano wrote:
>
>> I have experimented with two solutions, and hope somebody might be able
>> to suggest some improvements. Attached is the test code I ran;
>> suggestions for improving the performance will be appreciated.
>
> If there was an attachment to this text -- that didn't make it to the list.
Steve D'Aprano wrote:
> I have experimented with two solutions, and hope somebody might be able to
> suggest some improvements. Attached is the test code I ran, suggestions
> for improving the performance will be appreciated.
If there was an attachment to this text -- that didn't make it to the
On 16/12/2016 14:27, Steve D'Aprano wrote:
> (2) The clever solution: use a pair of lists, one holding the starting key
> of each group, and the other holding the common values. This saves
> a lot of memory, but is slower.
A concrete example might help. Suppose I have 15 keys in five groups
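The pair-of-lists idea quoted above is essentially run-length encoding
plus a binary search over the group boundaries. A minimal sketch of
that kind of lookup, using the example data from the original post
(illustrative only, not the attached test code):

from bisect import bisect_right

# First key of each contiguous group (kept sorted), and the value shared
# by every key in that group; together they replace a dict that would
# hold one entry per key.
starts = [1, 101, 103, 104, 105]
values = ["foo", "bar", "foobar", "bar", "foo"]

def lookup(key):
    """Find the group containing `key` and return that group's value."""
    i = bisect_right(starts, key) - 1
    if i < 0:
        raise KeyError(key)
    # Note: any key at or beyond the last start falls into the last
    # group; a fuller version would also store an end sentinel.
    return values[i]

print(lookup(50))   # -> 'foo'    (keys 1 through 100 share one entry)
print(lookup(103))  # -> 'foobar'

With a million or two keys collapsing into at most a few thousand
groups, the two lists stay small, but each lookup costs an O(log n)
bisect instead of a dict's O(1) hash probe -- which matches the "saves
a lot of memory, but is slower" trade-off described above.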
On Fri, Dec 16, 2016 at 4:06 AM, Steve D'Aprano wrote:
> I have about a million or two keys, with a few hundred or perhaps a few
> thousand distinct values. The size of each contiguous group of keys with
> the same value can vary from 1 to perhaps a hundred or so.
Going right back to the beginning
On Thursday, 15 December 2016 17:06:39 UTC, Steve D'Aprano wrote:
> I have some key:value data where the keys often are found in contiguous
> ranges with identical values. For example:
>
> {1: "foo",
> 2: "foo",
> 3: "foo",
> # same for keys 4 through 99
> 100: "foo",
> 101: "bar",
> 102: "bar",
> 103: "foobar",
> 104: "bar",
> 105: "foo",
> }
On 12/15/2016 11:57 PM, Terry Reedy wrote:
> On 12/15/2016 4:30 PM, Thomas Nyberg wrote:
>> On 12/15/2016 12:48 PM, Terry Reedy wrote:
>>> A sparse array has at least half missing values. This one has none on
>>> the defined domain, but contiguous duplicates.
I'm sorry for devolving into semantics, but t
On Fri, 16 Dec 2016 04:06 am, Steve D'Aprano wrote:
> I have some key:value data where the keys often are found in contiguous
> ranges with identical values.
[...]
Thank you to everyone who gave suggestions!
I have experimented with two solutions, and hope somebody might be able to
suggest some improvements. Attached is the test code I ran; suggestions
for improving the performance will be appreciated.
On 12/15/2016 4:30 PM, Thomas Nyberg wrote:
> On 12/15/2016 12:48 PM, Terry Reedy wrote:
>> On 12/15/2016 12:27 PM, Thomas Nyberg wrote:
>>> I haven't dealt with a data structure exactly like this, but it's
>>> basically a sparse array.
>> A sparse array has at least half missing values. This one has none on
>> the defined domain, but contiguous duplicates.
On 12/15/2016 12:48 PM, Terry Reedy wrote:
> On 12/15/2016 12:27 PM, Thomas Nyberg wrote:
>> I haven't dealt with a data structure exactly like this, but it's
>> basically a sparse array.
> A sparse array has at least half missing values. This one has none on
> the defined domain, but contiguous duplicates.
On 12/15/2016 12:27 PM, Thomas Nyberg wrote:
> On 12/15/2016 09:06 AM, Steve D'Aprano wrote:
>> Has anyone dealt with data like this and could give a recommendation of the
>> right data structure to use?
> I haven't dealt with a data structure exactly like this, but it's
> basically a sparse array.
A sparse array has at least half missing values. This one has none on
the defined domain, but contiguous duplicates.
On 12/15/2016 12:06 PM, Steve D'Aprano wrote:
> I have some key:value data where the keys often are found in contiguous
> ranges with identical values. For example:
> {1: "foo",
> 2: "foo",
> 3: "foo",
> # same for keys 4 through 99
> 100: "foo",
> 101: "bar",
> 102: "bar",
> 103: "foobar",
> 104: "bar",
> 105: "foo",
> }
Steve D'Aprano wrote:
> I have some key:value data where the keys often are found in contiguous
> ranges with identical values. For example:
>
> {1: "foo",
> 2: "foo",
> 3: "foo",
> # same for keys 4 through 99
> 100: "foo",
> 101: "bar",
> 102: "bar",
> 103: "foobar",
> 104: "bar",
> 105: "foo",
> }
On 12/15/2016 09:06 AM, Steve D'Aprano wrote:
> Has anyone dealt with data like this and could give a recommendation of the
> right data structure to use?
I haven't dealt with a data structure exactly like this, but it's
basically a sparse array. (I'm sure it has a name somewhere in the
academic literature.)
On 2016-12-15 12:06 PM, Steve D'Aprano wrote:
> I have about a million or two keys, with a few hundred or perhaps a few
> thousand distinct values. The size of each contiguous group of keys with
> the same value can vary from 1 to perhaps a hundred or so.
There isn't enough info in your post to be sure
I have some key:value data where the keys often are found in contiguous
ranges with identical values. For example:
{1: "foo",
2: "foo",
3: "foo",
# same for keys 4 through 99
100: "foo",
101: "bar",
102: "bar",
103: "foobar",
104: "bar",
105: "foo",
}
So in this case, the keys 1 through 100 all have the same value, "foo".
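For contrast with the bisect-based sketch above, the dead simple
baseline is to expand every run into an ordinary dict with one entry
per key. The run list below just restates the example data (including
the elided keys 4 through 99); it is an illustration, not code from the
thread:

# Each run is (first_key, last_key, value).
runs = [
    (1, 100, "foo"),
    (101, 102, "bar"),
    (103, 103, "foobar"),
    (104, 104, "bar"),
    (105, 105, "foo"),
]

# One dict entry per key: O(1) lookups, but a million or two keys means
# a million or two entries, which is exactly the memory cost the
# compressed representation tries to avoid.
table = {key: value
         for first, last, value in runs
         for key in range(first, last + 1)}

print(len(table))    # 105
print(table[42])     # -> 'foo'
print(table[103])    # -> 'foobar'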