> The combination that performed best was querying for 500 rows at a time
> with 1000 columns, while different combinations, such as 125 rows with 4000
> columns or 1000 rows with 500 columns, were about 15% slower.
I would rarely go above 100 rows, especially if you are asking for 1000 columns.
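
For example, capping each request at 100 row keys could look like the
Python sketch below; fetch_slice is a placeholder for whatever client call
you actually use (Thrift multiget_slice, pycassa multiget, ...):

    def batched(keys, batch_size=100):
        # yield successive groups of at most batch_size row keys
        for i in range(0, len(keys), batch_size):
            yield keys[i:i + batch_size]

    def fetch_all(keys, fetch_slice, batch_size=100):
        results = {}
        for batch in batched(keys, batch_size):
            results.update(fetch_slice(batch))  # one bounded request per batch
        return results
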
Thank you Aaron, your advice about a newer client is really
interesting. We will take it into account!
Here are some numbers from our tests: we found that, more or less, the
inflection point was at 500k elements (rows multiplied by columns
requested), and asking for more than that in a single query made
performance worse.
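
In Python terms, the sizing policy we are considering looks roughly like
this (the 500k budget comes from the tests above and still needs tuning):

    ELEMENT_BUDGET = 500000  # rows * columns per request, from our tests

    def rows_per_request(columns_per_row, budget=ELEMENT_BUDGET):
        # how many rows fit in one request without crossing the budget
        return max(1, budget // columns_per_row)

    # with 1000 columns per row this gives requests of at most 500 rows,
    # which matches the best-performing 500 x 1000 combination
    assert rows_per_request(1000) == 500
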
> In our tests, we found there's a significant performance difference
> between various configurations and we are studying a policy to optimize it.
> The doubt is: if the need to issue multiple requests is caused only by a
> fixable implementation detail, it would be pointless to do this work.
Hi Rob,
of course, we could issue multiple requests, but then we should consider
the optimal way to split the query into smaller ones. Moreover, we should
choose how many sub-queries to run in parallel.
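
As a sketch of what we mean, splitting the query and running the
sub-queries in parallel with a thread pool might look like this
(fetch_slice is a placeholder for a thread-safe client call; the batch
size and worker count are exactly the knobs we would have to tune):

    from concurrent.futures import ThreadPoolExecutor

    def fetch_parallel(keys, fetch_slice, batch_size=500, workers=4):
        batches = [keys[i:i + batch_size]
                   for i in range(0, len(keys), batch_size)]
        results = {}
        with ThreadPoolExecutor(max_workers=workers) as pool:
            for part in pool.map(fetch_slice, batches):
                results.update(part)
        return results
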
In our tests, we found there's a significant performance difference
between various configurations and we are studying a policy to optimize it.
On Tue, Jul 16, 2013 at 4:46 AM, cesare cugnasco
wrote:
> We are working on porting some life science applications to Cassandra,
> but we have to deal with its limits in managing huge queries. Our queries
> are usually multiget_slice ones: many rows with many columns each.
>
You are not getting much
Hi everybody,
We are working on porting some life science applications to Cassandra, but
we have to deal with its limits in managing huge queries. Our queries are
usually multiget_slice ones: many rows with many columns each.
We have seen the system start to slow down until the entry point node
crashes when the queries grow too large.
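
For reference, a minimal version of the kind of query we issue, sketched
with pycassa (the keyspace, column family and sizes are made-up
placeholders):

    import pycassa

    pool = pycassa.ConnectionPool('LifeScienceKS')     # placeholder keyspace
    cf = pycassa.ColumnFamily(pool, 'Measurements')    # placeholder column family

    row_keys = ['sample_%05d' % i for i in range(500)]  # many rows...
    rows = cf.multiget(row_keys, column_count=1000)     # ...many columns each
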