Cassandra is not very good at massive/bulk reads if you need to retrieve and
compute over a large amount of data on multiple machines using something like
Spark or Hadoop (otherwise you'll need to hack and process the SSTables
directly, something which is not "natively" supported; you'll have
to hack your w
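For context, tools like the Spark-Cassandra connector and Hadoop input formats parallelize a full scan by splitting the token ring into ranges, one token-range query per worker. A minimal sketch of just the splitting step, assuming Murmur3Partitioner's ring bounds (the split count and query shape are illustrative, not taken from this thread):

```python
# Sketch: token-range splitting, the technique bulk readers use to spread a
# full Cassandra scan across workers. Murmur3Partitioner tokens span
# [-2**63, 2**63 - 1]; each split would become one query of the form
#   SELECT ... WHERE token(pk) > start AND token(pk) <= end
MIN_TOKEN = -2**63
MAX_TOKEN = 2**63 - 1

def split_token_ring(n_splits):
    """Return n_splits contiguous (start, end] ranges covering the whole ring."""
    span = (MAX_TOKEN - MIN_TOKEN) // n_splits
    ranges = []
    start = MIN_TOKEN
    for i in range(n_splits):
        # Last range absorbs any rounding remainder so the ring is fully covered.
        end = MAX_TOKEN if i == n_splits - 1 else start + span
        ranges.append((start, end))
        start = end
    return ranges

ranges = split_token_ring(8)
```

Each range can then be scanned independently, which is what gives Spark/Hadoop their read parallelism over Cassandra.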
Hi,
So since I upgraded to 2.2-rc2, I get the CASSANDRA-9643 warning:
WARN o.a.c.i.s.f.b.BigTableWriter - Compacting large partition bytes
Some of my partitions may have ~20 million rows, while others may have
only a few hundred. It may grow up to 300 million rows per
partition
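For a sense of scale, a back-of-the-envelope estimate shows why the warning fires at those row counts. The 100-byte average row size below is an assumption for illustration, not a number from this thread; 100 MB is, to my understanding, the 2.2 default for `compaction_large_partition_warning_threshold_mb` in cassandra.yaml:

```python
# Rough partition sizes for the row counts mentioned above.
# AVG_ROW_BYTES is an illustrative assumption, not measured data.
AVG_ROW_BYTES = 100
WARN_THRESHOLD_MB = 100  # assumed 2.2 default compaction_large_partition_warning_threshold_mb

def partition_mb(rows, avg_row_bytes=AVG_ROW_BYTES):
    """Estimated on-disk partition size in MiB for a given row count."""
    return rows * avg_row_bytes / 2**20

twenty_million = partition_mb(20_000_000)    # ~1907 MiB, ~19x the warning threshold
three_hundred_million = partition_mb(300_000_000)  # ~28610 MiB, ~28 GB per partition
```

Even at a modest per-row size, such partitions sit far above the warning threshold, which is why the usual advice is to split the partition key (e.g. time-bucketing) rather than raise the threshold.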
Hi,
3.x beta release date?
2015-06-11 16:21 GMT+02:00 Jonathan Ellis :
> 3.1 is EOL as soon as 3.3 (the next bug fix release) comes out.
>
> On Thu, Jun 11, 2015 at 4:10 AM, Stefan Podkowinski <
> stefan.podkowin...@1und1.de> wrote:
>
>> > We are also extending our backwards compatibility polic
Hi,
I'm streaming a big SSTable using sstableloader, but it's
very slow (~3 MB/s):
Summary statistics:
Connections per host: : 1
Total files transferred: : 1
Total bytes transferred: : 10357947484
Total duration (ms): : 3280229
Average