Hi,
I have a question about the commit log.
When are commit log segment files actually removed?
I'm running a single node cluster for a few weeks to test C* performance.
My simple test has been issuing only read and write requests to the
cluster, and the data size (SSTable size) is increasing mon
That is probably because the relevant memtable is flushed to an SSTable and
the content of the commit log is no longer required, so the segment is recycled.
See:
http://docs.datastax.com/en/cassandra/3.x/cassandra/dml/dmlHowDataWritten.html
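To make the recycling rule above concrete, here is a minimal sketch (not Cassandra source code; all names like `Segment` and `mark_dirty` are illustrative) of the bookkeeping: a segment can only be removed once every table whose unflushed writes it contains has been flushed.

```python
# Illustrative model only, not real Cassandra APIs: a commit log segment
# tracks which tables still have unflushed writes in it ("dirty" tables).
# It becomes removable only when no table depends on it for crash replay.

class Segment:
    def __init__(self, seg_id):
        self.seg_id = seg_id
        self.dirty_tables = set()  # tables with unflushed writes in this segment

    def mark_dirty(self, table):
        self.dirty_tables.add(table)

    def mark_clean(self, table):
        self.dirty_tables.discard(table)

    def is_unused(self):
        return not self.dirty_tables


def flush(table, segments):
    """Simulate a memtable flush: `table` no longer needs any segment."""
    for seg in segments:
        seg.mark_clean(table)
    # Segments with no remaining dirty tables are eligible for removal.
    return [s for s in segments if not s.is_unused()]


segments = [Segment(1), Segment(2)]
segments[0].mark_dirty("ks.t1")
segments[0].mark_dirty("ks.t2")
segments[1].mark_dirty("ks.t2")

remaining = flush("ks.t1", segments)   # segment 1 is still dirty for ks.t2
print([s.seg_id for s in remaining])   # [1, 2]
remaining = flush("ks.t2", remaining)  # now every table is flushed
print([s.seg_id for s in remaining])   # []
```

This is why segment files can linger even after "a" flush: a single segment often carries writes for several tables, and all of them must flush before the file goes away.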
Thu, Dec 8, 2016 at 17:45 Satoshi Hikida :
> Hi,
>
> I have a questio
Yes, I use compression.
Tried without it and it gave a ~15% increase in speed, but it is still too
low (~35 Mbps).
On sending side no high CPU/IO/etc utilization.
But on receiving node I see that one "STREAM-IN" thread takes 100% CPU and
it just doesn't scale by design since "Each stream is a single thread" (
http://www.mail-archive.com/user@cassandra.apache.org/msg42095.html)
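For a sense of scale on why a single-threaded ~35 Mbps stream hurts: the wall-clock cost is just payload over rate, and only adding independent streams (more source nodes) lets the receiver use more cores. A quick back-of-the-envelope; the 100 GB payload is an assumed example, not a figure from this thread:

```python
# Rough arithmetic: time to stream a payload at a fixed per-stream rate.
# The 100 GB payload is hypothetical; 35 Mbps is the rate reported above.
payload_gb = 100
rate_mbps = 35  # observed per-stream throughput, in megabits/second

payload_megabits = payload_gb * 1000 * 8
seconds = payload_megabits / rate_mbps
print(f"{seconds / 3600:.1f} hours for one stream")  # 6.3 hours

# With N independent streams the receiver can use N cores, so the
# ideal time divides by N:
for n in (2, 4):
    print(f"{seconds / n / 3600:.1f} hours with {n} parallel streams")
```

The point of the linked thread stands: no amount of CPU on the receiver helps a *single* stream, because each stream is pinned to one thread.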
Maybe your system cannot stream faster.
Just an educated guess: do you have materialized views? They are known to
stream very slowly.
On 08.12.2016 10:28, "Aleksandr Ivanov" wrote:
> Yes, I use compression.
> Tried without and it gave ~15% increase in speed, but is still too low
> (~35Mbps)
>
> On sending side no high CPU/IO/etc utilizatio
Nope, no MVs
On Thu, Dec 8, 2016 at 11:31 AM, Benjamin Roth
wrote:
> Just an educated guess: do you have materialized views? They are known to
> stream very slowly.
>
> On 08.12.2016 10:28, "Aleksandr Ivanov" wrote:
>
>> Yes, I use compression.
>> Tried without and it gave ~15% increase in speed, bu
> The reason you don't want to use SERIAL in multi-DC clusters
I'm not a fan of blanket statements like that. There is a high cost to SERIAL
consistency in multi-DC setups, but if you *need* global linearizability, then
you have no choice and the latency may be acceptable for your use case. Take
t
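For readers following along, the choice being weighed is between cqlsh's two serial consistency levels for lightweight transactions. A minimal sketch (the table and values below are made up for illustration):

```sql
-- Paxos rounds for LWTs are governed by the *serial* consistency level.
SERIAL CONSISTENCY LOCAL_SERIAL;  -- Paxos confined to the local DC (lower latency)
-- SERIAL CONSISTENCY SERIAL;     -- cluster-wide Paxos (global linearizability)

-- Hypothetical table: the IF clause is what makes this an LWT.
UPDATE accounts SET owner = 'bob'
WHERE id = 42
IF owner = 'alice';
```

LOCAL_SERIAL avoids cross-DC round trips during the Paxos phase, but only SERIAL gives linearizability across all datacenters, which is exactly the trade-off described above.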
Fellow C* users,
I've got a cluster of C* 3.5 serving a single keyspace with a DTCS table
and no deletes. We knew that data does not expire on time (even after
gc_grace_period) - that's something we wanted to investigate later and
eventually let C* keep it longer. The moment when C* decided to
remove o
/* Sorry for the mess, accidental Ctrl-Enter */
Fellow C* users,
I've got a cluster of C* 3.5 serving a single keyspace with a DTCS table
and no deletes. We knew that data does not expire on time (even after
gc_grace_period) - that's something we wanted to investigate later and
eventually let C* keep
On Wed, Dec 7, 2016 at 6:35 PM, Sotirios Delimanolis
wrote:
> I have a couple of SSTables that are humongous
>
> -rw-r--r-- 1 user group 138933736915 Dec 1 03:41 lb-29677471-big-Data.db
> -rw-r--r-- 1 user group 78444316655 Dec 1 03:58 lb-29677495-big-Data.db
> -rw-r--r-- 1 user group 212429252
On Thu, Dec 8, 2016 at 3:27 AM, Aleksandr Ivanov wrote:
> On sending side no high CPU/IO/etc utilization.
> But on receiving node I see that one "STREAM-IN" thread takes 100% CPU and
> it just doesn't scale by design since "Each stream is a single thread"
> (http://www.mail-archive.com/user@cassan
Hi Everyone,
I'm performing a repair as I usually do, but this time I got a weird
notification:
"Requested range intersects a local range but is not fully contained in
one; this would lead to imprecise repair".
I've never encountered this before during a repair.
The repair command that I ran is:
*n
What do you mean?
I'm logging the list of files when creating the CompactionTask and it's showing
these
-rw-r--r-- 1 user group 3540336 Dec 7 23:40 lb-29715834-big-Data.db
-rw-r--r-- 1 user group 5997853 Dec 7 22:07 lb-29715833-big-Data.db
-rw-r--r-- 1 user group 5210561 Dec 7 2
On Thu, Dec 8, 2016 at 5:10 AM, Sylvain Lebresne
wrote:
> > The reason you don't want to use SERIAL in multi-DC clusters
>
> I'm not a fan of blanket statements like that. There is a high cost to SERIAL
> consistency in multi-DC setups, but if you *need* global linearizability, then
> you hav
Hi, Yoshi
Thank you for your reply.
Of course I read the document you linked, but I'm still confused about the
deletion timing of the commit log segment files, because the total size of
the commit log segment files grows to 3 GB even though there are many
memtable flushes.
Anyway I'll check the del
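Growth to several GB despite frequent flushes is consistent with the commit log being bounded by configuration rather than by individual flushes. As a hedged reminder, these are the relevant cassandra.yaml knobs; the values shown are the 3.x defaults as I recall them, so verify against your own config:

```yaml
# cassandra.yaml - commit log sizing (3.x defaults, from memory; verify locally)
commitlog_segment_size_in_mb: 32      # size of each segment file
commitlog_total_space_in_mb: 8192     # cap on total segment space; when the
                                      # cap is hit, the oldest segments force
                                      # flushes of the memtables they cover
                                      # and are then recycled/removed
```

With an 8 GB default cap, seeing the segment files accumulate to 3 GB before removal is expected behavior, not a leak.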
On Fri, Dec 9, 2016 at 1:35 AM, Edward Capriolo
wrote:
>
> I copied the wrong issue:
>
> The core issue was this: https://issues.apache.
> org/jira/browse/CASSANDRA-6123
>
Well, my previous remark applies equally well to this ticket so let me just
copy-paste:
"That ticket has nothing to do with L