Hi Alex,
After I changed one node to TWCS using a JMX command, it started to compact. I
expected the old large sstable files to be split into smaller ones according to
the time buckets, but I still got large sstable files.
JMX command used:
set CompactionParametersJson
{"class":"com.jeffjirsa.cassa
We have run restarts on the cluster and that doesn’t seem to help at all.
We ran repair separately for each table, which usually seems to go through, but
running a repair on the whole keyspace doesn’t.
Anything, anyone?
Hannu
> On 3 Jan 2018, at 23:24, Hannu Kröger wrote:
>
> I can certainly try that.
Quick follow up.
Others in AWS reporting/seeing something similar, e.g.:
https://twitter.com/BenBromhead/status/950245250504601600
So, while we have seen a relative CPU increase of ~50% since Jan 4, 2018, we
have now also applied a kernel update at the OS/VM level on a single node (loadtest
and
The old files will not be split. TWCS doesn’t ever do that.
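For reference, a minimal sketch of making the switch cluster-wide with cqlsh instead of per-node JMX; the keyspace, table, and window settings below are placeholders, and on 2.1/2.2 the backported class from the com.jeffjirsa package would be named instead:

# Placeholder keyspace/table; pick the window unit/size to match your data model.
cqlsh -e "ALTER TABLE my_ks.my_table WITH compaction = {
  'class': 'TimeWindowCompactionStrategy',
  'compaction_window_unit': 'DAYS',
  'compaction_window_size': 1};"

Either way, only sstables written after the change land in the new windows; as noted above, existing files are not re-split.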
> On Jan 9, 2018, at 12:26 AM, wxn...@zjqunshuo.com wrote:
>
> Hi Alex,
> After I changed one node to TWCS using JMX command, it started to compact. I
> expect the old large sstable files will be split into smaller ones according
>
Thanks a lot for the info!
Much appreciated.
On Tue, Jan 9, 2018 at 2:33 AM, Mick Semb Wever
wrote:
>
>
>> Can you please provide some JIRAs for superior fixes and performance
>> improvements which are present in 3.11.1 but are missing in 3.0.15.
>>
>
>
> Some that come to mind…
>
> Cassandra St
Dear Everyone,
We are running Cassandra v2.0.15 on our production cluster.
We would like to reduce the replication factor from 3 to 2 but we are not
sure if it is a safe operation. We would like to get some feedback from you
guys.
Has anybody tried to shrink the replication factor?
Does "nodet
Run repair first to ensure the data is properly replicated, then cleanup.
--
Jeff Jirsa
> On Jan 9, 2018, at 9:36 AM, Alessandro Pieri wrote:
>
> Dear Everyone,
>
> We are running Cassandra v2.0.15 on our production cluster.
>
> We would like to reduce the replication factor from 3 to 2 bu
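A rough sketch of the order of operations being suggested here, with placeholder keyspace and datacenter names (adjust the replication class and settings to match your cluster):

# 1. Ensure every remaining replica has a complete copy of the data.
nodetool repair my_ks
# 2. Lower the replication factor (placeholder keyspace/DC names).
cqlsh -e "ALTER KEYSPACE my_ks WITH replication = {
  'class': 'NetworkTopologyStrategy', 'dc1': 2};"
# 3. On every node, drop the data it no longer owns.
nodetool cleanup my_ks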
Make sure you pick instances with the PCID CPU capability; their TLB flush
overhead is much smaller.
On Tue, Jan 9, 2018 at 2:04 AM, Steinmaurer, Thomas <
thomas.steinmau...@dynatrace.com> wrote:
> Quick follow up.
>
>
>
> Others in AWS reporting/seeing something similar, e.g.:
> https://twit
Hi All,
Has anyone seen any test results for SQL Server? Although I am a Cassandra user
I do use SQL Server for other companies.
Thanks,
-Tony
From: Dor Laor
To: user@cassandra.apache.org
Sent: Tuesday, January 9, 2018 10:31 AM
Subject: Re: Meltdown/Spectre Linux patch - Performance impa
>
> Can you please provide some JIRAs for superior fixes and performance
> improvements which are present in 3.11.1 but are missing in 3.0.15.
>
>
For the security conscious, CASSANDRA-11695 allows you to use Cassandra's
authentication and authorization to lock down JMX/nodetool access instead
of r
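Roughly, the knobs from that ticket live in cassandra-env.sh (Cassandra 3.6+); the lines below follow the commented examples shipped with 3.x, so treat them as a sketch and check the file bundled with your version:

# Enable JMX authentication backed by Cassandra roles (sketch, per the 3.x cassandra-env.sh comments).
JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.authenticate=true"
JVM_OPTS="$JVM_OPTS -Dcassandra.jmx.remote.login.config=CassandraLogin"
JVM_OPTS="$JVM_OPTS -Djava.security.auth.login.config=$CASSANDRA_HOME/conf/cassandra-jaas.config"
# Optional: authorize individual MBean operations through Cassandra permissions.
JVM_OPTS="$JVM_OPTS -Dcassandra.jmx.authorizer=org.apache.cassandra.auth.jmx.AuthorizationProxy"

Access is then granted to roles with CQL, e.g. GRANT EXECUTE ON ALL MBEANS TO ops_role; (role name is a placeholder).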
Hi All,
If using TWCS, will a full repair trigger a major compaction and then compact all
the sstable files into big ones regardless of the time buckets?
Thanks,
-Simon
Full repair on TWCS maintains proper bucketing
--
Jeff Jirsa
> On Jan 9, 2018, at 5:36 PM, "wxn...@zjqunshuo.com"
> wrote:
>
> Hi All,
> If using TWCS, will a full repair trigger major compaction and then compact
> all the sstable files into big ones no matter the time bucket?
>
> Thanks
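For completeness, a full (non-incremental) repair is typically kicked off as below; the flag spelling varies a little between versions, so check nodetool repair --help on your build (keyspace/table names are placeholders):

nodetool repair --full my_ks my_table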
The parent repair session will be on the node that you kicked off the
repair on. Are the logs above from that node? Can you make it a bit clearer
how many nodes are involved, and share the corresponding logs from each node?
On 9 January 2018 at 09:49, Hannu Kröger wrote:
> We have run restarts on the c
Good luck with that. PCID has been out since mid-2017, as I recall?
Daemeon (Dæmœn) Reiydelle
USA 1.415.501.0198
On Jan 9, 2018 10:31 AM, "Dor Laor" wrote:
Make sure you pick instances with the PCID CPU capability; their TLB flush
overhead is much smaller.
On Tue, Jan 9, 2018 at 2:04 AM, Steinmaure
Hard to tell from the first 10 Google search results which Intel CPUs
have it, so I went to check my /proc/cpuinfo; it turns out my >1-year-old Dell XPS
laptop has it. AWS's i3 has it too.
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat
pse36 clflush dts acpi mmx fxsr sse sse2 ss ht
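A quick way to check any box for the flag (plain grep, nothing Cassandra-specific):

grep -qw pcid /proc/cpuinfo && echo "pcid present" || echo "pcid missing"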
Longer than that. Years. Check /proc/cpuinfo
--
Jeff Jirsa
> On Jan 9, 2018, at 11:19 PM, daemeon reiydelle wrote:
>
> Good luck with that. Pcid out since mid 2017 as I recall?
>
>
> Daemeon (Dæmœn) Reiydelle
> USA 1.415.501.0198
>
> On Jan 9, 2018 10:31 AM, "Dor Laor" wrote:
> Make sur
Dear All,
We saw some C* nodes OOM during secondary index creation with C* 2.1.18.
As per https://issues.apache.org/jira/browse/CASSANDRA-12796, the flush writer
will be blocked by an index rebuild, but we still have some questions:
1. Not sure if secondary index creation is the same as an index rebuild
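For reference, the two operations being compared are normally invoked as below; the keyspace, table, index, and column names are placeholders, and older nodetool versions may want the index argument qualified as table.index:

# The initial build happens as part of creating the index.
cqlsh -e "CREATE INDEX my_idx ON my_ks.my_table (my_col);"
# Rebuilding an existing index is a separate, node-local operation.
nodetool rebuild_index my_ks my_table my_idx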