If capacity allows, increase compaction_throughput_mb_per_sec as the first
tuning step, and if compaction is still falling behind, increase
concurrent_compactors as the second step.
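As a rough sketch (these are the standard cassandra.yaml / nodetool knobs; the
values below are only examples, not recommendations for your cluster):

  # cassandra.yaml
  compaction_throughput_mb_per_sec: 64   # up from the usual default of 16
  concurrent_compactors: 4               # bounded by CPU cores and disks

  # or change the throughput at runtime, without a restart
  nodetool setcompactionthroughput 64
  nodetool getcompactionthroughput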
Regards,
Jim
On Fri, Sep 2, 2022 at 3:05 AM onmstester onmstester via user <
user@cassandra.apache.org> wrote:
> Another thing that comes to my mind
Is it over the max hint window? If it is, it is better to do a full repair.
Check the table system.hints: do you see rows?
As I remember, during an upgrade, writes are stored as hints until the other
nodes in the cluster have finished upgrading, so for safety, change the default
3-hour hint window to a longer value just before starting.
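For example (max_hint_window_in_ms is the standard cassandra.yaml name; the
cqlsh query applies to 2.x, where hints live in the system.hints table, while
newer versions keep hints as files in the hints directory instead):

  # cassandra.yaml: raise the 3-hour default (10800000 ms), e.g. to 24 hours
  max_hint_window_in_ms: 86400000

  # quick check for pending hints on a 2.x node
  cqlsh -e "SELECT count(*) FROM system.hints;"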
Though it is not required to run upgradesstables, upgradesstables -a
will re-write the SSTable files and kick out tombstones; with
SizeTieredCompactionStrategy, the largest files may wait a long time for the
next compaction to purge their tombstones.
So it really depends whether to run it or not; usually upgrades
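For reference, a sketch of that command (keyspace and table names are
placeholders):

  # -a / --include-all-sstables rewrites every SSTable, not only old-format ones
  nodetool upgradesstables -a my_keyspace my_table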
Andrey:
In Cassandra every cell has a timestamp; SELECT writetime(...) shows that
timestamp.
Cassandra merges cells during compaction and, on read, resolves conflicts by
timestamp.
For your example, if you left-pad the write time onto the column value
(writetime + cell value) and then sort, it should return what you see, no?
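A small illustration with made-up names (my_ks.metrics, id and val are
placeholders; writetime() works on regular, non-primary-key columns):

  SELECT val, writetime(val) FROM my_ks.metrics WHERE id = 42;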
Raphael:
Have you found the root cause? If not, here are a few tips based on what I
have experienced before; it may not be the same as your case, I just hope it
is helpful.
1) The app side called the wrong code module.
Get the CQL from system.prepared_statements;
the cql statement is helpful for developers to search t
I remember gocql.DataCentreHostFilter was used. Try adding it to see whether
it reads from the local DC only in your case?
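To pull those prepared statements off a node, a hedged sketch (the
system.prepared_statements table exists from roughly Cassandra 3.10 onward):

  cqlsh -e "SELECT query_string FROM system.prepared_statements;"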
Thanks,
James
On Fri, Aug 5, 2022 at 2:40 PM Raphael Mazelier wrote:
> Hi Cassandra Users,
>
> I'm relatively new to Cassandra and first I have to say I'm really
> impressed by the
My experience with debugging this kind of issue is to turn on tracing. The nice
thing in Cassandra is that you can turn tracing on for only one node and with a
small sampling percentage, i.e.
nodetool settraceprobability 0.05 --- run on only that one node.
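Once sampling is on, the traces land in the system_traces keyspace, so you can
inspect them with something like (the session id is a placeholder):

  cqlsh -e "SELECT session_id, coordinator, request, duration
            FROM system_traces.sessions LIMIT 20;"
  cqlsh -e "SELECT activity, source, source_elapsed
            FROM system_traces.events WHERE session_id = <id>;"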
Hope it helps.
Regards,
James
On Thu, Jul 21, 2022 at 2:50 PM Tolbert
seems pretty much normal.
>
> On Thu, Oct 7, 2021, 4:05 AM Jim Shaw wrote:
>
>> I met a similar issue before. What I did was: reduce the heap size for the
>> rebuild, and reduce stream throughput.
>> But it depends on the version and your environment; it may not be your case,
>> just hope it is helpful.
I met a similar issue before. What I did was: reduce the heap size for the
rebuild, and reduce stream throughput.
But it depends on the version and your environment; it may not be your case,
just hope it is helpful.
ps -ef | grep , and you will see a new java process for the rebuild; see what
memory size is used. If it uses the default, it may
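A hedged sketch of the two knobs mentioned above (the 50 MB/s value is only an
example):

  # throttle streaming during the rebuild
  nodetool setstreamthroughput 50

  # find the java process and check its -Xms/-Xmx flags
  ps -ef | grep java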
If the data size is not big, you may try copying the primary key values to a
file, then copying them back into the table, then running a compaction.
Both the copy and the compaction can have throttles set. If the size is not so
big, you may try getting the partition key values first, then looping over the
partition key values to pull all primary key values into the file.
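A rough sketch with cqlsh COPY (keyspace, table, and column names are
placeholders; INGESTRATE is one of the COPY FROM throttling options):

  COPY my_ks.my_table (part_key, clust_key) TO 'keys.csv';
  COPY my_ks.my_table (part_key, clust_key) FROM 'keys.csv' WITH INGESTRATE = 5000;

and then run the compaction with nodetool compact my_ks my_table.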
On
You may try rolling up the data, i.e. keep only 1 month of data in one table,
and roll old data up into a table that keeps a year of data.
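Just to illustrate the idea, a minimal sketch with made-up table names and
TTLs (adjust to your real schema and retention):

  -- raw points, kept about 1 month
  CREATE TABLE metrics_raw (
      source_id uuid, metric_type text, ts timestamp, value double,
      PRIMARY KEY ((source_id, metric_type), ts)
  ) WITH default_time_to_live = 2678400;    -- 31 days in seconds

  -- rolled-up (e.g. hourly) points, kept about 1 year
  CREATE TABLE metrics_rollup_1h (
      source_id uuid, metric_type text, ts timestamp, avg_value double,
      PRIMARY KEY ((source_id, metric_type), ts)
  ) WITH default_time_to_live = 31536000;   -- 365 days in seconds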
Thanks,
Jim
On Wed, Sep 15, 2021 at 1:26 AM Isaeed Mohanna wrote:
> My cluster column is the time series timestamp, so basically sourceId,
> metric type for partition key and timestamp f
You start C* from a docker command, right? Check the docker logs; you may see
some helpful info.
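For example (the container name is a placeholder):

  docker ps                    # find the container id/name
  docker logs my-cassandra     # startup output and errors usually show up here
  docker logs -f my-cassandra  # follow it live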
On Wed, Aug 18, 2021 at 8:58 AM FERON Matthieu wrote:
> Hello you all,
>
>
> I'm trying to set cassandra on a docker container centos7.
>
> When I start the service, it says Failed but I see the proccess in m
I would try cd'ing into every data file directory of this table, then:
pwd   (to get the path)
df -h "<above path>"
to see whether they all show 2.7 TB?
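The same check as a hedged one-liner (assuming the default data directory
/var/lib/cassandra/data; keyspace and table names are placeholders):

  for d in /var/lib/cassandra/data/my_keyspace/my_table-*/; do df -h "$d"; done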
Thanks,
Jim
On Fri, Aug 6, 2021 at 6:31 PM Kian Mohageri
wrote:
> When running a "nodetool scrub" to repair a table, the following warning
> appears:
>
> ---
A CMS heap that is too large will have long GC pauses. You may try reducing the
heap on one node to see, or move to G1 GC if that is the easy way.
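As a sketch, the switch usually lives in jvm.options (or cassandra-env.sh on
older versions); the pause target below is only an example:

  # comment out the CMS flags, then enable G1
  -XX:+UseG1GC
  -XX:MaxGCPauseMillis=500
  # and try a smaller -Xmx/-Xms on the one test node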
Thanks,
Jim
On Tue, Aug 3, 2021 at 3:33 AM manish khandelwal <
manishkhandelwa...@gmail.com> wrote:
> Long GC (1 seconds /2 seconds) pauses seen during repair on the
> coordinator. Runn
I think Erick posted https://community.datastax.com/questions/6947/, which
explained it very clearly.
We hit the same issue, only on a huge table, when upgrading, and we changed it
back after we were done.
My understanding is that which option to choose depends on your use case.
If chasing high performance on a big table, t
Shaurya:
What is the purpose of spreading across so many data centers?
With RF=3 within one data center, you have 3 copies of the data.
If you have 3 DCs, that means 9 copies of the data.
Think about the space wasted, and the network bandwidth wasted, for that number
of copies.
BTW, ours is just 2 DCs, for regional DR.
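To make the copy count concrete (keyspace and DC names are placeholders):

  -- 3 DCs x RF=3 per DC = 9 copies of every row
  CREATE KEYSPACE my_ks WITH replication = {
      'class': 'NetworkTopologyStrategy',
      'dc1': 3, 'dc2': 3, 'dc3': 3
  };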
Thanks,
Jim
On Wed, Jul 1