Sorry, should have first looked at the source code. In case of 0, it is set to
Double.MAX_VALUE.
Thomas
From: Steinmaurer, Thomas [mailto:thomas.steinmau...@dynatrace.com]
Sent: Monday, 11 June 2018 08:53
To: user@cassandra.apache.org
Subject: compaction_throughput: Difference between 0 (unthro
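For illustration, a minimal Python sketch of the convention described above (a configured value of 0 meaning "unthrottled"); this is not the actual Cassandra source, which handles this in Java:

```python
def effective_compaction_throughput(configured_mb_per_sec: float) -> float:
    """Mirror the convention described above: a configured value of 0
    disables throttling by treating the limit as effectively infinite."""
    if configured_mb_per_sec <= 0:
        return float("inf")  # analogous to Double.MAX_VALUE in the Java code
    return configured_mb_per_sec

# Example: 0 -> unthrottled, 16 -> capped at 16 MB/s
assert effective_compaction_throughput(0) == float("inf")
assert effective_compaction_throughput(16) == 16
```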
Hi,
we've had this issue with large partitions (100 MB and more). Use
nodetool tablehistograms to find partition sizes for each table.
If you have enough heap space to spare, try increasing this parameter:
file_cache_size_in_mb: 512
There's also the following parameter, but I did not test the im
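A rough Python sketch of the tablehistograms check suggested above (keyspace and table names are placeholders; it assumes nodetool is on PATH, and the exact output layout varies slightly between versions):

```python
import subprocess

# Placeholders: replace with your own keyspace and tables.
TABLES = [("my_keyspace", "events"), ("my_keyspace", "users")]

for keyspace, table in TABLES:
    out = subprocess.run(
        ["nodetool", "tablehistograms", keyspace, table],
        capture_output=True, text=True, check=True,
    ).stdout
    # The "Partition Size" column in the 99% and Max rows is what reveals
    # oversized partitions; surface just those rows for each table.
    interesting = [line for line in out.splitlines()
                   if line.strip().startswith(("99%", "Max"))]
    print(f"--- {keyspace}.{table}")
    print("\n".join(interesting))
```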
thanks Martin.
The 99th percentile of all tables is about even in size; Max is always higher in
all tables.
The question is: how do I identify which table is throwing this "Maximum
memory usage reached (512.000MiB)" message?
On Mon, Jun 11, 2018 at 5:59 AM, Martin Mačura wrote:
> Hi,
> we've had this
Sorry, I didn't mean to hijack the thread, but I have seen similar issues and
always ignored them because they weren't really causing any problems. But I am
really curious how to find these.
On Mon, Jun 11, 2018 at 9:45 AM, Nitan Kainth wrote:
> thanks Martin.
>
> 99 percentile of all tables are e
I have hit dead-ends everywhere I turned on this issue.
We had a 15-node cluster that was doing 35 ms all along for years. At some
point, we made a decision to shrink it to 13. Read latency rose to near 70
ms. Shortly after, we decided this was not acceptable, so we added the
three nodes back in
Did you run cleanup too?
On Mon, Jun 11, 2018 at 10:16 AM, Fred Habash wrote:
> I have hit dead-ends everywhere I turned on this issue.
>
> We had a 15-node cluster that was doing 35 ms all along for years. At
> some point, we made a decision to shrink it to 13. Read latency rose to
> near 70
Yes we did after adding the three nodes back and a full cluster repair as well.
But even if we didn't run cleanup, would the fact that some nodes still have
SSTables they no longer need have impacted read latency?
Thanks
Thank you
From: Nitan Kainth
Sent: Monday, Ju
>Finally, can I run mixed DataStax and Apache nodes in the same cluster, same
>version?
>Thank you for all your help.
I have run DSE and Apache Cassandra in the same cluster while migrating to DSE.
The versions of Cassandra were the same. It was relatively brief -- just during
the upgrade proce
I think it would, because Cassandra will process more SSTables to serve read
queries.
Now, after cleanup, if the data volume is the same and compaction has been
running, I can't think of any more diagnostic steps. Let's wait for other
experts to comment.
Can you also check the SSTable count?
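A quick way to pull per-table SSTable counts, sketched in Python around nodetool tablestats (the keyspace name is a placeholder; on 2.x the equivalent command is nodetool cfstats, and output labels may differ slightly):

```python
import subprocess

KEYSPACE = "my_keyspace"  # placeholder

out = subprocess.run(
    ["nodetool", "tablestats", KEYSPACE],
    capture_output=True, text=True, check=True,
).stdout

table = None
for line in out.splitlines():
    line = line.strip()
    if line.startswith("Table:"):
        table = line.split(":", 1)[1].strip()
    elif line.startswith("SSTable count:") and table:
        count = line.split(":", 1)[1].strip()
        print(f"{KEYSPACE}.{table}: {count} sstables")
```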
I will check for both.
On a different subject, I have read some user testimonies that running
‘nodetool cleanup’ requires a C* process reboot at least around 2.2.8. Is this
true?
Thank you
From: Nitan Kainth
Sent: Monday, June 11, 2018 10:40 AM
To: user@cassandra.apache.org
S
No
--
Jeff Jirsa
> On Jun 11, 2018, at 7:49 AM, Fd Habash wrote:
>
> I will check for both.
>
> On a different subject, I have read some user testimonies that running
> ‘nodetool cleanup’ requires a C* process reboot at least around 2.2.8. Is
> this true?
>
>
>
> Than
Really wild guess: do you monitor I/O performance, and are you positive it has
stayed the same over the past year (network becoming a little busier, hard
drive a bit slower, and so on)?
Wild guess 2: new 'monitoring' software (a log-shipping agent, for
instance) added on the box in the meantime?
On 11 June 2
Hello,
We have been working on a distributed data proxy for Cassandra. A data
proxy is a combination of proxy and caching that also takes care of data
consistency and invalidation for inserts and updates. In addition, the data
proxy is distributed based on consistent hashing and using gossip betwee
Hello Chidamber
When you said "In addition, the data proxy is distributed based on
consistent hashing and using gossip between data proxy nodes to keep the
cached data unique (per node) and consistent", did you re-implement
consistent hashing and the gossip algorithm from scratch in your proxy layer?
Hi Duy,
Yes, we have implemented consistent hashing and gossip in FPGA, since we
implement the read pipeline in FPGA.
Chidamber
On Mon, Jun 11, 2018 at 9:22 PM, DuyHai Doan wrote:
> Hello Chidamber
>
> When you said "In addition, the data proxy is distributed based on
> consistent hashing and
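For readers unfamiliar with the technique being discussed, here is a minimal software sketch of a consistent-hash ring; it is purely illustrative and says nothing about the FPGA implementation described above (node names and vnode count are made up):

```python
import bisect
import hashlib

class HashRing:
    """Minimal consistent-hash ring: each node owns the arc of the ring up to
    its token, so adding/removing a node only remaps nearby keys."""

    def __init__(self, nodes, vnodes=8):
        self._tokens = []   # sorted hash positions on the ring
        self._owners = {}   # token -> node
        for node in nodes:
            for i in range(vnodes):
                token = self._hash(f"{node}#{i}")
                self._owners[token] = node
                bisect.insort(self._tokens, token)

    @staticmethod
    def _hash(key: str) -> int:
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def node_for(self, key: str) -> str:
        # Walk clockwise to the first token >= hash(key), wrapping around.
        token = self._hash(key)
        idx = bisect.bisect(self._tokens, token) % len(self._tokens)
        return self._owners[self._tokens[idx]]

ring = HashRing(["proxy-a", "proxy-b", "proxy-c"])
print(ring.node_for("some-partition-key"))
```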
It's possible that it's something more subtle, but keep in mind that
sstables don't include the schema. If you've made schema changes, you need
to apply/revert those first or C* probably doesn't know what to do with
those columns in the sstable.
On Sun, Jun 10, 2018 at 11:38 PM, wrote:
> Dear C
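A hedged sketch of the point above, using the Python driver to put the schema back in the expected state before loading old SSTables (the contact point, keyspace, table, and column names are hypothetical; adjust to your own schema):

```python
from cassandra.cluster import Cluster  # pip install cassandra-driver

# Hypothetical names; replace with your own cluster and keyspace.
session = Cluster(["127.0.0.1"]).connect("my_keyspace")

# If a column was dropped after the snapshot was taken, re-add it first;
# otherwise the restored SSTables contain cells the current schema does
# not know what to do with.
session.execute("ALTER TABLE users ADD age int")

# ...then restore, e.g. with sstableloader or by copying the files into
# place and running `nodetool refresh`.
```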
Hi,
I know Cassandra can make use of multiple disks. My data disk is almost full
and I want to add another 2TB disk. I don't know what will happen after the
addition.
1. Will C* write to both disks until the old disk is full?
2. And what will happen after the old one is full? Will C* stop writing
Hi Vishal!
Did you copy the sstables into the data directory?
Another thing: check that the table ID in the directory name is the same as
the one Cassandra has in its metadata.
https://docs.datastax.com/en/dse/5.1/cql/cql/cql_using/useCreateTableCollisionFix.html
On Mon, 11 Jun 2018 at
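A small Python sketch of the ID check suggested above: compare the table id recorded in system_schema (3.x layout) with the suffix of the on-disk data directory. Paths, keyspace, and table names are placeholders:

```python
import os
from cassandra.cluster import Cluster  # pip install cassandra-driver

KEYSPACE, TABLE = "my_keyspace", "users"   # placeholders
DATA_DIR = "/var/lib/cassandra/data"       # default location; adjust as needed

session = Cluster(["127.0.0.1"]).connect()
row = session.execute(
    "SELECT id FROM system_schema.tables "
    "WHERE keyspace_name = %s AND table_name = %s",
    (KEYSPACE, TABLE),
).one()
schema_id = str(row.id).replace("-", "")

# Data directories are named <table>-<id without dashes>; if the suffix does
# not match the schema id, the sstables were copied into a stale directory
# left over from an earlier incarnation of the table.
for d in os.listdir(os.path.join(DATA_DIR, KEYSPACE)):
    if d.startswith(TABLE + "-"):
        suffix = d.rsplit("-", 1)[1]
        status = "matches schema" if suffix == schema_id else "STALE (id mismatch)"
        print(d, status)
```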
Before restoring, check the version of the SSTables you are using to import
data into your schema. Also, since you removed and re-added the 'age' column,
there will already be no data for that column.
If you want your old data to show up, you have to use the proper SSTables
when loading the data.
In my experience, adding a new disk and restarting the Cassandra process slowly
distributes the disk usage evenly, so that existing disks have less disk usage
> On 12 Jun 2018, at 11:09 AM, wxn...@zjqunshuo.com wrote:
>
> Hi,
> I know Cassandra can make use of multiple disks. My data disk is alm
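If it helps to watch the usage level out after adding the second disk, here is a small Python sketch; the paths are placeholders for whatever is listed under data_file_directories in cassandra.yaml:

```python
import shutil

# Placeholders: the paths configured under data_file_directories.
DATA_DIRS = ["/mnt/disk1/cassandra/data", "/mnt/disk2/cassandra/data"]

for path in DATA_DIRS:
    usage = shutil.disk_usage(path)
    print(f"{path}: {usage.used / usage.total:.0%} used")
```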