It's the decompressed size of the partitions. Each sstable has a stats
component that contains histograms for the size and number of columns of
the partitions (among other things; you can see it with the sstablemetadata
tool). tablehistograms merges these per-sstable histograms and reports the
result.
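For example (the keyspace, table, and file paths below are just
placeholders), you can dump the stats component of a single sstable and
compare it with the merged view:

  sstablemetadata /var/lib/cassandra/data/ks/tbl-*/mc-1-big-Data.db
  nodetool tablehistograms ks tbl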
Chris
On Fri, Feb
I am working on some issues involving really big partitions. I have been
making extensive use of nodetool tablehistograms. What exactly is the
partition size being reported? I have a table for which the max value
reported is about 3.5 GB, but running du -h against the table data
directory reports 5
Hi All,
Thanks for your replies. I do not see an issue with NTP or with dropped
messages.
However, the tombstone count on the specific CF shows me this. This essentially
indicates that there are as many tombstones as live cells in the CF, doesn't it?
Now, is that an issue, and can this cause incon
It was only the schema change.
2017-02-24 19:18 GMT+01:00 kurt greaves :
> How many CFs are we talking about here? Also, did the script also kick off
> the scrubs or was this purely from changing the schemas?
>
>
--
Benjamin Roth
Prokurist
Jaumo GmbH · www.jaumo.com
Wehrstraße 46 · 73035 G
How many CFs are we talking about here? Also, did the script also kick off
the scrubs or was this purely from changing the schemas?
WRT NTP, I first encountered this issue on my first cluster. The
problem with NTP isn't just if you're doing inserts; it's if you're doing
inserts in combination with deletes, using server-side timestamps with a
greater variance than the period between the delete and the insert.
Basically, you e
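To make that concrete, here is a hypothetical CQL sequence (table name,
columns, and timestamp values are invented) showing how coordinator clock
skew can make an earlier delete win over a later insert under
last-write-wins, and one way around it:

  DELETE FROM ks.events WHERE id = 42;
  -- coordinator A stamps this with its own clock, say t = 1000500

  INSERT INTO ks.events (id, payload) VALUES (42, 'x');
  -- coordinator B lags A by more than the delete-to-insert gap and stamps
  -- t = 1000100; the DELETE's timestamp is newer, so the insert silently loses

  -- one mitigation: supply monotonic client-side timestamps yourself
  INSERT INTO ks.events (id, payload) VALUES (42, 'x') USING TIMESTAMP 1000501;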
Hi,
Check the tombstone count; if it is too high, your queries will be impacted.
If tombstones are a problem, you can try to reduce your "gc_grace_seconds" to
reduce the tombstone count (carefully, because you use cross-data-center
replication).
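For instance (the keyspace/table names are placeholders and the value is only
an illustration), lowering it from the 864000-second (10 day) default:

  ALTER TABLE ks.tbl WITH gc_grace_seconds = 86400;  -- 1 day

Just keep gc_grace_seconds longer than your repair interval in every data
center, or deleted data can resurrect.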
Bye,
Petrus Silva
On Fri, Feb 24, 2017 at 12:07 AM, Jan Kesten wro
That stacktrace generally implies your clients are resetting connections.
The reconnection policy probably handles the issue automatically, but it is
worth investigating. I don't think it normally causes StatusLogger output,
however; what were the log messages prior to the stacktrace?
On 24 February
Probably LCS, although what you're implying (read before write) is an
anti-pattern in Cassandra. Something like this is a good indicator that you
should review your model.
By any chance, are you using the PHP or C++ driver?
--
Hello,
I'm using a table like this:
CREATE TABLE myset (id uuid PRIMARY KEY)
which is basically a set I use for deduplication; id is a unique id for
an event. When I process an event I insert its id, and before
processing I check whether it has already been processed.
It
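A sketch, not necessarily what you want: one way to avoid the
read-before-write that the reply above calls an anti-pattern is a
lightweight transaction against the same myset table (the uuid literal is
just an example), at the cost of a Paxos round per insert:

  INSERT INTO myset (id) VALUES (f47ac10b-58cc-4372-a567-0e02b2c3d479) IF NOT EXISTS;
  -- the result's [applied] column comes back false when the id already
  -- existed, so the dedup check and the insert happen in one operation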
Hi,
On Thu, Feb 23, 2017 at 10:59 PM Rakesh Kumar wrote:
> Is ver 3.0.10 the same as 3.10?
>
No. As far as I know, 3.0.x is the "LTS" release line with only bug and
security fixes, while the 3.x versions alternate between feature and bug-fix
releases.
Bye,
Gábor AUTH
Hi,
are your nodes at high load? Are there any dropped messages (nodetool
tpstats) on any node?
Also have a look at your system clocks. C* needs them in tight sync -
via NTP, for example. Side hint: if you use NTP, use the same set of
upstreams on all of your nodes - ideally your own. Using
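For example, two quick host-level checks to run on each node:

  nodetool tpstats    # dropped message counts are listed at the bottom
  ntpq -p             # NTP peers and offsets; every node should show the same upstreams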