Thanks, we'll try deleting a range of rows, as it seems to fit our scenario.
One more question: you mentioned "repair often", and we have seen that advice
several times in the official docs, presentations, blogs, etc.
But when we repair a column family that is terabytes in size on a cluster with
~30 nodes, it takes a very long time.
Delete using as few tombstones as possible (deleting the whole partition is
better than deleting a row; deleting a range of rows is better than deleting
many rows in a range).
Repair often and lower gc_grace_seconds so the tombstones can be collected more
frequently
--
Jeff Jirsa
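For what it's worth, here is a minimal sketch of the "whole partition" vs.
"range of rows" deletes described above, using the Python driver. The keyspace,
table, and values are hypothetical, and range deletes on clustering columns
need Cassandra 3.0+:

from datetime import datetime
from cassandra.cluster import Cluster

# Hypothetical schema for illustration:
#   CREATE TABLE events (device_id text, event_time timestamp, payload text,
#                        PRIMARY KEY (device_id, event_time));
cluster = Cluster(['127.0.0.1'])
session = cluster.connect('my_keyspace')

# Cheapest: a single partition tombstone covering everything under the key.
session.execute("DELETE FROM events WHERE device_id = %s", ['dev-42'])

# Next best: one range tombstone covering many clustering rows at once.
session.execute(
    "DELETE FROM events WHERE device_id = %s AND event_time < %s",
    ['dev-42', datetime(2017, 1, 1)])

# And, per the advice above, lowering gc_grace_seconds lets tombstones be
# purged sooner (only safe if repairs reliably finish within that window).
session.execute("ALTER TABLE events WITH gc_grace_seconds = 86400")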
Hi,
When I install Cassandra 3.11.x and Python 2.7.x on Debian 8.8, cqlsh does not
start. I get the following error.
debian@vm-184:/opt/apache-cassandra-3.10/bin$ ./cqlsh
Python Cassandra driver not installed, or not on PYTHONPATH. You might try "pip
install cassandra-driver".
Python: /usr/lo
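In case it helps narrow things down, here is a small diagnostic sketch; run it
with the same Python 2.7 interpreter that cqlsh uses to see whether the driver
is importable at all (nothing in it is specific to your setup):

# Diagnostic only: confirm the cassandra-driver package is importable from
# the interpreter/PYTHONPATH that cqlsh ends up using.
try:
    import cassandra
    print("cassandra-driver %s found at %s"
          % (cassandra.__version__, cassandra.__file__))
except ImportError as exc:
    print("driver not importable: %s" % exc)
    print("try: pip install cassandra-driver, or check PYTHONPATH")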
Hi there.
We have a keyspace containing tons of records, and deletions are used as
required by its business logic.
As the data accumulates, we are suffering a performance penalty due to
tombstones. We are still confused about what could be done to minimize the
harm, or whether we should avoid deletions altogether.
Yeah, it means they're effectively invalid files, and would not be loaded at
startup.
On Mon, Jul 31, 2017 at 9:07 PM, Sotirios Delimanolis <
sotodel...@yahoo.com.invalid> wrote:
> I don't want to go down the TTL path because this behaviour is also
> occurring for tables without a TTL. I don't h
I don't want to go down the TTL path because this behaviour is also occurring
for tables without a TTL. I don't have hard numbers about the amount of writes,
but there's definitely been enough to trigger compaction in the ~year since.
We've never changed the topology of this cluster. Ranges have
On 2017-07-31 15:00 (-0700), kurt greaves wrote:
> How long is your ttl and how much data do you write per day (ie, what is
> the difference in disk usage over a day)? Did you always TTL?
> I'd say it's likely there is live data in those older sstables but you're
> not generating enough data to
How long is your TTL and how much data do you write per day (i.e., what is
the difference in disk usage over a day)? Did you always TTL?
I'd say it's likely there is live data in those older sstables, but you're
not generating enough data to push new data to the highest level before it
expires.
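To make that concrete, here is a rough back-of-the-envelope sketch; the write
rate and TTL are made-up numbers, and the level-size model is a simplification
of LCS with the default 160 MB sstables and fanout of 10:

# Crude LCS model: level N holds roughly 10**N sstables of ~160 MB each, so
# data is only pushed up to level N if enough new data arrives to overflow
# the levels below it before the old data expires via TTL.

SSTABLE_MB = 160            # default LCS sstable_size_in_mb
FANOUT = 10                 # default LCS fanout

def level_capacity_mb(level):
    return SSTABLE_MB * FANOUT ** level

daily_write_mb = 5 * 1024   # assumption: ~5 GB of new data per day
ttl_days = 30               # assumption: 30-day TTL

written_before_expiry_mb = daily_write_mb * ttl_days
for level in range(1, 6):
    cap = level_capacity_mb(level)
    verdict = ("can plausibly be reached"
               if written_before_expiry_mb >= cap
               else "data likely expires before reaching it")
    print("L%d (~%d MB): %s" % (level, cap, verdict))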
On Cassandra 2.2.11, I have a table that uses LeveledCompactionStrategy and
that gets written to continuously. If I list the files in its data directory, I
see something like this
-rw-r--r-- 1 acassy agroup 161733811 Jul 31 18:46 lb-135346-big-Data.db
-rw-r--r-- 1 acassy agroup 159626222 Jul 31
Excellent! Thank you Jeff.
On Mon, Jul 31, 2017 at 10:26 AM, Jeff Jirsa wrote:
> 3.10 has 6696 in it, so my understanding is you'll probably be fine just
> running repair
>
>
> Yes, same risks if you swap drives - before 6696, you want to replace a
> whole node if any sstables are damaged or lo
Tremendous! I already suspected it was a JNA issue but didn't know how to
solve it. I'll try this in my setup; I'm still experimenting with what
configuration to use anyway...
Thanks a lot!
On Mon, Jul 31, 2017 at 5:19 PM, Jeff Jirsa wrote:
> Sigh, I've tried to reply to this three times and none are i
Sigh, I've tried to reply to this three times and none are in the archives, so
I don't think they're making it through. Apologies if this is the fourth time
someone's seen it:
The problem is the JNA jar, which was upgraded recently and bumped the glibc
requirement:
https://issues.apache.org/jira/bro
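If it's useful for checking a box before installing, here is a small sketch
that prints the host's glibc version so you can compare it against whatever
the upgraded JNA jar requires (the exact minimum is in the ticket; the message
below mentions RHEL7's glibc 2.17 being new enough):

import ctypes

# Diagnostic sketch: report the glibc version on this host.
libc = ctypes.CDLL("libc.so.6")
libc.gnu_get_libc_version.restype = ctypes.c_char_p
print("glibc version: %s" % libc.gnu_get_libc_version().decode())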
Thanks Ryan, I couldn't find that version but tried the 3.0.14 version, to
no avail. I ended up configuring the VMs in my cloud with RHEL7, which
includes glibc 2.17...
Best regards,
Piet
On Fri, Jul 28, 2017 at 6:29 PM, ruijian.lee wrote:
> Hi Piet,
>
> I have also encountered this situat
3.10 has 6696 in it, so my understanding is you'll probably be fine just
running repair
Yes, same risks if you swap drives - before 6696, you want to replace a whole
node if any sstables are damaged or lost (if you do deletes, and if it hurts
you if deleted data comes back to life).
--
Jeff
I just want to add that we use vnodes=16, if that helps with my questions.
On Mon, Jul 31, 2017 at 9:41 AM, Ioannis Zafiropoulos
wrote:
> Thank you Jeff for your answer,
>
> I use RF=3 and our client connect always with QUORUM. So I guess I will be
> alright after a repair (?)
> Follow up questi
Thank you Jeff for your answer,
I use RF=3 and our clients always connect with QUORUM, so I guess I will be
alright after a repair (?)
Follow-up questions:
- It seems that the risks you're describing would be the same as if I had
replaced the drive with a fresh new one and run repair; is that correct?
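Not an answer to the repair question, but since QUORUM came up, here is a
minimal sketch of pinning QUORUM on a read with the Python driver; the contact
point, keyspace, and table are hypothetical. With RF=3, QUORUM means 2 of the
3 replicas must respond:

from cassandra import ConsistencyLevel
from cassandra.cluster import Cluster
from cassandra.query import SimpleStatement

cluster = Cluster(['10.0.0.1'])            # assumption: any reachable node
session = cluster.connect('my_keyspace')   # hypothetical keyspace

# With RF=3, QUORUM requires 2 replicas to acknowledge the read.
stmt = SimpleStatement(
    "SELECT * FROM my_table WHERE id = %s",
    consistency_level=ConsistencyLevel.QUORUM)
row = session.execute(stmt, ['some-key']).one()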
It depends on what consistency level you use for reads/writes, and whether you
do deletes.
The real danger is that there may have been a tombstone on the drive that
failed covering data on the disks that remain, where the delete happened longer
ago than gc_grace - if you simply yank the disk, that data can come back to
life.
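A tiny worked illustration of that failure mode; all the dates and the
gc_grace value below are made up (864000 seconds is the default
gc_grace_seconds, i.e. 10 days):

from datetime import datetime, timedelta

# The tombstone lived on the failed disk; the data it shadows lives on the
# surviving disks of the same node. If the delete is older than gc_grace,
# the other replicas may already have purged their copies of the tombstone,
# so nothing is left anywhere to suppress the shadowed data after the disk
# is yanked.
gc_grace = timedelta(seconds=864000)      # default gc_grace_seconds (10 days)
delete_time = datetime(2017, 7, 1)        # when the delete happened (made up)
disk_removed = datetime(2017, 7, 31)      # when the disk was yanked (made up)

if disk_removed - delete_time > gc_grace:
    print("delete is older than gc_grace: the shadowed data can be resurrected")
else:
    print("delete is within gc_grace: repair can still propagate the tombstone")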
Hi All,
I have a 7-node cluster (version 3.10) with 5 disks each in JBOD.
A few hours ago I had a disk failure on one node. I am wondering if I can:
- stop Cassandra on that node
- remove the disk, physically and from cassandra.yaml
- start Cassandra on that node
- run repair
I mean, is it safe to do that?