Are there binary builds available for testing or is it source only?
--
Jacques-Henri Berthemet
-----Original Message-----
From: Nate McCall
Sent: Wednesday, October 24, 2018 10:02 PM
To: Cassandra Users
Subject: Re: Cassandra 4.0
When it's ready :)
In all seriousness, the past two blog posts…
Hello,
'*nodetool cleanup*' used to be single-threaded (up to C*2.1), then used all
the cores (C*2.1 - C*2.1.14), and is now something that can be controlled
(C*2.1.14+):
'*nodetool cleanup -j 2*', for example, would use 2 compactors at most (out
of the number of concurrent_compactors you defined, probably in
cassandra.yaml).
> Does anyone have any ideas of what I can do to generate inserts based on
> primary key numbers in an Excel spreadsheet?
A quick thought:
What about using a column of the spreadsheet to actually store the SELECT
result and generate the INSERT statement (and I would probably do the
DELETE too)…
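To illustrate, here is a minimal sketch of the statements such a spreadsheet
formula could emit; the table 'ks.users' and its columns are hypothetical:

-- Hypothetical table; substitute your real keyspace, table and columns.
-- A spreadsheet formula along the lines of
--   ="INSERT INTO ks.users (id, name) VALUES (" & A2 & ", '" & B2 & "');"
-- dragged down one row per key would produce statements like:
INSERT INTO ks.users (id, name) VALUES (101, 'alice');
INSERT INTO ks.users (id, name) VALUES (102, 'bob');
-- and the matching delete side, if needed:
DELETE FROM ks.users WHERE id = 101;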
Hi Alain,
That is exactly what I did yesterday in the end. I ran the selects and
output the results to a file, then ran some greps on that file to leave
myself with just the data rows, removing any whitespace and headers.
I then copied this data into a notepad on my local machine and saved it as
a c…
Hi All,
I have one table in which I have some data with a TTL of 2 days and some
data with a TTL of 60 days. Which compaction strategy will suit it the most?
1. LeveledCompactionStrategy (LCS)
2. SizeTieredCompactionStrategy (STCS)
3. TimeWindowCompactionStrategy (TWCS)
--
Raman Gug
Hi Raman,
TWCS is the best compaction strategy for TTL data, even if you have
different TTLs (set the time window based on your largest TTL, so it would
be 1 day in your case).
Enable unchecked tombstone compaction to clear the data with the 2-day TTL
along the way. This is done by setting the 'unchecked_tombstone_compaction'
compaction subproperty via ALTER TABLE:
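A sketch with a hypothetical keyspace/table name; adjust the window settings
to your own data:

-- Hypothetical table; tune the window unit/size to your largest TTL.
ALTER TABLE ks.events WITH compaction = {
  'class': 'TimeWindowCompactionStrategy',
  'compaction_window_unit': 'DAYS',
  'compaction_window_size': '1',
  'unchecked_tombstone_compaction': 'true'
};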
Environment: Cassandra: 2.2.9, JRE: 1.8.0_74, CentOS 6/7
We have two DCs. In DC1 we have 3 RACs and in DC2 we have 6.
Because we're in a physical environment (not virtual or cloud based), we've
run short on unique rack space in DC2 and need to fix the layout problems.
Is it possible to somehow merge the racks?
Hello,
I am running into a situation where a huge schema (number of column
families) is causing OOM issues on the heap. Is there a way to measure how
much heap each column family uses?
Hi All,
Is there any Cassandra C++ driver that works with C++98 and is also
compatible with big-endian UNIX platforms?
I found this issue: https://datastax-oss.atlassian.net/browse/CPP-692 which
seems to have been resolved, but I'm not sure if this is exactly what I'm
looking for. Does this make the DSE driver…
To add to what Alex suggested, if you know what keys use what TTL you could
store them in different tables, with different window settings.
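A minimal sketch of that layout (hypothetical tables, one per TTL, with
windows sized to each TTL):

-- Hypothetical: short-lived data, 2-day TTL, small windows.
CREATE TABLE ks.events_short (id int PRIMARY KEY, payload text)
  WITH default_time_to_live = 172800  -- 2 days in seconds
  AND compaction = {
    'class': 'TimeWindowCompactionStrategy',
    'compaction_window_unit': 'HOURS',
    'compaction_window_size': '2'
  };

-- Hypothetical: long-lived data, 60-day TTL, larger windows.
CREATE TABLE ks.events_long (id int PRIMARY KEY, payload text)
  WITH default_time_to_live = 5184000  -- 60 days in seconds
  AND compaction = {
    'class': 'TimeWindowCompactionStrategy',
    'compaction_window_unit': 'DAYS',
    'compaction_window_size': '2'
  };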
Jon
On Fri, Oct 26, 2018 at 1:28 AM Alexander Dejanovski
wrote:
> Hi Raman,
>
> TWCS is the best compaction strategy for TTL data, even if you have
> different TTLs…
Thank you for your reply. I actually found your blog post regarding this
topic and browsed through it, but it did not yield the answer I was looking
for. In fact, it seems impossible to do what I wish to do without defining
a UDA for this specific use case -- something that is not practical to do
w…
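For context on how heavy that is: even the canonical 'average' UDA from the
Cassandra docs takes three statements (names below are hypothetical, and
user-defined functions must be enabled in cassandra.yaml):

CREATE OR REPLACE FUNCTION avg_state(state tuple<int, bigint>, val bigint)
  CALLED ON NULL INPUT
  RETURNS tuple<int, bigint>
  LANGUAGE java
  AS 'if (val != null) { state.setInt(0, state.getInt(0) + 1); state.setLong(1, state.getLong(1) + val); } return state;';

CREATE OR REPLACE FUNCTION avg_final(state tuple<int, bigint>)
  CALLED ON NULL INPUT
  RETURNS double
  LANGUAGE java
  AS 'if (state.getInt(0) == 0) return 0d; return (double) state.getLong(1) / state.getInt(0);';

CREATE OR REPLACE AGGREGATE my_avg(bigint)
  SFUNC avg_state
  STYPE tuple<int, bigint>
  FINALFUNC avg_final
  INITCOND (0, 0);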
Hi,
The answer to your question on 'merging Cassandra racks' is: the only safe
way to migrate nodes across physical racks is to perform a DC migration,
not to decommission and re-add nodes.
It should only be done when the system is stable and does not have other
issues. Talking about un…
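To illustrate the keyspace side of such a DC migration (keyspace and
datacenter names are hypothetical; the data itself is streamed with
'nodetool rebuild' on the new nodes):

-- 1. Add the new DC to replication, then run 'nodetool rebuild -- DC2'
--    on each node of the new DC to stream the data from the old one.
ALTER KEYSPACE ks WITH replication = {
  'class': 'NetworkTopologyStrategy',
  'DC1': 3,
  'DC2': 3,
  'DC2_new': 3
};
-- 2. Once clients point at the new DC, drop the old one:
ALTER KEYSPACE ks WITH replication = {
  'class': 'NetworkTopologyStrategy',
  'DC1': 3,
  'DC2_new': 3
};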