Hello,
In a Cassandra cluster I want to push a notification to RabbitMQ whenever a
change (insert/update/delete) is made to certain Cassandra tables, with the
following requirements:
The notifications should:
1. Be ordered in the same order the changes were stored.
2. Be sent only if t
Hi Oren,
I've spent a reasonable amount of time working with triggers, and I would say
that your best bet is doing this in the app.
Just publish a RabbitMQ message from the app when you execute a statement.
If your goal is to have an audit trail, then try batch writing the data to the
tables and the delta to their audit counterparts.
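For illustration, a minimal CQL sketch of that batch idea; the keyspace,
table, and column names are hypothetical, not from this thread:

    -- Write the row and its audit delta atomically in one logged batch
    -- (hypothetical schema; adapt to your own tables).
    BEGIN BATCH
      INSERT INTO myks.orders (order_id, status)
      VALUES (123e4567-e89b-12d3-a456-426655440000, 'created');
      INSERT INTO myks.orders_audit (order_id, changed_at, status)
      VALUES (123e4567-e89b-12d3-a456-426655440000, toTimestamp(now()), 'created');
    APPLY BATCH;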
Scenario:
Converting an Oracle table to Cassandra, one Oracle table to 4 Cassandra
tables; basically time-series, think log or auditing. Retention is 10
years, but greater than 95% of reads will occur on data written within the
last year. A 7-day TTL is used on a small percentage of the records, major
With a 10-year retention, just ignore the target sstable count (I should remove
that guidance, to be honest), and go for a 1-week window to match your
partition size. 520 sstables on disk isn't going to hurt you as long as you're
not reading from all of them, and with a partition-per-week the bloom filters
will keep reads from touching sstables that don't hold your partition.
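For reference, a hedged sketch of what that 1-week window looks like as table
options, assuming a version that ships TWCS and using a made-up table name:

    -- One 7-day compaction window; with 10-year retention this yields
    -- roughly 520 sstables, as discussed above.
    ALTER TABLE audit.events
    WITH compaction = {'class': 'TimeWindowCompactionStrategy',
                       'compaction_window_unit': 'DAYS',
                       'compaction_window_size': '7'};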
I skipped over the more important question - loading data in. Two options:
1) Load data in order through the normal write path and use "USING
TIMESTAMP" to set the timestamp (a sketch follows below), or
2) Use CQLSSTableWriter and "USING TIMESTAMP" to create sstables, then
sstableloader them into the cluster.
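A minimal sketch of option 1, using a hypothetical table; USING TIMESTAMP
takes microseconds since the epoch, so each row lands in the TWCS window that
matches its original write time rather than the load time:

    -- 1481889600000000 us = 2016-12-16 12:00:00 UTC, the original write time.
    INSERT INTO audit.events (week_bucket, event_time, payload)
    VALUES ('2016-W50', '2016-12-16 12:00:00+0000', 'row payload')
    USING TIMESTAMP 1481889600000000;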
Thank you Jeff - always nice to hear straight from the source.
Any issues you can see with option 3 (my calendar-week bucket not aligning
with the arbitrary 7-day window)? Or am I confused (I'd put money on this
option, but I've been wrong once or twice before)?
On Fri, Dec 16, 2016 at 12:50 PM, Jeff Jirsa wrote:
The issue is that your partitions will likely be in 2 sstables instead of
"theoretically" 1. In practice they're probably going to bleed into 2 anyway
(the memtable flush to sstable isn't going to happen exactly when the window
expires), so I bet there's no meaningful impact.
Hi,
We have a brand new Cassandra cluster (version 3.0.4) and we have
nodetool repair scheduled to run every day (without any options for repair).
As per the documentation, incremental repair is the default in this case.
Should we do a full repair on each node once, the very first time, and
then leave it to the scheduled incremental repairs?
This was fixed post-3.0.4; please upgrade to the latest 3.0 release.
On Fri, Dec 16, 2016 at 4:49 PM, Kathiresan S wrote:
> Hi,
>
> We have a brand new Cassandra cluster (version 3.0.4) and we have
> nodetool repair scheduled to run every day (without any options for repair).
> As per the documentation, incremental repair is the default in this case.
Thank you!
Is any workaround available for this version?
Thanks,
Kathir
On Friday, December 16, 2016, Jake Luciani wrote:
> This was fixed post-3.0.4; please upgrade to the latest 3.0 release.
>
> On Fri, Dec 16, 2016 at 4:49 PM, Kathiresan S <kathiresanselva...@gmail.com> wrote:
>
>> Hi,
>>
You probably want to look at change data capture rather than triggers:
http://cassandra.apache.org/doc/latest/operating/cdc.html
Be aware that one of your criteria regarding operation order is going to be
very difficult to guarantee due to eventual consistency.
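For example, CDC is enabled per table; a sketch, assuming Cassandra 3.8+ with
cdc_enabled: true in cassandra.yaml (the table name is a placeholder):

    -- Mutations for this table are then kept in the cdc_raw directory
    -- for a separate consumer process to read and publish.
    ALTER TABLE myks.mytable WITH cdc = true;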
On Fri, Dec 16, 2016, 2:43 AM Matij
Hello,
I am trying to fight a high-CPU problem on some of our nodes. Thread dumps show
that it's not GC threads (we have a 30 GB heap), and iostat %iowait confirms it's
not disk (it ranges between 0.3% and 0.9%). One of the ways in which the problem
manifests is that the nodes can't compact SSTables and it
Thanks again, Jeff.
Thinking about this some more, I'm wondering if I'm overthinking or if
there's a potential issue:
If my compaction_window_size is 7 (DAYS), and I've got TTLs of 7 days on
some (relatively small percentage) of my records, am I going to be leaving
tombstones around all over the place?
Tombstone compaction subproperties can handle tombstone removal for you: you
set a ratio of tombstones worth compacting away (for example, 80%) and an
interval to prevent continuous compaction (for example, 24 hours), and then
anytime there's no other work to do, if there's an sstable over that ratio it
will get compacted by itself to purge the tombstones.
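As a rough sketch, those subproperties look like this on a TWCS table (made-up
table name; the interval is in seconds, and note the caveat later in the
thread that tombstone compactions are off by default in TWCS, hence the
unchecked flag):

    ALTER TABLE audit.events
    WITH compaction = {'class': 'TimeWindowCompactionStrategy',
                       'compaction_window_unit': 'DAYS',
                       'compaction_window_size': '7',
                       'tombstone_threshold': '0.8',              -- 80% ratio
                       'tombstone_compaction_interval': '86400',  -- 24 hours
                       'unchecked_tombstone_compaction': 'true'};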
Gotcha. "never compacted" has an implicit asterisk referencing
tombstone_compaction_interval and tombstone_threshold, sounds like. More
of a "never compacted" via strategy selection, but eligible for
tombstone-triggered compaction.
On Fri, Dec 16, 2016 at 10:07 PM, Jeff Jirsa wrote:
> Tombstone compaction subproperties can handle tombstone removal for you
With the caveat that tombstone compactions are disabled by default in TWCS (and
DTCS).
--
Jeff Jirsa
> On Dec 16, 2016, at 8:34 PM, Voytek Jarnot wrote:
>
> Gotcha. "never compacted" has an implicit asterisk referencing
> tombstone_compaction_interval and tombstone_threshold, sounds like.