Hi all,
I have some questions about LWT.
I am wondering whether LWT works only for data mutated by LWT or not.
In other words, is doing LWT on data mutated by non-LWT operations
still valid?
I don't fully understand how the system.paxos table works with LWT,
but row_key should be empty for a data mutat
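To make the question concrete, here is the kind of mix I mean; the keyspace, table and columns are made up for illustration:
# Plain (non-LWT) write first, then an LWT condition on the same row.
cqlsh -e "
  INSERT INTO demo_ks.accounts (id, balance) VALUES (1, 100);              -- non-LWT write
  UPDATE demo_ks.accounts SET balance = 90 WHERE id = 1 IF balance = 100;  -- LWT on the same row
"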
Hi,
FQDN = Fully Qualified Domain Name
AAAA = IPv6 Address record in DNS
A = IPv4 Address record in DNS
He is saying that by changing the resolution of a domain name (e.g.
cass1.acme.com) to only return an IPv4 address, and never return an IPv6
address, the issue that Cassandra only binds to on
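For example, you can see which record types a name resolves to with dig (the hostname is just the example one):
dig +short A    cass1.acme.com    # IPv4 address(es) the FQDN resolves to
dig +short AAAA cass1.acme.com    # IPv6 address(es); should be empty once the AAAA record is removed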
Sudheer,
Seems interesting. Can you please elaborate on what FQDN is and where to
remove the mapping? Appreciate your help.
Thanks and Regards,
Goutham
On Fri, Mar 23, 2018 at 2:34 PM sudheer k wrote:
> I found a solution for this. As Cassandra can’t bind to two addresses at a
> point in time acc
Thank you for your replies so far. We are just going to .14 because our repair
is consuming CPU and our management always wants to stay a couple of versions
behind for stability reasons.
Sent from my iPhone
> On Mar 23, 2018, at 4:50 PM, Jeff Jirsa wrote:
>
> Why .14? I would consider 3.0.16 t
Why .14? I would consider 3.0.16 to be production worthy.
--
Jeff Jirsa
> On Mar 23, 2018, at 2:01 PM, Nitan Kainth wrote:
>
> Hi All,
>
> Our repairs are consuming CPU and some research shows that moving to 3.0.14
> will help us fix them. I just want to know community's experience about
>
3.0.16 is the latest; I recommend going all the way up. There are about a
hundred bug fixes:
https://github.com/apache/cassandra/blob/cassandra-3.0/CHANGES.txt
Jon
On Fri, Mar 23, 2018 at 2:22 PM Dmitry Saprykin wrote:
> Hi,
>
> I successfully used 3.0.14 more than a year in production. And moreover
> 3
I found a solution for this. As Cassandra can’t bind to two addresses at a
point in time, according to the comments in the cassandra.yaml file, we removed
the AAAA (IPv6) mapping for the FQDN and kept only the A (IPv4) mapping. So the
FQDN always resolves to IPv4 and we can use the FQDN in the application
configuration while talki
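As a quick sanity check on a node (the hostname is the same example one, and the config path varies by install):
getent ahosts cass1.acme.com    # should now list IPv4 addresses only
grep -E '^(listen_address|rpc_address)' /etc/cassandra/cassandra.yaml    # the single address Cassandra binds to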
Hi,
I have successfully used 3.0.14 for more than a year in production. Moreover,
3.0.10 is definitely not stable and you need to upgrade ASAP: 3.0.10
contains a known bug which corrupts data during schema changes.
Regards,
Dmitrii
On Fri, Mar 23, 2018 at 5:01 PM Nitan Kainth wrote:
> Hi All,
>
> Our r
Hi All,
Our repairs are consuming CPU and some research shows that moving to 3.0.14
will help us fix them. I just want to know the community's experience with
version 3.0.14.
Is it stable?
Anybody had any issues after upgrading this?
Regards,
Nitan K.
Yes, agree on “let really old data expire”. However, I could not find a way to
TTL an entire row; only columns can be TTLed.
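For what it’s worth, the closest thing I found is setting the TTL per write, which applies to every non-key column written by that statement (table and values below are made up):
# Hypothetical table; the cells written here expire together after 30 days.
cqlsh -e "
  INSERT INTO demo_ks.events (id, day, payload)
  VALUES (1, '2018-03-23', 'data')
  USING TTL 2592000;
"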
Charu
From: Rahul Singh
Reply-To: "user@cassandra.apache.org"
Date: Friday, March 23, 2018 at 1:45 PM
To: "user@cassandra.apache.org" ,
"user@cassandra.apache.org"
Sub
I think there are better ways to leverage parallel processing than to use it to
delete data. As I said, it works for one of my projects for the exact same
reason you stated: business rules.
Deleting data is an old way of thinking. Why not store the data and just use
the relevant data .. let really old data expire
Yes, essentially it’s the same, but from a code complexity perspective, writing
in Spark is more compact and execution is superfast. Spark uses the Cassandra
connector, so the question was mostly about whether there is any issue with that, and also
with Spark we will be deleting on the analytics nodes, which wo
Martin,
Would you please share the settings you had before and what you changed? We have
a similar issue.
> On Mar 23, 2018, at 8:47 AM, Martin Mačura wrote:
>
> Nevermind, we resolved the issue: the JVM heap settings were misconfigured
>
> Martin
>
>> On Fri, Mar 23, 2018 at 1:18 PM, Martin Mač
We use Spark to do the same because our partition contains data for a whole year
and we delete one day at a time. C* does not allow us to delete without using the
partition key. I know it's the wrong data model, but we can't change it for the
obvious reason that it would mean a whole application redesign.
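Roughly what our per-day delete looks like; the schema names here are made up for illustration, ours is different:
# Hypothetical schema: partition key (sensor_id, year), clustering column day.
# Deleting one day needs the full partition key plus the clustering column.
cqlsh -e "
  DELETE FROM demo_ks.readings
  WHERE sensor_id = 42 AND year = 2017 AND day = '2017-06-01';
"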
Sent from my iPhone
> On
I'm confused as to what the difference between deleting with prepared
statements and deleting through Spark is. To the best of my knowledge,
either way it's the same thing - normal deletion with tombstones
replicated. Is it that you're doing deletes in the analytics DC instead of
your real time on
Hi Rahul,
Thanks for your answer. Why do you say that deleting from Spark is not
elegant? This is the exact feedback I want. Basically, why is it not elegant?
I can either delete using prepared delete statements or through Spark. The TTL
approach doesn't work for us
Because first of all ttl
We do small inserts. For a modest-size environment, we do about 90,000
inserts every 30 seconds. For a larger environment, we could be doing
300,000 or more inserts every 30 seconds. In earlier versions of the
project, each insert was a separate request, as each insert targets a
different partition.
Increasing the queue would increase the number of requests waiting. It could make
GCs worse if the requests are large INSERTs, but for a lot of super tiny
queries it helps to increase the queue size (to a point). You might want to look into
what queries are being made and how, since there are possibly
Thanks for the explanation. In the past when I have run into problems
related to CASSANDRA-11363, I have increased the queue size via the
cassandra.max_queued_native_transport_requests system property. If I find
that the queue is frequently at capacity, would that be an indicator that
the node is h
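For reference, this is roughly how I set it and how I keep an eye on the queue; the value and paths are just examples:
# Appended to cassandra-env.sh (value is illustrative, tune for your workload).
JVM_OPTS="$JVM_OPTS -Dcassandra.max_queued_native_transport_requests=3072"
# After a restart, watch for pending/blocked native transport requests.
nodetool tpstats | grep -i native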
It blocks the caller attempting to add the task until there's room in the queue,
applying back pressure. It does not reject it. It mimics the behavior of the
pre-SEP DebuggableThreadPoolExecutor's RejectedExecutionHandler that the other
thread pools use (with the exception of sampling/trace, which just throw aw
Hmm. Interesting!
So you suspect that cassandra-stress tries to use the Thrift protocol before
actually using the native protocol, right?
I might check what the difference is between cassandra-stress 3.1 and
cassandra-stress 2.1 when I have some time.
Thanks.
> On Mar 23, 2018, at 10:43 AM,
I downloaded the 3.0.16 tar to /tmp on the same host where my 2.1 node was
running (without Thrift), and this worked for me:
./tools/bin/cassandra-stress write n=1 -mode native cql3 protocolVersion=3
Michael
On 03/23/2018 09:30 AM, Michael Shuler wrote:
> Well, now I'm a little stumped. I tried
Well, now I'm a little stumped. I tried native mode with Thrift enabled,
wrote one row so the schema got created, then set start_rpc: false,
restarted C*, and native mode fails in the same way. So it's not just
the schema creation phase. I also tried including -port native=9042 and
-schema keyspace="ke
Nevermind, we resolved the issue: the JVM heap settings were misconfigured.
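(For reference, the settings involved live in cassandra-env.sh; the values below are only an example for an 8 GB heap, not necessarily what we changed:)
# cassandra-env.sh expects both values to be set together (or neither).
MAX_HEAP_SIZE="8G"
HEAP_NEWSIZE="2G"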
Martin
On Fri, Mar 23, 2018 at 1:18 PM, Martin Mačura wrote:
> Hi all,
>
> We have a cluster of 3 nodes with RF 3 that ran fine until we upgraded
> it to 3.11.2.
>
> Each node has 32 GB RAM, 8 GB Cassandra heap size.
>
>
Many thanks, Alain, for the thorough explanation. We will not disable compaction
for now.
Thanks,
Peng Xiao
------------------ Original message ------------------
From: "arodrime";
Date: 2018-03-23 (Fri) 8:57
To: "Peng Xiao"<2535...@qq.com>;
Cc: "user";
Subject: Re: disable compaction
Here is the command I use:
cassandra-stress user profile=cass_insert_bac.yaml ops\(insert=1\) -mode native cql3 user=cassandra password=cassandra -rate threads=1
Thrift is disabled (start_rpc: false) as I’m not supposed to use Thrift at all.
But I was surprised by org.apache.thrift.transport.T
>
> I mean to disable compaction in the bootstrapping process, then enable it
> after the bootstrapping.
That's how I understood it :-). Bootstrap can take a relatively long time
and could affect all the nodes when using vnodes. Disabling compactions for
hours is risky, even more so if the cluster i
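(If you do end up trying it, it can at least be toggled per node rather than cluster-wide; the keyspace/table names below are only placeholders:)
# Run on the node concerned; keyspace/table are placeholders.
nodetool disableautocompaction my_ks my_table
# ... once the bootstrap has finished:
nodetool enableautocompaction my_ks my_table
nodetool compactionstats    # check what is pending afterwards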
Hi all,
We have a cluster of 3 nodes with RF 3 that ran fine until we upgraded
it to 3.11.2.
Each node has 32 GB RAM, 8 GB Cassandra heap size.
After the upgrade, clients started reporting connection issues:
cassandra | [ERROR] Closing established connection pool to host
because of the followi