Restarting Cassandra helped.
On Monday, July 29, 2019, 9:05:06 AM GMT+3, Dinesh Joshi
wrote:
Try obtaining a thread dump; it will help with debugging. Anything that goes via JMX,
such as nodetool, could be responsible for it.
Dinesh
On Jul 28, 2019, at 10:57 PM, Vlad wrote:
Hi,
suddenly I noticed that one of the three nodes started consuming CPU in RMI TCP
Connection threads.
What could it be?
Thanks.
Thanks!
On Friday, July 19, 2019, 10:15:43 PM GMT+3, Jon Haddad
wrote:
It's a limit on the total compaction throughput.
On Fri, Jul 19, 2019 at 10:39 AM Vlad wrote:
Hi,
does 'nodetool setcompactionthroughput' set the limit for all compactions on the
node, or is it per compaction thread?
Thanks.
That's the issue: I do not use consistency ALL. I set QUORUM or ONE, but it
still performs the read with ALL.
On Wednesday, May 22, 2019 12:42 PM, shalom sagges
wrote:
In a lot of cases, the issue is with the data model.
Can you describe the table?
Can you provide the query you use to retrieve the data?
Hi,
I do reads in my own Python code; how can cqlsh affect it?
On Wednesday, May 22, 2019 12:02 PM, Chakravarthi Manepalli
wrote:
Hi Vlad,
Maybe the consistency level has been set manually in cqlsh. Did you try
checking your consistency level and setting it back to normal? (Just a thought.)
Hi,
we have a three-node cluster with a keyspace defined as
CREATE KEYSPACE someks WITH REPLICATION = { 'class' :
'org.apache.cassandra.locator.NetworkTopologyStrategy', 'some-dc': '3' } AND
DURABLE_WRITES = true;
Next I read with Python (cassandra-driver 3.11) from Cassandra 3.11.3 and get an
error:
Er
Read in smaller pages so that read repair
succeeds, or increase your commitlog segment size from 32M to 128M or so until
the read repair actually succeeds.
On Tue, Jan 1, 2019 at 12:18 AM Vlad wrote:
Hi All and Happy New Year!!!
This year started with Cassandra 3.11.3 sometimes forcing level ALL despite
Also I see
WARN [ReadRepairStage:341] 2018-12-31 17:57:58,594 DataResolver.java:507 -
Encountered an oversized (37264537/16777216) read repair mutation for table
for about several hours (5-7) after the cluster restart, then it disappeared.
On Tuesday, January 1, 2019 1:10 PM, Vlad wrote:
-ef9c7c02b2c1-1546182664318-1.hints to endpoint
/10.123.123.123: d2c7bb82-3d7a-43b2-8791-ef9c7c02b2c1
And from now on there are a lot of DigestMismatch exceptions in the Cassandra log.
What's going on?
On Tuesday, January 1, 2019 10:18 AM, Vlad
wrote:
Hi All and Happy New Year!!!
This year started with Cassandra 3.11.3 sometimes forcing consistency level ALL
despite the query level being LOCAL_QUORUM (actually there is only one DC), and it
fails with a timeout.
As far as I understand, it can be caused by read repair attempts (we see
"DigestMismatch" errors in the Cassandra log).
UB) is a way to go.
C*heers,
---
Alain Rodriguez - alain@thelastpickle.com
France / Spain
The Last Pickle - Apache Cassandra Consulting
http://www.thelastpickle.com
On Mon, Oct 29, 2018 at 16:18, Vlad wrote:
Hi,
should autocompaction be disabled before running scrub? It seems that scrub
processes each newly created table and never ends.
Thanks.
Hi, this node isn't in system.peers on both nodes.
On Thursday, September 6, 2018 10:02 PM, Michael Shuler
wrote:
On 09/06/2018 01:48 PM, Vlad wrote:
> Hi,
> 3 node cluster, Cassandra 3.9, GossipingPropertyFileSnitch, one DC
>
> I removed dead node with `nodetool ass
On Wednesday, August 29, 2018 4:22 PM, Vlad
wrote:
Hi,
> You'll need to disable the native transport
Well, this is what I did already; it seems repair is running.
I'm not sure whether repair will finish within 3 hours.
Hi,
3-node cluster, Cassandra 3.9, GossipingPropertyFileSnitch, one DC.
I removed a dead node with `nodetool assassinate`. It was also a seed node, so I
removed it from the seeds list on the two other nodes and restarted them.
But I still see in the log
`DEBUG [GossipTasks:1] 2018-09-06 18:32:05,149 Gossiper.java
Strongly consider moving to RF=3, because RF=2 will lead you to
this type of situation again in the future and does not allow quorum reads with
fault tolerance. Good luck,
On Wed, Aug 29, 2018 at 1:56 PM Vlad wrote:
I restarted with cassandra.join_ring=false. nodetool status on other nodes shows
only 0 responses.
and nodetool compactionstats shows
pending tasks: 9
- system_schema.tables: 1
- system_schema.keyspaces: 1
- ks1.tb1: 4
- ks1.tb2: 3
On Wednesday, August 29, 2018 2:57 PM, Vlad
wrote:
I restarted with cassandra.join_ring=false. nodetool status on other nodes shows
At 21:22, Alexander Dejanovski wrote:
Hi Vlad, you must restart the node but first disable joining the cluster, as
described in the second part of this blog post:
http://thelastpickle.com/blog/2018/08/02/Re-Bootstrapping-Without-Bootstrapping.html
Once repaired, you'll have to run "nod
Will it help to set read_repair_chance to 1 (compaction is
SizeTieredCompactionStrategy)?
On Wednesday, August 29, 2018 1:34 PM, Vlad
wrote:
Hi,
Quite urgent questions: due to a disk and C* startup problem we were forced to
delete the commit logs on one of the nodes.
Now repair is running, but meanwhile some reads bring no data (RF=2).
Can this node be excluded from read queries, so that all reads are redirected to
the other node in the ring?
sely. My guess can be wrong.
About TWCS: http://thelastpickle.com/blog/2016/12/08/TWCS-part1.html
C*heers,
---
Alain Rodriguez - @arodream - alain@thelastpickle.com
France / Spain
The Last Pickle - Apache Cassandra Consulting
http://www.thelastpickle.com
2018-01-18 11:15 GMT+00:00 Vlad :
Hi,
I set default_time_to_live for an existing table. Does it affect existing data?
It seems the data should be deleted, but after compaction I don't see any disk space
freed as expected. The database has almost a year of data, the GC grace time is ten
days, and the TTL is also ten days on one table and 100 days on the other.
Hi,
I ran repair, and then I see that anticompaction started on all nodes. Does it mean
that all data is already repaired? Actually I increased the RF, so can I already
use the database?
Thanks.
Hi,
that's the issue, thanks!
On Sunday, August 13, 2017 2:49 PM, Christophe Schmitz
wrote:
Hi Vlad,
Are you by any chance inserting null values? If so you will create tombstones.
The workaround (Cassandra >= 2.2) is to use unset values on your bound statement (se
happen? I have several SASI indexes on this table; could this be the reason?
Regards, Vlad
Hi,
it's possible to create both a regular secondary index and a SASI index on the same
column:
CREATE TABLE ks.tb (id int PRIMARY KEY, name text);
CREATE CUSTOM INDEX tb_name_idx_1 ON ks.tb (name) USING
'org.apache.cassandra.index.sasi.SASIIndex';
CREATE INDEX tb_name_idx ON ks.tb (name);
But which one is used?
a kind of workaround by changing partition key distribution boundaries,
but is there a better way?
Regards, Vlad
> There's a system property (actually 2)
Which ones?
On Wednesday, April 19, 2017 9:17 AM, Jeff Jirsa wrote:
On 2017-04-12 11:30 (-0700), Vlad wrote:
> Interesting, there is no such explicit warning for v.3
> https://docs.datastax.com/en/cassandra/3.0/cassan
Hi All,
I have Cassandra with Spark on node 10.0.0.1
On another server I run
./spark-shell --master spark://10.0.0.1:7077
and then set
val conf = new SparkConf(true).set("spark.cassandra.connection.host", "10.0.0.1")
It works, but I see in tcpdump that the Spark node connects to Cassandra with
addres
activity streaming data to the other nodes as the token
ranges are reassigned.
-- Jacob Shadix
On Sat, Apr 8, 2017 at 10:55 AM, Vlad wrote:
Hi,
how should multiple nodes be decommissioned with "nodetool decommission" - one by
one or in parallel?
Thanks.
Re: version 2.1:
https://docs.datastax.com/en/cassandra/2.1/cassandra/operations/ops_add_node_to_cluster_t.html
-- Jacob Shadix
On Wed, Apr 12, 2017 at 1:48 PM, Vlad wrote:
But it seems OK to add multiple nodes at once, right?
On Tuesday, April 11, 2017 8:38 PM, Jacob Shadix
ly do one-by-one as the decommission will create
additional load/network activity streaming data to the other nodes as the token
ranges are reassigned.
-- Jacob Shadix
On Sat, Apr 8, 2017 at 10:55 AM, Vlad wrote:
Hi,
how multiple nodes should be decommissioned by "nodetool decommissio
Hi,
how should multiple nodes be decommissioned with "nodetool decommission" - one by
one or in parallel?
Thanks.
Actually, if the replication factor is equal to the total number of nodes with
SimpleStrategy, one copy will be placed on each node.
Does LOCAL_ONE know to choose the local (same) node with SimpleStrategy?
On Sunday, April 2, 2017 4:02 PM, Sam Tunnicliffe wrote:
auth logins for super users is 101 replicas s
Hi,
what is the suitable replication strategy for the system_auth keyspace? As I
understand it, the factor should be equal to the total number of nodes, so can we
use SimpleStrategy? Does it ensure that queries with LOCAL_ONE consistency level
will be targeted to the local DC (or the same node)?
Thanks.
sandra code enabling SO_KEEPALIVE on the storage port, only on the CQL port.
Nevertheless it works now, thanks again!
Here is a link to MSDN about this timeout:
https://blogs.msdn.microsoft.com/cie/2014/02/13/windows-azure-load-balancer-timeout-for-cloud-service-roles-paas-webworker/
Regards, Vlad
N (Up and Normal, I guess)
I suspected a connectivity problem, but tcpdump shows constant traffic on port
7001 between the nodes. Restarting the OTHER node than the one I'm connecting to
solves the problem for another several minutes. I increased the TCP idle timeout
in the Azure IP address settings to 30 minutes, but it had no effect.
Thanks, Vlad
set in all sstables as of 2.2.x for standard repairs, and therefore no manual
migration procedure should be necessary. It may still be a good idea to manually
migrate if you have a sizable amount of data and are using LCS, as anticompaction
is rather painful.
On Sun, Jun 19, 2016 at 6:37 AM, Vlad wrote:
Hi,
assuming I have a new, empty Cassandra cluster, how should I start using
incremental repairs? Is incremental repair the default now (as I don't see an -inc
option in nodetool) and nothing is needed to use it, or should we perform the
migration procedure anyway? And what happens to new column families?
uot;odds", to skip latest features and regression problem and to
ensure no critical bugs found so far. But from other side previous "odds" have
no latest fixes.
So what is general production choosing recommendation now?
Regards, Vlad
Tyler, thanks for the explanation!
So a commit log segment can contain data from both flushed table A and non-flushed
table B. How is it replayed on startup? Does C* skip the portions belonging to
table A that were already written to an SSTable?
Regards, Vlad
On Tuesday, March 1, 2016 11:37 PM, Tyler
ith J8? Any chance to use the original method to
reduce overhead and "be happy with the results"?
Regards, Vlad
On Tuesday, March 1, 2016 4:07 PM, Jack Krupansky
wrote:
I'll defer to one of the senior committers as to whether they want that
information disseminated any further.
Hi Jack,
> you can reduce the overhead per table an undocumented Jira
Can you please point to this Jira number?
> it is strongly not recommended
What are the consequences of this (besides performance degradation, if any)?
Thanks.
On Tuesday, March 1, 2016 7:23 AM, Jack Krupansky
wrote:
ze, why is there a difference in the commit log and memtable sizes?
Regards, Vlad
n in 1.0.6.
Any ideas?
Regards,
Vlad Paiu
OpenSIPS Developer
On 12/16/2011 11:02 AM, Vlad Paiu wrote:
Hello,
Sorry, wrong link in the previous email. Proper link is
http://svn.apache.org/viewvc/thrift/trunk/lib/c_glib/test/
Regards,
Vlad Paiu
OpenSIPS Developer
On 12/15/2011 08:35 PM, Vlad Paiu wrote:
Hello,
While digging more into this I've found these:
http://svn.apache.org/v
assandra server.
It should be a function generated by the Cassandra thrift interface, but I
can't seem to find the proper one.
Any help would be very much appreciated.
Regards,
Vlad
Mina Naguib wrote:
>
>Hi Vlad
>
>I'm the author of libcassie.
>
>For what it's wor
very basic Thrift & glibc example for
Cassandra, with no 'advanced' features like connection pooling,
asynchronous stuff, etc.
Just a plain & simple connection, an insert and a column fetch that
works with the latest 1.x Cassandra.
I think it would be a great starting point and w
at the API functions are called, and guessing them
is not going that well :)
I'll wait a little longer & see if anybody can help with the C thrift, or at
least tell me it's not working. :)
Regards,
Vlad
Eric Tamme wrote:
>On 12/14/2011 04:18 PM, Vlad Paiu wrote:
>> H
Hi,
Just tried libcassie and it seems it's not compatible with the latest Cassandra, as
even simple inserts and fetches fail with InvalidRequestException...
So can anybody please provide a very simple example in C for connecting &
fetching columns with Thrift?
Regards,
Vlad
Vlad Pa
't seem to find
what function to call to init a connection to a Cassandra instance. Any ideas ?
Thanks and Regards,
Vlad
Jeremiah Jordan wrote:
>If you are OK linking to a C++ based library you can look at:
>https://github.com/minaguib/libcassandra/tree/kickstart-libcassie-0.7/libca
g glibc thrift.
Does anyone have experience/some examples for this?
Regards,
Vlad
i...@iyyang.com wrote:
>BTW please use
>https://github.com/eyealike/libcassandra
>
>
>Best Regards,
>Yi "Steve" Yang
>~~~
>+1-401-441-5086
>+86-13910771510
>
/ThriftExamples I cannot find an example for C.
Can anybody suggest a C client library for Cassandra or provide some working
examples for Thrift in C?
Thanks and Regards,
Vlad
Hello,
Thanks for your answer. See my reply in-line.
On 11/04/2011 01:46 PM, Amit Chavan wrote:
Answers inline.
On Fri, Nov 4, 2011 at 4:59 PM, Vlad Paiu <vladp...@opensips.org> wrote:
Hello,
I'm a new user of Cassandra and I think it's great.
Stil
Is there any way to atomically reset a counter? I read on the
website that the only way to do it is to read the variable value and then
set it to -value, which seems rather bogus to me.
Regards,
--
Vlad Paiu
OpenSIPS Developer