This vulnerability is only exposed if someone can access your JMX port. If you
lock down access to the JMX port, then you can avoid it.
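For example, here is a minimal sketch of what that can look like in
conf/cassandra-env.sh (the password file path below is just a placeholder):

    # bind JMX to localhost only, so it is not reachable over the network
    LOCAL_JMX=yes

    # or, if remote JMX access is genuinely needed, at least require auth
    JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.authenticate=true"
    JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.password.file=/etc/cassandra/jmxremote.password"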
-Jeremiah
> On Sep 2, 2020, at 3:36 AM, Sam Tunnicliffe wrote:
>
> Hi Manish,
>
> Unfortunately I'm afraid that, as far as I'm aware, there is not.
>
> Thanks,
> Sam
Just FYI, if you want to be able to operationally do things to many nodes at a
time, you should look at setting up racks. With the number of racks equal to RF,
you can take down all the nodes in a given rack at once without affecting
LOCAL_QUORUM. Your single-token example has the same functionality in this
respect as a vnodes c
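As a hedged illustration (keyspace, DC, and rack names here are placeholders):
with 3 racks per DC and RF=3, each rack holds one replica, so LOCAL_QUORUM
(2 of 3) still succeeds with an entire rack down.

    # conf/cassandra-rackdc.properties on each node (GossipingPropertyFileSnitch):
    #   dc=DC1
    #   rack=RACK1          # RACK2 / RACK3 on the other two racks
    cqlsh -e "ALTER KEYSPACE my_ks
              WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1': 3};"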
The easiest way to figure out what happened is to examine the system log; it
will tell you. But I'm pretty sure your nodes got new tokens during that time.
If you want to get back the data inserted during the 2 hours, you could use
sstableloader to send all the data from the /var
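Roughly like this (the host list and the sstable directory are placeholders;
point it at a copy of the table's data directory):

    # stream the sstables of one table back into the live cluster
    sstableloader -d 10.0.0.1,10.0.0.2 /var/lib/cassandra/data/my_ks/my_table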
Just an FYI: DSE Search does not run in its own JVM; it runs in the same JVM
that Cassandra is running in. DSE Search also has out-of-the-box integration
with Spark map/reduce.
> On Jun 16, 2015, at 9:42 AM, Andres de la Peña wrote:
>
> Thanks for your interest.
>
> I am not familiar with
You probably want to re-think your data model here. 50 million rows per
partition is not going to be optimal. You will be much better off keeping that
down to hundreds of thousands per partition in the worst case.
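As a sketch of what that usually means (the table and column names below are
made up for illustration): add a time or hash bucket to the partition key so
each partition stays in the hundreds of thousands of rows.

    cqlsh -e "
      CREATE TABLE my_ks.events (
        sensor_id  text,
        day        text,        -- bucket, e.g. '2014-06-05'
        event_time timeuuid,
        payload    blob,
        PRIMARY KEY ((sensor_id, day), event_time)
      );"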
-Jeremiah
On Jun 5, 2014, at 8:29 PM, Xu Zhongxing wrote:
> Is writing too man
That looks like you started the initial nodes with num_tokens=1, then later
switched to vnodes by setting num_tokens to 256, and then added that new node
with 256 vnodes from the start. Am I right?
Since you don't have very much data, the easiest way out of this will be to
decommission the original nod
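Roughly (run on each original single-token node, one at a time, letting each
finish streaming before starting the next):

    nodetool decommission
    # watch streaming progress from any node
    nodetool netstats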
Unless the issue is "I have some giant partitions mixed in with non-giant ones,"
the usual reason for "data size imbalance" is that STCS is being used.
You can look at nodetool cfhistograms and cfstats to get info about partition
sizes.
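For example (keyspace and table names are placeholders):

    nodetool cfhistograms my_ks my_table   # partition size / cell count percentiles
    nodetool cfstats my_ks.my_table        # per-table stats, incl. max partition bytes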
If you copy the data off to a test node, and run "nodetool compa
Russell,
The hinted handoff manager is checking for hints to see if it needs to pass
those off during the decommission, so that the hints don't get lost. You most
likely have a lot of hints, a bunch of tombstones, or something else in the
table causing the query to time out. You aren't seeing any
Ariel,
DSE lets you specify an "Analytics" virtual data center. You can then
replicate your keyspaces over to that data center and run your Analytics jobs
against it; as long as those jobs use the LOCAL_ consistency levels, they
won't hit your real-time nodes, and vice versa. So th
Also, in terms of overhead: on the server side it is pretty much all at the
Column Family (CF)/Table level, so 100 keyspaces with 1 CF each is the same as
1 keyspace with 100 CFs.
-Jeremiah
On Mar 11, 2014, at 10:36 AM, Jeremiah D Jordan wrote:
> The use of more than one keyspac
The use of more than one keyspace is not uncommon. Using 100s of them is.
That being said, different keyspaces let you specify different replication and
different authentication. If you are not going to be doing one of those
things, then there really is no point to multiple keyspaces. If yo
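A hedged example of the replication case (keyspace and DC names are
placeholders; separate keyspaces are only worthwhile here because the settings
actually differ):

    cqlsh -e "CREATE KEYSPACE realtime_ks WITH replication =
                {'class': 'NetworkTopologyStrategy', 'DC1': 3};"
    cqlsh -e "CREATE KEYSPACE analytics_ks WITH replication =
                {'class': 'NetworkTopologyStrategy', 'DC1': 3, 'Analytics': 2};"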
Also, it might be this that is causing the high heap:
https://issues.apache.org/jira/browse/CASSANDRA-6541
-Jeremiah
On Feb 18, 2014, at 5:01 PM, Jonathan Ellis wrote:
> Sounds like you have CMSInitiatingOccupancyFraction set close to 60.
> You can raise that and/or figure out how to use less heap.
TL;DR you need to run repair in between doing those two things.
Full explanation:
https://issues.apache.org/jira/browse/CASSANDRA-2434
https://issues.apache.org/jira/browse/CASSANDRA-5901
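A minimal sketch of the repair step (the keyspace name is a placeholder; -pr
repairs only each node's primary ranges, so run it on every node):

    nodetool repair -pr my_ks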
Thanks,
-Jeremiah Jordan
On Nov 25, 2013, at 11:00 AM, Christopher J. Bottaro wrote:
> Hello,
>
> We rec
Paulo,
If you have large data sizes, then the vnodes-with-Hadoop issue is moot: you
will get that many splits with or without vnodes. The issues come when you
don't have a lot of data, because then all the extra splits slow everything
down to a crawl, since there are 256 times as many tasks created as y
http://www.datastax.com/documentation/cassandra/1.2/webhelp/index.html#cassandra/configuration/configVnodesProduction_t.html
On Sep 18, 2013, at 9:41 AM, Chris Burroughs wrote:
> http://www.datastax.com/documentation/cassandra/1.2/webhelp/index.html#cassandra/operations/ops_add_dc_to_cluster_t.h
Thanks for everyone's work on this release!
-Jeremiah
On Sep 3, 2013, at 8:48 AM, Sylvain Lebresne wrote:
> The Cassandra team is very pleased to announce the release of Apache Cassandra
> version 2.0.0. Cassandra 2.0.0 is a new major release that adds numerous
> improvements[1,2], including:
>
> post, I'm using the
> Simple placement strategy, with the RackInferringSnitch. How does that play
> into the bugs mentioned previously about cross-DC replication?
>
> MN
>
> On 08/30/2013 01:28 PM, Jeremiah D Jordan wrote:
>> You probably want to go to 1.0.11/12 first no matter what
You need to introduce the new "vnode enabled" nodes in a new DC, or you will
have issues similar to https://issues.apache.org/jira/browse/CASSANDRA-5525
Add vnode DC:
http://www.datastax.com/documentation/cassandra/1.2/webhelp/index.html#cassandra/operations/ops_add_dc_to_cluster_t.html
Point c
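A hedged sketch of those last steps once the new DC's nodes are up (keyspace
and DC names are placeholders):

    # replicate the keyspace into the new vnode DC
    cqlsh -e "ALTER KEYSPACE my_ks WITH replication =
                {'class': 'NetworkTopologyStrategy', 'DC1': 3, 'DC2_vnodes': 3};"
    # then, on each node in the new DC, stream the existing data over
    nodetool rebuild -- DC1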
FYI:
http://techblog.netflix.com/2012/02/aegisthus-bulk-data-pipeline-out-of.html
-Jeremiah
On Aug 30, 2013, at 9:21 AM, "Hiller, Dean" wrote:
> is there a SSTableInput for Map/Reduce instead of ColumnFamily (which uses
> thrift)?
>
> We are not worried about repeated reads since we are idem
You probably want to go to 1.0.11/12 first no matter what. If you want the
least chance of issues, you should then go to 1.1.12. While there is a high
probability that going from 1.0.X->1.2 will work, you have the best chance of
no failures if you go through 1.1.12. There are some edge cases th
Pretty sure you can put the list in the yaml file too.
-Jeremiah
On Jul 12, 2013, at 3:09 AM, aaron morton wrote:
>> Can he not specify all 256 tokens in the YAML of the new
>> cluster and then copy sstables?
>> I know it is a bit ugly but should work.
> You can pass a comma sepa
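For reference, a rough sketch of collecting the tokens and what the
cassandra.yaml entry would look like (flags and output parsing may vary by
version, so verify on yours):

    # on the matching source node, print its 256 tokens as one comma-separated list
    nodetool info -T | awk '/^Token/ {print $3}' | paste -sd, -
    # then, in cassandra.yaml on the corresponding new node:
    #   num_tokens: 256
    #   initial_token: <the comma-separated list from above>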
To force a tombstone to be cleaned out (a rough sketch of the commands follows
after this list):
1. Stop doing deletes on the CF, or switch to performing all deletes at consistency level ALL.
2. Run a full repair of the cluster for that CF.
3. Change GC grace to be small, like 5 seconds or so, for that CF.
Either:
4. Find all sstables which have that row key in them using ss
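A hedged sketch of steps 2-4 (keyspace, table, data path, and key are
placeholders; the truncated tool name in step 4 is assumed to be sstablekeys,
which prints hex-encoded keys):

    # step 2: full repair of that table across the cluster
    nodetool repair my_ks my_table
    # step 3: shrink gc_grace_seconds for the table
    cqlsh -e "ALTER TABLE my_ks.my_table WITH gc_grace_seconds = 5;"
    # step 4 (assumed): find which sstables contain the hex-encoded row key
    for f in /var/lib/cassandra/data/my_ks/my_table/*-Data.db; do
        sstablekeys "$f" | grep -q 6d794b6579 && echo "$f contains the key"
    done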
If you are using 1.2, I would check out https://github.com/mstump/libcql
-Jeremiah
On Jul 2, 2013, at 5:18 AM, Shubham Mittal wrote:
> I am trying to run below code, but it gives this error. It compiles without
> any errors. Kindly help me.
> (source of the code :
> http://posulliv.github.io/