Hi,
I recommissioned a node after decommissioning it.
That happened (1) after a successful decommission (checked), (2) without
wiping the data directory on the node, (3) simply by restarting the
cassandra service. The node now reports itself healthy and up and running.
Knowing that I issued the "r
Thanks Alex.
But is there any workaround possible? I can't believe that everyone reads &
processes all rows at once (without pagination).
Thanks
Ajay
On Feb 10, 2015 11:46 PM, "Alex Popescu" wrote:
>
> On Tue, Feb 10, 2015 at 4:59 AM, Ajay wrote:
>
>> 1) The Java driver implicitly supports pagination
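On the workaround question: the driver's automatic paging already avoids
reading all rows at once. Setting a fetch size makes the ResultSet pull rows
from the server one page at a time as it is iterated. A minimal sketch against
the DataStax Java driver 2.x API (the contact point, keyspace, and table names
below are placeholders):

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.SimpleStatement;
import com.datastax.driver.core.Statement;

public class FetchSizeExample {
    public static void main(String[] args) {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect("mykeyspace");

        // Ask the server for 100 rows at a time instead of everything at once.
        Statement stmt = new SimpleStatement("SELECT * FROM mytable");
        stmt.setFetchSize(100);

        ResultSet rs = session.execute(stmt);
        for (Row row : rs) {
            // The driver transparently fetches the next page when the current
            // one is exhausted, so memory stays bounded by the fetch size.
            System.out.println(row);
        }
        cluster.close();
    }
}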
I already experienced the same problem (hundreds of thousands of SSTables)
with Cassandra 2.1.2. It seems to appear when running an incremental repair
while there is a medium to high insert load on the cluster. The repair goes
into a bad state and starts creating way more SSTables than it should (eve
This kind of recovery is definitely not my strong point, so feedback on
this approach would certainly be welcome.
As I understand it, if you really want to keep that data, you ought to be
able to mv it out of the way to get your node online, then move those files
back in, several thousand at a time, n
Can you run nodetool tpstats and check if there are pending requests on
GossipStage?
The timeout should not affect gossip (AFAIK).
As for problems you can have with this state: if your nodes are marked
down for long and you are using hinted handoff, your hints may not be
delivered and your dat
Yeah... probably just 2.1.2 things and not compactions. Still, you probably
want to do something about the 1.6 million files though. It may be worth
just mv/rm'ing the 60 sec rollup data unless you're really attached to it.
Chris
On Tue, Feb 10, 2015 at 4:04 PM, Paul Nickerson wrote:
> I was having
I was having trouble with snapshots failing while trying to repair that
table (http://www.mail-archive.com/user@cassandra.apache.org/msg40686.html).
I have a repair running on it now, and it seems to be going successfully
this time. I am going to wait for that to finish, then try a
manual nodetool
Your cluster is probably having issues with compactions (with STCS you
should never have this many). I would probably punt with
OpsCenter/rollups60. Turn the node off and move all of the sstables off to
a different directory for backup (or just rm if you really don't care about
1 minute metrics),
Hi Deepak,
Thanks.
Got it working by adding the withCredentials method:

cluster = Cluster.builder()
        .addContactPoint(node)
        .withCredentials("yourusername", "yourpassword")
        .build();
On Wed, Feb 11, 2015 at 2:03 AM, Deepak Shetty wrote:
> see the API docs
> http://www.data
See the API docs:
http://www.datastax.com/drivers/java/2.0/index.html
Cluster.builder has a withCredentials method.
regards
deepak
On Tue, Feb 10, 2015 at 12:24 PM, Chamila Wijayarathna <
cdwijayarat...@gmail.com> wrote:
> Hello all,
>
> I changed the authenticator value of my Cassandra database t
Hello all,
I changed the authenticator value of my Cassandra database to
'PasswordAuthenticator' in the cassandra.yaml file. Previously I used the
following code to connect to the database using Java:

public void connect(String node) {
    cluster = Cluster.builder()
            .addContactPoint(node
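For reference, combining the truncated snippet with the withCredentials fix
posted above in the thread, a complete version of the method might look like
this (the class name and session field are assumptions; the credentials are
placeholders):

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

public class CassandraClient {
    private Cluster cluster;
    private Session session;

    public void connect(String node) {
        cluster = Cluster.builder()
                .addContactPoint(node)
                // Needed once PasswordAuthenticator is enabled in cassandra.yaml.
                .withCredentials("yourusername", "yourpassword")
                .build();
        session = cluster.connect();
    }
}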
Thank you Rob. I tried a 12 GiB heap size, and it still crashed. There
are 1,617,289 files under OpsCenter/rollups60.
Once I downgraded Cassandra to 2.1.1 (apt-get install cassandra=2.1.1), I
was able to start up Cassandra OK with the default heap size formula.
Now my cluster is running multiple
Are you hitting long GCs on your nodes? You can check the GC log or look at
the Cassandra log for GCInspector.
Chris
On Tue, Feb 10, 2015 at 1:28 PM, Cheng Ren wrote:
> Hi Carlos,
> Thanks for your suggestion. We did check the NTP settings and clocks, and
> they are all working normally. Schema versions are
Hi Carlos,
Thanks for your suggestion. We did check the NTP settings and clocks, and
they are all working normally. Schema versions are also consistent with
peers'.
BTW, the only change we made was to set some of the nodes' request
timeouts (read_request_timeout, write_request_timeout, range_request_timeou
On Tue, Feb 10, 2015 at 11:02 AM, Paul Nickerson wrote:
> I am getting an out of memory error when I try to start Cassandra on one of
> my nodes. Cassandra will run for a minute, and then exit without outputting
> any error in the log file. It is happening while SSTableReader is opening a
> couple
I am getting an out of memory error when I try to start Cassandra on one of
my nodes. Cassandra will run for a minute, and then exit without outputting
any error in the log file. It is happening while SSTableReader is opening a
couple hundred thousand things.
I am running a 6 node cluster using Apa
Thank you Reynald. I have contributed to that issue. But I cannot
participate further right now because I'm having an out-of-memory issue
which may be unrelated. I think I'll start a new thread on this list for
that.
~ Paul Nickerson
On Thu, Jan 29, 2015 at 11:15 AM, Reynald Bourtembourg <
On Mon, Feb 9, 2015 at 5:25 PM, Seth Edwards wrote:
> I see what you are saying. So basically take whatever existing token I
> have and divide it by 2, give or take a couple of tokens?
>
Yep! "bisect the token ranges" if you want to be fancy about it.
=Rob
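To make the arithmetic concrete, here is a minimal sketch of bisecting a
token range, assuming Murmur3Partitioner (tokens span -2^63 to 2^63 - 1; the
example tokens are made up). BigInteger sidesteps long overflow near the
extremes:

import java.math.BigInteger;

public class TokenBisect {
    // Midpoint of the range between two adjacent ring tokens (non-wrapping case).
    static long midpoint(long left, long right) {
        return BigInteger.valueOf(left)
                .add(BigInteger.valueOf(right))
                .divide(BigInteger.valueOf(2))
                .longValue();
    }

    public static void main(String[] args) {
        // Hypothetical neighbouring tokens on a Murmur3 ring.
        long existing = Long.MIN_VALUE; // minimum Murmur3 token, -2^63
        long next = 0L;
        // A new node at this token takes roughly half of the existing range.
        System.out.println(midpoint(existing, next)); // -4611686018427387904
    }
}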
On Tue, Feb 10, 2015 at 4:59 AM, Ajay wrote:
> 1) The Java driver implicitly supports pagination in the ResultSet (using an
> Iterator), which can be controlled through the fetch size. But it is limited
> in that we cannot skip ahead or go back. The FetchState is not exposed.
Cassandra doesn't support sk
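Since the paging state is not exposed by the driver at this point, a common
workaround is keyset paging: remember the last clustering key served and
restart the query from there. A sketch, assuming a hypothetical table
events(pk text, seq bigint, payload text, PRIMARY KEY (pk, seq)):

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.PreparedStatement;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;

public class KeysetPaging {
    public static void main(String[] args) {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect("mykeyspace");

        PreparedStatement page = session.prepare(
                "SELECT seq, payload FROM events WHERE pk = ? AND seq > ? LIMIT 100");

        // Resume point; could also arrive from a client request.
        long lastSeq = Long.MIN_VALUE;
        while (true) {
            ResultSet rs = session.execute(page.bind("some-partition", lastSeq));
            if (rs.isExhausted()) break; // no more rows past lastSeq
            for (Row row : rs) {
                lastSeq = row.getLong("seq"); // remember where we stopped
                System.out.println(row.getString("payload"));
            }
        }
        cluster.close();
    }
}

Going back a page then just means keeping the earlier resume points around;
skipping straight to an arbitrary page N, however, still has no server-side
shortcut.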
Thank you!
I would like to add that OpsCenter is a valuable tool for my work. Thanks
for your work!
Kind regards
Björn
--
Björn Hachmann
metrigo GmbH
NEW ADDRESS:
Lagerstraße 36
20357 Hamburg
p: +49 40 2093108-88
Managing Directors: Christian Müller, Tobias Schlottke, Philipp Westermeyer
Die Ge
Hi,
I am working on exposing the Cassandra Query APIs (Java driver) as REST APIs
for our internal project.
To support pagination, I looked at the Cassandra documentation, source code,
and other forums.
What I mean by pagination support is like below:
1) Client fires a query to the REST server
2) Server
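The message is truncated here, but one stateless way to finish the design is
for the REST server to return each page of rows together with an opaque
cursor encoding the last key served; the client echoes the cursor back to get
the next page. A sketch (all names hypothetical), pairing with the
keyset-paging example earlier in the thread:

import java.nio.charset.StandardCharsets;
import java.util.Base64;

// A minimal opaque page cursor for a REST API over Cassandra. The cursor
// just encodes the last clustering key served; the next request resumes
// with "WHERE pk = ? AND seq > ?".
public class PageCursor {
    static String encode(long lastSeq) {
        return Base64.getUrlEncoder().encodeToString(
                Long.toString(lastSeq).getBytes(StandardCharsets.UTF_8));
    }

    static long decode(String cursor) {
        return Long.parseLong(new String(
                Base64.getUrlDecoder().decode(cursor), StandardCharsets.UTF_8));
    }

    public static void main(String[] args) {
        String cursor = encode(12345L);      // returned to the REST client
        System.out.println(decode(cursor));  // 12345, used for the next page
    }
}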