C* version 2.0.12
How do I resolve item 2)? Just want to mention that when the node is stopped,
nodetool status does not show it as Down; it is missing from the list...
Thanks for the support.
On Thursday, February 26, 2015 2:52 AM, Robert Coli wrote:
On Wed, Feb 25, 2015 at 3:38 PM,
Hi Piotrek,
your disks are mostly idle as far as I can see (the one at 17% busy
isn't that high a load). One thing that came to mind: did you look at
the sizes of your sstables? I did this with something like
find /var/lib/cassandra/data -type f -size -1k -name "*Data.db" | wc -l
find /var/lib/c
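Expanding the command above into a small sketch (the data path is the common package-install default; adjust for your layout). A large count of sub-1 KiB Data.db files can suggest compaction is falling behind or memtables are being flushed too often:

```shell
# Count tiny sstable data files; the path and 1 KiB threshold are
# examples, not universal values.
data_dir=/var/lib/cassandra/data
tiny=$(find "$data_dir" -type f -size -1k -name "*Data.db" 2>/dev/null | wc -l | tr -d ' ')
echo "sstables under 1 KiB: $tiny"
```

The `2>/dev/null` keeps the command quiet on hosts where the directory does not exist.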
I've seen this before, when I tried to be clever and add nodes of a different
major version into a cluster. Any chance that's what's happening here?
> On Feb 25, 2015, at 4:52 PM, Robert Coli wrote:
>
>> On Wed, Feb 25, 2015 at 3:38 PM, Batranut Bogdan wrote:
>> I have a new node that I want
All the nodes have the same version: 2.0.12.
Hi Batranut,
apart from the other suggestions - do you have ntp running on all your
cluster nodes and are times in sync?
Jan
What is the current status of clearing snapshots on Windows?
When running Cassandra 2.1.3, trying manually to run clearSnapshot I get:
"FSWriteError… Caused by: java.nio.file.FileSystemException… File is used by
another process"
I know there have been numerous issues in JIRA trying to fix similar pr
I didn't know about this cfhistograms thing, very nice!
From: user@cassandra.apache.org
Subject: Re: Unexplained query slowness
Have a look at your column family histograms (nodetool cfhistograms,
iirc). If you notice things like a very long tail, a double hump, or
outliers, it would indicate som
Hello Jan,
Yes, I do have NTP and it is in sync.
On Thursday, February 26, 2015 11:49 AM, Jan Kesten wrote:
Hi Batranut,
apart from the other suggestions - do you have ntp running on all your
cluster nodes and are times in sync?
Jan
Any errors in your log file?
We saw something similar when bootstrap crashed when rebuilding
secondary indexes.
See CASSANDRA-8798
~mck
No errors in the system.log file:
[root@cassa09 cassandra]# grep "ERROR" system.log
[root@cassa09 cassandra]#
Nothing.
On Thursday, February 26, 2015 1:55 PM, mck wrote:
Any errors in your log file?
We saw something similar when bootstrap crashed when rebuilding
secondary indexes.
Se
We did this query; most of our files are less than 100 MB.
Our heap settings are as follows (they are calculated by the script in
cassandra-env.sh):
MAX_HEAP_SIZE="8GB"
HEAP_NEWSIZE="2GB"
which is the maximum recommended by DataStax.
What values do you think we should try?
On Thu, Feb 26, 2015 at 10:06 AM, Rol
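For reference, explicit heap overrides in cassandra-env.sh look like the sketch below. Note one detail: the JVM accepts the suffix `8G` for `-Xmx`, while a literal `8GB` would be rejected, so the values are usually written without the trailing `B`:

```shell
# cassandra-env.sh style overrides (example values from this thread;
# uncommenting these disables the script's auto-calculation).
MAX_HEAP_SIZE="8G"
HEAP_NEWSIZE="2G"
echo "heap=$MAX_HEAP_SIZE new=$HEAP_NEWSIZE"
```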
Hi, Ron
I looked deeper into my Cassandra files and the SSTables created during the
last day are less than 20 MB.
Piotrek
p.s. Your tips are really useful; at least I am starting to find where
exactly the problem is.
On Thu, Feb 26, 2015 at 3:11 PM, Ja Sam wrote:
> We did this query, most our files are l
Hi,
an 8 GB heap is a good value already - going above 8 GB will often result in
noticeable GC pause times in Java, but you can give 12G a try just to
see if that helps (and turn it back down again). You can add a "Heap
Used" graph in OpsCenter to get a quick overview of your heap state.
Best reg
This should be fixed in 3.0 by a combination of
https://issues.apache.org/jira/browse/CASSANDRA-8709 and
https://issues.apache.org/jira/browse/CASSANDRA-4050.
The changes in 8709 and 4050 are invasive enough that we didn't want to
target them for the 2.1 release, and that is actually a big part of why w
> I figured out the issue. I'm using a VM and the template I had did not
configure enough virtual memory. I'm not sure what the minimum is but 2048
seems to work.
Cassandra will use JNA to try to mlockall() all pages currently mapped into
the process address space. On very small systems (1024mb
Hi,
I found many similar lines in the log:
INFO [SlabPoolCleaner] 2015-02-24 12:28:19,557 ColumnFamilyStore.java:850 - Enqueuing flush of customer_events: 95299485 (5%) on-heap, 0 (0%) off-heap
INFO [MemtableFlushWriter:1465] 2015-02-24 12:28:19,569 Memtable.java:339 - Writing Memtable-customer_eve
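A rough way to see which tables drive these flushes is to count the "Enqueuing flush" lines per table. This is a sketch: the log path is a common default and the sed pattern assumes the message format shown above:

```shell
# Count memtable-flush log lines per table, most frequent first.
log=/var/log/cassandra/system.log
counts=$(grep "Enqueuing flush of" "$log" 2>/dev/null \
  | sed 's/.*Enqueuing flush of \([^:]*\):.*/\1/' \
  | sort | uniq -c | sort -rn)
out="${counts:-no matching lines found}"
echo "$out"
```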
> Can I get the data owned by a particular node, generate a per-node sum
> by iterating over the data in its virtual nodes, and then produce the
> total by summing across all nodes?
>
You're pretty much describing a map/reduce job using CqlInputFormat.
Hi All,
Using DSE 4.6.1 (Spark 1.1.0.2).
I am having trouble convincing BlockManager in a Spark Worker process to use a
machine’s IP instead of its hostname. Everything else in the Spark Workers is
using the IP correctly.
ERROR 2015-02-26 21:49:01 org.apache.spark.network.SendingConnection: Er
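Spark generally binds and advertises the address given in SPARK_LOCAL_IP, set in spark-env.sh on each worker; whether DSE 4.6.1's BlockManager honours it is an assumption worth verifying rather than a confirmed fix. A sketch (the address is an example):

```shell
# Force an IP rather than a resolved hostname for this Spark process;
# normally placed in spark-env.sh on every worker.
export SPARK_LOCAL_IP=10.10.1.5
echo "SPARK_LOCAL_IP=$SPARK_LOCAL_IP"
```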
Hi all,
My team is using Cassandra as our database. We have one question, below.
As we know, rows with the same partition key will be stored on the same node.
But how many rows can one partition key hold? What does it depend on? The node's
volume, or the partition data size, or the partition row count (th
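As a pointer for investigating this: the hard limit in this era of Cassandra is commonly cited as about two billion cells per partition, but the practical constraint is partition size, since very large partitions hurt compaction and reads. One way to see how large partitions actually get is `nodetool cfstats`, which reports the compacted maximum per table; the names below are placeholders and the command is guarded for off-cluster use:

```shell
# cfstats includes "Compacted partition maximum/mean bytes" per table.
stats=$(nodetool cfstats my_keyspace.my_table 2>&1 || echo "nodetool unavailable here")
echo "$stats"
```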