Hi
I am adding a new expiring column to an existing column family in
Cassandra. I want this new column to expire at the same time as all the
other expiring columns in the column family.
One way of doing this is to get the TTL of the existing expiring columns in
that CF and set that value in my new
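As a sketch of that approach in cqlsh (table and column names are
hypothetical), the remaining TTL of an existing column can be read with the
TTL() function and reused on the write, so the new column expires at roughly
the same time:

  cqlsh> SELECT TTL(existing_col) FROM my_cf WHERE key = 'k1';

   ttl(existing_col)
  -------------------
               86400

  cqlsh> UPDATE my_cf USING TTL 86400 SET new_col = 'v' WHERE key = 'k1';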
Hi, Duncan:
Thanks for your reply, but it didn't help.
yzhang@yzhangmac1:~/dse/bin$ ./cqlsh hostname 9160 -u user -p password
Connected to P2 QA Cluster at xxx:9160.
[cqlsh 3.1.2 | Cassandra 1.2.18.1 | CQL spec 3.0.0 | Thrift protocol 19.36.2]
Use HELP for help.
cqlsh> use myKeyspace;
cqlsh:myKeyspac
Hi,
On 26/02/15 01:24, java8964 wrote:
...
select * from "myTable";

 59 | 336 | 1100390163336 | A | [{"updated_at":1424844362530,"ids":"668e5520-bb71-11e4-aecd-00163e56be7c"}]
 59 | 336 | 1100390163336 | D | [{"updated_at":1424844365062,"ids":"668e5520-bb71-11e4-aecd-00163e56be7c"}]
Obvio
On Wed, Feb 25, 2015 at 3:38 PM, Batranut Bogdan wrote:
> I have a new node that I want to add to the ring. The problem is that
> nodetool says UJ. I have left it for several days and the status has not
> changed. In OpsCenter it is "seen" as in an unknown cluster.
>
If I were you, I would do the
Here is the version of the cqlsh and Cassandra I am using:
yzhang@yzhangmac1:~/dse/bin$ ./cqlsh hostname 9160 -u username -p
passwordConnected to P2 QA Cluster at c1-cass01.roving.com:9160.[cqlsh 3.1.2 |
Cassandra 1.2.18.1 | CQL spec 3.0.0 | Thrift protocol 19.36.2]Use HELP for
help.cqlsh> use m
Hello all,
I have a new node that I want to add to the ring. The problem is that nodetool
says UJ. I have left it for several days and the status has not changed. In
OpsCenter it is "seen" as in an unknown cluster.
From the time that I started it, it was streaming data, and the data size is
5,9
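Two commands commonly used to check a joining node's progress (a sketch, not
from the original message):

  nodetool netstats   # shows active streams on the joining node
  nodetool status     # UJ = Up/Joining; should flip to UN once the join completes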
Have a look at your column family histograms (nodetool cfhistograms, iirc);
if you notice things like a very long tail, a double hump, or outliers, it
would indicate something wrong with your data model, or that you have a hot
partition key (or keys).
Also, looking at your 99th and 95th percentile latencies will just h
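For example (keyspace and column family names are hypothetical):

  nodetool cfhistograms my_keyspace my_cf

The "SSTables" column shows how many SSTables each read touched, and the
read/write latency columns will expose a long tail or outliers.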
Cassandra 1.2.19
We would like to turn on Cassandra's internal security (PasswordAuthenticator
and CassandraAuthorizer) on the ring (moving away from AllowAll). (Clients are
already passing credentials in their connections.) However, I know all nodes
have to be switched to those before the basic secur
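For reference, a sketch of the relevant cassandra.yaml settings, which each
node needs before a rolling restart:

  authenticator: org.apache.cassandra.auth.PasswordAuthenticator
  authorizer: org.apache.cassandra.auth.CassandraAuthorizer

It is also generally advised to raise the replication factor of the
system_auth keyspace so credentials stay readable when nodes are down.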
I figured out the issue. I'm using a VM and the template I had did not
configure enough virtual memory. I'm not sure what the minimum is but 2048
seems to work. For the record, I'm using
/usr/share/cassandra/lib/jna-4.1.0.jar
Thanks for all of the tips!
Garret
On Wed, Feb 25, 2015 at 11:13 AM
CentOS 6 and every major version of C* from 1.1 through 2.1, but I would be
curious if there's maybe a memory leak or something fixed between 3.2.4 and
3.2.7...? AFAIK, it's only used for memlocking the heap and creating
hardlinks for snapshots, both of which work.
On Wed, Feb 25, 2015 at 2:53 PM,
Hi,
Check how many active CompactionExecutors are showing in "nodetool tpstats".
Maybe your concurrent_compactors is too low. Enforce 1 per CPU core,
even though it's the default value on 2.1.
Some of our nodes were running with 2 compactors, but we have an 8-core CPU...
After that, monitor your nodes to b
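As a quick check (the output line is illustrative):

  $ nodetool tpstats | grep -i compaction
  CompactionExecutor    2    140    123456    0    0

and in cassandra.yaml (the value is an example for an 8-core box):

  concurrent_compactors: 8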
Also I always install JNA from the JNA page.
I did the installation for this blog post in CentOS 6.5:
http://www.pythian.com/blog/from-0-to-cassandra-an-exhaustive-approach-to-installing-cassandra/
Regards,
Carlos Juzarte Rolo
Cassandra Consultant
Pythian - Love your data
rolo@pythian | Twitte
Hello,
I always install JNA into the lib directory of Java itself.
Since I normally have Java in /opt/java, I put the JNA into /opt/java/lib.
~$ grep JNA /var/log/cassandra/system.log
INFO HH:MM:SS JNA mlockall successful
Regards,
Carlos Juzarte Rolo
Cassandra Consultant
Pythian - Love your d
Hi,
On this page
http://www.datastax.com/documentation/cassandra/2.0/cassandra/install/installJnaRHEL.html
it says
"Cassandra requires JNA 3.2.7 or later. Some Yum repositories may provide
earlier versions"
and at the bottom
"If you can't install using Yum or it provides a version of the JNA
We've been using jna-3.2.4-2.el6.x86_64 with the Sun/Oracle JDK for
probably 2 years now, and it works just fine. Where are you seeing that 3.2.7
is required? I searched the pages you linked and that string isn't even in
there.
Regardless, I assure you the newest jna that ships in the EL6 repo works
wi
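To see which JNA version the EL6 repo actually provides, something like:

  $ yum info jna | grep -i version
  $ rpm -q jna
  jna-3.2.4-2.el6.x86_64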
Hello,
I'm having problems getting Cassandra to start with the configuration
listed above.
Yum wants to install 3.2.4-2.el6 of the JNA along with several other
packages, including java-1.7.0-openjdk.
The documentation states that a JNA version earlier than 3.2.7 should not
be used, so the jar file
Hi,
One more thing: Hinted Handoff for the last week for all nodes was less than 5.
For me every READ is a problem, because it must open too many files (3
SSTables), which shows up as errors in reads, repairs, etc.
Regards
Piotrek
On Wed, Feb 25, 2015 at 8:32 PM, Ja Sam wrote:
> Hi,
> It is not o
Hi,
It is not obvious, because the data is replicated to the second data center. We
check it "manually" for random records we put into Cassandra, and we find
all of them in the secondary DC.
We know about every single GC failure, but this doesn't change anything.
The problem with GC failure is only one: restart
I think you may have a vicious circle of errors: because your data is not
properly replicated to the neighbour, it is not replicating to the
secondary data center (yeah, obvious). I would suspect the GC errors are
(also obviously) the result of a backlog of compactions that take out the
neighbour (
Hi Roni,
The repair result is the following (we ran it on Friday): Cannot proceed on
repair because a neighbor (/192.168.61.201) is dead: session failed
But to be honest, the neighbor did not die. It seemed to trigger a series
of full GC events on the initiating node. The results from the logs are:
[2015-0
Hi Piotr,
Are your repairs finishing without errors?
Regards,
Roni Balthazar
On 25 February 2015 at 15:43, Ja Sam wrote:
> Hi, Roni,
> They aren't exactly balanced, but as I wrote before, they are in the range
> 2500-6000.
> If you need exact data I will check them tomorrow morning. But all n
Hi,
We have a small cluster running 2.0.12, and after adding a new node to it,
running nodetool cleanup fails on every old node with "AssertionError:
Memory was freed". It seems to be fixed in 2.0.13; see
https://issues.apache.org/jira/browse/CASSANDRA-8716. Is there any
workaround for this in 2.0.1
Hi, Roni,
They aren't exactly balanced, but as I wrote before, they are in the range
2500-6000.
If you need exact data I will check them tomorrow morning. But all nodes
in AGRAF have a small increase in pending compactions during the last week,
which is the "wrong direction".
I will check in the morning get
Hi Piotr,
What about the nodes on AGRAF? Are the pending tasks balanced between
this DC's nodes as well?
You can check the pending compactions on each node.
Also try to run "nodetool getcompactionthroughput" on all nodes and
check if the compaction throughput is set to 999.
Cheers,
Roni Balthazar
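A quick way to run that check across the ring (host names are hypothetical):

  for h in node1 node2 node3; do
    echo "== $h =="; ssh "$h" nodetool getcompactionthroughput
  done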
Hi Roni,
It is not balanced. As I wrote to you last week, I have problems only in the DC
to which we write (on the screenshot it is named AGRAF:
https://drive.google.com/file/d/0B4N_AbBPGGwLR21CZk9OV1kxVDA/view). The
problem is on ALL nodes in this DC.
In the second DC (ZETO) only one node has more than 30 SSTab
Hi Ja,
How are the pending compactions distributed between the nodes?
Run "nodetool compactionstats" on all of your nodes and check if the
pending tasks are balanced or concentrated in only a few nodes.
You can also check if the SSTable count is balanced by running
"nodetool cfstats" on y
I do NOT have SSDs; I have normal HDDs grouped as JBOD.
My CF uses SizeTieredCompactionStrategy.
I am using LOCAL_QUORUM for reads and writes. To be precise, I have a lot of
writes and almost 0 reads.
I changed "cold_reads_to_omit" to 0.0, as someone suggested to me. I set
compactionthroughput to 999.
So i
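For reference, the two changes mentioned above would look roughly like this
(keyspace/table names are hypothetical):

  cqlsh> ALTER TABLE my_keyspace.my_cf
         WITH compaction = {'class': 'SizeTieredCompactionStrategy',
                            'cold_reads_to_omit': 0.0};

  $ nodetool setcompactionthroughput 999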
>
> If you could be so kind as to validate the above and give me an answer: are my
> disks a real problem or not? And give me a tip on what I should do with
> the above cluster? Maybe I have a misconfiguration?
>
>
>
Your disks are effectively idle. What consistency level are you using for
reads and writes?
Actua
Hi Gaurav,
I recommend you just run a MapReduce job for this computation.
Alternatively, you can look at the code for the C* MapReduce input format:
https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/hadoop/cql3/CqlInputFormat.java
That should give you what you need to
I am sorry if this is too basic and you have already looked at it, but the first
thing I would ask about would be the data model.
What data model are you using (how is your data partitioned)? What queries are
you running? If you are using ALLOW FILTERING, for instance, it will be very
easy to say why it's
You can use query tracing to check what is happening. Also, you can fire up
jconsole/Java VisualVM and pull out some metrics, like the 99th percentile read
MBeans for that column family.
A simpler check is using cfstats and looking for weird numbers (a high number
of SSTables; if you are deleting, check how many tombstones per s
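Tracing is easiest from cqlsh (the query is hypothetical):

  cqlsh> TRACING ON;
  Now tracing requests.
  cqlsh> SELECT * FROM my_keyspace.my_cf WHERE key = 'k1';

The trace printed after the result shows each step with elapsed microseconds,
including how many SSTables and tombstoned cells the read touched.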
Our Cassandra database just rolled to live last night. I’m looking at our query
performance, and overall it is very good, but perhaps 1 in 10,000 queries takes
several hundred milliseconds (up to a full second). I’ve grepped for GC in the
system.log on all nodes, and there aren’t any recent GC e
I read that I shouldn't install a version ending in anything less than 6. But I
started with 2.1.0. Then I upgraded to 2.1.3.
But as far as I know, I cannot downgrade it
On Wed, Feb 25, 2015 at 12:05 PM, Carlos Rolo wrote:
> Your latency doesn't seem high enough to cause that problem. I suspect
> more of a
Your latency doesn't seem high enough to cause that problem. I suspect
more of a problem with the Cassandra version (2.1.3) than with the
hard drives. I didn't look deep into the information provided, but for your
reference, the only time I had serious (leading to OOM and all sorts of
weird
Hi,
I wrote some questions before about my problems with the C* cluster. All my
environment is described here:
https://www.mail-archive.com/user@cassandra.apache.org/msg40982.html
To sum up, I have thousands of SSTables in one DC and far fewer in the second.
I write only to the first DC.
Anyway, after reading
Good morning,
Sorry for the slow reply here. I finally had some time to test cqlsh tracing on
a ccm cluster with 2 of 3 nodes down, to see if the unavailable error was due
to cqlsh or my query. Reply inline below.
On 15/01/2015 12:46, "Tyler Hobbs"
<ty...@datastax.com> wrote:
On Thu, J