OK, I am able to understand the problem now. The issue is:
If I create a column family altercations as:
CREATE TABLE altercations (
instigator text,
started_at timestamp,
s
Hi Brian,
Thanks for these references. These will surely help, as I am on my way to
integrating them within Kundera.
Surprisingly, the column family itself was not created with the example I was trying.
Thanks again,
-Vivek
On Tue, Oct 9, 2012 at 8:33 AM, Brian O'Neill wrote:
> Hey Vivek,
>
> The same
Hey Vivek,
The same thing happened to me the other day. You may be missing a component in
your compound key.
See this thread:
http://mail-archives.apache.org/mod_mbox/cassandra-dev/201210.mbox/%3ccajhhpg20rrcajqjdnf8sf7wnhblo6j+aofksgbxyxwcoocg...@mail.gmail.com%3E
I also wrote a couple blogs
Certainly. These are available with CQL 3 only!
The example mentioned on the DataStax website is working fine; the only difference is I
tried a compound primary key with 3 composite columns in place of 2
-Vivek
On Tue, Oct 9, 2012 at 7:57 AM, Arindam Barua wrote:
>
> Did you use the “--cql3
Did you use the "--cql3" option with the cqlsh command?
From: Vivek Mishra [mailto:mishra.v...@gmail.com]
Sent: Monday, October 08, 2012 7:22 PM
To: user@cassandra.apache.org
Subject: Using compound primary key
Hi,
I am trying to use a compound primary key column name and I am referring to:
http:
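For reference, the option mentioned above is passed when launching the shell, e.g.:
cqlsh --cql3
(host and port arguments are omitted here; this is only the invocation, not a full session.)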
Hi,
I am trying to use a compound primary key with Cassandra and I am referring
to:
http://www.datastax.com/dev/blog/whats-new-in-cql-3-0
I have created a column family as:
CREATE TABLE altercations (
instigator text,
started_at timestamp,
ships_destroyed int,
energy_use
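For reference, a complete compound-key definition in the style of the linked DataStax post might look like the sketch below; the trailing columns and the PRIMARY KEY clause are assumptions here, since the message above is truncated:
CREATE TABLE altercations (
    instigator text,
    started_at timestamp,
    ships_destroyed int,
    energy_used float,            -- assumed column
    alliance_involvement boolean, -- assumed column
    PRIMARY KEY (instigator, started_at, ships_destroyed)  -- three-component key, as tried in the thread
);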
Hello.
In the process of trying to streamline and provide better reporting
for various data storage systems, I've realized that although we're
verifying that nodetool repair runs, we're not verifying that it is
successful.
I found a bug relating to the exit code for nodetool repair, where, in
som
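A minimal wrapper sketch for catching a failed run (the keyspace name is hypothetical, and the exit-code bug mentioned above may make the status unreliable):
nodetool repair my_keyspace
if [ $? -ne 0 ]; then echo "repair reported failure" >&2; exit 1; fi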
I did wait for at least 5 minutes before terminating it. Also, it sometimes
results in a server crash as well, though the data volume is not very large.
-Vivek
On Tue, Oct 9, 2012 at 7:05 AM, Vivek Mishra wrote:
> It was on 1 node and there are no errors in the server logs.
>
> -Vivek
>
>
> On Tue, Oct 9, 201
It was on 1 node and there are no errors in the server logs.
-Vivek
On Tue, Oct 9, 2012 at 1:21 AM, aaron morton wrote:
> get User where user_name = 'Vivek', it is taking ages to retrieve that
>> data. Is there anything I am doing wrong?
>>
> How long is ages and how many nodes do you have?
> Are ther
The problem was that I calculated 3 tokens for the random partitioner but
used them with BOP, so the nodes were not supposed to be loaded evenly.
That's OK, I got it.
But what I don't understand is why nodetool ring shows equal ownership.
This is an example:
I created a small cluster with BOP and three tokens
00
> It works pretty fast.
Cool.
Just keep an eye out for how big the lucene token row gets.
Cheers
-
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com
On 7/10/2012, at 2:57 AM, Oleg Dulin wrote:
> So, what I ended up doing is this --
>
> As I write m
What is the CF schema?
> Is it not possible to include a column in both the set clause and in the
> where clause? And if it is not possible, how come?
Not sure.
Looks like you are looking for a conditional update here. You know the row is
at ID 1 and you only want to update if locked = 'fal
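As a sketch of the pattern the question seems to be about (table and column names are hypothetical, since the CF schema isn't shown in the thread):
UPDATE locks SET locked = 'true'
WHERE id = 1 AND locked = 'false';   -- rejected: the WHERE clause may only reference primary key columns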
> Is it still an issue if you don't run a repair within gc_grace_seconds ?
There is a potential issue.
You want to make sure the tombstones are distributed to all replicas *before*
gc_grace_seconds has expired. If they are not you can have a case where some
replicas compact and purge their tombs
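For reference, the window being discussed is a per-table setting; a minimal CQL sketch (hypothetical table name, value shown is the usual ten-day default):
ALTER TABLE my_table WITH gc_grace_seconds = 864000;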
If you are restoring the backup to get back to a previous point in time, then you
will want to remove all hints from the cluster. You will also want to stop
recording them; IIRC the only way to do that is via a yaml config.
If you are restoring the data to recover from some sort of loss, then kee
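A minimal cassandra.yaml sketch of the setting this likely refers to (assumed to be the hint-recording switch in question):
hinted_handoff_enabled: false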
Hi!
In the last 3 days I have seen many messages of "READ messages dropped in last
5000ms" on one of the nodes of my 3-node cluster.
I see no errors in the log.
There are also messages of "Finished hinted handoff of 0 rows to endpoint"
but I had those for a while now, so I don't know if they are related.
I am runnin
> get User where user_name = 'Vivek', it is taking ages to retrieve that data.
> Is there anything I am doing wrong?
>
How long is ages and how many nodes do you have?
Are there any errors in the server logs?
When you do a get by secondary index at a CL higher than ONE, every RFth node is
involved.
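A small CQL sketch of the kind of lookup involved (table and index names are hypothetical; the CLI 'get' quoted above is the equivalent operation):
CREATE INDEX user_name_idx ON users (user_name);
SELECT * FROM users WHERE user_name = 'Vivek';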
This is an issue with using the BOP.
If you are just starting out, stick with the Random Partitioner.
Cheers
-
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com
On 5/10/2012, at 10:33 AM, Andrey Ilinykh wrote:
> It was my first thought.
> Then I md5
Not sure why you have two different definitions for the bars2 CF.
You will need to create SSTables that match the schema Cassandra has.
Cheers
-
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com
On 5/10/2012, at 7:15 AM, T Akhayo wrote:
> Good even
> In short the question is whether the row_cache_size_in_mb can exceed the heap
> setting for cassandra 1.1.4 if jna.jar is present in the libs?
Yes.
AFAIK jna.jar is not required for the off-heap row cache in 1.1.X
> My heap settings are 8G and new heap size is 1600M.
You can reduce the size of t
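For reference, the 1.1-era settings involved look roughly like this in cassandra.yaml (values illustrative; the serializing provider is the off-heap one):
row_cache_size_in_mb: 2048
row_cache_provider: SerializingCacheProvider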
AFAIK in the code the minimum exclusive token value is -1, so as a signed
integer the maximum value is 2**127
Cheers
-
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com
On 4/10/2012, at 3:19 AM, Carlos Pérez Miguel wrote:
> Hello,
>
> Reading the wiki
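As a worked example of that range: with the RandomPartitioner, evenly spaced initial tokens for a 3-node cluster would be i * 2**127 / 3 for i = 0, 1, 2, i.e. 0, 2**127/3 and 2*2**127/3 (assuming the token space Aaron describes, from 0 up to 2**127).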
I'm attempting to plot how "busy" the node is with compactions, but there
seem to be only a few metrics reported that might be suitable:
CompletedTasks, PendingTasks, TotalBytesCompacted,
TotalCompactionsCompleted.
It's not clear to me what the difference between CompletedTasks and
TotalCompactio
Hello,
I am using BulkOutputFormat to load data from a .csv file into Cassandra. I am
using Cassandra 1.1.3 and Hadoop 0.20.2. I have 7 Hadoop nodes: 1
namenode/jobtracker and 6 datanodes/tasktrackers. Cassandra is installed on 4
of these 6 datanodes/tasktrackers. The issue happens when I have mo
So what should the solution be for a Cassandra architecture when we need to run
Hadoop M/R jobs and not be restricted by the number of CFs?
What we have now is a fair amount of CFs (> 2K) and this number is slowly
growing, so we are already planning to merge partitioned CFs. But our next goal is to
run Hadoop ta
Thanks very much, Adeel! It works much better!
Thierry
Please upgrade Java to 1.7.x, then it will work.
Thanks & Regards
Adeel Akbar
On 10/8/2012 1:36 PM, Thierry Templier wrote:
Hello,
I would like to upgrade Cassandra to version 1.1.5 but I have a
problem when trying to st
Hi,
We're running a small Cassandra cluster (1.1.4) with two nodes,
serving data to our web and Java application. After upgrading
Cassandra from 1.0.8 to 1.1.4, we're starting to see some weird issues.
If we run the 'ring' command from the second node, it shows that it failed to
connect to 7199 o
Please upgrade Java to 1.7.x, then it will work.
Thanks & Regards
Adeel Akbar
On 10/8/2012 1:36 PM, Thierry Templier wrote:
Hello,
I would like to upgrade Cassandra to version 1.1.5 but I have a
problem when trying to start this version:
$ ./cassandra -f
xss = -ea -javaagen
Hello,
I would like to upgrade Cassandra to version 1.1.5 but I have a problem
when trying to start this version:
$ ./cassandra -f
xss = -ea -javaagent:./../lib/jamm-0.2.5.jar -XX:+UseThreadPriorities
-XX:ThreadPriorityPolicy=42 -Xms1024M -Xmx1024M -Xmn256M
-XX:+HeapDumpOnOutOfMemoryError -