> After searching some documents on the DataStax website and some old tickets, it
> seems that this works for the random partitioner only and leaves the
> order-preserving partitioner out of luck.
Links ?
> or allow add Virtual Nodes manually?
I've not looked into it, but there is a cassandra.initial_token sta
> I.e. a query for a single column works, but the column does not appear in slice
> queries, depending on the other columns in the query
>
> cfq.getKey("foo").getColumn("A") returns "A"
> cfq.getKey("foo").withColumnSlice("A", "B") returns "B" only
> cfq.getKey("foo").withColumnSlice("A","B","C") retu
Can you provide the specs of the column family using describe?
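For reference, the slice behavior being described should act like a lookup of several named columns out of one sorted row. Here is a minimal sketch of the *expected* semantics, using a plain sorted map as a stand-in for the row (the column names follow the getColumn/withColumnSlice calls quoted above; the stand-in itself is not Astyanax):

```java
import java.util.*;

public class SliceSemantics {
    public static void main(String[] args) {
        // Stand-in for row "foo": sorted column name -> value.
        NavigableMap<String, String> row = new TreeMap<>();
        row.put("A", "valA");
        row.put("B", "valB");
        row.put("C", "valC");

        // getColumn("A") -> single-column read.
        System.out.println(row.get("A")); // valA

        // withColumnSlice("A", "B") should return BOTH named columns,
        // not just "B" as observed in the report above.
        Map<String, String> result = new TreeMap<>();
        for (String name : new TreeSet<>(Arrays.asList("A", "B"))) {
            if (row.containsKey(name)) result.put(name, row.get(name));
        }
        System.out.println(result.keySet()); // [A, B]
    }
}
```

If the single-column read succeeds but the multi-column slice drops a column, the data is there and the slice path is at fault, which matches the flush-related symptom described later in the thread.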
From: Kuldeep Mishra [mailto:kuld.cs.mis...@gmail.com]
Sent: Tuesday, January 29, 2013 12:37 PM
To: user@cassandra.apache.org
Subject: getting error for decimal type data
while I am trying to list column family data using cassandra-cli th
Thanks for the reply. Here is some information:
Do you have wide rows ? Are you seeing logging about "Compacting wide rows" ?
* I don't see any log about "wide rows"
Are you seeing GC activity logged or seeing CPU steal on a VM ?
* There is some GC, but CPU general is under 20%. We have heap
I am trying out the example given in Cassandra Definitive guide, Ch 12.
This statement gives error and I am not able to figure out the replacement
for it:
*ConfigHelper.setThriftContact(job.getConfiguration(), "localhost", 9160);*
Also,
*IColumn column = columns.get(columnName.getBytes());*
*Str
ColumnFamily: STUDENT
Key Validation Class: org.apache.cassandra.db.marshal.LongType
Default column value validator:
org.apache.cassandra.db.marshal.BytesType
Columns sorted by: org.apache.cassandra.db.marshal.UTF8Type
GC grace seconds: 864000
Compaction min/max thresh
Did you try accessing this CF from CQL? I think it should work from there. Also try
accessing it through any API and see if the error persists.
Thanks
Rishabh Agrawal
From: Kuldeep Mishra [mailto:kuld.cs.mis...@gmail.com]
Sent: Tuesday, January 29, 2013 2:51 PM
To: user@cassandra.apache.org
Subject: Re:
On 29 Jan 2013, at 08:08, aaron morton wrote:
>> From what I could read there seems to be a contention issue around the
>> flushing (the "switchlock" ?). Cassandra would then be slow, but not using
>> the entire CPU. I would be back in the strange situation I was in when I reported
>> my issue in
In hadoop-0.20.2, org.apache.hadoop.mapreduce.JobContext is a class. Looks like
in hadoop-0.21+ JobContext has morphed into an interface.
I'd guess that Hadoop support in Cassandra is based on the older Hadoop.
Brian
On Jan 29, 2013, at 3:42 AM, Tejas Patil wrote:
> I am trying to run a map-red
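The incompatibility Brian describes is binary-level: call sites compiled against a class are linked differently than call sites compiled against an interface, so a jar built against hadoop-0.20.x typically fails with an IncompatibleClassChangeError on 0.21+. A quick reflective way to check which kind of type is actually on your classpath (a sketch; JobContext itself is not on this sketch's classpath, so java.util stand-ins are used):

```java
public class KindCheck {
    public static void main(String[] args) {
        // Stand-ins: in hadoop-0.20.x JobContext is a class (like ArrayList),
        // in 0.21+ it is an interface (like List).
        System.out.println(java.util.ArrayList.class.isInterface()); // false
        System.out.println(java.util.List.class.isInterface());      // true

        // With Hadoop on the classpath you would check:
        // Class.forName("org.apache.hadoop.mapreduce.JobContext").isInterface()
    }
}
```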
When connecting to Cassandra 1.2.0 from CQLSH the table was created with:
CREATE KEYSPACE test WITH replication = {'class': 'SimpleStrategy',
'replication_factor' : 1};
cqlsh> use test;
cqlsh:test> create columnfamily users (KEY varchar Primary key, password
varchar, gender varchar) ;
cqlsh:test
I think you need the jna jar and the jna-platform jar in the Cassandra lib folder
-chandra
On Mon, Jan 28, 2013 at 10:02 PM, Tim Dunphy wrote:
> I went to github to try to download jna again. I downloaded version 3.5.1
>
> [root@cassandra-node01 cassandrahome]# ls -l lib/jna-3.5.1.jar
> -rw-r--r-- 1 ro
Hi,
I have some trouble to request my data. I use SSTableSimpleUnsortedWriter to
write SSTable. Writing and Importing works fine.
I think, I'm misusing CompositeType.Builder with SSTableSimpleUnsortedWriter.
Do you have any idea ?
Thanks
Here is my case :
/**
* CREATE STATEMENT
*/
CREATE TAB
I misunderstood this:
http://www.datastax.com/dev/blog/virtual-nodes-in-cassandra-1-2 , especially
"If you want to get started with vnodes on a fresh cluster, however, that is
fairly straightforward. Just don't set the initial_token parameter in
your conf/cassandra.yaml and instead enable th
Hello,
I've been testing a cluster of four identical Cassandra 1.2 nodes for a number
of days. I have written a C# client using cassandra-sharp which inserts
data into a table.
The keyspace definition is
CREATE KEYSPACE "data"
WITH REPLICATION = {'class' : 'NetworkTopologyStrategy', 'dc1' : 3};
Forgot to mention that I also used
ALTER KEYSPACE "Keyspace1" WITH REPLICATION =
{ 'class' : 'SimpleStrategy', 'replication_factor' : 3 };
To change the replication factor for Keyspace1. For some reason the command
line doesn't let me change the replication factor. I get the following error:
Una
Thanks very much Aaron.
* Other nodes still report it is in "Joining"
* Here are bootstrap information in the log
[ca...@dsat305e.prod:/usr/local/cassy log]$ grep -i boot system.log
INFO [main] 2013-01-28 20:16:07,488 StorageService.java (line 774)
JOINING: schema complete, ready to bootstrap
I
Hi Chandra,
Thanks for your reply. Well I have added both jna.jar and platform.jar to
my lib directory (jna 3.3.0):
[root@cassandra-node01 cassandrahome]# ls -l lib/jna.jar lib/platform.jar
-rw-r--r-- 1 root root 865400 Jan 29 12:14 lib/jna.jar
-rw-r--r-- 1 root root 841291 Jan 29 12:14 lib/platf
One more question: can I add a virtual node manually, without rebooting and
rebuilding a host's data?
I checked the nodetool command; there is no option to add a node.
Thanks.
Zhong
On Jan 29, 2013, at 11:09 AM, Zhong Li wrote:
> I misunderstood this
> http://www.datastax.com/dev/blog/virtual-node
Try downloading jna-3.5.1.jar and copying into the lib directory. I made
the same mistake :)
On Jan 29, 2013 5:20 PM, "Tim Dunphy" wrote:
> Hi Chandra,
>
> Thanks for your reply. Well I have added both jna.jar and platform.jar to
> my lib directory (jna 3.3.0):
>
> [root@cassandra-node01 cassandr
Oops, you've already done that. I've used the same method for Java 6 and
Java 7.
On Jan 29, 2013 6:35 PM, "Jabbar" wrote:
> Try downloading jna-3.5.1.jar and copying into the lib directory. I made
> the same mistake :)
> On Jan 29, 2013 5:20 PM, "Tim Dunphy" wrote:
>
>> Hi Chandra,
>>
>> Thanks f
Chandra,
Try adding the following option, which may give you more info in the log or
console.
-Xcheck:jni
Do you have any custom C++ libraries using the JNA interface? You should add
your custom libraries to LD_LIBRARY_PATH
or provide them via -Djava.library.path.
Yogi
On Tue, Jan 29, 2013 at 1
Sure thing. Here is a console dump showing the error. Notice that column '9801'
is NOT NULL on the first two queries but IS NULL on the last query. I get this
behavior constantly on any writes that coincide with a flush. The column is
always readable by itself but disappears depending on the oth
I've heard that on Amazon EC2 I should be using ephemeral drives...but I
want/need to be using encrypted volumes.
On my local machine I use cryptsetup to encrypt a device and then mount it
and so on...but on Amazon I get the error:
"Cannot open device /dev/xvdb for read-only access".
Reading fur
> How can I check for this secondary index read fails?
Your description was that reads which use a secondary index (not the row key)
failed…
> if I do a simple “list ;” the data is shown, but if I do a “get
> where =’’;”
If you can retrieve the row using its row key, but not via the secondar
> * Will try it tomorrow. Do I need to restart server to change the log level?
You can set it via JMX, and supposedly log4j is configured to watch the config
file.
Cheers
-
Aaron Morton
Freelance Cassandra Developer
New Zealand
@aaronmorton
http://www.thelastpickle.com
On 29/0
> I am trying out the example given in Cassandra Definitive guide, Ch 12.
That book may be out of date.
You might be better off with info from
http://www.datastax.com/docs/1.1/cluster_architecture/hadoop_integration and
http://wiki.apache.org/cassandra/HadoopSupport as well as the sample in the
The cli is probably trying to read more data than it can keep in memory.
Try using the LIMIT clause for the list statement, or getting a single row, to
reduce the size of the read.
Alternatively, try increasing the heap size for the cassandra-cli in
bin/cassandra-cli
> Built indexes: [STU
Brian,
Could you raise a ticket at
https://issues.apache.org/jira/browse/CASSANDRA ?
Thanks
-
Aaron Morton
Freelance Cassandra Developer
New Zealand
@aaronmorton
http://www.thelastpickle.com
On 30/01/2013, at 1:23 AM, Brian Jeltema wrote:
> In hadoop-0.20.2, org.apach
On Tue 29 Jan 2013 03:39:17 PM CST, aaron morton wrote:
So If I write to CF Users with rowkey="dean"
and to CF Schedules with rowkey="dean", it is actually one row?
In my mental model that's correct.
A RowMutation is a row key and a collection of (internal) ColumnFamilies which
contain the
About as definitive as the word "maybe". O'Reilly's SEO keeps it close to the top of
search results, but it's probably not the thing you want.
On Tuesday, January 29, 2013, aaron morton wrote:
> I am trying out the example given in Cassandra Definitive guide, Ch 12.
>
> That book may be out of date.
> You mi
Hey Aaron,
It gives compilation errors saying that the method is undefined.
Thanks,
Tejas Patil
On Tue, Jan 29, 2013 at 4:17 PM, Edward Capriolo wrote:
>
> About as definitive as the word "maybe". O'Reilly's SEO keeps it close to the top
> of search results, but it's probably not the thing you want.
>
>
>
Pretty sure you are looking for something like:
// thrift input job settings
ConfigHelper.setInputRpcPort(job.getConfiguration(), "9160");
ConfigHelper.setInputInitialAddress(job.getConfiguration(), "127.0.0.1");
ConfigHelper.setInputPartitioner(job.getConfiguration(), "RandomPartitioner");
// th
I really need this running. I cannot get the hadoop-0.20.2 tarball from the
Apache Hadoop project website. Is there any place where I can get it?
thanks,
Tejas Patil
On Tue, Jan 29, 2013 at 1:10 PM, aaron morton wrote:
> Brian,
> Could you raise a ticket at
> https://issues.apache.org/jira/brow
http://archive.apache.org/dist/hadoop/core/ has older releases.
On Tue, Jan 29, 2013 at 8:08 PM, Tejas Patil wrote:
> I really really need this running. I cannot get hadoop-0.20.2 tarball
> from apache hadoop project website. Is there any place where I can get it ?
>
> thanks,
> Tejas Patil
>
>
>
Hi
I tried setting the storage port in program using
System.setProperty("cassandra.storage_port" , "7002").
Still not able to communicate with the ByteOrdered cluster. Seems like the port
is still pointing to 7000. Not sure how to validate this setting.
Any inputs related to this would be reall
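One thing worth checking: a system property only takes effect if it is set before the code that reads it runs; if the configuration is captured in a static initializer, a later setProperty is silently ignored. A minimal, self-contained sketch of that ordering pitfall (the property name is taken from the message above; the Config class here is a stand-in, not Cassandra's):

```java
public class PropertyOrdering {
    // Stand-in config class: reads the property once, at class-initialization
    // time, the way a static initializer in a server/config class would.
    static class Config {
        static final String PORT =
            System.getProperty("cassandra.storage_port", "7000");
    }

    public static void main(String[] args) {
        // Set BEFORE Config is first touched -> the override is seen.
        System.setProperty("cassandra.storage_port", "7002");
        System.out.println(Config.PORT); // 7002

        // Setting it again now is too late: the value was already captured.
        System.setProperty("cassandra.storage_port", "7005");
        System.out.println(Config.PORT); // still 7002
    }
}
```

Also note that storage_port (7000) is the inter-node port; a Thrift client normally connects to rpc_port (9160), so it is worth double-checking which port your client actually needs.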
We had this issue before, but after adding those two jars the error was gone.
We used Cassandra 1.0.8 (JNA 3.3.0, JNA Platform 3.3.0). What version of
Cassandra are you using?
-chandra
On Tue, Jan 29, 2013 at 12:19 PM, Tim Dunphy wrote:
> Hi Chandra,
>
> Thanks for your reply. Well I have added
Hi Chandra,
I'm using Cassandra 1.2.1 and jna/platform 3.5.1.
One thing I should mention is that I tried putting the jar files into my
java jre/lib directory. The theory being those jars would be available to
all java apps. In that case Cassandra will start but still not recognize
JNA. If I cop
To get started, look at:
HintedHandoff: http://wiki.apache.org/cassandra/HintedHandoff
Operations: http://wiki.apache.org/cassandra/Operations (specifically the repair
and repair -pr operations)
There should be a ton of information on this you can easily Google.
Best,
Michael
From: "dong.yajun" mai
Hi all,
I am running 1.1.9 with 2 data centers and 3 nodes each. Recently I have been
seeing a terrible key cache hit rate (around 1-3%) with a 98% row cache hit
rate. The seed node appears to take higher traffic than the other nodes
(approximately twice) but I believe I have astyanax configu
thanks Michael.
I found it - :)
---
Rick
On Wed, Jan 30, 2013 at 11:31 AM, Michael Kjellman
wrote:
> To get started, look at:
>
> HintedHandoff: http://wiki.apache.org/cassandra/HintedHandoff
> Operations: http://wiki.apache.org/cassandra/Operations (specifically
> repair and repair –pr operati
hey List,
I am considering a way to read all the data from a column family; the following
is my thinking:
1. make a snapshot of a specific column family on all nodes in the cluster at
the same time,
2. copy those sstables from the cassandra nodes to local disk,
3. compact those sstables into a single one
How often do you need to do this? How many rows in your column families?
If it's not a frequent operation you can just page the data n number of rows at
a time using nothing special but C* and a driver.
Another option: you can write a map/reduce job if you need an entire CF to
be an input
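The "page the data n rows at a time" idea looks roughly like this, sketched against an in-memory list standing in for the cluster; a real implementation would page via your driver's range/token queries, and the fetchPage helper here is hypothetical:

```java
import java.util.*;

public class PagingSketch {
    // Hypothetical stand-in for a driver call: return up to 'limit' rows
    // starting at 'offset'. A real driver would page by token or by the
    // last-seen row key rather than a numeric offset.
    static List<String> fetchPage(List<String> allRows, int offset, int limit) {
        int end = Math.min(offset + limit, allRows.size());
        return offset >= end ? Collections.<String>emptyList()
                             : allRows.subList(offset, end);
    }

    public static void main(String[] args) {
        List<String> allRows = new ArrayList<>();
        for (int i = 0; i < 10; i++) allRows.add("row-" + i);

        int pageSize = 3, seen = 0, offset = 0;
        while (true) {
            List<String> page = fetchPage(allRows, offset, pageSize);
            if (page.isEmpty()) break;   // no more data
            seen += page.size();         // process the page here
            offset += page.size();       // advance past what we've read
        }
        System.out.println(seen); // 10
    }
}
```

The design point is simply that each iteration asks for a bounded amount of data and resumes from where the last page ended, so the client's memory use stays flat regardless of CF size.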
Thanks Michael.
> How many rows in your column families?
about 500w (5 million) rows, each row has about 1 KB of data.
> How often do you need to do this?
once a day.
> example Hadoop map/reduce jobs in the examples folder
thanks, I have seen the source code; it uses the Thrift API as the
recordReader to
Yes, wide rows, but it doesn't seem horrible by any means. People have gotten by
with Thrift for many years in the community. If you are running this once a day,
latency shouldn't be a major concern, and I doubt the protocol is going to be
your primary bottleneck.
To answer your qu
And finally to make wide rows with C* and Hadoop even better, these problems
have already been solved by tickets such as (not inclusive):
https://issues.apache.org/jira/browse/CASSANDRA-3264
https://issues.apache.org/jira/browse/CASSANDRA-2878
And a nicer, more up-to-date doc from the 1.1 branch from
Hi,
Thx for the great support.
I have checked everything and after a rebuild_index all data were searchable. I
will upgrade to 1.1.9 asap.
Many thx,
Br,
Matthias Zeilinger
Production Operation - Shared Services
P: +43 (0) 50 858-31185
M: +43 (0) 664 85-34459
E: matthias.zeilin...@bwinparty.com