Thanks Jonathan. I added an extra directory that was big enough and at least the
bootstrap worked.
However, it brings me to questions that I can't seem to find an answer to.
In my case, I had a 2-node cluster to start with, a replication factor of 2, and
basically all data residing on both machines. Both nodes h
And having said all that: Azure Table storage model doesn't look like
Cassandra. There is a schema, there are partition keys. It more
resembles something like VoltDB than the map of maps (of maps) of
Cassandra (and BigTable, and HBase).
b
On Wed, Sep 8, 2010 at 2:20 PM, Peter Harrison wrote:
They are not copying Cassandra with that, as it was in development for
some time before Cassandra was released (possibly even before
Cassandra development started). The BigTable-esque aspects, if they
are 'copied' from anywhere, are copied from BigTable, just as they are
in Cassandra. The underly
trunk
On Wed, Sep 8, 2010 at 10:54 PM, Alex Burkoff wrote:
> Well, 7.0beta1 rejects those patches. Is there a specific revision I can try
> applying them to ?
>
> Alex.
>
> From: Jonathan Ellis [jbel...@gmail.com]
> Sent: Wednesday, September 08, 2010 6:48
Well, 7.0beta1 rejects those patches. Is there a specific revision I can try
applying them to ?
Alex.
From: Jonathan Ellis [jbel...@gmail.com]
Sent: Wednesday, September 08, 2010 6:48 PM
To: user@cassandra.apache.org
Subject: Re: ColumnFamilyOutputFormat an
this is fixed in trunk for beta2
On Wed, Sep 8, 2010 at 9:52 PM, welcome wrote:
> Hello!
>
> I encounter the same problem with you,and I replace
> client.insert("index".getBytes("UTF-8"), parent,
> column,ConsistencyLevel.ONE);to:
> client.insert("inde".getBytes("UTF-8"), parent, column,Consi
Hello!
I ran into the same problem as you, and I replaced
client.insert("index".getBytes("UTF-8"), parent, column, ConsistencyLevel.ONE);
with:
client.insert("inde".getBytes("UTF-8"), parent, column, ConsistencyLevel.ONE);
That works. But when I insert one more letter again, it's wrong as b
You can't build Cassandra against trunk thrift, the API has changed.
Stick to the one shipped w/ Cassandra and you will be fine.
On Wed, Sep 8, 2010 at 5:41 PM, Alex Burkoff wrote:
> With the trunk version and given patches I am now getting following exception:
>
> 10/09/08 22:39:14 WARN mapred.L
In case the community is interested, here is my gmetric collector:
http://github.com/scottnotrobot/gmetric/tree/master/database/cassandra/
Note: I have only tested with a special CSV mode of gmetric... you can
bypass this mode and use vanilla gmetric with --nocsv, but beware it will
generate over 100 f
Thank you. That was helpful. But as mentioned in the comments section of
http://prettyprint.me/2010/02/14/running-cassandra-as-an-embedded-service/,
the embedded server cannot be shut down unless the JVM is shut down, due to a
design limitation in Cassandra. Is there a specific reason for thi
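For anyone else hitting this, here is a minimal sketch of the embedded-server pattern from the linked post; EmbeddedCassandraService, its init() method, and the daemon-thread usage follow that post and Cassandra's test sources, and may differ between versions, so treat the names as assumptions rather than a definitive API.

// Sketch, assuming the EmbeddedCassandraService wrapper described in the post.
public class EmbeddedCassandraExample {
    public static void main(String[] args) throws Exception {
        // storage-conf.xml and log4j config are expected on the classpath.
        EmbeddedCassandraService cassandra = new EmbeddedCassandraService();
        cassandra.init();

        // There is no stop() hook, which is the limitation discussed above:
        // running on a daemon thread means the server only goes away when
        // the JVM itself exits.
        Thread t = new Thread(cassandra);
        t.setDaemon(true);
        t.start();

        // ... run tests against localhost:9160 via Thrift ...
    }
}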
With the trunk version and the given patches I am now getting the following exception:
10/09/08 22:39:14 WARN mapred.LocalJobRunner: job_local_0001
java.lang.ClassCastException: [B cannot be cast to java.nio.ByteBuffer
at
org.apache.cassandra.hadoop.ColumnFamilyRecordWriter.write(ColumnFamilyReco
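The [B-to-ByteBuffer cast failure is what you see when the reducer still emits byte[] keys while the record writer expects java.nio.ByteBuffer. A minimal, illustrative reducer sketch under a ByteBuffer-based signature follows; the class name, generics, and column layout are assumptions for illustration, not code from this thread.

import java.io.IOException;
import java.nio.ByteBuffer;
import java.util.Collections;
import java.util.List;

import org.apache.cassandra.thrift.Column;
import org.apache.cassandra.thrift.ColumnOrSuperColumn;
import org.apache.cassandra.thrift.Mutation;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

// Sketch: row keys and column names/values are wrapped in ByteBuffers, not byte[].
public class SketchReducer extends Reducer<Text, Text, ByteBuffer, List<Mutation>> {
    @Override
    protected void reduce(Text key, Iterable<Text> values, Context ctx)
            throws IOException, InterruptedException {
        Column c = new Column();
        c.setName(ByteBuffer.wrap("value".getBytes("UTF-8")));
        c.setValue(ByteBuffer.wrap(values.iterator().next().toString().getBytes("UTF-8")));
        c.setTimestamp(System.currentTimeMillis());

        ColumnOrSuperColumn cosc = new ColumnOrSuperColumn();
        cosc.setColumn(c);
        Mutation m = new Mutation();
        m.setColumn_or_supercolumn(cosc);

        // The row key must also be a ByteBuffer under the changed API.
        ctx.write(ByteBuffer.wrap(key.toString().getBytes("UTF-8")),
                  Collections.singletonList(m));
    }
}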
Try the patches on
https://issues.apache.org/jira/browse/CASSANDRA-1434 (or wait until
they're committed to trunk, then try a nightly build)
On Wed, Sep 8, 2010 at 4:18 PM, Alex Burkoff wrote:
> Guys,
>
> I was testing ColumnFamilyOutputFormat and found that only columns from the
> last Reduce
>
Microsoft has essentially copied the Cassandra approach for its Table
Storage. See here:
http://www.codeproject.com/KB/azure/AzureStorage.aspx
It is, I believe, a compliment of sorts, in the sense that it is a
validation of the Cassandra approach. The reason I know about this is
that I attended a
Guys,
I was testing ColumnFamilyOutputFormat and found that only columns from the
last Reduce
invocation get stored when
mapreduce.output.columnfamilyoutputformat.batch.threshold has
the default value. Setting it to 1 changes the behavior, and all data is stored
then. Is it the
intended behavio
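For anyone reproducing this, a minimal sketch of the workaround of setting that property to 1 on the Hadoop job configuration; the surrounding job setup is illustrative only.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class BatchThresholdExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Property name as quoted above; 1 forces a flush for every mutation,
        // which makes all reduce output visible at the cost of throughput.
        conf.set("mapreduce.output.columnfamilyoutputformat.batch.threshold", "1");
        Job job = new Job(conf, "cassandra-output-example");
        // ... configure the mapper, reducer and ColumnFamilyOutputFormat as usual ...
    }
}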
Before I upgrade to the latest nightly, I just wanted to check how far away 0.7 Beta2 may be.
Thanks
Aaron
Ah thanks Jonathan, this is yet again a great explanation to get me started.
Will do some digging. Thanks a lot!
On Wed, Sep 8, 2010 at 12:30 PM, Jonathan Ellis wrote:
> it looks to me like you are describing "compaction causes a lot of
> i/o." see http://wiki.apache.org/cassandra/MemtableSSTab
it looks to me like you are describing "compaction causes a lot of
i/o." see http://wiki.apache.org/cassandra/MemtableSSTable#Compaction
things you can do about the extra i/o:
- increase memtable sizes (if you haven't done this yet you should,
by 10x or so)
- reduce compaction priority (see
htt
This link describes Ganglia / Cassandra graphing.
http://mysqldba.blogspot.com/2010/09/cassandra-and-ganglia.html
I ran into a problem illustrated here.
http://www.flickr.com/photos/dathan/4971255111/
This screen shot shows a huge spike of transport exceptions between the
hours of 12:15 to 1:3
if you have a data directory defined with enough room, Cassandra will
use that one.
On Wed, Sep 8, 2010 at 6:25 AM, Gurpreet Singh wrote:
> Hi,
> version: cassandra 0.6.5
> I am trying to bootstrap a new node from an existing seed node.
> The new node seems to be stuck with the bootstrapping mess
Have you looked into Amazon's EC2? That's worked quite well for us.
I haven't looked into other alternatives enough to know how it would
compare for a 24/7 "production" application, but Amazon's "pay as you go"
system is really nice when you just need a bunch of machines for a
few hours of tes
Hi,
version: cassandra 0.6.5
I am trying to bootstrap a new node from an existing seed node.
The new node seems to be stuck with the bootstrapping message, and did not
show any activity.
Only after I checked the logs of the seed node did I realise there had been an
error:
Caused by: java.lang.Unsupp
From NEWS.txt: "Row size limit increased from 2GB to 2 billion columns."
On Wed, Sep 8, 2010 at 11:20 AM, Courtney Robinson wrote:
> Are there any limits (implied or otherwise) on how many columns there
> can be in a single row?
> My understanding has always been that there is no limit on how
Are there any limits (implied or otherwise) on how many columns there can be
in a single row?
My understanding has always been that there is no limit on how many columns you
can have in a single row
but I've just read Arin's "WTF is a super column" post again and I got the
impression he was sayi
On Sep 7, 2010, at 8:58 PM, Peter Harrison wrote:
On Wed, Sep 8, 2010 at 3:20 AM, Asif Jan wrote:
Hi
I need to use the low-level Java API in order to test bulk ingestion to
Cassandra. I have already looked at the code in contrib/bmt_example and
contrib/client_only.
When I try and run t
Thanks.
Once I am using the CLI script, it is able to connect to the local server
and see all keyspaces etc. Do I still need to load schemas when using
the same local server?
Asif
On Sep 7, 2010, at 8:58 PM, Peter Harrison wrote:
On Wed, Sep 8, 2010 at 3:20 AM, Asif Jan wrote:
Hi
I need t