Hi,
I had a similar problem with Cassandra 0.8.x: it happened when Cassandra was
configured with rpc_address: 0.0.0.0 and the Hadoop job was started from
outside the Cassandra cluster. With version 1.0.x the problem is gone.
You can debug the splits with Thrift. This is a copy&paste part of my s
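(The original script is cut off above; below is a minimal standalone sketch of
the same idea, querying split information directly over Thrift. The host,
port, keyspace and column family are placeholders; describe_ring and
describe_splits are roughly the calls ColumnFamilyInputFormat itself makes
when computing input splits.)

import java.util.List;

import org.apache.cassandra.thrift.Cassandra;
import org.apache.cassandra.thrift.TokenRange;
import org.apache.thrift.protocol.TBinaryProtocol;
import org.apache.thrift.transport.TFramedTransport;
import org.apache.thrift.transport.TSocket;

public class SplitDebugger
{
    public static void main(String[] args) throws Exception
    {
        // Connect to one node of the cluster (host/port are placeholders).
        TFramedTransport transport = new TFramedTransport(new TSocket("cassandra-host", 9160));
        transport.open();
        Cassandra.Client client = new Cassandra.Client(new TBinaryProtocol(transport));
        client.set_keyspace("MyKeyspace");

        // For every token range in the ring, ask how it would be divided into splits.
        for (TokenRange range : client.describe_ring("MyKeyspace"))
        {
            List<String> splitTokens = client.describe_splits("MyColumnFamily",
                                                              range.getStart_token(),
                                                              range.getEnd_token(),
                                                              64 * 1024);
            System.out.println(range.getEndpoints() + " -> " + splitTokens);
        }
        transport.close();
    }
}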
Pierre: do you mind opening an 'improvement' ticket on
https://issues.apache.org/jira/browse/CASSANDRA for this? Wouldn't be
crazy to add support for it at some point.
--
Sylvain
On Tue, May 1, 2012 at 5:27 AM, paul cannon wrote:
> No, there isn't right now. But note that there shouldn't be a w
I think that your best bet is to look directly into the source tarball
https://www.apache.org/dyn/closer.cgi?path=/cassandra/1.0.8/apache-cassandra-1.0.8-src.tar.gz
or in the git repository
http://git-wip-us.apache.org/repos/asf/cassandra.git
Paolo
On 05/02/12 01:12, rk vishu wrote:
Hello Al
Hi,
I'd like to release version 1.1.0-1 of Mojo's Cassandra Maven Plugin
to sync up with the 1.1.0 release of Apache Cassandra.
We solved 2 issues:
http://jira.codehaus.org/secure/ReleaseNote.jspa?projectId=12121&version=17926
Staging Repository:
https://nexus.codehaus.org/content/repositories/o
Hi,
(cassandra 1.0.8)
Stumbled on a piece of code in Memtable that looks like it could hang a
thread forever.
public void flushAndSignal(final CountDownLatch latch, ExecutorService writer,
                           final ReplayPosition context)
{
    writer.execute(new WrappedRunnable()
    {
On Wed, May 2, 2012 at 2:42 PM, Mikael Wikblom
wrote:
> Given an IOException in writeSortedContents the latch.countDown() will not
> be called. Wouldn't it be better to place the latch.countDown() in the
> finally statement?
No, because counting down the latch means 'the sstable has been
OK, I just find it a bit hard to be forced to shut down the node in case of
an IOException, but I understand why. The exception occurred because of
a missing native Snappy library on the server, but the error only occurred
because we initialized a column family incorrectly (we are using
cassandra emb
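(To make the trade-off concrete: counting down only after a successful write
lets the waiter distinguish "flushed" from "failed", whereas a countDown() in
a finally block would signal success even when writeSortedContents threw. A
small standalone sketch of the latch semantics, not the actual Memtable code:)

import java.io.IOException;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class LatchSemantics
{
    public static void main(String[] args) throws Exception
    {
        final CountDownLatch latch = new CountDownLatch(1);
        ExecutorService writer = Executors.newSingleThreadExecutor();

        writer.execute(new Runnable()
        {
            public void run()
            {
                try
                {
                    writeSortedContents();   // pretend flush
                    latch.countDown();       // only signalled on success
                }
                catch (IOException e)
                {
                    // With countDown() in a finally block, the waiter would
                    // proceed as if the sstable had been written.
                    e.printStackTrace();
                }
            }

            private void writeSortedContents() throws IOException
            {
                throw new IOException("simulated flush failure");
            }
        });

        // The waiter times out instead of believing a failed flush succeeded.
        boolean flushed = latch.await(2, TimeUnit.SECONDS);
        System.out.println("flushed = " + flushed);
        writer.shutdown();
    }
}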
On Tue, May 1, 2012 at 9:05 AM, Gmail wrote:
> unsubscribe
http://qkme.me/35w46c
--
Eric Evans
Acunu | http://www.acunu.com | @acunu
The hive support is going to be integrated into the main source tree with this
ticket:
https://issues.apache.org/jira/browse/CASSANDRA-4131
You can go to https://github.com/riptano/hive to find the
CassandraStorageHandler right now though.
For 1.0.8, the CassandraStorage class for the Pig suppor
On Tue, 2012-05-01 at 11:00 -0700, Aaron Turner wrote:
> Tens or a few hundred MB per row seems reasonable. You could do
> thousands/MB if you wanted to, but that can make things harder to
> manage.
thanks (Both Aarons)
> Depending on the size of your data, you may find that the overhead of
> ea
Maybe we need an auto responder for emails that contain "unsubscribe"
On May 2, 2012, at 9:14 AM, Eric Evans wrote:
> On Tue, May 1, 2012 at 9:05 AM, Gmail wrote:
>> unsubscribe
>
> http://qkme.me/35w46c
>
> --
> Eric Evans
> Acunu | http://www.acunu.com | @acunu
On Wed, May 2, 2012 at 8:22 AM, Tim Wintle wrote:
> On Tue, 2012-05-01 at 11:00 -0700, Aaron Turner wrote:
>> Tens or a few hundred MB per row seems reasonable. You could do
>> thousands/MB if you wanted to, but that can make things harder to
>> manage.
>
> thanks (Both Aarons)
>
>> Depending on
Sure, here it is : CASSANDRA-4210.
- Pierre
-----Original Message-----
From: Sylvain Lebresne [mailto:sylv...@datastax.com]
Sent: Wednesday, May 2, 2012 09:53
To: user@cassandra.apache.org
Subject: Re: execute_prepared_cql_query and variable range filter parameter
Pierre: do you mind opening an 'i
Thank you very much for the help.
On Wed, May 2, 2012 at 8:07 AM, Jeremy Hanna wrote:
> The hive support is going to be integrated into the main source tree with
> this ticket:
> https://issues.apache.org/jira/browse/CASSANDRA-4131
> You can go to https://github.com/riptano/hive to find the
> Cas
On Tue, May 1, 2012 at 6:07 PM, Rob Coli wrote:
>
> The primary differences, as I understand it, are that the index
> performance and bloom filter false positive rate for your One Big File
> are worse. First, you are more likely to get a bloom filter false
> positive due to the intrinsic degradat
Hello:
I am trying to use BulkOutputFormat and am seeing some nice docs on how to use it
to stream data to an existing Cassandra cluster using the ConfigHelper class.
I am wondering if it is possible to use it just to stream the data (sstables
etc.) into HDFS?
Thx
Shawna
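(For context, a minimal sketch of the usual job wiring for BulkOutputFormat
against a live cluster; the host, port, keyspace, column family and
partitioner below are placeholders. Whether the same machinery can instead
write the generated sstables into HDFS is exactly the question above.)

import org.apache.cassandra.hadoop.BulkOutputFormat;
import org.apache.cassandra.hadoop.ConfigHelper;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class BulkLoadJob
{
    public static void main(String[] args) throws Exception
    {
        Job job = new Job(new Configuration(), "bulk-load-example");
        Configuration conf = job.getConfiguration();

        // Where the generated sstables are streamed to: a node of the target cluster.
        ConfigHelper.setOutputInitialAddress(conf, "cassandra-host");
        ConfigHelper.setOutputRpcPort(conf, "9160");
        ConfigHelper.setOutputPartitioner(conf, "org.apache.cassandra.dht.RandomPartitioner");
        ConfigHelper.setOutputColumnFamily(conf, "MyKeyspace", "MyColumnFamily");

        // Reducers emit (ByteBuffer key, List<Mutation> columns); BulkOutputFormat
        // builds sstables locally and streams them into the cluster.
        job.setOutputFormatClass(BulkOutputFormat.class);

        // ... mapper/reducer, input path and input format omitted ...
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}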
On Tue, May 1, 2012 at 9:06 PM, Edward Capriolo wrote:
> Also there are some tickets in JIRA to impose a max sstable size and
> some other related optimizations that I think got stuck behind levelDB
> in coolness factor. Not every use case is good for leveled so adding
> more tools and optimizatio
On Tue, May 1, 2012 at 10:00 PM, Oleg Proudnikov wrote:
> There is this note regarding major compaction in the tuning guide:
>
> "once you run a major compaction, automatic minor compactions are no longer
> triggered frequently forcing you to manually run major compactions on a
> routine
> basis"
Hello,
The code snippet below prints out column names and values:
MultigetSliceQuery multigetSliceQuery =
    HFactory.createMultigetSliceQuery(keyspace,
        stringSerializer, stringSerializer, stringSerializer);
for (HColumn column : c) {
    System.out.println("C
On Wed, May 2, 2012 at 2:23 PM, Shawna Qian wrote:
> Hello:
>
> I am trying to use bulkoutputformat and seeing some nice docs on how to use
> it to stream the data to an existing cassandra cluster using configHelper
> class. I am wondering if it is possible to use it just to stream the data
> (ss
The column name is a composite, so you should use MultigetSliceQuery and pass
a CompositeSerializer as the column-name serializer.
Tamar Fraenkel
Senior Software Engineer, TOK Media
ta...@tok-media.com
Tel: +972 2 6409736
Mob: +972 54 8356490
Fax: +972 2 5612956
On Thu, May 3, 2012 at 3:57 AM,
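(A minimal sketch of Tamar's suggestion using Hector's Composite and
CompositeSerializer; the column family name, and the assumption that the
composite has two string components, are illustrative only:)

import me.prettyprint.cassandra.serializers.CompositeSerializer;
import me.prettyprint.cassandra.serializers.StringSerializer;
import me.prettyprint.hector.api.Keyspace;
import me.prettyprint.hector.api.beans.Composite;
import me.prettyprint.hector.api.beans.HColumn;
import me.prettyprint.hector.api.beans.Row;
import me.prettyprint.hector.api.beans.Rows;
import me.prettyprint.hector.api.factory.HFactory;
import me.prettyprint.hector.api.query.MultigetSliceQuery;

public class CompositeColumnDump
{
    public static void dump(Keyspace keyspace, String columnFamily, String... keys)
    {
        StringSerializer ss = StringSerializer.get();

        // Declare the column-name type as Composite instead of String.
        MultigetSliceQuery<String, Composite, String> query =
            HFactory.createMultigetSliceQuery(keyspace, ss, new CompositeSerializer(), ss);
        query.setColumnFamily(columnFamily);
        query.setKeys(keys);
        query.setRange(null, null, false, 100);

        Rows<String, Composite, String> rows = query.execute().get();
        for (Row<String, Composite, String> row : rows)
        {
            for (HColumn<Composite, String> column : row.getColumnSlice().getColumns())
            {
                Composite name = column.getName();
                // Pull each component out with the serializer it was written with.
                System.out.println(name.get(0, ss) + ":" + name.get(1, ss)
                                   + " = " + column.getValue());
            }
        }
    }
}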