I'm having problems with Hadoop job failures on a Cassandra 1.2 cluster due to
Caused by: TimedOutException()
2013-06-24 11:29:11,953 INFO Driver - at
org.apache.cassandra.thrift.Cassandra$get_range_slices_result.read(Cassandra.java:12932)
This is running on a 6-node cluster, RF=3
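A TimedOutException surfacing from get_range_slices usually means the range scans issued by the Hadoop input format are exceeding the coordinator's timeout. Two knobs are commonly adjusted (a sketch, assuming 1.2-era setting names — verify against the cassandra.yaml shipped with your release):

```yaml
# cassandra.yaml -- assumed 1.2-era knob name; check your release's defaults.
# Range scans (get_range_slices) are bounded by this coordinator-side timeout:
range_request_timeout_in_ms: 30000   # 1.2 default is 10000

# On the Hadoop side, shrinking the rows-per-thrift-call batch also helps, e.g.
# passing -Dcassandra.range.batch.size=1024 to the job (default is 4096).
```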
> Can you drill down into the consistency problem?
>
> Cheers
>
> --
> Aaron Morton
> Freelance Cassandra Developer
> New Zealand
>
> @aaronmorton
> http://www.thelastpickle.com
>
> On 8/02/2013, at 7:01 AM, Brian Jeltema
> wrote:
>
I'm confused about consistency. I have a 6-node cluster (RF=3) and I have a table
that was known to be inconsistent across replicas (a Hadoop app was sensitive to
this). So I did a 'nodetool repair -pr' on every node in the cluster. After the
repairs were complete, the Hadoop app still indicated the inconsistency.
I had this problem using a rather old version of OpenJDK. I downloaded the Sun
JDK and it's working now.
Brian
On Feb 4, 2013, at 1:04 PM, Kumar, Anjani wrote:
> Thank you Aaron! I uninstalled the older version of Cassandra and then brought
> in version 1.2.1 of Apache Cassandra as per your mail below.
> (https://issues.apache.org/jira/browse/CASSANDRA-4813)
>
> Kind regards,
> Pieter
>
>
> -----Original Message-----
> From: Brian Jeltema [mailto:brian.jelt...@digitalenvoy.net]
> Sent: Wednesday, 30 January 2013 13:58
> To: user@cassandra.apache.org
> Subject: Re: cryptic exception in
Cassandra 1.1.5, using BulkOutputFormat
Brian
On Jan 30, 2013, at 7:39 AM, Pieter Callewaert wrote:
> Hi Brian,
>
> Which version of Cassandra are you using? And are you using the BOF to write
> to Cassandra?
>
> Kind regards,
> Pieter
>
> -----Original Message-----
In hadoop-0.20.2, org.apache.hadoop.mapreduce.JobContext is a class. Looks like
in hadoop-0.21+ JobContext has morphed into an interface.
I'd guess that Hadoop support in Cassandra is based on the older Hadoop.
Brian
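The class-versus-interface distinction matters because code compiled against 0.20's JobContext class fails at link time against 0.21+, where the same name is an interface. A small self-contained sketch of a runtime check (the class name is Hadoop's; the helper and its return strings are mine, and the check degrades gracefully when Hadoop is absent from the classpath):

```java
public class JobContextCheck {
    // Reports which Hadoop generation's JobContext is on the classpath,
    // or "absent" when Hadoop is not present at all.
    static String jobContextKind() {
        try {
            Class<?> c = Class.forName("org.apache.hadoop.mapreduce.JobContext");
            return c.isInterface() ? "interface (hadoop 0.21+)" : "class (hadoop 0.20)";
        } catch (ClassNotFoundException e) {
            return "absent";
        }
    }

    public static void main(String[] args) {
        System.out.println(jobContextKind());
    }
}
```

Running this inside the job's JVM tells you which Hadoop generation you are actually linked against before the mismatch shows up as an IncompatibleClassChangeError.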
On Jan 29, 2013, at 3:42 AM, Tejas Patil wrote:
> I am trying to run a map-reduce
--
> Aaron Morton
> Freelance Cassandra Developer
> New Zealand
>
> @aaronmorton
> http://www.thelastpickle.com
>
> On 8/01/2013, at 2:16 AM, Brian Jeltema
> wrote:
>
>> I need some help understanding unexpected behavior I saw in some recent
>> experiments.
I wrote a Hadoop mapper-only job that uses BulkOutputFormat to load a Cassandra
table.
That job would consistently fail with a flurry of exceptions (primary cause
looks like EOFExceptions
streaming between nodes).
I restructured the job to use an identity mapper and perform the updates in the
reducer.
> version of Cassandra into a cluster that's a different version?
> - (shot in the dark) is your cluster overwhelmed for some reason?
>
> If the temp dir hasn't been cleaned up yet, you are able to retry, fwiw.
>
> Jeremy
>
> On Sep 14, 2012, at 1:34 PM, Brian
I'm trying to do a bulk load from a Cassandra/Hadoop job using the
BulkOutputFormat class.
It appears that the reducers are generating the SSTables but failing to
load them into the cluster:
12/09/14 14:08:13 INFO mapred.JobClient: Task Id :
attempt_201208201337_0184_r_04_0, Status : FAILED
Thanks in advance.
Brian
On Sep 12, 2012, at 7:52 AM, Brian Jeltema wrote:
> I'm a fairly novice Cassandra/Hadoop guy. I have written a Hadoop job (using
> the Cassandra/Hadoop integration API)
> that performs a full table scan and attempts to populate a new table from the
>
I'm a fairly novice Cassandra/Hadoop guy. I have written a Hadoop job (using
the Cassandra/Hadoop integration API)
that performs a full table scan and attempts to populate a new table from the
results of the map/reduce. The read
works fine and is fast, but the table insertion is failing with OOM errors.
I couldn't get the same-host sstableloader to work either. But it's easier to
use the JMX bulk-load hook that's built
into Cassandra anyway. The following is what I implemented to do this:
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;
import javax.management.JMX;
im
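The snippet above is cut off in the archive. A minimal self-contained sketch of the same JMX bulk-load call might look like the following — the MBean name and the bulkLoad(String) operation are Cassandra's; the class name, argument handling, and helper method are mine:

```java
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class JmxBulkLoad {
    // Cassandra registers its StorageService MBean under this name.
    static final String STORAGE_SERVICE = "org.apache.cassandra.db:type=StorageService";

    // Build the standard RMI JMX URL for a node (7199 is the usual JMX port).
    static JMXServiceURL serviceUrl(String host, int port) throws Exception {
        return new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://" + host + ":" + port + "/jmxrmi");
    }

    // Ask the node to stream every SSTable found in sstableDir into the cluster.
    static void bulkLoad(String host, int port, String sstableDir) throws Exception {
        try (JMXConnector c = JMXConnectorFactory.connect(serviceUrl(host, port))) {
            MBeanServerConnection mbs = c.getMBeanServerConnection();
            mbs.invoke(new ObjectName(STORAGE_SERVICE), "bulkLoad",
                       new Object[] { sstableDir },
                       new String[] { "java.lang.String" });
        }
    }

    public static void main(String[] args) throws Exception {
        bulkLoad(args[0], Integer.parseInt(args[1]), args[2]);
    }
}
```

The directory passed to bulkLoad must live on the node you connect to and follow the usual keyspace/columnfamily SSTable layout; the node then streams the tables to the owning replicas.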
Just the opposite, I think: the property value exists in the yaml file but
does not have a corresponding definition in the Config class. In my experience
this is typically caused by a version mismatch.
On Jul 2, 2012, at 1:20 PM, Robin Verlangen wrote:
> You're missing the "sliced_buffer_size_in_kb" property
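When cassandra.yaml and the running binary disagree like this, the fix is to align the yaml with the version you actually deployed, for example by commenting out keys the binary's Config class does not define (a sketch; the key below is the one named in this thread):

```yaml
# cassandra.yaml -- comment out or remove keys that the deployed version's
# Config class no longer (or does not yet) define, e.g.:
# sliced_buffer_size_in_kb: 64
```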
I'm attempting to perform a bulk load by calling the jmx:bulkLoad method on
several nodes in parallel. In a Cassandra log
file I see a few occurrences of the following:
INFO [GossipTasks:1] 2012-07-02 10:12:33,626 Gossiper.java (line 759)
InetAddress /10.4.0.3 is now dead.
ERROR [GossipTasks:1