Hello,

How many Cassandra nodes do you have in the cluster?
When you run the MR job from Hadoop, how many mappers/reducers does it create?
I had the same kind of issue before...
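In my case the job was creating too many reducers, and each reducer streams
its SSTables to the cluster at the same time, which can overwhelm the nodes.
Capping the reducer count in the job driver helped; a rough sketch (the count
of 9 is illustrative, tune it to your cluster):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Job;

    public class BulkLoadDriver {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            Job job = new Job(conf, "cassandra-bulkload");
            // Each reducer opens its own streaming session to Cassandra
            // when it closes, so fewer reducers means fewer concurrent
            // streams hitting the cluster.
            job.setNumReduceTasks(9); // illustrative: ~1 per Cassandra node
            // ... configure mapper/reducer/input/output as in your existing job ...
        }
    }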

thanks
chandra

On Thu, Jan 16, 2014 at 11:00 PM, Aaron Morton <aa...@thelastpickle.com> wrote:

> Look at the logs on the Cassandra servers: are nodes going down?
> Are there any other errors?
> Check for log messages from GCInspector; if there is a lot of GC, nodes
> will start to flap up and down.
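> For example, assuming the default log location:
>
>     grep -i GCInspector /var/log/cassandra/system.log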
>
> It sounds like there is a stability issue with Cassandra; look there first
> to make sure it is always available.
>
> If you want to load 150GB of data a day from Hadoop into Cassandra, I would
> suggest creating SSTables in Hadoop and bulk loading them into Cassandra.
> This article is old but it’s still relevant:
> http://www.datastax.com/dev/blog/bulk-loading
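> A minimal sketch of that approach, following the article (Cassandra 1.1-era
> API; the keyspace/CF names match yours, but the row and column are made up):
>
>     import java.io.File;
>
>     import org.apache.cassandra.db.marshal.AsciiType;
>     import org.apache.cassandra.dht.RandomPartitioner;
>     import org.apache.cassandra.io.sstable.SSTableSimpleUnsortedWriter;
>     import org.apache.cassandra.utils.ByteBufferUtil;
>
>     public class SpotifyBulkWriter {
>         public static void main(String[] args) throws Exception {
>             long timestamp = System.currentTimeMillis() * 1000; // microseconds
>             // The output dir is named <keyspace>/<cf> so sstableloader can
>             // infer both from the path. It must already exist.
>             SSTableSimpleUnsortedWriter writer = new SSTableSimpleUnsortedWriter(
>                     new File("/tmp/MusicData/spotify_2"),
>                     new RandomPartitioner(),
>                     "MusicData",        // keyspace
>                     "spotify_2",        // column family
>                     AsciiType.instance, // column name comparator
>                     null,               // no sub-comparator (standard CF)
>                     64);                // flush buffer size, in MB
>
>             writer.newRow(ByteBufferUtil.bytes("track-0001"));
>             writer.addColumn(ByteBufferUtil.bytes("title"),
>                              ByteBufferUtil.bytes("some song"), timestamp);
>             writer.close();
>
>             // Then stream the generated files into the cluster, e.g.:
>             //   sstableloader -d <cassandra-host> /tmp/MusicData/spotify_2
>         }
>     }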
>
> Hope that helps.
>
>
>
> -----------------
> Aaron Morton
> New Zealand
> @aaronmorton
>
> Co-Founder & Principal Consultant
> Apache Cassandra Consulting
> http://www.thelastpickle.com
>
> On 12/01/2014, at 3:53 pm, Arun <toarun...@gmail.com> wrote:
>
> Hi,
>
> I need your help and suggestions with our production issue.
>
> Details:
> ----------
> We have 40 CFs in the Cassandra cluster, a pair per data source, like below:
>
> MusicData -- keyspace
> spotify_1 -- column family -- active
> spotify_2 -- column family -- standby
>
> We load data into this cluster daily with the following process:
> 1. The Astyanax library deletes the data in the inactive version of the CF
>    (here spotify_2); a sketch of this step follows below.
> 2. A Hadoop bulkload JAR pushes the data from Hadoop into Cassandra, into
>    spotify_2.
>
> The data inflow rate is 150GB per day.
> The cluster runs DataStax Community version 1.1.9 with 9 nodes of 4TB each,
> built on OpenStack with a high-end config.
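> For reference, a rough sketch of step 1 with Astyanax, shown here as a
> truncate of the standby CF (the cluster name, seed host, and serializers
> are illustrative; our real config differs):
>
>     import com.netflix.astyanax.AstyanaxContext;
>     import com.netflix.astyanax.Keyspace;
>     import com.netflix.astyanax.connectionpool.NodeDiscoveryType;
>     import com.netflix.astyanax.connectionpool.impl.ConnectionPoolConfigurationImpl;
>     import com.netflix.astyanax.connectionpool.impl.CountingConnectionPoolMonitor;
>     import com.netflix.astyanax.impl.AstyanaxConfigurationImpl;
>     import com.netflix.astyanax.model.ColumnFamily;
>     import com.netflix.astyanax.serializers.StringSerializer;
>     import com.netflix.astyanax.thrift.ThriftFamilyFactory;
>
>     public class TruncateStandbyCf {
>         private static final ColumnFamily<String, String> CF_SPOTIFY_2 =
>                 new ColumnFamily<String, String>("spotify_2",
>                         StringSerializer.get(), StringSerializer.get());
>
>         public static void main(String[] args) throws Exception {
>             AstyanaxContext<Keyspace> context = new AstyanaxContext.Builder()
>                     .forCluster("MusicCluster")   // illustrative name
>                     .forKeyspace("MusicData")
>                     .withAstyanaxConfiguration(new AstyanaxConfigurationImpl()
>                             .setDiscoveryType(NodeDiscoveryType.RING_DESCRIBE))
>                     .withConnectionPoolConfiguration(
>                             new ConnectionPoolConfigurationImpl("pool")
>                                     .setPort(9160)
>                                     .setSeeds("10.240.171.80:9160")) // illustrative seed
>                     .withConnectionPoolMonitor(new CountingConnectionPoolMonitor())
>                     .buildKeyspace(ThriftFamilyFactory.getInstance());
>             context.start();
>
>             Keyspace keyspace = context.getClient();
>             // Truncate clears the standby CF without writing per-row
>             // tombstones, unlike individual deletes.
>             keyspace.truncateColumnFamily(CF_SPOTIFY_2);
>             context.shutdown();
>         }
>     }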
>
> Problem:
> -----------
> We encounter this problem every week: the Hadoop bulkload program fails with
>
> java.io.IOException: Too many hosts failed: [/10.240.171.80, /10.240.171.76,
> /10.240.171.74, /10.240.171.73]
>         at org.apache.cassandra.hadoop.BulkRecordWriter.close(BulkRecordWriter.java:243)
>
> I can provide more details about the error if you need them. From our
> initial analysis we learned that the space for tombstoned deletes is only
> reclaimed during the compaction process, so we increased storage capacity
> by adding new nodes, but the problem still persists.
> We need your expertise on this production issue. Please let me know if you
> need any more information!
> I will wait for your response!
>
> -Arun
>
