Hi,

I need your help and suggestions with a production issue.

Details: 
---------- 
We have 40 CFs in the Cassandra cluster, organized per data source as an active/standby pair, like below:

MusicData -- keyspace
spotify_1 -- column family -- active
spotify_2 -- column family -- standby
Daily we load data into this cluster using the process below:
1. An Astyanax library call deletes the data in the inactive version of the CF (here spotify_2).
2. A Hadoop bulkload JAR pushes data from Hadoop into Cassandra, into spotify_2.
The data inflow rate is 150 GB per day.
We run DataStax Community 1.1.9 on 9 nodes of 4 TB each, built on OpenStack with a high-end configuration.
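For reference, the two daily steps can be sketched as equivalent command-line operations (a sketch only: we actually do step 1 through Astyanax in Java; 10.240.171.80 is one of our nodes, and /data/sstables/MusicData/spotify_2 is a hypothetical SSTable output directory):

```shell
# Step 1: clear the inactive CF before the reload (equivalent to our
# Astyanax delete; truncate drops the CF's data in Cassandra 1.1).
echo "use MusicData; truncate spotify_2;" | cassandra-cli -h 10.240.171.80

# Step 2: the Hadoop bulkload JAR streams SSTables into the cluster,
# much like running sstableloader against pre-built SSTable files
# (in 1.1, sstableloader discovers the cluster from the local cassandra.yaml).
sstableloader /data/sstables/MusicData/spotify_2
```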

Problem: 
----------- 
We encounter this problem every week: the Hadoop bulkload program fails with

java.io.IOException: Too many hosts failed: [/10.240.171.80, /10.240.171.76,
/10.240.171.74, /10.240.171.73]

at org.apache.cassandra.hadoop.BulkRecordWriter.close(BulkRecordWriter.java:243)
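In case it helps your analysis: this error appears to mean that SSTable streaming to the listed hosts failed. Here is what we have been checking on those nodes (assuming nodetool is on the path; the host below is taken from the error above):

```shell
# Active/failed streaming sessions on one of the failing hosts
nodetool -h 10.240.171.80 netstats

# Pending compactions, which can back up during heavy streaming
nodetool -h 10.240.171.80 compactionstats

# Node liveness and per-node disk load across the ring
nodetool -h 10.240.171.80 ring
```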
I can provide more details about the error if you need them. From our initial
analysis we learned that the disk space occupied by tombstoned data is only
reclaimed during the compaction process, so we increased storage capacity by
adding new nodes, but the problem still persists.
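One thing we are considering, sketched below (the gc_grace value of 86400 is just an example, and the commands assume cassandra-cli/nodetool against one of our nodes):

```shell
# Tombstones cannot be purged until they are older than gc_grace_seconds
# (default 864000 s = 10 days), even when compaction runs; lowering it
# lets compaction reclaim the deleted space sooner.
echo "use MusicData; update column family spotify_2 with gc_grace = 86400;" \
  | cassandra-cli -h 10.240.171.80

# Force a major compaction on the standby CF (run on each node; I/O heavy)
# so the space freed by the daily delete is actually reclaimed before the
# next bulkload.
nodetool -h 10.240.171.80 compact MusicData spotify_2
```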
We need your expertise on this production issue. Please let me know if you
need any more information. I will wait for your response!

-Arun
