Rahul,

Can you try reducing the batch size to 1000? Also, what is the write consistency level?
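You can check (and change) the level cqlsh writes at with the CONSISTENCY command (CONSISTENCY; to show it, CONSISTENCY ONE; to set it). For the batch size, one way to cut the input into 1000-row chunks is a short script along these lines (just a sketch, not tested against your data; the file name gl_a comes from your COPY command, and the gl_a.partNNNN output names are only an example):

    # Split a tab-delimited dump into 1000-row chunk files so COPY can be run per chunk.
    # Assumes 'gl_a' has no header row; writes gl_a.part0000, gl_a.part0001, ...
    CHUNK_SIZE = 1000

    out = None
    with open('gl_a', 'r') as src:
        for i, line in enumerate(src):
            if i % CHUNK_SIZE == 0:
                if out:
                    out.close()
                out = open('gl_a.part%04d' % (i // CHUNK_SIZE), 'w')
            out.write(line)
    if out:
        out.close()

Each chunk can then be loaded with the same COPY ... FROM command, which also narrows down which chunk holds a bad row when an import aborts.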
Thanks,
Dhanasekaran

> On 29-Mar-2015, at 12:30 am, Rahul Bhardwaj <rahul.bhard...@indiamart.com> wrote:
>
> Hi All,
>
> Awaiting any response.. please help.
>
> regards:
> rahul
>
>> On Fri, Mar 27, 2015 at 5:54 PM, Rahul Bhardwaj <rahul.bhard...@indiamart.com> wrote:
>>
>> Hi All,
>>
>> We are using Cassandra version 2.1.2 with cqlsh 5.0.1 (a cluster of three nodes with RF 2).
>>
>> I need to load around 40 million records into a Cassandra table. I have created batches of 1 million records (batches of 10000 records also give the same error) in CSV format. When I use the COPY command to import them, I get this error, which is causing a problem:
>>
>> cqlsh:mesh_glusr> copy glusr_usr1(glusr_usr_id,glusr_usr_usrname,glusr_usr_pass,glusr_usr_membersince,glusr_usr_designation,glusr_usr_url,glusr_usr_modid,fk_gl_city_id,fk_gl_state_id,glusr_usr_ph2_area) from 'gl_a' with delimiter = '\t' and QUOTE = '"';
>> Processed 36000 rows; Write: 1769.07 rows/s
>> Record has the wrong number of fields (9 instead of 10).
>> Aborting import at record #36769. Previously-inserted values still present.
>> 36669 rows imported in 20.571 seconds.
>>
>> cqlsh:mesh_glusr> copy glusr_usr1(glusr_usr_id,glusr_usr_usrname,glusr_usr_pass,glusr_usr_membersince,glusr_usr_designation,glusr_usr_url,glusr_usr_modid,fk_gl_city_id,fk_gl_state_id,glusr_usr_ph2_area) from 'gl_a' with delimiter = '\t' and QUOTE = '"';
>> Processed 185000 rows; Write: 1800.91 rows/s
>> Record has the wrong number of fields (9 instead of 10).
>> Aborting import at record #185607. Previously-inserted values still present.
>> 185507 rows imported in 1 minute and 43.428 seconds.
>>
>> [cqlsh 5.0.1 | Cassandra 2.1.2 | CQL spec 3.2.0 | Native protocol v3]
>> Use HELP for help.
>> cqlsh> use mesh_glusr ;
>> cqlsh:mesh_glusr> copy glusr_usr1(glusr_usr_id,glusr_usr_usrname,glusr_usr_pass,glusr_usr_membersince,glusr_usr_designation,glusr_usr_url,glusr_usr_modid,fk_gl_city_id,fk_gl_state_id,glusr_usr_ph2_area) from 'gl_a1' with delimiter = '\t' and QUOTE = '"';
>> Processed 373000 rows; Write: 1741.23 rows/s
>> ('Unable to complete the operation against any hosts', {})
>> Aborting import at record #373269. Previously-inserted values still present.
>>
>> When we remove the already-inserted records from the file and start the command again for the remaining data, it inserts a few more records and then gives the same error without any further detail.
>>
>> Please help if anyone has any idea about this error.
>>
>> Regards:
>> Rahul Bhardwaj
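Also, the "Record has the wrong number of fields (9 instead of 10)" messages above point to malformed rows in the input file (a missing column or a stray tab near the record numbers cqlsh reports). A quick way to locate candidate rows before re-running COPY is something like this (a sketch; it assumes the file is tab-delimited with exactly 10 columns expected, matching the column list in the COPY command; a naive split like this will also flag rows where a tab sits inside a quoted field, so treat the hits as candidates to inspect rather than definite errors):

    # Print the line number and field count of every row that does not split
    # into exactly 10 tab-separated fields.
    EXPECTED_FIELDS = 10

    with open('gl_a', 'r') as src:
        for lineno, line in enumerate(src, start=1):
            fields = line.rstrip('\n').split('\t')
            if len(fields) != EXPECTED_FIELDS:
                print('line %d has %d fields' % (lineno, len(fields)))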