how to deal with continued records

2015-06-10 Thread Zhang Jiaqiang
Hello, I have a large CSV file in which continued records (those with the same RecordID) are contextually related, so I need to treat these continued records as ONE complete record. Also, the RecordID is reset to 1 whenever the CSV dumper system thinks it is necessary. I'd like to get some suggestions.
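One way to approach this is to fold over the rows and start a new logical record whenever the RecordID changes; a reset back to 1 also changes the ID, so it naturally opens a new group too. Below is a minimal plain-Scala sketch, not from the thread itself: the file name records.csv, the assumption that RecordID is the first comma-separated field, and all object and value names are illustrative.

import scala.io.Source

object GroupContinuedRecords {
  def main(args: Array[String]): Unit = {
    // Assumes RecordID is the first comma-separated field of each line.
    val rows = Source.fromFile("records.csv").getLines().toList.map { line =>
      val (id, rest) = line.span(_ != ',')
      (id.trim, rest.drop(1))
    }

    // Fold over the rows, extending the current group while the RecordID
    // stays the same and opening a new group when it changes.
    val grouped = rows.foldLeft(List.empty[(String, List[String])]) {
      case ((curId, curRows) :: done, (id, rest)) if id == curId =>
        (curId, rest :: curRows) :: done   // same RecordID: extend current group
      case (acc, (id, rest)) =>
        (id, List(rest)) :: acc            // new RecordID (or a reset): new group
    }.reverse.map { case (id, rs) => (id, rs.reverse) }

    grouped.foreach { case (id, parts) =>
      println(s"RecordID $id -> ${parts.size} row(s)")
    }
  }
}

Note one inherent ambiguity: if the RecordID happens to be 1 both immediately before and immediately after a reset, the ID alone cannot tell the two logical records apart.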

fail to run Spark PortfolioDemo with DSE Cassandra

2014-12-25 Thread Zhang Jiaqiang
com.datastax.spark.connector.rdd.partitioner.ServerSideTokenRangeSplitter$$anonfun$split$2.apply(ServerSideTokenRangeSplitter.scala:53)
com.datastax.spark.connector.rdd.partitioner.ServerSideTokenRangeSplitter$$anonfun$split$2.apply(ServerSideTokenRangeSplitter.scala:49)
scala.Option.getOrElse(Option.scala:120)
com.datastax.spark.connector.rdd.partitioner.ServerSideTokenRangeSplitter.split(ServerSideTokenRangeSplitter.scala:49)

Now I am not sure how to investigate this further. Would you please help me with it? Merry Christmas to everyone.

Best regards,
Zhang JiaQiang
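For readers hitting the same trace: the scala.Option.getOrElse frame between the two splitter frames usually means the splitter looked something up, got None, and the fallback passed to getOrElse threw. Here is a hedged sketch of that shape only, with every name invented for illustration; this is not the connector's actual source.

object SplitterPattern {
  // Hypothetical lookup that may fail to return split information for a host.
  def describeSplits(host: String): Option[Seq[Long]] =
    if (host == "reachable-node") Some(Seq(0L, 100L, 200L)) else None

  def split(host: String): Seq[Long] =
    describeSplits(host).getOrElse(
      // This fallback runs inside Option.getOrElse; if it throws, the
      // exception surfaces through scala.Option.getOrElse(Option.scala:...),
      // matching the shape of the trace above.
      throw new RuntimeException(s"Failed to fetch splits for host $host")
    )

  def main(args: Array[String]): Unit = {
    println(split("reachable-node"))   // prints List(0, 100, 200)
    println(split("unreachable-node")) // throws, reproducing the trace shape
  }
}

If the connector's lookup is coming back empty, a reasonable first step is to verify that every Cassandra node is up and reachable from the Spark workers, since missing split metadata from a node would leave the fallback as the only path.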