Re: Hbase and Spark

2017-01-29 Thread Sudev A C
Hi Masf, Do try the official HBase-Spark module: https://hbase.apache.org/book.html#spark. I think you will have to build the jar from source and run your Spark program with --packages. https://spark-packages.org/package/hortonworks-spark/shc says it's not yet published to Spark Packages or the Maven repo.
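A minimal Scala sketch of the bulkPut pattern shown in the HBase book's Spark chapter, assuming the hbase-spark jar (built from source) is on the driver and executor classpath and that a table "t1" with column family "cf" already exists (both names are hypothetical):

```scala
import org.apache.hadoop.hbase.{HBaseConfiguration, TableName}
import org.apache.hadoop.hbase.client.Put
import org.apache.hadoop.hbase.spark.HBaseContext
import org.apache.hadoop.hbase.util.Bytes
import org.apache.spark.{SparkConf, SparkContext}

val sc = new SparkContext(new SparkConf().setAppName("hbase-spark-sketch"))

// HBaseContext distributes the HBase configuration to the executors.
val hbaseConf = HBaseConfiguration.create()
val hbaseContext = new HBaseContext(sc, hbaseConf)

// Hypothetical data: (row key, value) pairs.
val rdd = sc.parallelize(Seq(
  (Bytes.toBytes("row1"), Bytes.toBytes("value1")),
  (Bytes.toBytes("row2"), Bytes.toBytes("value2"))
))

// bulkPut turns each RDD element into a Put and writes it from the executors.
hbaseContext.bulkPut[(Array[Byte], Array[Byte])](
  rdd,
  TableName.valueOf("t1"),
  (record: (Array[Byte], Array[Byte])) => {
    val put = new Put(record._1)
    put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("col1"), record._2)
    put
  }
)
```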

Re: Few questions on reliability of accumulators value.

2016-12-13 Thread Sudev A C
…depth explanation: http://imranrashid.com/posts/Spark-Accumulators/ > On Sun, Dec 11, 2016 at 11:27 AM, Sudev A C wrote: > Please help. Anyone, any thoughts on the previous mail? > Thanks, Sudev > On Fri, Dec 9, 2016 at 2:28 PM Sudev

Re: Few questions on reliability of accumulators value.

2016-12-11 Thread Sudev A C
Please help. Anyone, any thoughts on the previous mail? Thanks Sudev On Fri, Dec 9, 2016 at 2:28 PM Sudev A C wrote: > Hi, can anyone please help clarify how accumulators can be used reliably to measure error/success/analytical metrics? Given below is the use c

Few questions on reliability of accumulators value.

2016-12-09 Thread Sudev A C
Hi, Can anyone please help clarify how accumulators can be used reliably to measure error/success/analytical metrics? Given below is the use case / code snippet that I have. val amtZero = sc.accumulator(0); val amtLarge = sc.accumulator(0); val amtNormal = sc.accumulator(0); val getAmount = (
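A minimal sketch of the pattern being asked about, with a hypothetical input path and "large" threshold. The reliability point from the Spark programming guide: accumulator updates made inside actions are applied exactly once per successful task, while updates made inside transformations may be re-applied if tasks are retried or a stage is recomputed.

```scala
import org.apache.spark.{SparkConf, SparkContext}

val sc = new SparkContext(new SparkConf().setAppName("accumulator-sketch"))

val amtZero   = sc.accumulator(0)   // Spark 1.x API, as in the snippet above;
val amtLarge  = sc.accumulator(0)   // on Spark 2.x prefer sc.longAccumulator("...")
val amtNormal = sc.accumulator(0)

// Hypothetical input: one numeric amount per line.
val amounts = sc.textFile("s3a://my-bucket/amounts.txt").map(_.trim.toDouble)

// Updates made inside an action (foreach) are applied exactly once per
// successful task, so these counts are reliable metrics.
amounts.foreach { amt =>
  if (amt == 0.0) amtZero += 1
  else if (amt > 10000.0) amtLarge += 1   // hypothetical "large" threshold
  else amtNormal += 1
}

println(s"zero=${amtZero.value} large=${amtLarge.value} normal=${amtNormal.value}")

// By contrast, updates made inside a transformation (e.g. a map) may be
// applied more than once if a task is retried or the lineage is recomputed,
// so they are only suitable for debugging or approximate metrics.
```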

Re: reading data from s3

2016-12-09 Thread Sudev A C
Hi Hitesh, The schema of the table is inferred automatically if you are reading from a JSON file, whereas when you are reading from a text file you will have to provide a schema for the table you want to create (JSON carries its schema within it). You can create data frames and register them as tables. 1. In
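A minimal Spark 2.x sketch of the two cases; the bucket, field names, and types are hypothetical, and it assumes the s3a connector and credentials are already configured. On Spark 1.x the equivalents are sqlContext.read.json and registerTempTable.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.types._

val spark = SparkSession.builder().appName("s3-read-sketch").getOrCreate()

// 1. JSON: the schema is inferred from the documents themselves.
val jsonDf = spark.read.json("s3a://my-bucket/events/*.json")
jsonDf.createOrReplaceTempView("events_json")

// 2. Plain text (delimited): supply the schema explicitly.
val schema = StructType(Seq(
  StructField("id", LongType),
  StructField("city", StringType),
  StructField("amount", DoubleType)
))
val textDf = spark.read.schema(schema).csv("s3a://my-bucket/events/*.txt")
textDf.createOrReplaceTempView("events_text")

// Both registered tables can now be queried with SQL.
spark.sql("SELECT city, SUM(amount) FROM events_text GROUP BY city").show()
```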

Re: What syntax can be used to specify the latest version of JAR found while using spark submit

2016-10-26 Thread Sudev A C
Hi Aseem, If you are submitting the jar from a shell you could solve this with a simple bash/sh command substitution: `$(ls -t /home/pathtojarfolder/*.jar | head -n 1)` picks the most recently modified jar in the folder (the glob already yields the full path, so there is no need to prefix the directory again). That expression can be put in your spark-submit command in place of the jar path. Thanks Sudev On Wed, Oct 26, 2016 at 3:3