Re: Question about Parallel Stages in Spark

2017-06-27 Thread satish lalam
, 2017 at 4:03 AM, Bryan Jeffrey wrote: > Satish, > > Is this two separate applications submitted to the Yarn scheduler? If so > then you would expect that you would see the original case run in parallel. > > However, if this is one application your submission to Yarn guarantees >

Re: Question about Parallel Stages in Spark

2017-06-26 Thread satish lalam
Thanks All. To reiterate - stages inside a job can run in parallel as long as (a) there is no sequential dependency between them and (b) the job has sufficient resources. However, my code was launching 2 jobs, and they are sequential, as you rightly pointed out. The issue which I was trying to highlight with that
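
For reference, a minimal sketch (assuming a single application with one SparkContext; the job bodies are hypothetical) of submitting two independent jobs concurrently so the scheduler can run them in parallel - actions called from a single driver thread always run one after another:

    import scala.concurrent.{Await, Future}
    import scala.concurrent.ExecutionContext.Implicits.global
    import scala.concurrent.duration.Duration

    // each Future submits an independent job; the scheduler may run them in parallel
    val jobA = Future { sc.parallelize(1 to 1000000).sum() }
    val jobB = Future { sc.parallelize(1 to 1000000).count() }
    Await.result(jobA, Duration.Inf)
    Await.result(jobB, Duration.Inf)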

Re: Why my project has this kind of error ?

2017-06-22 Thread satish lalam
Minglei - You could check your JDK path and Scala library setting in the project structure, i.e., open the Project view (Alt+1), then press F4 to open Project Structure... look under SDKs and Libraries. On Mon, Jun 19, 2017 at 10:54 PM, 张明磊 wrote: > Hello to all, > > Below is my issue. I have alread

Re: Broadcasts & Storage Memory

2017-06-21 Thread satish lalam
My understanding is that it comes from storageFraction. Cached blocks there are immune to eviction, so both persisted RDDs and broadcast variables sit in that region. Ref
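
As a hedged reference, the unified-memory settings being referred to would look like this in spark-defaults.conf (defaults vary by version: 0.75/0.5 in Spark 1.6, 0.6/0.5 in later releases):

    # share of (heap - 300 MB) usable for execution + storage
    spark.memory.fraction 0.6
    # portion of the above where cached blocks are immune to eviction
    spark.memory.storageFraction 0.5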

Re: Read Local File

2017-06-14 Thread satish lalam
I guess you have already made sure that the paths for your file are exactly the same on each of your nodes. I'd also check the perms on your path. I believe the sample code you pasted is only for testing - and you are already aware that a distributed count on a local file has no benefits. Once I ran
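
A minimal sketch of the point above, with a hypothetical path - when reading a local file in a distributed job, the same path must be readable on the driver and every worker, and the file:// scheme makes that intent explicit:

    // the path below is hypothetical; it must exist on every node
    val lines = sc.textFile("file:///opt/data/sample.txt")
    println(lines.count())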

Re: Spark Streaming Design Suggestion

2017-06-14 Thread satish lalam
Agree with Jörn. Dynamically creating/deleting topics is nontrivial to manage. With the limited knowledge about your scenario, it appears that you are using topics as some kind of message-type enum. If that is the case, you might be better off with one (or just a few) topics and have a messagetyp
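
A sketch of that single-topic design, with hypothetical names - messages carry a type field and consumers route on it instead of on topics:

    // hypothetical envelope carried on one Kafka topic
    case class Envelope(msgType: String, payload: String)

    def route(e: Envelope): Unit = e.msgType match {
      case "orderEvent" => // handle order messages
      case "clickEvent" => // handle click messages
      case _            => // send unknown types to a dead-letter path
    }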

Re: Performance issue when running Spark-1.6.1 in yarn-client mode with Hadoop 2.6.0

2017-06-08 Thread Satish John Bosco
el.enabled = true
yarn.log-aggregation.enable-local-cleanup = false
yarn.resourcemanager.scheduler.client.thread-count = 64
yarn.resourcemanager.resource-tracker.address = satish-NS1:8031
yarn.resourcemana

RE: question on SPARK_WORKER_CORES

2017-02-17 Thread Satish Lalam
Have you tried passing --executor-cores or --total-executor-cores as arguments, depending on the Spark version? From: kant kodali [mailto:kanth...@gmail.com] Sent: Friday, February 17, 2017 5:03 PM To: Alex Kozlov Cc: user @spark Subject: Re: question on SPARK_WORKER_CORES Standalone. On Fr
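
For reference, a hedged sketch of both flags for standalone mode (host, jar, and values are hypothetical):

    spark-submit --master spark://master-host:7077 \
      --executor-cores 2 \
      --total-executor-cores 8 \
      my-app.jar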

Unsubscribe

2017-02-05 Thread satish saley
Unsubscribe Sent from Yahoo Mail for iPhone

Fetching Hive table data from external cluster

2016-09-13 Thread Satish Chandra J
it before loading into target table Thanks in advance for all your support Regards, Satish Chandra

Re: Environment tab meaning

2016-06-07 Thread satish saley
https://medium.com/@jaceklaskowski/ Mastering Apache Spark http://bit.ly/mastering-apache-spark Follow me at https://twitter.com/jaceklaskowski On Tue, Jun 7, 2016 at 8:11 PM, satish saley wrote: Hi, In the Spark history server, we see the Environment tab. Does it show the environment of the Driver or the Executor, or both? Jobs Stages Storage Environment Executors

Environment tab meaning

2016-06-07 Thread satish saley
Hi, In the Spark history server, we see the Environment tab. Does it show the environment of the Driver or the Executor, or both? - Jobs - Stages - Storage

duplicate jar problem in yarn-cluster mode

2016-05-17 Thread satish saley
Hello, I am running a simple job in yarn-cluster mode: --master yarn-cluster --name Spark-FileCopy --class my.example.SparkFileCopy --properties-file spark-defaults.conf --queue saleyq --executor-memory 1G --driver-memory 1G --conf spark.john.snow.is.back=true --jars hdfs://myclusternn.com:8020/tmp

pyspark.zip and py4j-0.9-src.zip

2016-05-15 Thread satish saley
Hi, Is there any way to pull in pyspark.zip and py4j-0.9-src.zip in maven project?

Re: System memory 186646528 must be at least 4.718592E8.

2016-05-13 Thread satish saley
> $executorMemory must be at least " + > > On Fri, May 13, 2016 at 12:47 PM, satish saley > wrote: > >> Hello, >> I am running >> https://github.com/apache/spark/blob/branch-1.6/examples/src/main/python/pi.py >> example, >> but facing following excep

System memory 186646528 must be at least 4.718592E8.

2016-05-13 Thread satish saley
Hello, I am running the https://github.com/apache/spark/blob/branch-1.6/examples/src/main/python/pi.py example, but I am facing the following exception. What is the unit of the memory value pointed out in the error? Following are the configs: --master local[*] --
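
The values in this error are bytes: 4.718592E8 bytes is 471,859,200, roughly 450 MB, so the usual fix is to give the driver at least that much memory, e.g. (a sketch; the flag value is hypothetical):

    spark-submit --master "local[*]" --driver-memory 512m pi.py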

Re: killing spark job which is submitted using SparkSubmit

2016-05-06 Thread satish saley
nd can only be killed via YARN commands, or if it's batch and completes. > The simplest way to tie the driver to your app is to pass in yarn-client as > master instead. > > On Fri, May 6, 2016 at 2:00 PM satish saley > wrote: > >> Hi Anthony, >> &

Re: killing spark job which is submitted using SparkSubmit

2016-05-06 Thread satish saley
whenever I kill my application. On Fri, May 6, 2016 at 11:58 AM, Anthony May wrote: > Greetings Satish, > > What are the arguments you're passing in? > > On Fri, 6 May 2016 at 12:50 satish saley wrote: > >> Hello, >> >> I am submitting a spark job using Spark

killing spark job which is submitted using SparkSubmit

2016-05-06 Thread satish saley
Hello, I am submitting a spark job using SparkSubmit. When I kill my application, it does not kill the corresponding spark job. How would I kill the corresponding spark job? I know one way is to use SparkSubmit again with appropriate options. Is there any way through which I can tell SparkSubmit a
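
If the job runs on YARN, the usual way is the YARN CLI (a sketch; the application id below is hypothetical):

    yarn application -kill application_1462550000000_0001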

mesos cluster mode

2016-05-05 Thread satish saley
Hi, The Spark documentation says that "cluster mode is currently not supported for Mesos clusters." But below we can see a Mesos example with cluster mode. I don't have a Mesos cluster to try it out. Which one is true? Shall I interpret it as "cluster mode is currently not supported for Mesos clusters* for

Redirect from yarn to spark history server

2016-05-02 Thread satish saley
k that lets me redirect to the Spark history server from YARN? Best, Satish

unsubscribe

2016-03-15 Thread satish chandra j
unsubscribe

Calendar Obj to java.util.Date conversion issue

2016-02-17 Thread satish chandra j
) at org.apache.spark.deploy.DseSparkSubmitBootstrapper.main(DseSparkSubmitBootstrapper.scala) Please let me know if I am missing anything here Regards, Satish Chandra

Re: Spark DataFrameNaFunctions unrecognized

2016-02-15 Thread satish chandra j
er details required on the same Regards, Satish Chandra On Tue, Feb 16, 2016 at 1:03 PM, Ted Yu wrote: > bq. I am getting compile time error > > Do you mind pastebin'ning the error you got ? > > Cheers > > On Mon, Feb 15, 2016 at 11:08 PM, satish chandra j < > jsatishc

Re: Spark DataFrameNaFunctions unrecognized

2016-02-15 Thread satish chandra j
working. Hence, please let me know if any inputs on the same to fix the issue. Regards, Satish Chandra On Mon, Feb 15, 2016 at 7:41 PM, Ted Yu wrote: > fill() was introduced in 1.3.1 > > Can you show code snippet which reproduces the error ? > > I tried the following using

Spark DataFrameNaFunctions unrecognized

2016-02-15 Thread satish chandra j
of DataFrame "df" to be replaced with value "" as given in the above snippet. I understand the code does not require any additional packages to support DataFrameNaFunctions. Please let me know if I am missing anything so that I can make these DataFrameNaFunctions work. Regards, Satish Chandra J

Re: createDataFrame question

2016-02-09 Thread satish chandra j
HI, I hope you are aware of toDF(), which is used to convert your RDD to a DataFrame. Regards, Satish Chandra On Tue, Feb 9, 2016 at 5:52 PM, jdkorigan wrote: > Hi, > > I would like to transform my rdd to a sql.dataframe.Dataframe, is there a > possible conversion to do the
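
A minimal sketch of the toDF() conversion (column names hypothetical; assumes a Spark 1.x SQLContext):

    import org.apache.spark.sql.SQLContext

    val sqlContext = new SQLContext(sc)
    import sqlContext.implicits._                 // brings toDF() into scope

    val rdd = sc.parallelize(Seq((1, "a"), (2, "b")))
    val df  = rdd.toDF("id", "value")             // RDD[(Int, String)] -> DataFrame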

Re: DataFrame first() method returning different results in each iteration

2016-02-03 Thread satish chandra j
Hi Hemant, My dataframe "ordrd_emd_df" consists of data in order, as I applied orderBy in the first step. I also tried applying orderBy before groupBy, but I still get different results in each iteration. Regards, Satish Chandra On Wed, Feb 3, 2016 at 4

DataFrame first() method returning different results in each iteration

2016-02-03 Thread satish chandra j
10 003 20 002 Not sure why the output varies in each iteration, as there is no change in the code or the values in the DataFrame. Please let me know if any inputs on this. Regards, Satish Chandra J
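
For context: groupBy plus first() is not deterministic, because row order within a group is not preserved across shuffles even if the input was sorted. A hedged sketch of a deterministic alternative using a window function (column names are hypothetical; rowNumber is the Spark 1.4-1.6 name, later renamed row_number):

    import org.apache.spark.sql.expressions.Window
    import org.apache.spark.sql.functions.{rowNumber, col}

    // rank rows within each group by an explicit ordering column
    val w = Window.partitionBy("emp_id").orderBy("ord_dt")
    val firstPerGroup = df.withColumn("rn", rowNumber().over(w))
                          .where(col("rn") === 1)
                          .drop("rn")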

Passing binding variable in query used in Data Source API

2016-01-21 Thread satish chandra j
it is an iterative approach, hence I cannot use constants but need to pass a variable to the query. If anybody has had a similar implementation passing a binding variable while fetching data from a source database using Data Source, please provide details on the same. Regards, Satish Chandra
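
One common workaround, sketched with hypothetical names - the Data Source API's dbtable option accepts a parenthesized subquery, so an iteration variable can be interpolated into it:

    val id = 42                                     // hypothetical binding value
    val df = sqlContext.read.format("jdbc").options(Map(
      "url"     -> "jdbc:postgresql://host:5432/db",
      "dbtable" -> s"(SELECT * FROM src_table WHERE id = $id) AS t"
    )).load()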

Re: Window Functions importing issue in Spark 1.4.0

2016-01-20 Thread satish chandra j
nd "import org.apache.spark.sql.functions.rowNumber" Thanks for providing your valuable inputs Regards, Satish Chandra J On Thu, Jan 7, 2016 at 4:41 PM, Ted Yu wrote: > Please take a look at the following for sample on how rowNumber is used: > https://github.com/apache/spark/pull/9050 > > BTW 1.4.

Window Functions importing issue in Spark 1.4.0

2016-01-07 Thread satish chandra j
nybody throw some light on how to fix the issue? Regards, Satish Chandra

Re: spark-submit for dependent jars

2015-12-21 Thread satish chandra j
Hi Rajesh, Could you please try giving your cmd as mentioned below: ./spark-submit --master local --class <main-class> --jars <jar1,jar2,...> <application-jar> Regards, Satish Chandra On Mon, Dec 21, 2015 at 6:45 PM, Madabhattula Rajesh Kumar < mrajaf...@gmail.com> wrote: > Hi, > > How to add dependent jars in spark-subm

RE: Concatenate a string to a Column of type string in DataFrame

2015-12-12 Thread Satish
Hi, Will the below mentioned snippet work for Spark 1.4.0? Thanks for your inputs. Regards, Satish -Original Message- From: "Yanbo Liang" Sent: 12-12-2015 20:54 To: "satish chandra j" Cc: "user" Subject: Re: Concatenate a string to a Column of type

Concatenate a string to a Column of type string in DataFrame

2015-12-12 Thread satish chandra j
e a column value of datatype String. Ex: a column value consists of '20-10-2015'; post update it should have '20-10-201500:00:000'. Note: the transformation is such that a new DataFrame has to be created from the old DataFrame. Regards, Satish Chandra
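
A hedged sketch of the transformation (column names hypothetical): functions.concat arrived around Spark 1.5, so a small UDF is the usual fallback on 1.4:

    import org.apache.spark.sql.functions.{concat, lit, udf, col}

    // Spark 1.5+: built-in concat
    val updated = df.withColumn("dt2", concat(col("dt"), lit("00:00:000")))

    // Spark 1.4 fallback: a UDF
    val addTime   = udf((s: String) => s + "00:00:000")
    val updated14 = df.withColumn("dt2", addTime(col("dt")))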

Error Handling approach for SparkSQL queries in Spark version 1.4

2015-12-10 Thread satish chandra j
HI All, Any inputs on an error-handling approach for Spark SQL or DataFrames? Thanks for all your valuable inputs in advance. Regards, Satish Chandra

Re: Re: RE: Error not found value sqlContext

2015-11-23 Thread satish chandra j
Thanks for all the support. It was a code issue which I had overlooked. Regards, Satish Chandra On Mon, Nov 23, 2015 at 3:49 PM, satish chandra j wrote: > Sorry, just to understand my issue.if Eclipse could not understand > Scala syntax properly then it should error for the other Spa

Re: Re: RE: Error not found value sqlContext

2015-11-23 Thread satish chandra j
mport sqlContext.implicits._" is not recognized at compile time. Please let me know if any further inputs are needed to fix the same. Regards, Satish Chandra On Mon, Nov 23, 2015 at 3:29 PM, prosp4300 wrote: > > > So it is actually a compile time error in Eclipse, instead of jar > generation fro

Re: RE: Error not found value sqlContext

2015-11-20 Thread satish chandra j
n RDBMS by implementing JdbcRDD. I tried a couple of DataFrame-related methods, most of which error out stating that the method has been overloaded. Please let me know if any further inputs are needed to analyze it. Regards, Satish Chandra On Fri, Nov 20, 2015 at 5:46 PM, prosp4300 wrote: > > Looks

RE: Error not found value sqlContext

2015-11-20 Thread Satish
Hi Michael, As my current Spark version is 1.4.0, why does it error out with "error: not found: value sqlContext" when I have "import sqlContext.implicits._" in my Spark job? Regards, Satish Chandra -Original Message- From: "Michael Armbrust" Sent: 20-11-

Error not found value sqlContext

2015-11-19 Thread satish chandra j
HI All, We have recently migrated from Spark 1.2.1 to Spark 1.4.0. I am fetching data from an RDBMS using JdbcRDD and registering it as a temp table to perform SQL queries. The below approach is working fine in Spark 1.2.1: JDBCRDD --> apply map using Case Class --> apply createSchemaRDD --> registerTempTabl
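
For reference, a sketch of the Spark 1.4 equivalent of that 1.2 flow (case class and names hypothetical). The key detail behind "not found: value sqlContext" is that sqlContext.implicits._ is an import from an instance, so a value named sqlContext must exist in scope before the import:

    val sqlContext = new org.apache.spark.sql.SQLContext(sc)
    import sqlContext.implicits._            // legal only after sqlContext exists

    case class MyRow(id: Int, name: String)  // hypothetical
    // jdbcRdd: RDD[(Int, String)] from the earlier JdbcRDD step (hypothetical)
    val df = jdbcRdd.map(r => MyRow(r._1, r._2)).toDF()
    df.registerTempTable("my_table")
    sqlContext.sql("SELECT * FROM my_table").show()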

Re: Issue while Spark Job fetching data from Cassandra DB

2015-11-19 Thread satish chandra j
or any of its parents". I have verified the permission grants for the user "UserIDXYZ", which has SELECT permission on the keyspace and the table on which it is performing the query. Please let me know if any further inputs on the same. Regards, Satish Chandra On Wed

Re: Issue while Spark Job fetching data from Cassandra DB

2015-11-17 Thread satish chandra j
e table in CQL UI, and the code used in the Spark job has been tested in Spark Shell and is working fine. Regards, Satish Chandra On Tue, Nov 17, 2015 at 11:45 PM, satish chandra j wrote: > HI All, > I am getting "*.UnauthorizedException: User has no SELECT > permission on or any of its

Issue while Spark Job fetching data from Cassandra DB

2015-11-17 Thread satish chandra j
eadEventExecutor$2.run(SingleThreadEventExecutor.java:116) ~[netty-all-4.0.23.Final.jar:4.0.23.Final] at java.lang.Thread.run(Thread.java:745) Please let me know if any solutions to fix the same Regards, Satish Chandra

No suitable drivers found for postgresql

2015-11-13 Thread satish chandra j
e let me know if any inputs on the same to proceed further. Regards, Satish Chandra

Re: Best practises

2015-11-02 Thread satish chandra j
HI All, Yes, any such doc will be a great help!!! On Fri, Oct 30, 2015 at 4:35 PM, huangzheng <1106944...@qq.com> wrote: > I have the same question. Can anyone help us? > > > -- Original Message -- > *From:* "Deepak Sharma"; > *Sent:* Friday, October 30, 2015, 7:23 PM > *To:* "user"; > *Subject

Re: JdbcRDD Constructor

2015-10-20 Thread satish chandra j
let me know what default approach Spark implements if one does not give inputs such as "lowerBound" and "upperBound" to the JdbcRDD constructor or the Data Source API. Thanks in advance for your inputs. Regards, Satish Chandra J On Thu, Sep 24, 2015 at 10:18 PM, Deenar Tor

Re: Convert SchemaRDD to RDD

2015-10-16 Thread satish chandra j
SchemaRDD to RDD without using a Tuple or Case Class, as we have restrictions in Scala 2.10. Regards, Satish Chandra

Convert SchemaRDD to RDD

2015-10-16 Thread satish chandra j
package scala*". Could anybody please provide inputs to convert a SchemaRDD to an RDD without using a Tuple in the implementation approach? Thanks for your valuable inputs in advance. Regards, Satish Chandra
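
A sketch for Spark 1.2, where SchemaRDD is already an RDD of Row - each Row can be pulled out as a plain Seq[Any], avoiding tuples and case classes and thus the Scala 2.10 22-element limits:

    // schemaRdd: org.apache.spark.sql.SchemaRDD (an RDD[Row] in Spark 1.2)
    val plainRdd: org.apache.spark.rdd.RDD[Seq[Any]] = schemaRdd.map(_.toSeq)
    // fields remain accessible by position, e.g. the first column:
    plainRdd.map(fields => fields(0))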

Fwd: Partition Column in JDBCRDD or Datasource API

2015-10-14 Thread satish chandra j
, Satish Chandra Jummula -- Forwarded message -- From: satish chandra j Date: Wed, Sep 30, 2015 at 2:10 PM Subject: Partition Column in JDBCRDD or Datasource API To: user HI All, Please provide your inputs on the partition column to be used in the Data Source API or JdbcRDD in a scenario where

Re: Scala Limitation - Case Class definition with more than 22 arguments

2015-10-04 Thread satish chandra j
Hi Petr, Could you please let me know if I am missing anything in the code, as my code is the same as the snippet you shared, but I am still getting the below error: *error: type mismatch: found String, required: Serializable* Please let me know if any fix is to be applied for this. Regards, Satish

Re: Scala Limitation - Case Class definition with more than 22 arguments

2015-10-02 Thread satish chandra j
Hi, I am getting the below error while implementing the custom class code given by you: error: type mismatch: found String, required: Serializable. Please let me know if I am missing anything here. Regards, Satish Chandra On Wed, Sep 23, 2015 at 12:34 PM, Petr Novak wrote: > You

Partition Column in JDBCRDD or Datasource API

2015-09-30 Thread satish chandra j
HI All, Please provide your inputs on the partition column to be used in the Data Source API or JdbcRDD in a scenario where the source table does not have a numeric column which is sequential and unique, such that proper partitioning can take place in Spark. Regards, Satish
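
One hedged workaround, assuming PostgreSQL (hashtext is Postgres-specific; substitute your database's hash or modulo function, and all names below are hypothetical): derive a numeric partition key inside the dbtable subquery and bound on it:

    import java.util.Properties

    val query =
      """(SELECT t.*, abs(hashtext(t.string_key)) % 32 AS part_col
         FROM src_table t) AS sub"""
    val df = sqlContext.read.jdbc(
      "jdbc:postgresql://host:5432/db", query,
      "part_col", 0L, 31L, 32, new Properties())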

Fetching Date value from RDD of type spark.sql.row

2015-09-29 Thread satish chandra j
/apache/spark/sql/api/java/Row.html But I am getting an error: "value get is not a member of org.apache.spark.sql.row". Let me know if there is an alternate method to fetch the Date in a Row. Regards, Satish Chandra

Re: Fetching Date value from spark.sql.row in Spark 1.2.2

2015-09-29 Thread satish chandra j
HI All, If there are any alternate solutions to get the Date value from org.apache.spark.sql.row, please suggest them. Regards, Satish Chandra On Tue, Sep 29, 2015 at 4:41 PM, satish chandra j wrote: > HI All, > Currently using Spark 1.2.2, as getDate method is not defined in this > version hence

Fetching Date value from spark.sql.row in Spark 1.2.2

2015-09-29 Thread satish chandra j
But I am getting an error: "value get is not a member of org.apache.spark.sql.row". Regards, Satish Chandra
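
A sketch for Spark 1.2.x, where Row.getDate does not exist - pull the value positionally with apply() and cast it (the column index is hypothetical):

    import java.sql.Date
    // row: org.apache.spark.sql.Row; column 0 assumed to hold the date
    val d: Date = if (row.isNullAt(0)) null else row(0).asInstanceOf[Date]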

Not fetching all records from Cassandra DB

2015-09-24 Thread satish chandra j
above Regards, Satish Chandra

Re: Scala Limitation - Case Class definition with more than 22 arguments

2015-09-24 Thread satish chandra j
lease let me know if any work around for the same Regards, Satish Chandra On Thu, Sep 24, 2015 at 3:18 PM, satish chandra j wrote: > HI All, > As it is for SQL purpose I understand, need to go ahead with Custom Case > Class approach > Could anybody have a sample code for creati

Re: JdbcRDD Constructor

2015-09-24 Thread satish chandra j
)...r.getInt("col37"))) When I have the above with 100,0,1 I am getting SQL_RDD.count as 100. When set to 100,0,2 I am getting SQL_RDD.count as 151. When set to 100,0,3 I am getting SQL_RDD.count as 201. But I expect every execution count to be 100; let me know if I am missing a

Re: Scala Limitation - Case Class definition with more than 22 arguments

2015-09-24 Thread satish chandra j
HI All, As it is for SQL purposes, I understand I need to go ahead with the custom case class approach. Does anybody have sample code for creating a custom case class to refer to? It would be really helpful. Regards, Satish Chandra On Thu, Sep 24, 2015 at 2:51 PM, Adrian Tanase wrote: > +1 on group

Re: JdbcRDD Constructor

2015-09-23 Thread satish chandra j
HI, Could anybody provide inputs if they have come across a similar issue? @Rishitesh Could you provide any sample code to use JdbcRDDSuite? Regards, Satish Chandra On Wed, Sep 23, 2015 at 5:14 PM, Rishitesh Mishra wrote: > I am using Spark 1.5. I always get count = 100, irrespective of

Re: JdbcRDD Constructor

2015-09-23 Thread satish chandra j
HI, Currently using Spark 1.2.2; could you please let me know the correct output count which you got by using JdbcRDDSuite. Regards, Satish Chandra On Wed, Sep 23, 2015 at 4:02 PM, Rishitesh Mishra wrote: > Which version of Spark you are using ?? I can get correct results us

Re: Scala Limitation - Case Class definition with more than 22 arguments

2015-09-23 Thread satish chandra j
HI Andy, So I believe if I opt for the approach of programmatically building the schema, then it would not have any restriction such as the "case class not allowing more than 22 arguments", as I need to define a schema of around 37 arguments. Regards, Satish Chandra On Wed, Sep 23, 2015

JdbcRDD Constructor

2015-09-22 Thread satish chandra j
: 100
0, 100, 2: 151
0, 100, 3: 201
Please help me in understanding why the output count is 151 if numPartitions is 2 and 201 if numPartitions is 3. Regards, Satish Chandra
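
For context, a sketch paraphrasing how JdbcRDD.getPartitions splits the inclusive bound range in the Spark 1.x source - the ranges are contiguous and non-overlapping, so an inflated count usually means the query's two '?' placeholders are not bounding a unique numeric column (expected shape: ... WHERE id >= ? AND id <= ?):

    val (lowerBound, upperBound, numPartitions) = (0L, 100L, 2)
    val length = BigInt(1) + upperBound - lowerBound        // 101 ids, inclusive
    val ranges = (0 until numPartitions).map { i =>
      val start = lowerBound + ((i * length) / numPartitions).toLong
      val end   = lowerBound + (((i + 1) * length) / numPartitions).toLong - 1
      (start, end)
    }
    // ranges == Vector((0,49), (50,100)) -- no overlap between partitions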

Scala Limitation - Case Class definition with more than 22 arguments

2015-09-22 Thread satish chandra j
asses cannot have more than 22 parameters." It would be a great help if there are any inputs on the same. Regards, Satish Chandra
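
For reference, a sketch of the programmatic-schema alternative (Spark 1.3+ API; names and types below are hypothetical) - Row plus StructType sidesteps the Scala 2.10 22-parameter case-class limit for a 37-column schema:

    import org.apache.spark.sql.Row
    import org.apache.spark.sql.types.{StructType, StructField, StringType}

    // 37 string columns, built programmatically instead of via a case class
    val schema = StructType((1 to 37).map(i => StructField(s"col$i", StringType, nullable = true)))

    // rawRdd: RDD[Array[String]] with 37 elements per record (hypothetical)
    val rowRdd = rawRdd.map(arr => Row.fromSeq(arr.toSeq))
    val df = sqlContext.createDataFrame(rowRdd, schema)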

Spark SQL vs Spark Programming

2015-08-30 Thread satish chandra j
d you please let me know the pros & cons of these implementations? Thanks for your support. Regards, Satish Chandra

Re: Joining using mulitimap or array

2015-08-24 Thread satish chandra j
Hi, If your join logic is correct, it seems to be a similar issue to one I faced recently. Can you try setting "spark.driver.allowMultipleContexts" to "true" on the SparkConf passed to the SparkContext? Regards, Satish Chandra On Mon, Aug 24, 2015 at 2:51 PM, Ilya Karpov wrote: > Hi, guys >

Re: Transformation not happening for reduceByKey or GroupByKey

2015-08-24 Thread satish chandra j
() val conf = new SparkConf().set("spark.driver.allowMultipleContexts","true") val sc = new SparkContext(conf) val DataRDD = sc.makeRDD(Seq((0,1),(0,2),(1,2),(1,3),(2,4))) DataRDD.reduceByKey(_+_).collect Result: Array((0,3),(1,5),(2,4)) Regards, Satish Chandra On Sat, Aug 22, 2015 at 11:27 AM, sat

Re: Transformation not happening for reduceByKey or GroupByKey

2015-08-22 Thread satish chandra j
HI All, Currently using DSE 4.7 and Spark 1.2.2 version. Regards, Satish On Fri, Aug 21, 2015 at 7:30 PM, java8964 wrote: > What version of Spark are you using, or does it come with DSE 4.7? > > We just cannot reproduce it in Spark. > > yzhang@localhost>$ more test.spark > val p

Re: Transformation not happening for reduceByKey or GroupByKey

2015-08-21 Thread satish chandra j
HI Abhishek, I have even tried that but rdd2 is empty Regards, Satish On Fri, Aug 21, 2015 at 6:47 PM, Abhishek R. Singh < abhis...@tetrationanalytics.com> wrote: > You had: > > > RDD.reduceByKey((x,y) => x+y) > > RDD.take(3) > > Maybe try: > > > rdd

Re: Transformation not happening for reduceByKey or GroupByKey

2015-08-21 Thread satish chandra j
HI All, Any inputs for the actual problem statement? Regards, Satish On Fri, Aug 21, 2015 at 5:57 PM, Jeff Zhang wrote: > Yong, Thanks for your reply. > > I tried spark-shell -i , it works fine for me. Not sure the > difference with > dse spark --master local --jars postgresql-

Re: Transformation not happening for reduceByKey or GroupByKey

2015-08-21 Thread satish chandra j
HI Robin, Yes, it is DSE, but the issue is related to Spark only. Regards, Satish Chandra On Fri, Aug 21, 2015 at 3:06 PM, Robin East wrote: > Not sure, never used dse - it’s part of DataStax Enterprise right? > > On 21 Aug 2015, at 10:07, satish chandra j > wrote: > > HI R

Re: Transformation not happening for reduceByKey or GroupByKey

2015-08-21 Thread satish chandra j
Yes, DSE 4.7. Regards, Satish Chandra On Fri, Aug 21, 2015 at 3:06 PM, Robin East wrote: > Not sure, never used dse - it’s part of DataStax Enterprise right? > > On 21 Aug 2015, at 10:07, satish chandra j > wrote: > > HI Robin, > Yes, below mentioned piece of code works fi

Re: Transformation not happening for reduceByKey or GroupByKey

2015-08-21 Thread satish chandra j
required output Regards, Satish Chandra On Thu, Aug 20, 2015 at 8:23 PM, Robin East wrote: > This works for me: > > scala> val pairs = sc.makeRDD(Seq((0,1),(0,2),(1,20),(1,30),(2,40))) > pairs: org.apache.spark.rdd.RDD[(Int, Int)] = ParallelCollectionRDD[77] at >

Re: Transformation not happening for reduceByKey or GroupByKey

2015-08-20 Thread satish chandra j
HI All, Could anybody let me know what it is that I am missing here? It should work, as it is a basic transformation. Please let me know if any additional information is required. Regards, Satish On Thu, Aug 20, 2015 at 3:35 PM, satish chandra j wrote: > HI All, > I have data in RDD as mentioned

Transformation not happening for reduceByKey or GroupByKey

2015-08-20 Thread satish chandra j
RDD: org.apache.spark.rdd.RDD[(Int,Int)]= ShuffledRDD[1] at reduceByKey at :73 res:Array[(Int,Int)] = Array() Command as mentioned dse spark --master local --jars postgresql-9.4-1201.jar -i Please let me know what is missing in my code, as my resultant Array is empty Regards, Satish

to retrive full stack trace

2015-08-18 Thread satish chandra j
HI All, Please let me know if any arguments are to be passed in the CLI to retrieve the FULL STACK TRACE in Apache Spark. I am stuck on an issue for which it would be helpful to analyze the full stack trace. Regards, Satish Chandra

Re: saveToCassandra not working in Spark Job but works in Spark Shell

2015-08-14 Thread satish chandra j
Hi Akhil, Which jar version is conflicting and what needs to be done for the fix Regards, Satish Chandra On Fri, Aug 14, 2015 at 2:44 PM, Akhil Das wrote: > Looks like a jar version conflict to me. > > Thanks > Best Regards > > On Thu, Aug 13, 2015 at 7:59 PM, satish chandra

Re: saveToCassandra not working in Spark Job but works in Spark Shell

2015-08-13 Thread satish chandra j
HI, Please let me know if I am missing anything in the below mail, to get the issue fixed. Regards, Satish Chandra On Wed, Aug 12, 2015 at 6:59 PM, satish chandra j wrote: > HI, > > The below mentioned code is working fine in Spark Shell but when > the same is pla

Re: Spark Cassandra Connector issue

2015-08-11 Thread satish chandra j
sc.stop() sys.exit() } } I am getting the error as below: *Exception in thread "main" java.lang.NoSuchMethodError: com.datastax.spark.connector.package$.toRDDFunctions(Lorg/apache/spark/rdd/RDD;Lscala/reflect/ClassTag;)Lcom/datastax/spark/connector/RDDFunctions*

Re: dse spark-submit multiple jars issue

2015-08-11 Thread satish chandra j
.spark.deploy.SparkSubmit$.launch(SparkSubmit.scala:358) at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:75) at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala) Regards, Satish Chandra J On Tue, Aug 11, 2015 at 6:15 PM, Javier Domingo Cansino < javier.d

Re: dse spark-submit multiple jars issue

2015-08-11 Thread satish chandra j
-java_2.10-1.1.1.jar ///home/missingmerch/etl-0.0.1-SNAPSHOT.jar Regards, Satish On Tue, Aug 11, 2015 at 4:08 PM, Javier Domingo Cansino < javier.domi...@fon.com> wrote: > I have no real idea (not java user), but have you tried with the --jars > option? > > > http://spark.a

dse spark-submit multiple jars issue

2015-08-11 Thread satish chandra j
jar file paths in the command are an issue; please provide the appropriate format for providing multiple jars in the command. Thanks for the support. Satish Chandra
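
For reference, a hedged sketch of the format (paths and class name hypothetical): --jars takes a single comma-separated list with no spaces, and the application jar comes last as a positional argument:

    dse spark-submit --class com.example.Main \
      --jars /home/missingmerch/dse.jar,/home/missingmerch/spark-cassandra-connector-java_2.10-1.1.1.jar \
      /home/missingmerch/etl-0.0.1-SNAPSHOT.jar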

Re: Differents in loading data using spark datasource api and using jdbc

2015-08-10 Thread satish chandra j
Hi, As I understand it, JDBC is meant for moderate volumes of data, but the Data Source API is a better option if the volume of data is larger. The Data Source API is not available in lower versions of Spark such as 1.2.0. Regards, Satish On Tue, Aug 11, 2015 at 8:53 AM, 李铖 wrote: > Hi, everyone. > > I

Re: Spark Cassandra Connector issue

2015-08-10 Thread satish chandra j
-SNAPSHOT.jar I understand the only problem is with the way I provide the list of jar files in the command; if anybody using DataStax Enterprise could please provide their inputs to get this issue resolved. Thanks for your support. Satish Chandra On Mon, Aug 10, 2015 at 7:16 PM, Dean Wampler wrote: > I don

Re: Spark Cassandra Connector issue

2015-08-10 Thread satish chandra j
order of arguments passed in the DSE command-line interface, but now I am not sure why the issue appears again. Please let me know if I am still missing anything in my command as mentioned above (as insisted, I have added dse.jar and spark-cassandra-connector-java_2.10.1.1.1.jar). Thanks for support, Satish

Spark Cassandra Connector issue

2015-08-10 Thread satish chandra j
C:\workspace\etl\lib\dse.jar Maven dependency: groupId com.datastax.spark, artifactId spark-cassandra-connector-java_2.10, version 1.1.1. Please let me know if any further details are required to analyze the issue. Regards, Satish Chandra

Re: Spark-Submit error

2015-08-03 Thread satish chandra j
:60525/user/HeartbeatReceiver On Tue, Aug 4, 2015 at 8:38 AM, Guru Medasani wrote: > Hi Satish, > > Can you add more error or log info to the email? > > > Guru Medasani > gdm...@gmail.com > > > > On Jul 31, 2015, at 1:06 AM, satish chandra j > wrote: > &g

Spark-Submit error

2015-07-30 Thread satish chandra j
HI, I have submitted a Spark job with the options jars, class, and master as local, but I am getting an error as below: dse spark-submit spark error: exception in thread main java.io.IOException: InvalidRequestException(why: You have not logged in). Note: submitting from a DataStax Spark node. Please let me kn

Spark Shell "No suitable driver found" error

2015-07-09 Thread satish chandra j
on the RDD, then I am getting the typical "No suitable driver found for jdbc:postgresql://". Please provide a solution if anybody has faced and fixed the same. Regards, Satish Chandra
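
A common fix, sketched with hypothetical connection details: ship the driver jar to both the driver and the executors, and force driver registration inside the connection factory so it also runs on executors:

    // launch with the jar visible on both sides, e.g.:
    //   spark-shell --driver-class-path postgresql-9.4-1201.jar --jars postgresql-9.4-1201.jar
    def getConnection(): java.sql.Connection = {
      Class.forName("org.postgresql.Driver")   // registers the JDBC driver
      java.sql.DriverManager.getConnection(
        "jdbc:postgresql://host:5432/db", "user", "pass")
    }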
