results from it. Kindly do let
me know if this is possible.
Thanks,
Siddharth Ubale,
thing that whenever YARN allocates containers on
the machine from which I am submitting the code, the Spark job runs; otherwise
it always fails.
Thanks,
Siddharth Ubale
-Original Message-
From: Wellington Chevreuil [mailto:wellington.chevre...@gmail.com]
Sent: Thursday, January 21, 2016 3:44 PM
To
above 2 issues.
Thanks,
Siddharth Ubale,
above 2 issues.
Thanks,
Siddharth Ubale,
guide me .
I am submitting a Spark Streaming job which is reading from a Kafka topic and
dumping data into HBase tables via the Phoenix API. The job is behaving as
expected in local mode.
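For reference, a minimal sketch of such a pipeline; the broker address, ZooKeeper URL, topic name, table name and column names below are all placeholders, and it assumes the spark-streaming-kafka and phoenix-spark artifacts are on the classpath:

```scala
import kafka.serializer.StringDecoder
import org.apache.phoenix.spark._
import org.apache.spark.SparkConf
import org.apache.spark.streaming.kafka.KafkaUtils
import org.apache.spark.streaming.{Seconds, StreamingContext}

object KafkaToPhoenix {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("kafka-to-phoenix")
    val ssc = new StreamingContext(conf, Seconds(10))

    // Broker list and topic name are placeholders.
    val kafkaParams = Map("metadata.broker.list" -> "broker1:9092")
    val stream = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
      ssc, kafkaParams, Set("events"))

    stream.foreachRDD { rdd =>
      // Parse "id,value" records; EVENTS table, columns and zkUrl are placeholders.
      rdd.map(_._2.split(","))
        .map(fields => (fields(0).toLong, fields(1)))
        .saveToPhoenix("EVENTS", Seq("ID", "VALUE"), zkUrl = Some("zkhost:2181"))
    }

    ssc.start()
    ssc.awaitTermination()
  }
}
```

Running this on YARN (as opposed to local mode) also requires shipping the Phoenix client jar to the executors, e.g. via the --jars option of spark-submit.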
Thanks
Siddharth Ubale
From: Ted Yu [mailto:yuzhih...@gmail.com]
Sent: Friday, January 15, 2016 8:08 PM
To
)
at java.lang.reflect.Method.invoke(Method.java:497)
at
org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:483)
Thanks,
Siddharth
From: Ted Yu [mailto:yuzhih...@gmail.com]
Sent: Friday, January 15, 2016 7:43 PM
To: Siddharth Ubale
Cc: user
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
16/01/15 19:23:53 INFO Utils: Shutdown hook called
16/01/15 19:23:53 INFO Utils: Deleting directory
/tmp/spark-b6ebcb83-efff-432a-9a7a-b4764f482d81
java.lang.UNIXProcess$ProcessPipeOutputStream@7a0a6f73 1
Siddharth Ubale,
le t understand this.
Thanks,
Siddharth
From: ayan guha
Sent: 01 May 2015 04:38
To: Ted Yu
Cc: user@spark.apache.org; Siddharth Ubale; matei.zaha...@gmail.com; Prakash
Hosalli; Amit Kumar
Subject: Re: real time Query engine Spark-SQL on Hbase
And if I may as
nario. Please assist.
Thanks,
Siddharth Ubale,
Hi ,
In a Spark web application the RDD is generated every time a client sends a
query request. Is there any way the RDD can be built once and then queried
again and again on an active SparkContext?
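One common pattern, sketched below with illustrative table and option names and assuming the phoenix-spark data source is available, is to build and cache the table once at application start and then serve every client query from the same SQLContext:

```scala
import org.apache.spark.SparkContext
import org.apache.spark.sql.SQLContext

// Build once at application start; sc is the long-lived SparkContext.
val sc: SparkContext = ???  // created at startup
val sqlContext = new SQLContext(sc)

// Table name and zkUrl are placeholders for the actual Phoenix/HBase mapping.
val df = sqlContext.read
  .format("org.apache.phoenix.spark")
  .option("table", "EVENTS")
  .option("zkUrl", "zkhost:2181")
  .load()

df.registerTempTable("events")
sqlContext.cacheTable("events")  // materialized once, reused by later queries

// Serve each incoming client request against the already-cached table.
def serve(query: String) = sqlContext.sql(query).collect()
```

The key point is that the SQLContext and the cached table outlive any single request, so only the first query pays the cost of scanning HBase.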
Thanks,
Siddharth Ubale,
been applied? Or do we have to perform the RDD mapping and apply the schema
every time we run a query? In this case I am using HBase tables to map the RDD.
2. Does Spark-SQL provide better performance when used with Hive or with HBase?
Thanks,
Siddharth Ubale,
base updated as and when the HBase table reflects any change?
Thanks,
Siddharth Ubale,
Synchronized Communications
#43, Velankani Tech Park, Block No. II,
3rd Floor, Electronic City Phase I,
Bangalore – 560 100
Tel : +91 80 3202 4060
Web: www.syncoms.com<http://www.syncoms.com/>
Hi,
How do we manage putting part of the data in memory and part on disk when the
data resides in a Hive table?
We have tried using the available documentation but are unable to get the
above approach working; we are only able to cache the entire table or uncache it.
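One option worth trying, sketched here with a hypothetical table name and assuming a HiveContext already bound to sqlContext, is to persist the DataFrame with an explicit StorageLevel: with MEMORY_AND_DISK, partitions that fit stay in memory and the remainder is spilled to local disk, rather than the cache being all-or-nothing:

```scala
import org.apache.spark.storage.StorageLevel

// "my_hive_table" is a placeholder for the actual Hive table name.
val df = sqlContext.table("my_hive_table")

// Partitions that do not fit in memory are written to local disk
// instead of being dropped and recomputed on the next access.
df.persist(StorageLevel.MEMORY_AND_DISK)
df.count()  // force materialization of the cache
```

Other levels such as MEMORY_ONLY or MEMORY_AND_DISK_SER trade recomputation cost against memory footprint in different ways.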
Thanks,
Siddharth Ubale
Sql.
Please share your views.
Thanks,
Siddharth Ubale,
built version for 2.4 and above in the downloads section of the
Spark homepage -> Downloads.
Siddharth Ubale,