Sachin
On Fri, May 15, 2015 at 6:57 AM, ayan guha wrote:
> With this information it is hard to predict. What's the performance you
> are getting? What's your desired performance? Maybe you can post your code
> so that experts can suggest improvements?
> On 14 May 2015 15:02,
Hi Friends,
Please, can someone give me an idea: what should the time (complete job
execution) ideally be for a Spark job?
I have data in a Hive table; the volume is about 1 GB, 2 lakh (200,000)
rows for a whole month.
I want to do monthly aggregation using SQL queries with GROUP BY (a sketch
of that pattern is below).
I have only one node, one cluster; below
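A minimal sketch of that monthly aggregation, assuming Spark 1.x with a
JavaHiveContext (as used elsewhere in these threads) and a hypothetical Hive
table named events with columns month and amount:

import org.apache.spark.sql.api.java.JavaSchemaRDD;
import org.apache.spark.sql.hive.api.java.JavaHiveContext;

JavaHiveContext hiveContext = new JavaHiveContext(sc); // sc: an existing JavaSparkContext
// One GROUP BY pass over the month's rows; table and column names are assumptions.
JavaSchemaRDD monthly = hiveContext.sql(
    "SELECT month, SUM(amount) AS total, AVG(amount) AS average "
  + "FROM events GROUP BY month");
monthly.collect(); // the grouped result is small, so collecting it is fine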
Hi All,
I am trying to execute batch processing in yarn-cluster mode, i.e. I have
many SQL insert queries; based on the argument provided, the job fetches the
queries, creates the context and schema RDD, and inserts into Hive tables (a
sketch of this pattern is below).
Please note: in standalone mode it works, but in cluster mode it
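A hedged sketch of that flow (illustrative only; loadQueriesFor is a
hypothetical helper that returns the INSERT statements for the given
argument):

JavaHiveContext hiveContext = new JavaHiveContext(sc);
// Run each argument-selected INSERT through the Hive context.
for (String insertSql : loadQueriesFor(args[0])) {
    hiveContext.sql(insertSql); // e.g. INSERT INTO TABLE target SELECT ... FROM staging
}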
Hi Linlin,
have you found a solution for this issue? If yes, what needed to be
corrected? I am also getting the same error when submitting a Spark job in
cluster mode:
2015-04-14 18:16:43 DEBUG Transaction - Transaction rolled back in 0 ms
2015-04-14 18:16:4
Hi,
When I submit a Spark job with --master yarn-cluster and the
command/options below, I get a driver
memory error:
spark-submit --jars
./libs/mysql-connector-java-5.1.17.jar,./libs/log4j-1.2.17.jar --files
datasource.properties,log4j.properties --master yarn-cluster --num-executors
1 --driver-m
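For reference, a spark-submit of the same shape with the driver memory
spelled out (the class and jar names after --class are placeholders; the
options themselves are standard spark-submit flags):

spark-submit --jars ./libs/mysql-connector-java-5.1.17.jar,./libs/log4j-1.2.17.jar \
  --files datasource.properties,log4j.properties \
  --master yarn-cluster --num-executors 1 \
  --driver-memory 1g --executor-memory 1g \
  --class com.example.MyJob myjob.jar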
Hi,
I observed that we have installed only a single-node cluster,
and when submitting the job as yarn-cluster I get the error below. Is the
single-node installation the cause?
Please correct me; if this is not the cause, why am I not able to run in
cluster mode?
spark submit command is -
spark-submit
>
> On Wed, Mar 25, 2015 at 9:07 PM sachin Singh
> wrote:
>
>> Hi,
>> when I submit a Spark job in cluster mode I get the error below in the
>> hadoop-yarn log;
>> does anyone have any idea? Please suggest.
>>
>> 2015-03-25 23:35:22,467 IN
Hi,
when I submit a Spark job in cluster mode I get the error below in the
hadoop-yarn log;
does anyone have any idea? Please suggest.
2015-03-25 23:35:22,467 INFO
org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl:
application_1427124496008_0028 State change from FINAL_SAVING to FAILED
2
thanks Sean and Akhil,
I changed the permission of */user/spark/applicationHistory*; now it
works.
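For anyone hitting the same error: per Sean's note quoted below, the
configured path is on the local file system, so the fix amounts to a local
permission change along these lines (the exact mode here is a guess):

chmod -R 1777 /user/spark/applicationHistory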
On Tue, Mar 24, 2015 at 7:35 PM, Sachin Singh
wrote:
> thanks Sean,
> can you please suggest in which file or configuration I need to set the
> proper path? Please elaborate which
...configuration specifies a local path. See the exception message.
>
> On Tue, Mar 24, 2015 at 1:08 PM, Akhil Das
> wrote:
> > Its in your local file system, not in hdfs.
> >
> > Thanks
> > Best Regards
> >
> > On Tue, Mar 24, 2015 at 6:25 PM, Sachin Singh
> &
Regards
>
> On Tue, Mar 24, 2015 at 6:08 PM, Sachin Singh
> wrote:
>
>> Hi Akhil,
>> thanks for your quick reply,
>> I would like to request: please elaborate, i.e. what kind of permission
>> is required?
>>
>> thanks in advance,
>>
>> Regards
*/user/spark* directory.
>
> Thanks
> Best Regards
>
> On Tue, Mar 24, 2015 at 5:21 PM, sachin Singh
> wrote:
>
>> Hi all,
>> all of a sudden I am getting the error below when I submit a Spark job
>> using yarn as master; it is not able to create the Spark context; previ
Hi all,
all of a sudden I am getting the error below when I submit a Spark job
using yarn as master; it is not able to create the Spark context, though it
was previously working fine.
I am using CDH 5.3.1 and creating a JavaHiveContext.
spark-submit --jars
./analiticlibs/mysql-connector-java-5.1.17.jar,./analiticlibs/log4j-1.2.
I have copied hive-site.xml to the Spark conf folder: cp
/etc/hive/conf/hive-site.xml /usr/lib/spark/conf
Hi,
I am using CDH 5.3.1.
I am getting the error below; even the Spark context is not getting created.
I am submitting my job with the following command:
spark-submit --jars
./analiticlibs/utils-common-1.0.0.jar,./analiticlibs/mysql-connector-java-5.1.17.jar,./analiticlibs/log4j-1.2.17.jar,./analiti
Not yet.
Please let me know if you find a solution.
Regards
Sachin
On 4 Mar 2015 21:45, "mael2210 [via Apache Spark User List]" <
ml-node+s1001560n21909...@n3.nabble.com> wrote:
> Hello,
>
> I am facing the exact same issue. Could you solve the problem ?
>
> Kind regards
>
> -
I am using CDH 5.3.1
Hi,
I want to run my Spark job in Hadoop YARN cluster mode.
I am using the command below:
spark-submit --master yarn-cluster --driver-memory 1g --executor-memory 1g
--executor-cores 1 --class com.dc.analysis.jobs.AggregationJob
sparkanalitic.jar param1 param2 param3
I am getting the error below; kindly
Yes.
On 19 Feb 2015 23:40, "Harshvardhan Chauhan" wrote:
> Is this the full stack trace ?
>
> On Wed, Feb 18, 2015 at 2:39 AM, sachin Singh
> wrote:
>
>> Hi,
>> I want to run my Spark job in Hadoop YARN cluster mode.
>> I am using the command below:
>
Hi,
I want to run my Spark job in Hadoop YARN cluster mode.
I am using the command below:
spark-submit --master yarn-cluster --driver-memory 1g --executor-memory 1g
--executor-cores 1 --class com.dc.analysis.jobs.AggregationJob
sparkanalitic.jar param1 param2 param3
I am getting the error below; kindly
Hi,
can some one guide how to get SQL Exception trapped for query executed using
SchemaRDD,
i mean suppose table not found
thanks in advance,
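A hedged sketch of one way to trap it, assuming the Spark 1.x Java API: the
table-not-found error surfaces as a RuntimeException either when the query
is analyzed or when an action runs, so wrapping both in one try block
catches it (the table name here is made up):

try {
    JavaSchemaRDD result = sqlContext.sql("SELECT * FROM missing_table");
    result.collect();
} catch (RuntimeException e) {
    // message is along the lines of "Table Not Found: missing_table"
    System.err.println("Query failed: " + e.getMessage());
}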
Hi,
Please, can somebody help me keep the Spark and Hive log output out of my
application log?
Both Spark and Hive use a log4j properties file.
I have configured the log4j.properties file for my application as below, but
it is still printing the Spark and Hive console logging too. Please suggest;
it is urgent for me.
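A hedged example of a log4j.properties that keeps the application at INFO
while raising the standard Spark and Hive logger names to WARN (this is an
illustration, not the poster's file):

log4j.rootLogger=INFO, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d %p %c: %m%n
# Quieten framework chatter; application loggers inherit the root level.
log4j.logger.org.apache.spark=WARN
log4j.logger.org.apache.hadoop=WARN
log4j.logger.hive=WARN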
Hi,
when I try to execute my program as
spark-submit --master yarn --class com.mytestpack.analysis.SparkTest
sparktest-1.jar
I am getting the error below:
java.lang.IllegalArgumentException: Required executor memory (1024+384 MB)
is above the max threshold (1024 MB) of this cluster!
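That message means the requested executor memory plus YARN's overhead
(1024+384 MB) exceeds yarn.scheduler.maximum-allocation-mb (1024 MB here).
Either request less executor memory or raise the YARN ceiling in
yarn-site.xml, for example (the value is only an illustration):

<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>2048</value>
</property>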
Hi all,
the issue has been resolved; it worked when I used
rdd.foreachRDD(new Function<JavaRDD<String>, Void>() { // element type assumed String
    @Override
    public Void call(JavaRDD<String> rdd) throws Exception {
        if (rdd != null) {
            List<String> result = rdd.collect();
Hi, I want to send streaming data to a Kafka topic.
I have RDD data which I converted into a JavaDStream; now I want to send it
to a Kafka topic. I don't want the Kafka sending code, just a foreachRDD
implementation (a sketch follows the snippet below). My code looks like:
public void publishtoKafka(ITblStream t)
{
MyTopi
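For what it's worth, a self-contained sketch of a foreachRDD that publishes
to Kafka (not the poster's code: the broker address, topic, and class names
are assumptions, and collecting on the driver is only sensible for small
batches):

import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.function.Function;
import org.apache.spark.streaming.api.java.JavaDStream;

public class KafkaPublisher {
    public static void publish(JavaDStream<String> stream, final String topic) {
        stream.foreachRDD(new Function<JavaRDD<String>, Void>() {
            @Override
            public Void call(JavaRDD<String> rdd) throws Exception {
                List<String> batch = rdd.collect(); // driver-side; small batches only
                Properties props = new Properties();
                props.put("bootstrap.servers", "localhost:9092"); // assumed broker
                props.put("key.serializer",
                    "org.apache.kafka.common.serialization.StringSerializer");
                props.put("value.serializer",
                    "org.apache.kafka.common.serialization.StringSerializer");
                KafkaProducer<String, String> producer =
                    new KafkaProducer<String, String>(props);
                for (String record : batch) {
                    producer.send(new ProducerRecord<String, String>(topic, record));
                }
                producer.close();
                return null;
            }
        });
    }
}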
I have a table (CSV file); I loaded its data by creating a POJO matching the
table structure, and created a SchemaRDD as below:
JavaRDD<TestTable> testSchema =
    sc.textFile("D:/testTable.csv").map(GetTableData); /* GetTableData will
transform all the table data into TestTable objects */
JavaSchemaRDD schemaTest = sqlContext.applySchema(testSchema, TestTable.class);
Hi,
I have a CSV file with fields a, b, c.
I want to do aggregation (sum, average, ...) on any field (a, b, or c) as per
user input, using the Apache Spark Java API. Please help, it's urgent! (A
sketch follows below.)
Thanks in advance,
Regards
Sachin
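A hedged sketch of the dynamic aggregation, assuming the Spark 1.x Java API
and a hypothetical POJO Record with a numeric field c; ParseLine, the input
path, and all names are placeholders, and user input should be validated
before being spliced into the SQL string:

JavaSQLContext sqlContext = new JavaSQLContext(sc);
JavaRDD<Record> rows = sc.textFile("data.csv").map(new ParseLine()); // hypothetical parser
JavaSchemaRDD table = sqlContext.applySchema(rows, Record.class);
table.registerTempTable("records");
String groupField = userChoice; // "a" or "b", taken from user input
JavaSchemaRDD result = sqlContext.sql(
    "SELECT " + groupField + ", SUM(c) AS total, AVG(c) AS average "
  + "FROM records GROUP BY " + groupField);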