> On 24 October 2016 at 15:08, Sankar Mittapally <...@creditvidya.com> wrote:
>
>> sc <- sparkR.session(master = "spark://ip-172-31-6-116:7077",
>>                      sparkConfig = list(spark.executor.memory = "...
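(The session call above is cut off in the archive. A complete call of the same shape would look like the sketch below; the "10g" value is only a guess based on the 10G executors mentioned later in this thread.)

library(SparkR)

# Connect SparkR to the standalone master; the memory value is a placeholder.
sc <- sparkR.session(master = "spark://ip-172-31-6-116:7077",
                     sparkConfig = list(spark.executor.memory = "10g"))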
>
> *Disclaimer:* Use it at your own risk. Any and all responsibility for any
> loss, damage or destruction of data or any other property which may arise
> from relying on this email's technical content is explicitly disclaimed.
> The author will in no case be liable for any monetary damages arising from
> such loss, damage or destruction.
>
>
>
> On 24 October 2016 at 13:15, Sankar Mittapally <...@creditvidya.com> wrote:
>
>> Hi Mich,
>>
>> Yes, I am using a standalone-mode cluster. We have two executors with 10G
>> of memory each, and two workers.
>>
>> FYI..
On Mon, Oct 24, 2016 at 5:22 PM, Mich Talebzadeh wrote:
> Sounds like you are running in standalone mode.
>
> Have you checked the UI on port 4040 (default) to see where ...
>
> On Tue, Sep 20, 2016 at 12:19 PM, Sankar Mittapally <...@creditvidya.com> wrote:
>
>> I used that one also
>>
>> On Sep 20, 2016 10:44 PM, "Kevin Mellott" wrote:
>>
>>> Instead of *mode="append"*, try *mode="overwrite"*
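(For reference, a minimal sketch of that suggestion; the data frame and output path here are hypothetical:)

# mode = "overwrite" replaces any existing output instead of appending to it.
write.df(df, path = "/nfs/output_dir", source = "csv", mode = "overwrite")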
>
> On Tue, Sep 20, 2016 at 11:30 AM, Sankar Mittapally <...@creditvidya.com> wrote:
>
>> Please find the code below.
>>
>> ... csv", mode="append", schema="true")
On Tue, Sep 20, 2016 at 9:40 PM, Kevin Mellott wrote:
> Can you please post the line of code that is doing the df.write command?
>
> On Tue, Sep 20, 2016 at 9:29 AM, Sankar Mittapally <...@creditvidya.com> wrote:
>
>> at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand$$anonfun$run$1.apply$mcV$sp(InsertIntoHadoopFsRelationCommand.scala:149)
>> at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand$$anonfun$run$1.apply(InsertIntoHadoopFsRelationCommand.scala:115)
>> at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand$$anonfun$run$1.apply(InsertIntoHadoopFsRelationCommand.scala:115)
>> at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:57)
>> at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:115)
>> at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:60)
>> at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:58)
>> at org.apache.spark.sql.execution.command.ExecutedCommandExec.doE...
>>
>>
>>
>>
>> --
>> View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/write-df-is-failing-on-Spark-Cluster-tp27761.html
>> Sent from the Apache Spark User List mailing list archive at Nabble.com.
>>
>> ---------------------------------------------------------------------
>> To unsubscribe e-mail: user-unsubscr...@spark.apache.org
>>
>
--
Regards
Sankar Mittapally
Senior Software Engineer
Hi,

We have set up a Spark cluster on NFS shared storage. There are no
permission issues with the NFS storage; all users are able to write to it.
When I fire the write.df command in SparkR, I get the error below. Can
someone please help me fix this issue?

16/09/17 08:03:28 ERROR ...
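(The error line above is cut off. Since the report rests on NFS write permissions being fine, one quick sanity check from plain R on the driver and on each worker host is base R's file.access; the mount path below is hypothetical:)

# Returns 0 if this R process can write to the directory, -1 otherwise.
file.access("/nfs/shared_storage", mode = 2)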