Hi All,
I have the same issue with one compressed .tgz file of around 3 GB. Increasing the number of nodes has no effect on performance.
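A gzip-compressed file is not splittable, so Spark reads it in a single task and adding nodes does not speed up the read itself. Below is a minimal sketch (the path, partition count, and the assumption that the archive has been unpacked to a plain gzipped text file are placeholders, not from this thread) of repartitioning right after the read so that at least the later stages use the whole cluster:

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("gzExample").getOrCreate()

// A .gz file is not splittable, so this read produces a single partition
// regardless of cluster size. (A .tgz also contains tar headers, so it
// usually needs to be unpacked or re-packed into a splittable format first.)
val lines = spark.sparkContext.textFile("hdfs:///data/input/big_file.txt.gz")

// Repartition immediately so downstream transformations run in parallel.
val spread = lines.repartition(200)
println(s"partitions after repartition: ${spread.getNumPartitions}")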
Best Regards,
Mostafa Alaa Mohamed,
Technical Expert Big Data,
M: +971506450787
Email: mohamedamost...@etisalat.ae
-----Original Message-----
From: balaji9058 [mailto:kssb...@gmail.com]
Sent: Wednesday, December 14, 2016 08:32 AM
To: user@spark.apache.org
Subject: Re: Graphx triplet
No. Sometimes, when you have a table with an int column and you insert string values into that column, the job can fail.
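As far as I know, Spark does not expose a rejection directory for inserts, so one possible workaround is to pre-validate the DataFrame yourself: cast the suspect column, write the rows that fail the cast to a directory of your choosing, and insert only the clean rows. A rough sketch only; the table, column, and path names are placeholders:

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.col

val spark = SparkSession.builder().enableHiveSupport().getOrCreate()

// Placeholder source; in practice this is whatever DataFrame you want to insert.
val df = spark.read.table("staging_db.incoming")

// Rows whose "id" value cannot be cast to int are treated as rejected.
val withCast = df.withColumn("id_int", col("id").cast("int"))
val rejected = withCast.filter(col("id").isNotNull && col("id_int").isNull)
val accepted = withCast.filter(!(col("id").isNotNull && col("id_int").isNull))

// Manual "rejection directory": write the bad rows wherever you like.
rejected.drop("id_int")
  .write.mode("overwrite")
  .json("hdfs:///data/rejects/incoming")

// Insert only the clean rows. Note that insertInto matches columns by
// position, so select them in the same order as the target table.
accepted
  .select(col("id_int").as("id"), col("payload"))
  .write.mode("append")
  .insertInto("target_db.target_table")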
Best Regards,
Mostafa Alaa Mohamed,
Technical Expert Big Data,
M: +971506450787
Email: mohamedamost...@etisalat.ae
From: Michael
Can we specify the rejection directory?
If it is not available, do you recommend opening a Jira issue?
Best Regards,
Mostafa Alaa Mohamed,
Technical Expert Big Data,
M: +971506450787
Email: mohamedamost...@etisalat.ae
Hi All,
I have a DataFrame containing some data that I need to insert into a Hive table (the insert is roughly as in the sketch after the questions). My questions:
1- Where will Spark save the rows rejected by the insert statements?
2- Can Spark fail if some rows are rejected?
3- How can I specify the rejection directory?
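For context, the insert is roughly the following (the table name and source path are placeholders only):

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().enableHiveSupport().getOrCreate()

// Placeholder source and target names.
val df = spark.read.parquet("hdfs:///data/staging/batch")
df.write.mode("append").insertInto("target_db.target_table")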
Regards,