On Mon, Jul 1, 2019 at 1:52 PM, Dongjoon Hyun wrote:

Hi, Apache Spark PMC members and committers.

We are using the GitHub `Merge Button` in the `spark-website` repository
because it's very convenient.

1. https://github.com/apache/spark-website/commits/asf-site
2. https://github.com/apache/spark/commits/master

In order to be consistent with our pre

In reply, Reynold wrote:

That's a good idea. We should only be using squash.

On Tue, Jul 2, 2019 at 5:58 AM, Sean Owen wrote:

I'm using the merge script in both repos. I think that was the best
practice?
So, sure, I'm fine with disabling it.

On Tue, Jul 2, 2019 at 9:39 AM, Takeshi Yamamuro wrote:

I'm also using the script in both cases, anyway +1.

On Mon, Jul 1, 2019 at 6:00 PM, Hyukjin Kwon wrote:

+1

Thank you so much for the replies, Reynold, Sean, Takeshi, Hyukjin!
Bests,
Dongjoon.
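For reference, the "only squash" behavior the thread prefers (one commit per merged change, no merge commit) can be sketched locally with plain git in a throwaway repo; branch, file, and commit names below are made up, and GitHub's "Squash and merge" button does the equivalent server-side:

```shell
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q
git config user.email demo@example.com
git config user.name demo

# A base commit on the default branch, then two work-in-progress commits
# on a feature branch (names are illustrative).
echo base > file.txt
git add file.txt
git commit -qm "base"
git checkout -qb feature
echo one >> file.txt && git commit -qam "wip 1"
echo two >> file.txt && git commit -qam "wip 2"

# Back on the default branch, squash the feature branch: this stages the
# combined diff without creating a merge commit.
git checkout -q -
git merge --squash -q feature
git commit -qm "feature (squashed)"
git rev-list --count HEAD   # prints 2: base plus one squashed commit
```

This is what the merge script effectively produces as well: a linear history with a single commit per change, instead of the merge commits the plain `Merge Button` creates.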
Thanks for your reply. That makes sense as to why the option is not provided
(since the user is the one who is imperatively asking Spark to read the
files).
Yes, I provide the list of files. I'll try the ignoreCorruptFiles option.
Also, I'll look into how I can avoid missing files or at least check
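One way to do that up-front check for missing files is to partition the list before handing it to the reader; a minimal stdlib sketch, assuming the paths are local (or otherwise checkable) and all names are illustrative:

```python
import os

def split_existing(paths):
    """Partition a list of file paths into (present, missing)."""
    present = [p for p in paths if os.path.isfile(p)]
    missing = [p for p in paths if not os.path.isfile(p)]
    return present, missing

# Illustrative usage: read only the files that actually exist, and log
# (or otherwise handle) the rest instead of failing the whole job.
paths = ["/tmp/a.parquet", "/tmp/b.parquet"]
present, missing = split_existing(paths)
# spark.read.parquet(*present)   # hypothetical: pass the filtered list on
```

(Note: besides `spark.sql.files.ignoreCorruptFiles`, Spark also exposes `spark.sql.files.ignoreMissingFiles` for the missing-file case specifically.)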
Where is this list of files coming from?
If you made the list, then yes, the expectation is generally "supply a list
of files which are present", on the basis that the general convention is
"missing files are considered bad".
Though you could try setting spark.sql.files.ignoreCorruptFiles=true to see
w
We are focused on ARM instances in the cloud, and I am now using an ARM
instance on the vexxhost cloud to run the build job mentioned above. The
instance has 8 vCPUs and 8 GB of RAM, and we can use a bigger flavor to
create the ARM instance to run the job, if need be.
On Fri, Jun 28,