: "Ozsakarya, Omer"
Cc: Spark Forum
Subject: Re: Triggering sql on Was S3 via Apache Spark
Also try to read about SCD and the fact that Hive may be a very good
alternative as well for running updates on data
Regards,
Gourav
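To make the SCD pointer concrete, here is a minimal Type 2 sketch in Spark/Scala. Every table and column name in it (dim_customer, staging_customer_updates, customer_id, is_current, start_date, end_date) is a hypothetical placeholder, not anything from this thread, and it assumes the staging table carries the same business columns as the dimension:

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

val spark = SparkSession.builder()
  .appName("scd2-sketch")
  .enableHiveSupport() // lets Spark read/write Hive tables
  .getOrCreate()

val current = spark.table("dim_customer").where(col("is_current") === true)
val updates = spark.table("staging_customer_updates")

// Versions being replaced: keep the row, but close its validity window.
val expired = current
  .join(updates.select("customer_id"), Seq("customer_id"), "left_semi")
  .withColumn("is_current", lit(false))
  .withColumn("end_date", current_date())

// Rows untouched by this batch pass through unchanged.
val unchanged = current
  .join(updates.select("customer_id"), Seq("customer_id"), "left_anti")

// Incoming rows become the new current versions.
val fresh = updates
  .withColumn("is_current", lit(true))
  .withColumn("start_date", current_date())
  .withColumn("end_date", lit(null).cast("date"))

// Spark cannot update a plain Hive table in place, so the usual pattern
// is to rewrite the dimension (Hive ACID MERGE is the in-Hive option).
expired.unionByName(unchanged).unionByName(fresh)
  .write.mode("overwrite").saveAsTable("dim_customer_v2")

Writing to a new table (dim_customer_v2) also sidesteps Spark's restriction on overwriting a table that is being read in the same job.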
On Wed, 24 Oct 2018, 14:53, <omer.ozsaka...@sony.com> wrote:
To: *"Ozsakarya, Omer"
> *Cc: *Spark Forum
> *Subject: *Re: Triggering sql on Was S3 via Apache Spark
>
>
>
> This is interesting you asked and then answered the questions (almost) as
> well
>
>
>
> Regards,
>
> Gourav
>
>
>
> On Tue
Thank you very much 😊

From: Gourav Sengupta
Date: Wednesday, 24 October 2018, 11:20
To: "Ozsakarya, Omer"
Cc: Spark Forum
Subject: Re: Triggering sql on AWS S3 via Apache Spark
This is interesting: you asked and then answered the questions (almost) as well.

Regards,
Gourav

On Tue, 23 Oct 2018, 13:23, <omer.ozsaka...@sony.com> wrote:
> Hi guys,
>
> We are using Apache Spark on a local machine.
>
> I need to implement the scenario below.
>
> In the initial load:
>
>    1. CRM applicat
Why not directly access the S3 file from Spark?
You need to configure the IAM roles so that the machine running the Spark code is
allowed to access the bucket.
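A small sketch of that direct route, assuming Spark with the hadoop-aws (s3a) connector on the classpath; the bucket, key, and CSV layout are invented for illustration:

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("s3-direct").getOrCreate()

// On an EC2/EMR instance with an IAM role attached, s3a can pick the
// role's credentials up via the instance profile:
spark.sparkContext.hadoopConfiguration.set(
  "fs.s3a.aws.credentials.provider",
  "com.amazonaws.auth.InstanceProfileCredentialsProvider")

// Off AWS, set fs.s3a.access.key / fs.s3a.secret.key instead.

val df = spark.read
  .option("header", "true")
  .csv("s3a://my-crm-bucket/incoming/customers.csv")

// Registering a temp view lets you run plain SQL over the S3 data.
df.createOrReplaceTempView("customers")
spark.sql("SELECT count(*) FROM customers").show()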
> On 24 Oct 2018, at 06:40, Divya Gehlot wrote:
Hi Omer,

Here are a couple of solutions which you can implement for your use case:

*Option 1:*
You can mount the S3 bucket as a local file system.
Here are the details:
https://cloud.netapp.com/blog/amazon-s3-as-a-file-system
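If you go the mount route, Spark then sees the bucket as ordinary local storage. A sketch, assuming the bucket was mounted (for example with s3fs) at the made-up path /mnt/s3:

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("s3-mount").getOrCreate()

// file:// forces the local filesystem rather than any default scheme.
val df = spark.read
  .option("header", "true")
  .csv("file:///mnt/s3/incoming/customers.csv")

df.show(5)

Note that on a multi-node cluster the mount has to exist at the same path on every executor, which is one reason the direct s3a:// route discussed above is usually simpler.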
*Option 2:*
You can use AWS Glue for your use case.
Here are the