from pyspark.sql import SparkSession

spark = SparkSession.builder\
    .config(conf=sc.getConf())\
    .getOrCreate()

dfTermRaw = spark.read.format("csv")\
    .option("header", "true")\
    .option("delimiter", "\t")\
    .option("inferSchema", "true")
On 5 Jan 2017, at 20:07, Manohar Reddy <manohar.re...@happiestminds.com> wrote:
Hi Steve,
Thanks for the reply and below is follow-up help needed from you.
Do you mean we can set up two native file systems under a single SparkContext, so that Spark dispatches based on the URL prefix (gs://bucket/path for the source and s3a:// for the destination)? Is my understanding right?
Manohar
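The setup being asked about (one SparkContext serving both object stores, dispatched by URL scheme) can be sketched as job configuration. This is a minimal, illustrative sketch only: the connector versions and the key-file path are placeholders, not values from this thread; the `fs.s3a.*` keys come from the hadoop-aws module and the service-account key setting from Google's GCS connector for Hadoop.

```shell
# Sketch: one spark-submit carrying configuration for both filesystems.
# Connector versions and credential values below are placeholders.
spark-submit \
  --packages com.google.cloud.bigdataoss:gcs-connector:hadoop2-1.6.0,org.apache.hadoop:hadoop-aws:2.7.3 \
  --conf spark.hadoop.google.cloud.auth.service.account.json.keyfile=/path/to/gcs-key.json \
  --conf spark.hadoop.fs.s3a.access.key=YOUR_AWS_ACCESS_KEY \
  --conf spark.hadoop.fs.s3a.secret.key=YOUR_AWS_SECRET_KEY \
  your_job.py
# Inside the job, the same SparkContext resolves both schemes, e.g.:
#   df = spark.read.csv("gs://bucket/path")
#   df.write.parquet("s3a://bucket/dest")
```

With both connectors on the classpath, Hadoop's filesystem layer picks the implementation from the URL scheme, so no intermediate copy step or middle component is needed.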
From: Steve Loughran [mailto:ste...@hortonworks.com]
Sent: Thursday, January 5, 2017 11:05 PM
To: Manohar Reddy
Cc: user@spark.apache.org
Subject: Re: Spark Read from Google store and save in AWS s3
On 5 Jan 2017, at 09:58, Manohar753 <manohar.re...@happiestminds.com> wrote:
Hi All,
Using Spark, is interoperability/communication between two clouds (Google, AWS) possible? In my use case I need to take Google storage as input to Spark, do some processing, and finally store the result in S3. Please let me know if anybody has tried this kind of use case using Spark directly, without any middle components, and share the info or a link if you have.
Thanks,
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Spark-Read-from-Google-store-and-save-in-AWS-s3-tp28278.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.