Hi Schmirr,

The part after the s3n:// is your bucket name and folder name, i.e. s3n://${bucket_name}/${folder_name}[/${subfolder_name}]*. Bucket names are unique across S3, so the resulting path is also unique. There is no concept of a hostname in S3 URLs as far as I know.
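For instance, here is a minimal sketch of reading such a path from the Spark shell (where sc is predefined); the bucket name, folder path, and credential values are made up for illustration, and credentials are assumed to be supplied through the usual Hadoop properties for the s3n connector:

  // Supply AWS credentials via Hadoop configuration (values here are placeholders)
  sc.hadoopConfiguration.set("fs.s3n.awsAccessKeyId", "YOUR_ACCESS_KEY")
  sc.hadoopConfiguration.set("fs.s3n.awsSecretAccessKey", "YOUR_SECRET_KEY")

  // "my-bucket" and "logs/2015/07" stand in for ${bucket_name}/${folder_name}
  val lines = sc.textFile("s3n://my-bucket/logs/2015/07/*")
  println(lines.count())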
-sujit

On Fri, Jul 17, 2015 at 1:36 AM, Schmirr Wurst <schmirrwu...@gmail.com> wrote:
> Hi,
>
> I wonder how to use S3-compatible storage in Spark?
> If I'm using the s3n:// URL schema, then it will point to Amazon; is there
> a way I can specify the host somewhere?
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
> For additional commands, e-mail: user-h...@spark.apache.org