Hi Enno. It might be worthwhile to cross-post this on dev@hadoop... A simple way 
to test this from Spark would be to change the URI so you write to hdfs:// or 
file:// instead, and confirm that the extra slash goes away.
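
Something like the following would let you compare the output directories from 
each scheme. This is only a rough sketch: the socket source, port, and the 
/tmp path are stand-ins for your real receiver and bucket.

    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    // Sketch only: run the same save with different URI schemes and compare
    // the directories that get created. Source and paths are placeholders.
    object SchemeSwapCheck {
      def main(args: Array[String]): Unit = {
        val conf  = new SparkConf().setAppName("scheme-swap").setMaster("local[2]")
        val ssc   = new StreamingContext(conf, Seconds(10))
        val lines = ssc.socketTextStream("localhost", 9999)

        // If the file:// (or hdfs://) output lands under a single slash while
        // the s3:// output shows "s3://fake-test//1234/...", the problem is on
        // the S3 side rather than in the Spark code.
        lines.saveAsTextFiles("file:///tmp/fake-test/1234", "txt")
        // lines.saveAsTextFiles("hdfs:///fake-test/1234", "txt")
        // lines.saveAsTextFiles("s3://fake-test/1234", "txt")

        ssc.start()
        ssc.awaitTermination()
      }
    }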

- If it's indeed a JetS3t issue, we should add a new unit test for it, since the 
HCFS tests are passing for Jets3tFileSystem yet this error still exists.
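
Something along these lines is what such a test would pin down. Just a sketch: 
the helper below is hypothetical and only illustrates the expectation; the real 
logic lives inside Jets3tFileSystemStore#pathToKey.

    import org.apache.hadoop.fs.Path

    // Hypothetical helper showing the expected behaviour: the key derived from
    // an s3:// path should not keep the leading slash, otherwise the written
    // location comes out as "s3://fake-test//1234/...".
    object PathToKeySketch {
      def expectedKey(path: Path): String = {
        val raw = path.toUri.getPath        // e.g. "/1234/-1419334280000"
        if (raw.startsWith("/")) raw.substring(1) else raw
      }

      def main(args: Array[String]): Unit = {
        assert(!expectedKey(new Path("s3://fake-test/1234")).startsWith("/"))
      }
    }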

- To learn how to run the HCFS tests against any FileSystem, see the wiki page: 
https://wiki.apache.org/hadoop/HCFS/Progress (see the July 14th entry on that 
page).
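
For reference, a bare-bones contract test would look roughly like the sketch 
below, assuming the FileSystemContractBaseTest route; the bucket URI and the 
credential setup are placeholders, and the wiki entry has the authoritative 
instructions.

    import java.net.URI
    import org.apache.hadoop.conf.Configuration
    import org.apache.hadoop.fs.{FileSystem, FileSystemContractBaseTest}

    // Sketch: point the filesystem contract tests at the s3:// FileSystem
    // under test.
    class S3SchemeContractTest extends FileSystemContractBaseTest {
      override def setUp(): Unit = {
        val conf = new Configuration()
        // AWS credentials would need to be supplied here or via core-site.xml.
        fs = FileSystem.get(URI.create("s3://fake-test/"), conf)
      }
    }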

- Is there another S3 FileSystem implementation for AbstractFileSystem, or is 
the JetS3t-based one the only option? That would be an easy way to test this, 
and also a good workaround.

I'm also wondering why Jets3tFileSystem is the AbstractFileSystem implementation 
used by so many people - is it the standard implementation for storing via the 
AbstractFileSystem interface?
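
If it helps, here is a quick way to see which class actually backs the s3:// 
scheme in a given deployment. Nothing here is assumed beyond the standard 
Configuration/FileSystem API; the bucket name is made up.

    import java.net.URI
    import org.apache.hadoop.conf.Configuration
    import org.apache.hadoop.fs.FileSystem

    // Print the configured implementation for the s3:// scheme and the class
    // that actually gets instantiated for it.
    object WhichS3Impl {
      def main(args: Array[String]): Unit = {
        val conf = new Configuration()
        println("fs.s3.impl = " + conf.get("fs.s3.impl"))
        // Instantiating the FileSystem may require AWS credentials in conf.
        val fs = FileSystem.get(URI.create("s3://fake-test/"), conf)
        println("resolved to: " + fs.getClass.getName)
      }
    }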

> On Dec 23, 2014, at 6:06 AM, Enno Shioji <eshi...@gmail.com> wrote:
> 
> Is anybody experiencing this? It looks like a bug in JetS3t to me, but 
> thought I'd sanity check before filing an issue.
> 
> 
> ================
> I'm writing to S3 using ReceiverInputDStream#saveAsTextFiles with a S3 URL 
> ("s3://fake-test/1234").
> 
> The code does write to S3, but with double forward slashes (e.g. 
> "s3://fake-test//1234/-1419334280000/").
> 
> I did some debugging and it seems like the culprit is 
> Jets3tFileSystemStore#pathToKey(path), which returns "/fake-test/1234/..." 
> for the input "s3://fake-test/1234/...", when it should hack off the leading 
> forward slash. However, I couldn't find any bug report for JetS3t about this.
> 
> Am I missing something, or is this likely a JetS3t bug?
> ================
