*The Shark-specific group appears to be paused for moderation, so I'm asking here.*
I'm running Shark/Spark on EC2. I'm using Shark to query data from an S3 bucket and then write the results back to an S3 bucket. The data is read fine, but the write fails with this error:

> 14/07/31 16:42:30 INFO scheduler.TaskSetManager: Loss was due to java.lang.IllegalArgumentException: Wrong FS: s3n://id:key@shadoop/tmp/hive-root/hive_2014-07-31_16-39-29_825_6436105804053790400/_tmp.-ext-10000, expected: hdfs://ecmachine.compute-1.amazonaws.com:9000 [duplicate 3]

Is there some setting I can change to allow it to write to an S3 file system? I've tried all sorts of different queries to write to S3. This particular one was:

> INSERT OVERWRITE DIRECTORY 's3n://id:key@shadoop/bucket' SELECT * FROM table;

Thanks for your help!

-William
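For context, the overall flow looks roughly like this (the table name, columns, and S3 paths below are simplified placeholders, not my exact schema):

```sql
-- External table reading the input data from S3 (placeholder schema/path)
CREATE EXTERNAL TABLE logs (line STRING)
LOCATION 's3n://id:key@shadoop/input';

-- Write the query results back out to S3; this is the step that fails
INSERT OVERWRITE DIRECTORY 's3n://id:key@shadoop/bucket'
SELECT * FROM logs;
```

Reading from the external S3 table works without issue; only the INSERT OVERWRITE DIRECTORY step hits the "Wrong FS" error.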