Hi friends,

Is it possible to interact with Amazon S3 using Spark via a proxy? This is what I have been doing:
SparkConf conf = new SparkConf().setAppName("MyApp").setMaster("local");
JavaSparkContext sparkContext = new JavaSparkContext(conf);
Configuration hadoopConf = sparkContext.hadoopConfiguration();
hadoopConf.set("fs.s3n.impl", "org.apache.hadoop.fs.s3native.NativeS3FileSystem");
hadoopConf.set("fs.s3n.awsAccessKeyId", "***********");
hadoopConf.set("fs.s3n.awsSecretAccessKey", "***********");
hadoopConf.set("httpclient.proxy-autodetect", "false");
hadoopConf.set("httpclient.proxy-host", "***********");
hadoopConf.set("httpclient.proxy-port", "****");
SQLContext sqlContext = new SQLContext(sparkContext);

But whenever I try to run it, it says:

java.net.SocketTimeoutException: connect timed out
        at java.net.PlainSocketImpl.socketConnect(Native Method)
        at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
        at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
        at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
        at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
        at java.net.Socket.connect(Socket.java:579)
        at sun.security.ssl.SSLSocketImpl.connect(SSLSocketImpl.java:625)
        at org.apache.http.conn.ssl.SSLSocketFactory.connectSocket(SSLSocketFactory.java:524)
        at org.apache.http.conn.ssl.SSLSocketFactory.connectSocket(SSLSocketFactory.java:403)
        at org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:177)
        at org.apache.http.impl.conn.AbstractPoolEntry.open(AbstractPoolEntry.java:144)
        at org.apache.http.impl.conn.AbstractPooledConnAdapter.open(AbstractPooledConnAdapter.java:131)
        at org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:611)
        at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:446)
        at org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:863)
        at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
        at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:57)
        at org.jets3t.service.impl.rest.httpclient.RestStorageService.performRequest(RestStorageService.java:326)
        at org.jets3t.service.impl.rest.httpclient.RestStorageService.performRequest(RestStorageService.java:277)
        at org.jets3t.service.impl.rest.httpclient.RestStorageService.performRestHead(RestStorageService.java:1038)
        at org.jets3t.service.impl.rest.httpclient.RestStorageService.getObjectImpl(RestStorageService.java:2250)
        at org.jets3t.service.impl.rest.httpclient.RestStorageService.getObjectDetailsImpl(RestStorageService.java:2179)
        at org.jets3t.service.StorageService.getObjectDetails(StorageService.java:1120)
        at org.jets3t.service.StorageService.getObjectDetails(StorageService.java:575)
        at org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.retrieveMetadata(Jets3tNativeFileSystemStore.java:172)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
        at org.apache.hadoop.fs.s3native.$Proxy21.retrieveMetadata(Unknown Source)
        at org.apache.hadoop.fs.s3native.NativeS3FileSystem.getFileStatus(NativeS3FileSystem.java:414)
        at org.apache.hadoop.fs.Globber.getFileStatus(Globber.java:57)
        at org.apache.hadoop.fs.Globber.glob(Globber.java:248)
        at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1623)
        at com.databricks.spark.avro.AvroRelation.newReader(AvroRelation.scala:105)
        at com.databricks.spark.avro.AvroRelation.<init>(AvroRelation.scala:60)
        at com.databricks.spark.avro.DefaultSource.createRelation(DefaultSource.scala:41)
        at org.apache.spark.sql.sources.ResolvedDataSource$.apply(ddl.scala:219)
        at org.apache.spark.sql.SQLContext.load(SQLContext.scala:697)
        at org.apache.spark.sql.SQLContext.load(SQLContext.scala:673)

The same proxy works fine with both the AWS S3 Java API and the JetS3t API. I could not find any documentation that explains how to set a proxy in a Spark program. Could someone please point me in the right direction?

Many thanks.

Tariq, Mohammad
about.me/mti
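
P.S. For reference, a proxy configured along the following lines works for me outside Spark. This is just a minimal sketch against the AWS SDK for Java; the host, port, bucket, and key below are placeholders, not my real values:

import com.amazonaws.ClientConfiguration;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.s3.AmazonS3Client;

// Route all S3 calls through the HTTP proxy (placeholder host/port).
ClientConfiguration clientConfig = new ClientConfiguration();
clientConfig.setProxyHost("proxy.example.com");
clientConfig.setProxyPort(8080);

AmazonS3Client s3 = new AmazonS3Client(
        new BasicAWSCredentials("***********", "***********"),
        clientConfig);

// Fetch object metadata as a quick connectivity check through the proxy.
System.out.println(s3.getObjectMetadata("my-bucket", "my-key").getContentLength());

With plain JetS3t, the equivalent settings go into a jets3t.properties file on the classpath (same placeholder values):

httpclient.proxy-autodetect=false
httpclient.proxy-host=proxy.example.com
httpclient.proxy-port=8080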