The situation is as follows: I have a Spray server with a SparkContext
available (fair scheduling, cluster mode, deployed via spark-submit). Some
HTTP endpoints trigger Spark RDD operations that collect data from
Accumulo / HDFS / etc. I noticed what looks like a limitation on
concurrent requests:

wrk -t8 -c50 -d30s "http://localhost:4444/…/";
Running 30s test @ http://localhost:4444/…/
  8 threads and 50 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     1.03s   523.30ms   1.70s    50.00%
    Req/Sec     6.05      5.49    20.00     71.58%
  452 requests in 30.04s, 234.39KB read
  Socket errors: connect 0, read 0, write 0, timeout 440
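
For context, fair scheduling is enabled roughly like this (the pool name
and file path below are placeholders on my side, not the exact values):

```
# spark-submit flags
--conf spark.scheduler.mode=FAIR \
--conf spark.scheduler.allocation.file=/path/to/fairscheduler.xml
```

```xml
<!-- fairscheduler.xml: one pool for HTTP-triggered jobs -->
<allocations>
  <pool name="http">
    <schedulingMode>FAIR</schedulingMode>
    <weight>1</weight>
    <minShare>0</minShare>
  </pool>
</allocations>
```

Each request-handling thread then opts into the pool with
`sc.setLocalProperty("spark.scheduler.pool", "http")` before submitting
its job.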

This happens on any call that goes through a Spark RDD (it does not
depend on which function is called), and in the browser you see
ERR_EMPTY_RESPONSE.

For now the workaround has been to use a cache, but I would like to
understand where this limitation comes from, or whether some settings
could help.
The error happens in both local and cluster mode, so I assume it does
not depend on the deploy mode.

P.S. The logs are clean (or perhaps I simply don't know where to look,
but the stdout of spark-submit in client mode is clean).



--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/Limitations-using-SparkContext-tp23452.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
