I have seen the same problem. It causes some tasks to fail, but not the
whole job.
Hope someone could shed some light on what could be the cause of this.
On Mon, Jan 26, 2015 at 9:49 AM, Aaron Davidson wrote:
> It looks like something weird is going on with your object serialization,
> p
my side, not sure if OP was having the same
problem though
On Wed, Feb 11, 2015 at 12:03 AM, Arush Kharbanda <
ar...@sigmoidanalytics.com> wrote:
> Hi
>
> Can you share the code you are trying to run.
>
> Thanks
> Arush
>
> On Wed, Feb 11, 2015 at 9:12 AM, Tianshuo D
Hi, spark users.
When running a Spark application with lots of executors (300+), I see the
following failures:
java.net.SocketTimeoutException: Read timed out
        at java.net.SocketInputStream.socketRead0(Native Method)
        at java.net.SocketInputStream.read(SocketInputStream.java:152)
        at j
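In case it helps while debugging, one thing to check is whether the relevant
network timeouts are simply too low for a job of that size. Below is only a
rough sketch, the property names are standard Spark settings and the values
are guesses, not taken from the failing job:

import org.apache.spark.{SparkConf, SparkContext}

// Sketch: raise the connection/fetch timeouts before creating the context.
// Values are illustrative; tune them for your cluster.
val conf = new SparkConf()
  .setAppName("timeout-tuning-sketch")
  .set("spark.core.connection.ack.wait.timeout", "600") // seconds to wait for an ack
  .set("spark.files.fetchTimeout", "600")                // seconds allowed for fetching files/jars
  .set("spark.network.timeout", "600s")                  // umbrella network timeout on newer releases
val sc = new SparkContext(conf)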
Hi,
Currently in GradientDescent.scala, weights is constructed as a dense
vector:
initialWeights = Vectors.dense(new Array[Double](numFeatures))
And numFeatures is determined in loadLibSVMFile as the max feature index.
But in the case of using a hash function to compute feature ind
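To illustrate the hashed-index case, here is a rough sketch; hashDim,
hashIndex, and the modulo hashing scheme are made-up names for illustration,
not MLlib API:

import org.apache.spark.mllib.linalg.Vectors
import scala.util.hashing.MurmurHash3

// Sketch: feature indices computed by hashing into a fixed-size space.
val hashDim = 1 << 20  // size of the hashed feature space (illustrative)

def hashIndex(feature: String): Int =
  math.abs(MurmurHash3.stringHash(feature)) % hashDim

// With hashing, the weight vector should be sized to the full hash space,
// not to the max index that happened to appear in the loaded data:
val initialWeights = Vectors.dense(new Array[Double](hashDim))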