> https://spark.apache.org/docs/latest/configuration.html#dynamically-loading-spark-properties
> w.r.t. spark-defaults.conf
>
> On Fri, Apr 1, 2016 at 12:06 PM, Max Schmidt <m...@datapath.io> wrote:
>
> Yes, but the doc doesn't say a word about which variables the
> configs are valid for.
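The "dynamically loading" link above boils down to passing the same keys at submit time instead of baking them into spark-defaults.conf. A minimal sketch; the values are illustrative and the class/jar names are hypothetical placeholders:

{{{
spark-submit \
  --conf spark.ui.retainedJobs=500 \
  --conf spark.ui.retainedStages=500 \
  --class io.datapath.SomeJob \
  my-job.jar
}}}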
On 2016-04-01 18:58, Ted Yu wrote:
You can set them in spark-defaults.conf. See also
https://spark.apache.org/docs/latest/configuration.html#spark-ui
On Fri, Apr 1, 2016 at 8:26 AM, Max Schmidt wrote:
Can somebody tell me the interaction between the properties:
spark.ui.retainedJobs
spark.ui.retainedStages
spark.history.retainedApplications
I know from the bug tracker that the last one describes the number of
applications the history server holds in memory.
Can I set the properties in spark-defaults.conf?
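For reference, a minimal sketch of what setting these in spark-defaults.conf can look like (the values are purely illustrative, not recommendations):

{{{
spark.ui.retainedJobs               1000
spark.ui.retainedStages             1000
spark.history.retainedApplications  50
}}}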
Just to mark this question closed: we experienced an OOM exception on
the Master, which we didn't see on the Driver, but it made the Master crash.
On 24.03.2016 at 09:54, Max Schmidt wrote:
> Hi there,
>
> we're using the Java API (1.6.0) with a ScheduledExecutor that
> continuously executes a SparkJob on a standalone cluster.
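For anyone hitting the same Master OOM: a hedged sketch of the standalone-mode knobs that bound the Master daemon's own memory use. The property names come from the standalone docs; the values are illustrative, not recommendations.

{{{
# spark-env.sh on the master host: heap for the master/worker daemons
SPARK_DAEMON_MEMORY=2g

# spark-defaults.conf: how many finished applications/drivers the
# standalone Master keeps around (and shows in its UI)
spark.deploy.retainedApplications  50
spark.deploy.retainedDrivers       50
}}}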
On 24.03.2016 at 10:34, Simon Hafner wrote:
On 2016-03-24 at 9:54 GMT+01:00, Max Schmidt wrote:
> we're using the Java API (1.6.0) with a ScheduledExecutor that
> continuously executes a SparkJob on a standalone cluster.
I'd recommend Scala.
Why should I use Scala?
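For context, a minimal sketch of the setup described above. Only the ScheduledExecutorService-plus-JavaSparkContext shape comes from the thread; the master URL, app name, schedule, and job body are made-up placeholders.

{{{
import java.util.Arrays;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class ScheduledSparkJob {
    public static void main(String[] args) {
        // One long-lived context reused across runs; the master URL is assumed.
        SparkConf conf = new SparkConf()
                .setAppName("scheduled-job")
                .setMaster("spark://master-host:7077");
        final JavaSparkContext sc = new JavaSparkContext(conf);

        // A single scheduler thread re-submits the job every 5 minutes.
        ScheduledExecutorService scheduler =
                Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(new Runnable() {
            @Override
            public void run() {
                // Trivial stand-in for the real job, which the mail doesn't show.
                long count = sc.parallelize(Arrays.asList(1, 2, 3)).count();
                System.out.println("count = " + count);
            }
        }, 0, 5, TimeUnit.MINUTES);
    }
}
}}}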
> ... ERROR org.apache.spark.executor.Executor - Managed memory leak
> detected; size = 5602240 bytes, TID = 47709
>
> 644989 [Executor task launch worker-13] ERROR
> org.apache.spark.executor.Executor - Managed memory leak
> detected; size = 5326260 bytes, TID = 47863
>
> 720701
Any guess?
--
*Max Schmidt, Senior Java Developer* | m...@datapath.io | LinkedIn
<https://www.linkedin.com/in/maximilian-schmidt-9893b7bb/>
Datapath.io
Decreasing AWS latency.
Your traffic optimized.
Datapath.io GmbH
Mainz | HRB Nr. 46222
Sebastian Spies, CEO
Okay, I solved this problem...
It was my own fault: I had configured the root logger for
java.util.logging. Using an explicit logger name for the
handler/level settings solved it.
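For the record, a minimal sketch of what that fix can look like in logging.properties: handlers and level on a named logger instead of the root logger. The package name io.datapath is a made-up placeholder.

{{{
# handlers/level on an application-specific logger, not on the root logger
io.datapath.handlers=java.util.logging.ConsoleHandler, java.util.logging.FileHandler
io.datapath.level=INFO
io.datapath.useParentHandlers=false
}}}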
On 2016-01-11 12:33, Max Schmidt wrote:
I checked the handlers of my root logger
(java.util.logging.Logger.getLogger("")), which were
a ConsoleHandler and a FileHandler.
After the JavaSparkContext was created, the root logger only contained
an 'org.slf4j.bridge.SLF4JBridgeHandler'.
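That check is easy to reproduce; a small self-contained snippet along those lines (the print format is mine):

{{{
import java.util.logging.Handler;
import java.util.logging.Logger;

public class DumpRootHandlers {
    public static void main(String[] args) {
        // Enumerate the handlers currently attached to the JUL root logger.
        Logger root = Logger.getLogger("");
        for (Handler h : root.getHandlers()) {
            System.out.println(h.getClass().getName());
        }
    }
}
}}}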
On 11.01.2016 at 10:56, Max Schmidt wrote:
Hi there,
we're having a strange problem here using Spark in a Java application
with the JavaSparkContext:
We are using java.util.logging.* for logging in our application with
two handlers (ConsoleHandler + FileHandler):
{{{
.handlers=java.util.logging.ConsoleHandler, java.util.logging.FileHandler
.level=...
}}}
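For completeness, a config file like this is usually handed to the JVM via a system property; the jar and main-class names here are hypothetical:

{{{
java -Djava.util.logging.config.file=logging.properties \
     -cp app.jar io.datapath.Main
}}}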