Whether I use 1 or 2 machines, the results are the same... Here are the
results I got using 1 and 2 receivers with 2 machines:
2 machines, 1 receiver:
sbt/sbt "run-main Benchmark 1 machine1 1000" 2>&1 | grep -i "Total delay\|record"
15/04/13 16:41:34 INFO JobScheduler: Total delay: 0.15
Are you running # of receivers = # machines?
TD
On Thu, Apr 9, 2015 at 9:56 AM, Saiph Kappa wrote:
> Sorry, I was getting those errors because my workload was not sustainable.
>
> However, I noticed that, by just running the spark-streaming-benchmark (
> https://github.com/tdas/spark-streaming-benchmark/blob/master/Benchmark.scala …
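For reference, a minimal sketch (not from the thread) of how the number of
receivers is usually controlled: each input DStream has exactly one receiver,
so running N receivers means creating N input streams and unioning them. The
host and port are taken from the benchmark command above; ssc is assumed to be
the StreamingContext.

    // One receiver per input DStream: create N streams and union them.
    val numReceivers = 2  // e.g. one receiver per machine
    val streams = (1 to numReceivers).map(_ => ssc.socketTextStream("machine1", 1000))
    val unified = ssc.union(streams)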
Sorry, I was getting those errors because my workload was not sustainable.
However, I noticed that, by just running the spark-streaming-benchmark (
https://github.com/tdas/spark-streaming-benchmark/blob/master/Benchmark.scala
), I get no difference in the execution time or number of processed records …
If it is deterministically reproducible, could you generate full DEBUG-level
logs from the driver and the workers and send them to me? Basically I want to
trace through what is happening to the block that is not being found.
And can you tell me which cluster manager you are using? Spark Standalone,
Mesos…
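For reference, a hedged sketch of turning on DEBUG logging. On each node the
usual route is setting log4j.rootCategory=DEBUG in conf/log4j.properties; from
driver code the same effect can be had programmatically through the log4j API
that ships with Spark:

    import org.apache.log4j.{Level, Logger}

    // Raise the root log level to DEBUG so block manager activity is traced.
    Logger.getRootLogger.setLevel(Level.DEBUG)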
Hi Tathagata,
Yes. The input stream is from Kafka; my program reads the data, keeps all the
data in memory, processes it, and generates the output.
Bill
On Mon, Jun 30, 2014 at 11:45 PM, Tathagata Das wrote:
> Are you by any chance using only memory in the storage level of the input
> streams?
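For reference, a hedged sketch (all parameter values are placeholders) of
passing an explicit storage level to the Kafka input stream, so that received
blocks spill to disk and are replicated instead of living only in memory:

    import org.apache.spark.storage.StorageLevel
    import org.apache.spark.streaming.kafka.KafkaUtils

    val stream = KafkaUtils.createStream(
      ssc,                                 // the StreamingContext (assumed)
      "zk-host:2181",                      // ZooKeeper quorum (placeholder)
      "consumer-group",                    // consumer group id (placeholder)
      Map("topic" -> 1),                   // topic -> receiver threads (placeholder)
      StorageLevel.MEMORY_AND_DISK_SER_2)  // spill to disk, 2x replication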
Hi Tobias,
Your explanation makes a lot of sense. Actually, I tried running the same
program on partial data yesterday. It has been up for around 24 hours and is
still running correctly. Thanks!
Bill
On Mon, Jun 30, 2014 at 5:53 PM, Tobias Pfeiffer wrote:
> Bill,
>
> let's say the processing time is t' and the window size t. …
Are you by any chance using only memory in the storage level of the input
streams?
TD
On Mon, Jun 30, 2014 at 5:53 PM, Tobias Pfeiffer wrote:
> Bill,
>
> let's say the processing time is t' and the window size t. Spark does not
> *require* t' < t. In fact, for *temporary* peaks in your streaming data, …
Bill,
let's say the processing time is t' and the window size t. Spark does not
*require* t' < t. In fact, for *temporary* peaks in your streaming data, I
think the way Spark handles it is very nice, in particular since 1) it does
not mix up the order in which items arrived in the stream, so items …
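For reference, a minimal sketch of the setup being discussed: a streaming
context with batch interval t. If one batch's processing time t' exceeds t,
the batch is not dropped; later batches simply queue up and run in arrival
order.

    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    val conf = new SparkConf().setAppName("BatchVsProcessingTime")
    val ssc = new StreamingContext(conf, Seconds(60))  // t = 60s batch interval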
Tobias,
Your suggestion is very helpful. I will definitely investigate it.
Just curious: suppose the batch size is t seconds. In practice, does Spark
always require the program to finish processing t seconds of data within t
seconds? Can Spark begin to consume the new batch b…
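For what it's worth, a hedged sketch touching on that question: by default
Spark runs the jobs of one batch at a time, but there is an undocumented
setting that lets jobs from consecutive batches run concurrently (use with
care, since it weakens ordering guarantees):

    import org.apache.spark.SparkConf

    val conf = new SparkConf()
      .setAppName("ConcurrentBatches")
      .set("spark.streaming.concurrentJobs", "2")  // undocumented; default is 1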
Tobias,
Thanks for your help. In my case the batch size is 1 minute, but it takes my
program more than 1 minute to process 1 minute's worth of data. I am not sure
whether that is because unprocessed data piled up. Do you have any suggestions
on how to check and solve this? Thanks!
Bill
On …
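For reference, a hedged sketch of one way to check for pile-up: attach a
StreamingListener and watch the scheduling delay. A steadily growing
scheduling delay means batches arrive faster than they are processed.

    import org.apache.spark.streaming.scheduler.{StreamingListener, StreamingListenerBatchCompleted}

    // Log per-batch delays; a growing scheduling delay signals pile-up.
    class DelayLogger extends StreamingListener {
      override def onBatchCompleted(batch: StreamingListenerBatchCompleted): Unit = {
        val info = batch.batchInfo
        println(s"batch ${info.batchTime}: " +
          s"schedulingDelay=${info.schedulingDelay.getOrElse(-1L)} ms " +
          s"processingDelay=${info.processingDelay.getOrElse(-1L)} ms")
      }
    }

    ssc.addStreamingListener(new DelayLogger)  // ssc: the StreamingContext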
Bill,
were you able to process all information in time, or did maybe some
unprocessed data pile up? I think when I saw this once, the reason
seemed to be that I had received more data than would fit in memory,
while waiting for processing, so old data was deleted. When it was
time to process that …