I'm just puzzled as to why it's necessary for each Worker to be its own
receiver, but there's no real objection or concern to fuel the
puzzlement, just curiosity.

On Mon, Apr 7, 2014 at 4:16 PM, Christophe Clapp wrote:

Could it be as simple as just changing FlumeUtils to accept a list of
host/port number pairs to start the RPC servers on?
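
As a rough sketch of the idea (not something FlumeUtils does today), one
way to approximate a list of host/port pairs with the existing
single-host createStream would be to create one stream per pair and
union them; the host names and ports below are made up:

  import org.apache.spark.SparkConf
  import org.apache.spark.streaming.{Seconds, StreamingContext}
  import org.apache.spark.streaming.flume.FlumeUtils

  object MultiHostFlumeStream {
    def main(args: Array[String]): Unit = {
      val conf = new SparkConf().setAppName("MultiHostFlumeStream")
      val ssc = new StreamingContext(conf, Seconds(10))

      // Hypothetical host/port pairs; each entry gets its own receiver,
      // and therefore its own Avro RPC server.
      val endpoints = Seq(("spark-worker-1", 41414), ("spark-worker-2", 41414))
      val streams = endpoints.map { case (host, port) =>
        FlumeUtils.createStream(ssc, host, port)
      }

      // Union the per-receiver streams into one DStream for processing.
      val events = ssc.union(streams)
      events.count().print()

      ssc.start()
      ssc.awaitTermination()
    }
  }
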
On 4/7/14, 12:58 PM, Christophe Clapp wrote:
Based on the source code here:
https://github.com/apache/spark/blob/master/external/flume/src/main/scala/org/apache/spark
On 4/7/14, 12:23 PM, Michael Ernest wrote:
You can configure your sinks to write to one or more Avro sources in a
load-balanced configuration.
https://flume.apache.org/FlumeUserGuide.html#flume-sink-processors
mfe
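
For anyone following that link, a minimal sketch of the kind of agent
configuration the sink-processors section describes, with two Avro sinks
load-balanced across two receivers; the agent name, channel, host names,
and ports here are hypothetical:

  # Two Avro sinks pointing at two different Avro sources
  agent1.sinks = k1 k2
  agent1.sinks.k1.type = avro
  agent1.sinks.k1.channel = c1
  agent1.sinks.k1.hostname = spark-worker-1
  agent1.sinks.k1.port = 41414
  agent1.sinks.k2.type = avro
  agent1.sinks.k2.channel = c1
  agent1.sinks.k2.hostname = spark-worker-2
  agent1.sinks.k2.port = 41414

  # Group the sinks and load-balance events across them
  agent1.sinkgroups = g1
  agent1.sinkgroups.g1.sinks = k1 k2
  agent1.sinkgroups.g1.processor.type = load_balance
  agent1.sinkgroups.g1.processor.selector = round_robin
  agent1.sinkgroups.g1.processor.backoff = true
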
On Mon, Apr 7, 2014 at 3:19 PM, Christophe Clapp wrote:

Hi,

From my testing of Spark Streaming with Flume, it seems that there's
only one of the Spark worker nodes that runs a Flume Avro RPC server to
receive messages at any given time, as opposed to every Spark worker
running an Avro RPC server to receive messages. Is this the case? Our
use-case [...] rather than just one.

- Christophe

On Apr 7, 2014 12:24 PM, "Michael Ernest" wrote:
> You can configure your sinks to write to one or more Avro sources in a
> load-balanced configuration.
>
> https://flume.apache.org/FlumeUserGuide.html#flume-sink-processors
>
> mfe
>
>
&
Hi,
From my testing of Spark Streaming with Flume, it seems that there's
only one of the Spark worker nodes that runs a Flume Avro RPC server to
receive messages at any given time, as opposed to every Spark worker
running an Avro RPC server to receive messages. Is this the case? Our
use-case