Kafka is probably better than Redis, and definitely better than raw sockets.
Other queues like RabbitMQ can work too.
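To make the pattern concrete: the idea is that your colleague's Bolt publishes to a broker topic instead of writing to MongoDB, and a Spout in your topology consumes from that topic. The sketch below models this with an in-memory BlockingQueue standing in for the Kafka topic so it stays self-contained; in a real deployment you would swap in storm-kafka's KafkaBolt on the producing side and KafkaSpout on the consuming side (the class, method, and topic names here are illustrative, not from either codebase).

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Sketch of decoupling two topologies via a queue. A BlockingQueue stands in
// for a Kafka topic (hypothetical name "colleague-output") so this compiles
// and runs on its own; in production, replace these two methods with a
// KafkaBolt publishing in execute() and a KafkaSpout polling in nextTuple().
public class QueueBridgeSketch {
    static final BlockingQueue<String> topic = new LinkedBlockingQueue<>();

    // What the colleague's Bolt would do: publish instead of writing to MongoDB.
    static void emitFromTopologyA(String payload) throws InterruptedException {
        topic.put(payload);
    }

    // What the receiving Spout would do: poll the topic for the next tuple.
    static String receiveInTopologyB() throws InterruptedException {
        return topic.take();
    }

    public static void main(String[] args) throws InterruptedException {
        emitFromTopologyA("{\"id\": 1, \"value\": \"hello\"}");
        System.out.println(receiveInTopologyB());
    }
}
```

Because the broker sits between the two topologies, neither needs to know the other's host or VM, which is what makes this preferable to sockets or RMI.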

Sockets are a terrible choice, but I'll leave the reasons as a mental exercise
for now :). If you must know, I can respond again.
On Apr 26, 2016 6:27 AM, "Cody Lee" <[email protected]> wrote:

Two options come to mind without knowing what your code does:
1. Join the topology code bases into one topology.
2. Use a distributed queue (if something else needs to use this data).




-------- Original message --------
From: Navin Ipe <[email protected]>
Date: 04/26/2016 4:59 AM (GMT-06:00)
To: [email protected]
Subject: Emit from a Java application and receive in a Bolt of another Java
application?

Hi,

A colleague created a Bolt that writes data to MongoDB.
I have a Spout that reads that data from MongoDB.
My colleague's Bolt is in a separate Storm application he built. Mine is in
a separate Storm application I built. Our applications may run on different
VMs.

My team lead wants my colleague's Bolt to emit data, and wants a Bolt in my
application to receive that data. So we basically avoid writing to MongoDB.

I suggested doing this via Java sockets, RMI or Redis. On hearing this, my
team lead tells me "You haven't understood Storm yet".

I've been through a lot of the documentation on Storm, and I haven't seen
any case where two Storm topologies can communicate directly. Except maybe
using DRPC
<http://storm.apache.org/releases/0.10.0/Distributed-RPC.html>:
http://stackoverflow.com/questions/15690691/communication-between-several-storm-topologies
But there are complaints of DRPC memory leaks and unexpected behaviour:
https://mithunsatheesh.wordpress.com/2014/01/04/storm-drpc-and-why-it-didnt-solve-the-case-for-us/

So is DRPC the way to go or does Storm have some other method of emitting
from one topology and receiving it in another topology?

-- 
Regards,
Navin
