> Because I know that by using Storm, you can guarantee the messages
> (depending on the type of the Topology)
> such as exactly once, at least once.  If I simply use a Kafka consumer and
> another producer to forward the
> messages, could the data transfer be completely guaranteed as well?

Addendum:  If you use Kafka Streams, you have at-least-once processing
semantics.  So you do not lose data in the face of failures.
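For the plain forwarding case the Streams topology is essentially one line. A minimal sketch (assuming String keys and values and a broker at localhost:9092 -- adjust serdes and addresses for your setup; the application id "topic-forwarder" is made up here):

```java
import java.util.Properties;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStreamBuilder;

public class ForwardTopology {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "topic-forwarder");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());
        props.put(StreamsConfig.VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());

        // Read from the source topic and write every record unchanged
        // into the destination topic.
        KStreamBuilder builder = new KStreamBuilder();
        builder.stream("kafkaSource").to("kafkaResult");

        new KafkaStreams(builder, new StreamsConfig(props)).start();
    }
}
```

Streams handles offset management for you, so the at-least-once guarantee comes for free; you could insert a mapValues() step between stream() and to() if you need to transform records on the way through.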

On Fri, Jul 1, 2016 at 4:29 PM, numangoceri <numangoc...@yahoo.com.invalid>
wrote:

> Hi,
>
> Thanks for your answer. I actually meant whether we can verify data
> reliability without using Storm or Spark. Because I know that by using
> Storm, you can guarantee the messages (depending on the type of the
> Topology) such as exactly once, at least once.
> If I simply use a Kafka consumer and another producer to forward the
> messages, could the data transfer be completely guaranteed as well?
>
>
> Numan Göceri
>
> ---
>
> Rakesh Vidyadharan <rvidyadha...@gracenote.com> wrote:
>
> >Definitely.  You can read off kafka using the samples shown in
> KafkaConsumer javadoc, transform if necessary and publish to the
> destination topic.
> >
> >
> >
> >
> >On 01/07/2016 03:24, "numan goceri" <numangoc...@yahoo.com.INVALID>
> wrote:
> >
> >>Hello everyone,
> >>Hello everyone,
> >>I've a quick question: I'm using an Apache Kafka producer to write
> messages into a topic. My source at the moment is a csv file, but in the
> future I am supposed to read the messages from another Kafka topic. My
> question is: Is it possible to consume messages from a Kafka topic in
> real-time and write them directly into another topic without using any
> streaming technology such as Storm or Spark? If yes, do you have any
> examples of doing that in Java?
> >>To sum up, it should look like this: Kafka reads from the topic
> "kafkaSource" and writes into the topic "kafkaResult".
> >>
> >>Thanks in advance and Best Regards, Numan
>
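To make the consumer-plus-producer approach concrete, here is a minimal sketch of such a forwarder (assuming String keys/values and a broker at localhost:9092; the group id "forwarder" and topic names follow the original question, everything else is illustrative). The key to at-least-once behavior is disabling auto-commit and committing offsets only after the producer has flushed:

```java
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class TopicForwarder {
    public static void main(String[] args) {
        Properties consumerProps = new Properties();
        consumerProps.put("bootstrap.servers", "localhost:9092");
        consumerProps.put("group.id", "forwarder");
        // Commit offsets manually, only after the forwarded copies are safe.
        consumerProps.put("enable.auto.commit", "false");
        consumerProps.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        consumerProps.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        Properties producerProps = new Properties();
        producerProps.put("bootstrap.servers", "localhost:9092");
        // Wait for full acknowledgement so a committed offset implies
        // the record really reached the destination topic.
        producerProps.put("acks", "all");
        producerProps.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        producerProps.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerProps);
             KafkaProducer<String, String> producer = new KafkaProducer<>(producerProps)) {
            consumer.subscribe(Collections.singletonList("kafkaSource"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(100);
                for (ConsumerRecord<String, String> record : records) {
                    producer.send(new ProducerRecord<>("kafkaResult",
                            record.key(), record.value()));
                }
                producer.flush();       // block until all sends are acknowledged
                consumer.commitSync();  // then mark the batch as consumed
            }
        }
    }
}
```

With this commit-after-flush ordering a crash can only re-deliver records that were already forwarded, so you get at-least-once (duplicates possible, no loss) -- the same guarantee the Kafka Streams addendum above describes. Committing before the flush would instead give at-most-once and could drop data on failure.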
