Avi,

just adding a bit to what Gwen and Eno said, and providing a few pointers.

If you are using the DSL, you can use the `process()` method to "do
whatever you want". See "Applying a custom processor" in the Kafka Streams
DSL chapter of the Developer Guide:
http://docs.confluent.io/3.0.0/streams/developer-guide.html#applying-a-custom-processor
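To make the idea concrete, here is a minimal self-contained sketch of the
pattern in plain Java (no Kafka dependency; the `externalSink` list is a
hypothetical stand-in for whatever external system you'd write to from
inside the callback you hand to `process()`):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.function.BiConsumer;

public class DslProcessSketch {
    public static void main(String[] args) {
        // Stand-in for records flowing through a KStream.
        List<String> records = Arrays.asList("alpha", "beta", "gamma");

        // Stand-in for an external system you might write to from process().
        List<String> externalSink = new ArrayList<>();

        // The "do whatever you want" callback, analogous to the body of a
        // custom processor handed to KStream#process().
        BiConsumer<String, List<String>> process =
            (value, sink) -> sink.add(value.toUpperCase());

        for (String record : records) {
            process.accept(record, externalSink);
        }

        System.out.println(externalSink);  // [ALPHA, BETA, GAMMA]
    }
}
```

Note this only illustrates the shape of the pattern; in a real topology the
callback runs per record inside the Streams runtime, not in a loop you write
yourself.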

Alternatively, you can use the low-level Processor API directly. Here,
you'd implement the `Processor` interface, whose most notable method is
(again) one called `process()`. See the Processor API section of the
Developer Guide:
http://docs.confluent.io/3.0.0/streams/developer-guide.html#streams-developer-guide-processor-api
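For a feel of that interface's shape, here's a compact runnable sketch. The
real interface lives in `org.apache.kafka.streams.processor` and its `init()`
takes a `ProcessorContext` (there is also a `punctuate()` lifecycle method);
the simplified stand-in below merely mirrors the lifecycle so the flow runs
without a Kafka dependency:

```java
import java.util.ArrayList;
import java.util.List;

public class ProcessorApiSketch {
    // Simplified stand-in mirroring the shape of Kafka Streams'
    // org.apache.kafka.streams.processor.Processor<K, V> interface.
    interface Processor<K, V> {
        void init();                  // real API: init(ProcessorContext)
        void process(K key, V value); // called once per incoming record
        void close();                 // called on shutdown
    }

    // A custom processor: process() can read from or write to arbitrary
    // locations -- here, an in-memory "sink" stands in for an external system.
    static class UppercaseProcessor implements Processor<String, String> {
        final List<String> sink = new ArrayList<>();

        public void init() { /* obtain context, schedule punctuation, ... */ }

        public void process(String key, String value) {
            sink.add(key + "=" + value.toUpperCase());
        }

        public void close() { /* release external resources */ }
    }

    public static void main(String[] args) {
        UppercaseProcessor p = new UppercaseProcessor();
        p.init();
        p.process("user1", "hello");
        p.process("user2", "world");
        p.close();
        System.out.println(p.sink);  // [user1=HELLO, user2=WORLD]
    }
}
```

In a real application you'd wire such a processor into a topology (e.g. via
`TopologyBuilder#addProcessor`) rather than driving it by hand as above.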

Hope this helps!
Michael




On Fri, Jun 3, 2016 at 3:09 AM, Avi Flax <avi.f...@parkassist.com> wrote:

> On 6/2/16, 07:03, "Eno Thereska" <eno.there...@gmail.com> wrote:
>
> > Using the low-level streams API you can definitely read or write to
> arbitrary
> > locations inside the process() method.
>
> Ah, good to know — thank you!
>
> > However, back to your original question: even with the low-level streams
> > API the sources and sinks can only be Kafka topics for now. So, as Gwen
> > mentioned, Connect would be the way to go to bring the data to a Kafka
> > Topic first.
>
> Got it — thank you!
>
>


-- 
Best regards,
Michael Noll



Michael G. Noll | Product Manager | Confluent | +1 650.453.5860
Download Apache Kafka and Confluent Platform: www.confluent.io/download
