Hi Guozhang,
It does not seem that RocksDB is the bottleneck, because the global state
restore listener logs show that the restore calls to globalConsumer#poll do
not always return a full max.poll.records batch. A typical log looks like this:
21:20:22,530 Restored batch of 2000 records...
21:20:22,533 Restored
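For reference, a minimal sketch of the kind of StateRestoreListener that logs each restored batch, assuming it is registered via KafkaStreams#setGlobalStateRestoreListener; the class name and log wording are illustrative, not the exact code behind the lines above:

import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.streams.processor.StateRestoreListener;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Logs one line per restored batch; register with
// kafkaStreams.setGlobalStateRestoreListener(new LoggingRestoreListener()).
public class LoggingRestoreListener implements StateRestoreListener {
    private static final Logger log = LoggerFactory.getLogger(LoggingRestoreListener.class);

    @Override
    public void onRestoreStart(TopicPartition tp, String store, long startOffset, long endOffset) {
        log.info("Restoring {} from offset {} to {}", store, startOffset, endOffset);
    }

    @Override
    public void onBatchRestored(TopicPartition tp, String store, long batchEndOffset, long numRestored) {
        log.info("Restored batch of {} records for {} (up to offset {})", numRestored, store, batchEndOffset);
    }

    @Override
    public void onRestoreEnd(TopicPartition tp, String store, long totalRestored) {
        log.info("Finished restoring {} ({} records total)", store, totalRestored);
    }
}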
That will be really helpful, thanks for the heads-up.
On Thu, Feb 7, 2019 at 7:36 PM Guozhang Wang wrote:
> Hi Nan,
>
> Glad it helps with your case. Just another note that in the next release
> when KIP-307 is in place [1], you can actually combine the DSL with PAPI by
> naming the last operato
Hello Taylor,
Thanks for reporting the issue. One thing I can think of is that in 2.0.1
RocksDB restoration already uses a batching restorer to avoid restoring one
record at a time, so with larger batches of records, better throughput can be
achieved for a given IOPS budget.
On the other hand, since you mention co
Hi Nan,
Glad it helps with your case. Just another note that in the next release,
when KIP-307 is in place [1], you can actually combine the DSL with the PAPI by
naming the last operator that creates your transformed KStream and then
manually adding the sink nodes, like:
stream2 = stream1.transform(Named
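A rough sketch of that pattern, assuming the Named API from KIP-307; the operator, topic names, and the placeholder transformer logic below are made up for illustration:

import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Named;
import org.apache.kafka.streams.kstream.Transformer;
import org.apache.kafka.streams.processor.ProcessorContext;

public class NamedTransformExample {
    static Topology build() {
        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> stream1 = builder.stream("input-topic");

        // Naming the transform node lets it be referenced as a parent below.
        stream1.transform(() -> new Transformer<String, String, KeyValue<String, String>>() {
            @Override public void init(ProcessorContext context) {}
            @Override public KeyValue<String, String> transform(String key, String value) {
                return KeyValue.pair(key, value.toUpperCase()); // placeholder logic
            }
            @Override public void close() {}
        }, Named.as("my-transform"));

        // Drop down to the Topology and attach sink nodes to the named operator.
        Topology topology = builder.build();
        topology.addSink("sink-a", "output-topic-a", "my-transform");
        topology.addSink("sink-b", "output-topic-b", "my-transform");
        return topology;
    }
}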
Awesome, this solution is great, thanks a lot.
Nan
On Thu, Feb 7, 2019 at 2:28 PM Bill Bejeck wrote:
> Hi Nan,
>
> I see what you are saying about reproducing a join in the PAPI.
>
> I have another thought.
>
> 1. Have your Transform return a List [r1, r2, r3]
> 2. Then after your transfo
Hi Nan,
I see what you are saying about reproducing a join in the PAPI.
I have another thought.
1. Have your Transform return a List [r1, r2, r3]
2. Then, after your transform operation, use a flatMapValues operator, as
this will forward KV pairs of (k, r1), (k, r2), and (k, r3) (a sketch follows below).
From t
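A rough sketch of this pattern; the Result1/2/3 placeholder types, the hypothetical MyTransformer, and the topic names are illustrative, and the final routing step is just one possible way to split the flattened stream across topics (assuming suitable serdes are configured):

import java.util.List;
import org.apache.kafka.streams.kstream.KStream;

// Placeholder result types standing in for r1, r2, r3.
class Result1 {}
class Result2 {}
class Result3 {}

public class FlatMapFanOut {
    static void wire(KStream<String, List<Object>> transformed) {
        // "transformed" is the output of something like
        // input.transform(MyTransformer::new), where the transformer returns
        // KeyValue.pair(key, Arrays.asList(r1, r2, r3)).

        // flatMapValues turns (k, [r1, r2, r3]) into (k, r1), (k, r2), (k, r3).
        KStream<String, Object> flattened = transformed.flatMapValues(values -> values);

        // One possible way to route each element to its own topic afterwards.
        flattened.filter((k, v) -> v instanceof Result1).to("topic-1");
        flattened.filter((k, v) -> v instanceof Result2).to("topic-2");
        flattened.filter((k, v) -> v instanceof Result3).to("topic-3");
    }
}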
Hi
Reviewing the javadoc for KafkaConsumer.poll(), I'd like to confirm the
status of the consumer after the poll method throws an exception.
I assume that for an unrecoverable KafkaException you could not re-use the
consumer and would need to create a brand-new KafkaConsumer object?
Are there any case
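For illustration, a sketch of the conservative pattern the question implies: treat an unrecoverable KafkaException as fatal, close the consumer, and build a new one. Whether re-use is ever safe after a given exception is exactly what is being asked; the topic name and properties handling below are illustrative.

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.KafkaException;
import org.apache.kafka.common.errors.WakeupException;

public class PollLoop {
    static KafkaConsumer<String, String> newConsumer(Properties props) {
        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Collections.singletonList("input-topic")); // illustrative topic
        return consumer;
    }

    public static void run(Properties props) {
        KafkaConsumer<String, String> consumer = newConsumer(props);
        try {
            while (true) {
                try {
                    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                    records.forEach(r -> System.out.println(r.key() + " -> " + r.value()));
                } catch (WakeupException e) {
                    break; // shutdown signal from consumer.wakeup()
                } catch (KafkaException e) {
                    // Assume the consumer can no longer be used: close it and recreate.
                    consumer.close();
                    consumer = newConsumer(props);
                }
            }
        } finally {
            consumer.close();
        }
    }
}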
Hmm, but my DSL logic at the beginning involves some joins between different
streams, so I feel it will be quite complex to write everything in the PAPI.
What if I do this: in the transform, I return all 3 classes as a tuple,
then map 3 times on the same stream, like this:
transformer {
return (class1Insta
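A rough sketch of this idea; ResultTuple and its fields, the hypothetical MyTransformer, and the topic names are placeholders for whatever the real classes would be (and the .to() calls assume suitable serdes):

import org.apache.kafka.streams.kstream.KStream;

// Placeholder tuple type holding the three results produced by the transform.
class ResultTuple {
    Object class1Instance;
    Object class2Instance;
    Object class3Instance;
}

public class TupleFanOut {
    static void wire(KStream<String, ResultTuple> transformed) {
        // "transformed" is the output of something like input.transform(MyTransformer::new),
        // where the transformer returns KeyValue.pair(key, tuple).
        transformed.mapValues(t -> t.class1Instance).to("topic-1");
        transformed.mapValues(t -> t.class2Instance).to("topic-2");
        transformed.mapValues(t -> t.class3Instance).to("topic-3");
    }
}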
Hi Nan,
I wanted to follow up some more.
Since you need your Transformer to forward to 3 output topics, or more
generally any time you want a processor to forward to multiple child nodes
or to specific nodes in the topology, you can best achieve this kind of
control and flexibility using the PAPI.
Than
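For example, a sketch along these lines; the processor name, child-node names, and routing condition are illustrative, and the child names must match sink or processor node names added to the topology:

import org.apache.kafka.streams.processor.AbstractProcessor;
import org.apache.kafka.streams.processor.To;

// Forwards each record to a specific downstream child node by name.
public class RoutingProcessor extends AbstractProcessor<String, String> {
    @Override
    public void process(String key, String value) {
        if (value.startsWith("a")) {            // illustrative routing condition
            context().forward(key, value, To.child("sink-a"));
        } else {
            context().forward(key, value, To.child("sink-b"));
        }
    }
}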
Hi Nan,
What I'm suggesting is doing the entire topology in the PAPI; sorry if I
didn't make this clear before.
Thanks,
Bill
On Thu, Feb 7, 2019 at 10:41 AM Nan Xu wrote:
> Thanks, just to make sure I understand this correctly.
>
> I have some processing logic using DSL, after those process
Thanks, just to make sure I understand this correctly.
I have some processing logic using the DSL; after that processing, I have a
KStream. From this KStream, I need to do a transform and put the results to
different topics. To use the Processor API, I need to put this KStream to a
topic, then use topology.a
Hi Nan,
To forward to the 3 different topics, it will probably be easier to do this
in the Processor API. Based on what you stated in your question, the
topology will look something like this:
final Topology topology = new Topology();
topology.addSource("source-node", "input-topic");
topology.a
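A guess at how such a topology might continue, for concreteness: one processor node between the source and three sinks. Node, topic, and processor names are placeholders (RoutingProcessor stands for any Processor<String, String>, e.g. the one sketched earlier in this thread), and the truncated message above may have continued differently.

import org.apache.kafka.streams.Topology;

public class ExampleTopology {
    static Topology build() {
        final Topology topology = new Topology();
        topology.addSource("source-node", "input-topic");
        // The processor decides which sink(s) each record is forwarded to.
        topology.addProcessor("transform-node", RoutingProcessor::new, "source-node");
        topology.addSink("sink-a", "output-topic-a", "transform-node");
        topology.addSink("sink-b", "output-topic-b", "transform-node");
        topology.addSink("sink-c", "output-topic-c", "transform-node");
        return topology;
    }
}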