confluentinc/examples/blob/3.2.x/kafka-streams/src/main/java/io/confluent/examples/streams/MapFunctionLambdaExample.java#L126
>
> Full docs: http://docs.confluent.io/current/streams/index.html
>
>
> -Matthias
>
> On 5/17/17 1:45 PM, Robert Quinlivan wrote:
> >
of #poll() which can then be passed into a map/filter pipeline.
I am using an underlying blocking queue data structure to buffer in memory
and using Stream.generate() to pull records. Any recommendations on the best
approach here?
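A rough sketch of that pattern, for illustration only (topic name, configs, and queue capacity below are placeholders, not the actual setup):

import java.util.Collections;
import java.util.Properties;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.stream.Stream;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class PollToStreamSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // placeholder
        props.put("group.id", "stream-bridge-example");     // placeholder
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        BlockingQueue<ConsumerRecord<String, String>> queue = new ArrayBlockingQueue<>(10_000);

        // Poller thread: drain the consumer into the in-memory buffer.
        Thread poller = new Thread(() -> {
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("my-topic")); // placeholder topic
                while (!Thread.currentThread().isInterrupted()) {
                    for (ConsumerRecord<String, String> record : consumer.poll(100)) {
                        queue.put(record); // blocks when the buffer is full
                    }
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        poller.start();

        // Stream.generate() pulls from the buffer and feeds a map/filter pipeline.
        Stream.generate(() -> {
                    try {
                        return queue.take(); // blocks until a record is available
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                        throw new RuntimeException(e);
                    }
                })
                .map(ConsumerRecord::value)
                .filter(v -> !v.isEmpty())
                .forEach(System.out::println);
    }
}

The queue gives natural backpressure: put() blocks when the pipeline falls behind and take() blocks when it is empty. One caveat to keep in mind: blocking for long stretches between poll() calls can push the consumer past its session/poll timeouts and out of the group.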
Thanks
--
Robert Quinlivan
Software Engineer, Signal
d email address; please retain a copy of
> > this confirmation for future reference.
> >
> > If you receive this email in error, please notify the sender
> > immediately, by return email or by other means. You have agreed to
> > receive
igher.
>
> Problem 2: If we have two partitions, only two consumers can consume
> messages. How can we let more consumers consume without expanding the partitions?
>
> Thanks!
>
>
>
--
Robert Quinlivan
Software Engineer, Signal
er should arrive in the consumer, so if I do
> >> >> this in one windows console:
> >> >>
> >> >> kafka-console-consumer.bat --bootstrap-server localhost:9092 --topic
> >> >> big_ptns1_repl1_nozip --zookeeper localhost:2181 >
> >> >> F:\Users\me\Desktop\shakespear\single_all_shakespear_OUT.txt
> >> >>
> >> >> and this in another:
> >> >>
> >> >> kafka-console-producer.bat --broker-list localhost:9092 --topic
> >> >> big_ptns1_repl1_nozip <
> >> >> F:\Users\me\Desktop\shakespear\complete_works_no_bare_lines.txt
> >> >>
> >> >> then the output file "single_all_shakespear_OUT.txt" should be
> >> >> identical to the input file "complete_works_no_bare_lines.txt"
> except
> >> >> it's not. For the complete works (about 5.4 meg uncompressed) I lost
> >> >> about 130K in the output.
> >> >> For the replicated shakespeare, which is about 5GB, I lost about 150
> >> meg.
> >> >>
> >> >> Surely this can't be right. It's repeatable, but the errors seem to
> >> >> start at different places in the file each time.
> >> >>
> >> >> I've done this using all 3 versions of Kafka in the 0.10.x.y branch
> >> >> and I get the same problem (the above commands were using the
> 0.10.0.0
> >> >> branch so they look a little obsolete but they are right for that
> >> >> branch I think). It's cost me some days.
> >> >> So, am I making a mistake, if so what?
> >> >>
> >> >> thanks
> >> >>
> >> >> jan
> >> >>
> >> >
> >>
> >
>
--
Robert Quinlivan
Software Engineer, Signal
's reported
partition count. I have seen no mention of a need to restart or reconfigure
the producer in order to pick up the added partitions. Is this required?
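For reference, partitions can also be added programmatically; a minimal sketch using the AdminClient (topic name and target count are placeholders, and this call needs clients and brokers newer than the 0.10.x line discussed elsewhere in this archive):

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewPartitions;

public class AddPartitionsSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder

        try (AdminClient admin = AdminClient.create(props)) {
            // Grow the topic to 6 partitions (placeholder topic name and count).
            admin.createPartitions(
                    Collections.singletonMap("my-topic", NewPartitions.increaseTo(6)))
                 .all()
                 .get();
        }
    }
}

As for the producer side, the producer refreshes cluster metadata on its own (metadata.max.age.ms, default 300000 ms), so newly added partitions should be picked up without a restart.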
Thanks
--
Robert Quinlivan
Software Engineer, Signal
e for maintaining the offsets.
>
> Could someone more experienced elaborate a bit on this topic?
>
> Thanks
> jakub
>
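As a rough illustration of the two usual options for offset maintenance: either the consumer auto-commits (enable.auto.commit=true, the default) or the application commits explicitly after processing. A minimal manual-commit sketch, with placeholder topic and group names:

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class ManualCommitSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("group.id", "offset-example");           // placeholder
        props.put("enable.auto.commit", "false");          // the application owns commits
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("my-topic")); // placeholder topic
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(1000);
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("%s-%d@%d: %s%n",
                            record.topic(), record.partition(), record.offset(), record.value());
                }
                // Commit only after the batch has been processed, so a crash
                // replays records (at-least-once) rather than losing them.
                consumer.commitSync();
            }
        }
    }
}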
--
Robert Quinlivan
Software Engineer, Signal
blem I am facing in real time: when I try to bring in a new
> consumer group to consume from a certain topic, I have to restart the
> producer, and only then does it (the new consumer group) start consuming. Is there
> any other way to do this without disturbing the producer?
>
>
> Regards
> V G S
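One thing worth checking here, offered only as a guess from the description: a brand-new consumer group has no committed offsets, and with the default auto.offset.reset=latest it only sees messages produced after it subscribes, which can look like nothing is consumed until the producer is restarted and sends fresh data. A minimal consumer sketch with placeholder names:

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class NewGroupFromBeginningSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("group.id", "brand-new-group");         // placeholder: a group with no committed offsets
        // "earliest" makes a new group start from the beginning of the log instead of
        // waiting for messages produced after it joins (the default is "latest").
        props.put("auto.offset.reset", "earliest");
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Collections.singletonList("my-topic")); // placeholder topic
        // ... poll() loop as usual ...
    }
}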
>
> Sent from my iPhone
>
> > On Mar 15, 2017, at 9:40 AM, Robert Quinlivan
> wrote:
> >
> > I should also mention that this error was seen on broker version
> 0.10.1.1.
> > I found that this condition sounds somewhat similar to KAFKA-4362
> > <https://is
017 at 11:11 AM, Robert Quinlivan
wrote:
> Good morning,
>
> I'm hoping for some help understanding the expected behavior for an offset
> commit request and why this request might fail on the broker.
>
> *Context:*
>
> For context, my configuration looks like this:
>
ail?
2. If this is an issue with metadata size, what would cause abnormally
large metadata?
3. How is this cache used within the broker?
Thanks in advance for any insights you can provide.
Regards,
Robert Quinlivan
Software Engineer, Signal
uld be greatly appreciated.
>
> Thanks in advance.
>
> --
> Thanks,
> Syed.
>
--
Robert Quinlivan
Software Engineer, Signal
hreads.
>
> Or should there be one Kafka producer created to handle each request?
>
> Are there any best-practice documents/guidelines to follow for using the
> simple Java Kafka producer API?
>
> Thanks in advance for your responses.
>
> Thanks,
> Amit
>
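For reference, the KafkaProducer javadoc describes the producer as thread-safe and notes that sharing a single instance across threads is generally faster than creating many; a minimal sketch of the shared-instance pattern (topic, configs, and thread count are placeholders):

import java.util.Properties;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class SharedProducerSketch {
    public static void main(String[] args) throws InterruptedException {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");

        // One producer instance shared by all request-handling threads.
        Producer<String, String> producer = new KafkaProducer<>(props);
        ExecutorService pool = Executors.newFixedThreadPool(8); // placeholder thread count

        for (int i = 0; i < 100; i++) {
            final int n = i;
            pool.submit(() ->
                    producer.send(new ProducerRecord<>("my-topic", "key-" + n, "value-" + n)));
        }

        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        producer.close(); // flushes buffered records before exiting
    }
}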
--
Robert Quinlivan
Software Engineer, Signal
#SystemTools-ConsumerOffsetChecker )
> > >
> > > However, in my version they don't work, because they try to read from
> > > the ZooKeeper /consumers path, which is empty. I think they are old tools.
> > >
> > > Does anyone know where in zookeeper, where the current kafka keeps
> > > consumer offsets?
> > >
> > > Regards
> > > --
> > > Glen Ogilvie
> > > Open Systems Specialists
> > > Level 1, 162 Grafton Road
> > > http://www.oss.co.nz/
> > >
> > > Ph: +64 9 984 3000
> > > Mobile: +64 21 684 146
> > > GPG Key: ACED9C17
> > >
> >
>
--
Robert Quinlivan
Software Engineer, Signal
org.apache.kafka.common.errors.RecordTooLargeException,
returning UNKNOWN error code to the client
(kafka.coordinator.GroupMetadataManager)
The consumer group cannot attach. How can I resolve this issue on the
broker?
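One possibility to verify, not a confirmed fix: the group metadata record written to the internal __consumer_offsets topic may exceed that topic's max.message.bytes, in which case either the group's subscription metadata has to shrink or the topic-level limit has to grow. A hypothetical sketch of raising the override via the AdminClient (the size is a placeholder; on a 0.10.x broker the equivalent change would go through kafka-configs.sh instead, since this client API needs a newer broker):

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.Config;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

public class RaiseOffsetsTopicLimitSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder

        try (AdminClient admin = AdminClient.create(props)) {
            ConfigResource offsetsTopic =
                    new ConfigResource(ConfigResource.Type.TOPIC, "__consumer_offsets");
            Config override = new Config(Collections.singleton(
                    new ConfigEntry("max.message.bytes", "2097152"))); // placeholder: 2 MB
            admin.alterConfigs(Collections.singletonMap(offsetsTopic, override))
                 .all()
                 .get();
        }
    }
}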
Thanks
--
Robert Quinlivan
Software Engineer, Signal
max.bytes"
setting? This seems like an edge case to me. The leader would accept the
record but replicas would not be able to receive it, so it would be lost.
Or does the replica take the max of those two settings in order to avoid
this condition?
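For what it's worth, a small sketch of reading both settings from a broker to compare them (broker id and bootstrap address are placeholders; this only inspects the values, it does not answer which one wins):

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.Config;
import org.apache.kafka.common.config.ConfigResource;

public class CompareSizeLimitsSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder

        try (AdminClient admin = AdminClient.create(props)) {
            ConfigResource broker = new ConfigResource(ConfigResource.Type.BROKER, "0"); // placeholder broker id
            Config config = admin.describeConfigs(Collections.singleton(broker))
                                 .all()
                                 .get()
                                 .get(broker);
            System.out.println("message.max.bytes       = " + config.get("message.max.bytes").value());
            System.out.println("replica.fetch.max.bytes = " + config.get("replica.fetch.max.bytes").value());
        }
    }
}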
Thanks in advance!
--
Robert Quinlivan
Software Engineer, Signal
nimum delay? Or
> is it the minimum time required by Kafka for the whole process?
>
> Best Regards,
> Patricia
--
Robert Quinlivan
Software Engineer, Signal
Hello,
Are there more detailed descriptions available for the metrics exposed by
Kafka via JMX? The current documentation provides some information but a
few metrics are not listed in detail – for example, "Log flush rate and
time."
--
Robert Quinlivan
Software Engineer, Signal
,
ConsumerRebalanceListener) would behave similarly, distributing the
assigned topics among all consumers in the group.
Is this not the case? What is the expected behavior and how would you
recommend implementing this design?
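Assuming the overload in question is KafkaConsumer#subscribe(Pattern, ConsumerRebalanceListener), a minimal sketch of what that looks like (pattern, group, and configs are placeholders):

import java.util.Collection;
import java.util.Properties;
import java.util.regex.Pattern;
import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class PatternSubscribeSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("group.id", "pattern-example");          // placeholder
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Pattern.compile("events\\..*"), new ConsumerRebalanceListener() {
            @Override
            public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
                System.out.println("Revoked: " + partitions);
            }

            @Override
            public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
                // Partitions of every matching topic are spread across the group members.
                System.out.println("Assigned: " + partitions);
            }
        });
        // consumer.poll(...) loop as usual; assignments arrive via the listener above.
    }
}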
Thank you
--
Robert Quinlivan
Software Engineer, Signal
hat can provide more verbose logging, or is there another way of checking
the offsets?
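One way to check a group's committed offset directly from code, assuming the offsets are stored in Kafka rather than ZooKeeper (group, topic, and partition below are placeholders):

import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class CheckCommittedOffsetSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("group.id", "group-to-inspect");         // placeholder: the group whose offsets we want
        props.put("enable.auto.commit", "false");          // inspection only; no subscribe/poll here
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        TopicPartition tp = new TopicPartition("my-topic", 0); // placeholder topic/partition

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            OffsetAndMetadata committed = consumer.committed(tp);
            System.out.println(tp + " committed offset: "
                    + (committed == null ? "none" : committed.offset()));
        }
    }
}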
Thank you
--
Robert Quinlivan
Software Engineer, Signal