Hi All,
I am trying to get a multi threaded HL consumer working against a 2 broker
Kafka cluster with a 4 partition 2 replica topic.
The consumer code is set to run with 4 threads, one for each partition.
The producer code uses the default partitioner and loops indefinitely,
feeding events into Kafka. If I remove this line, the consumer threads
will run forever.
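The thread-per-partition layout described above can be sketched without a live cluster. In this hedged example, plain queues stand in for the per-partition streams that the high-level consumer's createMessageStreams would hand back; the names (`worker`, `NUM_PARTITIONS`) are illustrative, not from the original program:

```python
# Sketch of the thread-per-partition pattern: one worker thread drains
# one partition's stream, preserving per-partition ordering. Queues are
# stand-ins for the real KafkaStream objects.
import queue
import threading

NUM_PARTITIONS = 4

def worker(partition_id, stream, results):
    # Drain exactly one partition's stream until the shutdown sentinel.
    while True:
        msg = stream.get()
        if msg is None:
            break
        results.append((partition_id, msg))

streams = [queue.Queue() for _ in range(NUM_PARTITIONS)]
results = []
threads = [
    threading.Thread(target=worker, args=(p, streams[p], results))
    for p in range(NUM_PARTITIONS)
]
for t in threads:
    t.start()

# Feed a few events into each "partition", then signal shutdown.
for p, stream in enumerate(streams):
    for i in range(3):
        stream.put(f"partition-{p}-msg-{i}")
    stream.put(None)  # without this sentinel, the workers run forever

for t in threads:
    t.join()

print(len(results))  # 12 messages consumed across 4 threads
```

The `None` sentinel plays the same role as the shutdown call discussed below: removing it leaves every worker blocked on its stream indefinitely.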
>
> On Wed, Apr 29, 2015 at 9:42 PM, christopher palm wrote:
>
> > Hi All,
> >
> > I am trying to get a multi threaded HL consumer working against a 2
> > broker Kafka cluster with a 4 partition 2 replica topic. …
>
> On Thu, Apr 30, 2015 at 2:23 AM, christopher palm wrote:
>
> > Commenting out the example shutdown call did not seem to make a
> > difference; I added the print statement below to highlight the fact.
> >
> > The other threads still shut down, and …
Hi All,
Does Kafka support SSL authentication and ACL authorization without
Kerberos?
If so, can different clients have their own SSL certificate on the same
broker?
In reading the following security article, it seems that Kerberos is an
option but not required if SSL is used.
Thanks,
Chris
ht
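To the question above: yes, SSL client authentication plus ACL authorization works without Kerberos, and each client can present its own certificate to the same broker; the broker maps each certificate's DN to a principal for ACL checks. A hedged broker-side sketch for the 0.9/0.10-era configuration (paths, hostnames, and passwords are placeholders):

```properties
# SSL listener; clients connect with their own keystores.
listeners=SSL://broker1:9093
security.inter.broker.protocol=SSL
ssl.keystore.location=/var/private/ssl/broker1.keystore.jks
ssl.keystore.password=changeit
ssl.key.password=changeit
ssl.truststore.location=/var/private/ssl/broker1.truststore.jks
ssl.truststore.password=changeit
# Require each client to present its own certificate.
ssl.client.auth=required
# ACLs without Kerberos: the principal is the certificate DN,
# e.g. User:CN=clientA,OU=Eng,O=Example
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
allow.everyone.if.no.acl.found=false
```

ACLs are then granted per principal with the kafka-acls tool, one principal per client certificate.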
> …securing-kafka --group securing-kafka-group
>
> Enabling the authorizer log is a good way to figure out the principal if
> you don't know it.
>
> Hope this helps,
> Ismael
>
> On Mon, Mar 21, 2016 at 10:27 PM, Raghavan, Gopal wrote:
>
> > > Hi Christ…
Hi All,
I am working with the KafkaProducer using the properties below,
so that the producer keeps retrying the send upon failure, on Kafka 0.9.0.1.
I am forcing a failure by setting my buffer size smaller than my
payload, which causes the expected exception below.
I don't see the producer retry the send.
> …("buffer.memory",
> "max.request.size") happens before
> batching and sending messages. The retry mechanism applies to broker-side
> errors and network errors.
> Try changing the "message.max.bytes" broker config property to simulate a
> broker-side error.
>
>
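For reference, the producer settings in play can be sketched as below; `retries` only covers retriable broker/network errors, while a record that violates the client-side limits is rejected in send() itself, before batching, so it is never retried (the values are illustrative, not the poster's actual config):

```properties
# Client-side limits: violating these fails before any network send,
# so no retries are attempted.
buffer.memory=33554432
max.request.size=1048576
# Retries apply only to retriable broker-side and network errors.
retries=3
retry.backoff.ms=100
```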
> Someone else has filed a related issue about it.
>
> Regards,
> Nicolas PHUNG
>
> On Thu, Apr 7, 2016 at 5:15 AM, christopher palm wrote:
>
> > Hi, thanks for the suggestion.
> > I lowered the broker message.max.bytes …
I had a similar question, and just watched the video on the confluent.io
site about this.
From what I understand, idempotence and transactions are there to solve
duplicate writes and exactly-once processing, respectively.
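The split described above maps onto producer settings roughly as follows (a sketch; the transactional id is a placeholder):

```properties
# Idempotence: the broker de-duplicates retried writes from this
# producer, solving the duplicate-writes problem.
enable.idempotence=true
# Transactions: atomic writes across partitions; combined with committing
# offsets inside the transaction, this enables exactly-once
# consume-transform-produce processing.
transactional.id=my-transactional-app-1
```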
Is what you are stating below that this only works if we produce into a …

> … wrote:
> > You can achieve exactly-once on a consumer by enabling read_committed
> > and manually committing the offset as soon as you receive a message.
> > That way you know that at the next poll you won't get the old message
> > again.
> > On Fri, Sep 27, 2019, 6:24
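The recipe quoted above corresponds to consumer settings along these lines (a sketch, not the poster's exact configuration), with a manual commit issued after each record is handled:

```properties
# Only read messages from committed transactions.
isolation.level=read_committed
# Disable auto-commit; commit manually right after handling each record.
enable.auto.commit=false
```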