Hi,

I am using a simple Kafka producer (Java based, version 0.9.0.0) in an
application that receives a lot of hits (about 50 per second, in much the
same way as a servlet) and hosts the Kafka producer. Each request comes
with a different set of records.

I am using only one instance of the Kafka producer to p…
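
(For reference, the usual pattern at this load is exactly that: a single
long-lived producer shared by all request threads, since KafkaProducer is
thread-safe. A minimal sketch, with broker address, serializers, and names
assumed rather than taken from the thread:)

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    // Minimal sketch: one KafkaProducer shared by every request thread.
    // KafkaProducer is thread-safe; send() is asynchronous and cheap to call.
    public final class SharedProducer {
        private static final KafkaProducer<String, String> PRODUCER = create();

        private static KafkaProducer<String, String> create() {
            Properties props = new Properties();
            props.put("bootstrap.servers", "broker1:9092"); // assumed address
            props.put("key.serializer",
                    "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer",
                    "org.apache.kafka.common.serialization.StringSerializer");
            return new KafkaProducer<String, String>(props);
        }

        // Called once per request; records are batched internally.
        public static void publish(String topic, String value) {
            PRODUCER.send(new ProducerRecord<String, String>(topic, value));
        }
    }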

Do you use auto-commit, or are you committing yourself? I'm trying to
figure out how the offset moved if the consumer was stuck.
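
(A minimal sketch of the manual-commit variant being asked about; group id,
topic name, and the process() step are placeholders, not from the thread:)

    import java.util.Arrays;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    Properties props = new Properties();
    props.put("bootstrap.servers", "broker1:9092");   // assumed address
    props.put("group.id", "my-group");                // assumed group id
    props.put("enable.auto.commit", "false");         // commit manually instead
    props.put("key.deserializer",
            "org.apache.kafka.common.serialization.StringDeserializer");
    props.put("value.deserializer",
            "org.apache.kafka.common.serialization.StringDeserializer");

    KafkaConsumer<String, String> consumer =
            new KafkaConsumer<String, String>(props);
    consumer.subscribe(Arrays.asList("my-topic"));
    while (true) {
        ConsumerRecords<String, String> records = consumer.poll(1000);
        for (ConsumerRecord<String, String> record : records) {
            process(record); // placeholder for the application's processing
        }
        consumer.commitSync(); // offsets advance only after processing succeeds
    }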
Hi,

Is there any way to re-consume older records from the Kafka broker with the
Kafka consumer?

I am using Kafka 0.9.0.0. In one scenario, I saw that records from the last
2 days had not been consumed because the consumer was stuck. When the
consumer was restarted, it started processing records from today, but the
older…
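
(One way to do this with the 0.9 consumer is to take manual control of the
partition and rewind it with seek(). Note that 0.9 has no timestamp-based
offset lookup in the consumer API, so the target offset has to come from
somewhere else, e.g. an offset the application saved earlier. A sketch,
reusing the props above; topic, partition, and savedOffset are assumptions:)

    import java.util.Arrays;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.TopicPartition;

    TopicPartition tp = new TopicPartition("my-topic", 0); // illustrative
    KafkaConsumer<String, String> consumer =
            new KafkaConsumer<String, String>(props);
    consumer.assign(Arrays.asList(tp)); // manual assignment, no group rebalance
    consumer.seek(tp, savedOffset);     // rewind to an offset saved 2 days ago
    // consumer.seekToBeginning(tp);    // or replay the partition from the start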
…biggest offset. Notice that the search is at log segment level and the
result may not be accurate if the partition has been moved. In the worst
case, you may consume a lot more messages than you want to, but you should
not miss any messages.

Jiangjie (Becket) Qin
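
(The segment-level search described here is the broker's offset-by-time
API. Before 0.10.1, one way to reach it from Java is the old SimpleConsumer,
along the lines of the well-known SimpleConsumer example; host, topic,
partition, and targetTimeMs below are assumptions:)

    import java.util.HashMap;
    import java.util.Map;
    import kafka.api.PartitionOffsetRequestInfo;
    import kafka.common.TopicAndPartition;
    import kafka.javaapi.OffsetRequest;
    import kafka.javaapi.OffsetResponse;
    import kafka.javaapi.consumer.SimpleConsumer;

    SimpleConsumer sc =
            new SimpleConsumer("broker1", 9092, 100000, 64 * 1024, "offsetLookup");
    TopicAndPartition tap = new TopicAndPartition("my-topic", 0);
    Map<TopicAndPartition, PartitionOffsetRequestInfo> info =
            new HashMap<TopicAndPartition, PartitionOffsetRequestInfo>();
    // Ask for the latest offset written before targetTimeMs; the answer is
    // at log-segment granularity, hence the possible over-read noted above.
    info.put(tap, new PartitionOffsetRequestInfo(targetTimeMs, 1));
    OffsetResponse resp = sc.getOffsetsBefore(new OffsetRequest(
            info, kafka.api.OffsetRequest.CurrentVersion(), "offsetLookup"));
    long startOffset = resp.offsets("my-topic", 0)[0];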
Hi All,

I am using Kafka 0.9.0.1 with the high-level Java producer and consumer. I
need to handle a case wherein I have to re-consume already consumed (and
processed) messages, say for the last 5 days (configurable).

Is there any way of achieving this apart from identifying the offsets for
the partitions…
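
(Combining the two sketches above: resolve a segment-level start offset for
now minus the configurable window, then rewind with seek().
lookupOffsetBefore() is a hypothetical helper wrapping the getOffsetsBefore()
call shown earlier:)

    import java.util.Arrays;
    import java.util.concurrent.TimeUnit;
    import org.apache.kafka.common.TopicPartition;

    long replayFromMs = System.currentTimeMillis() - TimeUnit.DAYS.toMillis(5);
    TopicPartition tp = new TopicPartition("my-topic", 0);
    consumer.assign(Arrays.asList(tp));
    // lookupOffsetBefore: hypothetical helper around getOffsetsBefore() above
    consumer.seek(tp, lookupOffsetBefore(tp, replayFromMs));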

Sure, rebalance is a normal cause for duplicates.

Sure, "as I lower the value of auto.commit.interval.ms, the performance
deteriorates drastically", but you should see fewer duplicates. Did you try
committing asynchronously, or storing offsets somewhere else?
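
(A sketch of the async-commit variant suggested here: commitAsync() does not
block the poll loop the way a per-batch commitSync() does, at the cost of
weaker guarantees when a commit fails:)

    import java.util.Map;
    import org.apache.kafka.clients.consumer.OffsetAndMetadata;
    import org.apache.kafka.clients.consumer.OffsetCommitCallback;
    import org.apache.kafka.common.TopicPartition;

    consumer.commitAsync(new OffsetCommitCallback() {
        public void onComplete(Map<TopicPartition, OffsetAndMetadata> offsets,
                               Exception exception) {
            if (exception != null) {
                // A failed async commit just means a later commit must cover
                // it; log it so duplicate-heavy restarts can be explained.
                System.err.println("Commit failed for " + offsets
                        + ": " + exception);
            }
        }
    });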

…replicas in the ISR, i.e., ack=all or none? I am not sure if this can
cause the consumer to read duplicates; I know there can definitely be data
loss because of data not being replicated.
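
(For reference, the durability setting under discussion. With acks=all the
broker waits for the full ISR before acknowledging, which protects against
loss but not against duplicates, since producer retries can still resend.
A sketch with an assumed broker address:)

    Properties props = new Properties();
    props.put("bootstrap.servers", "broker1:9092"); // assumed address
    props.put("acks", "all");  // wait for every in-sync replica
    props.put("retries", "3"); // retries guard against transient errors,
                               // but can themselves reintroduce duplicates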
Hi,

I am kind of new to Kafka. I have set up a 3-node Kafka cluster (1 broker
per machine) with a 3-node zookeeper cluster. I am using Kafka version
0.9.0.0.

The setup works fine: from my single producer I am pushing a JSON string
to a topic with 3 partitions and replication factor of…
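
(A sketch of that producing side with a completion callback, useful for
confirming which partition and offset each JSON record lands on; the topic
name and jsonString are assumptions:)

    import org.apache.kafka.clients.producer.Callback;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.clients.producer.RecordMetadata;

    producer.send(new ProducerRecord<String, String>("my-topic", jsonString),
            new Callback() {
                public void onCompletion(RecordMetadata metadata, Exception e) {
                    if (e != null) {
                        e.printStackTrace(); // send failed after any retries
                    } else {
                        System.out.printf("partition %d, offset %d%n",
                                metadata.partition(), metadata.offset());
                    }
                }
            });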

-----Original Message-----
From: Amit K [mailto:amitk@gmail.com]
Sent: Wednesday, July 20, 2016 11:40 AM
To: users@kafka.apache.org
Subject: Re: Regarding kafka partition and replication

Thanks for the reply.
Hi,

I have a Kafka cluster of 3 nodes, each with 3 brokers, along with a 3-node
zookeeper cluster, so 9 brokers in total spread across 3 different machines.
I am on Kafka 0.9.

In order to use the infrastructure optimally for 2 topics (which, as of
now, are not expected to grow drastically in the near future), I am…
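
(If the question is how to spread partitions and replicas over the 9
brokers, the 0.9-era way to create a topic programmatically is AdminUtils.
The partition and replication counts below are purely illustrative, as is
the zookeeper connect string:)

    import java.util.Properties;
    import kafka.admin.AdminUtils;
    import kafka.utils.ZkUtils;

    ZkUtils zkUtils = ZkUtils.apply("zk1:2181,zk2:2181,zk3:2181", // assumed
            30000, 30000, false);
    // 9 partitions so each broker can lead one; replication factor 3 for a
    // copy per machine. Note 0.9 has no rack awareness, so replicas of a
    // partition can still land on brokers that share a machine.
    AdminUtils.createTopic(zkUtils, "my-topic", 9, 3, new Properties());
    zkUtils.close();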