our partition, if the data is indeed
>> being produced to Kafka. Are there any errors in your broker logs? How many
>> brokers do you have and what is the replication factor of the topic? If
>> you have fewer than 3 brokers, have you set offsets.topic.replication.factor
>> to the number of brokers?
>>
>> Thanks,
>> Jamie
>>
>> -----Original Message-----
>> From: Sachin Nikumbh
>> To: users
>> Sent: Wed, 17 Jul 2019 20:21
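Jamie's point about offsets.topic.replication.factor can be sketched as a server.properties fragment; the single-broker value below is an assumption for illustration, not taken from this thread:

```properties
# Hypothetical single-broker setup: the internal __consumer_offsets topic
# cannot be fully created if its replication factor exceeds the broker count.
offsets.topic.replication.factor=1
# The same caution applies to the transaction-state internal topic.
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
```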
> -----Original Message-----
> From: Sachin Nikumbh
> To: users
> Sent: Wed, 17 Jul 2019 20:21
> Subject: Re: Kafka logs are getting deleted too soon
>
> Broker configs:
> ===
> broker.id=36
> num.network.threads=3
> num.io.threads=8
> socket.se
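Since the complaint is segments disappearing too soon, the retention settings in server.properties are worth checking alongside the configs quoted above. A sketch using Kafka's stock defaults (the values shown are the documented defaults, not taken from this thread):

```properties
# Time-based retention: delete segments older than 7 days
log.retention.hours=168
# Size-based retention: -1 means no size limit
log.retention.bytes=-1
# Roll a new segment once the active one reaches 1 GiB
log.segment.bytes=1073741824
# How often the broker checks for segments eligible for deletion
log.retention.check.interval.ms=300000
```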
By default the console consumer starts from the last offset.

Tom Aley
thomas.a...@ibm.com

> From: Sachin Nikumbh
> To: Kafka Users
> Date: 17/07/2019 16:01
> Subject: [EXTERNAL] Kafka logs are getting deleted too soon
>
> Hi all,
> I have ~ 96GB of data in files that I am trying to get into a Kafka
> cluster. I have ~ 11000 keys for the data and I have created 15
> partitions for my topic.
Are you running the console consumer with the ‘--from-beginning’ option? It
defaults to reading from the tail of the log, so if nothing is being produced
it will be idle.

-- Peter (from phone)

> On Jul 17, 2019, at 8:00 AM, Sachin Nikumbh wrote:
>
> Hi all,
> I have ~ 96GB of data in files that I am trying to get into a Kafka
> cluster.
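Peter's suggestion can be tried with the stock console consumer tool; the topic name and bootstrap address below are placeholders, not values from this thread:

```shell
# Read the topic from the earliest retained offset instead of the tail
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 \
  --topic my-topic --from-beginning
```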
Hi all,
I have ~ 96GB of data in files that I am trying to get into a Kafka cluster. I
have ~ 11000 keys for the data and I have created 15 partitions for my topic.
While my producer is dumping data in Kafka, I have a console consumer that
shows me that Kafka is getting the data. The producer ru
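One way to check whether retention is being overridden at the topic level rather than broker-wide (topic name and broker address are placeholders):

```shell
# Shows partition count, replication factor, and any per-topic config
# overrides such as retention.ms
bin/kafka-topics.sh --bootstrap-server localhost:9092 \
  --describe --topic my-topic
```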