it happen
that the earliestOffset is higher than the currentOffset? My only suspicion
is that maybe messages from the topic were cleaned out due to the retention
policy… Any other cases this could have happened? Thanks a lot!
Marina
ConsumerOffsetChecker
showed - but this is not the case.
Why? Is it because in 0.8.2.1 offsets are already stored in Kafka, and not in
Zookeeper? (I am using the low-level SimpleConsumer API.)
And in that case - how can I see the real offset values and modify them?
thanks!
Marina
Thanks, Achanta - you are right, I should have used 'get' instead of 'ls'! And
this also answers my other question about setting the offset manually - I guess
I can just as well do
'set /consumers/elastic_search_group/offsets/my_log_topic/0 someValue'
thanks!
M
' in value part of property" error... due to IPv6 address
representation returned by my Mac.
Is there a known or planned fix/work-around?
thanks,
Marina
Hi,
I have enabled JMX_PORT for the Kafka server and am trying to understand some of
the metrics that are being exposed. I have two questions:
1. what are the best metrics to monitor to quickly spot unhealthy Kafka cluster?
2. what do these metrics mean: ReplicaManager -> LeaderCount ? and
ReplicaMa
values for both of the above attributes are
"53", so I am not sure what the count '53' means here.
thanks!
Marina
From: Todd Palino
To: "users@kafka.apache.org"
Cc: Marina
Sent: Tuesday, June 2, 2015 1:29 PM
Subject: Re: Kafka JMX metrics m
Size/minBytes' relate? What exactly do
they define? Do I have to make
sure one is smaller or greater than the other?
Thanks,
Marina
does not show anything
Also, how would I change the offset? I need to do this sometimes if I want to
skip/ignore some messages and just advance offset manually.
thanks,
Marina
[Unless, of course, this is the only way to do this.]
thanks!
Marina
- Original Message -
From: Stevo Slavić
To: users@kafka.apache.org; Marina
Cc:
Sent: Friday, June 19, 2015 9:33 AM
Subject: Re: how to modify offsets stored in Kafka in 0.8.2.1 version?
Hello Marina,
There's
orm a different offset - for example, given a log size of 1.5M events, I can
start from offset 500K to have an exact 1M load, or I can start from offset 1M
if I want a 500K load. Very convenient and easy for QA to use.
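(The offset arithmetic is trivial but handy to script; a sketch - the helper name is mine, the numbers come from the example above:)

```python
def start_offset_for_load(log_end_offset, desired_count, earliest_offset=0):
    """Offset to seek to so that exactly `desired_count` events remain
    between the start offset and the current log end."""
    start = log_end_offset - desired_count
    if start < earliest_offset:
        raise ValueError("the log does not hold that many events")
    return start

# Log holds 1.5M events:
print(start_offset_for_load(1_500_000, 1_000_000))  # 500000 -> exact 1M load
print(start_offset_for_load(1_500_000, 500_000))    # 1000000 -> exact 500K load
```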
Thanks again for your help!
Marina
- Original Message -
From
ffset directly in Zookeeper (or deleting the whole
path in Zookeeper and the whole log dir in Kafka).
Not sure if there is a better way to do this kind of cleanup.
thanks!
- Original Message -
From: Marina
To: "users@kafka.apache.org"
Cc:
Sent: Monday, June 22, 2015 8:45 AM
Subj
+1 for deprecating JDK1.6
From: Harsha
To: users@kafka.apache.org; d...@kafka.apache.org
Sent: Wednesday, July 1, 2015 11:05 AM
Subject: EOL JDK 1.6 for Kafka
Hi,
During our SSL patch KAFKA-1690, some of the reviewers/users
asked for support for this config:
https://docs
ck to troubleshoot this?
thanks a lot!
Marina
, I tried to create a new topic, and delete it right away with no
events posted - and in that case it was deleted for good right away.
I'm using Kafka 0.8.2.1.
thanks!
Marina
From: Stevo Slavić
To: users@kafka.apache.org
Sent: Thursday, July 16, 2015 6:56 PM
Subject: Re: Delete topi
see this consistently when I have more than 100M
events. I know it is a wide range :) - so I will try to ramp it up gradually, I
just have to write a couple of scripts for that.
thanks!
Marina
From: JIEFU GONG
To: users@kafka.apache.org
Sent: Friday, July 17, 2015 2:56 PM
Subject: Re
ould think that if you can get a message in the while{} - you are already past
the point at which the Consumer dies if the message is corrupt... is it not the
case?
thanks!
Marina
[sorry, I did not mean to high-jack the thread - but I think it is important to
understand how to skip corrupted messages for bo
Are there plans to move consumer group coordination off of Zk as well? And if so:
-- what's the approximate planned release for that?
-- and what dependencies on Zk will be left in Kafka after that?
thanks!
Marina
From: Ewen Cheslack-Postava
To: "users@kafka.apache.org"
en though
the consumers are actively consuming/saving offsets.
Am I missing some other configuration/properties ?
thanks!
Marina
I have also posted this question on StackOverflow:
http://stackoverflow.com/questions/33925866/kafka-0-8-2-1-how-to-read-from-consumer-offsets-topic
ion (there are no values stored there). Is it
true?
I feel it is some small stupid mistake I'm making, since it seems to work fine
for others - drives me crazy :)
thanks!!
Marina
- Original Message -
From: Jason Gustafson
To: users@kafka.apache.org
Sent: Wednesday, December 2, 2
, 30, 31, 32, 33, 34, 35,
36, 37, 38, 39, 40, 41, 42, 43]
example of one partition state:
ls /brokers/topics/__consumer_offsets/partitions/44
[state]
ls /brokers/topics/__consumer_offsets/partitions/44/state
[]
anything else you think I could check?
thanks!
Marina
- Original Message
et' - I could see correct offsets for all my consumers in ZK. We
will switch to using Kafka storage very soon!
Thanks!
Marina
From: Lance Laursen
To: users@kafka.apache.org
Cc: Marina
Sent: Thursday, December 3, 2015 4:35 PM
Subject: Re: Kafka 0.8.2.1 - how to read from __consume
I have also asked this question before, and others have too, and I'm including a
nicely detailed response from Grant below.
My only other wish is that it would be possible to forcefully re-set the
offsets to zero when needed. Even though it is unlikely to exhaust the whole
range of values - when they
fka-elasticsearch-consumer) has now been moved
to this repository, which supports Docker, Gradle and is Spring-based:
https://github.com/BigDataDevs/kafka-elasticsearch-consumer
Thank you!
Marina
Hi, just wanted to bump this up again... I tried to see if I could edit the WIKI
myself - but it seems that you need permissions to do so, which I don't have.
How does one go about updating this part of the WIKI?
thanks!
From: Marina
To: Users
Sent: Tuesday, March 22, 2016 1:17 PM
Su
Sounds like a great Meetup! Unfortunately, not all of us are lucky enough to be
in CA :) - any chance this Meetup will be recorded?
thanks!
From: Guozhang Wang
To: "d...@kafka.apache.org" ; "users@kafka.apache.org"
Sent: Tuesday, March 29, 2016 12:00 PM
Subject: Apache Kafka Meetup t
+1 - wish it was already done with Kafka 0.9 version :)
From: Tommy Becker
To: users@kafka.apache.org
Sent: Friday, June 17, 2016 7:55 AM
Subject: Re: [DISCUSS] Java 8 as a minimum requirement
+1 We're on Java 8 already.
On 06/16/2016 04:45 PM, Ismael Juma wrote:
Hi all,
I would
ing, when, say, new consumers are added to the
group.
Could somebody validate this approach or suggest a better way to accomplish
what we need ?
thanks!
Marina
bout the number of
partitions.
Thanks!
Marina
I'm very interested in the answer to this question as well - if the offsets are
not preserved over the app re-start, we could be looking at a sizable data
loss.
thanks!
Marina
From: Dhyan Muralidharan
To: users@kafka.apache.org
Sent: Friday, December 16, 2016 8:29 AM
Su
[3100-3600] for partition 2.
Here are example results for a couple of partitions:
[kafka_offsets.png]
One interesting point: it looks like all the differences are 500 events! I
wonder if this is some default buffer size somewhere.
Any idea if it is an issue on the app side or possibly a bug in Kafka?
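(To double-check the pattern mechanically - with made-up per-partition numbers mirroring the observed ranges, e.g. 3100-3600 for partition 2:)

```python
# Last offset the app thinks it processed vs. last offset actually committed
# to Kafka, per partition (made-up numbers mirroring the observed ranges):
recorded  = {0: 10_500, 1: 22_000, 2: 3_100}
committed = {0: 11_000, 1: 22_500, 2: 3_600}

gaps = {p: committed[p] - recorded[p] for p in recorded}
print(gaps)  # {0: 500, 1: 500, 2: 500} -> every difference is exactly 500
```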
y way to just use the Confluent Connector with the plain
(non-Confluent) Apache distribution of Kafka?
thanks!
Marina
hare/java/kafka-connect-elasticsearch/ dir? and adding it
into the plugin.path dir for Kafka? Or do I package the whole
./confluent-3.3.0/share/java/kafka-connect-elasticsearch/ dir as an Uber Jar to
make sure all dependencies are also included?
Thanks,
Marina
Sent with [ProtonMail](https://protonmail.com) Secure Email.
is not an option
for now, for at least a few months...
So, I'm trying to understand how I can add the Confluent Elasticsearch
Connector to the existing Apache Kafka installation...
Thank you!
Marina
> Original
AM
> UTC Time: September 29, 2017 3:49 PM
> From: sjdur...@gmail.com
> To: users@kafka.apache.org, Marina Popova
>
> You can choose to run just kafka connect in the confluent platform (just run
> the kafka connect shell script(s)) and configure the connectors to point
> towards you
!!
Marina
Hi,
I wanted to give this question a second try as I feel it is very important
to understand how to control error cases with Connectors.
Any advice on how to control the handling of "poison" messages in the case of
connectors?
Thanks!
Marina
> Hi,
> I have the FileStreamSinkC
give them a try, but it looks like they do not offer the same level of direct
control over exception handling yet (which is normal for higher-level
products, of course - the higher the level of abstraction you use, the less
control over details you have...)
Thank you!
Marina
g (like Connectors) or something along those lines, rather than packing it
all into Core Kafka.
Just my 2 cents :)
Marina
> Original Message
> Subject: Re: Comparing Pulsar and Kafka: unified queuing and streami
Sorry, maybe a stupid question, but:
I see that Kafka 1.0.1 RC2 is still not released, but now 1.1.0 RC0 is coming
up...
Does it mean 1.0.1 will be abandoned and we should be looking forward to 1.1.0
instead?
thanks!
‐‐‐ Original Message ‐‐‐
On Fe
ot; - I would see the latest state from the table, whatever
was aggregated so far.
Does not seem to be the case for me - I only see results for new events that
arrive AFTER I start the query.
Am I missing some configuration setting?
Thank you!
Marina
‐‐‐ Original Message ‐‐‐
On Monday, May 6, 2019 4:44 PM, Steve Howard wrote:
> Sorry, I hit send too quickly.
>
> On Mon, May 6, 2019 at 4:42 PM Steve Howard wrote:
>
>> Hi Marina,
>>
>> Try...
>>
>>
05 [0]
8vGuAWfMScPzFg | 155719305 | 155719308 | [1.624,0.062,0]
5nrqWz99We6-2Q | 155719302 | 155719305 | [0.001]
This is not the only query I see only partial results for in the Control
Center. Is there some configuration I'm missing?
Thank you!
Marina
time, 3) as top_3_request_times
>FROM rc_events_full
>WINDOW TUMBLING (SIZE 30 SECONDS)
>GROUP BY payload->sitekey;
8vGuAWfMScPzFg | 155719302 | 155719305 | [0.014]
8vGuAWfMScPzFg | 155719305 | 155719308 | [0.062]
Why can't I execute this query?
I'm also very interested in this question - any update on this?
thanks!
Marina
‐‐‐ Original Message ‐‐‐
On Thursday, September 5, 2019 6:30 PM, Ash G wrote:
> __consumer_offsets is becoming rather big, > 1 TB. Is there a way to purge
>
using the
inter.broker.protocol.version set to 0.11 at first
2. rolling upgrade ZK cluster to 3.5.6
3. set inter.broker.protocol.version=2.4.0 and rolling restart the Kafka
cluster again
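(For concreteness, the broker setting toggled in steps 1 and 3 - a sketch of the server.properties lines, with the version values as written above:)

```properties
# step 1: keep the old protocol while the broker binaries are upgraded
inter.broker.protocol.version=0.11
# step 3: after all brokers run the new binaries, bump it and roll again
inter.broker.protocol.version=2.4.0
```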
Anybody sees a problem with this approach?
thanks,
Marina
‐‐‐ Original Message ‐‐‐
O
sumer,
producer and zookeeper properties files between those distributions and did not
see anything critical, but just wanted to make sure.
Thank you!
Marina
‐‐‐ Original Message ‐‐‐
On Tuesday, August 11, 2020 6:02 PM, Ismael Juma
on is: does Kafka also rely on the resources reported as
available by the JVM or not?
I searched and googled - but did not find any specific info...
thank you!
Marina
ings might have changed between those
versions, for producers specifically, that might lead to this? Or other reasons?
Thank you!
Marina
topic1 and
just push it unchanged into a new topic2 with, say, 40 partitions - and then
have your other services pick up from this topic2.
good luck,
Marina
‐‐‐ Original Message ‐‐‐
On Saturday, December 19, 2020 6:46 PM, Yana K wrote:
> Hi
>
&
Hi, I have posted this question on SO:
https://stackoverflow.com/questions/67625641/kafka-segments-are-deleted-too-often-or-not-at-all
but wanted to re-post here as well in case someone spots the issue right away
Thank you for your help!
>
We have two topics on our Kafka cluster that ex
could check, apart from what I already did - see the
post below - to troubleshoot this?
thank you!
Marina
‐‐‐ Original Message ‐‐‐
On Thursday, May 20, 2021 2:10 PM, Marina Popova
wrote:
> Hi, I have posted this question on SO:
>
is being cleaned up
every 5 min, due to "
Found deletable segments with base offsets [11755700] due to retention time
259200ms breach "
And I can see that the size of the segments never reaches 1G for the 2nd topic
either ...
thank you,
Marina
Y_DATE as the timestamp, which is in seconds, not ms
...
Although, at this time I am more concerned with the Topic 1 problem - as the
data keeps growing and growing and eventually causes out-of-disk-space failures.
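(The seconds-vs-ms mix-up mentioned above is easy to demonstrate: a record timestamp written in seconds looks, to a broker expecting milliseconds, like a date in early 1970 - instantly past any retention window. The 3-day window here is just an illustrative value:)

```python
import time

RETENTION_MS = 3 * 24 * 3600 * 1000    # e.g. a 3-day retention window, in ms

now_ms = int(time.time() * 1000)
bad_ts = int(time.time())              # timestamp mistakenly written in seconds

# Interpreted as milliseconds, bad_ts is ~now/1000, i.e. early 1970:
age_ms = now_ms - bad_ts
print(age_ms > RETENTION_MS)  # True -> such segments are deleted immediately
```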
Thank you!!!
Marina
‐‐‐ Original Me
n for the
topic that was not cleaned up for a long time, and it was an issue with wrong
event timestamps for the topic that was being cleaned too often.
Thank you, Matthias, for the tips!
Marina
‐‐‐ Original Message ‐‐‐
On Wednesday, May 26, 2021 12: