Hey everyone,
I've been having some issues with corrupted messages and MirrorMaker, as I
wrote previously. Since there was no feedback, I want to ask a new
question:
Have you ever had corrupted messages in Kafka? Did things break? How did
you recover or work around it?
Thanks
Jörg
Hi,
I'm facing an issue with the high-level Kafka consumer (0.8.2.0): after
consuming some amount of data, one of our consumers stops. After a restart it
consumes some messages and stops again, with no error, exception, or warning.
After some investigation I found that the "ConsumerFetcherThread" for my
m
Answering my own message; the problem with the consumer was this exception:
==
ERROR c.u.u.e.impl.kafka.KafkaConsumer - Error consuming message stream:
kafka.message.InvalidMessageException: Message is corrupt (stored crc =
3801080313, computed crc = 2728178222)
        at kafka.message.Message.ens
Hi,
There are two properties that determine when a replica falls out of
sync: look for replica.lag.time.max.ms and replica.lag.max.messages. If a
replica goes out of sync, it will not even be considered for leader
election.
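For reference, both are broker-side settings in server.properties. A minimal
sketch, assuming the 0.8.x defaults (double-check the values for your version):

    # server.properties (broker side)
    # Drop a follower from the ISR if it has not caught up for this long
    replica.lag.time.max.ms=10000
    # Drop a follower from the ISR if it is more than this many messages behind
    replica.lag.max.messages=4000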
Regards,
Pushkar
On Wed, Sep 30, 2015 at 9:44 AM, Shushant Arora
Note that 0.8.3-SNAPSHOT has recently been renamed 0.9.0.0-SNAPSHOT.
In any event, the major version number change could indicate that there
has, in fact, been some sort of incompatible change. Using 0.9.0.0, I'm
also unable to use the kafka-console-consumer.sh to read from a 0.8.2.1
broker, b
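For reference, a stock console-consumer invocation looks like this (the host
and topic below are placeholders):

    bin/kafka-console-consumer.sh --zookeeper zkhost:2181 --topic test \
      --from-beginning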
Hi Richard,
You are correct that the version will now be 0.9.0 and anything referencing
0.8.3 is being changed. You are also correct that there have been wire
protocol changes that break compatibility. However, backwards compatibility
exists, and you should always upgrade your brokers before upgrading your
clients.
I’m trying to replicate data from one DC to a remote DC in AWS.
I have a node in DC1, running Kafka 0.8.2.1 under ZooKeeper.
- hostname = is11
- broker.id = 0
- port 6667
MirrorMaker in DC2, running Kafka 0.8.2.1
- hostname = zoo1001
consumer.config
auto.commit.enable=true
auto.commit.interval.ms=1
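For context, MirrorMaker itself is launched roughly like this in 0.8.2.1 (the
config file names, whitelist, and stream count here are placeholders):

    bin/kafka-run-class.sh kafka.tools.MirrorMaker \
      --consumer.config consumer.config \
      --producer.config producer.config \
      --whitelist '.*' \
      --num.streams 2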
Of course, that documentation needs to be updated to refer to '0.9.X'!
Also, I'm wondering if the last step there should be changed to remove the
property altogether and restart (rather than setting it to the new
version), since once the code is updated, it will use that by default?
On Thu, Oct 1
Jason,
The version number in the docs does need to be updated, and will be before
the release.
I agree that removing the property rather than setting it to the new
version might be better. However, others may want to set the new version in
order to already have the value hard-coded for the next upgrade.
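For anyone following along, this is the rolling-upgrade broker setting; a
sketch of the documented sequence, assuming the property keeps its planned
0.9.0.0 name:

    # server.properties, step 1: before deploying the new code, pin the
    # inter-broker wire protocol to the old version
    inter.broker.protocol.version=0.8.2.X

    # step 2: once every broker runs the new code, bump (or, per this thread,
    # remove) the property and do a second rolling restart
    inter.broker.protocol.version=0.9.0.0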
Hey Jörg,
Unfortunately, when the high-level consumer hits a corrupt message, it
enters an invalid state and closes. The only way around this is to advance
your offset by 1 in order to skip the corrupt message. This is currently
not automated. You can catch this exception if you are using the simple
consumer.
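To make that concrete, here is a rough sketch against the 0.8.2 SimpleConsumer
Java API. The host, topic, partition, and offsets are placeholders, and this
is untested, so treat it as an outline rather than a drop-in fix:

    import kafka.api.FetchRequest;
    import kafka.api.FetchRequestBuilder;
    import kafka.javaapi.FetchResponse;
    import kafka.javaapi.consumer.SimpleConsumer;
    import kafka.message.InvalidMessageException;
    import kafka.message.MessageAndOffset;

    // Fetch from a known offset and step over a corrupt message by advancing
    // the offset by one, instead of dying the way the high-level consumer does.
    public class SkipCorruptMessage {
        public static void main(String[] args) {
            String topic = "my-topic";   // placeholder
            int partition = 0;           // placeholder
            long offset = 12345L;        // offset at (or before) the corrupt message
            SimpleConsumer consumer =
                new SimpleConsumer("broker-host", 9092, 100000, 64 * 1024, "skip-client");
            try {
                FetchRequest req = new FetchRequestBuilder()
                    .clientId("skip-client")
                    .addFetch(topic, partition, offset, 100000)
                    .build();
                FetchResponse resp = consumer.fetch(req);
                for (MessageAndOffset mao : resp.messageSet(topic, partition)) {
                    try {
                        mao.message().ensureValid(); // throws on a CRC mismatch
                        // ... process mao.message() here ...
                    } catch (InvalidMessageException e) {
                        System.err.println("skipping corrupt message at " + mao.offset());
                    }
                    offset = mao.nextOffset(); // move past it either way
                }
            } finally {
                consumer.close();
            }
        }
    }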
Great, that makes sense. Forward compatibility for brokers is likely
hard, though it would be nice if clients were backward compatible. I
guess, though, implementing that requires KIP-35.
Thanks for the 0.9.0.0 rolling update pointer.
Richard
On 10/01/2015 10:48 AM, Grant Henke wrote:
Hi Richard
Hi All,
We’ve been dealing with a Kafka outage for a few days. In an attempt to
recover, we’ve shut down all of our producers and consumers. So the only
connections we see to/from the brokers are other brokers and zookeeper.
The symptoms we’re seeing are:
1. Uncaught exception on kafka-ne
Hi,
We would like to log the offset of a Kafka message if we fail to process it
(so we can try to re-process it later). Is it possible to get the offset
using the high-level consumer?
I took a quick look at the code, and:
- It seems like the offset is private in the current Scala consumer
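If memory serves, the Java API does expose it: MessageAndMetadata has offset()
and partition() accessors in 0.8.2. A minimal sketch (the ZooKeeper address,
group id, and topic below are placeholders):

    import java.util.Collections;
    import java.util.List;
    import java.util.Map;
    import java.util.Properties;

    import kafka.consumer.Consumer;
    import kafka.consumer.ConsumerConfig;
    import kafka.consumer.ConsumerIterator;
    import kafka.consumer.KafkaStream;
    import kafka.javaapi.consumer.ConsumerConnector;
    import kafka.message.MessageAndMetadata;

    // Log the offset of any message that fails processing, so it can be
    // re-processed later.
    public class OffsetLoggingConsumer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("zookeeper.connect", "zkhost:2181"); // placeholder
            props.put("group.id", "offset-logger");        // placeholder
            ConsumerConnector connector =
                Consumer.createJavaConsumerConnector(new ConsumerConfig(props));
            Map<String, List<KafkaStream<byte[], byte[]>>> streams =
                connector.createMessageStreams(Collections.singletonMap("my-topic", 1));
            ConsumerIterator<byte[], byte[]> it =
                streams.get("my-topic").get(0).iterator();
            while (it.hasNext()) {
                MessageAndMetadata<byte[], byte[]> mam = it.next();
                try {
                    process(mam.message());
                } catch (Exception e) {
                    System.err.printf("failed at %s-%d offset %d%n",
                            mam.topic(), mam.partition(), mam.offset());
                }
            }
        }

        private static void process(byte[] message) { /* ... */ }
    }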
Hi
I have noticed that when our brokers have no incoming connections (just
connections to other brokers and to the ZK cluster), we get messages about
shrinking the ISR for some partitions:
[2015-10-02 00:58:31,239] INFO Partition [lia.stage.raw_events,9] on broker 1:
Shrinking ISR for partitio
If I am not wrong, the auto-commit might already have happened, so when you
restart the consumer it should work fine. Also keep in mind that Kafka works
on an at-least-once delivery model, so we should expect duplicate messages
when restarting the consumer.
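To cope with that, the consumer side can deduplicate by remembering the last
offset it processed per partition; a hypothetical sketch:

    import java.util.HashMap;
    import java.util.Map;

    // Hypothetical consumer-side dedup for at-least-once delivery: remember
    // the last offset processed per partition and drop anything at or below it.
    public class OffsetDeduper {
        private final Map<Integer, Long> lastProcessed = new HashMap<>();

        /** Returns true if this (partition, offset) has not been seen yet. */
        public boolean shouldProcess(int partition, long offset) {
            Long last = lastProcessed.get(partition);
            if (last != null && offset <= last) {
                return false; // duplicate replayed after a restart
            }
            lastProcessed.put(partition, offset);
            return true;
        }
    }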
On Oct 2, 2015 4:06 AM, "eugene miretsky" wrote:
If it's a daily rolling appender, then the log will be rolled the next day,
once new log data comes in.
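For reference, the relevant appender in Kafka's config/log4j.properties looks
roughly like this; the DatePattern is what controls when the roll happens
(a plain '.'yyyy-MM-dd pattern rolls daily):

    log4j.appender.kafkaAppender=org.apache.log4j.DailyRollingFileAppender
    log4j.appender.kafkaAppender.DatePattern='.'yyyy-MM-dd
    log4j.appender.kafkaAppender.File=${kafka.logs.dir}/server.log
    log4j.appender.kafkaAppender.layout=org.apache.log4j.PatternLayout
    log4j.appender.kafkaAppender.layout.ConversionPattern=[%d] %p %m (%c)%n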
On Sep 26, 2015 3:20 AM, "Hema Bhatia" wrote:
> Thanks Gwen!
>
> I made the changes and restarted the Kafka nodes. Looks like all log files
> are still present. Does it take some time for the changes to kick in?