My consumer configuration has auto.offset.reset=smallest. Could this be
causing the problem of replaying the messages?
I got to know why some partitions have no owners. It was my mistake: I
have 12 partitions but I started only 9 consumers. So when I check the
Consumer Offset Checker tool, 3 partit
Hello Arya,
The broker seems dead due to too many open file handles, which are likely
due to too many open sockets. How many producer clients do you have on
these 5 machines, and could you check if there is any socket leak?
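(For reference, a frequent cause of such leaks is creating a new producer per
request and never closing it; a minimal sketch of the safe pattern with the
0.8 Java producer API, with made-up broker and topic names:)

import java.util.Properties;

import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

public class OneProducerPerProcess {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("metadata.broker.list", "broker1:9092");  // made-up broker
        props.put("serializer.class", "kafka.serializer.StringEncoder");

        // Each live Producer instance holds open sockets to the brokers, so
        // create one per process and always close it when done.
        Producer<String, String> producer =
            new Producer<String, String>(new ProducerConfig(props));
        try {
            producer.send(new KeyedMessage<String, String>("mytopic", "payload"));
        } finally {
            producer.close();  // releases the broker sockets / file handles
        }
    }
}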
Guozhang
On Wed, Apr 9, 2014 at 8:50 PM, Arya Ketan wrote:
> *Issue:* Kafka cluster goes to an unresponsive state after some time
I see this a lot in the consumer logs
[kafka.utils.ZkUtils$] conflict in
/consumers/group1/owners/testtopic/2 data:
group1_ip-10-164-9-107-1397047622368-c14b108b-1 stored data:
group1_ip-10-168-47-114-1397034341089-3861c6cc-1
I am not seeing any "expired" in my consumer logs
Thanks
Arjun Na
Thanks Magnus! I will definitely check this out
—Ian
On Apr 9, 2014, at 8:39 PM, Magnus Edenhill wrote:
> Hey Ian,
>
> this is where a tool like kafkacat comes in handy; it will use a random
> partitioner by default (without the need to define a key):
>
> tail -f /my/log | kafkacat -b mybroker -t mytopic
*Issue:* Kafka cluster goes to an unresponsive state after some time, with
producers getting socket timeouts on every request made.
*Kafka Version:* 0.8.1
*Machines:* VMs, 2 cores, 8 GB RAM, Linux, 3-node cluster.
ulimit -a
core file size (blocks, -c) 0
data seg size (kby
Thanks for the quick reply, Jun. I will be checking it. I think there is
one more thing to this problem: the partitions which don't have any owner
are consuming the messages when the producer is pushing the messages
into Kafka, but these are not giving those messages to the actual
consumer. The
Do you see many rebalances in the consumer log? If so, see
https://cwiki.apache.org/confluence/display/KAFKA/FAQ#FAQ-Whyaretheremanyrebalancesinmyconsumerlog
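(For reference, the knobs that FAQ points at are the consumer's ZooKeeper
session timeout and the rebalance retry settings; a hedged sketch to be
merged into the existing 0.8 consumer Properties, with purely illustrative
values:)

import java.util.Properties;

Properties props = new Properties();
// A longer ZK session timeout tolerates consumer GC pauses, a common
// trigger of repeated rebalances.
props.put("zookeeper.session.timeout.ms", "10000");
// More retries, spaced further apart, give concurrent rebalances room to settle.
props.put("rebalance.max.retries", "8");
props.put("rebalance.backoff.ms", "3000");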
Thanks,
Jun
On Wed, Apr 9, 2014 at 8:38 PM, Arjun wrote:
> Hi ,
>
> I have set up Kafka 0.8 on 3 servers. I have pushed some data into these
Hi,
I have set up Kafka 0.8 on 3 servers. I have pushed some data into these
servers. The number of partitions I use is 12, with a replication factor
of 2.
We use the high-level consumer to consume messages. We set auto commit to
false, and we commit almost after each and every successful message.
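(For reference, a minimal sketch of that pattern against the 0.8 high-level
consumer API; the ZooKeeper address, group, and topic names below are made up:)

import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.Properties;

import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;
import kafka.message.MessageAndMetadata;

public class ManualCommitConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("zookeeper.connect", "zk1:2181");  // illustrative ZK address
        props.put("group.id", "group1");
        props.put("auto.commit.enable", "false");    // we commit manually below
        props.put("auto.offset.reset", "smallest");  // start from earliest when no offset exists

        ConsumerConnector connector =
            Consumer.createJavaConsumerConnector(new ConsumerConfig(props));
        Map<String, List<KafkaStream<byte[], byte[]>>> streams =
            connector.createMessageStreams(Collections.singletonMap("testtopic", 1));
        ConsumerIterator<byte[], byte[]> it = streams.get("testtopic").get(0).iterator();
        while (it.hasNext()) {
            MessageAndMetadata<byte[], byte[]> record = it.next();
            // process(record.message()) ...
            connector.commitOffsets();  // commit after each successfully processed message
        }
    }
}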
Was there any error in the controller and the state-change logs?
Thanks,
Jun
On Wed, Apr 9, 2014 at 11:18 AM, Marcin Michalski wrote:
> Hi, has anyone upgraded their Kafka from 0.8.0 to 0.8.1 successfully, one
> broker at a time on a live cluster?
>
> I am seeing strange behaviors where many of my Kafka topics become unusable
Hey Ian,
this is where a tool like kafkacat comes in handy; it will use a random
partitioner by default (without the need to define a key):
tail -f /my/log | kafkacat -b mybroker -t mytopic
See
https://github.com/edenhill/kafkacat
2014-04-10 6:13 GMT+07:00 Ian Friedman :
> Hey guys. We
This may be because the 0.8 producer sticks to a partition during
metadata refresh intervals. You can get around that by specifying a
key:
--property parse.key=true --property key.separator=###
Each line would then be:
KEY###MESSAGE
The key is used for partitioning but will also be stored with the message.
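(For reference, the same idea done programmatically with the 0.8 Java
producer API; the broker, topic, key, and message below are made up:)

import java.util.Properties;

import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

public class KeyedSend {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("metadata.broker.list", "broker1:9092");  // made-up broker
        props.put("serializer.class", "kafka.serializer.StringEncoder");

        Producer<String, String> producer =
            new Producer<String, String>(new ProducerConfig(props));
        // A keyed message is hash-partitioned by its key, so varying the key
        // spreads sends across partitions instead of sticking to one
        // between metadata refreshes.
        producer.send(new KeyedMessage<String, String>(
            "mytopic", "line-42", "the message body"));
        producer.close();
    }
}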
Hey guys. We recently migrated our production cluster from 0.7.2 to 0.8.1. One
of the tools in the 0.7 distribution was something called Producer Shell, which
we used in some cron jobs to do some manual addition of messages (for messages
that got dropped for various reasons). So in 0.8 that is gone
Thanks for the reply.
That is certainly not a good idea; other clients can't use the Kafka
cluster with these settings:
advertised.host.name=localhost
advertised.port=19092
When I build the staging environment, I will set up a proxy as in the
commented-out configuration.
For now, I have succeeded in producing messages over an SSH tunnel.
Settin
Hi, has anyone upgraded their Kafka from 0.8.0 to 0.8.1 successfully, one
broker at a time on a live cluster?
I am seeing strange behaviors where many of my Kafka topics become unusable
(by both consumers and producers). When that happens, I see lots of errors
in the server logs that look like this
Do you see the data loss warning after a controlled shutdown? It isn't
very clear from your original message whether that is associated with
a shutdown operation.
We have a test setup similar to what you are describing - i.e.,
continuous rolling bounces of a test cluster (while there is traffic
fl
What are folks currently doing to secure Kafka and ZK sockets in a cluster?
Firewalls?
SSH tunnels between machines in the cluster and wider servers?
Private hand-cranked mods to the source code?
Other?
What's been seen to work out in the wild?
Thanks
---
I think setting these is not a good idea because they only apply to the
specific client where you've set up the tunnel. Other clients cannot use
these settings:
advertised.host.name=localhost
advertised.port=19092
You probably need to figure out another way, such as:
1) Setting up a local mapping on your pro
Found that the posh-git shell which comes with GitHub for Windows is
causing this odd CLI behavior (see here for related discussion on Gradle
forum:
http://forums.gradle.org/gradle/topics/which_characters_are_allowed_in_value_of_gradle_project_properties)
On Wed, Apr 9, 2014 at 3:13 PM, Stevo Sla
I confirmed.
- Controller.log
After the Kafka broker started, it keeps outputting the log below;
the error continues.
===
[2014-04-09 23:55:07,709] ERROR [Controller-0-to-broker-0-send-thread],
Controller 0's connection to broker id:0,host:localhost,port:19092 was
unsuccessful (kafka.contr
When you say "topic size", do you mean the number of topics? If so, you can
send a TopicMetadataRequest to any broker. See
https://cwiki.apache.org/confluence/display/KAFKA/0.8.0+SimpleConsumer+Example
for details. In particular, if the topic list in the request is empty, you
get all topics.
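(For reference, a minimal sketch along those lines using the 0.8 javaapi
SimpleConsumer; the broker host, port, and client id below are made up:)

import java.util.Collections;

import kafka.javaapi.TopicMetadata;
import kafka.javaapi.TopicMetadataRequest;
import kafka.javaapi.TopicMetadataResponse;
import kafka.javaapi.consumer.SimpleConsumer;

public class TopicList {
    public static void main(String[] args) {
        SimpleConsumer consumer =
            new SimpleConsumer("broker1", 9092, 100000, 64 * 1024, "topic-list");
        try {
            // An empty topic list asks the broker for metadata on all topics.
            TopicMetadataRequest request =
                new TopicMetadataRequest(Collections.<String>emptyList());
            TopicMetadataResponse response = consumer.send(request);
            for (TopicMetadata metadata : response.topicsMetadata()) {
                System.out.println(metadata.topic() + ": "
                    + metadata.partitionsMetadata().size() + " partitions");
            }
        } finally {
            consumer.close();
        }
    }
}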
Thanks,
Jun
On
Hello Kafka community,
I'm trying to import Kafka, 0.8.1 branch, into the Eclipse IDE.
The Gradle eclipse plugin is already applied in the Kafka build script.
If I just run "gradle eclipse", the default Scala 2.8.0 will be used to
generate the Eclipse project files, so the classpath will exclude
kafka/utils/Annotations_2.
Thanks Joel and Guozhang!
The data retention is 72 hours.
Graceful shutdown is done via SIGTERM, and
controlled.shutdown.enabled=true is in the config.
I do see 'Controlled shutdown succeeded' in the broker log when I shut
it down.
With both your responses, I feel as if brokers are indeed setu
Hi Team,
I would like to ask you about the easiest way to fetch topic metadata
from my app. My goal is monitoring the topic size in my Java app
(where I also have producers). Is there any API in the Kafka libs for
that? I would like to avoid a direct connection to ZooKeeper.
Thanks,
Chris