I seem to be unable to connect a single instance of the Kafka server to a
single instance of a local ZooKeeper server on localhost (Ubuntu 14.04 LTS /
OpenJDK 1.7.0_75):
[2015-02-27 08:10:11,467] INFO Initiating client connection,
connectString=localhost:2181 sessionTimeout=6000
watcher=org.I0Itec.zkclient.ZkC
You might want ZkUtils.getPartitionsForTopic. But beware that it's an
internal method that could potentially change or disappear in the future.
If you're just looking for protocol-level solutions, the metadata API has a
request that will return info about the number of partitions:
https://cwiki.ap
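Roughly, using the 0.8 Java SimpleConsumer it would look something like this
(a sketch only; the broker address, topic name and client id are placeholders):

  import java.util.Collections;
  import kafka.javaapi.TopicMetadata;
  import kafka.javaapi.TopicMetadataRequest;
  import kafka.javaapi.TopicMetadataResponse;
  import kafka.javaapi.consumer.SimpleConsumer;

  public class PartitionCount {
    public static void main(String[] args) {
      // Any live broker can answer a topic metadata request.
      SimpleConsumer consumer =
          new SimpleConsumer("localhost", 9092, 100000, 64 * 1024, "partition-count-lookup");
      try {
        TopicMetadataRequest request =
            new TopicMetadataRequest(Collections.singletonList("my-topic"));
        TopicMetadataResponse response = consumer.send(request);
        for (TopicMetadata tm : response.topicsMetadata()) {
          System.out.println(tm.topic() + " has "
              + tm.partitionsMetadata().size() + " partitions");
        }
      } finally {
        consumer.close();
      }
    }
  }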
Hi Stevo,
Simple as well, if I'm not mistaken.
Otis
--
Monitoring * Alerting * Anomaly Detection * Centralized Log Management
Solr & Elasticsearch Support * http://sematext.com/
On Fri, Feb 27, 2015 at 2:35 AM, Stevo Slavić wrote:
> Hello Apache Kafka community,
>
> In Kafka 0.8.1.1, are Kaf
Zakee,
It would be useful to get the following.
kafka.network:name=RequestQueueSize,type=RequestChannel
kafka.network:name=RequestQueueTimeMs,request=Fetch,type=RequestMetrics
kafka.network:name=RequestQueueTimeMs,request=Produce,type=RequestMetrics
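If it helps, these can be sampled with the JmxTool that ships with Kafka (a
sketch, assuming JMX is enabled on the broker on port 9999):

  bin/kafka-run-class.sh kafka.tools.JmxTool \
    --jmx-url service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi \
    --object-name 'kafka.network:name=RequestQueueSize,type=RequestChannel' \
    --object-name 'kafka.network:name=RequestQueueTimeMs,request=Fetch,type=RequestMetrics' \
    --object-name 'kafka.network:name=RequestQueueTimeMs,request=Produce,type=RequestMetrics'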
Thanks,
Jun
On Thu, Feb 26, 2015 at 2:17 PM
The log seems to suggest that broker 1 is offline. Is broker 1 registered
properly in ZK? You can find this out by reading the broker registration
path (
https://cwiki.apache.org/confluence/display/KAFKA/Kafka+data+structures+in+Zookeeper)
from ZK.
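For example, with the zookeeper-shell that ships with Kafka (a sketch; adjust
the ZooKeeper address and broker id):

  bin/zookeeper-shell.sh localhost:2181
  ls /brokers/ids
  get /brokers/ids/1

Broker 1 should show up under /brokers/ids, and the get should print the JSON
(host, port, timestamp) it registered with.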
Thanks,
Jun
On Thu, Feb 26, 2015 at 10:31 PM, Z
Can you paste the error log for each rebalance try?
You may search for keyword "exception during rebalance".
On 2/26/15, 7:41 PM, "Ashwin Jayaprakash"
wrote:
>Just to give you some more debugging context, we noticed that the "consumers"
>path becomes empty after all the JVMs have exited because of
Do you mean you were not able to connect to zookeeper after retry?
We see this error in the log from time to time, but the zkClient will
retry and usually it will succeed. Can you verify whether you were finally
able to connect or not?
Jiangjie (Becket) Qin
On 2/27/15, 12:53 AM, "Victor L" wrote:
Great news. Thanks a lot Joe
On Wed, Feb 25, 2015 at 11:46 AM, Joseph Lawson wrote:
> Doh that was probably my bad Pranay! A misinterpretation of some old
> consumer code. btw, jruby-kafka is now at 1.1.1 with proper support for
> deleting the offset, setting the auto_offset_reset and whitelis
I eventually figured it out: my zkClient was running on the VM/guest OS with
ZooKeeper on the host, and the VM-to-host port mapping was broken...
On Fri, Feb 27, 2015 at 1:17 PM, Jiangjie Qin
wrote:
> Do you mean you were not able to connect to zookeeper after retry?
> We see this error in the log from time to
Thanks !
On Tue, Feb 24, 2015 at 8:23 PM, Gwen Shapira wrote:
> Camus uses the simple consumer, which doesn't have the concept of "consumer
> group" in the API (i.e. Camus is responsible for allocating threads to
> partitions on its own).
>
> The client-id is hard coded and is "hadoop-etl" in so
JiangJie:
thanks for the info.
it looks like I can change the default behavior and let a new group read from
the earliest offset by setting auto.offset.reset=smallest
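i.e. something like this in the consumer config, if I understand correctly
(group id and ZooKeeper address are placeholders):

  group.id=my-new-group
  zookeeper.connect=localhost:2181
  # only takes effect when the group has no committed offset yet
  auto.offset.reset=smallest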
Yang
On Tue, Feb 24, 2015 at 3:06 PM, Jiangjie Qin
wrote:
> If a consumer comes from a new consumer group, it will by default consume
>
Do you see "zookeeper state changed (Expired)" in your logs?
On Fri, Feb 27, 2015 at 10:12 AM, Jiangjie Qin
wrote:
> Can you paste the error log for each rebalance try?
> You may search for keyword "exception during rebalance".
>
> On 2/26/15, 7:41 PM, "Ashwin Jayaprakash"
> wrote:
>
> >Just gi
Does anyone know how to achieve unlimited log retention either globally or
on a per topic basis? I tried explicitly setting the log.retention.bytes to
-1 but the default time policy kicked in after 7 days and cleaned up the
messages.
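In other words, what I'm after is something along these lines (a sketch; the
topic name and ZooKeeper address are placeholders, and I'm assuming -1 disables
the size limit while a very large retention.ms effectively disables the
independent time-based limit):

  bin/kafka-topics.sh --zookeeper localhost:2181 --alter --topic my-topic \
    --config retention.bytes=-1 \
    --config retention.ms=9999999999999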
Thanks!
Warren
Hi Gang,
I am testing some of the durability guarantees given by Kafka 0.8.2.1, which
involve min in-sync replicas and disabling unclean leader election.
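Concretely, the kind of setup under test looks roughly like this (example
values only, not my exact settings):

  # broker (or per-topic) settings
  min.insync.replicas=2
  unclean.leader.election.enable=false
  # producer side, since the guarantee only covers acknowledged writes
  request.required.acks=-1    # old Scala producer
  # acks=all is the equivalent on the new 0.8.2 Java producer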
My question is: when will a failed replica, after successfully coming back up,
be included back in the ISR? Is this governed by replica.lag.max.messa
we tested our new application that reads and writes to kafka.
at first we found the access latency is very high. then we realized that
it's because the client and server are in different colos. moving them
together brought the access time down to < 4 ms.
I was wondering if there are any techniqu
we have a single partition, and the topic contains 300k events.
we fired off a camus job and it finished within 1 minute. this is rather fast.
I was guessing that the multiple mappers must be reading from multiple offsets
in parallel, right?
otherwise if they are reading in serial (like in a consumer
Thanks for the reply. I confirmed that broker 1 is registered in ZK.
> Date: Fri, 27 Feb 2015 09:27:52 -0800
> Subject: Re: broker restart problems
> From: j...@confluent.io
> To: users@kafka.apache.org
>
> The log seems to suggest that broker 1 is offline. Is broker 1 registered
> properly i
There used to be a very lucid page available describing Kafka 0.7, its design,
and the rationale behind certain decisions. I last saw it about 18 months ago.
I can't find it now. Is it still available? I can find the 0.8 version, it's up
there on the site.
Any help? Any links?
Philip
--
Wouldn't it be a better choice to store the logs offline somewhere? HDFS and S3
are both good choices...
-Mark
> On Feb 27, 2015, at 16:12, Warren Kiser wrote:
>
> Does anyone know how to achieve unlimited log retention either globally or
> on a per topic basis? I tried explicitly setting the
Perhaps mirror maker is what you want?
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=27846330
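Invocation is along these lines (a sketch; the config file names and topic
whitelist are placeholders):

  bin/kafka-run-class.sh kafka.tools.MirrorMaker \
    --consumer.config remote-colo-consumer.properties \
    --producer.config local-colo-producer.properties \
    --whitelist 'your-topics.*'

The consumer config points at the remote colo and the producer config at the
local cluster, so local consumers read a local copy instead of crossing colos
on every fetch.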
On Friday, February 27, 2015, Yang wrote:
> we tested our new application that reads and writes to kafka.
>
> at first we found the access latency is very high. then we realized that
Kafka on dedicated hosts running in docker under marathon under Mesos. It
was a real bear to get working, but is really beautiful once I did manage
to get it working. I simply run with a unique hostname constraint and
number of instances = replication factor. If a broker dies and it isn't a
hardwar
Hi,
After Kafka cleans up .log / .index files based on topic retention, I can
still lsof a lot of .index.deleted files, and df shows disk space usage
accumulating until the disk is full.
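(To see which deleted files the broker is still holding open, something like
this works; <broker-pid> is a placeholder:)

  lsof -p <broker-pid> | grep '\.deleted'
  # files already unlinked but still held open by the process:
  lsof -p <broker-pid> +L1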
When this happens, just by restarting the broker, it will immediately free
that disk space. It seems to me kafka after cleani
Hi team,
I had a replica node that was shut down improperly due to running out of disk
space. I managed to clean up the disk and restarted the replica, but the
replica has since never caught up with the leader, as shown below:
Topic:test PartitionCount:1 ReplicationFactor:3 Configs:
Topic: test Partition: 0 Leade