Today, following a re-balance operation in our 0.8.0 cluster, we had a
bunch of disks fill up, even though our retention property values are
designed to prevent that level of disk usage.
When we investigated, it appeared the following had occurred:
1. Broker A was in the replica list for partition
Unfortunately, this sounds like a ZooKeeper data corruption issue on the node in
question:
https://issues.apache.org/jira/browse/ZOOKEEPER-1546
The fix from the Jira is to clean out the ZooKeeper data on the affected
node (if that's possible).
On 28 April 2015 at 16:59, Emley, Andrew wrote:
> Hi
>
> I
become active.
>
> Kind regards,
> Stevo Slavic.
>
> On Wed, Apr 29, 2015 at 2:25 PM, David Corley wrote:
>
> > If the 100 partitions are all for the same topic, you can have up to 100
> > consumers working as part of a single consumer group for that topic.
If the 100 partitions are all for the same topic, you can have up to 100
consumers working as part of a single consumer group for that topic.
You cannot have more consumers than there are partitions within a given
consumer group.
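For illustration only, here's a minimal sketch of what a group member looks
like (this uses the newer Java consumer API rather than the 0.8 high-level
consumer of this era; the broker address, group id, and topic name are all
made up):

    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class GroupMemberSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // made-up broker address
            props.put("group.id", "my-group");                // every instance shares this id
            props.put("key.deserializer",
                    "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer",
                    "org.apache.kafka.common.serialization.StringDeserializer");

            // Each process running this main() is one member of the group "my-group".
            // With 100 partitions in the topic, up to 100 members each get at least
            // one partition; any member beyond 100 receives no assignment and sits idle.
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("my-100-partition-topic"));
                while (true) {
                    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                    for (ConsumerRecord<String, String> record : records) {
                        System.out.printf("partition=%d offset=%d value=%s%n",
                                record.partition(), record.offset(), record.value());
                    }
                }
            }
        }
    }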
On 29 April 2015 at 08:41, Nimi Wariboko Jr wrote:
> Hi,
>
> I was
Does the byte retention policy apply to replica partitions or leader
partitions or both?
In a multi-node cluster, with all brokers configured with
different retention policies, it seems obvious that the partitions for
which a given broker is a leader will be subject to the byte retention
Hey all,
We're trying to write some integration tests around a Ruby-based Kafka
client we're developing that leverages both poseidon and poseidon_cluster
gems. We're running Kafka 0.8.0 in a single node config with a single ZK
instance supporting it on the same machine.
The basic test is as follows:
However, I'll check your suggestion on the ZK bypass.
On 23 February 2015 at 17:32, Jun Rao wrote:
> Does the ruby library write to ZK directly to create topics? That will
> bypass the checking on the broker side.
>
> Thanks,
>
> Jun
>
> On Mon, Feb 23, 2015 at 3:06 AM, Da
Hey all,
I'm trying to run some basic error-handling validation with some client
code, and I'm attempting to handle an UnknownTopicOrPartitionException. To
set this scenario up, I wanted to attempt to fetch messages from a topic I
know doesn't exist. To that end, I've got a 3-broker cluster with:
* au
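(The excerpt above is cut off. Purely as an illustrative sketch of handling
that exception -- using the modern Java AdminClient, which did not exist for
the 0.8 cluster described here, and a made-up topic name -- one way to surface
and catch it is:)

    import java.util.Collections;
    import java.util.Properties;
    import java.util.concurrent.ExecutionException;
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.common.errors.UnknownTopicOrPartitionException;

    public class MissingTopicSketch {
        public static void main(String[] args) throws InterruptedException {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // made-up broker address

            try (AdminClient admin = AdminClient.create(props)) {
                try {
                    // "no-such-topic" is a made-up name that should not exist on the cluster.
                    admin.describeTopics(Collections.singleton("no-such-topic")).all().get();
                } catch (ExecutionException e) {
                    if (e.getCause() instanceof UnknownTopicOrPartitionException) {
                        System.out.println("Got the expected error: " + e.getCause().getMessage());
                    } else {
                        throw new RuntimeException("Unexpected failure", e.getCause());
                    }
                }
            }
        }
    }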
See the description of log.retention.bytes here:
https://kafka.apache.org/08/configuration.html
You can set a basic value per log-partition, but you'll need to do some
math to work out an appropriate value based on:
1. The number of partitions per topic
2. The number of topics
3. The capacity of the disks on each broker
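As a rough illustration of that math (all figures below are made-up
placeholders, not recommendations):

    public class RetentionBytesMath {
        public static void main(String[] args) {
            // All figures below are made-up placeholders -- substitute your own numbers.
            long diskCapacityBytes = 2L * 1024 * 1024 * 1024 * 1024; // 2 TiB of log space per broker
            double headroom        = 0.8;  // keep ~20% free for open segments, index files, etc.
            int topics             = 10;
            int partitionsPerTopic = 100;
            int replicationFactor  = 2;
            int brokers            = 3;

            // Rough number of partition logs (leader and replica copies alike) that end up
            // on one broker, assuming partitions are spread evenly across the cluster.
            long logsPerBroker = (long) topics * partitionsPerTopic * replicationFactor / brokers;

            // log.retention.bytes is applied per log (i.e. per partition on disk), so divide
            // the usable capacity by the number of logs each broker is expected to host.
            long retentionBytesPerLog = (long) (diskCapacityBytes * headroom) / logsPerBroker;

            System.out.println("logs per broker     ~= " + logsPerBroker);
            System.out.println("log.retention.bytes ~= " + retentionBytesPerLog);
        }
    }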
Edward, I believe the request log was set to TRACE by default in older
versions of Kafka, but has changed to WARN in newer versions. We had the
same problem as you, and lowered our log level to WARN with no apparent
issues.
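For reference, the change amounts to editing the request logger entry in the
broker's config/log4j.properties; the logger name below is from the stock
file, but the appender name may differ in your setup, so treat this as a
sketch rather than a drop-in config:

    # Lower the request logger from TRACE to WARN.
    log4j.logger.kafka.request.logger=WARN, requestAppender
    log4j.additivity.kafka.request.logger=false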
On Thu, Aug 28, 2014 at 8:49 PM, Edward Capriolo wrote:
> At a certain h
curity LLC
> > http://www.stealth.ly
> > Twitter: @allthingshadoop
> > ****/
> >
> >
> >> On Jan 27, 2014, at 9:54 AM, David Corley wrote:
> >>
> >> Kafka Devs,
> >> Just wondering if there'll be anything in the line of Kafka presentations
> >> and/or tutorials at ApacheCon in Denver in April?
>
Kafka Devs,
Just wondering if there'll be anything in the line of Kafka presentations
and/or tutorials at ApacheCon in Denver in April?