> So my real issue is more with Camus than with cluster
> problems? It seems that Camus won’t consume if it encounters a
> ReplicaNotAvailableException.
>
>
>> On Aug 15, 2015, at 12:02, Clark Haskins wrote:
>>
>> Replica not available is not a fatal exception
Replica not available is not a fatal exception. This simply means that there is
a replica that is down.
If you get Leader not available that means the partition is offline.
-Clark
Sent from my iPhone
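(For reference, one way to see this distinction with the stock tooling;
the topic and host names below are illustrative:)

bin/kafka-topics.sh --describe --zookeeper myhost.mydomain.com:2181 \
  --topic webrequest
# Topic: webrequest  Partition: 0  Leader: 1  Replicas: 1,2,3  Isr: 1,3
# Replica 2 missing from Isr -> ReplicaNotAvailable, but the partition is
# still served. A leader of -1/none would mean the partition is offline.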
> On Aug 15, 2015, at 8:41 AM, Andrew Otto wrote:
>
> Also strange: If I start this broker
ng this message and nothing worked after that.
>
> Again, thank you so much for your time and knowledge. Very much
> appreciated.
> Chris
>
>> On Sun, May 17, 2015 at 2:20 PM, Clark Haskins wrote:
>>
>> No problem.
>>
>> Delete the reassign_partitions znode
>>> was
>>> nothing in the Kafka logs indicating that anything had happened either, so
>>> I'm thinking maybe I need to try and delete the znode partition manually?
>>>
>>> Or, should I have seen the controller znode disappear for a time?
>>>
>>>
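(A sketch of the manual cleanup being discussed, using the stock ZooKeeper
CLI; the host name is illustrative. Only delete the znode once you are sure
no reassignment should still be running:)

bin/zkCli.sh -server myhost.mydomain.com:2181
# then, inside the shell:
delete /admin/reassign_partitions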
> mtime = Wed Jan 21 18:37:40 UTC 2015
> pZxid = 0x6001afde9
> cversion = 0
> dataVersion = 0
> aclVersion = 0
> ephemeralOwner = 0x0
> dataLength = 29
> numChildren = 0
>
> Thank you!
> Chris
>
>> On Sun, May 17, 2015 at 12:58 PM, Clark Haskins wrote:
>>
>>
> []
>
> Hope that is helpful :)
> If this is not what you were asking for, please just let me know.
> Thank you!
> Chris
>
>> On Sun, May 17, 2015 at 12:17 PM, Clark Haskins wrote:
>>
>> The reassign_partitions znode is the important one. Please paste
> Connecting to myhost.mydomain.com:2181
>
> WATCHER::
>
> WatchedEvent state:SyncConnected type:None path:null
> [reassign_partitions, delete_topics]
>
> I'm not sure what this tells me though :)
> Again, thanks for your time.
> Chris
>
>> On Sun, May 17, 2015
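(The ls output above only shows that the znode exists; to see whether a
reassignment is actually pending you would read its contents from the same
zkCli session, e.g.:)

get /admin/reassign_partitions
# a non-empty {"version":1,"partitions":[...]} payload means a
# reassignment is still in flight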
Does the partition reassignment znode exist under /admin in zookeeper?
-Clark
Sent from my iPhone
> On May 16, 2015, at 7:16 PM, Chris Neal wrote:
>
> Sorry for bumping my own thread. :S Just wanted to get it in front of some
> eyes again!
>
> Thanks for your time and help.
> Chris
>
>> On
Yes, it is based on the machines' capacity.
In practice we try to limit partitions to about 50GB to ensure the data is
evenly spread across machines and that the recovery time after a failure is
minimized.
-Clark
Sent from my iPhone
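(As a rough worked example of that 50GB rule of thumb, with hypothetical
numbers: a topic receiving 200GB/day under 7-day retention holds about
1400GB, and 1400GB / 50GB per partition is 28, so you would want on the
order of 30 partitions.)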
> On Apr 3, 2015, at 6:17 AM, Nirmal ram wrote:
>
> Hi,
>
> Wh
> space at the back of the auditorium. At a minimum, there will be standing
> room.
>
>
>
> On 3/24/15, 1:40 PM, "Patrick Lucas" wrote:
>
> >On Mon, Mar 23, 2015 at 1:23 PM, Clark Haskins wrote:
> >>
> >> Just a reminder about the Meetup tomorrow
Hey Everyone –
Just a reminder about the Meetup tomorrow night @ LinkedIn.
There will be 3 talks:
Offset management - 6:35PM - Joel Koshy (LinkedIn)
The Netflix Data Pipeline - 7:05PM - Allen Wang & Steven Wu (Netflix)
Best Practices - 7:50PM - Jay Kreps (Confluent)
If you are interested in attending
Yep! We are growing :)
-Clark
Sent from my iPhone
> On Mar 20, 2015, at 2:14 PM, James Cheng wrote:
>
> Amazing growth numbers.
>
> At the meetup on 1/27, Clark Haskins presented their Kafka usage at the time.
> It was:
>
> Bytes in: 120 TB
> Messages In: 585
Is your application possibly timing out its zookeeper connection during
consumption while doing its processing, thus triggering the rebalance?
-Clark
On 8/6/14, 11:18 PM, "Jason Rosenberg" wrote:
>We've noticed that some of our consumers are more likely to repeatedly
>trigger rebalancing when t
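(For context, the relevant knobs on the 0.8-era high-level consumer are
properties along these lines; the values shown are the usual defaults, not
recommendations:)

zookeeper.session.timeout.ms=6000
zookeeper.connection.timeout.ms=6000
rebalance.max.retries=4
rebalance.backoff.ms=2000
# if processing blocks the consumer longer than the session timeout,
# ZooKeeper expires the session and a rebalance is triggered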
>> right? I run kafka 0.8.2 and it's a difficult job to
>> reassign partitions.
>>
>>
>> On Fri, Jul 25, 2014 at 3:10 PM, Clark Haskins <
>> chask...@linkedin.com.invalid> wrote:
>>
>> > You can have more partitions than machines in the cluster, you cannot
You can have more partitions than machines in the cluster, you cannot
however have a replication factor that is greater than the number of
machines in the cluster.
You could easily have a topic with 100 partitions on a 3 node cluster.
-Clark
Clark Elliott Haskins III
LinkedIn DDS Site Reliability
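(For example, creating such a topic with the stock tooling; the host and
topic names are illustrative:)

bin/kafka-topics.sh --create --zookeeper zkhost:2181 \
  --topic mytopic --partitions 100 --replication-factor 3
# --replication-factor 4 would fail here on a 3-broker cluster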
I assume you ran out of space on your data partitions?
Using the partition-reassignment tool can increase disk usage when using
time-based retention for topics, as a reassignment resets the data files'
modification time.
-Clark
Clark Elliott Haskins III
LinkedIn DDS Site Reliability Engineer
Kafka, Zookeeper, Samza
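(Time-based retention here means the log.retention.* broker settings, e.g.:)

log.retention.hours=168
# a reassigned replica's segments are written fresh on the new broker,
# so they can live up to another full retention period before deletion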
rtition move:
>Suppose that I have the following partitioning plan:
>
>{"version":1,
> "partitions":[{"topic":"foo1","partition":0,"replicas":[1,2,3]}
>}
>
>and I submit this Json file to move partitions
>
>
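(Presumably the JSON above is submitted with something like the following;
the file and host names are illustrative:)

bin/kafka-reassign-partitions.sh --zookeeper zkhost:2181 \
  --reassignment-json-file move-foo1.json --execute
# and later, to check progress:
bin/kafka-reassign-partitions.sh --zookeeper zkhost:2181 \
  --reassignment-json-file move-foo1.json --verify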
I have written such a script. It balances the cluster by the data size on
disk. It is written using lots of internal tools, which is why it's not
open-sourced. I plan to re-write it without the internal tooling.
In terms of leader balancing, when using the partition-reassignment
script, whichever br
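(Not Clark's script, but a crude sketch of the idea: size up each broker's
log directory and move the largest partitions off the fullest brokers; the
host names and log path are hypothetical:)

for h in broker1 broker2 broker3; do
  printf '%s: ' "$h"; ssh "$h" du -sh /var/kafka-logs
done
# then hand-build a reassignment JSON that shifts the biggest partitions
# from the fullest brokers onto the emptiest ones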
>just trying to rebalance partitions to the new node.
>
>I just want to balance it evenly across the whole cluster.
>
>Am I doing wrong?
>
>
>On Thu, Jul 10, 2014 at 9:58 AM, Clark Haskins <
>chask...@linkedin.com.invalid> wrote:
>
>> I am confused as to exactly what you a
>"partition":5,"replicas":[101421743,101461702,101461782]},{"topic":"RTB","partition":45,"replicas":[101421743,101862816,102311671]},{"topic":"B_IMPRESSION","partition":15,"replicas":[101461702,10
> The problem is I do not know if it is a zookeeper issue or if the tool
>> > really failed.
>> >
>> > I faced one time the zookeeper issue (
>> > https://issues.apache.org/jira/browse/KAFKA-1382) and by killing the
>> > responsible Kafka the partition switche
How does it get stuck?
-Clark
Clark Elliott Haskins III
LinkedIn DDS Site Reliability Engineer
Kafka, Zookeeper, Samza SRE
Mobile: 505.385.1484
BlueJeans: https://www.bluejeans.com/chaskins
chask...@linkedin.com
https://www.linkedin.com/in/clarkhaskins
There is no place like 127.0.0.1
On 7/
Yes you can. You can use the partition-reassignment tool and move it to a
smaller number of replicas.
-Clark
Clark Elliott Haskins III
LinkedIn DDS Site Reliability Engineer
Kafka, Zookeeper, Samza SRE
Mobile: 505.385.1484
BlueJeans: https://www.bluejeans.com/chaskins
chask...@linkedin.com
http
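(A sketch of shrinking a partition's replica set this way; the broker IDs,
topic, and host are illustrative:)

cat > decrease-rf.json <<'EOF'
{"version":1,
 "partitions":[{"topic":"foo1","partition":0,"replicas":[1,2]}]}
EOF
bin/kafka-reassign-partitions.sh --zookeeper zkhost:2181 \
  --reassignment-json-file decrease-rf.json --execute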