Also feel free to upload a patch.
>
> On Sat, Dec 27, 2014 at 7:25 PM, Bae, Jae Hyeon
> wrote:
Hi
While I am testing the Kafka Java producer, I saw the following NPE:
SLF4J: Failed toString() invocation on an object of type
[org.apache.kafka.common.Cluster]
java.lang.NullPointerException
at org.apache.kafka.common.PartitionInfo.toString(PartitionInfo.java:72)
at java.lang.String.valueOf(String.
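For what it's worth, a minimal illustrative sketch of the likely failure mode. This is not Kafka's actual PartitionInfo code, just an assumption about it: formatting the leader node without a null check would NPE exactly as in the trace above, and the guard below avoids it.

public final class PartitionInfoSketch {
    private final String topic;
    private final int partition;
    private final Integer leaderId; // null when the partition currently has no leader

    PartitionInfoSketch(String topic, int partition, Integer leaderId) {
        this.topic = topic;
        this.partition = partition;
        this.leaderId = leaderId;
    }

    @Override
    public String toString() {
        // Null-safe: print "none" instead of dereferencing a null leader.
        return String.format("Partition(topic = %s, partition = %d, leader = %s)",
                topic, partition, leaderId == null ? "none" : leaderId.toString());
    }

    public static void main(String[] args) {
        System.out.println(new PartitionInfoSketch("test", 0, null)); // leader = none
    }
}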
        r.class.getName(),
        ConfigDef.Type.CLASS, ConfigDef.Importance.LOW, "");
} catch (Exception e) {
    e.printStackTrace();
}
On Wed, Nov 5, 2014 at 10:48 PM, Bae, Jae Hyeon wrote:
Hi
When I set up
props.put("metric.reporters",
Lists.newArrayList(ServoReporter.class.getName()));
I got the following error:
org.apache.kafka.common.config.ConfigException: Unknown configuration
'com.netflix.suro.sink.kafka.ServoReporter'
at org.apache.kafka.common.config.AbstractConfig.get(Ab
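For reference, a minimal sketch of the intended wiring, assuming an 0.8.2-era new producer. ServoReporter is Netflix-specific and stands in here for any class implementing org.apache.kafka.common.metrics.MetricsReporter; the broker address is made up.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public final class ReporterConfigSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092"); // illustrative address
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.ByteArraySerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.ByteArraySerializer");
        // metric.reporters expects class names of MetricsReporter implementations;
        // a plain comma-separated String is the most portable way to pass them.
        props.put("metric.reporters", "com.netflix.suro.sink.kafka.ServoReporter");

        try (KafkaProducer<byte[], byte[]> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("test", "hello".getBytes()));
        }
    }
}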
Andrew
If you look at BaseProducer.scala, there's the following code snippet:
override def send(topic: String, key: Array[Byte], value: Array[Byte]) {
  val record = new ProducerRecord(topic, key, value)
  if (sync) {
    this.producer.send(record).get()
  } else {
    this.producer.send(record)
  }
}
> Neha
>
> On Wed, Sep 17, 2014 at 11:00 AM, Bae, Jae Hyeon
> wrote:
>
The major motivation for adopting the new producer before it's released is
that the old producer shows terrible throughput for cross-regional Kafka
mirroring in EC2.
Let me share numbers.
Using iperf, the measured network bandwidth between us-west-2 and us-east-1
AWS EC2 is more than 40 MB/sec, but the old producer's
Would you mind sharing your workaround with the community?
>
> On Mon, Sep 15, 2014 at 10:17 PM, Bae, Jae Hyeon
> wrote:
>
> > The above pull request didn't work perfectly. After a bunch of testing
> > experiments, we decided that fixing zkclient itself isn't easy. So we
>
On Sun, Aug 17, 2014 at 11:41 AM, Bae, Jae Hyeon wrote:
Recently we found a serious ZkClient bug, actually an Apache ZooKeeper client
bug, which can bring down brokers and consumers on a ZooKeeper push.
We're running Kafka and ZooKeeper in an AWS EC2 environment. ZooKeeper
instances are bound to EIPs to give each instance a static hostname,
which means even
Hi
Is this intentional? If not, could you release this?
Thank you
Best, Jae
Are you using 0.8? BytesOut will include the traffic for replication. If
you have no consumers and the replication factor is 2, BytesOut should be
exactly double BytesIn.
On Tue, Apr 29, 2014 at 1:26 PM, Arnaud Lawson wrote:
> Hello,
>
> After graphing the cumulative values of Bytesin and Bytesout
check your libstdc++ version
On Tuesday, April 8, 2014, 陈小军 wrote:
>
> Hi all,
>
> I am trying to add a compression feature to the kafka-node js driver; for
> snappy I use the node-snappy library https://github.com/kesla/node-snappy.
> When I test my code, the server always outputs the following error, and I don't know
> If you decide to pause the consumer between two intervals, then it will
> replay data since the last interval.
>
> Thanks,
> Neha
>
>
> On Thu, Mar 27, 2014 at 4:21 PM, Bae, Jae Hyeon
> wrote:
>
When I call consumer.commitOffsets() before killing the session, the unit
test succeeds. This problem happens only with autoCommit enabled.
Could you fix this problem before releasing 0.8.1.1?
Thank you
Best, Jae
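A minimal sketch of the workaround described above, against the 0.8
high-level consumer (property names are the 0.8 ones; the address and group
id are made up, and error handling is omitted):

import java.util.Properties;
import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.javaapi.consumer.ConsumerConnector;

public final class ManualCommitSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("zookeeper.connect", "zk1:2181"); // illustrative address
        props.put("group.id", "example-group");
        props.put("auto.commit.enable", "false");   // no timer-based commits

        ConsumerConnector consumer =
                Consumer.createJavaConsumerConnector(new ConsumerConfig(props));
        // ... create streams, consume and process messages ...
        consumer.commitOffsets(); // checkpoint explicitly once processing is done
        consumer.shutdown();
    }
}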
On Thu, Mar 27, 2014 at 3:57 PM, Bae, Jae Hyeon wrote:
Hi
While testing the Kafka 0.8 consumer's ZK resilience, I found that when the
ZK session is killed and handleNewSession() is called, the high-level
consumer replays messages.
Is this a known issue? I am attaching the unit test source code.
package com.netflix.nfkafka.zktest;
import com.fasterxml.jackson.core.J
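The attached test is cut off here. For reference, one common way such tests
force a session expiration is to open a second ZooKeeper handle with the
victim's session id and password and close it; a sketch (the class and
method names are made up, and it assumes the raw ZooKeeper handle is
reachable):

import org.apache.zookeeper.ZooKeeper;

public final class ZkSessionKiller {
    // Closing a duplicate handle that shares the victim's session makes the
    // server expire that session; the original client then sees Disconnected
    // followed by Expired, which is what drives handleNewSession().
    public static void killSession(ZooKeeper victim, String connectString)
            throws Exception {
        ZooKeeper attacker = new ZooKeeper(connectString,
                victim.getSessionTimeout(),
                event -> { },                // no-op watcher
                victim.getSessionId(),
                victim.getSessionPasswd());
        attacker.close();
    }
}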
Do you have any ETA for 0.8.1.1?
On Tue, Mar 25, 2014 at 9:53 AM, Neha Narkhede wrote:
> You are probably hitting https://issues.apache.org/jira/browse/KAFKA-1317.
> We are trying to fix it in time for 0.8.1.1.
>
> Thanks,
> Neha
>
>
> On Tue, Mar 25, 2014 at 9:45 AM,
t.java:472)
at org.I0Itec.zkclient.ZkEventThread.run(ZkEventThread.java:71)
What did I do wrong in my test environment?
On Tue, Mar 25, 2014 at 9:24 AM, Bae, Jae Hyeon wrote:
Nope, Linux doesn't work either. Let me debug why it's not triggered.
On Tue, Mar 25, 2014 at 9:21 AM, Bae, Jae Hyeon wrote:
> Hm... I cannot reproduce it locally. I downloaded the kafka_2.8.0-0.8.1
> package but it didn't work. Let me try on my Linux machine.
>
>
> On Mon,
> kill -SIGCONT
>
> At this point, you should see the following log message from inside the
> handleNewSession() method -
>
> INFO re-registering broker info in ZK for broker 0
> (kafka.server.KafkaHealthcheck)
>
> Hope that helps.
>
> Thanks,
> Neha
>
>
Hi
On ZooKeeper session timeout, due to a long stop-the-world GC pause or a
ZooKeeper server outage, the ephemeral nodes of the Kafka broker and
consumer should be recreated, but in my test environment handleNewSession()
is not called.
My test scenario is: start a Kafka broker locally and put a brea
> …ephemeral nodes can be lost?
>
> Thanks,
>
> Jun
>
>
> On Thu, Mar 20, 2014 at 9:52 PM, Bae, Jae Hyeon
> wrote:
>
This issue is ZooKeeper resiliency.
What I have done is replace ephemeral node creation with Apache Curator's
PersistentEphemeralNode recipe, to reinstate ephemeral nodes after a
ZooKeeper blip. All watchers should also be reinstated. Kafka internally
only handles the session-expired event but
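A minimal sketch of that approach, assuming Curator 2.x, where the recipe
lives at org.apache.curator.framework.recipes.nodes.PersistentEphemeralNode
(the connect string, path, and payload are made up):

import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.framework.recipes.nodes.PersistentEphemeralNode;
import org.apache.curator.retry.ExponentialBackoffRetry;

public final class PersistentEphemeralSketch {
    public static void main(String[] args) throws Exception {
        CuratorFramework client = CuratorFrameworkFactory.newClient(
                "zk1:2181", new ExponentialBackoffRetry(1000, 3));
        client.start();

        // The recipe watches the node and re-creates it whenever it is lost,
        // e.g. after a session expiration, which plain ephemeral nodes do
        // not survive.
        PersistentEphemeralNode node = new PersistentEphemeralNode(
                client, PersistentEphemeralNode.Mode.EPHEMERAL,
                "/brokers/ids/0", "brokerInfo".getBytes());
        node.start();
    }
}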
Hello
I am having a producer throughput issue, so I am seriously considering using
the shiny new KafkaProducer. Before proceeding, I want to confirm with the
Kafka developers that it's fully stable for production.
Thank you
Best, Jae
Never mind, this was caused by a bug in my Apache Curator-zkclient bridge
module.
On Fri, Mar 7, 2014 at 6:14 PM, Bae, Jae Hyeon wrote:
> I started fresh, but I deployed a working version synced from the
> latest trunk, not a release version.
>
> I j
> …admin commands or steps
> lead you to these errors? This error basically points to an unexpected
> state change, which could be a bug. I'm looking for steps to be able to
> reproduce the bug.
>
> Thanks,
> Neha
>
>
> On Fri, Mar 7, 2014 at 1:59 PM, Bae, Jae Hyeon wrote:
How can I prevent these errors from occurring?
When I created 9 partitions on 3 instances with replication factor 2, I
didn't see any errors. But when I created 36 partitions on 12 instances, I
got these errors:
[2014-03-07 18:19:28,208] ERROR Controller 9 epoch 12 initiated state change
of replica 9 for
> > 2. Create a ~/.gradle/gradle.properties file according to README.md
> > 3. ./gradlew test
> >
> > Only testSendToPartition failed.
> >
> > Guozhang
> >
> >
> > On Sun, Feb 16, 2014 at 8:42 PM, Bae, Jae Hyeon
> > wrote:
> >
> …initialized this partition. This can happen if
> the current controller went into a long
> // GC pause
>
> Thanks,
>
> Jun
>
>
> On Sun, Feb 16, 2014 at 3:10 AM, Bae, Jae Hyeon
> wrote:
>
> > Hi
> >
> > I am getting the following error
> …a clean trunk and retry unit tests?
>
> Guozhang
>
>
> 2014-02-14 22:56 GMT-08:00 Bae, Jae Hyeon:
>
> > - LogOffsetTest.testEmptyLogsGetOffsets
> > - LogOffsetTest.testGetOffsetsBeforeEarliestTime
Hi
I am getting the following errors
state-change.log:[2014-02-16 10:23:20,708] ERROR Controller 0 epoch 2
encountered error while changing partition [request_trace,18]'s state from
New to Online since LeaderAndIsr path already exists with value
{"leader":0,"leader_epoch":0,"isr":[0,1]} and contr
Netflix is using Kafka 0.7 and 0.8 with ZooKeeper 3.4.5; very stable.
On Saturday, February 15, 2014, Todd Palino wrote:
> We're not at the moment, but I'd definitely be interested in hearing your
> results if you do. We're going to be experimenting with the latest version
> soon to evaluate it.
>
> -T
- LogOffsetTest.testEmptyLogsGetOffsets
- LogOffsetTest.testGetOffsetsBeforeEarliestTime
- LogOffsetTest.testGetOffsetsBeforeLatestTime
- LogOffsetTest.testGetOffsetsBeforeNow
- ProducerSendTest.testSendToPartition
failed.
Can I trust trunk? :)
On Mon, Jan 20, 2014 at 10:02 AM, Bae, Jae Hyeon
> wrote:
>
> > Due to the short retention period, I don't have that log segment now.
> >
> > How I am developing Kafka is:
> >
> > I forked apache/kafka into my personal repo and customized it a little
> > bit. I
Best, Jae
On Mon, Jan 20, 2014 at 8:01 AM, Jun Rao wrote:
> Could you use our DumpLogSegment tool on the relevant log segment and see
> if the log is corrupted? Also, are you using the 0.8.0 release?
>
> Thanks,
>
> Jun
>
>
> On Sun, Jan 19, 2014 at 10:09 PM, Bae,
If I want to increase message.max.bytes up to 10 MB *with compression*, are
there any properties I need to sync besides the following two?
- broker's message.max.bytes
- consumer's fetch.message.max.bytes
What about replica.message.max.bytes?
Thank you
Best, Jae
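For reference, a sketch of the size-related settings that typically have to
agree, assuming 0.8-era property names; note the broker-side replication
limit is spelled replica.fetch.max.bytes rather than
replica.message.max.bytes:

import java.util.Properties;

public final class MaxMessageSizeSketch {
    public static void main(String[] args) {
        String tenMb = String.valueOf(10 * 1024 * 1024);

        Properties broker = new Properties();
        broker.put("message.max.bytes", tenMb);       // largest message the broker accepts
        broker.put("replica.fetch.max.bytes", tenMb); // followers must be able to fetch it

        Properties consumer = new Properties();
        consumer.put("fetch.message.max.bytes", tenMb); // consumers must be able to fetch it

        System.out.println(broker + " / " + consumer);
    }
}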
Hello
I finally upgraded Kafka 0.7 to Kafka 0.8, and a few Kafka 0.8 clusters are
being tested now.
Today, I got alerted with the following message:
"data": {
"exceptionMessage": "Found a message larger than the maximum fetch size
of this consumer on topic nf_errors_log partition 0 at fetch
Hi
I know that if the number of Kafka consumers is greater than the number of
partitions in the broker cluster, some consumers will be idle.
My question is: does the number of Kafka consumers mean the number of
Kafka streams?
For example, I have one broker with one partition. What if I
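A sketch against the 0.8 high-level consumer API, illustrating that what
gets balanced over partitions is the number of streams requested per topic,
not the number of connector instances (the address, group id, and topic are
made up):

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;

public final class StreamCountSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("zookeeper.connect", "zk1:2181");
        props.put("group.id", "example-group");
        ConsumerConnector connector =
                Consumer.createJavaConsumerConnector(new ConsumerConfig(props));

        // Ask for 2 streams on a topic that has only 1 partition: both
        // streams are created, but only one will ever own the partition.
        Map<String, Integer> topicCountMap = new HashMap<>();
        topicCountMap.put("my-topic", 2);
        Map<String, List<KafkaStream<byte[], byte[]>>> streams =
                connector.createMessageStreams(topicCountMap);
        System.out.println("streams: " + streams.get("my-topic").size()); // 2
    }
}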
>> …let the consumer drain the data
>> (you can use ConsumerOffsetChecker to check if all data has been consumed).
>> Finally, you can shut down the broker.
>>
>> This will be much easier in 0.8 because of replication.
>>
>> Thanks,
>>
>> Jun
>>
&g
Hi
If I want to terminate a Kafka broker gracefully, before termination it
should stop receiving traffic from producers and wait until all data has
been consumed.
I don't think Kafka 0.7.x supports this feature. If I want to implement
this feature myself, could you give me a brief
My curiosity was resolved.
On Fri, Dec 14, 2012 at 10:20 PM, Bae, Jae Hyeon wrote:
Hi
When one of the broker instances is dead, I can see that the producer
acknowledges the dead broker and refreshes its producer pool so as not to
send data to the dead broker. But I don't find any such code in
ZookeeperConsumerConnector. As I understand it, ZookeeperConsumerConnector
should do syncRebalance when kaf
> …Exposing the Stat object allows us to read the previous version
> of the zookeeper value, and we can use that to write the new value if
> the expected version of the previous value is correct.
>
> Are you using zookeeper client 3.3.4 or older ?
>
> Thanks,
> Neha
>
> On We
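As a sketch of that compare-and-set pattern with the raw ZooKeeper client
(the connect string, path, and payload transformation are made up):

import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.Stat;

public final class ConditionalWriteSketch {
    public static void main(String[] args) throws Exception {
        ZooKeeper zk = new ZooKeeper("zk1:2181", 30000, event -> { });

        Stat stat = new Stat();
        byte[] current = zk.getData("/some/path", false, stat);
        byte[] updated = new String(current).toUpperCase().getBytes(); // stand-in update

        // Conditional write: succeeds only if the node is still at the
        // version we read; a concurrent writer causes a BadVersionException.
        zk.setData("/some/path", updated, stat.getVersion());
        zk.close();
    }
}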
Could you share what changed in zkclient-20120522.jar?
I found that watchers were canceled when the ZooKeeper connection was
interrupted and reconnected with another application. If the new
zkclient-20120522 resolved this issue, I need to update this library
in my other projects.
I really appreci