virtualized kafka

2015-08-31 Thread allen chan
I am currently using Elasticsearch (the ELK stack), with Redis as the current broker. I want to move to a distributed broker to make that layer more highly available. Currently exploring Kafka as a replacement. I have a few questions: 1. I read that Kafka is designed to write contents to disk and this

port already in use error when trying to add topic

2015-09-11 Thread allen chan
Hi all, First time testing Kafka with a brand new cluster. Running into an issue that I do not understand. The server started up fine but I get an error when trying to create a topic. *[achan@server1 ~]$ ps -ef | grep -i kafka* *root 6507 1 0 15:42 ?00:00:00 sudo /opt/kafka_2.10-0.8.2.

Re: port already in use error when trying to add topic

2015-09-13 Thread allen chan
Changing the port to 9998 did not help. Still the same error occurred On Sat, Sep 12, 2015 at 12:27 AM, Foo Lim wrote: > Try throwing > > JMX_PORT=9998 > > In front of the command. Anything other than 9994 > > Foo > > On Friday, September 11, 2015, allen

Re: port already in use error when trying to add topic

2015-09-14 Thread allen chan
After completely disabling the JMX settings, I was able to create topics. Seems like there is an issue with using JMX with the product. Should I file a bug? On Sun, Sep 13, 2015 at 9:07 PM, allen chan wrote: > Changing the port to 9998 did not help. Still the same error occurred > > On Sa

Re: port already in use error when trying to add topic

2015-09-15 Thread allen chan
a-server-start.sh instead and run > kafka-topics.sh using a separate terminal or user account. Also, google > search "linux environment variables." You could also just run > kafka-topics.sh from a separate host, such as your workstation, so long as > it can see zookeeper:2181. &

log.retention.hours not working?

2015-09-21 Thread allen chan
Hi, Just brought up a new Kafka cluster for testing. Was able to use the console producer to send 1k of logs and received them on the console consumer side. The one issue that I have right now is that the retention period does not seem to be working. *# The minimum age of a log file to be eligible

Re: log.retention.hours not working?

2015-09-21 Thread allen chan
rt deleting > old logs. > > On Mon, Sep 21, 2015 at 8:58 PM allen chan > wrote: > > > Hi, > > > > Just brought up new kafka cluster for testing. > > Was able to use the console producers to send 1k of logs and received it > on > > the console consumer s
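A common explanation for this symptom is that time-based retention only removes closed log segments, so a low-traffic topic may hold data past the retention window until its active segment rolls. A minimal broker-side sketch (real 0.8.x property names; the values here are illustrative, not the thread's actual settings):

```properties
# server.properties — delete log segments older than one hour
log.retention.hours=1
# how often the broker checks for segments eligible for deletion (ms)
log.retention.check.interval.ms=300000
# retention acts on closed segments only; smaller segments roll
# (and therefore become deletable) sooner
log.segment.bytes=536870912
```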

consumer offset tool and JMX metrics do not match

2015-11-13 Thread allen chan
Hi All, I am comparing the output from kafka.tools.ConsumerOffsetChecker vs JMX (kafka.server:type=FetcherLagMetrics,name=ConsumerLag,clientId=logstash,topic=logstash_fdm,partition=*) and they do not match. ConsumerOffsetChecker is showing ~60 Lag per partition and JMX shows 0 for all partitions.

Re: consumer offset tool and JMX metrics do not match

2015-11-13 Thread allen chan
I also looked at this metric in JMX and it is also 0 *kafka.consumer:type=ConsumerFetcherManager,name=MaxLag,clientId=logstash* On Fri, Nov 13, 2015 at 4:06 PM, allen chan wrote: > Hi All, > > I am comparing the output from kafka.tools.ConsumerOffsetChecker vs JMX > (kafka

Re: consumer offset tool and JMX metrics do not match

2015-11-14 Thread allen chan
sumption and until you see this issue ? > > Thanks, > Prabhjot > > > > On Sat, Nov 14, 2015 at 5:53 AM, allen chan > wrote: > > > I also looked at this metric in JMX and it is also 0 > > > *kafka.consumer:type=ConsumerFetcherManager,name=MaxLag,clientId=lo

Re: consumer offset tool and JMX metrics do not match

2015-11-16 Thread allen chan
the *committed* offsets > > > > When the Lag value in the Kafka consumer JMX is high (for example 5M), > > ConsumerOffsetChecker shows a matching number. > > > > I am running kafka_2.10-0.8.2.1 > > > > Osama > > > > -Original Message- &

Re: consumer offset tool and JMX metrics do not match

2015-11-16 Thread allen chan
According to the documentation, offsets by default are committed every 10 secs. Shouldn't that be frequent enough for JMX to be accurate? auto.commit.interval.ms is the frequency at which consumed offsets are committed to ZooKeeper. On Mon, Nov 16, 2015 at 3:31 PM, allen chan wrote: > So

Re: consumer offset tool and JMX metrics do not match

2015-11-19 Thread allen chan
Can anyone help me understand this? On Mon, Nov 16, 2015 at 11:21 PM, allen chan wrote: > According to the documentation, offsets by default are committed every 10 > secs. Shouldn't that be frequent enough for JMX to be accurate? > > auto.commit.interval.ms is the frequency that

Re: consumer offset tool and JMX metrics do not match

2015-11-21 Thread allen chan
s where consumer lag was not > reported correctly. > > Regards, > Prabhjot > > On Sun, Nov 15, 2015 at 7:04 AM, allen chan > wrote: > > > I believe producers / brokers / and consumers has been restarted at > > different times. > > What do you think the issu
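The mismatch discussed in this thread hinges on the commit interval: ConsumerOffsetChecker reads *committed* offsets from ZooKeeper, while the JMX lag gauges track the consumer's in-memory position. A sketch of the relevant old high-level consumer settings (defaults per the thread and the 0.8.x docs):

```properties
# 0.8.x high-level consumer
auto.commit.enable=true
# consumed offsets are written to ZooKeeper at this interval, so the
# committed offset (what ConsumerOffsetChecker reads) can trail the
# consumer's actual position by up to ~10 s of consumption
auto.commit.interval.ms=10000
```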

BrokerState JMX Metric

2015-12-03 Thread allen chan
Hi all Does anyone have info about this JMX metric kafka.server:type=KafkaServer,name=BrokerState, or what the number values mean? -- Allen Michael Chan

Re: BrokerState JMX Metric

2015-12-06 Thread allen chan
> On Thu, Dec 3, 2015 at 7:20 PM, allen chan > wrote: > > > Hi all > > > > Does anyone have info about this JMX metric > > kafka.server:type=KafkaServer,name=BrokerState or what the number > > values mean? > > > > -- > > Allen Michael Chan > > > -- Allen Michael Chan

0.9 consumer beta?

2015-12-22 Thread allen chan
In the documentation it says the new consumer is considered beta quality. I cannot find what is beta about it. Stability? Performance? Can someone clarify? 3.3.2 New Consumer Configs Since 0.9.0.0 we have been working on a replacement f

Questions from new user

2016-01-29 Thread allen chan
attention. Allen Chan

Re: Regarding issue in Kafka-0.8.2.2.3

2016-02-08 Thread allen chan
I export my JMX_PORT setting in the kafka-server-start.sh script and have not run into any issues yet. On Mon, Feb 8, 2016 at 9:01 AM, Manikumar Reddy wrote: > kafka scripts uses "kafka-run-class.sh" script to set environment variables > and run classes. So if you set any environment variable >
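The workaround described above can be sketched as a one-line addition to the startup script (the port number here is illustrative, not from the thread):

```shell
# Added near the top of kafka-server-start.sh: give the broker a fixed
# JMX port so CLI tools launched from other shells don't inherit or
# collide with it
export JMX_PORT=9999
```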

Re: Questions from new user

2016-02-16 Thread allen chan
Hi, can anyone help with this? On Fri, Jan 29, 2016 at 11:50 PM, allen chan wrote: > Use case: We are using Kafka as the broker in one of our Elasticsearch > clusters. Kafka caches the logs if Elasticsearch has any performance > issues. I have Kafka set to delete logs pretty quickly to ke

new consumer still classified as beta in 0.9.0.1?

2016-02-19 Thread allen chan
My company is waiting for the new consumer to move out of "beta" mode before using it. Does anyone know if it is still considered beta? -- Allen Michael Chan

Re: Kafka 2.11-0.9.0.1 - Performance Run

2016-02-22 Thread allen chan
What kind of performance tests are you looking for? I am currently using kafka-producer-perf-test.sh to benchmark my brokers. There is a consumer version as well. On Mon, Feb 22, 2016 at 8:05 PM, Harihara Subramaniam < subramaniam.harih...@gmail.com> wrote: > Hi, > > How do we run Performance Tes

kafka-consumer-perf.sh

2016-02-22 Thread allen chan
Something I do not understand about this perf-test tool. 1. The legend shows 5 columns but the data shows 6 columns. I am assuming the 0 column is the one that is throwing everything off? 2. Does nMsg.sec = number of messages consumed per second? [bin]$ sudo ./kafka-consumer-perf-test.sh --group

Partition reassignment data file is empty

2017-12-30 Thread allen chan
Hello Kafka Version: 0.11.0.1 I am trying to increase the replication factor for a topic and I am getting the below error. Can anyone help explain what the error means? The json is not empty $ cat increase-replication-factor.json {"version":1, "partitions":[ {"topic":"metrics","partition":0,"

Re: Partition reassignment data file is empty

2017-12-31 Thread allen chan
ics\",\"partition\":1,\"replicas\" > :[2,3]},]}"); > > > partitionsToBeReassigned was empty. > > I think parsePartitionReassignmentData() should be improved to give better > error information. > > > FYI > > On Sun, Dec 31, 2017 at 4:51 PM, Brett Rann > w
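The parse failure quoted in this reply is consistent with a trailing comma in the reassignment JSON, which strict JSON parsers reject. A small illustration (the topic and replica values are made up, mirroring the thread's examples):

```python
import json

# Well-formed reassignment file content
good = '{"version":1,"partitions":[{"topic":"metrics","partition":0,"replicas":[1,2]}]}'

# The same shape with a trailing comma after the last partition entry
bad = '{"version":1,"partitions":[{"topic":"metrics","partition":1,"replicas":[2,3]},]}'

json.loads(good)  # parses cleanly
try:
    json.loads(bad)
    parsed = True
except json.JSONDecodeError:
    parsed = False
print("bad file parsed:", parsed)  # the trailing comma makes parsing fail
```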

two questions

2016-03-20 Thread allen chan
1) I am using the upgrade instructions to upgrade from 0.8 to 0.9. Can someone tell me if I need to continue to bump the inter.broker.protocol.version after each upgrade? Currently the broker code is 0.9.0.1 but I have the config file listing inter.broker.protocol.version=0.9.0.0 2) Is it possi
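The documented two-phase rolling upgrade behind question 1 can be sketched as follows (versions follow the thread's 0.8 → 0.9 example):

```properties
# Phase 1: run the new 0.9 binaries but keep speaking the old protocol
# until every broker has been upgraded
inter.broker.protocol.version=0.8.2.X

# Phase 2: once all brokers run 0.9 code, bump the version and perform
# a second rolling restart
# inter.broker.protocol.version=0.9.0.0
```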

consumer offsets not updating

2016-05-06 Thread allen chan
Brokers: 0.9.0.1 Consumers: 0.8.2.2 In the normal situation my monitoring system runs the consumer groups tool to check consumer offsets. Example: [ac...@ekk001.atl kafka]$ sudo /opt/kafka/kafka_2.11-0.9.0.1/bin/kafka-consumer-groups.sh --zookeeper ekz003.atl:2181 --describe --group indexers GROU

KAFKA-3470: treat commits as member heartbeats #1206

2016-05-21 Thread allen chan
Hi, Does anyone know if this is a broker-side implementation or consumer-side? We deal with long processing times of polls that cause rebalances and this should fix our problem. We will be upgrading our brokers to the 0.10.x branch long before upgrading the consumers so just wanted to email this

Re: KAFKA-3470: treat commits as member heartbeats #1206

2016-05-22 Thread allen chan
Thank you for confirming! On Sunday, May 22, 2016, Guozhang Wang wrote: > Hello, > > KAFKA-3470 is a mainly a broker-side change, which handles the commit > request to also "reset" the timer for heartbeat as well. > > Guozhang > > On Sat, May 21, 2016 at 4:02 P

kafka-consumer-group.sh failed on 0.10.0 but works on 0.9.0.1

2016-05-24 Thread allen chan
I upgraded one of my brokers to 0.10.0. I followed the upgrade guide and added these to my server.properties: inter.broker.protocol.version=0.9.0.1 log.message.format.version=0.9.0.1 When checking the lag I get this error. [ac...@ekk001.scl ~]$ sudo /opt/kafka/kafka_2.11-0.10.0.0/bin/kafka-con

Re: kafka-consumer-group.sh failed on 0.10.0 but works on 0.9.0.1

2016-05-24 Thread allen chan
s, > Jason > > On Tue, May 24, 2016 at 5:21 PM, allen chan > wrote: > > > I upgraded one of my brokers to 0.10.0. I followed the upgrade guide and > > added these to my server.properties: > > > > inter.broker.protocol.version=0.9.0.1 > > log.message.for

Re: kafka-consumer-group.sh failed on 0.10.0 but works on 0.9.0.1

2016-05-24 Thread allen chan
.sh script from 0.9 until all the brokers have been > upgraded. > > -Jason > > On Tue, May 24, 2016 at 6:31 PM, tao xiao wrote: > > > I am pretty sure consumer-group.sh uses tools-log4j.properties > > > > On Tue, 24 May 2016 at 17:59 allen chan > > wro

broker randomly shuts down

2016-06-01 Thread allen chan
I have an issue where my brokers randomly shut themselves down. I turned on debug in log4j.properties but still do not see a reason why the shutdown is happening. Anyone seen this behavior before? version 0.10.0 log4j.properties log4j.rootLogger=DEBUG, kafkaAppender * I tried TRACE level bu

Re: broker randomly shuts down

2016-06-02 Thread allen chan
tty easy to find in > /var/log/syslog (depending on your setup). I don't know about other > operating systems. > > On Thu, Jun 2, 2016 at 5:54 AM, allen chan > wrote: > > > I have an issue where my brokers would randomly shut itself down. > > I turned on

concept of record vs request vs batch

2016-06-13 Thread allen chan
In JMX for the Kafka producer there are metrics for request, record, and batch size Max + Avg. What is the difference between these concepts? In the logging use case: I assume record is a single log line, batch is multiple log lines together, and request is the batch wrapped with the metadata t

Re: concept of record vs request vs batch

2016-06-14 Thread allen chan
olr & Elasticsearch Consulting Support Training - http://sematext.com/ > > > On Mon, Jun 13, 2016 at 4:43 PM, allen chan > wrote: > > > In JMX for Kafka producer there are metrics for both request, record, and > > batch size Max + Avg. > > > > What i

Re: concept of record vs request vs batch

2016-06-16 Thread allen chan
Can anyone help with this question? On Tue, Jun 14, 2016 at 1:45 PM, allen chan wrote: > Thanks for answer Otis. > The producer that i use (Logstash) does not track message sizes. > > I already loaded all the metrics from JMX into my monitoring system. > I just need to confirm t
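For reference, the producer-side sizing settings behind these three metrics relate roughly as record ≤ batch ≤ request. A sketch using the 0.9/0.10 producer config names (defaults shown; values illustrative):

```properties
# a batch accumulates records destined for one partition, up to this
# many bytes
batch.size=16384
# a single request to a broker can bundle several batches (one per
# partition led by that broker), bounded by this size
max.request.size=1048576
# how long the producer waits to fill a batch before sending
linger.ms=0
```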

Re: broker randomly shuts down

2016-06-30 Thread allen chan
esg? I have run into this issue and it was the OOM > killer. I also ran into a heap issue using too much of the direct memory > (JVM). Reducing the fetcher threads helped with that problem. > On Jun 2, 2016 12:19 PM, "allen chan" > wrote: > > > Hi Tom, > > &g

Re: broker randomly shuts down

2016-06-30 Thread allen chan
k for the stderr. > > On Thu, Jun 30, 2016 at 5:07 PM allen chan > wrote: > > > Anyone else have ideas? > > > > This is still happening. I moved off zookeeper from the server to its own > > dedicated VMs. > > Kafka starts with 4G of heap and gets nowhere n

Re: Latest Logstash 7.8 and compatibility with latest Kafka 2.5.0

2020-07-06 Thread allen chan
The best approach is to read the changelog of the plugin https://github.com/logstash-plugins/logstash-integration-kafka/blob/master/CHANGELOG.md they are up to 2.4.1 per the 10.1.0 notes and you have to see what version is packaged with the release. If it is not the right version, you need to use automation or man

Re: [ANNOUCE] Apache Kafka 0.10.1.1 Released

2016-12-23 Thread allen chan
From what I can tell, it looks like the main Kafka website is not updated with this release. The download page shows 0.10.1.0 as the latest release. The above link for release notes does not work either. Not Found The requested URL /dist/kafka/0.10.1.1/RELEASE_NOTES.html was not found on this server. O