Hi, all
I know Kafka automatically exposes MBeans to JMX, but it seems Storm doesn't. I
wonder if anyone has experience using JConsole to read Storm's built-in
metrics through MBeans, or will I have to write a separate metrics consumer to
register the metrics as MBeans? Is there such source code available
Hi, All
I have a 9-node cluster where I have already installed Cloudera Hadoop/Spark, and
now I want to install Kafka on this cluster too. Is it a good idea to install
Kafka on each of the 9 nodes? If so, are there any potential risks?
I am also thinking of installing Cassandra on each of these nodes too; basically
Hi, Guozhang
I re-installed Gradle and it works now, thanks a lot.
SL
> On Dec 9, 2015, at 3:47 PM, Guozhang Wang wrote:
>
> Sa,
>
> Which command line did you use under what path?
>
> Guozhang
>
> On Wed, Dec 9, 2015 at 1:57 PM, Sa Li wrote:
>
>> Hi,
Hi, All
I am getting the following error when building Kafka:
* Where:
Build file '/usr/local/kafka/build.gradle' line: 164
* What went wrong:
A problem occurred evaluating root project 'kafka'.
> Could not find property 'ScalaPlugin' on project ':clients'.
I tried to search online, but can't even find a solution
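In case it helps, the sequence that normally works for a source checkout is to
bootstrap the Gradle wrapper with a standalone Gradle first and then build with
the wrapper; errors like the one above often point to a skipped bootstrap step or
a Gradle installation that is too old:

cd /usr/local/kafka
gradle          # bootstraps gradle/wrapper using the locally installed Gradle
./gradlew jar   # builds the jars with the wrapper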
Hi, All
I have a question about the kafka-python producer; here is the record I have:
id (uuid) | sensor_id (character) | timestamp | period (int) | current
(numeric) | date_received | factor (bigint)
"75da661c-bd5c-40e3-8691-9034f34262e3" | "ff0057" | "2013-03-21
11:44:00-07" | 60 |
Hi, All
I have experience setting up a Kafka cluster on physical servers. Currently I
have set up two VMs and fire up one broker on each VM (brokers 0 and 2). I
created a topic test-rep-1:
Topic: test-rep-1    PartitionCount: 2    ReplicationFactor: 1    Configs:
Topic: test-rep-1
Hi, All
My dev cluster has three nodes (1, 2, 3), but I've seen quite often that
node 1 just does not work as a leader. I have run preferred-replica-election many
times; every time I run the election, I see node 1 become the leader for
some partitions, but it just stops being the leader after a while, and th
Hi, All
I'd like to use kafka-web-console to monitor offsets and topics. It
is easy to use; however, it is freezing, stopping, or dying too frequently.
I don't think it's a problem at the OS level;
it seems to be a problem at the application level.
I've already raised the open file handle limit to 98000
Hello, all
I've recently played around with different Kafka monitoring tools and have the
following set up on my DEV box:
1. graphite + statsD
2. kafka-web-console
3. JMX + jconsole
4. kafkaOffsetMonitor
5. Kafka Manager (Yahoo just open-sourced it)
They all work fine locally on my dev box, but I just do not want to install
them on my production server. How can I install all of them in a VM that
remotely connects to production? It seems I can't find a config that allows
me to do this; is there no such out-of-the-box feature?
thanks
AL
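For what it's worth, the offset/topic tools (kafka-web-console, KafkaOffsetMonitor,
Kafka Manager) only need to reach ZooKeeper (2181) and the brokers (9092) from the
VM, so they can run anywhere with network access. For the JMX-based tools the
brokers have to expose JMX remotely; a rough sketch of the usual broker-side
settings (the port and IP are placeholders, and the firewall must allow the
JMX/RMI ports):

export JMX_PORT=9999
export KAFKA_JMX_OPTS="-Dcom.sun.management.jmxremote \
  -Dcom.sun.management.jmxremote.authenticate=false \
  -Dcom.sun.management.jmxremote.ssl=false \
  -Djava.rmi.server.hostname=10.100.98.100"
bin/kafka-server-start.sh config/server.properties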
On Fri, Jan 9, 2015 at 6:32 PM, Sa Li wrote:
> T
On Tue, Feb 3, 2015 at 10:32 AM, Sa Li wrote:
>
> > GuoZhang
> >
> > Sorry for leaving this topic for a while; I am still not clear on how to
> > commit the offset to ZK from the command line. I tried this:
> >
> > bin/kafka-console-consumer.sh --zookeeper 10.100.7
Good idea, Joel, will do it now. Thanks
AL
On Tue, Feb 3, 2015 at 2:12 PM, Joel Koshy wrote:
> Can you contact the maintainer directly?
> http://github.com/claudemamo/kafka-web-console/issues
>
> On Tue, Feb 03, 2015 at 12:09:46PM -0800, Sa Li wrote:
> > Hi, All
> >
Hi, All
I am currently using kafka-web-console to monitor the Kafka system. It goes
down regularly, so I have to restart it every few hours, which is kind of
annoying. I downloaded two versions:
https://github.com/claudemamo/kafka-web-console
http://mungeol-heo.blogspot.ca/2014/12/kafka-web-console.ht
57 PM, Guozhang Wang wrote:
> It seems not the latest version of Kafka, which version are you using?
>
> On Tue, Jan 20, 2015 at 9:46 AM, Sa Li wrote:
>
> > Guozhang
> >
> > Thank you very much for reply, here I print out the
> > kafka-console-consumer.sh help
Hi, All
I send messages from one VM to production, but I am getting the following error:
[2015-01-30 18:43:44,810] WARN Failed to send producer request with
correlation id 126 to broker 101 with data for partitions
[test-rep-three,5],[test-rep-three,2]
(kafka.producer.async.DefaultEventHandler)
java.nio.channel
KAFKA-1888 <https://issues.apache.org/jira/browse/KAFKA-1888>
>
> So if you have new observations while using the package or if you are
> willing to contribute to those tickets, you are most welcome.
>
> Guozhang
>
>
> On Thu, Jan 22, 2015 at 3:02 PM, Sa Li wrote:
>
/cluster_config.json
?
Thanks
AL
On Fri, Jan 23, 2015 at 1:39 PM, Sa Li wrote:
> Thanks for reply. Ewen, pertaining to your statement "... hostname setting
> being a list instead of a single host," are you saying entity_id 1 or 0,
>
> "entity_id&quo
y're running on since it
> requires manually editing all those files. The patch gets rid of
> cluster_config.json and provides a couple of different ways of configuring
> the cluster -- run everything on localhost, get cluster info from a single
> json file, or get the ssh i
Hi, All
From my last ticket (Subject: kafka production server test), Guozhang
kindly pointed me to the system test package that comes with the Kafka source
build, which is a really cool package. I took a look at it; things are clear if
I run it on localhost, I don't need to change anything, say,
cluster_
Hi, Guozhang
Can I run this package remotely to test another server? That is, can I run this
package on dev but test the Kafka system on production?
thanks
AL
On Thu, Jan 22, 2015 at 2:55 PM, Sa Li wrote:
> Hi, Guozhang,
>
> Good to know such package, will try it now. :-)
>
> th
>
> On Thu, Jan 22, 2015 at 12:00 PM, Sa Li wrote:
>
> > Hi, All
> >
> > We are about to deliver kafka production server, I have been working on
> > different test, like performance test from linkedin. This is a 3-node
> > cluster, with 5 nodes zkEnsem
Hi, All
We are about to deliver a Kafka production server, and I have been working on
different tests, like the performance test from LinkedIn. This is a 3-node
cluster with a 5-node ZK ensemble. I assume there are lots of tests I need
to do, like network, node failure, flush time, etc. Is there a complete
o
> list all the properties.
>
> Guozhang
>
> On Mon, Jan 19, 2015 at 5:15 PM, Sa Li wrote:
>
> > Guozhang,
> >
> > Currently we are at the stage of testing the producer; our C# producer is
> > sending data to brokers, and we use
> >
> > bin/kafka-ru
h will be created.
>
> Guozhang
>
> On Mon, Jan 19, 2015 at 3:58 PM, Sa Li wrote:
>
> > Hi,
> >
> > I use such tool
> >
> > Consumer Offset Checker
> >
> > Displays the: Consumer Group, Topic, Partitions, Offset, logSize, Lag,
> > Own
Hi,
I use the following tool:
Consumer Offset Checker
Displays the: Consumer Group, Topic, Partitions, Offset, logSize, Lag,
Owner for the specified set of Topics and Consumer Group
bin/kafka-run-class.sh kafka.tools.ConsumerOffsetChecker
To find out the consumer group, in zkCli.sh:
[zk: localhos
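For reference, a minimal sequence (the group and topic names below are placeholders):

# in zkCli.sh, the registered consumer groups show up under /consumers
ls /consumers

# then check offsets and lag for one group
bin/kafka-run-class.sh kafka.tools.ConsumerOffsetChecker \
    --zkconnect 10.100.98.100:2181 --group my-group --topic my-topic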
itter: @allthingshadoop <http://www.twitter.com/allthingshadoop>
> ********/
>
> On Fri, Jan 2, 2015 at 7:41 PM, Sa Li wrote:
>
> > Hi, all
> >
> > I am running kafka-web-console, I periodically getting such error and
> cause
> > the UI d
ers disconnection took
place or not; how can I check that?
thanks
AL
On Sun, Jan 18, 2015 at 10:21 AM, Jun Rao wrote:
> Any issue with the network?
>
> Thanks,
>
> Jun
>
> On Wed, Jan 7, 2015 at 1:59 PM, Sa Li wrote:
>
> > Things bother me, sometimes, the errors wo
Thanks for the reply. I have changed the configuration and am running it to see if
any errors come up.
SL
On Thu, Jan 15, 2015 at 3:34 PM, István wrote:
> Hi Sa Li,
>
> Depending on your system that configuration entry needs to be modified. The
> first parameter after the insert is the u
Hi, all
We are testing our production Kafka cluster and getting the following error:
[2015-01-15 19:03:45,057] ERROR Error in acceptor (kafka.network.Acceptor)
java.io.IOException: Too many open files
at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
at sun.nio.ch.ServerSocketChannelImpl.accep
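A quick way to see where the broker stands, and the usual fix, assuming the broker
runs as user kafka (the user name and limit are placeholders; the broker must be
restarted from a session that picks up the new limit):

# current usage and limit of the running broker
ls /proc/<kafka-pid>/fd | wc -l
grep 'open files' /proc/<kafka-pid>/limits

# /etc/security/limits.conf
kafka   soft   nofile   98304
kafka   hard   nofile   98304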
Hello, Kafka experts
I have a production cluster which has three nodes (.100, .101, .102). I am
using a C# producer to publish data to the Kafka brokers; it works for a while
but then starts losing the connection to 2 nodes of the cluster. Here is the C#
producer error:
[2015-01-13 01:49:49,786] ERROR
[Cons
> Joe Stein
> Founder, Principal Consultant
> Big Data Open Source Security LLC
> http://www.stealth.ly
> Twitter: @allthingshadoop <http://www.twitter.com/allthingshadoop>
> /
>
> On Fri, Jan 9,
Hello, all
I'd like to use the tool metrics-kafka, which seems attractive for reporting
Kafka metrics, together with Graphite to graph them; however, I am having trouble
making it work.
In https://github.com/stealthly/metrics-kafka, it says:
In the main metrics-kafka folder
1) sudo ./bootstrap.sh 2)
Worked, thanks
On Fri, Jan 9, 2015 at 10:37 AM, Sa Li wrote:
> Hi, I went through zkServer.sh and made changes to
> /etc/zookeeper/conf/environment
>
> ZOOMAIN="-Dcom.sun.management.jmxremote=true
> -Dcom.sun.management.jmxremote.local.only=false
> -Dcom.sun.management
onf/zoo.cfg
Starting zookeeper ... STARTED
But when I try to connect to it with jconsole at 10.100.70.128:2, it fails
to connect. Is there a way to confirm that the jmxremote port is 2?
thanks
AL
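One way to confirm which ports the ZooKeeper JVM is actually listening on (the pid
is a placeholder):

ps aux | grep QuorumPeerMain          # find the ZooKeeper pid
sudo netstat -tlnp | grep <zk-pid>    # or: sudo lsof -Pan -p <zk-pid> -iTCP -sTCP:LISTEN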
On Thu, Jan 8, 2015 at 4:02 PM, Sa Li wrote:
>
> Hi, all
>
> I've just figured out the mo
Hi, all
I've just figured out monitoring Kafka with jconsole, and I want to do the
same thing for ZooKeeper. The ZooKeeper site says "The class
*org.apache.zookeeper.server.quorum.QuorumPeerMain* will start a JMX
manageable ZooKeeper server. This class registers the proper MBeans during
initialization
In addition, I found all the attributes in the jconsole MBeans are cool, but
they are not graphed, so again, if I want to view real-time graphs,
is jmxtrans + Graphite the solution?
thanks
AL
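For the graphing part, a minimal jmxtrans query that pushes one broker MBean to
Graphite might look like the following (hosts, ports, and the MBean name are
placeholders; MBean naming differs between Kafka versions, so it is worth copying
the exact name from jconsole):

{
  "servers": [{
    "host": "10.100.98.100",
    "port": "9999",
    "queries": [{
      "obj": "kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec",
      "attr": ["Count", "OneMinuteRate"],
      "outputWriters": [{
        "@class": "com.googlecode.jmxtrans.model.output.GraphiteWriter",
        "settings": { "host": "graphite-host", "port": 2003 }
      }]
    }]
  }]
}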
On Thu, Jan 8, 2015 at 1:35 PM, Sa Li wrote:
> Thank you very much for all the reply, I am able
ROD environments? If so you will
> need to open access on all ports, not just JMX port.
>
> It gets complicated with JMX.
>
> Gene Robichaux
> Manager, Database Operations
> Match.com
> 8300 Douglas Avenue I Suite 800 I Dallas, TX 75225
>
> -----Original Messa
:
> There are different ways to find the connection count and each one depends
> on the operating system that's being used. "lsof -i" is one option, for
> example, on *nix systems.
>
> -Jaikiran
>
> On Thursday 08 January 2015 11:40 AM, Sa Li wrote:
>
Hello, All
I understand many of you are using jmxtrans along with Graphite/Ganglia to
pull out metrics. According to https://kafka.apache.org/081/ops.html, the docs
say "The easiest way to see the available metrics is to fire up jconsole and
point it at a running kafka client or server; this will all bro
Yes, it is a weird hostname ;) that is what our system guys named it. How can I
measure the number of connections open to 10.100.98.102?
Thanks
AL
On Jan 7, 2015 9:42 PM, "Jaikiran Pai" wrote:
> On Thursday 08 January 2015 01:51 AM, Sa Li wrote:
>
>> see this type o
Hi, All
I installed jmxtrans and Graphite, wishing to graph stuff from
Kafka, but when I first start jmxtrans I get the following errors (I use the
example graphite.json):
./jmxtrans.sh start graphite.json
[07 Jan 2015 17:55:58] [ServerScheduler_Worker-4] 180214 DEBUG
(com.googlecode.jmxt
The thing that bothers me is that sometimes the errors don't pop up and sometimes
they do; why?
On Wed, Jan 7, 2015 at 1:49 PM, Sa Li wrote:
>
> Hi, Experts
>
> Our cluster is a 3 nodes cluster, I simply test producer locally, see
>
> bin/kafka-run-class.sh org.apache.kafka.clients.to
Hi, Experts
Our cluster is a 3-node cluster, and I am simply testing the producer locally, see:
bin/kafka-run-class.sh org.apache.kafka.clients.tools.ProducerPerformance
test-rep-three 100 3000 -1 acks=1 bootstrap.servers=10.100.98.100:9092
buffer.memory=67108864 batch.size=8196
But I got the following error; I do
?
thanks
On Wed, Jan 7, 2015 at 12:21 PM, Sa Li wrote:
> see this type of error again, back to normal in few secs
>
> [2015-01-07 20:19:49,744] WARN Error in I/O with harmful-jar.master/
> 10.100.98.102 (org.apache.kafka.common.network.Selector)
> java.net.ConnectException: Con
avg
latency, 3858.0 max latency.
On Wed, Jan 7, 2015 at 12:07 PM, Sa Li wrote:
> Hi, All
>
> I am doing performance test by
>
> bin/kafka-run-class.sh org.apache.kafka.clients.tools.ProducerPerformance
> test-rep-three 5 100 -1 acks=1 bootstrap.servers=
> 10.100.98.1
Hi, All
I am doing a performance test with:
bin/kafka-run-class.sh org.apache.kafka.clients.tools.ProducerPerformance
test-rep-three 5 100 -1 acks=1 bootstrap.servers=10.100.98.100:9092,
10.100.98.101:9092,10.100.98.102:9092 buffer.memory=67108864 batch.size=8196
where the topic test-rep-thre
, Xiaoyu Wang wrote:
> @Sa,
>
> the required.acks is a producer-side configuration. Setting it to -1 means requiring
> acks from all in-sync replicas.
>
> On Fri, Jan 2, 2015 at 1:51 PM, Sa Li wrote:
>
> > Thanks a lot, Tim, this is the config of brokers
> >
> > --
>
Hi, All
I am running a C# producer to send messages to Kafka (a 3-node cluster), but
I am getting these errors:
[2015-01-06 16:09:51,143] ERROR Closing socket for /10.100.70.128 because
of error (kafka.network.Processor)
java.io.IOException: Connection reset by peer
at sun.nio.ch.FileDispatcherImpl
Hi, All
I am testing and making changes to server.properties; I wonder whether I need to
specifically change the values in the consumer and producer properties as well.
Here is consumer.properties:
zookeeper.connect=10.100.98.100:2181,10.100.98.101:2181,10.100.98.102:2181
# timeout in ms for connecting to zook
BTW, I found that /kafka/logs is also getting bigger and bigger, with files like
controller.log and state-change.log. Should I launch a cron job to clean them
up regularly, or is there a built-in way to delete them regularly?
thanks
AL
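One option that avoids a cron job: these files are controlled by
config/log4j.properties, so switching the relevant appenders from daily rolling to
size-bounded rolling caps the space they use. A sketch, assuming the stock appender
names (the sizes are just examples; the broker needs a restart to pick this up):

log4j.appender.controllerAppender=org.apache.log4j.RollingFileAppender
log4j.appender.controllerAppender.File=${kafka.logs.dir}/controller.log
log4j.appender.controllerAppender.MaxFileSize=100MB
log4j.appender.controllerAppender.MaxBackupIndex=10
log4j.appender.controllerAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.controllerAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

log4j.appender.stateChangeAppender=org.apache.log4j.RollingFileAppender
log4j.appender.stateChangeAppender.File=${kafka.logs.dir}/state-change.log
log4j.appender.stateChangeAppender.MaxFileSize=100MB
log4j.appender.stateChangeAppender.MaxBackupIndex=10
log4j.appender.stateChangeAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.stateChangeAppender.layout.ConversionPattern=[%d] %p %m (%c)%n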
On Tue, Jan 6, 2015 at 2:01 PM, Sa Li wrote:
> Hi, All
>
> We fix the p
How do you guys set
log.retention.bytes? I set log.retention.hours=336 (2 weeks); should I leave
log.retention.bytes at the default -1 or set some other amount?
thanks
AL
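For what it's worth, log.retention.bytes is a per-partition cap, and whichever of
the time and size limits is hit first triggers deletion. A sketch of the two common
choices (the byte value is just an example sized to the disk):

# keep two weeks of data, no size cap (the default -1)
log.retention.hours=336
log.retention.bytes=-1

# or additionally cap each partition, e.g. ~10 GB per partition
# log.retention.bytes=10737418240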
On Tue, Jan 6, 2015 at 12:43 PM, Sa Li wrote:
> Thanks for the reply, the disk is not full:
>
> root@exemplary-bir
lar broker, picking an unlucky
> topic/partition, deleting, modifying the any topics that consumed too much
> space by lowering their retention bytes, and restarting.
>
> On Tue, Jan 6, 2015 at 12:02 PM, Sa Li wrote:
>
> > Continue this issue, when I restart the server,
(kafka.server.KafkaServer)
Any ideas?
On Tue, Jan 6, 2015 at 12:00 PM, Sa Li wrote:
> the complete error message:
>
> -su: cannot create temp file for here-document: No space left on device
> OpenJDK 64-Bit Server VM warning: Insufficient space for shared memory
> file:
>/t
kafka.utils.Utils$.loadProps(Utils.scala:144)
at kafka.Kafka$.main(Kafka.scala:34)
at kafka.Kafka.main(Kafka.scala)
On Tue, Jan 6, 2015 at 11:58 AM, Sa Li wrote:
>
> Hi, All
>
> I am doing performance test on our new kafka production server, but after
> sending some messa
Hi, All
I am doing a performance test on our new Kafka production server, but after
sending some messages (even fake messages using bin/kafka-run-class.sh
org.apache.kafka.clients.tools.ProducerPerformance), a connection error
comes out and the brokers shut down; after that, I see such e
Hi, All
I am running a performance test on Kafka with the command:
bin/kafka-run-class.sh org.apache.kafka.clients.tools.ProducerPerformance
test-rep-three 500 100 -1 acks=1 bootstrap.servers=
10.100.10.101:9092 buffer.memory=67108864 batch.size=8196
Since we send 50 billion to the brokers, it was
Hi, all
I am running kafka-web-console, and I periodically get the following error, which
brings the UI down:
! @6kldaf9lj - Internal server error, for (GET)
[/assets/images/zookeeper_small.gif] ->
play.api.Application$$anon$1: Execution exception[[FileNotFoundException:
/vagrant/kafka-web-console-master/ta
, 2015 at 2:20 PM, Sa Li wrote:
> Thanks a lot!
>
>
> On Fri, Jan 2, 2015 at 12:15 PM, Jay Kreps wrote:
>
>> Nice catch Joe--several people have complained about this as a problem and
>> we were a bit mystified as to what kind of bug could lead to all their
>&g
ttp://www.stealth.ly
> > Twitter: @allthingshadoop <http://www.twitter.com/allthingshadoop>
> > /
> >
> > On Fri, Jan 2, 2015 at 1:58 PM, Sa Li wrote:
> >
> > > Hi, All
> > >
> > > I've just notice one thing
Hi, All
I've just noticed one thing: when I experience some errors on the Kafka
servers, I reboot the dev servers (not a good way). After the reboot, I get
into zkCli and can see all the topics still exist. But when I get into the Kafka
log directory, I find all the data gone, see:
root@DO-mq-dev:/tmp/kafka-l
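If the data directory is under /tmp (which the truncated path above suggests), that
would explain it: /tmp is typically cleaned on reboot, while the topic metadata
lives in ZooKeeper, so zkCli still shows the topics even though the segments are
gone. A one-line server.properties sketch (the path is a placeholder):

log.dirs=/var/lib/kafka/data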
> On Fri, Jan 2, 2015 at 9:54 AM, Sa Li wrote:
> > Hi, all
> >
> > We are sending the message from a producer, we send 10 records, but
> we
> > see only 99573 records for that topics, we confirm this by consume this
> > topic and check the log size in kafka
Hi, all
We are sending messages from a producer; we send 10 records, but we
see only 99573 records for that topic. We confirmed this by consuming the
topic and checking the log size in kafka-web-console.
Any ideas on the message loss? What could be the reason for this?
thanks
--
Alec Li
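One common cause of a count slightly below what was sent is producing with
request.required.acks=0 or in async mode, where failed sends can be dropped
silently. A hedged sketch of stricter settings for the old Scala producer (client
libraries such as the C# one have their own equivalents):

producer.type=sync
request.required.acks=-1        # wait for all in-sync replicas
message.send.max.retries=3
retry.backoff.ms=100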
Hi, all
I added auto.create.topics.enable=true to the server.properties file, but I get
the following error:
java.lang.IllegalArgumentException: requirement failed: Unacceptable value
for property 'auto.create.topics.enable', boolean values must be either
'true' or 'false
when I start the Kafka server. Any clu
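In my experience this parse error usually means the value on that line is not
exactly the literal true or false, e.g. trailing whitespace, a same-line comment,
or smart quotes from copy/paste; the line should be just:

auto.create.topics.enable=true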
Hi, all
I am thinking of building a reliable monitoring system for our Kafka production
cluster. I read the following in the documentation:
"Kafka uses Yammer Metrics for metrics reporting in both the server and the
client. This can be configured to report stats using pluggable stats
reporters to hook up to your mo
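As one concrete example of a pluggable reporter, the broker ships with a CSV
metrics reporter that can be switched on in server.properties (the directory and
interval are placeholders):

kafka.metrics.reporters=kafka.metrics.KafkaCSVMetricsReporter
kafka.metrics.polling.interval.secs=5
kafka.csv.metrics.reporter.enabled=true
kafka.csv.metrics.dir=/tmp/kafka_metrics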
. (kafka.log.Log)
[2014-12-23 00:04:39,452] WARN [ReplicaFetcherThread-0-100], Replica 102
for partition [perf_producer_p8_test,7] reset its fetch offset to current
leader 100's latest offset 0 (kafka.server.ReplicaFetcherThread)
On Mon, Dec 22, 2014 at 3:55 PM, Sa Li wrote:
>
>
(kafka.server.KafkaApis)
On Mon, Dec 22, 2014 at 3:50 PM, Sa Li wrote:
>
> I restarted the Kafka server and it is the same thing; sometimes nothing is listed
> for ISR or leader. I checked the state-change log:
>
> [2014-12-22 23:46:38,164] TRACE Broker 100 cached leader info
> (LeaderAndI
:101,102,100)
for partition [perf_producer_p8_test,1] in response to UpdateMetadata
request sent by controller 101 epoch 4 with correlation id 138
(state.change.logger)
On Mon, Dec 22, 2014 at 2:46 PM, Sa Li wrote:
>
> Hi, All
>
> I created a topic with 3 replications and 6 partitions
Hi, All
I created a topic with 3 replicas and 6 partitions, but when I check
this topic, it seems no leader or ISR was set for it, see:
bin/kafka-topics.sh --create --zookeeper 10.100.98.100:2181
--replication-factor 3 --partitions 6 --topic perf_producer_p6_test
SLF4J: Class p
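A quick way to see the leader/ISR assignment after creation (same ZooKeeper address
as above):

bin/kafka-topics.sh --describe --zookeeper 10.100.98.100:2181 \
    --topic perf_producer_p6_test

If Leader shows -1 or the ISR is empty, it is worth checking that all brokers are
registered (ls /brokers/ids in zkCli) and looking at the controller and
state-change logs.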
Hi, All
I've run bin/kafka-producer-perf-test.sh on our Kafka production cluster, and I
found that the number of partitions really has a huge impact on producer
performance, see:
start.time, end.time, compression, message.size, batch.size,
total.data.sent.in.MB, MB.sec, total.data.sent.in.nMsg, nMsg.s
o: 10.100.98.100:9092
>
> I'd validate that this is the issue using telnet and then check the
> firewall / ipfilters settings.
>
> On Thu, Dec 18, 2014 at 2:21 PM, Sa Li wrote:
> > Dear all
> >
> > We just build a kafka production cluster, I can create topics in kafk
Dear all
We just built a Kafka production cluster; I can create topics on the production
cluster from another host. But when I send a very simple message as a
producer, it generates these errors:
root@precise64:/etc/kafka# bin/kafka-console-producer.sh --broker-list
10.100.98.100:9092 --topic my-replicate
Thanks, Neha. Is there a Java version of a batch consumer?
thanks
On Fri, Dec 5, 2014 at 9:41 AM, Scott Clasen wrote:
> if you are using scala/akka this will handle the batching and acks for you.
>
> https://github.com/sclasen/akka-kafka#akkabatchconsumer
>
> On Fri, Dec 5, 2014 at
while
load to memory?
thanks
On Thu, Dec 4, 2014 at 1:21 PM, Neha Narkhede wrote:
> This is specific for pentaho but may be useful -
> https://github.com/RuckusWirelessIL/pentaho-kafka-consumer
>
> On Thu, Dec 4, 2014 at 12:58 PM, Sa Li wrote:
>
> > Hello, all
> >
Hello, all
I have never developed a Kafka consumer. I want to be able to make an advanced
Kafka consumer in Java to consume the data and continuously write it
into a PostgreSQL DB. I am thinking of keeping a map in memory, collecting a
predefined number of messages there, and then writing them into the DB in ba
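A rough sketch of that batching idea, using the newer Java consumer (0.9+) and
plain JDBC; the bootstrap servers, topic, table, and connection settings are
placeholders, and with the 0.8.x high-level consumer the ConsumerConnector API
would be used instead:

import java.sql.*;
import java.util.*;
import org.apache.kafka.clients.consumer.*;

public class PgBatchConsumer {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "10.100.98.100:9092");   // placeholder
        props.put("group.id", "pg-writer");
        props.put("enable.auto.commit", "false");               // commit only after the DB write
        props.put("key.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Collections.singletonList("my-topic"));   // placeholder topic

        Connection db = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/mydb", "user", "pass");  // placeholders
        PreparedStatement insert =
                db.prepareStatement("INSERT INTO events(payload) VALUES (?)");

        int buffered = 0;
        final int batchSize = 500;   // flush threshold
        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(1000);
            for (ConsumerRecord<String, String> r : records) {
                insert.setString(1, r.value());
                insert.addBatch();
                buffered++;
            }
            if (buffered >= batchSize) {
                insert.executeBatch();   // write the whole batch to Postgres
                consumer.commitSync();   // only then mark the offsets as consumed
                buffered = 0;
            }
        }
    }
}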
> On Thu, Nov 27, 2014 at 11:09 AM, Sa Li wrote:
>
>> Hi, all
>>
>> We are having 3 production server to setup for kafka cluster, I wonder how
>> many brokers to configure for each server.
>>
>>
>> thanks
>>
>>
>> --
>>
>> Alec Li
>>
Dear all
I am provisioning a production Kafka cluster which has 3 servers, and I am
wondering how many brokers I should set up on each server. I set up 3 brokers
in the dev cluster, but I really don't know what the advantage is of setting
more than 1 broker per server; what about 1 broker for each server, totally 3
Hi, all
I read the comments at
http://www.michael-noll.com/tutorials/running-multi-node-storm-cluster/,
where Michael mentioned increasing the maximum number of open file handles for
the user kafka to 98,304 (change kafka to whatever user you are running the
Kafka daemons with – this can be your own
t;
> On Tue, Nov 25, 2014 at 1:26 PM, Jun Rao wrote:
>
> > Which web console are you using?
> >
> > Thanks,
> >
> > Jun
> >
> > On Fri, Nov 21, 2014 at 8:34 AM, Sa Li wrote:
> >
> > > Hi, all
> > >
> > > I am trying t
Are there any rules to determine or optimize the number of brokers?
On Thu, Nov 27, 2014 at 11:09 AM, Sa Li wrote:
> Hi, all
>
> We are having 3 production server to setup for kafka cluster, I wonder how
> many brokers to configure for each server.
>
>
> thanks
Hi, all
We have 3 production servers to set up as a Kafka cluster, and I wonder how
many brokers to configure on each server.
thanks
--
Alec Li
Hi, all
I am trying to get kafka-web-console to work, but it seems it only works for a few
hours and fails afterwards; below are the error messages on the screen. I am
assuming something is wrong with the DB; I swapped H2 for MySQL, but it
didn't help. Has anyone had a similar problem?
-
.
.
at sun.
Hi, All
I am running the kafka producer code:
import java.util.*;
import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;
public class TestProducer {
public static void main(String[] args) {
long events = Long.parseLong(args[
Hi, all
I've just made a 3-node Kafka cluster (9 brokers, 3 per node), and the
performance test is OK. Now I am using TridentKafkaSpout and am able to
get data from the producer, see:
BrokerHosts zk = new ZkHosts("10.100.70.128:2181");
TridentKafkaConfig spoutConf = new TridentKafkaCon
All,
Again, I am still unable to install; it seems to be stuck on ivy.lock. Any ideas on
how to continue?
thanks
Alec
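In case it's useful: sbt blocks on the Ivy lock when another sbt/ivy process holds
it, or when a previous run died without releasing it. A commonly suggested fix (the
lock path is the usual default; adjust it if sbt reports a different one):

ps aux | grep sbt                 # make sure no other sbt/ivy process is running
rm -f ~/.ivy2/.sbt.ivy.lock       # remove the stale lock
sbt clean package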
On Oct 12, 2014, at 7:38 PM, Sa Li wrote:
> Hi
t will start.
>
> Even I had faced this issue. The console takes a lot of time to start, but
> eventually it does. So this is not an error :)
>
> Hope this helped,
> -Palak
>
> On Sat, Oct 11, 2014 at 9:00 AM, Sa Li wrote:
>
>> Hi, all
>>
>> I am
Hi, all
I am installing kafka-web-console on an Ubuntu server; when I run sbt package, it
gets stuck waiting for ivy.lock:
root@DO-mq-dev:/home/stuser/kafkaprj/kafka-web-console# sbt package
Loading /usr/share/sbt/bin/sbt-launch-lib.bash
[info] Loading project definition from
/home/stuser/kafkaprj/kaf
e same.
>
> Guozhang
>
> On Thu, Oct 9, 2014 at 11:37 AM, Sa Li wrote:
>
> > Hi, All
> >
> > I setup a 3-node kafka cluster on top of 3-node zk ensemble. Now I
> launch 1
> > broker on each node, the brokers will be randomly distributed to zk
> > ense
Hi, All
I set up a 3-node Kafka cluster on top of a 3-node ZK ensemble. Now I launch 1
broker on each node, and the brokers are randomly distributed across the ZK
ensemble, see:
DO-mq-dev.1
[zk: localhost:2181(CONNECTED) 1] ls /brokers/ids
[0, 1]
pof-kstorm-dev1.2
[zk: localhost:2181(CONNECTED) 1] ls /broke
Hi, All
I set up a Kafka cluster and plan to publish messages from the web to Kafka;
the messages are in JSON form. I want to implement a consumer to write
the messages I consume into a PostgreSQL DB, with no aggregation at all. I was
thinking of using KafkaSpout in Storm to make it happen; now I
/
>
> gradle-wrapper.jar gradle-wrapper.properties
>
> Thanks,
>
> Jun
>
> On Thu, Oct 2, 2014 at 2:05 PM, Sa Li wrote:
>
> > I git clone the latest kafka package, why can't I build the package
> >
> > gradle
> >
> > FAILURE: Build failed
g/installation) installed."
>
> Guozhang
>
> On Thu, Oct 2, 2014 at 1:55 PM, Sa Li wrote:
>
> > Thanks Guozhang
> >
> > I tried this as in KAFKA-1490:
> >
> > git clone https://git-wip-us.apache.org/repos/asf/kafka.git
> >
> > cd kafka
>
.
On Thu, Oct 2, 2014 at 2:25 PM, Sa Li wrote:
> Daniel, thanks for the reply
>
> It is still a learning curve for me to set up the cluster; we ultimately want to
> make a connection between the Kafka cluster and the Storm cluster. As you
> mentioned, it seems a single broker per node is more efficient
> > In general it is not required to have the kafka brokers installed on the
> > same nodes of the zk servers, and each node can host multiple kafka
> > brokers: you just need to make sure they do not share the same port and
> the
> > same data dir.
> >
> > Guozh
I can't really get through the Gradle build, even after cloning the latest trunk;
is anyone having the same issue?
On Thu, Oct 2, 2014 at 1:55 PM, Sa Li wrote:
> Thanks Guozhang
>
> I tried this as in KAFKA-1490:
>
> git clone https://git-wip-us.apache.org/repos/asf/kafka.git
>
> cd kafka
>
I git cloned the latest Kafka package; why can't I build it?
gradle
FAILURE: Build failed with an exception.
* Where:
Script '/home/ubuntu/kafka/gradle/license.gradle' line: 2
* What went wrong:
A problem occurred evaluating script.
> Could not find method create() for arguments [downloa
>
> Guozhang
>
> On Thu, Oct 2, 2014 at 11:00 AM, Sa Li wrote:
>
> > Thanks, Jay,
> >
> > Here is what I did this morning, I git clone the latest version of kafka
> > from git, (I am currently using kafka 8.0) now it is 8.1.1, and it use
> > gradle to bu
Hi, all
Here I want to run the example code that comes with the Kafka package; I ran it as
the README says:
To run the demo using scripts:
+
+ 1. Start Zookeeper and the Kafka server
+ 2. For simple consumer demo, run bin/java-simple-consumer-demo.sh
+ 3. For unlimited producer-consumer run, run
bin/java-
it should be there.
>
> -Jay
>
> On Wed, Oct 1, 2014 at 7:55 PM, Sa Li wrote:
> > Hi, All
> >
> > I built a 3-node kafka cluster, I want to make performance test, I found
> someone post following thread, that is exactly the problem I have:
> > -
>
still not able to run it; any clues for
that?
thanks
Alec
On Oct 1, 2014, at 9:13 PM, ravi singh wrote:
> It is available with Kafka package containing the source code. Download
> the package, build it and run the above command.
>
> Regards,
> Ravi
>
> On Wed, Oct 1, 2
Hi, All
I built a 3-node Kafka cluster and I want to run a performance test. I found
someone posted the following thread, which is exactly the problem I have:
-
While testing kafka producer performance, I found 2 testing scripts.
1) performance testing script in kafka distribution
bin/kafka-p