Hi,
Seems it is OK now, thanks for your great help!
Best Regards
Johnny
-Original Message-
From: Karolis Pocius [mailto:k.poc...@adform.com]
Sent: November 15, 2016 14:56
To: users@kafka.apache.org
Subject: Re: Create topic with multi-zookeeper URLs
That is correct.
On 2016.11.15 08:54, ZHU Hua B wrote:
Hi,
Thanks for your explanation!
Do you mean I just need to write a number "1" in the file myid on
192.168.210.5, not "server.1" or other content? Thanks!
Best Regards
Johnny
-Original Message-
From: Karolis Pocius [mailto:k.poc...@adform.com]
Sent: November 15, 2016 14:47
To: users
You need to create myid file in the datadir (in your case
/tmp/zookeeper/myid) of each instance with a numeric ID inside that file
corresponding to what you have in this section
server.1=192.168.210.5:2888:3888
server.2=192.168.210.6:2888:3888
server.3=192.168.210.8:2888:3888
So on 192.168.2
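A minimal sketch of the step Karolis describes, using the /tmp/zookeeper datadir from this thread: on the host listed as server.1, the myid file holds only the bare numeric id, nothing else.

```shell
# On 192.168.210.5 (listed as server.1 above), write just the number:
mkdir -p /tmp/zookeeper
echo "1" > /tmp/zookeeper/myid
cat /tmp/zookeeper/myid   # prints: 1
```

The other two hosts get `2` and `3` respectively, matching their server.N lines.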
Hi,
Many thanks for your info!
As you said, I think my configuration might be running two separate instances
of zookeeper rather than a cluster, so I modified the configuration file as below
and started up three zookeeper instances on separate machines. When the
zookeeper launched, some error oc
I am running a 0.9.0.1 Java Kafka Consumer client with 10 consumer threads.
Whenever a partition is revoked/assigned, all the consumers hang in the same
state and don't receive any more records.
It typically runs for 30 mins. or so, after which it hangs. I am running
with "auto.commit" set to *false*
Hi,
Why are these tools not working perfectly for you?
Does it *have to* be open-source? If not, Sematext SPM collects a lot of
Kafka metrics, with consumer lag being one of them --
https://sematext.com/blog/2016/06/07/kafka-consumer-lag-offsets-monitoring/
Otis
--
Monitoring - Log Management -
Hi there,
What is the best open-source tool for Kafka monitoring, mainly to check the
offset lag? We tried the following tools:
1. Burrow
2. KafkaOffsetMonitor
3. Prometheus and Grafana
4. Kafka Manager
But nothing is working perfectly. Please help us on this.
Thanks
Yes, offsets are unique per partition.
> I've observed that, for example, I had offset values equal to zero more times
> than the number of Kafka partitions.
Can you elaborate a little more what you observed?
-Zakee
> On Nov 14, 2016, at 10:06 AM, Dominik Safaric
> wrote:
>
> Hi all,
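Since each partition keeps its own offset sequence starting at zero, seeing offset 0 as many times as there are partitions is expected behavior. A toy illustration (plain Java, no Kafka involved; the round-robin assignment and class name are made up for the example):

```java
import java.util.ArrayList;
import java.util.List;

public class PerPartitionOffsets {
    // Simulate appending records round-robin; each partition assigns its own offsets.
    static List<String> appendRoundRobin(int partitions, int records) {
        long[] nextOffset = new long[partitions];   // independent counter per partition
        List<String> log = new ArrayList<>();
        for (int i = 0; i < records; i++) {
            int p = i % partitions;
            log.add("partition=" + p + " offset=" + nextOffset[p]++);
        }
        return log;
    }

    public static void main(String[] args) {
        // offset=0 appears once per partition, i.e. three times for three partitions
        appendRoundRobin(3, 6).forEach(System.out::println);
    }
}
```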
Are these the only logs you see or there are bit more log events before this
that might be relevant?
-Zakee
> On Nov 12, 2016, at 7:00 PM, Vinay Gulani wrote:
>
> Hi,
>
> I am getting below warning message:
>
> WARN kafka.server.KafkaServer - [Kafka Server 1], Retrying
> controlled shutdown
Banias,
This is a property for producers / consumers. Your producers / consumers
may not necessarily (and probably should not) have access to the zookeeper
cluster your kafka cluster uses. That’s why you give them a list of kafka nodes
with bootstrap.servers.
–
Best regards,
Radek Gruchalski
ra...@gruc
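As Radek notes, bootstrap.servers is purely client-side configuration. A minimal sketch of assembling it (the broker host names are made up; this uses only java.util.Properties, so it compiles without the Kafka client jar):

```java
import java.util.Properties;

public class ProducerConfigSketch {
    static Properties baseConfig() {
        Properties props = new Properties();
        // Any one reachable broker from this list is enough to bootstrap;
        // the client discovers the rest of the cluster from broker metadata.
        props.put("bootstrap.servers", "broker1:9092,broker2:9092,broker3:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(baseConfig().getProperty("bootstrap.servers"));
    }
}
```

The list does not need to be exhaustive; it only needs to contain at least one live broker at startup.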
Hi Kafka experts,
In my Java code to produce messages to Kafka, I always have to specify the
list of "bootstrap.servers" at initialization since version 0.9. I always
wonder whether there is any way the "bootstrap.servers" list is available in zookeeper
(w/o manually storing the list there)?
Thanks.
-B
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA512
Yes. You need to use Processor API to access the topic name via
ProcessorContext that is given via Processor#init().
If you use DSL, you can mix-and-match with Processor API via
KStream#process()
- -Matthias
On 11/14/16 12:42 AM, Timur Yusupov w
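A minimal sketch of what Matthias describes, assuming kafka-streams (the 0.10-era Processor API) is on the classpath; TopicNamePrinter is a made-up class name:

```java
import org.apache.kafka.streams.processor.Processor;
import org.apache.kafka.streams.processor.ProcessorContext;

public class TopicNamePrinter implements Processor<String, String> {
    private ProcessorContext context;

    @Override
    public void init(ProcessorContext context) {
        this.context = context;   // the context exposes topic()/partition()/offset()
    }

    @Override
    public void process(String key, String value) {
        // context.topic() is the topic the current record came from
        System.out.println(context.topic() + " -> " + key);
    }

    @Override
    public void punctuate(long timestamp) {}

    @Override
    public void close() {}
}
// mixed into the DSL as described: stream.process(TopicNamePrinter::new);
```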
Hi all,
I've been wondering: is the offset obtained with ConsumerRecord<>().offset()
always unique for each partition? Asking because while I've been running a
consumer group, I've observed that, for example, I had offset values equal to
zero more times than the number of Kafka partitions
Good morning!
I used Kafka for queue messaging and it worked well, but then I played
with enabling SSL on Windows 7 following
https://github.com/edenhill/librdkafka/wiki/Using-SSL-with-librdkafka,
changed the server.properties file, then got an error on starting Kafka: The
requested operation cannot be
Hello ,
I have a question. I have created a Kafka adapter for IBM WebSphere
Transformation Extender (8.4.1.1). WebSphere is on Java 6 and the Kafka
server is on version 9.
I use the Kafka 8.2.2 library for my adapter; I have no trouble pushing
data to a topic on Kafka 9.
My issue is: I need to b
+1
On 11 November 2016 at 06:36, Vincent Dautremont <
vincent.dautrem...@olamobile.com> wrote:
> Hi,
> Can anyone explain to me in more detail how Kafka works with compression?
> I've read the doc but it's not all clear to me.
>
>
> - There are compression settings on the broker, the topic of a bro
Hi,
Can anyone explain to me in more detail how Kafka works with compression?
I've read the doc but it's not all clear to me.
- There are compression settings on the broker, the topic of a broker, and the
producer.
Are they all the same setting, and does one take precedence over another?
- Is there a differen
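Roughly: the producer-side and broker/topic-side settings share the same key name but interact. The broker default compression.type=producer keeps whatever codec the producer used; naming a specific codec on the broker or topic forces broker-side recompression. A sketch of the two places the key appears:

```
# producer config — the producer compresses whole record batches before sending
compression.type=snappy

# broker (server.properties) default, overridable per topic —
# 'producer' means "retain the codec the producer used, uncompress nothing"
compression.type=producer
```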
Can you explain how you launched those two zookeeper instances and maybe
share their config? You need to make some edits to config in order to
run a zookeeper cluster - I have a feeling you might be running two
separate instances of zookeeper rather than a cluster. Also, if you want
a cluster y
Hi All,
I want to create a topic with the command "bin/kafka-topics.sh --create --zookeeper
HOST:PORT --replication-factor 1 --partitions 1 --topic test"; can the option
"--zookeeper" point to multiple zookeeper URLs such as
"HOST1:PORT1,HOST2:PORT2"?
I tried it as below; it seems the topic only be
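For reference, --zookeeper accepts a comma-separated connect string, and any reachable node in the ensemble can serve the request. A sketch reusing the ensemble addresses from the zookeeper thread above (2181 is the standard ZooKeeper client port; an assumption here, since the threads only show the peer ports):

```shell
ZK_CONNECT="192.168.210.5:2181,192.168.210.6:2181,192.168.210.8:2181"
# with a running cluster, the create call would be:
#   bin/kafka-topics.sh --create --zookeeper "$ZK_CONNECT" \
#     --replication-factor 1 --partitions 1 --topic test
echo "$ZK_CONNECT"
```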
How can I distinguish messages from different topics within one KStream if
I subscribed to topics using a regex?
On Thu, Nov 10, 2016 at 7:32 AM, Hans Jespersen wrote:
> I believe that the new topics are picked up at the next metadata refresh
> which is controlled by the metadata.max.age.ms param
Hi,
I'm using the Java Consumer API (latest version, 0.10.1.0). I store
consumed offsets offline, that is, I use neither Kafka's built-in
offset storage nor consumer groups.
I need to read data from different topics, but from one topic at a time.
I would like to pool my consumers (using something like A
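For externally stored offsets, the usual pattern is assign() plus seek() rather than subscribe(), which keeps consumer groups out of the picture entirely. A sketch assuming kafka-clients 0.10.1.0 on the classpath; loadSavedOffset, the class name, and the topic/partition are made-up placeholders:

```java
import java.util.Collections;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class OfflineOffsetConsumer {
    // hypothetical helper: read the last processed offset from your own store
    static long loadSavedOffset(TopicPartition tp) { return 0L; }

    static void resume(KafkaConsumer<String, String> consumer, String topic) {
        TopicPartition tp = new TopicPartition(topic, 0);   // partition 0 as an example
        consumer.assign(Collections.singletonList(tp));     // no group rebalancing involved
        consumer.seek(tp, loadSavedOffset(tp));             // start from the offline offset
    }
}
```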