Yes, you can use constraints and the same volumes. That can be trusted.
From: Radoslaw Gruchalski
To: "Karnam, Kiran" ; users@kafka.apache.org
Sent: Thursday, 26 May 2016 2:31 AM
Subject: Re: upgrading Kafka
Kiran,
If you’re using Docker, you can use Docker on Mesos, you can use constrai
Hi Walter,
Synced up with Ismael regarding your questions, and here are our
suggestions:
"Scala 2.11 is the minimum. The downside of that flag is that it includes
new features that are still changing, may be less stable and may not work
fully.
You should probably consider planning an upgrade as su
A processor is guaranteed to be executed on the same thread at any given
time; its process() and punctuate() will always be triggered in a
single thread.
Currently TimestampExtractor is set globally, but you can definitely define
different logic depending on the topic name (which is includ
More specifically, see:
https://github.com/mesos/kafka#failed-broker-recovery
On Wed, May 25, 2016 at 6:02 PM, craig w wrote:
> The Kafka framework can be used to deploy brokers. It will also bring a
> broker back up on the server it was last running on (within a certain
> amount of time).
>
> H
Excellent :)
Thanks,
Mayuresh
On Tue, May 24, 2016 at 2:55 AM, Ismael Juma wrote:
> Hi all,
>
> Since Kafka implements a number of security features, we need a procedure
> for reporting potential security vulnerabilities privately (as per
> http://www.apache.org/security/). We have added a sim
Hello,
I'm using Kafka 0.9 and have a topic with the
config: cleanup.policy=compact, delete.retention.ms=3, segment.ms=3,
min.cleanable.dirty.ratio=0.01.
I understood regarding the requirements with the latest segments and how
only the segments other than the latest (active) are compacted
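For reference, topic-level configs like these can be set with the bundled tooling (a sketch; the topic name and ZooKeeper address are placeholders):

```shell
# Set compaction-related topic configs (0.9-era ZooKeeper-based syntax)
bin/kafka-configs.sh --zookeeper localhost:2181 --alter \
  --entity-type topics --entity-name my-topic \
  --add-config cleanup.policy=compact,min.cleanable.dirty.ratio=0.01
```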
Super old, but: https://github.com/atdt/UdpKafka
On Wed, May 25, 2016 at 4:20 PM, Joe San wrote:
> What about this one: https://github.com/agaoglu/udp-kafka-bridge
>
> On Wed, May 25, 2016 at 6:48 PM, Sunil Saggar
> wrote:
>
> > Hi All,
> >
> > I am looking for a kafka producer to receive UDP p
The Kafka framework can be used to deploy brokers. It will also bring a
broker back up on the server it was last running on (within a certain
amount of time).
However, the Kafka framework doesn't run brokers in containers.
On Wednesday, May 25, 2016, Radoslaw Gruchalski
wrote:
> Kiran,
>
> If yo
I have seen this issue as well with 0.9. I also thought that it was because
of the upgrade, but that doesn't seem to be it. But there were also a
couple of instances when it didn't change the timestamps, so I was unable
to pinpoint the exact root cause or steps and hence had not yet posted it
here.
Hi Niko,
VerifiableProperties is part of the kafka jar and your build only depends
on kafka-clients.
Ismael
On Wed, May 25, 2016 at 6:09 PM, Niko Davor wrote:
> Using a bare-bones build.sbt: http://pastebin.com/CADyngYs
>
> Results in:
>
> [warn] Class kafka.utils.VerifiableProperties not foun
Kiran,
If you’re using Docker, you can use Docker on Mesos, you can use constraints to
force relaunched kafka broker to always relaunch at the same agent and you can
use Docker volumes to persist the data.
Not sure if https://github.com/mesos/kafka provides these capabilities.
–
Best regards,
Hi All,
We are using Docker containers to deploy Kafka, and we are planning to use Mesos
for the deployment and maintenance of containers. Is there a way during an upgrade
that we can persist the data so that it is available for the upgraded container?
We don't want the clusters to go into chaos with
What about this one: https://github.com/agaoglu/udp-kafka-bridge
On Wed, May 25, 2016 at 6:48 PM, Sunil Saggar
wrote:
> Hi All,
>
> I am looking for a Kafka producer to receive UDP packets and send that
> information to a specified topic. Is there an out-of-the-box producer which
> does this?
>
> Ther
First result in Google for “kafka udp listener” brings this:
https://github.com/agaoglu/udp-kafka-bridge
–
Best regards,
Radek Gruchalski
ra...@gruchalski.com
de.linkedin.com/in/radgruchalski
Hi All,
I am looking for a Kafka producer to receive UDP packets and send that
information to a specified topic. Is there an out-of-the-box producer which
does this?
There are a few GitHub projects which do udp2kafka (receive UDP packets
and then relay them to Kafka).
Any advice?
Hi there,
You can use the `listeners` config to tell Kafka which interfaces to listen
on. The `listeners` config also supports setting the port and protocol. You
may also want to set `advertised.listeners` if the `listeners` hostnames or
IPs aren't reachable by your clients.
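For example, a minimal `server.properties` fragment (a sketch; addresses and hostnames are placeholders) that binds the broker to an internal interface while advertising an address clients can reach:

```properties
# Bind to the internal interface:
listeners=PLAINTEXT://10.0.0.5:9092
# Advertise the externally reachable address, if it differs:
advertised.listeners=PLAINTEXT://broker1.example.com:9092
```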
Alex
On Wed, May 25,
Hi Hafsa,
We often see Grafana and Graphite, which are both free. Keep in mind you
should monitor the system's metrics and Kafka's JMX metrics.
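For instance (a sketch, assuming the standard start scripts), Kafka's scripts honor the JMX_PORT environment variable, and a collector such as jmxtrans can then ship the JMX metrics to Graphite for Grafana to display:

```shell
# Expose broker JMX metrics on port 9999 (the port choice is arbitrary)
JMX_PORT=9999 bin/kafka-server-start.sh config/server.properties
```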
Alex
On Wed, May 25, 2016 at 3:42 AM, Hafsa Asif
wrote:
> Hello,
>
> What is the best monitoring tool for Kafka in production, preferably a free
> tool?
Hi all,
I'd like kafka4net client to be added to "clients" page:
https://cwiki.apache.org/confluence/display/KAFKA/Clients
This is a C# client: asynchronous, all 3 compression codecs supported (read and
write), tracks leader partition changes transparently, and has been in
production for a long time.
Maintainer: https://gi
I'm sure I can do this, but I'm just not stumbling on the right
documentation anywhere. I have a handful of Kafka servers that I am trying
to get ready for production. I'm trying to separate the internal and external
network traffic, but I don't see how to do it.
Each host has two addresses.
10.x.y.z
Thanks for the news, Joe! I browsed the PR and it looks interesting.
Guozhang
On Wed, May 25, 2016 at 9:54 AM, Joe Stein wrote:
> Hey Kafka community, I wanted to pass along some of the work we have been
> doing as part of providing commercial support for Heron
> https://blog.twitter.com/201
I noticed the same issue with 0.9.
On Wed, 25 May 2016 at 09:49 Andrew Otto wrote:
> “We use the default log retention of 7 *days*" :)*
>
> On Wed, May 25, 2016 at 12:34 PM, Andrew Otto wrote:
>
> > Hiya,
> >
> > We’ve recently upgraded to 0.9. In 0.8, when we restarted a broker, data
> >
Using a bare-bones build.sbt: http://pastebin.com/CADyngYs
Results in:
[warn] Class kafka.utils.VerifiableProperties not found - continuing with a
stub.
[warn] Class kafka.utils.VerifiableProperties not found - continuing with a
stub.
[warn] two warnings found
Hey Kafka community, I wanted to pass along some of the work we have been
doing as part of providing commercial support for Heron
https://blog.twitter.com/2016/open-sourcing-twitter-heron Open Sourced
Today.
https://github.com/twitter/heron/pull/751 Kafka 0.8 & 0.9 Spout, Bolt &
Example Topology
“We use the default log retention of 7 *days*" :)*
On Wed, May 25, 2016 at 12:34 PM, Andrew Otto wrote:
> Hiya,
>
> We’ve recently upgraded to 0.9. In 0.8, when we restarted a broker, data
> log file mtimes were not changed. In 0.9, any data log file that was on
> disk before the broker has it
Hiya,
We’ve recently upgraded to 0.9. In 0.8, when we restarted a broker, data
log file mtimes were not changed. In 0.9, any data log file that was on
disk before the broker restart has its mtime modified to the time of the
broker restart.
This causes problems with log retention, as all the files then
Can someone please explain why, if I write 1 message to the queue, it takes N
FetchRequests (where N > 1) to get the data out?
Heath
Hi
We are facing an issue where our Consumer component is not instantly
logging the records polled from the brokers. We are following the
architecture as attached. The following properties are
configured:
Producer.properties
bootstrap.servers=xx.xxx.xxx.140:9092,xx.xxx.xxx.140:9093,xx.xxx.x
Hi, I'm trying to debug why one particular partition has a huge lag for our
consumers, but while debugging I ran the following 2 commands which seem to
contradict each other. I snipped the output for brevity. I'm focused on
partition 80. I'm also running Kafka 0.8.2.2 with the high level consumer
i
Hello,
What is the best monitoring tool for Kafka in production, preferably a free
tool? If there is no free tool, then please also mention efficient non-free
monitoring tools.
We are having a lot of problems without a monitoring tool. Sometimes brokers
go down or a consumer is not working, and we are not
Generally Kafka isn't super great with a giant number of topics. I'd
recommend designing your system around a smaller number than 10k. There's
an upper limit enforced on the total number of partitions by zookeeper
anyway, somewhere around 29k.
I'd recommend having just a single producer per JVM, t
By process I mean a JVM process (if you're using the JVM clients and for
your app).
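That "one producer per process" pattern can be sketched like this (hypothetical helper, not from any Kafka client; the injected `factory` would build a real producer, e.g. kafka-python's `KafkaProducer`):

```python
import threading

_producer = None
_producer_lock = threading.Lock()

def get_producer(factory):
    """Create the process-wide producer on first call, then reuse it.

    All topics in the process share this one instance; batching happens
    per partition inside the producer, so one instance scales to many topics.
    """
    global _producer
    with _producer_lock:
        if _producer is None:
            _producer = factory()
        return _producer
```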
Thanks
Tom Crayford
Heroku Kafka
On Wednesday, 25 May 2016, Hafsa Asif wrote:
> A very good question from Joe. I have also the same question.
>
> Hafsa
>
> 2016-05-24 18:00 GMT+02:00 Joe San
> >:
>
> > Interes
Hi,
Kafka's performance all comes from batching. There's going to be a huge
perf impact from limiting your batching like that, and that's likely the
issue. I'd recommend designing your system around Kafka's batching model,
which involves large numbers of messages per fetch request.
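As an illustration, the producer-side knobs that control batching look like this (a sketch; the values are examples, not recommendations):

```properties
# Producer config sketch: allow larger, slightly delayed batches
batch.size=65536   # max bytes per partition batch (default 16384)
linger.ms=5        # wait up to 5 ms for a batch to fill (default 0)
```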
Thanks
Tom Cr
If you're using EBS then it's a single flag to use encrypted drives when
provisioning the volume. I don't know about the other storage options;
I'd recommend looking at the AWS documentation.
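For example (a sketch; the zone, size, and volume type are placeholders), with the AWS CLI the flag is:

```shell
aws ec2 create-volume --availability-zone us-east-1a \
  --size 500 --volume-type gp2 --encrypted
```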
Thanks
Tom Crayford
Heroku Kafka
On Wednesday, 25 May 2016, Snehalata Nagaje <
snehalata.nag...@h
A very good question from Joe. I also have the same question.
Hafsa
2016-05-24 18:00 GMT+02:00 Joe San :
> Interesting discussion!
>
> What do you mean here by a process? Is that a thread or the JVM process?
>
> On Tue, May 24, 2016 at 5:49 PM, Tom Crayford
> wrote:
>
> > Aha, yep that helped a
Hi Tom,
Thank you for your help. I have only one broker. I used the Kafka production
server configuration listed in Kafka's documentation page:
http://kafka.apache.org/documentation.html#prodconfig . I have increased
the flush interval and number of messages to prevent the disk from becoming
the bottle
I do not mind the ordering, as I have a timestamp in all my messages and all
my messages land in a timeseries database. So I understand that it is
better to have just one Producer instance per JVM and use that to write to
n number of topics. I mean, even if I have 10,000 topics, I can just get
away w