Yes, that sounds good. Would you like to submit a PR to our upgrade page?
https://github.com/apache/kafka/blob/trunk/docs/upgrade.html
Thanks,
Ismael
On Wed, May 25, 2016 at 6:11 AM, allen chan
wrote:
> Thanks Jason for that insight. I will use the 0.9 tools until I upgrade all
> the brokers.
Hi,
You can also use InfluxDB instead of Graphite. InfluxDB has a Graphite
plugin so that you can still use the graphite metrics reporter.
InfluxDB can be queried through REST APIs. This is very convenient to
implement some alerts on kafka metrics.
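For example, a simple alert check can hit InfluxDB's HTTP /query endpoint. A
minimal sketch in Java, assuming InfluxDB on localhost:8086, a database named
"graphite", and a hypothetical measurement name (the actual name depends on
your Graphite plugin templates):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;

public class KafkaMetricAlert {
    public static void main(String[] args) throws Exception {
        // Hypothetical measurement name written by the Graphite plugin.
        String q = "SELECT last(value) FROM \"kafka.server.UnderReplicatedPartitions\"";
        URL url = new URL("http://localhost:8086/query?db=graphite&q="
                + URLEncoder.encode(q, "UTF-8"));

        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
            String line;
            while ((line = in.readLine()) != null) {
                // JSON response; parse it and compare against a threshold to fire an alert.
                System.out.println(line);
            }
        }
    }
}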
2016-05-25 21:45 GMT+02:00 Alex Loddengaard :
>
Hi,
I have a producer question: Is the producer (specifically the normal Java
producer) using the file system in any way?
If so, will a producer keep working after losing this file system or its
content (for example in a containerization scenario)?
Jan
Hi,
The Java producer (and as far as I'm aware all other client libraries)
don't rely on the filesystem for anything, *except* that the JVM producer
relies on the filesystem if you want to use SSL with Kafka. We're running
thousands of producers in containers (on Heroku) and have never seen any
issues.
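For what it's worth, the SSL dependency just means the truststore/keystore
files referenced by the client config have to be present inside the container.
A rough sketch (broker address and file paths are hypothetical; the keystore
settings are only needed if the broker requires client authentication):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class SslProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9093");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        // These SSL settings are the only place the client touches the filesystem:
        // the files below must exist on the container's filesystem.
        props.put("security.protocol", "SSL");
        props.put("ssl.truststore.location", "/etc/kafka/secrets/client.truststore.jks");
        props.put("ssl.truststore.password", "changeit");
        props.put("ssl.keystore.location", "/etc/kafka/secrets/client.keystore.jks");
        props.put("ssl.keystore.password", "changeit");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("test-topic", "key", "value"));
        }
    }
}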
PING
-----Original Message-----
From: Heath Ivie [mailto:hi...@autoanything.com]
Sent: Wednesday, May 25, 2016 9:25 AM
To: users@kafka.apache.org
Subject: FetchRequest Question
Can someone please explain why, if I write 1 message to the queue, it takes N
FetchRequests to get the data out, where n
Guozhang,
Timestamp extraction seems more like a stream-level API. I guess it's a
better fit as a global option when using WallclockTimestampExtractor
or ConsumerRecordTimestampExtractor.
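For context, in the 0.10.0 Streams API this is already exposed as a global
config. A minimal sketch of setting it (the application id and bootstrap
servers are placeholders):

import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.processor.WallclockTimestampExtractor;

public class TimestampExtractorConfigSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app");    // placeholder
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        // Global option: applies to every input topic of the topology.
        props.put(StreamsConfig.TIMESTAMP_EXTRACTOR_CLASS_CONFIG,
                  WallclockTimestampExtractor.class.getName());
        // Use ConsumerRecordTimestampExtractor instead to keep the timestamps
        // embedded in the records themselves.
        // ... build the topology and pass these props to new KafkaStreams(...) ...
    }
}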
w.r.t your statement -- "I think setting timestamps for this KTable to make
sure its values is smaller than t
Hi Kiran,
Can you attach your configuration files directly, rather than in a .zip?
Most likely your broker isn't using the correct hostname:port to connect to
ZooKeeper. Although if you're using ZooKeeper SASL, you may have a SASL
misconfiguration. Set the `sun.security.krb5.debug` JVM property to `true`
to get SA
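On that last point: for the broker the property is typically passed on the JVM
command line (for example through KAFKA_OPTS); for a client process it can also
be set programmatically before the client is constructed. A rough sketch, with
a hypothetical JAAS file path:

public class EnableKerberosDebug {
    public static void main(String[] args) {
        // Must be set before any Kerberos login is attempted.
        System.setProperty("sun.security.krb5.debug", "true");
        // Hypothetical JAAS config location for the SASL client.
        System.setProperty("java.security.auth.login.config", "/etc/kafka/kafka_client_jaas.conf");
        // ... create the KafkaProducer/KafkaConsumer after these properties are set ...
    }
}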
[bcc: users@kafka.apache.org, d...@kafka.apache.org]
Hi everyone,
We would like to invite you to our first Stream Processing Meetup at
LinkedIn on June 15 at 6pm. Please RSVP here:
http://www.meetup.com/Stream-Processing-Meetup-LinkedIn/events/231454378
Going forward (at LinkedIn) we will host m
Hello Vadim,
Which Kafka version is it currently supporting up to?
Guozhang
On Wed, May 25, 2016 at 11:56 AM, Vadim Chekan
wrote:
> Hi all,
>
> I'd like kafka4net client to be added to "clients" page:
> https://cwiki.apache.org/confluence/display/KAFKA/Clients
>
> > This is a C# client, asynchrono
Vadim, do you have code samples for producer/consumer?
On Fri, May 27, 2016 at 8:36 PM Guozhang Wang wrote:
> Hello Vadim,
>
> Which Kafka version is it currently supporting up to?
>
> Guozhang
>
> On Wed, May 25, 2016 at 11:56 AM, Vadim Chekan
> wrote:
>
> > Hi all,
> >
> > I'd like kafka4net
The timestamp is not only used for windowing specs but also for flow
control (i.e. it is used as a "message chooser" among multiple input
topic partitions), see this section for details:
http://docs.confluent.io/3.0.0/streams/architecture.html#flow-control-with-timestamps
Guozhang
On Fri, M
Hi Alex,
Thanks for the response.
Here is the latest log. It looks like it is failing at session establishment
after the connection is established successfully. I have enabled SASL debug and
attached the log.
[2016-05-27 23:03:39,280] INFO TGT valid starting at:Fri May 27
23:03:39 EDT 2016 (org.ap
Hello,
I'm studying the section about log retention. For deletion, I have no problem
seeing what's going on. However, compaction is trickier. I come to you with
some questions about it:
1) In the documentation I can see that putting a null key/payload will be
used as a 'delete' marker:
"C
We have a Kafka cluster with 19 nodes. Every week we suffer a soft failure
in this cluster. How can we resolve this problem?