Thanks Guozhang Wang.
Hamza
From: Guozhang Wang
Sent: Thursday, 4 August 2016 06:58:22
To: users@kafka.apache.org
Subject: Re: Re: A specific use case
Yeah, you can buffer the records yourself in the process() function and then
rely on punctuate() for generating the outputs
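As a rough illustration of that buffer-then-punctuate pattern, here is a minimal Processor sketch against the 0.10.0-era Processor API; the punctuation interval and the key/value types are made up, not from the thread:

import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.streams.processor.Processor;
import org.apache.kafka.streams.processor.ProcessorContext;

// Buffers records in process() and only emits downstream from punctuate().
public class BufferingProcessor implements Processor<String, Long> {

    private ProcessorContext context;
    private final Map<String, Long> buffer = new HashMap<>();

    @Override
    public void init(ProcessorContext context) {
        this.context = context;
        // Ask for punctuate() to be called roughly every 60 seconds (stream time).
        context.schedule(60_000L);
    }

    @Override
    public void process(String key, Long value) {
        // Accumulate instead of forwarding immediately.
        buffer.merge(key, value, Long::sum);
    }

    @Override
    public void punctuate(long timestamp) {
        // Flush the buffered aggregates downstream, then clear the buffer.
        for (Map.Entry<String, Long> e : buffer.entrySet()) {
            context.forward(e.getKey(), e.getValue());
        }
        buffer.clear();
        context.commit();
    }

    @Override
    public void close() {}
}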
Is it possible to use Kafka to track counts instead of deleting on
compaction? I know we can aggregate ourselves and add the result to a
different topic, but that won't make sense if the time window is more than a
few seconds. Say, I could then use it to count based on a key containing
minute, hour, day.
https:
Hi Mathieu,
It is true that the DSL currently does not support configuration of the stores.
Sounds like it might be worth trying to build RocksDB and dropping it into
the classpath for now.
Eno
> On 4 Aug 2016, at 17:42, Mathieu Fenniak wrote:
>
> Hi Eno,
>
> Yes, I've looked at that. RocksDB can
Thanks in advance Eno.
-Jeyhun
On Thu, Aug 4, 2016 at 6:25 PM Eno Thereska wrote:
> Hi Jeyhun,
>
> You can use Kafka Connect to bring data into Kafka from HDFS then use
> Kafka Streams as usual. I believe there is a desire to make the Connect +
> Streams more integrated so it doesn't feel like
Hi,
Experimenting with log compaction, I see Kafka go through all the steps;
in particular, I see positive messages in log-cleaner.log and *.deleted
files. Yet once the *.deleted segment files have disappeared, the
segment and index files with size 0 are still kept.
I stopped and restarted Ka
Kafka can't by itself do aggregation (nor does it really make sense for it
to). You can build such a feature on top of log compaction relatively
easily (by sending the new count as a message under an individual key), or
you can use the KTable and aggregation features of Kafka Streams.
Thanks
Tom
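As a rough sketch of the KTable approach mentioned above, against the 0.10.0-era Streams DSL (method names such as countByKey changed in later releases; the topic and store names here are made up):

import java.util.Properties;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KStreamBuilder;
import org.apache.kafka.streams.kstream.KTable;

public class EventCounts {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "event-counts");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());
        props.put(StreamsConfig.VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());

        KStreamBuilder builder = new KStreamBuilder();
        KStream<String, String> events = builder.stream("events");

        // The KTable holds the latest count per key; its changelog is effectively
        // a compacted "current count" topic.
        KTable<String, Long> counts = events.countByKey("counts-store");

        // Publish the running counts to an output topic.
        counts.to(Serdes.String(), Serdes.Long(), "event-counts-out");

        new KafkaStreams(builder, props).start();
    }
}

Bucketing the key by minute/hour/day before counting would give the per-time-window counts asked about above.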
Kafka compresses batches in the producer before sending them to the broker.
You'll get notably better compression from this than you will from
per-message compression. I'd recommend checking your producer config and maybe
looking at the log segments on the broker with DumpLogSegments.
If you have
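For reference, a minimal producer sketch with batch compression enabled; the broker address, topic name, and sizes are placeholder values, not from this thread:

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class CompressedProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // Batches are compressed as a whole, so bigger batches compress better.
        props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "gzip");   // or "snappy" / "lz4"
        props.put(ProducerConfig.BATCH_SIZE_CONFIG, 64 * 1024);      // bytes per partition batch
        props.put(ProducerConfig.LINGER_MS_CONFIG, 20);              // wait briefly to fill batches

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("my-topic", "key", "value"));
        }
    }
}

On the broker side, running kafka.tools.DumpLogSegments (via kafka-run-class.sh) against a segment file should show whether the stored batches are actually compressed.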
Looks good here: +1
> On Aug 4, 2016, at 9:54 AM, Ismael Juma wrote:
>
> Hello Kafka users, developers and client-developers,
>
> This is the third candidate for the release of Apache Kafka 0.10.0.1. This
> is a bug fix release and it includes fixes and improvements from 53 JIRAs
> (including a
Hi all
We are getting a 'Leader not available' exception when using ACLs with TLS
on a three-node Kafka cluster, configured as [1]. The error occurs both
when trying to produce to and consume from a topic to which the producer
principal and all hosts have been granted access for testing, using the
fo
Hi,
I'd recommend turning up broker logs to DEBUG and looking at the
controller's logs. The controller talks to nodes over the network and if it
can't reach them because of ACLs, then you won't get a leader.
The only other note is to check if your brokers are talking to each other
over TLS or pla
Hello,
How many connections to ZooKeeper should a correctly working Kafka broker
have open at the same time?
Krzysiek
I took that approach. It was painful, but it ultimately did get me a working
Windows development environment.
To any who follow in my footsteps, here is my trail:
1. Upgrade to at least Kafka Streams 0.10.0.1 (currently only in RC).
- This is necessary because .1 bumps the rocksdb depende
Thanks for providing the instructions - appreciated.
On Fri, 5 Aug 2016 at 14:10 Mathieu Fenniak
wrote:
> I took that approach. It was painful, but, ultimately did get me a working
> Windows development environment.
>
> To any who follow in my footsteps, here is my trail:
>
>1. Upgrade to a
Harald,
I note that your last-modified times are all the same. Are you maybe using
Java 7? There are some details here about a JDK bug in Java 7 that causes the
last-modified time to get updated on broker restart:
https://issues.apache.org/jira/browse/KAFKA-3802
On Fri, Aug 5, 2016 at 6:12 AM, Harald
Hi users,
Is there any issue if I create the Kafka cluster using the
kafka_2.10-0.8.2.0 version while my Java producers and consumers use the
0.10.0.0 version?
org.apache.kafka : kafka-clients : 0.10.0.0
org.apache.kafka : kafka-streams : 0.10.0.0
What are the reperc
Hi there,
We are using Kafka 1.0.0.M2 with Spring and we see a lot of duplicate messages
being received by the Listener onMessage() method.
We configured :
enable.auto.commit=false
session.timeout.ms=15000
factory.getContainerProperties().setSyncCommits(true);
factory.setConcurrency(5);
So
Hi,
We are using Hortonworks HDP 2.4 with Apache Kafka 0.9 and we have an
in-house solution to pull messages from Kafka to HDFS. I would like to try
using kafka-connect-hdfs to push messages to HDFS. As far as I know,
Apache Kafka 0.9 doesn't come with kafka-connect-hdfs. What is a solid
wa
Thanks a lot for investigating and also for sharing back your findings, Mathieu!
-Michael
On Fri, Aug 5, 2016 at 3:10 PM, Mathieu Fenniak <
mathieu.fenn...@replicon.com> wrote:
> I took that approach. It was painful, but, ultimately did get me a working
> Windows development environment.
>
> To an
I found I still hit this issue without the VPN. I had to make the cluster's
user a super user, or at least give it the appropriate privileges.
On Thu, Aug 4, 2016 at 11:39 AM Bryan Baugher wrote:
> Figured this out. This had to do with me being on a VPN and running
> everything locally
>
> On Thu, Aug 4, 20
Heroku has tested this using the same performance testing setup we used to
evaluate the impact of 0.9 -> 0.10 (see
https://engineering.heroku.com/blogs/2016-05-27-apache-kafka-010-evaluating-performance-in-distributed-systems/).
We see no issues at all with them, so +1 (non-binding) from here.
passed kafka-python integration tests, +1
-Dana
On Fri, Aug 5, 2016 at 9:35 AM, Tom Crayford wrote:
> Heroku has tested this using the same performance testing setup we used to
> evaluate the impact of 0.9 -> 0.10 (see https://engineering.
> heroku.com/blogs/2016-05-27-apache-kafka-010-evaluati
+1 (non-binding)
On Fri, Aug 5, 2016 at 2:04 PM, Dana Powers wrote:
> passed kafka-python integration tests, +1
>
> -Dana
>
>
> On Fri, Aug 5, 2016 at 9:35 AM, Tom Crayford wrote:
> > Heroku has tested this using the same performance testing setup we used
> to
> > evaluate the impact of 0.9 ->
The installation instructions from Confluent will still work for you :)
If you are using deb/rpm packages, basically add the repositories as
explained here:
http://docs.confluent.io/3.0.0/installation.html#rpm-packages-via-yum
and then:
sudo yum install confluent-kafka-connect-hdfs
or
sudo apt-ge
+1 (binding)
On Fri, Aug 5, 2016 at 12:29 PM, Grant Henke wrote:
> +1 (non-binding)
>
> On Fri, Aug 5, 2016 at 2:04 PM, Dana Powers wrote:
>
> > passed kafka-python integration tests, +1
> >
> > -Dana
> >
> >
> > On Fri, Aug 5, 2016 at 9:35 AM, Tom Crayford
> wrote:
> > > Heroku has tested thi
Thanks Gwen. I went with Confluent 2.0 as it has Kafka 0.9, which matches
the one in HDP 2.4. I installed confluent-kafka-connect-hdfs and
confluent-common and soft-linked a couple of jars into Kafka's libs/.
I was able to start Kafka Connect but kafka.out was showing the following
error:
[2016-08-05 20
This is weird, and looks like an Ambari class that ended up in the
classpath and that somehow we are trying to load?
Perhaps Sriharsha or one of the HDP dudes can help.
Does it happen without the Connector too? It looks like it has to do
with Kafka broker metrics in general:
https://issues.apache
Verified artifact md5 and ran quick start / unit test. +1 (binding).
On Fri, Aug 5, 2016 at 1:49 PM, Neha Narkhede wrote:
> +1 (binding)
>
> On Fri, Aug 5, 2016 at 12:29 PM, Grant Henke wrote:
>
> > +1 (non-binding)
> >
> > On Fri, Aug 5, 2016 at 2:04 PM, Dana Powers
> wrote:
> >
> > > passed
Thanks for running the release. +1
Jun
On Thu, Aug 4, 2016 at 6:54 AM, Ismael Juma wrote:
> Hello Kafka users, developers and client-developers,
>
> This is the third candidate for the release of Apache Kafka 0.10.0.1. This
> is a bug fix release and it includes fixes and improvements from 53 J
Verified checksums and signatures. Ran quickstart. +1 (non-binding)
On Fri, Aug 5, 2016 at 2:51 PM, Jun Rao wrote:
> Thanks for running the release. +1
>
> Jun
>
> On Thu, Aug 4, 2016 at 6:54 AM, Ismael Juma wrote:
>
> > Hello Kafka users, developers and client-developers,
> >
> > This is the t
+1 (non-binding)
Tried the build and ran the quick start successfully on Ubuntu, Mac, and
Windows.
--Vahid
From: Ismael Juma
To: users@kafka.apache.org, d...@kafka.apache.org, kafka-clients
Date: 08/04/2016 06:55 AM
Subject: [VOTE] 0.10.0.1 RC2
Sent by: isma...@gmai
Hi Sergio, clients have to be the same version or older than the brokers. A
newer client won't work with an older broker.
Alex
On Fri, Aug 5, 2016 at 7:37 AM, Sergio Gonzalez <
sgonza...@cecropiasolutions.com> wrote:
> Hi users,
>
> Is there some issue if I create the kafka cluster using the
> k
+1 (binding)
Thanks Ismael!
On Thu, Aug 4, 2016 at 6:54 AM, Ismael Juma wrote:
> Hello Kafka users, developers and client-developers,
>
> This is the third candidate for the release of Apache Kafka 0.10.0.1. This
> is a bug fix release and it includes fixes and improvements from 53 JIRAs
> (inc
I'm using 0.10.0.0 and testing some failover scenarios. For dev, I have a
single Kafka node and a ZooKeeper instance. While sending events to a
topic, I shut down the broker to see if my failover handling works. However,
I don't see any indication that the send failed, but I do see the
connection refu
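One hedged way to surface send failures in a test like this is to block on the send future or inspect the callback, with a bounded max.block.ms so a dead broker does not just buffer silently. The broker address, topic, and timeouts below are placeholders:

import java.util.Properties;
import java.util.concurrent.TimeUnit;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class FailureAwareSend {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.ACKS_CONFIG, "all");           // require broker acknowledgement
        props.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, 10_000);  // don't buffer forever when the broker is down

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("test-topic", "key", "value"),
                    (metadata, exception) -> {
                        if (exception != null) {
                            // With the broker down, the failure shows up here (or via Future.get()).
                            System.err.println("send failed: " + exception);
                        }
                    })
                    .get(10, TimeUnit.SECONDS);  // block to force the error to surface in a test
        }
    }
}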
+1 (non-binding).
Verified quick start and artifacts.
On Sat, Aug 6, 2016 at 5:45 AM, Joel Koshy wrote:
> +1 (binding)
>
> Thanks Ismael!
>
> On Thu, Aug 4, 2016 at 6:54 AM, Ismael Juma wrote:
>
>> Hello Kafka users, developers and client-developers,
>>
>> This is the third candidate for the re
Achintya,
1.0.0.M2 is not an official release, so this version number is not
particularly meaningful to people on this list. What platform/distribution
are you using and how does this map to actual Apache Kafka releases?
In general, it is not possible for any system to guarantee exactly once
sema
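For comparison with the Spring setup above, here is a bare-bones at-least-once loop with the plain Java consumer: offsets are committed only after the batch has been handled, and the handler has to tolerate replays (for example via keyed upserts on the sink side). The topic, group, and broker names are placeholders:

import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class AtLeastOnceConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "my-group");
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("my-topic"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(1000);
                for (ConsumerRecord<String, String> record : records) {
                    handle(record);  // must tolerate replays after a rebalance or crash
                }
                consumer.commitSync();  // commit only after the batch was handled
            }
        }
    }

    private static void handle(ConsumerRecord<String, String> record) {
        System.out.printf("%s -> %s%n", record.key(), record.value());
    }
}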