Hi Steve,
There is no way to access that from the public side, so I won't be able to
do that. Sorry about that.
But the steps are quite simple. The only difference is that we have deployed
the Kafka cluster using a Mesos URL.
1) launch a 3-broker Kafka cluster and create a topic with multiple
partitions
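That step can be sketched with the stock CLI scripts (paths, ports, and the topic name here are illustrative; a Mesos deployment would launch the brokers differently):

```shell
# Start three brokers, each with its own config file and a unique broker.id
# (on Mesos the framework launches these for you; shown here for a plain
# deployment).
bin/kafka-server-start.sh config/server-1.properties &
bin/kafka-server-start.sh config/server-2.properties &
bin/kafka-server-start.sh config/server-3.properties &

# Create a topic with multiple partitions (0.10.x tooling talks to ZooKeeper)
bin/kafka-topics.sh --create --zookeeper localhost:2181 \
  --replication-factor 3 --partitions 6 --topic test-topic
```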
Hello,
Can someone give me a hand?
version: kafka_2.11-0.10.0.0
Ran the Kafka Streams application WordCountDemo and got this error:
ERROR Streams application error during processing in thread [StreamThread-1]:
(org.apache.kafka.streams.processor.internals.StreamThread:225)
java.lang.IllegalArgumentE
First thing that comes to my mind: did you run it against a new Kafka
(broker) version? Streams doesn't work with older brokers.
On Fri, Jun 3, 2016 at 9:15 AM 听风 <1429327...@qq.com> wrote:
> Hello,
>
>
> Can someone give me a hand?
>
>
> version: kafka_2.11-0.10.0.0
>
>
> Ran the Kafka Streams application
Hi Gerard,
I use this version: kafka_2.11-0.10.0.0
------------------ Original Message ------------------
From: "Gerard Klijs";
Date: Friday, June 3, 2016, 4:31
To: "users";
Subject: Re: [Kafka Streams] java.lang.IllegalArgumentException: Invalid
timestamp -1
First thing which comes
There are instructions here:
https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Improvement+Proposals
Let me know your user id in the wiki if you don't have the required
permissions to create pages.
Ismael
On Fri, Jun 3, 2016 at 3:33 AM, Danny Bahir wrote:
> Yes, I'm in.
>
> Sent from my
Avi,
just adding a bit to what Gwen and Eno said, and providing a few pointers.
If you are using the DSL, you can use the `process()` method to "do
whatever you want". See "Applying a custom processor" in the Kafka Streams
DSL chapter of the Developer Guide at
http://docs.confluent.io/3.0.0/stre
I assume you use a replication factor of 3 for the topics? When I ran some
tests with producers/consumers in a dockerized setup, there were only a few
failures before the producer switched to the correct new broker again. I
don't know the exact time, but it seemed like a few seconds at most; this
was with w
Hi All,
Does anyone have any experience of using kafka behind a load balancer?
Would this work? Are there any reasons why you would not want to do it?
Thanks!
Hi,
Kafka is designed to distribute traffic between brokers itself. It's
naturally distributed and does not need, and indeed will not work behind, a
load balancer. I'd recommend reading the docs for more, but
http://kafka.apache.org/documentation.html#design_loadbalancing is a good
start.
Thanks
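To make the point concrete, clients take a bootstrap list of brokers (any reachable subset is enough) and then discover the full cluster from metadata, which is why a load balancer in front adds nothing. A minimal sketch; the broker host names are hypothetical:

```java
import java.util.Properties;

public class ClientConfig {
    public static Properties producerProps() {
        Properties props = new Properties();
        // Hypothetical broker hosts; any reachable subset is enough for
        // bootstrapping -- the client fetches full cluster metadata from it.
        props.put("bootstrap.servers", "broker1:9092,broker2:9092,broker3:9092");
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        return props;
    }
}
```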
Hi Tom,
That's great, I thought as much, thanks for taking the time to respond,
much appreciated!
Cheers
On Fri, Jun 3, 2016 at 1:18 PM, Tom Crayford wrote:
> Hi,
>
> Kafka is designed to distribute traffic between brokers itself. It's
> naturally distributed and does not need, and indeed will
Srikanth,
KafkaStreams uses the new consumer, thus you need to use
> bin/kafka-consumer-groups.sh --new-consumer --bootstrap-server localhost:9092
> --list
instead of "--zookeeper XXX"
AbstractTask.commit() is called every "commit.interval.ms". The task is
requested to persist all buffered data
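For reference, commit.interval.ms is an ordinary Streams config property; a minimal sketch (the application id and interval value are made up for illustration):

```java
import java.util.Properties;

public class StreamsCommitConfig {
    public static Properties streamsProps() {
        Properties props = new Properties();
        props.put("application.id", "my-streams-app");    // hypothetical id
        props.put("bootstrap.servers", "localhost:9092");
        // How often Streams commits offsets and asks each task to persist
        // its buffered data; the 0.10.0 default is 30000 ms.
        props.put("commit.interval.ms", "10000");
        return props;
    }
}
```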
On 6/2/16, 22:26, "Christian Posta" wrote:
> Hate to bring up "non-flashy" technology... but Apache Camel would be a
> great fit for something like this. Two java libraries each with very strong
> suits.
Thanks for the tip!
BTW, I love boring technology!
One of my favorite articles ever: http:
On 6/3/16, 05:24, "Michael Noll" wrote:
> If you are using the DSL, you can use the `process()` method to "do
> whatever you want".
Ah, perfect — thank you!
> Alternatively, you can also use the low-level Processor API directly.
> Here, you'd implement the `Processor` interface, where the most
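A minimal sketch of what implementing that interface looks like, assuming kafka-streams 0.10.0.0 on the classpath; the class name and the uppercase transformation are illustrative, not from the thread:

```java
import org.apache.kafka.streams.processor.Processor;
import org.apache.kafka.streams.processor.ProcessorContext;

// Hypothetical custom low-level processor: receives each record,
// transforms it, and forwards the result downstream.
public class UppercaseProcessor implements Processor<String, String> {
    private ProcessorContext context;

    @Override
    public void init(ProcessorContext context) {
        this.context = context;
    }

    @Override
    public void process(String key, String value) {
        // "Do whatever you want" with each record, then forward it.
        context.forward(key, value == null ? null : value.toUpperCase());
    }

    @Override
    public void punctuate(long timestamp) { }  // periodic work, if scheduled

    @Override
    public void close() { }
}
```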
Hi Gerard,
When trying to reproduce this, did you use the go sarama client Safique
mentioned?
On Fri, Jun 3, 2016 at 5:10 AM, Gerard Klijs
wrote:
> I assume you use a replication factor of 3 for the topics? When I ran some
> tests with producers/consumers in a dockerized setup, there were only a f
Guozhang,
The output I pasted doesn't strictly follow that definition.
The key you mentioned (128073) is the only one with two records. I kept
that intentionally to see the behavior.
All other keys have only one record, yet they are all printed twice. The
data I pasted is all I had; it's not a sample.
Hello,
Can you check the offset from which your consumer starts to work? If it
doesn't match the expected one (0, I suppose), you can try playing with the
property called auto.offset.reset and setting it to "earliest". You can find
more information at: http://kafka.apache.org/documentation.html
You ca
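A minimal sketch of that consumer setting (the group id and broker address are placeholders):

```java
import java.util.Properties;

public class ConsumerResetConfig {
    public static Properties consumerProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "my-group");  // placeholder group id
        // Only applies when the group has no committed offset: "earliest"
        // starts from the beginning of the log, "latest" (the default)
        // from the end.
        props.put("auto.offset.reset", "earliest");
        return props;
    }
}
```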
Thanks Matthias!
1) I didn't realize kafka-consumer-groups.sh only queries the consumer
coordinator.
I was checking after terminating the streaming app. Got this via
console-consumer.
2) Understood.
3) Nope. Will check this out.
4) Yes, I can probably have a ProcessorSupplier for this.
Srikanth
On
Please ignore
Philip Remsberg
T: 855-885-5566 Ext 1621
ThreatTrack Security | philip.remsb...@threattrack.com
Connect with us on: Facebook | Twitter | LinkedIn
-Original Message-
From: Philip Remsberg
Sent: Friday, June 3, 2016 1:12 PM
To: users@kafka.apache.org; Paul Apostole
I believe that this should be going to Paul now.
Philip Remsberg
T: 855-885-5566 Ext 1621
ThreatTrack Security | philip.remsb...@threattrack.com
Connect with us on: Facebook | Twitter | LinkedIn
-Original Message-
From: Srikanth [mailto:srikanth...@gmail.com]
Sent: Friday, June 3,
I tried to re-run your application code locally: you can find the sample I
wrote here:
https://gist.github.com/guozhangwang/ed8936e5861378082e757d87a44916f1
If I do "joined.toStream().print();" this is the output:
127339 , null
131933 , null
128072 , null
128074 , null
*123507 , null
128073 , nul
Hi,
We are exploring the new quotas feature with Kafka 0.9.0.1.
Could you please let me know if the quotas feature works for follower
fetches as well?
We see that when a broker is down for a long time and then brought back, the
replica catches up aggressively, impacting the whole cluster.
Would it be poss
Hello,
Did you mean that the console producer, the broker, and Kafka Streams are
all using version 0.10.0.0?
Guozhang
On Fri, Jun 3, 2016 at 1:35 AM, 听风 <1429327...@qq.com> wrote:
> Hi Gerard,
>
>
> I use this version: kafka_2.11-0.10.0.0
>
>
>
>
> ------------------ Original Message ------------------
> From