Thank you..
On 12 October 2016 at 16:30, Jianbin Wei wrote:
> You can check this
>
> http://kafka.apache.org/documentation.html#basic_ops_add_topic
>
> But from our experience it is best to delete topics one by one, i.e., make
> sure Kafka is in good shape before and after deleting a topic befor
Useful resource Nico, Thanks
B
On Tuesday, 11 October 2016, Nicolas Motte wrote:
> Hi everyone,
>
> I created a training for Application Management and OPS teams in my
> company.
> Some sections are specific to our deployment, but most of them are generic
> and explain how Kafka and ZooKeeper w
You can check this
http://kafka.apache.org/documentation.html#basic_ops_add_topic
But from our experience it is best to delete topics one by one, i.e., make sure
Kafka is in good shape before and after deleting a topic, before working on the
next one.
Regards,
-- Jianbin
> On Oct 11, 2016, at 9:2
Thanks.. Which bash script do I need to run?
On 12 October 2016 at 15:20, Ali Akhtar wrote:
> The last time I tried, I couldn't find a way to do it, other than to
> trigger the bash script for topic deletion programmatically.
>
> On Wed, Oct 12, 2016 at 9:18 AM, Ratha v wrote:
>
> > Hi all;
> >
> >
The last time I tried, I couldn't find a way to do it, other than to
trigger the bash script for topic deletion programmatically.
On Wed, Oct 12, 2016 at 9:18 AM, Ratha v wrote:
> Hi all;
>
> I have two topics (source and target). I do some processing on the messages
> available in the source topic
Hi all;
I have two topics (source and target). I do some processing on the messages
available in the source topic and I merge both topics.
That is;
builder.stream(sourceTopic).to(targetTopic)
Once merged I no longer require the sourceTopic. I want to delete it.
How can I do that programmatically in
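For what it's worth, a minimal sketch of deleting a topic from Java without shelling out to the kafka-topics.sh script, assuming a 0.10.x-era client with the internal kafka.admin.AdminUtils / ZkUtils classes on the classpath and delete.topic.enable=true on the brokers (the ZooKeeper address, timeouts, and topic name below are placeholders):

import kafka.admin.AdminUtils;
import kafka.utils.ZkUtils;

public class DeleteTopicExample {
    public static void main(String[] args) {
        // Placeholder connection string and timeouts -- adjust for your cluster.
        ZkUtils zkUtils = ZkUtils.apply("localhost:2181", 30000, 30000, false);
        try {
            // Marks the topic for deletion; brokers must run with delete.topic.enable=true.
            AdminUtils.deleteTopic(zkUtils, "sourceTopic");
        } finally {
            zkUtils.close();
        }
    }
}

AdminUtils is internal Scala API rather than a public client, so invoking the kafka-topics.sh script (which wraps the same call) remains the more conservative option.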
Recently I had a situation occur where a network partition happened between
one of the nodes in a 3 node cluster and zookeeper. The broker affected
never reconnected to zookeeper (its ID was not registered in ZK) and the
metrics indicate that it became another active controller. It still
conside
P.S., does my scenario require using windows, or can it be achieved using
just a KTable?
On Wed, Oct 12, 2016 at 8:56 AM, Ali Akhtar wrote:
> Heya,
>
> Say I'm building a live auction site, with different products. Different
> users will bid on different products. And each time they do, I want to
>
Hello,
I think we're misunderstanding the docs on some level and I need a little
clarification. We have the following setup:
1) 0.8.2 producer -> writing to Kafka 0.10.0.1 cluster w/ version 10
message format (source cluster).
2) 0.10.0.1 mirror using the 'new consumer' reading from the source cl
Sorry my fault, in the KafkaConsumer I messed up the 'value.deserializer'
property..
Now things are working fine..
Thanks a lot.
On 12 October 2016 at 14:10, Ratha v wrote:
> HI Michael;
> Sorry, after setting "auto.offset.reset" to 'earliest', I see messages
> in my 'targetTopic'.
> But still
Thanks. That filter() method is a good solution. But whenever I look at it,
I feel an empty spot in my heart which can only be filled by:
filter(Optional::isPresent)
On Wed, Oct 12, 2016 at 12:15 AM, Guozhang Wang wrote:
> Ali,
>
> We are working on moving from Java7 to Java8 in Apache Kafka, an
Heya,
Say I'm building a live auction site, with different products. Different
users will bid on different products. And each time they do, I want to
update the product's price, so it should always have the latest price in
place.
Example: Person 1 bids $3 on Product A, and Person 2 bids $5 on the
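One hedged way to model this with just a KTable and no windows, assuming a Streams version that has groupByKey()/reduce() (0.10.1+; on 0.10.0 the equivalent is reduceByKey()) and a hypothetical 'bids' topic keyed by product id with the bid amount as the value:

import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KStreamBuilder;
import org.apache.kafka.streams.kstream.KTable;

public class LatestBidExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "auction-prices");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.VALUE_SERDE_CLASS_CONFIG, Serdes.Long().getClass());

        KStreamBuilder builder = new KStreamBuilder();
        // Hypothetical topic "bids": key = product id, value = bid amount in cents.
        KStream<String, Long> bids = builder.stream("bids");

        // Keep the highest bid seen so far per product (return bid2 instead to
        // keep the most recent one); the KTable always holds the current price.
        KTable<String, Long> currentPrice = bids
            .groupByKey()
            .reduce((bid1, bid2) -> Math.max(bid1, bid2), "current-price-store");

        currentPrice.to(Serdes.String(), Serdes.Long(), "product-prices");

        new KafkaStreams(builder, props).start();
    }
}

Because the KTable is a changelog, each new winning bid simply overwrites the previous price for that product, so no windowing is needed for this part.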
Hi Michael;
Sorry, after setting "auto.offset.reset" to 'earliest', I see messages
in my 'targetTopic'.
But I still get my ClassCastException issue when I consume messages from
the 'targetTopic'. (To consume messages I use the KafkaConsumer high-level API)
*ConsumerRecords records = consumer.poll(L
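As the follow-up elsewhere in this thread notes, the root cause here was the 'value.deserializer' property. A minimal consumer sketch with matching deserializers (the localhost address, group id, topic name, and String deserializers are placeholder assumptions; they must match whatever was actually written to the topic):

import java.util.Arrays;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class TargetTopicConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "target-topic-reader");
        props.put("auto.offset.reset", "earliest");
        // The deserializers must match the data in the topic; a mismatch here
        // is a typical cause of ClassCastException when reading records.
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Arrays.asList("targetTopic"));
            ConsumerRecords<String, String> records = consumer.poll(1000);
            for (ConsumerRecord<String, String> record : records) {
                System.out.println(record.key() + " -> " + record.value());
            }
        }
    }
}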
Thanks Guozhang for the detailed explanation.
It was really helpful.
Regards,
Rahul Misra
-Original Message-
From: Guozhang Wang [mailto:wangg...@gmail.com]
Sent: Wednesday, October 12, 2016 6:25 AM
To: users@kafka.apache.org
Subject: Re: Frequent Consumer Rebalance/ Commit fail excepti
Hi Michael;
Really appreciate the clear explanation..
I modified my code as you mentioned. I have written a custom Serde,
serializer and deserializer.
But now the problem I see is that the two topics are not merged, i.e. messages
in the 'sourceTopic' are not passed to the 'targetTopic'. ('targetTopic' has '0
Ali,
When I did testing / development, I usually delete zk directory as well
(default is /tmp/zookeeper) for clean up.
Guozhang
On Tue, Oct 11, 2016 at 3:33 PM, Ali Akhtar wrote:
> In development, I often need to delete all existing data in all topics, and
> start over.
>
> My process for th
You could have every producer consume from a specific partition of the response
topic (not using the high-level consumer) and send that partition number in the
message... when the consumer processes the data, it sends the answer to that
specific partition
Sent from my iPhone
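A rough sketch of that request/reply pattern, with hypothetical 'requests'/'responses' topic names and a String payload that simply embeds the reply partition number:

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.TopicPartition;

public class ReplyPartitionSketch {

    // Requester side: tell the other side (inside the message payload) which
    // partition of the response topic to answer on.
    static void sendRequest(KafkaProducer<String, String> producer, int myPartition) {
        String payload = "replyPartition=" + myPartition + ";data=...";
        producer.send(new ProducerRecord<>("requests", payload));
    }

    // Responder side: write the answer directly to the partition named in the request.
    static void sendReply(KafkaProducer<String, String> producer, int replyPartition, String answer) {
        producer.send(new ProducerRecord<>("responses", replyPartition, null, answer));
    }

    // Requester's consumer: assign() rather than subscribe(), so no consumer
    // group rebalancing can move the reply partition to someone else.
    static KafkaConsumer<String, String> replyConsumer(Properties props, int myPartition) {
        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.assign(Collections.singletonList(new TopicPartition("responses", myPartition)));
        return consumer;
    }
}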
Hello Rahul,
This "CommitFailedException" usually means that the consumer group
coordinator that sits on the server side has decided that this consumer is
"failed" from the heartbeat protocol and hence kicked out of the group, and
later when it sees a commit-offset request from this consumer it wi
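The usual mitigation, on consumers where heartbeats are driven by poll() (pre-0.10.1 behaviour), is to make sure the work done between two poll() calls fits inside the session timeout. A hedged example of the relevant settings; the values are placeholders, not recommendations:

import java.util.Properties;

public class ConsumerTuning {
    static Properties props() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "my-group");
        // How long the coordinator waits before declaring the consumer dead.
        props.put("session.timeout.ms", "30000");
        // Fewer records per poll -> shorter processing gaps between polls.
        props.put("max.poll.records", "100");
        return props;
    }
}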
One common issue behind lost messages is consumer auto-committing (the related
configs are "enable.auto.commit" and "auto.commit.interval.ms"): from the Kafka
consumer's point of view, once the messages are returned from the "poll" call
they are considered "consumed", and if committing offsets is called it will
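A minimal at-least-once sketch that avoids this by turning auto-commit off and committing only after processing succeeds (topic name, group id, and localhost address are placeholder assumptions):

import java.util.Arrays;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class AtLeastOnceConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "at-least-once-group");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        // Turn auto-commit off so offsets are only committed after processing succeeds.
        props.put("enable.auto.commit", "false");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Arrays.asList("my-topic"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(1000);
                for (ConsumerRecord<String, String> record : records) {
                    process(record); // do the real work first...
                }
                consumer.commitSync(); // ...then commit, so a crash replays instead of losing messages
            }
        }
    }

    static void process(ConsumerRecord<String, String> record) {
        System.out.println(record.value());
    }
}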
Sachin,
Just a side note in addition to what Matthias mentioned: in the coming
0.10.1.0 release Kafka Streams has added the feature to do
auto-repartitioning by detecting whether the message keys are joinable or not. To
give a few examples:
---
stream1 = builder.stream("topic1");
stream2 = builder
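A hypothetical sketch of the kind of pipeline this helps with, assuming the 0.10.1.0 DSL, default String serdes, and a stream that is re-keyed by a field of its value before an aggregation (topic name, key choice, and store name are placeholders):

import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KStreamBuilder;
import org.apache.kafka.streams.kstream.KTable;

public class RepartitionSketch {
    static KTable<String, Long> build(KStreamBuilder builder) {
        // Hypothetical topic: records keyed by order id, value is the customer id.
        KStream<String, String> orders = builder.stream("topic1");

        return orders
            // Key-changing operation: the data is no longer partitioned by the new key...
            .selectKey((orderId, customerId) -> customerId)
            // ...so 0.10.1.0 Streams inserts an internal repartition topic here
            // before the aggregation; on 0.10.0 you would have to route the
            // stream through an explicitly created topic yourself.
            .groupByKey()
            .count("orders-per-customer");
    }
}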
In development, I often need to delete all existing data in all topics, and
start over.
My process for this currently is: stop zookeeper, stop kafka broker, rm -rf
~/kafka/data/*
But when I bring the broker back on, it often prints a bunch of errors and
needs to be restarted before it actually wo
They both have a lot of the same methods, and yet they can't be used
polymorphically because they don't share the same parent interface.
I think KIterable or something like that should be used as their base
interface w/ shared methods.
Hi,
I already tried to store the RocksDB files somewhere else by specifying the
Kafka state dir property, but no luck, same behavior.
I will try to run with trunk tomorrow to see if it stops correctly,
and I will keep you informed. There must be something wrong with my configuration,
because I googled
Hi Pierre,
I tried the exact code on MacOs and am not getting any errors. Could you check
if all the directories in /tmp where Kafka Streams writes the RocksDb files are
empty? I'm wondering if there is some bad state left over.
Finally looks like you are running 0.10.0, could you try running
Ali,
We are working on moving from Java7 to Java8 in Apache Kafka, and the
Streams client is one of the motivations for doing so. Stay tuned on the
mailing list for when it will come.
Currently Streams won't automatically filter out null values for you since
in some other cases they may have semantic mea
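A minimal sketch of such an explicit filter, assuming a hypothetical String-valued input topic and default String serdes:

import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KStreamBuilder;

public class DropNulls {
    static KStream<String, String> nonNullValues(KStreamBuilder builder) {
        KStream<String, String> input = builder.stream("input-topic"); // hypothetical topic
        // Streams does not drop null values for you; an explicit filter does it.
        // (With Java 7 this would be an anonymous Predicate instead of a lambda.)
        return input.filter((key, value) -> value != null);
    }
}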
Hi everyone,
I created a training for Application Management and OPS teams in my company.
Some sections are specific to our deployment, but most of them are generic
and explain how Kafka and ZooKeeper work.
I uploaded it on SlideShare, I thought it might be useful to other people:
http://fr.slide
Hi,
I have a simple test where I create a topology builder with one topic, one
processor using a persistent store, then I create a KafkaStreams instance, start
it, wait a bit, then close it. Each time, the JVM crashes (seg fault) when
flushing the data. Has anyone already met this kind of problem?
OS:
Ubun
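For context, a hedged reconstruction of the kind of setup described, using the 0.10.0 Processor API with a persistent (RocksDB-backed) store; the topic name, store name, and counting logic are placeholders, not the original code:

import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.processor.AbstractProcessor;
import org.apache.kafka.streams.processor.ProcessorContext;
import org.apache.kafka.streams.processor.TopologyBuilder;
import org.apache.kafka.streams.state.KeyValueStore;
import org.apache.kafka.streams.state.Stores;

public class PersistentStoreTopology {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "persistent-store-test");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        TopologyBuilder builder = new TopologyBuilder();
        builder.addSource("Source", "input-topic");
        builder.addProcessor("Process", CountingProcessor::new, "Source");
        builder.addStateStore(
            Stores.create("Counts").withStringKeys().withLongValues().persistent().build(),
            "Process");

        KafkaStreams streams = new KafkaStreams(builder, props);
        streams.start();
        Thread.sleep(10_000);   // let it run for a bit, as in the report
        streams.close();        // flushes the RocksDB-backed store on shutdown
    }

    static class CountingProcessor extends AbstractProcessor<String, String> {
        private KeyValueStore<String, Long> counts;

        @Override
        @SuppressWarnings("unchecked")
        public void init(ProcessorContext context) {
            super.init(context);
            counts = (KeyValueStore<String, Long>) context.getStateStore("Counts");
        }

        @Override
        public void process(String key, String value) {
            Long previous = counts.get(key);
            counts.put(key, previous == null ? 1L : previous + 1L);
        }
    }
}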
Hello, we are going to be upgrading the instance types of our brokers. We will
shut them down, upgrade, and then restart them. All told, they will be down for
about 15 minutes. Upon restart, is there anything we need to do other than run
preferred leader election? The brokers will start to ca
Hi,
we met an issue where the ephemeral node of a broker in ZooKeeper was lost
when the network between the broker and the ZK cluster was not good enough, while
the broker process still existed. As we know, the controller would consider it
to be offline in Kafka. After we enabled the zkClient log, we
Sounds good that you got up to 500MB/s. At that point I suspect you reach a
sort of steady state where the cache is continuously flushing to the SSDs, so
you are effectively bottlenecked by the SSD. I believe this is as expected (the
bottleneck resource will dominate the end to end throughput ev
Actually, I wanted to include the following link for the JVM docs (the
information matches what's written in the earlier link I shared):
http://kafka.apache.org/documentation#java
On Tue, Oct 11, 2016 at 11:21 AM, Michael Noll wrote:
> Regarding the JVM, we recommend running the latest version
Regarding the JVM, we recommend running the latest version of JDK 1.8 with
the G1 garbage collector:
http://docs.confluent.io/current/kafka/deployment.html#jvm
And yes, Kafka does run on Ubuntu 16.04, too.
(Confluent provides .deb packages [1] for Apache Kafka if you are looking
for these to inst
Thanks for the follow-up and the bug report, David.
We're taking a look at that.
On Mon, Oct 10, 2016 at 4:36 PM, David Garcia wrote:
> Thx for the responses. I was able to identify a bug in how the times are
> obtained (offsets resolved as unknown cause the issue):
>
> “Actually, I think th
When I wrote:
"If you haven't changed to default key and value serdes, then `to()`
will fail because [...]"
it should have read:
"If you haven't changed the default key and value serdes, then `to()`
will fail because [...]"
On Tue, Oct 11, 2016 at 11:12 AM, Michael Noll wrote:
> Rat
Ratha,
if you based your problematic code on the PipeDemo example, then you should
have these two lines in your code (which most probably you haven't changed):
props.put(StreamsConfig.KEY_SERDE_CLASS_CONFIG,
Serdes.String().getClass());
props.put(StreamsConfig.VALUE_SERDE_CLASS_CONFIG,
Se
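For completeness, a hedged sketch of the full pattern those lines come from, assuming String keys/values and placeholder topic and bootstrap addresses; a custom Serde for non-String types would be passed either as these defaults or explicitly per call:

import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStreamBuilder;

public class PipeWithSerdes {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "pipe-demo-like");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // Default serdes, as in the PipeDemo example; they must match the data
        // actually stored in the source topic, otherwise reads and writes fail.
        props.put(StreamsConfig.KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        KStreamBuilder builder = new KStreamBuilder();
        // Uses the default serdes above; a custom Serde could instead be passed
        // explicitly: builder.stream(keySerde, valueSerde, "sourceTopic").
        builder.stream("sourceTopic").to("targetTopic");

        new KafkaStreams(builder, props).start();
    }
}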