Sometimes I see warnings in my logs if I create a consumer for a topic
which doesn't exist. Such as:
org.apache.kafka.clients.NetworkClient - Error while fetching metadata
with correlation id 1 : {example_topic=LEADER_NOT_AVAILABLE}
If later messages are posted to that topic (which will create i
oes-not-exist
On 7 Jul 2017 9:46 pm, "Ali Akhtar" wrote:
> Sometimes I see warnings in my logs if i create a consumer for a topic
> which doesn't exist. Such as:
>
> org.apache.kafka.clients.NetworkClient - Error while fetching metadata
> with correlation id 1
Oh gotcha, thanks. So a topic will be created if topic creation is enabled.
On Sat, Jul 8, 2017 at 8:14 PM, M. Manna wrote:
> Please check my previous email.
>
> On Sat, 8 Jul 2017 at 2:32 am, Ali Akhtar wrote:
>
> > What happens if auto creation is enabled but the t
Not too familiar with that error, but I do have Kafka working on
Kubernetes. I'll share my files here in case that helps:
Zookeeper:
https://gist.github.com/aliakhtar/812974c35cf2658022fca55cc83f4b1d
Kafka: https://gist.github.com/aliakhtar/724fbee6910dec7263ab70332386af33
Essentially I have 3 k
How do you know that the brokers don't talk to each other?
On Thu, Sep 14, 2017 at 4:32 PM, Yongtao You
wrote:
> Hi,
> I would like to know the right way to setup a Kafka cluster with Nginx in
> front of it as a reverse proxy. Let's say I have 2 Kafka brokers running on
> 2 different hosts; and
L. But I could be wrong.
>
>
> Thanks!
>
> -Yongtao
>
>
> On Thursday, September 14, 2017, 8:07:38 PM GMT+8, Ali Akhtar <
> ali.rac...@gmail.com> wrote:
>
>
> How do you know that the brokers don't talk to each other?
>
> On Thu, Sep 14, 2017 at 4:
r listens on. It's an [info] message so I'm not sure how
> serious it is, but I don't see messages sent from filebeat in Kafka. :(
>
> Thanks!
> -Yongtao
>
> On Thursday, September 14, 2017, 8:31:31 PM GMT+8, Ali Akhtar <
> ali.rac...@gmail.com> wrote:
parties = ports *
On Thu, Sep 14, 2017 at 8:04 PM, Ali Akhtar wrote:
> I would try to put the SSL on different ports than what you're sending
> kafka to. Make sure the kafka ports don't do anything except communicate in
> plaintext, put all 3rd parties on different parties.
&
Using version 0.10.0.1, how can I create kafka topics using the java client
/ API?
Stackoverflow answers describe using kafka.admin.AdminUtils, but this class
is not included in the kafka-clients maven dependency. I also don't see the
package kafka.admin in the javadocs: http://kafka.apache.
org/0
Do
I need to talk to the bash script?
On Wed, Sep 14, 2016 at 8:45 AM, Ali Akhtar wrote:
> Using version 0.10.0.1, how can I create kafka topics using the java
> client / API?
>
> Stackoverflow answers describe using kafka.admin.AdminUtils, but this
> class is not included in the
call "partitionsFor(String topic)" on your producer.
> This will create the topic without sending a message to it. I'm not sure
> it's 100% smart, but I found it to be better than the alternatives I could
> find. :-)
>
> Mathieu
>
>
> On Wed, Sep 14, 2016 a
If so, can you please share if you're using a publicly available
deployment, or if you created you own, how you did it? (I.e which services
/ replication controllers you have)
Also, how has the performance been for you? I've read a report which said
the performance suffered running kafka as a dock
I'm guessing it's not possible to delete topics?
On Thu, Sep 15, 2016 at 5:43 AM, Ali Akhtar wrote:
> Thank you Martin
>
> On 15 Sep 2016 3:05 am, "Mathieu Fenniak"
> wrote:
>
>> Hey Ali,
>>
>> If you have auto create turned on, which it sounds
I've noticed that, on my own machine, if I start a kafka broker, then
create a topic, then I stop that server and restart it, the topic I created
is still kept.
However, on restarts, it looks like the topic is deleted.
It's also possible that the default retention policy of 24 hours causes the
mes
It sounds like a network issue. Where are the 3 servers located / hosted?
On Thu, Sep 15, 2016 at 11:51 AM, kant kodali wrote:
> Hi,
> I have the following setup.
> Single Kafka broker and Zookeeper on Machine 1; single Kafka producer on
> Machine 2; single Kafka consumer on Machine 3.
> When a pr
e data logs to /tmp/ folder. /tmp gets
> cleared
> on system reboots. change log.dirs config property to some other directory.
>
> On Thu, Sep 15, 2016 at 11:46 AM, Ali Akhtar wrote:
>
> > I've noticed that, on my own machine, if I start a kafka broker, then
> >
); process.exit();
> } else if (received % hash === 0){
> process.stdout.write(received + '\n');}});
> consumer.on('error', function (err) { console.log('error', err);});
>
but very
> late. I send about 300K messages using the node.js client and I am
> receiving at a very low rate. Really not sure what is going on.
>
> On Thu, Sep 15, 2016 12:06 AM, Ali Akhtar ali.rac...@gmail.com
> wrote:
> Your code seems to
t;
>
>
> On Thu, Sep 15, 2016 12:33 AM, Ali Akhtar ali.rac...@gmail.com
>
> wrote:
> What's the instance size that you're using? With 300k messages your single
> broker might not be able to handle it.
>
> On Thu, Sep 15, 2016 at 12:30 PM,
throughput was 2K messages/sec. I am unable to push 300K messages
> with
> Kafka with the above configuration and environment so at this point my
> biggest
> question is what is the fair setup for Kafka so its comparable with NATS
> and
> NSQ?
> kant
>
> Again, the big question is: what is the right setup for Kafka to be
> comparable with the others I mentioned in my previous email?
>
> On Thu, Sep 15, 2016 1:47 AM, Ali Akhtar ali.rac...@gmail.com
> wrote:
> The issue is clearly that you're running out of r
It sounds like you can implement the 'mapping service' component yourself
using Kafka.
Have all of your messages go to one kafka topic. Have one consumer group
listening to this 'everything goes here' topic. This consumer group acts as
your mapping service. It looks at each message, and based on
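A hedged sketch of that routing step in plain Java — the "type" field and the destination topic names are invented for illustration, not part of the original suggestion:

```java
import java.util.Map;

public class MappingService {
    // Inspect a message and decide which downstream topic it belongs to.
    // In the real consumer-group version, you would produce the record to
    // the returned topic; here we only show the routing decision.
    static String destinationFor(Map<String, String> message) {
        String type = message.getOrDefault("type", "unknown");
        switch (type) {
            case "order":   return "orders-topic";
            case "payment": return "payments-topic";
            default:        return "dead-letter-topic";
        }
    }
}
```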
Examine server.properties and see which port you're using in there
On Thu, Sep 15, 2016 at 3:52 PM, kant kodali wrote:
> which port should I use 9091 or 9092 or 2181 to send messages through kafka
> when using a client Library?
> I start kafka as follows:
> sudo bin/zookeeper-server-start.sh con
I've created a 3 broker kafka cluster, changing only the config values for
broker id, log.dirs, and zookeeper connect. I left the remaining fields as
default.
The broker ids are 1, 2, 3. I opened the port 9092 on AWS.
I then created a topic 'test' with replication factor of 2, and 3
partitions.
other using the
private IPs.
Shouldn't that be enough? I don't want to expose kafka publicly.
On Fri, Sep 16, 2016 at 10:48 PM, Ali Akhtar wrote:
> I've created a 3 broker kafka cluster, changing only the config values for
> broker id, log.dirs, and zookeeper connect. I left th
You can create multiple partitions of a topic and kafka will attempt to
distribute them evenly.
E.g if you have 3 brokers and you create 3 partitions of a topic, each
broker will be the leader of 1 of the 3 partitions.
P.S how did the benchmarking go?
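As a minimal sketch of the even-spread idea (this is not Kafka's actual leader-assignment algorithm, just the round-robin principle described above):

```java
import java.util.*;

public class LeaderSpread {
    // Spread N partition leaders round-robin across the broker list, so
    // with 3 brokers and 3 partitions each broker leads exactly one.
    static Map<Integer, Integer> assignLeaders(int partitions, List<Integer> brokerIds) {
        Map<Integer, Integer> leaderByPartition = new HashMap<>();
        for (int p = 0; p < partitions; p++) {
            leaderByPartition.put(p, brokerIds.get(p % brokerIds.size()));
        }
        return leaderByPartition;
    }

    public static void main(String[] args) {
        System.out.println(assignLeaders(3, Arrays.asList(1, 2, 3))); // prints {0=1, 1=2, 2=3}
    }
}
```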
On Sat, Sep 17, 2016 at 1:36 PM, kant kodali
Perhaps if you add 1 node, take down existing node, etc?
On Sun, Sep 25, 2016 at 10:37 PM, brenfield111
wrote:
> I need to change the hostnames and ips for the Zookeeper ensemble
> serving my Kafka cluster.
>
> Will Kafka carry on as usual, along with its existing ZK nodes, after
> making the c
I have a somewhat tricky use case, and I'm looking for ideas.
I have 5-6 Kafka producers, reading various APIs, and writing their raw
data into Kafka.
I need to:
- Do ETL on the data, and standardize it.
- Store the standardized data somewhere (HBase / Cassandra / Raw HDFS /
ElasticSearch / Pos
It needs to be able to scale to a very large amount of data, yes.
On Thu, Sep 29, 2016 at 7:00 PM, Deepak Sharma
wrote:
> What is the message inflow ?
> If it's really high , definitely spark will be of great use .
>
> Thanks
> Deepak
>
> On Sep 29, 2016 19:24, "
On Thu, Sep 29, 2016 at 8:24 PM, Ali Akhtar wrote:
>
>> I don't think I need a different speed storage and batch storage. Just
>> taking in raw data from Kafka, standardizing, and storing it somewhere
>> where the web UI can query it, seems like it will be enough.
>>
&g
ost /
> >> duplicated data? Are your writes idempotent?
> >>
> >> Absent any other information about the problem, I'd stay away from
> >> cassandra/flume/hdfs/hbase/whatever, and use a spark direct stream
> >> feeding postgres.
> >>
>
; On Thu, Sep 29, 2016 at 10:40 AM, Deepak Sharma
> wrote:
> > If you use spark direct streams , it ensure end to end guarantee for
> > messages.
> >
> >
> > On Thu, Sep 29, 2016 at 9:05 PM, Ali Akhtar
> wrote:
> >>
> >> My concern with Post
Avi,
Why did you choose Druid over Postgres / Cassandra / Elasticsearch?
On Fri, Sep 30, 2016 at 1:09 AM, Avi Flax wrote:
>
> > On Sep 29, 2016, at 09:54, Ali Akhtar wrote:
> >
> > I'd appreciate some thoughts / suggestions on which of these
> alternatives I
>
Newbie question, but what exactly does log.cleaner.enable=true do, and how
do I know if I need to set it to be true?
Also, if config changes like that need to be made once a cluster is up and
running, what's the recommended way to do that? Do you killall -12 kafka
and then make the change, and the
You may be able to control the starting offset, but if you try to control
which instance gets offset 4.. you'll lose all benefits of parallelism.
On 4 Oct 2016 3:02 pm, "Kaushil Rambhia/ MUM/CORP/ ENGINEERING" <
kaushi...@pepperfry.com> wrote:
> Hi guys,
> I am using Apache Kafka with phprd kafk
I need to consume a large number of topics, and handle each topic in a
different way.
I was thinking about creating a different KStream for each topic, and doing
KStream.foreach for each stream, to process incoming messages.
However, it's unclear if this will be handled in a parallel way by default.
are
> distributed over the running instances.
>
> Please see here for more details
> http://docs.confluent.io/current/streams/architecture.html#parallelism-model
>
>
> - -Matthias
>
> On 10/4/16 1:27 PM, Ali Akhtar wrote:
> > I need to consume a large number of t
That's awesome. Thanks.
On Wed, Oct 5, 2016 at 2:19 AM, Matthias J. Sax
wrote:
>
> Yes.
>
> On 10/4/16 1:47 PM, Ali Akhtar wrote:
> > Hey Matthias,
> >
> > All my topics have 3 partitions each, and I
<3
On Wed, Oct 5, 2016 at 2:31 AM, Ali Akhtar wrote:
> That's awesome. Thanks.
>
> On Wed, Oct 5, 2016 at 2:19 AM, Matthias J. Sax
> wrote:
>
>>
>> Yes.
>>
>> On 10/4/16 1:47 PM, A
Just noticed this on pulling up the documentation. Oh yeah! This new look
is fantastic.
On Wed, Oct 5, 2016 at 4:31 AM, Vahid S Hashemian wrote:
> +1
>
> Thank you for the much needed new design.
> At first glance, it looks great, and more professional.
>
> --Vahid
>
>
>
> From: Gwen Shapira
I don't see a space in that topic name
On Wed, Oct 5, 2016 at 6:42 PM, Hamza HACHANI
wrote:
> Hi,
>
> I created a topic called device-connection-invert-key-value-the
> metric-changelog.
>
> I insist that there is a space in it.
>
>
>
> Now that i want to delete it because my cluster can no longe
> It's often a good
idea to over-partition your topics. For example, even if today 10 machines
(and thus 10 partitions) would be sufficient, pick a higher number of
partitions (say, 50) so you have some wiggle room to add more machines
(11...50) later if need be.
If you create e.g 30 partitions,
Heya,
I have some Kafka producers, which are listening to webhook events, and for
each webhook event, they post its payload to a Kafka topic.
Each payload contains a timestamp from the webhook source.
This timestamp is the source of truth about which events happened first,
which happened last, e
ach a state store to your processor and compare the
> timestamps of the current record with the timestamp of the one in your
> store.
>
> - -Matthias
>
> On 10/6/16 8:52 AM, Ali Akhtar wrote:
> > Heya,
> >
> > I have some Kafka producers, which are listening to web
product, go to
> the same instance. You can ensure this, by given all records of the
> same product the same key and "groupByKey" before processing the data.
>
> - -Matthias
>
> On 10/6/16 10:55 AM, Ali Akhtar wrote:
> > Thank you, State Store seems promising.
(Assuming 'last one' can be
determined using the timestamp in the json of the message)
On Fri, Oct 7, 2016 at 2:54 AM, Ali Akhtar wrote:
> Thanks for the reply.
>
> Its not possible to provide keys, unfortunately. (Producer is written by a
> colleague, and said colleague jus
at
> returns the JSON embedded TS instead of record TS (as
> DefaultTimestampExtractor does)
>
> See
> http://docs.confluent.io/3.0.1/streams/developer-guide.html#timestamp-extractor
>
>
> - -Matthias
>
> On 10/6/16 2:59 PM, Ali Akhtar wrote:
> > Sorry, to
What the subject says. For dev, it would be a lot easier if debugging info
could be printed to stdout instead of another topic, where it will persist.
Any ideas if this is possible?
print() or #writeAsText()
>
>
> - -Matthias
>
> On 10/6/16 6:25 PM, Ali Akhtar wrote:
> > What the subject says. For dev, it would be a lot easier if
> > debugging info could be printed to stdout instead of another topic,
> > where it will persist.
> >
> >
.html#application-
> > reset-tool
> >
> > and
> > http://www.confluent.io/blog/data-reprocessing-with-kafka-streams-resetting-a-streams-application/
> >
> >
> > About the timestamp issue: it seems that your Go client does not
> > assign vali
I am also not aware of any
> plan to add such a feature in the short-term.
>
>
>
> On Fri, Oct 7, 2016 at 1:36 PM, Ali Akhtar wrote:
>
> > Is it possible to have kafka-streams-reset be automatically called during
> > development? Something like streams.cleanUp() but whi
Since we're using Java 8 in most cases anyway, Serdes / Serializers should
use Optionals, to avoid having to deal with the lovely nulls.
eatures just yet.
> >
> > FYI: There's on ongoing conversation about when Kafka would move from
> Java
> > 7 to Java 8.
> >
> > On Fri, Oct 7, 2016 at 6:14 PM, Ali Akhtar wrote:
> >
> > > Since we're using Java 8 in most cases anyway, Serdes / Serializers
> > should
> > > use options, to avoid having to deal with the lovely nulls.
> > >
> >
>
>
>
> --
> -- Guozhang
>
Also, you can set a retention period and have messages get auto deleted
after a certain time (default 1 week)
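For reference, a hedged sketch of the relevant settings (values are examples, not recommendations): the broker-wide default comes from `log.retention.hours` in server.properties, and `retention.ms` can override it per topic.

```properties
# server.properties -- broker-wide default: keep messages for one week
log.retention.hours=168

# per-topic override: delete messages older than 24 hours
retention.ms=86400000
```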
On Sat, Oct 8, 2016 at 3:21 AM, Hans Jespersen wrote:
> Kafka doesn’t work that way. Kafka is “Publish-subscribe messaging
> rethought as a distributed commit log”. The messages in the l
ooks
> fine to me.
>
>
> Guozhang
>
> On Fri, Oct 7, 2016 at 3:12 PM, Ali Akhtar wrote:
>
> > Hey G,
> >
> > Looks like the only difference is a valueSerde parameter.
> >
> > How does that prevent having to look for nulls in the consumer?
>
A kafka producer written elsewhere that I'm using, which uses the Go kafka
driver, is sending messages where the key is null.
Is this OK - or will this cause issues due to partitioning not happening
correctly?
What would be a good way to generate keys in this case, to ensure even
partition spread
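For illustration only — this is not Kafka's actual DefaultPartitioner (which hashes keys with murmur2), just the principle at play: a non-null key always lands on the same partition, while null keys can simply be rotated round-robin for an even spread.

```java
public class PartitionSketch {
    private int counter = 0;

    // Decide which partition a record goes to, given its key.
    int partitionFor(String key, int numPartitions) {
        if (key == null) {
            // Keyless records: rotate across partitions for even spread.
            return (counter++) % numPartitions;
        }
        // Keyed records: same key always maps to the same partition.
        return (key.hashCode() & 0x7fffffff) % numPartitions;
    }
}
```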
(https://github.com/confluentinc/confluent-kafka-go/).
>
> If you decide to generate keys and you want even spread, a random
> number generator is probably your best bet.
>
> Gwen
>
> On Sun, Oct 9, 2016 at 6:05 PM, Ali Akhtar wrote:
> > A kafka producer written elsew
can't tell whether your
> Go client follows the behavior of Kafka's Java producer.
>
> -Michael
>
>
>
>
> [1]
> https://github.com/apache/kafka/blob/trunk/clients/src/
> main/java/org/apache/kafka/clients/producer/internals/
> DefaultPartitioner.java
&
So.. it should be okay to have null keys, I'm guessing.
On Mon, Oct 10, 2016 at 11:51 AM, Ali Akhtar wrote:
> Hey Michael,
>
> We're using this one: https://github.com/Shopify/sarama
>
> Any ideas how that one works?
>
> On Mon, Oct 10, 2016 at 11:48 AM, Michael No
They both have a lot of the same methods, and yet they can't be used
polymorphically because they don't share the same parent interface.
I think KIterable or something like that should be used as their base
interface w/ shared methods.
In development, I often need to delete all existing data in all topics, and
start over.
My process for this currently is: stop zookeeper, stop kafka broker, rm -rf
~/kafka/data/*
But when I bring the broker back on, it often prints a bunch of errors and
needs to be restarted before it actually wo
Heya,
Say I'm building a live auction site, with different products. Different
users will bid on different products. And each time they do, I want to
update the product's price, so it should always have the latest price in
place.
Example: Person 1 bids $3 on Product A, and Person 2 bids $5 on the
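A plain-Java sketch of the "latest price wins" idea, assuming each bid carries a product id, an amount, and a timestamp (field names are invented). A KTable keyed by product id gives similar semantics: the newest value per key replaces the old one.

```java
import java.util.*;

public class LatestPrice {
    static class Bid {
        final String productId; final int amount; final long timestamp;
        Bid(String productId, int amount, long timestamp) {
            this.productId = productId; this.amount = amount; this.timestamp = timestamp;
        }
    }

    // Keep only the most recent bid per product, judged by the bid's own timestamp.
    static Map<String, Bid> latestByProduct(List<Bid> bids) {
        Map<String, Bid> latest = new HashMap<>();
        for (Bid b : bids) {
            Bid current = latest.get(b.productId);
            if (current == null || b.timestamp > current.timestamp) {
                latest.put(b.productId, b);
            }
        }
        return latest;
    }
}
```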
ignored; you can, though, apply a simple filter such like "filter((key,
> value) => value != null)" before your processor lambda operator, if it
> looks clearer in your code.
>
> Guozhang
>
>
> On Sun, Oct 9, 2016 at 3:14 PM, Ali Akhtar wrote:
>
> > It isn
P.S, does my scenario require using windows, or can it be achieved using
just KTable?
On Wed, Oct 12, 2016 at 8:56 AM, Ali Akhtar wrote:
> Heya,
>
> Say I'm building a live auction site, with different products. Different
> users will bid on different products. And each time t
The last time I tried, I couldn't find a way to do it, other than to
trigger the bash script for topic deletion programmatically.
On Wed, Oct 12, 2016 at 9:18 AM, Ratha v wrote:
> Hi all;
>
> I have two topics(source and target). I do some processing on the message
> available in the source topic
imestamp of the current record.
>
> Does this makes sense?
>
> To fix your issue, you could add a .transformValue() before your KTable,
> which allows you to access the timestamp of a record. If you add this
> timestamp to your value and pass it to the KTable afterwards, you can
> access i
s, then you do not need to do anything special.
>
>
>
>
>
> On Wed, Oct 12, 2016 at 10:22 PM, Ali Akhtar wrote:
>
> > Thanks Matthias.
> >
> > So, if I'm understanding this right, Kafka will not discard messages
> > which arrive out of orde
I am probably being too OCD anyway. It will almost never happen that
messages from another VM in the same network on EC2 arrive out of order.
Right?
On 13 Oct 2016 8:47 pm, "Ali Akhtar" wrote:
> Makes sense. Thanks
>
> On 13 Oct 2016 12:42 pm, "Michael Noll" wro
Is there a maven artifact that can be used to create instances
of EmbeddedSingleNodeKafkaCluster for unit / integration tests?
I'm using Kafka Streams, and I'm attempting to write integration tests for
a stream processor.
The processor listens to a topic, processes incoming messages, and writes
some data to Cassandra tables.
I'm attempting to write a test which produces some test data, and then
checks whether or not the
Please change that.
On Thu, Oct 20, 2016 at 1:53 AM, Eno Thereska
wrote:
> I'm afraid we haven't released this as a maven artefact yet :(
>
> Eno
>
> > On 18 Oct 2016, at 13:22, Ali Akhtar wrote:
> >
> > Is there a maven artifact tha
similar queue related tests we put the check in a loop. Check every
> second until either the result is found or a timeout happens.
>
> -Dave
>
> -Original Message-
> From: Ali Akhtar [mailto:ali.rac...@gmail.com]
> Sent: Wednesday, October 19, 2016 3:38 PM
> To: users
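The check-in-a-loop pattern described above can be sketched as follows (the timeout and interval values are arbitrary):

```java
import java.util.function.BooleanSupplier;

public class Poller {
    // Re-check a condition on an interval until it holds or a timeout expires.
    static boolean awaitCondition(BooleanSupplier condition, long timeoutMs, long intervalMs) {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (condition.getAsBoolean()) {
                return true;                       // result found
            }
            try {
                Thread.sleep(intervalMs);          // wait before re-checking
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        return false;                              // timed out
    }
}
```

In the Cassandra-assertion case, the condition would query the table and return true once the expected rows appear.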
ion, options like these are what you'd currently need to do
> since you are writing directly from your Kafka Stream app to Cassandra,
> rather than writing from your app to Kafka and then using Kafka Connect to
> ingest into Cassandra.
>
>
>
> On Wed, Oct 19, 2016 at 11:
There isn't a Java API for this; you'd have to mess around with bash
scripts, which I haven't found to be worth it.
Just let the data expire and get deleted. Set a short expiry time for the
topic if necessary.
On Mon, Oct 24, 2016 at 6:30 PM, Demian Calcaprina
wrote:
> Hi Guys,
>
> Is there a w
+1. I hope there will be a corresponding Java library for doing admin
functionality.
On Wed, Oct 26, 2016 at 4:10 AM, Jungtaek Lim wrote:
> +1
>
>
> On Wed, 26 Oct 2016 at 8:00 AM craig w wrote:
>
> > -1
> >
> > On Tuesday, October 25, 2016, Sriram Subramanian
> wrote:
> >
> > > -1 for all the
And this will make adding health checks via Kubernetes easy.
On Wed, Oct 26, 2016 at 4:12 AM, Ali Akhtar wrote:
> +1. I hope there will be a corresponding Java library for doing admin
> functionality.
>
> On Wed, Oct 26, 2016 at 4:10 AM, Jungtaek Lim wrote:
>
>> +1
>>
I would recommend base64 encoding the message on the producer side, and
decoding it on the consumer side.
On Wed, Nov 9, 2016 at 3:40 PM, Baris Akgun (Garanti Teknoloji) <
barisa...@garanti.com.tr> wrote:
> Hi All,
>
> We are using Kafka 0.9.0.0 and we want to send our messages to topic in
> UTF-
It's probably not UTF-8 if it contains Turkish characters. That's why base64
encoding / decoding it might help.
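A minimal sketch of the suggested round trip, using the standard `java.util.Base64` API (the sample string is just an example):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class Base64Codec {
    // Producer side: encode the UTF-8 bytes of the message as base64 text.
    static String encode(String message) {
        return Base64.getEncoder().encodeToString(message.getBytes(StandardCharsets.UTF_8));
    }

    // Consumer side: decode the base64 text back into the original string.
    static String decode(String encoded) {
        return new String(Base64.getDecoder().decode(encoded), StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        String original = "G\u00fcnayd\u0131n"; // Turkish characters survive the round trip
        System.out.println(decode(encode(original)).equals(original)); // prints true
    }
}
```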
On Wed, Nov 9, 2016 at 4:22 PM, Radoslaw Gruchalski
wrote:
> Are you sure your string is in utf-8 in the first place?
> What if you pass your string via something like:
>
> System.out.p
I have some unit tests in which I create an embedded single broker kafka
cluster, using :
EmbeddedSingleNodeKafkaCluster.java from
https://github.com/confluentinc/examples/blob/master/kafka-streams/src/test/java/io/confluent/examples/streams/kafka/EmbeddedSingleNodeKafkaCluster.java
That class al
it's just
> annoying and it will go away.
>
> Thanks,
> Eno
>
>
> > On 11 Nov 2016, at 14:28, Ali Akhtar wrote:
> >
> > I have some unit tests in which I create an embedded single broker kafka
> > cluster, using :
> >
> > EmbeddedSingleNodeKafk
Unless I'm missing anything, there's no reason why these throwaway
processes should be shutdown gracefully. Just kill them as soon as the test
finishes.
On Fri, Nov 11, 2016 at 9:26 PM, Ali Akhtar wrote:
> Hey Eno,
>
> Thanks for the quick reply.
>
> In the meantime, is
For me, the startup doesn't take anywhere near as long as shutdown does.
On Fri, Nov 11, 2016 at 9:37 PM, Ali Akhtar wrote:
> Unless I'm missing anything, there's no reason why these throwaway
> processes should be shutdown gracefully. Just kill them as soon as the test
&
KafkaEmbedded
> to start Kafka).
> So these are embedded in the sense that it's not another process, just
> threads within the main streams test process.
>
> Thanks
> Eno
>
> > On 11 Nov 2016, at 16:26, Ali Akhtar wrote:
> >
> > Hey Eno,
> >
> >
if (t == Thread.currentThread())
return;
t.stop();
});
On Fri, Nov 11, 2016 at 9:52 PM, Ali Akhtar wrote:
> Oh, so it seems like there's no easy way to just Thread.stop() without
> changing the internal kafka / zk code? :(
>
> Perhaps its possible t
While I was connected to console-consumer.sh, I posted a few messages to a
Kafka topic, one message at a time, across a few hours.
I'd post a message, see it arrive in console-consumer, a few mins later I'd
post the next message, and so on.
They all arrived in order.
However, when I now try to v
topic, you need to create
a topic with a single partition.
On Wed, Nov 30, 2016 at 4:10 AM, Ali Akhtar wrote:
> While I was connected to console-consumer.sh, I posted a few messages to a
> Kafka topic, one message at a time, across a few hours.
>
> I'd post a message, see it ar
Heya,
I need to send a group of messages, which are all related, and then process
those messages, only when all of them have arrived.
Here is how I'm planning to do this. Is this the right way, and can any
improvements be made to this?
1) Send a message to a topic called batch_start, with a batc
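A sketch of the buffering side of that idea, assuming each message carries a batch id and the batch's expected size (the field names are invented for illustration):

```java
import java.util.*;

public class BatchCollector {
    private final Map<String, List<String>> buffers = new HashMap<>();

    // Buffer a message under its batch id; return the complete batch once
    // all expected messages have arrived, or null if still waiting.
    List<String> add(String batchId, String payload, int expectedCount) {
        List<String> buf = buffers.computeIfAbsent(batchId, k -> new ArrayList<>());
        buf.add(payload);
        if (buf.size() == expectedCount) {
            return buffers.remove(batchId); // batch complete -> process it now
        }
        return null;
    }
}
```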
Heya,
Normally, you add your topics and their callbacks to a StreamBuilder, and
then call KafkaStreams.start() to start ingesting those topics.
Is it possible to add a new topic to the StreamBuilder, and start ingesting
that as well, after KafkaStreams.start() has been called?
Thanks.
was defined with a regular expression, i.e,
> kafka.stream(Pattern.compile("foo-.*");
>
> If any new topics are added after start that match the pattern, then they
> will also be consumed.
>
> Thanks,
> Damian
>
> On Fri, 2 Dec 2016 at 13:13 Ali Akhtar wrote:
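As a tiny illustration of the matching behavior described above (the topic names are made up), the same pattern passed to kafka.stream(Pattern.compile("foo-.*")) decides which topics, including ones created after start, get consumed:

```java
import java.util.regex.Pattern;

public class TopicPattern {
    private static final Pattern SUBSCRIPTION = Pattern.compile("foo-.*");

    // A topic is consumed if its full name matches the subscription pattern.
    static boolean wouldConsume(String topicName) {
        return SUBSCRIPTION.matcher(topicName).matches();
    }
}
```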
more details:
> http://docs.confluent.io/current/streams/architecture.html
>
>
> -Matthias
>
> On 12/2/16 6:23 AM, Ali Akhtar wrote:
> > That's pretty useful to know - thanks.
> >
> > 1) If I listened too foo-.*, and there were 5 foo topics created after
> >
er batch, and you would be very close to the essence of the above
> proposal.
>
> Thanks,
> Apurva
>
> On Fri, Dec 2, 2016 at 5:02 AM, Ali Akhtar wrote:
>
> > Heya,
> >
> > I need to send a group of messages, which are all related, and then
> process
> &
l be no overhead.
>
> -Matthias
>
> On 12/2/16 3:58 PM, Ali Akhtar wrote:
> > Hey Matthias,
> >
> > So I have a scenario where I need to batch a group of messages together.
> >
> > I'm considering creating a new topic for each batch that arrives, i.e
> &g
"TOPIC-TO-BE-DELETED"});
> > TopicCommand.deleteTopic(zkUtils, commandOptions);
>
> So you can delete a topic within your Streams app.
>
> -Matthias
>
>
>
> On 12/2/16 9:25 PM, Ali Akhtar wrote:
> > Is there a way to delete the processed top
:
> I guess yes. You might only want to make sure the topic offsets got
> committed -- not sure if committing offsets of a deleted topic could
> cause issues (i.e., crashing your Streams app)
>
> -Matthias
>
> On 12/2/16 11:04 PM, Ali Akhtar wrote:
> > Thank you very much
uestion is, what would happen if the JVM goes down
> before you delete the topic.
>
>
> -Matthias
>
> On 12/3/16 2:07 AM, Ali Akhtar wrote:
> > Is there a way to make sure the offsets got committed? Perhaps, after the
> > last msg has been consumed, I can setup a task to
s
> that batch is going to have?
>
>
> Marko Bonaći
> Monitoring | Alerting | Anomaly Detection | Centralized Log Management
> Solr & Elasticsearch Support
> Sematext <http://sematext.com/> | Contact
> <http://sematext.com/about/contact.html>
>
> On Sat
You need to also delete / restart Zookeeper; it's probably storing the
topics there. (Or yeah, just enable it and then delete the topic.)
On Fri, Dec 9, 2016 at 9:09 PM, Rodrigo Sandoval wrote:
> Why did you do all those things instead of just setting
> delete.topic.enable=true?
>
> On Dec 9, 2016
@Damian,
In the Java equivalent of this, does each KStream / KStreamBuilder.stream()
invocation create its own topic group, i.e. its own thread?
On Mon, Dec 12, 2016 at 10:29 PM, Damian Guy wrote:
> Yep - that looks correct
>
> On Mon, 12 Dec 2016 at 17:18 Avi Flax wrote:
>
> >
> > > On Dec 12,