Check the Kafka broker logs as well and see if there are any errors there.
On Fri, Mar 30, 2018, 10:57 AM ? ? wrote:
> Hi:
> I have been using Kafka Streams for a few days,
> and I ran into a problem today: when I push 24 million (2400W) records
> through Kafka Streams and then write the data to HDFS,
> I found the final result is bigger
Hi,
A changelog would have infinite retention (only one of compact or delete can
be specified); however, while using a windowed store, we have
TimeWindows.until(), which specifies the retention.
A KTable can be created using TimeWindows.until(); does that mean that the
changelog would have retention as,
say
if there was some intermittent problem, we have unnecessarily killed the
thread, and also we are not recreating that thread again. Aren't we wasting
processing capacity here?
-Sameer.
On Fri, Jan 26, 2018 at 2:08 PM, Sameer Kumar
wrote:
> Hi,
>
> I am talking w.r.t. Kafka 1.0., red
titions are not assigned to other consumers, and also handle
> ConsumerRebalanceListener onPartitionsAssigned, or reduce the amount of data
> being processed using max.poll.records.
> https://kafka.apache.org/0102/javadoc/org/apache/kafka/clients/consumer/KafkaConsumer.html
>
> On Thu, Jan
I have a scenario: let's say due to GC or any other issue, my consumer takes
longer than max.poll.interval.ms to process data. What is the alternative
for preventing the consumer from being marked dead and shunned out of the
consumer group?
Though the consumer has not died and session.timeout.ms is b
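One common pattern is to pause() the assigned partitions and keep calling poll() while the heavy work runs on a separate thread, so the consumer keeps participating in the group; a rough sketch (broker address, topic name, and the processSlowly() helper are assumptions):

import java.util.Collections;
import java.util.Properties;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class PauseResumeExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "my-group");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("max.poll.records", "100"); // cap the work handed out per poll()

        ExecutorService worker = Executors.newSingleThreadExecutor();
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("my-topic"));
            Future<?> inFlight = null;
            while (true) {
                // poll() keeps being called even while processing is slow,
                // so max.poll.interval.ms is never exceeded.
                ConsumerRecords<String, String> records = consumer.poll(100);
                if (!records.isEmpty()) {
                    consumer.pause(consumer.assignment()); // stop fetching, stay in the group
                    final ConsumerRecords<String, String> batch = records;
                    inFlight = worker.submit(() -> processSlowly(batch));
                }
                if (inFlight != null && inFlight.isDone()) {
                    inFlight = null;
                    consumer.resume(consumer.assignment());
                }
            }
        }
    }

    private static void processSlowly(ConsumerRecords<String, String> records) {
        // stand-in for the expensive per-batch work
    }
}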
Hi,
I have a cluster of 3 Kafka brokers, and the replication factor is 2. This
means I can tolerate the failure of 1 node without data loss.
Recently, one of my nodes crashed and some of my partitions went offline.
I am not sure if this should be the case. Am I missing something?
-Sameer.
(not the default one the consumer uses).
>
> -Matthias
>
> On 1/9/18 4:22 AM, Sameer Kumar wrote:
> > Got It. Thanks. Others can also take a look at
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-54+-+Sticky+Partition+Assignment+Strategy
> >
> > -S
> /test/java/org/apache/kafka/streams/processor/internals/assignment/StickyTaskAssignorTest.java
>
>
> On Tue, 9 Jan 2018 at 11:44 Sameer Kumar wrote:
>
> > Hi Damian,
> >
> > Thanks for your reply. I have some further questions.
> >
> > Would the pa
standby partitions: 2,5,6
N3
assigned partitions: 5,6,2
standby partitions: 1,3,4
-Sameer.
On Tue, Jan 9, 2018 at 2:27 PM, Damian Guy wrote:
> On Tue, 9 Jan 2018 at 07:42 Sameer Kumar wrote:
>
> > Hi,
> >
> > I would like to understand how rebalancing affects state store
In StoreChangelogReader.restore, we have a very short poll interval of 10
ms. Any specific reason for this?
-Sameer.
Hi,
I would like to understand how rebalancing affects state store
migration. If I have a cluster of 3 nodes and one goes down, the partitions
for node3 get assigned to node1 and node2; do the RocksDB stores on node1/node2
also start updating themselves from the changelog topic?
If yes, then what impac
g back from a topic.
>
>
> -Matthias
>
>
> On 12/19/17 3:49 AM, Sameer Kumar wrote:
> > Could you please clarify: if I just choose to use the low-level Processor API,
> > what directs it to do repartitioning? I am not using them in conjunction
> > with DSLs; I plan to us
, 2017 at 11:06 AM, Sameer Kumar
wrote:
> I understand it now; even if we are able to attach custom partitioning,
> the data shall still travel from the stream nodes to the broker on the join
> topic, so the network traffic will still be there.
>
> -Sameer.
>
> On Tue, Dec 19, 2017 at 1:1
y them
> >> and then do a join.
>
> This will always trigger a rebalance. There is no API atm to tell KS
> that partitioning is preserved.
>
> Custom partitioner won't help for your case as far as I understand it.
>
>
> -Matthias
>
> On 12/17/17 9:48 PM, Sameer K
the key but only modify the value (-> "data is still local") a
> custom partitioner won't help. Also, we are improving this in upcoming
> version 1.1 and allows read access to a key in mapValue() (cf. KIP-149
> for details).
>
> Hope this helps.
>
>
> -Matthia
ou achieve what you want differently.
>
>
> -Matthias
>
> On 12/15/17 1:19 AM, Sameer Kumar wrote:
> > Hi,
> >
> > I want to use a custom partitioner in streams; I couldn't find it
> > in the documentation. I want to make sure that during the map phase, the
> > keys produced adhere to the custom partitioner.
> >
> > -Sameer.
> >
>
>
Hi,
I want to use a custom partitioner in streams; I couldn't find it in
the documentation. I want to make sure that during the map phase, the keys
produced adhere to the custom partitioner.
-Sameer.
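For reference, a sketch of what plugging in a custom StreamPartitioner can look like when writing to a topic (the prefix-based scheme and the topic names are made up for illustration); note this applies on the way out to a topic and does not control internal repartition topics:

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Produced;
import org.apache.kafka.streams.processor.StreamPartitioner;

public class CustomPartitionerExample {

    // Hypothetical scheme: route records by the first two characters of the key.
    static class PrefixPartitioner implements StreamPartitioner<String, String> {
        @Override
        public Integer partition(String key, String value, int numPartitions) {
            String prefix = key.length() >= 2 ? key.substring(0, 2) : key;
            return Math.abs(prefix.hashCode() % numPartitions);
        }
    }

    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> stream = builder.stream("input");
        stream.to("output",
                Produced.with(Serdes.String(), Serdes.String(), new PrefixPartitioner()));
    }
}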
-fc145893-a22e-4338-a4a2-7b46370060f6 in group c-7-aq32 has
failed, removing it from the group (kafka.coordinator.group.GroupCoordinator)
On Wed, Dec 13, 2017 at 11:43 AM, Sameer Kumar
wrote:
> Hi All,
>
> Any pointers to the above issue that I can explore further.
>
> -Sameer.
>
>
Hi All,
Any pointers to the above issue that I can explore further.
-Sameer.
On Tue, Dec 12, 2017 at 5:40 PM, Sameer Kumar
wrote:
> Hi Ismael,
>
> This is what I see in the logs, I tried this twice and got the same
> exception.
>
> This is only reproducible when I have 3 n
)
[2017-
On Tue, Dec 12, 2017 at 4:07 PM, Ismael Juma wrote:
> Can you please check the broker logs for errors?
>
> Ismael
>
> On Tue, Dec 12, 2017 at 12:10 PM, Sameer Kumar
> wrote:
>
> > Hi All,
> >
> > Facing a strange exception while running Kafka Streams
Hi All,
Facing a strange exception while running Kafka Streams. I am reading from
a topic with 60 partitions. I am using exactly-once in Kafka 1.0.0.
Now, this error has started appearing recently. The application runs fine
on 2 nodes, but as soon as a 3rd node is added, it starts throwing
excepti
Just wanted to let everyone know that this issue got fixed in Kafka 1.0.0.
I recently migrated to it and didn't see the issue any longer.
-Sameer.
On Thu, Sep 14, 2017 at 5:50 PM, Sameer Kumar
wrote:
> Ok. I will inspect this further and keep everyone posted on this.
>
> -Sameer.
>
Hi all,
Faced this exception yesterday; any possible reasons for it? At the
same time, one of the machines was restarted in my Kafka Streams cluster,
and hence the job ended there.
Detailed exception trace is attached.
I am using Kafka 1.0.0.
2017-11-28 00:07:38 ERROR Kafka010Base:46 - Exce
No need to restart the broker or clients. The disadvantage with this approach is
> recompression.
>
> maybe you can try these settings on a test server.
>
> On Thu, Nov 23, 2017 at 2:39 PM, Sameer Kumar
> wrote:
>
> > Ok. So you mean stop all producers, change the compression type for the topic at
>
he.org/documentation.html#topicconfigs
>
> On Thu, Nov 23, 2017 at 12:38 PM, Sameer Kumar
> wrote:
>
> > I am not too sure if I can have different compression types for the
> > producer and broker. It has to be the same.
> >
> > This is possible by stopping all broke
.
-Sameer.
On Wed, Nov 22, 2017 at 7:24 PM, Manikumar
wrote:
> You can set topic specific compression type by setting topic level config
> "compression.type"
> another option is change compression type config on producer side.
>
> On Wed, Nov 22, 2017 at 4:56 PM, Sameer
Guys, any thoughts on the below request (getting the current offsets of a
consumer group through a Java API)?
On Wed, Nov 22, 2017 at 4:49 PM, Sameer Kumar
wrote:
> Hi All,
>
> I wanted to know if there is any way to get the current offsets of a
> consumer group through a Java API.
>
> -Sameer.
>
If I want to reprocess only a section (i.e., from-to offsets) of the topic
through Kafka Streams, what do you think could be a way to achieve it?
I want the data to be stored in the same state stores; I think this would be a
common scenario in a typical production environment.
-Sameer.
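Streams itself has no from-to offset API (the kafka-streams-application-reset tool rewinds input topics to the beginning, not to a range), but a plain consumer in a separate group can replay a section and feed it back in; a sketch with assumed topic, partition, and offset values:

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class RangeReprocessor {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "range-reprocessor"); // separate group; does not disturb the Streams app
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        long fromOffset = 1_000L, toOffset = 2_000L; // the section to replay
        TopicPartition tp = new TopicPartition("source-topic", 0);

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.assign(Collections.singletonList(tp));
            consumer.seek(tp, fromOffset);
            boolean done = false;
            while (!done) {
                ConsumerRecords<String, String> records = consumer.poll(100);
                for (ConsumerRecord<String, String> r : records) {
                    if (r.offset() >= toOffset) { done = true; break; }
                    // e.g. produce the record to an intermediate topic that the
                    // Streams app reads, so it lands in the same state stores
                }
            }
        }
    }
}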
Hi,
Is it possible to switch from gzip to lz4 at runtime on Kafka brokers? My
servers are currently running with gzip, and I want to switch them to lz4.
-Sameer.
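compression.type is a topic-level config that can be changed without a broker restart; a sketch using the Java AdminClient (broker address and topic name are assumptions):

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.Config;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

public class SwitchTopicCompression {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            ConfigResource topic = new ConfigResource(ConfigResource.Type.TOPIC, "my-topic");
            Config config = new Config(Collections.singletonList(
                    new ConfigEntry("compression.type", "lz4")));
            // Note: the legacy alterConfigs call is non-incremental; it replaces
            // the topic's existing config overrides with the given set.
            admin.alterConfigs(Collections.singletonMap(topic, config)).all().get();
        }
    }
}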
Hi All,
I wanted to know if there is any way to get the current offsets of a
consumer group through a Java API.
-Sameer.
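At the time, kafka-consumer-groups.sh --describe (or KafkaConsumer#committed per partition) was the usual route, but newer clients (Kafka 2.0+) expose this directly on AdminClient; a sketch with an assumed group id and broker address:

import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class GroupOffsets {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            Map<TopicPartition, OffsetAndMetadata> offsets =
                    admin.listConsumerGroupOffsets("my-group")
                         .partitionsToOffsetAndMetadata()
                         .get();
            offsets.forEach((tp, om) ->
                    System.out.printf("%s -> offset %d%n", tp, om.offset()));
        }
    }
}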
ith until set to 240
> and StreamsConfig.WINDOW_STORE_CHANGE_LOG_ADDITIONAL_RETENTION_MS_CONFIG
> set to 1 day (the default) this would be 10080
>
> For plain key value stores, there should be no retention period as the
> topics are compacted only.
>
> On Mon, 30 Oct 20
; default is 1 day)
>
>
>
> On Mon, 30 Oct 2017 at 12:49 Sameer Kumar wrote:
>
> > Hi,
> >
> > I have configured my settings to be the following:-
> >
> > log.retention.hours=3
> > delete.topic.enable=true
> > delete.retention.ms=1080
>
Hi,
I have configured my settings to be the following:-
log.retention.hours=3
delete.topic.enable=true
delete.retention.ms=1080
min.cleanable.dirty.ratio=0.20
segment.ms=18
However, the changelog topic created as part of the stream has its
retention.ms set to 10080; the source topic ha
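For windowed-store changelogs, the retention works out to the window retention (until()) plus an additional buffer configured on the Streams side; a sketch (values illustrative):

import java.util.Properties;
import java.util.concurrent.TimeUnit;
import org.apache.kafka.streams.StreamsConfig;

public class ChangelogRetentionConfig {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-app");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // Extra retention added on top of TimeWindows.until() for windowed-store
        // changelog topics; the default is 1 day.
        props.put(StreamsConfig.WINDOW_STORE_CHANGE_LOG_ADDITIONAL_RETENTION_MS_CONFIG,
                TimeUnit.HOURS.toMillis(6));
    }
}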
:03 PM, Sameer Kumar
wrote:
> I had enabled eos through streams config and as explained in the
> documentation, I have not added anything else other than following config.
>
> streamsConfiguration.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG,
> StreamsConfig.EXACTLY_ONCE);
>
retries would be
automatically picked up. I was wondering if there is some way I can print the
configs picked up by the streams app, e.g. the current retries.
Do I need to add any other configuration?
-Sameer.
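A sketch of setting the embedded producer's retries explicitly (application id and value are illustrative). As for seeing what was picked up: the effective ProducerConfig/ConsumerConfig values are logged at INFO level when the internal clients are created, so the current retries can be read off the application log:

import java.util.Properties;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.streams.StreamsConfig;

public class StreamsRetriesConfig {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-app");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE);
        // Target the embedded producer explicitly via the producer prefix:
        props.put(StreamsConfig.producerPrefix(ProducerConfig.RETRIES_CONFIG), 10);
    }
}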
On Tue, Oct 3, 2017 at 11:59 AM, Sameer Kumar
wrote:
> I had enabled eos through stre
software bugs
> resulting in data corruption, etc.).
>
>
> -Matthias
>
> On 9/29/17 7:55 AM, Damian Guy wrote:
> > You can set ProducerConfig.RETRIES_CONFIG in your StreamsConfig, i.e,
> >
> > Properties props = new Properties();
> > props.put(ProducerC
> On 9/26/17 11:30 PM, Sameer Kumar wrote:
> > Hi,
> >
> > I again received this exception while running my streams app. I am using
> > Kafka 0.11.0.1. After restarting my app, this error got fixed.
> >
> > I guess this might be due to bad network. Any poi
Hi,
I again received this exception while running my streams app. I am using
Kafka 0.11.0.1. After restarting my app, this error got fixed.
I guess this might be due to a bad network. Any pointers? Is there any config
where I can set retries?
Exception trace is attached.
Regards,
-Sameer.
201
replicas, even if min
> > in sync replicas is 2).
> >
> > Ismael
> >
> > On Tue, Sep 26, 2017 at 1:34 PM, Denis Bolshakov <
> > bolshakov.de...@gmail.com>
> > wrote:
> >
> > > By default Kafka does not allow dirty reads for clients, so while
error trace is attached in the initial mail.
On Tue, Sep 26, 2017 at 1:42 PM, Sameer Kumar
wrote:
> error trace is attached.
>
> On Tue, Sep 26, 2017 at 1:41 PM, Sameer Kumar
> wrote:
>
>> I received this error, and was wondering what would cause this. I received
>>
error trace is attached.
On Tue, Sep 26, 2017 at 1:41 PM, Sameer Kumar
wrote:
> I received this error, and was wondering what would cause this. I received
> it only once; it got fixed for the next run.
>
> For every run, I change my state store though.
>
> -Sameer.
>
I received this error, and was wondering what would cause this. I received
it only once; it got fixed for the next run.
For every run, I change my state store though.
-Sameer.
2017-09-26 13:27:42 INFO ClassPathXmlApplicationContext:513 - Refreshing
org.springframework.context.support.ClassPathXmlAppl
Slavić wrote:
> Hello Sameer,
>
> Behavior depends on min.insync.replicas configured for the topic.
> Find more info in the documentation
> https://kafka.apache.org/documentation/#topicconfigs
>
> Kind regards,
> Stevo Slavic.
>
> On Tue, Sep 26, 2017 at 9:01 AM, Sameer
In case one of the brokers fails, the broker would get removed from the
respective ISR lists of those partitions.
In case the producer has acks=all, how would it behave? Would the producers be
throttled and wait till the broker catches back up?
-Sameer.
>
>
> Guozhang
>
>
> On Wed, Sep 13, 2017 at 8:26 AM, Sameer Kumar
> wrote:
>
> > Adding more info:-
> >
> > Hi Guozhang,
> >
> > I was using exactly_once processing here, I can see this in the client
> > logs, however I am not setting tr
org.apache.kafka.streams.processor.WallclockTimestampExtractor
value.serde = class org.apache.kafka.common.serialization.Serdes$StringSerde
windowstore.changelog.additional.retention.ms = 8640
zookeeper.connect =
On Wed, Sep 13, 2017 at 12:16 PM, Sameer Kumar
wrote:
> Hi Guozhang,
>
> The producer sending data to this topic is no
this exception all the way to the user exception
> handler and then shutdown the thread. And this exception would be thrown if
> the Kafka broker itself is not available (also from your previous logs it
> seems broker 192 and 193 was unavailable and hence being kicked out by
> brok
if the retry never succeeds it will be blocked. I did not see the
> Streams API client-side logs and so cannot tell for sure, why this caused
> the Streams app to fail as well. A quick question: did you enable
> `processing.mode=exactly-once` on your streams app?
>
>
> Guozhang
>
>
Hi All,
Any thoughts on the below mail.
-Sameer.
On Wed, Sep 6, 2017 at 12:28 PM, Sameer Kumar
wrote:
> Hi All,
>
> I want to report a scenario wherein running 2 different instances of my
> stream application caused my brokers to crash and eventually my stream
> applicatio
Hi,
I am using an InMemoryStore along with a GlobalKTable. I came to realize that I
was losing data once I restarted my stream application while it was
consuming data from a Kafka topic, since it would always start from the last
saved checkpoint. This should work fine with RocksDB, it being a persistent store.
Hi,
I just saw an example; does producer.initTransactions() take care of this
part?
Also wondering if transactions are thread-safe as long as I keep begin and
commit local to a thread.
Please enlighten.
-Sameer.
On Fri, Aug 18, 2017 at 3:22 PM, Sameer Kumar
wrote:
> Hi,
>
> I have a
Hi,
I have a question on the Kafka transactional.id config, related to the atomic
writes feature of Kafka 0.11. If I have multiple producers across different
JVMs, do I need to set transactional.id differently for each JVM? Does
transactional.id control the beginning and ending of transactions?
If it's not set uniqu
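A sketch of the intended usage (names illustrative): transactional.id should be unique per producer instance, because two live producers sharing one id fence each other off (the older one gets a ProducerFencedException), and begin/commit are methods on that one producer instance:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class TransactionalProducerExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("transactional.id", "txn-producer-1"); // must differ per JVM/instance

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.initTransactions(); // registers the id, aborts any stale transactions
            producer.beginTransaction();
            producer.send(new ProducerRecord<>("my-topic", "key", "value"));
            producer.commitTransaction(); // all sends in the transaction become visible atomically
        }
    }
}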
e:
>
> if (now > lastCleanMs + cleanTimeMs) {
> stateDirectory.cleanRemovedTasks(cleanTimeMs);
> lastCleanMs = now;
> }
>
> So, you will need to set it to a large enough value to disable it but not
> Long.MAX_VALUE (sorry)
>
> On Wed, 16 Aug 2017 at 10:21 Sam
synchronized. However for get and range operations that can be
> performed by IQ, i.e, other threads, we need to guard against the store
> being closed by the StreamThread, hence the synchronization.
>
> Thanks,
> Damian
>
> On Wed, 16 Aug 2017 at 07:17 Sameer Kumar wrote:
>
it.
>
> Thanks,
> Damian
>
>
> On Wed, 16 Aug 2017 at 07:39 Sameer Kumar wrote:
>
> > Hi Damian,
> >
> > Please find the relevant logs as requested. Let me know if you need any
> > more info.
> >
> > -Sameer.
> >
> > On Tue, Aug 15
the same database without any external
synchronization (the leveldb implementation will automatically do the
required synchronization).
Did we face any issues? I want to explore this and see if I can help on
this. Is there an existing KIP on this?
-Sameer.
On Tue, Aug 15, 2017 at 10:42 AM, Sameer
leader accept the write, but it was successfully replicated to
all of the in-sync replicas.
On Thu, Aug 10, 2017 at 3:20 PM, Sameer Kumar
wrote:
> please refer to the following doc:
> https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Replication
>
> It talks about both the mod
I have added an attachment containing the complete trace in my initial mail.
On Mon, Aug 14, 2017 at 9:47 PM, Damian Guy wrote:
> Do you have the logs leading up to the exception?
>
> On Mon, 14 Aug 2017 at 06:52 Sameer Kumar wrote:
>
> > Exception while doing the join, cant dec
Kafka Streams, we allow users to independently query the running state
> stores in real-time in their own caller thread while the application's
> thread is continuously updating these stores, and hence the
> synchronization.
>
>
> Guozhang
>
>
> On Sun, Aug 13, 2017
Exception while doing the join; can't decipher more on this. Has anyone
faced it? Complete exception trace attached.
2017-08-14 11:15:55 ERROR ConsumerCoordinator:269 - User provided listener
org.apache.kafka.streams.processor.internals.StreamThread$RebalanceListener
for group c-7-a34 failed on par
Hi,
I was inspecting RocksDBStore and I saw the get and put methods being
synchronized.
Are there any issues in RocksDB due to which we have used synchronization?
Could someone please shed more light on this.
@Override
public synchronized V get(K key) {
    validateStoreOpen();
    byte[] byteValue = getInternal(serdes.rawKey(key));
    // remainder as in the 0.11.x source: deserialize and return the value
    return byteValue == null ? null : serdes.valueFrom(byteValue);
}
plicas by using "acks" config property.
> http://kafka.apache.org/documentation/#design_ha
>
> On Wed, Aug 9, 2017 at 3:03 PM, Sameer Kumar
> wrote:
>
> > Kafka supports two replication modes: Asynchronous and Synchronous
> > replication. I couldnt fin
it would be easier to either attach them or put them
> in a gist. It is a bit hard to read in an email.
>
> Thanks,
> Damian
>
> On Wed, 9 Aug 2017 at 10:10 Sameer Kumar wrote:
>
> > Hi All,
> >
> > I wrote a Kafka Streams job that went running for 3-4 hrs, afte
Kafka supports two replication modes: asynchronous and synchronous
replication. I couldn't find a property that allows you to switch between
the two.
Could someone please enlighten me on this.
-Sameer.
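There is no separate mode switch; the producer's acks setting is what chooses between the two behaviours (per producer, not cluster-wide). A sketch:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;

public class AcksExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        // acks=0   -> fire and forget (fully asynchronous)
        // acks=1   -> leader acknowledges only
        // acks=all -> leader waits for all in-sync replicas (synchronous
        //             replication, together with the topic's min.insync.replicas)
        props.put(ProducerConfig.ACKS_CONFIG, "all");
        KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        producer.close();
    }
}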
Hi All,
I wrote a Kafka Streams job that ran for 3-4 hrs, after which it
started throwing these errors. Not sure why we got these errors.
I am using Kafka 0.11.0 both on the broker side as well as on the consumer side.
*Machine1*
2017-08-08 17:25:35 INFO StreamThread:193 - stream-thread
[LICSp-7
partitions."
>
> -hans
>
> > On Aug 4, 2017, at 5:17 AM, Sameer Kumar wrote:
> >
> > According to Kafka docs, producer decides on which partition the data
> shall
> > reside. I am aware that neither broker nor producer needs to be restarted
> > to detec
According to the Kafka docs, the producer decides on which partition the data
shall reside. I am aware that neither the broker nor the producer needs to be
restarted to detect added partitions.
I would like to understand if there is some frequency at which the producer
detects new partitions.
Though consumers wer
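The producer refreshes its cluster metadata, and thereby discovers new partitions, at least every metadata.max.age.ms (default 300000 ms, i.e. 5 minutes); a sketch of tightening that interval:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;

public class MetadataRefreshExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        // Refresh metadata (and pick up added partitions) at least once a minute.
        props.put(ProducerConfig.METADATA_MAX_AGE_CONFIG, 60_000);
        KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        producer.close();
    }
}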
turns out to be a common
> issue.
>
>
> Guozhang
>
>
> On Fri, Jul 28, 2017 at 4:57 AM, Damian Guy wrote:
>
> > It is due to a bug. You should set
> > StreamsConfig.STATE_DIR_CLEANUP_DELAY_MS_CONFIG to Long.MAX_VALUE -
> i.e.,
> > disabling it.
> >
Hi,
I am facing this error, with no clue why it occurred. No other exception was
found in the stack trace.
The only thing I did differently was run the Kafka Streams jar on machine2 a
couple of minutes after I ran it on machine1.
Please search for this string in the log below:-
org.apache.kafka.streams.processor.i
Sure, will do that by tomorrow.
-Sameer.
On Wed, Jul 26, 2017 at 8:39 PM, Ismael Juma wrote:
> Hi Sameer,
>
> Yes, the upgrade should be seamless. Can you please share the log entries
> with the errors?
>
> Ismael
>
> On Wed, Jul 26, 2017 at 1:35 PM, Sameer Kumar
>
I wanted to merge two KStreams, where one of them is a windowed stream and the
other is of a different type; what is the preferred way of merging them?
One way I thought of was to run a map phase and create a windowed instance
based on System.currentTimeMillis().
-Sameer.
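An alternative to manufacturing window instances is to go the other way and strip the window off the windowed stream, assuming the value types line up; a sketch against the 1.0 API, which added KStream#merge:

import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Windowed;

public class MergeWindowedExample {
    // Strip the window from the windowed stream's key so both streams share
    // the same key type, then merge them.
    static KStream<String, Long> mergeStreams(KStream<Windowed<String>, Long> windowed,
                                              KStream<String, Long> plain) {
        KStream<String, Long> unwrapped =
                windowed.map((wk, v) -> KeyValue.pair(wk.key(), v));
        return unwrapped.merge(plain);
    }
}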
Hi,
I wanted to understand the process for a production upgrade of Kafka. As
documented in https://kafka.apache.org/documentation/#upgrade, it
should be seamless.
I had a 3-node cluster (single topic, partitions=60, replication factor=2)
on which I was trying the same.
As suggested, I first st
Damian,
Does this mean data is retained for an infinite time, limited only by disk
space?
-Sameer.
On Wed, Jul 26, 2017 at 3:53 PM, Sameer Kumar
wrote:
> got it. Thanks.
>
> On Wed, Jul 26, 2017 at 3:24 PM, Damian Guy wrote:
>
>> The changelog is one created by kafka str
Hi,
Is there a way I can merge two KTables, just like I can with the KStreams API's
KStreamBuilder.merge()?
I understand I can use KTable.toStream(); if I choose to use it, is there
any performance cost associated with this conversion, or is it just an API
conversion?
-Sameer.
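A sketch of the toStream() route, using KStream#merge (available since Kafka 1.0; earlier, KStreamBuilder#merge served the same purpose). toStream() is a cheap view change rather than a reshuffle, so the cost is essentially the API conversion; the merged stream then carries the change records of both tables:

import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;

public class MergeTablesExample {
    static KStream<String, Long> mergeTables(KTable<String, Long> left,
                                             KTable<String, Long> right) {
        return left.toStream().merge(right.toStream());
    }
}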
d in the topic
> for as long as the retention period.
> If you use a non-compacted topic and the kafka-streams instance crashes
> then that data may be lost from the state store as it will use the topic to
> restore its state.
>
> On Wed, 26 Jul 2017 at 10:24 Sameer Kumar wrote:
>
at 2:22 PM, Damian Guy wrote:
> Sameer,
>
> For a KeyValue store the changelog topic is a compacted topic so there is
> no retention period. You will always retain the latest value for a key.
>
> Thanks,
> Damian
>
> On Wed, 26 Jul 2017 at 08:36 Sameer Kumar wrote:
Hi,
The retention period for state stores is clear (default, or otherwise specified
by TimeWindows.until). Intrigued to know the retention period for key-value
stores.
The use case is something like: I am reading from a windowed store and
using a plain reduce() without any time windows. Would the values be
ret
model.
>
> Thus, with regard to Spark vs Kafka Streams and your example, Kafka
> Streams will execute two consecutive maps quite similar to Spark. I say
> "quite similar" only because Kafka Streams is a true stream processing
> while Spark Streaming does micro batching (ie, i
ka Streams and
> > use back-pressure to preclude memory issues.
> >
> > -David
> >
> > On 7/17/17, 12:20 PM, "Guozhang Wang" wrote:
> >
> > Sameer,
> >
> > Could you elaborate a bit more what do you mean by "
Currently, we don't have DAG processing in Kafka Streams. Having a DAG has
its own share of advantages in that it can optimize code on its own and
come up with an optimized execution plan.
Are we exploring this direction? Do we have this on our current roadmap?
-Sameer.
> does each stream instance have? Also how many topics and partitions do you
> have?
>
>
> Thanks,
> Eno
>
> > On 23 Jun 2017, at 17:31, Sameer Kumar wrote:
> >
> > Hi,
> >
> > Came across a rebalancing issue in using KafkaStreams. I have two
> machi
Hi,
Came across a rebalancing issue using Kafka Streams. I have two machines,
machine1 and machine2; machine1 is consuming all partitions and machine2 is
completely free, not processing any partitions. If I shut down machine1,
then machine2 will take over and start consuming all partitio
Hi Abhimanya,
You can very well do it through Kafka, Kafka Streams, and something like
Redis.
I would design it to be something like this:-
1. Topic 1 - Pending tasks
2. Topic 2 - Reassigned Tasks.
3. Topic 3- Task To Resource Mapping.
Some other components could be:-
4. Redis Hash(task progress
0.0.x does not have auto-repartitioning feature, and this might
> explain what you see. For this case, you need to call #through() before
> doing the aggregation.
>
>
> -Matthias
>
>
> On 6/17/17 3:28 AM, Sameer Kumar wrote:
> > Continued from my last mail...
> >
>
17, 2017 at 3:55 PM, Sameer Kumar
wrote:
> The example I gave was just for illustration. I have impression logs and
> notification logs. Notification logs are essentially tied to impressions
> served. An impression would serve multiple items.
>
> I was just trying to aggregate
; Does this help?
>
>
> -Matthias
>
>
> On 6/15/17 11:48 PM, Sameer Kumar wrote:
> > Ok, let me try to explain it again.
> >
> > So, Lets say my source processor has a different key, now the value that
> it
> > contains lets say contains an identifier type:
cessed on one of the
> instances, not both.
>
> Does this help?
>
> Eno
>
>
>
> > On 15 Jun 2017, at 07:40, Sameer Kumar wrote:
> >
> > Also, I am writing a single key in the output all the time. I believe
> > machine2 will have to write a key and since
Also, I am writing a single key in the output all the time. I believe
machine2 will have to write a key, and since a state store is local, it
wouldn't know about the counter state on the other machine. So, I guess this
will happen.
-Sameer.
On Thu, Jun 15, 2017 at 11:11 AM, Sameer Kumar
.groupBy((k, v) -> k, Serdes.String(), Serdes.Integer())
.reduce((value1, value2) -> value1 + value2, LINE_ITEM_COUNT_STORE);
return liCount;
}
On Wed, Jun 14, 2017 at 10:55 AM, Sameer Kumar
wrote:
> The input topic contains 60 partitions and data is distributed well across
> dif
Hi,
I witnessed a strange behaviour in Kafka Streams; need help in understanding
it.
I created an application for aggregating clicks per user; I want to process
it only for 1 user (I was writing only a single key).
When I ran the application on one machine, it was running fine. Now, to
loadbalanc
repeating
> forever and 2) how long have you observed that the app is stuck, and while
> it is stuck does the above entry never go away?
>
>
> Guozhang
>
>
> On Wed, May 3, 2017 at 10:50 PM, Sameer Kumar
> wrote:
>
> > My brokers are on version 0.10.1.0 and my clients are
I see now that my Kafka cluster is very stable, and these errors don't
appear now.
-Sameer.
On Fri, May 5, 2017 at 7:53 AM, Sameer Kumar wrote:
> Yes, I have upgraded my cluster and clients both to version 0.10.2.1 and
> currently monitoring the situation.
> Will report back in case I
I don't think this would be the right approach. From the broker side, this
would mean creating 1M/10M/100M/1B directories; this would be too much for
the file system itself.
For most cases, even a few thousand partitions per node should be
sufficient.
For more details, please refer to
https://www.conf
Please try out Streams 0.10.2.1 -- this should be fixed there. If not,
> please report back.
>
> I would also recommend to subscribe to the list. It's self-service
> http://kafka.apache.org/contact
>
>
> -Matthias
>
> On 5/3/17 10:49 PM, Sameer Kumar wrote:
> > My
My brokers are on version 0.10.1.0 and my clients are on version 0.10.2.0.
Also, please reply to all; I am currently not subscribed to the mailing list.
-Sameer.
On Wed, May 3, 2017 at 5:27 PM, Sameer Kumar wrote:
> Hi,
>
>
>
> I want to report an issue where in addition of a server a
My brokers are on version 0.10.1.0 and my clients are on version 0.10.2.0.
Also, please reply to all; I am currently not subscribed to the list.
-Sameer.
On Wed, May 3, 2017 at 6:34 PM, Sameer Kumar wrote:
> Hi,
>
>
>
> I ran two nodes in my streams compute cluster, they were running