I have created a stream over a topic with 5 partitions and expected
5 stream tasks to be created. Instead I got:
[admin@nms-181 ]$ ls
0_0 0_1 0_2 0_3 0_4 1_0 1_1 1_2 1_3 1_4
SingleConsumerMultiConsumerUsingStreamx4 is my application_id. The
1_0, 1_... directories contain the localStateStore,
and 0_0, 0_ cont
I have created a stream over a topic with 5 partitions and expected
5 stream tasks to be created, but I got 10 tasks:
0_0 0_1 0_2 0_3 0_4 1_0 1_1 1_2 1_3 1_4
My question is: I expected to have 5 tasks, so how did it produce 10 tasks?
Here are some logs:
[2017-10-24 10:27:35,284] INFO Kafka
It would depend on what your topology looks like, which you haven't shown
here. But there may be internal topics generated due to repartitioning,
which would cause the extra tasks.
If you provide the topology we would be able to tell you.
Thanks,
Damian
On Tue, 24 Oct 2017 at 10:14 pravin kumar
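For context on the directory names: Kafka Streams names each task directory <subtopology>_<partition>, so a topology with two subtopologies (for example, one extra subtopology introduced by a repartition topic) over a 5-partition input yields 2 x 5 = 10 state directories. A plain-Java sketch of that naming scheme (this is an illustrative helper, not Kafka's actual code):

```java
import java.util.ArrayList;
import java.util.List;

public class TaskDirs {
    // Kafka Streams names each task state directory "<subtopology>_<partition>".
    // With 2 subtopologies and 5 partitions, this produces the 10 directories
    // seen in the original post: 0_0 .. 0_4 and 1_0 .. 1_4.
    static List<String> taskDirectories(int subtopologies, int partitions) {
        List<String> dirs = new ArrayList<>();
        for (int s = 0; s < subtopologies; s++) {
            for (int p = 0; p < partitions; p++) {
                dirs.add(s + "_" + p);
            }
        }
        return dirs;
    }

    public static void main(String[] args) {
        System.out.println(taskDirectories(2, 5));
        // [0_0, 0_1, 0_2, 0_3, 0_4, 1_0, 1_1, 1_2, 1_3, 1_4]
    }
}
```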
Hi all,
I want to set up a Kafka cluster in a production environment.
In recent years I've worked with Solr, and comparing Kafka with
Solr, it would be wonderful if Kafka also had an administration
console where you could see what's happening.
Looking around I've found this:
https://github.com/y
Hi Kumar,
1) 0_ and 1_ are different stream processors (subtopologies) in your topology
2) my guess would be that it does not have any state to store?
Jozef
Sent from [ProtonMail](https://protonmail.ch), encrypted email based in
Switzerland.
> Original Message
> Subject: StreamTasks
> Local Time:
Hi all,
We're upgrading a Kafka Streams application from 0.10.2.1 to 0.11.0.1 and
our application is running against a Kafka cluster with version 0.10.2.1.
When we first attempted to upgrade our application to Kafka 0.11.0.1, we
observed that when we deleted the PVCs for the service, and restarte
We're working on fault tolerance testing for our Kafka cluster. I'm trying to
simulate a full volume for the service, and observe where and why it fails. So
to start, I did a fallocate -l /data/big.file and then used df to
ensure 0 bytes remained available.
Nothing happened. I assumed because
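For a fault-tolerance test like this, it helps to confirm programmatically, not just via df, that the data volume really reports zero usable bytes before concluding the broker "did nothing". A small sketch using the standard library (the /data mount point is an assumption; substitute the broker's log.dirs):

```java
import java.io.File;

public class DiskCheck {
    // Returns the usable bytes on the filesystem containing the given path,
    // as seen by the JVM (same information df reports as "Avail").
    static long usableBytes(String path) {
        return new File(path).getUsableSpace();
    }

    public static void main(String[] args) {
        String dataDir = args.length > 0 ? args[0] : "/data"; // assumed mount
        long free = usableBytes(dataDir);
        System.out.println(dataDir + " usable bytes: " + free);
        // If fallocate really filled the volume, this should print (close to) 0.
    }
}
```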
Hi,
the issue you describe (that on a "fresh" restart, all tasks are
assigned to the first thread) is known, and the solution for it was to
introduce the new broker config you mentioned. Thus, there is no config
for 0.10.2.x brokers or the Streams API to handle this case (that's why we
introduced the n
Hi All,
I took latest source code from kafka git repo and tried to setup my local
env. in Eclipse.
I am getting a compilation error in 1 file in streams project.
Specifically in file *KStreamImpl.java on line 157*
*Type mismatch: cannot convert from KeyValue
to KeyValue*
Is this my local enviro
This should fix what you observed:
https://github.com/apache/kafka/pull/4127
On Tue, Oct 24, 2017 at 12:21 PM, Vishal Shukla wrote:
> Hi All,
> I took latest source code from kafka git repo and tried to setup my local
> env. in Eclipse.
> I am getting a compilation error in 1 file in streams pr
Hi Kafka Community,
I have a question regarding Kafka Streams. Currently we set our commit
interval to 5 mins and buffer size to be 1GB. We observe that every 5
minutes, there is a spike of network out, which is expected. However, we
also found that the record process rate dropped to 1/4 during co
Hmm.. which Java version were you using?
Guozhang
On Tue, Oct 24, 2017 at 1:09 PM, Ted Yu wrote:
> This should fix what you observed:
>
> https://github.com/apache/kafka/pull/4127
>
> On Tue, Oct 24, 2017 at 12:21 PM, Vishal Shukla
> wrote:
>
> > Hi All,
> > I took latest source code from kaf
JDK 1.8.0_91
Cheers
On Tue, Oct 24, 2017 at 1:23 PM, Guozhang Wang wrote:
> Hmm.. which Java version were you using?
>
>
> Guozhang
>
> On Tue, Oct 24, 2017 at 1:09 PM, Ted Yu wrote:
>
> > This should fix what you observed:
> >
> > https://github.com/apache/kafka/pull/4127
> >
> > On Tue, Oct
Well, when a commit is triggered, Streams needs to flush all caches and
flush all pending writes of the producers. And as this happens on the
same thread that does the processing, there won't be any processing of new
data until the commit is finished.
So I guess it is expected.
-Matthias
On 10/24/17
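Given that explanation, the spike/pause trade-off is governed by two Streams settings: how often commits happen and how much data the cache buffers between them. A hedged sketch of the relevant properties (the config names are the standard Kafka Streams ones; the 30 s / 64 MB values are illustrative assumptions, not recommendations):

```java
import java.util.Properties;

public class CommitTuning {
    // Commit more often with a smaller cache, so each commit flushes less
    // data and the per-commit processing pause is shorter. The original
    // poster's 5 min / 1 GB settings make every commit a very large flush.
    static Properties streamsProps() {
        Properties props = new Properties();
        props.put("commit.interval.ms", "30000");                 // down from 5 minutes
        props.put("cache.max.bytes.buffering",
                  String.valueOf(64L * 1024 * 1024));             // down from 1 GB
        return props;
    }

    public static void main(String[] args) {
        streamsProps().forEach((k, v) -> System.out.println(k + "=" + v));
    }
}
```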
Hi Guozhang,
I am using jdk1.7.0_45 on Mac.
(Eclipse 4.7.1a )
As per the Kafka documentation Java 7 is recommended, so I was thinking this
version should be good.
Could this be an Eclipse-related error?
On Tue, Oct 24, 2017 at 4:28 PM, Ted Yu wrote:
> JDK 1.8.0_91
>
> Cheers
>
> On Tue, Oct 24,
Thanks for clarifying that !
On Mon, Oct 23, 2017 at 3:31 AM, Michael Noll wrote:
> > *What key should the join on ? *
>
> The message key, in both cases, should contain the user ID in String
> format.
>
> > *There seems to be no common key (eg. user) between the 2 classes -
> PageView
> and U
>
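The point in Michael's answer is that a Kafka Streams join matches records purely on the message key, so both sides must be keyed by the user ID before joining. A plain-Java illustration of that key-based matching (the record contents and names here are hypothetical, not the actual PageView/UserProfile classes from the thread):

```java
import java.util.HashMap;
import java.util.Map;

public class KeyJoinSketch {
    // Simulates an inner stream-table join: for each page view keyed by
    // user ID, look up the profile with the same key and emit the pair.
    // Records whose key has no match on the other side are dropped.
    static Map<String, String> joinByUserId(Map<String, String> pageViews,
                                            Map<String, String> profiles) {
        Map<String, String> joined = new HashMap<>();
        for (Map.Entry<String, String> view : pageViews.entrySet()) {
            String profile = profiles.get(view.getKey()); // key = user ID
            if (profile != null) {
                joined.put(view.getKey(), view.getValue() + "|" + profile);
            }
        }
        return joined;
    }

    public static void main(String[] args) {
        Map<String, String> views = Map.of("user-1", "/home", "user-2", "/cart");
        Map<String, String> profiles = Map.of("user-1", "region=EU");
        System.out.println(joinByUserId(views, profiles));
        // user-2 has no profile keyed "user-2", so only user-1 joins.
    }
}
```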
> Could it be, that the first KafkaStreams instance was still in status
> "rebalancing" when you started the second/third container? If yes, this
> might explain what you observed: if the first instance is in status
> "rebalancing" it would miss that new instances are joining the group.
> (We fi
We had multiple Jiras. I guess this one is the fix you are looking for:
https://issues.apache.org/jira/browse/KAFKA-5152
-Matthias
On 10/24/17 3:21 PM, Eric Lalonde wrote:
>>
>> Could it be, that the first KafkaStreams instance was still in status
>> "rebalancing" when you started the second/thir
I tried both 1.8.0_66 and 1.7.0_80 on IntelliJ and they worked fine. I'm
wondering if it is a specific issue for Eclipse?
Guozhang
On Tue, Oct 24, 2017 at 2:27 PM, Vishal Shukla wrote:
> Hi Guozhang,
> I am using jdk1.7.0_45 on Mac.
> (Eclipse 4.7.1a )
>
> As per Kafka documentation Java 7 is r
>>> Could it be, that the first KafkaStreams instance was still in status
>>> "rebalancing" when you started the second/third container? If yes, this
>>> might explain what you observed: if the first instance is in status
>>> "rebalancing" it would miss that new instances are joining the group.
>>
Eric:
I wonder if it is possible to load up 1.0.0 RC3 on a test cluster and see
what the new behavior is?
Thanks
On Tue, Oct 24, 2017 at 5:41 PM, Eric Lalonde wrote:
>
> >>> Could it be, that the first KafkaStreams instance was still in status
> >>> "rebalancing" when you started the second/thir
Hello, I'm working with an employer that is looking to hire a
permanent remote Cassandra database engineer. Consequently I had
hoped that some members of this mailing list may like to learn more
and discuss further. I can be reached off-list using "JamesBTobin
(at) Gmail (dot) com". Kind regards
Might be worth a try with 1.0.0 RC3 -- even if I doubt that much changes.
Can you provide debug logs for your Kafka streams applications as well
as brokers? This would help to dig into this.
-Matthias
On 10/24/17 5:53 PM, Ted Yu wrote:
> Eric:
> I wonder if it is possible to load up 1.0.0 RC3 o