On Wed, Mar 7, 2018 at 8:03 PM, John Roesler wrote:
Thanks Ted,
Sure thing; I updated the example code in the KIP with a little snippet.
-John
On Wed, Mar 7, 2018 at 7:18 PM, Ted Yu wrote:
Looks good.
See if you can add punctuator into the sample code.
On Wed, Mar 7, 2018 at 7:10 PM, John Roesler wrote:
Dear Kafka community,
I am proposing KIP-267 to augment the public Streams test utils API.
The goal is to simplify testing of Kafka Streams applications.
Please find details in the wiki:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-267%3A+Add+Processor+Unit+Test+Support+to+Kafka+Streams+T
Hi:
I have been using Kafka Streams for a few days, for real-time data analysis.
This is my flow:
A (source topic) --> B (internal topic) --> C (sink topic)
There are two processing programs here:
1) consume topic A and produce downstream to topic B
2) consume topic B and produce downstream to topic C
but in o
Hello friends.
Say I have a bunch of consumer jobs in the same consumer group. They want
to read topics A and B and they want these topics to be co-partitioned. So
each consumer job creates one KafkaConsumer for both topics. Everyone's
happy.
Now say these consumer jobs fall behind on topics A an
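For background on the co-partitioning assumption above: with Kafka's default partitioner, a record's partition is derived from a hash of the serialized key modulo the topic's partition count, so two topics with identical partition counts and the same keys line up partition-for-partition. Below is a minimal sketch of that invariant; note that Kafka actually uses murmur2 over the key bytes, and the simple hash here is only an illustrative stand-in to keep the example self-contained.

```java
import java.nio.charset.StandardCharsets;

public class CoPartitioning {
    // Illustrative stand-in for Kafka's default partitioner, which hashes the
    // serialized key bytes (Kafka uses murmur2; a simple polynomial hash is
    // used here only to keep the sketch self-contained).
    static int partitionFor(String key, int numPartitions) {
        byte[] bytes = key.getBytes(StandardCharsets.UTF_8);
        int hash = 0;
        for (byte b : bytes) {
            hash = 31 * hash + b;
        }
        // Mask off the sign bit so the result is non-negative, then take modulo.
        return (hash & 0x7fffffff) % numPartitions;
    }
}
```

Because both topics apply the same function to the same key, a given key lands in the same partition number of A and B as long as both topics have the same partition count; if the counts differ, the modulo differs and the co-partitioning invariant breaks.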
Thanks for running the release Ewen and great work everyone!
Ismael
On Tue, Mar 6, 2018 at 1:14 PM, Ewen Cheslack-Postava wrote:
> The Apache Kafka community is pleased to announce the release for Apache
> Kafka
> 1.0.1.
>
> This is a bugfix release for the 1.0 branch that was first released wi
Hi Subash,
First, check whether ZooKeeper has your Kafka broker registered:
/opt/zookeeper/bin/zkCli.sh -server localhost:2181 <<< "ls /brokers/ids"
The output should list your brokers. In your case [0]. In my dev env I have
3 brokers.
[zk: localhost:2181(CONNECTED) 0] ls /brokers/ids
[0
Hi team,
I am new to Kafka and I am trying to learn the basics. I have issued the
following command after creating topic test in a single-node cluster:
/opt/cloudera/parcels/KAFKA-2.1.1-1.2.1.1.p0.18/lib/kafka/bin/kafka-console-producer.sh
--broker-list :9092 --topic test
And then I pass message *This Is D
If you run multiple instances of your app, you may not be able to access the
state store from the instance you are querying, i.e., it may be hosted on
another instance. If Streams is in the RUNNING state, this would seem to be
the issue.
On Wed, 7 Mar 2018 at 15:56 detharon wr
I'm afraid that's not what I'm looking for, as I'm just trying to retrieve
the local data, from inside my application (but from outside the stream
topology), and in some cases it becomes impossible. That is, the stream
changes its state from "rebalancing" to "running", but the store remains
inaccessible.
If you have multiple streams instances then the store might only be
available on one of the instances. Using `KafkaStreams.store(..)` will only
locate stores that are currently accessible by that instance. If you need
to be able to locate stores on other instances, then you should probably
have a r
Hello, I'm experiencing issues accessing the state stores outside the Kafka
stream. My application queries the state store every n seconds using the
.all() method to retrieve all key value pairs. I know that the state store
might not be available, so I guard against the InvalidStateStoreException
a
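One common pattern for the situation described in this thread is to wrap the store lookup in a bounded retry loop, since `KafkaStreams.store(...)` throws `InvalidStateStoreException` while a rebalance is in flight. A generic sketch of that guard follows; the supplier and the caught `RuntimeException` are stand-ins so the example is self-contained, and in a real app the supplier would be something like `() -> streams.store("my-store", QueryableStoreTypes.keyValueStore())`.

```java
import java.util.Optional;
import java.util.function.Supplier;

public class StoreGuard {
    // Retry a lookup that may transiently fail while state is migrating.
    // In a Streams app, `lookup` would call KafkaStreams.store(...) and the
    // exception caught would be InvalidStateStoreException.
    static <T> Optional<T> retry(Supplier<T> lookup, int attempts, long backoffMs) {
        for (int i = 0; i < attempts; i++) {
            try {
                return Optional.of(lookup.get());
            } catch (RuntimeException e) {
                // Store not available yet; back off and try again.
                try {
                    Thread.sleep(backoffMs);
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    break;
                }
            }
        }
        return Optional.empty(); // store never became available
    }
}
```

The bounded attempt count matters: if the store has genuinely moved to another instance, no amount of local retrying will succeed, and the caller should fall back (e.g. to querying the other instance) rather than loop forever.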
Hi,
I'm not using Confluent; I'm working with plain Kafka. I have renamed the
properties files following my own pattern.
My question is: I have two brokers; can each broker have its own
configuration file for consumer.properties? Because when you execute the
command you can put the number of consumer.prope
I would use an absolute file path to the property files when starting a
mirror maker, pointing --consumer.config and --producer.config at them. I
would also name the property files according to what they are used for; for
example, I would prefix mirror maker property files with 'mm'. He
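To make the naming suggestion above concrete: the files passed via `--consumer.config` and `--producer.config` are ordinary Kafka client property files. A hypothetical pair following the 'mm' prefix convention might look like this (file names, hosts, and values are illustrative examples, not defaults):

```properties
# mm-consumer.properties -- consumes from the source cluster
bootstrap.servers=source-broker-1:9092,source-broker-2:9092
group.id=mirror-maker-group
auto.offset.reset=earliest

# mm-producer.properties -- produces to the target cluster
bootstrap.servers=target-broker-1:9092,target-broker-2:9092
acks=all
```

They would then be referenced with absolute paths, e.g. `--consumer.config /etc/kafka/mm-consumer.properties`, so the mirror maker finds them regardless of its working directory.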
Hi everyone,
I'm using the mirror maker tool and I'm configuring consumer.properties and
producer.properties, but I'm not sure where I need to put these files.
Because the mirror maker tool is installed on the server with Kafka Manager,
and there are two brokers configured as well
Hi,
We have a Mesos-based infrastructure to run our Java-based microservices
and we'd like to use it for deploying connectors as well (with benefits of
reusing deployment specific knowledge we already have, isolating the load
and in general pretty much the same reasons Kafka Streams was designed