As Eno said I'd use the interactive queries API for Q2.
Demo apps:
- https://github.com/confluentinc/examples/blob/3.1.x/kafka-streams/src/main/java/io/confluent/examples/streams/interactivequeries/kafkamusic/KafkaMusicExample.java
- https://github.com/confluentinc/examples/blob/3.1.x/kafka-stream
Thank you both for the directions, I'll dive into these.
Peter
On Jan 20, 2017 9:55 AM, "Michael Noll" wrote:
> As Eno said I'd use the interactive queries API for Q2.
>
> Demo apps:
> - https://github.com/confluentinc/examples/blob/3.1.x/kafka-streams/src/main/java/io/confluent/examples/
Hello,
I have a test environment with 3 brokers and 1 ZooKeeper node, in which
clients connect using two-way SSL authentication. Recently I updated Kafka
0.10.1.0 to version 0.10.1.1, and now the consumers throw the
following error when started:
$ bin/kafka-console-consumer.sh --bootstrap
Hi Kafka-users,
Could anyone suggest why I am not able to get the topic configuration
by running the command below?
$ ./kafka-configs.sh --zookeeper *a.b.c.d:2181/kafka --describe --entity-name
test --entity-type topics
Configs for topics:test are
$
*a.b.c.d - private url where zookeeper is
It seems you have to add some topic-level configs before they show up. Run the
command below for an example:
> bin/kafka-configs.sh --zookeeper localhost:2181 --entity-type topics
> --entity-name my-topic --alter --add-config max.message.bytes=128000
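
To illustrate the full round trip (a sketch only; the topic name my-topic and
localhost ZooKeeper are placeholders, as in the example above):

```shell
# With no overrides set, --describe prints an empty config list
bin/kafka-configs.sh --zookeeper localhost:2181 --entity-type topics \
  --entity-name my-topic --describe

# Add a topic-level override...
bin/kafka-configs.sh --zookeeper localhost:2181 --entity-type topics \
  --entity-name my-topic --alter --add-config max.message.bytes=128000

# ...and the override now appears in the --describe output
bin/kafka-configs.sh --zookeeper localhost:2181 --entity-type topics \
  --entity-name my-topic --describe
```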
From: Barot, Abh
Hi,
I'm testing upgrading our cluster from 0.9.0.1 to 0.10.1.0 on 2 clusters A
and B. I have upgraded only the inter.broker.protocol.version to 0.10.1.0.
The log.message.format.version is still 0.9.0.1.
I'm writing test data from a java producer to the upgraded cluster A. As
expected the .timeinde
Hi,
Any input on this would be of great help. A rolling restart could fix the
issue, but I'm not sure if that's the right way to do it.
Thanks,
Meghana
On Wed, Jan 18, 2017 at 9:46 AM, Meghana Narasimhan <
mnarasim...@bandwidth.com> wrote:
> Hi,
>
> We have a 3 node cluster with 0.9.0.1 version. The c
Suggestions?
On Thu, Jan 19, 2017 at 6:23 PM, Elias Levy
wrote:
> In the process of testing a Kafka Streams application I've come across a
> few issues that are baffling me.
>
> For testing I am executing a job on 20 nodes with four cores per node,
> each instance configured to use 4 threads, ag
Hi Meghana,
Have you tried using the 'kafka-preferred-replica-election.sh' script? It
will try to move leaders back to the preferred replicas when there is a
leader imbalance.
https://cwiki.apache.org/confluence/display/KAFKA/Replication+tools#Replicationtools-1.PreferredReplicaLeaderElectionTool
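For reference, a sketch of running the tool (the ZooKeeper address is a
placeholder; with no --path-to-json-file argument it triggers the election
for all partitions):

```shell
bin/kafka-preferred-replica-election.sh --zookeeper localhost:2181
```

Alternatively, brokers with auto.leader.rebalance.enable=true will
periodically move leadership back to the preferred replicas on their own.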
Hi there,
I see the exception below in one of my node's logs (cluster with 3 nodes), and
then the node stops responding (it's in a hung state; if I do
ps -ef | grep kafka, I see the Kafka process but it is not responding), and we
lost around 100 messages:
1. What could be the reas
Hi,
I think you are facing this issue.
https://issues.apache.org/jira/browse/KAFKA-3163
Thanks
Sudev
On Fri, Jan 20, 2017 at 9:50 PM Meghana Narasimhan <
mnarasim...@bandwidth.com> wrote:
> Hi,
> I'm testing upgrading our cluster from 0.9.0.1 to 0.10.1.0 on 2 clusters A
> and B. I have upgraded
Thanks Sudev, but that's just the Jira for implementing the KIP. I'm not sure
it addresses the issue that I am seeing.
Just trying to understand what the output means.
Thanks,
Meghana
On Fri, Jan 20, 2017 at 4:17 PM, Sudev A C wrote:
> Hi,
>
> I think you are facing this issue.
> https://issues.apac
Thanks Apurva ! Will give that a shot.
Thanks,
Meghana
On Fri, Jan 20, 2017 at 2:16 PM, Apurva Mehta wrote:
> Hi Meghana,
>
> Have you tried using the 'kafka-preferred-replica-election.sh' script? It
> will try to move leaders back to the preferred replicas when there is a
> leader imbalance.
>
Meghana,
You are probably seeing this when running the DumpLogSegments tool on the
active (last) log segment. The DumpLogSegments tool is supposed to be used
only on the indexes of immutable segments (i.e., once they have been rolled).
We preallocate the index on the active segment with 0 values. So,
DumpLo
Looping back on this for posterity. In case anyone else runs into this, the
solution was as follows:
- add a new node with the bogus broker ID
- let the cluster equilibrate / expand ISR sets
- move any partitions that have been assigned to this broker to the other
(original) brokers in the cluster
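
The partition-move step above can be sketched with
kafka-reassign-partitions.sh (file names, the ZooKeeper address, and the
broker list are placeholders):

```shell
# Generate a candidate plan moving the listed topics onto brokers 1,2,3
bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 \
  --topics-to-move-json-file topics.json --broker-list "1,2,3" --generate

# Save the proposed plan to reassignment.json, then execute it
bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 \
  --reassignment-json-file reassignment.json --execute

# Verify that the reassignment has completed
bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 \
  --reassignment-json-file reassignment.json --verify
```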
I get this same behavior with Kafka 0.10.1.0 using PlainLoginModule and
simply making the password different from expected on the client. We also
get this behavior when creating our own Authorizer and always returning
false. I can tell a retry is happening because the brokers get called at
least ever
Hi,
What is the minimum hardware requirement for Kafka?
I see the minimum recommended RAM size for production is 32 GB.
What issues could there be with 8 GB of RAM? For test purposes I was planning
to use a 1 GB or 4 GB AWS machine; is it safe to run on a 1 GB machine for a
few days? For log writes I have a big disk.