StreamsBuilder would be my vote.
> On Mar 13, 2017, at 9:42 PM, Jay Kreps wrote:
>
> Hey Matthias,
>
> Makes sense; I'm advocating more for removing the word topology than for any
> particular new replacement.
>
> -Jay
>
> On Mon, Mar 13, 2017 at 12:30 PM, Matthias J. Sax
> wrote:
>
>> Jay,
>>
Hey Matthias,
Makes sense; I'm advocating more for removing the word topology than for any
particular new replacement.
-Jay
On Mon, Mar 13, 2017 at 12:30 PM, Matthias J. Sax
wrote:
> Jay,
>
> thanks for your feedback
>
> > What if instead we called it KStreamsBuilder?
>
> That's the current name an
We are using the latest Kafka and Logstash versions for ingesting log data
from several business apps (now a few, but eventually 100+) into ELK. We have
a standardized logging structure for business apps to log data into a Kafka
topic, and we are able to ingest it into ELK via the Kafka input plugin.
Currently, we are using on
Hi Matthias,
Thank you for the quick response, appreciate it!
I created the topics wordCount-input and wordCount-output. Pushed some data
to wordCount-input using
docker exec -it $(docker ps -f "name=kafka\\." --format "{{.Names}}")
/bin/kafka-console-producer --broker-list localhost:9092 --topi
Running the consumer with full DEBUG/TRACE level logging will show you why.
On Thu, Mar 2, 2017 at 2:13 AM, Dhirendra Suman <
dhirendra.su...@globallogic.com.invalid> wrote:
> Hi,
>
> http://stackoverflow.com/questions/42551704/call-to-
> consumerrecordsstring-string-records-consumer-poll1000-hangs-a
I am using interactive streams to query tables:
ReadOnlyKeyValueStore store
= streams.store("view-user-drafts",
QueryableStoreTypes.keyValueStore());
Documentation says that #range() should not return null values. However,
for keys that have been tombstoned, it does retu
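A common workaround while the reported behavior stands is to skip null values defensively when iterating the range. A minimal sketch of that filtering, using a plain list of map entries to stand in for the KeyValueIterator returned by store.range() (names and data are illustrative, not from the original application):

```java
import java.util.AbstractMap.SimpleEntry;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class RangeFilterSketch {
    // Skip entries whose value is null (e.g. tombstoned keys that
    // still show up in a range scan).
    static List<Map.Entry<String, String>> nonNull(List<Map.Entry<String, String>> range) {
        List<Map.Entry<String, String>> result = new ArrayList<>();
        for (Map.Entry<String, String> e : range) {
            if (e.getValue() != null) {
                result.add(e);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        List<Map.Entry<String, String>> range = new ArrayList<>();
        range.add(new SimpleEntry<>("draft-1", "hello"));
        range.add(new SimpleEntry<>("draft-2", null));   // tombstoned key
        range.add(new SimpleEntry<>("draft-3", "world"));
        System.out.println(nonNull(range).size()); // prints 2
    }
}
```

With the real store, the same null check would go inside the loop over the KeyValueIterator, which must also be closed after use.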
Maybe you need to reset your application using the reset tool:
http://docs.confluent.io/current/streams/developer-guide.html#application-reset-tool
Also keep in mind, that KTables buffer internally, and thus, you might
only see data on commit.
Try to reduce commit interval or disable caching by s
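The two documented settings behind that advice are commit.interval.ms and cache.max.bytes.buffering. A minimal sketch of such a config, using plain java.util.Properties with the string keys that back the StreamsConfig constants (the values are illustrative, not tuned recommendations):

```java
import java.util.Properties;

public class LowLatencyStreamsProps {
    public static Properties build() {
        Properties props = new Properties();
        // Commit (and thus flush cached KTable updates) every 100 ms
        // instead of the 30 s default; key matches
        // StreamsConfig.COMMIT_INTERVAL_MS_CONFIG.
        props.put("commit.interval.ms", "100");
        // Disable record caching entirely so every update is forwarded
        // downstream immediately; key matches
        // StreamsConfig.CACHE_MAX_BYTES_BUFFERING_CONFIG.
        props.put("cache.max.bytes.buffering", "0");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(build().getProperty("commit.interval.ms")); // prints 100
    }
}
```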
Steven,
thanks for your feedback.
I am not sure about KafkaStreamsBuilder (even if I agree that it is better
than KStreamBuilder), because it sounds like a builder that creates a
KafkaStreams instance. But that's of course not the case. It builds a
Topology -- that was the reason to consider callin
> On Mar 13, 2017, at 12:30 PM, Matthias J. Sax wrote:
>
> Jay,
>
> thanks for your feedback
>
>> What if instead we called it KStreamsBuilder?
>
> That's the current name and I personally think it's not the best one.
> The main reason why I don't like KStreamsBuilder is that we have the
> c
Hi,
This is the first time I am using Kafka Streams. I would like to read
from an input topic and write to an output topic. However, I do not see the
word count when I try to run the example below. It looks like it does not
connect to Kafka; I do not see any error though. I tried my localhost kafka as w
Jay,
thanks for your feedback
> What if instead we called it KStreamsBuilder?
That's the current name and I personally think it's not the best one.
The main reason why I don't like KStreamsBuilder is that we have the
concepts of KStreams and KTables, and the builder creates both. However,
the n
Thanks Mathieu.
Is it possible that the background thread within your Streams' consumer,
which is responsible for sending the heartbeats, got suspended due to a GC?
Note that its heartbeat frequency and the broker-side checking interval
are still defined by the "session.timeout.ms" config, so if th
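As a rough illustration of the knobs in question, a consumer config fragment built with plain Properties (the keys match the ConsumerConfig constants; the values are only illustrative, not advice from this thread):

```java
import java.util.Properties;

public class HeartbeatProps {
    public static Properties build() {
        Properties props = new Properties();
        // Broker-side window for receiving heartbeats before the
        // consumer is declared dead and a rebalance is triggered.
        props.put("session.timeout.ms", "30000");
        // How often the background thread sends heartbeats; keep it
        // well below session.timeout.ms (a third is a common rule).
        props.put("heartbeat.interval.ms", "10000");
        return props;
    }
}
```

If a GC pause suspends the heartbeat thread for longer than session.timeout.ms, the group coordinator will evict the member even though processing is otherwise healthy.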
Two things:
1. This is a minor thing but the proposed new name for KStreamBuilder
is StreamsTopologyBuilder. I actually think we should not put topology in
the name as topology is not a concept you need to understand at the
kstreams layer right now. I'd think of three categories of con
There is no need to create a new producer instance for each write request.
In doing so you lose the advantages of the buffering and batching that the
producer offers. In your use case I would recommend having a single running
producer and tuning the batch size and linger.ms settings if you find tha
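That advice can be sketched as configuration for a single long-lived producer, again with plain Properties whose keys match the ProducerConfig constants (the broker address and the values are illustrative):

```java
import java.util.Properties;

public class SharedProducerProps {
    // KafkaProducer is documented as thread-safe, so one long-lived
    // instance built from these settings can be shared by all request
    // threads instead of creating a new producer per write.
    public static Properties build() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // illustrative address
        // Wait up to 20 ms for more records so several writes share a batch.
        props.put("linger.ms", "20");
        // Allow batches of up to 64 KB before a send is forced.
        props.put("batch.size", "65536");
        return props;
    }
}
```

The trade-off is a small added latency (bounded by linger.ms) in exchange for far fewer, larger requests to the brokers.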
Hi Guozhang,
Thanks for the response.
I don't believe that my client has soft failed, as my max.poll.interval.ms
is configured at 180 (30 minutes) and the client app shows "Committing
task StreamTask [N_N]" log messages within the past few (1-3) minutes of
the failure.
In terms of the "under
I'm trying to perform an upgrade of 2 Kafka clusters of 5 instances each.
When I do the switch between 0.10.0.1 and 0.10.1.0 or 0.10.2.0, I see
that the ISR is lost when I upgrade one instance. I haven't found
anything relevant about this problem yet; the logs seem just fine.
eg.
kafka-topics.sh --
Hi all,
I've been trying to set up a Kafka cluster using Kubernetes, but after being
stuck for a few days I'm looking for some help. Hopefully someone here can
help me out.
Kubernetes setup:
3 dedicated machines, which have the following Kubernetes settings:
Service which exposes ports and DNS rec
We are planning to migrate to the newer version of Kafka. But that's a few
weeks away.
We will try setting the socket config and see how it turns out.
Thanks a lot for your response!
On Mon, Mar 13, 2017 at 3:21 PM, Eno Thereska
wrote:
> Thanks,
>
> A couple of things:
> - I’d recommend movi
Hello,
Nobody has experience with Kafka Connect tasks with external dependencies?
Thanks,
Petr
From: Petr Novak [mailto:oss.mli...@gmail.com]
Sent: 23 February 2017 14:48
To: users@kafka.apache.org
Subject: Pattern to create Task with dependencies (DI)
Hello,
it seems that KConnect ta
Hi,
We have three brokers in a cluster with replication factor 3. We are
using Kafka 0.10.0.1. We see some failures with metadata timeout exceptions
while producing.
We have configured retries=3 and max in flight requests=1.
After comparing with the old Scala producer code we found that in the new Produc
Hi,
I am using a simple Kafka producer (Java based, version 0.9.0.0) in an
application where I receive a lot of hits (about 50 per second, much like a
servlet) on the application that has the Kafka producer. A different set of
records comes with each request.
I am using only one instance of the Kafka producer to p
Hi,
How can we identify whether a set of brokers (nodes) belongs to the same cluster?
I understand we can use ZooKeeper: all the brokers pointing to the
same ZooKeeper URL belong to the same cluster.
But is there a common identity between brokers which can help identify
whether brokers belong to the same cluster?
Thanks,
A couple of things:
- I’d recommend moving to 0.10.2 (latest release) if you can since several
improvements were made in the last two releases that make rebalancing and
performance better.
- When running on environments with large latency on AWS at least (haven’t
tried Google cloud), o
Hi Eno,
Please find my answers inline.
> We are in the process of documenting capacity planning for streams, stay
> tuned.
>
This would be great! Looking forward to it.
> Could you send some more info on your problem? What Kafka version are you
> using?
>
We are using Kafka 0.10.0.0.
> Are the
Thanks. I added a link to this thread in KAFKA-4829.
2017-03-10 9:49 GMT+01:00 Michael Noll :
> I think a related JIRA ticket is
> https://issues.apache.org/jira/browse/KAFKA-4829 (see Guozhang's comment
> about the ticket's scope).
>
> -Michael
>
>
> On Thu, Mar 9, 2017 at 6:22 PM, Damian Guy w
Hi Mahenda,
We are in the process of documenting capacity planning for streams, stay tuned.
Could you send some more info on your problem? What Kafka version are you
using? Are the VMs on the same or different hosts? Also what exactly do you
mean by “the lag keeps fluctuating”, what metric are