Hi Team,
My Kafka setup is as follows:
Kafka server version: 0.10.2
inter.broker.protocol.version: 0.8.2
log.message.format.version: 0.8.2
Kafka client version: 0.8.2
Now I need to change the following properties:
inter.broker.protocol.version: 0.10.2
log.message.format.version: 0.10.2
Kafka client version: 0.10.2
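For reference, the upgrade notes describe this kind of change as two rolling
bounces rather than one; a sketch with the versions above (check the 0.10.2
upgrade section for the exact procedure):

# server.properties, first rolling bounce (brokers already on 0.10.2 code):
inter.broker.protocol.version=0.10.2
log.message.format.version=0.8.2   # keep the old format while 0.8.2 clients remain

# after all clients are upgraded to 0.10.2, second rolling bounce:
inter.broker.protocol.version=0.10.2
log.message.format.version=0.10.2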
Thanks again for the replies. VERY much appreciated. I'll check both
/admin/delete_topics and /config/topics.
Chris
On Thu, Jul 20, 2017 at 9:22 PM, Carl Haferd wrote:
> If delete normally works, there would hopefully be some log entries when it
> fails. Are there any unusual zookeeper entries in the /admin/delete_topics
> path or in the other /admin folders?
If delete normally works, there would hopefully be some log entries when it
fails. Are there any unusual zookeeper entries in the /admin/delete_topics
path or in the other /admin folders?
Does the topic name still exist in zookeeper under /config/topics? If so,
that should probably be deleted as well.
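For anyone following along, those paths can be inspected with the zookeeper
shell that ships with Kafka (host/port and topic name are placeholders):

$ bin/zookeeper-shell.sh localhost:2181
ls /admin/delete_topics
ls /config/topics
get /config/topics/my-topic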
Delete is definitely there. The delete worked fine: there is nothing in
Zookeeper, and the controller reported that the delete was successful. It's
just that something seems to have gotten out of sync.
delete.topic.enable is true. I've successfully deleted topics in the
past.
I could be totally wrong, but I seem to recall that delete wasn't fully
implemented in 0.8.x?
On Fri, Jul 21, 2017 at 10:10 AM, Carl Haferd wrote:
> Chris,
>
> You could first check to make sure that delete.topic.enable is true and, if
> it isn't, enable it and try deleting again. If that doesn't work with
> 0.8.1.1 you might need to manually remove the topic's log files from the
> configured log.dirs folder on each broker in addition to removing the
> topic's zookeeper path.
Chris,
You could first check to make sure that delete.topic.enable is true and, if
it isn't, enable it and try deleting again. If that doesn't work with
0.8.1.1 you might need to manually remove the topic's log files from the
configured log.dirs folder on each broker in addition to removing the
topic's zookeeper path.
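For the record, the manual route on 0.8.1.1 would look roughly like this
(topic name and log.dirs path are placeholders; stop each broker before
touching its log directories):

# on each broker, after stopping it:
rm -rf /var/kafka-logs/my-topic-*

# then remove the topic's zookeeper paths (zkCli / zookeeper-shell):
rmr /brokers/topics/my-topic
rmr /config/topics/my-topic
rmr /admin/delete_topics/my-topic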
I agree with Jason that we are just adding a new field, so the impact on
parser tools may be limited. This additional information would be very
useful.
Guozhang
On Wed, Jul 19, 2017 at 11:57 AM, Jason Gustafson wrote:
> Ismael, I debated that also, but the main point was to make users aware of
> the rebalance latency (with KIP-134 in mind).
Hi Bill,
> When you say "even if the application has not had data for a long time" do
you have a rough idea of how long?
Minutes, hours
> What is the value of your
"auto.offset.reset" configuration?
I don't specify it explicitly, but the ConsumerConfig logs indicate
"auto.offset.reset = earliest".
Hi Dmitry,
When you say "even if the application has not had data for a long time" do
you have a rough idea of how long? What is the value of your
"auto.offset.reset" configuration?
Thanks,
Bill
On Thu, Jul 20, 2017 at 6:03 PM, Dmitry Minkovsky wrote:
> My Streams application is configured to commit offsets every 250ms.
Hi Ovidu,
The see-saw behavior is inevitable with linux when you have concurrent
reads and writes. However, tuning the following two settings may help
achieve more stable performance (from Jay's link):
> *dirty_ratio* Defines a percentage value. Writeout of dirty data begins
> (via *pdflush*) when dirty data comprises this percentage of total system
> memory.
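A sketch of how those knobs are adjusted (the values here are illustrative,
not recommendations; test against your own workload):

# start background writeback earlier, so flushes are smaller and smoother
sysctl -w vm.dirty_background_ratio=5
# the hard ceiling at which writers are throttled until data is flushed
sysctl -w vm.dirty_ratio=60
# add both to /etc/sysctl.conf to persist across reboots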
Hi Jason,
Regarding your comment about the current limitation on the information
returned for a consumer group, do you think it's worth expanding the API
to return some additional info (e.g. generation id, group leader, ...)?
Thanks.
--Vahid
My Streams application is configured to commit offsets every 250ms:
Properties streamsConfig = new Properties();
streamsConfig.put(StreamsConfig.COMMIT_INTERVAL_MS_CONFIG, 250);
However, every time I restart my application, records that have already
been processed are re-processed.
Thanks Rajini!
On 20 Jul 2017 at 18:41, "Rajini Sivaram" wrote:
> David,
>
> The release plans are here:
> https://github.com/spring-projects/spring-kafka/milestone/20?closed=1
>
> We have already included TX and headers support in the current M3, which is
> planned just after the next SF 5.0 RC3, which is expected tomorrow.
Yes, I’m using Debian Jessie 2.6 installed on this hardware [1].
It is also my understanding that Kafka relies on the system's page cache
(Linux in this case), which uses a Clock-Pro-based page replacement policy
and does complex things for general workloads. I will check the tuning
parameters.
I suspect this is on Linux right?
The way Linux works is it uses a percent of memory to buffer new writes, at
a certain point it thinks it has too much buffered data and it gives high
priority to writing that out. The good news about this is that the writes
are very linear, well laid out, and high throughput.
Hi
I am using named pipe and reading from it using Java and sending events to
Kafka Cluster.
The stdout of a process is `tee`ed to the named pipe.
But I am observing data loss. I have yet to debug this issue. I was
wondering if anybody has already interfaced a named pipe for sending data to
Kafka and what the caveats are.
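In case a sketch helps while debugging, here is a minimal pipe-to-Kafka
reader, assuming a plain Java producer (pipe path, topic, and broker address
are made up). A common loss mode is fire-and-forget sends, so this uses
acks=all, a send callback, and a flush before exit:

import java.io.BufferedReader;
import java.io.FileReader;
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class PipeToKafka {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("acks", "all");   // wait for full acknowledgement instead of dropping silently
        props.put("retries", 5);    // retry transient broker failures

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props);
             BufferedReader pipe = new BufferedReader(new FileReader("/tmp/events.pipe"))) { // placeholder pipe
            String line;
            while ((line = pipe.readLine()) != null) {
                producer.send(new ProducerRecord<>("events", line), (meta, e) -> {
                    if (e != null) e.printStackTrace(); // surface failed sends rather than losing them
                });
            }
            producer.flush(); // drain in-flight records before exiting
        }
    }
}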
Hi all,
I have a weird situation here. I have deleted a few topics on my 0.8.1.1
cluster (old, I know...). The deletes succeeded according to the
controller.log:
[2017-07-20 16:40:31,175] INFO [TopicChangeListener on Controller 1]: New
topics: [Set()], deleted topics:
[Set(perf_doorway-supplier
David,
The release plans are here:
https://github.com/spring-projects/spring-kafka/milestone/20?closed=1
We have already included TX and headers support in the current M3, which is
planned just after the next SF 5.0 RC3, which is expected tomorrow.
Regards,
Rajini
On Thu, Jul 20, 2017 at 5:01
Hello Pradeep,
Thank you for sharing your experience; I will certainly consider it.
On Thu, Jul 20, 2017 at 9:29 AM, Pradeep Gollakota wrote:
> Luigi,
>
> I strongly urge you to consider a 5 node ZK deployment. I've always done
> that in the past for resiliency during maintenance. In a 3 node cluster,
> you can only tolerate one "failure".
Luigi,
I strongly urge you to consider a 5 node ZK deployment. I've always done
that in the past for resiliency during maintenance. In a 3 node cluster,
you can only tolerate one "failure", so if you bring one node down for
maintenance and another node crashes during said maintenance, your ZK
cluster loses quorum.
Yes Andrey,
you can use an ENI without an EIP on AWS if you only want a private address.
After some consideration, I think that growing the zookeeper cluster beyond
3 nodes is really unlikely, so I will attach 3 ENIs to 3 servers in the
autoscaling group and configure Kafka to use these 3 addresses.
Solved by KAFKA-5600.
On Tue, Jul 18, 2017 at 18:51, Sabarish Sasidharan wrote:
> This is similar to a problem I am also grappling with. We store the
> processed offset for each partition in state store. And after restarts we
> see that sometimes the start offset that Kafka Streams uses is a few
> offsets behind the one we stored.
Hi, does somebody know if we will have a Spring Integration/Kafka release
soon using the Apache clients 0.11?
I'm seeing some behavior with the DistributedHerder that I am trying to
understand. I'm working on setting up a cluster of kafka connect nodes and
have a relatively large number of connectors to submit to it (392
connectors right now that will soon become over 1100). As for the
deployment of it I a
Did you try setting `auto.offset.reset` to "earliest" ?
-Matthias
On 7/18/17 8:44 PM, Yuri da Costa Gouveia wrote:
> Hello,
> I am having trouble getting data from old offsets. I'm using version
> 0.10.2.1, and I need assistance to recover this data.
> This is my consumer class:
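For illustration, a minimal plain-consumer sketch of that suggestion
(broker, group, and topic names are placeholders):

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class EarliestConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");  // placeholder
        props.put("group.id", "my-group");                 // placeholder
        props.put("auto.offset.reset", "earliest");        // start from the beginning when no committed offset exists
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Collections.singletonList("my-topic")); // placeholder topic
        // poll loop as usual...
    }
}

Note that auto.offset.reset only takes effect when the group has no committed
offset; a group that has already committed will resume from its commit, so to
re-read old data you may also need a fresh group.id or an explicit offset
reset.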
Sameer,
the optimization you describe applies to batch processing but not to
stream processing.
As you mentioned: "will traverse the data only once".
This property is interesting in batch processing only, as it means that
the data is only read from disk once and both map operations are applied
during that single pass.
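To make the distinction concrete, a minimal 0.10.x-style Streams sketch
(topic names are made up): in streaming, each record flows through both maps
as it arrives, so there is no second pass over the data to eliminate.

import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KStreamBuilder;

public class TwoMapsExample {
    public static void main(String[] args) {
        KStreamBuilder builder = new KStreamBuilder();
        KStream<String, String> source = builder.stream("input-topic"); // placeholder topics
        source.mapValues(String::trim)          // first map
              .mapValues(String::toUpperCase)   // second map runs on the same record immediately
              .to("output-topic");
        // each record traverses both operators once, as it arrives; there is
        // no dataset-level second traversal that fusing the maps would save
        // (KafkaStreams startup with a StreamsConfig omitted for brevity)
    }
}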
https://cwiki.apache.org/confluence/display/KAFKA/FAQ#FAQ-WhydoIgetanIllegalStateExceptionwhenaccessingrecordmetadata?
-Matthias
On 7/1/17 8:13 PM, Debasish Ghosh wrote:
> Just to give some more information, the ProcessorContext that gets passed
> to the init method of the custom store has a null
You'll need a ZK quorum established before brokers boot, for sure.
On Thu, Jul 20, 2017 at 12:53 PM, M. Manna wrote:
> Hello,
>
> This might be too obvious for some people, but just thinking out loud here.
>
> So we need a recommended 3 node cluster to achieve the 1 point failure
> model. I am trying to deploy a 3 node cluster (3 zks and 3 brokers) in
> Linux (or even Windows, doesn't matter here).
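For completeness, a minimal 3-node quorum sketch (hostnames and paths are
placeholders):

# zoo.cfg on each of the three ZK nodes
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/var/lib/zookeeper
clientPort=2181
server.1=zk1:2888:3888
server.2=zk2:2888:3888
server.3=zk3:2888:3888
# each node also needs its own id (1, 2, or 3) in <dataDir>/myid

# then point every broker's server.properties at the full quorum:
zookeeper.connect=zk1:2181,zk2:2181,zk3:2181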
Hello,
This might be too obvious for some people, but just thinking out loud here.
So we need a recommended 3 node cluster to achieve the 1 point failure
model. I am trying to deploy a 3 node cluster (3 zks and 3 brokers) in
Linux (or even Windows, doesn't matter here).
Under the circumstance (o
Hi,
I have two questions:
> 1°/ Is the format written on this topic easily readable using the same
> Serde I use for the state store or does Streams change it in any way?
>
If it is a KeyValue Store then you can use your Serdes to read from the
changelog.
> 2°/ since the topic will be used by s
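To illustrate the first answer, a sketch of a plain consumer reading the
changelog (application id, store name, and the value deserializer are made
up; by default the changelog topic is named
<application.id>-<store.name>-changelog):

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class ChangelogReader {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");  // placeholder
        props.put("group.id", "state-reader");             // placeholder
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        // swap in the same value deserializer your store's Serde uses:
        props.put("value.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");

        try (KafkaConsumer<String, byte[]> consumer = new KafkaConsumer<>(props)) {
            // default changelog naming: <application.id>-<store.name>-changelog
            consumer.subscribe(Collections.singletonList("my-app-my-store-changelog"));
            while (true) {
                ConsumerRecords<String, byte[]> records = consumer.poll(1000);
                for (ConsumerRecord<String, byte[]> record : records) {
                    // a null value is a tombstone: the key was deleted from the store
                    System.out.println(record.key() + " -> "
                            + (record.value() == null ? "<deleted>" : record.value().length + " bytes"));
                }
            }
        }
    }
}

Keep in mind the changelog is compacted and deletes show up as null-value
tombstones, so the consumer should handle null values.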
Hello everyone,
I currently run a small Streams app which accumulates data in a state store
and periodically erases it. I would like another application (not running
Kafka Streams), to consume the Kafka topic which backs this State Store and
sometimes take actions depending on the state (of course
OK, sounds good. Let's just make sure we note this in the upgrade notes.
Ismael
On Wed, Jul 19, 2017 at 11:57 AM, Jason Gustafson wrote:
> Ismael, I debated that also, but the main point was to make users aware of
> the rebalance latency (with KIP-134 in mind). I'm guessing no one would
> notice.