Hi Vahid,
here is the output of the GetOffsetShell commands (with --time -1 & -2)
$KAFKA10_HOME/bin/kafka-run-class.sh kafka.tools.GetOffsetShell
--broker-list localhost:6092,localhost:6093,localhost:6094,localhost:6095
--topic topicPurge --time -2 --partitions 0,1,2
topicPurge:0:67
topicPurge:1
Hi Karan,
Just to clarify, with `--time -1` you are getting back the latest offset
of the partition.
If you do `--time -2` you'll get the earliest valid offset.
So, let's say the latest offset of partition 0 of topic 'test' is 100.
When you publish 5 messages to the partition, and before retenti
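For reference, you can check both values side by side with the same tool (just a sketch, reusing the broker list and topic from your earlier command):

$KAFKA10_HOME/bin/kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list localhost:6092,localhost:6093,localhost:6094,localhost:6095 --topic topicPurge --time -2 --partitions 0,1,2   # earliest valid offsets
$KAFKA10_HOME/bin/kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list localhost:6092,localhost:6093,localhost:6094,localhost:6095 --topic topicPurge --time -1 --partitions 0,1,2   # latest offsets

If retention has kicked in, the -2 (earliest) values move up towards the -1 (latest) values.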
The issue is in the ZooKeeper and Kafka configuration.
Kafka server.properties
#advertised.host.name=10.179.165.7 # comment at 20170621
#advertised.listeners=PLAINTEXT://0.0.0.0:9080 # comment at 20170621
#port=9080 # comment at 20170621
listeners=PLAINTEXT://10.179.165.7:9080 #changed from 0.0.0.0 to
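For what it's worth, a minimal working combination tends to look like the sketch below (host and port taken from this thread; adjust to an address your clients can actually reach):

listeners=PLAINTEXT://10.179.165.7:9080
advertised.listeners=PLAINTEXT://10.179.165.7:9080   # must be reachable by clients, so not 0.0.0.0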
Hi Karan,
I think the issue is in the verification step, because the start and end
offsets are not going to be reset when messages are deleted.
Have you checked whether a consumer would see the messages that are
supposed to be deleted? Thanks.
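For example, a quick check along these lines (just a sketch; the broker port is taken from your GetOffsetShell command) would show whether the old records are still readable:

$KAFKA10_HOME/bin/kafka-console-consumer.sh --bootstrap-server localhost:6092 --topic topicPurge --from-beginning

(Depending on your exact 0.10.x build you may need to add --new-consumer, or use the older --zookeeper form instead.)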
--Vahid
From: karan alang
To: users@kafka.apa
A quick note on notable changes since rc1:
1. A significant performance improvement if transactions are enabled:
https://github.com/apache/kafka/commit/f239f1f839f8bcbd80cce2a4a8643e15d340be8e
2. Fixed a controller regression if many brokers are started
simultaneously:
https://github.com/apache/ka
Hello Kafka users, developers and client-developers,
This is the third candidate for release of Apache Kafka 0.11.0.0.
This is a major version release of Apache Kafka. It includes 32 new KIPs.
See the release notes and release plan (
https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+
Hi Vahid,
Somehow, the suggested changes don't seem to be taking effect, and I don't
see the data being purged from the topic.
Here are the steps I followed:
1) The topic is configured with retention.ms=1000
$KAFKA10_HOME/bin/kafka-topics.sh --describe --topic topicPurge --zookeeper
localhost:216
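In case it helps, the topic-level override can also be verified with kafka-configs.sh (a sketch; the ZooKeeper address is assumed to be the same one used with --alter elsewhere in this thread):

$KAFKA10_HOME/bin/kafka-configs.sh --zookeeper localhost:2161 --describe --entity-type topics --entity-name topicPurge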
Hello All,
Let's assume I have a 3-node ZooKeeper ensemble and a 3-node Kafka cluster
in my Kafka environment, and one of the ZK nodes goes down.
What would be the impact of one ZK node failing on the Kafka cluster?
I am just trying to understand the difference between a 2-node ZooKeeper ensemble
and a 3-node Zoo
Hi,
there are two things:
1) The aggregation operator produces an output record each time the aggregate
is updated. Thus, you would get 6 records in your example. At the same
time, we deduplicate consecutive outputs with an internal cache, and the
cache is flushed non-deterministically (either partly f
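If you want to see every intermediate update while testing, one option (a sketch; the values are only illustrative) is to effectively disable that cache in your StreamsConfig:

cache.max.bytes.buffering=0
commit.interval.ms=100

With the cache at 0 bytes, each update to the aggregate is forwarded downstream instead of being deduplicated.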
Hey Subhash,
Thanks, I was able to test this out with a 1-partition topic and verify this.
On Thu, Jun 22, 2017 at 1:39 PM, Subhash Sriram
wrote:
> Hi Karan,
>
> Yeah, so as to Paolo's point, keep in mind that Kafka does not guarantee
> order across partitions, only within a partition. If you publi
got it, thanks!
On Thu, Jun 22, 2017 at 12:40 PM, Paolo Patierno wrote:
> Kafka guarantees message ordering at the partition level, not across
> partitions at the topic level. Out-of-order reads may be possible if
> your topic has more than one partition.
>
> From: Subhash Sriram
> Sent: Thursda
Hi Karan,
Yeah, so as to Paolo's point, keep in mind that Kafka does not guarantee
order across partitions, only within a partition. If you publish messages
to a topic with 3 partitions, it will only be guaranteed that they are
consumed in order within the partition.
You can retry your test by pu
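For instance, a strictly ordered re-test could use a single-partition topic created like this (a sketch; the topic name and ZooKeeper address are placeholders):

$KAFKA_HOME/bin/kafka-topics.sh --create --zookeeper localhost:2181 --topic ordertest --partitions 1 --replication-factor 1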
Hi Subhash,
number of partitions - 3
On Thu, Jun 22, 2017 at 12:37 PM, Subhash Sriram
wrote:
> How many partitions are in your topic?
>
> On Thu, Jun 22, 2017 at 3:33 PM, karan alang
> wrote:
>
> > Hi All -
> >
> > version - kafka 0.10
> > I'm publishing data into Kafka topic using command lin
Hi Karan,
The other broker config that plays a role here is
"log.retention.check.interval.ms".
For a low log retention time like in your example, if this broker config
value is much higher, then the broker doesn't delete old logs regularly
enough.
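For example, a broker-side override along these lines (a sketch; the value is only illustrative) makes the retention check run more often than the 5-minute default:

log.retention.check.interval.ms=30000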
--Vahid
From: karan alang
To: users@kaf
Kafka guarantees message ordering at the partition level, not across partitions at
the topic level. Out-of-order reads may be possible if your topic has more
than one partition.
From: Subhash Sriram
Sent: Thursday, 22 June, 21:37
Subject: Re: Kafka 0.10 - kafka console consumer not reading the d
How many partitions are in your topic?
On Thu, Jun 22, 2017 at 3:33 PM, karan alang wrote:
> Hi All -
>
> version - kafka 0.10
> I'm publishing data into a Kafka topic using the command line,
> and reading the data using the Kafka console consumer
>
> *Publish command ->*
>
> $KAFKA_HOME/bin/kafka-verifia
Hi All -
version - kafka 0.10
I'm publishing data into a Kafka topic using the command line,
and reading the data using the Kafka console consumer
*Publish command ->*
$KAFKA_HOME/bin/kafka-verifiable-producer.sh --topic mmtopic1
--max-messages 100 --broker-list
localhost:9092,localhost:9093,localhost:909
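The matching consumer side can be checked with something like this (a sketch; broker port assumed from the list above, and older 0.10.x builds may also need --new-consumer):

$KAFKA_HOME/bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic mmtopic1 --from-beginning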
Hi All -
How do I go about deleting data from Kafka topics? I have Kafka 0.10
installed.
I tried setting the topic parameter as shown below ->
$KAFKA10_HOME/bin/kafka-topics.sh --zookeeper localhost:2161 --alter
--topic mmtopic6 --config retention.ms=1000
I was expecting to have the data p
Hi team,
As per my experimentation, MirrorMaker doesn't compress messages sent
to the target broker if it is not configured to do so, even if the messages in the
source broker are compressed. I understand the current implementation of
MirrorMaker has no visibility into what compression codec the source
me
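If you do want the mirrored data recompressed on the target cluster, a sketch of one option (file name and codec are just examples) is to set compression on the MirrorMaker producer config:

# producer.properties passed via kafka-mirror-maker.sh --producer.config
compression.type=gzip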
Answers inline:
> On 22 Jun 2017, at 03:26, Guozhang Wang wrote:
>
> Thanks for the updated KIP, some more comments:
>
> 1. The config name is "default.deserialization.exception.handler" while the
> interface class name is "RecordExceptionHandler", which is more general
> than the intended purp
Hi all,
I’m playing with Kafka Streams 0.10.2.1 and I’m having some issues here which I
hope you can help me to clarify/understand.
In a hypothetical scenario, I have 2 source streams – clicks and orders – which
I’m trying to join in order to determine from which page the purchase has been
made.
Raghav,
We are going through the voting process now, expecting to have another RC
and release in a few more days.
Guozhang
On Thu, Jun 22, 2017 at 3:59 AM, Raghav wrote:
> Hi
>
> Would anyone know when is the Kafka 0.11.0 scheduled to be released ?
>
> Thanks.
>
> --
> Raghav
>
--
-- Guoz
That's fair, and nice find with the transaction performance improvement!
Once the RC is out, we'll do a final round of performance testing with the
new ProducerPerformance changes enabled.
I think it's fair that this shouldn't delay the release. Is there an
official stance on what should and shou
Hi Tom,
We are going to do another RC to include Apurva's significant performance
improvement when transactions are enabled:
https://github.com/apache/kafka/commit/f239f1f839f8bcbd80cce2a4a8643e15d340be8e
Given that, we can also include the ProducerPerformance changes that Apurva
did to find and
Do you list all three brokers on your consumer's bootstrap-server list?
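For example, with the quickstart's setup that would look roughly like the sketch below (topic name and ports assumed from the quickstart defaults):

bin/kafka-console-consumer.sh --bootstrap-server localhost:9092,localhost:9093,localhost:9094 --from-beginning --topic my-replicated-topic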
-hans
> On Jun 22, 2017, at 5:15 AM, 夏昀 wrote:
>
> Hello:
> I am trying the quickstart in the Kafka documentation; the link is
> https://kafka.apache.org/quickstart. When I moved to Step 6: Setting up a
> multi-broker cluster, I ha
Thank you very much, Damian
^_^
> On 22 Jun 2017, at 22:43, Damian Guy wrote:
>
> Hi,
> Yes the key format used by a window store changelog is the same format as
> is stored in RocksDB. You can see what the format is by looking here:
> https://github.com/apache/kafka/blob/trunk/streams/src/main/java/or
Hi,
Yes the key format used by a window store changelog is the same format as
is stored in RocksDB. You can see what the format is by looking here:
https://github.com/apache/kafka/blob/trunk/streams/src/main/java/org/apache/kafka/streams/state/internals/WindowStoreUtils.java
Thanks,
Damian
On Thu
Hello:
I am trying the quickstart in the Kafka documentation; the link is
https://kafka.apache.org/quickstart. When I moved to Step 6: Setting up a
multi-broker cluster, I deployed 3 Kafka broker instances. I killed either
server-1 or server-2, and everything goes well as the document says. But when I
ki
I explicitly call KTable.to(Serde>, Serdes.Long(), String
topic),
save the same data to another topic (manually created by myself), and then the exception
is gone.
So does the **-changelog internal topic have a special key format? (Even though the key type
is the same = windowed )
Hi:
When calling KGroupedStream.count(Windows windows, String storeName),
storeName-changelog is auto-created as an internal topic, with key type
windowed and value type Long.
I try to consume from the internal storeName-changelog; the code sample looks like:
final Deserializer> windowedDeserializer = new
Hello Eli,
This is from Kafka: Definitive Guide ( by Neha Narkhede , Gwen Shapira ,
and Todd Palino) , Chapter 2. Installing Kafka
"The Kafka broker limits the maximum size of a message that can be
produced, configured by the message.max.bytes parameter which defaults to
1000000, or 1 megabyte. A
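If you need larger messages, a broker-side sketch (values purely illustrative) is to raise the related settings together so replication keeps up:

message.max.bytes=2000000
replica.fetch.max.bytes=2000000   # should be at least message.max.bytes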
Hi Barton - I think we can use the async producer with callback API(s) to
keep track of which events failed.
--Senthil
On Thu, Jun 22, 2017 at 4:58 PM, SenthilKumar K
wrote:
> Thanks Barton.. I'll look into these ..
>
> On Thu, Jun 22, 2017 at 7:12 AM, Garrett Barton
> wrote:
>
>> Getting good
Thanks Barton.. I'll look into these ..
On Thu, Jun 22, 2017 at 7:12 AM, Garrett Barton
wrote:
> Getting good concurrency in a webapp is more than doable. Check out these
> benchmarks:
> https://www.techempower.com/benchmarks/#section=data-r14&hw=ph&test=db
> I linked to the single query one be
Hi
Would anyone know when is the Kafka 0.11.0 scheduled to be released ?
Thanks.
--
Raghav
Hi Eno,
I am less interested in the user-facing interface and more in the actual
implementation. Any hints on where I can follow the discussion on this? I
still want to discuss upstreaming of KAFKA-3705 with someone.
Best, Jan
On 21.06.2017 17:24, Eno Thereska wrote:
(cc’ing user-list too)
Note that while I agree with the initial proposal (withKeySerdes, withJoinType,
etc), I don't agree with things like .materialize(), .enableCaching(),
.enableLogging().
The former maintain the declarative DSL, while the latter break the declarative
part by mixing system decisions into the DSL. I
Hi Abhimanya,
You can very well do it through Kafka, Kafka Streams, and something like
Redis.
I would design it to be something like this:
1. Topic 1 - Pending Tasks
2. Topic 2 - Reassigned Tasks
3. Topic 3 - Task-to-Resource Mapping
Some other components could be:
4. Redis hash (task progress
Thanks for the reply, Mayank. Do you know if this is documented somewhere? I
wasn't able to find mention of it.
Thanks
Eli
> On 22 Jun 2017, at 05:50, mayank rathi wrote:
>
> If you are compressing messages, then the size of the "compressed" message should be
> less than what's specified in these paramet