Hello Murilo,
Thanks for your interest in KIP-150.
As we discussed in the KIP, the scope of this co-group is stream
co-aggregation. In your case, the first joining table is not from the
aggregation but is a source table itself, so it cannot be
included in the co-group of KIP-150.
Vinay,
It is up to your personal judgment whether you "trust" an x.0 release or not.
The RC was carefully tested, and 2.5.0 was released as "generally available",
which guarantees a certain level of stability. Of course, there could
always be unknown bugs.
About upgrading from 2.1 to 2.5: please consult the u
Hi
I'm really excited about the new release of Kafka Streams.
I've been watching the new CoGroup feature, and now that it's out, I'm
trying to play around with it.
I wonder what would be the best way to do a
KTable.leftJoin(otherTable).leftJoin(yetAnotherTable)...
Taking the Customer example in
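For illustration, a minimal sketch of what such a chain could look like; the
topics "customers", "addresses", and "orders" here are made-up placeholders,
not the Customer example:

import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KTable;

public class ChainedLeftJoins {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();

        // Three source tables keyed by the same customer id (hypothetical topics).
        KTable<String, String> customers = builder.table("customers");
        KTable<String, String> addresses = builder.table("addresses");
        KTable<String, String> orders = builder.table("orders");

        // Chained left joins: a left join emits a result even when the
        // right-hand side has no matching record (the joiner then sees null).
        KTable<String, String> enriched = customers
                .leftJoin(addresses, (cust, addr) -> cust + "|" + addr)
                .leftJoin(orders, (custAddr, ord) -> custAddr + "|" + ord);

        enriched.toStream().to("enriched-customers");
    }
}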
Hello Nicolae,
If your output topic is configured as log compacted, then sending a record
with null bytes effectively serves as a tombstone. Note that you'd need to
make sure in your sink node's serializer that the serialized bytes are null
when the unserialized, object-typed value indicates to be
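A minimal sketch of such a serializer, assuming a hypothetical User value
type with a "deleted" flag:

import java.nio.charset.StandardCharsets;
import org.apache.kafka.common.serialization.Serializer;

// Hypothetical value type: a user entity with a deletion marker.
class User {
    final String json;
    final boolean deleted;
    User(String json, boolean deleted) { this.json = json; this.deleted = deleted; }
}

public class TombstoneAwareSerializer implements Serializer<User> {
    @Override
    public byte[] serialize(String topic, User user) {
        // Returning null makes the record value null on the wire,
        // which log compaction treats as a tombstone for the key.
        if (user == null || user.deleted) {
            return null;
        }
        return user.json.getBytes(StandardCharsets.UTF_8);
    }
}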
Hello kafka community,
Is there a way from Kafka Streams to emit a tombstone? Say we have an
events topic with upserts or deletes, and an output topic with the current
state only, log compacted. An upsert event will get mapped to the current user
topic (it holds the full entity), but a delete would perha
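For illustration, one possible shape of such a topology; the Event type and
topic names below are made up:

import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KStream;

public class TombstoneTopology {
    // Hypothetical event type: an upsert carries the full entity,
    // a delete carries only the key.
    static class Event {
        final String payload;   // full entity for upserts, null for deletes
        Event(String payload) { this.payload = payload; }
        boolean isDelete() { return payload == null; }
    }

    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, Event> events = builder.stream("events");

        // Upserts keep the full entity; deletes become null values,
        // i.e. tombstones once the compacted output topic is involved.
        events.mapValues(e -> e.isDelete() ? null : e.payload)
              .to("current-users");
    }
}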
log.message.timestamp.difference.max.ms is "The maximum difference
allowed between the timestamp when a broker receives a message and the
timestamp specified in the message."
What happens if the message timestamp is in the future? We just
encountered a problem where a producer set a timestam
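For reference, a sketch of how a producer can end up attaching an explicit
future timestamp; the topic, config, and one-day offset below are made up:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class FutureTimestampProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Timestamp one day in the future, e.g. from a wrong clock
            // or a unit mix-up (seconds vs. milliseconds).
            long futureTs = System.currentTimeMillis() + 24L * 60 * 60 * 1000;
            producer.send(new ProducerRecord<>("some-topic", null, futureTs, "key", "value"));
        }
    }
}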
I've just published a blog post highlighting many of the improvements that
landed with 2.5.0.
https://blogs.apache.org/kafka/entry/what-s-new-in-apache2
-David
On Wed, Apr 15, 2020 at 4:15 PM David Arthur wrote:
> The Apache Kafka community is pleased to announce the release for Apache
> Kafka
Sorry, sent several times; please ignore this thread.
On 2020/04/16 15:49:16, Ilya R wrote:
> Hello, everyone
>
> I'm researching a problem: when reading a topic from the beginning, the max speed per
> partition/per consumer is not more than 80MB/s. But near the end of the topic the
> speed rises up to 250-30
Hello, everyone
I'm researching a problem: when reading a topic from the beginning, the max speed per
partition/per consumer is not more than 80MB/s. But near the end of the topic the
speed rises up to 250-300MB/s unexpectedly, not only on the last log segment but on
several of the latest log segments. I've already chang
There is RAID10 on 10 Intel 1.9TB SSDs with a 2GB RAID cache. The topic size is 290GB;
it is nearly fully in cache. Broker CPU usage is near 2-3%.
The CPU is 2 x Intel(R) Xeon(R) Gold 6244 CPU @ 3.60GHz.
I didn't find any bottlenecks via JMX graphs.
On 2020/04/16 15:59:23, Seva Feldman wrote:
> Hi,
>
> Can you
Hi,
Can you look at the disk IO while you are approaching the end of the topic and
your performance rises? The end of the topic may reside in the FS cache in memory,
and that may explain it.
BR
On Thu, Apr 16, 2020 at 6:56 PM Ilya R wrote:
> Hello, everyone
>
> I'm researching a problem: when reading a topic fr
Hello, everyone
I'm researching a problem: when reading a topic from the beginning, the max speed per
partition/per consumer is not more than 80MB/s. But near the end of the topic the
speed rises up to 250-300MB/s unexpectedly, not only on the last log segment but on
several of the latest log segments. I've already chang
Hi,
Can you please clarify the queries below.
1. Can this be considered a stable version, and is it advisable to upgrade
to 2.5.0 from version 2.1, since it was just released?
2. Are there any breaking changes, or changes to existing configuration
needed?
And any basic things to keep in mind before u
Hi,
What should be the retention period for __consumer_offsets topic? Should it
be the same as other Kafka topics?
Thanks,
Nitin
Thank you, Nicolas!
Bruno
On Thu, Apr 16, 2020 at 2:24 PM Nicolas Carlot
wrote:
>
> I've opened a Jira issue on the subject
> https://issues.apache.org/jira/browse/KAFKA-9880
>
>
> On Thu, Apr 16, 2020 at 13:14, Bruno Cadonna wrote:
>
> > Hi Nicolas,
> >
> > Yeah, I meant "doesn't". Sorry for
I've opened a Jira issue on the subject
https://issues.apache.org/jira/browse/KAFKA-9880
On Thu, Apr 16, 2020 at 13:14, Bruno Cadonna wrote:
> Hi Nicolas,
>
> Yeah, I meant "doesn't". Sorry for that!
>
> Best,
> Bruno
>
> On Thu, Apr 16, 2020 at 1:02 PM Nicolas Carlot
> wrote:
> >
> > Hi Bru
After broker.id=2 restarts, I run the following sh:
ghy@ghy-VirtualBox:~/T/k/bin$ ./kafka-topics.sh --describe --bootstrap-server
localhost:9083 --topic my-replicated-topic
Topic: my-replicated-topic PartitionCount: 3 ReplicationFactor: 3 Configs: segment.bytes=1073741824
Topic:
Thank you!!
It works now!
Because:
./kafka-reassign-partitions.sh --zookeeper localhost:2181/cluster_1
--reassignment-json-file AddNewNode_Reassign-Partitions.json --execute
: OK
./kafka-reassign-partitions.sh --zookeeper localhost:2181
--reassignment-json-file
AddNewNode_Reass
Hi there,
I'm not sure if I understand your question correctly, but that error will
be shown if your AddNewNode_Reassign-Partitions.json file contains
partitions that don't exist on the cluster.
In your case, I suspect that "my-replicated-topic" simply doesn't exist, if
you create it with at least
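For reference, a sketch of the expected shape of the reassignment JSON file;
the topic, partitions, and broker ids below are placeholders and must match
what actually exists on the cluster:

{
  "version": 1,
  "partitions": [
    { "topic": "my-replicated-topic", "partition": 0, "replicas": [1, 2, 3] },
    { "topic": "my-replicated-topic", "partition": 1, "replicas": [2, 3, 1] },
    { "topic": "my-replicated-topic", "partition": 2, "replicas": [3, 1, 2] }
  ]
}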
ghy@ghy-VirtualBox:~/T/k/bin$ ./kafka-reassign-partitions.sh --zookeeper
localhost:2181 --reassignment-json-file AddNewNode_Reassign-Partitions.json
--execute
Partitions reassignment failed due to The proposed assignment contains
non-existent partitions: ListBuffer(my-replicated-topic-0,
my-rep
Hello there
I have two doubts that I am not exactly sure about. Can anyone please help
me?
1. A producer client sends a ProduceRequest to the cluster, but crashes
right after the leader replica has sent out the ProduceResponse; then restarting
the producer and resending the message would l
Hi Nicolas,
Yeah, I meant "doesn't". Sorry for that!
Best,
Bruno
On Thu, Apr 16, 2020 at 1:02 PM Nicolas Carlot
wrote:
>
> Hi Bruno,
>
> "As far as I understand, the issue is that bulk loading as done in Kafka
> Streams does work as expected if FIFO compaction is used."
>
> You meant "doesn't"
Hi Bruno,
"As far as I understand, the issue is that bulk loading as done in Kafka
Streams does work as expected if FIFO compaction is used."
You meant "doesn't", right?
Ok, I will open a ticket, but I don't think my "fix" is the correct one.
Just ignoring the issue doesn't seem to be a correct
Hi Nicolas,
Thank you for reporting this issue.
As far as I understand, the issue is that bulk loading as done in Kafka
Streams does work as expected if FIFO compaction is used.
I would propose that you open a bug ticket. Please make sure to include
steps to reproduce the issue in the ticket. Ad
# Bump #
So is this a bug? Should I file a ticket?
Any idea? I don't like the idea of having to patch Kafka libraries...
On Wed, Apr 1, 2020 at 16:33, Nicolas Carlot wrote:
> Added some nasty code in kafka 2.4.1. Seems to work fine for now... From
> my understanding, the compaction process
Hi, I am using Ambari to manage Kafka; info is listed below:
Ambari version: 2.7.4.0
Kafka version: 2.0.0
The problem I ran into is that one broker restarted without a shutdown log,
which makes it difficult to track down the reason. The related logs are
as follows, in which I cannot find "shut down
I am using Ambari to manage Kafka; info is listed below:
Ambari version: 2.7.4.0
Kafka version: 2.0.0
broker number: 10
On every broker, the authorizer logger keeps outputting the following logs:
[2020-04-14 07:56:40,214] INFO Principal = User:xxx is Denied Operation =
Describe from host = 10.90.1.213 on
Well, obviously the API changed a lot between 0.10 and 2.4; client code
developed for one version surely is incompatible with the other one...
On Thu, Apr 16, 2020 at 09:20, Gowtham S wrote:
> Hi Nicolas,
> Sorry for the confusion made, there is no communication between clients.
> We are in ne
Does the Kafka Java client not support the --unavailable-partitions and
--topics-with-overrides params?
Hi Nicolas,
Sorry for the confusion; there is no communication between clients. We
need to use two Kafka client wrappers provided by two external teams.
Team A has its own Kafka cluster with its own client to produce/consume data,
and team B also has their own cluster and clients.
One