I have a topic with messages of the form:
{'file_name' : filename,
'line_number' : src_line_number,
'section' : vTag,
'line_data': line_data
}
I want to unpack line_data into multiple columns based on position.
Can I do this via stream processing? The output will go onto a new
topic/stream.
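A minimal Kafka Streams sketch of that idea, assuming the value can be read as a plain string and the column boundaries are fixed offsets (the topic names, offsets, and delimiter below are made up for illustration; in practice you would first deserialize the JSON envelope and pull out line_data):

import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Produced;

public class LineDataSplitter {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "line-data-splitter");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> lines =
                builder.stream("raw-lines", Consumed.with(Serdes.String(), Serdes.String()));

        // Cut the fixed-width line_data into columns by position
        // (the offsets 0-10, 10-25, 25.. are hypothetical).
        KStream<String, String> columns = lines.mapValues(lineData -> {
            String col1 = lineData.substring(0, 10).trim();
            String col2 = lineData.substring(10, 25).trim();
            String col3 = lineData.substring(25).trim();
            // Re-emit as a delimited record, or build JSON/Avro here instead.
            return String.join("|", col1, col2, col3);
        });

        columns.to("split-lines", Produced.with(Serdes.String(), Serdes.String()));
        new KafkaStreams(builder.build(), props).start();
    }
}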
Thanks, Daniyar, for replying.
Does Kafka Streams have any APIs to do the partitioning and grouping that you
are suggesting?
Also, if I have to merge everything into a single partition, what would be
the most efficient way to do this?
On Fri, Jan 17, 2020 at 6:03 AM Daniyar Kulakhmetov
wrote:
> Since yo
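For reference, a rough sketch of what those two operations look like in the Kafka Streams DSL: grouping goes through groupByKey()/groupBy() (Streams creates the repartition topics for you), and funnelling everything into one partition can be done either by giving the output topic a single partition or by pinning the partition with a StreamPartitioner. Topic names and the per-key count are only illustrative:

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Grouped;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Produced;

public class GroupAndFunnel {
    static Topology build() {
        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> lines =
                builder.stream("parsed-lines", Consumed.with(Serdes.String(), Serdes.String()));

        // Grouping: groupByKey()/groupBy() trigger a repartition when needed,
        // e.g. counting records per key (per file_name).
        lines.groupByKey(Grouped.with(Serdes.String(), Serdes.String()))
             .count()
             .toStream()
             .to("counts-per-file", Produced.with(Serdes.String(), Serdes.Long()));

        // Funnelling everything into one partition: either give the output
        // topic a single partition, or pin the partition explicitly.
        lines.to("single-partition-topic",
                 Produced.with(Serdes.String(), Serdes.String())
                         .withStreamPartitioner((topic, key, value, numPartitions) -> 0));
        return builder.build();
    }
}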
Ryanne, thanks for the reply, that helps me a lot.
At 2020-01-16 22:27:22, "Ryanne Dolan" wrote:
In this case the consumers just subscribe to "topic1" like normal, and the
remote topics (primary.topic1, secondary.topic1) are just for DR. MM2 is not
required for things to work under normal circumstances.
Since you are not going to merge everything into one partition, you don't need
to sort all messages across all partitions (because messages are sorted
only within a partition).
I'd suggest splitting the X partitions into Y groups and then merging the
source partitions within each group into their destination partition.
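A rough sketch of one such per-group merge with the plain Java clients, assuming the sort timestamp is the record timestamp, the source partitions receive no further writes, and one merger process handles one destination partition (topic names and the partition assignment are illustrative):

import java.time.Duration;
import java.util.ArrayDeque;
import java.util.Arrays;
import java.util.Comparator;
import java.util.Deque;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import java.util.Set;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;

public class PartitionGroupMerger {
    public static void main(String[] args) {
        // This instance merges source partitions 0..2 into destination partition 0.
        List<TopicPartition> group = Arrays.asList(
                new TopicPartition("source-topic", 0),
                new TopicPartition("source-topic", 1),
                new TopicPartition("source-topic", 2));
        int destPartition = 0;

        Properties cp = new Properties();
        cp.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        cp.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        cp.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);

        Properties pp = new Properties();
        pp.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        pp.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        pp.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(cp);
             KafkaProducer<String, String> producer = new KafkaProducer<>(pp)) {
            consumer.assign(group);
            consumer.seekToBeginning(group);
            Map<TopicPartition, Long> endOffsets = consumer.endOffsets(group);

            // One buffer per source partition; always emit the buffered record
            // with the smallest timestamp (a k-way merge). Each source partition
            // is already ordered by ts, so the destination partition will be too.
            Map<TopicPartition, Deque<ConsumerRecord<String, String>>> buffers = new HashMap<>();
            group.forEach(tp -> buffers.put(tp, new ArrayDeque<>()));
            Set<TopicPartition> live = new HashSet<>(group);

            while (!live.isEmpty()) {
                consumer.poll(Duration.ofSeconds(1)).forEach(r ->
                        buffers.get(new TopicPartition(r.topic(), r.partition())).addLast(r));

                // Only emit while every live partition has a buffered record,
                // otherwise the merge could emit out of order.
                while (live.stream().allMatch(tp -> !buffers.get(tp).isEmpty())) {
                    TopicPartition next = live.stream().min(Comparator.comparingLong(
                            (TopicPartition tp) -> buffers.get(tp).peekFirst().timestamp())).get();
                    ConsumerRecord<String, String> r = buffers.get(next).pollFirst();
                    producer.send(new ProducerRecord<>("sorted-topic", destPartition,
                            r.timestamp(), r.key(), r.value()));
                }

                // A source partition is done once it is read to its end offset and
                // its buffer is drained; drop it so it no longer gates the merge.
                live.removeIf(tp -> buffers.get(tp).isEmpty()
                        && consumer.position(tp) >= endOffsets.get(tp));
            }
            producer.flush();
        }
    }
}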
Does an acknowledgement in this scenario guarantee persistence of the message
in the logs of the ISRs, including the leader?
Sincerely,
Anindya
> On Jan 16, 2020, at 3:49 PM, M. Manna wrote:
>
> Anindya,
>
> On Wed, 15 Jan 2020 at 21:59, Anindya Haldar
> wrote:
>
>> Okay, let’s say
>>
>> -
Anindya,
On Wed, 15 Jan 2020 at 21:59, Anindya Haldar
wrote:
> Okay, let’s say
>
> - the application is using a non-transactional producer, shared across
> multiple threads
> - the linger.ms and buffer.memory are non-zero, and so is batch.size, such
> that messages are actually batched
> - the repl
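For reference, that setup corresponds roughly to a producer built along these lines (the values are only illustrative; KafkaProducer is thread-safe, so one non-transactional instance can be shared across threads):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;

public class BatchingProducer {
    static KafkaProducer<String, String> create() {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.LINGER_MS_CONFIG, 50);                    // wait up to 50 ms to fill a batch
        props.put(ProducerConfig.BATCH_SIZE_CONFIG, 64 * 1024);            // 64 KiB per-partition batches
        props.put(ProducerConfig.BUFFER_MEMORY_CONFIG, 64L * 1024 * 1024); // 64 MiB total send buffer
        // Non-transactional; KafkaProducer is thread-safe, so one instance
        // can be shared across all producing threads.
        return new KafkaProducer<>(props);
    }
}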
Congratulations everyone!
On Tue, Jan 14, 2020 at 9:30 AM Gwen Shapira wrote:
> Hi everyone,
>
> I'm happy to announce that Colin McCabe, Vahid Hashemian and Manikumar
> Reddy are now members of Apache Kafka PMC.
>
> Colin and Manikumar became committers in Sept 2018 and Vahid in Jan
> 2019. The
Any take on this very specific question?
Sincerely,
Anindya Haldar
Oracle Responsys
> On Jan 15, 2020, at 1:59 PM, Anindya Haldar wrote:
>
> Okay, let’s say
>
> - the application is using a non-transactional producer, shared across
> multiple threads
> - the linger.ms and buffer.memory is no
Just to add: while this operation is going on, no new data will be added
to the original Kafka topic. I am trying to avoid buffering all the data in a
temporary datastore to sort it.
On Thu, 16 Jan 2020, 23:14 Debraj Manna, wrote:
> Hi
>
> I have a Kafka topic with X partitions. Each message has a timestamp, ts.
Hi
I have a Kafka topic with X partitions. Each message has a timestamp, ts.
Can someone suggest a way of sorting all the messages (based on ts)
across all partitions and putting them in a new topic with Y partitions (Y <
X) using the Kafka Java client?
Thanks
Thank you very much, everyone, for the answers!
-Original Message-
From: Ryanne Dolan
Sent: Thursday, January 16, 2020 6:20 PM
To: Kafka Users
Subject: Re: Kafka Broker leader change without effect
That's right, thanks for the correction. I don't suppose the producer is
configured with acks=all in this case.
That's right, thanks for the correction. I don't suppose the producer is
configured with acks=all in this case.
Ryanne
On Thu, Jan 16, 2020, 11:05 AM JOHN, BIBIN wrote:
> Producer request will not fail. Producer will fail based on acks and
> min.insync.replicas config parameters.
>
>
> -Ori
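The guarantee asked about earlier in this thread hinges on exactly those two settings: with acks=all on the producer and min.insync.replicas set on the topic or broker, an acknowledged write has been replicated to the in-sync replicas, including the leader. A rough client-side sketch (the bootstrap server and serializers are illustrative; min.insync.replicas itself is configured server-side):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;

public class DurableProducer {
    static KafkaProducer<String, String> create() {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        // acks=all: the leader acknowledges only after every current in-sync
        // replica has the record. With min.insync.replicas=2 on the topic or
        // broker, the write is also rejected if fewer than two replicas are
        // in sync, so an acknowledged record is on at least two logs.
        props.put(ProducerConfig.ACKS_CONFIG, "all");
        return new KafkaProducer<>(props);
    }
}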
offsets.topic.replication.factor is set to 1, so the consumer is likely
failing because some of the __consumer_offsets topic partitions are offline.
On Thu, Jan 16, 2020 at 9:05 AM JOHN, BIBIN wrote:
> Producer request will not fail. Producer will fail based on acks and
> min.insync.replicas config parameters.
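One way to check that from code (kafka-topics --describe shows the same information) is to describe __consumer_offsets with the AdminClient and look for partitions that currently have no leader. A rough sketch, with an assumed bootstrap address:

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.TopicDescription;

public class OffsetsTopicCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            TopicDescription desc = admin
                    .describeTopics(Collections.singleton("__consumer_offsets"))
                    .all().get()
                    .get("__consumer_offsets");
            // A partition reported without a leader is offline; consumer group
            // operations that hash to it will fail.
            desc.partitions().forEach(p ->
                    System.out.printf("partition %d leader %s isr %s%n",
                            p.partition(), p.leader(), p.isr()));
        }
    }
}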
The producer request will not fail by default; whether the producer fails
depends on the acks and min.insync.replicas config parameters.
-Original Message-
From: Ryanne Dolan
Sent: Thursday, January 16, 2020 10:52 AM
To: Kafka Users
Subject: Re: Kafka Broker leader change without effect
Marco, the replication f
Marco, the replication factor of 3 is not possible when you only have two
brokers, thus the producer will fail to send records until the third broker
is restored. You would need to change the topic replication factor to 2 for
your experiment to work as you expect.
Ryanne
On Thu, Jan 16, 2020, 9:5
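If recreating the test topic is acceptable, a rough AdminClient sketch for a replication factor that two brokers can satisfy follows (the topic name is illustrative; changing the replication factor of an existing topic instead requires a partition reassignment):

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateTopicRf2 {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // 3 partitions, replication factor 2: satisfiable with two brokers,
            // so writes can still succeed while the third broker is down
            // (subject to acks and min.insync.replicas).
            NewTopic topic = new NewTopic("test-topic", 3, (short) 2);
            admin.createTopics(Collections.singleton(topic)).all().get();
        }
    }
}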
A similar issue has been raised at Confluent too:
https://github.com/confluentinc/cp-docker-images/issues/638
I am however not using Docker, but it looks like the same issue, where Kafka
Connect throws a lot of these warning messages before starting up.
Any idea how to avoid all these warnings?
Hello guys!
I have a problem I wrote about on Stack Overflow here:
https://stackoverflow.com/questions/59772124/kafka-broker-leader-change-without-effect
Can you help me?
Thank you
Marco
Hey,
I'm running MM2, which tried to process its backlog of 1 week just after it
was started.
I see these in the logs:
[2020-01-16 13:07:30,985] ERROR
WorkerSourceTask{id=MirrorSourceConnector-0} Failed to flush, timed out
while waiting for producer to flush outstanding 4112 messages
(org.apache.kafka
MM2 nodes only communicate via Kafka -- no connection is required between
them.
To reconfigure, a rolling restart probably won't do what you expect, since
the configuration is always dictated by a single leader. Once the leader is
bounced, it will broadcast the new configuration via Kafka. If you
SQ... makes 2 of us...
I have a bucketload of app servers that have log4j, Apache httpd.log, and
what we call spolog (it's our own custom text log files) on them. We can't
use NFS, so I'm hoping to deploy just the connector in standalone mode on all
these log sources.
Has anyone done this who can advise?
In this case the consumers just subscribe to "topic1" like normal, and the
remote topics (primary.topic1, secondary.topic1) are just for DR. MM2 is
not required for things to work under normal circumstances, but if one
cluster goes down you can recover its data from the other.
Ryanne
On Thu, Jan
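A small sketch of that consumption pattern (cluster aliases and topic names follow the example above; the commented-out pattern subscription is only one option for also reading the replicated copy after a failover):

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class DrConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "primary-cluster:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "my-app");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // Normal operation: just read the local topic.
            consumer.subscribe(Collections.singleton("topic1"));

            // After failing over to the other cluster, the local topic plus the
            // replicated copy could be read together with a pattern subscription:
            // consumer.subscribe(java.util.regex.Pattern.compile(".*topic1"));

            while (true) {
                consumer.poll(Duration.ofSeconds(1))
                        .forEach(r -> System.out.println(r.value()));
            }
        }
    }
}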
Hi,
I run two instances of MM2 with the command connect-mirror-maker.sh.
Q1. Is there any requirement to cluster MM2? Like a network connection
between the nodes? How does MM2 coordinate the work between the nodes?
Q2. Assuming I run two instances and want to update the configuration,
should it work
Not sure what you mean by "SQ", but Kafka Connect workers can indeed be
deployed on their own, connecting to a remote Kafka cluster.
Standalone would make sense in that case, yes.
--
Robin Moffatt | Senior Developer Advocate | ro...@confluent.io | @rmoff
On Thu, 16 Jan 2020 at 03:42, George
Hi,
I am following the documentation under:
https://kafka.apache.org/documentation/#connect_user
https://docs.confluent.io/current/connect/userguide.html
For testing I am using standalone mode, and I am using kafka_2.12-2.3.1.
So I have defined:
plugin.path=/path/to/plugin/dir
In file: config/conn