You can use `ByteArrayDeserializer` to get the `byte[]` key and value from
the consumer and deserialize both "manually" using an `AvroDeserializer` that
is created with `MockSchemaRegistry`. `ConsumerRecord` also gives you the
topic name for each record via `#topic()`.
-Matthias
On 10/29/18 4:03 PM, chinchu ch
Thanks Matthias. How do I deal with this scenario in the case of a
consumer that expects a specific record type? I am trying to write an
integration test for the scenario below. All of these use
an Avro object as the value for the producer record. I am kind of able to
write this
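A sketch of the manual-deserialization approach described above, assuming Confluent's `KafkaAvroDeserializer` and `MockSchemaRegistryClient` classes; the mock client passed in must be the same instance the test's serializer used, so that schema IDs resolve:

```java
import io.confluent.kafka.schemaregistry.client.SchemaRegistryClient;
import io.confluent.kafka.serializers.KafkaAvroDeserializer;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;

public class ManualAvroDecode {

    // The consumer itself is configured with ByteArrayDeserializer for both
    // key and value, so every record arrives as raw bytes.
    static void decode(SchemaRegistryClient schemaRegistry,
                       ConsumerRecords<byte[], byte[]> records) {
        KafkaAvroDeserializer avroDeserializer = new KafkaAvroDeserializer(schemaRegistry);
        for (ConsumerRecord<byte[], byte[]> record : records) {
            // record.topic() supplies the topic name the deserializer needs.
            Object key = avroDeserializer.deserialize(record.topic(), record.key());
            Object value = avroDeserializer.deserialize(record.topic(), record.value());
            System.out.println(record.topic() + ": " + key + " -> " + value);
        }
    }
}
```

Because the deserializer returns `Object` (typically an Avro `GenericRecord`), a cast to the expected specific record type can follow the `deserialize` call in the test.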
That is quite expensive to do...
Might be best to write a short Java program that uses
`Consumer#endOffsets()` and `Consumer#beginningOffsets()`.
-Matthias
On 10/29/18 3:02 PM, Burton Williams wrote:
> you can use kafkacat starting at that offset to head and pipe the output
> to "wc -l" (word count)
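A short sketch of the Java-program approach, with placeholder bootstrap server and topic name; per partition the count is simply `endOffset - beginningOffset` (substitute your own starting offset to count from a particular position):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class MessageCount {

    // Pure helper: total messages between two per-partition offset maps.
    public static long countBetween(Map<Integer, Long> begin, Map<Integer, Long> end) {
        long total = 0;
        for (Map.Entry<Integer, Long> e : end.entrySet()) {
            total += e.getValue() - begin.getOrDefault(e.getKey(), 0L);
        }
        return total;
    }

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
            List<TopicPartition> parts = new ArrayList<>();
            consumer.partitionsFor("my-topic") // placeholder topic
                    .forEach(p -> parts.add(new TopicPartition(p.topic(), p.partition())));
            Map<TopicPartition, Long> begin = consumer.beginningOffsets(parts);
            Map<TopicPartition, Long> end = consumer.endOffsets(parts);
            long total = 0;
            for (TopicPartition tp : parts) {
                total += end.get(tp) - begin.get(tp);
            }
            System.out.println("message count: " + total);
        }
    }
}
```

Neither call consumes any records, so this is much cheaper than reading the topic end to end.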
You set:
> senderProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,avroSerializer.getClass());
This will tell the producer to create a new AvroSerializer object, and
this object expects "schema.registry.url" to be set during
initialization, i.e., you need to add the config to `senderProps`.
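The two usual fixes, sketched with placeholder values: either let the producer instantiate and configure the serializer itself (option 1), or hand it the already-configured instance so it never builds its own copy (option 2):

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.ProducerConfig;

public class ProducerSetup {

    static Properties senderProps() {
        Properties senderProps = new Properties();
        // Option 1: pass the class AND the config the serializer reads
        // during its configure() call.
        senderProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                io.confluent.kafka.serializers.KafkaAvroSerializer.class);
        senderProps.put("schema.registry.url", "http://localhost:8081"); // placeholder
        return senderProps;
        // Option 2: omit the VALUE_SERIALIZER_CLASS_CONFIG entry and pass the
        // pre-configured instance to the constructor instead:
        //   new KafkaProducer<>(senderProps, keySerializer, avroSerializer);
    }
}
```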
H
Make sure to call `KafkaStreams#close()` to get the latest offsets
committed.
Besides this, you can check the consumer and Streams logs in DEBUG mode
to see what offset is picked up (or not).
-Matthias
On 10/29/18 11:43 AM, Patrik Kleindl wrote:
> Hi
> How long does your application run? More t
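One common way to guarantee the `close()` call (and the final offset commit) even when the JVM is stopped externally is a shutdown hook; a minimal sketch, with the topology build omitted:

```java
import java.util.Properties;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;

public class StreamsShutdown {

    static void run(Properties config, StreamsBuilder builder) {
        KafkaStreams streams = new KafkaStreams(builder.build(), config);
        // close() flushes state and commits the latest offsets before exit.
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
        streams.start();
    }
}
```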
you can use kafkacat starting at that offset to head and pipe the output
to "wc -l" (word count).
-BW
On Mon, Oct 29, 2018 at 3:39 AM Sachit Murarka
wrote:
> Hi All,
>
> Could you please help me in getting count of all messages stored in kafka
> from a particular offset?
> I have tried GetOffs
Hey folks,
I am getting the below exception when using a MockSchemaRegistry in a
JUnit test. I appreciate your help. I am using Confluent 4.0.
private SchemaRegistryClient schemaRegistry;
private KafkaAvroSerializer avroSerializer;
Properties defaultConfig = new Properties();
defaultConfi
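For context, a typical way such a MockSchemaRegistry serializer setup is completed (a sketch, not the poster's actual code; the URL is a dummy value because the mock client performs no network calls, but the serializer still requires the property to be present):

```java
import java.util.HashMap;
import java.util.Map;
import io.confluent.kafka.schemaregistry.client.MockSchemaRegistryClient;
import io.confluent.kafka.schemaregistry.client.SchemaRegistryClient;
import io.confluent.kafka.serializers.AbstractKafkaAvroSerDeConfig;
import io.confluent.kafka.serializers.KafkaAvroSerializer;

public class MockRegistrySetup {

    static KafkaAvroSerializer buildSerializer() {
        SchemaRegistryClient schemaRegistry = new MockSchemaRegistryClient();
        Map<String, Object> defaultConfig = new HashMap<>();
        // Dummy URL: the mock client never contacts a real registry.
        defaultConfig.put(AbstractKafkaAvroSerDeConfig.SCHEMA_REGISTRY_URL_CONFIG,
                "http://fake-url");
        defaultConfig.put(AbstractKafkaAvroSerDeConfig.AUTO_REGISTER_SCHEMAS, true);
        // Constructing the serializer with the mock client directly avoids
        // the producer trying to instantiate and configure its own copy.
        return new KafkaAvroSerializer(schemaRegistry, defaultConfig);
    }
}
```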
Hi
How long does your application run? More than the 60 seconds you set for commit
interval?
Have a look at
https://sematext.com/opensee/m/Kafka/uyzND1SdDRMgROjn?subj=Re+Kafka+Streams+why+aren+t+offsets+being+committed+
and check if your offsets are really committed
Best regards
Patrik
> On 29.
Hi
No, my application id doesn't change
Mon, Oct 29, 2018 at 19:11, Patrik Kleindl :
> Hi
> Does your applicationId change?
> Best regards
> Patrik
>
> > On 29.10.2018 at 13:28, Pavel Koroliov wrote:
> >
> > Hi everyone! I use kafka-streams, and I have a problem when I use
> > windowedBy. Ever
Hi
Does your applicationId change?
Best regards
Patrik
> On 29.10.2018 at 13:28, Pavel Koroliov wrote:
>
> Hi everyone! I use kafka-streams, and I have a problem when I use
> windowedBy. Everything works well until I restart the application. After
> restarting my aggregation starts from beginn
Hi all!
I was trying to deploy a little Kafka cluster over Kubernetes (k8s), and
noticed this wonderful helm charts from here
https://github.com/confluentinc/cp-helm-charts
The Kafka chart included there will try to expose the brokers so they can
be accessible not only from within the k8s cluster,
Hi everyone! I use kafka-streams, and I have a problem when I use
windowedBy. Everything works well until I restart the application. After
restarting my aggregation starts from the beginning.
Code below:
>
> StreamsBuilder builder = new StreamsBuilder();
> KStream stream = builder.stream(topic,
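For reference, one way such a windowed aggregation is typically written (topic, window size, and store name below are placeholders, not the poster's values); the aggregate lives in a local state store backed by an internal changelog topic, so it survives restarts only while `application.id` and the state directory stay the same:

```java
import java.time.Duration;
import org.apache.kafka.common.utils.Bytes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.kstream.TimeWindows;
import org.apache.kafka.streams.kstream.Windowed;
import org.apache.kafka.streams.state.WindowStore;

public class WindowedCountExample {

    static void build(StreamsBuilder builder) {
        KStream<String, String> stream = builder.stream("my-topic"); // placeholder
        // Count events per key in 5-minute tumbling windows, materialized
        // into a named, persistent state store.
        KTable<Windowed<String>, Long> counts = stream
                .groupByKey()
                .windowedBy(TimeWindows.of(Duration.ofMinutes(5))) // placeholder window
                .count(Materialized.<String, Long, WindowStore<Bytes, byte[]>>as(
                        "windowed-counts"));
    }
}
```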
Hi All,
Recently I faced the Below issue
WARN Fetcher:679 - Received unknown topic or partition error in ListOffset
request for partition messages-2-1. The topic/partition may not exist or
the user may not have Describe access to it
My consumer is not working for the particular topic. When I ch
Hi, everyone,
As titled, I'm dealing with the problem "How to create a Kafka producer with
the .NET Framework in a Kerberos-enabled Kafka cluster?"
And I need some help. Does anyone have similar experience and could kindly
provide some suggestions?
Best,
Yvonne Chang 張雅涵
Hi John and Matthias
thanks for the questions, maybe explaining our use case helps a bit:
We are receiving CDC records (row-level insert/update/delete) in one topic
per table. The key is derived from the DB records, the value is null in
case of deletes. Those would be the immutable facts I guess.
T
Hi All,
Could you please help me in getting count of all messages stored in kafka
from a particular offset?
I have tried the GetOffsetShell command, but it is not giving me the count.
Kind Regards,
Sachit Murarka
Hi,
We recently saw a split-brain behavior happened in production.
There were a controller switch, and then unclean leader elections.
It led to 24 partitions (out of 70) having 2 leaders for 2 days until I
restarted the broker. Producers and consumers were talking to different
"leaders" so the recor