des, but not (yet) into the other direction.
>
> (`Joined` is also used for stream-stream join and `otherValueSerde` is
> used there -- for stream-table join, `otherValueSerde` is ignored atm.)
>
> Thus, after a mapValues() that changes the value type, we don't have a
> serde at
he Processor API
> can help me.
> But since the stream is really complex, I would prefer a confirmation
> before I start spending time on this.
>
> Can anyone help me with this issue, please?
>
> Thanks
--
Richard Rossel
Atlanta - GA
new topic into a new Stream, and using that for
applying the left join.
That trick worked, but I'm pretty sure there is a more elegant way to do this.
Thanks folks, any help is really appreciated.
--
Richard Rossel
Atlanta - GA
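
Not from the original thread, but for context: the serdes for a join can usually be handed to the join itself via Joined.with(...) instead of round-tripping through a new topic. A minimal sketch, assuming hypothetical topics "events", "users" and "output" with simple String/Integer/Long serdes (the real topology will differ):

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Joined;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Produced;

public class JoinWithExplicitSerdes {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();

        // hypothetical input topics and types
        KStream<String, String> events =
                builder.stream("events", Consumed.with(Serdes.String(), Serdes.String()));
        KTable<String, Long> users =
                builder.table("users", Consumed.with(Serdes.String(), Serdes.Long()));

        // mapValues() changes the value type, so the default value serde no longer fits
        KStream<String, Integer> lengths = events.mapValues(String::length);

        // supply the serdes to the join explicitly instead of writing to a new topic first
        KStream<String, String> joined = lengths.leftJoin(
                users,
                (len, count) -> len + ":" + count,
                Joined.with(Serdes.String(), Serdes.Integer(), Serdes.Long()));

        joined.to("output", Produced.with(Serdes.String(), Serdes.String()));
    }
}

As noted earlier in the thread, the third (otherValueSerde) argument only matters for stream-stream joins; for the stream-table case the table's own serde applies.
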
t that using auto.offset.reset = none would avoid any offset
reset, so why doesn't it work?
Setting:
16 brokers, all running 2.3.0, replication factor 3 on all topics.
Some of the Consumer config:
auto.commit.interval.ms = 5000
auto.offset.reset = none
enable.auto.commit = false
connections.max.idle.m
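
For completeness, a minimal consumer sketch of what auto.offset.reset = none means in practice: when no committed offset exists for an assigned partition, poll() throws NoOffsetForPartitionException instead of silently resetting. Bootstrap server, group id and topic below are placeholders:

import java.time.Duration;
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.NoOffsetForPartitionException;
import org.apache.kafka.common.serialization.StringDeserializer;

public class NoResetConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "my-group");                // placeholder
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "none");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("my-topic")); // placeholder topic
            try {
                consumer.poll(Duration.ofSeconds(1));
            } catch (NoOffsetForPartitionException e) {
                // with "none" the consumer never resets on its own; the application
                // has to decide where to start (e.g. seek explicitly) or fail loudly
                System.err.println("No committed offset for " + e.partitions());
            }
        }
    }
}
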
sco-2019/whats-the-time-and-why/
>
>
> -Matthias
>
> On 7/10/20 7:00 AM, Richard Rossel wrote:
> > Thanks Matthias, it makes sense, now I need to find out why the
> > topic is not sorted by timestamp.
> >
> > The topic I'm loading as a GlobalKTable is p
content semantically goes (partially)
> back in time, which is usually undesired -- your table content should
> usually go forward in time only.
>
> Does this help?
>
>
> -Matthias
>
>
>
> On 7/8/20 10:54 AM, Richard Rossel wrote:
the same key
type (username),
and the entityTopic is receiving entries from another process very
frequently.
Do you think those warning messages are because of the way I'm creating the
KTable?
Thanks.-
--
Richard Rossel
Atlanta - GA
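
As a side note on the "going back in time" discussion above: Kafka Streams decides a record's time through a TimestampExtractor, so out-of-order input timestamps can be handled there if that fits the data. A rough sketch of a custom extractor that never lets time move backwards (whether clamping like this is acceptable depends entirely on the use case):

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.streams.processor.TimestampExtractor;

public class MonotonicTimestampExtractor implements TimestampExtractor {
    @Override
    public long extract(ConsumerRecord<Object, Object> record, long partitionTime) {
        // partitionTime is the highest timestamp observed so far on this partition;
        // clamp so that an out-of-order record cannot move the table back in time
        return Math.max(record.timestamp(), partitionTime);
    }
}

It can be registered globally via default.timestamp.extractor, or per input topic through Consumed.with(...).withTimestampExtractor(...).
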
.
So my two questions are:
a) What strategy does Kafka use to decide whether a log segment needs to
be deleted? Does it compare the segment's max CreateTime against the
retention limits?
b) How can I delete that whole log segment file, of course without
messing up the system?
Thanks.-
--
Richard Rossel
Atlanta - GA
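
On (a): as far as I understand, time-based retention is checked against the largest record timestamp in a segment (falling back to the file's modification time when records carry no timestamps), not the segment's creation time. For (b), one common, if blunt, approach is to lower retention.ms on the topic temporarily and let the broker delete the old segment itself. A sketch using the AdminClient, with a placeholder bootstrap server and example values:

import java.util.List;
import java.util.Map;
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.AlterConfigOp;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

public class LowerRetentionTemporarily {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder

        try (AdminClient admin = AdminClient.create(props)) {
            // topic name and retention value are examples only
            ConfigResource topic = new ConfigResource(ConfigResource.Type.TOPIC, "access_logs");
            AlterConfigOp lowerRetention = new AlterConfigOp(
                    new ConfigEntry("retention.ms", "3600000"), // 1 hour
                    AlterConfigOp.OpType.SET);

            // old segments become eligible for deletion at the next retention check;
            // afterwards restore retention.ms with another SET (or DELETE for the default)
            admin.incrementalAlterConfigs(Map.of(topic, List.of(lowerRetention))).all().get();
        }
    }
}
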
e": 385505944,
> "offsetLag": 0,
> "isFuture": false
> }
> [gpaggi@kafkalog001 ~]$ du -bc /var/lib/kafka/data/access_logs-1/*.log
> 367401290 /var/lib/kafka/data/access_logs-1/000245067429.log
> 18104654 /var/lib/kafka/data/access_l
thing: do I add the path and name of the CSV file that I want to write
> the data to, to the command you've written? Please advise.
> Thanks, Doaa.
>
> Sent from Yahoo Mail on Android
>
> On Tue, Feb 25, 2020 at 6:20 PM, Richard Rossel wrote:
> you can us
you try browsing the beans with jmxterm before configuring the exporter?
>
> Gabriele
>
> On Mon, 24 Feb 2020 at 23:01, Richard Rossel wrote:
> >
> > Hi Gabriele,
> > I'm using Kafka 5.3.1, which is Apache Kafka 2.3, and I'm using JMX to
> > retrieve m
Hello,
> I'm new to Kafka and I'd like to write data from Kafka to a CSV file on a
> Mac. Please advise.
> Thank You & Kindest Regards, Doaa.
--
Richard Rossel
Atlanta - GA
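
Not part of the original replies, but a small, self-contained sketch of one way to dump a topic to a CSV file with a plain Java consumer (bootstrap server, group id, topic and output path are placeholders, and the CSV writing does no quoting or escaping):

import java.io.PrintWriter;
import java.time.Duration;
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class TopicToCsv {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "csv-export");              // placeholder
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
             PrintWriter out = new PrintWriter("export.csv")) {               // placeholder path
            consumer.subscribe(List.of("my-topic"));                          // placeholder topic
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
                if (records.isEmpty()) {
                    break; // crude "caught up" check, good enough for a one-off export
                }
                for (ConsumerRecord<String, String> r : records) {
                    out.println(r.key() + "," + r.value()); // naive CSV line per record
                }
            }
        }
    }
}
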
topic.__consumer_offsets.Size.value
> 1315862 1582578155
>
> Gabriele
>
> On Mon, 24 Feb 2020 at 19:54, Richard Rossel wrote:
> >
> > Hi List,
> > I'm trying to find a way to keep track of the size of topics (or
> > partitions) across brokers, but
how do you keep monitoring this type of
metric?
Thanks.-
--
Richard Rossel
Atlanta - GA
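
For what it's worth, the per-partition size figure the exporter reports comes from the broker's kafka.log Size gauges, so it can also be read directly over JMX. A rough sketch (the JMX URL is a placeholder, and the MBean pattern is inferred from the metric name quoted above):

import java.util.Set;

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class LogSizeViaJmx {
    public static void main(String[] args) throws Exception {
        // placeholder JMX endpoint; brokers expose it when started with JMX_PORT set
        JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://broker1:9999/jmxrmi");

        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();

            // one gauge per topic-partition hosted on this broker
            Set<ObjectName> beans = mbs.queryNames(
                    new ObjectName("kafka.log:type=Log,name=Size,topic=*,partition=*"), null);

            for (ObjectName bean : beans) {
                Object bytes = mbs.getAttribute(bean, "Value");
                System.out.println(bean.getKeyProperty("topic") + "-"
                        + bean.getKeyProperty("partition") + ": " + bytes + " bytes");
            }
        }
    }
}
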