One more thought you could consider: have two consumer groups, 1) a "db consumer"
that starts every hour for you, and 2) one for near real time. The 2nd should run
all the time and populate your "memory db", such as Redis, and the TTL could be
handled by Redis's expiry mechanism.
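To make the "TTL from Redis" part concrete, here is a stdlib-only sketch of the expiry behavior that suggestion relies on. In Redis itself this would be SETEX / EXPIRE; the `TTLCache` class and its method names are made up for illustration, not a real Redis client API:

```python
import time

class TTLCache:
    """Minimal in-memory stand-in for Redis SETEX/EXPIRE semantics:
    the near-real-time consumer writes each message with a TTL, and
    reads after the TTL has elapsed see nothing."""

    def __init__(self):
        self._store = {}  # key -> (value, expires_at)

    def setex(self, key, ttl_seconds, value):
        # Mirrors Redis SETEX: set the value and its time-to-live together.
        self._store[key] = (value, time.monotonic() + ttl_seconds)

    def get(self, key):
        item = self._store.get(key)
        if item is None:
            return None
        value, expires_at = item
        if time.monotonic() >= expires_at:
            del self._store[key]  # lazily drop expired entries
            return None
        return value

cache = TTLCache()
cache.setex("orders:1234", 3600, '{"status": "shipped"}')
```

With a real Redis client the same write would be a single `SETEX key 3600 value`, and Redis would evict the key for you.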
On Fri, May 28, 2021,
So I think you should write the partition and the offset to your db; while
initializing the real-time consumer you'd read from the database where to set the
consumer's starting point, kind of the "exactly once" programming approach,
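A minimal sketch of the "write the partition and the offset to your db" side, using SQLite for illustration (the table name and schema are assumptions, not from the thread). The hourly db consumer would call `save_offset` after each committed batch, and the real-time consumer would call `load_offsets` at startup:

```python
import sqlite3

def open_offset_store(path=":memory:"):
    # One row per (topic, partition); next_offset is the NEXT offset to read.
    conn = sqlite3.connect(path)
    conn.execute(
        """CREATE TABLE IF NOT EXISTS consumer_offsets (
               topic       TEXT    NOT NULL,
               partition   INTEGER NOT NULL,
               next_offset INTEGER NOT NULL,
               PRIMARY KEY (topic, partition)
           )"""
    )
    return conn

def save_offset(conn, topic, partition, next_offset):
    # Upsert: the db consumer records how far it got after each batch.
    conn.execute(
        """INSERT INTO consumer_offsets (topic, partition, next_offset)
           VALUES (?, ?, ?)
           ON CONFLICT(topic, partition)
           DO UPDATE SET next_offset = excluded.next_offset""",
        (topic, partition, next_offset),
    )
    conn.commit()

def load_offsets(conn, topic):
    # The real-time consumer reads these rows at startup to know where to seek.
    rows = conn.execute(
        "SELECT partition, next_offset FROM consumer_offsets WHERE topic = ?",
        (topic,),
    )
    return dict(rows.fetchall())
```

Writing the offsets in the same transaction as the consumed data is what gives this the "exactly once" flavor: the stored offset and the stored rows can never disagree.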
On Fri, May 28, 2021, 21:38, Ronald Fenner <
rfen...@gamecircus
That might work if my consumers were in the same process, but the db consumer is
a Python job running under Airflow and the realtime consumer would be running as
a backend service on another server.
Also, how would I seed the realtime consumer at startup if the consumer isn't
running, which would c
https://kafka.apache.org/0110/javadoc/org/apache/kafka/clients/consumer/KafkaConsumer.html#seek(org.apache.kafka.common.TopicPartition,%20long)
On Fri, May 28, 2021, 08:04, Ran Lupovich:
While your DB consumer is running you get the access to the partition
${partition} @ offset ${offset}
https://github.com/confluentinc/examples/blob/6.1.1-post/clients/cloud/nodejs/consumer.js
When setting your second consumers for real time, just set them to start from that
point.
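Putting the two replies together, seeding the real-time consumer at startup might look like the sketch below. The Kafka calls are left as comments because they need a live broker; they use kafka-python's `assign`/`seek` API, which mirrors the `KafkaConsumer.seek` javadoc linked earlier in the thread. `build_seek_list` is a hypothetical helper name, and the topic/bootstrap values are assumptions:

```python
def build_seek_list(topic, stored_offsets):
    """Turn {partition: next_offset} rows loaded from the DB into
    (topic, partition, offset) tuples to seek to before polling."""
    return [(topic, p, off) for p, off in sorted(stored_offsets.items())]

# Hedged usage with kafka-python (assumes a broker at localhost:9092 and
# that the offsets were loaded from the DB as described above; untested here):
#
# from kafka import KafkaConsumer, TopicPartition
# consumer = KafkaConsumer(bootstrap_servers="localhost:9092",
#                          enable_auto_commit=False)
# seeks = build_seek_list("events", {0: 42, 1: 7})
# parts = [TopicPartition(t, p) for t, p, _ in seeks]
# consumer.assign(parts)                        # manual assignment, no group rebalance
# for t, p, off in seeks:
#     consumer.seek(TopicPartition(t, p), off)  # start from the DB-recorded offset
```

Using `assign` rather than `subscribe` sidesteps the group-rebalance machinery, which fits the case where the two consumers live in different processes on different servers.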
On Fri, May 28, 2021,
I'm trying to figure out how to programmatically read a consumer group's offsets
for a topic.
What I'm trying to do is read the offsets of our DB consumers that run once an
hour and batch load all new messages. I then would have another consumer that
monitors the offsets that have been consumed and c
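For the "programmatically read a consumer group's offsets" part, one broker-library-free option is to shell out to Kafka's `kafka-consumer-groups.sh --describe` tool and parse its output. Below is a small parser sketch; the column layout is assumed from the tool's usual `GROUP TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG ...` format, and the sample text is made up for illustration:

```python
def parse_group_describe(text):
    """Parse `kafka-consumer-groups.sh --describe --group <g>` output into
    {(topic, partition): current_offset}."""
    offsets = {}
    for line in text.splitlines():
        cols = line.split()
        if len(cols) < 6 or cols[0] == "GROUP":
            continue  # skip blank lines and the header row
        topic, partition, current = cols[1], cols[2], cols[3]
        if current == "-":
            continue  # no committed offset yet for this partition
        offsets[(topic, int(partition))] = int(current)
    return offsets

# Made-up sample of the tool's output for a group named "db-consumer":
sample = """\
GROUP        TOPIC   PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG  CONSUMER-ID  HOST  CLIENT-ID
db-consumer  events  0          1042            1100            58   -            -     -
db-consumer  events  1          998             998             0    -            -     -
"""
```

The same information is also available in-process via a consumer's `committed()` lookup or the admin API, but those need a client library and a live broker, whereas parsing the CLI output works anywhere the Kafka tools are installed.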