Thank you, Sudesh, for reviewing the KIP and for the valuable feedback.

Thanks,
Pritam

On Thu, Apr 17, 2025 at 8:59 PM Sudesh Wasnik <wasnik...@gmail.com> wrote:

> Thanks Pritam!
> No more feedback from my end. Nice addition!
>
> Thanks,
> Sudesh
> On 17 Apr 2025 at 4:59 PM +0530, pritam kumar <kumarpritamm...@gmail.com>
> wrote:
> > I have made the nanosecond-related changes in the KIP. Please have a look.
> > Thanks,
> > Pritam.
> >
> > On Thu, Apr 17, 2025 at 4:25 PM pritam kumar <kumarpritamm...@gmail.com>
> > wrote:
> >
> > > Hi Sudesh,
> > > Sorry for the earlier comment, I just checked that Avro 1.12 has
> > > timestamp-nanos support. I will update the KIP to have nanosecond
> > > support also, and correspondingly I will make changes for nanoseconds.
> > >
> > > Thanks
> > > Pritam.
> > >
> > > On Wed, Apr 16, 2025 at 7:13 PM pritam kumar <kumarpritamm...@gmail.com>
> > > wrote:
> > >
> > > > Also, just to add: I did not add this in the first place as I think
> > > > Avro itself does not have a nanosecond logical type.
> > > >
> > > > On Wed, Apr 16, 2025 at 7:01 PM pritam kumar <kumarpritamm...@gmail.com>
> > > > wrote:
> > > >
> > > > > Thanks, Sudesh, for taking a look at this. I am already working on
> > > > > extending this to nanosecond precision, as most sinks like Iceberg
> > > > > have started offering nanosecond-precision options.
> > > > >
> > > > > On Wed, Apr 16, 2025 at 4:41 PM Sudesh Wasnik <wasnik...@gmail.com>
> > > > > wrote:
> > > > >
> > > > > > Hi Pritam! Thanks for the KIP!
> > > > > > Let’s extend the KIP to also add support for nanosecond precision!
> > > > > >
> > > > > > Thanks
> > > > > > Sudesh
> > > > > >
> > > > > > On 2025/04/05 01:30:49 pritam kumar wrote:
> > > > > > > Hi Kafka Community,
> > > > > > > Sorry, due to some changes I had to change the link to the KIP.
> > > > > > > Here is the updated KIP link:
> > > > > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-1154%3A+Extending+support+for+Microsecond+Precision+for+Kafka+Connect
> > > > > > >
> > > > > > > On Sat, Apr 5, 2025 at 12:14 AM pritam kumar <ku...@gmail.com>
> > > > > > > wrote:
> > > > > > >
> > > > > > > > Hi Kafka Community,
> > > > > > > >
> > > > > > > > I’d like to start a discussion on KIP-1153: Extending Support
> > > > > > > > for Microsecond Precision for Kafka Connect
> > > > > > > > <https://cwiki.apache.org/confluence/display/KAFKA/KIP-1153%3A+Extending+Support+for+Microsecond+Precision+for+Kafka+Connect>.
> > > > > > > >
> > > > > > > > The primary motivation behind this KIP is to enhance the
> > > > > > > > precision of timestamp handling in Kafka Connect. Currently,
> > > > > > > > Kafka Connect is limited to millisecond-level precision for
> > > > > > > > timestamps. However, many modern data formats and platforms
> > > > > > > > have moved beyond this limitation:
> > > > > > > >
> > > > > > > > - Formats such as *Avro*, *Parquet*, and *ORC* support
> > > > > > > >   microsecond (and even nanosecond) precision. For example,
> > > > > > > >   Avro specifies support for timestamp-micros (spec link
> > > > > > > >   <https://avro.apache.org/docs/1.11.0/spec.html#timestamp-micros>).
> > > > > > > > - Sink systems like *Apache Iceberg*, *Delta Lake*, and
> > > > > > > >   *Apache Hudi* offer *microsecond and nanosecond precision*
> > > > > > > >   for time-based fields, making millisecond precision
> > > > > > > >   inadequate for accurate data replication and analytics in
> > > > > > > >   some use cases.
> > > > > > > >
> > > > > > > > This gap can lead to *loss of precision* when transferring
> > > > > > > > data through Kafka Connect, especially when interacting with
> > > > > > > > systems that depend on high-resolution timestamps for change
> > > > > > > > tracking, event ordering, or deduplication.
> > > > > > > >
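The precision loss described above can be sketched in a few lines. This is illustrative only and not code from the KIP: the helper names are mine, and the millisecond truncation stands in for what any millisecond-based logical type (such as Kafka Connect's current Timestamp, which stores milliseconds since the epoch) must do to a microsecond source value.

```python
from datetime import datetime, timedelta, timezone

EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)

def to_epoch_micros(ts: datetime) -> int:
    """Exact microseconds since the Unix epoch (avoids float rounding)."""
    return (ts - EPOCH) // timedelta(microseconds=1)

# A source value with microsecond precision, e.g. a database column or an
# Avro field with logicalType "timestamp-micros".
ts = datetime(2025, 4, 5, 1, 30, 49, 123456, tzinfo=timezone.utc)

micros = to_epoch_micros(ts)
millis = micros // 1_000  # what a millisecond-based logical type must store

# Round-tripping through milliseconds cannot recover the truncated part.
restored = EPOCH + timedelta(milliseconds=millis)
print(micros - to_epoch_micros(restored))  # -> 456 microseconds lost
```

For change tracking or event ordering, two events 456 microseconds apart become indistinguishable after this truncation, which is exactly the gap the KIP targets.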
> > > > > > > > The goal of this KIP is to:
> > > > > > > >
> > > > > > > > - Introduce microsecond-level timestamp handling in Kafka
> > > > > > > >   Connect schema and data representation.
> > > > > > > > - Ensure connectors (both source and sink) can leverage this
> > > > > > > >   precision when supported by the underlying data systems.
> > > > > > > > - Maintain backward compatibility with existing
> > > > > > > >   millisecond-based configurations and data.
> > > > > > > >
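One common way to keep the backward-compatibility goal above is to dispatch on the logical-type name, so existing millisecond data decodes exactly as before while microsecond data keeps its full precision. The sketch below is a hypothetical illustration, not the KIP's design: the type names mirror Avro's "timestamp-millis" / "timestamp-micros", and the Connect-side names are for the KIP to define.

```python
from datetime import datetime, timedelta, timezone

EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)

def decode_timestamp(value: int, logical_type: str) -> datetime:
    """Decode an epoch-based integer according to its logical type.

    The names here are placeholders borrowed from the Avro spec; the
    actual Connect logical-type names are defined by the KIP.
    """
    if logical_type == "timestamp-millis":  # legacy millisecond data
        return EPOCH + timedelta(milliseconds=value)
    if logical_type == "timestamp-micros":  # proposed higher precision
        return EPOCH + timedelta(microseconds=value)
    raise ValueError(f"unknown logical type: {logical_type}")

# Existing millisecond data keeps decoding exactly as before...
old = decode_timestamp(1_743_816_649_123, "timestamp-millis")
# ...while microsecond data preserves its sub-millisecond component.
new = decode_timestamp(1_743_816_649_123_456, "timestamp-micros")
print(old.microsecond, new.microsecond)  # -> 123000 123456
```

Because the precision is carried by the type name rather than the payload, old records need no rewriting, which is one plausible reading of the "maintain backward compatibility" goal.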
> > > > > > > > We welcome community feedback on:
> > > > > > > >
> > > > > > > > - Potential implementation concerns or edge cases we should
> > > > > > > >   address
> > > > > > > > - Suggestions for schema enhancements or conversion strategies
> > > > > > > > - Impacts on connector compatibility and testing
> > > > > > > >
> > > > > > > > Looking forward to your thoughts and input on this proposal!
> > > > > > > >
> > > > > > > > Thanks.
> > > > > > > > Link to the KIP:
> > > > > > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-1153%3A+Extending+Support+for+Microsecond+Precision+for+Kafka+Connect
> > > > > > > >