davecromberge opened a new issue, #12448:
URL: https://github.com/apache/pinot/issues/12448

   **What needs to be done?**
   
   - Upgrade Flink version
   The Flink version in the Pinot project pom should be updated from 1.14.6 to the latest release, 1.18.1.
   
   - Authentication
   The `org.apache.pinot.controller.helix.ControllerRequestClient` does not currently accept an `org.apache.pinot.spi.auth.AuthProvider`, and neither does the `org.apache.pinot.connector.flink.sink.PinotSinkFunction<T>`. Both should be able to take one so that secured clusters can be targeted; a sketch follows below.
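
   As a minimal sketch, assuming `AuthProvider` keeps its two-method shape (request headers plus a task token), a basic-auth provider could look like the following. How the provider gets threaded into the sink and the request client is the open design question, so that wiring is deliberately omitted:

```java
// Hedged sketch: a basic-auth AuthProvider that the sink and the request
// client could accept. Assumes the AuthProvider interface exposes
// getRequestHeaders() and getTaskToken().
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import java.util.Collections;
import java.util.Map;

import org.apache.pinot.spi.auth.AuthProvider;

public class BasicAuthProvider implements AuthProvider {
  private final String _authHeader;

  public BasicAuthProvider(String user, String password) {
    String token = Base64.getEncoder()
        .encodeToString((user + ":" + password).getBytes(StandardCharsets.UTF_8));
    _authHeader = "Basic " + token;
  }

  @Override
  public Map<String, Object> getRequestHeaders() {
    // Sent with every controller request, e.g. segment uploads.
    return Collections.singletonMap("Authorization", _authHeader);
  }

  @Override
  public String getTaskToken() {
    return _authHeader;
  }
}
```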
   
   - Error handling
   Verify that the underlying SegmentWriter handles errors when appending records to a segment and that those errors propagate correctly to the Flink runtime. Likewise, check that the SegmentUploader handles transmission errors and propagates them correctly; a sketch of the propagation pattern follows below.
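
   For illustration, the append path could surface failures like this sketch; it assumes `SegmentWriter.collect(GenericRow)` throws `IOException` as in the current SPI, and the converter/writer wiring is illustrative only:

```java
// Sketch of error propagation from the sink, assuming
// SegmentWriter.collect(GenericRow) throws IOException.
import java.io.IOException;

import org.apache.flink.streaming.api.functions.sink.RichSinkFunction;
import org.apache.pinot.spi.data.readers.GenericRow;
import org.apache.pinot.spi.ingestion.segment.writer.SegmentWriter;

public abstract class ErrorPropagatingPinotSink<T> extends RichSinkFunction<T> {
  private transient SegmentWriter _segmentWriter; // would be initialized in open()

  protected abstract GenericRow toRow(T value);

  @Override
  public void invoke(T value, Context context) throws Exception {
    try {
      _segmentWriter.collect(toRow(value));
    } catch (IOException e) {
      // Rethrow so the Flink runtime fails the task (and can restart it)
      // instead of the record being silently dropped.
      throw new RuntimeException("Failed to append record to segment", e);
    }
  }
}
```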
   
   - Schema and Data Type Mapping
   Correctly convert Flink timestamps with time zones to their Pinot equivalent; see [this comment](https://docs.google.com/document/d/1GVoFHOHSDPs1MEDKEmKguKwWMqM1lwQKj2e64RAKDf8/edit?disco=AAAAH-h_wJc). Also handle double/decimal conversions; see [this comment](https://docs.google.com/document/d/1GVoFHOHSDPs1MEDKEmKguKwWMqM1lwQKj2e64RAKDf8/edit?disco=AAAAH-h_wJU). A conversion sketch follows below.
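
   As a concrete sketch of the timestamp and decimal cases, assuming the connector works with Flink's internal `TimestampData`/`DecimalData` representations and that Pinot's TIMESTAMP type stores epoch milliseconds:

```java
// Sketch of type conversions. Flink's TimestampData carries millisecond
// precision plus nanos-of-millisecond; mapping to epoch millis drops the
// sub-millisecond part.
import java.math.BigDecimal;

import org.apache.flink.table.data.DecimalData;
import org.apache.flink.table.data.TimestampData;

public final class PinotTypeConversions {
  private PinotTypeConversions() {
  }

  public static long toPinotTimestampMillis(TimestampData ts) {
    return ts.getMillisecond();
  }

  // Converting DECIMAL to double loses precision; mapping to Pinot's
  // BIG_DECIMAL via java.math.BigDecimal is preferable where possible.
  public static BigDecimal toPinotBigDecimal(DecimalData decimal) {
    return decimal.toBigDecimal();
  }
}
```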
   
   Nice-to-haves:
   
   - Segment size parameter
   Currently `segmentFlushMaxNumRecords` controls when a segment is flushed, based on the number of ingested rows. A complementary `desiredSegmentSize` parameter could flush segments when the byte size approaches or exceeds a threshold; see the sketch below.
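
   One possible shape for the dual trigger; `desiredSegmentSize` is proposed here rather than an existing option, and the byte count would necessarily be an estimate until the segment is actually built:

```java
// Hypothetical flush policy combining the existing row-count trigger with a
// proposed byte-size trigger.
public class FlushPolicy {
  private final long _maxNumRecords;
  private final long _desiredSegmentSizeBytes;

  public FlushPolicy(long maxNumRecords, long desiredSegmentSizeBytes) {
    _maxNumRecords = maxNumRecords;
    _desiredSegmentSizeBytes = desiredSegmentSizeBytes;
  }

  // Flush when either trigger fires.
  public boolean shouldFlush(long numRecords, long estimatedSegmentBytes) {
    return numRecords >= _maxNumRecords
        || estimatedSegmentBytes >= _desiredSegmentSizeBytes;
  }
}
```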
   
   - Checkpoint support
   Understand the limitations behind only supporting Batch mode execution in Flink. Can the current segment writer be serialized, and is there support for resuming from the serialized state? A skeletal sketch of what would be involved follows below.
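
   For reference, streaming-mode support would roughly mean implementing Flink's `CheckpointedFunction` in the sink, which is exactly where the serialization question bites:

```java
// Skeleton only: what checkpoint support would require of the sink. The open
// question is whether the segment writer's in-progress state can be
// serialized in snapshotState and restored in initializeState.
import org.apache.flink.runtime.state.FunctionInitializationContext;
import org.apache.flink.runtime.state.FunctionSnapshotContext;
import org.apache.flink.streaming.api.checkpoint.CheckpointedFunction;

public class CheckpointedPinotSink implements CheckpointedFunction {
  @Override
  public void snapshotState(FunctionSnapshotContext context) throws Exception {
    // Either flush the in-progress segment on checkpoint, or serialize enough
    // writer state to resume appending after a restore.
  }

  @Override
  public void initializeState(FunctionInitializationContext context) throws Exception {
    // Recreate or restore the segment writer from checkpointed state.
  }
}
```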
   
   - Connector assembly
   Package the connector as a single assembly with shaded dependencies so that it can be used within the Flink SQL environment, as is already done for other connectors such as Delta Lake and Google BigQuery.
   
   Other questions:
   - Is it possible to upload directly to a deep store and push metadata to the 
controller, or do we need the controller to implement the two-phase commit 
protocol?
   
   **Why the feature is needed**
   
   Our use case involves pre-aggregating data with Apache DataSketches before ingestion into Pinot. The sketches are serialized as binary values that can be on the order of megabytes and are appended to a Delta Lake table. The idea is to stream records continuously from the Delta Lake using the [Flink Delta Connector](https://github.com/delta-io/delta/tree/master/connectors/flink), exercise fine-grained control over Pinot segment generation, and upload the resulting segments directly to Pinot. Our Pinot controllers are secured using Basic Authentication.
   
   It would be possible to clone the existing connector and make these modifications privately, but some of the enhancements might benefit other users, so discussing them here seems better.
   
   **Initial idea/proposal**
   Discuss the points above and collaborate on implementation.

