krishvishal commented on code in PR #2916:
URL: https://github.com/apache/iggy/pull/2916#discussion_r2935490865


##########
core/journal/src/lib.rs:
##########
@@ -30,16 +34,40 @@ where
     fn header(&self, idx: usize) -> Option<Self::HeaderRef<'_>>;
     fn previous_header(&self, header: &Self::Header) -> Option<Self::HeaderRef<'_>>;
 
-    fn append(&self, entry: Self::Entry) -> impl Future<Output = ()>;
+    fn append(&self, entry: Self::Entry) -> impl Future<Output = io::Result<()>>;
     fn entry(&self, header: &Self::Header) -> impl Future<Output = Option<Self::Entry>>;
+
+    /// Advance the snapshot watermark so entries at or below `op` may be
+    /// evicted from the journal's in-memory index. The default is a no-op
+    /// for journals that do not require this watermark.
+    fn set_snapshot_op(&self, _op: u64) {}
+
+    /// Number of entries that can be appended before the journal would need
+    /// to evict un-snapshotted slots. Returns `None` for journals that don't persist to disk.
+    fn remaining_capacity(&self) -> Option<usize> {
+        None
+    }
+
+    /// Remove snapshotted entries from the WAL to reclaim disk space.
+    /// The default is a no-op for journals that do not persist to disk.
+    ///
+    /// # Errors
+    /// Returns an I/O error if compaction fails.
+    fn compact(&self) -> impl Future<Output = io::Result<()>> {
+        async { Ok(()) }
+    }

Review Comment:
   I agree that the `drain` API is much cleaner. One thing to consider: `drain` 
would read and deserialize all removed entries in order to return them to the 
caller, but the main consumer today (checkpoint) doesn't need the returned 
entries, it just wants them removed from the WAL. How do we handle the wasted 
deserialization cost?
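
   To make the trade-off concrete, here is a minimal sketch of the two shapes the API could take. All names (`Wal`, `drain`, `truncate`) are hypothetical stand-ins, not code from this PR: a `drain` that pays deserialization to hand entries back, next to a `truncate` that removes the same frames without ever decoding them, which is all checkpoint needs.

```rust
/// Hypothetical in-memory stand-in for the WAL index: raw serialized
/// frames, oldest first, that must be decoded to become entries.
struct Wal {
    frames: Vec<Vec<u8>>,
}

impl Wal {
    /// `drain`-style API: removes the first `n` frames and pays the
    /// deserialization cost so the caller gets the entries back.
    fn drain(&mut self, n: usize) -> Vec<String> {
        self.frames
            .drain(..n.min(self.frames.len()))
            .map(|f| String::from_utf8(f).expect("valid entry")) // decode each frame
            .collect()
    }

    /// `truncate`-style API: removes the same frames but never decodes
    /// them, so a checkpoint-style caller skips the wasted work.
    fn truncate(&mut self, n: usize) {
        self.frames.drain(..n.min(self.frames.len()));
    }
}
```

   Another option, since `Vec::drain` already returns a lazy iterator, would be a single API that returns an iterator over removed frames and only deserializes the ones the caller actually consumes.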



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
