sashapolo commented on code in PR #6229:
URL: https://github.com/apache/ignite-3/pull/6229#discussion_r2197516582


##########
modules/storage-rocksdb/src/main/java/org/apache/ignite/internal/storage/rocksdb/index/AbstractRocksDbIndexStorage.java:
##########
@@ -257,14 +257,27 @@ String createStorageInfo() {
     }
 
     /**
-     * Deletes the data associated with the index, using passed write batch for the operation.
+     * Deletes the data associated with the index to prepare the storage for subsequent use, using passed write batch for the operation.
+     *
+     * @throws RocksDBException If failed to delete data.
+     */
+    public final void eraseData(WriteBatch writeBatch) throws RocksDBException {

Review Comment:
   We usually use `clearData` in this context



##########
modules/storage-rocksdb/src/main/java/org/apache/ignite/internal/storage/rocksdb/RocksDbStorageEngine.java:
##########
@@ -270,4 +273,24 @@ public void destroyMvTable(int tableId) {
             rocksDbStorage.rocksDbInstance.destroyTable(tableId);
         }
     }
+
+    @Override
+    public Set<Integer> tableIdsOnDisk() {
+        return storageByProfileName.values().stream()
+                .flatMap(storage -> storage.rocksDbInstance.tableIdsInRocksDb().stream())

Review Comment:
   `tableIdsInRocksDb` doesn't look like a good name here; it's obvious that it's from RocksDB. Why is this method not also called `tableIdsOnDisk`?



##########
modules/storage-rocksdb/src/main/java/org/apache/ignite/internal/storage/rocksdb/instance/SharedRocksDbInstance.java:
##########
@@ -443,4 +448,44 @@ private void destroyColumnFamily(ColumnFamily columnFamily) {
             );
         }
     }
+
+    /**
+     * Returns IDs of all tables for which there are storages in the underlying RocksDB.
+     */
+    public Set<Integer> tableIdsInRocksDb() {
+        Set<Integer> tableIds = new HashSet<>();
+
+        try (
+                var upperBound = new Slice(incrementPrefix(PARTITION_META_PREFIX));
+                var readOptions = new ReadOptions().setIterateUpperBound(upperBound);
+                RocksIterator it = meta.columnFamily().newIterator(readOptions)
+        ) {
+            it.seek(PARTITION_META_PREFIX);
+
+            while (it.isValid()) {
+                byte[] key = it.key();
+                int tableId = ByteUtils.bytesToInt(key, PARTITION_META_PREFIX.length);
+                tableIds.add(tableId);
+
+                it.next();
+            }
+
+            // Doing this to make an exception thrown if the iteration was stopped due to an error and not due to exhausting
+            // the iteration space.
+            it.status();
+        } catch (RocksDBException e) {
+            throw new IgniteInternalException(INTERNAL_ERR, "Cannot get table IDs", e);
+        }
+
+        return Set.copyOf(tableIds);

Review Comment:
   What's this for?
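   (Context for the question above: `Set.copyOf` returns an unmodifiable copy, so the returned set is decoupled from the mutable `HashSet` built during iteration. A minimal self-contained sketch of that behavior; `snapshot` and the class name are illustrative stand-ins, not code from the PR:)

```java
import java.util.HashSet;
import java.util.Set;

public class CopyOfDemo {
    // Illustrative stand-in: returns an unmodifiable snapshot of mutable state.
    static Set<Integer> snapshot(Set<Integer> internal) {
        return Set.copyOf(internal);
    }

    public static void main(String[] args) {
        Set<Integer> internal = new HashSet<>(Set.of(1, 2, 3));
        Set<Integer> copy = snapshot(internal);

        // Mutating the source afterwards does not leak into the copy.
        internal.add(4);
        System.out.println(copy.size()); // prints 3

        try {
            copy.add(5); // The copy itself rejects modification.
        } catch (UnsupportedOperationException e) {
            System.out.println("unmodifiable");
        }
    }
}
```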



##########
modules/storage-rocksdb/src/main/java/org/apache/ignite/internal/storage/rocksdb/index/AbstractRocksDbIndexStorage.java:
##########
@@ -257,14 +257,27 @@ String createStorageInfo() {
     }
 
     /**
-     * Deletes the data associated with the index, using passed write batch for the operation.
+     * Deletes the data associated with the index to prepare the storage for subsequent use, using passed write batch for the operation.
+     *
+     * @throws RocksDBException If failed to delete data.
+     */
+    public final void eraseData(WriteBatch writeBatch) throws RocksDBException {
+        cleanup(writeBatch, false);
+    }
+
+    /**
+     * Deletes the data associated with the index (the storage will not be used anymore), using passed write batch for the operation.
      *
      * @throws RocksDBException If failed to delete data.
      */
     public final void destroyData(WriteBatch writeBatch) throws RocksDBException {
+        cleanup(writeBatch, true);
+    }
+
+    private void cleanup(WriteBatch writeBatch, boolean finalDestruction) throws RocksDBException {
         clearIndex(writeBatch);
 
-        if (descriptor.mustBeBuilt()) {
+        if (descriptor.mustBeBuilt() && !finalDestruction) {

Review Comment:
   Maybe two separate (partially copy-pasted) methods would be a cleaner solution here...
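   (For illustration, the two-method alternative suggested above could look roughly like the sketch below. The types are plain-Java stand-ins, with a `List<String>` log in place of `WriteBatch`, and the method bodies are assumptions inferred from the quoted diff, not the actual PR code:)

```java
import java.util.ArrayList;
import java.util.List;

public class IndexCleanupSketch {
    final List<String> ops = new ArrayList<>();
    private final boolean mustBeBuilt;

    IndexCleanupSketch(boolean mustBeBuilt) {
        this.mustBeBuilt = mustBeBuilt;
    }

    private void clearIndex() {
        ops.add("clearIndex");
    }

    private void resetBuildProgress() {
        ops.add("resetBuildProgress");
    }

    // Prepares the storage for reuse: also resets the build-progress marker.
    public final void clearData() {
        clearIndex();
        if (mustBeBuilt) {
            resetBuildProgress();
        }
    }

    // Final destruction: the shared step is duplicated, but no boolean flag
    // threads through a common helper.
    public final void destroyData() {
        clearIndex();
    }

    public static void main(String[] args) {
        IndexCleanupSketch s = new IndexCleanupSketch(true);
        s.clearData();
        System.out.println(s.ops); // prints [clearIndex, resetBuildProgress]
    }
}
```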



##########
modules/storage-api/src/testFixtures/java/org/apache/ignite/internal/storage/engine/AbstractPersistentStorageEngineTest.java:
##########
@@ -70,6 +70,9 @@
  * because it doesn't limit the usage of the engine with a single table.
  */
 public abstract class AbstractPersistentStorageEngineTest extends AbstractStorageEngineTest {
+    /** Makes sure that table destruction is persisted durably. */
+    protected abstract void persistTableDestructionIfNeeded();

Review Comment:
   Why is this method needed? Is it because PageMemory and RocksDb have different `destroy` durability effects? Can we just make them identical?
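   (Background for the question: the abstract hook follows a template-method shape, letting each engine's test subclass insert its own durability barrier, such as a WAL flush or a checkpoint, before the test restarts the engine. A hedged sketch with invented names; none of the classes below are the actual Ignite test fixtures:)

```java
import java.util.ArrayList;
import java.util.List;

abstract class PersistentEngineTestSketch {
    final List<String> log = new ArrayList<>();

    /** Engines whose destroy is already durable can keep this a no-op. */
    protected abstract void persistTableDestructionIfNeeded();

    final void destroyTableAndRestart() {
        log.add("destroyTable");
        persistTableDestructionIfNeeded(); // engine-specific durability barrier
        log.add("restartEngine");
    }
}

public class DurabilityHookDemo extends PersistentEngineTestSketch {
    @Override
    protected void persistTableDestructionIfNeeded() {
        // Hypothetical: a RocksDB-backed engine might flush its WAL here.
        log.add("persistDestruction");
    }

    public static void main(String[] args) {
        DurabilityHookDemo test = new DurabilityHookDemo();
        test.destroyTableAndRestart();
        System.out.println(test.log);
    }
}
```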



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: notifications-unsubscr...@ignite.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
