swamirishi commented on code in PR #8214:
URL: https://github.com/apache/ozone/pull/8214#discussion_r2069225130


##########
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java:
##########
@@ -534,21 +534,23 @@ public void start(OzoneConfiguration configuration) throws IOException {
       int maxOpenFiles = configuration.getInt(OZONE_OM_DB_MAX_OPEN_FILES,
           OZONE_OM_DB_MAX_OPEN_FILES_DEFAULT);
 
-      this.store = loadDB(configuration, metaDir, maxOpenFiles);
+      this.store = loadDB(configuration, metaDir, maxOpenFiles, getLock());
 
       initializeOmTables(CacheType.FULL_CACHE, true);
     }
 
     snapshotChainManager = new SnapshotChainManager(this);
   }
 
-  public static DBStore loadDB(OzoneConfiguration configuration, File metaDir, int maxOpenFiles) throws IOException {
+  public static DBStore loadDB(OzoneConfiguration configuration, File metaDir, int maxOpenFiles, IOzoneManagerLock lock)
+      throws IOException {
     return newDBStoreBuilder(configuration, null, metaDir)
         .setOpenReadOnly(false)
         .setEnableCompactionDag(true)
         .setCreateCheckpointDirs(true)
         .setEnableRocksDbMetrics(true)
         .setMaxNumberOfOpenFiles(maxOpenFiles)
+        .setLock(lock)

Review Comment:
   Why add the lock here? Can't we pass the lock in directly instead?
   
https://github.com/apache/ozone/blob/dc9952e446940b82b277a7e8c9d02239a8417a06/hadoop-hdds/rocksdb-checkpoint-differ/src/main/java/org/apache/ozone/rocksdiff/RocksDBCheckpointDiffer.java#L1419-L1425



##########
hadoop-hdds/rocksdb-checkpoint-differ/src/main/java/org/apache/ozone/rocksdiff/RocksDBCheckpointDiffer.java:
##########
@@ -843,9 +889,16 @@ public synchronized Optional<List<String>> getSSTDiffListWithFullPath(DifferSnap
               String sstFullPath = getSSTFullPath(sst, src.getDbPath(), dest.getDbPath());
               Path link = Paths.get(sstFilesDirForSnapDiffJob,
                   sst + SST_FILE_EXTENSION);
-              Path srcFile = Paths.get(sstFullPath);
-              createLink(link, srcFile);
-              return link.toString();
+              // Take a read lock on the SST FILE
+              ReentrantReadWriteLock.ReadLock sstReadLock = getSSTFileLock(sstFullPath).readLock();

Review Comment:
   @SaketaChalamchala Do we even need a lock here? Since we are relying on the filesystem performing an atomic move, I believe the lock is unnecessary.
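
   For context, the atomic-move argument looks like this (a minimal, self-contained sketch, not the PR's code; file names are illustrative): a file published via a single atomic rename is either fully visible or not visible at all, so readers never observe a partially written SST file and a separate read lock around link creation may be redundant.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class AtomicMoveSketch {
  public static void main(String[] args) throws IOException {
    Path dir = Files.createTempDirectory("sst-demo");
    Path tmp = dir.resolve("000123.sst.tmp");
    Path dst = dir.resolve("000123.sst");

    // Write the new contents to a temporary file first...
    Files.write(tmp, "pruned-sst-bytes".getBytes());

    // ...then publish it with one atomic rename. With ATOMIC_MOVE the
    // operation either fully succeeds or leaves dst untouched, so a
    // concurrent reader never sees a half-written file.
    Files.move(tmp, dst, StandardCopyOption.ATOMIC_MOVE);

    System.out.println(Files.exists(dst) && !Files.exists(tmp));
  }
}
```

   Whether this is sufficient depends on the target filesystem supporting atomic rename within the same directory, which POSIX filesystems do.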



##########
hadoop-hdds/rocks-native/src/main/java/org/apache/hadoop/hdds/utils/db/managed/ManagedRawSSTFileReader.java:
##########
@@ -48,10 +49,13 @@ public static boolean tryLoadLibrary() {
   }
 
   public static boolean loadLibrary() throws NativeLibraryNotLoadedException {
-    ManagedRocksObjectUtils.loadRocksDBLibrary();
-    if (!NativeLibraryLoader.getInstance().loadLibrary(ROCKS_TOOLS_NATIVE_LIBRARY_NAME, Arrays.asList(
-        ManagedRocksObjectUtils.getRocksDBLibFileName()))) {
-      throw new NativeLibraryNotLoadedException(ROCKS_TOOLS_NATIVE_LIBRARY_NAME);
+    if (!isLibraryLoaded) {
+    if (!isLibraryLoaded) {

Review Comment:
   @SaketaChalamchala I see this is an optimization. Can we do it as a separate patch? It doesn't seem related to this change. Let's limit this patch to just pruning key info from the SST files.



##########
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java:
##########
@@ -534,21 +534,23 @@ public void start(OzoneConfiguration configuration) throws IOException {
       int maxOpenFiles = configuration.getInt(OZONE_OM_DB_MAX_OPEN_FILES,
           OZONE_OM_DB_MAX_OPEN_FILES_DEFAULT);
 
-      this.store = loadDB(configuration, metaDir, maxOpenFiles);
+      this.store = loadDB(configuration, metaDir, maxOpenFiles, getLock());
 
       initializeOmTables(CacheType.FULL_CACHE, true);
     }
 
     snapshotChainManager = new SnapshotChainManager(this);
   }
 
-  public static DBStore loadDB(OzoneConfiguration configuration, File metaDir, int maxOpenFiles) throws IOException {
+  public static DBStore loadDB(OzoneConfiguration configuration, File metaDir, int maxOpenFiles, IOzoneManagerLock lock)
+      throws IOException {
     return newDBStoreBuilder(configuration, null, metaDir)
         .setOpenReadOnly(false)
         .setEnableCompactionDag(true)
         .setCreateCheckpointDirs(true)
         .setEnableRocksDbMetrics(true)
         .setMaxNumberOfOpenFiles(maxOpenFiles)
+        .setLock(lock)

Review Comment:
   Pass `StripedLock<ReadWriteLock>` directly here.
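
   For readers unfamiliar with lock striping, here is a minimal illustrative sketch of the idea (the actual code may use something like Guava's `Striped<ReadWriteLock>`; this stand-in class and its names are hypothetical): keys hash onto a fixed pool of `ReadWriteLock` stripes, so unrelated keys rarely contend while the same key always maps to the same lock.

```java
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class StripedLockSketch {
  private final ReadWriteLock[] stripes;

  public StripedLockSketch(int n) {
    stripes = new ReadWriteLock[n];
    for (int i = 0; i < n; i++) {
      stripes[i] = new ReentrantReadWriteLock();
    }
  }

  // Keys that hash to the same stripe share a lock; different stripes
  // can be held independently, reducing contention.
  public ReadWriteLock get(String key) {
    return stripes[Math.floorMod(key.hashCode(), stripes.length)];
  }

  public static void main(String[] args) {
    StripedLockSketch locks = new StripedLockSketch(16);
    ReadWriteLock l = locks.get("000123.sst");
    l.readLock().lock();
    try {
      // The same key always resolves to the same stripe.
      System.out.println(l == locks.get("000123.sst"));
    } finally {
      l.readLock().unlock();
    }
  }
}
```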



##########
hadoop-hdds/rocksdb-checkpoint-differ/src/main/java/org/apache/ozone/compaction/log/CompactionFileInfo.java:
##########
@@ -32,16 +32,26 @@ public final class CompactionFileInfo {
   private final String startKey;
   private final String endKey;
   private final String columnFamily;
+  private boolean pruned;

Review Comment:
   Please add a test case for the newly added fields.
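
   A hedged sketch of what such a test might check (using a hypothetical stub in place of `CompactionFileInfo`, since the real class's constructor and builder are not shown here): the new `pruned` field should default to `false` and round-trip through its setter.

```java
public class PrunedFieldTest {
  // Hypothetical stand-in mirroring the `pruned` field added in the PR.
  static final class CompactionFileInfoStub {
    private boolean pruned;

    boolean isPruned() { return pruned; }
    void setPruned(boolean pruned) { this.pruned = pruned; }
  }

  public static void main(String[] args) {
    CompactionFileInfoStub info = new CompactionFileInfoStub();
    // New boolean fields should default to false for backward compatibility.
    boolean defaultIsFalse = !info.isPruned();
    // And the value should survive a set/get round trip.
    info.setPruned(true);
    System.out.println(defaultIsFalse && info.isPruned());
  }
}
```

   In the actual suite this would also cover proto serialization round-trips if `pruned` is persisted.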



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@ozone.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

