adoroszlai commented on a change in pull request #72: HDDS-2341. Validate tar 
entry path during extraction
URL: https://github.com/apache/hadoop-ozone/pull/72#discussion_r338166152
 
 

 ##########
 File path: 
hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/keyvalue/TestTarContainerPacker.java
 ##########
 @@ -188,13 +184,139 @@ public void pack() throws IOException, 
CompressorException {
     assertExampleChunkFileIsGood(
         Paths.get(destinationContainerData.getChunksPath()));
     Assert.assertFalse(
-        "Descriptor file should not been exctarcted by the "
+        "Descriptor file should not have been extracted by the "
             + "unpackContainerData Call",
         destinationContainer.getContainerFile().exists());
     Assert.assertEquals(TEST_DESCRIPTOR_FILE_CONTENT, descriptor);
+  }
+
+  @Test
+  public void unpackContainerDataWithValidRelativeDbFilePath()
+      throws Exception {
+    //GIVEN
+    KeyValueContainerData sourceContainerData =
+        createContainer(SOURCE_CONTAINER_ROOT);
+
+    String fileName = "sub/dir/" + TEST_DB_FILE_NAME;
+    File file = writeDbFile(sourceContainerData, fileName);
+    String entryName = TarContainerPacker.DB_DIR_NAME + "/" + fileName;
+
+    File containerFile = packContainerWithSingleFile(file, entryName);
+
+    // WHEN
+    unpackContainerData(containerFile);
+
+    // THEN
+    assertExampleMetadataDbIsGood(file.toPath().getParent());
+  }
+
+  @Test
+  public void unpackContainerDataWithValidRelativeChunkFilePath()
+      throws Exception {
+    //GIVEN
+    KeyValueContainerData sourceContainerData =
+        createContainer(SOURCE_CONTAINER_ROOT);
+
+    String fileName = "sub/dir/" + TEST_CHUNK_FILE_NAME;
+    File file = writeChunkFile(sourceContainerData, fileName);
+    String entryName = TarContainerPacker.CHUNKS_DIR_NAME + "/" + fileName;
+
+    File containerFile = packContainerWithSingleFile(file, entryName);
+
+    // WHEN
+    unpackContainerData(containerFile);
+
+    // THEN
+    assertExampleChunkFileIsGood(file.toPath().getParent());
 
 Review comment:
   Good point.
   
   I don't recall exactly why I added this assertion.  I can say that the main 
goal of the valid/invalid test cases is to verify that they are 
accepted/rejected by `unpackContainerData` (by not throwing or by throwing an 
exception).  The subsequent assertion is not really necessary, as the 
pack/unpack process is already validated in `@Test pack()`.
   
   Now that you've pointed this out, I tweaked the test a bit to be able to 
verify the chunk/db file for these cases, too.  I also added post-test cleanup.
   
   Thanks for the review.
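   For context, the idea behind rejecting invalid entry paths can be sketched 
roughly like this (a minimal illustration of tar-entry path validation, not 
the actual `TarContainerPacker` code; `validateEntry` is a hypothetical 
helper):

   ```java
   import java.io.IOException;
   import java.nio.file.Path;
   import java.nio.file.Paths;

   public class TarEntrySanitizer {

     // Resolves a tar entry name against the destination directory and
     // rejects any entry that would escape it (absolute paths or "../"
     // traversal). Hypothetical helper for illustration only.
     public static Path validateEntry(Path destDir, String entryName)
         throws IOException {
       Path target = destDir.resolve(entryName).normalize();
       if (!target.startsWith(destDir.normalize())) {
         throw new IOException("Invalid tar entry: " + entryName);
       }
       return target;
     }

     public static void main(String[] args) throws IOException {
       Path dest = Paths.get("/tmp/container");

       // A valid relative path (like "sub/dir/..." in the tests) is accepted.
       System.out.println(validateEntry(dest, "sub/dir/chunk.data"));

       // A traversal attempt is rejected with an exception.
       try {
         validateEntry(dest, "../../etc/passwd");
       } catch (IOException e) {
         System.out.println("rejected: " + e.getMessage());
       }
     }
   }
   ```

   So the valid test cases should complete without an exception, while the 
invalid ones should fail fast before any file is written outside the 
container directory.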
