This is an automated email from the ASF dual-hosted git repository.
aicam pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/texera.git
The following commit(s) were added to refs/heads/main by this push:
new 9bc1ebdff0 fix: prevent orphaned sessions when lakeFS abort fails (#4197)
9bc1ebdff0 is described below
commit 9bc1ebdff048fc74e9f1e88c6a0dc7e2085c49f2
Author: Xuan Gu <[email protected]>
AuthorDate: Mon Feb 16 09:59:23 2026 -0800
fix: prevent orphaned sessions when lakeFS abort fails (#4197)
### What changes were proposed in this PR?
This PR fixes the issue where a failed LakeFS abort call could cause the
database transaction to roll back, leaving the upload session stuck in
the database. It moves the LakeFS abort call outside the database
transaction in abortMultipartUpload to prevent orphaned upload sessions.
The transaction now handles validation and DB cleanup, returning the
necessary values (repoName, uploadId, physicalAddress) as a tuple. After
the transaction commits, the LakeFS abort is called separately, so the
session is always cleaned up regardless of whether LakeFS succeeds.
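The pattern the commit applies can be sketched as: run validation and DB cleanup inside the transaction, return the identifiers the external call needs as a tuple, and only invoke the external abort after the transaction has committed. The following is a minimal, self-contained sketch of that ordering; `withTransaction`, `externalAbort`, and the in-memory `sessions` map are illustrative stand-ins, not the actual Texera or LakeFS API:

```scala
// Hypothetical sketch: external-service call moved outside the DB transaction.
object AbortOutsideTxnSketch {
  // Stand-in for a real DB transaction: runs the body and "commits" on success.
  def withTransaction[T](body: => T): T = body

  // Stand-in for the upload-session table: filePath -> (repo, uploadId, physicalAddr).
  var sessions: Map[String, (String, String, String)] =
    Map("file.bin" -> ("repo-1", "upload-42", "s3://bucket/obj"))

  // Stand-in for the LakeFS abort call; here it always fails to model the bug.
  def externalAbort(repo: String, uploadId: String, addr: String): Unit =
    throw new RuntimeException(s"LakeFS abort failed for $repo/$uploadId")

  def abortMultipartUpload(filePath: String): Unit = {
    // Step 1: the transaction only validates and deletes the session row,
    // returning the values the external call will need.
    val (repo, uploadId, addr) = withTransaction {
      val session = sessions.getOrElse(filePath, sys.error("no upload session"))
      sessions -= filePath // delete session; parts removed via cascade
      session
    }
    // Step 2: the external abort runs after commit. If it throws, the error
    // still surfaces to the caller, but it can no longer roll back the delete.
    externalAbort(repo, uploadId, addr)
  }
}
```

Even when `externalAbort` throws, as a failed LakeFS call would, the session entry has already been deleted, which mirrors the new test's assertion that the DB session is cleaned up despite the abort failure.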
### Any related issues, documentation, discussions?
Fixes #4196
### How was this PR tested?
Manually tested. The existing abort test in DatasetResourceSpec was also extended to assert that the DB session and its part rows are cleaned up even when the LakeFS abort fails.
### Was this PR authored or co-authored using generative AI tooling?
No
---------
Co-authored-by: carloea2 <[email protected]>
Co-authored-by: Chen Li <[email protected]>
---
.../texera/service/resource/DatasetResource.scala | 25 +++++++---------------
.../service/resource/DatasetResourceSpec.scala | 4 ++++
2 files changed, 12 insertions(+), 17 deletions(-)
diff --git a/file-service/src/main/scala/org/apache/texera/service/resource/DatasetResource.scala b/file-service/src/main/scala/org/apache/texera/service/resource/DatasetResource.scala
index a60bc07adf..ad5f224720 100644
--- a/file-service/src/main/scala/org/apache/texera/service/resource/DatasetResource.scala
+++ b/file-service/src/main/scala/org/apache/texera/service/resource/DatasetResource.scala
@@ -1994,7 +1994,7 @@ class DatasetResource {
URLDecoder.decode(encodedFilePath, StandardCharsets.UTF_8.name())
)
- withTransaction(context) { ctx =>
+ val (repoName, uploadId, physicalAddr) = withTransaction(context) { ctx =>
if (!userHasWriteAccess(ctx, did, uid)) {
throw new ForbiddenException(ERR_USER_HAS_NO_ACCESS_TO_DATASET_MESSAGE)
}
@@ -2030,21 +2030,6 @@ class DatasetResource {
}
val physicalAddr =
Option(session.getPhysicalAddress).map(_.trim).getOrElse("")
- if (physicalAddr.isEmpty) {
- throw new WebApplicationException(
- "Upload session is missing physicalAddress. Restart the upload.",
- Response.Status.INTERNAL_SERVER_ERROR
- )
- }
-
- withLakeFSErrorHandling {
- LakeFSStorageClient.abortPresignedMultipartUploads(
- dataset.getRepositoryName,
- filePath,
- session.getUploadId,
- physicalAddr
- )
- }
// Delete session; parts removed via ON DELETE CASCADE
ctx
@@ -2057,8 +2042,14 @@ class DatasetResource {
)
.execute()
- Response.ok(Map("message" -> "Multipart upload aborted successfully")).build()
+ (dataset.getRepositoryName, session.getUploadId, physicalAddr)
+ }
+
+ withLakeFSErrorHandling {
+ LakeFSStorageClient.abortPresignedMultipartUploads(repoName, filePath, uploadId, physicalAddr)
}
+
+ Response.ok(Map("message" -> "Multipart upload aborted successfully")).build()
}
/**
diff --git a/file-service/src/test/scala/org/apache/texera/service/resource/DatasetResourceSpec.scala b/file-service/src/test/scala/org/apache/texera/service/resource/DatasetResourceSpec.scala
index c03a6d4cb6..24253c3a92 100644
--- a/file-service/src/test/scala/org/apache/texera/service/resource/DatasetResourceSpec.scala
+++ b/file-service/src/test/scala/org/apache/texera/service/resource/DatasetResourceSpec.scala
@@ -2445,5 +2445,9 @@ class DatasetResourceSpec
intercept[WebApplicationException] {
abortUpload(filePath)
}.getResponse.getStatus shouldEqual 400
+
+ // DB session is cleaned up
+ fetchSession(filePath) shouldBe null
+ fetchPartRows(uploadId) shouldBe empty
}
}