The GitHub Actions job "Build and push images" on texera.git/main has failed. Run started by GitHub user bobbai00 (triggered by bobbai00).
Head commit for run: 4f822e376afb40be3c6035762d98d7851f995dee / Chris <[email protected]>

feat: Python Support for Large Binary (#4100)

### What changes were proposed in this PR?

This PR introduces Python support for the `large_binary` attribute type, enabling Python UDF operators to process data larger than 2 GB. Data is offloaded to MinIO (S3), and the tuple retains only a pointer (URI). This mirrors the existing Java LargeBinary implementation, ensuring cross-language compatibility. (See #4067 for the system diagram and #4111 for the renaming.)

## Key Features

### 1. MinIO/S3 Integration

- Uses the shared `texera-large-binaries` bucket.
- Implements lazy initialization of S3 clients and automatic bucket creation (a hedged sketch appears after the usage examples below).

### 2. Streaming I/O

- **`LargeBinaryOutputStream`:** Writes data to S3 using multipart uploads (64 KB chunks) to avoid blocking the main execution.
- **`LargeBinaryInputStream`:** Lazily downloads data only when the read operation begins. Implements the standard Python `io.IOBase` interface.

### 3. Tuple & Iceberg Compatibility

- `largebinary` instances are automatically serialized to URI strings for Iceberg storage and Arrow tables.
- Uses a magic suffix (`__texera_large_binary_ptr`) to distinguish pointers from ordinary strings (sketched after the usage examples below).

### 4. Serialization

- Pointers are stored as strings with metadata (`texera_type: LARGE_BINARY`). Auto-conversion ensures UDFs always see `largebinary` instances, never raw strings.

## User API Usage

### 1. Creating & Writing (Output)

Use `LargeBinaryOutputStream` to stream large data into a new object.

```python
from pytexera import largebinary, LargeBinaryOutputStream

# Create a new handle
large_binary = largebinary()

# Stream data to S3
with LargeBinaryOutputStream(large_binary) as out:
    out.write(my_large_data_bytes)  # Supports bytearray, bytes, etc.
```

### 2. Reading (Input)

Use `LargeBinaryInputStream` to read data back. It supports all standard Python stream methods.

```python
from pytexera import LargeBinaryInputStream

with LargeBinaryInputStream(large_binary) as stream:
    # Option A: Read everything
    all_data = stream.read()

    # Option B: Chunked reading
    chunk = stream.read(1024)

    # Option C: Iteration
    for line in stream:
        process(line)
```
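To see the two halves together, here is a minimal round-trip sketch inside a UDF. It is an illustration only: the `payload` column name and the surrounding `ProcessTupleOperator` skeleton are assumptions for this example, not part of this PR.

```python
from pytexera import *

class ProcessTupleOperator(UDFOperatorV2):
    @overrides
    def process_tuple(self, tuple_: Tuple, port: int) -> Iterator[Optional[TupleLike]]:
        # Read the incoming large binary; the download begins lazily
        # on the first read() call.
        with LargeBinaryInputStream(tuple_["payload"]) as stream:
            data = stream.read()

        # Stream a transformed copy into a fresh handle; only the
        # pointer (URI) travels with the tuple.
        result = largebinary()
        with LargeBinaryOutputStream(result) as out:
            out.write(data.upper())  # placeholder transformation

        tuple_["payload"] = result
        yield tuple_
```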
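On the implementation side, the lazy client initialization and automatic bucket creation from Key Feature 1 can be approximated with plain `boto3`, roughly as below. The helper name, its parameters, and the error handling are illustrative assumptions; only the bucket name is taken from this PR.

```python
import boto3
from botocore.exceptions import ClientError

BUCKET = "texera-large-binaries"  # shared bucket named in this PR

_s3_client = None  # created on first use


def get_s3_client(endpoint_url: str, access_key: str, secret_key: str):
    """Return a cached boto3 S3 client, creating the bucket if missing."""
    global _s3_client
    if _s3_client is None:
        _s3_client = boto3.client(
            "s3",
            endpoint_url=endpoint_url,  # e.g. the MinIO endpoint from StorageConfig
            aws_access_key_id=access_key,
            aws_secret_access_key=secret_key,
        )
        try:
            # head_bucket raises ClientError when the bucket is absent
            _s3_client.head_bucket(Bucket=BUCKET)
        except ClientError:
            _s3_client.create_bucket(Bucket=BUCKET)
    return _s3_client
```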
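Likewise, the magic-suffix pointer encoding from Key Feature 3 amounts to tagging and untagging plain strings so pointers can live in ordinary Iceberg/Arrow string columns. The function names here are hypothetical; only the suffix itself comes from the PR.

```python
LARGE_BINARY_PTR_SUFFIX = "__texera_large_binary_ptr"  # magic suffix from this PR


def is_large_binary_pointer(value: str) -> bool:
    # Pointers are ordinary strings tagged with the magic suffix;
    # plain strings pass through untouched.
    return value.endswith(LARGE_BINARY_PTR_SUFFIX)


def encode_pointer(uri: str) -> str:
    # Hypothetical helper: tag an S3 URI so it is recognizable as a pointer.
    return uri + LARGE_BINARY_PTR_SUFFIX


def decode_pointer(value: str) -> str:
    # Hypothetical inverse: strip the suffix to recover the S3 URI.
    return value[: -len(LARGE_BINARY_PTR_SUFFIX)]
```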
## Dependencies

- `boto3`: required for S3 interactions.
- `StorageConfig`: reuses the existing configuration for endpoints and credentials.

## Future Direction

- Support for R UDF operators; see #4123.

### Any related issues, documentation, discussions?

Design: #3787

### How was this PR tested?

Tested by running this workflow multiple times and checking the MinIO dashboard to verify that six objects are created and deleted. Set the file scan operator's property to use any file larger than 2 GB.

[Large Binary Python.json](https://github.com/user-attachments/files/24062982/Large.Binary.Python.json)

### Was this PR authored or co-authored using generative AI tooling?

No.

---------

Signed-off-by: Chris <[email protected]>

Report URL: https://github.com/apache/texera/actions/runs/20547736582

With regards,
GitHub Actions via GitBox
