Hi,
I'm taking on the task [1] of adding residual filter support to the swiss join in
Acero and have come up with this PR [2].
Although the PR is still in draft status while I add more specialized tests
and benchmarks, all of the fundamental code and comments are complete, and it
passes all existing tests.
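For readers skimming the thread: a residual filter is the non-key part of a join
predicate that still has to be evaluated on every matched row pair after the
equi-key hash match. The snippet below is only a conceptual sketch using plain
pyarrow table operations with made-up data; it is not the Acero C++ API the PR
touches.

    import pyarrow as pa
    import pyarrow.compute as pc

    # Made-up example tables; "id" is the equi-join key, while "qty" and
    # "threshold" carry the extra (residual) condition.
    left = pa.table({"id": [1, 2, 3], "qty": [10, 20, 30]})
    right = pa.table({"id": [1, 2, 3], "threshold": [5, 25, 25]})

    # Conceptually, a residual filter means: match rows on the join key,
    # then keep only the matches that also satisfy the non-key predicate.
    joined = left.join(right, keys="id", join_type="inner")
    result = joined.filter(pc.field("qty") > pc.field("threshold"))
    print(result.to_pydict())  # only rows where qty > threshold survive

In a real engine the predicate is pushed into the join itself rather than applied
afterwards, which is what the PR is about; the join-then-filter form above only
shows the semantics.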
With 3 +1 votes (3 binding), the release is approved.
The release is available here:
It has also been released to crates.io.
Thank you to everyone who helped verify this release.
On 05/01/2024 13:29, Raphael Taylor-Davies wrote:
Hi,
I would like to propose a release of Apache Arrow Rust Object Store
Hi all, I was just catching up on email threads and wanted to give a few
historical comments on this.
When we were assembling the Arrow PMC and committing to do the project in
2015, standardizing Arrow-over-REST was always on the TODO list; at that
time we didn't have the IPC
The vote passes with 4 binding +1 votes and 1 non-binding. Thanks, everyone!
I'll take care of the release tasks.
On Sat, Jan 6, 2024, at 19:06, Sutou Kouhei wrote:
> +1
>
> I ran the following on Debian GNU/Linux sid:
>
> JAVA_HOME=/usr/lib/jvm/default-java \
> dev/release/verify-release-candi
Thanks for the hint.
After reading through the geoarrow spec, I think I agree that this is probably
the best approach.
As far as I can tell, all that is required is a standardized set of metadata
tags and then some well-implemented compute functions that can easily project
the raw to physical i
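As a rough sketch of what those metadata tags could look like in practice, the
snippet below attaches Arrow extension-style field metadata to an
interleaved-coordinate column with pyarrow. The "geoarrow.point" tag name and
the CRS payload are assumptions drawn from the geoarrow spec, not something this
thread settles.

    import pyarrow as pa

    # Interleaved x/y coordinates stored as fixed-size lists of doubles;
    # the field carries Arrow extension metadata so readers can recognise it.
    coords = pa.array([[0.0, 0.0], [1.0, 2.0], [3.5, 4.5]],
                      type=pa.list_(pa.float64(), 2))
    geom_field = pa.field(
        "geometry",
        coords.type,
        metadata={
            "ARROW:extension:name": "geoarrow.point",            # assumed tag
            "ARROW:extension:metadata": '{"crs": "EPSG:4326"}',  # assumed payload
        },
    )
    table = pa.Table.from_arrays([coords], schema=pa.schema([geom_field]))

    # A consumer that understands the tag can project the raw storage into the
    # physical geometry representation it needs; one that does not still sees
    # an ordinary fixed_size_list<double>[2] column.
    print(table.schema.field("geometry").metadata)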
The Apache Arrow community is pleased to announce the 0.9.0 release of the
Apache Arrow ADBC libraries. It includes 34 resolved GitHub issues ([1]).
The release is available now from [2] and [3].
Release notes are available at:
https://github.com/apache/arrow-adbc/blob/apache-arrow-adbc-0.9.0/C
[x] Close the GitHub milestone/project
[x] Add the new release to the Apache Reporter System
[x] Upload source release artifacts to Subversion
[x] Create the final GitHub release
[x] Update website
[x] Upload wheels/sdist to PyPI
[x] Publish Maven packages
[x] Update tags for Go modules
[x] Deploy
> Where I am struggling a little bit is to understand at what level those
compute functions should be implemented. As far as I can tell, when I load
a dictionary-encoded Arrow array into a Pandas data frame or make a query
using DataFusion, the user can then just operate as if they are working
directly w
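A small pyarrow illustration of the behaviour described above, with made-up
data: a dictionary-encoded column keeps exposing its logical values, so
downstream tools such as pandas receive the decoded data (as a Categorical)
without the user ever handling indices or dictionaries themselves.

    import pyarrow as pa

    # A dictionary-encoded column still exposes its logical (decoded) values.
    arr = pa.array(["apple", "banana", "apple", "banana"]).dictionary_encode()
    print(arr.type)  # dictionary<values=string, indices=int32, ordered=0>

    table = pa.table({"fruit": arr})
    df = table.to_pandas()       # arrives in pandas as a Categorical column
    print(df["fruit"].tolist())  # ['apple', 'banana', 'apple', 'banana']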
Hi Ruoxi,
I am taking a look at the PR, and will continue the discussion on Github.
With Regards,
Vibhatha Abeykoon