Hi all,
Reminder that our biweekly call is tomorrow at
https://meet.google.com/vtm-teks-phx. All are welcome to join. Notes will
be sent out to the mailing list afterward.
Neal
+1 (binding)
I ran the following on Debian GNU/Linux sid:
* INSTALL_NODE=0 \
CUDA_TOOLKIT_ROOT=/usr \
ARROW_CMAKE_OPTIONS="-DgRPC_SOURCE=BUNDLED -DBoost_NO_BOOST_CMAKE=ON" \
dev/release/verify-release-candidate.sh source 1.0.1 0
* dev/release/verify-release-candidate.sh b
Also, the fact that Ray has forked Plasma means their implementation
becomes potentially incompatible with Arrow's. So even if we keep
Plasma in our codebase, we can't guarantee interoperability with Ray.
Regards
Antoine.
On 18/08/2020 at 19:51, Wes McKinney wrote:
> I do not think there is
I do not think there is an urgency to remove Plasma from the Arrow
codebase (as it currently does not cause much maintenance burden), but
the reality is that Ray has already hard-forked and so new maintainers
will need to come out of the woodwork to help support the project if
it is to continue hav
Hi Uwe,
I opened a PR against the arrow-site repo.
https://github.com/apache/arrow-site/pull/72
Best
On Wed, Jul 22, 2020 at 10:38 AM Uwe L. Korn wrote:
> Hello Niranda,
>
> cool to see this. Feel free to open a PR to add it to the Powered By list
> on https://arrow.apache.org/powered_by/
>
> Cheers
>
We are very interested in Plasma as a stand-alone project. The fork would
hit us doubly hard, because it reduces both the appeal of an Arrow-specific
use case and the value of our planned Ray integration.
We are effectively developing a database for network activity data that
runs with Arrow as data pla
It is my personal opinion that actual UDFs registered with DataFusion
should take a known set of input types and a single return type (e.g.
sum_i32 --> i32; a rough sketch follows the list below). I think this would:
1. Simplify the implementation of both the DataFusion optimizer and the UDFs
2. Make it easier for UDF writers a
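To make the fixed-signature idea concrete, here is a minimal Rust sketch of
a hypothetical UDF registry. The type and function names are illustrative
only and are not DataFusion's actual API; the point is just that the planner
can resolve sum_i32 --> i32 from the declared signature alone.

use std::collections::HashMap;

// Hypothetical data types a planner would know about (illustrative only).
#[derive(Debug, Clone, Copy, PartialEq)]
enum DataType {
    Int32,
    Float64,
}

// A UDF declared with a fixed set of input types and a single return type,
// so the optimizer can type-check call sites without inspecting the body.
struct ScalarUdf {
    name: &'static str,
    input_types: Vec<DataType>,
    return_type: DataType,
    // A real engine would operate on Arrow arrays; plain i64 scalars keep
    // this sketch self-contained.
    func: fn(&[i64]) -> i64,
}

fn main() {
    let mut registry: HashMap<&str, ScalarUdf> = HashMap::new();

    // Register sum_i32: (Int32, Int32) --> Int32, as in the example above.
    registry.insert(
        "sum_i32",
        ScalarUdf {
            name: "sum_i32",
            input_types: vec![DataType::Int32, DataType::Int32],
            return_type: DataType::Int32,
            func: |args| args.iter().sum(),
        },
    );

    let udf = &registry["sum_i32"];
    // The signature alone is enough to resolve the output type of the call.
    assert_eq!(udf.return_type, DataType::Int32);
    println!("{}{:?} -> {:?} = {}",
             udf.name, udf.input_types, udf.return_type, (udf.func)(&[2, 3]));
}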
+1 (binding) based on verifying the Rust implementation only.
On Tue, Aug 18, 2020 at 3:43 AM Antoine Pitrou wrote:
>
> +1.
>
> Source verification went fine on Ubuntu 18.04, with CUDA enabled, except
> JavaScript, where some tests failed.
>
> Binary verification went fine on Ubuntu 18.04.
>
> Regards
Sorry to thread hijack.
> One key point is that we perform our own dictionary encoding of the data
> before generating the Arrow file, so basically all of the dimensional data
> in the Arrow file itself consists of just numbers (integers) that represent
> keys into an array of strings stored outs
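For anyone skimming, here is a tiny self-contained Rust sketch of the scheme
described in the quote above (my own naming, not the poster's actual code):
the strings are dictionary-encoded up front, only the integer keys would go
into the Arrow file, and the string table is kept outside it.

use std::collections::HashMap;

// Replace each string with an integer key into an external dictionary.
fn dictionary_encode(values: &[&str]) -> (Vec<u32>, Vec<String>) {
    let mut dictionary: Vec<String> = Vec::new();
    let mut index: HashMap<&str, u32> = HashMap::new();
    let mut keys = Vec::with_capacity(values.len());

    for &v in values {
        let key = *index.entry(v).or_insert_with(|| {
            dictionary.push(v.to_string());
            (dictionary.len() - 1) as u32
        });
        keys.push(key);
    }
    (keys, dictionary)
}

fn main() {
    let hosts = ["web01", "db01", "web01", "web02", "db01"];
    let (keys, dictionary) = dictionary_encode(&hosts);

    // `keys` is what would land in the Arrow file as an integer column;
    // `dictionary` is the string table stored outside the file.
    println!("keys       = {:?}", keys);       // [0, 1, 0, 2, 1]
    println!("dictionary = {:?}", dictionary); // ["web01", "db01", "web02"]
}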
My thoughts on the points raised so far:
* Does supporting Big Endian increase the reach of Arrow by a lot?
Probably not a significant amount, but it does provide one more avenue of
adoption.
* Does it increase code complexity?
Yes, I agree this is a concern (a small sketch of the byte-swapping this
involves follows below). The PR in question did not seem t
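For context on the complexity point, here is a minimal Rust sketch of the
kind of conditional byte-swapping cross-endian support tends to require.
This is illustrative only, not Arrow's actual implementation: an Int32
buffer written by a little-endian producer has to be swapped when it is
interpreted on a big-endian host.

// Decode a little-endian i32, whatever the host's native byte order is.
fn read_i32_little_endian(bytes: &[u8; 4]) -> i32 {
    let native = i32::from_ne_bytes(*bytes);
    if cfg!(target_endian = "big") {
        // Buffer is little-endian but this host is big-endian: swap bytes.
        native.swap_bytes()
    } else {
        native
    }
}

fn main() {
    // 1 encoded as a little-endian producer would emit it.
    let bytes = 1i32.to_le_bytes();
    assert_eq!(read_i32_little_endian(&bytes), 1);
    println!("decoded: {}", read_i32_little_endian(&bytes));
}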
Arrow Build Report for Job nightly-2020-08-18-0
All tasks:
https://github.com/ursa-labs/crossbow/branches/all?query=nightly-2020-08-18-0
Failed Tasks:
- test-conda-cpp-valgrind:
URL:
https://github.com/ursa-labs/crossbow/branches/all?query=nightly-2020-08-18-0-github-test-conda-cpp-valgrind
+1.
Source verification went fine on Ubuntu 18.04, with CUDA enabled, except
JavaScript, where some tests failed.
Binary verification went fine on Ubuntu 18.04.
Regards
Antoine.
On 18/08/2020 at 01:13, Krisztián Szűcs wrote:
> Hi,
>
> I would like to propose the following release candidate