Works great! Thanks a lot!
On Tue, Jul 6, 2021 at 5:31 PM Sutou Kouhei wrote:
> Hi,
>
> Ah, we changed the package name that sets up the APT source.
>
> Could you try the following?
>
>
> sudo apt update
> sudo apt install -y -V ca-certificates lsb-release wget
> wget https://apache.jfrog.io/artifact…
Hi,
Ah, we changed the package name that sets up the APT source.
Could you try the following?
sudo apt update
sudo apt install -y -V ca-certificates lsb-release wget
wget https://apache.jfrog.io/artifactory/arrow/$(lsb_release --id --short | tr 'A-Z' 'a-z')/apache-arrow-archive-keyring-latest-$(lsb_…
Thanks for the info. I did try the install instructions, but I only got as
far as this:
+ lsb_release --id --short
+ id=Ubuntu
+ lsb_release --codename --short
+ codename=xenial
+ tr A-Z a-z
+ echo Ubuntu
+ wget
https://apache.jfrog.io/artifactory/arrow/ubuntu/apache-arrow-apt-source-latest-xenial.deb
Hi,
It's not temporary.
See
https://blog.conan.io/2021/03/31/Bintray-sunset-timeline.html
for details.
Could you try the instructions described in
https://arrow.apache.org/install/ ?
Note that we dropped support for Ubuntu Xenial because it
reached EOL. We don't provide newer packages for Xenial.
Hello,
I realize that the newer packages are on jfrog.io. Until last week, I was
still able to use bintray.com for Xenial packages of 3.0.0. Today
https://apache.bintray.com/arrow/ returns Forbidden. Is this temporary? If
not, are these Xenial packages available somewhere else?
Thank you!
Rares
Wow, building a static blog uses libffi?
In any case, if it's merely the website build that fails with this
issue, I would suggest building it under emulation (using `arch -x86_64
...` perhaps?).
On 06/07/2021 at 19:00, Wes McKinney wrote:
I've been trying to build the website on an M1 Mac…
I've been trying to build the website on an M1 Mac and I'm running
into libffi-related crashes in the Ruby gem toolchain that I'm not
able to diagnose myself:
https://gist.github.com/wesm/635697a8904c91c562892991d8269291
Let me know if anyone has successfully built the site or has other
recommendations.
>
> Right, I had wanted to focus the discussion on Flight as I think schema
> evolution or multiplexing streams (more so the latter) is a property of the
> transport and not the stream format itself. If we are leaning towards just
> schema evolution then maybe it makes sense to discuss it for the I…
Hi all,
Our biweekly sync call is tomorrow at
https://meet.google.com/vtm-teks-phx. All are welcome to join. Notes
will be shared with the mailing list afterward.
Ian
It's time again for our quarterly ASF board report. I created a Google
doc here where you are all free to suggest edits:
https://docs.google.com/document/d/1yOVUiIHvdC_3guX7bsBrBmsGGbKaC4KLpfMI4HIrX3M/edit?usp=sharing
This is due next Wednesday, July 14, so we have a little bit of time.
Thanks
Wes
When the streaming compression interfaces were originally implemented
in 2018 [1], there was not a distinction between LZ4 "raw" compression
(which is Compression::LZ4) and LZ4 "frame" compression
(Compression::LZ4_FRAME). So in that patch, while the LZ4 raw
compression method was being used for one…
Note that the "iter_batches" method on ParquetFile already gives you a
way to consume the Parquet file progressively with a stream of
RecordBatches without creating a single Table for the full Parquet
file (which will already leverage the row groups of the Parquet file).
The example in the JIRA use…
I left a comment in Jira, but I agree that having a faster method to
"box" Arrow array values as Python objects would be useful in a lot of
places. Then these common C++ code paths could be used to "tupleize"
record batches reasonably efficiently.
On Tue, Jul 6, 2021 at 3:08 PM Alessandro Molina wrote:
I guess that doing it at the Parquet reader level might allow the
implementation to better leverage row groups, without needing to keep the
whole Table in memory while you are iterating over the data. While the
current Jira issue seems to suggest the implementation for Table once it's
already fully available…