Hey Wes,
The database in question accepts columnar chunks of "regular" binary data over
the network, one of the sources of which is parquet.
Thus, data only comes out of parquet on my side, and I was wondering how to get
it out as "regular" binary columns. Something like tobytes() for an Arrow
We have some stuff in Dremio that we've planned on open sourcing but
haven't yet done so. We should try to get that out for others to consume.
On Jan 7, 2018 11:49 AM, "Uwe L. Korn" wrote:
> Has anyone made progress on the JDBC adapter yet?
>
> I recently came across a lot of projects with good JDB
Hi Eli,
I'm wondering what kind of API you would want, if the perfect one
existed. If I understand correctly, you are embedding objects in a
BYTE_ARRAY column in Parquet, and need to do some post-processing as
the data goes in / comes out of Parquet?
Thanks,
Wes
On Sat, Jan 6, 2018 at 8:37 AM, E
Just saw this. Thanks Wes. Will send it along tomorrow if there are no further comments.
On Jan 2, 2018 2:18 PM, "Siddharth Teotia" wrote:
> +1. Thanks, Wes.
>
> On Tue, Jan 2, 2018 at 12:10 PM, Holden Karau
> wrote:
>
> > Would it make sense to mention the other Apache projects using/planning
> to
> > use
Hey all,
Does anyone want to help draft a board report? Just noticed it is due soon.
Thanks
Jacques
Jim Crist created ARROW-1980:
Summary: [Python] Race condition in `write_to_dataset`
Key: ARROW-1980
URL: https://issues.apache.org/jira/browse/ARROW-1980
Project: Apache Arrow
Issue Type: Bug
Wes McKinney created ARROW-1979:
---
Summary: [JS] JS builds hanging in es2015:umd tests
Key: ARROW-1979
URL: https://issues.apache.org/jira/browse/ARROW-1979
Project: Apache Arrow
Issue Type: Bug