[...] things complicated later. So I think it will be beneficial to have
something quick up and running, get a better understanding of our needs and
gaps, and go from there.
The needed infra is already up on AWS, so as soon as we resolve DNS and key
exchange issues we can launch.

-Areg.

-----Original Message-----
From: Tanya Schlusser [mailto:ta...@tickel.net]
Sent: Thursday, February 7, 2019 4:40 PM
To: dev@arrow.apache.org
Subject: Re: Benchmarking dashboard proposal
Late, but there's a PR now with first-draft DDL (
https://github.com/apache/arrow/pull/3586).
Happy to receive any feedback!
I tried to think about how people would submit benchmarks, and added a
Postgraphile container for http-via-GraphQL.
If others have strong opinions on the data modeling, please [...]
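To make the http-via-GraphQL submission route concrete, here is a minimal sketch of posting a single result to a locally running Postgraphile endpoint. The port, the table name (benchmark_run), and the field names are assumptions for illustration; the real names come from whatever DDL lands in the PR.

```python
# Minimal sketch: submit one benchmark result to a local Postgraphile
# /graphql endpoint. Table/field/mutation names here are hypothetical;
# Postgraphile generates the actual names from the DDL in the PR.
import json
import urllib.request

GRAPHQL_URL = "http://localhost:5000/graphql"  # Postgraphile's default port

MUTATION = """
mutation ($input: CreateBenchmarkRunInput!) {
  createBenchmarkRun(input: $input) {
    benchmarkRun { id }
  }
}
"""

variables = {
    "input": {
        "benchmarkRun": {
            "benchmarkName": "BM_BuildDictionary",  # hypothetical columns
            "gitCommit": "abc1234",
            "value": 123.4,
            "unit": "ns",
        }
    }
}

request = urllib.request.Request(
    GRAPHQL_URL,
    data=json.dumps({"query": MUTATION, "variables": variables}).encode(),
    headers={"Content-Type": "application/json"},
)
print(urllib.request.urlopen(request).read().decode())
```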
I hope to make a PR with the DDL by tomorrow or Wednesday night—DDL along
with a README in a new directory `arrow/dev/benchmarking` unless directed
otherwise.
A "C++ Benchmark Collector" script would be super. I expect some
back-and-forth on this to identify naïve assumptions in the data model.
[...]
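Since the "C++ Benchmark Collector" script keeps coming up, here is a rough sketch of what one could look like: walk a build directory for the Google Benchmark executables and parse their JSON output. The build path is an assumption; the --benchmark_format=json flag is standard Google Benchmark.

```python
# Sketch of a "C++ Benchmark Collector": run every Google Benchmark
# executable in a build tree and gather the results from its JSON report.
import json
import pathlib
import subprocess

BUILD_DIR = pathlib.Path("cpp/build/release")  # assumed build location


def collect():
    results = []
    for exe in BUILD_DIR.glob("*-benchmark"):
        proc = subprocess.run(
            [str(exe), "--benchmark_format=json"],
            capture_output=True, check=True, text=True,
        )
        report = json.loads(proc.stdout)
        for bench in report["benchmarks"]:
            results.append({
                "suite": exe.name,
                "name": bench["name"],
                "real_time": bench["real_time"],
                "time_unit": bench["time_unit"],
            })
    return results


if __name__ == "__main__":
    for record in collect():
        print(record)
```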
hi folks,
I'm curious where we currently stand on this project. I see the
discussion in https://issues.apache.org/jira/browse/ARROW-4313 --
would the next step be to have a pull request with .sql files
containing the DDL required to create the schema in PostgreSQL?
I could volunteer to write the [...]
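If the next step is indeed a PR with .sql files, a local test drive could be as simple as the following sketch, which applies every DDL file in the proposed dev/benchmarking directory to a scratch PostgreSQL database (file names and connection settings are assumptions):

```python
# Load the proposed .sql DDL files into a local PostgreSQL instance.
# Directory, database name, and credentials are assumptions for testing.
import pathlib
import psycopg2  # pip install psycopg2-binary

DDL_DIR = pathlib.Path("dev/benchmarking")

conn = psycopg2.connect("dbname=benchmarks user=postgres host=localhost")
with conn, conn.cursor() as cur:
    for sql_file in sorted(DDL_DIR.glob("*.sql")):
        cur.execute(sql_file.read_text())  # run each DDL file in order
conn.close()
```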
I don't want to be the bottleneck and have posted an initial draft data
model in the JIRA issue https://issues.apache.org/jira/browse/ARROW-4313
It should not be a problem to get content into a form that would be
acceptable for either a static site like ASV (via CORS queries to a
GraphQL/REST interface) [...]
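For the static-site option, the dashboard would just issue read-only queries over HTTP. A sketch of such a query is below, shown in Python for brevity; a static page would make the same fetch from the browser via CORS, and the query/field names are hypothetical until the data model settles.

```python
# Read-only query against the GraphQL endpoint, of the kind a static
# dashboard page could issue via CORS. Field names are hypothetical.
import json
import urllib.request

QUERY = """
{
  allBenchmarkRuns(first: 10) {
    nodes { benchmarkName gitCommit value unit }
  }
}
"""

request = urllib.request.Request(
    "http://localhost:5000/graphql",
    data=json.dumps({"query": QUERY}).encode(),
    headers={"Content-Type": "application/json"},
)
print(json.loads(urllib.request.urlopen(request).read()))
```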
hi folks,
I'd like to propose some kind of timeline for getting a first
iteration of a benchmark database developed and live, with scripts to
enable one or more initial agents to start adding new data on a daily
/ per-commit basis. I have at least 3 physical machines where I could
immediately set up [...]
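As a rough sketch of what one of those per-commit/daily agents might do, the loop below updates a local Arrow checkout, records the current commit, and marks where the collect-and-submit steps would go; the repository path and the benchmark/submit steps are placeholders, not existing scripts.

```python
# Skeleton of a daily/per-commit benchmark agent for one machine.
# The repo path is assumed; the benchmark/submit steps are placeholders.
import datetime
import subprocess

REPO = "arrow"  # assumed local clone of apache/arrow


def current_commit():
    return subprocess.run(
        ["git", "-C", REPO, "rev-parse", "HEAD"],
        capture_output=True, check=True, text=True,
    ).stdout.strip()


def run_agent_once():
    subprocess.run(["git", "-C", REPO, "pull", "--ff-only"], check=True)
    commit = current_commit()
    timestamp = datetime.datetime.utcnow().isoformat()
    # Placeholder: run the collectors and post results to the database here.
    print(f"would benchmark {commit} at {timestamp} and submit the results")


if __name__ == "__main__":
    run_agent_once()
```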
I don't think there is one but I just created
https://lists.apache.org/thread.html/278e573445c83bbd8ee66474b9356c5291a16f6b6eca11dbbe4b473a@%3Cdev.arrow.apache.org%3E
On Mon, Jan 21, 2019 at 10:35 AM Tanya Schlusser wrote:
>
> Areg,
>
> If you'd like help, I volunteer! No experience benchmarking [...]
Sorry, copy-paste failure: https://issues.apache.org/jira/browse/ARROW-4313
On Mon, Jan 21, 2019 at 11:14 AM Wes McKinney wrote:
>
> I don't think there is one but I just created
> https://lists.apache.org/thread.html/278e573445c83bbd8ee66474b9356c5291a16f6b6eca11dbbe4b473a@%3Cdev.arrow.apache.org%3E
Areg,
If you'd like help, I volunteer! No experience benchmarking, but tons of
experience databasing; I can mock the backend (database + http) as a
starting point for discussion if this is the way people want to go.
Is there a Jira ticket for this that I can jump into?
On Sun, Jan 20, 2019 at 3:24 [...]
hi Areg,
This sounds great -- we've discussed building a more full-featured
benchmark automation system in the past but nothing has been developed
yet.
Your proposal about the details sounds OK; the single most important
thing to me is that we build and maintain a very general purpose
database schema [...]
I'll see if I can figure out why the benchmarks at
https://pandas.pydata.org/speed/arrow/ aren't being updated this weekend.
On Fri, Jan 18, 2019 at 2:34 AM Uwe L. Korn wrote:
> Hello,
>
> note that we have(had?) the Python benchmarks continuously running and
> reported at https://pandas.pydata.org/speed/arrow/ [...]
We also have some JS benchmarks [1]. Currently they're only really run on
an ad-hoc basis to manually test major changes but it would be great to
include them in this.
[1] https://github.com/apache/arrow/tree/master/js/perf
On Fri, Jan 18, 2019 at 12:34 AM Uwe L. Korn wrote:
> Hello,
>
> note that we have(had?) the Python benchmarks continuously running [...]
Hello,
note that we have(had?) the Python benchmarks continuously running and reported
at https://pandas.pydata.org/speed/arrow/. Seems like this stopped in July 2018.
Uwe
On Fri, Jan 18, 2019, at 9:23 AM, Antoine Pitrou wrote:
>
> Hi Areg,
>
> That sounds like a good idea to me. Note our benchmarks are currently [...]
Hi Areg,
That sounds like a good idea to me. Note our benchmarks are currently
scattered across the various implementations. The two that I know of:
- the C++ benchmarks are standalone executables created using the Google
Benchmark library, aptly named "*-benchmark" (or "*-benchmark.exe" on
Windows) [...]
+1 It makes sense to track the performance of Arrow, because I think the
Arrow project differs from other projects in that its goal is efficient
data exchange between systems/languages.
Melik-Adamyan, Areg wrote on Fri, Jan 18, 2019 at 2:14 PM:
> Hello,
>
> I want to restart/attach to the discussions for creating [...]