+1. I'd love to simply define a timer in my code (maybe metrics-scala?)
using Spark's metrics registry. Also maybe switch to the newer version
(io.dropwizard.metrics)?
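For illustration, here is a minimal sketch of the kind of user-defined timer I have in mind, written directly against the Dropwizard Metrics API (the io.dropwizard.metrics artifacts still use the com.codahale.metrics package). The registry here is standalone and the metric names are made up for the example; the point of the request is to hang this off Spark's own metrics registry instead:

import com.codahale.metrics.{MetricRegistry, Timer}

object TimerExample {
  // Standalone registry just for the example; ideally this would be
  // Spark's registry rather than one we create ourselves.
  val registry = new MetricRegistry()
  val requestTimer: Timer = registry.timer(MetricRegistry.name("myapp", "requests"))

  def timedWork(): Int = {
    val ctx = requestTimer.time() // start the timer
    try {
      // ... the code being measured ...
      42
    } finally {
      ctx.stop() // record the elapsed time in the registry
    }
  }
}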
On Thu, Aug 27, 2015 at 4:42 PM, Reynold Xin wrote:
> I'd like this to happen, but it hasn't been super high priority on
> anyone's list. [...] what needs to be changed. Maybe you can
> start by telling us what you need to change for every upgrade? Feel free to
> email me in private if this is sensitive and you don't want to share in a
> public list.
>
> On Thu, Aug 13, 2015 at 2:01 PM, Thomas Dudziak wrote:
Hi,
I have asked this before but didn't receive any comments; with the
impending release of 1.5 I wanted to bring this up again.
Right now, Spark is very tightly coupled with OSS Hive & Hadoop, which
causes me a lot of work every time there is a new version because I don't
run OSS Hive/Hadoop versions.
So I'm a little confused, has Hive 0.12 support disappeared in 1.4.0? The
release notes didn't mention anything, but the documentation doesn't list a
way to build for 0.12 anymore (
http://spark.apache.org/docs/latest/building-spark.html#building-with-hive-and-jdbc-support,
in fact it doesn't list [...])
[...] deployment?
>
> On Fri, Jun 12, 2015 at 7:18 PM, Thomas Dudziak wrote:
-1 to this, we use it with an old Hadoop version (well, a fork of an old
version, 0.23). That being said, if there were a nice developer api that
separates Spark from Hadoop (or rather, two APIs, one for scheduling and
one for HDFS), then we'd be happy to maintain our own plugins for those.
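To make this concrete, a rough and purely hypothetical sketch of the two APIs I mean (none of these traits or names exist in Spark today; they only illustrate the separation between scheduling and storage):

import java.io.{InputStream, OutputStream}

// Scheduling side: how Spark asks the cluster manager for resources.
trait SchedulingBackend {
  def requestExecutors(count: Int): Boolean
  def killExecutors(executorIds: Seq[String]): Boolean
}

// Storage side: the small slice of a distributed filesystem Spark actually needs.
trait StorageBackend {
  def open(path: String): InputStream
  def create(path: String): OutputStream
  def listFiles(path: String): Seq[String]
  def delete(path: String): Boolean
}

// A deployment with a forked Hadoop could then register its own implementations
// instead of rebuilding Spark against a specific Hadoop version.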
cheers