Yeah, I agree reflection is the best solution. Whenever we use
reflection, we should clearly document in the code which YARN API
version corresponds to which code path. I'm guessing since YARN is
adding new features... we'll just have to do this over time.
- Patrick
On Fri, Jul 25, 2014 at 3:35 PM,
Actually, reflection is probably a better, lighter-weight approach for this.
An extra project brings more overhead for something simple.
On Fri, Jul 25, 2014 at 3:09 PM, Colin McCabe wrote:
> So, I'm leaning more towards using reflection for this. Maven profiles
> could work, but it's tough since we have new stuff coming in 2.4, 2.5, etc.
So, I'm leaning more towards using reflection for this. Maven profiles
could work, but it's tough since we have new stuff coming in 2.4, 2.5,
etc., and the number of profiles will multiply quickly if we have to do it
that way. Reflection is the approach HBase took in a similar situation.
best
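For what it's worth, here is a minimal sketch of the reflection pattern
being discussed. The helper and the method-name lookup are illustrative
assumptions, not Spark's actual YARN code:

    import scala.util.{Failure, Success, Try}

    object YarnCompat {
      // Invoke a zero-argument method on `obj` if this YARN version
      // provides it; return None on older versions so the caller can fall
      // back to the legacy path. Each call site should document which YARN
      // version introduced the method it looks up.
      def invokeIfPresent(obj: AnyRef, methodName: String): Option[AnyRef] =
        Try(obj.getClass.getMethod(methodName)) match {
          case Success(m) => Some(m.invoke(obj)) // newer YARN: method exists
          case Failure(_) => None                // older YARN: method absent
        }
    }

This keeps a single build artifact working across YARN versions, at the
cost of losing compile-time checking on the reflective calls.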
I have a similar issue with SPARK-1767. There are basically three ways to
resolve the issue:
1. Use reflection to access classes newer than 0.21 (or whatever the oldest
version of Hadoop is that Spark supports); see the sketch after this list.
2. Add a build variant (in Maven this would be a profile) that deals with
this.
3. Aut
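For option 1, a rough sketch of the version check, using a placeholder
class name rather than an actual Hadoop class:

    // Detect whether a class introduced in a newer Hadoop release is on
    // the classpath, and pick the code path accordingly.
    val hasNewApi: Boolean =
      try {
        Class.forName("org.apache.hadoop.example.NewFeature") // placeholder
        true
      } catch {
        case _: ClassNotFoundException => false
      }

    if (hasNewApi) {
      // call the newer API (via reflection, so this still compiles
      // against the oldest supported Hadoop version)
    } else {
      // fall back to what the oldest supported Hadoop version provides
    }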
Hi,
A couple of months ago, I made a pull request to fix
https://issues.apache.org/jira/browse/SPARK-1825.
My pull request is here: https://github.com/apache/spark/pull/899
But that pull request has problems:
- It is Hadoop 2.4.0+ only. It won't compile on earlier versions.
- The