bq. If and only if they take the Hadoop class path at face value.
Many applications don’t because of conflicting dependencies and
instead import specific jars.

We do make the assumption that applications need to pick up all the
dependencies (either automatically or manually). The situation is
similar to adding a new dependency into hdfs in a minor release.
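To make that concrete, something like the following downstream
pom.xml is what I have in mind. This is only a sketch, assuming the
HDFS-6200 layout where hadoop-hdfs declares hadoop-hdfs-client as a
transitive dependency; the version number is illustrative:

    <!-- Existing downstream dependency; unchanged across the split. -->
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-hdfs</artifactId>
      <version>2.8.0</version>
      <!-- hadoop-hdfs-client arrives transitively, so the client
           classes stay on the classpath under their old names. -->
    </dependency>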

Maven / Gradle obviously help, but I'd love to hear more about how
you get it to work. In trunk, hadoop-env.sh adds 118 jars to the
class path. Are you manually importing 118 jars for every single
application?
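For comparison, the Maven route I'm imagining is a single top-level
dependency, with exclusions where transitive jars conflict. Again a
sketch; the version is illustrative and guava is just a common example
of a conflicting transitive dependency:

    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-client</artifactId>
      <version>2.7.1</version>
      <exclusions>
        <!-- Drop a transitively-pulled jar that clashes with the
             application's own version. -->
        <exclusion>
          <groupId>com.google.guava</groupId>
          <artifactId>guava</artifactId>
        </exclusion>
      </exclusions>
    </dependency>

Running mvn dependency:tree shows everything that comes in
transitively, which is how you'd audit those 118 jars without
importing them by hand.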

On Wed, Nov 11, 2015 at 3:09 PM, Haohui Mai <ricet...@gmail.com> wrote:
> bq. currently pulling in hadoop-client gives downstream apps
> hadoop-hdfs-client, but not hadoop-hdfs server side, right?
>
> Right now hadoop-client pulls in hadoop-hdfs directly to ensure a
> smooth transition. Maybe we can revisit that decision in 2.9 / 3.x?
>
> On Wed, Nov 11, 2015 at 3:00 PM, Steve Loughran <ste...@hortonworks.com> wrote:
>>
>>> On 11 Nov 2015, at 22:15, Haohui Mai <ricet...@gmail.com> wrote:
>>>
>>> bq.  it basically makes the assumption that everyone recompiles for
>>> every minor release.
>>>
>>> I don't think that the statement holds. HDFS-6200 keeps classes in the
>>> same package. hdfs-client becomes a transitive dependency of the
>>> original hdfs jar.
>>>
>>> Applications continue to work without recompilation, as the classes
>>> will keep the same names and will be available on the classpath. They
>>> have the option of switching to depend only on hdfs-client to
>>> minimize their dependencies when they are comfortable.
>>>
>>> I'm not claiming that there are no bugs in HDFS-6200, but just like
>>> with other features, we discover bugs and fix them continuously.
>>>
>>> ~Haohui
>>>
>>
>> currently pulling in hadoop-client gives downstream apps hadoop-hdfs-client, 
>> but not hadoop-hdfs server side, right?
