OK, MapReduce is the execution engine, but I am talking about the file format. Even with 
Tez or Spark as the engine, the file format will always be built on the mapred.* 
and/or mapreduce.* APIs.
Aside from Hive, this is also the case in Spark and Flink, which use these APIs to 
access different storage systems (S3, HDFS, etc.) and file formats 
transparently.
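To illustrate why one format implementation can back both APIs: the old mapred-style reader uses a pull model that fills caller-supplied objects, while the new mapreduce-style reader owns its current key/value pair, and the former can be wrapped in the latter. The sketch below uses simplified stand-in interfaces (the real ones are org.apache.hadoop.mapred.RecordReader and org.apache.hadoop.mapreduce.RecordReader; all names here are hypothetical, not the actual Hadoop classes):

```java
import java.util.Arrays;
import java.util.Iterator;

public class ReaderAdapterSketch {

    // Old-style pull model: next() fills caller-supplied mutable objects.
    // Stand-in for org.apache.hadoop.mapred.RecordReader (simplified).
    interface OldStyleReader<K, V> {
        K createKey();
        V createValue();
        boolean next(K key, V value);
    }

    // New-style iterator model: the reader owns the current key/value pair.
    // Stand-in for org.apache.hadoop.mapreduce.RecordReader (simplified).
    interface NewStyleReader<K, V> {
        boolean nextKeyValue();
        K getCurrentKey();
        V getCurrentValue();
    }

    // Adapter: expose an old-style reader through the new-style interface,
    // which is how a single format can serve both APIs from one code base.
    static <K, V> NewStyleReader<K, V> adapt(OldStyleReader<K, V> old) {
        return new NewStyleReader<K, V>() {
            private final K key = old.createKey();
            private final V value = old.createValue();
            public boolean nextKeyValue() { return old.next(key, value); }
            public K getCurrentKey()      { return key; }
            public V getCurrentValue()    { return value; }
        };
    }

    // Toy old-style reader over a fixed list of lines, keyed by line number.
    static class LineReader implements OldStyleReader<long[], StringBuilder> {
        private final Iterator<String> lines =
            Arrays.asList("first", "second").iterator();
        private long pos = 0;
        public long[] createKey() { return new long[1]; }
        public StringBuilder createValue() { return new StringBuilder(); }
        public boolean next(long[] key, StringBuilder value) {
            if (!lines.hasNext()) return false;
            key[0] = pos++;
            value.setLength(0);
            value.append(lines.next());
            return true;
        }
    }

    public static void main(String[] args) {
        NewStyleReader<long[], StringBuilder> r = adapt(new LineReader());
        while (r.nextKeyValue()) {
            System.out.println(r.getCurrentKey()[0] + "\t" + r.getCurrentValue());
        }
    }
}
```

The reverse direction (new-style wrapped as old-style) works the same way, which is why engines like Spark and Flink can consume formats written against either package.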

> On 13. Sep 2017, at 20:53, Alan Gates <alanfga...@gmail.com> wrote:
> 
> I’m not aware of any plans in Hive to do any more work that uses Map Reduce
> as the execution engine, so I expect Hive will continue to use mapred.
> 
> Alan.
> 
>> On Wed, Sep 13, 2017 at 4:25 AM, Jörn Franke <zuinn...@gmail.com> wrote:
>> 
>> Dear all,
>> 
>> I have developed several custom input formats (e.g. for the Bitcoin
>> blockchain) including a HiveSerde, which are open source.
>> I plan to develop a HiveSerde for my HadoopOffice inputformat as well, but I
>> wonder if I should continue to use the mapred.* APIs or switch to
>> mapreduce.*
>> 
>> My inputformats support both APIs, but it seems that Hive is one of the
>> last projects (please correct me here) to use the mapred.* API.
>> 
>> Personally, I have no preference, since I support both.
>> 
>> Thank you.
>> 
>> All the best
>> 
