OK, MapReduce is the engine, but I am talking about the file format. Even with
Tez or Spark as the engine, the file format will always be built on mapred.*
and/or mapreduce.*
Aside from Hive, this is also the case in Spark and Flink, which leverage these
APIs to access different storages (S3, HDFS, etc.)
I’m not aware of any plans in Hive to do any more work that uses MapReduce
as the execution engine, so I expect Hive will continue to use mapred.
Alan.
On Wed, Sep 13, 2017 at 4:25 AM, Jörn Franke wrote:
> Dear all,
>
> I have developed several custom input formats (e.g. for the Bitcoin
> bloc
Dear all,
I have developed several custom input formats (e.g. for the Bitcoin
blockchain) including a HiveSerde, which are open source.
I plan to also develop a HiveSerde for my HadoopOffice input format, but I
wonder if I should continue to use the mapred.* APIs or if I should use
mapreduce.*
My inpu
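For anyone following along, the distinction the question turns on looks roughly like this. This is a minimal sketch, not a working format: it assumes the Hadoop libraries are on the classpath, the class names `OldApiFormat`/`NewApiFormat` and the NullWritable/BytesWritable key/value types are placeholders, and both split and record-reader logic are elided. One point worth knowing: Hive's `STORED AS INPUTFORMAT` clause expects a class implementing the older `org.apache.hadoop.mapred.InputFormat`.

```java
import java.io.IOException;
import java.util.Collections;
import java.util.List;

import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.NullWritable;

// Old API: org.apache.hadoop.mapred.* is interface-based. This is the shape
// Hive expects for a table's input format (STORED AS INPUTFORMAT '...').
class OldApiFormat
        implements org.apache.hadoop.mapred.InputFormat<NullWritable, BytesWritable> {

    @Override
    public org.apache.hadoop.mapred.InputSplit[] getSplits(
            org.apache.hadoop.mapred.JobConf job, int numSplits) throws IOException {
        return new org.apache.hadoop.mapred.InputSplit[0]; // split computation elided
    }

    @Override
    public org.apache.hadoop.mapred.RecordReader<NullWritable, BytesWritable> getRecordReader(
            org.apache.hadoop.mapred.InputSplit split,
            org.apache.hadoop.mapred.JobConf job,
            org.apache.hadoop.mapred.Reporter reporter) throws IOException {
        throw new UnsupportedOperationException("record reader elided in this sketch");
    }
}

// New API: org.apache.hadoop.mapreduce.* is abstract-class-based. This is what
// the "new" MapReduce client API uses, and what e.g. Spark's newAPIHadoopFile wraps.
class NewApiFormat
        extends org.apache.hadoop.mapreduce.InputFormat<NullWritable, BytesWritable> {

    @Override
    public List<org.apache.hadoop.mapreduce.InputSplit> getSplits(
            org.apache.hadoop.mapreduce.JobContext context)
            throws IOException, InterruptedException {
        return Collections.emptyList(); // split computation elided
    }

    @Override
    public org.apache.hadoop.mapreduce.RecordReader<NullWritable, BytesWritable> createRecordReader(
            org.apache.hadoop.mapreduce.InputSplit split,
            org.apache.hadoop.mapreduce.TaskAttemptContext context)
            throws IOException, InterruptedException {
        throw new UnsupportedOperationException("record reader elided in this sketch");
    }
}
```

So a format that only implements the mapreduce.* side cannot be handed to Hive directly; either implement the mapred.* interface as well, or keep the shared logic in one place and expose thin adapters for each API.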
On Sep 6, 2011, at 9:06 AM, Ashutosh Chauhan wrote:
> https://cwiki.apache.org/confluence/display/Hive/Roadmap lists
> mapred->mapreduce transition as one of the roadmap item.
> Is that still the plan of action? Or are we leaning towards mapred for a
> while. Supporting both simultaneously is a th
https://cwiki.apache.org/confluence/display/Hive/Roadmap lists the
mapred->mapreduce transition as one of the roadmap items.
Is that still the plan of action? Or are we leaning towards mapred for a
while? Supporting both simultaneously is a third option.
Ashutosh
An interesting fact I picked up at the Hadoop Hackathon today was that the
mapred API has been "undeprecated". This means that it may be OK to just leave
Hive on the old mapred API and spend cleanup energy elsewhere.
JVS