In addition, there is a lot more coming with LLAP.

http://www.slideshare.net/HadoopSummit/llap-subsecond-analytical-queries-in-hive

There is also no fine-grained access control natively in Spark.
LLAP would help with that as well -
http://www.slideshare.net/HadoopSummit/finegrained-security-for-spark-and-hive
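For context, switching a Hive session onto LLAP looks roughly like this (a sketch, assuming Hive 2.x with Tez and the LLAP daemons already configured; the property values shown are the common ones, not the only ones):

```sql
-- LLAP runs on top of the Tez engine
SET hive.execution.engine=tez;
-- Push eligible work into the long-lived LLAP daemons
-- (recognized modes include none, map, all, only)
SET hive.llap.execution.mode=all;
```

With those set, subsequent queries in the session are candidates for sub-second execution via the cached, persistent LLAP daemons rather than per-query containers.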



On Sun, Aug 7, 2016 at 7:24 PM, Marcin Tustin <mtus...@handybook.com> wrote:

> I think that's right. My testing (not very scientific) puts it on par
> with Redshift for the datasets I use.
>
>
> On Sunday, August 7, 2016, Edward Capriolo <edlinuxg...@gmail.com> wrote:
>
>> A few entities were going to "kill/take out/be better than" Hive:
>> I seem to remember HadoopDB, Impala, Redshift, VoltDB...
>>
>> But apparently Hive is still around, and probably faster:
>> http://www.slideshare.net/hortonworks/hive-on-spark-is-blazing-fast-or-is-it-final
>>
>>
>> On Sun, Aug 7, 2016 at 9:49 PM, 理 <wwl...@126.com> wrote:
>>
>>> In my opinion, multiple engines are not an advantage but the
>>> reverse: they disperse the dev energy.
>>> Consider the activity: Spark SQL supports all of TPC-DS without
>>> modified syntax, but Hive cannot.
>>> Consider the tech: DAG, vectorization, etc. Spark SQL also has
>>> these, and its code seems more efficient.
>>>
>>>
>>> regards
>>> On 08/08/2016 08:48, Will Du wrote:
>>>
First, Hive supports different engines. Look forward to its dynamic
engine switch.
Second, look forward to Hadoop 3rd gen; MapReduce on memory will fill
the gap.
>>>
>>> Thanks,
>>> Will
>>>
>>> On 2016年8月7日, at 20:27, 理 <wwl...@126.com> wrote:
>>>
>>> hi,
Spark SQL improves so fast, and Hive and Spark SQL are similar, so
will Hive lose out or not?
>>>
>>> regards
>>>
>>>
>>
