Thanks Alan.

One crude workaround would be to copy the data from the ACID table into a
plain (non-transactional) table and point Spark at that table instead.
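Something along these lines, as a rough sketch — the table `payees` comes from the thread below, while the staging table name `payees_plain` is just illustrative:

```sql
-- Hypothetical staging table; the name and storage format are illustrative.
-- CTAS produces a plain (non-ACID) table, which Spark can read directly
-- without needing to understand ACID delta files.
CREATE TABLE payees_plain
STORED AS ORC
AS SELECT * FROM payees;
```

The copy obviously goes stale, so it would need refreshing whenever the ACID table changes.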

This is basically a Spark optimiser issue, not a problem with the engine itself.

My Hive runs on the Spark query engine (Hive on Spark) and everything works fine there.

HTH

Dr Mich Talebzadeh



LinkedIn
https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw



http://talebzadehmich.wordpress.com


*Disclaimer:* Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.



On 1 August 2016 at 23:47, Alan Gates <alanfga...@gmail.com> wrote:

> There’s no way to force immediate compaction.  If there are compaction
> workers in the metastore that aren’t busy they should pick that up
> immediately.  But there isn’t an ability to create a worker thread and
> start compacting.
>
> Alan.
>
> > On Aug 1, 2016, at 14:50, Mich Talebzadeh <mich.talebza...@gmail.com>
> wrote:
> >
> >
> > Rather than queuing it
> >
> > hive> alter table payees COMPACT 'major';
> > Compaction enqueued.
> > OK
> >
> > Thanks
> >
> > Dr Mich Talebzadeh
> >
> > LinkedIn
> https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
> >
> > http://talebzadehmich.wordpress.com
> >
> > Disclaimer: Use it at your own risk. Any and all responsibility for any
> loss, damage or destruction of data or any other property which may arise
> from relying on this email's technical content is explicitly disclaimed.
> The author will in no case be liable for any monetary damages arising from
> such loss, damage or destruction.
> >
>
>
