Any help will be much appreciated. Thanks
On Tue, Nov 17, 2015 at 2:39 PM, Sanjeev Verma
wrote:
> Thanks Elliot, Eugene
> I am able to see the base file created in one of the partitions; it seems
> the compactor kicked in and created it, but it has not created base files
> in the rest of the partitions.
> Make sure hive.compactor.worker.threads
> <https://cwiki.apache.org/confluence/display/Hive/Configuration+Properties#ConfigurationProperties-hive.compactor.worker.threads>
> is > 0.
>
> Also, see
> https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL#LanguageManualDDL-AlterTable/PartitionCompact
> on
> how to trigger compaction manually.
>
> *Eugene*
>
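Putting Eugene's advice together, here is a minimal sketch. The table and
partition names are made up, and the compactor properties really belong in
hive-site.xml on the metastore; they are shown as SET commands only for
readability:

-- Compactions are only scheduled when the metastore runs the initiator
-- and has at least one worker thread (values here are hypothetical):
SET hive.compactor.initiator.on=true;
SET hive.compactor.worker.threads=2;

-- Manually trigger a major compaction on one partition; a major
-- compaction is what folds the accumulated delta files into a base file:
ALTER TABLE my_acid_table PARTITION (ds='2015-11-17') COMPACT 'major';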
I have enabled Hive transactions and am able to see the delta files
created for some of the partitions, but I do not see any base file
created yet. It seems strange to me to see so many delta files without
any base file. Could somebody let me know when the base file gets
created?
Thanks
I am creating a table from two partitioned Parquet tables and getting a
duplicate column error. Any idea what's going wrong here?
create table sample_table AS select * from parquet_table par inner
join parquet_table_counter ptc ON ptc.user_id=par.user_id;
FAILED: SemanticException [Error 100
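The duplicate column comes from the select *: the join result carries
user_id from both sides, and CTAS refuses to create a table with two
columns of the same name. A sketch of one workaround, selecting columns
explicitly (the non-key column names below are placeholders):

create table sample_table AS
select par.user_id, par.col_a, ptc.col_b
from parquet_table par
inner join parquet_table_counter ptc ON ptc.user_id = par.user_id;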
Even with enough heap, my HiveServer2 is going out of memory. I enabled
a heap dump on error, which produced a 650MB dump, although I have
HiveServer2 configured with an 8GB heap.
Here is the stack trace of the thread that went OOM; could anybody let
me know why it is throwing OOM?
"pool-2-thread-4"
.(OutOfMemoryError.java:48)"
>
> which def. indicates that it's not very happy memory-wise. I would def.
> recommend bumping up the memory and seeing if it helps. If not, we can
> debug further from there.
>
> On Tue, Sep 8, 2015 at 12:17 PM, Sanjeev Verma
> wrote:
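For reference, a sketch of how that bump is usually done, assuming your
distribution's startup scripts honor HADOOP_HEAPSIZE from hive-env.sh
(the 12288 figure is arbitrary):

# hive-env.sh -- value is in MB, so this raises HS2 from 8GB to 12GB
export HADOOP_HEAPSIZE=12288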
What does this exception imply here? How do I identify the problem?
Thanks
On Tue, Sep 8, 2015 at 10:44 PM, Sanjeev Verma
wrote:
> We have an 8GB HS2 Java heap; we have not tried bumping it.
>
> On Tue, Sep 8, 2015 at 8:14 PM, kulkarni.swar...@gmail.com <
> kulkarni.swar...@gmail.com> wrote:
We have an 8GB HS2 Java heap; we have not tried bumping it.
On Tue, Sep 8, 2015 at 8:14 PM, kulkarni.swar...@gmail.com <
kulkarni.swar...@gmail.com> wrote:
> How much memory have you currently provided to HS2? Have you tried bumping
> that up?
>
> On Mon, Sep 7, 2015 at 1:09
> ... our problem?
>
> [1] https://issues.apache.org/jira/browse/HIVE-10410
>
> On Thu, Aug 20, 2015 at 6:12 PM, Sanjeev Verma
> wrote:
>
>> We are using hive-0.13 with hadoop1.
>>
>> On Thu, Aug 20, 2015 at 11:49 AM, kulkarni.swar...@gmail.com <
>> kulkarni.swar...@gmail.com> wrote:
We are using hive-0.13 with hadoop1.
On Thu, Aug 20, 2015 at 11:49 AM, kulkarni.swar...@gmail.com <
kulkarni.swar...@gmail.com> wrote:
> Sanjeev,
>
> Can you tell me more details about your Hive version, Hadoop version, etc.?
>
> On Wed, Aug 19, 2015 at 1:35 PM, Sanjeev Verma wrote:
Can somebody give me some pointers on what to look at?
On Wed, Aug 19, 2015 at 9:26 AM, Sanjeev Verma
wrote:
> Hi
> We are experiencing a strange problem with HiveServer2: in one of the
> jobs it hits the GC overhead limit exceeded error from a mapred task
> and hangs, even with enough heap available.
Hi
We are experiencing a strange problem with HiveServer2: in one of the
jobs it hits the GC overhead limit exceeded error from a mapred task and
hangs, even with enough heap available. We are not able to identify what
is causing this issue. Could anybody help me identify it and let me know
what pointers I need to look at?
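Not an answer from the thread, but on the hive-0.13/hadoop1 stack
described above, the usual first knob for a mapred-side "GC overhead
limit exceeded" is the task JVM heap; a sketch with an arbitrary 2GB
value:

-- Per-session bump of the hadoop1-era task heap before running the job:
SET mapred.child.java.opts=-Xmx2048m;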