ime than what we expected, is there any way that we can boost the
> performance? Also, in spite of turning the property on, when we try to
> create dynamic partitions for multiple years of data at a time we again
> run into a heap error. How can we handle this problem? Please
insert into events values ('2', 'Sam', 1234, 30);
insert into events values ('3', 'Jeff', 1234, 50);
insert into events values ('4', 'Ted', 1234, 60);
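For reference, a minimal table definition these inserts would run against; the column names and types here are assumptions inferred from the values, since the original CREATE TABLE is not shown in the thread:

```sql
-- Hypothetical schema inferred from the insert statements above;
-- column names are assumptions, not from the original thread.
CREATE TABLE events (
  id     STRING,
  name   STRING,
  ts     INT,
  amount INT
);
```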
I realize select * and select s are not all that interesting in this
context but what
when it comes to "highly partitioned" tables.
>
> Any thoughts on this issue would be greatly appreciated.
>
> Thanks in advance,
> Pradeep
>
--
Slava Markeyev | Engineering | Upsight
Find me on LinkedIn <http://www.linkedin.com/in/slavamarkeyev>
anna start Spark, I need to include its libraries in the
> PATH, and the conflicts seem inevitable.
>
>
>
> On Mon, Jun 8, 2015 at 12:09 PM, Slava Markeyev <
> slava.marke...@upsight.com> wrote:
>
>> It sounds like you are running into a jar conflict between the
at
org.datanucleus.store.rdbms.ConnectionFactoryImpl.generateDataSources(ConnectionFactoryImpl.java:238)
> at
> org.datanucleus.store.rdbms.ConnectionFactoryImpl.initialiseDataSources(ConnectionFactoryImpl.java:131)
> at
> org.datanucleus.store.rdbms.ConnectionFactoryImpl.<init>(ConnectionFactoryImpl.java:85)
>
>
> at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> Caused by: java.lang.RuntimeException: Unable to instantiate
> org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
> at
> org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1523)
the partition creation step?
>
> Thanks,
> Chris
>
> On 4/13/15 10:59 PM, Slava Markeyev wrote:
>
>> This is something I've encountered when doing ETL with Hive and having it
>> create tens of thousands of partitions. The issue
>> is that each partition needs
>
> Refs - Here are the parameters that I used:
>
> export HADOOP_HEAPSIZE=16384
>
> set PARQUET_FILE_SIZE=268435456;
>
> set parquet.block.size=268435456;
>
> set dfs.blocksize=268435456;
>
> set parquet.compression=SNAPPY;
>
> SET hive.exec.dynamic.partition.mode=nonstrict;
>
> SET hive.exec.max.dynamic.partitions=50;
>
> SET hive.exec.max.dynamic.partitions.pernode=5;
>
> SET hive.exec.max.created.files=100;
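Taken together, these settings would govern a dynamic-partition load along these lines; a minimal sketch, where the table and column names (`events_staging`, `events_partitioned`, `event_date`) are hypothetical and not from the thread. Note that `hive.exec.max.dynamic.partitions=50` and a per-node limit of 5 are quite low: a multi-year load that creates more partitions than these caps will fail, so they usually need to be raised, not lowered, when hitting partition-count errors.

```sql
-- Hypothetical dynamic-partition insert; names are assumptions.
SET hive.exec.dynamic.partition=true;
SET hive.exec.dynamic.partition.mode=nonstrict;

INSERT OVERWRITE TABLE events_partitioned PARTITION (event_date)
SELECT id, name, ts, amount, event_date
FROM events_staging;  -- the partition column must come last in the SELECT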
>
> Thank you very much!
>
> Tianqi Tong
>
>> >
>> >On Fri, Mar 27, 2015 at 3:10 PM, @Sanjiv Singh
>> >wrote:
>> >
>> >> May I know why you want to do so?
>> >>
>> >> Currently there is no command or direct way to do that, but I can
>> suggest
>> >> wor
you are creating a Hive/Impala table when the CSV file has some
> values with a COMMA in between; it is like
>
> sree,12345,"payment made,but it is not successful"
>
> I know the OpenCSV SerDe is there, but it is not available in lower
> versions of Hive
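On Hive 0.14 and later, the built-in OpenCSV SerDe handles quoted fields like the row above; a minimal sketch, where the table name `payments` and its columns are hypothetical:

```sql
-- Requires Hive 0.14+; on older versions a RegexSerDe is a common fallback.
-- Table and column names here are assumptions, not from the thread.
CREATE TABLE payments (name STRING, id STRING, note STRING)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.OpenCSVSerde'
WITH SERDEPROPERTIES (
  "separatorChar" = ",",
  "quoteChar"     = "\""
)
STORED AS TEXTFILE;
```

One caveat: OpenCSVSerde exposes every column as STRING regardless of the declared type, so numeric columns typically need a CAST (or a view over the table) downstream.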
'INSERT OVERWRITE'?
>
> Thanks!
>