Hi Folks,
I have tried a column name like (Order Details) with a space, and it causes an error.
Is there any way to specify a column name with a space?
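For reference, the kind of definition in question looks roughly like this (the table name below is just an example); Hive 0.13+ reportedly allows such names when they are wrapped in backticks:
-- Illustrative sketch only; the table name is made up.
-- Hive 0.13+ supports quoted identifiers, so a column name containing
-- a space generally needs backticks around it.
CREATE TABLE order_sketch (
  `Order Details` STRING,
  order_id INT
);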
Regards,
Renuka N
Hi Rajat, I used an alternative to the DataNucleus plugin in IntelliJ.
Try creating a run configuration as the following picture shows, and make
sure you have datanucleus in your module's dependencies.
Hope it helps.
*孙若曦*
2015-06-18 3:32 GMT+08:00 Rajat Jain :
> Hi,
>
> I want to run Hi
Hi guys,
Let me ask a quick question.
Is there something like a maximum number of columns for a Hive table?
Thanks,
Shimpei
It works~~ But I see some ERRORs and a deadlock:
2015-06-18 09:06:06,509 ERROR [test.oracle-22]: txn.CompactionTxnHandler
(CompactionTxnHandler.java:findNextToCompact(194)) - Unable to select next
element for compaction, ERROR: could not serialize access due to concurrent
update
2015-06-18
Thank you! I will try
r7raul1...@163.com
From: Alan Gates
Date: 2015-06-18 08:33
To: user
Subject: Re: delta file compact take no effect
See
https://cwiki.apache.org/confluence/display/Hive/Hive+Transactions#HiveTransactions-Configuration
Compaction is initiated by the thrift metastore serve
See
https://cwiki.apache.org/confluence/display/Hive/Hive+Transactions#HiveTransactions-Configuration
Compaction is initiated by the Thrift metastore server. You need to set
the values labeled "metastore" on the page above in the hive-site.xml of
your metastore server.
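As a rough illustration (exact values depend on your setup), the metastore-side settings from that page look something like this in hive-site.xml:
<!-- Sketch only: metastore-side compaction settings from the wiki page above.
     The worker thread count here is just an example value. -->
<property>
  <name>hive.compactor.initiator.on</name>
  <value>true</value>
</property>
<property>
  <name>hive.compactor.worker.threads</name>
  <value>1</value>
</property>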
Alan.
r7raul1...@163
Hi all,
I have a pretty big Hive query. I'm doing a join over 3 Hive tables, each with
thousands of rows, and grouping the join by several columns. In the
Hive shell this query only reaches about 80%; after about 1400 seconds it is
canceled with the following error:
Status: Failed
Vertex
In the wiki page,
https://cwiki.apache.org/confluence/display/Hive/Setting+Up+HiveServer2
It still advises disabling the HDFS and local filesystem caches. This should
no longer be needed, given that HIVE-4501 has been resolved since 0.13.0.
Is this correct?
referenced configuration:
fs.hdfs.impl.disabl
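The settings in question are presumably the filesystem-cache flags, which look roughly like this when disabled:
<!-- Sketch of the cache-disable settings this presumably refers to;
     the question above is whether they are still required after HIVE-4501. -->
<property>
  <name>fs.hdfs.impl.disable.cache</name>
  <value>true</value>
</property>
<property>
  <name>fs.file.impl.disable.cache</name>
  <value>true</value>
</property>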
Hi,
> Caused by: java.lang.ClassCastException:
>org.apache.hadoop.hive.common.type.HiveVarchar cannot be cast to
>java.lang.String
>at org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatchCtx.addPartitionColsToBatch(VectorizedRowBatchCtx.java:566)
Is it a partition column the one m
Hi,
I have one table with VARCHAR and CHAR datatypes. While reading data through
Hive, I am getting the error below:
Diagnostic Messages for this Task:
Error: java.io.IOException: java.io.IOException: java.lang.RuntimeException:
java.lang.ClassCastException: org.apache.hadoop.hive.common.type
Hi,
I can see that two new counters (RECORDS_IN/RECORDS_OUT) have been added
in Hive 0.14.
Prior to this release, which counters could be used to get the records read
and written by a Hive job? I ask because I noticed that in Hive 0.14, for a few
Hive jobs, I see map_input_records but
I was able to fix that issue, but got another error:
[127.0.0.1:1] hive> CREATE TABLE IF NOT EXISTS pagecounts_hbase (rowkey
STRING, pageviews STRING, bytes STRING) STORED BY
'org.apache.hadoop.hive.hbase.HBaseStorageHandler' WITH SERDEPROPERTIES
('hbase.columns.mapping' = ':key,f:c1,f:c2') TBL
Hi,
What's wrong with my settings?
[127.0.0.1:1] hive> CREATE TABLE IF NOT EXISTS pagecounts_hbase (rowkey
STRING, pageviews STRING, bytes STRING) STORED BY
'org.apache.hadoop.hive.hbase.HBaseStorageHandler' WITH SERDEPROPERTIES
('hbase.columns.mapping' = ':key,f:c1,f:c2') TBLPROPERTIES ('
hba
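A complete statement of this shape usually ends with a TBLPROPERTIES clause naming the backing HBase table, roughly like this (the hbase.table.name value is only an assumption):
-- Sketch only; the hbase.table.name value 'pagecounts' is an assumption.
CREATE TABLE IF NOT EXISTS pagecounts_hbase (rowkey STRING, pageviews STRING, bytes STRING)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ('hbase.columns.mapping' = ':key,f:c1,f:c2')
TBLPROPERTIES ('hbase.table.name' = 'pagecounts');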
My Hive version is hive-cdh5.4.0.
After following these steps, the exception is thrown:
# hive
CREATE TABLE test1 (name string) PARTITIONED BY (pt string);
ALTER TABLE test1 ADD PARTITION (pt='1');
ALTER TABLE test1 CHANGE name name1 string;
ALTER TABLE test1 CHANGE name name1 string cascade;
then throw excep
My use case is to query time series data ingested into an HBase table
containing a web page name or URL as the row key and related properties as
column qualifiers. The properties for a web page are dynamic, i.e., the
column qualifiers are dynamic for a given timestamp.
I would like to create a Hive manag
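One common approach for dynamic qualifiers is to map the whole column family into a Hive MAP column over the existing HBase table; a rough sketch, with all table, family, and column names assumed:
-- Sketch only; the table, column family, and column names are assumptions.
CREATE EXTERNAL TABLE pages_hbase (
  page_url   STRING,                 -- the HBase row key (page name or URL)
  properties MAP<STRING, STRING>     -- all qualifiers in the 'props' family
)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ('hbase.columns.mapping' = ':key,props:')
TBLPROPERTIES ('hbase.table.name' = 'pages');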
Hi Edward,
Can we do the same/similar thing for a parquet file? Any pointer?
Regards,
Mohammad
On Tuesday, June 16, 2015 2:35 PM, Edward Capriolo
wrote:
https://github.com/edwardcapriolo/filecrush
On Tue, Jun 16, 2015 at 5:05 PM, Chagarlamudi, Prasanth
wrote:
Hello, I am looking for a