I ran into this today, when dropping a table CREATED LIKE an external
table, and ended up with a significant chunk of data unintentionally
deleted (moved to trash, thankfully!). The (potential) issue is this:
currently, CREATE TABLE LIKE doesn't maintain the EXTERNAL status of the
table, unless
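To make the failure mode concrete, a minimal sketch (table names and paths are made up, and exactly which versions are affected depends on the fix):
-- source table is EXTERNAL; its data lives outside the warehouse
CREATE EXTERNAL TABLE src_events (id STRING, payload STRING)
LOCATION '/data/events';
-- LIKE copies the schema but not the EXTERNAL flag here, so the
-- copy silently comes out as a managed table over the same path
CREATE TABLE copy_events LIKE src_events LOCATION '/data/events';
-- dropping a managed table deletes its data directory
-- (moved to trash if the trash interval is enabled)
DROP TABLE copy_events;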
(<https://cwiki.apache.org/confluence/display/Hive/AboutThisWiki#AboutThisWiki-Howtogetpermissiontoedit>).
-- Lefty
On Thu, Apr 30, 2015 at 7:07 PM, Andrew Mains
<andrew.ma...@kontagent.com> wrote:
Hi all,
Could I get edit access to the hive wiki in order to update the
hive/hbase integration docs
(https://cwiki.apache.org/confluence/display/Hive/HBaseIntegration).
Specifically I'd like to:
1. Add documentation about compound key support
(the only statement on the wiki right now is
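For context, the sort of compound key usage I'd want the docs to cover -- a sketch of the struct-key approach as I understand it, with made-up names and delimiter:
-- the row key is declared as a struct; the collection items
-- delimiter tells the SerDe how to split the raw key into fields
CREATE TABLE hbase_compound (
  key STRUCT<bucket:INT, ts:BIGINT>,
  val STRING
)
ROW FORMAT DELIMITED COLLECTION ITEMS TERMINATED BY '~'
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf:val");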
Filed https://issues.apache.org/jira/browse/HIVE-10545 for this; we're
planning on taking this up in the next couple of weeks.
On 3/30/15 4:48 PM, Andrew Mains wrote:
hive's hbase integration doesn't currently seem to support predicate
pushdown for queries over HBase snapshots.
Are you suggesting taking advantage of the sorted order to seek to the key
mentioned in a SARG
Pretty much, yes. It's essentially the same use case as predicate
pushdown for the live table case (already implemented), which converts
predicates into a scan, and we should be able to reuse a sig
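Concretely, the kind of query this affects (schema hypothetical):
-- against a live table, the key predicate below is already turned
-- into the scan's start/stop rows; against a snapshot it currently
-- means a full scan with the filtering done on the Hive side
SELECT val FROM hbase_events
WHERE key >= 'user100_' AND key < 'user101_';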
Hi all,
Looking at the current implementation on trunk, hive's hbase integration
doesn't currently seem to support predicate pushdown for queries over
HBase snapshots. Does this seem like a reasonable feature to add?
It would be nice to have relative feature parity between queries running
over
...problem already here. This InputFormat expects that
conf.setInt(FixedLengthInputFormat.FIXED_RECORD_LENGTH, recordLength);
is set. I haven't found any way to specify such a parameter for an
InputFormat, though. Do you have any hints on how to do it?
Ingo
On 18 Dec 2014, at 23:40, And
Hi Ingo,
Take a look at
https://hadoop.apache.org/docs/r2.3.0/api/org/apache/hadoop/mapred/FixedLengthInputFormat.html -- it
seems to be designed for use cases very similar to yours. You may need
to subclass it to make things work precisely the way you need (in
particular, to deal with the head
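On the "how do I pass the parameter" question, a sketch of what I'd try -- this assumes SET forwards arbitrary (non-hive.*) properties into the job conf, and the table, location, and record length here are made up:
-- FixedLengthInputFormat reads its length from the job conf key
-- "fixedlengthinputformat.record.length" (FIXED_RECORD_LENGTH)
CREATE EXTERNAL TABLE fixed_records (record STRING)
STORED AS
  INPUTFORMAT 'org.apache.hadoop.mapred.FixedLengthInputFormat'
  OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
LOCATION '/data/fixed_width';

SET fixedlengthinputformat.record.length=80;
SELECT COUNT(*) FROM fixed_records;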
Hi Atul,
The setting you pasted is for the metastore's authentication (setting to
false means SASL is disabled there). The setting you want is:
hive.server2.authentication – Authentication mode, default NONE. Options
are NONE, NOSASL, KERBEROS, LDAP, PAM and CUSTOM.
See https://cwiki.apach
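As a quick sanity check, SET with no value prints the setting HiveServer2 actually resolved (run from a beeline session):
-- prints e.g. "hive.server2.authentication=NONE"
SET hive.server2.authentication;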
Hard to say what the problem is just based on the stderr; try checking
hive's logs for more information:
https://cwiki.apache.org/confluence/display/Hive/GettingStarted#GettingStarted-ErrorLogs
Andrew
On 11/4/14, 9:43 PM, Jihyun Suh wrote:
I have a problem running HQL from a shell script.
I ca
We've been running hive 13 over CDH5 with MRv1. Things have worked
pretty much out of the box thus far--we haven't run into any
HDFS/mapreduce compatibility issues, for instance. The caveats Edward
mentioned are definitely worth taking note of; we haven't tried running
12 and 13 concurrently, a
If anyone needs it in the future, I've submitted a patch for this
feature here: https://issues.apache.org/jira/browse/HIVE-7805
On 7/21/14, 5:26 PM, Andrew Mains wrote:
Hi all,
We have a table in hive/HBase with a composite row key, the first
field of which is a "bucket". Sinc
Hi Felix,
Good question. Looking at the parsing code for column mappings in hive
13.1
(https://github.com/apache/hive/blob/release-0.13.1/hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseSerDe.java#L177),
there doesn't currently seem to be any support for escaping. Trunk looks to
have th
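To make the limitation concrete (names hypothetical): the mapping string is split on ',' and ':' with no escape mechanism, so a family or qualifier containing either character simply can't be expressed:
-- fine: plain family:qualifier entries
CREATE TABLE plain_mapping (key STRING, v STRING)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf:plain_name");
-- but there is no way to write a qualifier like "a:b" or "a,b" here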
Can you check the hive logs (which should be located in
/tmp/<your username>/hive.log)? See
https://cwiki.apache.org/confluence/display/Hive/GettingStarted#GettingStarted-ErrorLogs
There ought to be a more specific error there.
Andrew
On 8/6/14, 6:13 PM, Rahul Channe wrote:
Hi All,
I am getting following
André,
To my knowledge, your understanding is correct--given that both Hive and
HCatalog are pointing to the same metastore instance, all HCatalog table
operations should be
reflected in Hive, and vice versa. You should be able to use the Hive
CLI and hcat interchangeably to execute your DDL.
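A quick way to convince yourself (table name made up) -- create on one side and read it back on the other, since both talk to the same metastore:
-- issued through the HCatalog CLI, e.g. hcat -e "..."
CREATE TABLE interop_check (id INT, name STRING);

-- then from the Hive CLI against the same metastore:
SHOW TABLES 'interop_check';  -- the new table shows up
DESCRIBE interop_check;       -- schema matches what hcat created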
Agreed--as far as I can tell there isn't any support for this currently.
This JIRA (https://issues.apache.org/jira/browse/HIVE-3727, referenced
in http://hortonworks.com/blog/hbase-via-hive-part-1/) seems relevant,
but there's no recent work on it, and I imagine the patch included is
out of da
Hi all,
We have a table in hive/HBase with a composite row key, the first field
of which is a "bucket". Since the bucket is based on a hash, every query
we have on our data needs to search through each bucket, applying
start and stop row filters within each one. The most efficient
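To make the access pattern concrete (the schema here is a made-up stand-in for ours):
-- key = struct<bucket, user_id, ts>, where bucket = hash(user_id) % N;
-- a per-user query like this has to probe every bucket, applying a
-- start/stop row range inside each one instead of one global range
SELECT val
FROM bucketed_events
WHERE key.user_id = 'user42'
  AND key.ts BETWEEN 1400000000 AND 1404000000;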
Done. https://issues.apache.org/jira/browse/HIVE-7433
Andrew
On 7/16/14, 6:09 PM, Navis류승우 wrote:
My bad. Could you do that?
Thanks,
Navis
2014-07-17 9:15 GMT+09:00 Andrew Mains <andrew.ma...@kontagent.com>:
Hi all,
I'm currently experimenting with using the new HBaseKeyFactory interface
(implemented in https://issues.apache.org/jira/browse/HIVE-6411) to do
some custom serialization and predicate pushdown on our HBase schema.
Ideally, I'd like to be able to use the information from the
hbase.colu
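For reference, how I'm wiring the factory in -- a sketch; the factory class is made up, and "hbase.composite.key.factory" is the table property I understand HIVE-6411 to read:
-- com.example.MyKeyFactory is a hypothetical HBaseKeyFactory implementation
CREATE TABLE custom_keyed (
  key STRUCT<bucket:INT, ts:BIGINT>,
  val STRING
)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES (
  "hbase.columns.mapping" = ":key,cf:val",
  "hbase.composite.key.factory" = "com.example.MyKeyFactory"
);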
Hi,
We've run into this issue as well, and it is indeed annoying. As I
recall, the issue comes in not when the records are read off disk but
when hive deals with the records further down the line (I forget exactly
where).
I believe this issue is relevant:
https://issues.apache.org/jira/brow
One method could be to create a custom table off of a query with
a JSON serde (for instance, I've used
https://github.com/rcongiu/Hive-JSON-Serde).
Something like:
CREATE EXTERNAL TABLE my_tmp_table
ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
STORED AS TEXTFILE
LOCATION '
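Filling that in, a complete sketch (columns and location are made up):
CREATE EXTERNAL TABLE my_tmp_table (
  id STRING,
  attrs MAP<STRING,STRING>
)
ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
STORED AS TEXTFILE
LOCATION '/tmp/my_tmp_table';

-- populate it from a query; the files under LOCATION are then JSON text
INSERT OVERWRITE TABLE my_tmp_table
SELECT id, attrs FROM source_table;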
Hi Thomas,
Check out the developer guide and FAQ:
https://cwiki.apache.org/confluence/display/Hive/DeveloperGuide#DeveloperGuide-CompilingandRunningHive,
https://cwiki.apache.org/confluence/display/Hive/HiveDeveloperFAQ . The
instructions on the FAQ ought to work for the latest code (at least,
Hi all,
I'm having some issues working with records containing new lines after
deserialization. Some information:
1. The serialized records do not contain new lines (they are base64
encoded protobuf messages).
2. The deserialized records DO contain new lines (our SerDe base64
decodes them and e
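In case it helps anyone hitting the same thing, the workaround we're experimenting with is escaping the newlines at query time -- a sketch, with table and column names made up:
-- replaces each literal newline with the two characters '\' 'n';
-- the replacement needs four backslashes because Hive's string
-- literals and the regex replacement layer each consume one level
SELECT regexp_replace(payload, '\n', '\\\\n') AS payload_escaped
FROM decoded_events;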