I have checked out the latest Hive trunk and built it. I am now trying to
configure a MySQL metastore. After creating the metastore database, I ran the
script hive-schema-0.7.0.mysql.sql, which was the latest one in the list.
So why is it complaining about this CDS table?
2011-09-16 17:45
Some history:
The unit testing inside Hive is good at what it does. Essentially it
runs a Hive .q file and diffs the output against previous known runs.
This does some nice things for Hive:
1) We are sure the query planner/parser evaluates the query the same way.
2) We are sure the query returns the same results.
I used reflect to urldecode like below and it works great.
SELECT reflect("java.net.URLDecoder", "decode", "to be decoded value") from
footable limit 1;
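For a value that actually contains percent-escapes, the same reflection call can be used; a sketch (the input string is made up for illustration, and the two-argument decode(String, String) overload is used to pin the charset):

```sql
-- Decode a percent-encoded string via reflection; "UTF-8" selects the charset
SELECT reflect("java.net.URLDecoder", "decode", "hello%20world%21", "UTF-8")
from footable limit 1;
```

This should return hello world! for each row, since reflect resolves the overload by argument count and types.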
I have to start building UDFs at some point. I am deferring that for now.
Thank you Loren and Carl for the ideas and your time.
Thank you,
Chalc
Thanks, Yongqiang!
Could you please confirm my understanding of how to use block compression?
As of now, I am setting these properties before populating the table
that should contain compressed data:
SET io.seqfile.compression.type=BLOCK;
SET hive.exec.compress.output=true;
SET mapred.output.c
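For reference, a commonly seen combination of settings for block-compressed SequenceFile output looks like the sketch below. The codec choice and table names are assumptions (the original message is truncated); any codec installed on the cluster can be substituted:

```sql
-- Compress query output (assumption: GzipCodec; pick any installed codec)
SET hive.exec.compress.output=true;
SET io.seqfile.compression.type=BLOCK;
SET mapred.output.compression.codec=org.apache.hadoop.io.compress.GzipCodec;

-- Populate a table stored as SequenceFile; writes are block-compressed
INSERT OVERWRITE TABLE compressed_table
SELECT * FROM source_table;
```

Here compressed_table and source_table are placeholder names; the SET statements only affect the current session.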
There is no fix. The page is removed in new versions. This information is
really just a subset of what you could find from the Hadoop
JobTracker. It was nice because you were sure which jobs belonged to Hive.
I write my Hive select queries with some bogus comments like so.
select /* jid=5
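A complete query in that style might look like the sketch below (the table name and the jid=5 tag are arbitrary placeholders). The idea is that the comment travels with the query string, so the tag can be searched for when matching MapReduce jobs on the JobTracker back to the Hive queries that launched them:

```sql
-- Tag the query with a bogus comment; jid=5 is just a searchable marker
select /* jid=5 */ count(*) from footable;
```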