Answered my own question, no there is not. The way to do it is to modify
the DB_LOCATION_URI field in metastore.DBS (at least if you're using MySQL).
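For anyone hitting the same thing, a hedged sketch of that metastore edit (MySQL assumed; the old/new hostnames and port below are placeholders, not from the thread — back up the metastore database first):

```sql
-- Placeholders: oldhost/newhost and the port are hypothetical.
-- Run against the Hive metastore database after taking a backup.
UPDATE DBS
   SET DB_LOCATION_URI = REPLACE(DB_LOCATION_URI,
                                 'hdfs://oldhost:8020',
                                 'hdfs://newhost:8020')
 WHERE DB_LOCATION_URI LIKE 'hdfs://oldhost:8020%';
```

Table and partition locations live in the metastore's SDS.LOCATION column and can be rewritten the same way.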
On Mon, Jun 30, 2014 at 5:14 PM, Jon Bender wrote:
Hey all,
I'm on Hive 0.10.0 on one of my clusters. We had a namenode hostname
change, so I'm trying to point all of our tables, partitions and databases
to the new locations.
When I describe database mydb, the location shows up as
"hdfs:///user/hive/warehouse/mydb.db", and I want to set it
to "h
Hi there,
I'm trying to pass some external properties to a UDF. In the MapReduce
world I'm used to extending Configured in my classes, but in my UDF class,
a newly initialized Configuration or HiveConf object doesn't inherit any of
those properties. I see it in the Job Configuration
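For context, one way a property typically gets into the session in the first place is via SET (the property and UDF names below are hypothetical; whether the UDF can see it depends on how it reads the job configuration — constructing a fresh Configuration/HiveConf only loads the defaults from the classpath, which matches the behavior described above):

```sql
-- Hypothetical property and UDF names, for illustration only.
SET my.custom.prop=some-value;
SELECT my_udf(col) FROM my_table;
```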
> If you run some MR jobs (like the Pi example) and look at the job
> tracker, are you seeing all your TTs getting used?
>
> On Aug 15, 2011, at 10:47 AM, Jon Bender wrote:
>
> It's actually just an uncompressed UTF-8 text file.
>
> This was essentially the create table clause:
> CREA
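The quoted CREATE statement is cut off above; for reference, a typical uncompressed-text external table DDL looks roughly like this (columns, delimiter, and location are hypothetical, not the poster's actual statement):

```sql
-- All names, columns, and the location are hypothetical.
CREATE EXTERNAL TABLE my_table (
  id  INT,
  msg STRING
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
STORED AS TEXTFILE
LOCATION '/user/hive/external/my_table';
```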
> Is the original file compressed with GZip or BZip? Those file formats
> aren’t splittable, so they get assigned to one mapper.
>
> On Aug 15, 2011, at 10:23 AM, Jon Bender wrote:
Hello,
I have external tables in Hive stored in a single flat text file. When I
execute queries against it, all of my jobs are run as a single map task,
even on very large tables.
What steps do I need to take to ensure that these queries are split up and
pushed out to multiple TTs? Do I need to
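For an uncompressed text file the mapper count follows from the input splits, so one hedged thing to try from the Hive session (values are illustrative only, using the MapReduce-era mapred.* property names):

```sql
-- Illustrative values; property names are the old mapred.* ones.
SET mapred.max.split.size=134217728;  -- cap each split at ~128 MB
SET mapred.map.tasks=10;              -- a hint only; the splits decide the real count
```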
Hey all,
Just wondering what the best way is to rename specific Hive table
partitions. Is there some HiveQL command for this, or will I need to insert
into new partitions to reflect the new naming convention?
Cheers,
Jon
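For what it's worth, sufficiently recent Hive versions do support renaming a partition in place via ALTER TABLE ... PARTITION ... RENAME TO PARTITION, without reloading data. A sketch with hypothetical table and partition names:

```sql
-- Table name and partition values are hypothetical.
ALTER TABLE my_table PARTITION (dt='2014-01-01')
RENAME TO PARTITION (dt='20140101');
```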