upon?
> Can anyone see any potential problems with this approach?
> Maybe I should be posting this to hadoop-common?
>
> Thanks in advance,
> Matt
>
>
> On Wed, May 9, 2012 at 7:11 PM, Jonathan Seidman <
> jonathan.seid...@gmail.com> wrote:
>
>> Varun
Varun – So yes, Hive stores the full URI to the NameNode in the metadata
for every table and partition. From my experience you're best off modifying
the metadata to point to the new NN, as opposed to trying to manipulate
DNS. Fortunately, this is fairly straightforward since there's mainly one
column to update.
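For example, assuming a MySQL-backed metastore (and with the old/new NameNode hostnames below as placeholders), the stored URIs can be rewritten roughly like this after taking a backup of the metastore database:

mysql> -- SDS.LOCATION holds the location for each table/partition
mysql> UPDATE SDS SET LOCATION = REPLACE(LOCATION, 'hdfs://old-nn:8020', 'hdfs://new-nn:8020');
mysql> -- depending on the Hive version, database locations may need the same treatment
mysql> UPDATE DBS SET DB_LOCATION_URI = REPLACE(DB_LOCATION_URI, 'hdfs://old-nn:8020', 'hdfs://new-nn:8020');

REPLACE() only changes rows that actually contain the old URI, so it's safe to run across the whole table.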
Farah – The easiest way to dump data to a file is with a query like the
following:
hive> INSERT OVERWRITE LOCAL DIRECTORY 'DIRECTORY_NAME' SELECT * FROM TABLE_NAME;
The drawback of this is that Hive uses ^A as the separator by default. In
the past what I found easiest was to just run a simple sed over the output
files to convert the delimiter to something more usable.
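Something along these lines, for instance (a rough sketch: assumes GNU sed, which understands the \x01 escape for ^A, and that you want commas; the output and table names are placeholders):

$ sed 's/\x01/,/g' DIRECTORY_NAME/* > table_name.csv

tr '\001' ',' works just as well if your sed doesn't support hex escapes.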
Farah – can you configure the remote server as a client machine? You would
just need to install Hadoop with a configuration pointing to your cluster,
and then install Hive. You'd then be able to execute all Hive commands
against your cluster. Note that you won't run any daemons on this node, so
you're essentially just using it as a client (gateway) node to submit work
to the cluster.
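As a rough sketch (Hadoop 1.x-style properties; the hostnames and ports are placeholders for your cluster), the client node's configuration only needs to point at the cluster:

<!-- core-site.xml on the client node -->
<property>
  <name>fs.default.name</name>
  <value>hdfs://namenode-host:8020</value>
</property>

<!-- mapred-site.xml on the client node -->
<property>
  <name>mapred.job.tracker</name>
  <value>jobtracker-host:8021</value>
</property>

With that in place, the hive CLI on that node plans queries locally and submits the MapReduce jobs to the cluster.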
Are you actually referring to RHive: https://github.com/nexr/RHive/wiki? If
so it looks like a very interesting project, but I haven't talked to anyone
actually using it yet. If it looks like a good fit for your particular
applications then the best thing would be to install it and start working with it.
Hey Bradford - from my experience that error occurs when there's a conflict
between the "fs.default.name" setting and the value in the
metastore.SDS.location column in the Hive metadata. For us this has occurred
when either migrating to a new cluster or changing the NN hostname. Not sure
how all this applies in your case, though.
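One way to spot the mismatch (a sketch assuming a MySQL-backed metastore; adjust the config path for your install) is to compare what the metastore has stored against what the cluster config says:

mysql> SELECT DISTINCT SUBSTRING_INDEX(LOCATION, '/', 3) FROM SDS;
$ grep -A1 fs.default.name /etc/hadoop/conf/core-site.xml

If the scheme/host/port coming back from SDS doesn't match the configured value, that's the conflict.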