Matt – changing the DNS resolved the Hive errors, but led to other issues
which I'm afraid I can't remember right now. I just remember the change
broke something else, so the best course seemed to be to fix the metadata.
This of course doesn't mean you'll hit the same issue, but on the other
hand, if you're using, say, MySQL as the metastore, you can fix the metadata
with a couple of simple queries.
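
For example, something along these lines (assuming the stock MySQL
metastore schema, where table and partition locations live in SDS.LOCATION
and database locations in DBS.DB_LOCATION_URI; table and column names can
vary between Hive versions, so check your schema and back up the metastore
database first):

  -- the old-nn/new-nn host:port values are placeholders for your cluster
  UPDATE SDS SET LOCATION =
    REPLACE(LOCATION, 'hdfs://old-nn:8020', 'hdfs://new-nn:8020');
  UPDATE DBS SET DB_LOCATION_URI =
    REPLACE(DB_LOCATION_URI, 'hdfs://old-nn:8020', 'hdfs://new-nn:8020');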

On Mon, Jun 11, 2012 at 5:19 PM, Matthew Byrd <mb...@acunu.com> wrote:

> Hi Jon,
>
> I've just encountered the same issue.
> I was wondering if you would be so kind as to elaborate on why you'd be
> best off manipulating the metadata as opposed to trying to manipulate the
> DNS?
>
> I had a go at having the NameNode use a DNS alias, and the Hive metadata
> did indeed contain this alias rather than a hostname.
> So when I changed to a new NameNode and changed the alias as well,
> everything seemed to work fine.
>
> I'm just wondering if there isn't something bad lurking underneath this
> approach?
> Is using DNS aliases for the NameNode/JobTracker something that people in
> the Hadoop world do, or frown upon?
> Can anyone see any potential problems with this approach?
> Maybe I should be posting this to hadoop-common?
>
> Thanks in advance,
> Matt
>
>
> On Wed, May 9, 2012 at 7:11 PM, Jonathan Seidman <
> jonathan.seid...@gmail.com> wrote:
>
>> Varun – So yes, Hive stores the full URI to the NameNode in the metadata
>> for every table and partition. From my experience you're best off modifying
>> the metadata to point to the new NN, as opposed to trying to manipulate
>> DNS. Fortunately, this is fairly straightforward since there's mainly one
>> column you need to modify, and, assuming you're using something like
>> MySQL, it will only require a global search-and-replace on the URI in this
>> column. I
>> don't remember the exact table that contains this info, but if you browse
>> the metastore tables you should find a LOCATION column which contains the
>> NN URI that you need to change.
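>>
>> If it helps, one quick way to see which NameNode URIs are currently
>> recorded (again assuming the standard MySQL metastore schema, where that
>> column is SDS.LOCATION) is something like:
>>
>>   -- list the distinct hdfs://host:port prefixes stored for tables/partitions
>>   SELECT DISTINCT SUBSTRING_INDEX(LOCATION, '/', 3) FROM SDS;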
>>
>>
>> On Wed, May 9, 2012 at 11:14 AM, varun kumar <varun....@gmail.com> wrote:
>>
>>> Hi All,
>>>
>>> I have moved the NameNode from one server to another after a hardware
>>> crash.
>>>
>>> After configuring the new NameNode, when I execute a Hive query, the
>>> error below is shown:
>>>
>>> bin/hive -e "insert overwrite table pokes select a.* from invites a
>>> where a.ds='2008-08-15';"
>>> Hive history
>>> file=/tmp/Bhavesh.Shah/hive_job_log_Bhavesh.Shah_201112021007_2120318983.txt
>>> Total MapReduce jobs = 2
>>> Launching Job 1 out of 2
>>> Number of reduce tasks is set to 0 since there's no reduce operator
>>> Starting Job = job_201112011620_0004, Tracking URL =
>>> http://x.x.x.b:50030/jobdetails.jsp?jobid=job_201112011620_0004
>>> Kill Command = C:\cygwin\home\Bhavesh.Shah\hadoop-0.20.2\/bin/hadoop job
>>> -Dmapred.job.tracker=localhost:9101 -kill job_201112011620_0004
>>> 2011-12-02 10:07:30,777 Stage-1 map = 0%, reduce = 0%
>>> 2011-12-02 10:07:57,796 Stage-1 map = 100%, reduce = 100%
>>> Ended Job = job_201112011620_0004 with errors
>>> FAILED: Execution Error, return code 2 from
>>> org.apache.hadoop.hive.ql.exec.MapRedTask
>>>
>>> I have noticed that it is trying to communicate with the old host. I am
>>> unable to troubleshoot where I have gone wrong in setting up the new
>>> Hadoop NameNode.
>>>
>>> Can you please suggest why Hive is not able to communicate with the new
>>> NameNode?
>>>
>>>
>>>
>>>
>>> --
>>> Regards,
>>> Varun Kumar.P
>>>
>>>
>>
>
