That's the summary of what I'm getting when I run the command:
DFS Used%: 27%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
On Thu, May 22, 2014 at 12:22 PM, Nagarjuna Vissarapu <
nagarjuna.v...@gmail.com> wrote:
> It displays the usage details
It displays the usage details
On Thu, May 22, 2014 at 12:20 PM, Sreenath wrote:
> OK, what is the result you are expecting once I run this command?
OK, what is the result you are expecting once I run this command?
On Thu, May 22, 2014 at 12:17 PM, Nagarjuna Vissarapu <
nagarjuna.v...@gmail.com> wrote:
> I think your HDFS storage is full. Type the following command and check
> it once: hadoop dfsadmin -report.
I think your HDFS storage is full. Type the following command and check it
once: hadoop dfsadmin -report.
On Thu, May 22, 2014 at 11:58 AM, Shengjun Xin wrote:
> Are the datanodes dead?
Hi,
No, the datanodes are not dead, and HDFS is almost 70% free.
Is it related to some network issue?
On Thu, May 22, 2014 at 11:58 AM, Shengjun Xin wrote:
> Are the datanodes dead?
Are the datanodes dead?
On Thu, May 22, 2014 at 2:23 PM, Sreenath wrote:
> Hi All,
>
> We are running a Hadoop cluster, and many of our Hive queries are failing
> in the reduce phase with the following error:
>
> java.io.IOException: All datanodes *.*.*.*:50230 are bad. Aborting...
Hi All,
We are running a Hadoop cluster, and many of our Hive queries are failing in
the reduce phase with the following error:
java.io.IOException: All datanodes *.*.*.*:50230 are bad. Aborting...
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:3290)
Hi all, I'm encountering problems when trying to run a map join on large
tables using bucketed tables. The table is bucketed into 20 buckets:
create table if not exists ip_c_bucket (country string, ip_from bigint,
ip_to bigint) clustered by(ip_from) into 20 buckets;
For
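The bucket map join only kicks in when both sides are bucketed on the join
key and the optimizer flag is set. A minimal sketch of the mechanics, assuming
a second bucketed table; logs_bucket and its columns are hypothetical names
used only for illustration:

-- Make inserts actually write 20 bucket files, and allow the conversion.
set hive.enforce.bucketing=true;
set hive.optimize.bucketmapjoin=true;

create table if not exists logs_bucket (ip bigint, url string)
clustered by (ip) into 20 buckets;

-- With both tables bucketed on the join key into the same number of buckets,
-- each mapper loads only the matching bucket of ip_c_bucket into memory
-- instead of the whole table.
select /*+ MAPJOIN(c) */ c.country, l.url
from logs_bucket l
join ip_c_bucket c on (l.ip = c.ip_from);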
It's seconds.
new Date(time * 1000L); // note the long literal: if time is an int of epoch seconds, time * 1000 overflows int
2014-05-22 14:19 GMT+09:00 Santhosh Thomas :
> I am trying to find the creation time of a table using the
> table.createTime() method. I was hoping that it returns the time in
> milliseconds, but it looks like it does not. Any idea how to get the actual
> table creation time?
I am trying to find the creation time of a table using the table.createTime()
method. I was hoping that it returns the time in milliseconds, but it looks
like it does not. Any idea how to get the actual table creation time?
thanks
Santhosh
Hi,
We've run into this issue as well, and it is indeed annoying. As I
recall, the issue comes in not when the records are read off disk but
when Hive deals with the records further down the line (I forget exactly
where).
I believe this issue is relevant:
https://issues.apache.org/jira/brow
Hi,
I'm trying to process JSON data in Hive (0.12) with "\n" inside some of the
keys and values. The data is messed up, and I have no control over changing
the input.
What is the best way to process this data in HDFS?
Thanks!
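One hedged workaround, not a definitive fix: scrub the embedded newlines in a
staging pass so the text-based stages never see them. Here raw_json is a
hypothetical staging table holding one JSON document per row in a single
string column named line:

-- Replace raw newline/carriage-return characters inside each document with a
-- space, and keep the result in SequenceFile so record boundaries no longer
-- depend on newlines at all.
create table json_clean stored as sequencefile as
select regexp_replace(line, '\n|\r', ' ') as line
from raw_json;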
Hi,
I am trying to create an external table that points to a directory containing
symlinks to files in HDFS. I am using CDH 4.4 with Hive 0.12.
When I try to run a select query on this table, it returns 0 rows, and when I
run a count query, the map task fails with the following error:
2014-05-2
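In case it helps, Hive ships an input format for this pattern in which the
"symlinks" are plain text files listing target paths one per line, rather than
HDFS symlinks. A minimal sketch; the table, column, and location names are
illustrative:

create external table links_table (line string)
stored as
inputformat 'org.apache.hadoop.hive.ql.io.SymlinkTextInputFormat'
outputformat 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
location '/user/hive/symlink_dir';

Each file under /user/hive/symlink_dir would then just contain the paths of
the real data files, one per line.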
Do you mean the Python HiveServer client library?
I would recommend you upgrade to at least Python 2.6.
On Wed, May 21, 2014 at 9:54 PM, Hari Rajendhran wrote:
> Hi Team,
>
> Does Python 2.4.3 support Apache Hive 0.13?
Hi Team,
Does Python 2.4.3 support Apache Hive 0.13?
Best Regards
Hari Krishnan Rajendhran
Hadoop Admin
DESS-ABIM ,Chennai BIGDATA Galaxy
Tata Consultancy Services
Cell:- 9677985515
Mailto: hari.rajendh...@tcs.com
Website: http://www.tcs.com
Hi all,
It seems that for some reason HS2 outputs far less logging than HS1 in Hive
0.12. For example, starting HS1 in the following way: hive --service hiveserver
and executing "show tables" produces this:
14/04/30 17:14:16 [pool-1-thread-2] INFO service.HiveServer: Running the query:
show tabl
Hi all,
I'm trying to understand the different Hive join optimizations. I get the
idea that we're trying to limit the shuffling of key-value pairs from
mappers to reducers, but I cannot grasp the idea behind SMB joins.
For example:
Table A with four columns (user_id, col2, col3, col4) bucketed
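For what it's worth, the idea behind SMB is that when both tables are
bucketed AND sorted on the join key into the same number of buckets, each
mapper can stream bucket i of one table against bucket i of the other and
merge them like two sorted lists, so no key-value pairs are shuffled to
reducers at all. A hedged sketch of the setup; the table names, column types,
and bucket count of 32 are illustrative, while the flags are the ones that
gate the conversion:

set hive.optimize.bucketmapjoin=true;
set hive.optimize.bucketmapjoin.sortedmerge=true;
set hive.input.format=org.apache.hadoop.hive.ql.io.BucketizedHiveInputFormat;

-- Both sides bucketed and sorted on the join key, same bucket count.
create table if not exists a_smb (user_id bigint, col2 string, col3 string, col4 string)
clustered by (user_id) sorted by (user_id) into 32 buckets;
create table if not exists b_smb (user_id bigint, val string)
clustered by (user_id) sorted by (user_id) into 32 buckets;

-- Runs map-only: a sorted merge of corresponding buckets, no shuffle.
select a.user_id, a.col2, b.val
from a_smb a join b_smb b on (a.user_id = b.user_id);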
Hi All,
I had the metastore (0.12) running previously, but after upgrading to 0.13 it
is failing with the error message below. The upgrade was a clean new setup.
Additional details:
MySQL version: 5.6
MySQL connector: 5.1.30
Starting Hive Metastore Server
log4j:WARN No such property [maxBackupIndex
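The log4j:WARN line itself is usually harmless, so the real failure is
probably further down in the log. On the off chance the clean setup simply has
no metastore schema yet, one hedged possibility is to load the 0.13 schema
into MySQL from the script shipped with the Hive distribution; the database
name and script path below are illustrative, so verify them against your
install:

-- run inside the mysql client
create database if not exists metastore_db;
use metastore_db;
source scripts/metastore/upgrade/mysql/hive-schema-0.13.0.mysql.sql;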