Re: Hive Metadata URI error

2011-12-11 Thread Kirk True
To me it looks like the error message is getting a blank for the URI property value. Can you triple-check the property _name_ is correct (including capitalization)? On 12/11/11 9:35 PM, Periya.Data wrote: Sam: I added "file://". Now it looks like this: file:///home/users/jtv/CDH3/hive/conf/met

Re: Hive Metadata URI error

2011-12-11 Thread Periya.Data
Sam: I added "file://". Now it looks like this: file:///home/users/jtv/CDH3/hive/conf/metastore_db The problem has not gone away. I still have the same problem. I tried rebooting my EC2 instance. Still no difference. What does it mean by "does not have a scheme"? What is it expecting? Thanks, P

Re: Hive Metadata URI error

2011-12-11 Thread Sam Wilson
Try file:// in front of the property value... Sent from my iPhone On Dec 12, 2011, at 12:07 AM, "Periya.Data" wrote: > Hi, >I am trying to create Hive tables on an EC2 instance. I get this strange > error about URI schema and log4j properties not found. I do not know how to > fix this. >

Hive Metadata URI error

2011-12-11 Thread Periya.Data
Hi, I am trying to create Hive tables on an EC2 instance. I get this strange error about the URI scheme and log4j properties not being found. I do not know how to fix this. On the EC2 instance: Ubuntu 10.04, Hive 0.7.1-cdh3u2. Initially I did not have an entry for the hive.metastore.uris property in my hive-de
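The "does not have a scheme" message typically comes from Java's URI parsing when a bare filesystem path is supplied where a URI is expected. Below is a minimal hive-site.xml sketch for an embedded Derby metastore, using the path mentioned later in this thread; the values are illustrative, not a confirmed fix for this poster's setup:

```xml
<!-- hive-site.xml sketch: embedded Derby metastore (assumed setup) -->
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <!-- Derby takes a plain databaseName, not a file:// URI -->
  <value>jdbc:derby:;databaseName=/home/users/jtv/CDH3/hive/conf/metastore_db;create=true</value>
</property>
<property>
  <name>hive.metastore.warehouse.dir</name>
  <!-- HDFS location for managed tables; illustrative default -->
  <value>/user/hive/warehouse</value>
</property>
```

With an embedded metastore, hive.metastore.uris is normally left empty; it is only set (to a thrift:// URI) when a standalone metastore server is used.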

Hive on hadoop 0.20.205

2011-12-11 Thread praveenesh kumar
Has anyone tried Hive on Hadoop 0.20.205? I am trying to build Hive from SVN, but I see it downloading hadoop-0.20.3-CDH3-SNAPSHOT.tar.gz and hadoop-0.20.1.tar.gz. If I run ant -Dhadoop.version="0.20.205" package, the build fails. I am getting these errors, every time I a

Re:Re:Re: hiveserver usage

2011-12-11 Thread 王锋
I want to know why the HiveServer uses so much memory, and where the memory is being used. On 2011-12-12 10:02:44, "王锋" wrote: The namenode summary: the mr summary and hiveserver: hiveserver jvm args: export HADOOP_OPTS="$HADOOP_OPTS -XX:NewRatio=1 -Xms15000m -XX:MaxHeapFreeRatio=40

Re: amazon elastic mapreduce

2011-12-11 Thread Aniket Mokashi
Hi, You have a couple of options to save your intermediate state- 1. If your metastore is HA, you can save your state in the metastore (e.g. alter table TBLPROPERTIES ("job.state", "DoneTill:121122")). 2. You can periodically save your state on EMR-local drives and upload it to S3. You can use any cust
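Aniket's first option, written out as HiveQL; the table name and property value here are made up for illustration:

```sql
-- Record job progress as a table property in the metastore
-- (table name and "DoneTill" value are illustrative)
ALTER TABLE my_stats SET TBLPROPERTIES ('job.state' = 'DoneTill:121122');

-- Later, read the saved state back via the extended table description
DESCRIBE EXTENDED my_stats;
```

Because TBLPROPERTIES live in the metastore rather than on the cluster's local disks, the state survives EMR cluster termination as long as the metastore itself is durable.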

Re:Re: hiveserver usage

2011-12-11 Thread 王锋
The namenode summary: the mr summary and hiveserver: hiveserver jvm args: export HADOOP_OPTS="$HADOOP_OPTS -XX:NewRatio=1 -Xms15000m -XX:MaxHeapFreeRatio=40 -XX:MinHeapFreeRatio=15 -XX:+UseParallelGC -XX:ParallelGCThreads=20 -XX:+UseParallelOldGC -XX:-UseGCOverheadLimit -verbose:gc
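Note that the flags above set a 15 GB initial heap (-Xms15000m) but no visible -Xmx cap, so the heap can keep growing. A sketch of a bounded alternative; the 4g figure is purely illustrative, not a recommendation from this thread:

```shell
# Sketch: cap the HiveServer heap with -Xmx instead of only a large -Xms.
# 4g is an assumed example value; tune to the workload.
HADOOP_OPTS="$HADOOP_OPTS -Xms4g -Xmx4g -XX:+UseParallelOldGC -verbose:gc"
export HADOOP_OPTS
echo "$HADOOP_OPTS"
```

With -Xmx set, an OutOfMemoryError or long GC pauses point at real memory pressure (e.g. large query results buffered in the server) rather than the JVM simply never giving memory back.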

Re: hiveserver usage

2011-12-11 Thread Aaron Sun
What does the data look like? And what is the size of the cluster? 2011/12/11 王锋 > Hi, > > I'm an engineer at sina.com. We have used Hive and HiveServer for > several months. We have our own task scheduling system. The system can > schedule tasks to run against HiveServer over JDBC. > > But The hives

hiveserver usage

2011-12-11 Thread 王锋
Hi, I'm an engineer at sina.com. We have used Hive and HiveServer for several months. We have our own task scheduling system. The system can schedule tasks to run against HiveServer over JDBC. But the HiveServer's memory usage is very large, usually more than 10 GB. We have 5-minute tasks which will be

amazon elastic mapreduce

2011-12-11 Thread Cam Bazz
Hello All, So I had a single-node pseudo cluster that had been calculating statistics for me for a year; finally it grew beyond a do-it-at-home task. So I have my data uploaded to S3, and I have configured everything so that I can load my tables, and load the partitions, and the data is

Re: Hive Reducers hanging - interesting problem - skew ?

2011-12-11 Thread Mark Grover
Hi jS, Sorry about the delayed response. Did you make any progress? Based on my understanding, you are right: if you have two big collections of records being processed in Stage 2, a map join may not work. At this point, I will say 2 things though: 1) If you are relying on joins of big collections (
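For the case where one side of the Stage 2 join does turn out to be small, Hive 0.7 supports forcing a map-side join with a hint. A sketch with made-up table and column names; this only helps when the hinted table fits in mapper memory, which Mark notes is not the case for two big collections:

```sql
-- Load small_t into memory on each mapper and stream big_t,
-- avoiding the skewed shuffle/reduce phase entirely.
-- (big_t, small_t, and user_id are illustrative names.)
SELECT /*+ MAPJOIN(small_t) */ big_t.user_id, small_t.label
FROM big_t
JOIN small_t ON big_t.user_id = small_t.user_id;
```

When both sides are large and skewed on the join key, the usual approaches are instead to isolate the hot keys into a separate query or to pre-aggregate one side before joining.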