Hi Sam,
Could you try '\001' instead of '\u0001'?
Sent from my iPhone
On Jun 20, 2012, at 3:57 PM, Sam William wrote:
Mark,
I did not get any errors, but it seemed like the splitting was happening
with u, 0 and 1 as delimiters. My temporary fix is to define a table with one
field and create a view with a split function on top. I don't want this additional
overhead if I can make the input format work with the c
If you have a query producing that many partitions, it is probably a
bad idea. Consider using Hive's bucketing or changing your
partitioning scheme.
Edward
On Wed, Jun 20, 2012 at 12:52 PM, Greg Fuller wrote:
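Edward's bucketing suggestion can be sketched roughly as below. This is a hypothetical, untested example: the table, columns, and bucket count are all made up, and it assumes a Hive version where `hive.enforce.bucketing` governs bucketed inserts.

```shell
hive -e "
CREATE TABLE events_bucketed (user_id BIGINT, payload STRING)
CLUSTERED BY (user_id) INTO 32 BUCKETS;
SET hive.enforce.bucketing = true;
INSERT OVERWRITE TABLE events_bucketed
SELECT user_id, payload FROM events;"
```

The point of the design: a fixed bucket count caps the number of output files regardless of key cardinality, where a partition-per-value scheme grows without bound.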
Hi,
I sent this out to common-u...@hadoop.apache.org yesterday, but the hive
mailing list might be a better forum for this question.
With CDH3u4 and Cloudera Manager, I am running a Hive query to repartition
all of our tables. I'm reducing the number of partitions from 5 to 2, because
the
Has anyone had success reading XML files in Hive? I've been looking at the
cloud9 XMLInputFormat to read in the top-level XML node in each file, with the
goal to then use XPath (and/or LATERAL VIEW) to read individual records in the
file. There isn't much in the way of documentation or example
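For the second half of that plan, Hive ships xpath UDFs that can pull fields out of a document once it lands in a single string column. A rough, untested sketch, where the table `raw_xml`, its column `xml_doc`, and the element names are all assumptions:

```shell
# Assumes raw_xml has one STRING column, xml_doc, holding a whole document per row.
hive -e "
SELECT xpath_string(xml_doc, '/record/name'),
       xpath_string(xml_doc, '/record/value')
FROM raw_xml;"
```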
Sam,
If you can, please post a row or two of your data along with any errors you are
getting; that would be helpful.
Mark
- Original Message -
From: "Mapred Learn"
To: user@hive.apache.org
Cc: user@hive.apache.org
Sent: Tuesday, June 19, 2012 8:34:22 PM
Subject: Re: Text file with ctrl ch
This is because you are not able to contact the namenode, as it
is not running. Not only -ls, none of the HDFS commands will work in
such a situation. Was your NN stopped after getting started, or did it
never start even once? Can you paste your logs here?
Regards,
Mohammad Tariq
On Wed,
No, I guess it still doesn't serve the purpose. Actually, the error is:
hduser@XPS-L501X:/home/soham/cloudera/hadoop-2.0.0-cdh4.0.0/etc/hadoop$
hadoop fs -ls
12/06/20 15:03:51 INFO ipc.Client: Retrying connect to server:
localhost/127.0.0.1:54310. Already tried 0 time(s).
12/06/20 15:03:52 INFO i
Change the line "127.0.1.1" in your /etc/hosts file to
"127.0.0.1". Also check if there is any problem with the configuration
properties.
Regards,
Mohammad Tariq
On Wed, Jun 20, 2012 at 2:52 PM, soham sardar wrote:
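The /etc/hosts fix above can be sketched as follows, assuming the stray entry has the form `127.0.1.1 <hostname>`. Working on a copy first and reviewing before installing is just a precaution, not part of the original advice:

```shell
# Rewrite the 127.0.1.1 loopback alias to 127.0.0.1 in a copy of /etc/hosts.
sed 's/^127\.0\.1\.1/127.0.0.1/' /etc/hosts > /tmp/hosts.fixed
# After reviewing /tmp/hosts.fixed:
# sudo cp /tmp/hosts.fixed /etc/hosts
```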
The thing is, I have updated my JAVA_HOME in both .bashrc and hadoop-env.sh.
Now the problem is, when I run jps, the output is
6113 NodeManager
6799 DataNode
7562 Jps
7104 SecondaryNameNode
5728 ResourceManager
and now, except the namenode, all the other daemons are working, and when I try to give
hadoop f
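When jps shows every daemon except the NameNode, its log usually says why it exited, which is what Mohammad is asking for. A sketch, with the caveat that log locations vary by install (these paths assume a tarball install under $HADOOP_HOME):

```shell
tail -n 50 "$HADOOP_HOME"/logs/hadoop-*-namenode-*.log
# On a brand-new install, an unformatted name directory is a common cause:
#   hdfs namenode -format    # WARNING: wipes existing HDFS metadata
```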