Did you start the hiveserver service before running the client program?
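If not: in Hive 0.6 the standalone Thrift service is launched through the `hive --service` wrapper. A minimal sketch, assuming `hive` is on your PATH and the default port 10000 is free:

```shell
# Start the Hive Thrift server so remote clients can connect.
# 10000 is the default listen port for this era of HiveServer.
hive --service hiveserver
```

The client program can only connect once this process is up and listening.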
Cheers, Adarsh
Ayush Gupta wrote:
Probing this further reveals that the connection is reset by the
server after exactly 10 minutes every time.
I'm running Hive 0.6. I do not see anything relevant at
http://wiki.apache.org/ha
Is it necessary to configure Zookeeper separately or not? I think this
is true for large clusters.
Thanks & Regards
Adarsh Sharma
I didn't configure Zookeeper separately. I am not sure whether this is
the issue.
I attached my zookeeper logs and hbase-site.xml.
Wed Jan 12 11:54:47 IST
Jean-Daniel Cryans wrote:
Sorry if that wasn't obvious, but you need to run Hive using this command:
I am extremely sorry, sir.
As per your instructions I am sending you the output of the create table
command.
Please check the attachment.
Thanks & Warm Regards
Adarsh Sharma
e harmless if the process
on that other machine just took more time to boot, also it happened 20
minutes before your test. Do verify that hbase works before trying to
create a table.
J-D
On Sun, Jan 9, 2011 at 10:37 PM, Adarsh Sharma wrote:
Jean-Daniel Cryans wrote:
You also need to create the table in order to see the relevant debug information.
at java.util.concurrent.ArrayBlockingQueue.take(ArrayBlockingQueue.java:317)
at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:533)
If needed I will send my error logs. Please let me know.
Thanks & Best regards
Adarsh Sharma
Please help me to solve this.
Thanks
Adarsh Sharma wrote:
Jean-Daniel Cryans wrote:
You also need to create the table in order to see the relevant debug
information, it won't create it until it needs it.
Sir,
Please check the output:
hive> CREATE TABLE hive_hbasetable_k(key int, value stri
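The statement above is cut off in the archive. For reference, the canonical HBase-backed table DDL from the Hive/HBase integration wiki looks like the following sketch; the table and column names here are the wiki's example, not necessarily what was typed above:

```shell
# Illustrative Hive/HBase DDL run through the hive CLI.
# hbase.columns.mapping ties the Hive columns to the HBase row key
# and one column family; hbase_table_1/cf1/xyz are example names.
hive -e "CREATE TABLE hbase_table_1(key int, value string)
  STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
  WITH SERDEPROPERTIES ('hbase.columns.mapping' = ':key,cf1:val')
  TBLPROPERTIES ('hbase.table.name' = 'xyz');"
```

Creating such a table is the first point at which Hive actually contacts the ZooKeeper quorum, which is why the debug information only shows up then.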
astLeaderElection.java:296)
at java.lang.Thread.run(Thread.java:619)
2011-01-10 11:35:01,850 INFO org.apache.zookeeper.server.quorum
If you require any other information, please let me know.
Best regards
Adarsh
J-D
On Jan 9, 2011 9:30 PM, "Adarsh Sharma" <mailto:
ault pat=.*
OK
11/01/10 10:25:15 INFO ql.Driver: OK
Time taken: 7.897 seconds
11/01/10 10:25:15 INFO CliDriver: Time taken: 7.897 seconds
hive> exit;
It seems that Hive is working, but I am facing issues while integrating
it with HBase.
Best Regards
Adarsh Sharma
J-D
On Fri, Jan 7, 2011 at
53 PM, Adarsh Sharma
mailto:adarsh.sha...@orkash.com>> wrote:
John Sichi wrote:
Here is what you need to do:
1) Use svn to check out the source for Hive 0.6
I downloaded the Hive 0.6.0 source code with the command
svn co http://svn.apache.org/repos/asf/hive/bran
John Sichi wrote:
On Jan 6, 2011, at 9:53 PM, Adarsh Sharma wrote:
I want to know why this occurs in hive.log:
2011-01-05 15:19:36,783 ERROR DataNucleus.Plugin (Log4JLogger.java:error(115)) - Bundle
"org.eclipse.jdt.core" requires "org.eclipse.core.resources" but it cannot be
resolved.
With Best Regards
Adarsh Sharma
4) Use your new Hive build
Best Regards
Adarsh Sharma
John Sichi wrote:
Since the exception below is from JDO, it has to do with the configuration of
Hive's metastore (not HBase/Zookeeper).
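For what it's worth, the embedded Derby metastore is configured in hive-site.xml; a hypothetical fragment showing the stock defaults (property names are the standard JDO ones):

```xml
<!-- Hypothetical hive-site.xml fragment: default embedded Derby metastore.
     A JDO exception at startup usually points at these settings
     or at missing Derby jars on the classpath. -->
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:derby:;databaseName=metastore_db;create=true</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>org.apache.derby.jdbc.EmbeddedDriver</value>
</property>
```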
JVS
On Jan 5, 2011, at 2:14 AM, Adarsh Sharma wrote:
Dear all,
I have been trying Hive/HBase integration for the past 2 days. I am facing the below
t Derby metastore )
and HBase 0.20.3.
Please tell me how this can be resolved.
I also want to add that my Hadoop cluster has 9 nodes, and 8 of them
act as Datanodes, Tasktrackers, and Regionservers.
Among these nodes, zookeeper.quorum.property is set to 5 Datanodes.
Could this be the issue?
I don't know the number of servers needed for Zookeeper in fully
distributed mode.
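For a fully distributed setup, ZooKeeper ensembles are conventionally sized at an odd number of hosts (3 or 5), so that majority elections are unambiguous; a 5-node quorum is fine. A hypothetical hbase-site.xml fragment (hostnames are placeholders):

```xml
<!-- Hypothetical quorum of five ZooKeeper hosts; an odd count
     lets a strict majority survive the loss of up to two nodes. -->
<property>
  <name>hbase.zookeeper.quorum</name>
  <value>node1,node2,node3,node4,node5</value>
</property>
```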
Best Regards
Adarsh Sharma
different sizes (10GB, 20GB, 30GB, 50GB).
I shall be grateful for this kindness.
Thanks & Regards
Adarsh Sharma
Dear all,
A very Happy New Year 2011 to all. May God bless us all in solving
future problems.
Thanks and Regards
Adarsh Sharma
attempt_201012141048_0023_r_00_3  task_201012141048_0023_r_00  172.24.10.91  FAILED
Shuffle Error: Exceeded MAX_
at the bottom corresponding to
this Job ID.
Best Regards
Adarsh Sharma
Please tell me why this occurs and how to resolve it.
Thanks & Regards
Adarsh Sharma
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:619)
I know this error occurs due to libfb303.jar being present in both the Hadoop and Hive
lib directories. Can someone please tell me how to resolve this error?
Thanks & Regards
Adarsh Sharma
physical host just
creates conflict for things like disk, ether and CPU that the virtual OS
won't be aware of. Also, VM to disk performance is pretty bad right now,
though that's improving.
Thanks & Regards
Adarsh Sharma
dbcp for connection pooling. We used it for a while but stopped
using it because of some out-of-PermGen issues (which were probably
unrelated). We combined this with Spring Templates to make using
it pretty simple in our code.
Bennie.
-Original Message-
From: Ada
Hi all,
As we all know, Hadoop considers the user who starts the Hadoop
cluster to be the superuser,
and it gives that user full access to HDFS.
But now I want to know how we can grant R/W access to a new user, e.g.
Tom, so that he can access HDFS.
Is there any command for this, or can we write code for it? I rea
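In the 0.20-era HDFS permission model this is usually done with the fs shell rather than code. A hypothetical sketch, run as the HDFS superuser; the user name `tom` and the paths are illustrative:

```shell
# Create a home directory for the new user and hand it over to him;
# 700 restricts it to the owner, loosen as needed.
hadoop fs -mkdir /user/tom
hadoop fs -chown tom:tom /user/tom
hadoop fs -chmod 700 /user/tom
```

After this, any process running as `tom` can read and write under /user/tom like any POSIX-style home directory.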
<http://xx.xx.xx.xxx:50030/taskdetails.jsp?jobid=job_201009280549_0050&tipid=task_201009280549_0050_r_00>
There is a log for this Job ID where detailed information about the
error is given.
Regards
Adarsh Sharma