across the servers as I wanted.
But is there any way to control which specific rows are stored on which
specific region servers? (Of course, I started a separate thread for this
question.)
On Mon, Aug 26, 2013 at 3:01 PM, Vamshi Krishna wrote:
> Ted, I guessed the problem could be due to o
row should be stored on which region server..? )
--
*Regards*
*Vamshi Krishna*
g the tires IMO but is still a degenerate case.
> In my opinion 5 is the lowest you should go. You shouldn't draw conclusions
> from inadequate deploys.
>
> On Friday, August 23, 2013, Vamshi Krishna wrote:
>
> > Hello all,
> > I set up a 2 node hbase clust
> <property>
>   <name>hbase.regions.slop</name>
>   <value>0.2</value>
>   <description>Rebalance if any regionserver has average + (average * slop)
>   regions. Default is 20% slop.</description>
> </property>
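The slop rule quoted above can be illustrated with a small arithmetic sketch. This is a simplification for illustration, not HBase's actual balancer code:

```java
// Illustrative sketch of the slop rule: a region server is considered for
// rebalancing only when its region count exceeds average + (average * slop).
public class SlopCheck {
    static boolean needsRebalance(int regionCount, double averageRegions, double slop) {
        return regionCount > averageRegions + (averageRegions * slop);
    }

    public static void main(String[] args) {
        double average = 100.0;
        double slop = 0.2; // the 20% default quoted above
        System.out.println(needsRebalance(125, average, slop)); // 125 > 120 -> true
        System.out.println(needsRebalance(115, average, slop)); // 115 <= 120 -> false
    }
}
```

So with the default slop, a server holding up to 20% more regions than the cluster average is left alone, which explains why small imbalances do not trigger the balancer.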
>
>
>
>
>
>
> Frank Chow
>
> From: Vamshi Krishna
> Date: 2013-08-23 22:51
> To: user; Dhaval Shah
> Sub
to 10MB seems too small.
> Did you mean 10GB?
>
>
> Regards,
> Dhaval
>
>
> ________
> From: Vamshi Krishna
> To: user@hbase.apache.org; zhoushuaifeng
> Sent: Friday, 23 August 2013 9:38 AM
> Subject: Re: Will hbase automatically distrib
UI http://:60010.
All the tables that I created are residing on the same Machine-1.
On Fri, Aug 23, 2013 at 7:36 PM, Ted Yu wrote:
> What version of HBase are you using ?
>
> Can you search in master log for balancer related lines ?
>
> Thanks
>
>
> On Fri, Aug 23, 2013 at
fter 5 min.
Could anyone tell me why this is happening?
--
*Regards*
*Vamshi Krishna*
t
>
>
>
>
> Frank Chow
--
*Regards*
*Vamshi Krishna*
master and region server.
Do we need to distribute the data explicitly by any means?
I guess that should be done automatically by the HBase load balancer, right?
Please, someone, help me!
--
*Regards*
*Vamshi Krishna*
pe of issues a few months ago and wrote up my
> >>> solutions to the 3 main hbase communication/setup errors I got.
> >>>
> >>> See if this helps
> >>>
> http://jayunit100.blogspot.com/2013/05/debugging-hbase-installation.html
> >>>
> >>>
next
<property>
  <name>hbase.zookeeper.property.maxClientCnxns</name>
  <value>1024</value>
</property>
<property>
  <name>hbase.coprocessor.user.region.classes</name>
  <value>com.bil.coproc.ColumnAggregationEndpoint</value>
</property>
--
*Regards*
*Vamshi Krishna*
Hi all,
I am facing a problem with an HBase region server disconnecting from the
master after some time. I set up an HBase cluster with 2 machines, where
Machine-1 (M1) is master and region server and M2 is only a region server.
After running start-hbase.sh, all the daemons start perfectly, but
after some time
I set up an HBase cluster on two machines. One machine has the master as
well as a regionserver, and the other has only an RS. After running
./start-hbase.sh, all daemons start perfectly. But the 2nd machine, which
runs only an RS, is getting disconnected after some time, and whatever data
I inserted into the HBase table res
toString() and
> apply some basic regexs for validation. Or you could consider using a
> more structured serialization format like protobufs.
>
> --gh
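A minimal sketch of the toString()-plus-regex validation suggested above. The UTF-8 charset and the allowed-character pattern here are assumptions for illustration, not rules from the thread:

```java
import java.nio.charset.StandardCharsets;
import java.util.regex.Pattern;

// Decode the raw row-key bytes to a String, then validate with a basic
// regex. The pattern (alphanumerics plus '_' and '-') is only an example.
public class RowKeyValidator {
    private static final Pattern VALID_KEY = Pattern.compile("[A-Za-z0-9_-]+");

    static boolean isValid(byte[] rowKey) {
        String decoded = new String(rowKey, StandardCharsets.UTF_8);
        return VALID_KEY.matcher(decoded).matches();
    }

    public static void main(String[] args) {
        System.out.println(isValid("user_123".getBytes(StandardCharsets.UTF_8))); // true
        System.out.println(isValid("bad key!".getBytes(StandardCharsets.UTF_8))); // false
    }
}
```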
>
> On Tue, Apr 24, 2012 at 9:35 PM, Vamshi Krishna
> wrote:
> > Hi all, here I am having one basic doubt about const
rg/book.html#regions.arch
>
>
>
>
>
> On 2/14/12 11:06 AM, "Doug Meil" wrote:
>
> >
> >Keys are stored in sorted order, it's basically a binary search.
> >
> >
> >
> >
> >On 2/14/12 9:31 AM, "Vamshi Krishna" wrote:
region boundaries from META.
>
>
>
>
> On 2/13/12 1:46 AM, "Vamshi Krishna" wrote:
>
> >Hi all, I have a small basic doubt regarding the get() method used in
> >HTable. From the HBase book, under section 8.3 (Client), I understood that
> >when eve
lease..
--
*Regards*
*Vamshi Krishna*
download, please send me. Thank you
--
*Regards*
*Vamshi Krishna*
>
>
>
> On 1/28/12 6:43 AM, "Ioan Eugen Stan" wrote:
>
> >2012/1/28 Vamshi Krishna :
> >> Hi, here I am trying to read rows from a table and put them to a file
> >> as they are. For that, my mapper class and run method are as shown be
n(Mapper.java:144)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:621)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:305)
at org.apache.hadoop.mapred.Child.main(Child.java:170)
Please, somebody, help.
--
*Regards*
*Vamshi Krishna*
batch.size()];
> and then pass it to
> table.batch(batch, results);
>
> I hope that helps
>
>
> On Tue, Dec 20, 2011 at 11:26 AM, Vamshi Krishna wrote:
>
> > Hi all, I am trying to copy rows from multiple tables to one table,
> > selecting rows by some crite
:/usr/local/hadoop-0.20.2$
Actually, some thousands of rows have to be inserted into the new table, but
I could see only 200 rows after the above messages were thrown onto the
console. Please help me understand why it is happening like this. Please,
somebody, help.
--
*Regards*
*Vamshi Krishna*
s : FAILED
Task attempt_201112191543_0001_m_01_0 failed to report status for 600
seconds. Killing!
No tasks of the job are running.
Please help: what could be the reason for this?
--
*Regards*
*Vamshi Krishna*
u have added the relevant jars in your classpath correctly.
> Error is : Caused by: java.lang.ClassNotFoundException:
> org.apache.zookeeper.KeeperException
> Your code is not able to find proper jars in path.
>
> Hope this helps!
>
> -Original Message-
> From:
above settings on all the machines of my cluster.
Thank you.
On Wed, Dec 14, 2011 at 8:13 PM, Vamshi Krishna wrote:
> Hi, thank you. All these days I have been coding in Eclipse and trying to
> run that program from Eclipse only, but I never saw the program running on
> the cluster, o
> in the command-line when
> you want it to run against a cluster.
>
> On 13-Dec-2011, at 8:43 AM, Vamshi Krishna wrote:
>
> > What should I set in the job's classpath? Where and how should I set the
> > classpath for the job? My requirement is to run the MR jobs
ss(MyHset.class);
job.setMapperClass(setInsertionMapper.class);
...
...
On Mon, Dec 12, 2011 at 11:35 PM, Jean-Daniel Cryans wrote:
> That setting also needs to be in your job's classpath, it won't guess it.
>
> J-D
>
> On Thu, Dec 8, 2011 at 10:14 PM, Vamsh
ning?
On all machines, all daemons are running.
What should I do to run it on the cluster from Eclipse? Please help.
On Fri, Dec 9, 2011 at 11:44 AM, Vamshi Krishna wrote:
> Hi harsh,
> Yes, no jobs are seen on that JobTracker page; under RUNNING JOBS it is
> none, under FINISH
..? please help..
On Fri, Dec 9, 2011 at 11:13 PM, Jean-Daniel Cryans wrote:
> You don't need the conf dir in the jar, in fact you really don't want
> it there. I don't know where that alert is coming from, would be nice
> if you gave more details.
>
> J-D
>
are locations inside
DFS.
When I tried to create a jar of my whole Java project from Eclipse, I got an
alert that 'conf directory of HBase was not exported to jar'. So how do I
run my program through the command line to insert data into an HBase table?
Can anybody help?
--
*Regards*
*Vamshi Krishna*
merely launching the program
> via a LocalJobRunner, and not submitting to the cluster. This is cause of
> improper config setup (you need "mapred.job.tracker" set at minimum, to
> submit a distributed job).
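As a sketch, the minimal client-side setting mentioned above would look like this in mapred-site.xml. The JobTracker host and port below are assumptions; use your cluster's actual values:

```xml
<!-- mapred-site.xml (sketch): the minimum needed for a client to submit to
     the cluster instead of falling back to LocalJobRunner. -->
<property>
  <name>mapred.job.tracker</name>
  <value>master-host:54311</value>
</property>
```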
>
> On 08-Dec-2011, at 12:10 PM, Vamshi Krishna wrote:
>
> > Hi all
of MapReduce jobs, like map tasks' progress,
etc. What is the problem? How can I see their progress in the browser while
the MapReduce program is running from Eclipse? I am using Ubuntu 10.04.
Can anybody help?
--
*Regards*
*Vamshi Krishna*
ely identify an RS.
>
> Regards
> Ram
>
> -Original Message-
> From: Vamshi Krishna [mailto:vamshi2...@gmail.com]
> Sent: Tuesday, December 06, 2011 11:56 AM
> To: user@hbase.apache.org
> Subject: what is region server startcode
>
> Hi i want to move some r
>
> Startcode is used to distinguish between the start-ups of same
> regionserver at different time point.
> It's the start-time of one regionserver.
>
> Jieshan.
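In this era of HBase, a region server's name is written as "hostname,port,startcode", with the startcode being the start timestamp Jieshan describes. A toy parse of that field (the sample server name below is made up for illustration):

```java
// A region server name has the form "hostname,port,startcode"; the
// startcode distinguishes different start-ups of the same server.
public class StartcodeDemo {
    static long startcodeOf(String serverName) {
        String[] parts = serverName.split(",");
        return Long.parseLong(parts[2]); // third field is the startcode
    }

    public static void main(String[] args) {
        System.out.println(startcodeOf("vamshi-laptop,60020,1323151234567"));
        // prints 1323151234567
    }
}
```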
>
> -Original Message-
> From: Vamshi Krishna [mailto:vamshi2...@gmail.com]
> Sent: December 6, 2011 14:26
> To: user@
Hi, I want to move some regions of a table from one server to another, but
in the move() method arguments, what is the meaning of the region server
startcode? Where can I get it? Please, can anybody tell me?
--
*Regards*
*Vamshi Krishna*
me; I don't understand what went wrong in my setup.
> >
> >
> > On Thu, Dec 1, 2011 at 11:28 PM, Jean-Daniel Cryans wrote:
> >
> >> So since I don't see the rest of the log I'll have to assume that the
> >> region server was never able to connec
the other machines to master:6.
>
> J-D
>
> On Thu, Dec 1, 2011 at 6:45 AM, Vamshi Krishna
> wrote:
> > In the logs of the region server machines, I found this error (on both
> > regionserver machines)
> >
> > 2011-11-30 14:43:42,447 INFO org.apache
at their logs to figure what's going on.
>
> J-D
>
> On Tue, Nov 29, 2011 at 10:46 PM, Vamshi Krishna
> wrote:
> > Hey, sorry for posting multiple times.
> > J-D, as you said, I referred to my regionserver log, and there I found
> > Could not resolve
checked hbase.hregion.max.filesize; it is 256 MB.
And one more thing: is there any minimum size that we can set for both the
region size and the KeyValue size?
Can anybody help?
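For reference, a region-split threshold like the 256 MB mentioned above is set in hbase-site.xml roughly as follows (a sketch; the property takes a value in bytes):

```xml
<!-- hbase-site.xml (sketch): maximum store file size before a region splits.
     268435456 bytes = 256 MB. -->
<property>
  <name>hbase.hregion.max.filesize</name>
  <value>268435456</value>
</property>
```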
--
*Regards*
*Vamshi Krishna*
t the master log to see what's going
> on.
>
> J-D
>
> On Mon, Nov 28, 2011 at 10:33 PM, Vamshi Krishna
> wrote:
> > Hi Lars,
> > I am not using Cygwin; I am using 3 Ubuntu 10.04 machines.
> > Finally, the problem I mentioned got resolved, i.e., now I can see th
e list of servers to the regionservers file in the
> $HBASE_HOME/conf/ dir? Are you using Cygwin? Or what else is your
> environment?
>
> Lars
>
> On Nov 26, 2011, at 7:37 AM, Vamshi Krishna wrote:
>
> > Hi i am running hbase on 3 machines, on one node master and regionserv
n, Nov 28, 2011 at 9:00 PM, Suraj Varma wrote:
> Ok.
>
> Can you run dos2unix against both your HBASE_HOME/bin and
> HBASE_HOME/conf directory?
>
> After this, restart your cluster and see if you are getting the same issue.
> --Suraj
>
>
> On Sun, Nov 27, 2011 at 10:
eased latencies, etc, etc). So - the cluster
> is unlikely to be stable with such a setup.
>
> I would recommend going with a fully wired setup, if your goal is to
> have a stable hbase cluster. If it is a "at home test cluster", then
> that's fine - but be prepared for f
lhost" works.
>
> Your setup should work for test environments ... for production, the
> standard setup would be to co-locate region servers and data nodes to
> get data locality.
> --Suraj
>
> On Thu, Nov 24, 2011 at 10:51 PM, Vamshi Krishna
> wrote:
> > Hi, i checked
directory,
but I could find {HBASE_HOME}/bin/hbase-daemon.sh on both machines.
In fact, the path of the {HBASE_HOME} folder on each of the respective
machines is the same, i.e.
/home/hduser/Documents/HBASE_SOFTWRAE/hbase-0.90.4
Please can anybody help?
--
*Regards*
*Vamshi Krishna*
tat -anp. I
> would recommend either configuring the /etc/hosts to bind the
> vamshikrishna-desktop and vamshi-laptop hostnames to the 10.0.1.x
> addresses.
>
> -Joey
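A sketch of the /etc/hosts binding Joey suggests, using the hostnames and 10.0.1.x addresses that appear elsewhere in this thread (adjust to your own machines):

```
# /etc/hosts (sketch) on every node: bind the hostnames to the wired
# 10.0.1.x addresses so the HBase daemons resolve each other consistently.
10.0.1.54   vamshikrishna-desktop
10.0.1.55   vamshi-laptop
```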
>
> On Thu, Nov 17, 2011 at 1:40 AM, Vamshi Krishna
> wrote:
> > hi
> > i am working with 2
D
>
> On Thu, Nov 17, 2011 at 6:34 AM, Vamshi Krishna
> wrote:
> > Hi, I am working with a 2-node HBase cluster as shown below:
> > On node1 (10.0.1.54) : master node, region server, hadoop namenode,
> hadoop
> > datanode
> > on node2 (10.0.1.55): region s
}/bin/hbase-daemon.sh: No such file
or directory, but I can clearly find {HBASE_HOME}/bin/hbase-daemon.sh.
I don't know what went wrong!
--
*Regards*
*Vamshi Krishna*
}/bin/hbase-daemon.sh: No such file
or directory, but I can clearly find {HBASE_HOME}/bin/hbase-daemon.sh.
I don't know what went wrong!
Please can anybody help?
--
*Regards*
*Vamshi Krishna*
ven for any newly created tables also.
Please can anybody help?
--
*Regards*
*Vamshi Krishna*
,
hbase-0.26-transactional.jar to my HBase project build path while I am
working on top of hbase-0.90.4?
--
*Regards*
*Vamshi Krishna*
t name}:2888:3888
>
> Make sure that in zookeeper home directory contains server1 folder with
> "myid" file and value as "0"
>
>
> Thanks & Regards,
> B Anil Kumar.
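A sketch of the ZooKeeper pieces Anil describes (the file paths are assumptions): one quorum line per server in the config, and a matching myid file in that server's data directory:

```
# zoo.cfg (sketch): one quorum entry per ZooKeeper server
server.0={server name}:2888:3888

# and inside that server's data directory (e.g. .../server1/),
# a file named "myid" containing just:
0
```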
>
>
> On Fri, Nov 11, 2011 at 3:39 PM, Vamshi Krishna &g
uld be the problem? And how do I set the situation right?
On Thu, Nov 10, 2011 at 3:56 PM, Harsh J wrote:
> Your HDFS isn't accessible, and your HMaster is running in distributed
> mode. Bring up your HDFS and you should be all set!
>
> (Or, tweak your hbase-site.xml to
I run the bin/start-hbase.sh command, the HRegionServer daemon is supposed
to run on the regionserver machine also, but I could not find any such
daemon on the regionserver machine.
Can anybody please help!
--
*Regards*
*Vamshi Krishna*
nt.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:558)
at org.apache.hadoop.hbase.client.HTable.&lt;init&gt;(HTable.java:171)
at org.apache.hadoop.hbase.client.HTable.&lt;init&gt;(HTable.java:145)
at PutExample.main(PutExample.java:19)
Please, anyone, help me: where did I go wrong?
--
*Regards*
*Vamshi Krishna*
[INFO] Total time: 1 minute 40 seconds
[INFO] Finished at: Sun Sep 04 11:50:57 IST 2011
[INFO] Final Memory: 2M/15M
[INFO]
vamshi@vamshikrishna-desktop:~/Documents/hbase-core-trunk$
Please can somebody help.
--
*Regards*
*Vamshi Krishna*
culty you're having. Typically, this would take
> >>>a
> >>> form like, "I tried doing X, but instead of seeing Y, I saw Z instead".
> >>>The
> >>> more specific you can be with what you already tried and the errors or
> >>> fa
to debug and run
normal programs on Hadoop, but I couldn't run the source code. Please help
me in this regard. I have been trying to do the same for so many days;
please, somebody, help me change, debug, and run the source code of
Hadoop/HBase in the Eclipse IDE.
Thank you.
--
*Regards*
*
V
ients
request is actually handled by the resulting RS.
The *main idea* is that we can get rid of the single point of failure
with respect to -ROOT- being stored in a single region.
On Thu, Aug 18, 2011 at 12:13 PM, vamshi krishna wrote:
> Thank you very much Lars..!
>
>
> On Thu, Aug 18,
regard.
If anyone has worked with or has experience with distributed DSs, please
help, and we will have a discussion. Thank you.
--
*Regards*
*Vamshi Krishna*
move further in my work because of such build errors.
Please, somebody, help me configure Hadoop with Eclipse. I want to work
with HBase after that.
Thank you.
--
*Regards*
*Vamshi Krishna*
directory in hbase-site.xml
> (if not, see above link), that's where you will find META and ROOT tables.
>
> (The root directory would be a local directory in local mode, or a
> directory in HDFS).
>
>
> -- Lars
>
>
>
>
> From:
Hadoop I should use to run HBase at the current time. I mean, as per the
latest versions, which versions are compatible?
Thank you.
*Vamshi Krishna*