al byte[] value) {
>
> this(row, family, qualifier, timestamp, Type.Put, value);
>
> which should set Type to Type.Put
>
>
> FYI
>
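A minimal sketch of the point above (imports shown, other names illustrative): the value-only KeyValue constructor delegates to the variant that takes an explicit KeyValue.Type, defaulting it to Type.Put.

  import org.apache.hadoop.hbase.KeyValue;
  import org.apache.hadoop.hbase.util.Bytes;

  byte[] row = Bytes.toBytes("row1");
  byte[] cf  = Bytes.toBytes("cf");
  byte[] cq  = Bytes.toBytes("cq");
  long ts = System.currentTimeMillis();

  // Equivalent to new KeyValue(row, cf, cq, ts, value); the Type defaults to Put.
  KeyValue put = new KeyValue(row, cf, cq, ts, KeyValue.Type.Put,
      Bytes.toBytes("some-value"));

  // Other Types, e.g. delete markers, can be passed explicitly the same way.
  KeyValue del = new KeyValue(row, cf, cq, ts, KeyValue.Type.DeleteColumn, null);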
> On Sun, Sep 18, 2016 at 11:34 AM, Krishna wrote:
>
> > I will try that. And when inserting KeyValues, how would I set CellT
> You need to apply the change, build hbase-client jar and use this jar.
>
> Cheers
>
> On Sat, Sep 17, 2016 at 6:34 PM, Krishna wrote:
>
> > I'm using HBase 1.1 and didn't encounter any issues accessing this table in
> > other scenarios. Data is written t
PointerException();
>
> This means CellType.valueOf() returned null for the Cell.
>
> Which release of hbase are you using ?
>
> You didn't encounter any problem accessing in other scenarios
> ?
>
> Cheers
>
> On Sat, Sep 17, 2016 at 11:
I'm getting an NPE when attempting to export an HBase table: hbase
org.apache.hadoop.hbase.mapreduce.Export
Does anyone know what could be causing the exception?
Here is the error stack.
Error: java.lang.NullPointerException
at
org.apache.hadoop.hbase.protobuf.generated.CellProtos$Cell$Builde
Hi,
I am testing the behaviour of KeyValue using DeleteColumn when applied via
the bulk-loading process. When I do this, I still see a NULL value for "cq"
where I expected it to have "new-value". What's the correct approach to ensure
"cq" keeps the value inserted after performing the delete?
context.write(
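A minimal sketch of what the truncated context.write(...) presumably looks like, assuming a bulk-load mapper feeding HFileOutputFormat (all names and timestamps illustrative). Note that a DeleteColumn marker masks cells at or below its own timestamp, so a re-inserted "new-value" should carry a strictly newer timestamp to remain visible:

  // Inside a Mapper<..., ImmutableBytesWritable, KeyValue> used with HFileOutputFormat
  byte[] row = Bytes.toBytes("row1");
  KeyValue kv = new KeyValue(row, Bytes.toBytes("cf"), Bytes.toBytes("cq"),
      deleteTs + 1,                      // strictly newer than the delete marker
      KeyValue.Type.Put, Bytes.toBytes("new-value"));
  context.write(new ImmutableBytesWritable(row), kv);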
> >
> > http://search-hadoop.com/m/YGbb3NN3v1jeL1f
> >
> > There're a few Spark hbase connectors.
> > See this thread:
> >
> > http://search-hadoop.com/m/q3RTt4cp9Z4p37s
> >
> > Sorry I cannot answer performance comparison question.
> >
We are evaluating Parquet and HBase for storing a dense & very, very wide
matrix (can have more than 600K columns).
I have the following questions:
- Is there a limit on the # of columns in Parquet or HFile? We expect to
query [10-100] columns at a time using Spark - what are the performance
im
sk this because the concurrent write would be slowed down
> by the major compaction and compacting 10 TB of data would take some time.
>
> Cheers
>
> On Wed, Jul 29, 2015 at 4:23 PM, Krishna wrote:
>
> > Hi,
> >
> > I am planning to bulk-load about 10 TB of dat
Hi,
I am planning to bulk-load about 10 TB of data into a table pre-split into
30 regions, with the max region file size configured to 10 GB.
Is it recommended that I run a major compaction when bulk-loading finishes? How
many HFiles does the reducer create?
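If an explicit major compaction is wanted once the load completes, it can be requested through the client API rather than waiting for the periodic one. A minimal sketch, assuming the HBase 1.x Admin API and an illustrative table name (the request is asynchronous):

  import org.apache.hadoop.hbase.TableName;
  import org.apache.hadoop.hbase.client.*;

  try (Connection conn = ConnectionFactory.createConnection(conf);
       Admin admin = conn.getAdmin()) {
    admin.majorCompact(TableName.valueOf("my_table")); // returns immediately
  }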
12:57 PM, Talat Uyarer wrote:
> Hi Ted Yu,
>
> I guess Krishna mention about Pig's HBaseStorage class. I found out
> this by searching for the class on Google. I think I have found a solution to my
> problem. I can use Scan.setTimeRange[0] method. If I want to get
> smaller records from t
Yes you can. Have a look at the HBaseStorage class.
On 9 Jun 2015 1:04 pm, "Talat Uyarer" wrote:
> Hi,
>
> Can I use HBase's timestamps for getting rows greater/smaller than
> a given timestamp? Is it possible?
>
> Thanks
>
> --
> Talat UYARER
> Websitesi: http://talat.uyarer.com
> Twitter: http:/
I know that BigInsights comes with BigSQL, which interacts with HBase as
well; have you considered that option?
We have a similar use case using BigInsights 2.1.2.
On Thu, May 14, 2015 at 4:56 AM, Nick Dimiduk wrote:
> + Swarnim, who's expert on HBase/Hive integration.
>
> Yes, snapshots may be
> Not present
> >
> > What hadoop / hbase release are you using ?
> >
> > Is there anything else in the output ?
> >
> > Cheers
> >
> > On Thu, Feb 19, 2015 at 5:36 PM, Krishna wrote:
> >
> > > Hi,
> > >
> > > I
Hi,
I'm getting this strange error when trying to view the contents of an HFile.
What is wrong here? Thanks for the help.
$> hbase org.apache.hadoop.hbase.io.hfile.HFile -v -p -m -f
hdfs://:8020/hbase/data/default
Exception in thread "main" java.lang.IllegalArgumentException:
java.net.URISynt
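For what it's worth, the URISyntaxException is consistent with the empty authority in hdfs://:8020. A sketch of the invocation with an illustrative namenode host and a fully qualified HFile path (placeholders to be filled in):

$> hbase org.apache.hadoop.hbase.io.hfile.HFile -v -p -m -f \
   hdfs://namenode-host:8020/hbase/data/default/<table>/<region>/<cf>/<hfile>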
.
Now, for example, a foreach of $0,$4,$5 and a dump gives me different
results for statements 1 and 2, where 1 is correct.
Has anyone faced this behavior before?
Regards,
Krishna
Hi,
Is there any logical/practical limit on HBase RS storage size?
Which works better for HBase - a region server with 10 disks that are each
2 TB, or 2 disks that are each 10 TB?
I remember one of the recommendations is to keep each disk on an RS
below 6 TB - is that correct?
Thanks
Restart your zookeeper.
Restart your HBase.
This might be a short term fix.
Thanks,
Krishna
On Thu, Nov 27, 2014 at 7:57 PM, wrote:
> Hi,
>
> The issue reported earlier is resolved, I now have a new issue.
>
> When I execute "list" command from the hbase shell, I a
>
> -Anoop-
>
> On Wed, Nov 12, 2014 at 11:17 AM, Krishna Kalyan >
> wrote:
>
> > For example, for table 'test_table', the values inserted are:
> >
> > Row1 - Val1 => t
> > Row1 - Val2 => t + 3
> > Row1 - Val3 => t + 5
> >
>
3
Row2 - Val2 => t + 3
How do I achieve timestamp-based scans?
Thanks and Regards,
Krishna
On Wed, Nov 12, 2014 at 10:56 AM, Krishna Kalyan
wrote:
> Hi,
> Is it possible to do a
> select * from where version = "somedate" using HBase APIs?
> (Scanning for va
Hi,
Is it possible to do a
select * from where version = "somedate" using HBase APIs?
(Scanning for values where version <= "somedate".)
Could you please direct me to appropriate links for achieving this?
Regards,
Krishna
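HBase has no SQL layer, but Scan.setTimeRange approximates the "version <= somedate" predicate. A minimal sketch, assuming the HTable API of that era (table name and date value illustrative); note that setTimeRange's upper bound is exclusive:

  long someDate = 1415750400000L;        // epoch millis, illustrative
  Scan scan = new Scan();
  scan.setTimeRange(0L, someDate + 1);   // cells with ts in [0, someDate]
  HTable table = new HTable(conf, "test_table");
  ResultScanner scanner = table.getScanner(scan);
  for (Result r : scanner) {
    // each Result only exposes cells whose timestamps fall in the range
  }
  scanner.close();
  table.close();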
Some Resources
http://hbase.apache.org/book/schema.casestudies.html
http://www.slideshare.net/cloudera/5-h-base-schemahbasecon2012
http://www.evanconkle.com/2011/11/hbase-tutorial-creating-table/
http://www.slideshare.net/hmisty/20090713-hbase-schema-design-case-studies
On Sun, Nov 2, 2014 at 6:53
arc Spaggiari
> Date:10/21/2014 9:02 AM (GMT-05:00)
> To: user
> Cc:
> Subject: Re: Duplicate Value Inserts in HBase
>
> You can do check-and-puts to validate whether the value is already there, but it's
> slower...
>
> 2014-10-21 8:50 GMT-04:00 Krishna Kalyan :
>
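A minimal sketch of the check-and-put being described (names illustrative): passing null as the expected value applies the Put only when no cell exists yet, so an existing value, and its original timestamp, are retained.

  byte[] row = Bytes.toBytes("row1");
  byte[] cf  = Bytes.toBytes("cf");
  byte[] cq  = Bytes.toBytes("cq");
  Put put = new Put(row);
  put.add(cf, cq, Bytes.toBytes("value1"));
  // Atomic: write only if cf:cq has no value for this row yet.
  boolean applied = table.checkAndPut(row, cf, cq, null, put);
  // applied == false means a value was already present and nothing was written.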
Thanks Jean,
If I put the same value in my table for a particular column of a rowkey, I
want HBase to reject this value and retain the old value with its old
timestamp. In other words, update only when the value changes.
Regards,
Krishna
On Tue, Oct 21, 2014 at 6:02 PM, Jean-Marc Spaggiari <
jea
Hi,
I have a HBase table which is populated from pig using PigStorage.
While inserting, suppose for a rowkey I have a duplicate value.
Is there a way to prevent an update?
I want to maintain the version history for my values, which are unique.
Regards,
Krishna
Thank you so much Serega.
Regards,
Krishna
On Sun, Sep 28, 2014 at 11:01 PM, Serega Sheypak
wrote:
>
> https://pig.apache.org/docs/r0.11.0/api/org/apache/pig/backend/hadoop/hbase/HBaseStorage.html
> I'm not sure how Pig HBaseStorage works. I suppose it would read all
> d
minutes to bulk load 500.000.000 records to a 10-node hbase cluster with a pre-split
> table.
>
> 2014-09-28 16:04 GMT+04:00 Krishna Kalyan :
>
>> Thanks Serega,
>>
>> Our use case details:
>> We have a location table which will be stored in HBase with locationID as
>>
performance improvement when compared with the mapreduce
approach?
Regards,
Krishna
On Sat, Sep 27, 2014 at 9:13 PM, Serega Sheypak
wrote:
> Depends on the dataset size and HBase workload. The best way is to do the join
> in pig, store it, and then use the HBase bulk load tool.
> It's general rec
Hi,
We have a use case that involves ETL on data coming from several different
sources using pig.
We plan to store the final output table in HBase.
What will be the performance impact if we do a join with an external CSV
table using pig?
Regards,
Krishna
Hi,
We are trying to restore HBase from a backup saved to AWS S3, but the "hbase
shell" command is stuck at this step:
> DEBUG org.apache.zookeeper.ClientCnxn - Reading reply
> sessionid:0x148af374d1f0031, packet:: clientPath:null serverPath:null
> finished:false header:: 8,4 replyHeader:: 8,262,-101
with checksum validation false and it will do checksum
> validation on its own. So using hbase-handled checksums in a cluster
> should not affect other data. Does that resolve your doubt?
>
> -Anoop-
>
> On Tue, Apr 29, 2014 at 1:58 PM, Krishna Rao
> wrote:
>
> > Hi
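The behaviour described above is governed by a single hbase-site.xml switch; a sketch, assuming the property this thread refers to:

  <property>
    <name>hbase.regionserver.checksum.verify</name>
    <value>true</value>
    <description>HBase computes and verifies its own checksums for HFiles,
    so HFile reads can skip the separate HDFS checksum pass; other,
    non-HBase HDFS data keeps normal HDFS checksumming.</description>
  </property>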
Hi Ted,
I had read those, but I'm confused about how this will affect non-HBase
HDFS data. With HDFS checksumming off, won't it affect data integrity?
Krishna
On 24 April 2014 15:54, Ted Yu wrote:
> Please take a look at the following:
>
> http://hbase.a
't just use HBase on our
cluster, so this would seem to be a bad idea, right?
Cheers,
Krishna
Hi,
We are planning to migrate from a CDH3 cluster to a CDH4 cluster, and as part of
the migration we are also planning to use HBase instead of the Hive warehouse that
we are using in the CDH3 cluster. Daily we bring data from Oracle to
Hadoop using Sqoop, and we have 10 different database sch
across the servers as I wanted.
But is there any way to control the process of storing specific rows on
specific region servers? (Of course I started a separate thread for this
question.)
On Mon, Aug 26, 2013 at 3:01 PM, Vamshi Krishna wrote:
> Ted, I guessed the problem could be due to o
row should be stored on which region server..? )
--
*Regards*
*
Vamshi Krishna
*
g the tires IMO but is still a degenerate case.
> In my opinion 5 is the lowest you should go. You shouldn't draw conclusions
> from inadequate deploys.
>
> On Friday, August 23, 2013, Vamshi Krishna wrote:
>
> > Hello all,
> > I set up a 2 node hbase clust
> <property>
>   <name>hbase.regions.slop</name>
>   <value>0.2</value>
>   <description>Rebalance if any regionserver has average + (average *
>   slop) regions. Default is 20% slop.</description>
> </property>
>
>
>
>
>
>
> Frank Chow
>
> From: Vamshi Krishna
> Date: 2013-08-23 22:51
> To: user; Dhaval Shah
> Sub
to 10MB seems too small.
> Did you mean 10GB?
>
>
> Regards,
> Dhaval
>
>
> ____
> From: Vamshi Krishna
> To: user@hbase.apache.org; zhoushuaifeng
> Sent: Friday, 23 August 2013 9:38 AM
> Subject: Re: Will hbase automatically distrib
UI http://:60010.
All the tables that I created are residing on the same machine-1.
On Fri, Aug 23, 2013 at 7:36 PM, Ted Yu wrote:
> What version of HBase are you using ?
>
> Can you search in master log for balancer related lines ?
>
> Thanks
>
>
> On Fri, Aug 23, 2013 at
fter 5 min.
Could anyone tell me why this is happening?
--
*Regards*
*
Vamshi Krishna
*
t
>
>
>
>
> Frank Chow
--
*Regards*
*
Vamshi Krishna
*
master and region server.
Do we need to distribute the data explicitly by any means?
I guess that should be done automatically by HBase's load balancer, right?
Please, someone help me!
--
*Regards*
*
Vamshi Krishna
*
pe of issues a few months ago and wrote up my
> >>> solutions to the 3 main hbase communication/setup errors I got.
> >>>
> >>> See if this helps
> >>>
> http://jayunit100.blogspot.com/2013/05/debugging-hbase-installation.html
> >>>
next
<property>
  <name>hbase.zookeeper.property.maxClientCnxns</name>
  <value>1024</value>
</property>
<property>
  <name>hbase.coprocessor.user.region.classes</name>
  <value>com.bil.coproc.ColumnAggregationEndpoint</value>
</property>
--
*Regards*
*
Vamshi Krishna
*
Hi all,
I am facing a problem with an HBase region server disconnecting from the master after
some time. I set up an HBase cluster with 2 machines where Machine-1 (M1) is
master and region server and M2 is only a region server.
After running hbase-start.sh, all the daemons start perfectly, but
after some time
I set up an hbase cluster on two machines. One machine has the master as well as a
regionserver and the other has only an RS. After running ./start-hbase.sh all
daemons start perfectly. But the 2nd machine, which runs only an RS, is
getting disconnected after some time, and whatever data I inserted into the
HBase table res
Hi Dan,
I can check this in some time and see what the problem is. I will try to help
you as far as possible.
Regards,
Ram
> From: ramkrishna.vasude...@huawei.com
> To: ram_krish...@hotmail.com
> Subject: FW: Follow-up to regionservers not being online - more logs included
> Date: Fri, 19 Oct 2012
toString() and
> apply some basic regexs for validation. Or you could consider using a
> more structured serialization format like protobufs.
>
> --gh
>
> On Tue, Apr 24, 2012 at 9:35 PM, Vamshi Krishna
> wrote:
> > Hi all , here i am having one basic doubt about const
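A minimal sketch of the validation idea above (column names and pattern illustrative): decode the raw bytes with Bytes.toString and apply a basic regex before trusting the value.

  String raw = Bytes.toString(result.getValue(Bytes.toBytes("cf"),
                                              Bytes.toBytes("cq")));
  if (raw != null && raw.matches("\\d{4}-\\d{2}-\\d{2}")) {
    // value looks like the expected yyyy-MM-dd format
  }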
rg/book.html#regions.arch
>
>
>
>
>
> On 2/14/12 11:06 AM, "Doug Meil" wrote:
>
> >
> >Keys are stored in sorted order, it's basically a binary search.
> >
> >
> >
> >
> >On 2/14/12 9:31 AM, "Vamshi Krishna" wrote:
region boundaries from META.
>
>
>
>
> On 2/13/12 1:46 AM, "Vamshi Krishna" wrote:
>
> >Hi all, i have a small basic doubt regarding get() method which is used in
> >HTable. From the hbase book, under 8.3.Client section, i understood that,
> >when eve
lease..
--
*Regards*
*
Vamshi Krishna
*
download, please send me. Thank you
--
*Regards*
*
Vamshi Krishna
*
>
>
>
> On 1/28/12 6:43 AM, "Ioan Eugen Stan" wrote:
>
> >2012/1/28 Vamshi Krishna :
> >> Hi, here I am trying to read rows from a table and put them to a file
> >> as
> >> it is. For that, my mapper class and run method are as shown be
n(Mapper.java:144)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:621)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:305)
at org.apache.hadoop.mapred.Child.main(Child.java:170)
please somebody help..
--
*Regards*
*
Vamshi Krishna
*
ec 20, 2011 at 8:32 PM, Akhtar Muhammad Din
wrote:
> Krishna,
> You should set auto flush to false to enable batch mode of operation and
> exception is being thrown because batch and results are not of the same
> size. Initialize results like this
> Object[] results = new Object[
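A minimal sketch of the advice above (variable names illustrative): disable auto-flush to buffer client-side, and size the results array to match the action list so batch() does not throw.

  HTable table = new HTable(conf, "my_table");
  table.setAutoFlush(false);
  List<Row> actions = new ArrayList<Row>(puts);  // puts: a List<Put>, assumed
  Object[] results = new Object[actions.size()]; // must equal actions.size()
  table.batch(actions, results);
  table.flushCommits();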
:/usr/local/hadoop-0.20.2$
Actually, some thousands of rows should have been inserted into the new table, but
I could see only 200 rows after the above messages were thrown onto the console.
Please help me understand why it is happening like this. Please, somebody help.
--
*Regards*
*
Vamshi Krishna
*
s : FAILED
Task attempt_201112191543_0001_m_01_0 failed to report status for 600
seconds. Killing!
No tasks of the job are running.
Please help: what could be the reason for this?
--
*Regards*
*
Vamshi Krishna
*
u have added the relevant jars in your classpath correctly.
> Error is : Caused by: java.lang.ClassNotFoundException:
> org.apache.zookeeper.KeeperException
> Your code is not able to find proper jars in path.
>
> Hope this helps!
>
> -Original Message-
> From:
above settings on all the machines of my cluster.
Thank you.
On Wed, Dec 14, 2011 at 8:13 PM, Vamshi Krishna wrote:
> Hi, thank you. All these days I have been coding in Eclipse and trying to run
> the program from Eclipse only, but I never saw the program running on the
> cluster , o
t; in the command-line when
> you want it to run against a cluster.
>
> On 13-Dec-2011, at 8:43 AM, Vamshi Krishna wrote:
>
> > What should I set in the job's classpath? Where and how should I set the class
> path
> > for the job? My requirement is to run the MR jobs
ss(MyHset.class);
job.setMapperClass(setInsertionMapper.class);
...
...
On Mon, Dec 12, 2011 at 11:35 PM, Jean-Daniel Cryans wrote:
> That setting also needs to be in your job's classpath, it won't guess it.
>
> J-D
>
> On Thu, Dec 8, 2011 at 10:14 PM, Vamsh
ning?
On all machines, all daemons are running.
What should I do to run it on the cluster from Eclipse? Please help.
On Fri, Dec 9, 2011 at 11:44 AM, Vamshi Krishna wrote:
> Hi harsh,
> ya, no jobs are seen on that jobtracker page; under RUNNING JOBS it is
> none, under FINISH
..? please help..
On Fri, Dec 9, 2011 at 11:13 PM, Jean-Daniel Cryans wrote:
> You don't need the conf dir in the jar, in fact you really don't want
> it there. I don't know where that alert is coming from, would be nice
> if you gave more details.
>
> J-D
>
are locations inside
DFS.
When I tried to create a jar of my whole Java project from Eclipse, I got an
alert that the conf directory of HBase was not exported to the jar. So how do I
run my program through the command line to insert data into an hbase table? Can
anybody help?
--
*Regards*
*
Vamshi Krishna
*
merely launching the program
> via a LocalJobRunner, and not submitting to the cluster. This is cause of
> improper config setup (you need "mapred.job.tracker" set at minimum, to
> submit a distributed job).
>
> On 08-Dec-2011, at 12:10 PM, Vamshi Krishna wrote:
>
> > Hi all
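A minimal sketch of the fix being described (host names illustrative): point the client configuration at the cluster so the job is submitted to the JobTracker instead of running in the LocalJobRunner.

  Configuration conf = HBaseConfiguration.create();
  conf.set("mapred.job.tracker", "jobtracker-host:54311"); // minimum for remote submit
  conf.set("fs.default.name", "hdfs://namenode-host:9000");
  Job job = new Job(conf, "hbase-mr-job");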
of mapreduce jobs, like map tasks' progress,
etc. What is the problem? How can I see their progress in the browser
while the mapreduce program is running from Eclipse? I am using ubuntu-10.04.
Can anybody help?
--
*Regards*
*
Vamshi Krishna
*
ely identify an RS.
>
> Regards
> Ram
>
> -Original Message-
> From: Vamshi Krishna [mailto:vamshi2...@gmail.com]
> Sent: Tuesday, December 06, 2011 11:56 AM
> To: user@hbase.apache.org
> Subject: what is region server startcode
>
> Hi i want to move some r
>
> Startcode is used to distinguish between the start-ups of the same
> regionserver at different points in time.
> It's the start time of one regionserver.
>
> Jieshan.
>
> -Original Message-
> From: Vamshi Krishna [mailto:vamshi2...@gmail.com]
> Sent: December 6, 2011 14:26
> To: user@
Hi, I want to move some regions of a table from one server to another, but in
the move method arguments, what is the meaning of the regionserver startcode?
Where can I get it? Please, can anybody tell?
--
*Regards*
*
Vamshi Krishna
*
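A minimal sketch of the move call the question is about, assuming the HBaseAdmin API of that era (region hash, host, and startcode illustrative): the destination server is identified as "host,port,startcode", and the startcode, a region server's start timestamp, is visible in the master web UI.

  HBaseAdmin admin = new HBaseAdmin(conf);
  admin.move(Bytes.toBytes("30c4d102a2c8ebb78f1b3d0303bfc58f"), // encoded region name
      Bytes.toBytes("regionserver-1,60020,1323079203402"));     // host,port,startcode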
Dec 2, 2011 at 6:34 AM, Vamshi Krishna
> wrote:
> > I disabled the firewall on all the machines. Then I started
> > hbase (bin/start-hbase.sh); after 2-3 minutes I stopped hbase on the master
> node
> > (bin/stop-hbase.sh). Even then the HRegionserver daemon is running on the region
> >
the other machines to master:6.
>
> J-D
>
> On Thu, Dec 1, 2011 at 6:45 AM, Vamshi Krishna
> wrote:
> > In the logs of the region server machines, I found this error (on
> both
> > regionserver machines)
> >
> > 2011-11-30 14:43:42,447 INFO org.apache
at their logs to figure what's going on.
>
> J-D
>
> On Tue, Nov 29, 2011 at 10:46 PM, Vamshi Krishna
> wrote:
> > hey, sorry for posting multiple times.
> > J-D, as you said, I referred to my regionserver log; there I found
> > Could not resolve
checked the hbase.region.max.filesize; it is 256MB.
And one more thing: is there any minimum size that we can set for both
region size and KeyValue size?
Can anybody help?
--
*Regards*
*
Vamshi Krishna
*
t the master log to see what's going
> on.
>
> J-D
>
> On Mon, Nov 28, 2011 at 10:33 PM, Vamshi Krishna
> wrote:
> > Hi Lars,
> > I am not using Cygwin; I am using 3 ubuntu-10.04 machines.
> > Finally, the problem I mentioned got resolved, i.e. now I can see th
e list of servers to the regionservers file in the
> $HBASE_HOME/conf/ dir? Are you using Cygwin? Or what else is your
> environment?
>
> Lars
>
> On Nov 26, 2011, at 7:37 AM, Vamshi Krishna wrote:
>
> > Hi i am running hbase on 3 machines, on one node master and regionserv
n, Nov 28, 2011 at 9:00 PM, Suraj Varma wrote:
> Ok.
>
> Can you run dos2unix against both your HBASE_HOME/bin and
> HBASE_HOME/conf directory?
>
> After this, restart your cluster and see if you are getting the same issue.
> --Suraj
>
>
> On Sun, Nov 27, 2011 at 10:
eased latencies, etc, etc). So - the cluster
> is unlikely to be stable with such a setup.
>
> I would recommend going with a fully wired setup, if your goal is to
> have a stable hbase cluster. If it is a "at home test cluster", then
> that's fine - but be prepared for f
lhost" works.
>
> Your setup should work for test environments ... for production, the
> standard setup would be to co-locate region servers and data nodes to
> get data locality.
> --Suraj
>
> On Thu, Nov 24, 2011 at 10:51 PM, Vamshi Krishna
> wrote:
> > Hi, i checked
directory,
but I could find {HBASE_HOME}/bin/hbase-daemon.sh on both machines.
In fact, the path of the {HBASE_HOME} folder on each of the respective
machines is the same, i.e.
/home/hduser/Documents/HBASE_SOFTWRAE/hbase-0.90.4
Please, can anybody help?
--
*Regards*
*
Vamshi Krishna
*
tat -anp. I
> would recommend either configuring the /etc/hosts to bind the
> vamshikrishna-desktop and vamshi-laptop hostnames to the 10.0.1.x
> addresses.
>
> -Joey
>
> On Thu, Nov 17, 2011 at 1:40 AM, Vamshi Krishna
> wrote:
> > hi
> > i am working with 2
D
>
> On Thu, Nov 17, 2011 at 6:34 AM, Vamshi Krishna
> wrote:
> > hi
> > i am working with 2 node hbase cluster as shown below
> > On node1 (10.0.1.54) : master node, region server, hadoop namenode,
> hadoop
> > datanode
> > on node2 (10.0.1.55): region s
}/bin/hbase-daemon.sh: No such file
or directory, but I could clearly find {HBASE_HOME}/bin/hbase-daemon.sh.
I don't know what went wrong!
Please, can anybody help?
--
*Regards*
*
Vamshi Krishna
*
ven for any newly created tables also.
Please, can anybody help?
--
*Regards*
*
Vamshi Krishna
*
,
hbase-0.26-transactional.jar to my hbase project build path while I am
working on top of hbase-0.90.4?
--
*Regards*
*
Vamshi Krishna
*
t name}:2888:3888
>
> Make sure that the zookeeper home directory contains a server1 folder with
> a "myid" file whose value is "0"
>
>
> Thanks & Regards,
> B Anil Kumar.
>
>
> On Fri, Nov 11, 2011 at 3:39 PM, Vamshi Krishna &g
uld be the problem? And how do I set the
situation right?
On Thu, Nov 10, 2011 at 3:56 PM, Harsh J wrote:
> Your HDFS isn't accessible, and your HMaster is running in distributed
> mode. Bring up your HDFS and you should be all set!
>
> (Or, tweak your hbase-site.xml to
I run the bin/start-hbase.sh command, the HRegionserver daemon is supposed to
run on the regionserver machine also, but I could not find any such daemon on the
regionserver machine.
Can anybody please help!
--
*Regards*
*
Vamshi Krishna
*
nt.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:558)
at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:171)
at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:145)
at PutExample.main(PutExample.java:19)
Please, anyone, help me: where did I go wrong?
--
*Regards*
*
Vamshi Krishna
*
[INFO] Total time: 1 minute 40 seconds
[INFO] Finished at: Sun Sep 04 11:50:57 IST 2011
[INFO] Final Memory: 2M/15M
[INFO]
vamshi@vamshikrishna-desktop:~/Documents/hbase-core-trunk$
Please can somebody help..
--
*Regards*
*
Vamshi Krishna
*
culty you're having. Typically, this would take
> >>>a
> >>> form like, "I tried doing X, but instead of seeing Y, I saw Z instead".
> >>>The
> >>> more specific you can be with what you already tried and the errors or
> >>> fa
amshi Krishna
*
ients
request is actually handled by the resultant RS.
The *main idea* is that we can get rid of the single point of failure
with respect to -ROOT- stored in a single region.
On Thu, Aug 18, 2011 at 12:13 PM, vamshi krishna wrote:
> Thank you very much Lars..!
>
>
> On Thu, Aug 18,
regard.
If anyone has worked with or has experience with distributed DSs, please help
and we will have a discussion. Thank you.
--
*Regards*
*
Vamshi Krishna
*
move further in my work because of such build errors.
Please, somebody help me configure hadoop with Eclipse. I want to
work with HBase after that.
Thank you
--
*Regards*
*
Vamshi Krishna
*
directory in hbase-site.xml
> (if not, see above link), that's where you will find META and ROOT tables.
>
> (The root directory would be a local directory in local mode, or a
> directory in HDFS).
>
>
> -- Lars
>
>
>
>
> From:
hadoop I should use to run
HBase at the current time - meaning, as per the latest versions, which versions are
compatible?
*
Thank you..
Vamshi Krishna
*
on
> servers.
>
>
> On Thu, Mar 24, 2011 at 5:32 PM, Vivek Krishna wrote:
>
>> I have a total of 10 client nodes with 3-10 threads running on each node.
>> Record size ~1K
>>
>> Viv
>>
>>
>>
>>
>> On Thu, Mar 24, 2011 at 8:28 PM,
No, they are not present in .META.
Thanks,
Murali Krishna
From: Stack
To: user@hbase.apache.org
Sent: Sun, 10 April, 2011 2:49:52 AM
Subject: Re: Old tables in dfs
Generally yes. Are they mentioned in .META. table?
St.Ack
On Sat, Apr 9, 2011 at 4:38 AM