Re: I can't get many versions of the specified column, but only get the latest version of the specified column

2011-02-23 Thread Lars George
What error are you getting? The NPE? As Tatsuya pointed out, you are using the same timestamps: private final long ts2 = ts1 + 100; private final long ts3 = ts1 + 100; That cannot work, you are overwriting cells. Lars On Thu, Feb 24, 2011 at 8:34 AM, 陈加俊 wrote: > HTable
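For context on Lars's point: HBase keeps the versions of one cell keyed by timestamp, so two puts to the same row/column with an equal timestamp leave only a single cell behind. A minimal plain-Java sketch of that keyed-by-timestamp behaviour (a TreeMap stands in for the per-column version map; timestamps and values are illustrative, no HBase cluster is needed):

```java
import java.util.TreeMap;

public class VersionOverwriteDemo {
    public static void main(String[] args) {
        // HBase stores cell versions keyed by timestamp; equal timestamps
        // therefore collapse into one cell. The TreeMap mimics that keying.
        TreeMap<Long, String> versions = new TreeMap<>();
        long ts1 = 1000L;
        long ts2 = ts1 + 100; // a genuinely newer version
        long ts3 = ts1 + 100; // same timestamp as ts2: overwrites it
        versions.put(ts1, "v1");
        versions.put(ts2, "v2");
        versions.put(ts3, "v3");
        System.out.println(versions.size()); // prints 2, not 3
    }
}
```

Giving ts3 a distinct value (e.g. ts1 + 200) yields the expected three versions.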

Re: I can't get many versions of the specified column, but only get the latest version of the specified column

2011-02-23 Thread 陈加俊
The HTable object has not called setAutoFlush; its default value is true on my cluster. So I set it to true as follows, but the error is still the same. public class GetRowVersionsTest extends TestCase { private final byte[] family = Bytes.toBytes("log"); private final byte[] qualifier = Bytes.toBytes("

Re: I can't get many versions of the specified column, but only get the latest version of the specified column

2011-02-23 Thread Ryan Rawson
Does the HTable object have setAutoFlush(false) turned on by any chance? On Wed, Feb 23, 2011 at 11:22 PM, 陈加俊 wrote: > line 89:        final NavigableMap> > familyMap = map.get(family); > map is null , > and strangely  I use r.list() instead, > final List list = r.list(); > r is null ! > > > 201

Re: How to limit the number of logs produced by DailyRollingFileAppender

2011-02-23 Thread 陈加俊
I uncommented MaxBackupIndex and restarted the regionserver, but I get a warning message as follows: starting regionserver, logging to /app/cloud/hbase/bin/../logs/hbase-uuwatch-regionserver-gentoo_uuwatch_183.out log4j:WARN No such property [maxBackupIndex] in org.apache.log4j.DailyRollingFileAppender. On Thu, Feb
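For anyone hitting the same warning: DailyRollingFileAppender genuinely has no maxBackupIndex property, so log4j ignores it. One possible workaround (a sketch; the appender name, path variables, and sizes are placeholders to adapt to your log4j.properties) is to switch to RollingFileAppender, which does support MaxBackupIndex:

```properties
# Hypothetical log4j.properties fragment: RollingFileAppender honours
# MaxBackupIndex, DailyRollingFileAppender does not.
log4j.appender.RFA=org.apache.log4j.RollingFileAppender
log4j.appender.RFA.File=${hbase.log.dir}/${hbase.log.file}
log4j.appender.RFA.MaxFileSize=256MB
log4j.appender.RFA.MaxBackupIndex=10
log4j.appender.RFA.layout=org.apache.log4j.PatternLayout
log4j.appender.RFA.layout.ConversionPattern=%d{ISO8601} %-5p [%t] %c{2}: %m%n
```

The trade-off is that rolling is then by size rather than by day.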

Re: I can't get many versions of the specified column, but only get the latest version of the specified column

2011-02-23 Thread 陈加俊
line 89: final NavigableMap> familyMap = map.get(family); map is null, and strangely, when I use r.list() instead (final List list = r.list();), r is null! 2011/2/24 Ryan Rawson > Which line is line 89? > > Also it's preferable to do: > assertEquals(3, versionMap.size()); > vs: > assertTrue(

Re: Install problem - HBase 0.90.1 cannot connect to zookeeper

2011-02-23 Thread Stack
What is in the .out file? Are you on Windows? If so, check the archives. For example, this looks promising: http://search-hadoop.com/m/719ud2FRxYH1/windows+zookeeper&subj=hbase+setup+windows+7 St.Ack On Wed, Feb 23, 2011 at 10:58 PM, sun sf wrote: > St.Ack > > I found the past question you ha

Re: I can't get many versions of the specified column, but only get the latest version of the specified column

2011-02-23 Thread Ryan Rawson
Which line is line 89? Also it's preferable to do: assertEquals(3, versionMap.size()); vs: assertTrue(versionMap.size() == 3); since the error messages from the former are more descriptive, "expected 3 was 2". Looking at the code, it looks like it should work... On Wed, Feb 23, 2011 at 11:07 PM,
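Ryan's point about failure messages can be seen with a plain-Java stand-in for the two JUnit assertions (JUnit itself is not on the classpath here; the message format merely mimics JUnit's):

```java
public class AssertMessageDemo {
    // Mimics JUnit's assertEquals failure text: it names both values.
    static String checkEquals(int expected, int actual) {
        return expected == actual
                ? "ok"
                : "expected:<" + expected + "> but was:<" + actual + ">";
    }

    // Mimics assertTrue: on failure you only learn the condition was false.
    static String checkTrue(boolean condition) {
        return condition ? "ok" : "assertion failed";
    }

    public static void main(String[] args) {
        int versions = 2; // e.g. versionMap.size() when one version was lost
        System.out.println(checkEquals(3, versions)); // names both 3 and 2
        System.out.println(checkTrue(versions == 3)); // says only "failed"
    }
}
```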

Re: Stargate

2011-02-23 Thread Lars George
Hi Mike, The values are Base64 encoded, so you need to use a decoder. HBase ships with one in the REST package that you can use for example. Lars On Wed, Feb 23, 2011 at 7:22 PM, Mike wrote: > I'm having some issues converting the results of a restful call through > stargate.  I'm returning the
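To illustrate Lars's answer, here is a minimal decoding sketch. It uses java.util.Base64 from the JDK; the decoder HBase bundles in its REST package works the same way. The sample string is made up, not taken from Mike's data:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class StargateValueDecode {
    // Decode a Base64-encoded cell value as returned in Stargate's
    // JSON/XML representations.
    static String decodeValue(String encoded) {
        byte[] raw = Base64.getDecoder().decode(encoded);
        return new String(raw, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        // "aGVsbG8=" is Base64 for "hello" (a hypothetical sample value).
        System.out.println(decodeValue("aGVsbG8="));
    }
}
```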

Re: I can't get many versions of the specified column, but only get the latest version of the specified column

2011-02-23 Thread 陈加俊
This is my test case, but I get an NPE sometimes. java.lang.NullPointerException at com.uuwatch.idm.hbase.GetRowVersionsTest.testGetRowMultipleVersions(GetRowVersionsTest.java:89) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMe

Re: Install problem - HBase 0.90.1 cannot connect to zookeeper

2011-02-23 Thread sun sf
St.Ack I found the past question you have answered. I have checked the out log file and it gives the same errors - http://hbase.apache.org/docs/r0.20.6/cygwi

Re: Install problem - HBase 0.90.1 cannot connect to zookeeper

2011-02-23 Thread sun sf
I only use the default configuration to start. I checked the log but I do not know where the ZooKeeper ensemble error lies. Can you tell me how to check the ZooKeeper ensemble status and how to fix the problem? Thanks in advance. On Thu, Feb 24, 2011 at 3:01 PM, Stack wrote: > Zookeeper ens

Re: Install problem - HBase 0.90.1 cannot connect to zookeeper

2011-02-23 Thread Stack
Zookeeper ensemble is not running. See logs for why. St.Ack On Wed, Feb 23, 2011 at 9:46 PM, sun sf wrote: > Thank you for your quick reply. > > I know there are several different default configurations > between HBase0.90.1 and HBase0.20.6. > > And so I tried pseudo and standalone install, > it

Re: Install problem - HBase 0.90.1 cannot connect to zookeeper

2011-02-23 Thread sun sf
Thank you for your quick reply. I know there are several different default configurations between HBase 0.90.1 and HBase 0.20.6. And so I tried pseudo-distributed and standalone installs; it seems both of them had the same ZooKeeper error. In the standalone, I only added the root.dir to the hbase-site.xml and n

Re: Install problem - HBase 0.90.1 cannot connect to zookeeper

2011-02-23 Thread Stack
On Wed, Feb 23, 2011 at 9:08 PM, sun sf wrote: > > org.apache.hadoop.hbase.zookeeper.MiniZooKeeperCluster > Server localhost:2181 not up java.net.NoRouteToHostException > No route to host MiniZooKeeperCluster.j

Re: Stack Overflow?

2011-02-23 Thread Stack
Hey David: Yeah, a few of us have started to refer to the 'two-week cycle' where it seems the same questions come around again. Karl Fogel's Producing Open Source Software, http://producingoss.com/en/producingoss.pdf, has a good section on this topic. In it he advocates 'Conspicuous Use of Archi

Install problem - HBase 0.90.1 cannot connect to zookeeper

2011-02-23 Thread sun sf
I installed HBase 0.20.6 successfully, but I met the following problem when trying to install HBase 0.90.1. It always says ZooKeeper cannot be connected when we use the same configuration as HBase 0.20.6. At last, I reinstalled CentOS 5.5 and started HBase 0.90.1 in standalone mode; the following erro

Re: HBase 0.90.0 cannot be put more data after running hours

2011-02-23 Thread Schubert Zhang
Currently, with 0.90.1, this issue happens when there are only 8 regions in each RS, 64 regions in total across 8 RSs. The CPU% of the client is very high. On Thu, Feb 24, 2011 at 10:55 AM, Schubert Zhang wrote: > Now, I am trying the 0.90.1, but this issue is still there. > > I attach

Re: Multiple scans vs single scan with filters

2011-02-23 Thread Otis Gospodnetic
Hi, > With a record size of 1k, I'd guesstimate that going with more scans > is going to be better than one big scan. This is because a scan that > filters out data still has to read that data from disk, and 1k rows > are pretty big. Would your answer be different if Alex/you knew if that da

Re: HBase 0.90.0 cannot be put more data after running hours

2011-02-23 Thread Schubert Zhang
On Sat, Jan 29, 2011 at 1:02 AM, Stack wrote: > On Thu, Jan 27, 2011 at 10:33 PM, Schubert Zhang > wrote: > > 1. The .META. table seems ok > > I can read my data table (one thread for reading). > > I can use hbase shell to scan my data table. > > And I can use 1~4 threads to put more

Re: HBase 0.90.0 cannot be put more data after running hours

2011-02-23 Thread Anty
Sorry, the vmstat output for 2) is wrong. 1) when there are only 2 client threads, vmstat output is procs ---memory-- ---swap-- -io --system-- -cpu-- r b swpd free buff cache si so bi bo in cs us sy id wa st 0 0 29236 161 557488 12027072 0 0 19 53 0 0 3 1 97 0 0

Re: HBase 0.90.0 cannot be put more data after running hours

2011-02-23 Thread Anty
1) when there are only 2 client threads, vmstat output is procs ---memory-- ---swap-- -io --system-- -cpu-- r b swpd free buff cache si so bi bo in cs us sy id wa st 0 0 29236 161 557488 12027072 0 0 19 53 0 0 3 1 97 0 0 0 0 29236 1610152 557488 12027076 0 0 0 0

Re: Stack Overflow?

2011-02-23 Thread Otis Gospodnetic
Hi David, When I see people asking questions that others have asked before (and received answers) I tend to point them to those questions/answers via a tool, so they become aware of the tool, hopefully start using it, and thus check before asking next time around. For Lucene, Solr, etc. I poi

Re: I can't get many versions of the specified column, but only get the latest version of the specified column

2011-02-23 Thread Tatsuya Kawano
Hi Jiajun, Make sure you don't have the same timestamp on every version you put; try adding Thread.sleep() in your test code where necessary. You might not want to specify the timestamp yourself, but rather let HBase assign appropriate ones. -- Tatsuya Kawano (Mr.) Tokyo, Japan O

Re: I can't get many versions of the specified column, but only get the latest version of the specified column

2011-02-23 Thread 陈加俊
I will check my code and table descriptions again. And the test case is TestGetRowVersions. I believe that I made a mistake. 2011/2/24 Ryan Rawson > There are test cases for this, the functionality DOES work, something is > up... > > Without full code and full descriptions of your tables, debugg

Re: Number of regions

2011-02-23 Thread Ryan Rawson
There have been threads about this lately, check out the search box on hbase.org which searches the list archives. On Feb 23, 2011 6:56 PM, "Nanheng Wu" wrote: > What are some of the trade-offs of using larger region files and less > regions vs the other way round? Currently each of my host has ~7

Number of regions

2011-02-23 Thread Nanheng Wu
What are some of the trade-offs of using larger region files and fewer regions vs the other way round? Currently each of my hosts has ~700 regions with the default hfile size; is this an acceptable number? (hosts have 16 GB of RAM). Another totally unrelated question: I have Gzip enabled on the hfile

Cannot store multiple cells with a single RESTful 'put' request; java.lang.IllegalArgumentException: argument type mismatch is returned.

2011-02-23 Thread 茅旭峰
Hi, I'm using hbase-0.89.20100924+28. When I was trying to store multiple cells with a single RESTful 'put' request, like curl -v -X PUT -H "Content-Type: text/xml" -d '456jpgyyy' http://10.241.67.22:18080/Tables1/xxx I got 2011-02-24 10:01:16,136 ERROR org.mortbay.log: /Tables1/xxx java.lang.I

Re: I can't get many versions of the specified column, but only get the latest version of the specified column

2011-02-23 Thread Ryan Rawson
There are test cases for this, the functionality DOES work, something is up... Without full code and full descriptions of your tables, debugging is harder than it needs to be. It's probably a simple typo or something, check your code and table descriptions again. Many people rely on the multi ver

Re: I can't get many versions of the specified column, but only get the latest version of the specified column

2011-02-23 Thread 陈加俊
final List list = result.list(); for (final KeyValue it : list) { System.out.println(Bytes.toString(it.getKey())); System.out.println(Bytes.toString(it.getValue())); } I can only get the last version! Why? Is there any testcas

Re: I can't get many versions of the specified column, but only get the latest version of the specified column

2011-02-23 Thread 陈加俊
/** * Create a sorted list of the KeyValue's in this result. * * @return The sorted list of KeyValue's. */ public List list() { if(this.kvs == null) { readFields(); } return isEmpty()? null: Arrays.asList(sorted()); } I will try it . Thank you very much! On Thu,

RE: I can't get many versions of the specified column, but only get the latest version of the specified column

2011-02-23 Thread Buttler, David
Result.list() ? Putting the hbase source into your IDE of choice (yay Eclipse!) is really helpful Dave -Original Message- From: 陈加俊 [mailto:cjjvict...@gmail.com] Sent: Wednesday, February 23, 2011 5:42 PM To: user@hbase.apache.org Cc: Buttler, David Subject: Re: I can't get many versio

Re: I can't get many versions of the specified column, but only get the latest version of the specified column

2011-02-23 Thread 陈加俊
Thank you, David! I altered the table schema as follows: > alter 'cjjIndexPageModify', {NAME => 'log' , VERSIONS => 5 , METHOD => 'add'} How do I iterate over KeyValues? Which method in Result? On Thu, Feb 24, 2011 at 9:27 AM, Buttler, David wrote: > What is your table schema set to? By default

Stack Overflow?

2011-02-23 Thread Buttler, David
Hi all, It seems that we are getting a lot of repeated questions now. Perhaps it would be useful to start migrating the simple questions off to stackoverflow (or whichever stack exchange website is most appropriate), and just pointing people there? Obviously there are still a lot of questions

RE: I can't get many versions of the specified column, but only get the latest version of the specified column

2011-02-23 Thread Buttler, David
What is your table schema set to? By default it holds 3 versions. Also, you might try iterating over KeyValues instead of using the Map, since you don't really care about the organization, just the time. Dave -Original Message- From: 陈加俊 [mailto:cjjvict...@gmail.com] Sent: Wednesday, Februa

Re: I can't get many versions of the specified column, but only get the latest version of the specified column

2011-02-23 Thread 陈加俊
I execute it five times at different times. //put data by version final Put p = new Put(key); // key final long ts = System.currentTimeMillis(); p.add(FAMILY, q1, ts,v1); p.add(FAMILY, q2, ts,v2); p.add(FAMILY, q3, ts,v3); table.put(p); So I can get five versions, right

Re: TableInputFormat configuration problems with 0.90

2011-02-23 Thread Jean-Daniel Cryans
Yeah it should, also I'm pretty sure you're right to say that this regression comes from HBASE-2036... would you mind opening a jira? Thanks for the report and the digging Dan! J-D On Wed, Feb 23, 2011 at 3:30 PM, Dan Harvey wrote: > Ah ok, most of the time we were using the default Hadoop conf

Re: async table updates?

2011-02-23 Thread Dan Harvey
You could use something like the RowLog library from the Lily project to queue the processing from one table to the other every time you do a put in HBase. As the queue is in HBase itself too, it's all atomic; I'm sure you could get it working with minimal lag between the two tables. http://ww

Re: TableInputFormat configuration problems with 0.90

2011-02-23 Thread Dan Harvey
Ah ok, most of the time we were using the default Hadoop configuration object and not the HBase one. I guess that's a change between 0.20 and 0.90? Would it not make sense for the TableMapReduceUtil class to do that for you, as you'll need it in every HBase MapReduce job? Anyway, I guess we s

Re: which hadoop and zookeeper version should I use with hbase 0.90.1

2011-02-23 Thread Ryan Rawson
We have also found that the Sun JDK is the one to use. Certain versions have substantial bugs that we in fact revealed. I don't know of any serious deployment that is using OpenJDK. I am aware that the source of OpenJDK and the Sun HotSpot JVM are very, very similar if not the same, but sub-patch rele

Re: Multiple scans vs single scan with filters

2011-02-23 Thread Ryan Rawson
With a record size of 1k, I'd guesstimate that going with more scans is going to be better than one big scan. This is because a scan that filters out data still has to read that data from disk, and 1k rows are pretty big. But nothing will beat hard numbers. Build a test setup and let us know whic

Multiple scans vs single scan with filters

2011-02-23 Thread Alex Baranau
Hello, it would be great if somebody could share thoughts/ideas/some numbers on the following problem. We have a reporting app. To fetch data for some chart/report we currently use multiple scans, usually 10-50. We fetch about 100 records with each scan, which we use to construct a report. I've revise

Re: async table updates?

2011-02-23 Thread Ryan Rawson
In Thrift there is a 'oneway' or 'async' or 'fire and forget' call type. I can't recommend those kinds of approaches, since once your system runs into problems you have no feedback. So if you are asking for a one-shot, no-reply "assume it worked" call, we don't have one (nor would I wish that hell

async table updates?

2011-02-23 Thread Vishal Kapoor
I have two tables called LIVE and MASTER. LIVE reports on the MASTER activity, and I need to process records in LIVE in near real time (some business logic). If I need to store the activity of entities reported by LIVE rows in MASTER, say in ACTIVITY:LAST_REPORTED, I could process my data in LIVE a

Re: which hadoop and zookeeper version should I use with hbase 0.90.1

2011-02-23 Thread Mike Spreitzer
I have now installed it the recommended way: build Hadoop branch-0.20-append, install and configure it, then smash its Hadoop core jar into the HBase lib/. Very light testing revealed no problems. But the testing is still so limited that I do not recommend drawing any conclusions about reliabilit

Re: when does put return to the caller?

2011-02-23 Thread Ryan Rawson
There is a batch put call, should be trivial to use some kind of background thread to invoke callbacks when it returns. Check out the HTable API, javadoc, etc. All available via http://hbase.org ! -ryan On Wed, Feb 23, 2011 at 1:25 PM, Hiller, Dean (Contractor) wrote: > I was wonder if put re
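Ryan's suggestion, a background thread that invokes a callback once the batch put returns, can be sketched in plain Java. Here batchPut is a hypothetical stand-in for a blocking client call such as HTable.put(List<Put>); no HBase classes are used:

```java
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class AsyncPutSketch {
    // Hypothetical stand-in for a blocking batch put; swap in the real
    // client call in application code.
    static boolean batchPut(List<String> rows) {
        return !rows.isEmpty(); // pretend a non-empty batch succeeds
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        List<String> rows = Arrays.asList("row1", "row2");
        // Run the blocking put on a background thread; the caller continues
        // and reacts (the "callback") once the Future resolves.
        Future<Boolean> acked = pool.submit(() -> batchPut(rows));
        System.out.println(acked.get() ? "put acked" : "put failed");
        pool.shutdown();
    }
}
```

Unlike a fire-and-forget call, the Future still surfaces failures, which is the feedback Ryan argues you should not give up.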

when does put return to the caller?

2011-02-23 Thread Hiller, Dean (Contractor)
I was wondering if put returns after writing the data into memory on two out of the three nodes, letting my client continue, so we don't have to wait for the memory to then go to disk. After all, if it is replicated, we probably don't need to wait for it to be written to disk (i.e., kind of like the in-memor

Re: TableInputFormat configuration problems with 0.90

2011-02-23 Thread Jean-Daniel Cryans
How do you create the configuration object Dan? Are you doing: Configuration conf = HBaseConfiguration.create(); Job job = new Job(conf, "somename"); or are you just creating a normal Configuration? BTW the code I wrote is what I expect people do and what I'm doing myself. J-D On Wed, Feb 23,

Re: table creation is failing now and then (CDH3b3)

2011-02-23 Thread Ryan Rawson
You should consider upgrading to hbase 0.90.1, a lot of these kinds of issues were fixed. -ryan On Wed, Feb 23, 2011 at 12:02 PM, Dmitriy Lyubimov wrote: > Hi all, > > from time to time we come to a sitation where .META. table seems to be stuck > in some corrupted state. > In particular, attempt

table creation is failing now and then (CDH3b3)

2011-02-23 Thread Dmitriy Lyubimov
Hi all, from time to time we come to a situation where the .META. table seems to be stuck in some corrupted state. In particular, attempts to create more tables cause ERROR: org.apache.hadoop.hbase.client.NoServerForRegionException: No server address listed in .META. for region LEAD_DATA,,129848469894

Re: ERROR zookeeper.ZKConfig: no clientPort found in zoo.cfg

2011-02-23 Thread Jean-Daniel Cryans
I think I solved the problem in Dan's "TableInputFormat configuration problems with 0.90" thread, you need to pass a Configuration object to Job that was created using HBaseConfiguration.create(). J-D On Wed, Feb 23, 2011 at 2:20 AM, Cavus,M.,Fa. Post Direkt wrote: > Hi Jean-Daniel, > > I've onl

Re: huge .oldlogs

2011-02-23 Thread charan kumar
Excellent! Thank you J-D On Wed, Feb 23, 2011 at 11:45 AM, Jean-Daniel Cryans wrote: > Yes, you can delete the content of the folder (not the folder itself) > safely. > > J-D > > On Wed, Feb 23, 2011 at 11:37 AM, charan kumar > wrote: > > I have been inserting a ton of data for the past few days

Re: huge .oldlogs

2011-02-23 Thread Jean-Daniel Cryans
Yes, you can delete the content of the folder (not the folder itself) safely. J-D On Wed, Feb 23, 2011 at 11:37 AM, charan kumar wrote: > I have been inserting a ton of data for the past few days. This looks like > the issue. > > If the issue is related to that, can I delete the .oldlogs folder

Re: huge .oldlogs

2011-02-23 Thread charan kumar
I have been inserting a ton of data for the past few days. This looks like the issue. If the issue is related to that, can I delete the .oldlogs folder without causing any issues? I will also look into upgrading.. On Wed, Feb 23, 2011 at 11:23 AM, Jean-Daniel Cryans wrote: > I'll have to trust

Re: Trying to contact region "Some region"

2011-02-23 Thread Ryan Rawson
We fixed a lot of the exception handling in 0.90. The exception text is much better. Check it out! -ryan On Wed, Feb 23, 2011 at 11:18 AM, Jean-Daniel Cryans wrote: > It could be due to slow splits, heavy GC, etc. Make sure your machines > don't swap at all, that HBase has plenty of memory, tha

Re: huge .oldlogs

2011-02-23 Thread Jean-Daniel Cryans
I'll have to trust you on that :) The other possible situation is that you are inserting a ton of data and logs are generated faster than they get cleaned. 0.90.0 has a limiter that was later removed in 0.90.1 by https://issues.apache.org/jira/browse/HBASE-3501 so you should upgrade and see if it

Re: huge .oldlogs

2011-02-23 Thread Ted Yu
Please look for other exceptions. I have been stress testing 0.90.1 and my .oldlogs folder is empty. On Wed, Feb 23, 2011 at 11:18 AM, charan kumar wrote: > Hi J-D, > > There are no NPE's in the log. > > Thanks, > Charan > > On Wed, Feb 23, 2011 at 11:04 AM, Jean-Daniel Cryans >wrote: > > > Ch

Re: Trying to contact region "Some region"

2011-02-23 Thread Jean-Daniel Cryans
It could be due to slow splits, heavy GC, etc. Make sure your machines don't swap at all, that HBase has plenty of memory, and that you're not trying to use more CPUs than your machines actually have (like setting 4 maps on a 4-core machine when also using hbase), etc. Also upgrading to 0.90.1 will h

Re: huge .oldlogs

2011-02-23 Thread charan kumar
Hi J-D, There are no NPE's in the log. Thanks, Charan On Wed, Feb 23, 2011 at 11:04 AM, Jean-Daniel Cryans wrote: > Check you master log, if you see a lot of NPEs then it means you have > an old hbase-default.xml lying around. > > J-D > > On Wed, Feb 23, 2011 at 10:58 AM, charan kumar > wrot

Re: huge .oldlogs

2011-02-23 Thread Jean-Daniel Cryans
Check your master log; if you see a lot of NPEs then it means you have an old hbase-default.xml lying around. J-D On Wed, Feb 23, 2011 at 10:58 AM, charan kumar wrote: > Hello, > >   I was wondering, if I can safely remove the .oldlogs folder. The table > data I have has 1 TB , where as 2.5 TB (w

huge .oldlogs

2011-02-23 Thread charan kumar
Hello, I was wondering if I can safely remove the .oldlogs folder. My table data is 1 TB, whereas the .oldlogs folder is 2.5 TB (7.5 TB with replication). I am using hbase-0.90.0 with hadoop-append. Thanks, Charan

Re: hbase table creation

2011-02-23 Thread Jean-Daniel Cryans
I already answered your question: http://search-hadoop.com/m/MrtFB1ctLFT J-D On Tue, Feb 22, 2011 at 8:40 PM, hbase_user wrote: > > > > hi, > I am new to hbase and hadoop. Any how i have succeeded in setting up a > hadoop cluster which consists of 3 machines. Now i need some help on > building u

Stargate

2011-02-23 Thread Mike
I'm having some issues converting the results of a RESTful call through Stargate. I'm returning the data as a JSON representation, which appears to work fine as it returns the desired fields: JsonRepresentation jr = new JsonRepresentation(resource.get(MediaType.APPLICATION_JSON)); When I parse th

Re: which hadoop and zookeeper version should I use with hbase 0.90.1

2011-02-23 Thread Stack
On Wed, Feb 23, 2011 at 2:14 AM, Oleg Ruchovets wrote: > > -- In case I found bugs or problems , where I am going to post the questions > ? To this list. > -- Which of it is developed for  hbase? Simply  As I understand it is 3 > different branches which developed by 3 different organizations (

Re: I can't get many versions of the specified column, but only get the latest version of the specified column

2011-02-23 Thread Stack
What do you get for a result? You are only entering a single version of each column: a single version of FAMILY:q1, a single version of FAMILY:q2, and a single version of FAMILY:q3. St.Ack On Wed, Feb 23, 2011 at 2:54 AM, 陈加俊 wrote: > I can't get many versions of the specified column,but only get the latest > versi

Re: TableInputFormat configuration problems with 0.90

2011-02-23 Thread Dan
Other than patching the TableInputFormat class, if you call HBaseConfiguration.addHbaseResources(job.getConfiguration()); on the job whilst you're setting up the MapReduce job, it will add the needed configuration to Hadoop's configuration class. On Wed, Feb 23, 2011 at 4:34 PM, Cavus,M.,Fa. Post

Re: TableInputFormat configuration problems with 0.90

2011-02-23 Thread Dan
Or the other way would be adding the HBase configs to the Hadoop config, which I think may be what is intended. If I do it whilst I'm setting up the job with HBaseConfiguration.addHbaseResources() it works fine; should TableMapReduceUtil.initTableMapperJob do this for you? I think this was the

TableInputFormat configuration problems with 0.90

2011-02-23 Thread Dan
Hey, I'm just testing our code to move over to 0.90 and I'm finding some issues with the map/reduce jobs we've written using TableInputFormat. We set up the jobs using TableMapReduceUtil.initTableMapperJob(..); which worked fine in 0.20.6 but now throws the following errors when I try to run them

I can't get many versions of the specified column, but only get the latest version of the specified column

2011-02-23 Thread 陈加俊
I can't get many versions of the specified column, but only get the latest version of the specified column. Can anyone help me? //put data by version final Put p = new Put(key); // key final long ts = System.currentTimeMillis(); p.add(FAMILY, q1, ts,v1); p.add(FAMILY, q2, ts,v

Re: which hadoop and zookeeper version should I use with hbase 0.90.1

2011-02-23 Thread Oleg Ruchovets
I found a couple of hadoop 0.20.0 links: 1) http://svn.apache.org/viewvc/hadoop/common/branches/branch-0.20-append/ 2) https://github.com/facebook/hadoop-20-append 3) https://docs.