I suggest ROWCOL, because you have many columns to match against in your
second table (the column qualifiers).
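The bloom filter is set per column family. A minimal sketch with the Java
client, assuming a made-up table 'mytable' with a family 'd'; setting the
BLOOMFILTER attribute by its string name mirrors the shell's
BLOOMFILTER => 'ROWCOL' (exact setter names vary across versions):

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class CreateWithRowColBloom {
  public static void main(String[] args) throws Exception {
    // Create a table whose family 'd' keeps a row+column (ROWCOL) bloom filter.
    HBaseAdmin admin = new HBaseAdmin(HBaseConfiguration.create());
    HTableDescriptor table = new HTableDescriptor("mytable");
    HColumnDescriptor family = new HColumnDescriptor("d");
    family.setValue("BLOOMFILTER", "ROWCOL"); // same attribute the shell sets
    table.addFamily(family);
    admin.createTable(table);
  }
}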
-Usman
Should the Bloom filter be ROW or ROWCOL?
Vishal
On Fri, Mar 11, 2011 at 11:44 AM, Lars George wrote:
Hi,
If you expect a lot of misses with that approach then en
I tested by setting VERSIONS => 365 for a column family 'x' and using the
timestamp to store the dates (365 days in a year).
The problem with this setup is that if you do a delete operation for a
timestamp, it marks all cell versions at or older than that timestamp as
deleted.
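In the Java client, the two delete calls differ in exactly this way. A
sketch only; the row key and qualifier 'q' are made up, family 'x' is from
the setup above:

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.util.Bytes;

public class DeleteOneVersion {
  public static void main(String[] args) throws Exception {
    HTable table = new HTable(HBaseConfiguration.create(), "mytable");
    long ts = 1299801600000L; // the one day (version) to remove
    Delete d = new Delete(Bytes.toBytes("rowkey"));
    // deleteColumns(...) removes every version at or older than ts:
    // d.deleteColumns(Bytes.toBytes("x"), Bytes.toBytes("q"), ts);
    // deleteColumn(...) removes only the single version stamped exactly ts:
    d.deleteColumn(Bytes.toBytes("x"), Bytes.toBytes("q"), ts);
    table.delete(d);
  }
}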
This was not desired so we decid
Hi,
Please see my comments inline, and thanks for the tips.
They were very helpful in getting a better understanding of HBase and
schema design.
Regards,
Usman
On Wed, Mar 2, 2011 at 12:22 AM, Usman Waheed wrote:
I want to do this so I can use the timestamp attribute for a cell as a
search
man
Why do you need to specify the timestamp? Why not let the server do it?
Otherwise, the below should work for all but the case where many
clients are writing from different machines with unsynchronized clocks.
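Letting the server assign the timestamp just means leaving it out of the
Put. A sketch with the Java client; the table, family, and qualifier names
are placeholders:

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class PutWithoutTimestamp {
  public static void main(String[] args) throws Exception {
    HTable table = new HTable(HBaseConfiguration.create(), "mytable");
    Put p = new Put(Bytes.toBytes("rowkey"));
    // No timestamp given: the region server stamps the cell with its own clock.
    p.add(Bytes.toBytes("x"), Bytes.toBytes("q"), Bytes.toBytes("value"));
    // Explicit timestamp (the case discussed here):
    // p.add(Bytes.toBytes("x"), Bytes.toBytes("q"), 1299801600000L, Bytes.toBytes("value"));
    table.put(p);
  }
}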
St.Ack
On Tue, Mar 1, 2011 at 3:18 PM, Usman Waheed wrote:
Hi,
I have a test table
Hi,
I have a test table where I have set all cells to keep only one version
(VERSIONS => 1).
I was thinking of using the timestamp value with the row key + column(s)
to retrieve data using getRowWithColumnsTs (Thrift Perl API).
When I insert the data (perform puts) into my test table,
http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Get.html#setTimeRange(long, long)).
Could this be the issue? If you add 1 to your timestamp, do you get the
expected result?
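In Java-client terms, fetching the exact version means an end-exclusive
range, as below. A sketch only; the table, row key, and timestamp are made
up:

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;

public class GetAtTimestamp {
  public static void main(String[] args) throws Exception {
    HTable table = new HTable(HBaseConfiguration.create(), "mytable");
    long ts = 1299801600000L; // the cell version you want
    Get g = new Get(Bytes.toBytes("rowkey"));
    // The range is [min, max): the upper bound is exclusive, hence ts + 1.
    g.setTimeRange(ts, ts + 1);
    Result r = table.get(g);
  }
}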
St.Ack
On Tue, Mar 1, 2011 at 8:50 AM, Usman Waheed wrote:
Hi,
I am using the Thrift API (Perl) to retrieve data
Hi,
I am using the Thrift API (Perl) to retrieve data out of HBase tables, and
my getRow function works fine, but when I use getRowTs, for some odd
reason I am not getting back the record for the timestamp param.
The record exists in the table, and from the hbase shell using the get
command
his script, run:
${HBASE_HOME}/bin/hbase org.jruby.Main rename_table.rb
Because it's JRuby, you must run it in a JVM.
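For example, renaming table 'abc' to 'xyz' (the same arguments used further
down this thread) would look like:

${HBASE_HOME}/bin/hbase org.jruby.Main rename_table.rb abc xyz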
J-D
On Mon, Feb 28, 2011 at 8:45 AM, Stack wrote:
On Mon, Feb 28, 2011 at 6:40 AM, Usman Waheed wrote:
Hi,
Is the table rename not supported in Hbase 0.90.0 at th
Hi,
I have been using the Thrift Perl API to connect to HBase for my web app.
At the moment I only perform random reads and scans based on date ranges
and some other search criteria.
It works, and I am still testing performance.
-Usman
On Mon, Feb 28, 2011 at 2:37 PM, edward choi wrote:
Hi,
Is the table rename not supported in HBase 0.90.0 at the moment?
I tried using rename_table.rb in the bin directory, but it returned the
following errors:
./rename_table.rb abc xyz
./rename_table.rb: line 36: include: command not found
./rename_table.rb: line 37: import: command not found
The error below can be fixed by passing guava-r06.jar via -libjars as follows:
hadoop jar /usr/lib/hbase-0.90.0/hbase-0.90.0.jar importtsv -libjars
/usr/lib/hbase-0.90.0/lib/guava-r06.jar
-Dimporttsv.columns=HBASE_ROW_KEY,metrics:a,metrics:b,metrics:c,metrics:d
-Dimporttsv.bulk.output=/var/data/ou
Hi,
I am getting the following error when trying to run an import job using
hadoop with the importtsv tool. My HADOOP_CLASSPATH is set to the
following in hadoop-env.sh:
export
HADOOP_CLASSPATH=/usr/lib/hbase-0.90.0/lib/hbase-0.90.0.jar:/usr/lib/hbase-0.90.0/lib/zookeeper-3.3.2.jar:/usr/l
Hi,
I would like to set up an HBase table that would provide users the ability
to perform selects only (gets and scans). We don't have a need for users to
perform inserts or updates at the moment, but yes, I will have to
load/insert the data into the tables before users can perform selects.
Hi,
I am a newbie to HBase and am testing on a small 3-node cluster running
Hadoop 0.20.2 and HBase 0.89.
Is there a limit in HBase on how many versions of a cell one can keep
under a given column family?
I understand that each column family can have its own rules, but was
wonder
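For what it's worth, the number of versions kept is indeed a per-family
setting in the Java client as well. A sketch only (it does not answer the
upper-limit question), reusing the family name 'x' and the VERSIONS => 365
setup from earlier in the thread:

import org.apache.hadoop.hbase.HColumnDescriptor;

public class FamilyVersions {
  public static void main(String[] args) {
    // Each column family descriptor carries its own version cap.
    HColumnDescriptor family = new HColumnDescriptor("x");
    family.setMaxVersions(365);
  }
}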