But I was just thinking about efficiency. Why does HBase not directly
write a tombstone for the row key instead of one for each cell?
regards
Yong
On Sat, Nov 26, 2011 at 8:11 AM, Jahangir Mohammed
wrote:
> Tombstone. Same as cell.
>
> Thanks,
> Jahangir Mohammed.
>
> On Sat, Nov 26, 2011 at 1:14 AM,
Since you say you see blocks getting lost when it's IO busy/bound, I very much
think that xceivers has been set to a low value. Raise its value.
Thanks,
Jahangir Mohammed.
On Sat, Nov 26, 2011 at 2:21 AM, Jahangir Mohammed
wrote:
> What is dfs.datanode.max.xceivers set to?
>
>
>
> On Sat, Nov 26, 201
What is dfs.datanode.max.xceivers set to?
On Sat, Nov 26, 2011 at 2:17 AM, Harsh J wrote:
> Ah wait, my bad. Do not raise dfs.replication.min when using HBase - it
> can cause RSes to go down if a minimum block count > 1 cannot be completely
> guaranteed. As a result, close() on files fails to work and bloc
Ah wait, my bad. Do not raise dfs.replication.min when using HBase - it can
cause RSes to go down if a minimum block count > 1 cannot be completely
guaranteed. As a result, close() on files fails to work and blocks until the
replicas are available to satisfy dfs.replication.min - and thereby cause thing
You can set it in hbase-site.xml:
http://hbase.apache.org/book.html#hdfs_client_conf
On 26-Nov-2011, at 12:14 PM, Gaojinchao wrote:
> When HBase uses the HDFS file system, how do we set "dfs.replication.min"?
> Who can share relevant experience?
> Currently in our environment, we use the default value
Tombstone. Same as cell.
Thanks,
Jahangir Mohammed.
On Sat, Nov 26, 2011 at 1:14 AM, yonghu wrote:
> hello,
>
> I read http://hbase.apache.org/book/versions.html and have a question
> about the delete operation. As it mentions, the user can delete a whole row
> or delete a data version of a cell. T
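A minimal sketch of the two delete flavours discussed above, using the 0.90-era
HBase Java client; the table, family, qualifier, and row names are made up for
illustration. Both forms only write tombstone markers, and the data is physically
removed at the next major compaction.

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.util.Bytes;

public class DeleteExample {
    public static void main(String[] args) throws Exception {
        HTable table = new HTable(HBaseConfiguration.create(), "mytable");

        // Delete a whole row: tombstone markers covering the row are written;
        // the existing cells stay on disk until the next major compaction.
        Delete rowDelete = new Delete(Bytes.toBytes("row1"));
        table.delete(rowDelete);

        // Delete one specific version of one cell: a tombstone is written for
        // exactly that (row, family, qualifier, timestamp) coordinate.
        Delete versionDelete = new Delete(Bytes.toBytes("row1"));
        versionDelete.deleteColumn(Bytes.toBytes("cf"), Bytes.toBytes("q"), 1322300000000L);
        table.delete(versionDelete);

        table.close();
    }
}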
The setting is available in hdfs-site.xml.
The default replication is 3; dfs.replication.min is used only when the
namenode is in safe mode, i.e. on startup, and it leaves that state once all
blocks are reported by the datanodes.
Thanks,
Jahangir Mohammed
On Sat, Nov 26, 2011 at 1:44 AM, Gaojinchao wrote:
When HBase uses the HDFS file system, how do we set "dfs.replication.min"?
Who can share relevant experience?
Currently in our environment, we use the default values:
dfs.replication: 3
dfs.replication.min: 1
I found some blocks get lost when the IO is very busy.
Hi, I am running HBase on 3 machines: on one node a master and a regionserver,
and on the other two nodes regionservers. I ran bin/start-hbase.sh on the
master, and all the HBase daemons are running on the master node (HMaster,
HQuorumPeer, HRegionServer), but on the other regionserver machines I could
not find any hbase daemon
hello,
I read http://hbase.apache.org/book/versions.html and have a question about
the delete operation. As it mentions, the user can delete a whole row or delete
a data version of a cell. The delete operation on a data version of a cell
just writes a tombstone marker for that version. I want to know h
Hi All,
I created an HBase table with the following schema:
<row key> --> {<column family>: [<qualifier>, <qualifier>, ...]}
Row record sample:
<"1"> {<"cf_01">: [<"qualifier_01">, <"qualifier_02">, ...]}
I wrote the following code to retrieve all the qualifier names associated
with one TRowResult. Here is the code:
$rowKey = "1";
$rowResults =
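For comparison, a rough sketch of the same task (listing every qualifier stored
under one row) with the native HBase Java client rather than PHP/Thrift; the
table name "mytable" is an assumption, and "cf_01" is the family from the sample
above.

import java.util.NavigableMap;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;

public class ListQualifiers {
    public static void main(String[] args) throws Exception {
        HTable table = new HTable(HBaseConfiguration.create(), "mytable");

        // Fetch the row with key "1" and walk the qualifier -> value map
        // of the "cf_01" column family.
        Result result = table.get(new Get(Bytes.toBytes("1")));
        NavigableMap<byte[], byte[]> familyMap = result.getFamilyMap(Bytes.toBytes("cf_01"));
        if (familyMap != null) {
            for (byte[] qualifier : familyMap.keySet()) {
                System.out.println(Bytes.toString(qualifier));
            }
        }

        table.close();
    }
}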
It is easy enough to backport to HBase, if you check the code.
Or you can refer to the Cloudera version as well.
-----Original Message-----
From: saurabh@gmail.com [mailto:saurabh@gmail.com] On Behalf Of Sam Seigal
Sent: November 26, 2011 11:48
To: user@hbase.apache.org
Subject: Re: snappy compression
Are there any concerns
Are there any concerns in applying the SNAPPY patch @
https://issues.apache.org/jira/browse/HBASE-3691 to 0.90.3?
2011/11/25 Gaojinchao :
> You can search the mailing list for the topic "Snappy for 0.90.4".
>
>
> -----Original Message-----
> From: saurabh@gmail.com [mailto:saurabh@gmail.com] On Behalf Of Sam Seigal
> Sent
You can search the mailing list for the topic "Snappy for 0.90.4".
-----Original Message-----
From: saurabh@gmail.com [mailto:saurabh@gmail.com] On Behalf Of Sam Seigal
Sent: November 26, 2011 11:29
To: user@hbase.apache.org
Subject: snappy compression
Hi,
The Compression.Algorithm enum does not have "SNAPPY" as an option in
Hba
Hi,
The Compression.Algorithm enum does not have "SNAPPY" as an option in
HBase 0.90.3 (the version I am on). How can I create a table with
SNAPPY compression via code? Is this possible?
HColumnDescriptor.setCompressionType() takes the Algorithm enumeration as
a parameter.
Thanks,
Sam
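Assuming a build that already carries HBASE-3691 (0.92, or a backported/Cloudera
0.90 build as suggested above), creating such a table from code would look
roughly like the sketch below; the table and family names are illustrative, and
the Snappy native libraries have to be installed on the region servers.

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.io.hfile.Compression;

public class CreateSnappyTable {
    public static void main(String[] args) throws Exception {
        HBaseAdmin admin = new HBaseAdmin(HBaseConfiguration.create());

        HTableDescriptor desc = new HTableDescriptor("mytable");
        HColumnDescriptor family = new HColumnDescriptor("cf");
        // SNAPPY only exists in the Algorithm enum once HBASE-3691 is in the build.
        family.setCompressionType(Compression.Algorithm.SNAPPY);
        desc.addFamily(family);

        admin.createTable(desc);
    }
}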
There is a difference: start-all.sh uses SSH to log into your machine to
execute the hbase master start command. This means the environment is set up
differently. When you run the command yourself, as you did in the latter case,
the current env is used and that seems to have the proper conf