Hi Michael & Mohammad,
I am aware of the alter table command and of doing the same through the Java
API. But I want to add the coprocessor from the HBase shell while running the
create table command. Something like:
create 'test', {NAME => 'cf', COMPRESSION => 'SNAPPY'}, COPROCESSOR => 'example.coproc'
In my opi
It's a simple kill...
Scan is done using startrow and stoprow:
Scan scan = new Scan(Bytes.toBytes("adidas"), Bytes.toBytes("adidas1"));
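For context, an HBase scan range treats the start row as inclusive and the stop row as exclusive, comparing row keys lexicographically; appending a suffix such as "1" to the prefix is a common way to bound a prefix scan. A toy sketch of those range semantics in plain Java (no HBase dependency — the TreeMap merely stands in for a sorted table; the row keys are illustrative):

```java
import java.util.SortedMap;
import java.util.TreeMap;

public class RangeScanSketch {
    public static void main(String[] args) {
        // A sorted map stands in for an HBase table keyed by row key.
        TreeMap<String, String> table = new TreeMap<>();
        table.put("adidas", "v1");
        table.put("adidas-shoe", "v2");
        table.put("adidas0", "v3");
        table.put("nike", "v4");

        // Like new Scan(startRow, stopRow): start inclusive, stop exclusive.
        SortedMap<String, String> range = table.subMap("adidas", "adidas1");

        // Every key beginning with "adidas" sorts below "adidas1",
        // so all "adidas*" rows are included and "nike" is excluded.
        System.out.println(range.keySet()); // prints [adidas, adidas-shoe, adidas0]
    }
}
```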
Our cluster size is 15. The load average I see on the master is 78%... It
is not that overloaded, but writes are happening in the cluster...
Thanks
Kiran
On We
My pleasure :-)
--Honghua
From: Kyle Lin [kylelin2...@gmail.com]
Sent: 13 June 2013 10:41
To: user@hbase.apache.org
Subject: Re: Reply: Scanning not show the correct value of latest version
Hello Hong Hua
I've done some experiments and really figured out the rule of deletion. Thanks
for your explanation.
2013/6/12 冯宏华
> hi, Kyle
>
> This is NOT a bug. There are three types of delete: Delete, DeleteColumn,
> DeleteFamily; Delete is actually DeleteCell which can only mask(delete) a
> cell
Hello Anil and Michael,
As of hbase-0.94.4 it is not possible to create a table with a co-proc
using the create command through the shell, though alter has the option to add
a co-proc:
hbase> alter 't1', METHOD => 'table_att',
'coprocessor'=>'hdfs:///foo.jar|com.foo.FooRegionObserver|1001|arg1=1,arg2
Depending on the version of HBase and thus HBase Shell, what does the help
command tell you?
If the alter table doesn't have it as an option... then I'd say you're better
off writing Java.
On Jun 12, 2013, at 7:00 PM, anil gupta wrote:
> Hi All,
>
> I am using hbase(0.94.2) shell to create t
Hi All,
I am using the hbase (0.94.2) shell to create tables that use
coprocessors. I have added coprocessors to existing tables by using the
HBaseAdmin API. But now I need to add them while creating the
table.
As per the following link I can alter a table and add a coproc. But I
cannot f
Hi all,
I have just released HappyBase 0.6. This release improves the exception
handling in the connection pool that was added in HappyBase 0.5.
Release notes:
http://happybase.readthedocs.org/en/latest/news.html
Documentation:
http://happybase.readthedocs.org/
Source code:
https://git
Yeah, it should not block the other regions.
For the region server, was it a kill -9 or a simple kill (the former
triggers a recovery, the latter closes the regions before stopping the
process)?
How do you select the scan scope? With stop/start rows?
Can you share the client code you're using?
You may run into OOM when doing compaction.
On Wed, Jun 12, 2013 at 10:14 AM, Rahul Ravindran wrote:
> Hello,
> I am trying to understand the downsides of having a large number of hfiles
> by having a large hbase.hstore.compactionThreshold
>
> This delays major compaction. However, the amount
Yes, we killed the region server but the datanode is still running on the node...
Sample test scenario: Assume I have a table with pre-splits a up to z (about
26 regions). I brought down the region server purposefully, with regions having
prefixes c and d. Then I used the client API to scan data from regions with
You can configure the property below to a higher value to close more regions
at a time:
hbase.regionserver.executor.closeregion.threads (default: 3)
On Wed, Jun 12, 2013 at 7:38 PM, Nicolas Liochon wrote:
> What was your test exactly? You killed -9 a region server but kept the
> datanode alive?
> Could you d
Hello,
I am trying to understand the downsides of having a large number of hfiles by
having a large hbase.hstore.compactionThreshold.
This delays major compaction. However, the amount of data that needs to be
read and re-written as a single hfile during major compaction will remain the
same un
What was your test exactly? You killed -9 a region server but kept the
datanode alive?
Could you detail the queries you were doing?
On Wed, Jun 12, 2013 at 2:10 PM, kiran wrote:
> It is not possible for us to migrate to new version immediately.
>
> @Anoop we purposefully brought down one region
Apologies. The Subject was supposed to be *Hbase Monitoring using JMX*
On Wed, Jun 12, 2013 at 1:00 PM, Hanish Bansal <
hanish.bansal.agar...@gmail.com> wrote:
> Hi
>
> I am trying to monitor HBase using JMX.
>
> I enabled JMX using the *$HBASE_HOME/conf/hbase-env.sh* configuration file
> by adding
Hi
I am trying to monitor HBase using JMX.
I enabled JMX using the *$HBASE_HOME/conf/hbase-env.sh* configuration file by
adding the following lines:
export HBASE_JMX_BASE="-Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false"
export HBASE_MASTER_OPTS="$HBAS
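For reference, the stock hbase-env.sh template completes these settings roughly as follows; the port numbers below are assumptions (commonly used defaults), not values taken from this thread:

```shell
# Hedged sketch of a complete JMX setup in hbase-env.sh.
# The ports (10101/10102) are assumed — pick free ports on your cluster.
export HBASE_JMX_BASE="-Dcom.sun.management.jmxremote.ssl=false \
  -Dcom.sun.management.jmxremote.authenticate=false"
export HBASE_MASTER_OPTS="$HBASE_JMX_BASE -Dcom.sun.management.jmxremote.port=10101"
export HBASE_REGIONSERVER_OPTS="$HBASE_JMX_BASE -Dcom.sun.management.jmxremote.port=10102"
```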
It is not possible for us to migrate to a new version immediately.
@Anoop: we purposefully brought down one regionserver, then we observed that
the website was taking too much time to respond. We observed the pattern for
about 5 minutes, until the regions were relocated.
Also we issued queries in our website taki
To add to Andrew's point...
There are now disks which are a hybrid of flash and spinning media, so your
writes are faster since they are buffered into flash, and your reads may be
faster since they too could be cached in flash. (YMMV because of random
access vs sequential access reads.)
hi, Kyle
This is NOT a bug. There are three types of delete: Delete, DeleteColumn,
DeleteFamily; Delete is actually DeleteCell which can only mask(delete) a cell
with exactly the SAME timestamp, in your below case, "* Row1
column=cf:c2, timestamp=1370935373545, ty
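The masking rule described above — a plain Delete (DeleteCell) masks only the cell with exactly the same timestamp, whereas a DeleteColumn masks every version at or below its timestamp — can be modeled as a toy sketch in plain Java (no HBase dependency; the timestamps are illustrative and DeleteFamily is omitted):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class DeleteMaskSketch {
    // Timestamps of the stored versions of one cell, newest first.
    static final List<Long> VERSIONS = Arrays.asList(300L, 200L, 100L);

    // Delete (DeleteCell): masks only the version with exactly this timestamp.
    static List<Long> afterDeleteCell(long ts) {
        List<Long> out = new ArrayList<>();
        for (long v : VERSIONS) if (v != ts) out.add(v);
        return out;
    }

    // DeleteColumn: masks every version with timestamp <= ts.
    static List<Long> afterDeleteColumn(long ts) {
        List<Long> out = new ArrayList<>();
        for (long v : VERSIONS) if (v > ts) out.add(v);
        return out;
    }

    public static void main(String[] args) {
        System.out.println(afterDeleteCell(200L));   // prints [300, 100]
        System.out.println(afterDeleteColumn(200L)); // prints [300]
    }
}
```

This is why, after a plain Delete at timestamp T, older versions of the cell can reappear in a scan, which is the behavior the thread is discussing.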