This is the generated output. What now? How can I recover data?
# > hbase hbck
12/04/03 08:28:21 INFO zookeeper.ZooKeeper: Client
environment:zookeeper.version=3.4.2-1221870, built on 12/21/2011 20:46 GMT
12/04/03 08:28:21 INFO zookeeper.ZooKeeper: Client environment:host.name
=dwilyast02
12/
Hi,
I have two regionservers and two tables with 10 regions each. On startup,
the first table's 10 regions are assigned to the first RS and the second
table's regions to the second RS. So when I use a coprocessor, it is not
executed on both RSs. What could the problem be?
--
Regards,
Balaji,K
On Mon, Apr 2, 2012 at 8:19 PM, Jonathan Hsieh wrote:
> I'm in the process of testing a hypothesis Todd suggested
> and will share results after test is done.
>
What is the hypothesis?
St.Ack
The interesting point I didn't mention from my simplistic tests is that
these slowdowns were present when using 0.92ish hbase on top of cdh3u3 hdfs
(the old-school hadoop 0.20.x based hdfs; it didn't even use a hadoop 23
based hdfs). I'm in the process of testing a hypothesis Todd suggested
an
Jon,
we had a fair few long pauses. Our test tool gave us latency figures, and
we saw a lot of requests taking much longer than they should have.
Unfortunately we didn't hold onto our logs from the PerformanceEvaluation runs.
Also I would note that PerformanceEvaluation internally disables
autoFlush, so it do
Hi Alok, please refer to my previous post where I detailed some of the
stuff we did.
At this point, I'm unsure if it is actually possible to get good
autoFlushed throughput with 0.23; we weren't able to, and switched back
to 0.20.2.
If you want to persevere however, please let us know if you make a
Hi guys, the conversation went off the list briefly as I re-sent stack
dumps to Stack. We've moved back to hdfs 0.20.2 but want to post this
back here and try to summarize events as well as our experiences with
0.23 and concerns.
Quick summary: after having some issues with 0.20.2 (since resolved),
we
Juhani,
Have you looked at any of the logs from your perf runs? Can you try
running HBase's performance evaluation with debug comments on? I'd like
to know if what I'm seeing is the same as what you see.
I've started running some of these and have encountered what seems to be
networking code issues (S
Thanks for the suggestion, Sandy. I will let you know the outcome once I run
the job.
On Mon, Apr 2, 2012 at 3:26 PM, Sandy Pratt wrote:
> It might work to set the property as final on the server side, so that
> clients can't override it:
>
>
>mapred.reduce.tasks.speculati
It might work to set the property as final on the server side, so that clients
can't override it:
<property>
  <name>mapred.reduce.tasks.speculative.execution</name>
  <value>false</value>
  <final>true</final>
  <description>If true, then multiple instances of some reduce tasks</description>
</property>
On Mon, Apr 2, 2012 at 1:41 PM, Jean-Daniel Cryans wrote:
> Decrease *hbase.hregion.memstore.flush.size?
> >
>
> Even if you decrease it enough so that you don't hit the "too many hlogs"
> limit, you'll still end up flushing tiny files, which will trigger
> compactions a lot too.
>
>
> > Are there other con
You could use a prefix on the rowkey. I imagine there are multiple
different field types, so just have an enum or something that enumerates
the different field types you have, such as name, date, email, etc. Each
value would have a 1 char identifier, so then your search table would have
rowkeys l
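A minimal sketch of the one-char prefix idea described above (the field types and prefix characters here are made up for illustration, not from the original post):

```java
import java.nio.charset.StandardCharsets;

// Sketch of the prefixed-rowkey idea: each field type gets a one-char
// identifier that is prepended to the search value, so rows for different
// field types sort into distinct ranges of the same table.
public class SearchRowKey {
    enum FieldType {
        NAME('n'), DATE('d'), EMAIL('e'); // illustrative prefixes

        final char prefix;
        FieldType(char p) { this.prefix = p; }
    }

    // Build the row key for one indexed value of the given field type.
    static byte[] rowKey(FieldType type, String value) {
        return (type.prefix + value).getBytes(StandardCharsets.UTF_8);
    }
}
```

With this layout, a scan over all names is just a scan over the `n` prefix range.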
Thanks Bryan, I will try it; it sounds good.
But another question: how could I make a table with 2 row keys, name and date?
Sent from my iPad
On Apr 2, 2012, at 10:47 PM, "Bryan Beaudreault"
wrote:
> I imagine you don't want this search to have to scan the entire patients
> table to find someon
HBasene?
https://github.com/akkumar/hbasene
On 04/02/2012 04:46 PM, Bryan Beaudreault wrote:
I imagine you don't want this search to have to scan the entire patients
table to find someone by their name, assuming there could be many many
patients. It may be a better idea to create a search table
I imagine you don't want this search to have to scan the entire patients
table to find someone by their name, assuming there could be many many
patients. It may be a better idea to create a search table. The search
table could have search terms in the row key, and the columns could be
profileIds.
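A toy model of the search-table layout proposed above, with plain Java collections standing in for the HBase table (class and method names are illustrative):

```java
import java.util.Collections;
import java.util.Map;
import java.util.Set;
import java.util.TreeMap;
import java.util.TreeSet;

// Toy model of the proposed search table: the row key is a search term
// (e.g. a patient name) and each matching profileId becomes a column.
// Plain sorted collections stand in for the HBase API here.
public class SearchTable {
    private final Map<String, Set<String>> rows = new TreeMap<>();

    // Index one profile under a search term (one row per term,
    // one column per profileId).
    void index(String term, String profileId) {
        rows.computeIfAbsent(term, k -> new TreeSet<>()).add(profileId);
    }

    // Look up all profileIds indexed under a term.
    Set<String> lookup(String term) {
        return rows.getOrDefault(term, Collections.emptySet());
    }
}
```

Looking up a patient by name then becomes a single-row get on the search table instead of a full scan of the patients table.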
On Mon, Apr 2, 2012 at 12:27 PM, Miles Spielberg wrote:
>
> Our region servers are each hosting ~270 regions. Our writes are extremely
> well distributed (our HBase keys are output from a hash function) and small
> (~100s of bytes). I believe that the writes are being so well distributed
> across
Hello,
I am using hbase thrift for my app. I have made a table for patients which
has a column family called info containing their general info.
I want to make a method to search for a patient by name and date of birth.
I didn't find any method for search; all require the row
HBaseCon is also on the home page...
http://hbase.apache.org/
On 4/2/12 3:18 PM, "Lars George" wrote:
>http://www.hbasecon.com/
>
>On Apr 2, 2012, at 10:16 PM, Marcos Ortiz wrote:
>
>> I heard yesterday that the first conference dedicated to HBase will be
>>in the next days. Where I can fi
We're looking at how to store a large amount of (per-user) list data in
hbase, and we were trying to figure out what kind of access pattern made
the most sense. One option is store the majority of the data in a key, so
we could have something like
:"" (no value)
:"" (no value)
:"" (no value)
The
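The "majority of the data in the key, empty value" layout above might be sketched like this (the separator and field names are made up for illustration):

```java
// Sketch of the "data in the key" layout for per-user list data: every
// list element becomes its own row, with the user id and the element
// joined in the row key and an empty value stored against it.
// The ':' separator and the names here are illustrative only.
public class ListKey {
    static String rowKey(String userId, String element) {
        return userId + ":" + element;
    }
}
```

Membership checks and per-user scans then work purely off the key range, with no values to read.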
We are frequently seeing "flush storms" like the following:
2012-03-29 07:44:32,743 INFO
org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogWriter: Using
syncFs -- HDFS-200
2012-03-29 07:44:32,749 INFO org.apache.hadoop.hbase.regionserver.wal.HLog:
Roll /hbase/.logs/alf-data1001.ve.box.net,60
http://www.hbasecon.com/
On Apr 2, 2012, at 10:16 PM, Marcos Ortiz wrote:
> I heard yesterday that the first conference dedicated to HBase will be in the
> next days. Where can I find all the information about the event?
>
> regards and best wishes
>
> --
> Marcos Luis Ortíz Valmaseda (@marcos
I heard yesterday that the first conference dedicated to HBase will be
in the next days. Where can I find all the information about the event?
regards and best wishes
--
Marcos Luis Ortíz Valmaseda (@marcosluis2186)
Data Engineer at UCI
http://marcosluis2186.posterous.com
2012/4/2 Alok Singh :
> Sorry for jumping on this thread late, but I have seen very similar
> behavior in our cluster with hadoop 0.23.2 (CDH4B2 snapshot) and hbase
> 0.23.1. We have a small, 7 node cluster (48GB/16Core/6x10Kdisk/GigE
> network) with about 500M rows/4Tb of data. The random read pe
Okay. I guess we will look into adding the host entries.
On 2 Apr 2012, at 17:19, Dave Wang wrote:
> The link I referred you to states that forward and reverse resolution is
> required for at least < 0.92.x. If you do not have DNS, then perhaps you
> can hardcode the resolutions in /etc/resolv
Sorry for jumping on this thread late, but I have seen very similar
behavior in our cluster with hadoop 0.23.2 (CDH4B2 snapshot) and hbase
0.23.1. We have a small, 7 node cluster (48GB/16Core/6x10Kdisk/GigE
network) with about 500M rows/4Tb of data. The random read performance
is excellent, but, r
Dear Lars,
Do you have any updated guideline? I'm not professional in Java and Maven.
Regards,
Mahdi
> Subject: Re: HBase database sample
> From: lars.geo...@gmail.com
> Date: Mon, 2 Apr 2012 19:57:54 +0300
> To: user@hbase.apache.org
>
> Please note though that YCSB 0.1.4 is now fully
Dear
li
thanks.
> From: l...@idle.li
> Date: Sun, 1 Apr 2012 20:18:42 -0700
> Subject: Re: HBase database sample
> To: user@hbase.apache.org
>
> Follow the instructions here:
> http://blog.lars-francke.de/2010/08/16/performance-testing-hbase-using-ycsb/
>
> The load portion will load a thousa
Dear Doug,
I think you didn't read my question :) I know what HBase is; I work with it
successfully, designed my tables, and inserted a little data. But my
question is: I need a sample database containing more than 1000 rows for
testing my Master's thesis.
Regards
> From: doug.m...
Please note though that YCSB 0.1.4 is now fully mavenized and uses the POM to
pull in the various dependencies, as well as supplying a script that you can
use to avoid the lengthy java command line. So the build steps and invocation
have changed a bit, but the overall idea stays the same.
Lars
+common-u...@hadoop.apache.org
Hi Harsh,
Thanks for the information.
Is there any way to differentiate between a client-side property and a
server-side property? Or a document which lists whether a property is
server- or client-side? Many times I have to guess and try out
test runs.
The link I referred you to states that forward and reverse resolution is
required for at least < 0.92.x. If you do not have DNS, then perhaps you
can hardcode the resolutions in /etc/hosts or similar.
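Hardcoded entries on every node might look like this (the hostnames and addresses below are made up for illustration):

```text
# /etc/hosts -- illustrative addresses and hostnames only
10.0.0.11   master1.example.com   master1
10.0.0.21   rs1.example.com       rs1
10.0.0.22   rs2.example.com       rs2
```

Both forward (name to address) and reverse (address to name) lookups resolve from the same entries, which is what the linked requirement asks for.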
- Dave
On Mon, Apr 2, 2012 at 7:19 AM, Ben Cuthbert wrote:
> Hi Dave
>
> Thanks. So wha
Can you run 'bin/hbase hbck' and see if there is any inconsistency?
Thanks
On Mon, Apr 2, 2012 at 7:07 AM, Toni Moreno wrote:
> When I try to count data rows I get this output after a while:
>
> hbase(main):001:0> list
> TABLE
> tsdb
> tsdb-uid
> 2 row(s) in 0.7600 seconds
>
> hbase(main):002:
When I try to count data rows I get this output after a while:
hbase(main):001:0> list
TABLE
tsdb
tsdb-uid
2 row(s) in 0.7600 seconds
hbase(main):002:0> count 'tsdb-uid'
ERROR: org.apache.hadoop.hbase.client.NoServerForRegionException: Unable to
find region for tsdb-uid,,99 after 7 t
Also, see this chapter.
http://hbase.apache.org/book.html#schema
On 4/2/12 11:40 AM, "Doug Meil" wrote:
>
>See the link to the BigTable paper here...
>
>http://hbase.apache.org/book.html#other.info
>
>... and there is other reading material and videos too.
>
>
>
>On 4/1/12 11:30 PM, "Mahdi
See the link to the BigTable paper here...
http://hbase.apache.org/book.html#other.info
... and there is other reading material and videos too.
On 4/1/12 11:30 PM, "Mahdi Negahi" wrote:
>
>thanks, but all databases have good examples , like Cinema in Neo4j and
>etc.
>but if nobody has a sa
Follow the instructions here:
http://blog.lars-francke.de/2010/08/16/performance-testing-hbase-using-ycsb/
The load portion will load a thousand rows into HBase for testing.
On Sun, Apr 1, 2012 at 8:12 PM, Mahdi Negahi wrote:
>
>
> thanks for your reply
> but i install and know what is HBase dat
Hi Dave
Thanks. So what happens when you run in a network that does not have DNS
across firewalls, e.g. running from a primary data center to a backup data center?
On 2 Apr 2012, at 14:33, Dave Wang wrote:
> Ben,
>
> Please see:
>
> http://hbase.apache.org/book/os.html#dns
>
> - Dave
>
> On Mon, Apr
Ben,
Please see:
http://hbase.apache.org/book/os.html#dns
- Dave
On Mon, Apr 2, 2012 at 5:25 AM, Ben Cuthbert wrote:
> Was thinking, does hbase have to use hostnames? What if you are running
> this in a firewalled env that does not have DNS access?
> On 2 Apr 2012, at 06:31, Ben Cuthbert wrote:
>
> >
Was thinking, does hbase have to use hostnames? What if you are running this in
a firewalled env that does not have DNS access?
On 2 Apr 2012, at 06:31, Ben Cuthbert wrote:
> All when I try and run in distributed mode with two servers I get this error
> when starting the slave node
>
> two nodes
>
> no
Harsh,
I think you have a good point here - it is good practice that the
utilities shipped with HBase also follow HBase's own recommendations.
For example the RowCounter
(org.apache.hadoop.hbase.mapreduce.RowCounter.java) utility is neither
setting speculative execution to 'false', nor the scan cach
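As a sketch, the two settings the post says RowCounter omits could be applied like this (property names are the 0.20-era Hadoop/HBase keys; the caching value is illustrative, and plain Properties stand in for a Hadoop Configuration):

```java
import java.util.Properties;

// Sketch of the recommendations above: disable speculative execution for
// the scan tasks (so regions aren't scanned twice by duplicate attempts)
// and raise scanner caching above one row per RPC.
public class RowCounterSettings {
    static Properties recommended() {
        Properties conf = new Properties();
        conf.setProperty("mapred.map.tasks.speculative.execution", "false");
        conf.setProperty("hbase.client.scanner.caching", "500"); // illustrative value
        return conf;
    }
}
```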