Re: Is it normal that a dead data node causes 3 region servers to go down?

2012-06-25 Thread Peter Naudus
http://www.linuxlefty.com/hbase-hbase-regionserver-008.log.1000 http://www.linuxlefty.com/hbase-hbase-regionserver-009.log.1000 ~ Peter On Mon, 25 Jun 2012 12:23:51 -0400, Stack wrote: On Mon, Jun 25, 2012 at 4:17 PM, Peter Naudus wrote: We're running CDH3 (hbase: 0.90.6, hadoop: 0.20.2). As far as I'm aware, no one tried to shut down the server. I read online

Re: Is it normal that a dead data node causes 3 region servers to go down?

2012-06-25 Thread Peter Naudus
We're running CDH3 (hbase: 0.90.6, hadoop: 0.20.2). As far as I'm aware, no one tried to shut down the server. I read online that the "user requested stop" error is sometimes logged on unknown exceptions, not necessarily during an explicit shutdown (apache-hbase.679495.n3.nabble.com/Di

Re: Is it normal that a dead data node causes 3 region servers to go down?

2012-06-25 Thread Peter Naudus
s had some kind of failover mechanism that would prevent this. Since there are multiple copies of the data stored, it doesn't make sense that the inaccessibility of one copy causes all other copies to also become inaccessible. ~ Peter On Thu, 21 Jun 2012 11:57:55 -0400, Peter Naudus wrote: Hello All,

Single disk failure on single node causes 1 data node, 3 region servers to go down

2012-06-21 Thread Peter Naudus
Hello All, As this problem has both a Hadoop and HBase component, rather than posting the same message to both groups, I'm posting the datanode portion of this problem under the title of "Single disk failure (with HDFS-457 applied) causes data node to die" to the Hadoop users group. In our production

Thrift's TCompactProtocol

2012-06-01 Thread Peter Naudus
Hello all, In order to use Thrift's TCompactProtocol, is it required to start thrift using the "--compact" option? If so, how would I go about specifying this option when running thrift embedded in the region server? Thanks, ~ Peter
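
For reference, the standalone Thrift server does accept a "--compact" flag on the command line, and there is a matching hbase-site.xml property it reads at startup. Whether the embedded server honors the same property is an assumption worth verifying against the running HBase version; a minimal sketch of both forms:

    # Standalone Thrift server: enable TCompactProtocol via the CLI flag
    hbase thrift start --compact

    <!-- hbase-site.xml equivalent (assumed to also apply when the Thrift
         server runs embedded in the region server; verify per version) -->
    <property>
      <name>hbase.regionserver.thrift.compact</name>
      <value>true</value>
    </property>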

Deployment Best Practices

2012-05-30 Thread Peter Naudus
Hello All, Is there a "community standard" / "best" way to deploy HBase to a cluster? We're in the process of setting up a ~15 node cluster and I'm curious how you all go about your deployments. Do you package the code into an RPM, place it into a central YUM repository, and then drive the
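
For what it's worth, one common RPM/YUM-driven rollout looks roughly like the sketch below; every host, repo path, and package name here is a made-up placeholder, not a recommendation:

    # Build the RPM and publish it to an internal YUM repository
    rpmbuild -ba hbase.spec
    scp ~/rpmbuild/RPMS/x86_64/hbase-0.90.6-1.x86_64.rpm repo01:/var/www/yum/
    ssh repo01 'createrepo /var/www/yum'   # regenerate repo metadata

    # Drive the upgrade across the cluster from one box
    pdsh -w node[01-15] 'sudo yum -y clean metadata && sudo yum -y update hbase'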

Re: java.io.IOException: Compression algorithm 'snappy' previously failed test

2012-04-23 Thread Peter Naudus
lot better luck with the RPMs instead of the tarballs. ~ Peter On Mon, 23 Apr 2012 11:01:19 -0400, Nathaniel Cook wrote: Was there any resolution to this? I am experiencing the same issue. Nathaniel On Wed, Feb 29, 2012 at 10:52 AM, Peter Naudus wrote: Thanks for your help :) To make

Re: java.io.IOException: Compression algorithm 'snappy' previously failed test

2012-02-29 Thread Peter Naudus
Any ideas? On Tue, 28 Feb 2012 20:02:38 -0500, Stack wrote: On Tue, Feb 28, 2012 at 1:52 PM, Peter Naudus wrote: What else can I do to fix / diagnose this problem? Does our little compression tool help? http://hbase.apache.org/book.html#compression.test St.Ack
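
The compression tool referenced above is run from the command line against any writable path; a typical invocation (placeholder path) looks like:

    # Writes a test file at the given path with the codec and reads it
    # back; a clean run means the native snappy libraries loaded
    hbase org.apache.hadoop.hbase.util.CompressionTest file:///tmp/testfile snappy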

java.io.IOException: Compression algorithm 'snappy' previously failed test

2012-02-28 Thread Peter Naudus
Hello All, I am using HBase-0.92.0 and Hadoop-0.23.0. When attempting to create a table with snappy compression, it gets stuck in the PENDING_OPEN state with the following message repeated in the region server's log file: 2012-02-28 21:08:18,010 INFO org.apache.hadoop.hbase.regionserver.H
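
For context, the creation that triggers this is the ordinary HBase shell form; a minimal reproduction, with 'testtable' and 'cf' as made-up placeholder names, would be:

    hbase(main):001:0> create 'testtable', {NAME => 'cf', COMPRESSION => 'SNAPPY'}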