+1
In my experience this makes a big difference to the stability of small/test clusters.
I second Ryan's specific point about stability of small clusters being
important.
- Andy
On Thu Jan 21st, 2010 2:46 PM PST Ryan Rawson wrote:
> Scaling _down_ is a continual problem for us, and this is one of the
> prime factors.
+1
mahadev
On 1/21/10 2:46 PM, "Ryan Rawson" wrote:
> Scaling _down_ is a continual problem for us, and this is one of the
> prime factors. It puts a bad taste in the mouth of new people who then
> run away from HBase and HDFS since it is "unreliable and unstable". It
> is perfectly within scope
+1
On Thu, Jan 21, 2010 at 2:58 PM, Tsz Wo (Nicholas), Sze
wrote:
> +1
> Nicholas Sze
>
> - Original Message
>> From: Stack
>> To: hdfs-dev@hadoop.apache.org
>> Cc: HBase Dev List
>> Sent: Thu, January 21, 2010 2:36:25 PM
>> Subject: [VOTE -- Round 2] Commit hdfs-630 to 0.21?
>>
Scaling _down_ is a continual problem for us, and this is one of the
prime factors. It puts a bad taste in the mouth of new people who then
run away from HBase and HDFS since it is "unreliable and unstable". It
is perfectly within scope to support a cluster of about 5-6 machines
which can have an a
+1
Nicholas Sze
- Original Message
> From: Stack
> To: hdfs-dev@hadoop.apache.org
> Cc: HBase Dev List
> Sent: Thu, January 21, 2010 2:36:25 PM
> Subject: [VOTE -- Round 2] Commit hdfs-630 to 0.21?
>
> I'd like to propose a new vote on having hdfs-630 committed to 0.21.
> The first
I'd like to propose a new vote on having hdfs-630 committed to 0.21.
The first vote on this topic, initiated 12/14/2009, was sunk by Tsz Wo
(Nicholas), Sze's suggested improvements. Those suggestions have since
been folded into a new version of the hdfs-630 patch. It's this new
version of the patch -
Hi Zlatin,
I agree that this access is unsynchronized and thus can return a stale
value. Looking at the implementation of HashMap.size(), it does nothing
except copy a single int field, so it shouldn't cause any errors beyond
stale data (i.e., it can't throw a ConcurrentModificationException).
I
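To illustrate the point above, here is a minimal sketch (the class name and
constants are hypothetical, not from this thread) of an unsynchronized reader
calling HashMap.size() while a writer mutates the map. The reader may observe
stale sizes, but size() only reads an int field, so it cannot throw a
ConcurrentModificationException; only iteration performs the fail-fast
modCount check.

```java
import java.util.HashMap;
import java.util.Map;

public class StaleSizeDemo {

    // Runs one unsynchronized writer while the caller polls size(),
    // then returns the final size after the writer has finished.
    static int run() throws InterruptedException {
        final Map<String, Integer> map = new HashMap<>();

        Thread writer = new Thread(() -> {
            for (int i = 0; i < 100_000; i++) {
                map.put("key-" + i, i);
            }
        });
        writer.start();

        // Unsynchronized reads: size() just copies a single int field,
        // so it can return a stale value but cannot throw a
        // ConcurrentModificationException (unlike iterating the map).
        int lastSeen = 0;
        while (writer.isAlive()) {
            lastSeen = map.size();   // possibly stale, but safe to call
        }
        writer.join();               // join() makes the writer's puts visible

        return map.size();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("final size = " + run());  // prints "final size = 100000"
    }
}
```

Note that a single writer keeps the map's internal structure consistent here;
with multiple unsynchronized writers, a plain HashMap can be corrupted and a
ConcurrentHashMap would be the right choice.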