On Wed, 18 May 2011, Chris Mosetick wrote:
> to go in the packing dept. I still love their prices!
There's a reason for that: you don't get what you don't pay for!
--
Rich Teer, Publisher
Vinylphile Magazine
www.vinylphilemag.com
> The drives I just bought were half packed in white foam then wrapped
> in bubble wrap. Not all edges were protected with more than bubble
> wrap.
Same here for me. I purchased 10 x 2TB Hitachi 7200rpm SATA disks from
Newegg.com in March. The majority of the drives were protected in white
foam.
On Mon, May 16 at 21:55, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Paul Kraus
All drives have a very high DOA rate according to Newegg. The
way they package drives for shipping is exactly how Seagate
specifically says NOT to pack them here
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Paul Kraus
>
> All drives have a very high DOA rate according to Newegg. The
> way they package drives for shipping is exactly how Seagate
> specifically says NOT to pack them here
8 m
2011-05-16 9:14, Richard Elling wrote:
On May 15, 2011, at 10:18 AM, Jim Klimov wrote:
Hi, very interesting suggestions as I'm contemplating a Supermicro-based server
for my work as well, but probably on a lower budget as a backup store for an
aging Thumper (not as its superior replacement).
On Mon, May 16 at 14:29, Paul Kraus wrote:
I have stopped buying drives (and everything else) from Newegg
as they cannot be bothered to properly pack items. It is worth the
extra $5 per drive to buy them from CDW (who uses factory approved
packaging). Note that I made this change 5 or so years ago.
On Mon, May 16, 2011 at 2:35 PM, Krunal Desai wrote:
> An order of 6 of the 5K3000 drives for work-related purposes shipped in a
> Styrofoam holder of sorts that was cut in half for my small number of
> drives (is this what 20-packs come in?). No idea what other packaging
> was around them (shipping a
Actually it is 100 or less, i.e. a 10 msec delay.
-- Garrett D'Amore
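(For anyone following the arithmetic: at a 100 Hz system clock, one clock tick
is 10 msec, so a throttle delay counted in ticks converts to wall-clock time as
below. This is a minimal illustrative sketch of the tick-to-milliseconds
conversion only; the variable names are mine, not the actual kernel tunables.)

# Convert a tick-based throttle delay to wall-clock milliseconds.
# Assumes a 100 Hz system clock, i.e. 10 msec per tick.
HZ = 100                     # clock interrupts per second (assumed)
TICK_MS = 1000.0 / HZ        # 10.0 msec per tick

def delay_ms(ticks):
    """Milliseconds of delay for a throttle of `ticks` clock ticks."""
    return ticks * TICK_MS

print(delay_ms(1))           # 10.0 -> the 10 msec figure mentioned above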
On May 16, 2011, at 11:13 AM, "Richard Elling" wrote:
> On May 16, 2011, at 10:31 AM, Brandon High wrote:
>> On Mon, May 16, 2011 at 8:33 AM, Richard Elling
>> wrote:
>>> As a rule of thumb, the resilvering disk is expected to max out at around
>>> 80 IOPS for 7,200 rpm disks. If you see less than 80 IOPS, then suspect
>>> the throttles or broken data path.
On Mon, May 16, 2011 at 2:29 PM, Paul Kraus wrote:
> What Newegg was doing is buying drives in the 20-pack from the
> manufacturer and packing them individually WRAPPED IN BUBBLE WRAP and
> then stuffed in a box. No clamshell. I realized *something* was up
> when _every_ drive I looked at had a mu
On Mon, May 16, 2011 at 1:20 PM, Brandon High wrote:
> The 1TB and 2TB are manufactured in China, and have a very high
> failure and DOA rate according to Newegg.
All drives have a very high DOA rate according to Newegg. The
way they package drives for shipping is exactly how Seagate
specifically says NOT to pack them here
On Mon, May 16, 2011 at 1:20 PM, Brandon High wrote:
> The 1TB and 2TB are manufactured in China, and have a very high
> failure and DOA rate according to Newegg.
>
> The 3TB drives come off the same production line as the Ultrastar
> 5K3000 in Thailand and may be more reliable.
Thanks for the he
On May 16, 2011, at 10:31 AM, Brandon High wrote:
> On Mon, May 16, 2011 at 8:33 AM, Richard Elling
> wrote:
>> As a rule of thumb, the resilvering disk is expected to max out at around
>> 80 IOPS for 7,200 rpm disks. If you see less than 80 IOPS, then suspect
>> the throttles or broken data path.
On Mon, May 16, 2011 at 8:33 AM, Richard Elling
wrote:
> As a rule of thumb, the resilvering disk is expected to max out at around
> 80 IOPS for 7,200 rpm disks. If you see less than 80 IOPS, then suspect
> the throttles or broken data path.
My system was doing far less than 80 IOPS during resilver
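(For reference, the ~80 IOPS rule of thumb is just rotational arithmetic: a
7,200 rpm disk spends roughly half a rotation, about 4.2 msec, plus an average
seek on each small random I/O. A back-of-the-envelope sketch; the ~8 msec seek
time is an assumed typical value, not a measured one.)

# Back-of-the-envelope random-IOPS estimate for a 7,200 rpm disk.
rpm = 7200
avg_rotational_latency = 0.5 * 60.0 / rpm   # half a rotation ~ 4.17 msec
avg_seek = 0.008                            # assumed ~8 msec average seek
service_time = avg_rotational_latency + avg_seek
print("~%.0f random IOPS" % (1.0 / service_time))   # ~82, i.e. the ~80 IOPS rule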
On Sat, May 14, 2011 at 11:20 PM, John Doe wrote:
>> 171 Hitachi 7K3000 3TB
> I'd go for the more environmentally friendly Ultrastar 5K3000 version - with
> that many drives you won't mind the slower rotation but WILL notice a
> difference in power and cooling cost
A word of caution - The Hita
Following are some thoughts, if it's not too late:
> 1 SuperMicro 847E1-R1400LPB
I guess you meant the 847E16-R1400LPB; the SAS1 version makes no sense
> 1 SuperMicro H8DG6-F
not the best choice, see below why
> 171 Hitachi 7K3000 3TB
I'd go for the more environmentally friendly Ultrastar 5K3000 version - with
that many drives you won't mind the slower rotation but WILL notice a
difference in power and cooling cost
On May 16, 2011, at 5:02 AM, Sandon Van Ness wrote:
> On 05/15/2011 09:58 PM, Richard Elling wrote:
>>> In one of my systems, I have 1TB mirrors, 70% full, which can be
>>> sequentially completely read/written in 2 hrs. But the resilver took 12
>>> hours of idle time. Supposing you had a 70% full pool of raidz3, 2TB disks,
>>> using 10 disks + 3 parity
> From: Sandon Van Ness [mailto:san...@van-ness.com]
>
> ZFS resilver can take a very long time depending on your usage pattern.
> I do disagree with some things he said though... like a 1TB drive being
> able to be read/written in 2 hours? I seriously doubt this. Just reading
> 1 TB in 2 hours me
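(A quick check of the numbers being debated: moving 0.7 TB, i.e. a 70% full
1TB mirror, in 2 hours implies roughly 100 MB/s sustained; a full 1 TB in
2 hours implies about 140 MB/s; and the observed 12-hour resilver of the same
0.7 TB averages only about 16 MB/s. A small sketch of the arithmetic, using
decimal terabytes.)

# Sustained throughput needed to move a given amount of data in a given time.
def mb_per_s(terabytes, hours):
    return terabytes * 1e12 / (hours * 3600.0) / 1e6

print("%.0f MB/s" % mb_per_s(0.7, 2))    # ~97  -> 70% of a 1TB mirror in 2 hrs
print("%.0f MB/s" % mb_per_s(1.0, 2))    # ~139 -> a full 1 TB in 2 hrs
print("%.0f MB/s" % mb_per_s(0.7, 12))   # ~16  -> what the 12-hour resilver averaged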
> From: Richard Elling [mailto:richard.ell...@gmail.com]
>
> > In one of my systems, I have 1TB mirrors, 70% full, which can be
> > sequentially completely read/written in 2 hrs. But the resilver took 12
> > hours of idle time. Supposing you had a 70% full pool of raidz3, 2TB disks,
> > using 10 disks + 3 parity
I have to agree. ZFS needs a more intelligent scrub/resilver algorithm, which
can 'sequentialise' the process.
--
Sent from my Android phone with K-9 Mail. Please excuse my brevity.
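(To make the 'sequentialise' idea concrete: instead of repairing blocks in the
order the block-pointer walk yields them, collect the outstanding repair
records first and sort them by on-disk offset, so the disk sees mostly
sequential reads rather than random seeks. The toy sketch below only
illustrates the concept; it is not how ZFS actually resilvers, and the record
format is invented for the example.)

# Toy illustration of sequentialising a resilver: sort repair records by
# vdev and on-disk offset before issuing the reads.
def sequential_order(records):
    """records: iterable of (vdev, offset, length) tuples to repair."""
    return sorted(records, key=lambda r: (r[0], r[1]))

todo = [(0, 900000, 131072), (0, 10000, 131072), (0, 500000, 131072)]
for vdev, offset, length in sequential_order(todo):
    print("vdev %d: read %d bytes at offset %d" % (vdev, length, offset))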
Giovanni Tirloni wrote:
On Mon, May 16, 2011 at 9:02 AM, Sandon Van Ness wrote:
Actually I have seen resilvers take a very long time (weeks) on solaris/raidz2
when I almost never see a hardware raid controller take more than a day or two.
On Mon, May 16, 2011 at 9:02 AM, Sandon Van Ness wrote:
>
> Actually I have seen resilvers take a very long time (weeks) on
> solaris/raidz2 when I almost never see a hardware raid controller take more
> than a day or two. In one case I thrashed the disks absolutely as hard as I
> could (hardware
On 05/15/2011 09:58 PM, Richard Elling wrote:
In one of my systems, I have 1TB mirrors, 70% full, which can be
sequentially completely read/written in 2 hrs. But the resilver took 12
hours of idle time. Supposing you had a 70% full pool of raidz3, 2TB disks,
using 10 disks + 3 parity, and a u
On Sun, May 15, 2011 at 10:14 PM, Richard Elling
wrote:
> On May 15, 2011, at 10:18 AM, Jim Klimov wrote:
>> In case of RAIDZ2 this recommendation leads to vdevs sized 6 (4+2), 10 (8+2)
>> or 18 (16+2) disks - the latter being mentioned in the original post.
>
> A similar theory was disproved ba
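(For context, the sizing rule being discussed keeps the number of data disks
per raidz vdev at a power of two, which for raidz2 yields the 6 (4+2),
10 (8+2), and 18 (16+2) widths mentioned above. A small sketch that enumerates
those widths; purely illustrative, and silent on whether the rule actually
helps.)

# Enumerate raidz vdev widths whose data-disk count is a power of two.
for parity in (1, 2, 3):                       # raidz1 / raidz2 / raidz3
    widths = [data + parity for data in (2, 4, 8, 16)]
    print("raidz%d: %s" % (parity, widths))    # raidz2 -> [4, 6, 10, 18]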
On May 15, 2011, at 10:18 AM, Jim Klimov wrote:
> Hi, very interesting suggestions as I'm contemplating a Supermicro-based
> server for my work as well, but probably on a lower budget as a backup store
> for an aging Thumper (not as its superior replacement).
>
> Still, I have a couple of questions regarding your raidz layout recommendation.
On May 15, 2011, at 8:01 PM, Edward Ned Harvey
wrote:
>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>> boun...@opensolaris.org] On Behalf Of Jim Klimov
>>
>> On one hand, I've read that as current drives get larger (while their random
>> IOPS/MBPS don't grow nearly as fast
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Jim Klimov
>
> On one hand, I've read that as current drives get larger (while their random
> IOPS/MBPS don't grow nearly as fast with new generations), it is becoming
> more and more reasonable
Hi, very interesting suggestions as I'm contemplating a Supermicro-based server
for my work as well, but probably on a lower budget as a backup store for an
aging Thumper (not as its superior replacement).
Still, I have a couple of questions regarding your raidz layout recommendation.
On one hand, I've read that as current drives get larger (while their random
IOPS/MBPS don't grow nearly as fast with new generations), it is becoming
more and more reasonable