Well, if this is not a root disk and the server boots at least to single-user,
as you wrote above, you can try to disable auto-import of this pool.
The easiest approach is to disable auto-import of all pools by removing or renaming
the file /etc/zfs/zpool.cache - it is the list of known pools for automatic import.
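For example, something along these lines should do it (the pool name is just a
placeholder, and this assumes the cache file is at its default location):

  mv /etc/zfs/zpool.cache /etc/zfs/zpool.cache.bak   # nothing auto-imported at next boot
  # ...boot, check the disks, then bring the pool back by hand:
  zpool import mypool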
On 05/15/2011 09:58 PM, Richard Elling wrote:
In one of my systems, I have 1TB mirrors, 70% full, which can be read or
written sequentially end-to-end in 2 hours, yet the resilver took 12
hours of idle time. Supposing you had a 70% full pool of raidz3, 2TB disks,
using 10 disks + 3 parity, and a u
On Mon, May 16, 2011 at 9:02 AM, Sandon Van Ness wrote:
>
> Actually I have seen resilvers take a very long time (weeks) on
> solaris/raidz2 when I almost never see a hardware raid controller take more
> than a day or two. In one case i thrashed the disks absolutely as hard as I
> could (hardware
I have to agree. ZFS needs a more intelligent scrub/resilver algorithm, which
can 'sequentialise' the process.
--
Sent from my Android phone with K-9 Mail. Please excuse my brevity.
Giovanni Tirloni wrote:
On Mon, May 16, 2011 at 9:02 AM, Sandon Van Ness wrote:
Actually I have seen resilvers take a very long time (weeks) on
solaris/raidz2 when I almost never see a hardware raid controller take more
than a day or two.
> From: Richard Elling [mailto:richard.ell...@gmail.com]
>
> > In one of my systems, I have 1TB mirrors, 70% full, which can be
> > sequentially completely read/written in 2 hrs. But the resilver took 12
> > hours of idle time. Supposing you had a 70% full pool of raidz3, 2TB disks,
> > using 1
> From: Sandon Van Ness [mailto:san...@van-ness.com]
>
> ZFS resilver can take a very long time depending on your usage pattern.
> I do disagree with some things he said, though... like a 1TB drive being
> able to be read/written in 2 hours? I seriously doubt this. Just reading
> 1 TB in 2 hours means averaging about 139 MB/s across the whole disk.
> Can you share your 'zpool status' output for both pools?
Faster, smaller server:
~# zpool status pool0
pool: pool0
state: ONLINE
scan: scrub repaired 0 in 2h18m with 0 errors on Sat May 14 13:28:58 2011
Much larger, more capable server:
~# zpool status pool0 | head
pool: pool0
state: ONLINE
On May 16, 2011, at 5:02 AM, Sandon Van Ness wrote:
> On 05/15/2011 09:58 PM, Richard Elling wrote:
>>> In one of my systems, I have 1TB mirrors, 70% full, which can be
>>> sequentially completely read/written in 2 hrs. But the resilver took 12
>>> hours of idle time. Supposing you had a 70% f
Following are some thoughts, if it's not too late:
> 1 SuperMicro 847E1-R1400LPB
I guess you meant the 847E16-R1400LPB; the SAS1 version makes no sense
> 1 SuperMicro H8DG6-F
not the best choice, see below why
> 171 Hitachi 7K3000 3TB
I'd go for the more environmentally friendly Ultrastar 5K3000 version - with
that many drives you won't mind the slower rotation but WILL notice a
difference in power and cooling cost
All of these corrupted zpools are the roots of local zones.
On Sat, May 14, 2011 at 11:20 PM, John Doe wrote:
>> 171 Hitachi 7K3000 3TB
> I'd go for the more environmentally friendly Ultrastar 5K3000 version - with
> that many drives you won't mind the slower rotation but WILL notice a
> difference in power and cooling cost
A word of caution - The Hita
Don,
Can you send the entire 'zpool status' output? I wanted to see your
pool configuration. Also run the mdb command in a loop (at least 5
times) so we can see if spa_last_io is changing. I'm surprised you're
not finding the symbol for 'spa_scrub_inflight' too. Can you check
that you didn't mistype it?
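Something like this would do (the mdb one-liner here is my reconstruction of
the command being discussed, based on the fields shown in the output later in
the thread, so treat it as a sketch):

  # run it 5 times, a second apart, to see whether spa_last_io advances
  for i in 1 2 3 4 5; do
    echo "::walk spa | ::print spa_t spa_name spa_last_io spa_scrub_inflight" | mdb -k
    sleep 1
  done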
On Mon, May 16, 2011 at 8:33 AM, Richard Elling
wrote:
> As a rule of thumb, the resilvering disk is expected to max out at around
> 80 IOPS for 7,200 rpm disks. If you see less than 80 IOPS, then suspect
> the throttles or broken data path.
My system was doing far less than 80 IOPS during the resilver.
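For what it's worth, the arithmetic behind that rule of thumb (my numbers, not
from the thread) is roughly:

  60 s / 7200 rpm             =  8.3 ms per revolution -> ~4.2 ms avg rotational latency
  ~8.5 ms avg seek + 4.2 ms   = ~12.7 ms per random I/O
  1000 ms / 12.7 ms           = ~79 IOPS

so seeing well under 80 IOPS on the resilvering disk points at the throttles or
a broken data path rather than the drive itself.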
> Can you send the entire 'zpool status' output? I wanted to see your
> pool configuration. Also run the mdb command in a loop (at least 5
> times) so we can see if spa_last_io is changing. I'm surprised you're
> not finding the symbol for 'spa_scrub_inflight' too. Can you check
> that you didn't
> I copy and pasted to make sure that wasn't the issue :)
Which, ironically, turned out to be the problem - there was an extra
carriage return in there that mdb did not like:
Here is the output:
spa_name = [ "pool0" ]
spa_last_io = 0x82721a4
spa_scrub_inflight = 0x1
spa_name = [ "pool0" ]
spa_las
Here is another example of the performance problems I am seeing:
~# dd if=/dev/zero of=/pool0/ds.test bs=1024k count=2000
2000+0 records in
2000+0 records out
2097152000 bytes (2.1 GB) copied, 56.2184 s, 37.3 MB/s
37MB/s seems like some sort of bad joke for all these disks. I can
write the same a
On May 16, 2011, at 10:31 AM, Brandon High wrote:
> On Mon, May 16, 2011 at 8:33 AM, Richard Elling
> wrote:
>> As a rule of thumb, the resilvering disk is expected to max out at around
>> 80 IOPS for 7,200 rpm disks. If you see less than 80 IOPS, then suspect
>> the throttles or broken data path.
You mentioned that the pool was somewhat full, can you send the output
of 'zpool iostat -v pool0'? You can also try doing the following to
reduce 'metaslab_min_alloc_size' to 4K:
echo "metaslab_min_alloc_size/Z 1000" | mdb -kw
NOTE: This will change the running system so you may want to make this
change persistent as well.
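If you want it to survive a reboot, the usual place (assuming this tunable
lives in the zfs module, which I have not verified here) would be a line like
this in /etc/system:

  set zfs:metaslab_min_alloc_size = 0x1000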
On Mon, May 16, 2011 at 1:20 PM, Brandon High wrote:
> The 1TB and 2TB are manufactured in China, and have a very high
> failure and DOA rate according to Newegg.
>
> The 3TB drives come off the same production line as the Ultrastar
> 5K3000 in Thailand and may be more reliable.
Thanks for the he
> You mentioned that the pool was somewhat full, can you send the output
> of 'zpool iostat -v pool0'?
~# zpool iostat -v pool0
               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
On Mon, May 16, 2011 at 1:20 PM, Brandon High wrote:
> The 1TB and 2TB are manufactured in China, and have a very high
> failure and DOA rate according to Newegg.
All drives have a very high DOA rate according to Newegg. The
way they package drives for shipping is exactly how Seagate
specifically says NOT to pack them here
On Mon, May 16, 2011 at 2:29 PM, Paul Kraus wrote:
> What Newegg was doing is buying drives in the 20-pack from the
> manufacturer and packing them individually WRAPPED IN BUBBLE WRAP and
> then stuffed in a box. No clamshell. I realized *something* was up
> when _every_ drive I looked at had a mu
Actually it is 100 or less, i.e. a 10 msec delay.
-- Garrett D'Amore
On May 16, 2011, at 11:13 AM, "Richard Elling" wrote:
> On May 16, 2011, at 10:31 AM, Brandon High wrote:
>> On Mon, May 16, 2011 at 8:33 AM, Richard Elling
>> wrote:
>>> As a rule of thumb, the resilvering disk is expected
On Mon, May 16, 2011 at 2:35 PM, Krunal Desai wrote:
> An order of 6 the 5K3000 drives for work-related purposes shipped in a
> Styrofoam holder of sorts that was cut in half for my small number of
> drives (is this what 20 pks come in?). No idea what other packaging
> was around them (shipping a
On Mon, May 16 at 14:29, Paul Kraus wrote:
I have stopped buying drives (and everything else) from Newegg
as they cannot be bothered to properly pack items. It is worth the
extra $5 per drive to buy them from CDW (who uses factory approved
packaging). Note that I made this change 5 or so years ago.
On Fri, Apr 29, 2011 at 5:17 PM, Brandon High wrote:
> On Fri, Apr 29, 2011 at 1:23 PM, Freddie Cash wrote:
>> Running ZFSv28 on 64-bit FreeBSD 8-STABLE.
>
> I'd suggest trying to import the pool into snv_151a (Solaris 11
> Express), which is the reference and development platform for ZFS.
Would not import in Solaris 11 Express. :(
On Mon, May 16, 2011 at 1:55 PM, Freddie Cash wrote:
> Would not import in Solaris 11 Express. :( Could not even find any
> pools to import. Even when using "zpool import -d /dev/dsk" or any
> other import commands. Most likely due to using a FreeBSD-specific
> method of labelling the disks.
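For anyone following along, a rough way to see which FreeBSD-specific labels a
pool was built on (standard FreeBSD commands, not something quoted from this
thread) would be:

  glabel status     # GEOM labels, appear as /dev/label/<name>
  gpart show -l     # GPT partition labels, appear as /dev/gpt/<name>

Those device paths don't exist on Solaris, which fits the theory that the
labelling scheme is why 'zpool import -d /dev/dsk' finds nothing.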
> You mentioned that the pool was somewhat full, can you send the output
> of 'zpool iostat -v pool0'? You can also try doing the following to
> reduce 'metaslab_min_alloc_size' to 4K:
>
> echo "metaslab_min_alloc_size/Z 1000" | mdb -kw
So just changing that setting moved my write rate from 40MB/s
> Running a zpool scrub on our production pool is showing a scrub rate
> of about 400K/s. (When this pool was first set up we saw rates in the
> MB/s range during a scrub).
Usually, something like this is caused by a bad drive. Can you post iostat -en
output?
Vennlige hilsener / Best regards
ro
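If it helps, the iostat invocations I'd reach for here (standard Solaris
options, not quoted from the thread) are:

  iostat -en        # per-device soft/hard/transport error counters
  iostat -xn 5      # per-device service times and %busy, sampled every 5 s

A single slow or erroring disk usually stands out right away in either view.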
2011-05-16 9:14, Richard Elling wrote:
On May 15, 2011, at 10:18 AM, Jim Klimov wrote:
Hi, Very interesting suggestions as I'm contemplating a Supermicro-based server
for my work as well, but probably in a lower budget as a backup store for an
aging Thumper (not as its superior replacement).
2011-05-16 22:21, George Wilson wrote:
echo "metaslab_min_alloc_size/Z 1000" | mdb -kw
Thanks, this also boosted my home box from hundreds of kB/s into the
several-MB/s range, which is much better (I'm evacuating data from
a pool hosted in a volume inside my main pool, and the bottleneck
is quite s
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Paul Kraus
>
> All drives have a very high DOA rate according to Newegg. The
> way they package drives for shipping is exactly how Seagate
> specifically says NOT to pack them here
8 m
As a followup:
I ran the same DD test as earlier- but this time I stopped the scrub:
pool0       14.1T  25.4T     88  4.81K   709K   262M
pool0       14.1T  25.4T    104  3.99K   836K   248M
pool0       14.1T  25.4T    360  5.01K  2.81M   230M
pool0       14.1T  25.4T    305  5.69K  2.38M   231M
On Mon, May 16 at 21:55, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Paul Kraus
All drives have a very high DOA rate according to Newegg. The
way they package drives for shipping is exactly how Seagate
specifically says NOT to pack them here