In the beginning, I created a mirror named DumpFiles on FreeBSD. Later, I
decided to move those drives to a new Solaris 11 server-- but rather than
import the old pool I'd create a new pool. And I liked the DumpFiles name,
so I stuck with it.
Oops.
Now whenever I run zpool import, it shows a faulted …
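When two pools have carried the same name, zpool import can tell them apart by their numeric pool ID, and a pool can be given a new name at import time. A minimal sketch (the ID and the new name below are made up):

$ zpool import
  # lists importable pools, including the numeric id of each one
$ zpool import 1234567890123456789 DumpFilesOld
  # import that specific pool under a different name to avoid the collision

If the old pool's disks were reused for the new pool, the stale entry typically cannot be imported at all; it just reflects leftover labels that zpool import still finds.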
On 29/05/2012 11:10 PM, Jim Klimov wrote:
2012-05-29 16:35, Nathan Kroenert wrote:
Hi John,
Actually, last time I tried the whole AF (4k) thing, its performance
was worse than woeful.
But admittedly, that was a little while ago.
The drives were the seagate green barracuda IIRC, and performance …
In message <4fc509e8.8080...@jvm.de>, Stephan Budach writes:
>If only I now knew how to get the actual S11 release level of my box.
>Neither uname -a nor cat /etc/release gives me a clue, since they
>all display the same data when run on different hosts that are on
>different updates.
$ p
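For reference, one common way to read the Solaris 11 update/SRU level (not necessarily the command the responder was about to paste) is to query the "entire" incorporation via IPS:

# the version of the "entire" incorporation encodes the update/SRU level
$ pkg info entire
# shorter, list-style output
$ pkg list entire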
On May 29, 2012, at 6:10 AM, Jim Klimov wrote:
> Also note that ZFS IO often is random even for reads, since you
> have to read metadata and file data often from different dispersed
> locations.
This is true for almost all other file systems, too. For example, in UFS,
metadata is stored in fixed …
On 2012-May-29 22:04:39 +1000, Edward Ned Harvey
wrote:
>If you have a drive (or two drives) with bad sectors, they will only be
>detected as long as the bad sectors get used. Given that your pool is less
>than 100% full, it means you might still have bad hardware going undetected,
>if you pass …
On 05/29/2012 03:29 AM, Daniel Carosone wrote:
For the mmap case: does the ARC keep a separate copy, or does the vm
system map the same page into the process's address space? If a
separate copy is made, that seems like a potential source of many
kinds of problems - if it's the same page then th…
Am 29.05.12 18:59, schrieb Richard Elling:
On May 29, 2012, at 8:12 AM, Cindy Swearingen wrote:
Hi--
You don't see what release this is but I think that seeing the checksum
error accumulation on the spare was a zpool status formatting bug that
I have seen myself. This is fixed in a later Solaris release.
On May 29, 2012, at 8:12 AM, Cindy Swearingen wrote:
> Hi--
>
> You don't see what release this is but I think that seeing the checksum
> error accumulation on the spare was a zpool status formatting bug that
> I have seen myself. This is fixed in a later Solaris release.
>
Once again, Cindy beat …
Hi--
You don't see what release this is but I think that seeing the checksum
error accumulation on the spare was a zpool status formatting bug that
I have seen myself. This is fixed in a later Solaris release.
Thanks,
Cindy
On 05/28/12 22:21, Stephan Budach wrote:
Hi all,
just to wrap this issue up …
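The counters in question are the per-device READ/WRITE/CKSUM columns of zpool status. Once the underlying problem is dealt with they can be reset; a sketch, assuming a pool named tank and a spare device c2t3d0 (both hypothetical):

$ zpool status -v tank      # per-device read/write/checksum error counters
$ zpool clear tank c2t3d0   # reset the counters on a single device
$ zpool clear tank          # or reset them for the whole pool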
2012-05-29 16:35, Nathan Kroenert wrote:
Hi John,
Actually, last time I tried the whole AF (4k) thing, its performance
was worse than woeful.
But admittedly, that was a little while ago.
The drives were the seagate green barracuda IIRC, and performance for
just about everything was 20MB/s per spindle or worse …
>The drives were the seagate green barracuda IIRC, and performance for
>just about everything was 20MB/s per spindle or worse, when it should
>have been closer to 100MB/s when streaming. Things were worse still when
>doing random...
It is possible that your partitions weren't aligned at 4K and …
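One way to check that on Solaris is to look at each slice's starting sector: on a 512e (4K physical) drive a start sector divisible by 8 is 4K-aligned. A sketch, with a hypothetical disk name:

$ prtvtoc /dev/rdsk/c0t1d0s2   # check the "First Sector" column; divisible by 8 = 4K-aligned

Whole disks handed to zpool create get an EFI label whose data slice starts at sector 256, which is already 4K-aligned.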
On 05/29/12 07:26, bofh wrote:
ashift:9 is that standard?
Depends on what the drive reports as physical sector size.
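ashift is the base-2 log of the sector size ZFS assumed when the vdev was created: 9 means 2^9 = 512-byte sectors, 12 means 2^12 = 4 KiB. It can be read back per pool; a sketch, assuming a pool named tank:

$ zdb -C tank | grep ashift   # 9 = 512-byte sectors, 12 = 4 KiB sectors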
On 05/29/12 08:35, Nathan Kroenert wrote:
Hi John,
Actually, last time I tried the whole AF (4k) thing, its performance
was worse than woeful.
But admittedly, that was a little while ago.
The drives were the seagate green barracuda IIRC, and performance for
just about everything was 20MB/s per spindle or worse …
Hi John,
Actually, last time I tried the whole AF (4k) thing, its performance
was worse than woeful.
But admittedly, that was a little while ago.
The drives were the seagate green barracuda IIRC, and performance for
just about everything was 20MB/s per spindle or worse, when it should
have been closer to 100MB/s when streaming …
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Stephan Budach
>
> Now, I will run a scrub once more to verify the zpool.
If you have a drive (or two drives) with bad sectors, they will only be
detected as long as the bad sectors get used.
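A scrub only touches blocks that are currently allocated, so bad sectors in unused space stay undetected until something writes there. The verification itself, assuming a pool named tank:

$ zpool scrub tank        # read and checksum-verify every allocated block
$ zpool status -v tank    # progress, plus any files with unrecoverable errors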
On Tue, May 29, 2012 at 6:54 AM, John Martin wrote:
> $ zdb -C | grep ashift
> ashift: 12
> ashift: 12
> ashift: 12
>
That's interesting. I just created a raidz3 pool out of 7x3TB drives.
My drives were
ST3000DM001-9YN1
Hitachi HDS72303
Hitachi HDS72303
S…
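For reference, a sketch of creating such a pool and confirming its ashift afterwards; the pool and device names are made up:

$ zpool create tank raidz3 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0
$ zdb -C tank | grep ashift   # expect 12 where the drives report 4K physical sectors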
On 05/28/12 08:48, Nathan Kroenert wrote:
Looking to get some larger drives for one of my boxes. It runs
exclusively ZFS and has been using Seagate 2TB units up until now (which
are 512 byte sector).
Anyone offer up suggestions of either 3 or preferably 4TB drives that
actually work well with ZFS …
Hi Richard,
Am 29.05.12 06:54, schrieb Richard Elling:
On May 28, 2012, at 9:21 PM, Stephan Budach wrote:
Hi all,
just to wrap this issue up: as FMA didn't report any other error than
the one which led to the degradation of the one mirror, I detached
the original drive from the zpool which …
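The wrap-up described here corresponds to detaching one half of the mirror and then re-checking pool health; a sketch with hypothetical names:

$ zpool detach tank c3t2d0   # drop the original (suspect) half of the mirror
$ zpool status -x tank       # prints "pool 'tank' is healthy" once resilver and scrub are clean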