On 10/21/10 03:47 PM, Harry Putnam wrote:
build 133
zpool version 22
I'm getting:
zpool status:
NAME          STATE     READ WRITE CKSUM
z3            DEGRADED     0     0   167
  mirror-0    DEGRADED     0     0   334
    c5d0      DEGRADED     0     0   335  too many errors
All this reminds me:
There was some talk awhile ago about allowing multiple pools per ZIL or
L2ARC device. Any progress on that front?
[yadda, yadda, no forward-looking statements allowed, yadda yadda.]
--
Erik Trimble
Java System Support
Mailstop: usca22-317
Phone: x67195
Santa Clara, CA
Ti
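For reference on the ZIL/L2ARC question: until a single log or cache device can be shared across pools, the workaround usually discussed is to slice the SSD and give each pool its own slice. A minimal sketch, with the slice names purely hypothetical:
  zpool add z3 log c9t0d0s0              # slice 0 as a dedicated log device for pool z3
  zpool add otherpool cache c9t0d0s1     # slice 1 as a cache device for a second pool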
On Mon, 2010-10-18 at 17:32 -0400, Edward Ned Harvey wrote:
> > From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> > boun...@opensolaris.org] On Behalf Of Marty Scholes
> >
> > Would it make sense for scrub/resilver to be more aware of operating in
> > disk order instead of zfs order
Since my name was mentioned, a couple of things:
(a) I'm not infallible. :-)
(b) In my posts, I swapped "slab" for "record". I really should have
said "record". It's more correct as to what's going on.
(c) It is possible for constituent drives in a RaidZ to be issued
concurrent requests for porti
build 133
zpool version 22
I'm getting:
zpool status:
NAME          STATE     READ WRITE CKSUM
z3            DEGRADED     0     0   167
  mirror-0    DEGRADED     0     0   334
    c5d0      DEGRADED     0     0   335  too many errors
    c6d0      DEGRADED     0
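For reference, a minimal sketch of the usual follow-up once the underlying cause of the checksum errors (cabling, controller, or disk) has been dealt with; the pool name is taken from the status output above:
  zpool status -v z3    # list any files affected by the checksum errors
  zpool clear z3        # reset the error counters
  zpool scrub z3        # re-verify both sides of the mirror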
On Wed, 20 Oct 2010, Marty Scholes wrote:
Untrue. The performance of a 21-disk raidz3 will be nowhere near the
performance of a 20-disk 2-way mirror.
You know this stuff better than I do. Assuming no bus/cpu
bottlenecks, a 21 disk raidz3 should provide sequential throughput
of 18 disks and
On 2010-Oct-21 01:28:46 +0800, David Dyer-Bennet wrote:
>On Wed, October 20, 2010 04:24, Tuomas Leikola wrote:
>
>> I wished for a more aggressive write balancer but that may be too much
>> to ask for.
>
>I don't think it can be too much to ask for. Storage servers have long
>enough lives that ad
Where would that log be located? Tried poking around in /var/svc/log
and /var/adm, but I've found just the snapshot-service logs (while
useful, they don't seem to have logged the auto-deletion of
snapshots).
Also, that 'pcplusmp' is triggering every minute, on the minute. It's
probably one of my d
On 10/21/10 07:00 AM, Jeff Bacon wrote:
So, Best Practices says "use (N^2)+2 disks for your raidz2".
I wanted to use 7 disk stripes not 6, just to try to balance my risk
level vs available space.
Doing some testing on my hardware, it's hard to say there's a ton of
difference one way or the other
Richard wrote:
>
> Untrue. The performance of a 21-disk raidz3 will be nowhere near the
> performance of a 20-disk 2-way mirror.
You know this stuff better than I do. Assuming no bus/cpu bottlenecks, a 21
disk raidz3 should provide sequential throughput of 18 disks and random
throughput of 1 d
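A rough back-of-the-envelope version of that comparison, assuming a purely hypothetical ~100 MB/s per spindle and no bus/CPU limits:
  # 21-disk raidz3: 18 data spindles of streaming bandwidth, but random IOPS of roughly one vdev
  echo $(( (21 - 3) * 100 ))    # ~1800 MB/s sequential
  # 20 disks as 10 two-way mirrors: ~10 vdevs of random IOPS,
  # ~1000 MB/s sequential write, up to ~2000 MB/s sequential read
  echo $(( (20 / 2) * 100 ))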
Orvar Korvar wrote:
> Sometimes you read about people having low performance deduping: it is
> because they have too little RAM.
>
I mostly heard they have low performance when they start deleting
deduplicated data, not before that.
So do you think that with 2.2GB of RAM per 1 TB of storage, w
>Huh, I don't actually ever recall enabling that. Perhaps that is
>connected to the message I started getting every minute recently in
>the kernel buffer,
It's on by default.
You can see if it was ever enabled by using:
zfs list -t snapshot |grep @zfs-auto
>Oct 20 12:20:49 megatron pcp
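A slightly fuller check along the same lines, assuming a pool named tank and the standard time-slider setup:
  zfs get com.sun:auto-snapshot tank        # is the property set or inherited?
  svcs -a | grep auto-snapshot              # are the auto-snapshot services online?
  zfs list -t snapshot | grep @zfs-auto     # are auto snapshots actually present?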
So, Best Practices says "use (N^2)+2 disks for your raidz2".
I wanted to use 7 disk stripes not 6, just to try to balance my risk
level vs available space.
Doing some testing on my hardware, it's hard to say there's a ton of
difference one way or the other - seek/create/delete is a bit faster on
On Wed, October 20, 2010 04:24, Tuomas Leikola wrote:
> I wished for a more aggressive write balancer but that may be too much
> to ask for.
I don't think it can be too much to ask for. Storage servers have long
enough lives that adding disks to them is a routine operation; to the
extent that t
Krunal,
The file system size changes are probably caused when these
snapshots are created and deleted automatically.
The recurring messages below are driver related and probably
have nothing to do with the snapshots.
Thanks,
Cindy
On 10/20/10 10:50, Krunal Desai wrote:
Argh, yes, lots of sna
Argh, yes, lots of snapshots sitting around...apparently time-slider
got activated somehow awhile back. Disabled the services and am now
cleaning out the snapshots!
On Wed, Oct 20, 2010 at 12:41 PM, Tomas Ögren wrote:
> On 20 October, 2010 - Krunal Desai sent me these 1,5K bytes:
>
>> Huh, I don'
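A minimal sketch for that cleanup, assuming every automatic snapshot carries the @zfs-auto prefix as above (review the list before destroying anything):
  zfs list -H -o name -t snapshot | grep @zfs-auto
  zfs list -H -o name -t snapshot | grep @zfs-auto | xargs -n1 zfs destroy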
On 20 October, 2010 - Krunal Desai sent me these 1,5K bytes:
> Huh, I don't actually ever recall enabling that. Perhaps that is
> connected to the message I started getting every minute recently in
> the kernel buffer,
>
> Oct 20 12:20:49 megatron pcplusmp: [ID 805372 kern.info] pcplusmp: ide
> (
Huh, I don't actually ever recall enabling that. Perhaps that is
connected to the message I started getting every minute recently in
the kernel buffer,
Oct 20 12:20:49 megatron pcplusmp: [ID 805372 kern.info] pcplusmp: ide
(ata) instance 3 irq 0xf vector 0x45 ioapic 0x2 intin 0xf is bound to
cpu 0
>tank com.sun:auto-snapshot true local
>
>I don't utilize snapshots (this machine just stores media)...so what
>could be up?
You've also disabled the time-slider functionality? (automatic snapshots)
Casper
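A minimal sketch of switching the automatic snapshots off for that pool, assuming the standard time-slider SMF instances:
  zfs set com.sun:auto-snapshot=false tank
  svcadm disable svc:/system/filesystem/zfs/auto-snapshot:frequent
  svcadm disable svc:/system/filesystem/zfs/auto-snapshot:daily
  # likewise for the :hourly, :weekly and :monthly instances if enabled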
Hi all,
I've got an interesting (I think) thing happening with my storage pool
(tank, 8x1.5TB RAID-Z2)...namely that I seem to gain free-space
without deleting files. I noticed this happening awhile ago, so I set
up a cron script that ran every night and does:
pfexec ls -alR /tank > /export/home/
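As a complement to diffing ls -alR output, a minimal sketch that shows where the space actually sits (live data vs. snapshots), assuming the pool is named tank:
  zfs list -o space -r tank   # per-filesystem usedbysnapshots / usedbydataset breakdown
  zpool list tank             # pool-level size, allocated and free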
On 20.10.10 15:11, Edward Ned Harvey wrote:
From: Stephan Budach [mailto:stephan.bud...@jvm.de]
Although, I have to say that I do have exactly 3 files that are corrupt
in each snapshot until I finally deleted them and restored them from
their original source.
zfs send will abort when trying t
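For reference, a minimal sketch of getting the affected paths; files that exist only in snapshots are reported with the snapshot name in the path:
  zpool scrub tank
  zpool status -v tank    # once the scrub completes, lists any files with permanent errors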
On 20.10.10 17:53, Cassandra Pugh wrote:
well, I was expecting/hoping that this command would work as expected:
zpool create testpool vdeva vdevb vdevc
zpool replace testpool vdevc vdevd
# zpool status reports the disk is resilvered.
This obviously worked since the device you were about
well, I was expecting/hoping that this command would work as expected:
zpool create testpool vdeva vdevb vdevc
zpool replace testpool vdevc vdevd
# zpool status reports the disk is resilvered.
On a (non-mirror or raid) test pool I just created, this command works.
However, when the disk fail
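A minimal sketch of the same replacement when the old disk has actually failed, assuming vdevd is the new device:
  zpool offline testpool vdevc       # optional if the disk is already gone
  zpool replace testpool vdevc vdevd
  zpool status testpool              # watch the resilver run to completion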
On Wed, Oct 20, 2010 at 5:00 PM, Richard Elling
wrote:
>>> Now, is there a way, manually or automatically, to somehow balance the data
>>> across these LVOLs? My first guess is that doing this _automatically_ will
>>> require block pointer rewrite, but then, is there way to hack this thing by
>
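Without block-pointer rewrite, the workaround usually suggested is to rewrite the data yourself so it gets spread over all vdevs; a minimal sketch using send/receive into a new dataset (names hypothetical):
  zfs snapshot tank/data@rebalance
  zfs send tank/data@rebalance | zfs receive tank/data-new
  # verify the copy, then swap the datasets and destroy the old one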
On 20/10/2010 14:48, Darren J Moffat wrote:
On 20/10/2010 14:03, Edward Ned Harvey wrote:
In a discussion a few weeks back, it was mentioned that the Best
Practices
Guide says something like "Don't put more than ___ disks into a single
vdev." At first, I challenged this idea, because I see no
On Oct 20, 2010, at 2:24 AM, Tuomas Leikola wrote:
> On Tue, Oct 19, 2010 at 7:13 PM, Roy Sigurd Karlsbakk
> wrote:
>> I have this server with some 50TB disk space. It originally had 30TB on WD
>> Greens, was filled quite full, and another storage chassis was added. Now,
>> space problem gone,
On Oct 20, 2010, at 6:03 AM, Edward Ned Harvey wrote:
> In a discussion a few weeks back, it was mentioned that the Best Practices
> Guide says something like "Don't put more than ___ disks into a single
> vdev." At first, I challenged this idea, because I see no reason why a
> 21-disk raidz3 wou
On 20/10/2010 14:03, Edward Ned Harvey wrote:
In a discussion a few weeks back, it was mentioned that the Best Practices
Guide says something like "Don't put more than ___ disks into a single
vdev." At first, I challenged this idea, because I see no reason why a
21-disk raidz3 would be bad. It
--
Sent from my iPhone iOS4.
Stephan Budach
Jung von Matt/it-services GmbH
Glashüttenstraße 79
20257 Hamburg
Tel: +49 40-4321-1353
Fax: +49 40-4321-1114
E-Mail: stephan.bud...@jvm.de
Internet: www.jvm.de
Managing directors: Ulrich Pallas, Frank Willhelm
AG HH HRB 98380
On 20.10.2010 at 1
On Wed, Oct 20, 2010 at 4:05 PM, Edward Ned Harvey wrote:
>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>> boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
>>
>> 4. Guess what happens if you have 2 or 3 failed disks in your raidz3,
>> and
>> they're trying to resilver at
> -Original Message-
> From: Darren J Moffat [mailto:darr...@opensolaris.org]
> > It's one of the big selling points, reasons for ZFS to exist. You
> should
> > always give ZFS JBOD devices to work on, so ZFS is able to scrub both
> of the
> > redundant sides of the data, and when a checks
On Wed, Oct 20, 2010 at 2:50 PM, Edward Ned Harvey wrote:
> One of the above mentioned disks needed to be resilvered yesterday.
> (Actually a 2T disk.) It has now resilvered 1.12T in 18.5 hrs, and has 10.5
> hrs remaining. This is a mirror. The problem would be several times worse
> if it were
> From: Stephan Budach [mailto:stephan.bud...@jvm.de]
>
> Although, I have to say that I do have exactly 3 files that are corrupt
> in each snapshot until I finally deleted them and restored them from
> their original source.
>
> zfs send will abort when trying to send them, while scrub doesn't
>
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
>
> 4. Guess what happens if you have 2 or 3 failed disks in your raidz3,
> and
> they're trying to resilver at the same time. Does the system ignore
> subsequently failed di
In a discussion a few weeks back, it was mentioned that the Best Practices
Guide says something like "Don't put more than ___ disks into a single
vdev." At first, I challenged this idea, because I see no reason why a
21-disk raidz3 would be bad. It seems like a good thing.
I was operating on ass
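For comparison, a minimal sketch of the same 21 disks laid out as one wide raidz3 versus three 7-disk raidz2 vdevs (alternative layouts, not to be run together; device names hypothetical):
  zpool create tank raidz3 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 \
      c1t7d0 c1t8d0 c1t9d0 c1t10d0 c1t11d0 c1t12d0 c1t13d0 c1t14d0 \
      c1t15d0 c1t16d0 c1t17d0 c1t18d0 c1t19d0 c1t20d0
  zpool create tank \
      raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 \
      raidz2 c1t7d0 c1t8d0 c1t9d0 c1t10d0 c1t11d0 c1t12d0 c1t13d0 \
      raidz2 c1t14d0 c1t15d0 c1t16d0 c1t17d0 c1t18d0 c1t19d0 c1t20d0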
Hello *,
we're running a local zone located on an iSCSI device and see zpools faulting
at each reboot of the server.
$ zpool list
NAME     SIZE  ALLOC   FREE  CAP  HEALTH   ALTROOT
data     168G   127G  40.3G  75%  ONLINE   -
iscsi1      -      -      -    -  FAULTED  -
$ zpool statu
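If the pool is being imported from zpool.cache before the iSCSI initiator has the LUN online, one commonly suggested workaround is to keep it out of the boot-time import and bring it in afterwards; a minimal sketch:
  zpool clear iscsi1                 # retry once the iSCSI LUN is reachable again
  zpool set cachefile=none iscsi1    # don't record it in /etc/zfs/zpool.cache
  # then import it explicitly (e.g. from a late-running SMF service) once the initiator is up:
  zpool import iscsi1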
>> From: Stephan Budach [mailto:stephan.bud...@jvm.de]
>>
>>> Just in case this wasn't already clear.
>>>
>>> After scrub sees read or checksum errors, zpool status -v will list
>>> filenames that are affected. At least in my experience.
>>> --
>>> - Tuomas
>>
>> That didn't do it for me. I us
> From: Edward Ned Harvey [mailto:sh...@nedharvey.com]
>
> Let's crunch some really quick numbers here. Suppose a 6Gbit/sec
> sas/sata bus, with 6 disks in a raid-5. Each disk is 1TB, 1000G, and
> each disk is capable of sustaining 1 Gbit/sec sequential operations.
> These are typical measureme
On 20/10/2010 12:20, Edward Ned Harvey wrote:
It's one of the big selling points, reasons for ZFS to exist. You should
always give ZFS JBOD devices to work on, so ZFS is able to scrub both of the
redundant sides of the data, and when a checksum error occurs, ZFS is able
to detect *and* correct i
> From: Stephan Budach [mailto:stephan.bud...@jvm.de]
>
> > Just in case this wasn't already clear.
> >
> > After scrub sees read or checksum errors, zpool status -v will list
> > filenames that are affected. At least in my experience.
> > --
> > - Tuomas
>
> That didn't do it for me. I used scru
On Wed, Oct 20, 2010 at 3:50 AM, Bob Friesenhahn
wrote:
> On Tue, 19 Oct 2010, Cindy Swearingen wrote:
>>>
>>> unless you use copies=2 or 3, in which case your data is still safe
>>> for those datasets that have this option set.
>>
>> This advice is a little too optimistic. Increasing the copies p
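For the copies property mentioned above, a minimal sketch; note it only affects blocks written after the property is set:
  zfs set copies=2 tank/important
  zfs get copies tank/important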
Sometimes you read about people having low performance deduping: it is because
they have too little RAM.
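A minimal sketch of checking how large the dedup table actually is, which is what drives the RAM requirement (zdb output varies between builds, and the in-core cost per DDT entry is usually quoted in the few-hundred-byte range):
  zdb -D tank    # DDT summary: unique vs. duplicate blocks, on-disk and in-core sizes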
On 10/20/10 08:12 PM, sridhar surampudi wrote:
Hi Cindys,
Thank you for the reply.
zfs/zpool should have the ability to access snapshot devices with a
configurable name.
As an example, if the file system stack is created as
vxfs (/mnt1)
|
|
vxvm (lv1)
|
|
(device from an array / LUN, say dev1)
On Tue, Oct 19, 2010 at 7:13 PM, Roy Sigurd Karlsbakk
wrote:
> I have this server with some 50TB disk space. It originally had 30TB on WD
> Greens, was filled quite full, and another storage chassis was added. Now,
> space problem gone, fine, but what about speed? Three of the VDEVs are quite
On 19.10.2010 at 22:36, Tuomas Leikola wrote:
> On Mon, Oct 18, 2010 at 4:55 PM, Edward Ned Harvey
> wrote:
>> Thank you, but, the original question was whether a scrub would identify
>> just corrupt blocks, or if it would be able to map corrupt blocks to a list
>> of corrupt files.
>>
>
> J
Hi Cindys,
Thank you for the reply.
zfs/zpool should have the ability to access snapshot devices with a
configurable name.
As an example, if the file system stack is created as
vxfs (/mnt1)
|
|
vxvm (lv1)
|
|
(device from an array / LUN, say dev1),
If I take an array-level or hardware-level sn
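One way this is commonly handled at the pool level is at import time: a pool found on a hardware-snapshot copy of the LUN can be imported under a different name; a minimal sketch, with the device directory and names hypothetical:
  zpool import -d /dev/dsk                       # list pools visible there and note the numeric id
  zpool import -d /dev/dsk <pool-id> tank_snap   # import the snapshot copy under a new name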