Re: [zfs-discuss] How to manage scrub priority or defer scrub?

2010-03-15 Thread Richard Elling
On Mar 14, 2010, at 11:25 PM, Tonmaus wrote: > Hello again, I am still concerned whether my points are being well taken. >> If you are concerned that a single 200TB pool would take a long time to scrub, then use more pools and scrub in parallel. > The main concern is not scrub time.
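Richard's suggestion is easy to script, since zpool scrub only queues the scrub and returns immediately; a minimal sketch (the pool names are placeholders, not from this thread):

    # zpool scrub returns as soon as the scrub is queued, so these run concurrently
    for p in tank1 tank2 tank3; do
        zpool scrub "$p"
    done
    # then watch progress
    zpool status | grep -i scrub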

Re: [zfs-discuss] corruption of ZFS on iSCSI storage

2010-03-15 Thread Ross Walker
On Mar 15, 2010, at 11:10 PM, Tim Cook wrote: On Mon, Mar 15, 2010 at 9:10 PM, Ross Walker wrote: On Mar 15, 2010, at 7:11 PM, Tonmaus wrote: Being an iscsi target, this volume was mounted as a single iscsi disk from the solaris host, and prepared as a zfs pool consisting of this single

Re: [zfs-discuss] corruption of ZFS on iSCSI storage

2010-03-15 Thread Tim Cook
On Mon, Mar 15, 2010 at 9:10 PM, Ross Walker wrote: > On Mar 15, 2010, at 7:11 PM, Tonmaus wrote: >> Being an iscsi target, this volume was mounted as a single iscsi disk from the solaris host, and prepared as a zfs pool consisting of this single iscsi target. ZFS best practice

Re: [zfs-discuss] corruption of ZFS on iSCSI storage

2010-03-15 Thread Ross Walker
On Mar 15, 2010, at 7:11 PM, Tonmaus wrote: Being an iscsi target, this volume was mounted as a single iscsi disk from the solaris host, and prepared as a zfs pool consisting of this single iscsi target. ZFS best practices tell me that to be safe in case of corruption, pools should always be m

Re: [zfs-discuss] persistent L2ARC

2010-03-15 Thread Giovanni Tirloni
On Mon, Mar 15, 2010 at 5:39 PM, Abdullah Al-Dahlawi wrote: > Greetings all, I understand that L2ARC is still under enhancement. Does anyone know if ZFS can be upgraded to include "Persistent L2ARC", i.e. L2ARC will not lose its contents after a system reboot? There is a bug opened for

Re: [zfs-discuss] corruption of ZFS on iSCSI storage

2010-03-15 Thread Tim Cook
On Mon, Mar 15, 2010 at 9:55 AM, Gabriele Bulfon wrote: > Hello, I'd like to check for any guidance about using zfs on iscsi storage appliances. Recently I had an unlucky situation with a storage machine freezing. Once the storage was up again (rebooted) all other iscsi clients

Re: [zfs-discuss] Possible newbie question about space between zpool and zfs file systems

2010-03-15 Thread Carson Gaspar
Someone wrote (I haven't seen the mail, only the unattributed quote): "My guess is unit conversion and rounding. Your pool has 11 base 10 TB, which is 10.2445 base 2 TiB. Likewise your fs has 9 base 10 TB, which is 8.3819 base 2 TiB." Not quite. 11 x 10^12 =~ 10.004 x (1024^4). So, th
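Carson's correction is easy to reproduce at a shell prompt; this is just the arithmetic quoted above, nothing measured here:

    # 11 "base 10" TB (10^12 bytes) expressed in binary TiB (1024^4 bytes)
    echo "scale=3; 11 * 10^12 / 1024^4" | bc -l
    # prints 10.004, i.e. the "10T" that zpool list shows is spot on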

Re: [zfs-discuss] Possible newbie question about space between zpool and zfs file systems

2010-03-15 Thread Tonmaus
> My guess is unit conversion and rounding. Your pool has 11 base 10 TB, which is 10.2445 base 2 TiB. Likewise your fs has 9 base 10 TB, which is 8.3819 base 2 TiB. > Not quite. 11 x 10^12 =~ 10.004 x (1024^4). So, the 'zpool list' is right on, at "10T" available. Duh!

Re: [zfs-discuss] corruption of ZFS on iSCSI storage

2010-03-15 Thread Tonmaus
> Being an iscsi target, this volume was mounted as a single iscsi disk from the solaris host, and prepared as a zfs pool consisting of this single iscsi target. ZFS best practices tell me that to be safe in case of corruption, pools should always be mirrors or raidz on 2 or more disks
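For reference, the practice being quoted boils down to giving ZFS its own redundancy instead of a single LUN; a minimal sketch, where the device names are placeholders standing in for two separate iscsi LUNs:

    # give ZFS at least two LUNs to work with so it can self-heal,
    # e.g. a mirrored pool built from two iscsi disks
    zpool create tank mirror c2t0d0 c3t0d0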

Re: [zfs-discuss] Possible newbie question about space between zpool and zfs file systems

2010-03-15 Thread Erik Trimble
On Mon, 2010-03-15 at 15:40 -0700, Carson Gaspar wrote:
> Tonmaus wrote:
>> I am lacking 1 TB on my pool:
>> u...@filemeister:~$ zpool list daten
>> NAME   SIZE  ALLOC  FREE   CAP  DEDUP  HEALTH  ALTROOT
>> daten  10T   3,71T  6,29T  37%  1.00x  ONLINE  -
>> u...@filemeister:~$ zpool sta

Re: [zfs-discuss] Possible newbie question about space between zpool and zfs file systems

2010-03-15 Thread Erik Trimble
On Mon, 2010-03-15 at 15:03 -0700, Tonmaus wrote: > Hi Cindy, trying to reproduce this: >> For a RAIDZ pool, the zpool list command identifies the "inflated" space for the storage pool, which is the physical available space without an accounting for redundancy overhead.

Re: [zfs-discuss] Possible newbie question about space between zpool and zfs file systems

2010-03-15 Thread Carson Gaspar
Tonmaus wrote:
I am lacking 1 TB on my pool:
u...@filemeister:~$ zpool list daten
NAME   SIZE  ALLOC  FREE   CAP  DEDUP  HEALTH  ALTROOT
daten  10T   3,71T  6,29T  37%  1.00x  ONLINE  -
u...@filemeister:~$ zpool status daten
  pool: daten
 state: ONLINE
 scrub: none requested
config:
NAME

Re: [zfs-discuss] Possible newbie question about space between zpool and zfs file systems

2010-03-15 Thread Tonmaus
Hi Cindy, trying to reproduce this: > For a RAIDZ pool, the zpool list command identifies the "inflated" space for the storage pool, which is the physical available space without an accounting for redundancy overhead. > The zfs list command identifies how much actual pool space is ava

Re: [zfs-discuss] pool causes kernel panic, recursive mutex enter, 134

2010-03-15 Thread Mark
some screenshots that may help:
  pool: tank
    id: 5649976080828524375
 state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:
        data        ONLINE
          mirror-0  ONLINE
            c27t2d0  ONLINE
            c27t0d0  ONLINE
          m

Re: [zfs-discuss] zpool reporting consistent read errors

2010-03-15 Thread David Dyer-Bennet
On Mon, March 15, 2010 15:35, Svein Skogen wrote: > On 15.03.2010 21:13, no...@euphoriq.com wrote: >> Wow. I never thought about it. I changed the power supply to a cheap one a while back (a now seemingly foolish effort to save money) - it

[zfs-discuss] pool causes kernel panic, recursive mutex enter, 134

2010-03-15 Thread Mark
Hi, I've been using OpenSolaris for about 2 years with a mirrored rpool and a data pool with 3 x 2 (mirrored) drives. The data pool drives are connected to SIL PCI-Express cards. Yesterday I updated from build 130 to 134; everything seemed to be fine, and I also replaced 1 pair of mirrored drives with larger d

[zfs-discuss] persistent L2ARC

2010-03-15 Thread Abdullah Al-Dahlawi
Greetings all, I understand that L2ARC is still under enhancement. Does anyone know if ZFS can be upgraded to include "Persistent L2ARC", i.e. L2ARC will not lose its contents after a system reboot? -- Abdullah Al-Dahlawi, George Washington University, Department of Electrical & Computer Engine

Re: [zfs-discuss] zpool reporting consistent read errors

2010-03-15 Thread Svein Skogen
On 15.03.2010 21:13, no...@euphoriq.com wrote: > Wow. I never thought about it. I changed the power supply to a cheap one a while back (a now seemingly foolish effort to save money) - it could be the issue. I'll change it back and let you know

Re: [zfs-discuss] zpool reporting consistent read errors

2010-03-15 Thread no...@euphoriq.com
Wow. I never thought about it. I changed the power supply to a cheap one a while back (a now seemingly foolish effort to save money) - it could be the issue. I'll change it back and let you know. Thanks

Re: [zfs-discuss] backup zpool to tape

2010-03-15 Thread Greg
Hey Scott, Thanks for the information. I doubt I can drop that kind of cash, but back to getting bacula working! Thanks again, Greg

Re: [zfs-discuss] backup zpool to tape

2010-03-15 Thread Scott Meilicke
Greg, I am using NetBackup 6.5.3.1 (7.x is out) with fine results. Nice and fast. -Scott

Re: [zfs-discuss] zpool reporting consistent read errors

2010-03-15 Thread David Dyer-Bennet
On Mon, March 15, 2010 00:54, no...@euphoriq.com wrote: > I'm running a raidz1 with 3 Samsung 1.5TB drives. Every time I scrub the pool I get multiple read errors, no write errors and no checksum errors on one drive (always the same drive, and no data loss). I've changed cables, changed t

Re: [zfs-discuss] CR 6880994 and pkg fix

2010-03-15 Thread David Dyer-Bennet
On Sun, March 14, 2010 13:54, Frank Middleton wrote: > How can it even be remotely possible to get a checksum failure on mirrored drives with copies=2? That means all four copies were corrupted? Admittedly this is on a grotty PC with no ECC and flaky bus parity, but how come the same
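To make the copy count explicit: copies=2 is a per-dataset property, and on a two-way mirror each of those copies lands on both sides, which is where Frank's "all four copies" comes from. A minimal sketch (the dataset name is a placeholder):

    # keep two copies of every block of this dataset, on top of the mirror's
    # own redundancy: 2 copies x 2 mirror sides = 4 physical copies
    zfs set copies=2 tank/data
    zfs get copies tank/data    # verify the setting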

Re: [zfs-discuss] corruption of ZFS on iSCSI storage

2010-03-15 Thread Ross Walker
On Mar 15, 2010, at 12:19 PM, Ware Adams wrote: On Mar 15, 2010, at 12:13 PM, Gabriele Bulfon wrote: Well, I actually don't know what implementation is inside this legacy machine. This machine is an AMI StoreTrends ITX, but maybe it has been built around IET, don't know. Well, maybe I s

Re: [zfs-discuss] corruption of ZFS on iSCSI storage

2010-03-15 Thread Ware Adams
On Mar 15, 2010, at 12:13 PM, Gabriele Bulfon wrote: > Well, I actually don't know what implementation is inside this legacy machine. This machine is an AMI StoreTrends ITX, but maybe it has been built around IET, don't know. Well, maybe I should disable write-back on every zfs host connec

Re: [zfs-discuss] corruption of ZFS on iSCSI storage

2010-03-15 Thread Gabriele Bulfon
Well, I actually don't know what implementation is inside this legacy machine. This machine is an AMI StoreTrends ITX, but maybe it has been built around IET, don't know. Well, maybe I should disable write-back on every zfs host connecting on iscsi? How do I check this? Thx Gabriele.

Re: [zfs-discuss] corruption of ZFS on iSCSI storage

2010-03-15 Thread Ross Walker
On Mar 15, 2010, at 10:55 AM, Gabriele Bulfon wrote: Hello, I'd like to check for any guidance about using zfs on iscsi storage appliances. Recently I had an unlucky situation with a storage machine freezing. Once the storage was up again (rebooted) all other iscsi clients were

Re: [zfs-discuss] Possible newbie question about space between zpool and zfs file systems

2010-03-15 Thread Khyron
Yeah, this threw me. A 3-disk RAID-Z2 doesn't make sense, because in terms of redundancy RAID-Z2 looks like RAID 6. That is, there are 2 levels of parity for the data. Out of 3 disks, the equivalent of 2 disks will be used to store redundancy (parity) data and only 1 disk equivalent will store
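Khyron's arithmetic as a quick sketch (the disk count matches the thread, the disk size is illustrative):

    # usable space of a raidz vdev is roughly (disks - parity) * disk_size,
    # ignoring metadata and allocation overhead
    disks=3; parity=2; size_tb=1
    echo "raw:    $(( disks * size_tb )) TB"
    echo "usable: $(( (disks - parity) * size_tb )) TB"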

Re: [zfs-discuss] corruption of ZFS on iSCSI storage

2010-03-15 Thread Ware Adams
On Mar 15, 2010, at 10:55 AM, Gabriele Bulfon wrote: > - In this case, the storage appliance is a legacy system based on linux, so raids/mirrors are managed at the storage side its own way. Being an iscsi target, this volume was mounted as a single iscsi disk from the solaris host, and p

Re: [zfs-discuss] Possible newbie question about space between zpool and zf

2010-03-15 Thread Michael Hassey
That solved it. Thank you Cindy. Zpool list NOT reporting raidz overhead is what threw me... Thanks again.

Re: [zfs-discuss] Possible newbie question about space between zpool and zfs file systems

2010-03-15 Thread Cindy Swearingen
Hi Michael, For a RAIDZ pool, the zpool list command identifies the "inflated" space for the storage pool, which is the physical available space without an accounting for redundancy overhead. The zfs list command identifies how much actual pool space is available to the file systems. See the ex
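Put side by side, the two views show exactly the gap Cindy describes; a minimal sketch using the pool name from this thread (output not reproduced here):

    # raw capacity, parity devices included
    zpool list xpool
    # space actually usable by file systems, raidz overhead already subtracted
    zfs list xpool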

[zfs-discuss] corruption of ZFS on iSCSI storage

2010-03-15 Thread Gabriele Bulfon
Hello, I'd like to check for any guidance about using zfs on iscsi storage appliances. Recently I had an unlucky situation with a storage machine freezing. Once the storage was up again (rebooted) all other iscsi clients were happy, while one of the iscsi clients (a Sun Solaris SPARC, run

[zfs-discuss] Possible newbie question about space between zpool and zfs file systems

2010-03-15 Thread Michael Hassey
Sorry if this is too basic - So I have a single zpool in addition to the rpool, called xpool.
NAME   SIZE  USED  AVAIL  CAP  HEALTH  ALTROOT
rpool  136G  109G  27.5G  79%  ONLINE  -
xpool  408G  171G  237G   42%  ONLINE  -
I have 408 GB in the pool, am using 171, leaving me 237 GB. The