[zfs-discuss] ZFS Panic

2009-04-08 Thread Grant Lowe
Hi All, Don't know if this is worth reporting, as it's human error. Anyway, I had a panic on my zfs box. Here's the error: marksburg /usr2/glowe> grep panic /var/log/syslog Apr 8 06:57:17 marksburg savecore: [ID 570001 auth.error] reboot after panic: assertion failed: 0 == dmu_buf_hold_arra

Re: [zfs-discuss] Data size grew.. with compression on

2009-04-08 Thread Jeff Bonwick
> > Yes, I made note of that in my OP on this thread. But is it enough to > > end up with 8gb of non-compressed files measuring 8gb on > > reiserfs(linux) and the same data showing nearly 9gb when copied to a > > zfs filesystem with compression on. > > whoops.. a hefty exaggeration it only show

Re: [zfs-discuss] Data size grew.. with compression on

2009-04-08 Thread Harry Putnam
Harry Putnam writes: > Richard Elling writes: > >> Harry Putnam wrote: >>> Robert Milkowski writes: >>> >>> Then if a block doesn't compress better than 12.5% it won't be compressed at all. Then in zfs you need extra space for checksums, etc. How did the OP come up with how

Re: [zfs-discuss] Data size grew.. with compression on

2009-04-08 Thread Harry Putnam
Richard Elling writes: > Harry Putnam wrote: >> Robert Milkowski writes: >> >> >>> Then if a block doesn't compress better than 12.5% it won't be >>> compressed at all. Then in zfs you need extra space for checksums, etc. >>> >>> How did the OP come up with how much data is being used? >>>

Re: [zfs-discuss] Data size grew.. with compression on

2009-04-08 Thread Richard Elling
Harry Putnam wrote: Robert Milkowski writes: Then if a block doesn't compress better than 12.5% it won't be compressed at all. Then in zfs you need extra space for checksums, etc. How did the OP come up with how much data is being used? The OP just used `du -sh' at both ends of the trans
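The 12.5% rule quoted in this thread (a block is written compressed only if compression saves at least 1/8 of its size) can be illustrated outside ZFS. This is a hedged sketch only: it uses gzip as a stand-in for ZFS's compressor and a 128K block to mirror the default recordsize; it is not an actual ZFS code path.

```shell
# Sketch: a block is stored compressed only if the compressed size is
# at most 7/8 (87.5%) of the original, i.e. savings of at least 12.5%.
block=$(mktemp)
head -c 131072 /dev/urandom > "$block"   # 128K of incompressible data
orig=$(wc -c < "$block")
comp=$(gzip -c "$block" | wc -c)
limit=$((orig * 7 / 8))                  # 12.5% savings threshold
if [ "$comp" -le "$limit" ]; then
  result="compressed"
else
  result="stored uncompressed"
fi
echo "$result"
rm -f "$block"
```

Random data fails the threshold and would be written uncompressed, while plain text usually passes easily. This also hints at why `du -sh` can report *more* on the ZFS side: blocks that don't clear the threshold get no savings, while checksum and metadata overhead is still charged.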

Re: [zfs-discuss] Can this be done?

2009-04-08 Thread Michael Shadle
On Wed, Apr 8, 2009 at 9:39 AM, Miles Nordin wrote: >> "ms" == Michael Shadle writes: > >    ms> When I attach this new raidz2, will ZFS auto "rebalance" data >    ms> between the two, or will it keep the other one empty and do >    ms> some sort of load balancing between the two for future wri

Re: [zfs-discuss] Importing zpool after one side of mirror was destroyed

2009-04-08 Thread Miles Nordin
> "gs" == Geoff Shipman writes: gs> At this point boot from disk into single user mode and move gs> the /etc/zfs/zpool.cache file to a different name. and these days ``boot single-user'' seems to often mean 'boot -m milestone=none'. The old 'boot -s' will, AFAICT, still read zpool.
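The procedure described above might look like the following transcript. This is a sketch, not a verbatim session: the `.bak` name is an arbitrary choice, and the OBP `ok` prompt assumes SPARC hardware; renaming zpool.cache simply prevents the pools it lists from being imported automatically at the next boot.

```
ok boot -m milestone=none
  ...
# mv /etc/zfs/zpool.cache /etc/zfs/zpool.cache.bak
# reboot
```

As Miles notes, the older `boot -s` may still read zpool.cache before you get a shell, which is why the `milestone=none` form is suggested here.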

Re: [zfs-discuss] Can this be done?

2009-04-08 Thread Miles Nordin
> "ms" == Michael Shadle writes: ms> When I attach this new raidz2, will ZFS auto "rebalance" data ms> between the two, or will it keep the other one empty and do ms> some sort of load balancing between the two for future writes ms> only? the second choice. You can see how t

Re: [zfs-discuss] Can this be done?

2009-04-08 Thread Cindy . Swearingen
Michael, You can't attach disks to an existing RAIDZ vdev, but you can add another RAIDZ vdev to the pool. Also keep in mind that you can't detach disks from RAIDZ pools either. See the syntax below. Cindy # zpool create rzpool raidz2 c1t0d0 c1t1d0 c1t2d0 # zpool status pool: rzpool state: ONLINE scrub:
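Continuing Cindy's example, adding a second raidz2 vdev to the same pool would look roughly like this. The device names below are hypothetical, and this is a sketch of the syntax rather than a captured session:

```
# zpool add rzpool raidz2 c2t0d0 c2t1d0 c2t2d0
# zpool status rzpool
```

After the add, `zpool status` shows two top-level raidz2 vdevs. As discussed elsewhere in this thread, ZFS does not rebalance existing data onto the new vdev; it spreads only future writes across both.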

Re: [zfs-discuss] ZFS data loss

2009-04-08 Thread Fajar A. Nugraha
On Wed, Apr 8, 2009 at 4:06 PM, Tomas Ögren wrote: >> Do you think there is something that can be done to recover lost data? >> >> Thanks, >>   Vic > > Does 'zpool import' find anything? devfsadm -v  to re-scan devices > first perhaps.. ... or info from the other thread "boot from disk into sing
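The recovery steps suggested here might be tried in roughly this order. This is a hedged sketch: `tank` is a hypothetical pool name, and `-f` should only be used once you are sure no other host has the pool imported.

```
# devfsadm -Cv        # rebuild /dev device links so import can see all disks
# zpool import        # scan attached devices for importable pools
# zpool import tank   # import by name if the scan finds it
```

A plain `zpool import` with no arguments is read-only discovery, so it is a safe first diagnostic before attempting anything destructive.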

Re: [zfs-discuss] ZFS data loss

2009-04-08 Thread Tomas Ögren
On 07 April, 2009 - Victor Galis sent me these 5,8K bytes: > Hi, > > I have lost a ZFS volume and I am hoping to get some help to recover the > information ( a couple of months worth of work :( ). > > I have been using ZFS for more than 6 months on this project. Yesterday > I ran a "zvol statu