Re: [zfs-discuss] ZFS Panic

2009-04-09 Thread Remco Lengers
Grant, I didn't see a response so I'll give it a go. Ripping a disk away and silently inserting a new one is asking for trouble, IMHO. I am not sure what you were trying to accomplish, but generally replacing a drive/LUN would entail commands like zpool offline tank c1t3d0, cfgadm | grep c1t3d0, ...
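
A minimal sketch of the replace sequence Remco is pointing at, using the pool and device names from this thread (tank, c1t3d0); the cfgadm attachment point is hypothetical and site-specific, so check cfgadm -al first:

   # zpool offline tank c1t3d0
   # cfgadm -al | grep c1t3d0            (note the Ap_Id for the slot)
   # cfgadm -c unconfigure <Ap_Id>
   ... physically swap the disk ...
   # cfgadm -c configure <Ap_Id>
   # zpool replace tank c1t3d0
   # zpool status tank

Offlining and unconfiguring first tells both ZFS and the disk layer that the removal is intentional, which is exactly what a silent pull skips.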

Re: [zfs-discuss] zfs and nfs

2009-04-09 Thread OpenSolaris Forums
> I'm using Solaris 10 (10/08). This feature is exactly what I want. Thanks for the response. Duh. What I meant previously was that this feature is not available in the Solaris 10 releases. Cindy

Re: [zfs-discuss] Zpool import error! - Help Needed

2009-04-09 Thread OpenSolaris Forums
I have a similar problem:

r...@moby1:~# zpool import
  pool: bucket
    id: 12835839477558970577
 state: UNAVAIL
action: The pool cannot be imported due to damaged devices or data.
config:
        bucket    UNAVAIL  insufficient replicas
          raidz2  UNAVAIL  corrupted data
            c...

Re: [zfs-discuss] Data size grew.. with compression on

2009-04-09 Thread OpenSolaris Forums
If you rsync data to ZFS over existing files, you need to take something more into account: if you have a snapshot of your files and rsync the same files again, you need to use the "--inplace" rsync option, otherwise completely new blocks will be allocated for the new files. That's because rsync will write an entirely new file and rename it over the old file.
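
A hedged example of the pattern being described, with hypothetical paths; note that on a purely local copy rsync defaults to whole-file transfers, so --no-whole-file is also needed if you want unchanged blocks to be left alone:

   # rsync -av --inplace --no-whole-file /data/maildirs/ /tank/backup/maildirs/
   # zfs list -t snapshot -o name,used,refer | grep tank/backup

With --inplace, only blocks that actually changed are rewritten, so the space uniquely held by older snapshots (the USED column) grows far less than with rsync's default write-and-rename behaviour.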

Re: [zfs-discuss] Data size grew.. with compression on

2009-04-09 Thread Harry Putnam
Jeff Bonwick writes: >>> Yes, I made note of that in my OP on this thread. But is it enough to end up with 8 GB of non-compressed files measuring 8 GB on reiserfs (Linux) and the same data showing nearly 9 GB when copied to a ZFS filesystem with compression on? >> whoops.. a he...

[zfs-discuss] zfs as a cache server

2009-04-09 Thread Francois
Hello list, What would be the best zpool configuration for a cache/proxy server (probably based on Squid)? In other words, with which zpool configuration could I expect the best read performance? (There'll be some writes too, but much less.) Thanks. -- Francois

Re: [zfs-discuss] zfs as a cache server

2009-04-09 Thread Greg Mason
Francois, Your best bet is probably a stripe of mirrors, i.e. a zpool made of many mirrors. This way you have redundancy and fast reads. You'll also enjoy pretty quick resilvering in the event of a disk failure. For even faster reads, you can add dedicated L2ARC cache devices.
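
A minimal sketch of the layout Greg describes, with hypothetical device names; the cache vdev needs a ZFS version that supports L2ARC:

   # zpool create tank mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0 mirror c1t4d0 c1t5d0
   # zpool add tank cache c2t0d0
   # zpool status tank

Reads are spread across all the mirrors, and resilvering only has to copy the surviving half of one mirror rather than reconstruct data across the whole pool.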

Re: [zfs-discuss] Data size grew.. with compression on

2009-04-09 Thread Jonathan
OpenSolaris Forums wrote: > If you rsync data to ZFS over existing files, you need to take something more into account: if you have a snapshot of your files and rsync the same files again, you need to use the "--inplace" rsync option, otherwise completely new blocks will be allocated for th...

Re: [zfs-discuss] Efficient backup of ZFS filesystems?

2009-04-09 Thread Henk Langeveld
Gary Mills wrote: I've been watching the ZFS ARC cache on our IMAP server while the backups are running, and also when user activity is high. The two seem to conflict. Fast response for users seems to depend on their data being in the cache when it's needed. Most of the disk I/O seems to be wr
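
For anyone who wants to watch the same behaviour, a hedged way to sample the standard arcstats kstats during a backup window (field names can vary between releases):

   # kstat -p zfs:0:arcstats:size
   # kstat -p zfs:0:arcstats:c
   # kstat -p zfs:0:arcstats:hits
   # kstat -p zfs:0:arcstats:misses

A hit rate that drops while the backup streams through cold data is consistent with the conflict Gary describes.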

[zfs-discuss] ZFS stripe over EMC write performance.

2009-04-09 Thread Yuri Elson
What is the best write performance improvement anyone has seen (if any) on a ZFS stripe over an EMC SAN? I'd be interested to hear results for both striped and non-striped EMC configs.

Re: [zfs-discuss] zfs as a cache server

2009-04-09 Thread Jean-Noël Mattern
Hi François, You should take care of the recordsize in your filesystems. This should be tuned according to the size of the most frequently accessed files. Disabling "atime" is probably also a good idea (but it's probably something you already know ;) ). We've also noticed some cases where enabling compression...
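
A hedged sketch of those tunables applied to a hypothetical dataset; the recordsize value is only an illustration, should be matched to the typical cached object size, and only affects files written after the change:

   # zfs set recordsize=8K tank/squidcache
   # zfs set atime=off tank/squidcache
   # zfs set compression=on tank/squidcache
   # zfs get recordsize,atime,compression tank/squidcache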

Re: [zfs-discuss] Data size grew.. with compression on

2009-04-09 Thread Daniel Rock
Jonathan wrote: OpenSolaris Forums wrote: if you have a snapshot of your files and rsync the same files again, you need to use the "--inplace" rsync option, otherwise completely new blocks will be allocated for the new files. That's because rsync will write an entirely new file and rename it over th...

Re: [zfs-discuss] Data size grew.. with compression on

2009-04-09 Thread Jonathan
Daniel Rock wrote: > Jonathan wrote: >> OpenSolaris Forums wrote: >>> if you have a snapshot of your files and rsync the same files again, you need to use the "--inplace" rsync option, otherwise completely new blocks will be allocated for the new files. That's because rsync will write en...

Re: [zfs-discuss] Data size grew.. with compression on

2009-04-09 Thread Greg Mason
Harry, ZFS will only compress data if it is able to gain more than 12% of space by compressing the data (I may be wrong on the exact percentage). If ZFS can't get at least that 12% compression, it doesn't bother and will just store the block uncompressed. Also, the default ZFS compression...
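
A quick way to see what compression is actually buying on a given dataset (dataset name hypothetical); a compressratio close to 1.00x means most blocks fell under the threshold Greg mentions and were stored uncompressed:

   # zfs get compression,compressratio tank/from-reiserfs
   # zfs list -o name,used,refer tank/from-reiserfs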

Re: [zfs-discuss] Data size grew.. with compression on

2009-04-09 Thread reader
Greg Mason writes: > Harry, ZFS will only compress data if it is able to gain more than 12% of space by compressing the data (I may be wrong on the exact percentage). If ZFS can't get at least that 12% compression, it doesn't bother and will just store the block uncompressed. > Al...

Re: [zfs-discuss] Data size grew.. with compression on

2009-04-09 Thread reader
OpenSolaris Forums writes: > if you rsync data to ZFS over existing files, you need to take something more into account: if you have a snapshot of your files and rsync the same files again, you need to use the "--inplace" rsync option, otherwise completely new blocks will be allocated for...

Re: [zfs-discuss] Data size grew.. with compression on

2009-04-09 Thread reader
Jonathan writes: > It appears I may have misread the initial post. I don't really know how > I misread it, but I think I missed the snapshot portion of the message > and got confused. I understand the interaction between snapshots, > rsync, and --inplace being discussed now. I don't think you

Re: [zfs-discuss] zfs as a cache server

2009-04-09 Thread Scott Lawson
Hi Francois, I use ZFS with Squid proxies here at MIT (MIT New Zealand, that is ;)). My basic setup is like so:
- 2 x Sun SPARC V240s, dual CPUs, with 2 x 36 GB boot disks and 2 x 73 GB cache disks. Each machine has 4 GB RAM.
- Each has a copy of Squid, SquidGuard and an Apache server.
- A...

Re: [zfs-discuss] ZFS Panic

2009-04-09 Thread Grant Lowe
Hi Remco. Yes, I realize that was asking for trouble. It wasn't supposed to be a test of yanking a LUN. We needed a LUN for a VxVM/VxFS system and that LUN was available. I was just surprised at the panic, since the system was quiesced at the time. But there is coming a time when we will b

[zfs-discuss] raidz on-disk layout

2009-04-09 Thread m...@bruningsystems.com
Hi, For anyone interested, I have blogged about the raidz on-disk layout at: http://mbruning.blogspot.com/2009/04/raidz-on-disk-format.html Comments/corrections are welcome. Thanks, max

Re: [zfs-discuss] Data size grew.. with compression on

2009-04-09 Thread David Magda
On Apr 7, 2009, at 16:43, OpenSolaris Forums wrote: if you have a snapshot of your files and rsync the same files again, you need to use the "--inplace" rsync option, otherwise completely new blocks will be allocated for the new files. That's because rsync will write an entirely new file and rena...

[zfs-discuss] ZIL SSD performance testing... -IOzone works great, others not so great

2009-04-09 Thread Patrick Skerrett
Hi folks, I would appreciate it if someone could help me understand some weird results I'm seeing while performance testing an SSD-offloaded ZIL. I'm attempting to improve my infrastructure's burstable write capacity (ZFS-based WebDAV servers), and naturally I'm looking at im...

Re: [zfs-discuss] ZIL SSD performance testing... -IOzone works great, others not so great

2009-04-09 Thread Neil Perrin
Patrick, The ZIL is only used for synchronous requests like O_DSYNC/O_SYNC and fsync(). Your iozone command must be doing some synchronous writes. All the other tests (dd, cat, cp, ...) do everything asynchronously; that is, they do not require the data to be on stable storage on return from the w...
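
A hedged sketch of how a separate log device is usually attached and then checked, with hypothetical pool and device names; per Neil's point, only synchronous workloads will generate traffic on the log vdev:

   # zpool add tank log c3t0d0
   # zpool status tank
   # zpool iostat -v tank 5             (watch the log vdev line while the benchmark runs)

If dd, cat or cp show no activity on the log device while a synchronous iozone run does, that matches the behaviour described above.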

Re: [zfs-discuss] Zfs send speed. Was: User quota design discussion..

2009-04-09 Thread Jorgen Lundman
We finally managed to upgrade the production X4500s to Sol 10 10/08 (unrelated to this), but with the hope that it would also make "zfs send" usable. Exactly how does "build 105" translate to Solaris 10 10/08? My current speed test has sent 34 GB in 24 hours, which isn't great. Perhaps the n...
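
One workaround often suggested on this list is to put a buffer in the pipeline, which also gives a live throughput figure; sketched here with hypothetical dataset, snapshot and host names, and assuming the third-party mbuffer tool is installed on both ends:

   receiver# mbuffer -s 128k -m 1G -I 9090 | zfs receive -d backup
   sender#   zfs send tank/fs@today | mbuffer -s 128k -m 1G -O receiver:9090

The buffer smooths out the bursty zfs send stream so the network link and the receiving pool are not constantly waiting on each other.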

[zfs-discuss] vdev_disk_io_start() sending NULL pointer in ldi_ioctl()

2009-04-09 Thread Shyamali . Chakravarty
Hi All, I have a corefile where we see a NULL-pointer-dereference panic, as we have (deliberately) sent a NULL pointer for the return value. vdev_disk_io_start() ... ... error = ldi_ioctl(dvd->vd_lh, zio->io_cmd, (uintptr_t)&zio->io_dk_callback,

Re: [zfs-discuss] ZFS Panic

2009-04-09 Thread Rince
FWIW, I strongly expect live ripping of a SATA device to not panic the disk layer. It explicitly shouldn't panic the ZFS layer, as ZFS is supposed to be "fault-tolerant" and "drive dropping away at any time" is a rather expected scenario. [I've popped disks out live in many cases, both when I was

Re: [zfs-discuss] ZFS Panic

2009-04-09 Thread Andre van Eyssen
On Fri, 10 Apr 2009, Rince wrote: > FWIW, I strongly expect live ripping of a SATA device to not panic the disk layer. It explicitly shouldn't panic the ZFS layer, as ZFS is supposed to be "fault-tolerant" and "drive dropping away at any time" is a rather expected scenario. Ripping a SATA device

Re: [zfs-discuss] ZFS Panic

2009-04-09 Thread Rince
On Fri, Apr 10, 2009 at 12:43 AM, Andre van Eyssen wrote: > On Fri, 10 Apr 2009, Rince wrote: >> FWIW, I strongly expect live ripping of a SATA device to not panic the disk layer. It explicitly shouldn't panic the ZFS layer, as ZFS is supposed to be "fault-tolerant" and "drive droppi...