Re: [zfs-discuss] Small stalls keeping rsync from holding network saturation every 5 seconds

2010-05-30 Thread Sandon Van Ness
On 05/30/2010 04:22 PM, Richard Elling wrote:
> If you want to decouple the txg commit completely, then you might consider
> using a buffer of some sort. I use mbuffer for pipes, but that may be tricky
> to use in an rsync environment.
> -- richard

I initially thought this was I/O but now I
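
For reference, the kind of buffered pipe Richard means might look like this sketch (pool, snapshot, and host names are hypothetical; -s sets mbuffer's block size, -m its buffer size):

  zfs send tank/data@today | mbuffer -s 128k -m 1G | ssh backuphost "zfs receive backup/data"

The buffer absorbs the receiver's txg-commit stalls so the sending side can keep the network full.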

Re: [zfs-discuss] Zfs mirror boot hang at boot

2010-05-30 Thread Frank Cusack
On 5/29/10 12:54 AM -0700 Matt Connolly wrote:
> I'm running snv_134 on a 64-bit x86 motherboard, with 2 SATA drives. The
> zpool "rpool" uses the whole disk of each drive.

Can't be. zfs can't boot from a whole-disk pool on x86 (maybe sparc too). You have a single solaris partition with the root pool on
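
A quick way to see which layout is actually in use (device names hypothetical):

  zpool status rpool

A bootable x86 root pool lists a slice such as c8t0d0s0; a pool built on the whole disk shows the bare device, e.g. c8t0d0.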

Re: [zfs-discuss] Small stalls keeping rsync from holding network saturation every 5 seconds

2010-05-30 Thread Richard Elling
On May 30, 2010, at 3:04 PM, Sandon Van Ness wrote:
> Basically for a few seconds at a time I can get very nice speeds through
> rsync (saturating a 1 gig link) which is around 112-113 megabytes/sec
> which is about as good as I can expect after overhead. The problem is
> that every 5 seconds when
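
The 5-second cadence lines up with the transaction group commit. On builds of this vintage the interval can be inspected, and cautiously tuned, via the zfs_txg_timeout kernel variable, assuming that tunable exists in the running build; a sketch:

  echo "zfs_txg_timeout/D" | mdb -k       # read the current value (seconds)
  echo "zfs_txg_timeout/W0t10" | mdb -kw  # set 10 seconds, for testing only

This is a live-kernel change and does not persist across a reboot.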

Re: [zfs-discuss] Disk space overhead (total volume size) by ZFS

2010-05-30 Thread Sandon Van Ness
On 05/30/2010 03:10 PM, Mattias Pantzare wrote:
> On Sun, May 30, 2010 at 23:37, Sandon Van Ness wrote:
>> I just wanted to make sure this is normal and is expected. I fully
>> expected that as the file-system filled up I would see more disk space
>> being used than with ot

Re: [zfs-discuss] Disk space overhead (total volume size) by ZFS

2010-05-30 Thread Mattias Pantzare
On Sun, May 30, 2010 at 23:37, Sandon Van Ness wrote:
> I just wanted to make sure this is normal and is expected. I fully
> expected that as the file-system filled up I would see more disk space
> being used than with other file-systems due to its features but what I
> didn't expect was to lose o
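
The gap is easier to reason about by comparing the pool's raw view with the filesystem view, since zpool list counts raidz parity and reservations while zfs list shows only usable space; a sketch with a hypothetical pool name:

  zpool list data   # SIZE: raw capacity, parity included on raidz
  zfs list data     # USED + AVAIL: what files can actually consume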

[zfs-discuss] Small stalls keeping rsync from holding network saturation every 5 seconds

2010-05-30 Thread Sandon Van Ness
Basically, for a few seconds at a time I can get very nice speeds through rsync (saturating a 1-gigabit link), around 112-113 megabytes/sec, which is about as good as I can expect after overhead. The problem is that every 5 seconds, when data is actually written to disks (physically looking at the
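
One way to watch the burst-and-stall pattern is a per-second view of pool bandwidth (the pool name "data" is assumed here):

  zpool iostat data 1

During the fast stretches the write column sits near zero while data buffers in memory, then spikes when the transaction group is committed.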

Re: [zfs-discuss] Disk space overhead (total volume size) by ZFS

2010-05-30 Thread Sandon Van Ness
On 05/30/2010 02:51 PM, Brandon High wrote:
> On Sun, May 30, 2010 at 2:37 PM, Sandon Van Ness wrote:
>> ZFS:
>> r...@opensolaris: 11:22 AM :/data# df -k /data
>
> 'zfs list' is more accurate than df, since it will also show space
> used by snapshots. eg:
> bh...@basestar:~$ df -h /expo

Re: [zfs-discuss] Disk space overhead (total volume size) by ZFS

2010-05-30 Thread Brandon High
On Sun, May 30, 2010 at 2:37 PM, Sandon Van Ness wrote:
> ZFS:
> r...@opensolaris: 11:22 AM :/data# df -k /data

'zfs list' is more accurate than df, since it will also show space used by snapshots. eg:

bh...@basestar:~$ df -h /export/home/bhigh
Filesystem             size   used  avail capacity
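
For a fuller breakdown than df can give, builds with the usedby* properties also accept a space summary (dataset name hypothetical):

  zfs list -o space tank/export

which splits USED into the snapshot, dataset, children, and refreservation components.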

[zfs-discuss] Disk space overhead (total volume size) by ZFS

2010-05-30 Thread Sandon Van Ness
I just wanted to make sure this is normal and expected. I fully expected that as the file-system filled up I would see more disk space being used than with other file-systems due to its features, but what I didn't expect was to lose ~500-600GB from the total volume size right

Re: [zfs-discuss] zpool/zfs list question

2010-05-30 Thread Brandon High
On Sun, May 30, 2010 at 11:46 AM, Roy Sigurd Karlsbakk wrote:
> Is there a way to report zpool/zfs stats in a fixed scale, like KiB or even
> bytes?

Some (but not all) commands use -p.

     -p    Use exact (parseable) numeric output.

-B
-- Brandon High : bh...@freaks.com
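
For example, a script-friendly query; zfs get takes both flags even where zfs list may not (dataset name hypothetical):

  zfs get -Hp used,available tank/data

-H drops headers and tab-separates fields, -p prints exact byte counts.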

[zfs-discuss] zpool/zfs list question

2010-05-30 Thread Roy Sigurd Karlsbakk
Hi all

Using zpool/zfs list -H gives me a good overview of things, and is easy to parse, except that the allocation and data sizes are reported in 'human readable' form. For scripting, this is somewhat suboptimal. Is there a way to report zpool/zfs stats in a fixed scale, like KiB or even byte

Re: [zfs-discuss] zfs/lofi/share panic

2010-05-30 Thread Frank Middleton
On 05/27/10 05:16 PM, Dennis Clarke wrote:
> I just tried this with a UFS based filesystem just for a lark. It never
> failed on UFS, regardless of the contents of /etc/dfs/dfstab. Guess I
> must now try this with a ZFS fs under that iso file.

Just tried it again with b134 *with* "share /mnt" i
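
The reproduction under discussion is roughly this sequence (paths hypothetical; lofiadm prints the device it assigns, /dev/lofi/1 is assumed here):

  lofiadm -a /tank/images/test.iso
  mount -F hsfs -o ro /dev/lofi/1 /mnt
  share /mnt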

[zfs-discuss] [RESOLVED] Re: expand zfs for OpenSolaris running inside vm

2010-05-30 Thread me
Reinstalling grub helped. What is the purpose of the dump slice?

On Sun, May 30, 2010 at 9:05 PM, me wrote:
> Thanks! It is exactly what I was looking for.
>
> On Sat, May 29, 2010 at 12:44 AM, Cindy Swearingen
> <cindy.swearin...@oracle.com> wrote:
>> 2. Attaching a larger disk to the root pool an
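
For the record: the dump device (on a ZFS root, usually the rpool/dump zvol) is where the kernel writes a crash dump for later retrieval with savecore. Reinstalling the x86 boot blocks and checking the dump setup look roughly like this (slice name hypothetical):

  installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c8t0d0s0
  dumpadm   # shows the current dump device and savecore directory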

Re: [zfs-discuss] expand zfs for OpenSolaris running inside vm

2010-05-30 Thread me
Thanks! It is exactly what I was looking for.

On Sat, May 29, 2010 at 12:44 AM, Cindy Swearingen
<cindy.swearin...@oracle.com> wrote:
> 2. Attaching a larger disk to the root pool and then detaching
> the smaller disk
>
> I like #2 best. See this section in the ZFS troubleshooting wiki:
>
> http://
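
In outline, the attach/detach sequence Cindy points to looks like this (device names hypothetical; the new disk needs boot blocks before the old one goes away):

  zpool attach rpool c8t0d0s0 c8t1d0s0
  # wait for the resilver to finish -- watch: zpool status rpool
  installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c8t1d0s0
  zpool detach rpool c8t0d0s0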

[zfs-discuss] [RESOLVED] Re: No mount all at boot

2010-05-30 Thread me
I had an empty directory /export/home created in the root filesystem. It was preventing the mount. Just deleted it and all is OK.

On Sun, May 30, 2010 at 5:40 PM, me wrote:
> I was trying to expand the space of rpool. I didn't manage it, but after
> removing one (not in use) disk from the VM configuration, the system
> doesn't start
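
ZFS refuses to mount over a non-empty directory, so the fix is just this sketch (the error text is approximate):

  zfs mount -a
  cannot mount '/export/home': directory is not empty
  rmdir /export/home
  zfs mount -a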

[zfs-discuss] No mount all at boot

2010-05-30 Thread me
I was trying to expand the space of rpool. I didn't manage it, but after removing one (not in use) disk from the VM configuration, the system doesn't start (no X). After a shell login I found out that there is no home:

zfs mount
rpool/ROOT/opensolaris          /

Home can be mounted manually correctly. What is wrong?

-- Dm

Re: [zfs-discuss] zfs send/recv reliability

2010-05-30 Thread Brandon High
On Fri, May 28, 2010 at 10:05 AM, Gregory J. Benscoter wrote:
> I'm primarily concerned with the possibility of a bit flip. If this
> occurs will the stream be lost? Or will the file that the bit flip occurred
> in be the only degraded file? Lastly how does the reliability of this plan
> compa
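
A stored stream's embedded checksums can be verified without receiving it, e.g. with zstreamdump where that tool is available (snapshot name hypothetical):

  zfs send tank/fs@backup | zstreamdump > /dev/null

Note that zfs receive rejects a stream whose checksum fails, so a single flipped bit loses the whole stream rather than degrading one file.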