Re: [zfs-discuss] Snapshot question

2009-11-13 Thread Tristan Ball
I think the exception may be when doing a recursive snapshot - ZFS appears to halt IO so that it can take all the snapshots at the same instant. At least, that's what it looked like to me. I've got an OpenSolaris ZFS box providing NFS to VMware, and I was getting SCSI timeouts within the Virtua
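
For readers unfamiliar with the behaviour being described: a recursive snapshot covers a dataset and all of its descendants in one atomic operation, which is why writes are briefly held. A minimal sketch, assuming a hypothetical pool named tank:

  # snapshot every dataset under tank at the same instant
  zfs snapshot -r tank@backup-20091113

  # confirm which snapshots were created
  zfs list -t snapshot -r tank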

Re: [zfs-discuss] heads up on SXCE build 125 (LU + mirrored root pools)

2009-11-13 Thread Chris Du
Seems like upgrading from b126 to b127 will have the same problem.

Re: [zfs-discuss] scrub differs in execute time?

2009-11-13 Thread Eric D. Mudama
On Fri, Nov 13 at 15:58, Tim Cook wrote: On Fri, Nov 13, 2009 at 2:48 PM, Orvar Korvar < knatte_fnatte_tja...@yahoo.com> wrote: Yes I do fine. How do you do-be-do-be-do? I have OpenSolaris b125 and filled a zpool with data. I did scrub on it, which took 8 hours. Some of the drives were connect

Re: [zfs-discuss] scrub differs in execute time?

2009-11-13 Thread Tim Cook
On Fri, Nov 13, 2009 at 2:48 PM, Orvar Korvar < knatte_fnatte_tja...@yahoo.com> wrote: > Yes I do fine. How do you do-be-do-be-do? > > I have OpenSolaris b125 and filled a zpool with data. I did scrub on it, > which took 8 hours. Some of the drives were connected to the mobo, some of > the drives

Re: [zfs-discuss] scrub differs in execute time?

2009-11-13 Thread Orvar Korvar
Yes I do fine. How do you do-be-do-be-do? I have OpenSolaris b125 and filled a zpool with data. I did a scrub on it, which took 8 hours. Some of the drives were connected to the mobo, some of the drives were connected to the AOC-MV8... Marvell 88SX card which is used in the Thumper. Then I connected a
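
For comparing runs, the scrub's progress and elapsed time can be watched while it is running; a minimal sketch, assuming a hypothetical pool named tank:

  # kick off a scrub and check how far along it is
  zpool scrub tank
  zpool status -v tank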

[zfs-discuss] [Fwd: [osol-announce] IMPT: Infrastructure upgrade this weekend, 11/13-15]

2009-11-13 Thread Cindy Swearingen
Original Message Subject: [osol-announce] IMPT: Infrastructure upgrade this weekend, 11/13-15 Date: Wed, 11 Nov 2009 12:37:19 -0800 From: Derek Cicero Reply-To: mai...@opensolaris.org To: opensolaris-annou...@opensolaris.org All, Due to infrastructure upgrades in several phy

Re: [zfs-discuss] Snapshot question

2009-11-13 Thread Richard Elling
On Nov 13, 2009, at 6:43 AM, Rodrigo E. De León Plicet wrote: While reading about NILFS here: http://www.linux-mag.com/cache/7345/1.html I saw this: One of the most noticeable features of NILFS is that it can "continuously and automatically save instantaneous states of the file system w

Re: [zfs-discuss] zfs/io performance on Netra X1

2009-11-13 Thread Richard Elling
The Netra X1 has one ATA bus for both internal drives. No way to get high perf out of a snail. -- richard On Nov 13, 2009, at 8:08 AM, Bob Friesenhahn > wrote: On Fri, 13 Nov 2009, Tim Cook wrote: If it is using parallel SCSI, perhaps there is a problem with the SCSI bus termination or

Re: [zfs-discuss] zfs/io performance on Netra X1

2009-11-13 Thread Brian H. Nelson
Bob Friesenhahn wrote: On Fri, 13 Nov 2009, Tim Cook wrote: If it is using parallel SCSI, perhaps there is a problem with the SCSI bus termination or a bad cable? SCSI? Try PATA ;) Is that good? I don't recall ever selecting that option when purchasing a computer. It seemed safer to st

Re: [zfs-discuss] zfs/io performance on Netra X1

2009-11-13 Thread Bob Friesenhahn
On Fri, 13 Nov 2009, Tim Cook wrote: If it is using parallel SCSI, perhaps there is a problem with the SCSI bus termination or a bad cable? SCSI?  Try PATA ;) Is that good? I don't recall ever selecting that option when purchasing a computer. It seemed safer to stick with SCSI than to try

Re: [zfs-discuss] zfs/io performance on Netra X1

2009-11-13 Thread Tim Cook
On Fri, Nov 13, 2009 at 9:53 AM, Bob Friesenhahn < bfrie...@simple.dallas.tx.us> wrote: > On Fri, 13 Nov 2009, inouk wrote: > >> >> Sounds like a bus bottleneck, as if two HD's can't use the same bus for >> data transfert. I don't know the hardware specifications of Netra X1, >> though >> > > May

Re: [zfs-discuss] zfs/io performance on Netra X1

2009-11-13 Thread Bob Friesenhahn
On Fri, 13 Nov 2009, inouk wrote: Sounds like a bus bottleneck, as if two HDs can't use the same bus for data transfer. I don't know the hardware specifications of the Netra X1, though. Maybe it uses Ultra-160 SCSI like my Sun Blade 2500? This does constrain performance, but due to simultane

Re: [zfs-discuss] zfs/io performance on Netra X1

2009-11-13 Thread inouk
> On Fri, 13 Nov 2009, inouk wrote: > Your system has very little RAM (512MB). It is less than is recommended for Solaris 10 or for zfs and if it was a PC, it would be barely enough to run Windows XP. Since zfs likes to use RAM and expects that sufficient RAM will be available, it
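
One common workaround on a machine this small (an assumption on my part, not something suggested in the thread) is to cap the ARC so ZFS leaves more of the 512MB to everything else, via /etc/system and a reboot:

  * /etc/system entry: limit the ZFS ARC to 128MB (value is in bytes)
  set zfs:zfs_arc_max = 0x8000000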

[zfs-discuss] Let's guess which filesystem.....

2009-11-13 Thread Matthias Appel
NSA might choose in the future. I just found this link on the Backblaze blog and I hope you will find it as amusing as I do: http://blog.backblaze.com/2009/11/12/nsa-might-want-some-backblaze-pods/ -- Give a man a fish and you feed him for a day; give him a freshly-charged Electric Eel and ch

Re: [zfs-discuss] zfs/io performance on Netra X1

2009-11-13 Thread Jeffry Molanus
Agreed, but still: why does zpool iostat report 15MB while iostat reports 615KB? Regards, Jeff From: zfs-discuss-boun...@opensolaris.org [zfs-discuss-boun...@opensolaris.org] On Behalf Of Bob Friesenhahn [bfrie...@simple.dallas.tx.us] Sent: Friday, November 13, 2009 4:05 PM To: in

Re: [zfs-discuss] dedupe question

2009-11-13 Thread Tim Cook
On Fri, Nov 13, 2009 at 7:09 AM, Ross wrote: > Isn't dedupe in some ways the antithesis of setting copies > 1? We go to a lot of trouble to create redundancy (n-way mirroring, raidz-n, copies=n, etc) to make things as robust as possible and then we reduce redundancy with dedupe and

Re: [zfs-discuss] zfs/io performance on Netra X1

2009-11-13 Thread Bob Friesenhahn
On Fri, 13 Nov 2009, inouk wrote: So my questions are the following: 1.- Why is zpool iostat reporting 15MB/s of data read when in reality only 615KB/s is read? 2.- Why is sched taking so much IO? 3.- What can I do to improve IO performance? I find it very unbelievable that this is the best
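
For anyone trying to reproduce the comparison, the two numbers come from different layers and are easiest to read side by side over the same interval; a sketch with an assumed pool name and a 5-second interval:

  # pool-level read/write bandwidth as ZFS sees it, every 5 seconds
  zpool iostat -v tank 5

  # per-device throughput as the OS sees it, every 5 seconds
  iostat -xn 5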

Re: [zfs-discuss] dedupe question

2009-11-13 Thread Bob Friesenhahn
On Fri, 13 Nov 2009, Ross wrote: But are we reducing redundancy? I don't know the details of how dedupe is implemented, but I'd have thought that if copies=2, you get 2 copies of each dedupe block. So your data is just as safe since you haven't actually changed the redundancy, it's just tha
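
To make the interaction concrete (the dataset name is made up, and dedup only exists in builds that ship the feature), copies and dedup are independent per-dataset properties:

  # keep two copies of every block and deduplicate on the same dataset
  zfs set copies=2 tank/data
  zfs set dedup=on tank/data

  # confirm both properties
  zfs get copies,dedup tank/data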

Re: [zfs-discuss] Snapshot question

2009-11-13 Thread Casper . Dik
> While reading about NILFS here: > http://www.linux-mag.com/cache/7345/1.html > I saw this: > One of the most noticeable features of NILFS is that it can "continuously and automatically save instantaneous states of the file system without interrupting service". NILFS refers to th

[zfs-discuss] Snapshot question

2009-11-13 Thread Rodrigo E . De León Plicet
While reading about NILFS here: http://www.linux-mag.com/cache/7345/1.html I saw this: One of the most noticeable features of NILFS is that it can "continuously and automatically save instantaneous states of the file system without interrupting service". NILFS refers to these as checkpoint
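
ZFS has no continuous checkpointing, but the usual approximation (a sketch, not something from the article or this thread) is to take cheap snapshots on a short schedule, for example from cron:

  # timestamped snapshot of a hypothetical dataset; schedule this every few minutes
  /usr/sbin/zfs snapshot tank/home@auto-`date +%Y%m%d-%H%M`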

[zfs-discuss] zfs/io performance on Netra X1

2009-11-13 Thread inouk
Hi, I have a Netra X1 server with 512MB RAM and two ATA disks, model ST340016A. The processor is an UltraSPARC-IIe at 500MHz. The Solaris version is: Solaris 10 10/09 s10s_u8wos_08a SPARC. I jumpstarted the server with ZFS as root, two disks as a mirror:
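
For anyone setting up the same layout by hand rather than through jumpstart (the device names below are assumptions for a two-disk X1), a root mirror is built by attaching the second disk and installing the boot block:

  # attach the second disk to the root pool, then make it bootable
  zpool attach rpool c0t0d0s0 c0t2d0s0
  installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t2d0s0
  zpool status rpool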

Re: [zfs-discuss] dedupe question

2009-11-13 Thread Victor Latushkin
On 13.11.09 16:09, Ross wrote: Isn't dedupe in some ways the antithesis of setting copies > 1? We go to a lot of trouble to create redundancy (n-way mirroring, raidz-n, copies=n, etc) to make things as robust as possible and then we reduce redundancy with dedupe and compression. But are we redu

Re: [zfs-discuss] "zfs send" from solaris 10/08 to "zfs receive" on solaris 10/09

2009-11-13 Thread Edward Ned Harvey
> It says at the end of the zfs send section of the man page "The format > of the stream is committed. You will be able to receive your streams on > future versions of ZFS." > > 'Twas not always so. It used to say "The format of the stream is > evolving. No backwards compatibility is guaranteed. Y
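
The guarantee being quoted is what makes pipelines like the following safe across releases (the pool, dataset, and host names here are made up):

  # send a snapshot from the 10/08 machine and receive it on the 10/09 machine
  zfs snapshot tank/home@migrate
  zfs send tank/home@migrate | ssh newhost zfs receive backup/home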

Re: [zfs-discuss] dedupe question

2009-11-13 Thread Ross
> Isn't dedupe in some ways the antithesis of setting copies > 1? We go to a lot of trouble to create redundancy (n-way mirroring, raidz-n, copies=n, etc) to make things as robust as possible and then we reduce redundancy with dedupe and compression. But are we reducing redundancy? I don't

Re: [zfs-discuss] scrub differs in execute time?

2009-11-13 Thread Henrik Johansson
How do you do, On 13 nov 2009, at 11.07, Orvar Korvar wrote: I have a raidz2 and did a scrub, it took 8h. Then I reconnected some drives to other SATA ports, and now it takes 15h to scrub?? Why is that? Could you perhaps provide some more info? Which OSOL release? Are the new disks ut

[zfs-discuss] scrub differs in execute time?

2009-11-13 Thread Orvar Korvar
I have a raidz2 and did a scrub, it took 8h. Then I reconnected some drives to other SATA ports, and now it takes 15h to scrub?? Why is that?