Re: [zfs-discuss] ssd pool + ssd cache ?

2010-01-09 Thread Lutz Schumann
Depends. a) Pool design: 5 x SSD as raidz = 4 SSDs of space, with the read I/O performance of one drive. Adding 5 cheap 40 GB L2ARC devices (which are pooled) increases the read performance for your working window of 200 GB. If you have a pool of mirrors, adding L2ARC does not make sense. b) SSD type: Is yo…
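
As a concrete sketch of the layout being discussed (pool and device names are hypothetical, not from the message):

    # 5 x SSD raidz: capacity of 4 drives, read IOPS of roughly one drive
    zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0
    # add five cheap 40 GB SSDs as L2ARC; cache devices are striped together
    zpool add tank cache c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0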

[zfs-discuss] Activity after LU with ZFS/Zone working

2010-01-09 Thread Cesare
Hi all, recently I upgraded a T5120 to S10U8 using LU. The system had zones configured, and at the time of the upgrade the zones were still alive and working fine. The LU procedure ended successfully. The zones on the system were installed in a ZFS filesystem. Here is the result at the end of LU (ABE…
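
For reference, a typical LU sequence for this kind of upgrade looks like the sketch below (BE name and media path are hypothetical; Cesare's actual commands are not shown in the preview):

    lucreate -n s10u8                     # create the alternate BE; zones are cloned with it
    luupgrade -u -n s10u8 -s /mnt/s10u8   # upgrade the ABE from the U8 media
    luactivate s10u8                      # mark the new BE active
    init 6                                # reboot into it
    lustatus                              # afterwards, verify the BE states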

Re: [zfs-discuss] I/O Read starvation

2010-01-09 Thread Henrik Johansson
Henrik http://sparcv9.blogspot.com On 9 jan 2010, at 04.49, bank kus wrote: dd if=/dev/urandom of=largefile.txt bs=1G count=8 cp largefile.txt ./test/1.txt & cp largefile.txt ./test/2.txt & That's it; now the system is totally unusable after launching the two 8 GB copies. Until these copies…
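
The reproduction, laid out step by step as quoted:

    # create an 8 GB file, then start two concurrent copies of it
    dd if=/dev/urandom of=largefile.txt bs=1G count=8
    cp largefile.txt ./test/1.txt &
    cp largefile.txt ./test/2.txt &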

Re: [zfs-discuss] I/O Read starvation

2010-01-09 Thread bank kus
> Probably not, but ZFS only runs in userspace on Linux > with fuse so it > will be quite different. I wasn't clear in my description; I'm referring to ext4 on Linux. In fact, on a system with low RAM even the dd command makes the system horribly unresponsive. IMHO not having fairshare or timeslicing…
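
On Linux specifically, one partial workaround (an assumption of this edit, not something bank kus mentions; it only takes effect under the CFQ I/O scheduler) is to run the bulk writer in the idle I/O class:

    # reads from interactive processes should then win over the dd
    ionice -c 3 dd if=/dev/urandom of=largefile.txt bs=1G count=8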

Re: [zfs-discuss] ZFS extremely slow performance

2010-01-09 Thread Emily Grettel
Hello again, I swapped out the PSU and replaced the cables, and ran scrubs almost every day (after hours) with no reported faults. I also upgraded to SNV_130 thanks to Brock, and changed cables and PSU after the suggestion from Richard. I owe you both beers! We thought our troubles were re…
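
For reference, the scrub-and-verify cycle being described (pool name is hypothetical):

    zpool scrub tank
    zpool status -v tank   # look for "scrub completed ... with 0 errors" and clean error counters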

Re: [zfs-discuss] zpool iostat -v hangs on L2ARC failure (SATA, 160 GB Postville)

2010-01-09 Thread Lutz Schumann
I finally managed to resolve this. I received some useful info from Richard Elling (without list CC): >> (ME) However I still think the plain IDE driver also needs a timeout to >> handle disk failures, because cables etc. can fail. > (Richard) Yes, this is a little bit odd. The sd driver should be…
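
If the timeout really is the issue, the sd tunable usually mentioned in this context is sd_io_time (an assumption of this edit, not a value from the thread; the default is 60 seconds):

    # /etc/system entry to shorten the per-command timeout to 30s (0x1e)
    set sd:sd_io_time = 0x1e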

Re: [zfs-discuss] I/O Read starvation

2010-01-09 Thread Bob Friesenhahn
On Sat, 9 Jan 2010, bank kus wrote: Probably not, but ZFS only runs in userspace on Linux with fuse so it will be quite different. I wasn't clear in my description; I'm referring to ext4 on Linux. In fact, on a system with low RAM even the dd command makes the system horribly unresponsive. I…

Re: [zfs-discuss] I/O Read starvation

2010-01-09 Thread bank kus
> I am confused. Are you talking about ZFS under > OpenSolaris, or are > you talking about ZFS under Linux via Fuse? ??? > Do you have compression or deduplication enabled on > the zfs > filesystem? Compression: no. I'm guessing 2009.06 doesn't have dedup. > What sort of system are you using…
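
Checking is quick, for what it's worth (dataset name is hypothetical; the dedup property simply does not exist on a build as old as 2009.06):

    zfs get compression rpool/export/home   # reports on/off and where the value is inherited from
    zpool upgrade -v                         # lists the pool versions/features this build supports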

Re: [zfs-discuss] ssd pool + ssd cache ?

2010-01-09 Thread Richard Elling
On Jan 9, 2010, at 1:32 AM, Lutz Schumann wrote: Depends. a) Pool design: 5 x SSD as raidz = 4 SSDs of space, with the read I/O performance of one drive. Adding 5 cheap 40 GB L2ARC devices (which are pooled) increases the read performance for your working window of 200 GB. An interesting thing happens when…

Re: [zfs-discuss] I/O Read starvation

2010-01-09 Thread Jürgen Keil
> > I wasn't clear in my description; I'm referring to ext4 on Linux. In > > fact, on a system with low RAM even the dd command makes the system > > horribly unresponsive. > > > > IMHO not having fairshare or timeslicing between different processes > > issuing reads is frankly unacceptable given a…

[zfs-discuss] x4500 failed disk, not sure if hot spare took over correctly

2010-01-09 Thread Paul B. Henson
We just had our first x4500 disk failure (which of course had to happen late Friday night). I've opened a ticket on it but don't expect a response until Monday, so I was hoping to verify the hot spare took over correctly and that we still have redundancy pending device replacement. This is an S10U6 box:
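
The usual first checks in this situation (pool name is hypothetical; not taken from Paul's actual output, which is truncated here):

    zpool status -x    # anything other than "all pools are healthy" deserves a look
    zpool status tank  # a correct takeover shows the bad disk and the hot spare grouped
                       # under a "spare" vdev, with the spare listed as INUSE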

Re: [zfs-discuss] x4500 failed disk, not sure if hot spare took over correctly

2010-01-09 Thread Eric Schrock
On Jan 9, 2010, at 9:45 AM, Paul B. Henson wrote: > > If ZFS removed the drive from the pool, why does the system keep > complaining about it? It's not failing in the sense that it's returning I/O errors, but it's flaky, so it's attaching and detaching. Most likely it decided to attach again a…
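
One way to watch that attach/detach churn from the fault-management side (illustrative commands, not quoted from the thread):

    fmdump -e      # the error event log; a flaky disk shows repeated device events
    fmadm faulty   # what FMA has actually diagnosed as faulted, if anything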

Re: [zfs-discuss] x4500 failed disk, not sure if hot spare took over correctly

2010-01-09 Thread Ian Collins
Paul B. Henson wrote: We just had our first x4500 disk failure (which of course had to happen late Friday night). I've opened a ticket on it but don't expect a response until Monday, so I was hoping to verify the hot spare took over correctly and that we still have redundancy pending device replacement.

Re: [zfs-discuss] x4500 failed disk, not sure if hot spare took over correctly

2010-01-09 Thread Paul B. Henson
On Sat, 9 Jan 2010, Eric Schrock wrote: > > If ZFS removed the drive from the pool, why does the system keep > > complaining about it? > > It's not failing in the sense that it's returning I/O errors, but it's > flaky, so it's attaching and detaching. Most likely it decided to attach > again and…

Re: [zfs-discuss] [zones-discuss] Zones on shared storage - a warning

2010-01-09 Thread Frank Batschulat (Home)
On Fri, 08 Jan 2010 18:33:06 +0100, Mike Gerdts wrote: > I've written a dtrace script to get the checksums on Solaris 10. > Here's what I see with NFSv3 on Solaris 10. FYI, I've reproduced it as well using a Solaris 10 Update 8 SB2000 sparc client and NFSv4. Much like you, I also get READ error…
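
Mike's script itself is not quoted in this digest; a minimal sketch of the same idea (the fbt probe point is an assumption of this edit, not taken from his script) might look like:

    # count ZFS checksum-verification failures as they happen, by process
    dtrace -n 'fbt:zfs:zio_checksum_error:return /arg1 != 0/ { @[execname] = count(); }'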

Re: [zfs-discuss] abusing zfs boot disk for fun and DR

2010-01-09 Thread Mark Bennett
Ben, I have found that booting from CD-ROM and importing the pool on the new host, then booting from the hard disk, will prevent these issues. That reconfigures ZFS to use the new disk devices. Once running, zpool detach the missing mirror device and attach a new one. Mark.
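
Spelled out, the sequence Mark describes looks roughly like this (pool and device names are hypothetical):

    # booted from CD/DVD on the new host:
    zpool import -f rpool        # force-import, since the pool was last used on the old host
    init 6                       # then boot from the hard disk as usual
    # once up, swap out the now-missing mirror half:
    zpool detach rpool c1t1d0s0
    zpool attach rpool c1t0d0s0 c2t1d0s0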

Re: [zfs-discuss] I/O Read starvation

2010-01-09 Thread Henrik Johansson
On Jan 9, 2010, at 2:02 PM, bank kus wrote: >> Probably not, but ZFS only runs in userspace on Linux >> with fuse so it >> will be quite different. > > I wasn't clear in my description; I'm referring to ext4 on Linux. In fact, on a > system with low RAM even the dd command makes the system horribly…

Re: [zfs-discuss] I/O Read starvation

2010-01-09 Thread bank kus
Hi Henrik, I have 16 GB RAM on my system; on a lesser-RAM system dd does cause problems, as I mentioned above. My __guess__ is that dd's output is sitting in some in-memory cache, since du -sh doesn't show the full file size until I do a sync. At this point I'm less looking for QA-type repro questions and/or…
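
The caching observation, as a sequence (reconstructed from the message; file name as in the earlier dd example):

    du -sh largefile.txt   # shows less than the full 8 GB while data sits in memory
    sync                   # flush dirty data to disk
    du -sh largefile.txt   # now reports the full size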

Re: [zfs-discuss] I/O Read starvation

2010-01-09 Thread bank kus
Btw, FWIW, if I redo the dd + 2 cp experiment on /tmp the result is far more disastrous. The GUI stops moving and Caps Lock stops responding for long intervals; no clue why.
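
One likely factor, stated here as background rather than anything from the message: on Solaris, /tmp is swap-backed tmpfs, so an 8 GB file there competes directly with anonymous memory. This is easy to confirm:

    df -n /tmp    # prints the filesystem type; shows "tmpfs" on a default install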