Re: [zfs-discuss] I/O Read starvation

2010-01-11 Thread Ross Walker
On Jan 11, 2010, at 2:23 PM, Bob Friesenhahn wrote: On Mon, 11 Jan 2010, bank kus wrote: Are we still trying to solve the starvation problem? I would argue the disk I/O model is fundamentally broken on Solaris if there is no fair I/O scheduling between multiple read sources; until that…

Re: [zfs-discuss] I/O Read starvation

2010-01-11 Thread Bob Friesenhahn
On Mon, 11 Jan 2010, bank kus wrote: Are we still trying to solve the starvation problem? I would argue the disk I/O model is fundamentally broken on Solaris if there is no fair I/O scheduling between multiple read sources; until that is fixed, individual I_am_systemstalled_while_doing_xyz problems…

Re: [zfs-discuss] I/O Read starvation

2010-01-11 Thread bank kus
> Are we still trying to solve the starvation problem? I would argue the disk I/O model is fundamentally broken on Solaris if there is no fair I/O scheduling between multiple read sources; until that is fixed, individual I_am_systemstalled_while_doing_xyz problems will crop up. Started a new thread…

Re: [zfs-discuss] I/O Read starvation

2010-01-11 Thread Henrik Johansson
Hello, On Jan 11, 2010, at 6:53 PM, bank kus wrote: >> For example, you could set it to half your (8GB) memory so that 4GB is >> immediately available for other uses. >> >> * Set maximum ZFS ARC size to 4GB > > capping max sounds like a good idea. Are we still trying to solve the starvation problem…

Re: [zfs-discuss] I/O Read starvation

2010-01-11 Thread bank kus
> For example, you could set it to half your (8GB) memory so that 4GB is > immediately available for other uses. > > * Set maximum ZFS ARC size to 4GB Capping max sounds like a good idea, thanks. banks
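A minimal sketch of the cap being discussed, assuming OpenSolaris/Solaris 10 (the 4GB value follows the suggestion quoted above; it takes effect after a reboot):

    * In /etc/system: cap the ZFS ARC at 4 GB (0x100000000 bytes)
    set zfs:zfs_arc_max = 0x100000000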

Re: [zfs-discuss] I/O Read starvation

2010-01-11 Thread Bob Friesenhahn
On Mon, 11 Jan 2010, bank kus wrote: However, I noticed something weird: long after the file operations are done, the free memory doesn't seem to grow back (below). Essentially ZFS File Data claims to use 76% of memory long after the file has been written. How does one reclaim it? Is ZFS File…
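The "ZFS File Data" figure quoted above is the kind of breakdown ::memstat prints; a sketch of reproducing it (run as root):

    # Kernel memory breakdown, including the page cache held by ZFS
    echo ::memstat | mdb -k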

Re: [zfs-discuss] I/O Read starvation

2010-01-11 Thread bank kus
vmstat does show something interesting. The free memory shrinks while doing the first dd (generating the 8G file) from around 10G to 1.5G-ish. The copy operations thereafter don't consume much and it stays at 1.2G after all operations have completed. (Btw, at the point of system sluggishness there…

Re: [zfs-discuss] I/O Read starvation

2010-01-11 Thread Phil Harman
Hi Banks, Some basic stats might shed some light, e.g. vmstat 5, mpstat 5, iostat -xnz 5, prstat -Lmc 5 ... all running from just before you start the tests until things are "normal" again. Memory starvation is certainly a possibility. The ARC can be greedy and slow to release memory under…
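A sketch of capturing the suggested stats in parallel (the log file names are illustrative):

    # Start the monitors Phil lists, each logging at 5-second intervals;
    # leave them running across the test, then kill them once things recover.
    vmstat 5      > vmstat.log  &
    mpstat 5      > mpstat.log  &
    iostat -xnz 5 > iostat.log  &
    prstat -Lmc 5 > prstat.log  &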

Re: [zfs-discuss] I/O Read starvation

2010-01-10 Thread Richard Elling
On Jan 8, 2010, at 7:49 PM, bank kus wrote: > dd if=/dev/urandom of=largefile.txt bs=1G count=8 > > cp largefile.txt ./test/1.txt & > cp largefile.txt ./test/2.txt & > > That's it; now the system is totally unusable after launching the two 8G > copies. Until these copies finish no other applications…

Re: [zfs-discuss] I/O Read starvation

2010-01-10 Thread Daniel Carosone
On Sun, Jan 10, 2010 at 09:54:56AM -0600, Bob Friesenhahn wrote: > WTF? urandom is a character device and is returning short reads (note the 0+n vs n+0 counts). dd is not padding these out to the full blocksize (conv=sync) or making multiple reads to fill blocks (conv=fullblock). Evidently the urandom…
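A sketch of the two workarounds Daniel names. Note that conv=fullblock is a GNU dd extension; Solaris /usr/bin/dd only offers conv=sync, which pads each short read out to the block size with NUL bytes rather than refilling it:

    # POSIX dd: pad short reads from the character device up to bs with zeros
    dd if=/dev/urandom of=largefile.txt bs=128k count=65536 conv=sync

    # GNU dd only: keep reading until each 128K block is actually full
    dd if=/dev/urandom of=largefile.txt bs=128k count=65536 conv=fullblock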

Re: [zfs-discuss] I/O Read starvation

2010-01-10 Thread Bob Friesenhahn
On Sun, 10 Jan 2010, Henrik Johansson wrote: As an interesting aside, on my Solaris 10U8 system (plus a zfs IDR), dd (Solaris or GNU) does not produce the expected file size when using /dev/urandom as input: Do you feel this is related to the filesystem, is there any difference between…

Re: [zfs-discuss] I/O Read starvation

2010-01-10 Thread Henrik Johansson
Hello Bob, On Jan 10, 2010, at 4:54 PM, Bob Friesenhahn wrote: > On Sun, 10 Jan 2010, Phil Harman wrote: >> In performance terms, you'll probably find that block sizes beyond 128K add >> little benefit. So I'd suggest something like: >> >> dd if=/dev/urandom of=largefile.txt bs=128k count=65536

Re: [zfs-discuss] I/O Read starvation

2010-01-10 Thread bank kus
Place a sync call after dd?
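As a quick sketch of that suggestion (the file name follows the thread's example), flushing dirty data so that du -sh reports the full on-disk size:

    dd if=/dev/urandom of=largefile.txt bs=128k count=65536 conv=sync
    sync    # flush cached writes before measuring
    du -sh largefile.txt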

Re: [zfs-discuss] I/O Read starvation

2010-01-10 Thread Bob Friesenhahn
On Sun, 10 Jan 2010, Phil Harman wrote: In performance terms, you'll probably find that block sizes beyond 128K add little benefit. So I'd suggest something like: dd if=/dev/urandom of=largefile.txt bs=128k count=65536 dd if=largefile.txt of=./test/1.txt bs=128k & dd if=largefile.txt of=./test…
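Reconstructing Phil's full suggestion from the pattern above (the second destination path is an assumption, mirroring the cp commands in the original report):

    # Generate the 8 GB test file in 128K blocks (the default ZFS recordsize)
    dd if=/dev/urandom of=largefile.txt bs=128k count=65536
    # Two concurrent 128K-blocked copies in place of the mmap-based cp
    dd if=largefile.txt of=./test/1.txt bs=128k &
    dd if=largefile.txt of=./test/2.txt bs=128k &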

Re: [zfs-discuss] I/O Read starvation

2010-01-10 Thread Henrik Johansson
Hello again, On Jan 10, 2010, at 5:39 AM, bank kus wrote: > Hi Henrik, I have 16GB RAM on my system; on a lesser-RAM system dd does cause > problems, as I mentioned above. My __guess__ is dd is probably sitting in some > in-memory cache, since du -sh doesn't show the full file size until I do a sync.

Re: [zfs-discuss] I/O Read starvation

2010-01-10 Thread bank kus
Hi Phil, you make some interesting points here: -> yes, bs=1G was a lazy thing -> the GNU cp I'm using does __not__ appear to use mmap; open64, open64, read, write, close, close is the relevant sequence -> replacing cp with dd (128K * 64K) does not help; no new apps can be launched until the copies…

Re: [zfs-discuss] I/O Read starvation

2010-01-10 Thread Phil Harman
What version of Solaris / OpenSolaris are you using? Older versions use mmap(2) for reads in cp(1). Sadly, mmap(2) does not jibe well with ZFS. To be sure, you could check how your cp(1) is implemented using truss(1) (i.e. does it do mmap/write or read/write?) I find it interesting that ZFS…
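A sketch of the truss check Phil describes (the paths are illustrative):

    # Trace only the calls that distinguish the two cp implementations
    truss -t open,open64,mmap,mmap64,read,write /usr/bin/cp largefile.txt /tmp/out.txt

mmap/write pairs in the output point at the old mmap-based copy loop; read/write pairs indicate the newer implementation.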

Re: [zfs-discuss] I/O Read starvation

2010-01-10 Thread Markus Kovero
> Btw FWIW if I redo the dd + 2 cp experiment on /tmp the result is far more disastrous. The GUI stops moving, caps lock stops responding for large intervals, no clue why.

Re: [zfs-discuss] I/O Read starvation

2010-01-09 Thread bank kus
Btw, FWIW, if I redo the dd + 2 cp experiment on /tmp the result is far more disastrous. The GUI stops moving, caps lock stops responding for large intervals, no clue why.

Re: [zfs-discuss] I/O Read starvation

2010-01-09 Thread bank kus
Hi Henrik, I have 16GB RAM on my system; on a lesser-RAM system dd does cause problems, as I mentioned above. My __guess__ is dd is probably sitting in some in-memory cache, since du -sh doesn't show the full file size until I do a sync. At this point I'm less looking for QA-type repro questions and/or…

Re: [zfs-discuss] I/O Read starvation

2010-01-09 Thread Henrik Johansson
On Jan 9, 2010, at 2:02 PM, bank kus wrote: >> Probably not, but ZFS only runs in userspace on Linux with FUSE so it will be quite different. > > I wasn't clear in my description, I'm referring to ext4 on Linux. In fact on a system with low RAM even the dd command makes the system horribly…

Re: [zfs-discuss] I/O Read starvation

2010-01-09 Thread Jürgen Keil
> > I wasn't clear in my description, I'm referring to ext4 on Linux. In fact on a system with low RAM even the dd command makes the system horribly unresponsive. > > IMHO not having fairshare or timeslicing between different processes issuing reads is frankly unacceptable given a…

Re: [zfs-discuss] I/O Read starvation

2010-01-09 Thread bank kus
> I am confused. Are you talking about ZFS under OpenSolaris, or are you talking about ZFS under Linux via FUSE? ??? > Do you have compression or deduplication enabled on the zfs filesystem? Compression no. I'm guessing 2009.06 doesn't have dedup. > What sort of system are you using…
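For reference, a sketch of checking both properties (the pool name tank is hypothetical; the dedup property arrived after 2009.06, in build snv_128):

    # Show compression for every dataset in the pool
    zfs get -r compression tank
    # Works on snv_128 or later; on 2009.06 dedup is not a known property
    zfs get -r dedup tank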

Re: [zfs-discuss] I/O Read starvation

2010-01-09 Thread Bob Friesenhahn
On Sat, 9 Jan 2010, bank kus wrote: Probably not, but ZFS only runs in userspace on Linux with FUSE so it will be quite different. I wasn't clear in my description, I'm referring to ext4 on Linux. In fact on a system with low RAM even the dd command makes the system horribly unresponsive. I…

Re: [zfs-discuss] I/O Read starvation

2010-01-09 Thread bank kus
> Probably not, but ZFS only runs in userspace on Linux with FUSE so it will be quite different. I wasn't clear in my description, I'm referring to ext4 on Linux. In fact on a system with low RAM even the dd command makes the system horribly unresponsive. IMHO not having fairshare or timeslicing…

Re: [zfs-discuss] I/O Read starvation

2010-01-09 Thread Henrik Johansson
Henrik http://sparcv9.blogspot.com On 9 Jan 2010, at 04:49, bank kus wrote: dd if=/dev/urandom of=largefile.txt bs=1G count=8 cp largefile.txt ./test/1.txt & cp largefile.txt ./test/2.txt & That's it; now the system is totally unusable after launching the two 8G copies. Until these copies finish…