Re: [zfs-discuss] nfs and smb performance
Have you turned on the "Ignore cache flush commands" option on the xraids? You should ensure this is on when using ZFS on them. /dale On Mar 27, 2008, at 6:16 PM, abs wrote: > Hello all, > I have two xraids connected via fibre to a PowerEdge 2950. The 2 > xraids are configured with 2 RAID5 vo
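If the array-side option isn't available, the host-side analogue on Solaris is the zfs_nocacheflush tunable; this is only safe when the storage has battery-backed cache. A minimal sketch:

    # /etc/system entry; tells ZFS to stop issuing cache-flush commands to the LUNs
    set zfs:zfs_nocacheflush = 1
    # takes effect after a reboot; verify with:
    #   echo "zfs_nocacheflush/D" | mdb -k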
Re: [zfs-discuss] nfs and smb performance
Peter Brouwer, Principal Storage Architect, Office of the Chief Technologist, Sun Microsystems
Hello abs, Would you be able to repeat the same tests with the in-kernel CIFS option in ZFS instead of using Samba? It would be interesting to see how the kernel CIFS performance compares with Samba's. Peter abs wrote: Hello all, I have two xraids connected via fibre to a PowerEdge 2950. The 2 xraids are
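For anyone wanting to run that comparison, a rough sketch of putting a test dataset on the in-kernel CIFS server (pool, dataset, and share names here are illustrative):

    # enable the kernel SMB service and its dependencies
    svcadm enable -r smb/server
    # export a dataset via the kernel CIFS server
    zfs set sharesmb=name=testshare tank/testfs
    # confirm the share is visible
    sharemgr show -vp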
Re: [zfs-discuss] kernel memory and zfs
Richard Elling wrote: > > The size of the ARC (cache) is available from kstat in the zfs > module (kstat -m zfs). Neel wrote a nifty tool to track it over > time called arcstat. See > http://www.solarisinternals.com/wiki/index.php/Arcstat > > Remember that this is a cache and subject to evi
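As a quick example, the current and target ARC sizes can be read straight from the arcstats kstat:

    # current ARC size and target size, in bytes
    kstat -p -m zfs -n arcstats -s size
    kstat -p -m zfs -n arcstats -s c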
[zfs-discuss] nfs and smb performance
Hello all, I have two xraids connected via fibre to a PowerEdge 2950. The 2 xraids are configured with 2 RAID5 volumes each, giving me a total of 4 RAID5 volumes. These are striped across in ZFS. The read and write speeds local to the machine are as expected, but I have noticed some performance
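For context, the layout being described is a 4-way dynamic stripe across the RAID5 LUNs, along these lines (device names are illustrative):

    # stripe the four RAID5 LUNs presented by the xraids into one pool
    zpool create tank c4t0d0 c4t1d0 c5t0d0 c5t1d0
    # confirm the 4-way stripe
    zpool status tank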
Re: [zfs-discuss] kernel memory and zfs
Matt Cohen wrote: > We have a 32 GB RAM server running about 14 zones. There are multiple > databases, application servers, web servers, and ftp servers running in the > various zones. > > I understand that using ZFS will increase kernel memory usage; however, I am a > bit concerned at this point
Re: [zfs-discuss] Periodic flush
You may want to try disabling the disk write cache on the single disk. Also, for the RAID, disable 'host cache flush' if such an option exists. That solved the problem for me. Let me know. Bob Friesenhahn <[EMAIL PROTECTED]> wrote: On Thu, 27 Mar 2008, Neelakanth Nadgir wrote: > > This causes t
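On Solaris, one way to toggle the write cache on a single SCSI/SATA disk is format's expert mode; a sketch (the menu is interactive, so use with care):

    # expert mode exposes the cache menu on supported disks
    format -e
    # then: select the disk -> cache -> write_cache -> disable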
[zfs-discuss] kernel memory and zfs
We have a 32 GB RAM server running about 14 zones. There are multiple databases, application servers, web servers, and ftp servers running in the various zones. I understand that using ZFS will increase kernel memory usage; however, I am a bit concerned at this point. [EMAIL PROTECTED]:~/zonecf
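A quick way to see where kernel memory is actually going (the ZFS ARC is counted in the kernel bucket) is mdb's memstat dcmd:

    # summarize physical memory by consumer: kernel, anon, exec/libs, page cache, free
    echo "::memstat" | mdb -k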
Re: [zfs-discuss] Periodic flush
On Mar 27, 2008, at 9:24 AM, Bob Friesenhahn wrote: > On Thu, 27 Mar 2008, Neelakanth Nadgir wrote: >> >> This causes the sync to happen much faster, but as you say, >> suboptimal. >> Haven't had the time to go through the bug report, but probably >> CR 6429205 each zpool needs to monitor its th
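Until that CR is integrated, one way to watch the transaction-group sync directly is to time spa_sync() with DTrace; a rough sketch, run as root:

    # distribution of spa_sync() durations, i.e. how long each txg flush takes
    dtrace -n '
        fbt::spa_sync:entry { self->ts = timestamp; }
        fbt::spa_sync:return /self->ts/ {
            @["spa_sync ms"] = quantize((timestamp - self->ts) / 1000000);
            self->ts = 0;
        }'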
Re: [zfs-discuss] pool hangs for 1 full minute?
Tomas Ögren wrote: > On 27 March, 2008 - Neal Pollack sent me these 1,9K bytes: > > >> Also given: I have been doing live upgrade every other build since >> approx Nevada build 46. I am running on a Sun Ultra 40 modified >> to include 8 disks. (second backplane and SATA quad cable) >> >> It a
Re: [zfs-discuss] pool hangs for 1 full minute?
On 27 March, 2008 - Neal Pollack sent me these 1,9K bytes: > Also given: I have been doing live upgrade every other build since > approx Nevada build 46. I am running on a Sun Ultra 40 modified > to include 8 disks. (second backplane and SATA quad cable) > > It appears that the zfs filesystems
[zfs-discuss] pool hangs for 1 full minute?
For the last few builds of Nevada, if I come back to my workstation after a long idle period, such as overnight, and try any command that would touch the zfs filesystem, it hangs for approximately 60 seconds. This would include "ls", "zpool status", etc. Does anyone have a hint as to how
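A first step in narrowing down a hang like this is checking whether the time is spent waiting on the disks (drives spinning back up after idle would fit a ~60-second stall) or elsewhere; a rough sketch, with an illustrative mount point:

    # timestamp every system call of a hanging command
    truss -d ls /tank
    # in another terminal, watch per-device service times during the stall
    iostat -xn 1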
Re: [zfs-discuss] UFS Formatted ZVOLs and Oracle Databases / MDBMS
Brandon Wilson wrote: > Well I don't have any hard numbers 'yet'. But sometime in the next couple of > weeks, when the Hyperion Essbase install team gets Essbase up and running on a > Sun M4000, I plan on taking advantage of the situation to do some stress and > performance testing on ZFS and MDBMS.
Re: [zfs-discuss] UFS Formatted ZVOLs and Oracle Databases / MDBMS
Well I don't have any hard numbers 'yet'. But sometime in the next couple of weeks, when the Hyperion Essbase install team gets Essbase up and running on a Sun M4000, I plan on taking advantage of the situation to do some stress and performance testing on ZFS and MDBMS. Stuff like ufs+directio, zfs,
Re: [zfs-discuss] Mount order of ZFS filesystems vs. other filesystems?
> > The only way I could find was to set the mountpoint of the file system > > to legacy, and add it to /etc/vfstab. Here's an example: > > I tried this last night also, after sending the message, and I made it > work. It seems clunky, though. Yes, I also would have liked something more streamlined.
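For reference, the legacy-mount approach looks roughly like this (pool and paths are illustrative):

    # hand the mount over to /etc/vfstab instead of the ZFS mount service
    zfs set mountpoint=legacy tank/export/OSImages
    # then add an entry like this to /etc/vfstab:
    # tank/export/OSImages  -  /export/OSImages  zfs  -  yes  -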
[zfs-discuss] ClearCase support for ZFS?
Hi, Does anybody know the latest status of ClearCase support for ZFS? I noticed this from IBM: http://www-1.ibm.com/support/docview.wss?rs=0&uid=swg21155708 I would like to make sure someone has installed and tested it before recommending it to a customer. Regards, Nissim Ben-Haim Sol
Re: [zfs-discuss] Periodic flush
On Thu, 27 Mar 2008, Neelakanth Nadgir wrote: > > This causes the sync to happen much faster, but as you say, suboptimal. > Haven't had the time to go through the bug report, but probably > CR 6429205 each zpool needs to monitor its throughput > and throttle heavy writers > will help. I hope that
Re: [zfs-discuss] Periodic flush
Bob Friesenhahn wrote: > On Wed, 26 Mar 2008, Neelakanth Nadgir wrote: >> When you experience the pause at the application level, >> do you see an increase in writes to disk? This might be the >> regular syncing of the transaction group to disk. > > If I use 'zpool iostat' with a one second interval
Re: [zfs-discuss] Periodic flush
Selim Daoud wrote: > the question is: does the "IO pausing" behaviour you noticed penalize > your application? > what are the consequences at the application level? > > for instance, we have seen applications doing some kind of data capture > from an external device (video, for example) requiring a const
Re: [zfs-discuss] Periodic flush
On Wed, 26 Mar 2008, Neelakanth Nadgir wrote: > When you experience the pause at the application level, > do you see an increase in writes to disk? This might be the > regular syncing of the transaction group to disk. If I use 'zpool iostat' with a one second interval, what I see is two or three samp
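For anyone reproducing this, the burst pattern shows up clearly with a one-second sampling interval (pool name is illustrative):

    # pool-wide bandwidth once per second; writes clustered into short bursts
    # every few seconds point at txg syncs rather than steady application I/O
    zpool iostat tank 1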
Re: [zfs-discuss] Mount order of ZFS filesystems vs. other filesystems?
Volker A. Brandt wrote: > Hello Kyle! >> All of these mounts are failing at bootup with messages about >> non-existent mountpoints. My guess is that it's because when /etc/vfstab >> is running, the ZFS '/export/OSImages' isn't mounted yet? > Yes, that is absolutely correct. For
Re: [zfs-discuss] UFS Formatted ZVOLs and Oracle Databases / MDBMS
Brandon Wilson wrote: > Hi all, here's a couple of questions. > > Has anyone run Oracle databases off of a UFS-formatted ZVOL? If so, how does > it compare in speed to UFS direct I/O? > I have not, but I suspect the performance will be worse than pure zfs and ufsdio. For databases, you should try t
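The usual first knob for databases on plain ZFS is matching the dataset recordsize to the database block size; a hedged sketch (names and the 8K size are illustrative, and it must be set before the data files are created):

    # match recordsize to the DB block size for the data files
    zfs create -o recordsize=8k tank/oradata
    # redo logs are written sequentially, so a separate dataset with the
    # default 128k recordsize is common
    zfs create tank/oralogs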
[zfs-discuss] UFS Formatted ZVOLs and Oracle Databases / MDBMS
Hi all, here's a couple of questions. Has anyone run Oracle databases off of a UFS-formatted ZVOL? If so, how does it compare in speed to UFS direct I/O? I'm trying my best to get rid of UFS, but ZFS isn't up to par with the speed of UFS direct I/O for MDBMS. So I'm trying to come up with some creativ
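For completeness, the UFS-on-ZVOL construction being asked about looks roughly like this (size, names, and mount point are illustrative; forcedirectio mirrors the existing UFS direct I/O setup):

    # carve a block device out of the pool and put UFS on it
    zfs create -V 64g tank/oravol
    newfs /dev/zvol/rdsk/tank/oravol
    # mount with direct I/O, like the current UFS databases
    mount -F ufs -o forcedirectio /dev/zvol/dsk/tank/oravol /oradata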
Re: [zfs-discuss] Periodic flush
The question is: does the "IO pausing" behaviour you noticed penalize your application? What are the consequences at the application level? For instance, we have seen applications doing some kind of data capture from an external device (video, for example) requiring a constant throughput to disk (data f
Re: [zfs-discuss] Mount order of ZFS filesystems vs. other filesystems?
Hello Kyle! > All of these mounts are failing at bootup with messages about > non-existent mountpoints. My guess is that it's because when /etc/vfstab > is running, the ZFS '/export/OSImages' isn't mounted yet? Yes, that is absolutely correct. For details, look at the start method of svc:/syste