+1.
I would like to nominate roch.bourbonn...@sun.com for his work on
improving the performance of ZFS over the last few years.
thanks,
-neel
On Feb 2, 2009, at 4:02 PM, Neil Perrin wrote:
> Looks reasonable
> +1
>
> Neil.
>
> On 02/02/09 08:55, Mark Shellenbaum wrote:
>> The time has come to
Bob Friesenhahn wrote:
> On Wed, 26 Mar 2008, Neelakanth Nadgir wrote:
>> When you experience the pause at the application level,
>> do you see an increase in writes to disk? This might be the
>> regular syncing of the transaction group to disk.
>
> If I use 'zpool io
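(One quick way to check whether the pauses line up with the transaction
group sync is to watch the pool at a one-second interval; the pool name
"tank" below is just a placeholder.)

  # a burst of writes every few seconds usually corresponds to the
  # transaction group being flushed out to disk
  zpool iostat -v tank 1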
Brandon Wilson wrote:
> Hi all, here's a couple questions.
>
> Has anyone run oracle databases off of a UFS formatted ZVOL? If so, how does
> it compare in speed to UFS direct io?
>
I have not, but I suspect the performance will be worse than
pure zfs and ufsdio. For databases, you should try t
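(For reference, the UFS-on-ZVOL setup being asked about looks roughly
like the following; the volume size, device paths and mount point are
made up.)

  # create a zvol, put UFS on it, and mount it with direct I/O
  zfs create -V 20g tank/oravol
  newfs /dev/zvol/rdsk/tank/oravol
  mount -F ufs -o forcedirectio /dev/zvol/dsk/tank/oravol /oradata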
Bob Friesenhahn wrote:
> My application processes thousands of files sequentially, reading
> input files, and outputting new files. I am using Solaris 10U4.
> While running the application in a verbose mode, I see that it runs
> very fast but pauses about every 7 seconds for a second or two.
You could always replace this device with another one of the same or
bigger size using zpool replace.
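For example (pool and device names are placeholders):

  # swap the old device for a new one of the same or larger size;
  # ZFS resilvers onto the new device automatically
  zpool replace tank c1t2d0 c1t3d0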
-neel
Cyril Plisko wrote:
> Hi !
>
> I played recently with a Gigabyte i-RAM card (which is basically an SSD)
> as a log device for a ZFS pool. However, when I tried to remove it, I need
> to give the
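(Adding a log device is the easy part; removing one is only possible
on pool versions that support log device removal. Pool and device
names below are placeholders.)

  # add a separate intent-log device
  zpool add tank log c3t0d0
  # on pool versions that support it, it can later be removed with
  zpool remove tank c3t0d0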
stringof(args[0]->b_vp->v_path) does not work either.
>
> Use the zfs r/w function entry points for now.
>
> What sayeth the ZFS team regarding the use of a stable DTrace
> provider with their file system?
>
> Thanks,
> /jim
io:::start probe does not seem to get zfs filenames in
args[2]->fi_pathname. Any ideas how to get this info?
-neel
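(A rough sketch of what the reply above suggests, assuming the
VOP-style zfs_read()/zfs_write() entry points whose first argument is
a vnode_t *; v_path can be NULL for some vnodes, hence the predicate.)

  dtrace -n 'fbt::zfs_read:entry,fbt::zfs_write:entry
      /args[0]->v_path != NULL/
      { @[probefunc, stringof(args[0]->v_path)] = count(); }'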
I wrote a simple tool to print out the ARC statistics exported via
kstat. Details at
http://blogs.sun.com/realneel/entry/zfs_arc_statistics
-neel
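(If you only want a few of the counters, they can also be pulled
straight out of the zfs kstat module; the fields below are a subset.)

  kstat -p zfs:0:arcstats:size zfs:0:arcstats:c \
        zfs:0:arcstats:hits zfs:0:arcstats:misses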
--
---
Neelakanth Nadgir PAE Performance And Availability Eng
Damian,
Are you using compression=on? There was a bug in the past (fixed
now) where if compression was turned on, it was being computed by a
single thread. The ZFS team fixed the "user data" part of it (i.e.,
user data is compressed in parallel now), but the metadata part
is still compressed by one thread.
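(To check whether compression is in play, and to rule it out as the
bottleneck; "tank/fs" is a placeholder dataset.)

  zfs get compression tank/fs
  zfs set compression=off tank/fs   # only affects newly written blocks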
> > We are currently recommending separate (ZFS) file systems for redo logs.
> Did you try that? Or did you go straight to a separate UFS file system for
> redo logs?
>
> I'd answered this directly in email originally.
>
> The answer was that yes, I tested using zfs for logpools among a numbe
I was trying to import the pool, but got an error that there
were 2 pools with the same name (in exported state) and I had
to import by "id". Thus I wanted to destroy the other pool
-neel
Some time ago, Darren Dunham said:
> > On Wed, Sep 27, 2006 at 09:53:32AM -0700, Neelaka
ran it to completion the next time
around.
2. Some of the disks I used were part of a zpool of the same
name, but on a different system. I created a zpool containing
those disks and some new disks
I am not sure if I can reproduce what I saw.
-neel
> On Wed, Sep 27, 2006 at 09:53
Is it possible to destroy a pool by ID? I created two pools with the
same name, and want to destroy one of them.
-neel
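(A workaround, since zpool destroy only takes a name: import the pool
you want to get rid of by its numeric id under a temporary name, then
destroy that. The id and names below are placeholders.)

  zpool import                          # lists exported pools with their ids
  zpool import 6789012345678901234 doomed
  zpool destroy doomed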
I just posted a blog on recommendations for ZFS and databases. This
is based on the work that Roch and I (and other members of PAE) did.
http://blogs.sun.com/realneel/entry/zfs_and_databases
Hope that helps
-neel
--
---
Neelakanth Nadgir PAE Performance And Availability Eng
We did an experiment where we placed the logs on UFS+DIO and the
rest on ZFS. This was a write-heavy benchmark. We did not see
much gain in performance by doing that (around 5%). I suspect
you would be willing to trade 5% for all the benefits of ZFS.
Moreover, this penalty is for the current versio
I have seen the best Oracle performance on ZFS by doing the following
(see the sketch below):
1. Match the ZFS recordsize to the Oracle db_block_size.
2. Use the default 128k recordsize for the Oracle logs.
3. If possible, use a separate zpool for the Oracle logs.
   This is especially true if your workload has a high
   write component to it.
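A quick sketch of that setup (pool and dataset names are placeholders,
and 8k assumes an 8 KB db_block_size):

  zfs set recordsize=8k tank/oradata   # match the Oracle db_block_size
  zpool create logpool c2t0d0          # separate pool for the redo logs
  zfs create logpool/redo              # leave recordsize at the 128k default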
Two