Anton,
On 12/8/06 7:18 AM, "Anton B. Rang" <[EMAIL PROTECTED]> wrote:
> If your database performance is dominated by sequential reads, ZFS may not be
> the best solution from a performance perspective. Because ZFS uses a
> write-anywhere layout, any database table which is being updated will quickly
> become scattered on the disk, so that sequential read patterns become random.
On Dec 9, 2006, at 8:59, Jim Mauro wrote:
Anyway, I'm feeling rather naive here, but I've seen the "NFS
enforced synchronous semantics" phrase kicked around many times
as the explanation for suboptimal performance for metadata-intensive
operations when ZFS is the underlying file system, bu
Jim Mauro wrote:
Could be NFS synchronous semantics on file create (followed by
repeated flushing of the write cache). What kind of storage are you
using (feel free to send privately if you need to) - is it a thumper?
It's not clear why NFS-enforced synchronous semantics would induce
different behavior than
The way I see it, the benefit is that it will mean fewer "stray" threads,
which is what I would call symlinks.
Say, if the mount option for zvols is used to define that purpose, ZFS
metadata will always know how it is really being used. Also, if someday
we come up with the framework to dump the m
Jochen M. Kaiser wrote:
Dear all,
we're currently looking to restructure our hardware environment for
our data warehousing product/suite/solution/whatever.
cool.
We're currently running the database side on various SF V440s attached via
dual FC to our SAN backend (EMC DMX3) with UFS.
Jignesh K. Shah wrote:
I am already using symlinks.
But the problem is the ZFS framework won't know about them.
Can you explain how this knowledge would benefit the combination
of ZFS and databases? There may be something we could leverage here.
I would expect something like this from ZVOL s
But can't this behavior be "tuned" (so to speak... I hate that word but I
can't think of something better) by increasing the recordsize?
For DSS applications, video streaming, etc. -- apps that read very large
files -- I seem to remember (in some ZFS work many, many months ago), getting very goo
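For context, recordsize is a per-dataset property, and changing it only
affects blocks written afterwards. A minimal sketch, assuming Solaris
zfs(1M) syntax and a made-up pool/dataset name:

    # Set a 128K recordsize before loading large, sequentially-read files;
    # existing blocks keep whatever size they were written with.
    zfs set recordsize=128K tank/dss
    zfs get recordsize tank/dss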
Yes. But it's going to be a few months.
I'll presume that we will get background disk scrubbing for free once
you guys get bookmarking done. :)
--
Regards,
Jeremy
I am already using symlinks.
But the problem is the ZFS framework won't know about them.
I would expect something like this from ZVOL, especially abstracting the
pool name path from the zvol,
especially since many databases will store the path names in their
metadata and they are hard to change later on.
Re
On Dec 8, 2006, at 05:20, Jignesh K. Shah wrote:
Hello ZFS Experts,
I have two ZFS pools, zpool1 and zpool2.
I am trying to create a bunch of zvols such that their paths are
similar except for a consistent numbering scheme, without reference to the
zpools they actually belong to. (This will allow me to
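One way to approximate this today is the symlink trick already mentioned
in this thread. A minimal sketch, assuming Solaris zvol device paths and
made-up pool, volume, and directory names:

    # Numbered volumes, one per pool (sizes are placeholders).
    zfs create -V 10G zpool1/vol001
    zfs create -V 10G zpool2/vol002

    # One neutral directory, so the database config never embeds a pool name.
    mkdir -p /db/vols
    ln -s /dev/zvol/dsk/zpool1/vol001 /db/vols/vol001
    ln -s /dev/zvol/dsk/zpool2/vol002 /db/vols/vol002

ZFS itself still knows nothing about the symlinks, which is exactly the
limitation raised above.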
Hi All,
we have some ZFS pools in production with more than a hundred filesystems
and more than a thousand snapshots on them.
Now we do backups with zfs send/receive and some scripting, but I'm searching
for a way to mirror each zpool to another one for backup purposes (so
including all snapshots!). Is that poss
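Depending on the build, a recursive replication stream can do this in one
shot. A minimal sketch, assuming a later ZFS version where zfs send -R and
zfs receive -d/-F exist, with made-up pool and snapshot names:

    # Snapshot every dataset in the pool, then replicate the lot --
    # descendant filesystems, their snapshots, and properties.
    zfs snapshot -r tank@backup1
    zfs send -R tank@backup1 | zfs receive -F -d backup

    # Later, send only the changes since the last backup snapshot.
    zfs snapshot -r tank@backup2
    zfs send -R -I tank@backup1 tank@backup2 | zfs receive -F -d backup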
Spencer Shepler wrote:
Good to hear that you have figured out what is happening, Ben.
For future reference, there are two commands that you may want to
make use of in observing the behavior of the NFS server and individual
filesystems.
There is the trusty nfsstat command. In this case, you wo
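The snippet is cut off here, but for the server side the basic usage is
straightforward; a sketch from memory, per nfsstat(1M):

    # Cumulative server-side RPC and per-operation NFS counts.
    nfsstat -s
    # Zero the counters (needs root), run the workload, then read a
    # fresh interval.
    nfsstat -z
    nfsstat -s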
Jeremy Teo wrote:
The whole RAID does not fail -- we are talking about corruption
here. If you lose some inodes, your whole partition is not gone.
My ZFS pool could not be salvaged -- poof, the whole thing was gone (granted
it was a test one and not a raidz or mirror yet). But still, for
what happened,
Jim Davis wrote:
eric kustarz wrote:
What about adding a whole new RAID-Z vdev and dynamically striping across
the RAID-Zs? Your capacity and performance will go up with each
RAID-Z vdev you add.
Thanks, that's an interesting suggestion.
This has the benefit of allowing you to grow into your s
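For reference, growing a pool this way is a single zpool add; ZFS then
dynamically stripes new allocations across all top-level vdevs. A sketch
with made-up device names:

    # Pool starts with one raidz vdev...
    zpool create tank raidz c1t0d0 c1t1d0 c1t2d0
    # ...add a second, and new writes are striped across both.
    zpool add tank raidz c2t0d0 c2t1d0 c2t2d0
    zpool status tank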
If your database performance is dominated by sequential reads, ZFS may not be
the best solution from a performance perspective. Because ZFS uses a
write-anywhere layout, any database table which is being updated will quickly
become scattered on the disk, so that sequential read patterns become random.
On Fri, Dec 08, 2006 at 12:15:27AM -0800, Ben Rockwood wrote:
> Clearly ZFS file creation is just amazingly heavy even with ZIL
> disabled. If creating 4,000 files in a minute squashes 4 2.6GHz Opteron
> cores, we're in big trouble in the longer term. In the meantime I'm
> going to find a new h
On Fri, Ben Rockwood wrote:
> eric kustarz wrote:
> >So I'm guessing there's lots of files being created over NFS in one
> >particular dataset?
> >
> >We should figure out how many creates/second you are doing over NFS (I
> >should have put a timeout on the script). Here's a real simple one
> >
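The script itself is cut off above; purely as a sketch of the same idea
(not the original script), a DTrace one-liner that counts ZFS file creates
per second might look like the following. It assumes the fbt provider and
the zfs_create kernel entry point, and it counts local creates too, not
just NFS-driven ones:

    # Print the number of zfs_create calls each second.
    dtrace -n 'fbt::zfs_create:entry { @c = count(); }
               tick-1sec { printa("creates/sec: %@d\n", @c); clear(@c); }'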