Thanks ... the -F flag works perfectly, and provides a further benefit: the
client can mess with the file system as much as they want for testing purposes,
but when it is synchronized each night, it will roll back to the previously
received state.
Thanks
-Tony
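For the archives: the nightly job described here is the standard incremental
send/receive pattern. A minimal sketch, assuming hypothetical pool, snapshot,
and host names:

# zfs snapshot tank/data@tue   # tank/data, @mon/@tue, drhost are hypothetical
# zfs send -i tank/data@mon tank/data@tue | ssh drhost zfs receive -F tank/data

The -F on zfs receive rolls the destination back to its most recent snapshot
before applying the incremental stream, which is why any changes the client
made on the DR copy are discarded.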
I am trying to keep a file system (actually quite a few) in sync across two
systems for DR purposes, but I am encountering something that I find strange.
Maybe it's not strange and I just don't understand, but I will pose my question
to you fine people. This is all scripted, but
Greetings, learned ZFS geeks & gurus,
Yet another question comes from my continued ZFS performance testing. This has
to do with zpool iostat, and the strangeness that I see.
I’ve created an eight (8) disk raidz pool from a Sun 3510 fibre array, giving
me a 465G volume.
# zpool create tp raidz
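The create command above is cut off in the archive; with eight LUNs from the
3510 it would have taken roughly this form, followed by the iostat call whose
output is being discussed (device names below are hypothetical):

# zpool create tp raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 c2t7d0
# zpool iostat -v tp 5   # per-vdev and per-disk stats every 5 seconds

The -v flag breaks the numbers out per vdev and per disk, which is usually what
you want when the pool-level figures look strange.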
Anton & Roch,
Thank you for helping me understand this. I didn't want to make too many
unfounded assumptions and then incorrectly relay that information back to
clients.
So if I may just repeat your statements, to be sure my slow mind understands.
And Roch, yes, your assumption i
Let me elaborate slightly on the reason I ask these questions.
I am performing some simple benchmarking, during which a file is created by
sequentially writing 64k blocks until a 100GB file exists. I am seeing, exactly
as with VxFS, large pauses while the system reclaims t
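The benchmark tool itself is not named in what survives of this message; a
rough shell equivalent of the access pattern, with a hypothetical target path,
would be:

# dd if=/dev/zero of=/tp/bigfile bs=64k count=1638400   # path hypothetical; 1638400 x 64k = 100GB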
I have a few questions regarding ZFS, and would appreciate it if someone could
enlighten me as I work my way through.
First, write cache.
If I look at traditional UFS / VxFS type file systems, they normally cache
metadata in RAM before flushing it to disk. This helps increase their perceived
write performance.
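One way to see how much of that perceived write speed is cache rather than disk
is to take the cache out of the equation. On UFS a direct-I/O mount does this;
a minimal sketch with hypothetical device and mount point:

# mount -F ufs -o forcedirectio /dev/dsk/c1t0d0s6 /mnt   # device hypothetical
# time dd if=/dev/zero of=/mnt/t bs=64k count=16384      # 1GB of 64k writes

Timing the same dd against a default (buffered) mount shows how much of the
headline number the cache was contributing.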
To give you fine people an update, it seems that the reason for the skewed
results shown earlier is Veritas' ability to take advantage of all the free
memory available on my server. My test system has 32G of RAM, and my test data
file is 10G. Basically, Veritas was able to cache the entire 10G data file in
memory.
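If you want VxFS numbers with the page cache out of the picture, the usual
choices are a data file larger than RAM or a direct-I/O mount. A sketch of the
latter, with hypothetical device and mount point names:

# mount -F vxfs -o mincache=direct,convosync=direct /dev/vx/dsk/testdg/testvol /vxfs   # names hypothetical

mincache=direct and convosync=direct make reads and writes bypass the VxFS
buffer cache, so the run measures the disks rather than memory.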
-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
Sent: April 16, 2007 2:16 PM
To: [EMAIL PROTECTED]
Subject: Re: [zfs-discuss] Testing of UFS, VxFS and ZFS
Is the VxVM volume 8-wide? It is not clear from your creation commands.
-- richard
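For reference, an 8-wide VxVM stripe would come from something like the
following vxassist invocation; the disk group, volume name, and size here are
hypothetical:

# vxassist -g testdg make testvol 450g layout=stripe ncol=8 stripeunit=64k   # names/sizes hypothetical

ncol=8 is what makes the volume 8 columns wide, which is the detail being
asked about.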
A question (well, let's make it 3 really): Is vdbench a useful tool for testing
the performance of a ZFS file system? Second, is ZFS write performance really
much worse than UFS or VxFS? And third, what is a good benchmarking tool to
compare ZFS vs UFS vs VxFS?
The reason I ask is this
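Since vdbench is the tool in question: it drives file system workloads from a
small parameter file. A minimal sketch matching the sequential 64k write test
discussed earlier, written from memory of vdbench's file-system workload
syntax, so the anchor path and sizes are hypothetical and worth checking
against the vdbench documentation:

* hypothetical parameter file: sequential 64k writes filling one 100g file
fsd=fsd1,anchor=/tp/vdb,depth=1,width=1,files=1,size=100g
fwd=fwd1,fsd=fsd1,operation=write,fileio=sequential,xfersize=64k,threads=1
rd=rd1,fwd=fwd1,fwdrate=max,format=yes,elapsed=600,interval=5

Run with ./vdbench -f parmfile; it reports throughput and response times at
each interval, which makes the write pauses described earlier easy to spot.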