bash-3.00# dtrace -n fbt::txg_quiesce:return'{printf("%Y ",walltimestamp);}'
dtrace: description 'fbt::txg_quiesce:return' matched 1 probe
CPU     ID                    FUNCTION:NAME
3 38168 txg_quiesce:return 2007 Feb 12 14:08:15
0 38168 txg_quiesce:return 2007 F
Robert Milkowski writes:
> bash-3.00# dtrace -n fbt::txg_quiesce:return'{printf("%Y ",walltimestamp);}'
> dtrace: description 'fbt::txg_quiesce:return' matched 1 probe
> CPU     ID                    FUNCTION:NAME
> 3 38168 txg_quiesce:return 2007 Feb 12 14:08:15
> 0 3816
Hello Roch,
Monday, February 12, 2007, 3:19:23 PM, you wrote:
RP> Robert Milkowski writes:
>> bash-3.00# dtrace -n fbt::txg_quiesce:return'{printf("%Y ",walltimestamp);}'
>> dtrace: description 'fbt::txg_quiesce:return' matched 1 probe
>> CPU     ID                    FUNCTION:NAME
>> 3 38
Duh!
Long syncs (which delay the next sync) are also possible on
write-intensive workloads. Throttling heavy writers, I
think, is the key to fixing this.
Robert Milkowski writes:
> Hello Roch,
>
> Monday, February 12, 2007, 3:19:23 PM, you wrote:
>
> RP> Robert Milkowski writes:
>
Hello Matty,
Monday, February 12, 2007, 1:44:13 AM, you wrote:
M> On 2/11/07, Robert Milkowski <[EMAIL PROTECTED]> wrote:
>> Hello Matty,
>>
>> Sunday, February 11, 2007, 6:56:14 PM, you wrote:
>>
>> M> Howdy,
>>
>> M> On one of my Solaris 10 11/06 servers, I am getting numerous errors
>> M> simi
I had the same issue with zfs killing my Ultra20. I can confirm that flashing
the BIOS fixed the issue.
http://www.sun.com/desktop/workstation/ultra20/downloads.jsp#Ultra
Eric
Hello Roch,
Monday, February 12, 2007, 3:54:30 PM, you wrote:
RP> Duh!
RP> Long syncs (which delay the next sync) are also possible on
RP> write-intensive workloads. Throttling heavy writers, I
RP> think, is the key to fixing this.
Well, then maybe it's not the cause of our problems.
Never
Some comments from the author:
1. It was a preliminary scratch report not meant to be exhaustive and
complete by any means. A comprehensive report of our findings will be
released soon.
2. I claim responsibility for any benchmarks gathered from Thumper and
the Linux/FASST/ZFS configuration.
On Feb 12, 2007, at 8:05 AM, Robert Petkus wrote:
Some comments from the author:
1. It was a preliminary scratch report not meant to be exhaustive
and complete by any means. A comprehensive report of our findings
will be released soon.
2. I claim responsibility for any benchmarks gathere
Here's another website working on his rescue; my prayers are for a safe return
of this CS icon.
http://www.helpfindjim.com/
I've been using ZFS for a good while now, particularly on my laptop. Until
B60 is out, I've kind of refrained from using ZFS boot. It works fine, but
I ran into various issues, plus when it is upgrade time, that is a bit
brutal.
What I've been wanting is a way to make my laptop a bit more
"redundant", s
On Feb 12, 2007, at 7:52 AM, Robert Milkowski wrote:
Hello Roch,
Monday, February 12, 2007, 3:54:30 PM, you wrote:
RP> Duh!
RP> Long syncs (which delay the next sync) are also possible on
RP> write-intensive workloads. Throttling heavy writers, I
RP> think, is the key to fixing this.
W
Henk Langeveld wrote:
Selim Daoud wrote:
here's an interesting status report published by Microsoft labs
http://research.microsoft.com/research/pubs/view.aspx?msr_tr_id=MSR-TR-2005-166
That is the paper in which Jim Gray coined "Mean time to data loss".
It's been quoted here before.
Nit:
Hello,
Often fsync() is used not because one cares that some piece of data is on
stable storage, but because one wants to ensure the subsequent I/O operations
are performed after previous I/O operations are on stable storage. In these
cases the latency introduced by an fsync() is completely unn
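A minimal C sketch of the pattern being described; the file name and record
strings are made up for illustration, fsync() is the only portable tool
available today, and fbarrier() is just the call proposed in this thread,
not an existing interface:

/*
 * Sketch of the ordering problem: a journal record must be on stable
 * storage before the data write that depends on it is issued.  Today
 * the only portable way to get that ordering is fsync(), which also
 * waits for durability; the proposed fbarrier() would enforce only
 * the ordering.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int
main(void)
{
    const char *journal_rec = "intent: update block 42\n";
    const char *data_rec    = "new contents of block 42\n";
    int fd = open("testfile", O_WRONLY | O_CREAT | O_TRUNC, 0644);

    if (fd == -1) {
        perror("open");
        return (1);
    }

    /* Step 1: the record that must reach the platter first. */
    if (write(fd, journal_rec, strlen(journal_rec)) == -1) {
        perror("write journal");
        return (1);
    }

    /*
     * Step 2: fsync() gives the ordering, but at the cost of waiting
     * for the disk (the latency the poster considers unnecessary).
     * A hypothetical fbarrier(fd) would go here instead.
     */
    if (fsync(fd) == -1) {
        perror("fsync");
        return (1);
    }

    /* Step 3: only now is the dependent write issued. */
    if (write(fd, data_rec, strlen(data_rec)) == -1) {
        perror("write data");
        return (1);
    }

    (void) close(fd);
    return (0);
}

The point of the proposal is that step 2 could then return immediately
instead of blocking on the disk.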
On Mon, 12 Feb 2007, Peter Schuller wrote:
Hello,
Often fsync() is used not because one cares that some piece of data is on
stable storage, but because one wants to ensure the subsequent I/O operations
are performed after previous I/O operations are on stable storage. In these
cases the latency
Uwe Dippel wrote:
On 2/11/07, Richard Elling <[EMAIL PROTECTED]> wrote:
D'Oh! someone needs to update
www.opensolaris.org/os/community/zfs/demos/zfs_demo.pdf
answers below...
About a year ago we changed 'backup' to 'send' and 'restore' to 'receive'.
The zfs_demo.pdf needs to be updated.
Oh
2007/2/12, Frank Hofmann <[EMAIL PROTECTED]>:
On Mon, 12 Feb 2007, Peter Schuller wrote:
> Hello,
>
> Often fsync() is used not because one cares that some piece of data is on
> stable storage, but because one wants to ensure the subsequent I/O operations
> are performed after previous I/O opera
comment below...
Uwe Dippel wrote:
Dear Richard,
> > Could it be that you are looking for the zfs clone subcommand?
>
> I'll have to look into it !
I *did* look into it.
man zfs, /clone. This is what I read:
Clones
    A clone is a writable volume or file system whose initial contents
On Mon, 12 Feb 2007, Chris Csanady wrote:
[ ... ]
> Am I missing something?
How do you guarantee that the disk driver and/or the disk firmware doesn't
reorder writes ?
The only guarantee for in-order writes, on actual storage level, is to
complete the outstanding ones before issuing new ones.
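A rough sketch of "complete the outstanding ones before issuing new ones"
using POSIX aio; the file name and buffers are invented and error handling
is trimmed:

/*
 * The second aio_write() is not started until the kernel reports the
 * first one complete.  Waiting for completion before issuing the next
 * request is the only in-order guarantee available at this level; the
 * disk's own volatile write cache is a separate problem, which comes
 * up later in the thread.
 */
#include <aio.h>
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static int
write_and_wait(int fd, const char *buf, off_t off)
{
    struct aiocb cb;
    const struct aiocb *list[1] = { &cb };

    memset(&cb, 0, sizeof (cb));
    cb.aio_fildes = fd;
    cb.aio_buf    = (void *)buf;
    cb.aio_nbytes = strlen(buf);
    cb.aio_offset = off;

    if (aio_write(&cb) == -1) {
        perror("aio_write");
        return (-1);
    }

    /* Block until this request is no longer in progress. */
    while (aio_error(&cb) == EINPROGRESS)
        (void) aio_suspend(list, 1, NULL);

    return (aio_return(&cb) == -1 ? -1 : 0);
}

int
main(void)
{
    int fd = open("ordered.dat", O_WRONLY | O_CREAT | O_TRUNC, 0644);

    if (fd == -1) {
        perror("open");
        return (1);
    }

    /* The first write is complete before the second is issued. */
    if (write_and_wait(fd, "first\n", 0) == 0)
        (void) write_and_wait(fd, "second\n", 6);

    (void) close(fd);
    return (0);
}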
Hi.
I have tested ZFS for a while and am very impressed with the ease with which
one can create filesystems (tanks). I'm about to try it out on an ATAbeast
with 42 ATA 400 GB disks for internal use, mainly as a fileserver. If
this goes well (as I assume it will) I'll consider deploying ZFS on a
larger scale
Peter Schuller wrote:
Hello,
Often fsync() is used not because one cares that some piece of data is on
stable storage, but because one wants to ensure the subsequent I/O operations
are performed after previous I/O operations are on stable storage. In these
cases the latency introduced by an f
Consider the following scenario involving various failures.
We have a zpool composed of a simple mirror of two devices D0 and D1
(these may be local disks, slices, LUNs on a SAN, or whatever). For the
sake of this scenario, it's probably most intuitive to think of them as
LUNs on a SAN. Init
On 12-Feb-07, at 5:55 PM, Frank Hofmann wrote:
On Mon, 12 Feb 2007, Peter Schuller wrote:
Hello,
Often fsync() is used not because one cares that some piece of data is on
stable storage, but because one wants to ensure the subsequent I/O operations
are performed after previous I/O operat
On Mon, 12 Feb 2007, Toby Thain wrote:
[ ... ]
I'm no guru, but would not ZFS already require strict ordering for its
transactions ... which property Peter was exploiting to get "fbarrier()" for
free?
It achieves this by flushing the disk write cache when there's need to
barrier. Which compl
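For the curious, a small sketch of what such a cache flush looks like from
user land on Solaris, assuming a raw disk device (the device path is
illustrative, and the ioctl only helps if the firmware honors it):

/*
 * Ask the driver/firmware to push its volatile write cache out to the
 * media.  DKIOCFLUSHWRITECACHE is the Solaris ioctl for this; whether
 * the device actually honors it is exactly the concern raised above.
 */
#include <sys/dkio.h>
#include <fcntl.h>
#include <stdio.h>
#include <stropts.h>
#include <unistd.h>

int
main(void)
{
    int fd = open("/dev/rdsk/c0t0d0s0", O_RDONLY);

    if (fd == -1) {
        perror("open");
        return (1);
    }

    /* Returns once the device reports the flush complete. */
    if (ioctl(fd, DKIOCFLUSHWRITECACHE, NULL) == -1)
        perror("DKIOCFLUSHWRITECACHE");

    (void) close(fd);
    return (0);
}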
> Then there is a failure, such that D1 becomes disconnected. ZFS
> continues to write on D0. If D1 were to become reconnected, it would
> get resilvered normally and all would be well.
>
> But suppose instead there is a crash, and when the system reboots it is
> connected only to D1, and D0
2007/2/12, Frank Hofmann <[EMAIL PROTECTED]>:
On Mon, 12 Feb 2007, Chris Csanady wrote:
> This is true for NCQ with SATA, but SCSI also supports ordered tags,
> so it should not be necessary.
>
> At least, that is my understanding.
Except that ZFS doesn't talk SCSI, it talks to a target driver.
Toby Thain wrote:
I'm no guru, but would not ZFS already require strict ordering for its
transactions ... which property Peter was exploiting to get "fbarrier()"
for free?
Exactly. Even if you disable the intent log, the transactional nature
of ZFS ensures preservation of event ordering. Not
Hello,
I am running SPEC SFS benchmark [1] on dual Xeon 2.80GHz box with 4GB memory.
More details:
snv_56, zil_disable=1, zfs_arc_max = 0x80000000 # 2 GB
Configurations that were tested:
160 dirs/1 zfs/1 zpool/4 SAN LUNs
160 zfs'es/1 zpool/4 SAN LUNs
40 zfs'es/4 zpools/4 SAN LUNs
One zpool was cre
[EMAIL PROTECTED] said:
> How did the ZFS striped on 7 slices of a FC-SATA LUN via NFS work 146 times
> faster than the ZFS on 1 slice of the same LUN via NFS???
Well, I do have more info to share on this issue, though how it worked
faster in that test still remains a mystery. Folks ma
Jeff Bonwick,
Do you agree that there is a major tradeoff in
"builds up a wad of transactions in memory"?
We lose the changes if we have an unstable
environment.
Thus, I don't quite understand why a 2-phase
approach to commits isn't done. First, t
Do you agree that there is a major tradeoff in
"builds up a wad of transactions in memory"?
I don't think so. We trigger a transaction group commit when we
have lots of dirty data, or 5 seconds elapse, whichever comes first.
In other words, we don't let updates get stale.
Jeff
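A toy sketch of that "whichever comes first" policy; the names, the 256 MB
figure, and the structure are illustrative, not the actual ZFS code:

/*
 * Commit the open transaction group as soon as either the dirty-data
 * threshold is reached or the group has been open for 5 seconds,
 * whichever comes first, so updates never get stale.
 */
#include <stdint.h>
#include <time.h>

#define TXG_TIME_SEC    5                /* maximum age of the open txg     */
#define DIRTY_THRESHOLD (256ULL << 20)   /* "lots of dirty data" (256 MB)   */

struct open_txg {
    uint64_t dirty_bytes;   /* data buffered in memory so far */
    time_t   opened_at;     /* when this txg was opened       */
};

static int
txg_should_commit(const struct open_txg *tx, time_t now)
{
    if (tx->dirty_bytes >= DIRTY_THRESHOLD)
        return (1);                      /* lots of dirty data              */
    if (now - tx->opened_at >= TXG_TIME_SEC)
        return (1);                      /* ... or 5 seconds elapsed        */
    return (0);                          /* keep batching                   */
}

int
main(void)
{
    struct open_txg tx = { 0, time(NULL) };
    int i;

    /* Simulate incoming writes, checking the trigger after each one. */
    for (i = 0; i < 1000; i++) {
        tx.dirty_bytes += 1 << 20;       /* pretend each write dirties 1 MB */
        if (txg_should_commit(&tx, time(NULL)))
            break;                       /* a real system would commit here */
    }
    return (0);
}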
> That said, actually implementing the underlying mechanisms may not be
> worth the trouble. It is only a matter of time before disks have fast
> non-volatile memory like PRAM or MRAM, and then the need to do
> explicit cache management basically disappears.
I meant fbarrier() as a syscall expose
> I agree about the usefulness of fbarrier() vs. fsync(), BTW. The cool
> thing is that on ZFS, fbarrier() is a no-op. It's implicit after
> every system call.
That is interesting. Could this account for disproportionate kernel
CPU usage for applications that perform I/O one byte at a time, as
c
That is interesting. Could this account for disproportionate kernel
CPU usage for applications that perform I/O one byte at a time, as
compared to other filesystems? (Nevermind that the application
shouldn't do that to begin with.)
No, this is entirely a matter of CPU efficiency in the current c
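To make the "one byte at a time" pattern concrete, a small sketch (file
names and sizes are arbitrary): each 1-byte write(2) is a full system call
into the kernel and the filesystem, so per-call CPU overhead dominates,
while user-space buffering issues far fewer syscalls for the same data.

/*
 * Contrast the anti-pattern under discussion with ordinary stdio
 * buffering.  Both halves produce identical 1 MB files; the first
 * half makes roughly a million system calls, the second a handful.
 */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

#define NBYTES (1 << 20)    /* 1 MB of output */

int
main(void)
{
    int i;
    int fd = open("bytewise.out", O_WRONLY | O_CREAT | O_TRUNC, 0644);

    if (fd == -1) {
        perror("open");
        return (1);
    }

    /* Anti-pattern: one write(2) per byte. */
    for (i = 0; i < NBYTES; i++)
        (void) write(fd, "x", 1);       /* ~1M kernel crossings */
    (void) close(fd);

    /* Same data through stdio: writes are batched into large chunks. */
    FILE *fp = fopen("buffered.out", "w");
    if (fp == NULL) {
        perror("fopen");
        return (1);
    }
    for (i = 0; i < NBYTES; i++)
        (void) fputc('x', fp);          /* mostly stays in user space */
    (void) fclose(fp);

    return (0);
}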