Brandon High wrote:
> On Fri, Jul 25, 2008 at 9:17 AM, David Collier-Brown <[EMAIL PROTECTED]> wrote:
>
>> And do you really have 4-sided RAID-1 mirrors, not 4-wide RAID-0 stripes???
>
> Or perhaps 4 RAID-1 mirrors concatenated?
>
I wondered that too, but he ...

>> ... find problems writing from the cache, it really needs to log somewhere
>> the names of all the files affected, and the action that could not be
>> carried out. ZFS knows the files it was meant to delete here; it also
>> knows the files that were written. I can accept that ...
... to have it report failures to complete the local writes in time t0 and
the remote writes in time t1, much as the resource management or fast/slow
cases would need to be visible to FMA.
--dave (at home) c-b

> ... The result was that the volume manager will declare the disks bad and
> system administration intervention is required to regain access to
> the data in the array. Since this was an integrated product, we
> solved it by inducing a delay loop in the server boot ...
... David J. Brown's team, back when I was an employee.
--dave (who's a contractor) c-b

... it with the level 1 and 2 caches, although if I understood
it properly, the particular machine also had to narrow a stripe
for the particular load being discussed...
--dave

... more than 640 KB of memory, either...
Ah well, at least the ZFS folks found it for us, so I can add
it to my database of porting problems. What OSes did you folks
find it on?
--dave (an external consultant, these days) c-b

... for all of these (including myself)
>>>>
>>>> Feel free to nominate others for Contributor or Core Contributor.
>>>>
>>>> -Mark
>>>>
--dave (who hasn't even Copious Spare Time, much less Infinite) c-b

... pointing them to a web page
which can be updated with the newest information on the problem.
That's a good spot for "This pool was not unmounted cleanly due
to a hardware fault and data has been lost. The "" line contains
the date which can be recovered to. Use the command # zfs refr ..."

... in a similar discussion about how best to do this
allocation on a 9990v, so I expect it's not peculiar to the UofT (:-))
--dave (about 6 miles north of Chris) c-b

We've discussed this in considerable detail, but the original
question remains unanswered: if an organization *must* use
multiple pools, is there an upper bound to avoid or a rate
of degradation to be considered?
--dave

... to have, what happens when it goes wrong, and how
to mitigate it (;-))
--dave
ps: as always, having asked for something, I'm also volunteering to
help provide it: I'm not a storage or ZFS guy, but I am an author,
and will happily help my Smarter Colleagues[tm] to write it up.

... for an NFS->home-directories workload without
cutting into performance.
--dave

| ... whereas another server is just N
| thousand dollars in one-time costs and some rack space.
This is also common in organizations where IT is a cost center,
including some *very* large ones I've encountered in the past
and several which are just, well, conservative.
--dave

... clustering
and recreated last.
--dave

David Collier-Brown wrote:
>> ZFS copy-on-write results in tables' contents being spread across
>> the full width of their stripe, which is arguably a good thing
>> for transaction processing performance (or at least can be), but
>> makes sequential table-scan speed ...

>> ... data sets. It is meant to
>> be fast at the expense of collisions. This issue can show much more dedup
>> possible than really exists on large datasets.
>
> Doing this using sha256 as the checksum algorithm would be much more
> interesting. I'm going to try that now ...
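
For anyone who wants to try the same estimate, here is a minimal sketch in
Python of the block-hashing approach described above, using sha256 so that
hash collisions don't inflate the duplicate count. The 128 KiB block size,
the directory walk, and the ratio formula are my assumptions for
illustration; a real measurement would hash the pool's actual on-disk
blocks rather than re-reading files through the filesystem.

#!/usr/bin/env python3
# Rough dedup-potential estimate: hash fixed-size blocks with sha256 and
# count how many are unique. Block size and traversal are assumptions.
import hashlib
import os
import sys
from collections import Counter

BLOCK_SIZE = 128 * 1024  # assumed to match ZFS's default 128 KiB recordsize

def block_digests(path):
    """Yield the sha256 digest of each fixed-size block of one file."""
    with open(path, "rb") as f:
        while True:
            block = f.read(BLOCK_SIZE)
            if not block:
                break
            yield hashlib.sha256(block).digest()

def estimate(root):
    """Walk a directory tree and report an estimated dedup ratio."""
    counts = Counter()
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            try:
                counts.update(block_digests(os.path.join(dirpath, name)))
            except OSError:
                continue  # skip unreadable files, special files, etc.
    total = sum(counts.values())
    unique = len(counts)
    ratio = total / unique if unique else 1.0
    print(f"blocks scanned: {total}, unique: {unique}, "
          f"estimated dedup ratio: {ratio:.2f}x")

if __name__ == "__main__":
    estimate(sys.argv[1] if len(sys.argv) > 1 else ".")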