sufficient (possibly 4G) but that week the price was the
same for 4G vs 8G.
I omit the part of the story where we became mired in ARC cache
variable changes, because that's probably only relevant to u3/u4
users. I did take my replacement servers up to u6/u7.
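For anyone who does hit it: the tuning in question is a cap on the ZFS ARC
set in /etc/system. A minimal sketch, assuming you want to hold the ARC to
4 GB (the variable names changed across updates, so check what your build
actually honors):

    * in /etc/system -- cap the ZFS ARC at 4 GB (0x100000000 bytes)
    set zfs:zfs_arc_max = 0x100000000

A reboot is needed before /etc/system changes take effect.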
On Tue, Feb 17, 2009 at 11:56 PM, Eli wrote:
A lot of us have run *with* the ability to shrink because we were
using Veritas. Once you have a feature, processes tend to expand to
use it. Moving to ZFS was a good move for many reasons, but I still
miss being able to do something that used to be so easy.
It's an old version, but it's a *supported* version, and we have a
five-figure support contract. That used to matter.
I've never used Live Upgrade; I want to try it out, but not on my
production file server, and I want to know that this particular bug is
fixed first, something more definite than "man
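For the curious, the basic Live Upgrade sequence looks something like this
(the BE name and image path are hypothetical; this is a sketch, not a recipe
I'd run on a production file server either):

    lucreate -n s10-test                      # clone the running boot environment
    luupgrade -u -n s10-test -s /mnt/osimage  # upgrade the inactive BE from install media
    luactivate s10-test                       # make it the default for the next boot
    init 6                                    # reboot into the new BE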
I've got a server that freezes when I run a zpool scrub from cron.
A zpool scrub runs fine from the command line, with no errors.
The freeze happens within 30 seconds of the zpool scrub starting.
The one core dump I succeeded in taking showed the ARC cache eating up
all the RAM.
The server's running Solaris
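For reference, the scheduling in question is just an ordinary root crontab
entry, something like this (pool name is hypothetical):

    # weekly scrub, Sundays at 03:00
    0 3 * * 0 /usr/sbin/zpool scrub tank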
I'm building a production web server on a Sol10 u3 box, after giving
up on u4 [1].
Any ZFS file systems have to be legacy-mounted or EMC Networker/Legato
backup won't see them.
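A minimal sketch of the legacy-mount setup, with a hypothetical pool/dataset
name:

    zfs set mountpoint=legacy tank/export

    # then a matching line in /etc/vfstab:
    # device        fsck  mount    FS   pass  boot  options
    tank/export     -     /export  zfs  -     yes   -

After that, mount/umount and /etc/vfstab manage the file system, which is
the traditional picture the backup software expects to see.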
I've been building ZFS systems and zones for a while now, but I still
feel like a newbie, because the darn things just wo
Well, I fixed the HW, but I had one bad file, and the problem was that ZFS
was saying "delete the pool and restore from tape" when, it turns out, the
answer is just to find the file with the bad inode, delete it, clear the
device, and scrub. Maybe more of a documentation problem, but it sure is
disconcerting.
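For the archives, the sequence that worked amounts to the following (pool
name, inode number, and path are all hypothetical):

    zpool status -v tank     # lists files with permanent errors, sometimes only by object/inode number
    find /tank -inum 12345   # map an inode number back to a path
    rm /tank/some/bad/file   # remove the damaged file
    zpool clear tank         # reset the error counters on the devices
    zpool scrub tank         # re-verify; it should now come back clean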
On 11/28/06, Frank Cusack <[EMAIL PROTECTED]> wrote:
I suspect this will be the #1 complaint about zfs as it becomes more
popular. "It worked before with ufs and hw raid, now with zfs it says
my data is corrupt! zfs sux0rs!"
That's not the problem, so much as "zfs says my file system is corrupt."
On 11/28/06, David Dyer-Bennet <[EMAIL PROTECTED]> wrote:
Looks to me like another example of ZFS noticing and reporting an
error that would go quietly by on any other filesystem. And if you're
concerned with the integrity of the data, why not use some ZFS
redundancy? (I'm guessing you're appl
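For example, redundancy at pool creation, or retrofitted onto a single-disk
pool, looks roughly like this (device names hypothetical):

    zpool create tank mirror c1t0d0 c1t1d0   # two-way mirror from the start
    zpool attach tank c1t0d0 c1t1d0          # or: mirror an existing single disk

With a mirror, ZFS can repair a bad block from the good copy instead of just
reporting it.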
So I rebuilt my production mail server as Solaris 10 06/06 with ZFS; it ran
for three months, and it's had no hardware errors. But my ZFS file system
seems to have died a quiet death. Sun engineering's response was to point to
the FMRI, which says to throw out the ZFS partition and start over. I'm r
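For anyone following along at home, the usual way to see what's behind such
an FMRI pointer is the standard FMA tooling, roughly:

    zpool status -xv   # shows the faulted pool and the Sun message ID
    fmdump -v          # lists the fault events FMA has recorded
    fmdump -eV         # dumps the raw error telemetry behind them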