We recently patched our X4500 from Sol10 U6 to Sol10 U8 and have not noticed
anything like what you're seeing. We do not have any SSD devices installed.
fmdump doesn't produce any "human readable" disk IDs, only GUIDs, which then
have to be correlated via a "zdb -c".
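For what it's worth, the correlation can be done along these lines (a sketch;
the exact zdb output layout varies by release, and it assumes the pool is in
/etc/zfs/zpool.cache):

  fmdump -eV | grep vdev_guid   # GUID of the affected vdev
  zdb -C                        # cached pool configs; each vdev entry
                                # lists its guid next to its path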
Sean
>Date: Tue, 17 Nov 2009 16:18:52 -0700
>From: Cindy Swearingen
>Subject: Re: [zfs-discuss] building zpools on device aliases
>To: se
We have a number of Sun J4200 SAS JBOD arrays which we have multipathed using
Sun's MPxIO facility. While this is great for reliability, it results in the
/dev/dsk device IDs changing from cXtYd0 to something virtually unreadable like
"c4t5000C5000B21AC63d0s3".
Since the entries in /dev/{rdsk,d
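One way to keep track of which physical disk sits behind an MPxIO name is
stmsboot, which prints the mapping between the original cXtYdZ names and the
scsi_vhci names (a sketch; the output format differs between releases):

  stmsboot -L    # non-STMS device name -> STMS (MPxIO) device name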
ed rather than
there being a real issue with ZFS. Despite this, we're happy to know that we
can now match vdevs against physical devices using either the mdb trick or zdb.
We've followed Eric's work on ZFS device enumeration for the Fishworks project
with great interest - hopefully
Thanks for this information.
We have a weekly scrub schedule, but I ran another just to be sure :-) It
completed with 0 errors.
Running fmdump -eV gives:
TIME CLASS
fmdump: /var/fm/fmd/errlog is empty
Dumping the faultlog (no -e) does give some output, but again there
This morning we got a fault management message from one of our production
servers stating that a fault in one of our pools had been detected and fixed.
Looking into the error using fmdump gives:
fmdump -v -u 90ea244e-1ea9-4bd6-d2be-e4e7a021f006
TIME UUID
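To confirm that nothing is still outstanding after a repaired fault, the usual
quick checks are:

  zpool status -x   # prints "all pools are healthy" if nothing is wrong
  fmadm faulty      # lists any resources FMA still considers faulted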
Something caused my original message to get cut off. Here is the full post:
1) Turning on write caching is potentially dangerous because the disk will
indicate that data has been written (to cache) before it has actually been
written to non-volatile storage (disk). Since the factory has no way of knowing
how you'll use your T5140, I'm guessing that they set the disk wri
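For reference, the current write-cache setting can be inspected from format's
expert mode; the cache menu is only offered for some disk/driver combinations,
so treat this as a sketch:

  format -e             # expert mode; select the disk, then:
  format> cache
  cache> write_cache
  write_cache> display  # or enable / disable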
Sun X4500 (thumper) with 16 GB of memory running Solaris 10 U6 with patches
current to the end of Feb 2009.
Current ARC size is ~6 GB.
ZFS filesystem created in a ~3.2 TB pool consisting of 7 sets of mirrored
500 GB SATA drives.
I used 4000 8 MB files for a total of 32 GB.
run 1: ~140 MB/s average a
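For anyone wanting to reproduce the ARC figure, it can be read straight from
the arcstats kstat:

  kstat -p zfs:0:arcstats:size    # current ARC size in bytes
  kstat -p zfs:0:arcstats:c_max   # configured ARC ceiling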
Some additional information: I should have noted that the client could not see
the thumper1 shares via the automounter.
I've played around with this setup a bit more and it appears that I can
manually mount both filesystems (e.g. on /tmp/troot and /tmp/tpool), so the ZFS
and UFS volumes are bei
I have a server "thumper1" which exports its root (UFS) filesystem to one
specific server "hoss" via /etc/dfs/dfstab so that we can backup various system
files. When I added a ZFS pool mypool to this system, I shared it to hoss and
several other machines using the ZFS sharenfs property.
Prior t
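For reference, the two sharing mechanisms described above look roughly like
this (hoss and mypool are from the post; clientA and clientB stand in for the
other machines):

  # /etc/dfs/dfstab entry for the UFS root
  share -F nfs -o rw=hoss /
  # ZFS side: set sharenfs on the pool; child filesystems inherit it
  zfs set sharenfs=rw=hoss:clientA:clientB mypool
  zfs get -r sharenfs mypool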
I haven't used it myself, but the following blog describes an automatic
snapshot facility:
http://blogs.sun.com/timf/entry/zfs_automatic_snapshots_0_10
I agree that it would be nice to have this type of functionality built into the
base product, however.
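Until something like that ships in the base product, a minimal cron-driven
stand-in (the filesystem name is hypothetical) could be:

  # crontab entry: date-stamped snapshot at 03:00 every day
  0 3 * * * /usr/sbin/zfs snapshot mypool/home@daily-`date +\%Y\%m\%d`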
We mostly rely on AMANDA, but for a simple, compressed, encrypted,
tape-spanning alternative backup (intended for disaster recovery) we use:
tar cf - | lzf (quick compression utility) | ssl (to encrypt) | mbuffer
(which writes to tape and looks after tape changes)
Recovery is exactly the oppos
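A rough sketch of that pipeline, assuming "ssl" refers to the openssl enc
command; the lzf and mbuffer flags shown here are illustrative rather than
taken from the post:

  tar cf - /export/home | lzf | \
      openssl enc -aes-256-cbc -pass file:/secure/backup.key | \
      mbuffer -m 1G -o /dev/rmt/0n
  # recovery reverses each stage:
  mbuffer -i /dev/rmt/0n | openssl enc -d -aes-256-cbc \
      -pass file:/secure/backup.key | lzf -d | tar xf -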