Richard Elling writes:
> In my experience, this looks like a set of devices sitting behind an
> expander. I have seen one bad disk take out all disks sitting behind
> an expander. I have also seen bad disk firmware take out all disks
> behind an expander. I once saw a bad cable take out everything.
a kmdb?
James
-Original Message-
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Solaris
> Sent: Thursday, October 09, 2008 4:09 PM
> To: zfs-discuss@opensolaris.org
> Subject: [zfs-discuss] Strategies to avoid single point of failure w/ X45x0 Servers?
>
> I have
I have been leading the charge in my IT department to evaluate the Sun
Fire X45x0 as a commodity storage platform, in order to leverage
capacity and cost against our current NAS solution, which is backed by
an EMC Fibre Channel SAN. For our corporate environments, it would seem
like a single machine wo
Hello... Since there has been much discussion about zpool import failures
resulting in loss of an entire pool, I thought I would illustrate a scenario
I just went through to recover a faulted pool that wouldn't import under
Solaris 10 U5. While this is a simple scenario, and the data wa
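For anyone following along, the standard import-related commands look roughly
like this; this is only a sketch (the pool name "tank" is made up, and it is not
necessarily the exact sequence the poster used):

  # zpool import              (scan attached devices for importable pools)
  # zpool import tank         (normal import by name)
  # zpool import -f tank      (force the import if the pool still looks in use by another host)
  # zpool status -v tank      (check for faulted devices and permanent errors)
  # zpool scrub tank          (verify and repair the data once the pool is back online)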
Greetings,
I have a Sun Fire 4600 running Solaris 10 with Sun Cluster 3.2.
SunOS hubdb004 5.10 Generic_120012-14 i86pc i386 i86pc
We ran into some space issues in /usr today, so as a quick fix, I created a
slice (c5t0d0s12) with about 25GB of disk in order to create some zfs
filesystems with
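A rough sketch of what that setup might look like; the slice name comes from the
post, but the pool name, dataset name, mountpoint and quota are invented for
illustration:

  # zpool create usrpool c5t0d0s12
  # zfs create usrpool/local
  # zfs set mountpoint=/usr/local usrpool/local
  # zfs set quota=10g usrpool/local     (optional cap so one filesystem can't eat the whole 25GB)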
Richard,
Having read your blog regarding the copies feature, do you have an
opinion on whether mirroring or copies are better for a SAN situation?
It strikes me that since we're discussing SAN and not local physical
disk, for a system needing 100GB of usable storage (size chosen
for round numbers)
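For comparison, the two approaches look roughly like this on the command line
(pool, dataset, and LUN device names are hypothetical):

  # zpool create tank mirror c4t0d0 c4t1d0    (mirror two SAN LUNs)
  # zpool create tank c4t0d0
  # zfs set copies=2 tank/data                (two copies of each block, on the same LUN)

One point worth noting: copies=2 guards against latent block corruption, but both
copies still live on the same LUN, so losing the LUN loses the data; a mirror
across two LUNs survives the outright loss of one of them.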
You don't have to do it all at once... ZFS will function fine with 1
large disk and 1 small disk in a mirror; it just means you will only
have as much space as the smaller disk.
As things stand now, if you have multiple vdevs in a pool and they are of
diverse capacities, the striping becomes less
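A quick illustration of the mixed-size mirror point (device names are made up):

  # zpool create tank mirror c1t0d0 c1t1d0    (say, a 250GB and a 500GB disk)
  # zpool list tank                           (SIZE reports roughly the smaller disk's capacity)

The extra space on the larger disk simply sits unused until the smaller side is
replaced with something of equal or greater size.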
I considered this as well, but that's the beauty of marrying ZFS with
a hotplug SATA backplane :)
I chose to use the 5-in-3 hot-swap chassis to give me a way to upgrade
capacity in place (a rough command sketch follows below); the 4-in-3
would be just as easy, though with higher risk.
1. hot-plug a new 500GB SATA disk
oportion with your target total usable size and keep it simple and
to an even number of disks.
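Continuing the upgrade-in-place thought, the rest of the sequence would look
something like this (the cfgadm attachment point and device names are
hypothetical, and cfgadm paths vary by controller):

  # cfgadm -c configure sata1/4         (bring the hot-plugged disk online)
  # zpool replace tank c2t3d0 c2t4d0    (swap one existing disk for the new, larger one)
  # zpool status tank                   (wait for the resilver to complete)
  ... repeat for each disk in the vdev ...

Once every disk in a vdev has been replaced with a larger one, the extra
capacity becomes usable (on some releases only after an export/import of the
pool).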
I have yet to purchase the system due to my issues with finding
the right board with the right SATA controller. My desktop system at
home runs an nVidia 590a chipset on a Foxconn motherboard
Greetings.
I applied the Recommended Patch Cluster including 120012-14 to a U3
system today. I upgraded my zpool and it seems like we have some very
strange information coming from zpool list and zfs list...
[EMAIL PROTECTED]:/]# zfs list
NAME          USED  AVAIL  REFER  MOUNTPOINT
zpool02
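Part of the apparent strangeness may simply be that the two commands account
for space differently: zpool list reports raw pool capacity (before raidz
parity is subtracted), while zfs list reports space as the datasets see it,
after redundancy, quotas, and reservations. A quick way to compare, using the
pool name from above:

  # zpool list zpool02      (raw pool size and usage)
  # zfs list -r zpool02     (per-dataset used/available space)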
suggestion would not be applicable to your situation.
On 9/13/07, Peter Tribble <[EMAIL PROTECTED]> wrote:
>
> On 9/13/07, Solaris <[EMAIL PROTECTED]> wrote:
> > Try exporting the pool then import it. I have seen this after moving
> disks
> > between systems,
Try exporting the pool then import it. I have seen this after moving disks
between systems, and on a couple of occasions just rebooting.
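Concretely, that is just (pool name made up):

  # zpool export tank
  # zpool import tank
  # zpool import -d /dev/dsk tank    (if the devices moved, point import at the right device directory)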
On 9/13/07, [EMAIL PROTECTED] <
[EMAIL PROTECTED]> wrote:
>
> Date: Thu, 13 Sep 2007 15:19:02 +0100
> From: "Peter Tribble" <[EMAIL PROTECTED]>
> Subject: [zfs-
Is it possible to force ZFS to "nicely" re-organize data inside a zpool
after a new root level vdev has been introduced?
e.g. Take a pool with 1 vdev consisting of a 2 disk mirror. Populate some
arbitrary files using about 50% of the capacity. Then add another 2
mirrored disks to the pool.
It s
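As far as I know there is no built-in rebalance: new writes are biased toward
the emptier vdev, and existing data only spreads out when it gets rewritten. A
rough sketch of forcing a rewrite with send/receive (pool and dataset names are
made up):

  # zpool add tank mirror c3t0d0 c3t1d0             (the new top-level vdev)
  # zfs snapshot tank/data@move
  # zfs send tank/data@move | zfs recv tank/data2   (rewritten blocks stripe across both vdevs)
  # zfs destroy -r tank/data
  # zfs rename tank/data2 tank/data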
Hi Jim,
The handout referenced is in fact the second of the two PDF documents
posted on the LOSUG website.
Cheers,
Joy
Jim Mauro wrote:
>
> Is the referenced Laminated Handout on slide 3 available anywhere in
> any form electronically?
>
> If not, I'd be happy to create an electronic copy and
Hi Thomas,
The man page for zpool has:
zpool scrub [-s] pool ...
    Begins a scrub. The scrub examines all data in the
    specified pools to verify that it checksums correctly.
    For replicated (mirror or raidz) devices, ZFS
    automatically repairs any damage discovered during the scrub.
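In practice that usually boils down to (pool name made up):

  # zpool scrub tank        (start the scrub)
  # zpool status -v tank    (watch progress and see any errors found)
  # zpool scrub -s tank     (stop an in-progress scrub, per the -s flag above)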
The current version of Sun Cluster (3.1) has no support for ZFS. You
will be able to use ZFS as a failover filesystem with Sun Cluster 3.2,
which will be released by the end of this year.
alex.
Erik Trimble wrote:
I'm seriously looking at using the SunCluster software in combination
with ZFS
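For reference, once 3.2 is out, ZFS failover is expected to go through the
HAStoragePlus resource type; a hedged sketch of what that might look like
(resource, group, and pool names are invented, and the exact syntax should be
checked against the 3.2 documentation when it ships):

  # clresourcetype register SUNW.HAStoragePlus
  # clresourcegroup create data-rg
  # clresource create -g data-rg -t SUNW.HAStoragePlus -p Zpools=tank hasp-rs
  # clresourcegroup online -M data-rg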