> From: Phil Harman
> Date: Tue, 10 Aug 2010 09:24:52 +0100
> To: Ian Collins
> Cc: Terry Hull, "zfs-discuss@opensolaris.org"
>
> Subject: Re: [zfs-discuss] RAID Z stripes
>
> On 10 Aug 2010, at 08:49, Ian Collins wrote:
>
planning to do that, but I was wondering about using drives of different
sizes. These drives would all be in a single pool.
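(A minimal sketch of the layout being asked about, with device names and sizes
invented for illustration: ZFS will happily stripe a single pool across raidz
vdevs of different sizes, but within one raidz vdev every member is limited to
the capacity of its smallest disk.)

zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0    # e.g. 5 x 1 TB drives
zpool add tank raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0       # e.g. 5 x 2 TB drives
zpool status tank                                             # both vdevs show up in the one pool

Writes are then spread across the vdevs roughly in proportion to the free space
in each, so the mixed sizes mostly just affect how full each vdev runs.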
--
Terry Hull
Network Resource Group, Inc.
> From: Geoff Nordli
> Date: Sat, 7 Aug 2010 14:11:37 -0700
> To: Terry Hull ,
> Subject: RE: [zfs-discuss] PowerEdge R510 with PERC H200/H700 with ZFS
[stuff deleted]
>
> Terry, you are right, the part that was really missing with the Dell was the
> lack of spindles.
So: ZFS loves to see lots of spindles, and Dell boxes tend not to have many
drive bays compared with what you can build at a given price point. Of course,
then you have warranty / service issues to consider.
--
Terry Hull
Network Resource Group, Inc.
Also remember that ZFS counts 1024*1024 bytes as a MB, while drive vendors count 1000*1000.
zpool list shows raw pool space, not usable space. RAIDZ1 takes one disk's worth
of space out of the pool to store parity, so that is not "usable" space.
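As a rough worked example (a hypothetical 5 x 1 TB RAIDZ1 pool; numbers are
approximate, and note drives are sold in powers of ten while ZFS reports powers
of two):

  raw space, roughly what zpool list reports:   5 x 10^12 bytes  ~= 4.5 TiB
  usable space, roughly what zfs list reports:  4 x 10^12 bytes  ~= 3.6 TiB

so about one disk's worth of the raw figure goes to parity, plus a little
filesystem overhead.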
--
Terry Hull
Network Resource Group, Inc.
caching turned on and have verified that turning it off causes
a significant write performance penalty. I am not currently using bonded
NICs, but I am using jumbo frames. Are there other things I should be
tweaking?
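(One quick check worth adding, sketched with an invented interface name and
assuming an OpenSolaris-style dladm: confirm the jumbo-frame MTU actually took
effect, since one hop left at 1500 quietly negates it.)

dladm show-linkprop -p mtu e1000g0       # check the current link MTU
dladm set-linkprop -p mtu=9000 e1000g0   # set it; on some builds the link must be unplumbed first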
--
Terry Hull
Network Resource Group, Inc.
> From: Richard Elling
> Date: Wed, 4 Aug 2010 18:40:49 -0700
> To: Terry Hull
> Cc: "zfs-discuss@opensolaris.org"
> Subject: Re: [zfs-discuss] Logical Units and ZFS send / receive
>
> On Aug 4, 2010, at 1:27 PM, Terry Hull wrote:
> From: Richard Elling
> Date: Wed, 4 Aug 2010 11:05:21 -0700
> Subject: Re: [zfs-discuss] Logical Units and ZFS send / receive
>
> On Aug 3, 2010, at 11:58 PM, Terry Hull wrote:
>> I have a logical unit created with sbdadm create-lu that I am replicating
>> with zfs send / receive.
zfs list -t snapshot to show the
snapshot that I replicated.
Any suggestions?
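(For anyone hitting the same thing later, a sketch of the checks involved, with
dataset, snapshot, and host names invented; the backing store behind an
sbdadm-created LU is typically a zvol, and a received zvol's snapshots list
like any other dataset's.)

zfs list -t snapshot -r tank/lu0                                 # sending side
zfs send tank/lu0@rep1 | ssh backup zfs receive -F backup/lu0    # the replication step
zfs list -t snapshot -r backup/lu0                               # receiving side: @rep1 should appear here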
--
Terry Hull
Network Resource Group, Inc.
I am curious how admins are dealing with controllers like the Dell PERC 5 and 6
that can change the device name on a disk if a disk fails and the machine
reboots. These controllers are not nicely behaved, in that they happily fill
in the device numbers for the physical drive that is missing.
Interestingly, with the machine running, I can pull the first drive in the
mirror, replace it with an unformatted one, format it, mirror rpool over to it,
and install the boot loader, and at that point the machine will boot with no
problems. It's just when the first disk is missing that I have a problem.
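(One way to see past the controller's renumbering, sketched with invented
device names: the ZFS label written on each slice records the pool name, GUIDs,
and last-used path, regardless of which cXtYdZ name the controller hands out
after a reboot.)

zpool status rpool            # the device names the pool currently expects
zdb -l /dev/rdsk/c4t0d0s0     # dump the vdev labels on a slice to see which pool/vdev it really belongs to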
I have a machine with the Supermicro 8-port SATA card installed. I have had no
problem creating a mirrored boot disk using the oft-repeated scheme:
prtvtoc /dev/rdsk/c4t0d0s2 | fmthard -s - /dev/rdsk/c4t1d0s2    # copy the partition table to the new disk
zpool attach rpool c4t0d0s0 c4t1d0s0                            # mirror the root slice onto it
(wait for the resilver to complete)
installgrub -m /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c4t1d0s0   # make the new disk bootable
Thanks for the info.
If that last common snapshot gets destroyed on the primary server, the next
replication back to the primary server has to be a full one. Is that correct?
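(For reference, a minimal sketch of the two cases, with dataset, snapshot, and
host names invented; zfs send -i can only build an incremental stream against a
snapshot that still exists on both ends.)

zfs send -i tank/data@common tank/data@today | ssh primary zfs receive tank/data
    # incremental: only possible while @common still exists on both machines
zfs send tank/data@today | ssh primary zfs receive -F tank/data
    # once the last common snapshot is gone, a full stream (with -F to overwrite) is the fallback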
--
Terry
First of all, I must apologize. I'm an OpenSolaris newbie, so please don't be
too hard on me.
Sorry if this has been beaten to death before, but I could not find it, so here
goes. I want to be able to have two disk servers that I replicate data
between using send / receive with snapshots.