> 1 - are the 2 vdevs in the same pool, or two separate
> pools?
>
I was planning on having the two raidz2 vdevs in one pool. Although having two
pools and keeping them synced sounds really good, I fear it may be overkill for
the intended purpose.
>
>
> 3 - spare temperature
>
> for levels raidz2 and
Thanks Richard.
How does ZFS enumerate the disks? In terms of listing them, does it do so
logically, i.e.:
controller #1 (motherboard)
|
|--- disk1
|--- disk2
controller #3
|--- disk3
|--- disk4
|--- disk5
|--- disk6
|--- disk7
|--- disk8
|--- disk9
|-
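(A rough illustration of checking that mapping yourself rather than guessing: on Solaris the cXtYdZ device names encode the controller instance, and the /dev/dsk entries are symlinks into /devices, so each disk can be traced back to its HBA. Pool and device names below are placeholders.)

  # See which devices the pool uses; names like c3t2d0 are hypothetical.
  zpool status tank | grep 'c[0-9]'

  # The symlink target is the /devices path, which identifies the
  # controller the disk hangs off.
  ls -l /dev/dsk/c3t2d0s0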
>I was planning on using one of
> these
> http://www.scan.co.uk/products/icy-dock-mb994sp-4s-4in1-sas-sata-hot-swap-backplane-525-raid-cage
Imagine if 2.5" 2TB disks were price neutral compared to 3.5" equivalents.
I could have 40 of the buggers in my system giving 80TB raw storage! I'd
h
> 4 - the 16th port
>
> Can you find somewhere inside the case for an SSD as
> L2ARC on your
> last port?
Having said that, if hot spares may be a bad idea in my scenario, I could ditch
the spare and use a 3.5" SSD in the 15th drive's place?
--
This message posted from opensolaris.org
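(For reference, a minimal sketch of what adding an SSD as L2ARC looks like; the pool name and device are placeholders for whatever ends up on that last port.)

  # Add the SSD as a cache (L2ARC) device.
  zpool add tank cache c5t0d0

  # The device then shows up under a separate "cache" section.
  zpool status tank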
> -----Original Message-----
> From: Fred Liu
> Sent: Thursday, June 16, 2011 17:28
> To: Fred Liu; 'Richard Elling'
> Cc: 'Jim Klimov'; 'zfs-discuss@opensolaris.org'
> Subject: RE: [zfs-discuss] zfs global hot spares?
>
> Fixing a typo in my last thread...
>
> > -----Original Message-----
> > From: F
On 6/17/2011 12:55 AM, Lanky Doodle wrote:
Thanks Richard.
How does ZFS enumerate the disks? In terms of listing them, does it do so
logically, i.e.:
controller #1 (motherboard)
|
|--- disk1
|--- disk2
controller #3
|--- disk3
|--- disk4
|--- disk5
|--- disk6
> From: Daniel Carosone [mailto:d...@geek.com.au]
> Sent: Thursday, June 16, 2011 10:27 PM
>
> Is it still the case, as it once was, that allocating anything other
> than whole disks as vdevs forces NCQ / write cache off on the drive
> (either or both, forget which, guess write cache)?
I will onl
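(The distinction Daniel is asking about is whole-disk vdevs versus slice vdevs. A hedged sketch with hypothetical device names; historically ZFS would enable the drive write cache itself only when handed the whole disk, since only then can it assume nothing else shares the device.)

  # Whole-disk vdevs: ZFS puts an EFI label on each disk.
  zpool create tank raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0

  # Slice vdev: ZFS does not own the disk, so it leaves the
  # write-cache setting alone.
  zpool create tank c2t0d0s6

  # Inspect the drive's current write-cache state interactively
  # (format expert mode: cache -> write_cache -> display).
  format -e c2t0d0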
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Lanky Doodle
>
> or is it completely random, leaving me with some trial and error to work out
> what disk is on what port?
It's highly desirable to have drives with lights on them. So you can
> From: Daniel Carosone [mailto:d...@geek.com.au]
> Sent: Thursday, June 16, 2011 11:05 PM
>
> the [sata] channel is idle, blocked on command completion, while
> the heads seek.
I'm interested in proving this point. Because I believe it's false.
Just hand waving for the moment ... Presenting th
2011-06-17 9:37, Michael Schuster wrote:
I'd suggest a somewhat different approach:
1) boot a live cd and use something like parted to shrink the NTFS
partition
2) create a new partition without FS in the space now freed from NTFS
3) boot OpenSolaris, add the partition from 2) as vdev to your
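(A minimal sketch of step 3, assuming the freed space is visible as a partition device such as c0t0d0p2, a hypothetical name. As noted later in the thread, a vdev cannot be added to a root pool, so the new partition would normally become a separate data pool instead.)

  # Hypothetical device name; adjust to whatever fdisk created.
  zpool create datapool c0t0d0p2
  zfs create datapool/data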
2011-06-17 15:41, Edward Ned Harvey wrote:
From: Daniel Carosone [mailto:d...@geek.com.au]
Sent: Thursday, June 16, 2011 11:05 PM
the [sata] channel is idle, blocked on command completion, while
the heads seek.
I'm interested in proving this point. Because I believe it's false.
Just hand wavi
2011-06-17 15:06, Edward Ned Harvey wrote:
When it comes to reads: The OS does readahead more intelligently than the
disk could ever hope. Hardware readahead is useless.
Here's another (lame?) question to the experts, partly as a
followup to my last post about large arrays and essentially
a
> Lights. Good.
Agreed. In a fit of desperation and stupidity I once enumerated disks by
pulling them one by one from the array to see which zfs device faulted.
On a busy array it is hard even to use the leds as indicators.
It makes me wonder how large shops with thousands of spindles handle
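(A less drastic way to identify a drive than pulling it: generate I/O against one device at a time and watch which activity LED goes solid. The device name is a placeholder; the slice may be s0 or s2 depending on the label.)

  # Read the suspect disk and watch the LEDs; Ctrl-C once spotted.
  dd if=/dev/rdsk/c4t3d0s0 of=/dev/null bs=1024k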
On 6/17/2011 6:52 AM, Marty Scholes wrote:
Lights. Good.
Agreed. In a fit of desperation and stupidity I once enumerated disks by
pulling them one by one from the array to see which zfs device faulted.
On a busy array it is hard even to use the leds as indicators.
It makes me wonder how large
Funny you say that.
My Sun v40z, connected to a pair of Sun A5200 arrays and running OSol 128a, can't
see the enclosures. The luxadm command comes up blank.
Except for that annoyance (and similar other issues) the Sun gear works well
with a Sun operating system.
Sent from Yahoo! Mail on Android
2011-06-18 0:24, marty scholes wrote:
>> It makes me wonder how large shops with thousands of spindles
>> handle this.
> We pay for the brand-name disk enclosures or servers where the
> fault-management stuff is supported by Solaris.
> Including the blinky lights.
>
Funny you say that.
My Sun
>
OK, what is the point of the RESERVE
when we cannot even delete a file once there is no space left?!
If they are going to have a RESERVE, they should make it a little smarter and
maybe have the FS use some of that free space, so when we do hit 0 bytes
data can still be deleted, because there
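(One workaround often suggested for the full-pool problem, sketched with placeholder names: keep a small dataset with a reservation you can release when the pool fills, so the copy-on-write deletes have room to complete.)

  # Set aside some space up front.
  zfs create tank/spare-room
  zfs set reservation=2G tank/spare-room

  # Later, when the pool is full and rm starts failing:
  zfs set reservation=none tank/spare-room   # gives the space back
  rm /tank/data/unwanted-file                # deletes can proceed again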
On Jun 17, 2011, at 7:06 AM, Edward Ned Harvey
wrote:
> I will only say that, regardless of whether or not that is or ever was true,
> I believe it's entirely irrelevant. Because your system performs read and
> write caching and buffering in ram, the tiny little ram on the disk can't
> possibly
On Jun 16, 2011, at 7:23 PM, Erik Trimble wrote:
> On 6/16/2011 1:32 PM, Paul Kraus wrote:
>> On Thu, Jun 16, 2011 at 4:20 PM, Richard Elling
>> wrote:
>>
>>> You can run OpenVMS :-)
>> Since *you* brought it up (I was not going to :-), how does VMS'
>> versioning FS handle those issues?
>>
On Fri, 17 Jun 2011, Jim Klimov wrote:
I gather that he is trying to expand his root pool, and you can
not add a vdev to one. Though, true, it might be possible to
create a second, data pool, in the partition. I am not sure if
zfs can make two pools in different partitions of the same
device, though.
On 17 Jun 11, at 21:02 , Ross Walker wrote:
> On Jun 16, 2011, at 7:23 PM, Erik Trimble wrote:
>
>> On 6/16/2011 1:32 PM, Paul Kraus wrote:
>>> On Thu, Jun 16, 2011 at 4:20 PM, Richard Elling
>>> wrote:
>>>
>>>> You can run OpenVMS :-)
>>> Since *you* brought it up (I was not going to :-), ho
On 17 Jun 11, at 21:14 , Bob Friesenhahn wrote:
> On Fri, 17 Jun 2011, Jim Klimov wrote:
>> I gather that he is trying to expand his root pool, and you can
>> not add a vdev to one. Though, true, it might be possible to
>> create a second, data pool, in the partition. I am not sure if
>> zfs can m