NB. the zpool(1M) man page provides a rather extensive explanation
of vdevs.
[EMAIL PROTECTED] wrote:
> > There are several types of vdevs:
>
> wow, outstanding list Kyle!
>
> > suggested that there is little benefit to having 10
> > or more devices in a RAIDZ vdev.
>
> the txg is split between
so, anyone have any ideas? I'm obviously hitting a bug here. I'm happy to help
anyone solve this, I DESPERATELY need this data. I can post dtrace results if
you send them to me. I wish I could solve this myself, but I'm not a C
programmer, I don't know how to program filesystems, much less an adv
> My question is: is there any way to change the ZFS
> guid (and the zpool name, but that's easy) on the
> clone so that I can mount both the original disk and
> clone onto the same Solaris 10 server? When I
> attempt to mount both the original and cloned iSCSI
> LUNs onto the server, "zpool attac
On Sat, May 24, 2008 at 11:45 PM, Neil Perrin <[EMAIL PROTECTED]> wrote:
>
>
> Hugh Saunders wrote:
>> On Sat, May 24, 2008 at 4:00 PM, <[EMAIL PROTECTED]> wrote:
>>> > cache improve write performance or only reads?
>>>
>>> The L2ARC cache device is for reads... for writes you want the
>>> Intent Log
>>
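For illustration, the two device classes are attached differently; a minimal
sketch, assuming a pool named "tank" and hypothetical device names:

    # L2ARC read cache device
    zpool add tank cache c2t0d0
    # separate intent log (slog), which is what helps synchronous writes
    zpool add tank log c2t1d0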
Hi folks,
I use an iSCSI disk mounted onto a Solaris 10 server. I installed a ZFS file
system into s2 of the disk. I exported the disk and cloned it on the iSCSI
target. The clone is a perfect copy of the iSCSI LUN and therefore has the
same zpool name and guid.
My question is: is there any
"Hugh Saunders" <[EMAIL PROTECTED]> writes:
> On Sat, May 24, 2008 at 3:21 AM, Richard Elling <[EMAIL PROTECTED]> wrote:
>> Consider a case where you might use large, slow SATA drives (1 TByte,
>> 7,200 rpm)
>> for the main storage, and a single small, fast (36 GByte, 15krpm) drive
>> for the
>> L
| > Primarily cost, reliability (less complex hw = less hw that can
| > fail), and serviceability (no need to rebuy the exact same raid card
| > model when it fails, any SATA controller will do).
|
| As long as the RAID is self-contained on the card, and the disks are
| exported as JBOD, then you s
> One other thing I noticed is that OpenSolaris (.com) will
> automatically install ZFS root for you. Will Nexenta do that?
Yeah, Nexenta was the first OpenSolaris distro to have ZFS root install and
snapshots and a modern package system, which all ties together into easy
upgrades.
THANK YOU VERY MUCH EVERYONE!!
You have been very helpful and my questions are (mostly) resolved. While I am
not (and probably will not become) a ZFS expert, I now at least feel confident
that I can accomplish what I want to do.
My last comment on this is this:
I realize that ZFS is designed
Which of these SATA controllers have people been able to use with SMART
and ZFS boot in Solaris?
Cheers,
11011011
Will and several other people are correct.
I had forgotten that ZFS does a funky form of concatenation when you use
different size vdevs. I tend to ignore this case because it's kinda
useless (I know, I know, there are people who use it, but, really...).
Basically, it will stripe across vdevs as
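As a rough illustration of that behaviour (a sketch, assuming a hypothetical
pool built from single-disk vdevs of 2G, 3G and 5G; pool and device names are
invented here):

    zpool create demo c1t0d0 c1t1d0 c1t2d0   # 2G + 3G + 5G disks
    zpool list demo                          # SIZE is roughly the 10G sum

ZFS stripes dynamically, biasing new writes toward the emptier vdevs rather
than limiting the pool to a multiple of the smallest device.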
> There are several types of vdevs:
wow, outstanding list Kyle!
> suggested that there is little benefit to having 10
> or more devices in a RAIDZ vdev.
the txg is split between vdevs; the blocks written to a single
raidz vdev are divided across the data elements in the raidz
set, so let's say it's 128k,
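To make the arithmetic concrete (a sketch assuming raidz1 and ignoring parity
and padding), a 128k block is divided across the data disks of the vdev:

    128k / 4 data disks (a 5-disk raidz1)   = 32k per disk
    128k / 9 data disks (a 10-disk raidz1) ~= 14k per disk

which is one reason very wide raidz vdevs give diminishing returns: every
block turns into a small I/O on each disk.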
Marc Bevand wrote:
> Kyle McDonald Egenera.COM> writes:
>
>> Marc Bevand wrote:
>>
>>> Overall, like you I am frustrated by the lack of non-RAID inexpensive
>>> native PCI-E SATA controllers.
>>>
>> Why non-raid? Is it cost?
>>
>
> Primarily cost, reliability (less complex hw =
> More system RAM does not help synchronous writes go much faster.
agreed, but it does make sure all the asynchronous writes are
batched and the txg isn't committed early, which would make everything
effectively synchronous. (The default batch interval is every 5 sec.)
> If you want good write performance, instead of addin
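One way to see that batching (a sketch, assuming a pool named "tank"): watch
the pool while an application does buffered, asynchronous writes, and the
writes reach the disks in bursts at roughly the txg interval:

    zpool iostat tank 1    # async writes show up as periodic bursts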
> Thus, if you have a 2GB, a 3GB, and a 5GB device in a pool,
> the pool's capacity is 3 x 2GB = 6GB
If you put the three into one raidz vdev it will be 2+2
until you replace the 2G disk with a 5G, at which point
it will be 3+3, and then when you replace the 3G with a 5G
it will be 5+5G. And if yo
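Spelling out the arithmetic (assuming a 3-disk raidz1, so 2 data disks plus
1 parity, and ignoring overhead):

    2GB + 3GB + 5GB disks:  2 x min(2,3,5) = 2 x 2GB = 4GB usable   ("2+2")
    replace the 2GB disk:   2 x min(3,5,5) = 2 x 3GB = 6GB usable   ("3+3")
    replace the 3GB disk:   2 x min(5,5,5) = 2 x 5GB = 10GB usable  ("5+5")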
On Sun, 25 May 2008, Marc Bevand wrote:
>
> Primarily cost, reliability (less complex hw = less hw that can fail),
> and serviceability (no need to rebuy the exact same raid card model
> when it fails, any SATA controller will do).
As long as the RAID is self-contained on the card, and the disks a
Kyle McDonald Egenera.COM> writes:
> Marc Bevand wrote:
> >
> > Overall, like you I am frustrated by the lack of non-RAID inexpensive
> > native PCI-E SATA controllers.
>
> Why non-raid? Is it cost?
Primarily cost, reliability (less complex hw = less hw that can fail),
and serviceability (no need
Orvar Korvar wrote:
> Ok, so I make one vdev out of 8 disks, and I combine all vdevs into one large
> zpool? Is that correct?
>
> I have an 8-port SATA card. I have 4 drives in one zpool. That is one vdev,
> right? Now I can add 4 new drives and make them into one zpool. And now I
> combine both zp
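A minimal sketch of that layout (hypothetical pool and device names), which
builds one pool from the first 4-disk raidz vdev and then grows the same pool
with a second raidz vdev instead of creating a second pool:

    zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0
    zpool add    tank raidz c1t4d0 c1t5d0 c1t6d0 c1t7d0
    zpool status tank    # shows one pool containing two raidz vdevs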
Hello, thanks for your suggestion. I tried setting zfs_arc_max to 0x30000000
(768MB, out of 3GB). The system ran for almost 45 minutes before it froze.
Here's an interesting piece of arcstat.pl output, which I noticed just as it
was passing by:
Time read miss miss% dmis dm% pmis pm% mmis
Marc Bevand wrote:
>
> Overall, like you I am frustrated by the lack of non-RAID inexpensive native
> PCI-E SATA controllers.
>
>
>
Why non-raid? Is it cost?
Personally I'm interested in a high port count RAID card, with as much
battery-backed cache RAM as possible, and that can export as man
Hello Hernan,
Friday, May 23, 2008, 6:08:34 PM, you wrote:
HF> The question is still, why does it hang the machine? Why can't I
HF> access the filesystems? Isn't it supposed to import the zpool,
HF> mount the ZFSs and then do the destroy, in background?
HF>
Try to limit ARC size to 1/4 of yo
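On Solaris 10 that limit is typically set by capping zfs_arc_max in
/etc/system and rebooting; a sketch, assuming roughly 3GB of RAM so the cap
works out to about 768MB:

    set zfs:zfs_arc_max = 0x30000000    # 768MB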