Tim <tcsac.net> writes:
>
> So we're still stuck in the same place we were a year ago. No high-port-count
> PCI-E compatible non-RAID SATA cards. You'd think with all the demand
> SOMEONE would have stepped up to the plate by now. Marvell, c'mon ;)
Here is a 6-port SATA PCI-Express x1 controller for
On Sun, May 25, 2008 at 1:27 AM, Erik Trimble <[EMAIL PROTECTED]> wrote:
> If there is more than one device in a vdev, the vdev's capacity is
> determined by its smallest device. Thus, if you have a 2GB, a 3GB, and a
> 5GB device in a pool, the pool's capacity is 3 x 2GB = 6GB, as ZFS will
> only do f
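For illustration only (device names below are made up), that behaviour shows up like this when the three disks go into a single raidz vdev:

  # 2GB, 3GB and 5GB disks in one raidz1 vdev -- each disk only
  # contributes as much space as the smallest one
  zpool create tank raidz c1t0d0 c1t1d0 c1t2d0
  zpool list tank    # reported size reflects 3 x 2GB, less parity overhead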
Steve Hull wrote:
> Sooo... I've been reading a lot in various places. The conclusion I've
> drawn is this:
>
> I can create raidz vdevs in groups of 3 disks and add them to my zpool to be
> protected against 1 drive failure. This is the current status of "growing
> protected space" in raidz
Hugh Saunders wrote:
> On Sat, May 24, 2008 at 4:00 PM, <[EMAIL PROTECTED]> wrote:
>> > cache improve write performance or only reads?
>>
>> The L2ARC cache device is for reads... for writes you want a separate
>> Intent Log device.
>
> Thanks for answering my question; I had seen mention of intent log
> devices, b
Sooo... I've been reading a lot in various places. The conclusion I've drawn
is this:
I can create raidz vdevs in groups of 3 disks and add them to my zpool to be
protected against 1 drive failure. This is the current status of "growing
protected space" in raidz. Am I correct here?
Thi
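As a rough sketch (pool and device names are hypothetical), the layout being asked about would be built like this:

  # start with one 3-disk raidz1 vdev
  zpool create tank raidz c1t0d0 c1t1d0 c1t2d0
  # later, grow protected capacity by adding another 3-disk raidz1 vdev
  zpool add tank raidz c1t3d0 c1t4d0 c1t5d0
  zpool status tank    # shows two raidz1 vdevs, each tolerating one disk failure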
I like the link you sent along... They did a nice job with that.
(but it does show that mixing and matching vastly different drive-sizes
is not exactly optimal...)
http://www.drobo.com/drobolator/index.html
Doing something like this for ZFS, allowing people to create pools by
mixing/match
OK so in my (admittedly basic) understanding of raidz and raidz2, these
technologies are very similar to raid5 and raid6. BUT if you set up one disk
as a raidz vdev, you (obviously) can't maintain data after a disk failure, but
you are protected against data corruption that is NOT a result of d
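For what it's worth, here is a minimal sketch (hypothetical pool and device names) of how that corruption detection surfaces on a pool with no redundancy -- ZFS can report checksum errors it finds, though it cannot repair them without a redundant copy:

  zpool create tank c1t0d0     # single-disk pool, no redundancy
  zpool scrub tank             # re-read every block and verify its checksum
  zpool status -v tank         # lists any files with unrecoverable checksum errors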
On Sat, May 24, 2008 at 4:00 PM, <[EMAIL PROTECTED]> wrote:
>
> > cache improve write performance or only reads?
>
> The L2ARC cache device is for reads... for writes you want a separate
> Intent Log device.
Thanks for answering my question; I had seen mention of intent log
devices, but wasn't sure of their purpose.
Hi Steve,
On 24.05.2008, at 10:17, [EMAIL PROTECTED] wrote:
> ZFS: A general question
> To: zfs-discuss@opensolaris.org
>
> Hello everyone,
>
> I'm new to ZFS and OpenSolaris, and I've been reading the docs on
> ZFS (the
> cache improve write performance or only reads?
The L2ARC cache device is for reads... for writes you want a separate
Intent Log device.
The ZFS Intent Log (ZIL) satisfies POSIX requirements for
synchronous transactions. For instance, databases often
require their transactions to be on st
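A rough sketch of adding a dedicated log device (hypothetical names; a separate slog only helps synchronous writes, it is not a general write cache):

  # dedicate a fast device to the intent log (slog) for synchronous writes
  zpool add tank log c2t0d0
  zpool status tank    # the device appears under a separate "logs" section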
> Anyway you can add mirrored, [...], raidz, or raidz2 arrays to the pool,
> right?
correct.
> add a disk or two to increase your protected storage capacity.
if it's a protected vdev, like a mirror or raidz, sure... one can
force-add a single disk, but then the pool isn't protected until
you a
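For illustration (hypothetical names), this is the kind of force-add being described -- once a bare disk is in the pool, losing that disk loses the whole pool:

  # tank already contains a raidz vdev; adding a plain disk needs -f
  # because the replication levels don't match
  zpool add -f tank c3t0d0
  zpool status tank    # shows the new single-disk vdev alongside the raidz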
On Sat, May 24, 2008 at 3:12 AM, Steve Hull <[EMAIL PROTECTED]> wrote:
> Hello everyone,
>
> I'm new to ZFS and OpenSolaris, and I've been reading the docs on ZFS (the
> pdf "The Last Word on Filesystems" and wikipedia of course), and I'm trying
> to understand something.
>
> So ZFS is self-healin
On Fri, May 23, 2008 at 05:26:34PM -0500, Bob Friesenhahn wrote:
> On Fri, 23 May 2008, Bill McGonigle wrote:
> > The remote-disk cache makes perfect sense. I'm curious if there are
> > measurable benefits for caching local disks as well? NAND-flash SSD
> > drives have good 'seek' and slow trans
So, I think I've narrowed it down to two things:
* ZFS tries to destroy the dataset every time it's called, because the
destroy didn't finish the last time
* In this process, ZFS makes the kernel run out of memory and die
So I thought of two options, but I'm not sure if I'm right:
Option 1: "D
No, this is a 64-bit system (athlon64) with 64-bit kernel of course.
On Sat, May 24, 2008 at 3:21 AM, Richard Elling <[EMAIL PROTECTED]> wrote:
> Consider a case where you might use large, slow SATA drives (1 TByte,
> 7,200 rpm)
> for the main storage, and a single small, fast (36 GByte, 15krpm) drive
> for the
> L2ARC. This might provide a reasonable cost/performa
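A sketch of that kind of configuration (device names are hypothetical):

  # main storage: raidz of large, slow 7,200 rpm SATA drives
  zpool create tank raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0
  # small, fast 15k rpm drive as an L2ARC read cache
  zpool add tank cache c1t0d0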
Hello everyone,
I'm new to ZFS and OpenSolaris, and I've been reading the docs on ZFS (the pdf
"The Last Word on Filesystems" and wikipedia of course), and I'm trying to
understand something.
So ZFS is self-healing, correct? This is accomplished via parity and/or
metadata of some sort on the