Yup, I'm watching that card closely. No Solaris drivers yet, but hopefully
somebody will realise just how good that could be for the ZIL and work on some.
Just the 80GB $2,400 card would make a huge difference to write performance.
For use with VMware and NFS it would be a godsend.
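For anyone who wants to experiment once drivers do appear: pointing the ZIL at a
fast device just means giving the pool a separate log vdev. A rough sketch, with
made-up pool and device names, on bits new enough to support slog devices:
  # zpool add tank log c3t0d0   # dedicate the fast device to the intent log
  # zpool status tank           # it then appears under its own "logs" section
Synchronous NFS and VMware writes then land on the slog rather than the main disks.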
On Wed, 28 May 2008, Mertol Ozyoney wrote:
>
> Suppose you have a 146 GB SSD and the write-cycle limit is around 100k,
> and you can write/update data at 10 MB/sec (depending on the I/O pattern it
> could be a lot slower or a lot higher). It will take about 4 hours, or
> roughly 14,400 seconds, to fully populate the drive. Mul
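Following that arithmetic through with the numbers from the post (presumably
where the "Mul" was heading: multiply by the cycle count), treating 146 GB as
146,000 MB and ignoring wear levelling and write amplification, a quick bc
check gives:
  $ echo 'scale=1; 146000 / 10 / 3600' | bc
  4.0
  $ echo 'scale=1; 146000 / 10 * 100000 / 3600 / 24 / 365' | bc
  46.2
So roughly four hours per full overwrite, and on the order of 45-50 years of
continuous 10 MB/sec writing before the 100k cycle limit would be reached.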
To: ZFS Discuss
Subject: Re: [zfs-discuss] ZFS in S10U6 vs openSolaris 05/08
On Tue, 27 May 2008, Tim wrote:
> You're still concentrating on consumer-level drives. The STEC drives
> EMC is using, for instance, exhibit none of the behaviors you describe.
How long have you been working for STEC? ;-)
Cc: 'ZFS Discuss'
Subject: Re: [zfs-discuss] ZFS in S10U6 vs openSolaris 05/08
On Mon, 26 May 2008, Mertol Ozyoney wrote:
> It's true that NAND-based flash wears out under heavy load. Regular
> consumer-grade NAND drives will wear out the extra cells pretty rapidly.
(in
>
On May 23, 2008, at 22:21, Richard Elling wrote:
> Consider a case where you might use large, slow SATA drives (1 TByte,
> 7,200 rpm) for the main storage, and a single small, fast (36 GByte,
> 15krpm) drive for the L2ARC. This might provide a reasonable
> cost/performance trade-off.
Ooh,
On May 27, 2008, at 1:44 PM, Rob Logan wrote:
>
>> There is something more to consider with SSDs used as a cache device.
> why use SATA as the interface? perhaps
> http://www.tgdaily.com/content/view/34065/135/
> would be better? (no experience)
We are pretty happy with RAMSAN SSDs (ours is RAM
On Tue, May 27, 2008 at 12:44 PM, Rob Logan <[EMAIL PROTECTED]> wrote:
> > There is something more to consider with SSDs used as a cache device.
> why use SATA as the interface? perhaps
> http://www.tgdaily.com/content/view/34065/135/
> would be better? (no experience)
>
> "cards will start at 80
Rob Logan wrote:
> > There is something more to consider with SSDs used as a cache device.
> why use SATA as the interface? perhaps
> http://www.tgdaily.com/content/view/34065/135/
> would be better? (no experience)
>
> "cards will start at 80 GB and will scale to 320 and 640 GB next year.
> By th
> There is something more to consider with SSDs used as a cache device.
why use SATA as the interface? perhaps
http://www.tgdaily.com/content/view/34065/135/
would be better? (no experience)
"cards will start at 80 GB and will scale to 320 and 640 GB next year.
By the end of 2008, Fusion io also
There is something more to consider with SSDs used as a cache device.
STEC mentions that they obtain improved reliability by employing error
correction. The ZFS scrub operation is very good at testing
filesystem blocks for errors by reading them. Besides corrections at
the ZFS level, the SSD
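As a concrete way to exercise that point: a periodic scrub plus a status check
will show whether the device is handing back bad blocks (pool name hypothetical):
  # zpool scrub tank
  # zpool status -v tank   # check the READ/WRITE/CKSUM counters afterwards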
On Tue, 27 May 2008, Tim wrote:
> You're still concentrating on consumer-level drives. The STEC drives
> EMC is using, for instance, exhibit none of the behaviors you describe.
How long have you been working for STEC? ;-)
Looking at the specifications for STEC SSDs I see that they are very
good
On Sat, May 24, 2008 at 11:45 PM, Neil Perrin <[EMAIL PROTECTED]> wrote:
>
>
> Hugh Saunders wrote:
>> On Sat, May 24, 2008 at 4:00 PM, <[EMAIL PROTECTED]> wrote:
>>> > cache improve write performance or only reads?
>>>
>>> L2ARC cache device is for reads... for write you want
>>> Intent Log
>>
"Hugh Saunders" <[EMAIL PROTECTED]> writes:
> On Sat, May 24, 2008 at 3:21 AM, Richard Elling <[EMAIL PROTECTED]> wrote:
>> Consider a case where you might use large, slow SATA drives (1 TByte,
>> 7,200 rpm) for the main storage, and a single small, fast (36 GByte,
>> 15krpm) drive for the L2ARC.
Hugh Saunders wrote:
> On Sat, May 24, 2008 at 4:00 PM, <[EMAIL PROTECTED]> wrote:
>> > cache improve write performance or only reads?
>>
>> L2ARC cache device is for reads... for write you want
>> Intent Log
>
> Thanks for answering my question, I had seen mention of intent log
> devices, but wasn't sure of their purpose.
On Sat, May 24, 2008 at 4:00 PM, <[EMAIL PROTECTED]> wrote:
>
> > cache improve write performance or only reads?
>
> L2ARC cache device is for reads... for write you want
> Intent Log
Thanks for answering my question, I had seen mention of intent log
devices, but wasn't sure of their purpose.
> cache improve write performance or only reads?
L2ARC cache device is for reads... for write you want
Intent Log
The ZFS Intent Log (ZIL) satisfies POSIX requirements for
synchronous transactions. For instance, databases often
require their transactions to be on stable storage
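For completeness, a separate (and optionally mirrored) intent-log device can
also be specified when the pool is created; device names below are only
placeholders:
  # zpool create tank mirror c1t0d0 c1t1d0 log mirror c2t0d0 c2t1d0
  # (main storage on the c1* mirror, ZIL on a mirrored pair of fast devices)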
On Fri, May 23, 2008 at 05:26:34PM -0500, Bob Friesenhahn wrote:
> On Fri, 23 May 2008, Bill McGonigle wrote:
> > The remote-disk cache makes perfect sense. I'm curious if there are
> > measurable benefits for caching local disks as well? NAND-flash SSD
> > drives have good 'seek' and slow trans
On Sat, May 24, 2008 at 3:21 AM, Richard Elling <[EMAIL PROTECTED]> wrote:
> Consider a case where you might use large, slow SATA drives (1 TByte,
> 7,200 rpm) for the main storage, and a single small, fast (36 GByte,
> 15krpm) drive for the L2ARC. This might provide a reasonable
> cost/performance trade-off.
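To make that layout concrete, it would look roughly like this at creation time
(all device names hypothetical; the big SATA drives in a raidz, the small
15krpm drive as the cache vdev):
  # zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 cache c2t0d0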
[EMAIL PROTECTED] wrote:
> > measurable benefits for caching local disks as well? NAND-flash SSD
>
> I'm confused, the only reason I can think of making a
>
> To create a pool with cache devices, specify a "cache" vdev
> with any number of devices. For example:
>
> # zpool create pool c0d0 c1d0 cache c2d0 c3d0
> measurable benefits for caching local disks as well? NAND-flash SSD
I'm confused, the only reason I can think of making a
To create a pool with cache devices, specify a "cache" vdev
with any number of devices. For example:
# zpool create pool c0d0 c1d0 cache c2d0 c3d0
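Cache devices can also be added to, watched on, and removed from an existing
pool; a sketch with hypothetical names:
  # zpool add pool cache c4d0
  # zpool iostat -v pool 5   # the cache vdev gets its own line of read/write stats
  # zpool remove pool c4d0   # cache devices (unlike normal vdevs) can be removed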
On Fri, 23 May 2008, Bill McGonigle wrote:
> The remote-disk cache makes perfect sense. I'm curious if there are
> measurable benefits for caching local disks as well? NAND-flash SSD
> drives have good 'seek' and slow transfer, IIRC, but that might
> still be useful for lots of small reads where
On May 22, 2008, at 19:54, Richard Elling wrote:
> The Adaptive Replacement Cache
> (ARC) uses main memory as a read cache. But sometimes
> people want high performance, but don't want to spend money
> on main memory. So, the Level-2 ARC can be placed on a
> block device, such as a fast [solid state disk].
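On bits that have the L2ARC, the fill level of both caches can be watched
through kstat; the statistic names below are the ones in the zfs arcstats
kstat and are worth double-checking on your build:
  $ kstat -p zfs:0:arcstats:size      # bytes held in the in-memory ARC
  $ kstat -p zfs:0:arcstats:l2_size   # bytes held on the L2ARC cache device(s)
  $ kstat -p zfs:0:arcstats:l2_hits zfs:0:arcstats:l2_misses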
Jesus Cea wrote:
>
> Robin Guo wrote:
> | At least, s10u6 will contain L2ARC cache, ZFS as root filesystem, etc..
>
> Any details about this L2ARC thing? I see some references on Google (a
> cache device) but no in-depth description.
>
>
Sure.
Robin Guo wrote:
| At least, s10u6 will contain L2ARC cache, ZFS as root filesystem, etc..
Any details about this L2ARC thing? I see some references on Google (a
cache device) but no in-depth description.
--
Jesus Cea Avion
| So, from a feature perspective it looks like S10U6 is going to be in
| pretty good shape ZFS-wise. If only someone could speak to (perhaps
| under the cloak of anonymity ;) ) the timing side :).
For what it's worth, back in January or so we were told that S10U6 was
scheduled for August. Given t
Original Message
Subject: Re: [zfs-discuss] ZFS in S10U6 vs openSolaris 05/08
From: Daryl Doami <[EMAIL PROTECTED]>
To: Paul B. Henson <[EMAIL PROTECTED]>
CC: zfs-discuss@opensolaris.org
Date: Fri May 16 22:59:13 2008
> Hi Paul,
>
> I believe th
Hi Paul,
I believe the goal is to come out with new Solaris updates every 4-6
months; they are sometimes known as quarterly updates.
Regards.
Original Message
Subject: Re: [zfs-discuss] ZFS in S10U6 vs openSolaris 05/08
From: Paul B. Henson <[EMAIL PROTECTED]>
To: Rob
On Fri, May 16, 2008 at 03:12:02PM -0700, Zlotnick Fred wrote:
> The issue with CIFS is not just complexity; it's the total amount
> of incompatible change in the kernel that we had to make in order
> to make the CIFS protocol a first class citizen in Solaris. This
> includes changes in the VFS l
The issue with CIFS is not just complexity; it's the total amount
of incompatible change in the kernel that we had to make in order
to make the CIFS protocol a first class citizen in Solaris. This
includes changes in the VFS layer which would break all S10 file
systems. So in a very real sense C
On Thu, 15 May 2008, Robin Guo wrote:
> Most features and bug fixes up through Nevada build 87 (or 88?) will be
> backported into s10u6. It's about the same (I mean from an outside view, not
> inside) as openSolaris 05/08, but certainly some other features such as
> CIFS have no plan to be backported to s10u6
Robin Guo wrote:
> Hi, Brian
>
> Do you mean a stripe across multiple disks, or raidz? I'm afraid it's
> still single disks or mirrors only. If opensolaris starts a new project
> for this kind of feature, it'll be backported to s10u* eventually, but
> that needs some time; it sounds like there's no possibility in U6, I think.
Hi, Brian
Do you mean a stripe across multiple disks, or raidz? I'm afraid it's
still single disks or mirrors only. If opensolaris starts a new project
for this kind of feature, it'll be backported to s10u* eventually, but
that needs some time; it sounds like there's no possibility in U6, I think.
Brian H
On Fri, May 16, 2008 at 09:30:27AM +0800, Robin Guo wrote:
> Hi, Paul
>
> At least, s10u6 will contain L2ARC cache, ZFS as root filesystem, etc..
As far as root zfs goes, are there any plans to support more than just single
disks or mirrors in U6, or will that be for a later date?
-brian
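For what it's worth, getting redundancy on a ZFS root today means attaching a
second disk to form a mirror after installation; slice names here are
hypothetical:
  # zpool attach rpool c1t0d0s0 c1t1d0s0
  # (boot blocks still need installing on the new disk: installboot on SPARC,
  #  installgrub on x86)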
Hi, Krzys,
Definitely; the ZFS version in s10u6_01 is already 10,
and I don't expect it'll be downgraded :)
U5 only includes bug fixes, without the big ZFS features included;
that's a pity, but anyway, s10u6 will come, sooner or later.
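The versions in play are easy to check from the running system, and the -v
listings show which features each on-disk version brings (pool name
hypothetical):
  $ zpool get version tank   # on-disk version of this particular pool
  $ zpool upgrade -v         # pool versions this kernel supports and their features
  $ zfs upgrade -v           # likewise for filesystem versions (on newer bits)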
Krzys wrote:
> I was hoping that in U5 at least ZFS version 5 would be in
I was hoping that in U5 at least ZFS version 5 would be included but it was
not,
do you think that will be in U6?
On Fri, 16 May 2008, Robin Guo wrote:
> Hi, Paul
>
> Most features and bug fixes up through Nevada build 87 (or 88?) will be
> backported into s10u6.
> It's about the same (I mean from o
Hi, Paul
Most features and bug fixes up through Nevada build 87 (or 88?) will be
backported into s10u6.
It's about the same (I mean from an outside view, not inside) as
openSolaris 05/08,
but certainly some other features such as CIFS have no plan to be
backported to s10u6 yet, so ZFS
will be fully ready
We've been working on a prototype of a ZFS file server for a while now,
based on Solaris 10. Now that official support is available for
openSolaris, we are looking into that as a possible option as well.
openSolaris definitely has a greater feature set, but is still a bit rough
around the edges fo