> To the OP: First off, what do you mean by "sync=disabled"???
I believe he is referring to ZIL synchronicity (PSARC/2010/108).
http://arc.opensolaris.org/caselog/PSARC/2010/108/20100401_neil.perrin
The following presentation by Robert Milkowski does an excellent job of
placing it in a larger context.
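For anyone who has not met the new property yet, a minimal illustration of how
it is applied; the dataset name "tank/data" is only a placeholder:

  zfs get sync tank/data            # show the current setting (standard by default)
  zfs set sync=disabled tank/data   # acknowledge synchronous writes before they reach stable storage
  zfs set sync=standard tank/data   # restore the default POSIX-compliant behavior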
ACARD 9010 is good enough in this aspect, if you DON'T need extremely high
IOPS...
Sorry for the typo.
Fred
> -----Original Message-----
> From: Fred Liu
> Sent: Thursday, December 23, 2010 15:30
> To: 'Erik Trimble'; Christopher George
> Cc: zfs-discuss@opensolaris.org
> Subject: RE: [zfs-discuss] Looki
ACARD 9010 is good enough in this aspect, if you need extremely high IOPS...
Fred
> -----Original Message-----
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Erik Trimble
> Sent: Thursday, December 23, 2010 14:36
> To: Christopher George
> Cc: zfs-d
> It's generally a simple thing, but requires pulling the SSD from the
> server, connecting it to either a Linux or Windows box, running
> the reformatter, then replacing the SSD. Which is a PITA.
This procedure is more commonly known as a "Secure Erase". And it
will return a Flash based SSD
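For reference, a rough sketch of the Linux-side steps using hdparm, assuming an
ATA Secure Erase capable SSD; the device name /dev/sdX and the password "p" are
placeholders, and the drive must not be security-frozen:

  hdparm -I /dev/sdX | grep -i frozen                     # must report "not frozen"
  hdparm --user-master u --security-set-pass p /dev/sdX
  hdparm --user-master u --security-erase p /dev/sdX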
On 12/22/2010 7:05 AM, Christopher George wrote:
> I'm not sure if TRIM will work with ZFS.
Neither ZFS nor the ZIL code in particular support TRIM.
> I was concerned that with trim support the SSD life and
> write throughput will get affected.
Your concerns about sustainable write performance (IOPS)
On 12/22/2010 10:04 PM, Christopher George wrote:
> How about comparing a non-battery backed ZIL to running a
> ZFS dataset with sync=disabled. Which is more risky?
Most likely, the 3.5" SSD's on-board volatile (not power protected)
memory would be small relative to the transaction group (txg) size
> How about comparing a non-battery backed ZIL to running a
> ZFS dataset with sync=disabled. Which is more risky?
Most likely, the 3.5" SSD's on-board volatile (not power protected)
memory would be small relative to the transaction group (txg) size
and thus less "risky" than sync=disabled.
Best
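A rough way to see the size of the window being compared, assuming an
OpenSolaris-era kernel where zfs_txg_timeout caps the interval between
transaction group syncs:

  echo zfs_txg_timeout::print | mdb -k   # seconds between txg syncs
  # With sync=disabled, roughly this many seconds of acknowledged writes can be
  # lost in a crash; a non-power-protected SSD slog risks only the few megabytes
  # sitting in its on-board cache.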
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Charles J. Knipe
>
> Some more information about our configuration: We're running OpenSolaris
> snv_134. ZFS is at version 22. Our disks are 15k RPM 300GB Seagate Cheetahs,
> mounted in Promi
I didn't hot swap the drive but yes, the new drive is in the same "slot" as the
old one was (i.e. using the same connector/channel on the fan out cable).
What I did was turn the system off, then boot it up after
disconnecting the physical drive that I suspected was c0t3d0. My guess was
On Tue, 21 Dec 2010, Robin Axelsson wrote:
There's nothing odd about the physical mounting of the hard drives. All drives
are firmly attached and secured in their casings, no loose connections etc.
There is some dust but not more than the hardware should be able to handle.
I replaced the hard
-----Original Message-----
From: Peter Jeremy [mailto:peter.jer...@alcatel-lucent.com]
Sent: 22 December 2010 21:17
To: Deano
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] stupid ZFS question - floating point operations
On 2010-Dec-23 04:48:19 +0800, Deano wrote:
> modern CPU are
On 21/12/2010 21:53, Jeff Bacon wrote:
So, to Phil's email - read()/write() on a ZFS-backed vnode somehow
completely bypass the page cache and depend only on the ARC? How the
heck does that happen - I thought all files were represented as vm
objects?
For most other filesystems (and oversimplify
On Wed, Dec 22, 2010 at 01:43:35PM +, Jabbar wrote:
>Hello,
>
>I was thinking of buying a couple of SSD's until I found out that Trim is
>only supported with SATA drives.
>
Yes, because TRIM is an ATA command. SATA means Serial ATA.
SCSI (SAS) drives have the "WRITE SAME" command, which
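If it helps, on a Linux box you can check whether a given SATA SSD advertises
TRIM at all, independent of filesystem support (the device name is a placeholder):

  hdparm -I /dev/sdX | grep -i trim   # look for "Data Set Management TRIM supported"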
On 2010-Dec-23 04:48:19 +0800, Deano wrote:
> modern CPU are float monsters indeed its
> likely some things would be faster if converted to use the float ALU
_Some_ modern CPUs are good at FP, a lot aren't. The SPARC T-1 was
particularly poor as it only had a single FPU. Likewise, performance in
Generally, ZFS does not use floating point.
And further, use of floating point in the kernel is exceptionally rare. The
kernel does not save floating point context automatically, which means that
code that uses floating point needs to take special care to make sure any
context from userland is
There are no floating point operations in ZFS; however, even if there were,
that wouldn't be a bad thing, as modern CPUs are float monsters. Indeed, it's
likely some things would be faster if converted to use the float ALU (note,
however, that those operations would have to account for the different properties
Thank you to everyone who replied to my question.
Apparently, the place that I had not looked were the ZFS list archives
themselves. Someone else had already asked the question, and it was
answered by Matthew Ahrens @ Sun Microsystems in this thread here.
http://mail.opensolaris.org/pipermail/zf
On 12/22/2010 11:49 AM, Tomas Ögren wrote:
On 22 December, 2010 - Jerry Kemp sent me these 1,0K bytes:
I have a coworker whose primary expertise is in another flavor of Unix.
This coworker lists floating point operations as one of ZFS's detriments.
I'm not really sure what he means specifically
If I remember correctly, Solaris, like most other operating systems, does not
save or restore the floating point registers when context switching from user
to kernel, so doing any floating point ops in the kernel would corrupt user
floating point state. This means ZFS cannot be doing any floating point
On 22/12/10 2:44 PM, Jerry Kemp wrote:
> I have a coworker whose primary expertise is in another flavor of Unix.
>
> This coworker lists floating point operations as one of ZFS's detriments.
>
Perhaps he can point you also to the equally mythical competing
filesystem which offers ZFS' advantages.
On 12/23/10 08:44 AM, Jerry Kemp wrote:
I have a coworker whose primary expertise is in another flavor of Unix.
This coworker lists floating point operations as one of ZFS's detriments.
I'm not really sure what he means specifically, or where he got this
reference from.
It sounds like your col
On 22 December, 2010 - Jerry Kemp sent me these 1,0K bytes:
> I have a coworker whose primary expertise is in another flavor of Unix.
>
> This coworker lists floating point operations as one of ZFS's detriments.
>
> I'm not really sure what he means specifically, or where he got this
> reference
I have a coworker whose primary expertise is in another flavor of Unix.
This coworker lists floating point operations as one of ZFS's detriments.
I'm not really sure what he means specifically, or where he got this
reference from.
In an effort to refute what I believe is an error or misunderstanding
> got it attached to a UPS with very conservative shut-down timing. Or
> are there other host failures aside from power a ZIL would be
> vulnerable to (system hard-locks?)?
Correct, a system hard-lock is another example...
Best regards,
Christopher George
Founder/CTO
www.ddrdrive.com
> The ZIL accelerator's requirements differ from the L2ARC's, as its very
> purpose is to guarantee *all* data written to the log can be replayed
> (on next reboot) in case of host failure.
Ah, so this would be why, say, a super-capacitor-backed SSD can be very
helpful, as it will have some backup power
I've just noticed that Dell has a 6.0.1 firmware upgrade available, at least
for my R610's they do (they are about 3 months old). Oddly enough it doesn't
show up on support.dell.com when I search using my service code, but if I check
through "System Services / Lifecycle Controller" it does find
> I actually bought a SF-1200 based OCZ Agility 2 (60G)...
> Why are these not recommended?
The OCZ Agility 2 or any SF-1200 based SSD is an excellent choice for
the L2ARC, as its on-board volatile memory does *not* need power protection;
the L2ARC contents are not required to survive a host p
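The distinction matters because the two roles are attached differently; a
minimal sketch with placeholder pool and device names:

  zpool add tank cache c1t2d0   # L2ARC: contents are disposable, no power protection needed
  zpool add tank log c1t3d0     # slog (ZIL device): must survive power loss to be trustworthy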
Hi Per,
Disk devices are used to create ZFS storage pools. Then, you create file
systems that can access all the available disk space in the storage
pool. ZFS file systems are not constrained to any physical disk in the
storage pool.
Consider that you will need to back up your data regardless o
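A minimal sketch of that layering, with placeholder pool, disk and dataset
names:

  zpool create tank mirror c0t1d0 c0t2d0   # pool built from whole-disk vdevs
  zfs create tank/home                     # file systems draw from the shared pool space
  zfs list -r tank                         # no per-filesystem partitioning required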
Hi all,
any reason why the "ZFS File Data" value reported by ::memstat is higher
than the ARC max size value?
Regards
-Pascal
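For anyone wanting to compare the two numbers being discussed, something like
this works on an OpenSolaris-era system (the arcstats kstat names are assumed):

  echo ::memstat | mdb -k        # shows the "ZFS File Data" page count
  kstat -p zfs:0:arcstats:size   # current ARC size in bytes
  kstat -p zfs:0:arcstats:c_max  # configured ARC maximum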
On Dec 22, 2010, at 09:55, Krunal Desai wrote:
I actually bought a SF-1200 based OCZ Agility 2 (60G) for use as a
ZIL/L2ARC (haven't installed it yet however, definitely jumped the gun
on this purchase...) based on some recommendations from fellow users.
Why are these not recommended? Is it perf
> I'm not sure if TRIM will work with ZFS.
Neither ZFS nor the ZIL code in particular support TRIM.
> I was concerned that with trim support the SSD life and
> write throughput will get affected.
Your concerns about sustainable write performance (IOPS)
for a Flash based SSD are valid, the result
On Dec 22, 2010, at 08:43, Jabbar wrote:
I was thinking of buying a couple of SSDs until I found out that TRIM is
only supported with SATA drives. I'm not sure if TRIM will work with ZFS.
I was concerned that with TRIM support the SSD life and write throughput
will get affected.
Doesn't anybody have any thoughts on this?
On Wed, Dec 22, 2010 at 05:43:35AM -0800, Jabbar wrote:
> Hello,
>
> I was thinking of buying a couple of SSD's until I found out that Trim is only
> supported with SATA drives. I'm not sure if TRIM will work with ZFS. I was
> concerned that with trim support the SSD life and write throughput wil
> As of yet, I have only found 3.5" models with the Sandforce 1200, which was
> not recommended on this list.
I actually bought a SF-1200 based OCZ Agility 2 (60G) for use as a
ZIL/L2ARC (haven't installed it yet however, definitely jumped the gun
on this purchase...) based on some recommendations
I can't answer any of these authoritatively(?), but have a comment:
On Wed, Dec 22, 2010 at 10:55, Per Hojmark wrote:
> 1) What's the maximum number of disk devices that can be used to construct
> filesystems?
lots.
> 2) Is there a practical limit on #1? I've seen messages where folks suggested
Hello,
I was thinking of buying a couple of SSDs until I found out that TRIM is
only supported with SATA drives. I'm not sure if TRIM will work with ZFS. I
was concerned that with TRIM support the SSD life and write throughput will
get affected.
Doesn't anybody have any thoughts on this?
On 22
1) What's the maximum number of disk devices that can be used to construct
filesystems?
2) Is there a practical limit on #1? I've seen messages where folks suggested
40 physical devices is the practical maximum. That would seem to imply a
maximum single volume size of 80TB...
3) Are vdevs hie
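On the hierarchy and sizing points, pools are built from vdevs and grow by
adding more of them, so there is no 40-disk or 80TB wall; a sketch with
placeholder device names:

  zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0
  zpool add tank raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0
  zpool list tank   # capacity is the sum across vdevs; file systems see one pool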
On 22.12.10 12:41, Pasi Kärkkäinen wrote:
On Wed, Dec 22, 2010 at 11:36:48AM +0100, Stephan Budach wrote:
Hello all,
I am shopping around for 3.5" SSDs that I can mount into my storage and
use as ZIL drives.
As of yet, I have only found 3.5" models with the Sandforce 1200, whi
On Wed, Dec 22, 2010 at 11:36:48AM +0100, Stephan Budach wrote:
>Hello all,
>
>I am shopping around for 3.5" SSDs that I can mount into my storage and
>use as ZIL drives.
>As of yet, I have only found 3.5" models with the Sandforce 1200, which
>was not recommended on this list.
We've always bought 2.5" drives and adapters for the Supermicro cradles - works
well, no issues to report here.
Normally Intel or Samsung, though we also use STEC.
---
W. A. Khushil Dep - khushil@gmail.com - 07905374843
Visit my blog at http://www.khushil.com/
On 22 December 2010 10:36, S
Hello all,
I am shopping around for 3.5" SSDs that I can mount into my storage and
use as ZIL drives.
As of yet, I have only found 3.5" models with the Sandforce 1200, which
was not recommended on this list.
Does anyone maybe know of a model that has the Sandforce 1500 and is
3.5"? Or any othe