> From: Bob Friesenhahn [mailto:bfrie...@simple.dallas.tx.us]
> Sent: Tuesday, July 12, 2011 9:58 AM
>
> > You know what? A year ago I would have said dedup still wasn't stable
> > enough for production. Now I would say it's plenty stable enough... But it
> > needs performance enhancement before
Gary Mills wrote:
On Sun, Jul 10, 2011 at 11:16:02PM +0700, Fajar A. Nugraha wrote:
On Sun, Jul 10, 2011 at 10:10 PM, Gary Mills wrote:
The `lofiadm' man page describes how to export a file as a block
device and then use `mkfs -F pcfs' to create a FAT filesystem on it.
Can't I do the
I am now using S11E and an OCZ Vertex 3, 240GB SSD disk. I am using it in a SATA
2 port (not the new SATA 6gbps).
The PC seems to work better now; the worst lag is gone. For instance, I am
using Sunray, and if my girlfriend is using the PC and I am doing
bit-torrenting, the PC could lock up f
On Sun, Jul 10, 2011 at 11:16:02PM +0700, Fajar A. Nugraha wrote:
> On Sun, Jul 10, 2011 at 10:10 PM, Gary Mills wrote:
> > The `lofiadm' man page describes how to export a file as a block
> > device and then use `mkfs -F pcfs' to create a FAT filesystem on it.
> >
> > Can't I do the same thing by
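For reference, a minimal sketch of that lofiadm workflow (file name, size and
mount point are made up for illustration; size= is in 512-byte sectors, and the
/dev/lofi/1 device name depends on what lofiadm assigns):

  # mkfile 10m /export/test/fatfile
  # lofiadm -a /export/test/fatfile
  /dev/lofi/1
  # mkfs -F pcfs -o nofdisk,size=20480 /dev/rlofi/1
  # mount -F pcfs /dev/lofi/1 /mnt

Note that mkfs is pointed at the raw device (/dev/rlofi/1) while mount uses the
block device (/dev/lofi/1).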
On 07/13/11 12:04 AM, Ciaran Cummins wrote:
Hi, we had a server that lost connection to a fiber-attached disk array where
the data LUNs were housed, due to a 3510 power fault. After the connection was
restored, a lot of the zpool status output had these permanent errors listed as
per below. I checked the files in quest
2011-07-12 23:14, Eric Sproul wrote:
So finding drives that keep more space in reserve is key to getting
consistent performance under ZFS.
I think I've read in a number of early SSD reviews
(possibly regarding Intel devices - not certain now)
that the vendor provided some low-level formatting
t
On Tue, Jul 12, 2011 at 12:14 PM, Eric Sproul wrote:
> I see, thanks for that explanation. So finding drives that keep more
> space in reserve is key to getting consistent performance under ZFS.
More spare area might give you more performance, but the big
difference is the lifetime of the device
On Tue, Jul 12, 2011 at 1:35 PM, Brandon High wrote:
> Most "enterprise" SSDs use something like 30% for spare area. So a
> drive with 128MiB (base 2) of flash will have 100MB (base 10) of
> available storage. A consumer level drive will have ~ 6% spare, or
> 128MiB of flash and 128MB of available
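To spell out the arithmetic behind those figures (a worked example with assumed
GiB/GB capacities for a typical drive of that era; the ratio is the same
whichever unit prefix you read the quote with): 128 GiB of raw flash is about
137.4 GB, so exposing 100 GB leaves roughly 27% in reserve, while exposing
128 GB leaves only about 7%.

  $ echo 'scale=3; raw=128*2^30/10^9; (raw-100)/raw; (raw-128)/raw' | bc -l
  .272
  .068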
On Tue, Jul 12, 2011 at 7:41 AM, Eric Sproul wrote:
> But that's exactly the problem-- ZFS being copy-on-write will
> eventually have written to all of the available LBA addresses on the
> drive, regardless of how much live data exists. It's the rate of
> change, in other words, rather than the a
FYI - virtually all non-super-low-end SSDs are already significantly
over-provisioned, for GC and scratch use inside the controller.
In fact, the only difference between the OCZ "extended" models and the
non-extended models (e.g. Vertex 2 50G (OCZSSD2-2VTX50G) and Vertex 2
Extended 60G (OCZSSD2-2V
It is hard to say whether 90% or 80%. The SSD has already reserved
over-provisioned space for garbage collection and wear leveling. The OS level
only knows file LBAs, not the physical LBA mapping to flash pages/blocks.
Uberblock updates and COW from ZFS will use a new page/block each time. A TRIM
command fr
I think high end SSDs, like those from Pliant, use a significant amount of
"over allocation", and internal remapping and internal COW, so that they can
automatically garbage collect when they need to, without TRIM. This only works
if the drive has enough extra free space that it knows about (be
On Tue, 12 Jul 2011, Eric Sproul wrote:
Now, others have hinted that certain controllers are better than
others in the absence of TRIM, but I don't see how GC could know what
blocks are available to be erased without information from the OS.
Drives which keep spare space in reserve (as any res
On Tue, Jul 12, 2011 at 1:06 AM, Brandon High wrote:
> On Mon, Jul 11, 2011 at 7:03 AM, Eric Sproul wrote:
>> Interesting-- what is the suspected impact of not having TRIM support?
>
> There shouldn't be much, since zfs isn't changing data in place. Any
> drive with reasonable garbage collection
On Tue, 12 Jul 2011, Edward Ned Harvey wrote:
You know what? A year ago I would have said dedup still wasn't stable
enough for production. Now I would say it's plenty stable enough... But it
needs performance enhancement before it's truly useful for most cases.
What has changed for you to c
Hi, we had a server that lost connection to a fiber-attached disk array where
the data LUNs were housed, due to a 3510 power fault. After the connection was
restored, a lot of the zpool status output had these permanent errors listed as
per below. I checked the files in question and as far as I could see they were present
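In case it helps anyone hitting the same thing, a rough sketch of the usual
checklist after a transient outage like this (the pool name is just a
placeholder):

  # zpool status -v data     (lists the files flagged with permanent errors)
  # zpool clear data         (reset the error counters once the LUNs are back)
  # zpool scrub data         (re-verify every block in the pool)
  # zpool status -v data     (errors that no longer reproduce should be gone
                              once the scrub completes)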
On Mon, 11 Jul 2011, Brett wrote:
1) to try FreeBSD as an alternative OS, hoping it has more recently
updated drivers to support the SATA controllers. According to the
ZFS wiki, FreeBSD 8.2 supports zpool version 28. I have a concern
that when I updated the old (fried) server to sol11exp it up
> You and I seem to have different interpretations of the
> empirical "2x" soft-requirement to make dedup worthwhile.
Well, until recently I had little interpretation for it at all, so your
approach may be better.
I hope that authors of the requirement statement would step
forward and explain
This dedup discussion (and my own bad experience) has also
left me with another grim thought: some time ago sparse-root
zone support was ripped out of OpenSolaris.
Among the published rationales were transition to IPS and the
assumption that most people used them to save on disk space
(notion ab
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Jim Klimov
>
> By the way, did you estimate how much is dedup's overhead
> in terms of metadata blocks? For example it was often said
> on the list that you shouldn't bother with dedup unless
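For anyone who wants a concrete number before turning dedup on, a sketch of one
way to estimate it (pool name is a placeholder): zdb can simulate dedup against
the data already in a pool and print the DDT histogram, from which both the
expected dedup ratio and the metadata overhead can be read off.

  # zdb -S tank     (simulates dedup; prints a DDT histogram and the ratio)
  # zdb -DD tank    (on a pool with dedup already enabled: shows the actual
                     DDT entry counts and on-disk/in-core sizes)

Each DDT entry costs on the order of a few hundred bytes of ARC, which is where
the "don't bother unless..." rules of thumb come from.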
On Tue, Jul 12, 2011 at 6:18 PM, Jim Klimov wrote:
> 2011-07-12 9:06, Brandon High wrote:
>>
>> On Mon, Jul 11, 2011 at 7:03 AM, Eric Sproul wrote:
>>>
>>> Interesting-- what is the suspected impact of not having TRIM support?
>>
>> There shouldn't be much, since zfs isn't changing data in place.
Well, actually you've scored a hit on both ideas I had after reading the
question ;)
One more idea though: is it possible to change the disk controller mode in
BIOS i.e. to a generic IDE? Hopefully that might work, even if
sub-optimal...
AFAIK FreeBSD 8.x is limited to "stable" ZFSv15, and "e
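If it gets that far, a quick way to tell whether the pool would even import
under FreeBSD's ZFS (pool name from the thread; commands assumed to be run on
each box) is to compare the pool's version with what the target host supports:

  # zpool get version data     (version of the existing pool)
  # zpool upgrade -v           (highest version this host's ZFS understands)

A pool that was created or upgraded under sol11exp will typically report a
newer version than an older FreeBSD release lists, in which case it simply
won't import there.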
Hi Folks,
Situation :- x86-based Solaris 11 Express server with 2 pools (rpool / data)
got fried. I need to recover the raidz pool "data", which consists of 5 x 1TB
SATA drives. Have individually checked the disks with the Seagate diag tool; they are
all physically ok.
Issue :- new sandybridge based x8
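Assuming the new controller presents all five disks, the recovery itself is
usually just an import (a sketch; -f is needed because the pool was never
exported from the fried host):

  # zpool import               (scan attached disks for importable pools)
  # zpool import -f data       (force-import "data" on the new box)
  # zpool status -v data       (check that all five raidz members are ONLINE)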
2011-07-09 20:04, Edward Ned Harvey wrote:
--- Performance gain:
Unfortunately there was only one area that I found any performance
gain. When you read back duplicate data that was previously written
with dedup, then you get a lot more cache hits, and as a result, the
reads go faster. Unf
2011-07-12 9:06, Brandon High wrote:
On Mon, Jul 11, 2011 at 7:03 AM, Eric Sproul wrote:
Interesting-- what is the suspected impact of not having TRIM support?
There shouldn't be much, since zfs isn't changing data in place. Any
drive with reasonable garbage collection (which is pretty much
ev