On Wed, Dec 30, 2009 at 9:35 PM, Ross Walker wrote:
> On Dec 30, 2009, at 11:55 PM, "Steffen Plotner"
> wrote:
>
> Hello,
>
I was doing performance testing, validating zvol performance in
particular, and found zvol write performance to be slow, ~35-44 MB/s at
1 MB blocksize writes. I th
Erik Trimble wrote:
Ragnar Sundblad wrote:
Yes, there is something to worry about, as you can only
erase flash in large pages - you cannot erase just the
free data blocks that are on the free list.
I'm not sure that SSDs actually _have_ to erase - they just overwrite
anything there wi
Ragnar Sundblad wrote:
On 3 jan 2010, at 04.19, Erik Trimble wrote:
Let's say I have 4k blocks, grouped into a 128k page. That is, the SSD's
fundamental minimum unit size is 4k, but the minimum WRITE size is 128k. Thus,
32 blocks in a page.
Do you know of SSD disks that have a minim
On 3 jan 2010, at 06.07, Ragnar Sundblad wrote:
> (I don't think they typically merge pages, I believe they rather
> just pick pages with some freed blocks, copies the active blocks
> to the "end" of the disk, and erases the page.)
(And of course you implement wear leveling with the same
mechani
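The copy-forward-and-erase scheme Ragnar describes is easy to model. Here is a toy sketch in Python, using the thread's 4 KiB block / 128 KiB page figures (an illustration only, not any vendor's actual flash translation layer):

    # Toy model of flash garbage collection: copy a page's live
    # blocks forward to a spare page, then erase the victim. With
    # 4 KiB blocks in a 128 KiB erase page, that is 32 slots/page.
    BLOCKS_PER_PAGE = (128 * 1024) // (4 * 1024)  # 32

    class Page:
        def __init__(self):
            self.live = set()       # slots still holding valid data
            self.erase_count = 0    # wear counter for this page

    def collect(pages, spare):
        """Reclaim one page: move its live blocks out, erase it."""
        # Prefer the page with the fewest live blocks (least copying),
        # breaking ties on erase count - which is wear leveling done
        # with the same mechanism, as noted above.
        victim = min(pages, key=lambda p: (len(p.live), p.erase_count))
        moved = len(victim.live)
        spare.live |= victim.live   # copy live blocks to the "end"
        victim.live.clear()
        victim.erase_count += 1
        return moved                # extra copies = write amplification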
On 3 jan 2010, at 04.19, Erik Trimble wrote:
> Ragnar Sundblad wrote:
>> On 2 jan 2010, at 22.49, Erik Trimble wrote:
>>
>>
>>> Ragnar Sundblad wrote:
>>>
On 2 jan 2010, at 13.10, Erik Trimble wrote
> Joerg Schilling wrote:
> the TRIM command is what is intended f
-----Original Message-----
From: Ross Walker [mailto:rswwal...@gmail.com]
Sent: Thu 12/31/2009 12:35 AM
To: Steffen Plotner
Cc:
Subject: Re: [zfs-discuss] zvol (slow) vs file (fast) performance snv_130
Been there.
ZVOLs were changed a while ago to make each operation synchronous so as to provide
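The throughput cost of making every operation synchronous is easy to demonstrate from user space. A rough Python sketch; the path is a placeholder, and real numbers depend entirely on the hardware, pool, and ZIL configuration:

    # Rough illustration of why per-operation synchronous writes are
    # slow: an O_DSYNC write must reach stable storage before it
    # returns, while buffered writes can be coalesced. The path is a
    # placeholder; results vary wildly with hardware and pool setup.
    import os, time

    def mb_per_sec(path, flags, count=64, size=1 << 20):
        fd = os.open(path, os.O_WRONLY | os.O_CREAT | flags, 0o600)
        buf = b"\0" * size
        t0 = time.time()
        for _ in range(count):
            os.write(fd, buf)
        os.close(fd)
        return count * size / (time.time() - t0) / 1e6

    print("buffered:", mb_per_sec("/tank/testfile", 0))
    print("O_DSYNC: ", mb_per_sec("/tank/testfile", os.O_DSYNC))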
On Sat, Jan 2, 2010 at 9:45 PM, David Magda wrote:
> On Jan 2, 2010, at 20:51, Tim Cook wrote:
>
>> Apple users not complaining is more proof of them having
>> not only drunk the Kool-Aid but also bathed in it than of them knowing any
>> limitations of what they have today. This coming from someone w
David Magda wrote:
On Jan 2, 2010, at 16:49, Erik Trimble wrote:
My argument is that the OS has a far better view of the whole data
picture, and access to much higher performing caches (i.e.
RAM/registers) than the SSD, so not only can the OS make far better
decisions about the data and how (
Ragnar Sundblad wrote:
On 2 jan 2010, at 22.49, Erik Trimble wrote:
Ragnar Sundblad wrote:
On 2 jan 2010, at 13.10, Erik Trimble wrote
Joerg Schilling wrote:
the TRIM command is what is intended for an OS to notify the SSD as to which
blocks are deleted/erased, so the SSD's
On Jan 2, 2010, at 1:47 AM, Andras Spitzer wrote:
Mike,
As far as I know only Hitachi is using such a huge chunk size :
"So each vendor’s implementation of TP uses a different block size.
HDS use 42MB on the USP, EMC use 768KB on DMX, IBM allow a variable
size from 32KB to 256KB on the SVC
On Jan 2, 2010, at 16:49, Erik Trimble wrote:
My argument is that the OS has a far better view of the whole data
picture, and access to much higher performing caches (i.e. RAM/
registers) than the SSD, so not only can the OS make far better
decisions about the data and how (and how much of)
On Jan 2, 2010, at 20:51, Tim Cook wrote:
Apple users not complaining is more proof of them having
not only drunk the Kool-Aid but also bathed in it than of them knowing any
limitations of what they have today. This coming from someone with a
MacBook Pro sitting in the other room.
Apple users not
On Saturday, January 2, 2010, Bob Friesenhahn
wrote:
> On Sat, 2 Jan 2010, David Magda wrote:
>
>
> Apple is (sadly?) probably developing their own new file system as well.
>
>
> I assume that you are talking about developing a filesystem design more
> suitable for the iNetbook and the iPhone?
>
On Sat, 2 Jan 2010, David Magda wrote:
Apple is (sadly?) probably developing their own new file system as well.
I assume that you are talking about developing a filesystem design
more suitable for the iNetbook and the iPhone?
Hardly any Apple users are complaining about the advanced filesyt
On Sat, Jan 2, 2010 at 5:40 PM, Tim Cook wrote:
>
>
> On Fri, Jan 1, 2010 at 8:31 PM, Erik Trimble wrote:
>>
>> Bob Friesenhahn wrote:
>>>
>>> On Fri, 1 Jan 2010, Al Hopper wrote:
>>>
>>>> Interesting article - rumor has it that this is the same controller
>>>> that Seagate will use in its upcomi
On Jan 2, 2010, at 19:44, Erik Trimble wrote:
I do think the market is slightly larger: Hitachi and EMC storage
arrays/big SAN controllers, plus all Linux boxes once Btrfs
actually matures enough to be usable. I don't see MSFT making any
NTFS changes to help here, but they are doing some r
On 2 jan 2010, at 22.49, Erik Trimble wrote:
> Ragnar Sundblad wrote:
>> On 2 jan 2010, at 13.10, Erik Trimble wrote
>>> Joerg Schilling wrote:
>>>the TRIM command is what is intended for an OS to notify the SSD as to
>>> which blocks are deleted/erased, so the SSD's internal free list can b
Tim Cook wrote:
While I'm sure to offend someone, it must be stated. That's not going
to happen for the simple fact that there's all of two vendors that
could utilize it, both niche (in relative terms). NetApp and Sun.
Why would SSD MFGs waste their time building drives to sell for less
m
On Fri, Jan 1, 2010 at 8:31 PM, Erik Trimble wrote:
> Bob Friesenhahn wrote:
>
>> On Fri, 1 Jan 2010, Al Hopper wrote:
>>
>>> Interesting article - rumor has it that this is the same controller
>>> that Seagate will use in its upcoming enterprise level SSDs:
>>>
>>> http://anandtech.com/storage/s
> Hey Markus,
>
> Thanks for the suggestion, but as stated in the thread, I am booting using
> "-s -kv -m
> verbose" and deleting the cache file was one of the first troubleshooting
> steps we and
> the others affected did. The other problem is that we were all starting an
> iostat at
>
Ragnar Sundblad wrote:
On 2 jan 2010, at 13.10, Erik Trimble wrote
Joerg Schilling wrote:
the TRIM command is what is intended for an OS to notify the SSD as to which blocks are deleted/erased, so the SSD's internal free list can be updated (that is, it allows formerly-in-use blocks to be m
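In other words, TRIM lets the drive drop deleted blocks from its live map. A minimal sketch of the idea in Python (not a real FTL, just the bookkeeping):

    # Minimal sketch of the bookkeeping TRIM enables: the FTL maps
    # logical blocks to flash locations, and a TRIMmed block simply
    # leaves the map, so garbage collection never copies its stale
    # contents forward.
    live_map = {}                       # LBA -> (page, slot)

    def write(lba, page, slot):
        live_map[lba] = (page, slot)    # each write goes to a new slot

    def trim(lbas):
        for lba in lbas:
            live_map.pop(lba, None)     # deleted block joins free list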
Joerg Schilling wrote:
Erik Trimble wrote:
From ZFS's standpoint, the optimal configuration would be for the SSD
to inform ZFS as to its PAGE size, and ZFS would use this as the
fundamental BLOCK size for that device (i.e. all writes are in integer
It seems that a command to retr
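If such a command did exist, the filesystem-side arithmetic would be straightforward. A hypothetical sketch in Python, using the 128 KiB page size discussed earlier in the thread:

    # Hypothetical: round a write out to whole flash pages, so the
    # device never has to read-modify-write a partially touched page.
    # No standard command reported the page size at the time.
    def page_aligned(offset, length, page=128 * 1024):
        start = (offset // page) * page              # round down
        end = -(-(offset + length) // page) * page   # round up
        return start, end - start

    # A 5 KiB write at offset 126 KiB straddles a page boundary and
    # so touches two 128 KiB pages:
    print(page_aligned(126 * 1024, 5 * 1024))        # (0, 262144)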
Be sure to also update to the latest dev b130 release, as that also
helps with a smoother scheduling class for the zfs threads. If the
upgrade breaks anything, you can always just boot back into the old
environment before the upgrade.
Regards,
> That's the thing, the drive lights aren't blinking,
> but I was thinking maybe the writes are going so slowly
> that it's possible they aren't registering. And since
> I can't keep a running iostat, I can't tell if
> anything is going on. I can, however, get into the
> KMDB. Is there something in th
> If the pool isn't rpool you might want to boot into
> single-user mode (-s after the kernel parameters on boot),
> remove /etc/zfs/zpool.cache and then reboot.
> After that you can simply ssh into the box and watch
> iostat while the import runs.
>
> Yours
> Markus Kovero
>
On Sat, Jan 2, 2010 at 13:10, Markus Kovero wrote:
> If the pool isn't rpool you might want to boot into single-user mode (-s after
> the kernel parameters on boot), remove /etc/zfs/zpool.cache and then reboot.
> After that you can simply ssh into the box and watch iostat while the import runs.
>
Wow, it's utterly p
> Richard Elling wrote:
> Perhaps I am not being clear. If a disk is really dead, then
> there are several different failure modes that can be responsible.
> For example, if a disk does not respond to selection, then it
> is diagnosed as failed very quickly. But that is not the TLER
> case. The T
Thanks for this thread! I was just coming here to discuss this very same
problem. I'm running 2009.06 on a Q6600 with 8GB of RAM. I have a Windows
system writing multiple OTA HD video streams via CIFS to the 2009.06 system
running Samba.
I then have multiple clients reading back other HD vid
On Sat, Jan 2, 2010 at 12:25 PM, Bob Friesenhahn <bfrie...@simple.dallas.tx.us> wrote:
> On Sat, 2 Jan 2010, Tim Cook wrote:
>
>>
>> Nope, on import it will scan all the disks for ZFS pools. It doesn't care
>> about the physical device names changing.
>>
>
> It does seem to care after the pool h
On Sat, 2 Jan 2010, Tim Cook wrote:
Nope, on import it will scan all the disks for ZFS pools. It
doesn't care about the physical device names changing.
It does seem to care after the pool has been imported. A few people
have been bitten by hardware/BIOS/firmware updates which somehow change
On 2 jan 2010, at 13.10, Erik Trimble wrote:
> Joerg Schilling wrote:
>> Ragnar Sundblad wrote:
>>
>>
>>> On 1 jan 2010, at 17.28, David Magda wrote:
>>>
>>
>>
>>>> Don't really see how things are either hardware specific or dependent.
>>> The inner workings of an SSD flash d
On 2 jan 2010, at 12.43, Joerg Schilling wrote:
> Ragnar Sundblad wrote:
>
>> I certainly agree, but there still isn't much they can do about
>> the WORM-like properties of flash chips, where reading is pretty
>> fast, writing is not too bad, but erasing is very slow and must be
>> done in pretty
On Sat, Jan 2, 2010 at 7:40 AM, Thomas Burgess wrote:
> I'm moving from FreeBSD to OpenSolaris in the next week or so (when the
> rest of my upgrade purchase arrives)
>
> One thing i'm curious about is whether or not ZFS cares about changing
> device names.
>
> In FreeBSD I always used glabel to
I'm moving from FreeBSD to OpenSolaris in the next week or so (when the rest
of my upgrade purchase arrives)
One thing i'm curious about is whether or not ZFS cares about changing
device names.
In FreeBSD I always used glabel to prevent this issue. Does solaris have
something similar? Is it eve
Erik Trimble wrote:
> From ZFS's standpoint, the optimal configuration would be for the SSD
> to inform ZFS as to its PAGE size, and ZFS would use this as the
> fundamental BLOCK size for that device (i.e. all writes are in integer
It seems that a command to retrieve this information does n
switched to another system, RAM 4 GB -> 16 GB
the import process has been running for about 18 hrs now
the system is responsive
if developers want it I may provide ssh access
I have no critical data there, it is an acceptance test only
If the pool isn't rpool you might want to boot into single-user mode (-s after
the kernel parameters on boot), remove /etc/zfs/zpool.cache and then reboot.
After that you can simply ssh into the box and watch iostat while the import runs.
Yours
Markus Kovero
Joerg Schilling wrote:
Ragnar Sundblad wrote:
On 1 jan 2010, at 17.28, David Magda wrote:
Don't really see how things are either hardware specific or dependent.
The inner workings of an SSD flash drive are pretty hardware (or
rather vendor) specific, and it may not be a goo
Ragnar Sundblad wrote:
> I certainly agree, but there still isn't much they can do about
> the WORM-like properties of flash chips, where reading is pretty
> fast, writing is not too bad, but erasing is very slow and must be
> done in pretty large pages which also means that active data
> probably
Ragnar Sundblad wrote:
> On 1 jan 2010, at 17.28, David Magda wrote:
> > Don't really see how things are either hardware specific or dependent.
>
> The inner workings of an SSD flash drive are pretty hardware (or
> rather vendor) specific, and it may not be a good idea to move
> any knowledge abou
Eric D. Mudama wrote:
On Fri, Jan 1 at 21:21, Erik Trimble wrote:
That all said, it certainly would be really nice to get a SSD
controller which can really push the bandwidth, and the only way I
see this happening now is to go the "stupid" route, and dumb down the
controller as much as possib
Mike,
As far as I know only Hitachi is using such a huge chunk size :
"So each vendor’s implementation of TP uses a different block size. HDS use
42MB on the USP, EMC use 768KB on DMX, IBM allow a variable size from 32KB to
256KB on the SVC and 3Par use blocks of just 16KB. The reasons for thi
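A quick back-of-envelope in Python shows what those chunk sizes mean in practice; the 1 MB write below is an arbitrary example, and the chunk sizes are taken from the quoted article:

    # How much physical space each vendor's thin-provisioning chunk
    # size allocates for a single 1 MB write into a fresh region.
    chunks = {"HDS USP": 42 * 1024**2, "EMC DMX": 768 * 1024,
              "IBM SVC (max)": 256 * 1024, "3Par": 16 * 1024}
    write = 1024**2                              # one 1 MB write
    for name, chunk in sorted(chunks.items(), key=lambda kv: kv[1]):
        allocated = -(-write // chunk) * chunk   # round up to chunks
        print(f"{name:14} chunk {chunk // 1024:6} KB -> {allocated // 1024} KB")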
On 1 jan 2010, at 18.17, Bob Friesenhahn wrote:
> On Fri, 1 Jan 2010, David Magda wrote:
>>
>> It doesn't exist currently because of the behind-the-scenes re-mapping
>> that's being done by the SSD's firmware.
>>
>> While arbitrary to some extent, an "actual" LBA would presumably be the number
On 1 jan 2010, at 17.28, David Magda wrote:
> On Jan 1, 2010, at 11:04, Ragnar Sundblad wrote:
>
>> But that would only move the hardware specific and dependent flash
>> chip handling code into the file system code, wouldn't it? What
>> is gained by that? As long as the flash chips have larger pa
On 1 jan 2010, at 17.44, Richard Elling wrote:
> On Dec 31, 2009, at 12:59 PM, Ragnar Sundblad wrote:
>> Flash SSDs actually always remap new writes into an
>> only-append-to-new-pages style, pretty much as ZFS does itself.
>> So for a SSD there is no big difference between ZFS and
>> filesystems
On Fri, Jan 1 at 21:21, Erik Trimble wrote:
That all said, it certainly would be really nice to get a SSD
controller which can really push the bandwidth, and the only way I
see this happening now is to go the "stupid" route, and dumb down the
controller as much as possible. I really think we