Bob Friesenhahn wrote:
On Fri, 1 Jan 2010, Erik Trimble wrote:
Maybe it's approaching time for vendors to just produce really stupid
SSDs: that is, ones that just do wear-leveling, and expose their true
page-size info (e.g. for MLC, how many blocks of X size have to be
written at once) and t
On Jan 1, 2010, at 6:33 PM, Bob Friesenhahn wrote:
On Fri, 1 Jan 2010, Erik Trimble wrote:
Maybe it's approaching time for vendors to just produce really
stupid SSDs: that is, ones that just do wear-leveling, and expose
their true page-size info (e.g. for MLC, how many blocks of X size
h
On Fri, 1 Jan 2010, Erik Trimble wrote:
Maybe it's approaching time for vendors to just produce really stupid SSDs:
that is, ones that just do wear-leveling, and expose their true page-size
info (e.g. for MLC, how many blocks of X size have to be written at once) and
that's about it. Let fil
Bob Friesenhahn wrote:
On Fri, 1 Jan 2010, Al Hopper wrote:
Interesting article - rumor has it that this is the same controller
that Seagate will use in its upcoming enterprise level SSDs:
http://anandtech.com/storage/showdoc.aspx?i=3702
It reads like SandForce has implemented a bunch of ZFS
You might want to check out another thread that some of the others and I
started on this topic. Some of the guys in that thread got their pool back, but
I haven't been able to. I have SSDs for my log and cache and it hasn't helped
me because my system hangs hard on import the way you are describ
That's the thing: the drive lights aren't blinking, but I was thinking maybe
the writes are going so slowly that it's possible they aren't registering. And
since I can't keep a running iostat, I can't tell if anything is going on. I
can, however, get into kmdb. Is there something in there that
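A minimal sketch of what kmdb can show in that situation, assuming the usual snv-era dcmds are available (none of this is specific to one pool):

  ::stacks -m zfs      summarize kernel thread stacks, filtered to the zfs module
  ::threadlist -v      dump every thread with its full stack trace
  ::spa                list the known pools and their state

If ::stacks shows the zfs threads moving through txg-sync or space-map code from one run to the next, the import is grinding rather than wedged.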
raidz2 is recommended. As discs get large, it can take a long time to repair
raidz. Maybe several days. With raidz1, if another disc blows during the repair,
you are screwed.
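As a sketch (device names below are placeholders), a six-disk raidz2 survives two simultaneous disk failures, so a second disk dying mid-repair is survivable:

  # zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0
  # zpool status tank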
On Jan 1, 2010, at 2:23 PM, tom wagner wrote:
Yeah, still no joy. I moved the disks to another machine altogether
with 8 GB and a quad-core Intel versus the dual-core AMD I was using,
and it still just hangs the box on import. This time I did a nohup
zpool import -fFX vault after booting off
Yeah, still no joy. I moved the disks to another machine altogether with 8 GB
and a quad-core Intel versus the dual-core AMD I was using, and it still just
hangs the box on import. This time I did a nohup zpool import -fFX vault after
booting off the b130 live DVD on this machine into single user
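Roughly what those flags do (-X in particular was barely documented at the time, so treat this as an approximation): -f forces the import past the in-use check, -F attempts recovery by discarding the last few transaction groups, and -X lets -F rewind much further back. A dry run first costs nothing:

  # zpool import -fFn vault           report whether a rewind could succeed, without changing anything
  # nohup zpool import -fFX vault &   the real attempt, kept running if the shell goes away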
On Jan 1, 2010, at 8:11 AM, R.G. Keen wrote:
On Dec 31, 2009, at 6:14 PM, Richard Elling wrote:
Some nits:
disks aren't marked as semi-bad, but if ZFS has trouble with a
block, it will try not to use the block again. So there are two levels
of recovery at work: whole device and block.
Ah. I
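Both levels are visible from zpool status: the per-device column reports whole-device state, and -v appends any files with permanent block-level errors. A small illustration, pool name invented:

  # zpool status -v tank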
On Jan 1, 2010, at 4:57 AM, LevT wrote:
Hi
(snv_130) created zfs pool storage (a mirror of two whole disks)
zfs created storage/iscsivol, made some tests, wrote some GBs
zfs created storage/mynas filesystem
(sharesmb
dedup=on
compression=on)
FILLED the storage/mynas
tried to ZFS DESTROY m
On Jan 1, 2010, at 11:28 AM, Bob Friesenhahn wrote:
On Fri, 1 Jan 2010, Al Hopper wrote:
Interesting article - rumor has it that this is the same controller
that Seagate will use in its upcoming enterprise level SSDs:
http://anandtech.com/storage/showdoc.aspx?i=3702
It reads like SandForce
I have a Supermicro AOC-SAT2-MV8 running on snv_130. I have a 6-disk raidz2
pool that has been running great.
Today I added a Western Digital Green 1.5 TB WD15EADS so I could create some
scratch space.
But cfgadm will not assign the drive a dsk/xxx ...
I have tried unconfigure/configure and d
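For a hot-added SATA disk the sequence is usually something like the following; the attachment point sata1/7 below is a made-up example, so use the Ap_Id that cfgadm actually prints:

  # cfgadm -al                    list attachment points; the new disk should appear on a sata-port
  # cfgadm -c configure sata1/7   configure the reported port (hypothetical Ap_Id)
  # devfsadm -v                   rebuild the /dev links so a dsk/xxx name shows up
  # format                        confirm the disk is now visible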
The 80 GB Intel MLC SSDs have been hard to find in stock and prices
keep varying. The original list price on the X25-M 80 GB MLC drive
was $230 - and it was *supposed* to be available for less than that.
Demand has been high and a lot of on-line sellers have taken advantage
of the demand to keep
On Fri, Jan 1, 2010 at 11:17 AM, Bob Friesenhahn wrote:
> On Fri, 1 Jan 2010, David Magda wrote:
>>
>> It doesn't exist currently because of the behind-the-scenes re-mapping
>> that's being done by the SSD's firmware.
>>
>> While arbitrary to some extent, an "actual" LBA would presumably be the
>> n
On Fri, 1 Jan 2010, Al Hopper wrote:
Interesting article - rumor has it that this is the same controller
that Seagate will use in its upcoming enterprise level SSDs:
http://anandtech.com/storage/showdoc.aspx?i=3702
It reads like SandForce has implemented a bunch of ZFS like
functionality in f
Hi
After upgrading OpenSolaris from snv_111 to snv_130
r...@t61p:/export/home/xtrnaw7# cat /etc/release
OpenSolaris Development snv_130 X86
Copyright 2009 Sun Microsystems, Inc. All Rights Reserved.
Use is subject to license terms.
On Fri, 1 Jan 2010, David Magda wrote:
It doesn't exist currently because of the behind-the-scenes re-mapping that's
being done by the SSD's firmware.
While arbitrary to some extent, an "actual" LBA would presumably be the number
of a particular cell in the SSD.
There seems to be some severe
On Dec 31, 2009, at 12:59 PM, Ragnar Sundblad wrote:
Flash SSDs actually always remap new writes into an
only-append-to-new-pages style, pretty much as ZFS does itself.
So for an SSD there is no big difference between ZFS and
filesystems such as UFS, NTFS, HFS+ et al.; on the flash level they
all work th
On Jan 1, 2010, at 11:04, Ragnar Sundblad wrote:
But that would only move the hardware-specific and -dependent flash
chip handling code into the file system code, wouldn't it? What
is gained by that? As long as the flash chips have larger pages than
the file system blocks, someone will have to shu
> On Dec 31, 2009, at 6:14 PM, Richard Elling wrote:
> Some nits:
> disks aren't marked as semi-bad, but if ZFS has trouble with a
> block, it will try not to use the block again. So there are two levels
> of recovery at work: whole device and block.
Ah. I hadn't found that yet.
> The "one more an
On 1 jan 2010, at 14.14, David Magda wrote:
> On Jan 1, 2010, at 04:33, Ragnar Sundblad wrote:
>
>> I see the possible win that you could always use all the working
>> blocks on the disk, and when blocks go bad your disk will shrink.
>> I am not sure that is really what people expect, though.
Interesting article - rumor has it that this is the same controller
that Seagate will use in its upcoming enterprise level SSDs:
http://anandtech.com/storage/showdoc.aspx?i=3702
It reads like SandForce has implemented a bunch of ZFS like
functionality in firmware. Hmm, I wonder if they used any
On Jan 1, 2010, at 04:33, Ragnar Sundblad wrote:
I see the possible win that you could always use all the working
blocks on the disk, and when blocks go bad your disk will shrink.
I am not sure that is really what people expect, though. Apart from
that, I am not sure what the gain would be.
C
On Jan 1, 2010, at 03:30, Eric D. Mudama wrote:
On Thu, Dec 31 at 16:53, David Magda wrote:
Just as the first 4096-byte-block disks are silently emulating
4096-to-512 blocks, SSDs are currently re-mapping LBAs behind the
scenes. Perhaps in the future there will be a setting to say "no
re
Hi
(snv_130) created zfs pool storage (a mirror of two whole disks)
zfs created storage/iscsivol, made some tests, wrote some GBs
zfs created storage/mynas filesystem
(sharesmb
dedup=on
compression=on)
FILLED the storage/mynas
tried to ZFS DESTROY my storage/iscsivol, but the system has HUNG
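Destroying datasets on a dedup-enabled pool has to update the dedup table for every freed block, and if the DDT doesn't fit in RAM that can look exactly like a hang. A hedged way to size the problem (output details from memory, so verify against your build):

  # zfs get dedup,compression storage/mynas     confirm which datasets actually dedup
  # zdb -DD storage                             print dedup-table statistics for the pool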
On 31 dec 2009, at 22.53, David Magda wrote:
> On Dec 31, 2009, at 13:44, Joerg Schilling wrote:
>
>> ZFS is COW, but does the SSD know which block is "in use" and which is not?
>>
>> If the SSD did know whether a block is in use, it could erase unused blocks
>> in advance. But what is an "unus
On Thu, Dec 31 at 10:18, Bob Friesenhahn wrote:
There are of course SSDs with hardly any (or no) reserve space, but
while we might be willing to sacrifice an image or two to SSD block
failure in our digital camera, that is just not acceptable for
serious computer use.
Some people are doing se
On Thu, Dec 31 at 16:53, David Magda wrote:
Just as the first 4096-byte-block disks are silently emulating
4096-to-512 blocks, SSDs are currently re-mapping LBAs behind the scenes.
Perhaps in the future there will be a setting to say "no really, I'm
talking about the /actual/ LBA 123456".
W