On 20.02.10 01:33, Toby Thain wrote:
On 19-Feb-10, at 5:40 PM, Eugen Leitl wrote:
On Fri, Feb 19, 2010 at 11:17:29PM +0100, Felix Buenemann wrote:
I found the Hyperdrive 5/5M, which is a half-height drive bay sata
ramdisk with battery backup and auto-backup to compact flash at power
failure.
On 20 feb 2010, at 02.34, Rob Logan wrote:
>
>> An UPS plus disabling zil, or disabling synchronization, could possibly
>> achieve the same result (or maybe better) iops wise.
> Even with the fastest slog, disabling zil will always be faster...
> (fewer bytes to move)
>
>> This would probably w
On 19 February, 2010 - Christo Kutrovsky sent me these 0,5K bytes:
> Hello,
>
> How do you tell how much of your l2arc is populated? I've been looking for a
> while now, can't seem to find it.
>
> Must be easy, as this blog entry shows it over time:
>
> http://blogs.sun.com/brendan/entry/l2arc
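One way to read it directly (assuming the standard kstat interface; the
exact counter names can vary between builds):

  # kstat -p zfs:0:arcstats:l2_size
  # kstat -p zfs:0:arcstats:l2_hdr_size

l2_size is the amount of data held in the L2ARC, l2_hdr_size the ARC
memory spent tracking it.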
On 19 feb 2010, at 23.22, Phil Harman wrote:
> On 19/02/2010 21:57, Ragnar Sundblad wrote:
>> On 18 feb 2010, at 13.55, Phil Harman wrote:
>>
>>> Whilst the latest bug fixes put the world to rights again with respect to
>>> correctness, it may be that some of our performance workarounds are st
> These are the same as the acard devices we've discussed here
> previously; earlier hyperdrive models were their own design. Very
> interesting, and my personal favourite, but I don't know of anyone
> actually reporting results yet with them as ZIL.
Here's one report:
http://www.mail-archive.co
Hello,
How do you tell how much of your l2arc is populated? I've been looking for a
while now, can't seem to find it.
Must be easy, as this blog entry shows it over time:
http://blogs.sun.com/brendan/entry/l2arc_screenshots
And follow up, can you tell how much of each data set is in the arc or
On 19 feb 2010, at 23.20, Ross Walker wrote:
> On Feb 19, 2010, at 4:57 PM, Ragnar Sundblad wrote:
>
>>
>> On 18 feb 2010, at 13.55, Phil Harman wrote:
>>
>> ...
>>> Whilst the latest bug fixes put the world to rights again with respect to
>>> correctness, it may be that some of our performa
> An UPS plus disabling zil, or disabling synchronization, could possibly
> achieve the same result (or maybe better) iops wise.
Even with the fastest slog, disabling zil will always be faster...
(fewer bytes to move)
> This would probably work given that your computer never crashes
> in an uncon
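(For reference, the switch under discussion here is the old system-wide
zil_disable tunable, set in /etc/system - a sketch only, since later
builds replace it with a per-dataset 'sync' property:

  set zfs:zil_disable = 1

and it only takes effect after a reboot or a remount of the filesystems.)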
On 19-Feb-10, at 5:40 PM, Eugen Leitl wrote:
On Fri, Feb 19, 2010 at 11:17:29PM +0100, Felix Buenemann wrote:
I found the Hyperdrive 5/5M, which is a half-height drive bay sata
ramdisk with battery backup and auto-backup to compact flash at power
failure.
Promises 65,000 IOPS and thus should
On Fri, Feb 19, 2010 at 11:51:29PM +0100, Ragnar Sundblad wrote:
>
> On 19 feb 2010, at 23.40, Eugen Leitl wrote:
> > On Fri, Feb 19, 2010 at 11:17:29PM +0100, Felix Buenemann wrote:
> >> I found the Hyperdrive 5/5M, which is a half-height drive bay sata
> >> ramdisk with battery backup and auto-
On 19 feb 2010, at 23.40, Eugen Leitl wrote:
> On Fri, Feb 19, 2010 at 11:17:29PM +0100, Felix Buenemann wrote:
>
>> I found the Hyperdrive 5/5M, which is a half-height drive bay sata
>> ramdisk with battery backup and auto-backup to compact flash at power
>> failure.
>> Promises 65,000 IOPS a
On Fri, February 19, 2010 16:21, Daniel Carosone wrote:
> On Fri, Feb 19, 2010 at 01:15:17PM -0600, David Dyer-Bennet wrote:
>>
>> On Fri, February 19, 2010 13:09, David Dyer-Bennet wrote:
>>
>> > Anybody know what the proper geometry is for a WD1600BEKT-6-1A13? It's
>> > not even in the data s
On Fri, Feb 19, 2010 at 11:17:29PM +0100, Felix Buenemann wrote:
> I found the Hyperdrive 5/5M, which is a half-height drive bay sata
> ramdisk with battery backup and auto-backup to compact flash at power
> failure.
> Promises 65,000 IOPS and thus should be great for ZIL. It's pretty
> reasona
If I understand correctly, ZFS nowadays will only flush data to
non-volatile storage (such as a RAID controller's NVRAM), and not
all the way out to the disks. (This was done to solve performance
problems with some storage systems, and I believe it is also the right
thing to do under normal circumstances.)
D
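(For comparison, the old workaround for arrays that honoured every cache
flush was the global zfs_nocacheflush tunable in /etc/system - shown only
as an illustration; it disables flush commands entirely and is only safe
when all devices have non-volatile caches:

  set zfs:zfs_nocacheflush = 1
)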
On FreeBSD, I avoid this issue completely by labelling either the entire disk
(via glabel(8)) or individual slices/partitions (via either glabel(8) or gpt
labels). Use the label name to build the vdevs. Then it doesn't matter where
the drive is connected, or how the device node is named/number
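A minimal sketch of that approach (da1/da2 and the label names are just
examples):

  # glabel label disk01 /dev/da1
  # glabel label disk02 /dev/da2
  # zpool create tank mirror label/disk01 label/disk02

The pool then refers to /dev/label/disk01 and /dev/label/disk02 no matter
which controller port the drives end up on.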
On 19/02/2010 21:57, Ragnar Sundblad wrote:
On 18 feb 2010, at 13.55, Phil Harman wrote:
Whilst the latest bug fixes put the world to rights again with respect to
correctness, it may be that some of our performance workarounds are still unsafe
(i.e. if my iSCSI client assumes all writes are
On Fri, Feb 19, 2010 at 01:15:17PM -0600, David Dyer-Bennet wrote:
>
> On Fri, February 19, 2010 13:09, David Dyer-Bennet wrote:
>
> > Anybody know what the proper geometry is for a WD1600BEKT-6-1A13? It's
> > not even in the data sheets any more!
any such geometry has been entirely fictitious
Hi Harry,
Our current scrubbing guideline is described here:
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
Run zpool scrub on a regular basis to identify data integrity problems.
If you have consumer-quality drives, consider a weekly scrubbing
schedule. If you have dat
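A weekly scrub is easy to drive from root's crontab, for example (pool
name is a placeholder):

  0 3 * * 0 /usr/sbin/zpool scrub tank

i.e. every Sunday at 03:00.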
On Feb 19, 2010, at 4:57 PM, Ragnar Sundblad wrote:
On 18 feb 2010, at 13.55, Phil Harman wrote:
...
Whilst the latest bug fixes put the world to rights again with
respect to correctness, it may be that some of our performance
workarounds are still unsafe (i.e. if my iSCSI client assumes a
On 19.02.10 21:29, Marion Hakanson wrote:
felix.buenem...@googlemail.com said:
I think I'll try one of these inexpensive battery-backed PCI RAM drives from
Gigabyte and see how much IOPS they can pull.
Another poster, Tracy Bernath, got decent ZIL IOPS from an OCZ Vertex unit.
Dunno if that
On 19 feb 2010, at 17.35, Edward Ned Harvey wrote:
> The PERC cache measurably and significantly accelerates small disk writes.
> However, for read operations, it is insignificant compared to system ram,
> both in terms of size and speed. There is no significant performance
> improvement by
On 18 feb 2010, at 13.55, Phil Harman wrote:
...
> Whilst the latest bug fixes put the world to rights again with respect to
> correctness, it may be that some of our performance workarounds are still
> unsafe (i.e. if my iSCSI client assumes all writes are synchronised to
> nonvolatile storage
On Feb 19, 2010, at 8:35 AM, Edward Ned Harvey wrote:
> One more thing I’d like to add here:
>
> The PERC cache measurably and significantly accelerates small disk writes.
> However, for read operations, it is insignificant compared to system ram,
> both in terms of size and speed. There is no
I think I asked this before but apparently have lost track of the
answers I got.
I'm wanting a general rule of thumb for how often to `scrub'.
My setup is a home NAS and general zfs server so it does not see heavy
use.
I'm up to build 129 and do update fairly often, just the last few
builds were
On 12/ 4/09 02:06 AM, Erik Trimble wrote:
> Hey folks.
>
> I've looked around quite a bit, and I can't find something like this:
>
> I have a bunch of older systems which use Ultra320 SCA hot-swap
> connectors for their internal drives. (e.g. v20z and similar)
>
> I'd love to be able to use mode
felix.buenem...@googlemail.com said:
> I think I'll try one of these inexpensive battery-backed PCI RAM drives from
> Gigabyte and see how much IOPS they can pull.
Another poster, Tracy Bernath, got decent ZIL IOPS from an OCZ Vertex unit.
Dunno if that's sufficient for your purposes, but it loo
On Fri, February 19, 2010 13:50, Bob Friesenhahn wrote:
> On Fri, 19 Feb 2010, David Dyer-Bennet wrote:
>
>>> Too bad, I'm getting ~1000 IOPS with an Intel X25-M G2 MLC and around
>>> 300 with a regular USB stick, so 50 IOPS is really poor for an SLC SSD.
>>
>> Well, but the Intel X25-M is the dri
On 19.02.10 20:50, Bob Friesenhahn wrote:
On Fri, 19 Feb 2010, David Dyer-Bennet wrote:
Too bad, I'm getting ~1000 IOPS with an Intel X25-M G2 MLC and around
300 with a regular USB stick, so 50 IOPS is really poor for an SLC SSD.
Well, but the Intel X25-M is the drive that really first crac
On Fri, 19 Feb 2010, David Dyer-Bennet wrote:
Too bad, I'm getting ~1000 IOPS with an Intel X25-M G2 MLC and around
300 with a regular USB stick, so 50 IOPS is really poor for an SLC SSD.
Well, but the Intel X25-M is the drive that really first cracked the
problem (earlier high-performance dri
I can strongly recommend this series of articles
http://breden.org.uk/2008/03/02/a-home-fileserver-using-zfs/
Very good! :o)
On Fri, February 19, 2010 13:09, David Dyer-Bennet wrote:
> Anybody know what the proper geometry is for a WD1600BEKT-6-1A13? It's
> not even in the data sheets any more!
One further point -- I can't seem to manually enter, for the first disk, the
geometry that the second disk has; when I enter 152615 for
I've somehow got the geometry of the new disks set wrong, even though one
of them works. The geometry of the two is set the same. One of them has
suitable partitions, and works. One can't be set for suitable partitions
since they don't fit (even though the other one has them). It can't be
atta
On Fri, February 19, 2010 12:50, Felix Buenemann wrote:
>
> Too bad, I'm getting ~1000 IOPS with an Intel X25-M G2 MLC and around
> 300 with a regular USB stick, so 50 IOPS is really poor for an SLC SSD.
Well, but the Intel X25-M is the drive that really first cracked the
problem (earlier high-p
On 19.02.10 19:30, Bob Friesenhahn wrote:
On Fri, 19 Feb 2010, Felix Buenemann wrote:
So it is apparent, that the SSD has really poor random writes.
But I was under the impression that the ZIL is mostly sequential
writes or was I misinformed here?
Maybe the cache syncs bring the device to
On Fri, 19 Feb 2010, Felix Buenemann wrote:
So it is apparent, that the SSD has really poor random writes.
But I was under the impression that the ZIL is mostly sequential writes or
was I misinformed here?
Maybe the cache syncs bring the device to its knees?
That's what it seems like. T
Hi,
I'm currently testing a Mtron Pro 7500 16GB SLC SSD as a ZIL device and
seeing very poor performance for small file writes via NFS.
Copying a source code directory with around 4000 small files to the ZFS
pool over NFS without the SSD log device yields around 1000 IOPS (pool
of 8 sata sha
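(For anyone reproducing the test: the log device is added and removed
with something like the following, the device name being a placeholder
and log removal needing a recent pool version:

  # zpool add tank log c2t0d0
  # zpool remove tank c2t0d0
)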
Hello,
I have made some benchmarks with my napp-it ZFS server:
http://www.napp-it.org/bench.pdf
-> 2 GB vs 4 GB vs 8 GB RAM
-> mirror vs raidz vs raidz2 vs raidz3
-> dedup and compress enabled
One more thing I'd like to add here:
The PERC cache measurably and significantly accelerates small disk writes.
However, for read operations, it is insignificant compared to system ram,
both in terms of size and speed. There is no significant performance
improvement by enabling adaptive readahead
On 19/02/2010 15:43, Thanos Makatos wrote:
Hello.
I want to know what is the unit of compression in ZFS. Is it 4 KB or larger? Is
it tunable?
I don't understand what you mean.
For user data, ZFS compresses ZFS blocks; these would be 512 bytes minimum
up to 128k maximum and depend on the confi
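Both knobs are ordinary per-dataset properties, e.g. (the dataset name is
a placeholder):

  # zfs set compression=lzjb tank/data
  # zfs set recordsize=128k tank/data
  # zfs get compressratio tank/data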
Hello.
I want to know what is the unit of compression in ZFS. Is it 4 KB or larger? Is
it tunable?
Thanks.
Thanos
On Fri, February 19, 2010 00:32, Terry Hull wrote:
> I have a machine with the Supermicro 8 port SATA card installed. I have
> had no problem creating a mirrored boot disk using the oft-repeated
> scheme:
>
> prtvtoc /dev/rdsk/c4t0d0s2 | fmthard -s - /dev/rdsk/c4t1d0s2
> zpool attach rpool c4t0
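Spelled out with example device names (installgrub is the x86 step; SPARC
uses installboot instead), the usual sequence is roughly:

  # prtvtoc /dev/rdsk/c4t0d0s2 | fmthard -s - /dev/rdsk/c4t1d0s2
  # zpool attach rpool c4t0d0s0 c4t1d0s0
  # installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c4t1d0s0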
>So in a ZFS boot disk configuration (rpool) in a running environment, it's
>not possible?
The example I have grows the rpool while running from the rpool.
But you need a recent version of zfs to grow the pool while it is in use.
>On Fri, Feb 19, 2010 at 9:25 AM, wrote:
>
>>
>>
>> >Is it
So in a ZFS boot disk configuration (rpool) in a running environment, it's
not possible?
On Fri, Feb 19, 2010 at 9:25 AM, wrote:
>
>
> >Is it possible to grow a ZFS volume on a SPARC system with a SMI/VTOC
> label
> >without losing data as the OS is built on this volume?
>
>
> Sure as long as th
>Is it possible to grow a ZFS volume on a SPARC system with a SMI/VTOC label
>without losing data as the OS is built on this volume?
Sure as long as the new partition starts on the same block and is longer.
It was a bit more difficult with UFS but for zfs it is very simple.
I had a few system
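Once the slice has been regrown in format(1M), recent builds pick up the
extra space either automatically or on request (the device name is an
example):

  # zpool set autoexpand=on rpool
  # zpool online -e rpool c0t0d0s0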
> I am curious how admins are dealing with controllers like the Dell Perc 5 and
> 6 that can change the device name on a disk if a disk fails and the machine
> reboots. These controllers are not nicely behaved in that they happily fill
> in the device numbers for the physical drive th
Is it possible to grow a ZFS volume on a SPARC system with a SMI/VTOC label
without losing data as the OS is built on this volume?
Thanks
On Fri, Feb 19, 2010 at 7:42 PM, Terry Hull wrote:
> Interestingly, with the machine running, I can pull the first drive in the
> mirror, replace it with an unformatted one, format it, mirror rpool over to
> it, install the boot loader, and at that point the machine will boot with no
> problems
I am curious how admins are dealing with controllers like the Dell Perc 5 and 6
that can change the device name on a disk if a disk fails and the machine
reboots. These controllers are not nicely behaved in that they happily fill
in the device numbers for the physical drive that is missing. I
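Short of labelling the disks, one common workaround after such a
renumbering is simply to re-import the pool so ZFS rescans the device
paths (for a non-root pool; 'tank' is a placeholder):

  # zpool export tank
  # zpool import tank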
On Thu, Feb 18, 2010 at 21:57, Cindy Swearingen
wrote:
> Yes, the findroot entry needs the slice indicator, which is "a" for
> slice 0:
>
> findroot (pool_rpool,1,a)
"help findroot" in grub command line says it is optional. And as I
said it works when I leave the ",a" out, but not when I put it t
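For context, a typical ZFS-root menu.lst entry on x86 looks roughly like
this (pool and BE names are placeholders):

  title Solaris
  findroot (pool_rpool,0,a)
  bootfs rpool/ROOT/snv_129
  kernel$ /platform/i86pc/kernel/$ISADIR/unix -B $ZFS-BOOTFS
  module$ /platform/i86pc/$ISADIR/boot_archive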
Interestingly, with the machine running, I can pull the first drive in the
mirror, replace it with an unformatted one, format it, mirror rpool over to it,
install the boot loader, and at that point the machine will boot with no
problems. It s just when the first disk is missing that I have a p
Chris Banal writes:
> We have a SunFire X4500 running Solaris 10U5 which does about 5-8k nfs
> ops of which about 90% are metadata. In hindsight it would have been
> significantly better to use a mirrored configuration but we opted for
> 4 x (9+2) raidz2 at the time. We cannot take the downti