27.08.2012 14:43, Sašo Kiselkov wrote:
Is there any way to disable ARC for testing and leave prefetch enabled?
No. The reason is simply that prefetch is a mechanism separate from
your application's direct read requests: prefetch runs ahead of your
anticipated read requests and places the data into the cache before the
application asks for it.
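A compromise sometimes used for this kind of testing is to keep prefetch active but stop file data from lingering in the ARC by caching metadata only. A hedged sketch, not from the thread itself; the dataset name is an assumption, and `zfs_prefetch_disable` is the global prefetch tunable on OpenSolaris-era kernels:

```shell
# Cache only metadata for the test dataset; cached file data is dropped,
# but prefetch can still stage reads through the ARC.
zfs set primarycache=metadata tank/test

# Check that the global prefetch tunable is at its default (0 = enabled).
echo "zfs_prefetch_disable/D" | mdb -k
```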
27.08.2012 14:02, Sašo Kiselkov wrote:
Can someone with a Supermicro JBOD equipped with SAS drives and an LSI
HBA do this sequential read test?
Did that on an SC847 with 45 drives; read speeds around 2 GB/s aren't a
problem.
Thanks for info.
Don't forget to set primarycache=none on testing dataset
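For reference, a minimal sketch of such a sequential read test with the ARC bypassed; pool, dataset, and sizes are assumptions, and compression is disabled so the /dev/zero data is actually written to disk:

```shell
zfs create tank/seqtest
zfs set compression=off tank/seqtest    # ensure /dev/zero data hits the disks
zfs set primarycache=none tank/seqtest  # don't cache data or metadata

# Write a large test file, then time a sequential read of it.
dd if=/dev/zero of=/tank/seqtest/big bs=1024k count=32768   # ~32 GB
dd if=/tank/seqtest/big of=/dev/null bs=1024k               # watch the MB/s here
```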
25.07.2012 9:29, Yuri Vorobyev wrote:
I ran into a strange performance problem with a new disk shelf.
We have been using a ZFS system with SATA disks for a while.
What OS and release?
Oh. I forgot this important thing.
It is OpenIndiana oi_151a5 now.
New testing data:
I rebooted to the first boot environment.
Hello.
I ran into a strange performance problem with a new disk shelf.
We have been using a ZFS system with SATA disks for a while.
It is a Supermicro SC846-E16 chassis, a Supermicro X8DTH-6F motherboard
with 96 GB RAM, and 24 HITACHI HDS723020BLA642 SATA disks attached to the
onboard LSI 2008 controller.
Prett
Hello.
What are the best practices for choosing the ZFS volume volblocksize
setting for VMware VMFS-5?
The VMFS-5 block size is 1 MB. Not sure how it corresponds to ZFS.
Setup details follow:
- 11 pairs of mirrors;
- 600Gb 15k SAS disks;
- SSDs for L2ARC and ZIL
- COMSTAR FC target;
- about 30 virtual machines.
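For what it's worth, a hedged sketch of creating and exporting such a zvol; the names and sizes are assumptions. Note that volblocksize can only be set at creation time, and it is usually the guests' I/O size rather than the 1 MB VMFS file-block size that matters, since VMFS-5 sub-allocates small blocks inside its 1 MB file blocks:

```shell
# Create a zvol for the VMFS-5 datastore; volblocksize is fixed at creation.
zfs create -V 2T -o volblocksize=64K tank/vmfs01

# Export it over FC via COMSTAR; the LU name printed by create-lu is then
# used to grant host views (view setup elided).
stmfadm create-lu /dev/zvol/rdsk/tank/vmfs01
```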
Is it possible to convert the "octal representation" in zfs diff output
to something human-readable?
Maybe iconv?
Please see screenshot http://i.imgur.com/bHhXV.png
I created a file with a Russian name there. OS is Solaris 11 Express.
This command did the job:
zfs diff | perl -plne 's#\\\d{8}(\d
http://www.lsi.com/channel/about_channel/whatsnew/warpdrive_slp300/index.html
I think drivers will be the problem.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
28.09.2010 10:45, Brandon High wrote:
Anyone had any luck getting either OpenSolaris or FreeBSD with
ZFS working on
I looked at it some, and all the hardware should be supported. There
is a half-height PCIe x16 and a x1 slot as well.
Has anybody already bought this microserver? :)
31.08.2010 21:23, Ray Van Dolson wrote:
Here's an article with some benchmarks:
http://wikis.sun.com/pages/viewpage.action?pageId=186241353
Seems to really impact IOPS.
This is really interesting reading. Can someone do same tests with Intel
X25-E?
As for the Vertex drives: if they are within ±10% of the Intel, they're still
doing it for half of what the Intel drive costs, so it's an option. Not a great
option, but still an option.
Yes, but the Intel is SLC, with much higher endurance.
Hello.
Is all this data what you're looking for?
Yes, thank you, Paul.
If anybody has used an SSD for rpool for more than half a year, can you
post SMART information about the HostWrites attribute?
I want to see how SSDs wear when used as system disks.
I'd be happy to, exactly what commands shall I run?
Hm. I'm experimenting with OpenSolaris in virtual machine now.
Unfortunately
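In reply to the question above, a hedged sketch assuming smartmontools is installed; the attribute name varies by vendor (Intel's X25 series reports it as Host_Writes_32MiB, typically attribute ID 225 or 241), and the device path is an assumption:

```shell
# Dump all SMART attributes and pick out the host-writes counter.
smartctl -a /dev/rdsk/c0t0d0 2>/dev/null \
  | awk '$2 ~ /Host_Writes/ { print $2, $NF }'
```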
Hello.
We are seeing the problem on both Sun and non-Sun hardware. On our Sun Thumper
X4540 we can reproduce it on all 3 devices. Our configuration is large stripes
with only 2 vdevs. Doing a simple scrub will show the typical mpt timeout.
We are running snv_131.
Has somebody observed similar problems?