On Wed, Aug 10, 2011 at 2:55 PM, Gregory Durham
wrote:
> 3) In order to deal with caching, I am writing larger amounts of data
> to the disk than I have memory for.
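For reference, one way to do that (a sketch only; the file name and the 64 GB
size below are illustrative, not the poster's actual commands):

    # how much RAM the box has
    prtconf | grep Memory

    # write well past that amount so later reads can't be served from the ARC
    dd if=/dev/zero of=/fooPool0/bigfile bs=1024k count=65536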
The other trick is to limit the ARC to a much smaller value and then
you can test with sane amounts of data.
Add the following to ...
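The tunable being referred to is presumably zfs_arc_max in /etc/system; a
minimal example, assuming a 1 GB cap (a reboot is needed for it to take
effect):

    * cap the ZFS ARC at 1 GB while benchmarking
    set zfs:zfs_arc_max = 0x40000000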
Then create a ZVOL and share it over iSCSI and, from the initiator host, run
some benchmarks. You'll never get good results from local tests. For that sort
of load, I'd guess a stripe of mirrors should be good; RAIDzN will probably be
rather bad.
roy
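A rough sketch of the steps Roy describes, assuming the standard COMSTAR
tooling on Solaris 11 Express; the pool, volume and disk names are
placeholders:

    # stripe of mirrors
    zpool create tank mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0 \
        mirror c1t4d0 c1t5d0

    # a zvol to export
    zfs create -V 200G tank/vmvol0

    # share it over iSCSI via COMSTAR
    svcadm enable stmf
    svcadm enable -r svc:/network/iscsi/target:default
    sbdadm create-lu /dev/zvol/rdsk/tank/vmvol0
    stmfadm add-view <guid printed by sbdadm>
    itadm create-target

Then point the initiator host at the target and benchmark from there.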
This system is for serving VM images through iSCSI to roughly 30
XenServer hosts. I would like to know what type of performance I can
expect in the coming months as we grow this system out. We currently
have 2 Intel SSDs mirrored for the ZIL and 2 Intel SSDs for the L2ARC
in a stripe. I am interested ...
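For reference, log and cache devices like that are normally attached along
these lines (device names are placeholders, not the poster's actual SSDs):

    # mirrored SSD pair for the ZIL (slog)
    zpool add tank log mirror c2t0d0 c2t1d0
    # striped SSD pair for the L2ARC
    zpool add tank cache c2t2d0 c2t3d0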
What sort of load will this server be serving? Sync or async writes? What sort
of reads? Random I/O or sequential? If sequential, how many streams/concurrent
users? Those are factors you need to evaluate before running a test. A local
test will usually be using async I/O and a dd with only a 4k block size ...
Hello All,
Sorry for the lack of information. Here are some answers to some of the questions:
1) createPool.sh:
It essentially takes two parameters: the first is the number of disks in the
pool; the second is either blank or 'mirrored'. Blank builds a stripe across
that number of disks (i.e. RAID 0); 'mirrored' builds the pool out of two-disk
mirrors.
#!/bin/sh ...
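The rest of the script was cut off in the preview. A minimal sketch of what
such a script might look like, based only on the description above (the disk
names are assumptions; fooPool0 is the pool name used later in the thread):

    #!/bin/sh
    # createPool.sh <number-of-disks> [mirrored] -- hypothetical reconstruction
    NDISKS=$1        # how many disks to put in the pool
    MODE=$2          # empty = plain stripe, "mirrored" = two-disk mirrors

    DISKS="c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0"
    DISKS="$DISKS c1t6d0 c1t7d0 c1t8d0 c1t9d0 c1t10d0 c1t11d0"

    vdevs=""
    i=0
    for d in $DISKS; do
        [ "$i" -ge "$NDISKS" ] && break
        if [ "$MODE" = "mirrored" ] && [ $((i % 2)) -eq 0 ]; then
            vdevs="$vdevs mirror"
        fi
        vdevs="$vdevs $d"
        i=$((i + 1))
    done

    zpool create fooPool0 $vdevs
    zpool status fooPool0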
I would generally agree that dd is not a great benchmarking tool, but you could
use multiple instances writing to multiple files, and larger block sizes are
more efficient. And it's always good to check iostat and mpstat for I/O and CPU
bottlenecks. Also note that an initial run that creates the files may be ...
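For example (illustrative only; the file names, sizes and the /fooPool0
mount point are assumptions):

    # four concurrent writers, 1 MB block size, 16 GB per file
    for i in 1 2 3 4; do
        dd if=/dev/zero of=/fooPool0/ddtest.$i bs=1024k count=16384 &
    done
    wait

    # meanwhile, in another terminal
    iostat -xn 5     # per-disk throughput and service times
    mpstat 5         # per-CPU utilisation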
On Wed, Aug 10, 2011 at 1:45 AM, Gregory Durham
wrote:
> Hello,
> We just purchased two of the SC847E26-RJBOD1 units to be used in a
> storage environment running Solaris 11 Express.
>
> We are using Hitachi HUA723020ALA640 6 Gb/s drives with an LSI SAS
> 9200-8e HBA. We are not using failover/redundancy ...
On Tue, Aug 9, 2011 at 8:45 PM, Gregory Durham wrote:
> For testing, we have done the following:
> Installed 12 disks in the front, 0 in the back.
> Created a stripe of different numbers of disks.
So you are creating one zpool with one disk per vdev and varying the
number of vdevs (the number of ...
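In other words, a 4-disk run presumably boils down to something like this
(disk names are placeholders):

    # four single-disk top-level vdevs, i.e. a plain stripe
    zpool create fooPool0 c1t0d0 c1t1d0 c1t2d0 c1t3d0

    # versus the same four disks as a stripe of two mirrors
    zpool create fooPool0 mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0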
On Tue, 9 Aug 2011, Gregory Durham wrote:
Hello,
We just purchased two of the SC847E26-RJBOD1 units to be used in a
storage environment running Solaris 11 Express.
root@cm-srfe03:/home/gdurham~# zpool destroy fooPool0
root@cm-srfe03:/home/gdurham~# sh createPool.sh 4
What is 'createPool.sh'?
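Either way, the resulting layout can be checked after each run with:

    zpool status fooPool0    # shows the vdev layout the script built
    zpool list fooPool0      # size, allocated space and capacity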