On 12/8/06, Jochen M. Kaiser <[EMAIL PROTECTED]> wrote:
Dear all,

we're currently looking to restructure the hardware environment for
our data warehousing product/suite/solution/whatever.

We're currently running the database side on various SF V440s attached via
dual FC to our SAN backend (EMC DMX3) with UFS. The storage system is
(obviously, in a SAN) shared between many systems. Performance is mediocre
in terms of raw throughput at 70-150 MB/sec (lengthy, sequential reads due to
full table scan operations on the DB side) and excellent in terms of I/O and
service times (averaging 1.7 ms according to sar).
From our application's perspective, sequential read is the most important
factor; the read-to-write ratio is almost 20:1.
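
For reference, those figures come from plain Solaris sampling along these
lines (interval and count chosen arbitrarily):

    # extended per-device stats, skipping idle devices, 5-second samples
    iostat -xnz 5
    # per-disk activity incl. avserv (average service time in ms)
    sar -d 5 12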

We now want to consolidate our database servers (Oracle, btw) onto a pair of
X4600 systems running Solaris 10 (which we've already tested in a benchmark
setup). The whole system was still I/O-bound, even though the backend (3510,
12x146 GB, QFS, RAID10) delivered a sustained data rate of 250-300 MB/sec.
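
That benchmark was essentially raw sequential reading; a minimal sketch of
that kind of test (the datafile path is a made-up example):

    # sequential read of a large Oracle datafile in 1 MB blocks
    dd if=/qfs/oradata/users01.dbf of=/dev/null bs=1024k
    # meanwhile, watch what the backend actually delivers
    iostat -xnz 5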

Just a thought:

Have you thought about giving the Thumper (X4500) a trial for this
workload? Oracle would seem to be I/O-limited in the end, so 4 cores may be
enough to keep Oracle happy when paired with up to 2 GB/s of disk I/O
bandwidth.

James Dickens
uadmin.blogspot.com


I'd like to target a sequential read performance of 500+ MB/sec while reading
from the DB across multiple tablespaces. We're experiencing massive data
volume growth of about 100% per year and are therefore looking for an
expandable yet "cheap" solution. We'd like to use a DAS solution, because
we've had negative experiences with SAN in the past in terms of tuning and
throughput.

Being a friend of simplicity, I was thinking about using a pair (or more) of
3320 SCSI JBODs with multiple RAIDZ and/or RAID10 ZFS pools on which we'd
place the database. If we need more space, we'll simply connect yet another
JBOD. I'd calculate 1-2 PCIe U320 controllers (w/o RAID) per JBOD, starting
with a minimum of 4 controllers per server.
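
To make that concrete, a sketch of the pool layout I have in mind (device
names are made up; each mirror pair would span two controllers/JBODs):

    # RAID10-style pool: striped mirrors across two JBODs
    zpool create dbpool \
        mirror c2t0d0 c3t0d0 \
        mirror c2t1d0 c3t1d0 \
        mirror c2t2d0 c3t2d0

    # ...or one raidz vdev per JBOD instead
    # zpool create dbpool raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0

    # growing later: just add vdevs from the next JBOD
    zpool add dbpool mirror c4t0d0 c5t0d0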

Regarding ZFS, I'd be very interested to know whether someone else is running
a similar setup and can provide some hints or point me at some caveats.

I'd also be very interested in the CPU usage of such a setup for the ZFS
RAIDZ pools. After searching this forum I found the rule of thumb that
200 MB/sec of throughput roughly consumes one 2 GHz Opteron CPU, but I'm
hoping that someone can provide some in-depth data. (Frankly, I can hardly
imagine that this holds true for reads.)
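
If nobody has hard numbers, I'd probably measure it myself along these lines
(pool name as in the sketch above), watching %sys while a sustained
sequential read runs against the pool:

    zpool iostat -v dbpool 5    # per-vdev read bandwidth
    mpstat 5                    # per-CPU usr/sys breakdown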

I'd also be interested in your opinion on my targeted setup, so if you have
any comments, go ahead.

Any help is appreciated,

Jochen

P.S. Fallback scenarios would be Oracle with ASM or a (ZFS/UFS) SAN setup.


_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
