Hi Edward,
Thanks a lot for your detailed response!
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Andreas Höschler
• I would like to remove the two SSDs as log devices from the pool and instead add them as a separate pool for sole use by the database, to see how this enhances performance. I could certainly do

   zpool detach tank c1t7d0

to remove one disk from the log mirror. But how can I get back the second SSD?
If you're running solaris, sorry, you can't remove the log device. You better keep your log mirrored until you can plan for destroying and recreating the pool. Actually, in your example, you don't have a mirror of logs. You have two separate logs. This is fine for opensolaris (zpool >= 19), but not solaris (presently up to zpool 15). If this is solaris, and *either* one of those SSD's fails, then you lose your pool.
I run Solaris 10 (not OpenSolaris)!
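To double-check which pool version I am actually on, I can query it directly (commands assume the pool name tank from my status output below):

   # show the on-disk version of this pool (log device removal needs version 19+)
   zpool get version tank

   # list every pool version the installed tools support, with their features
   zpool upgrade -v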
So you are saying that my log setup
  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
        ...
        logs
          c1t6d0    ONLINE       0     0     0
          c1t7d0    ONLINE       0     0     0
does not do me any good redundancy-wise!? Shouldn't I detach the second drive then and try to use it for something else, maybe in another machine?

Do I understand correctly that it is then very dangerous to use SSDs as log devices like this (no redundancy)!?
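Just so I know what the redundant setup should have looked like: I assume recreating the pool with a genuinely mirrored log would be something along these lines (the data vdev names are only guesses for my six disks; the SSD names are the real ones):

   # hypothetical re-creation with a mirrored log instead of two separate logs
   zpool create tank \
       mirror c1t2d0 c1t3d0 \
       mirror c1t4d0 c1t5d0 \
       log mirror c1t6d0 c1t7d0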
If you're running opensolaris, "man zpool" and look for "zpool remove"
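Noted. So on OpenSolaris (pool version 19 or later) the removal would simply be

   # removes a dedicated log device; needs pool version >= 19
   zpool remove tank c1t7d0

which unfortunately does not help me on Solaris 10.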
Is the database running locally on the machine?
Yes!
Or at the other end of
something like nfs? You should have better performance using your
present
config than just about any other config ... By enabling the log
devices,
such as you've done, you're dedicating the SSD's for sync writes. And
that's what the database is probably doing. This config should be
*better*
than dedicating the SSD's as their own pool. Because with the
dedicated log
device on a stripe of mirrors, you're allowing the spindle disks to do
what
they're good at (sequential blocks) and allowing the SSD's to do what
they're good at (low latency IOPS).
OK!
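To verify that the SSDs really are absorbing the database's sync writes, I suppose I can watch the per-device statistics while the database is busy:

   # per-vdev statistics, sampled every 5 seconds; the log devices show up separately
   zpool iostat -v tank 5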
I actually have two machines here: one production machine (X4240 with 16 disks, no SSDs) with performance issues, and one development machine (X4140 with 6 disks and two SSDs) configured as shown in my previous mail. The question for me is how to improve the performance of the production machine, and whether buying SSDs for it is worth the investment.
"zpool iostat" on the development machine with the SSDs gives me
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
rpool        114G   164G      0      4  13.5K  36.0K
tank         164G   392G      3    131   444K  10.8M
----------  -----  -----  -----  -----  -----  -----
When I do that on the production machine without SSDs I get
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
rpool       98.3G  37.7G      0      7  32.5K  36.9K
tank         480G   336G     16     53  1.69M  2.05M
----------  -----  -----  -----  -----  -----  -----
It is interesting to note that the write bandwidth on the SSD machine is five times higher. I take this as an indicator that the SSDs have some effect.
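One caveat I should keep in mind: a bare zpool iostat prints averages accumulated since boot, so a fairer comparison would be to sample both machines over the same kind of busy interval, for example:

   # ten 10-second samples while the database load is running
   zpool iostat tank 10 10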
I am still wondering what your "if one SSD fails you lose your pool" means for me. Would you recommend detaching one of the SSDs from the development machine and adding it to the production machine with

   zpool add tank log c1t15d0

?? And how safe (reliable) is it to use SSDs for this? I mean, when do I have to expect the SSD to fail and thus ruin the pool!?
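For clarity, the sequence I have in mind would be the following (assuming detaching is possible at all on Solaris 10, which from what you wrote it may not be, since my logs are not actually mirrored):

   # on the development machine (X4140): free one SSD
   zpool detach tank c1t7d0

   # on the production machine (X4240): add it as a dedicated log
   # (c1t15d0 is just where I would expect the disk to show up)
   zpool add tank log c1t15d0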
Thanks a lot,
Andreas
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss